
Unimodular Random Measured Metric Spaces and Palm Theory on Them

Ali Khezeli [1] [2]

[1] Inria Paris, [email protected]
[2] Department of Applied Mathematics, Faculty of Mathematical Sciences, Tarbiat Modares University, P.O. Box 14115-134, Tehran, Iran, [email protected]
Abstract

In this work, we define the notion of unimodular random measured metric spaces as a common generalization of various other notions. This includes the discrete cases like unimodular graphs and stationary point processes, as well as the non-discrete cases like stationary random measures and the continuum metric spaces arising as scaling limits of graphs. We provide various examples and prove many general results; e.g., on weak limits, re-rooting invariance, random walks, ergodic decomposition, amenability and balancing transport kernels. In addition, we generalize the Palm theory to point processes and random measures on a given unimodular space. This is useful for Palm calculations and also for reducing some problems to the discrete cases.

1 Introduction

The mass transport principle (MTP) refers to some key equations that appear in various forms in a number of subfields of probability theory and other fields of mathematics. These equations capture (and sometimes formalize) the intuitive notion of stochastic homogeneity, which, in different contexts, appears as stationarity, unimodularity, typicality of a point, re-rooting invariance, involution invariance, or just invariance. The similarity of the different versions of the MTP results in fruitful connections between various fields.

In Subsection 1.1, we provide a quick survey of the MTP in the literature. Then, the contributions of the present paper are introduced in the next parts of the introduction.

1.1 The Mass Transport Principle in the Literature

The mass transport principle (MTP) was first used in [20] for studying percolation on trees. It was developed further in [7] for studying group-invariant percolation on graphs and in [8] for proving recurrence of local limits of planar graphs. [4] also established a property called involution invariance for local limits of finite graphs (the MTP for local limits of graphs was also observed in [8]). This property turned out to be a special case of the MTP, and in fact, equivalent to it. Then, [3] defined unimodular random graphs by simply using the MTP as the definition, provided various general examples, and established many properties for them. The term unimodular was chosen in [3] because, in order that a fixed transitive graph $G$ satisfies the MTP, it is necessary and sufficient that the automorphism group of $G$ is a unimodular group.

It is useful to recall the MTP for unimodular graphs. A mass transport, or a transport function, is a function $g$ that takes as input a graph $G$ and two vertices of $G$ and outputs a value in $\mathbb{R}^{\geq 0}$. We think of $g(G,u,v)$ as the mass going from $u$ to $v$. The assumptions on $g$ are measurability (to be defined suitably) and invariance, where the latter means here that $g$ is invariant under isomorphisms. A random rooted graph $[\boldsymbol{G},\boldsymbol{o}]$ (where the graph $\boldsymbol{G}$ and the root vertex $\boldsymbol{o}$ are random) is called unimodular if it satisfies the following MTP for every transport function $g$:

$$\mathbb{E}\left[\sum_{x\in\boldsymbol{G}}g(\boldsymbol{G},\boldsymbol{o},x)\right]=\mathbb{E}\left[\sum_{x\in\boldsymbol{G}}g(\boldsymbol{G},x,\boldsymbol{o})\right]. \tag{1.1}$$
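When the graph is finite and the root is chosen uniformly at random, both sides of (1.1) reduce to the same double sum over ordered pairs of vertices, so the MTP holds automatically. The following minimal numerical sketch illustrates this; the graph and the transport function are hypothetical examples, not taken from the paper:

```python
# Check the MTP for a finite graph rooted uniformly at random.
# Both sides equal (1/|V|) * sum_{u,v} g(G,u,v), so they must agree.

# a small fixed graph given by an adjacency dict (hypothetical example)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
V = list(adj)

def g(u, v):
    # an arbitrary nonnegative transport function: each vertex sends
    # mass equal to its own degree along each incident edge
    return float(len(adj[u])) if v in adj[u] else 0.0

# expected outgoing / incoming mass at a uniformly random root o
out_mass = sum(g(o, x) for o in V for x in V) / len(V)
in_mass = sum(g(x, o) for o in V for x in V) / len(V)

assert abs(out_mass - in_mass) < 1e-12
print(out_mass, in_mass)  # both equal 4.5 for this graph
```

The equality here is purely combinatorial (relabeling the summation indices); unimodularity is exactly the requirement that this balance survives for random, possibly infinite, graphs.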

Similar versions of the MTP exist for point processes in $\mathbb{R}^{d}$. For a recent application, one can mention the use of the MTP in constructing perfect matchings between point processes and fair tessellations (see e.g., [25] and [13]). These MTPs are special cases of much older theorems in stochastic geometry; e.g., Mecke’s formula, Mecke’s theorem and Neveu’s exchange formula, which have been widely used in stochastic geometry. In particular, Mecke’s theorem (see e.g., [33]) can be rephrased as follows: If $\Phi_{0}$ is the Palm version of a stationary point process (i.e., conditioned to contain the origin, or equivalently, seeing the point process from a typical point), then

$$\mathbb{E}\left[\sum_{x\in\Phi_{0}}g(\Phi_{0},0,x)\right]=\mathbb{E}\left[\sum_{x\in\Phi_{0}}g(\Phi_{0},x,0)\right]. \tag{1.2}$$

This equation looks similar to (1.1), with the difference that the random objects have different natures and that the invariance condition for the transport function $g=g(\Phi,u,v)$ is invariance under Euclidean translations. In fact, (1.2) characterizes a larger class of point processes, which are called point-stationary point processes in [33] (e.g., the zero set or the graph of the simple random walk). See Subsection 4.1 for further discussion.

Due to the similarity of the MTP in (point-) stationary point processes and unimodular random graphs, a lot of connections have been observed between the two theories. In particular, [3] noted that any graph that is constructed on a stationary point process (with some equivariance and measurability condition) gives rise to a unimodular random graph. Also, various results and proof techniques in one theory can be translated into the other one with minor modifications (see [6]). The previous work [6] unifies these two notions by defining unimodular random discrete spaces.

The theorems mentioned above for stationary point processes are in fact more general and apply to stationary random measures. In particular, Mecke’s theorem in its general form can be rephrased equivalently in an MTP form as follows: If $\Psi_{0}$ is the Palm version of a stationary random measure on $\mathbb{R}^{d}$, then

$$\mathbb{E}\left[\int g(\Psi_{0},0,x)\,d\Psi_{0}(x)\right]=\mathbb{E}\left[\int g(\Psi_{0},x,0)\,d\Psi_{0}(x)\right] \tag{1.3}$$

for every function $g=g(\Psi_{0},u,v)\geq 0$ that is translation-invariant and measurable. As in the previous case, (1.3) characterizes a larger class of random measures, which are called mass-stationary random measures in [33]. See Subsection 4.1 for further discussion.

As mentioned above, the MTP (1.1) is satisfied by local limits of graphs. Similar properties have been established for some instances of scaling limits of graphs, where scaling limit means that the graph-distance metric is scaled by some small factor and the limit is taken in the sense of a sequence of random metric spaces (sometimes the metric spaces are rooted, i.e., have a distinguished point, and/or measured, i.e., have a distinguished measure). Most notably, various instances that have a compact rooted measured scaling limit satisfy the so-called re-rooting invariance property; i.e., the distribution is invariant under changing the root randomly (according to the distinguished measure on the model). This is the case for the Brownian continuum random tree [2], stable trees [15] and the Brownian map [34]. Also, in a few examples where the scaling limit is not compact, an MTP similar to (1.3) has been observed (e.g., in [10]). However, a general rigorous MTP result for scaling limits seems to be missing in the literature. This gap is filled in the present work by providing a general form of the MTP and by showing that it is preserved under weak limits. This general result readily implies the MTP and re-rooting invariance in the known examples of scaling limits. We will also provide some re-rooting invariance properties in the non-compact case.

On this occasion, we should also mention the mass transport principle in the theory of countable Borel equivalence relations. This will be introduced in the next subsection and discussed further in Subsection 4.6.

1.2 Unification of the MTP by Unimodular rmm Spaces

In the previous subsection, we recalled that the theories of unimodular graphs, point processes, random measures and scaling limits exhibit the mass transport principle in various forms. The similarity of these formulas provides fruitful connections between these theories. So, it is natural to ask for a general theory of the MTP that unifies these notions. We saw that the discrete cases (unimodular graphs and point processes) can be unified by the notion of unimodular discrete spaces [6] (these cases are also tightly connected to the theory of countable Borel equivalence relations).

The first main task of this work is to provide a unification of the MTP in the discrete and non-discrete cases. The unification is achieved by introducing unimodular random rooted measured metric spaces, or in short, unimodular rmm spaces, in Section 2. This can be thought of as a common generalization of unimodular graphs, (point-) stationary point processes, (mass-) stationary random measures and the random continuum spaces arising in scaling limits. In this regard, establishing the MTP for general scaling limits is novel and is one of the main motivations of this work.

Unimodular rmm spaces are defined as random triples $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$, where $\boldsymbol{X}$ is a boundedly-compact metric space, $\boldsymbol{o}$ is a point of $\boldsymbol{X}$ called the root, and $\boldsymbol{\mu}$ is a boundedly-finite Borel measure on $\boldsymbol{X}$. The Gromov-Hausdorff-Prokhorov metric, given in its most general form in [31], provides the measure-theoretic requirements for this definition. Then, the definition of unimodularity in this setting is given by modifying the MTP (1.3) accordingly. Note that a distinguished measure is needed on $\boldsymbol{X}$ in order that the MTP makes sense. Many specific examples and general categories of unimodular rmm spaces are discussed in Section 4.

In Sections 3 and 5, we provide general results on unimodular spaces. In particular, we prove that unimodularity is preserved under weak convergence. This implies that all (measured) scaling limits are unimodular (which is one of the main motivations of this work), and hence, in the compact case, satisfy the re-rooting invariance property. We also study re-rooting in the non-compact case and the properties of random walks on a unimodular rmm space. We define ergodicity and prove an ergodic decomposition theorem. Also, various definitions of amenability are provided and proved to be equivalent, based on similar definitions in other fields of mathematics; e.g., definitions by local means, approximate means, hyperfiniteness and the Følner condition. It is also proved that the number of topological ends of a unimodular rmm space belongs to $\{0,1,2,\infty\}$ almost surely.

We should mention that the discrete cases of the MTP, discussed above, are connected to the theory of countable Borel equivalence relations (CBER), which has totally different roots (ergodic theory of dynamical systems, group actions and orbit equivalence theory). This theory is almost as old as Mecke’s theorem and is very useful in probability theory, but still deserves to be more recognized in the probability community; some results in this theory have been re-invented later by probabilists, e.g., the existence of one-ended treeings in the amenable case (see e.g., [12], [24] and [40]) and factor point processes (see [32]). There exists an MTP-like formula in this theory, which in fact defines when an equivalence relation is measure-preserving; i.e., when the measure is invariant. It is a generalization of the measure-preserving property for dynamical systems and group actions. As discussed in [3], this notion is connected to unimodular graphs via graphings. While the two theories have substantial overlap, their viewpoints and motivations are different. In fact, the theory of CBERs can be thought of as the mathematical ground under the theory of unimodular graphs; just like the distinction between measure theory and probability theory. The unification of the MTP in this work does not cover CBERs since we focus on random metric spaces. We will still use this theory in proving some of the results. This will be discussed further in Subsection 4.6.

1.3 Generalization of Palm Theory

In this work, we also consider random measures and point processes on a given unimodular rmm space. In this view, the base space can be thought of as a generalization of the Euclidean or hyperbolic spaces. Heuristically, the intensity of a point process $\Phi$ on $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$ is the expected number of points of $\Phi$ per unit measure (measured by $\boldsymbol{\mu}$). Also, the mean of some quantity over the points of $\Phi$ is the expectation of that quantity at a typical point of $\Phi$. These notions will be formalized by generalizing the Palm theory in Section 6, which is the second main task of this work. We will generalize various notions and theorems in stochastic geometry; e.g., the Campbell measure, the Palm distribution, the Campbell formula, Mecke’s theorem, Neveu’s exchange formula and the existence of balancing transport kernels. We also show that the Palm theory generalizes Palm inversion (i.e., reconstructing the probability measure from the Palm distribution) and, at the same time, generalizes various constructions of unimodular graphs obtained by adding vertices and edges to another given unimodular graph (e.g., the dual of a unimodular planar graph).

An important application of the Palm theory in this work is to construct a countable Borel equivalence relation equipped with an invariant measure. This is done by means of adding a Poisson point process and considering its Palm version and is used to prove some of the theorems (e.g., amenability and invariant disintegration) using the results of the theory of countable Borel equivalence relations. These applications are included in Section 7.

2 Unimodular rmm Spaces

In this section, we define unimodular rmm spaces as a common generalization of unimodular graphs, stationary point processes, stationary random measures, etc. The definition is based on a mass transport principle similar to (1.1), (1.2) and (1.3). Note that in the MTP, a distinguished point and a measure should be presumed (since the spaces are not necessarily discrete). So we start by discussing rooted measured metric spaces and some notation.

2.1 Notation

If $X$ is a metric space, the closed ball of radius $r$ centered at $x\in X$ is denoted by $\overline{B}_{r}(x):=\{y\in X:d(x,y)\leq r\}$. The open ball is denoted by $B_{r}(x)$. Also, all measures are assumed to be Borel measures. If $f:X\to\mathbb{R}^{\geq 0}$ is measurable, the measure $f\mu$ on $X$ is defined by $(f\mu)(A):=\int_{A}f\,d\mu$. If $\mu$ is a probability measure, biasing $\mu$ by $f$ means considering the probability measure $\frac{1}{c}(f\mu)$, where $c:=\int_{X}f\,d\mu$. The Prokhorov metric between finite measures on $X$ is denoted by $d_{P}$.

2.2 The Space of Rooted Measured Metric Spaces

A rooted measured metric space, abbreviated as an rmm space, is a tuple $(X,o,\mu)$, where $X$ is a metric space, $\mu$ is a non-negative Borel measure on $X$ and $o$ is a distinguished point of $X$ called the root. For simplicity of notation, the metric on $X$ is always denoted by $d$ if there is no ambiguity; otherwise, it will be denoted by $d^{X}$. In this paper, we always assume that the metric space is boundedly compact (i.e., every closed ball is compact) and that $\mu$ is boundedly finite (i.e., every ball has finite measure under $\mu$). Similarly, a doubly-rooted measured metric space is a tuple $(X,o_{1},o_{2},\mu)$, where $\mu$ is as before and $o_{1}$ and $o_{2}$ are two ordered distinguished points of $X$. Note that we do not require that $X$ is a geodesic space. It can even be disconnected.

An isomorphism (or a GHP-isometry) between two rmm spaces $(X,o,\mu)$ and $(X',o',\mu')$ is a bijective isometry $\rho:X\rightarrow X'$ such that $\rho(o)=o'$ and $\rho_{*}\mu=\mu'$, where $\rho_{*}\mu$ denotes the push-forward of the measure $\mu$ under $\rho$. If such a $\rho$ exists, then $(X,o,\mu)$ and $(X',o',\mu')$ are called isomorphic. Isomorphisms between doubly-rooted spaces are defined similarly. Under this equivalence relation, the equivalence class containing $(X,o,\mu)$ (resp. $(X,o_{1},o_{2},\mu)$) is denoted by $[X,o,\mu]$ (resp. $[X,o_{1},o_{2},\mu]$).

Let $\mathcal{M}_{*}$ be the set of equivalence classes of rmm spaces under isomorphisms. Similarly, define $\mathcal{M}_{**}$ for doubly-rooted measured metric spaces. These sets become Polish spaces under the Gromov-Hausdorff-Prokhorov (GHP) metric (see [31] for the general case of the GHP metric and its history). To keep the focus on the main goals, we skip the definition of the metric and only recall the GHP topology (see Section 3 of [29]): a sequence $[X_{n},o_{n},\mu_{n}]$ converges to $[X,o,\mu]$ when these spaces can be embedded isometrically in a common boundedly-compact metric space $Z$ such that, after the embedding, $o_{n}$ converges to $o$, $X_{n}$ converges to $X$ as closed subsets of $Z$ (with the Fell topology) and $\mu_{n}$ converges to $\mu$ in the vague topology (convergence against compactly-supported continuous functions).

Polishness of $\mathcal{M}_{*}$ allows one to use classical tools of probability theory for studying random elements of $\mathcal{M}_{*}$, which are called random rmm spaces here. In particular, every random rooted graph defines a random rmm space (equipped with the graph-distance metric and the counting measure on the vertices). The same is true for point processes. Also, $\mathbb{R}^{d}$, rooted at the origin and equipped with a random measure, defines a random rmm space. The bounded-compactness condition matches the local-finiteness conditions in each of these examples.

Convention 2.1.

A random rmm space is denoted by a tuple of bold symbols like $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$. By convention, we will use the familiar symbols $\mathbb{P}$ and $\mathbb{E}$ for most random objects (instead of writing them in the form of long integrals), even if they live in different spaces and even if extra randomness is introduced. The following explanation helps to reduce possible confusion. Note that the tuple determines just one random element of $\mathcal{M}_{*}$, and the three symbols $\boldsymbol{X},\boldsymbol{o}$ and $\boldsymbol{\mu}$ are meaningless separately. Any formula containing these symbols should be well defined for an isomorphism class of rmm spaces. By contrast, in Section 6, we will consider more than one probability measure on $\mathcal{M}_{*}$ or similar spaces. In this case, one may also think of $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$ as a symbol for a generic element of $\mathcal{M}_{*}$ and use formulas like $\mathbb{P}\left[[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]\in A\right]$ and $Q([\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]\in A)$ for different probability measures $\mathbb{P}$ and $Q$.

2.3 Unimodularity

In this subsection, we define unimodular rmm spaces and present a few basic examples.

Definition 2.2.

A transport function is a measurable function $g:\mathcal{M}_{**}\to\mathbb{R}^{\geq 0}$. The value $g(X,u,v,\mu)$ is interpreted as the mass that $u$ sends to $v$ and is abbreviated by $g(u,v)$ if $X$ and $\mu$ are understood. Let $g^{+}(u):=\int g(u,x)\,d\mu(x)$ and $g^{-}(u):=\int g(x,u)\,d\mu(x)$ denote the outgoing mass from $u$ and the incoming mass to $u$, respectively.

Definition 2.3.

A random rmm space is called unimodular if the following mass transport principle holds: For every transport function gg,

$$\mathbb{E}\left[\int_{\boldsymbol{X}}g(\boldsymbol{o},x)\,d\boldsymbol{\mu}(x)\right]=\mathbb{E}\left[\int_{\boldsymbol{X}}g(x,\boldsymbol{o})\,d\boldsymbol{\mu}(x)\right]. \tag{2.1}$$

It is called nontrivial if $\boldsymbol{\mu}\neq 0$ a.s. and proper if $\mathrm{supp}(\boldsymbol{\mu})=\boldsymbol{X}$ a.s.

In words, the expected outgoing mass from the root should be equal to the expected incoming mass to the root. The case $\boldsymbol{\mu}:=0$ trivially satisfies the MTP and is not very interesting. However, there are some interesting classes of non-proper examples (e.g., when extending a unimodular graph, Example 6.23, or $\mathbb{R}^{d}$ or any other space equipped with a point process or a random measure, Example 4.1 and Definition 6.1). Another basic example is when $[\boldsymbol{X},\boldsymbol{o}]$ is arbitrary and $\boldsymbol{\mu}:=\delta_{\boldsymbol{o}}$ is the Dirac measure at $\boldsymbol{o}$. More generally, finite measures provide the basic examples explained below.

Example 2.4 (Finite Measure).

When $\boldsymbol{\mu}$ is a finite measure a.s., unimodularity is equivalent to re-rooting invariance: if $\boldsymbol{o}'$ is an additional random point of $\boldsymbol{X}$ chosen with distribution proportional to $\boldsymbol{\mu}$, then $[\boldsymbol{X},\boldsymbol{o}',\boldsymbol{\mu}]$ has the same distribution as $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$ (see Theorem 3.8). Loosely speaking, the root is a random point of $\boldsymbol{X}$ chosen with distribution proportional to $\boldsymbol{\mu}$.

Although the infinite case is more interesting for our purpose, the finite case appears in many important examples of scaling limits when the limiting space is compact (see Subsection 4.3). We will see that the re-rooting invariance property in these examples is a quick corollary of the fact that weak convergence preserves unimodularity (Lemma 3.1).
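The finite-measure case of the MTP can be checked by hand: when the root is drawn proportionally to $\mu$ on a deterministic space, both sides of (2.1) equal $\frac{1}{\mu(X)}\sum_{u,v}g(u,v)\mu(u)\mu(v)$. A small numerical sketch, with a hypothetical finite subset of $\mathbb{R}$ as the space (the weights and transport function are invented for illustration):

```python
# Verify the MTP (2.1) for a deterministic finite metric space whose
# root is chosen at random with distribution proportional to mu.

points = [0.0, 1.0, 2.5]              # a finite subset of R (hypothetical)
mu = {0.0: 1.0, 1.0: 2.0, 2.5: 0.5}   # atom weights of the measure mu
total = sum(mu.values())

def g(u, v):
    # a transport function depending only on the distance,
    # hence invariant under isomorphisms
    return 1.0 / (1.0 + abs(u - v))

# E[outgoing mass at the root], with P[o = p] = mu(p) / mu(X)
lhs = sum(mu[o] / total * sum(g(o, x) * mu[x] for x in points) for o in points)
# E[incoming mass at the root]
rhs = sum(mu[o] / total * sum(g(x, o) * mu[x] for x in points) for o in points)

assert abs(lhs - rhs) < 1e-12
```

Both sides are the same double sum over ordered pairs weighted by $\mu\otimes\mu$, which is the content of the re-rooting invariance in Example 2.4.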

Example 2.5 (Product).

Let $[\boldsymbol{X}_{i},\boldsymbol{o}_{i},\boldsymbol{\mu}_{i}]$ be unimodular for $i=1,2$. Then, $[\boldsymbol{X}_{1}\times\boldsymbol{X}_{2},(\boldsymbol{o}_{1},\boldsymbol{o}_{2}),\boldsymbol{\mu}_{1}\otimes\boldsymbol{\mu}_{2}]$ is also unimodular. Here, the metric on $\boldsymbol{X}_{1}\times\boldsymbol{X}_{2}$ can be the max-metric or the sum-metric.

Example 2.6 (Biasing).

Let $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$ be unimodular and let $b:\mathcal{M}_{*}\to\mathbb{R}^{\geq 0}$ be a measurable function such that $\mathbb{E}\left[b(\boldsymbol{o})\right]=1$. Let $b\boldsymbol{\mu}$ be the measure on $\boldsymbol{X}$ defined by $b\boldsymbol{\mu}(A):=\int_{A}b(x)\,d\boldsymbol{\mu}(x)$. Let $[\boldsymbol{X}',\boldsymbol{o}',\boldsymbol{\mu}']$ be obtained by changing the measure $\boldsymbol{\mu}$ to $b\boldsymbol{\mu}$ and then biasing the probability measure by $b(\boldsymbol{o})$; i.e.,

$$\mathbb{E}\left[h(\boldsymbol{X}',\boldsymbol{o}',\boldsymbol{\mu}')\right]=\mathbb{E}\left[b(\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu})\,h(\boldsymbol{X},\boldsymbol{o},b\boldsymbol{\mu})\right].$$

Then, $[\boldsymbol{X}',\boldsymbol{o}',\boldsymbol{\mu}']$ is a unimodular rmm space. This can be seen by verifying the MTP directly. In fact, in Example 6.22, this will be shown to be the Palm version of $b\boldsymbol{\mu}$ regarded as an additional measure on $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$.
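A sketch of the direct verification mentioned above: introducing the auxiliary transport function $h(X,u,v,\mu):=b(X,u,\mu)\,g(X,u,v,b\mu)\,b(X,v,\mu)$ (our notation, not the paper's), which is measurable and isomorphism-invariant, the MTP of the original space applied to $h$ yields the MTP of the biased space:

```latex
\begin{align*}
\mathbb{E}\left[\int g(\boldsymbol{X}',\boldsymbol{o}',x,\boldsymbol{\mu}')\,d\boldsymbol{\mu}'(x)\right]
  &= \mathbb{E}\left[b(\boldsymbol{o})\int g(\boldsymbol{X},\boldsymbol{o},x,b\boldsymbol{\mu})\,b(x)\,d\boldsymbol{\mu}(x)\right]
   = \mathbb{E}\left[\int h(\boldsymbol{o},x)\,d\boldsymbol{\mu}(x)\right] \\
  &= \mathbb{E}\left[\int h(x,\boldsymbol{o})\,d\boldsymbol{\mu}(x)\right]
   = \mathbb{E}\left[b(\boldsymbol{o})\int g(\boldsymbol{X},x,\boldsymbol{o},b\boldsymbol{\mu})\,b(x)\,d\boldsymbol{\mu}(x)\right] \\
  &= \mathbb{E}\left[\int g(\boldsymbol{X}',x,\boldsymbol{o}',\boldsymbol{\mu}')\,d\boldsymbol{\mu}'(x)\right].
\end{align*}
```

The first and last equalities use the definition of the biased space, and the middle equality is the MTP (2.1) for $h$.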

A heuristic interpretation of unimodularity is that the root is a typical point of the measure $\boldsymbol{\mu}$. When $\boldsymbol{\mu}$ is a finite measure, this heuristic is rigorous by Example 2.4. In the general case, the heuristic is that the expectation of any (equivariant) quantity evaluated at the root is an interpretation of the average of that quantity over the points of the metric space. More precisely, for measurable functions $f(\boldsymbol{o}):=f(\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu})$, the expectation $\mathbb{E}\left[f(\boldsymbol{o})\right]$ is an interpretation of the average of $f(\boldsymbol{X},\cdot,\boldsymbol{\mu})$ over the points of $\boldsymbol{X}$, where the average is taken according to the measure $\boldsymbol{\mu}$ (in fact, this should be modified in the non-ergodic case; see Subsection 5.4). See Corollary 5.8 for a rigorous statement.

3 Basic Properties of Unimodularity

In this section, we will provide basic properties of unimodularity, which are useful in the discussion of the examples in the next section. Further properties will be provided in Section 5.

3.1 Weak Limits

Lemma 3.1 (Weak Limits).

Unimodularity is preserved under weak limits.

This is similar to the case of unimodular graphs, but the proof is more involved since a convergent sequence of rmm spaces does not necessarily stabilize in a given window. This will be handled by a generalization of Strassen’s theorem.

Lemma 3.1 and Example 2.4 naturally lead to the following generalization of soficity:

Problem 3.2.

Is every unimodular rmm space sofic; i.e., is it the weak limit of a sequence of deterministic compact measured metric spaces $(X_{n},\mu_{n})$ rooted at a random point with distribution proportional to $\mu_{n}$?

This generalizes Question 10.1 of [3], which concerns unimodular graphs. Note that every such weak limit is unimodular. Also, one can assume without loss of generality that $X_{n}$ is a finite metric space and $\mu_{n}$ is a multiple of the counting measure.

Proof of Lemma 3.1.

Assume $[\boldsymbol{X}_{n},\boldsymbol{o}_{n},\boldsymbol{\mu}_{n}]$ is unimodular ($n=1,2,\ldots$) and converges weakly to $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$. To prove unimodularity of $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$, it is enough to prove the MTP (2.1) for bounded continuous functions $g$. Define $f:\mathcal{M}_{*}\to\mathbb{R}^{\geq 0}$ by

$$f(X,o,\mu):=\int_{X}g(X,o,p,\mu)\,d\mu(p).$$

By approximating $g$ by simpler functions, it is enough to assume that, for some $R<\infty$, $g(X,o,p,\mu)=0$ whenever $d(o,p)\geq R$, and also that $f$ and $g$ are bounded by $R$. In this case, we will prove in the next paragraph that $f$ is a bounded continuous function. So, weak convergence implies that $\mathbb{E}\left[f(\boldsymbol{X}_{n},\boldsymbol{o}_{n},\boldsymbol{\mu}_{n})\right]\to\mathbb{E}\left[f(\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu})\right]$. A similar argument holds when $o$ and $p$ are swapped in the definition of $f$. Now, the MTP for $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$ is implied by taking the limit in the MTP for $[\boldsymbol{X}_{n},\boldsymbol{o}_{n},\boldsymbol{\mu}_{n}]$, and the claim is proved.

The rest of the proof is devoted to establishing the continuity of $f$ and can be skipped at first reading. Assume $[X_{n},o_{n},\mu_{n}]$ are deterministic rmm spaces converging to $[X,o,\mu]$. One can assume that the $X_{n}$'s are subspaces of a common boundedly-compact metric space $Z$ converging to $X\subseteq Z$, that $o_{n}\to o$, and that $\mu_{n}$ converges vaguely to $\mu$. So there exist $\epsilon_{n}\geq 0$ with $\epsilon_{n}\to 0$ and measures $\mu'_{n}$ sandwiched between the restrictions of $\mu_{n}$ to $\overline{B}_{R+3-\epsilon_{n}}(o)$ and $\overline{B}_{R+3+\epsilon_{n}}(o)$ (the balls are in $Z$) such that, if $\mu'$ is the restriction of $\mu$ to $\overline{B}_{R+3}(o)$, then $d_{P}(\mu'_{n},\mu')\leq\epsilon_{n}$ (see Section 3 of [29]). We may assume $\epsilon_{n}<1$. By the generalized Strassen theorem (Theorem 2.1 of [31]), there exists an approximate coupling of $\mu'_{n}$ and $\mu'$; i.e., a measure $\alpha_{n}$ on $X_{n}\times X$ such that

$$\left|\pi_{1*}\alpha_{n}-\mu'_{n}\right|+\left|\pi_{2*}\alpha_{n}-\mu'\right|+\alpha_{n}\left(\{(x,y)\in X_{n}\times X:d^{Z}(x,y)>\epsilon_{n}\}\right)\leq\epsilon_{n}.$$

Here, $\pi_{1}$ and $\pi_{2}$ are the projections from $X_{n}\times X$ to $X_{n}$ and $X$ respectively. Also, by the assumptions on $g$, the convergence and a compactness argument, there exists $M<\infty$ such that, for all $n$, $\sup\{g(X_{n},o_{n},p,\mu_{n}):p\in X_{n}\}\leq M$, and the same holds for $X$. In addition, by considering the modulus of continuity of $g$ on a suitable compact set, one finds that $\delta_{n}:=\sup\{\left|g(X_{n},o_{n},p,\mu_{n})-g(X,o,q,\mu)\right|:d^{Z}(p,q)\leq\epsilon_{n}\}\to 0$. Now,

$$\begin{aligned}
\left|f(X_{n},o_{n},\mu_{n})-f(X,o,\mu)\right|
&=\left|\int_{X_{n}}g(X_{n},o_{n},p,\mu_{n})\,d\mu_{n}(p)-\int_{X}g(X,o,q,\mu)\,d\mu(q)\right|\\
&\leq M\left|\pi_{1*}\alpha_{n}-\mu'_{n}\right|+M\left|\pi_{2*}\alpha_{n}-\mu'\right|\\
&\quad+\left|\int_{X_{n}}g(X_{n},o_{n},p,\mu_{n})\,d(\pi_{1*}\alpha_{n})(p)-\int_{X}g(X,o,q,\mu)\,d(\pi_{2*}\alpha_{n})(q)\right|\\
&\leq M\left|\pi_{1*}\alpha_{n}-\mu'_{n}\right|+M\left|\pi_{2*}\alpha_{n}-\mu'\right|\\
&\quad+\iint\left|g(X_{n},o_{n},p,\mu_{n})-g(X,o,q,\mu)\right|\,d\alpha_{n}(p,q)\\
&\leq M\Big(\left|\pi_{1*}\alpha_{n}-\mu'_{n}\right|+\left|\pi_{2*}\alpha_{n}-\mu'\right|+\alpha_{n}\big(\{(x,y)\in X_{n}\times X:d^{Z}(x,y)>\epsilon_{n}\}\big)\Big)\\
&\quad+\iint\left|g(X_{n},o_{n},p,\mu_{n})-g(X,o,q,\mu)\right|1_{\{d^{Z}(p,q)\leq\epsilon_{n}\}}\,d\alpha_{n}(p,q)\\
&\leq M\epsilon_{n}+\delta_{n}\left|\alpha_{n}\right|\\
&\leq M\epsilon_{n}+\delta_{n}\epsilon_{n}+\delta_{n}\left|\mu'\right|.
\end{aligned}$$

It follows that $\left|f(X_{n},o_{n},\mu_{n})-f(X,o,\mu)\right|\to 0$, and the continuity of $f$ is proved. This finishes the proof of the lemma. ∎

3.2 Subset Selection

Unimodularity means heuristically that the root is a typical point. In particular, every property of the points that has zero chance of being observed at the root is observed at almost no other point. This is formalized in the following easy but important lemma.

Definition 3.3.

Let $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$ be a random rmm space and let $A\subseteq\mathcal{M}_{*}$ be measurable. The set $\boldsymbol{S}:=\boldsymbol{S}(\boldsymbol{X},\boldsymbol{\mu}):=\{p\in\boldsymbol{X}:(\boldsymbol{X},p,\boldsymbol{\mu})\in A\}$ is called a factor subset of $\boldsymbol{X}$. Note that the factor subset is a function of $\boldsymbol{X}$ and $\boldsymbol{\mu}$ and does not depend on $\boldsymbol{o}$.

Lemma 3.4 (Everything Happens At Root).

Let $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$ be a nontrivial unimodular rmm space. For every factor subset $\boldsymbol{S}$,

$$\begin{aligned}
\boldsymbol{o}\in\boldsymbol{S}\ \text{a.s.} &\iff \boldsymbol{S}\ \text{has full measure w.r.t.}\ \boldsymbol{\mu},\ \text{a.s.,}\\
\mathbb{P}\left[\boldsymbol{o}\in\boldsymbol{S}\right]>0 &\iff \mathbb{P}\left[\boldsymbol{\mu}(\boldsymbol{S})>0\right]>0.
\end{aligned}$$
Proof.

It is enough to prove the first claim. Define $g(u,v):=1_{\{v\not\in\boldsymbol{S}\}}$. Then, $g^{+}(\boldsymbol{o})=\boldsymbol{\mu}(\boldsymbol{X}\setminus\boldsymbol{S})$ and $g^{-}(\boldsymbol{o})=\boldsymbol{\mu}(\boldsymbol{X})1_{\{\boldsymbol{o}\not\in\boldsymbol{S}\}}$. Since $\boldsymbol{\mu}(\boldsymbol{X})>0$ a.s., the claim follows by the MTP (2.1). ∎

This is a generalization of Lemma 2.3 of [3]. It can be generalized further by allowing extra randomness, but some care is needed; this will be discussed in Subsection 5.1.

By letting \boldsymbol{S}:=\mathrm{supp}(\boldsymbol{\mu}), one immediately obtains:

Corollary 3.5.

If [\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] is a nontrivial unimodular rmm space, then \boldsymbol{o}\in\mathrm{supp}(\boldsymbol{\mu}) a.s.

Remark 3.6.

Note that, unlike the discrete cases, one cannot replace the last statement in Lemma 3.4 with \mathbb{P}\left[\boldsymbol{S}\neq\emptyset\right]>0. In the language of Borel equivalence relations, in contrast to the case of countable equivalence relations, the saturation of a null set can have positive measure.

Lemma 3.7 (Bounded Selection).

If \boldsymbol{\mu}(\boldsymbol{X})=\infty a.s., then every factor subset \boldsymbol{S} of \boldsymbol{X} is either empty or unbounded a.s. Also, \boldsymbol{\mu}(\boldsymbol{S})\in\{0,\infty\} a.s.

This generalizes Corollary 2.10 of [5] for unimodular graphs. As mentioned before Proposition 4 of [35], this is related to Poincaré’s recurrence theorem. See also Lemma 5.15 for a further generalization.

Proof.

Let A be an event on which \boldsymbol{S} is nonempty and bounded. Hence, \boldsymbol{\mu}(\boldsymbol{S})<\infty on A. By replacing \boldsymbol{S} with a suitable neighborhood of it if necessary, one may assume \boldsymbol{\mu}(\boldsymbol{S})>0 on A. So it is enough to prove the second claim. Let B be the event 0<\boldsymbol{\mu}(\boldsymbol{S})<\infty. Let g(u,v):=(\boldsymbol{\mu}(\boldsymbol{S}))^{-1}1_{\{v\in\boldsymbol{S}\}}1_{B}. Then, g^{+}(\boldsymbol{o})=1_{B} and g^{-}(\boldsymbol{o})=\infty if \boldsymbol{o}\in\boldsymbol{S} and B holds. If \mathbb{P}\left[B\right]>0, then the latter holds with positive probability by Lemma 3.4. This contradicts the MTP for g. ∎
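The contradiction at the end can be made explicit: the MTP (2.1) would force the following two quantities to be equal.

```latex
% For g(u,v):=(\boldsymbol{\mu}(\boldsymbol{S}))^{-1}1_{\{v\in\boldsymbol{S}\}}1_{B}:
\mathbb{E}\left[g^{+}(\boldsymbol{o})\right]=\mathbb{E}\left[1_{B}\right]
  =\mathbb{P}\left[B\right]\leq 1,
\qquad
\mathbb{E}\left[g^{-}(\boldsymbol{o})\right]
  \geq\infty\cdot\mathbb{P}\left[\boldsymbol{o}\in\boldsymbol{S},\,B\right].
% If \mathbb{P}[B]>0, Lemma 3.4 gives \mathbb{P}[\boldsymbol{o}\in\boldsymbol{S},B]>0,
% so the right side is infinite while the left side is finite; a contradiction.
```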

3.3 Re-rooting Invariance

In the following proposition, re-rooting a unimodular rmm space is considered even in the non-compact case. Assume that for each rmm space (X,o,\mu), a probability measure k_{o}=k_{(X,o,\mu)} is given on X. This will be considered as the law for changing the root from o to a new root. By keeping (X,\mu) fixed and letting o vary, this can be regarded as a Markovian kernel k^{(X,\mu)} on X (assuming the following measurability condition). It is called an equivariant Markovian kernel if it is invariant under the isomorphisms of rmm spaces and, for each measurable set A\subseteq\mathcal{M}_{**}, the function [X,o,\mu]\mapsto k_{o}(\{y\in X:[X,o,y,\mu]\in A\}) is a measurable function on \mathcal{M}_{*}.

Given a deterministic pair (X,\mu), an equivariant Markovian kernel k transports \mu to another measure on X defined by \mu^{\prime}(\cdot):=\int_{X}k_{y}(\cdot)\,d\mu(y). If \mu^{\prime}=\mu, then \mu is called a stationary measure for the kernel on X.

Let [\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] be a unimodular rmm space and k be an equivariant Markovian kernel. Conditionally on [\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}], choose \boldsymbol{o}^{\prime}\in\boldsymbol{X} randomly with distribution k_{\boldsymbol{o}}(\cdot), which is regarded as a new root.

Theorem 3.8 (Re-Rooting Invariance).

Assume [\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] is unimodular, k is an equivariant Markovian kernel, and \boldsymbol{o}^{\prime} is a new root chosen with law k_{\boldsymbol{o}}(\cdot). If, almost surely, \boldsymbol{\mu} is a stationary measure for the Markovian kernel k^{(\boldsymbol{X},\boldsymbol{\mu})} on \boldsymbol{X}, then [\boldsymbol{X},\boldsymbol{o}^{\prime},\boldsymbol{\mu}] has the same distribution as [\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}].

In particular, in the compact case, this theorem implies the re-rooting invariance mentioned in Example 2.4. This also generalizes the invariance of Palm distributions under bijective point-shifts and a similar statement for unimodular graphs (Proposition 3.6 of [5]). See also Subsection 5.2 for a generalization to the case where an initial biasing is considered.

Before proving the theorem, we need some lemmas. We will first use the MTP when k_{\boldsymbol{o}} is absolutely continuous w.r.t. \boldsymbol{\mu}, and then we will deduce the general case from the first case.

Lemma 3.9.

If k_{\boldsymbol{o}} is absolutely continuous w.r.t. \boldsymbol{\mu} almost surely, then the law of [\boldsymbol{X},\boldsymbol{o}^{\prime},\boldsymbol{\mu}] is absolutely continuous w.r.t. the law of [\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]. In addition, if the Radon-Nikodym derivative dk_{\boldsymbol{o}}/d\boldsymbol{\mu} is given by f(\boldsymbol{X},\boldsymbol{o},\cdot,\boldsymbol{\mu}), where f:\mathcal{M}_{**}\to\mathbb{R}^{\geq 0} is a measurable function, then the distribution of [\boldsymbol{X},\boldsymbol{o}^{\prime},\boldsymbol{\mu}] is obtained by biasing the distribution of [\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] by f^{-}(\boldsymbol{o}).

In fact, the proof shows that there always exists such a measurable Radon-Nikodym derivative f.

Proof.

Let \alpha and \beta be the \sigma-finite measures on \mathcal{M}_{**} defined by

\alpha(A) = \mathbb{E}\left[\int_{\boldsymbol{X}}1_{A}[\boldsymbol{X},\boldsymbol{o},x,\boldsymbol{\mu}]\,dk_{\boldsymbol{o}}(x)\right],
\beta(A) = \mathbb{E}\left[\int_{\boldsymbol{X}}1_{A}[\boldsymbol{X},\boldsymbol{o},x,\boldsymbol{\mu}]\,d\boldsymbol{\mu}(x)\right].

Note that \alpha is just the distribution of [\boldsymbol{X},\boldsymbol{o},\boldsymbol{o}^{\prime},\boldsymbol{\mu}]. It can be easily seen that \alpha is absolutely continuous w.r.t. \beta. Let f(X,o,o^{\prime},\mu) be the Radon-Nikodym derivative of \alpha w.r.t. \beta at [X,o,o^{\prime},\mu]. It can be shown that f satisfies the assumptions mentioned in the lemma. Now, let A be an event in \mathcal{M}_{*}. One has

\mathbb{P}\left[[\boldsymbol{X},\boldsymbol{o}^{\prime},\boldsymbol{\mu}]\in A\right]
= \mathbb{E}\left[\int_{\boldsymbol{X}}1_{A}[\boldsymbol{X},x,\boldsymbol{\mu}]\,dk_{\boldsymbol{o}}(x)\right]
= \mathbb{E}\left[\int_{\boldsymbol{X}}f(\boldsymbol{o},x)1_{A}[\boldsymbol{X},x,\boldsymbol{\mu}]\,d\boldsymbol{\mu}(x)\right]
= \mathbb{E}\left[\int_{\boldsymbol{X}}f(x,\boldsymbol{o})1_{A}[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]\,d\boldsymbol{\mu}(x)\right]
= \mathbb{E}\left[1_{A}[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]\,f^{-}(\boldsymbol{o})\right],

where the third equality holds by the MTP (2.1). So the claim is proved. ∎

Lemma 3.10.

There exists a transport function h such that h is symmetric, h>0 and h^{+}(\cdot)=h^{-}(\cdot)=1, except on an event that has zero measure w.r.t. every nontrivial unimodular rmm space [\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]. Indeed, if g>0 is an arbitrary transport function such that g^{+}(\boldsymbol{o})=1 a.s., then one can let

h(X,u,v,\mu):=\int_{X}\frac{g(u,x)g(v,x)}{g^{-}(x)}\,d\mu(x). \qquad (3.1)

In fact, this is just the composition of the Markovian kernel corresponding to g (given (X,\mu)) with its time-reversal. This ensures that the new kernel preserves \mu.
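To see why (3.1) works, one can expand h^{+} directly; by the symmetry of h, the same computation applies to h^{-}:

```latex
% Using h^{+}(u)=\int_X h(X,u,v,\mu)\,d\mu(v) and Fubini's theorem:
h^{+}(u)
  =\int_{X}\int_{X}\frac{g(u,x)g(v,x)}{g^{-}(x)}\,d\mu(x)\,d\mu(v)
  =\int_{X}\frac{g(u,x)}{g^{-}(x)}\left(\int_{X}g(v,x)\,d\mu(v)\right)d\mu(x)
  =\int_{X}\frac{g(u,x)}{g^{-}(x)}\,g^{-}(x)\,d\mu(x)
  =g^{+}(u).
% Hence h^{+}(\boldsymbol{o})=g^{+}(\boldsymbol{o})=1 a.s., provided (3.2) holds,
% and h^{-}=h^{+} by the symmetry of h.
```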

Proof.

One can construct g as follows. Given (X,o,\mu), let N>0 be the smallest integer such that \mu(\overline{B}_{N}(o))>0 (unless \mu=0). For each k\geq 1, let g_{k}(o,\cdot) be a constant function on \overline{B}_{N+k}(o) such that g_{k}^{+}(o)=1. Then, let g:=\sum_{k}2^{-k}g_{k}. Now, define h by (3.1). Assuming

0<g^{-}(x)<\infty \text{ for } \boldsymbol{\mu}\text{-a.e. } x\in\boldsymbol{X}, \text{ almost surely}, \qquad (3.2)

it is straightforward to check that h is well defined and h^{+}(\boldsymbol{o})=h^{-}(\boldsymbol{o})=1 a.s. So it remains to prove (3.2). Since g>0, one has g^{-}(\boldsymbol{o})>0 a.s. Also, the MTP (2.1) gives \mathbb{E}\left[g^{-}(\boldsymbol{o})\right]=1, and hence, g^{-}(\boldsymbol{o})<\infty a.s. So, Lemma 3.4 implies (3.2) and the claim is proved. ∎

Remark 3.11.

Given any measurable function b\geq 0 on \mathcal{M}_{*}, one can modify the proof of Lemma 3.10 to have h^{+}(\boldsymbol{o})=h^{-}(\boldsymbol{o})=b(\boldsymbol{o}) a.s. Note that in this case, the kernel has b(\cdot)\boldsymbol{\mu} as a stationary measure (this is also readily implied by Lemma 3.10 and Example 2.6). Also, the kernel \tilde{h} defined by h (i.e., \tilde{h}_{\boldsymbol{o}}:=b(\boldsymbol{o})^{-1}h(\boldsymbol{o},\cdot)\boldsymbol{\mu}) can be arbitrarily close to the trivial kernel, in the sense that d_{P}(\tilde{h}_{\boldsymbol{o}},\delta_{\boldsymbol{o}}) is less than an arbitrary constant \epsilon>0 a.s.

Proof of Theorem 3.8.

For the first case, assume, as in Lemma 3.9, that k_{\boldsymbol{o}}(\cdot) is absolutely continuous w.r.t. \boldsymbol{\mu} a.s. and its Radon-Nikodym derivative is given by f:\mathcal{M}_{**}\to\mathbb{R}^{\geq 0}. Since k_{\boldsymbol{o}}(\cdot) is a probability measure, f^{+}(\boldsymbol{o})=1 a.s. Also, since the Markovian kernel k^{(\boldsymbol{X},\boldsymbol{\mu})} preserves \boldsymbol{\mu}, one has f^{-}(\cdot)=1 a.e. on \boldsymbol{X}. Therefore, f^{-}(\boldsymbol{o})=1 a.s. by Lemma 3.4. Hence, by the second part of Lemma 3.9, [\boldsymbol{X},\boldsymbol{o}^{\prime},\boldsymbol{\mu}] has the same distribution as [\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}].

Now consider the general case. By Lemma 3.10, there exists a transport function h>0 such that h^{+}(\boldsymbol{o})=h^{-}(\boldsymbol{o})=1 a.s. Compose the Markovian kernel corresponding to h with k to obtain a new equivariant kernel \eta:

\eta_{\boldsymbol{o}}(\cdot):=\int_{\boldsymbol{X}}h(\boldsymbol{o},x)k_{x}(\cdot)\,d\boldsymbol{\mu}(x).

Now, \eta is an equivariant Markovian kernel that preserves \boldsymbol{\mu} a.s., and one can check that \eta_{\boldsymbol{o}}(\cdot) is absolutely continuous w.r.t. \boldsymbol{\mu} a.s. So, the first case implies that the composition of the two root changes preserves the distribution of [\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]. But the first root-change (corresponding to h) already preserves the distribution of [\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}], since h^{-}(\boldsymbol{o})=1 a.s. This implies that the second root-change also preserves it, and the claim is proved. ∎
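The claim that \eta preserves \boldsymbol{\mu} can be verified in one line, using h^{-}(\cdot)=1 (\boldsymbol{\mu}-a.e., by Lemmas 3.10 and 3.4) and the assumed stationarity of \boldsymbol{\mu} under k:

```latex
\int_{\boldsymbol{X}}\eta_{y}(\cdot)\,d\boldsymbol{\mu}(y)
  =\int_{\boldsymbol{X}}\int_{\boldsymbol{X}}h(y,x)k_{x}(\cdot)\,d\boldsymbol{\mu}(x)\,d\boldsymbol{\mu}(y)
  =\int_{\boldsymbol{X}}h^{-}(x)\,k_{x}(\cdot)\,d\boldsymbol{\mu}(x)
  =\int_{\boldsymbol{X}}k_{x}(\cdot)\,d\boldsymbol{\mu}(x)
  =\boldsymbol{\mu}(\cdot).
```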

4 General Categories of Examples

In this section, we present various general and specific examples of unimodular rmm spaces. We also study the connection with Borel equivalence relations in Subsection 4.6.

4.1 Point Processes and Random Measures

A point process in \mathbb{R}^{d} is a random discrete subset of \mathbb{R}^{d}. More generally, a random measure on \mathbb{R}^{d} is a random boundedly-finite measure on \mathbb{R}^{d}. A point process or random measure \Phi is stationary if \Phi+t has the same distribution as \Phi for every t\in\mathbb{R}^{d}. If so, the Palm version of \Phi can be defined in various ways; heuristically, it means seeing \Phi from a typical point, or conditioning on 0\in\Phi. Formally, if B\subseteq\mathbb{R}^{d} is an arbitrary bounded Borel set, bias by \Phi(B) and move the origin to a random point with distribution \Phi|_{B}; i.e.,

\mathbb{P}\left[\Phi_{0}\in A\right]=\frac{1}{\lambda_{\Phi}}\mathbb{E}\left[\int_{B}1_{A}(\Phi-x)\,d\Phi(x)\right], \qquad (4.1)

where \lambda_{\Phi} is a normalizing constant called the intensity of \Phi, which is assumed to be positive and finite here. Among other methods to define the Palm version, one can mention the use of the Campbell measure and of tessellations, which will be generalized in Section 6.
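As a sanity check of (4.1), a classical computation identifies the Palm version of the stationary Poisson point process (this is the Slivnyak-Mecke theorem, recalled here for context):

```latex
% For a stationary Poisson point process \Phi on \mathbb{R}^{d},
% the Slivnyak--Mecke theorem gives
\mathbb{P}\left[\Phi_{0}\in A\right]=\mathbb{P}\left[\Phi\cup\{0\}\in A\right];
% i.e., the Palm version \Phi_{0} is distributed as \Phi with an extra point
% added at the origin. In particular, [\Phi_{0},0] is a unimodular rmm space
% (a special case of Example 4.1).
```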

The Palm version satisfies Mecke's theorem [37], which can be rephrased equivalently as the MTPs (1.2) and (1.3) (see also [30] and [33]); Mecke's theorem is stated with different notations in the literature, but it is easy to transform the equation into this form. Mecke proved in addition that, under some finite moment condition, this equation characterizes the Palm versions of stationary point processes. Also, one can reconstruct the stationary version from the Palm version via a formula known as Palm inversion. Without the finite moment condition, the MTPs (1.2) and (1.3) define larger classes of point processes and random measures, which are called point-stationary point processes and mass-stationary random measures in [33] (the definition of point-stationarity in [33] is more complicated, but it is proved there to be equivalent to Mecke's condition). Examples include the zero set of the simple random walk, its graph, and the local time at zero of Brownian motion (see [6] for further examples). The MTPs (1.2) and (1.3) directly imply the following.
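For the reader's convenience, Mecke's invariance condition for point processes can be recalled in one common form (the precise statement of the MTP (1.2) is in the introduction; the following transport phrasing is one standard equivalent version):

```latex
% Point-stationarity (Mecke's invariance), in one common form:
% for every measurable g\geq 0,
\mathbb{E}\left[\sum_{x\in\Phi_{0}}g(\Phi_{0},x)\right]
  =\mathbb{E}\left[\sum_{x\in\Phi_{0}}g(\Phi_{0}-x,-x)\right].
% Heuristically: when g(\varphi,x) is the mass sent from 0 to x in the
% configuration \varphi, the expected total mass sent out of the origin
% equals the expected total mass received at the origin.
```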

Example 4.1.

If \Phi_{0} is the Palm version of a stationary point process in \mathbb{R}^{d} (or more generally, a point-stationary point process), then [\Phi_{0},0] is a unimodular rmm space (equipped with the counting measure). If \Psi_{0} is the Palm version of a stationary random measure (or more generally, a mass-stationary random measure), then [\mathrm{supp}(\Psi_{0}),0,\Psi_{0}] and [\mathbb{R}^{d},0,\Psi_{0}] are unimodular. Note that the latter may be improper.

Remark 4.2.

Since rmm spaces are considered up to isomorphisms, some geometry is lost (e.g., the coordinate axes) when considering point processes and random measures as rmm spaces. To fix this, one can extend \mathcal{M}_{*} to rmm spaces equipped with some additional geometric structure, which will be discussed in Subsection 5.1. In this sense, one can say that unimodular rmm spaces generalize (point-) stationary point processes and (mass-) stationary random measures.
We will also provide a further generalization in Section 6 by defining stationary point processes and random measures on a given unimodular rmm space. In this viewpoint, unimodular rmm spaces generalize the base space \mathbb{R}^{d}.

4.2 Unimodular Graphs and Discrete Spaces

As mentioned in the introduction, a unimodular (random) graph [3] is a random rooted graph that satisfies the MTP (1.1). Also, the MTP provides many connections between unimodular graphs and (point-) stationary point processes. As a common generalization, [6] defines unimodular discrete spaces, which are random boundedly-finite discrete metric spaces that satisfy a similar MTP. Since the Benjamini-Schramm topology for rooted graphs is consistent with the GHP topology (see e.g., [29]), the MTP implies the following.

Example 4.3.

If [\boldsymbol{G},\boldsymbol{o}] is a unimodular graph or a unimodular discrete space, then it is also a unimodular rmm space (equipped with the counting measure).

In considering graphs as rmm spaces, some information might be lost (e.g., parallel edges and loops). However, one can keep this information by considering rmm spaces equipped with some additional structure (see Subsection 5.1). In this sense, unimodular rmm spaces generalize unimodular graphs.

More generally, [3] defines unimodular networks, which are unimodular graphs equipped with some marks on the vertices and edges. Also, [6] defines unimodular marked discrete spaces. Similarly to the above example, these can be regarded as unimodular rmm spaces equipped with suitable additional structures.

Example 4.4.

The following are examples of unimodular rmm spaces which are graphs (or discrete spaces) equipped with a measure different from the counting measure. In these examples, the biasing is a special case of the Palm theory developed in Section 6 (see Example 6.22).

(i) The Palm version of non-simple stationary point processes.

(ii) If [\boldsymbol{G},\boldsymbol{o}] is a unimodular graph, then it is known that a stationary distribution of the simple random walk is obtained by biasing the probability measure by \mathrm{deg}(\boldsymbol{o}). It can be seen that, after this biasing, [\boldsymbol{G},\boldsymbol{o},\mathrm{deg}(\cdot)] is unimodular.

(iii) Let \boldsymbol{S} be the image of a process (Y_{n})_{n\in\mathbb{Z}} on \mathbb{R}^{d} that has stationary increments and Y_{0}=0 (e.g., a random walk). Let \boldsymbol{\mu} be the counting measure on \boldsymbol{S} and \boldsymbol{m} be the multiplicity measure on \boldsymbol{S}. Assuming that \boldsymbol{S} is discrete and \boldsymbol{m} is finite, [\boldsymbol{S},0,\boldsymbol{m}] is unimodular (this is easily implied by the MTP on the index set \mathbb{Z}). However, to make [\boldsymbol{S},0,\boldsymbol{\mu}] unimodular, one needs to bias by \boldsymbol{m}(0)^{-1} (see Subsection 4.3 of [6]).

(iv) In some random rooted graphs [\boldsymbol{G},\boldsymbol{o}], the MTP holds only when the sum is restricted to a specific subset \boldsymbol{S} containing the root. These cases are sometimes called locally unimodular and are unimodular rmm spaces by letting \boldsymbol{\mu} be the counting measure on \boldsymbol{S} (not on \boldsymbol{G}). An example is the graph of a null-recurrent Markov chain on \mathbb{Z}, where \boldsymbol{S} is a level set of the graph. Another example appears in extending a unimodular graph as described in Example 6.23.
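For item (ii), the degree biasing can be checked directly. Writing \hat{\mathbb{E}} for the expectation after biasing by \mathrm{deg}(\boldsymbol{o}) (normalized by \bar{d}:=\mathbb{E}[\mathrm{deg}(\boldsymbol{o})], assumed finite), a transport function g for the measure \mathrm{deg}(\cdot) unfolds into a transport function for the counting measure:

```latex
% Unimodularity of [\boldsymbol{G},\boldsymbol{o},\mathrm{deg}(\cdot)] after biasing:
\hat{\mathbb{E}}\Big[\sum_{v}g(\boldsymbol{o},v)\,\mathrm{deg}(v)\Big]
  =\frac{1}{\bar{d}}\,\mathbb{E}\Big[\sum_{v}\mathrm{deg}(\boldsymbol{o})\,\mathrm{deg}(v)\,g(\boldsymbol{o},v)\Big]
  =\frac{1}{\bar{d}}\,\mathbb{E}\Big[\sum_{v}\mathrm{deg}(v)\,\mathrm{deg}(\boldsymbol{o})\,g(v,\boldsymbol{o})\Big]
  =\hat{\mathbb{E}}\Big[\sum_{v}g(v,\boldsymbol{o})\,\mathrm{deg}(v)\Big],
% where the middle equality is the MTP (1.1) for the unimodular graph
% [\boldsymbol{G},\boldsymbol{o}], applied to the transport function
% h(u,v):=\mathrm{deg}(u)\,\mathrm{deg}(v)\,g(u,v).
```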

4.3 Scaling Limits

Let \boldsymbol{G}_{n} be a sequence of finite graphs (or metric spaces), which might be deterministic or random, and let \boldsymbol{o}_{n} be a vertex of \boldsymbol{G}_{n} chosen uniformly at random. A (measured) scaling limit of \boldsymbol{G}_{n} is the weak limit of a sequence of the form [\epsilon_{n}\boldsymbol{G}_{n},\boldsymbol{o}_{n},\delta_{n}\boldsymbol{\mu}_{n}] as random elements of \mathcal{M}_{*}. Here, \boldsymbol{\mu}_{n} is the uniform measure on the vertices of \boldsymbol{G}_{n}, and \epsilon_{n}\boldsymbol{G}_{n} means that the graph-distance metric is scaled by \epsilon_{n}. Likewise, a subsequential (measured) scaling limit is defined as the limit along a subsequence. The coefficients \epsilon_{n} and \delta_{n} may also depend on the non-rooted graph \boldsymbol{G}_{n} (but not on \boldsymbol{o}_{n}). Since \boldsymbol{o}_{n} is chosen uniformly, [\boldsymbol{G}_{n},\boldsymbol{o}_{n}] is a unimodular graph (in fact, it is enough to have the re-rooting invariance property; see Example 2.4).

Another setting for scaling limits is zooming out a given unimodular graph or rmm space [\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]; i.e., (subsequential) limits of [\epsilon_{n}\boldsymbol{X},\boldsymbol{o},\delta_{n}\boldsymbol{\mu}] when the factors \epsilon_{n} and \delta_{n} do not depend on \boldsymbol{o} and converge to zero appropriately. There are also examples of zooming in on a given rmm space (even a compact one) at the root, which is the case when \epsilon_{n},\delta_{n}\to\infty appropriately. Various examples will be mentioned below.

In each of these settings, Lemma 3.1 implies the following.

Lemma 4.5 (Scaling Limit).

Under the above assumptions, every measured scaling limit is unimodular and satisfies the MTP (2.1). In particular, if the limit is compact a.s., then it satisfies the re-rooting invariance property. The same is true for subsequential scaling limits, provided the subsequence is chosen independently of \boldsymbol{o}.

Non-compact scaling limits also enjoy a re-rooting invariance property, which is stated in Theorem 3.8.

Remark 4.6.

Unimodularity does not make sense for non-measured scaling limits. However, in many cases, by a pre-compactness argument, it is possible to deduce the existence of a subsequential measured scaling limit. For instance, this is always the case if the scaling limit is compact.

Example 4.7 (Brownian Motion).

The zero set of the simple random walk (SRW) on \mathbb{Z} scales to the zero set of Brownian motion equipped with the local time measure. The graph of the SRW scales to the graph of Brownian motion with the measure induced from the time axis (the metric is distorted differently, but unimodularity is preserved anyway). The image of the SRW on \mathbb{Z}^{d} (d\geq 3) scales to the image of Brownian motion on \mathbb{R}^{d} equipped with the push-forward of the measure on \mathbb{R}. By Example 4.4, the latter is unimodular. These examples provide unimodular rmm spaces (and also mass-stationary random measures).

Example 4.8 (Brownian Trees).

The Brownian continuum random tree (BCRT) [1] is the scaling limit of random trees on n vertices. The re-rooting invariance property (observed in [2]) is a direct corollary of Lemma 4.5 above. Aldous also proved that, by choosing larger scaling factors suitably, a non-compact scaling limit is obtained, which is called the self-similar continuum random tree (SSCRT). Lemma 4.5 implies that the SSCRT is also a unimodular rmm space and satisfies the MTP, which seems to be a new result.

Example 4.9 (Stable Trees).

Stable trees generalize the BCRT and are the scaling limits of Galton-Watson trees with infinite variance conditioned to be large [14]. A re-rooting invariance property of stable trees is proved in [15]. Note that the Galton-Watson trees are not re-rooting invariant. However, one can prove that the stable trees are the scaling limits of critical unimodular Galton-Watson (UGW) trees (Example 1.1 of [3]) as well. Since UGW trees are unimodular, the re-rooting invariance property is implied by Lemma 3.1 directly.

Example 4.10 (Self-Similar Unimodular Spaces).

Let K\subseteq\mathbb{R}^{d} be a self-similar set such that K=\cup f_{i}(K), where each f_{i} is a homothety (see [9]). Equip K with the unique self-similar probability measure \mu on it and choose \boldsymbol{o}\in K with distribution \mu. Assume all homothety ratios are equal and the open set condition holds. In this case, by zooming in at \boldsymbol{o}, there exists a subsequential scaling limit. This can be proved similarly to Theorem 4.14 of [6] (regarding self-similar unimodular discrete spaces), and the proof is skipped for brevity (the limit can be constructed by adding to K some isometric copies and continuing recursively, similarly to Remark 4.21 of [6]). In addition, the limit is the same as the subsequential scaling limit obtained by zooming out the unimodular discrete self-similar set defined in [6]. Hence, the scaling limit is unimodular.

Example 4.11 (Micro-Sets).

More general than Example 4.10, micro-sets are defined for an arbitrary compact set K\subseteq\mathbb{R}^{d} by zooming in at a given point (see Subsection 2.4 of [9]). Let \mu be a probability measure on K and choose \boldsymbol{o} randomly with distribution \mu. By modifying the definition of micro-sets slightly to allow non-compact micro-sets (using convergence of closed subsets of \mathbb{R}^{d}), and also by scaling \mu at the same time, one obtains that every subsequential measured micro-set at \boldsymbol{o}, defined as a weak limit (the subsequence and the scaling parameters should not depend on \boldsymbol{o}), is unimodular.

Example 4.12 (Brownian Web).

The Brownian web is the scaling limit of various drainage network models (see e.g., [18]), which are random trees embedded in the plane. The limit is usually defined with a different notion of convergence (as collections of paths in the plane), but it was also proved recently in [11] that the scaling limit exists in the Gromov-Hausdorff topology as well. It is natural to expect that the measured scaling limit also exists. Indeed, the limit is the completion of the skeleton of the Brownian web, and hence has a natural measure induced from the Lebesgue measure on \mathbb{R}^{2}. By stationarity, the resulting measured continuum tree is a unimodular rmm space.

Example 4.13 (Uniform Spanning Forest).

Let \boldsymbol{C} be the connected component containing 0 of the uniform spanning forest of \mathbb{Z}^{d} (see Section 7 of [3]). It is known that [\boldsymbol{C},0] is unimodular (the same is true in any unimodular graph); indeed, it is point-stationary. Regarding \boldsymbol{C} as a random closed subset of \mathbb{R}^{d} equipped with a measure, one can use precompactness to show that subsequential measured scaling limits exist. If one proves the existence of the scaling limit (or if the subsequence is chosen in a translation-invariant way), then the limit is a unimodular rmm space. It is also conjectured that the scaling limit of \boldsymbol{C} exists as a random real tree embedded in \mathbb{R}^{d}, and that for d\geq 5, the limit is identical to the Brownian-embedded SSCRT (see Example 4.8 and [41]).

Example 4.14 (Planar Maps).

The scaling limits of random planar graphs and maps have been of great interest in probability theory and in physics. For instance, the Brownian map and the Brownian disk are compact random metric spaces arising as the scaling limits of some models of uniform random planar triangulations and quadrangulations (see e.g., [34]). The Brownian plane is a non-compact model obtained by zooming out the uniform infinite planar quadrangulation, and also by zooming in on the Brownian map. Therefore, the Brownian plane, equipped with the volume measure (i.e., the scaling limit of the counting measure), is a unimodular rmm space.
In addition, [10] defines the hyperbolic Brownian plane as a limit of a sequence of planar triangulations scaled properly. It is observed in [10] that this model satisfies a version of the MTP. Indeed, this means that the hyperbolic Brownian plane is a unimodular rmm space, which is a direct corollary of Lemma 3.1.
Also, in the uniform infinite half-plane triangulation, the root is on the boundary, which is a bi-infinite path. It is proved that the scaling limit of this model exists. Equipping it with the length measure, which is the scaling limit of the counting measure on the boundary, one obtains a unimodular rmm space (which is improper).

4.4 Groups and Deterministic Spaces

In this subsection, we investigate when a deterministic rmm space [X,o,\mu] is unimodular. By Lemma 3.4, it is necessary that (X,\mu) is transitive; i.e., [X,o,\mu] is isomorphic to [X,y,\mu] for every y\in X. However, transitivity is not enough. The following generalizes the analogous result about transitive graphs (see [7]).

Proposition 4.15.

Let [X,o,\mu] be a deterministic rmm space.

(i) [X,o,\mu] is unimodular if and only if (X,\mu) is transitive and the automorphism group of (X,\mu) is a unimodular group.

(ii) Assume \Gamma is a closed subgroup of the automorphism group of (X,\mu). Then, the MTP holds for all \Gamma-invariant functions on X\times X if and only if \Gamma is unimodular and acts transitively on X.

The first part is proved in a more general form in Theorem 4.18. The second part can be proved similarly, and the proof is skipped for brevity.

Corollary 4.16 (Unimodular Groups).

Every unimodular group \Gamma, equipped with the Haar measure and a boundedly-compact left-invariant metric d (e.g., any finitely generated group equipped with a Cayley graph), is a unimodular rmm space.

It is also easy to prove this corollary by verifying the MTP directly.

Example 4.17.

The Euclidean space \mathbb{R}^{d} is a unimodular group, and hence, [\mathbb{R}^{d},0,\mathrm{Leb}] is unimodular. The hyperbolic space \mathbb{H}^{n} is not a group, but Proposition 4.15 implies that it is unimodular. However, the hyperbolic plane with one distinguished ideal point (which can be regarded as an rmm space with some additional structure; see Subsection 5.1) is not unimodular, since its automorphism group is not unimodular.

4.5 Quasi-Transitive Spaces

Let (X,\mu) be a deterministic measured metric space. In this subsection, we investigate when one can choose a random root \boldsymbol{o}\in X such that [X,\boldsymbol{o},\mu] is unimodular. This generalizes the case of quasi-transitive graphs (the only motivation for the title of this subsection); see [7] and Section 3 of [3].

Let \Gamma be the automorphism group of (X,\mu). If \Gamma contains only the identity function, then it is straightforward to see that \mu should be a finite measure and the distribution of \boldsymbol{o} should be proportional to \mu (see Example 2.4). However, if \Gamma is nontrivial, the situation is more involved.

The orbit of x\in X is the set \Gamma x=\{\gamma x:\gamma\in\Gamma\}. Bounded compactness of X implies that \Gamma is a locally-compact topological group. Let \left|\cdot\right| be a left-invariant Haar measure on \Gamma. Let B be an arbitrary open set that intersects every orbit in a nonempty bounded set. For instance, one can let B:=\bigcup_{n\geq 1}B_{n}(o)\setminus\Gamma\overline{B}_{n-2}(o) for an arbitrary o\in X. For x\in X, define

h(x):=\left|\Gamma_{x,B}\right|^{-1},\text{ where }\Gamma_{x,B}:=\{\gamma\in\Gamma:\gamma^{-1}x\in B\}.

The assumptions imply 0<h(x)<\infty. Also, for \alpha\in\Gamma, one has \Gamma_{\alpha x,B}=\alpha\Gamma_{x,B}, and hence, h(\alpha x)=h(x).

Theorem 4.18.

There exists a random point \boldsymbol{o}\in X such that [X,\boldsymbol{o},\mu] is unimodular if and only if \Gamma is a unimodular group and \int_{B}h\,d\mu<\infty. In this case, the distribution of [X,\boldsymbol{o},\mu] is unique. In addition, \boldsymbol{o} can be chosen with distribution proportional to (h\mu)|_{B}.

This generalizes Theorem 3.1 of [3]. Indeed, in the case where X is a graph or network and \mu is the counting measure, one can choose B by choosing exactly one point from every orbit. In this case, for x\in B, h(x) is the inverse of the measure of the stabilizer of x. One can also generalize the if side of the claim to the case where \Gamma is a closed subgroup of the automorphism group that acts transitively.

Example 4.19.

Let X be a horoball in the hyperbolic plane corresponding to an ideal point \omega and let \mu be the volume measure on X. Theorem 4.18 shows that one can choose a random point \boldsymbol{o}\in X such that [X,\boldsymbol{o},\mu] is unimodular. Indeed, if B is the region between any two lines passing through \omega, then it is enough to choose \boldsymbol{o} uniformly in B (note that \Gamma is isomorphic to the automorphism group of \mathbb{R}). This can be thought of as a continuum version of the canopy tree.
For a simpler example, let \alpha be a finite measure on \mathbb{R} and let \mu:=\alpha\times\mathrm{Leb}. Then, [\mathbb{R}^{2},\boldsymbol{o},\mu] is unimodular, where \boldsymbol{o} is chosen on the x axis with distribution proportional to \alpha.

Proof of Theorem 4.18.

For every measurable function f:X\to\mathbb{R}^{\geq 0}, one has

\int_{X}f(y)\,d\mu(y)=\int_{B}\int_{\Gamma}h(x)f(\gamma x)\,d\gamma\,d\mu(x).

This can be seen by the change of variable y:=γxy:=\gamma x in the right hand side and noting that hh and μ\mu are Γ\Gamma-invariant. First, assume that a:=Bh(x)𝑑μ(x)<a:=\int_{B}h(x)d\mu(x)<\infty and 𝒐1a(hμ)|B\boldsymbol{o}\sim\frac{1}{a}{\left.\kern-1.2pt(h\mu)\vphantom{\big{|}}\right|_{B}}. Therefore, for every transport function gg,

a𝔼[g(𝒐,y)𝑑μ(y)]\displaystyle a\mathbb{E}\left[\int g(\boldsymbol{o},y)d\mu(y)\right] =\displaystyle= Bh(z)Xg(z,y)𝑑μ(y)𝑑μ(z)\displaystyle\int_{B}h(z)\int_{X}g(z,y)d\mu(y)d\mu(z)
=\displaystyle= Bh(z)BΓh(x)g(z,γx)𝑑γ𝑑μ(x)𝑑μ(z)\displaystyle\int_{B}h(z)\int_{B}\int_{\Gamma}h(x)g(z,\gamma x)d\gamma d\mu(x)d\mu(z)
=\displaystyle= ΓBBh(z)h(x)g(z,γx)𝑑μ(x)𝑑μ(z)𝑑γ.\displaystyle\int_{\Gamma}\int_{B}\int_{B}h(z)h(x)g(z,\gamma x)d\mu(x)d\mu(z)d\gamma.

Similarly,

a𝔼[g(y,𝒐)𝑑μ(y)]\displaystyle a\mathbb{E}\left[\int g(y,\boldsymbol{o})d\mu(y)\right] =\displaystyle= ΓBBh(z)h(x)g(γz,x)𝑑μ(x)𝑑μ(z)𝑑γ.\displaystyle\int_{\Gamma}\int_{B}\int_{B}h(z)h(x)g(\gamma z,x)d\mu(x)d\mu(z)d\gamma.

Note that g(γz,x)=g(z,γ1x)g(\gamma z,x)=g(z,\gamma^{-1}x). If Γ\Gamma is unimodular, then the change of variable γγ1\gamma\to\gamma^{-1} preserves the Haar measure. This implies that the two above formulas are equal and the unimodularity of [X,𝒐,μ][X,\boldsymbol{o},\mu] is proved.

Conversely, assume 𝒐X\boldsymbol{o}\in X is a random point such that [X,𝒐,μ][X,\boldsymbol{o},\mu] is unimodular. Let \mathbb{P} and ν\nu be the distributions of 𝒐\boldsymbol{o} and [X,𝒐,μ][X,\boldsymbol{o},\mu] respectively. For AXA\subseteq X, define [A]:={[X,u,μ]:uA}[A]:=\{[X,u,\mu]:u\in A\}. So ν([A])=[𝒐ΓA]\nu([A])=\mathbb{P}\left[\boldsymbol{o}\in\Gamma A\right]. Let C,DXC,D\subseteq X be arbitrary and define g(u,v):=|{γ:γ1uC,γ1vD}|g(u,v):=\left|\{\gamma:\gamma^{-1}u\in C,\gamma^{-1}v\in D\}\right|. One has

𝔼[Xg(𝒐,y)𝑑μ(y)]\displaystyle\mathbb{E}\left[\int_{X}g(\boldsymbol{o},y)d\mu(y)\right] =\displaystyle= 𝔼[XΓ1C(γ1𝒐)1D(γ1y)𝑑γ𝑑μ(y)]\displaystyle\mathbb{E}\left[\int_{X}\int_{\Gamma}1_{C}(\gamma^{-1}\boldsymbol{o})1_{D}(\gamma^{-1}y)d\gamma d\mu(y)\right]
=\displaystyle= 𝔼[XΓ1C(γ1𝒐)1D(y)𝑑γ𝑑μ(y)]\displaystyle\mathbb{E}\left[\int_{X}\int_{\Gamma}1_{C}(\gamma^{-1}\boldsymbol{o})1_{D}(y)d\gamma d\mu(y)\right]
=\displaystyle= μ(D)𝔼[Γ1C(γ1𝒐)𝑑γ],\displaystyle\mu(D)\ \mathbb{E}\left[\int_{\Gamma}1_{C}(\gamma^{-1}\boldsymbol{o})d\gamma\right],

where the second equality is by the change of variable y:=γ1yy^{\prime}:=\gamma^{-1}y. Similarly,

𝔼[Xg(y,𝒐)𝑑μ(y)]\displaystyle\mathbb{E}\left[\int_{X}g(y,\boldsymbol{o})d\mu(y)\right] =\displaystyle= μ(C)𝔼[Γ1D(γ1𝒐)𝑑γ].\displaystyle\mu(C)\ \mathbb{E}\left[\int_{\Gamma}1_{D}(\gamma^{-1}\boldsymbol{o})d\gamma\right].

Since gg is Γ\Gamma-invariant, the MTP implies that these two quantities are equal. Since this holds for arbitrary CC and DD, there exists a constant bb such that

𝔼[Γ1C(γ1𝒐)𝑑γ]=bμ(C),CX.\mathbb{E}\left[\int_{\Gamma}1_{C}(\gamma^{-1}\boldsymbol{o})d\gamma\right]=b\mu(C),\quad\forall C\subseteq X. (4.2)

Thus, for every measurable function ff on XX, one has 𝔼[Γf(γ1𝒐)𝑑γ]=bXf𝑑μ\mathbb{E}\left[\int_{\Gamma}f(\gamma^{-1}\boldsymbol{o})d\gamma\right]=b\int_{X}fd\mu. In particular, let f=h1A1Bf=h1_{A}1_{B}, where AXA\subseteq X is any Γ\Gamma-invariant set. Hence,

bXh1A1B𝑑μ\displaystyle b\int_{X}h1_{A}1_{B}d\mu =\displaystyle= 𝔼[Γh(γ1𝒐)1A(γ1𝒐)1B(γ1𝒐)𝑑γ]\displaystyle\mathbb{E}\left[\int_{\Gamma}h(\gamma^{-1}\boldsymbol{o})1_{A}(\gamma^{-1}\boldsymbol{o})1_{B}(\gamma^{-1}\boldsymbol{o})d\gamma\right]
=\displaystyle= 𝔼[h(𝒐)1A(𝒐)Γ1B(γ1𝒐)𝑑γ]\displaystyle\mathbb{E}\left[h(\boldsymbol{o})1_{A}(\boldsymbol{o})\int_{\Gamma}1_{B}(\gamma^{-1}\boldsymbol{o})d\gamma\right]
=\displaystyle= 𝔼[1A(𝒐)]=ν([A]).\displaystyle\mathbb{E}\left[1_{A}(\boldsymbol{o})\right]=\nu([A]).

This proves the uniqueness of ν\nu. Also, by letting A:=XA:=X, one gets that Bh𝑑μ<\int_{B}hd\mu<\infty and b=1/ab=1/a. It also implies that if the root is instead chosen with distribution b(hμ)|Bb{\left.\kern-1.2pt(h\mu)\vphantom{\big{|}}\right|_{B}}, then ν\nu does not change. Hence, one may choose 𝒐b(hμ)|B\boldsymbol{o}\sim b{\left.\kern-1.2pt(h\mu)\vphantom{\big{|}}\right|_{B}} from the beginning. In addition, for every βΓ\beta\in\Gamma, since μ(C)=μ(βC)\mu(C)=\mu(\beta C), (4.2) implies

𝔼[Γ1C(γ1𝒐)𝑑γ]=𝔼[Γ1βC(γ1𝒐)𝑑γ]=m(β)1𝔼[Γ1C(γ1𝒐)𝑑γ],\displaystyle\mathbb{E}\left[\int_{\Gamma}1_{C}(\gamma^{-1}\boldsymbol{o})d\gamma\right]=\mathbb{E}\left[\int_{\Gamma}1_{\beta C}(\gamma^{-1}\boldsymbol{o})d\gamma\right]=m(\beta)^{-1}\mathbb{E}\left[\int_{\Gamma}1_{C}(\gamma^{-1}\boldsymbol{o})d\gamma\right],

where the last equality is by the change of variable γ:=γβ\gamma^{\prime}:=\gamma\beta, which scales the Haar measure by the constant factor m(β)m(\beta), where mm is the modular function of Γ\Gamma. The above equation implies that m(β)=1m(\beta)=1; i.e., Γ\Gamma is unimodular. So the claim is proved. ∎
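The following sketch connects the proof with the second part of Example 4.19 (our own illustration; we assume, as used in the proof above, that h and B are normalized so that h(x)\int_{\Gamma}1_{B}(\gamma x)\,d\gamma=1). Let \Gamma\cong\mathbb{R} act on \mathbb{R}^{2} by vertical translations with Haar measure dt and take B:=\mathbb{R}\times[0,1). Then, for x=(u,v),

```latex
h(x)^{-1} \;=\; \int_{\Gamma} 1_{B}(\gamma x)\,d\gamma
          \;=\; \mathrm{Leb}\{t\in\mathbb{R} : v+t\in[0,1)\} \;=\; 1,
```

so h\equiv 1, \int_{B}h\,d\mu=\alpha(\mathbb{R})<\infty, and (h\mu)|_{B}=\alpha\times\mathrm{Leb}|_{[0,1)}. Sampling \boldsymbol{o} from this distribution (normalized) yields the same law of [\mathbb{R}^{2},\boldsymbol{o},\mu] as sampling \boldsymbol{o} on the x-axis with distribution proportional to \alpha, since the two choices differ by a vertical translation.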

4.6 Connection with Borel Equivalence Relations

Let SS be a Polish space and RR be an equivalence relation on SS. It is called a countable Borel equivalence relation (CBER) if it is a Borel subset of S×SS\times S and every equivalence class is countable. A probability measure ν\nu on SS is invariant under RR if for all measurable functions f:S×S0f:S\times S\to\mathbb{R}^{\geq 0}, one has yR(x)f(x,y)dν(x)=yR(x)f(y,x)dν(x)\int\sum_{y\in R(x)}f(x,y)d\nu(x)=\int\sum_{y\in R(x)}f(y,x)d\nu(x), where R(x)R(x) is the equivalence class containing xx. As discussed in Example 9.9 of [3], this notion is tightly connected to unimodular graphs: if one has a graphing of RR and 𝒐S\boldsymbol{o}\in S is a random point with distribution ν\nu, then the component of the graphing containing 𝒐\boldsymbol{o} is unimodular. Conversely, if \mathbb{P} is a unimodular probability measure on 𝒢\mathcal{G}_{*}, where 𝒢\mathcal{G}_{*}\subseteq\mathcal{M}_{*} is the space of connected rooted graphs, then \mathbb{P} is invariant under the equivalence relation on 𝒢\mathcal{G}_{*} defined by (G,o)(G,v)(G,o)\sim(G,v) for all vGv\in G. The converse of the last claim also holds in the case where \mathbb{P} is supported on graphs with no nontrivial automorphism. As mentioned in [3], there is a substantial overlap between the two theories, but their viewpoints and motivations are different. In fact, graphings of CBERs are substantially more general and they can capture some features of graph limits that unimodular graphs cannot (see local-global convergence in [22]). Also, some of the results for unimodular graphs are proved via results on CBERs; e.g., ergodic decomposition and amenability. It should be noted that, due to the possibility of automorphisms, some geometry is lost when passing to graphings. This issue should be dealt with; e.g., by adding extra randomness to break the automorphisms.

For unimodular rmm spaces, the classes of the analogous equivalence relation on \mathcal{M}_{*}, defined by (X,o,μ)(X,v,μ)(X,o,\mu)\sim(X,v,\mu) for vXv\in X, are not countable. Hence, the theory of CBERs is not directly applicable. In Section 7, we will construct a CBER by introducing the Poisson point process on unimodular rmm spaces. Also, by the Palm theory developed in Section 6, we construct an invariant measure for the CBER. This enables us to use the results in the theory of CBERs.

5 Further Properties of Unimodular rmm Spaces

5.1 Allowing Extra Randomness

Lemma 3.4 deals with factor subsets, which are functions of the underlying measured metric space. The lemma still holds if one allows extra randomness in choosing the subset, provided that unimodularity is preserved. However, this is not straightforward to formalize due to measurability issues (the space of all (X,o,μ,S)(X,o,\mu,S), where SS is a Borel subset of XX, does not have a natural topology).888This is similar to the issue of defining random Borel subsets of d\mathbb{R}^{d}, where one has to use a random field instead. The following idea is similar to the use of random fields.

To formalize this generalization, assume that a random geometric structure 𝒎\boldsymbol{m} on 𝑿\boldsymbol{X} is already given, so that [𝑿,𝒐,𝝁,𝒎][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\boldsymbol{m}] is a random rmm space equipped with some additional structure. This makes sense as soon as there is a suitable generalization of the GHP metric. For instance, 𝒎\boldsymbol{m} can be a random measure on 𝑿\boldsymbol{X}, a random closed subset of 𝑿\boldsymbol{X}, etc. In the previous work [29], a quite general framework was presented for extending the GHP metric using the notion of functors. It can be applied to various types of additional geometric structures, and sufficient criteria for Polishness are also provided (it is necessary that for every deterministic (X,o,μ)(X,o,\mu), the set of additional structures on (X,o,μ)(X,o,\mu) is Polish, but more assumptions are needed). Here, we assume that Polishness holds as well.

Definition 5.1.

[𝑿,𝒐,𝝁,𝒎][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\boldsymbol{m}] is unimodular if the MTP (2.1) holds even if gg depends on the additional structure 𝒎\boldsymbol{m}.

In fact, this means that there exists a version of the conditional distribution of 𝒎\boldsymbol{m} given [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] such that the conditional law does not depend on the root. This will be formalized in Section 6 (see Definition 6.1, Lemma 6.3 and Remark 6.7).

Once such an additional structure is available, one can extend the notion of factor subsets by allowing the subset to be a function of (𝑿,𝝁,𝒎)(\boldsymbol{X},\boldsymbol{\mu},\boldsymbol{m}). Then, the results of this paper like Lemmas 3.1 and 3.4 can be generalized to this more general setting.

Additional geometric structures are interesting in their own right, and various special cases have been considered in the literature separately (see [29] for an account of the literature and for a unification). In this work, we use this setting in various places; e.g., for developing Palm theory in Section 6 (where 𝒎\boldsymbol{m} is a tuple of kk random measures on 𝑿\boldsymbol{X}), for studying random walks on unimodular spaces in Subsection 5.2 (where 𝒎\boldsymbol{m} is a sequence of points of 𝑿\boldsymbol{X}), for ergodicity (Subsection 5.3), for hyperfiniteness (Subsection 5.6), and in the proofs in Section 7 (where 𝒎\boldsymbol{m} is a marked measure or a closed subset).

5.2 Random Walk

In Subsection 3.3, re-rooting a unimodular rmm space is defined by means of equivariant Markovian kernels. By iterating a re-rooting, one obtains a random walk on the unimodular rmm space, as formalized below.

Let \mathcal{M}_{\infty} be the space of all (X,o,μ,(yn)n)(X,o,\mu,(y_{n})_{n\in\mathbb{Z}}), where (yn)n(y_{n})_{n} is a sequence in XX and y0=oy_{0}=o. By the discussion in Subsection 5.1, one can show that \mathcal{M}_{\infty} can be turned into a Polish space. Consider the following shift operator on \mathcal{M}_{\infty}:

𝒮(X,o,μ,(yn)n):=(X,y1,μ,(yn+1)n).\mathcal{S}(X,o,\mu,(y_{n})_{n}):=(X,y_{1},\mu,(y_{n+1})_{n}). (5.1)

An initial bias is a measurable function b:0b:\mathcal{M}_{*}\to\mathbb{R}^{\geq 0}. Then, for every deterministic (X,μ)(X,\mu), one obtains a measure bμb\mu on XX defined by

bμ(A):=Ab(X,y,μ)𝑑μ(y).b\mu(A):=\int_{A}b(X,y,\mu)d\mu(y). (5.2)

Let kk be an equivariant Markovian kernel (Subsection 3.3). Given every deterministic (X,o,μ)(X,o,\mu), the kernel k(X,μ)k^{(X,\mu)} defines a Markov chain (𝒙n)n(\boldsymbol{x}_{n})_{n\in\mathbb{Z}} on XX such that 𝒙0=o\boldsymbol{x}_{0}=o and 𝒙1\boldsymbol{x}_{1} has law kok_{o}. Let θ(X,o,μ)\theta_{(X,o,\mu)} be the law of (X,o,μ,(𝒙n)n)(X,o,\mu,(\boldsymbol{x}_{n})_{n}) on \mathcal{M}_{\infty}. Let [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] be a random rmm space such that 𝔼[b(𝒐)]<\mathbb{E}\left[b(\boldsymbol{o})\right]<\infty and let QQ be the distribution of [𝑿,𝒐,𝝁,(𝒙n)n][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},(\boldsymbol{x}_{n})_{n}] biased by b(𝒐)b(\boldsymbol{o}).

Theorem 5.2.

Assume [𝐗,𝐨,𝛍][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] is a unimodular rmm space. Under the above assumptions, if for almost every sample [X,o,μ][X,o,\mu] of [𝐗,𝐨,𝛍][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}], bμb{\mu} is a stationary (resp. reversible) measure for the Markovian kernel k(X,μ)k^{(X,\mu)} on XX, then QQ is a stationary (resp. reversible) measure under the shift 𝒮\mathcal{S}.

This generalizes Theorem 4.1 of [3], which is for unimodular graphs. A further generalization is provided in Subsection 6.3.4 by allowing the stationary measure to be singular with respect to 𝝁\boldsymbol{\mu}.

Proof.

First, assume that k𝒐k_{\boldsymbol{o}} is absolutely continuous w.r.t. 𝝁\boldsymbol{\mu} almost surely. In this case, a slight modification of the proof of Theorem 4.1 of [3] (similarly to Lemma 3.9) can be used to prove the claim. But in the general case, the idea of the proof of Theorem 3.8 (composing kk with a continuous kernel) does not work. Instead, it is enough to approximate kk by a sequence of equivariant kernels that have the same properties as kk (stationarity or reversibility) plus absolute continuity. For this, let h~n\tilde{h}_{n} be a sequence of kernels converging to the trivial kernel given by Remark 3.11. Note that the kernel h~nkh~n\tilde{h}_{n}\circ k\circ\tilde{h}_{n} converges to kk as nn\to\infty, preserves b𝝁b\boldsymbol{\mu} if kk does, is reversible if kk is reversible (since h~n\tilde{h}_{n} is symmetric) and has the absolute continuity property. This proves the claim. ∎

Theorem 5.3 (Characterization of Unimodularity).

Let hh be a fixed symmetric transport function as in Lemma 3.10 (h>0h>0 and h+()=h()=1h^{+}(\cdot)=h^{-}(\cdot)=1). Let (𝐱n)n(\boldsymbol{x}_{n})_{n} be the random walk given by the kernel kk defined by ko:=h(o,)μk_{o}:=h(o,\cdot)\mu. Then, a random rmm space [𝐗,𝐨,𝛍][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] is unimodular if and only if the law of [𝐗,𝐨,𝛍,(𝐱n)n][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},(\boldsymbol{x}_{n})_{n}] (defined above) is stationary and reversible under the shift 𝒮\mathcal{S}. The condition h>0h>0 can also be relaxed to nh(n)>0\sum_{n}h^{(n)}>0, where h(n)h^{(n)} is given by the nn-fold composition of the kernel with itself.

Remark 5.4.

One can similarly extend this theorem to have an arbitrary initial bias bb by having h+=h=bh^{+}=h^{-}=b. This generalizes the fact that for random rooted graphs, unimodularity is equivalent to stationarity and reversibility of the simple random walk after biasing by the degree of the root (see Section 4 of [3]). The benefit of the above theorem is that it characterizes all unimodular rmm spaces, not only those with the moment condition 𝔼[b(𝒐)]<\mathbb{E}\left[b(\boldsymbol{o})\right]<\infty. This seems to be new even for graphs (see also the proof of Lemma 4 of [30]).
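The graph fact recalled above can be checked concretely in the finite case. A minimal sketch (the graph below is an arbitrary illustrative choice): on a finite graph, the degree-biased distribution is stationary and reversible for the simple random walk, since \pi_{i}P_{ij}=A_{ij}/\sum_{k}\deg(k) is symmetric in i and j.

```python
import numpy as np

# Adjacency matrix of a small connected graph: a path 0-1-2-3 with the
# extra chord {0, 2} (an arbitrary illustrative choice).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

deg = A.sum(axis=1)
P = A / deg[:, None]      # simple random walk: P[i, j] = A[i, j] / deg(i)
pi = deg / deg.sum()      # degree-biased root distribution

# Stationarity: pi P = pi.
assert np.allclose(pi @ P, pi)

# Reversibility (detailed balance): pi_i P_ij = pi_j P_ji.
F = pi[:, None] * P
assert np.allclose(F, F.T)
```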

Proof of Theorem 5.3.

The only if part is implied by Theorem 5.2. For the if part, note that if h>0h>0, then the two sides of (2.1) are 𝔼[g(𝒐,𝒙1)]\mathbb{E}\left[g^{\prime}(\boldsymbol{o},\boldsymbol{x}_{1})\right] and 𝔼[g(𝒙1,𝒐)]\mathbb{E}\left[g^{\prime}(\boldsymbol{x}_{1},\boldsymbol{o})\right], where g:=g/hg^{\prime}:=g/h. So, since [𝑿,𝒐,𝒙1,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{x}_{1},\boldsymbol{\mu}] has the same distribution as [𝑿,𝒙1,𝒐,𝝁][\boldsymbol{X},\boldsymbol{x}_{1},\boldsymbol{o},\boldsymbol{\mu}] by reversibility, the MTP holds; i.e., [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] is unimodular.

Under the weaker condition nh(n)>0\sum_{n}h^{(n)}>0, let AnA_{n} be the event h(n)>0h^{(n)}>0 and write g=ngng=\sum_{n}g_{n}, where gng_{n} is the restriction of gg to An(i=1n1Ai)A_{n}\setminus(\cup_{i=1}^{n-1}A_{i}). Each gng_{n} satisfies the MTP by the first part of the proof. Hence, gg also satisfies the MTP. ∎

Proposition 5.5 (Speed Exists).

Under the setting of Theorem 5.2, the speed of the random walk (𝐱n)n(\boldsymbol{x}_{n})_{n}, defined by limnd(𝐨,𝐱n)/n\lim_{n}d(\boldsymbol{o},\boldsymbol{x}_{n})/n, exists.

This is a generalization of Proposition 4.8 of [3] and can be proved by the same argument using Kingman’s subadditive ergodic theorem. In fact, Theorem 5.7 implies that the speed does not depend on 𝒐\boldsymbol{o} and is measurable with respect to the invariant sigma-field (up to modifying a null event).

5.3 Ergodicity

An event AA\subseteq\mathcal{M}_{*} is invariant if it does not depend on the root; i.e., if [X,o,μ]A[X,o,\mu]\in A, then yX:[X,y,μ]A\forall y\in X:[X,y,\mu]\in A. Let II be the sigma-field of invariant events. A unimodular rmm space [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] (or a unimodular probability measure on \mathcal{M}_{*}) is ergodic if [A]{0,1}\mathbb{P}\left[A\right]\in\{0,1\} for all AIA\in I.

One can express ergodicity in terms of ergodicity of random walks as follows. Let hh be a fixed symmetric transport function such that h+()=h()=1h^{+}(\cdot)=h^{-}(\cdot)=1 and h>0h>0 (given by Lemma 3.10). Let (𝒙n)n(\boldsymbol{x}_{n})_{n\in\mathbb{Z}} be the resulting two-sided random walk as in Theorem 5.3 and let QQ be the distribution of [𝑿,𝒐,𝝁,(𝒙n)n][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},(\boldsymbol{x}_{n})_{n}]. Let Γ\Gamma be the group of automorphisms of the time axis \mathbb{Z}; i.e., transformations of the form tt+t0t\mapsto t+t_{0} or tt+t0t\mapsto-t+t_{0}. By Theorem 5.3, [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] is unimodular if and only if QQ is invariant under the action of Γ\Gamma on \mathcal{M}_{\infty} defined by shifts and time-reversals (note that considering time-reversals is important at this point). Here, we prove:

Theorem 5.6.

[𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] is ergodic if and only if the law of [𝐗,𝐨,𝛍,(𝐱n)n][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},(\boldsymbol{x}_{n})_{n}] is ergodic under the action of Γ\Gamma.

The latter means that every Γ\Gamma-invariant event has probability zero or one. This result is straightforwardly implied by the following theorem.

Theorem 5.7.

Under the above setting, for every shift-invariant event BB\subseteq\mathcal{M}_{\infty}, there exists an invariant event AIA\in I such that BΔ{(X,o,μ,(xn)n):(X,o,μ)A}B\Delta\{(X,o,\mu,(x_{n})_{n}):(X,o,\mu)\in A\} has zero probability for every unimodular rmm space equipped with the random walk (𝐱n)n(\boldsymbol{x}_{n})_{n}.

This extends Theorem 4.6 of [3] with the additional point that AA does not depend on the choice of the unimodular probability measure (this is important in the next subsection). The same proof works here and is skipped for brevity. These results hold for an arbitrary random walk preserving an initial bias as well (as in Theorem 5.2). Also, one can relax the condition h>0h>0 to nh(n)>0\sum_{n}h^{(n)}>0 as in Theorem 5.3.

Corollary 5.8.

Let [𝐗,𝐨,𝛍][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] and (𝐱n)n(\boldsymbol{x}_{n})_{n} be as above. Then, for every f:f:\mathcal{M}_{*}\to\mathbb{R} such that 𝔼[|f(𝐨)|]<\mathbb{E}\left[\left|f(\boldsymbol{o})\right|\right]<\infty, ave(f):=limn12ni=nnf(𝐱i)\mathrm{ave}(f):=\lim_{n}\frac{1}{2n}\sum_{i=-n}^{n}f(\boldsymbol{x}_{i}) exists and 𝔼[ave(f)]=𝔼[f(𝐨)]\mathbb{E}\left[\mathrm{ave}(f)\right]=\mathbb{E}\left[f(\boldsymbol{o})\right]. In particular, if [𝐗,𝐨,𝛍][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] is ergodic, then ave(f)=𝔼[f(𝐨)]\mathrm{ave}(f)=\mathbb{E}\left[f(\boldsymbol{o})\right] a.s.
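As a toy numerical illustration of the corollary (our own sketch; the 5-cycle, the function f and all parameters are arbitrary choices), consider the simple random walk on a 5-cycle: the cycle is regular, so the degree-biased law is uniform and the ergodic average should approach the plain mean of f.

```python
import numpy as np

# Simple random walk on a 5-cycle, started at 0; the seed fixes the run.
rng = np.random.default_rng(0)
n, steps = 5, 200_000
f = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # a test function on the vertices

moves = rng.choice([-1, 1], size=steps)    # i.i.d. +/-1 increments
positions = np.cumsum(moves) % n           # the walk's trajectory
ave = f[positions].mean()                  # ergodic average of f

# The average along the walk is close to E[f(o)] = f.mean() = 2.0.
assert abs(ave - f.mean()) < 0.1
```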

5.4 Ergodic Decomposition

In this subsection, we will prove ergodic decomposition for unimodular rmm spaces; i.e., expressing the distribution as a mixture of ergodic probability measures. This does not follow immediately from the existing results in the literature; e.g., for measure preserving semi-group actions (see [16]) or for countable Borel equivalence relations. We will deduce the claim from ergodic decomposition for the random walk.

Let 𝒰\mathcal{U} (resp. \mathcal{E}) denote the set of unimodular (resp. ergodic) probability measures on \mathcal{M}_{*}. These are subsets of the set of probability measures on \mathcal{M}_{*} and can be equipped with the topology of weak convergence. Also, 𝒰\mathcal{U} is a closed and convex subset by Lemma 3.1.

Proposition 5.9.

\mathcal{E} is the set of extreme points of 𝒰\mathcal{U}.

An extreme point means a point that cannot be expressed as a convex combination of two other points. This claim is implied by the following stronger result, and the proof is skipped.

Theorem 5.10 (Ergodic Decomposition).

For every unimodular probability measure ν\nu on \mathcal{M}_{*}, there exists a unique probability measure λ\lambda on \mathcal{E} such that ν=φ𝑑λ(φ)\nu=\int_{\mathcal{E}}\varphi d\lambda(\varphi).

See also Theorem 5.12 for another formulation. The proof is by using random walks. Let hh be an arbitrary symmetric transport function as in Lemma 3.10 (h>0h>0 and h+()=h()=1h^{+}(\cdot)=h^{-}(\cdot)=1) and let (𝒙n)n(\boldsymbol{x}_{n})_{n} be the resulting random walk as in Subsection 5.2. Let II^{\prime} be the sigma-field of Γ\Gamma-invariant events in \mathcal{M}_{\infty}, where Γ\Gamma is the automorphism group of \mathbb{Z} (as in Subsection 5.3). Let 𝒰\mathcal{U}^{\prime} (resp. \mathcal{E}^{\prime}) be the set of Γ\Gamma-invariant probability measures on \mathcal{M}_{\infty} (resp. ergodic under the action of Γ\Gamma). Let π:\pi:\mathcal{M}_{\infty}\to\mathcal{M}_{*} be the projection defined by forgetting the trajectory. By using Theorems 5.3 and 5.6, it is straightforward to deduce that π𝒰=𝒰\pi_{*}\mathcal{U}^{\prime}=\mathcal{U} and π=\pi_{*}\mathcal{E}^{\prime}=\mathcal{E}.

Proof of Theorem 5.10 (Existence).

Let ν\nu be the distribution of a unimodular rmm space [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]. Let ν^\hat{\nu} be the distribution of [𝑿,𝒐,𝝁,(𝒙n)n][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},(\boldsymbol{x}_{n})_{n}]. By Theorem 5.2, ν^𝒰\hat{\nu}\in\mathcal{U}^{\prime}. Therefore, by the ergodic decomposition for the action of Γ\Gamma (see e.g., [16]), there exists a probability measure α\mathcal{\alpha} on \mathcal{E}^{\prime} such that ν^=φ𝑑α(φ)\hat{\nu}=\int_{\mathcal{E}^{\prime}}\varphi d\alpha(\varphi). This implies that ν=(πφ)𝑑α(φ)\nu=\int_{\mathcal{E}^{\prime}}(\pi_{*}\varphi)d\alpha(\varphi). Use the change of variables ψ:=πφ\psi:=\pi_{*}\varphi and note that πφ\pi_{*}\varphi\in\mathcal{E}. One gets ν=ψ𝑑λ(ψ)\nu=\int_{\mathcal{E}}\psi d\lambda(\psi), where λ=(π)α\lambda=(\pi_{*})_{*}\alpha. So, the existence of ergodic decomposition is proved. ∎

Lemma 5.11.

For every event CC\subseteq\mathcal{M}_{*} and every 0p10\leq p\leq 1, there exists an invariant event AIA\in I such that {ν:ν(C)p}={ν:ν(A)=1}\{\nu\in\mathcal{E}:\nu(C)\leq p\}=\{\nu\in\mathcal{E}:\nu(A)=1\}.

Proof.

Let C:={(X,o,μ,(xn)):(X,o,μ)C}C^{\prime}:=\{(X,o,\mu,(x_{n})):(X,o,\mu)\in C\}\subseteq\mathcal{M}_{\infty}. By Theorems 1 and 3 of [16], there exists BIB\in I^{\prime} such that {ν:ν(C)p}={ν:ν(B)=1}\{\nu^{\prime}\in\mathcal{E}^{\prime}:\nu^{\prime}(C^{\prime})\leq p\}=\{\nu^{\prime}\in\mathcal{E}^{\prime}:\nu^{\prime}(B)=1\}. Now, let AA be the event given by Theorem 5.7. ∎

Proof of Theorem 5.10 (Uniqueness).

Let ν1=φ𝑑λ1(φ)\nu_{1}=\int_{\mathcal{E}}\varphi d\lambda_{1}(\varphi) and ν2=φ𝑑λ2(φ)\nu_{2}=\int_{\mathcal{E}}\varphi d\lambda_{2}(\varphi), where λ1\lambda_{1} and λ2\lambda_{2} are distinct probability measures on \mathcal{E}. The sets EC,p:={ν:ν(C)p}E_{C,p}:=\{\nu\in\mathcal{E}:\nu(C)\leq p\}, where CC\subseteq\mathcal{M}_{*} and 0p10\leq p\leq 1, generate the weak topology and its corresponding Borel sigma-field. Therefore, there exist CC and pp such that λ1(EC,p)λ2(EC,p)\lambda_{1}(E_{C,p})\neq\lambda_{2}(E_{C,p}). By Lemma 5.11, there exists an invariant event AIA\in I such that EC,p={ν:ν(A)=1}E_{C,p}=\{\nu\in\mathcal{E}:\nu(A)=1\}. Since φ(A){0,1}\varphi(A)\in\{0,1\}, one gets νi(A)=φ(A)𝑑λi(φ)=λi(EC,p)\nu_{i}(A)=\int\varphi(A)d\lambda_{i}(\varphi)=\lambda_{i}(E_{C,p}). Thus, ν1(A)ν2(A)\nu_{1}(A)\neq\nu_{2}(A), and hence, ν1ν2\nu_{1}\neq\nu_{2}. ∎

Lemma 5.11 proves condition (b) of [16]. Condition (c) is also implied by the proof of existence in Theorem 5.10. One can similarly prove condition (a) of [16] using random walks. Therefore, one can leverage the direct construction of the ergodic decomposition in [16] to prove the following formulation of ergodic decomposition (see the paragraph of [16] after defining the condition (c)).

Theorem 5.12.

There exists an II-measurable map β:\beta:\mathcal{M}_{*}\to\mathcal{E} such that ν=β(ξ)𝑑ν(ξ)\nu=\int_{\mathcal{M}_{*}}\beta(\xi)d\nu(\xi) for every unimodular probability measure ν\nu on \mathcal{M}_{*}. In addition, every ergodic φ\varphi\in\mathcal{E} is concentrated on β1(φ)\beta^{-1}(\varphi) and no other ergodic measure is concentrated on β1(φ)\beta^{-1}(\varphi). Such a map β\beta is unique in the sense that two such maps are equal ν\nu-a.s., for every unimodular probability measure ν\nu.

See Theorem 4.11 of [28] for a similar statement for countable Borel equivalence relations.
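For intuition only (a finite-state analogue outside the paper's setting): the stationary distributions of a finite Markov chain form a simplex whose extreme points are the stationary distributions of the individual recurrent classes, mirroring Proposition 5.9 and Theorems 5.10 and 5.12. A sketch with an illustrative two-class chain:

```python
import numpy as np

# A reducible chain with two recurrent classes {0, 1} and {2, 3}.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.2, 0.8, 0.0, 0.0],
    [0.0, 0.0, 0.9, 0.1],
    [0.0, 0.0, 0.3, 0.7],
])

# "Ergodic" stationary distributions, one per recurrent class.
phi1 = np.array([2/7, 5/7, 0.0, 0.0])
phi2 = np.array([0.0, 0.0, 3/4, 1/4])
assert np.allclose(phi1 @ P, phi1) and np.allclose(phi2 @ P, phi2)

# Every mixture is stationary, and the mixing weight (the analogue of the
# measure lambda in Theorem 5.10) is recovered from the mass on each class.
lam = 0.3
nu = lam * phi1 + (1 - lam) * phi2
assert np.allclose(nu @ P, nu)
assert np.isclose(nu[:2].sum(), lam)
```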

5.5 Ends

Ends are defined for every topological space [19]. In particular, if XX is boundedly-compact and locally-connected and oXo\in X, every end of XX can be represented uniquely by a sequence (Un)n(U_{n})_{n}, where UnU_{n} is an unbounded connected component of Bn(o)cB_{n}(o)^{c} and U1U2U_{1}\supseteq U_{2}\supseteq\cdots. There is also a natural compact topology on the union of XX and the set of its ends.

For unimodular graphs, it is proved that the number of ends (defined similarly) is either 0, 1, 2 or \infty. In addition, in the last case, there is no isolated end (Proposition 6.10 of [3]). Here, we extend these results to unimodular rmm spaces.

Definition 5.13.

Let (X,o,μ)(X,o,\mu) be a locally-connected rmm space. An end of XX, represented by a sequence (Un)n(U_{n})_{n} as above, is light if μ(Un)<\mu(U_{n})<\infty for some nn. Otherwise, it is called heavy. It is called isolated if UnU_{n} has only one end for some nn.

If 𝝁(𝑿)<\boldsymbol{\mu}(\boldsymbol{X})<\infty, there is no heavy end, but the number of light ends can be arbitrary (Example 2.4). Also, even if 𝝁(𝑿)=\boldsymbol{\mu}(\boldsymbol{X})=\infty, there might exist light isolated ends (e.g., when 𝑿=n({n}×)(×{0})\boldsymbol{X}=\cup_{n\in\mathbb{Z}}\left(\{n\}\times\mathbb{R}\right)\cup\left(\mathbb{R}\times\{0\}\right) and 𝝁\boldsymbol{\mu} is the sum of the Lebesgue measure on ×{0}\mathbb{R}\times\{0\} and some finite measures on the vertical lines). Here, we focus on the other cases.

Proposition 5.14.

Assume [𝐗,𝐨,𝛍][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] is a unimodular rmm space that is connected and locally-connected a.s. and assume 𝛍(𝐗)=\boldsymbol{\mu}(\boldsymbol{X})=\infty a.s.

  1. (i)

    The number of heavy ends of 𝑿\boldsymbol{X} is either 1, 2 or \infty. In the last case, there is no isolated heavy end.

  2. (ii)

    The set of light ends is either empty or is an open dense subset of the set of ends of 𝑿\boldsymbol{X} (and hence, is infinite).

If 𝑿\boldsymbol{X} is disconnected but locally-connected, the same result holds for almost every connected component of 𝑿\boldsymbol{X} by Lemma 3.7 (note that every component is clopen by local connectedness).

The proof of the claim for heavy ends is a modification of that of Proposition 3.9 of [36] for unimodular graphs. So this part is only sketched for brevity. First, we provide the following generalization of Lemma 3.7.

Lemma 5.15.

Let [𝐗,𝐨,𝛍][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] be as in Proposition 5.14 and 𝐒\boldsymbol{S} be a factor subset. Then, almost surely, if 𝐒\boldsymbol{S}\neq\emptyset, then every heavy end of (𝐗,𝛍)(\boldsymbol{X},\boldsymbol{\mu}) is a limiting point of 𝐒\boldsymbol{S}.

Proof.

If not, with positive probability, there exists a bounded subset CC and a component UU of 𝑿C\boldsymbol{X}\setminus C such that C𝑺C\cap\boldsymbol{S}\neq\emptyset, U𝑺=U\cap\boldsymbol{S}=\emptyset and 𝝁(U)=\boldsymbol{\mu}(U)=\infty. One may assume that CC is open and connected and has diameter at most nn (for some fixed nn). Let CC^{\prime} be the union of all such sets CC. Then, CC^{\prime} is a factor subset, intersects 𝑺\boldsymbol{S} and one can see that 𝑿C\boldsymbol{X}\setminus C^{\prime} has some components UU^{\prime} with infinite mass. For every such UU^{\prime}, send unit mass from every xUx\in U^{\prime} to the compact set U\partial U^{\prime} (or if 𝝁(U)=0\boldsymbol{\mu}(\partial U^{\prime})=0, to some neighborhood of U\partial U^{\prime}). This contradicts the MTP since the outgoing mass is at most 1 and the incoming mass is \infty at some points. ∎

Proof of Proposition 5.14.

The existence of heavy ends is proved by constructing UnU_{n}’s inductively such that 𝝁(Un)=\boldsymbol{\mu}(U_{n})=\infty. If the number of heavy ends is finite and at least three, then one can construct a compact subset S𝑿S\subseteq\boldsymbol{X}, as a factor of 𝑿\boldsymbol{X}, that separates all of the heavy ends (consider the union of all open connected subsets with diameter at most nn that separate all ends, and note that every two such subsets intersect). This contradicts Lemma 3.7. Also, if ee is an isolated heavy end and there are at least three heavy ends, one can assign to ee a bounded set S(e)XS(e)\subseteq X equivariantly that separates ee from all other heavy ends and separates at least two other heavy ends from each other (consider the union of all such sets that are open and connected and have diameter at most nn and note that every two of them intersect). The set eS(e)\cup_{e}S(e) violates Lemma 5.15. If the set of light ends is nonempty and non-dense, there exists some open connected set CC with diameter at most nn (for some fixed nn) such that some component of 𝑿C\boldsymbol{X}\setminus C has only heavy ends, and some other component UU^{\prime} of 𝑿C\boldsymbol{X}\setminus C has light ends and 𝝁(U)1\boldsymbol{\mu}(U^{\prime})\leq 1. One can see that the union of all such CC violates Lemma 5.15. ∎

5.6 Amenability

The notion of amenability is originally defined for countable groups. It is also extended to locally-compact topological groups and also to countable Borel equivalence relations. The latter is extended to unimodular graphs in [3]. There are various equivalent definitions, some functional-analytic ones and some combinatorial ones. To extend them to unimodular rmm spaces, the existing definitions do not apply directly, since no group or countable Borel equivalence relation is present.

In this subsection, we extend some of the definitions of amenability of countable Borel equivalence relations (in [12] and [26]) to unimodular rmm spaces. In Theorem 5.21, we prove that these definitions are equivalent by reducing them to analogous conditions for some specific countable Borel equivalence relation. This reduction requires Palm theory developed in Section 6. So the proof of the main result is postponed to Subsection 7.3.

5.6.1 Definition by Local Means

A definition of amenability of groups is the existence of an invariant mean. That is, the existence of a map that assigns a mean value to every bounded measurable function such that the map is group-invariant and finitely-additive. For countable Borel equivalence relations, two definitions of global mean and local mean are provided (see [26] and [12]). Here, we extend the local mean to unimodular rmm spaces (global means are based on partial bijections and seem more difficult to extend to the continuum setting).

Definition 5.16 (Local Mean).

Let [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] be a unimodular rmm space. A local mean mm is a map that assigns to (some class of) deterministic rmm spaces (X,o,μ)(X,o,\mu), a state m(X,o,μ)=:mom_{(X,o,\mu)}=:m_{o} on (X,μ)(X,\mu) (i.e., mo:L(X,μ)m_{o}:L^{\infty}(X,\mu)\to\mathbb{R} is a positive linear functional such that mo(1)=mo=1m_{o}(1)=||m_{o}||_{\infty}=1), such that:

  1. (i)

    mm is isomorphism-invariant and is defined for a.e. realization of [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}],

  2. (ii)

    If mom_{o} is defined, then mym_{y} is defined for all yXy\in X and mo=mym_{o}=m_{y},

  3. (iii)

    For all bounded measurable functions f:f:\mathcal{M}_{**}\to\mathbb{R}, the map [X,o,μ]mo(f(o,))[X,o,\mu]\mapsto m_{o}(f(o,\cdot)) is measurable.

Then, the following condition is a definition of amenability of a unimodular rmm space [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] and is analogous to the condition (AI) of [26]:

There exists a local mean. (LM)

5.6.2 Definition by Approximate Means

Definition 5.17.

Let [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] be a unimodular rmm space. An approximate mean is a sequence of measurable functions λn:0\lambda_{n}:\mathcal{M}_{**}\to\mathbb{R}^{\geq 0} such that, almost surely, y𝑿:𝑿λn(y,)𝑑𝝁=1\forall y\in\boldsymbol{X}:\int_{\boldsymbol{X}}\lambda_{n}(y,\cdot)d\boldsymbol{\mu}=1 and y𝑿:λn(𝒐,)λn(y,)10\forall y\in\boldsymbol{X}:||\lambda_{n}(\boldsymbol{o},\cdot)-\lambda_{n}(y,\cdot)||_{1}\to 0.

Here, λn(y,)\lambda_{n}(y,\cdot) is regarded as an element of L1(𝑿,𝝁)L^{1}(\boldsymbol{X},\boldsymbol{\mu}). Then, the following condition is another definition of amenability and is analogous to the condition (AI) of [26] for Borel equivalence relations:

There exists an approximate mean. (AM)

The name approximate mean comes from the fact that a local mean is obtained by taking an ultralimit of λn(o,)f()𝑑μ\int\lambda_{n}(o,\cdot)f(\cdot)d\mu as nn\to\infty.
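To make the definition concrete, consider a toy discrete example outside the formal framework of this paper: the space is $\mathbb{Z}$ with the counting measure, and $\lambda_n(y,\cdot)$ is the uniform density on the window $\{y-n,\ldots,y+n\}$. Two such windows at offset $d$ agree except on $2d$ points, so the $L^1$ distance is $2d/(2n+1)\to 0$ and $(\lambda_n)_n$ is an approximate mean. The following Python sketch (all function names are ours) checks both defining properties numerically:

```python
def lam(n, y, z):
    # Uniform density of total mass 1 on the window {y-n, ..., y+n} of Z.
    return 1.0 / (2 * n + 1) if abs(z - y) <= n else 0.0

def l1_dist(n, o, y):
    # ||lam_n(o, .) - lam_n(y, .)||_1, summed over the union of both windows.
    pts = range(min(o, y) - n, max(o, y) + n + 1)
    return sum(abs(lam(n, o, z) - lam(n, y, z)) for z in pts)

# Each lam_n(y, .) integrates to 1, and for fixed o, y the L1 distance
# equals 2|o - y| / (2n + 1), which tends to 0 as n grows.
```

Taking an ultralimit of the means $f\mapsto\sum_z \lambda_n(o,z)f(z)$ then yields a local mean, as stated above.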

5.6.3 Definition by Hyperfiniteness

Roughly speaking, a countable Borel equivalence relation is hyperfinite if it can be approximated by finite equivalence sub-relations. The following is analogous to equivalence sub-relations of the natural equivalence relation on \mathcal{M}_{*} (Subsection 4.6):

Definition 5.18.

A factor partition Π\Pi is a map that assigns to every (X,μ)(X,\mu) a partition of XX such that the map is invariant under isomorphisms and, if Π(o)\Pi(o) denotes the element containing oXo\in X, then {(X,o,y,μ):yΠ(o)}\{(X,o,y,\mu):y\in\Pi(o)\} is a measurable subset of \mathcal{M}_{**}. It is called finite if every element of Π\Pi has finite measure under μ\mu (but need not be a finite or bounded set). A factor sequence of nested partitions (Πn)n(\Pi_{n})_{n} is defined similarly by the condition that the map (X,o,y,μ)min{n:yΠn(o)}(X,o,y,\mu)\mapsto\min\{n:y\in\Pi_{n}(o)\} is measurable.

If (X,μ)(X,\mu) has nontrivial automorphisms, not all partitions of XX can appear in the above definition. So, we allow extra randomness. However, due to topological issues in defining a random partition on XX directly (see Subsection 5.1), we use another form of extra randomness as follows.

Definition 5.19.

An equivariant (random) partition is defined similarly to factor partitions with the difference that the partition can be a factor of (X,o,Φ)(X,o,\Phi), where Φ\Phi is an equivariant random additional structure as in Subsection 5.1. An equivariant nested sequence of partitions is defined similarly.

Remark 5.20.

In fact, the arguments in Subsection 7.3 show that it is enough that the partition is a factor of the marked Poisson point process (see Example 6.10 and Subsection 7.3). Also, if there is no nontrivial automorphism a.s., then factor partitions are enough.

These definitions allow us to define the following three forms of hyperfiniteness, which will be seen to be equivalent.

There exist equivariant nested finite partitions Πn\displaystyle\text{There exist equivariant nested finite partitions }\Pi_{n}
such that [nΠn(𝒐)=𝑿]=1.\displaystyle\text{such that }\mathbb{P}\left[\bigcup_{n}\Pi_{n}(\boldsymbol{o})=\boldsymbol{X}\right]=1. (HF1)

This does not imply that Πn(𝒐)\Pi_{n}(\boldsymbol{o}) contains a large neighborhood of 𝒐\boldsymbol{o}. The following condition is another form of hyperfiniteness.

There exist equivariant nested finite partitions Πn\displaystyle\text{There exist equivariant nested finite partitions }\Pi_{n}
such that r<:[n:Br(𝒐)Πn(𝒐)]=1.\displaystyle\text{such that }\forall r<\infty:\mathbb{P}\left[\exists n:B_{r}(\boldsymbol{o})\subseteq\Pi_{n}(\boldsymbol{o})\right]=1. (HF2)

Hyperfiniteness can also be phrased in terms of a single partition as follows.

r<,ϵ>0, there exists an equivariant finite partition Π\displaystyle\forall r<\infty,\forall\epsilon>0,\text{ there exists an equivariant finite partition }\Pi
such that [Br(𝒐)Π(𝒐)]<ϵ.\displaystyle\text{such that }\mathbb{P}\left[B_{r}(\boldsymbol{o})\not\subseteq\Pi(\boldsymbol{o})\right]<\epsilon. (HF3)

5.6.4 Definition by the Følner Condition

The Følner condition is a combinatorial way to define amenability of groups (and deterministic graphs); i.e., the existence of finite sets AA whose boundaries are arbitrarily small compared to AA. Modified versions of this condition are provided for countable Borel equivalence relations (based on equivariant graphings; see [12] and [26]) and for unimodular graphs (Definition 8.1 of [3]). In particular, AA is required to be an element of some factor partition. The extension to the continuum setting is not straightforward. Here, we provide two Følner-type conditions as follows. For A𝑿A\subseteq\boldsymbol{X} and r0r\geq 0, let rA:={yA:Br(y)A}\partial_{r}A:=\{y\in A:B_{r}(y)\not\subseteq A\} denote the inner rr-boundary of AA.

r<,ϵ>0, there exists an equivariant finite partition Π\displaystyle\forall r<\infty,\forall\epsilon>0,\text{ there exists an equivariant finite partition }\Pi
such that 𝔼[𝝁(rΠ(𝒐))𝝁(Π(𝒐))]<ϵ.\displaystyle\text{such that }\mathbb{E}\left[\frac{\boldsymbol{\mu}(\partial_{r}\Pi(\boldsymbol{o}))}{\boldsymbol{\mu}(\Pi(\boldsymbol{o}))}\right]<\epsilon. (FO1)

The MTP easily implies that this condition is equivalent to (HF3) (see the proof of Lemma 5.23).

There exist equivariant nested finite partitions Πn such that\displaystyle\text{There exist equivariant nested finite partitions }\Pi_{n}\text{ such that}
r:𝝁(rΠn(𝒐))𝝁(Πn(𝒐))0,a.s.\displaystyle\forall r:\frac{\boldsymbol{\mu}(\partial_{r}\Pi_{n}(\boldsymbol{o}))}{\boldsymbol{\mu}(\Pi_{n}(\boldsymbol{o}))}\to 0,\quad a.s. (FO2)

It is not clear to the author whether the outer boundary or the full boundary can be used in the above definitions.
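For intuition about the conditions above, consider again the toy example (outside the formal framework of this paper) of $\mathbb{Z}$ with the counting measure and the nested partitions $\Pi_n$ into dyadic intervals of length $2^n$. Every element has finite measure, and the inner $r$-boundary of an interval of length $L$ has exactly $\min(L,2r)$ points, so the ratio in (FO2) tends to 0. Note that these dyadic partitions are not equivariant; in the actual definitions, extra randomness (e.g., a random offset) is needed to restore invariance. A Python sketch with hypothetical helper names:

```python
def dyadic_block(n, y):
    # The level-n dyadic block of Z containing y: an interval of length 2**n,
    # returned as its (inclusive) endpoints. The blocks are nested in n.
    L = 2 ** n
    a = (y // L) * L
    return a, a + L - 1

def inner_boundary_ratio(n, y, r):
    # mu(partial_r A) / mu(A) for A = dyadic_block(n, y) and mu the counting
    # measure: the fraction of z in A whose r-ball B_r(z) is not inside A.
    a, b = dyadic_block(n, y)
    L = b - a + 1
    boundary = sum(1 for z in range(a, b + 1) if z - r < a or z + r > b)
    return boundary / L
```

For each fixed $r$, the ratio is $\min(1, 2r/2^n)$, which vanishes as $n\to\infty$, illustrating (HF2) and (FO2) simultaneously in this toy case.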

5.6.5 Equivalence of the Definitions

The following is the main result of this subsection.

Theorem 5.21.

For unimodular rmm spaces, the conditions (LM), (AM), (HF1), (HF2), (HF3), (FO1) and (FO2) are equivalent.

Definition 5.22 (Amenability).

A unimodular rmm space is called amenable if the equivalent conditions in Theorem 5.21 hold.

The proofs of the analogous results for countable groups and for countable Borel equivalence relations (Theorem 1 of [26]) rely heavily on discreteness. So, these proofs cannot be directly extended to unimodular rmm spaces. We will prove the above theorem by reducing it to the analogous result for a specific countable Borel equivalence relation. The reduction is based on the Palm theory developed in Section 6 and the proof is postponed to Subsection 7.3. In short, a countable Borel equivalence relation is constructed using the marked Poisson point process on 𝑿\boldsymbol{X} (Example 6.10). An invariant measure is constructed by the Palm distribution (Example 6.24). Then, the reduction is proved using the Voronoi tessellation and balancing transport kernels (Subsection 7.3).

Here, we only prove the implications that do not rely on Section 6:

Lemma 5.23.

(HF2) \Rightarrow (HF3) \Rightarrow (HF1) and (HF3) \Leftrightarrow (FO1) \Leftrightarrow (FO2).

Proof.

The implication (HF2) \Rightarrow (HF3) is clear.

(HF3) ⇒\Rightarrow (HF1). Let Πn\Pi_{n} be the equivariant partition given by (HF3) for r:=nr:=n and ϵ:=2n\epsilon:=2^{-n}. Then, one can let Πn\Pi^{\prime}_{n} be the superposition of Πn,Πn+1,\Pi_{n},\Pi_{n+1},\ldots and use the Borel–Cantelli lemma to deduce (HF1).

(HF3) \Leftrightarrow (FO1). By letting g(o,y):=1/μ(Π(o))1{yΠ(o)}1{Br(o)Π(o)}g(o,y):=1/{\mu(\Pi(o))}1_{\{y\in\Pi(o)\}}1_{\{B_{r}(o)\not\subseteq\Pi(o)\}}, the MTP gives that [Br(𝒐)Π(𝒐)]=𝔼[𝝁(rΠ(𝒐))/𝝁(Π(𝒐))]\mathbb{P}\left[B_{r}(\boldsymbol{o})\not\subseteq\Pi(\boldsymbol{o})\right]=\mathbb{E}\left[\boldsymbol{\mu}(\partial_{r}\Pi(\boldsymbol{o}))/\boldsymbol{\mu}(\Pi(\boldsymbol{o}))\right].

(FO1) ⇔\Leftrightarrow (FO2). The ⇒\Rightarrow part is obtained by taking a subsequence along which the almost sure convergence holds. The ⇐\Leftarrow part is obtained by the bounded convergence theorem, noting that rAA\partial_{r}A\subseteq A. ∎

Remark 5.24.

Theorem 1 of [26] assumes ergodicity, but the claim holds for non-ergodic cases as well. In fact, in each of the definitions, [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] is amenable if and only if almost all of its ergodic components are amenable.

6 Palm Theory on Unimodular rmm Spaces

Recall from Subsection 4.1 that the Palm version of a stationary point process (or random measure) is defined and heuristically means re-rooting the space at a typical point of the point process. In this section, we generalize Palm theory to unimodular rmm spaces. Here, a unimodular rmm space [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] is thought of as a generalization of the Euclidean or hyperbolic spaces. The Palm theory is intended for random measures Φ\Phi on 𝑿\boldsymbol{X} which are chosen equivariantly (e.g., the Poisson point process with intensity measure 𝝁\boldsymbol{\mu}). The notion of equivariant random measures is discussed in Subsection 6.1 and the Palm theory is given in the next subsections.

6.1 Additional Random Measures on a Unimodular Random rmm Space

For k1k\geq 1, let k\mathcal{M}^{k}_{*} be the space of all tuples (X,o,μ1,,μk)(X,o,\mu_{1},\ldots,\mu_{k}), where (X,o,μ1)(X,o,\mu_{1}) is a rmm space and μ2,,μk\mu_{2},\ldots,\mu_{k} are measures on XX. Likewise, let k\mathcal{M}^{k}_{**} be the space of all doubly-rooted tuples (X,o1,o2,μ1,,μk)(X,o_{1},o_{2},\mu_{1},\ldots,\mu_{k}). By the discussion in Subsection 5.1 and using the results of [29], one can equip k\mathcal{M}^{k}_{*} and k\mathcal{M}^{k}_{**} with generalizations of the GHP metric such that they are Polish spaces.

Let [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] be a unimodular rmm space. Roughly speaking, an equivariant random measure on 𝐗\boldsymbol{X} assigns an additional measure Φ\Phi on 𝑿\boldsymbol{X} (in every realization of [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]), possibly using extra randomness, in a way that unimodularity is preserved. Intuitively, the law of Φ\Phi (given [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]) should not depend on the root, should be isomorphism-invariant and should satisfy some measurability condition. More precisely:

Definition 6.1.

An equivariant random measure Φ\Phi is a map that assigns to every deterministic rmm space (X,o,μ)(X,o,\mu) a random measure Φ(X,o,μ)\Phi_{(X,o,\mu)} on XX such that:

  1. (i)

    For all (X,o,μ)(X,o,\mu) and all yXy\in X, Φ(X,y,μ)Φ(X,o,μ)\Phi_{(X,y,\mu)}\sim\Phi_{(X,o,\mu)},

  2. (ii)

    If ρ:(X,o,μ)(X,o,μ)\rho:(X,o,\mu)\to(X^{\prime},o^{\prime},\mu^{\prime}) is an isomorphism, then ρΦ(X,o,μ)Φ(X,o,μ)\rho_{*}\Phi_{(X,o,\mu)}\sim\Phi_{(X^{\prime},o^{\prime},\mu^{\prime})},

  3. (iii)

    For all Borel subsets A2A\subseteq\mathcal{M}^{2}_{*}, the map (X,o,μ)[[X,o,μ,Φ(X,o,μ)]A](X,o,\mu)\mapsto\mathbb{P}\left[[X,o,\mu,\Phi_{(X,o,\mu)}]\in A\right] is measurable.

In addition, if [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] is a unimodular rmm space, an equivariant random measure on 𝑿\boldsymbol{X} is a map satisfying the above conditions, relaxed to be defined only on an invariant event with full probability. We denote Φ(𝑿,𝒐,𝝁)\Phi_{(\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu})} simply by Φ\Phi for brevity.

As usual, if for all (X,o,μ)(X,o,\mu), Φ(X,o,μ)\Phi_{(X,o,\mu)} is a counting measure a.s., Φ\Phi is called an equivariant (simple) point process and if Φ(X,o,μ)()\Phi_{(X,o,\mu)}(\cdot)\in\mathbb{Z} a.s., it is called an equivariant (non-simple) point process.

By integrating the distribution of Φ\Phi over the distribution of [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}], one obtains a probability measure on 2\mathcal{M}^{2}_{*} (similarly to (2.3) of [6]). This determines a random object of 2\mathcal{M}^{2}_{*}. By an abuse of notation, we denote the latter by [𝑿,𝒐,𝝁,Φ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi] and we use the same symbols \mathbb{P} and 𝔼\mathbb{E} for its distribution. It is easy to deduce that [𝑿,𝒐,𝝁,Φ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi] is unimodular in the sense of Subsection 5.1 (see Lemma 6.3); i.e.,

Definition 6.2.

A random element [𝒀,𝒑,𝝁1,𝝁2][\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2}] on 2\mathcal{M}^{2}_{*} is unimodular if the following MTP holds for all measurable functions g:20g:\mathcal{M}^{2}_{**}\to\mathbb{R}^{\geq 0}:

𝔼[𝒀g(𝒑,z)𝑑𝝁1(z)]=𝔼[𝒀g(z,𝒑)𝑑𝝁1(z)].\mathbb{E}\left[\int_{\boldsymbol{Y}}g(\boldsymbol{p},z)d\boldsymbol{\mu}_{1}(z)\right]=\mathbb{E}\left[\int_{\boldsymbol{Y}}g(z,\boldsymbol{p})d\boldsymbol{\mu}_{1}(z)\right].

Note that the integral is against 𝝁1\boldsymbol{\mu}_{1}, but gg can depend on 𝝁2\boldsymbol{\mu}_{2} as well.

One can also extend the above definition to define jointly equivariant pairs (or tuples) of random measures (Φ(X,o,μ),Ψ(X,o,μ))(\Phi_{(X,o,\mu)},\Psi_{(X,o,\mu)}), and the results of this section remain valid.

6.1.1 An Equivalent Definition

A simpler definition of equivariant random measures on 𝑿\boldsymbol{X} would be a unimodular tuple [𝒀,𝒑,𝝁1,𝝁2][\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2}] such that [𝒀,𝒑,𝝁1][\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{1}] has the same distribution as [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]. Indeed, the two definitions are equivalent in the following sense.

Lemma 6.3.

Let [𝐗,𝐨,𝛍][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] be a nontrivial unimodular rmm space. If Φ\Phi is an equivariant random measure on 𝐗\boldsymbol{X}, then [𝐗,𝐨,𝛍,Φ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi] is unimodular. Conversely, if [𝐘,𝐩,𝛍1,𝛍2][\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2}] is unimodular and [𝐘,𝐩,𝛍1][𝐗,𝐨,𝛍][\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{1}]\sim[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}], then there exists an equivariant random measure Φ\Phi such that [𝐘,𝐩,𝛍1,𝛍2][𝐗,𝐨,𝛍,Φ][\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2}]\sim[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi].

The proof of this lemma is based on the Palm theory developed in the next subsections and will be given in Subsection 7.2 (the lemma will not be used in this section). To prove the converse, one basically needs to consider the regular conditional distribution of [𝒀,𝒑,𝝁1,𝝁2][\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2}] w.r.t. [𝒀,𝒑,𝝁1][\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{1}]. However, a difficult step is proving that a version of the conditional law exists which does not depend on the root (property (i) in Definition 6.1). This will be proved using the Palm theory and invariant disintegration [27].

Remark 6.4.

The above alternative definition of equivariant measures is useful for weak convergence of equivariant random measures and tightness. See the following definition and corollary. It also enables us to regard 𝝁\boldsymbol{\mu} as an equivariant measure on (the Palm version of) [𝑿,𝒐,Φ][\boldsymbol{X},\boldsymbol{o},\Phi], which will be explained in Subsection 6.3.1. In contrast, Definition 6.1 is useful for explicit constructions and also for dealing with couplings of two equivariant random measures; e.g., to define the independent coupling (conditionally on [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]) of two equivariant random measures (Example 6.11).

Definition 6.5.

Two equivariant random measures Φ\Phi and Ψ\Psi on 𝑿\boldsymbol{X} are equivalent if [𝑿,𝒐,𝝁,Φ][𝑿,𝒐,𝝁,Ψ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi]\sim[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Psi] (equivalently, on almost every realization of [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}], one has Φ()Ψ()\Phi_{(\cdot)}\sim\Psi_{(\cdot)}). Also, we say that Φn\Phi_{n} converges to Φ\Phi if [𝑿,𝒐,𝝁,Φn][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi_{n}] converges weakly to [𝑿,𝒐,𝝁,Φ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi].

Corollary 6.6.

Let b:×b:\mathcal{M}_{*}\times\mathbb{N}\to\mathbb{R} be a lower semi-continuous function (e.g., (X,o,μ,n)μ(Bn(o))(X,o,\mu,n)\mapsto\mu(B_{n}(o))). Then, the set of equivariant random measures Φ\Phi on 𝐗\boldsymbol{X} such that n:Φ(Bn(𝐨))b([𝐗,𝐨,𝛍],n)\forall n:\Phi(B_{n}(\boldsymbol{o}))\leq b([\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}],n) a.s. is compact.

Remark 6.7.

Lemma 6.3 can be generalized to the case where Φ\Phi is any other type of additional random structures on 𝑿\boldsymbol{X}, discussed in Subsection 5.1, as long as the Polishness property holds. The same proof works in the general case, but for simplicity, we provide the proof for random measures only.

6.1.2 Examples

For basic examples of equivariant random measures, one can mention Φ=0\Phi=0, Φ=μ\Phi=\mu, Φ=bμ\Phi=b\mu (see (5.2)), or more generally, any factor measure; i.e., when Φ(X,o,μ)\Phi_{(X,o,\mu)} is a deterministic function of (X,μ)(X,\mu) satisfying the assumptions in Definition 6.1. The following are further examples.

Example 6.8 (Stationary Random Measure).

When [𝑿,𝒐,𝝁]:=[d,0,Leb][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]:=[\mathbb{R}^{d},0,\mathrm{Leb}], every stationary point process or random measure on d\mathbb{R}^{d} is an equivariant random measure on 𝑿\boldsymbol{X} in the sense of Subsection 6.1.1. However, in order to have the conditions in Definition 6.1, one should assume that it is isometry-invariant (or just apply a random isometry fixing 0). By the modification mentioned in Remark 4.2, one can say that stationarity is equivalent to being an equivariant random measure on d\mathbb{R}^{d}.

Example 6.9 (Intensity Measure).

If Φ\Phi is an equivariant random measure, then the intensity measure of Φ\Phi, defined by Ψ(X,o,μ)():=𝔼[Φ(X,o,μ)()]\Psi_{(X,o,\mu)}(\cdot):=\mathbb{E}\left[\Phi_{(X,o,\mu)}(\cdot)\right], is also equivariant. In fact, the intensity measure is a factor measure.

Example 6.10 (Poisson Point Process).

Let Φ\Phi be an equivariant random measure. The Poisson point process with intensity measure Φ\Phi is an equivariant random measure (defined by considering, in every realization of Φ(X,o,μ)\Phi_{(X,o,\mu)}, the classical definition of the Poisson point process with intensity measure Φ(X,o,μ)\Phi_{(X,o,\mu)}). In addition, Φ\Phi and the Poisson point process are jointly-equivariant. Note that if Φ\Phi has atoms, then the Poisson point process has multiple points and is not a simple point process.
One can prove that if [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] is ergodic and 𝝁(𝑿)=\boldsymbol{\mu}(\boldsymbol{X})=\infty a.s., then [𝑿,𝒐,𝝁,Φ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi] is also ergodic (this fails if 0<𝝁(𝑿)<0<\boldsymbol{\mu}(\boldsymbol{X})<\infty). The proof is similar to Lemma 4 of [30] and is skipped.
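When the intensity measure is purely atomic, the classical construction used in this example reduces to sampling independent Poisson counts at the atoms; atoms of positive mass then yield points with random multiplicity, which is why the resulting process need not be simple. A minimal sketch (a toy illustration outside the paper's framework; the helper names are ours):

```python
import math, random

def sample_poisson(lam, rng):
    # Knuth's inversion method for Poisson(lam); adequate for small lam.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def poisson_point_process(intensity, rng):
    # Multiplicity of the Poisson process at each atom of a purely atomic
    # intensity measure (a dict point -> mass): independent Poisson counts.
    return {x: sample_poisson(m, rng) for x, m in intensity.items()}
```

An atom of mass 2 receives two points on average and, with substantial probability, more than one point, giving a non-simple point process as noted above.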

Example 6.11 (Independent Coupling).

Let each of Φ\Phi and Ψ\Psi be an equivariant random measure on 𝑿\boldsymbol{X}. Using the definition of Subsection 6.1.1, it is not trivial to define a coupling of Φ\Phi and Ψ\Psi, but it is easy using Definition 6.1: For every deterministic (X,o,μ)(X,o,\mu), consider an independent coupling of Φ(X,o,μ)\Phi_{(X,o,\mu)} and Ψ(X,o,μ)\Psi_{(X,o,\mu)}. Then, (Φ,Ψ)(\Phi,\Psi) becomes jointly-equivariant and is called the independent coupling (conditionally on [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]) of Φ\Phi and Ψ\Psi.

6.2 Palm Distribution and Intensity

Let [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] be a nontrivial unimodular rmm space and Φ\Phi be an equivariant random measure on 𝑿\boldsymbol{X} (in the sense of either Definition 6.1 or Subsection 6.1.1). Inspired by the analogous notions in stochastic geometry (Subsection 4.1), we are going to define the Palm distribution and the intensity. The approach (4.1) does not work here since the base space is not fixed. We will extend two other approaches by using the Campbell measure and tessellations.

The Campbell measure CΦC_{\Phi} is the measure on 2\mathcal{M}^{2}_{**} defined by

CΦ(A):=𝔼[𝑿1A(𝒐,y)𝑑Φ(y)],C_{\Phi}(A):=\mathbb{E}\left[\int_{\boldsymbol{X}}1_{A}(\boldsymbol{o},y)d\Phi(y)\right],

where, as before, we use the abbreviation 1A(𝒐,y)1_{A}(\boldsymbol{o},y) for 1A(𝑿,𝒐,y,𝝁,Φ)1_{A}(\boldsymbol{X},\boldsymbol{o},y,\boldsymbol{\mu},\Phi). It is straightforward to see that CΦC_{\Phi} is a sigma-finite measure. The Palm distribution will be obtained by a disintegration of the Campbell measure as follows.

Theorem 6.12.

There exists a unique sigma-finite measure QΦQ_{\Phi} on 2\mathcal{M}^{2}_{*} such that for all measurable functions g:20g:\mathcal{M}^{2}_{**}\to\mathbb{R}^{\geq 0},

𝔼[𝑿g(𝒐,y)𝑑Φ(y)]=g𝑑CΦ=2(Xg(y,o)𝑑μ(y))𝑑QΦ([X,o,μ,φ]).\mathbb{E}\left[\int_{\boldsymbol{X}}g(\boldsymbol{o},y)d\Phi(y)\right]=\int gdC_{\Phi}=\int_{\mathcal{M}^{2}_{*}}\left(\int_{X}g(y,o)d\mu(y)\right)dQ_{\Phi}([X,o,\mu,\varphi]). (6.1)

Note that the left hand side is just the definition of CΦC_{\Phi}.

Definition 6.13 (Intensity and Palm Distribution).

The intensity λΦ\lambda_{\Phi} of Φ\Phi (w.r.t. 𝝁\boldsymbol{\mu}) is the total mass of QΦQ_{\Phi}. If 0<λΦ<0<\lambda_{\Phi}<\infty, then one can normalize QΦQ_{\Phi} to find a probability measure Φ:=1λΦQΦ\mathbb{P}_{\Phi}:=\frac{1}{\lambda_{\Phi}}Q_{\Phi} on 2\mathcal{M}^{2}_{*}, which is called the Palm distribution of Φ\Phi.
By an abuse of notation, we use the same symbol [𝑿,𝒐,𝝁,Φ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi] when dealing with Φ\mathbb{P}_{\Phi}; e.g., we use formulas like Φ[[𝑿,𝒐,𝝁,Φ]A]\mathbb{P}_{\Phi}\left[[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi]\in A\right] and 𝔼Φ[g(𝒐)]\mathbb{E}_{\Phi}\left[g(\boldsymbol{o})\right] by keeping in mind that the probability measure has changed.

By the above notation, we can rewrite (6.1) as follows:

𝔼[𝑿g(𝒐,y)𝑑Φ(y)]=λΦ𝔼Φ[𝑿g(y,𝒐)𝑑𝝁(y)].\mathbb{E}\left[\int_{\boldsymbol{X}}g(\boldsymbol{o},y)d\Phi(y)\right]=\lambda_{\Phi}\mathbb{E}_{\Phi}\left[\int_{\boldsymbol{X}}g(y,\boldsymbol{o})d\boldsymbol{\mu}(y)\right]. (6.2)

Example 6.8 shows that the above definition generalizes the Palm distribution of stationary random measures on d\mathbb{R}^{d}. In addition, (6.2) is a generalization of the refined Campbell Theorem (see [33] or [38]).

The intuition behind the Palm distribution is that, under Φ\mathbb{P}_{\Phi}, the root is a typical point of Φ\Phi. Equation (6.2) means that, if 𝝁\boldsymbol{\mu} (resp. Φ\Phi) is interpreted as a measure on a set of senders (resp. receivers), then the expected mass sent from a typical sender is equal to λΦ\lambda_{\Phi} times the expected mass received by a typical receiver.

We will prove Theorem 6.12 by constructing the Palm distribution directly. The construction (4.1) does not work since the underlying space 𝑿\boldsymbol{X} is not deterministic and one cannot fix a subset BB. We replace it by a transport function hh such that h()=1h^{-}(\cdot)=1 as follows. This extends Mecke’s construction in (2.8) and Satz 2.3 of [37]. This also extends the construction of a typical cell for equivariant tessellations of stationary point processes.

Theorem 6.14 (Construction of Palm).

Let h:0h:\mathcal{M}_{**}\to\mathbb{R}^{\geq 0} be a transport function such that y𝐗:h(y)=1\forall y\in\boldsymbol{X}:h^{-}(y)=1 a.s. Then, for every equivariant random measure Φ\Phi, the intensity of Φ\Phi and the measure QΦQ_{\Phi} are obtained as follows and satisfy (6.1):

λΦ\displaystyle\lambda_{\Phi} =\displaystyle= 𝔼[hΦ+(𝒐)]:=𝔼[𝑿h(𝒐,z)𝑑Φ(z)],\displaystyle\mathbb{E}\left[h^{+}_{\Phi}(\boldsymbol{o})\right]:=\mathbb{E}\left[\int_{\boldsymbol{X}}h(\boldsymbol{o},z)d\Phi(z)\right], (6.3)
QΦ(A)\displaystyle Q_{\Phi}(A) =\displaystyle= 𝔼[𝑿1A(𝑿,z,𝝁,Φ)h(𝒐,z)𝑑Φ(z)].\displaystyle\mathbb{E}\left[\int_{\boldsymbol{X}}1_{A}(\boldsymbol{X},z,\boldsymbol{\mu},\Phi)h(\boldsymbol{o},z)d\Phi(z)\right]. (6.4)

So, if 0<λΦ<0<\lambda_{\Phi}<\infty, the Palm distribution is obtained by biasing by hΦ+(𝐨)h^{+}_{\Phi}(\boldsymbol{o}), and then, re-rooting to a point chosen with law proportional to h(𝐨,)Φh(\boldsymbol{o},\cdot)\Phi; i.e.,

Φ(A)=1λΦ𝔼[𝑿1A(𝑿,z,𝝁,Φ)h(𝒐,z)𝑑Φ(z)].\mathbb{P}_{\Phi}(A)=\frac{1}{\lambda_{\Phi}}\mathbb{E}\left[\int_{\boldsymbol{X}}1_{A}(\boldsymbol{X},z,\boldsymbol{\mu},\Phi)h(\boldsymbol{o},z)d\Phi(z)\right]. (6.5)

Note that such a transport function hh exists by Lemma 3.10.

Proof.

It is enough to prove that QΦQ_{\Phi} satisfies (6.1). Let gg be a transport function on 2\mathcal{M}^{2}_{**}. The right hand side of (6.1) equals:

2g(o)𝑑QΦ([X,o,μ,φ])\displaystyle\int_{\mathcal{M}^{2}_{*}}g^{-}(o)dQ_{\Phi}([X,o,\mu,\varphi]) =\displaystyle= 𝔼[𝑿g(z)h(𝒐,z)𝑑Φ(z)]\displaystyle\mathbb{E}\left[\int_{\boldsymbol{X}}g^{-}(z)h(\boldsymbol{o},z)d\Phi(z)\right]
=\displaystyle= 𝔼[𝑿𝑿g(y,z)h(𝒐,z)𝑑Φ(z)𝑑𝝁(y)]\displaystyle\mathbb{E}\left[\int_{\boldsymbol{X}}\int_{\boldsymbol{X}}g(y,z)h(\boldsymbol{o},z)d\Phi(z)d\boldsymbol{\mu}(y)\right]
=\displaystyle= 𝔼[𝑿𝑿g(𝒐,z)h(y,z)𝑑Φ(z)𝑑𝝁(y)]\displaystyle\mathbb{E}\left[\int_{\boldsymbol{X}}\int_{\boldsymbol{X}}g(\boldsymbol{o},z)h(y,z)d\Phi(z)d\boldsymbol{\mu}(y)\right]
=\displaystyle= 𝔼[𝑿g(𝒐,z)𝑑Φ(z)],\displaystyle\mathbb{E}\left[\int_{\boldsymbol{X}}g(\boldsymbol{o},z)d\Phi(z)\right],

where the first equality is by the definition of QΦQ_{\Phi}, the third one is by the MTP, and the last one is because h(z)=1h^{-}(z)=1. This proves (6.1). ∎

Proof of Theorem 6.12.

The existence is proved in Theorem 6.14. For uniqueness, assume QΦQ_{\Phi} is a measure satisfying (6.1). Let hh be a transport function such that h()=1h^{-}(\cdot)=1 a.s. For an arbitrary event A2A\subseteq\mathcal{M}^{2}_{*}, let g(o,z):=h(o,z)1A(z)g(o,z):=h(o,z)1_{A}(z). Inserting gg into (6.1) gives QΦ(A)=𝔼[1A(y)h(𝒐,y)𝑑Φ(y)]Q_{\Phi}(A)=\mathbb{E}\left[\int 1_{A}(y)h(\boldsymbol{o},y)d\Phi(y)\right]; i.e., QΦQ_{\Phi} is the measure given in Theorem 6.14. ∎
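On a fixed finite space, the construction above becomes completely explicit: sampling the root with law $\mu/\mu(X)$ gives a unimodular space, and taking $h(x,y):=1/\mu(X)$ (so that $h^{-}\equiv 1$) in (6.3) and (6.5) yields $\lambda_\Phi=\Phi(X)/\mu(X)$ and the Palm root law $\Phi/\Phi(X)$. The following toy Python check (a deterministic illustration with our own names, assuming Φ is a fixed measure) verifies the refined Campbell formula (6.2) in this case:

```python
def campbell_sides(mu, phi, g):
    # Both sides of the refined Campbell formula (6.2) on a fixed finite
    # space. mu, phi: dicts point -> mass; g: function of two points.
    # Root law mu/mu(X); with h(x, y) = 1/mu(X), formula (6.3) gives
    # lambda = phi(X)/mu(X), and (6.5) gives the Palm root law phi/phi(X).
    muX, phiX = sum(mu.values()), sum(phi.values())
    lam = phiX / muX
    lhs = sum(mu[x] / muX * g(x, y) * phi[y] for x in mu for y in phi)
    rhs = lam * sum(phi[z] / phiX * g(y, z) * mu[y] for z in phi for y in mu)
    return lhs, rhs
```

Both sides reduce to $\frac{1}{\mu(X)}\sum_{x,y}\mu_x\, g(x,y)\,\Phi_y$, so the equality here is an exact algebraic identity, in line with the uniqueness statement of Theorem 6.12.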

As a first application of the Palm calculus, we prove the following extension of Lemma 3.4.

Proposition 6.15.

Let Φ\Phi be an equivariant random measure on 𝐗\boldsymbol{X}. If 𝛍(𝐗)=\boldsymbol{\mu}(\boldsymbol{X})=\infty a.s., then Φ(𝐗){0,}\Phi(\boldsymbol{X})\in\{0,\infty\} a.s.

Proof.

Let AA be the event that 𝝁(𝑿)=\boldsymbol{\mu}(\boldsymbol{X})=\infty and 0<Φ(𝑿)<0<\Phi(\boldsymbol{X})<\infty and assume [A]>0\mathbb{P}\left[A\right]>0. Since AA is an invariant event, (6.4) implies that QΦ(A)>0Q_{\Phi}(A)>0. On AA, let g(u,v):=Φ(𝑿)1g(u,v):=\Phi(\boldsymbol{X})^{-1}. Then, gΦ+()=1g^{+}_{\Phi}(\cdot)=1 and g()=g^{-}(\cdot)=\infty on AA. This contradicts (6.1). ∎

6.3 Properties of Palm

6.3.1 Unimodularity of the Palm Version

The Palm distribution of an equivariant random measure is generally not unimodular. However, it satisfies the following mass transport principle similarly to the case of stationary random measures (Subsection 4.1).

Definition 6.16.

A random element [𝒀,𝒑,𝝁1,𝝁2][\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2}] on 2\mathcal{M}^{2}_{*} is unimodular with respect to μ2\boldsymbol{\mu}_{2} if the following MTP holds for all measurable functions g:20g:\mathcal{M}^{2}_{**}\to\mathbb{R}^{\geq 0}:

𝔼[𝒀g(𝒑,z)𝑑𝝁2(z)]=𝔼[𝒀g(z,𝒑)𝑑𝝁2(z)].\mathbb{E}\left[\int_{\boldsymbol{Y}}g(\boldsymbol{p},z)d\boldsymbol{\mu}_{2}(z)\right]=\mathbb{E}\left[\int_{\boldsymbol{Y}}g(z,\boldsymbol{p})d\boldsymbol{\mu}_{2}(z)\right].

This is equivalent to the unimodularity of [𝒀,𝒑,𝝁2,𝝁1][\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{2},\boldsymbol{\mu}_{1}], which is obtained by swapping the two measures.

Theorem 6.17.

Let [𝐗,𝐨,𝛍][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] be a unimodular rmm space and Φ\Phi be an equivariant random measure on 𝐗\boldsymbol{X} with positive finite intensity. Then, under the Palm distribution Φ\mathbb{P}_{\Phi}, [𝐗,𝐨,𝛍,Φ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi] is unimodular w.r.t. Φ\Phi. In other words, [𝐗,𝐨,Φ,𝛍][\boldsymbol{X},\boldsymbol{o},\Phi,\boldsymbol{\mu}] is unimodular under the Palm distribution.

Proof.

Let gg be a transport function and hh be as in Theorem 6.14. By (6.5),

𝔼Φ[gΦ+(𝒐)]\displaystyle\mathbb{E}_{\Phi}\left[g^{+}_{\Phi}(\boldsymbol{o})\right] =\displaystyle= 1λΦ𝔼[𝑿gΦ+(y)h(𝒐,y)𝑑Φ(y)]\displaystyle\frac{1}{\lambda_{\Phi}}\mathbb{E}\left[\int_{\boldsymbol{X}}g^{+}_{\Phi}(y)h(\boldsymbol{o},y)d\Phi(y)\right]
=\displaystyle= 1λΦ𝔼[𝑿𝑿g(y,z)h(𝒐,y)𝑑Φ(y)𝑑Φ(z)]\displaystyle\frac{1}{\lambda_{\Phi}}\mathbb{E}\left[\int_{\boldsymbol{X}}\int_{\boldsymbol{X}}g(y,z)h(\boldsymbol{o},y)d\Phi(y)d\Phi(z)\right]
=\displaystyle= 𝔼Φ[𝑿𝑿g(y,𝒐)h(z,y)𝑑Φ(y)𝑑𝝁(z)]\mathbb{E}_{\Phi}\left[\int_{\boldsymbol{X}}\int_{\boldsymbol{X}}g(y,\boldsymbol{o})h(z,y)d\Phi(y)d\boldsymbol{\mu}(z)\right]
=\displaystyle= 𝔼Φ[𝑿g(y,𝒐)𝑑Φ(y)],\mathbb{E}_{\Phi}\left[\int_{\boldsymbol{X}}g(y,\boldsymbol{o})d\Phi(y)\right],

where in the third equality, we have swapped 𝒐\boldsymbol{o} and zz by the refined Campbell formula (6.2) (the factor λΦ\lambda_{\Phi} coming from (6.2) cancels the 1/λΦ1/\lambda_{\Phi}), and the last equality holds by h(y)=1h^{-}(y)=1. This proves the claim. ∎

6.3.2 Exchange Formula

Neveu’s exchange formula (see Theorem 3.4.5 of [38]) is a form of the MTP between two jointly-stationary random measures on d\mathbb{R}^{d}. The refined Campbell formula (6.2) can be thought of as an exchange formula between 𝝁\boldsymbol{\mu} and Φ\Phi. More generally, assume (Φ,Ψ)(\Phi,\Psi) is a pair of random measures on [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] which are jointly equivariant. Assume the intensities λΦ\lambda_{\Phi} and λΨ\lambda_{\Psi} are positive and finite and consider the Palm distributions Φ\mathbb{P}_{\Phi} and Ψ\mathbb{P}_{\Psi}.

Proposition 6.18 (Exchange Formula).

For jointly-equivariant random measures Φ\Phi and Ψ\Psi on 𝐗\boldsymbol{X} as above, and for all transport functions g:30g:\mathcal{M}^{3}_{**}\to\mathbb{R}^{\geq 0},

λΦ𝔼Φ[𝑿g(𝒐,y)𝑑Ψ(y)]=λΨ𝔼Ψ[𝑿g(y,𝒐)𝑑Φ(y)],\lambda_{\Phi}\mathbb{E}_{\Phi}\left[\int_{\boldsymbol{X}}g(\boldsymbol{o},y)d\Psi(y)\right]=\lambda_{\Psi}\mathbb{E}_{\Psi}\left[\int_{\boldsymbol{X}}g(y,\boldsymbol{o})d\Phi(y)\right], (6.6)

where g(u,v)g(u,v) abbreviates g(𝐗,u,v,𝛍,Φ,Ψ)g(\boldsymbol{X},u,v,\boldsymbol{\mu},\Phi,\Psi) as usual.

Proof.

Let hh be as in Theorem 6.14. By (6.5), the left hand side is equal to

𝔼[h(𝒐,z)g(z,y)𝑑Ψ(y)𝑑Φ(z)]=𝔼[f(𝒐,y)𝑑Ψ(y)],\displaystyle\mathbb{E}\left[\int\int h(\boldsymbol{o},z)g(z,y)d\Psi(y)d\Phi(z)\right]=\mathbb{E}\left[\int f(\boldsymbol{o},y)d\Psi(y)\right],

where f(u,v):=𝑿h(u,z)g(z,v)𝑑Φ(z)f(u,v):=\int_{\boldsymbol{X}}h(u,z)g(z,v)d\Phi(z). By (6.2), the last term is equal to

λΨ𝔼Ψ[f(y,𝒐)𝑑𝝁(y)]=λΨ𝔼Ψ[h(y,z)g(z,𝒐)𝑑Φ(z)𝑑𝝁(y)].\displaystyle\lambda_{\Psi}\mathbb{E}_{\Psi}\left[\int f(y,\boldsymbol{o})d\boldsymbol{\mu}(y)\right]=\lambda_{\Psi}\mathbb{E}_{\Psi}\left[\int\int h(y,z)g(z,\boldsymbol{o})d\Phi(z)d\boldsymbol{\mu}(y)\right].

Since h(z)=1h^{-}(z)=1, the last term is equal to the right hand side of (6.6) and the claim is proved. ∎
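As with the refined Campbell formula, the exchange formula can be checked by hand on a fixed finite space, where the root has law $\mu/\mu(X)$, the intensities are the mass ratios $\lambda_\Phi=\Phi(X)/\mu(X)$ and $\lambda_\Psi=\Psi(X)/\mu(X)$, and the Palm distributions re-root with laws $\Phi/\Phi(X)$ and $\Psi/\Psi(X)$. A toy Python verification of (6.6) in this deterministic setting (names ours, outside the paper's framework):

```python
def exchange_sides(mu, phi, psi, g):
    # Both sides of the exchange formula (6.6) on a fixed finite space.
    # mu, phi, psi: dicts point -> mass; g: function of two points.
    muX = sum(mu.values())
    phiX, psiX = sum(phi.values()), sum(psi.values())
    lhs = (phiX / muX) * sum(phi[z] / phiX * g(z, y) * psi[y]
                             for z in phi for y in psi)
    rhs = (psiX / muX) * sum(psi[z] / psiX * g(y, z) * phi[y]
                             for z in psi for y in phi)
    return lhs, rhs
```

Both sides equal $\frac{1}{\mu(X)}\sum_{x,y}\Phi_x\, g(x,y)\,\Psi_y$, mirroring how the proof above reduces (6.6) to the MTP.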

Another interpretation of the exchange formula is as follows. By Theorem 6.17, under the Palm distribution of Φ\Phi, [𝑿,𝒐,𝝁,Φ,Ψ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi,\Psi] is unimodular w.r.t. Φ\Phi. Now, one may regard Ψ\Psi as a random measure on the latter and think of 𝝁\boldsymbol{\mu} as a decoration. Then, the exchange formula (6.6) turns into the refined Campbell formula for Ψ\Psi with respect to Φ\Phi. More precisely, one can deduce the following.

Corollary 6.19.

Under the Palm distribution of Φ\Phi, and by thinking of Φ\Phi as the base measure on 𝐗\boldsymbol{X}, the intensity of Ψ\Psi would be λΨ/λΦ\lambda_{\Psi}/\lambda_{\Phi} and the Palm distribution of Ψ\Psi would be identical to Ψ\mathbb{P}_{\Psi}.

6.3.3 Palm Inversion = Palm

Palm inversion refers to the construction of the stationary distribution from the Palm distribution. Similarly, we desire to reconstruct the distribution of [𝑿,𝒐,𝝁,Φ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi] from the Palm distribution Φ\mathbb{P}_{\Phi}.

By Theorem 6.17, [𝑿,𝒐,Φ,𝝁][\boldsymbol{X},\boldsymbol{o},\Phi,\boldsymbol{\mu}] is unimodular under Φ\mathbb{P}_{\Phi}. Now, 𝝁\boldsymbol{\mu} can be regarded as a random measure (in the sense of Subsection 6.1.1) on the Palm version of [𝑿,𝒐,Φ][\boldsymbol{X},\boldsymbol{o},\Phi]. Therefore, it makes sense to speak of the Palm distribution of 𝝁\boldsymbol{\mu}, namely, \mathbb{P}^{\prime}. Using the symmetry of 𝝁\boldsymbol{\mu} and Φ\Phi in the refined Campbell formula (6.2), it is straightforward to deduce that \mathbb{P}^{\prime} is equal to the distribution of [𝑿,𝒐,𝝁,Φ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi] (see Corollary 6.19). An explicit construction of \mathbb{P}^{\prime} can also be provided by Theorem 6.14.

The above discussion shows that the generalization of Palm theory in this section unifies Palm and Palm-inversion as well. In particular, if Φ0\Phi_{0} is the Palm version of a stationary point process in d\mathbb{R}^{d}, the stationary version is obtained by considering the Palm distribution of the Lebesgue measure w.r.t. Φ0\Phi_{0}. Indeed, for Palm inversion, one desires to move the origin to a typical point of the Euclidean space.
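For stationary point processes in $\mathbb{R}^{d}$, this unified viewpoint is consistent with the classical inversion formula. As an illustration (stated in the notation of classical Palm theory, as found in standard references on point processes, rather than in the framework of this paper), if $\lambda$ and $\mathbb{E}^{0}$ denote the intensity and the Palm expectation of a stationary point process $\Phi_{0}$, one standard form of Palm inversion reads
\[
\mathbb{P}\left[\Phi_{0}\in\cdot\right]=\lambda\,\mathbb{E}^{0}\left[\int_{V(0)}1\{\theta_{x}\Phi_{0}\in\cdot\}\,dx\right],
\]
where $V(0)$ is the Voronoi cell of the origin and $\theta_{x}$ denotes the translation moving the origin to $x$. The integral over the Voronoi cell formalizes moving the origin to a typical point of the Lebesgue measure.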

6.3.4 Stationary Distribution of Random Walks

In Theorem 5.2, we considered random walks on [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] such that the Markovian kernel preserves a measure of type b𝝁b\boldsymbol{\mu} a.s. If the Markovian kernel has a stationary distribution which is not absolutely continuous w.r.t. 𝝁\boldsymbol{\mu}, one can extend this result as follows.

Let $\Phi$ be an equivariant random measure with positive and finite intensity on a unimodular rmm space $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$. Let $k$ be a Markovian kernel (which might depend on $\Phi$ as well) and let $(\boldsymbol{x}_{n})_{n}$ be the random walk with kernel $k$ as in Theorem 5.2. Define a shift operator $\mathcal{S}$ similarly to (5.1). Since $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi,(\boldsymbol{x}_{n})_{n}]$ is unimodular, one can define the Palm distribution of $\Phi$ on it, which we call $Q$.

Proposition 6.20.

In the above setting, if in almost every sample [X,o,μ,φ][X,o,\mu,\varphi] of [𝐗,𝐨,𝛍,Φ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi], φ\varphi is a stationary (resp. reversible) measure for the Markovian kernel, then the Palm distribution QQ of Φ\Phi is a stationary (resp. reversible) measure for the shift operator 𝒮\mathcal{S}.

Proof.

By Theorem 6.17, the claim is reduced to Theorem 5.2. ∎

6.4 Examples

We already mentioned that the Palm distribution defined in this section generalizes the case of stationary point processes and random measures (by Example 6.8). The following are further examples of the Palm distribution.

Example 6.21 (Conditioning).

Let $\boldsymbol{S}$ be a factor subset (Definition 3.3) and let $\Phi:=\boldsymbol{\mu}|_{\boldsymbol{S}}$. Then, the intensity of $\Phi$ is $\mathbb{P}\left[\boldsymbol{o}\in\boldsymbol{S}\right]$ and the Palm distribution is obtained by conditioning on $\boldsymbol{o}\in\boldsymbol{S}$.

Example 6.22 (Biasing).

Let $b:\mathcal{M}_{*}\to\mathbb{R}^{\geq 0}$ be an initial bias and let $\Phi:=b\boldsymbol{\mu}$ be defined as in (5.2). Then, the intensity of $\Phi$ is $\mathbb{E}\left[b(\boldsymbol{o})\right]$ and the Palm distribution of $\Phi$ is just the biasing of the probability measure by $b(\boldsymbol{o})$ (if $0<\mathbb{E}\left[b(\boldsymbol{o})\right]<\infty$). In particular, Theorem 6.17 implies that, after biasing by $b$, $[\boldsymbol{X},\boldsymbol{o},b\boldsymbol{\mu}]$ is unimodular, which was already shown in Example 2.6. This fact can be used to reduce some results to the case without initial biasing (e.g., Theorem 5.2).

Example 6.23 (Extending a Unimodular Graph).

In many examples in the literature, a unimodular graph is constructed by adding new vertices and edges to a given unimodular graph, and then applying a biasing and a root-change (see [30] for various examples in the literature). Here, we will show that all these examples are instances of the Palm distribution developed in this work. As an example, the unimodularization of the dual of a unimodular planar graph (Example 9.6 of [3]) is reduced to the Palm distribution of the counting measure on the set of faces (see below for the details).
We use the most general construction of this form mentioned in Section 5 of [30]. In short, assume [𝑮,𝒐][\boldsymbol{G},\boldsymbol{o}] is a random rooted graph that is not unimodular, but the MTP holds on a subset 𝑺\boldsymbol{S} of 𝑮\boldsymbol{G} (see Definition 10 of [30]). In other words, 𝑮\boldsymbol{G} is obtained by adding vertices and edges to [𝑺,𝒐][\boldsymbol{S},\boldsymbol{o}]. Let 𝝁\boldsymbol{\mu} and Φ\Phi be the counting measures of 𝑺\boldsymbol{S} and 𝑮\boldsymbol{G} respectively. In the setting of this section, [𝑮,𝒐,𝝁][\boldsymbol{G},\boldsymbol{o},\boldsymbol{\mu}] is unimodular (but not [𝑮,𝒐][\boldsymbol{G},\boldsymbol{o}]) and Φ\Phi can be regarded as an equivariant random measure on [𝑮,𝒐,𝝁][\boldsymbol{G},\boldsymbol{o},\boldsymbol{\mu}]. Therefore, one can define the Palm distribution of Φ\Phi using Definition 6.13 (if the intensity is finite). Then, Theorem 6.17 implies that, under the Palm distribution, [𝑮,𝒐][\boldsymbol{G},\boldsymbol{o}] is a unimodular graph, as desired. In this setting, the general construction in Theorem 5 of [30] (which covers the similar examples in the literature) is reduced to the explicit construction of the Palm distribution given in Theorem 6.14.
For a continuum example, it can be seen that the product of a unimodular rmm space [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] with an arbitrary compact measured metric space can be made unimodular (Example 2.5) and this is an instance of the Palm distribution. See the proof of Theorem 5.21 in Subsection 7.3 for more explanation.

The final example concerns the Poisson point process.

Theorem 6.24 (Palm of Poisson).

Let c>0c>0 and Φ\Phi be the Poisson point process on 𝐗\boldsymbol{X} with intensity measure c𝛍c\boldsymbol{\mu} (Example 6.10). Then, the Palm version has the same distribution as [𝐗,𝐨,𝛍,Φ+δ𝐨][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi+\delta_{\boldsymbol{o}}].

Note that $\Phi+\delta_{\boldsymbol{o}}$ is different from $\Phi\cup\{\boldsymbol{o}\}$ if $\boldsymbol{\mu}$ has atoms. This result generalizes Mecke's theorem for the ordinary Poisson point process (see Theorem 3.3.5 of [38]). We conjecture that this property characterizes the Poisson point process (which would generalize Slivnyak's theorem), but we do not discuss this here.
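In the classical Euclidean special case, the identity behind this theorem (Mecke's equation) can be checked numerically. The following sketch is only a toy Monte Carlo verification, not part of the framework of this paper: it simulates a Poisson process of intensity $c$ on $[0,1]$ with $\mu$ the Lebesgue measure and uses the test function $f(y,\varphi)=\varphi([0,1])$, which ignores $y$; for this choice, both sides of Mecke's identity reduce to moments of a Poisson random variable and equal $c^{2}+c$.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 2.0            # intensity w.r.t. mu = Lebesgue on [0, 1]
n_samples = 200_000

# Sample the number of Poisson points in [0, 1] for each realization.
counts = rng.poisson(c, size=n_samples)

# Test function f(y, phi) = phi([0, 1]) = total number of points.
# LHS of Mecke's identity: E[ sum_{y in Phi} f(y, Phi) ] = E[N * N].
lhs = np.mean(counts.astype(float) ** 2)

# RHS: c * int_0^1 E[f(y, Phi + delta_y)] dy = c * E[N + 1],
# since adding the extra point delta_y raises the total count by one.
rhs = c * np.mean(counts + 1.0)

# Both sides are close to c^2 + c = 6 for this choice of f.
print(lhs, rhs)
```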

Proof.

By (6.3), it is straightforward to show that $\lambda_{\Phi}=c$. We claim that for every measurable function $f:\mathcal{M}^{2}_{**}\to\mathbb{R}^{\geq 0}$, one has

\[
\mathbb{E}\left[\sum_{y\in\Phi}f(\boldsymbol{X},\boldsymbol{o},y,\boldsymbol{\mu},\Phi)\right]=c\,\mathbb{E}\left[\int_{\boldsymbol{X}}f(\boldsymbol{X},\boldsymbol{o},y,\boldsymbol{\mu},\Phi+\delta_{y})\,d\boldsymbol{\mu}(y)\right], \tag{6.7}
\]

which generalizes Mecke's theorem (see Theorem 3.2.5 of [38]). Assuming this, let $B\subseteq\mathcal{M}^{2}_{*}$ be any event. By (6.5), (6.7), the MTP and the fact that $h^{-}(\boldsymbol{o})=1$ respectively,

\begin{align*}
\mathbb{P}_{\Phi}\left[B\right] &= \frac{1}{c}\mathbb{E}\left[\sum_{z\in\Phi}1_{B}(\boldsymbol{X},z,\boldsymbol{\mu},\Phi)h(\boldsymbol{o},z)\right] \\
&= \mathbb{E}\left[\int_{\boldsymbol{X}}1_{B}(\boldsymbol{X},z,\boldsymbol{\mu},\Phi+\delta_{z})h(\boldsymbol{o},z)\,d\boldsymbol{\mu}(z)\right] \\
&= \mathbb{E}\left[\int_{\boldsymbol{X}}1_{B}(\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi+\delta_{\boldsymbol{o}})h(z,\boldsymbol{o})\,d\boldsymbol{\mu}(z)\right] \\
&= \mathbb{P}\left[[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi+\delta_{\boldsymbol{o}}]\in B\right]
\end{align*}

and the theorem is proved. To prove (6.7), for simplicity, we assume that $(\mathrm{supp}(\boldsymbol{\mu}),\boldsymbol{\mu})$ has no automorphism a.s. (otherwise, one may add extra randomness, such as an independent Poisson rain, to break the automorphisms). In this case, it is enough to prove the claim when the function $f$ is of the form $f=g(\boldsymbol{X},\boldsymbol{o},y,\boldsymbol{\mu})1_{\{\Phi(S)=k\}}$, where $k\in\mathbb{N}\cup\{0\}$, $S:=S(\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}):=\{y\in\boldsymbol{X}:(\boldsymbol{X},\boldsymbol{o},y,\boldsymbol{\mu})\in A\}$ and $A\subseteq\mathcal{M}_{**}$ is measurable. The left-hand side of (6.7) is

\begin{align*}
\mathbb{E}\left[(g\Phi)(\boldsymbol{X})1_{\{\Phi(S)=k\}}\right] &= \mathbb{E}\left[(g\Phi)(\boldsymbol{X}\setminus S)1_{\{\Phi(S)=k\}}\right]+\mathbb{E}\left[(g\Phi)(S)1_{\{\Phi(S)=k\}}\right] \\
&=: \mathbb{E}\left[a_{1}\right]+\mathbb{E}\left[a_{2}\right].
\end{align*}

Conditioned on $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$, the random variables $\Phi(S)$ and $(g\Phi)(\boldsymbol{X}\setminus S)$ are independent. In addition, by Mecke's formula, $\mathbb{E}\left[(g\Phi)(\boldsymbol{X}\setminus S)\,\middle|\,[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]\right]=c(g\boldsymbol{\mu})(\boldsymbol{X}\setminus S)$. Hence,

\[
\mathbb{E}\left[a_{1}\right]=c\,\mathbb{E}\left[(g\boldsymbol{\mu})(\boldsymbol{X}\setminus S)1_{\{\Phi(S)=k\}}\right]=c\,\mathbb{E}\left[\int_{\boldsymbol{X}\setminus S}f(\boldsymbol{X},\boldsymbol{o},y,\boldsymbol{\mu},\Phi+\delta_{y})\,d\boldsymbol{\mu}(y)\right].
\]

For the second term, conditioned on $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$ and on $\Phi(S)=k$, $\Phi|_{S}$ is distributed as a set of $k$ i.i.d. random points with distribution proportional to $\boldsymbol{\mu}|_{S}$. So, $\mathbb{E}\left[a_{2}\,\middle|\,[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}],\Phi(S)=k\right]=k(g\boldsymbol{\mu})(S)/\boldsymbol{\mu}(S)$. By the formula for the Poisson distribution with parameter $c\boldsymbol{\mu}(S)$, one obtains $\mathbb{E}\left[a_{2}\,\middle|\,[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]\right]=c(g\boldsymbol{\mu})(S)\,\mathbb{P}\left[\Phi(S)=k-1\,\middle|\,[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]\right]$. Thus,

\[
\mathbb{E}\left[a_{2}\right]=c\,\mathbb{E}\left[(g\boldsymbol{\mu})(S)1_{\{\Phi(S)=k-1\}}\right]=c\,\mathbb{E}\left[\int_{S}f(\boldsymbol{X},\boldsymbol{o},y,\boldsymbol{\mu},\Phi+\delta_{y})\,d\boldsymbol{\mu}(y)\right].
\]

This proves (6.7), and the theorem follows. ∎

7 Some Applications of Palm Theory

In Subsections 7.2 and 7.3, we prove equivariant disintegration and the amenability theorem (Lemma 6.3 and Theorem 5.21) using the Palm theory developed in Section 6. The proof proceeds by constructing a countable Borel equivalence relation using the Poisson point process, and then constructing an invariant measure using the Palm theory. The results on Borel equivalence relations are then used to prove the claims. Before that, we study the existence of balancing transport kernels in Subsection 7.1, which will be used in Subsection 7.3.

7.1 Balancing Transport Kernels

Let $\Phi$ and $\Psi$ be jointly-equivariant random measures on a unimodular rmm space $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$ with positive and finite intensities. A balancing transport kernel from $\Phi$ to $\Psi$ is a Markovian kernel $k$ on $\boldsymbol{X}$ (in the sense of Subsection 3.3, but possibly depending on $\Phi$ and $\Psi$ as well) that transports $\Phi$ to $\Psi$; i.e., $\int_{\boldsymbol{X}}k(x,\cdot)\,d\Phi(x)=\Psi(\cdot)$ a.s. This extends the notion of invariant transports for stationary random measures [33] and fair tessellations for point processes [25]. On the Euclidean space, using the shift-coupling result of Thorisson [39], a necessary and sufficient condition for the existence of invariant transports is provided in [33]. An analogous result is provided for unimodular graphs in [30]. Here, we extend these results to unimodular rmm spaces.

An event $A\subseteq\mathcal{M}^{3}_{*}$ is called invariant if it does not depend on the root. Let $I$ denote the invariant sigma-field. The sample intensity of $\Phi$ is defined by $\mathbb{E}\left[h^{+}_{\Phi}(\boldsymbol{o})\,\middle|\,I\right]$, where $h$ is the function in (6.3).

Theorem 7.1.

If [𝐗,𝐨,𝛍,Φ,Ψ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi,\Psi] is ergodic, then the existence of balancing transport kernels is equivalent to the condition that Φ\Phi and Ψ\Psi have the same intensities. In the non-ergodic case, the condition is that the sample intensities of Φ\Phi and Ψ\Psi are equal a.s.

The latter is equivalent to the condition that, after conditioning on any invariant event, $\Phi$ and $\Psi$ have the same intensities. It is also equivalent to the condition that $\mathbb{P}_{\Phi}$ and $\mathbb{P}_{\Psi}$ agree on $I$.

This theorem is an extension of Theorem 5.1 of [33]. To prove it, one might try to extend the shift-coupling result of [39] (as done in [30] for unimodular graphs), but we give another proof by extending the constructive proof of [21]. The latter is an extension of [23] for point processes, which is an infinite version of the Gale-Shapley stable marriage algorithm.

First, note that the existence of a balancing transport kernel is equivalent to the existence of a balancing transport density; i.e., a measurable function $K:\mathcal{M}^{3}_{**}\to\mathbb{R}^{\geq 0}$ such that $\int_{\boldsymbol{X}}K(x,\cdot)\,d\Psi=\int_{\boldsymbol{X}}K(\cdot,y)\,d\Phi=1$ for all $x,y\in X$, on an invariant event with full probability. To show this, if $k(x,\cdot)$ is absolutely continuous w.r.t. $\Psi$ for all $x$, it is enough to consider the Radon-Nikodym derivative equivariantly (as in Lemma 3.9) and to modify it on a null event. In the singular case, it is enough to smooth $k$ by composing it with another kernel that preserves $\Psi$, given by Lemma 3.10.
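The defining identities of a balancing transport density and its induced kernel can be illustrated on a finite toy space. The sketch below uses hypothetical discrete measures and is not the equivariant construction itself: if $\Phi$ and $\Psi$ have equal total mass $m$, the constant density $K\equiv 1/m$ is balancing, and the kernel $k(x,\cdot)=K(x,\cdot)\Psi$ is Markovian and transports $\Phi$ to $\Psi$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete measures Phi and Psi on 4 resp. 5 points, normalized to equal mass.
phi = rng.random(4)
psi = rng.random(5)
psi *= phi.sum() / psi.sum()      # now Phi(X) = Psi(X) = m
m = phi.sum()

# The constant density K(x, y) = 1/m is balancing:
K = np.full((4, 5), 1.0 / m)
assert np.allclose(K @ psi, 1.0)  # int K(x, .) dPsi = 1 for every x
assert np.allclose(phi @ K, 1.0)  # int K(., y) dPhi = 1 for every y

# The induced Markovian kernel k(x, .) = K(x, .) Psi transports Phi to Psi.
k = K * psi[None, :]              # k[x, y] = K(x, y) * psi_y
assert np.allclose(k.sum(axis=1), 1.0)   # k is Markovian (rows sum to 1)
assert np.allclose(phi @ k, psi)         # int k(x, .) dPhi(x) = Psi
```

The constant density is of course degenerate; the point of the stable constrained density of [21] is to produce a balancing density that is also equivariant and local.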

Sketch of the Proof of Theorem 7.1.

For simplicity, we assume ergodicity. Assume a balancing transport density KK exists. Under Φ\mathbb{P}_{\Phi}, by letting h:=Kh:=K in (6.3), one obtains that the intensity of Ψ\Psi is equal to 1. So, the exchange formula in Corollary 6.19 implies that λΦ=λΨ\lambda_{\Phi}=\lambda_{\Psi}. For the converse, it is straightforward to extend Algorithm 4.4 of [21] to construct a density KK (the stable constrained density). Assuming λΦ=λΨ\lambda_{\Phi}=\lambda_{\Psi}, one can use the Palm theory developed in Section 6 and mimic the proof of Theorem 4.8 of [21] to prove that KK is balancing. The proof is not short, but the extension is straightforward and is skipped for brevity. ∎
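The Gale-Shapley mechanism underlying the stable constrained density of [21] can be sketched in a finite toy setting. The following is only the classical deferred-acceptance algorithm with distance-based preferences on the line (the function name and data are illustrative), not the measure-valued extension used in the actual proof.

```python
import numpy as np

def stable_matching_by_distance(sites, centers):
    """Deferred acceptance: each site prefers closer centers and each
    center prefers closer sites. Returns match[site] = center index."""
    n = len(sites)
    dist = np.abs(sites[:, None] - centers[None, :])   # pairwise distances
    prefs = np.argsort(dist, axis=1)                   # site preference lists
    next_choice = [0] * n        # next preference each site will propose to
    match_of_center = [None] * n
    free = list(range(n))
    while free:
        s = free.pop()
        c = prefs[s, next_choice[s]]
        next_choice[s] += 1
        cur = match_of_center[c]
        if cur is None:
            match_of_center[c] = s
        elif dist[s, c] < dist[cur, c]:
            match_of_center[c] = s
            free.append(cur)     # displaced site proposes again
        else:
            free.append(s)
    match = [None] * n
    for c, s in enumerate(match_of_center):
        match[s] = c
    return match, dist

sites = np.array([0.0, 0.3, 0.9])
centers = np.array([0.1, 0.5, 1.0])
match, dist = stable_matching_by_distance(sites, centers)
print(match)  # → [0, 1, 2]
```

The resulting matching has no blocking pair: no site-center pair strictly prefers each other to their assigned partners.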

Theorem 7.1 allows us to prove the following result, which generalizes a result of [39] (see the end of Section 1 of [39]).

Theorem 7.2.

Let Φ\Phi be an equivariant random measure on a unimodular rmm space [𝐗,𝐨,𝛍][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]. Then, the following are equivalent:

  1. (i)

    The Palm distribution of Φ\Phi is obtained by a random re-rooting of [𝑿,𝒐,𝝁,Φ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi].

  2. (ii)

    There exists a balancing transport kernel between c𝝁c\boldsymbol{\mu} and Φ\Phi for some cc.

  3. (iii)

    The sample intensity of Φ\Phi is constant.

In particular, these conditions hold if [𝐗,𝐨,𝛍,Φ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi] is ergodic.

Proof.

The equivalence (iii)\Leftrightarrow(ii) is implied by Theorem 7.1.

(i)\Rightarrow(iii). It is straightforward to deduce from (i) that Φ(A)=[A]\mathbb{P}_{\Phi}(A)=\mathbb{P}\left[A\right] for every invariant event AA. This implies (iii).

(ii)\Rightarrow(i). Let KK be such a balancing transport density. In this case, Theorem 7.1 for h:=1cKh:=\frac{1}{c}K shows that the Palm version is obtained by re-rooting according to the kernel h(𝒐,)Φh(\boldsymbol{o},\cdot)\Phi (and there is no biasing since hΦ+=1h^{+}_{\Phi}=1). ∎

Example 7.3.

Assume $\Psi$ is the Poisson point process with intensity measure $\Phi$. Then, if $\Phi(\boldsymbol{X})=\infty$ a.s., the ergodicity mentioned in Example 6.10 implies that the conditions of Theorem 7.1 are satisfied (if $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi]$ is not ergodic, use its ergodic decomposition). So, there exists a balancing transport density. In addition, in the case $\Phi=\boldsymbol{\mu}$, Theorems 7.2 and 6.24 imply that there exists a random re-rooting that is equivalent (in distribution) to adding a point at the origin to $\Psi$. This is an extension of extra head schemes [25].

7.2 Proof of Equivariant Disintegration

In this subsection, we prove Lemma 6.3 in the following steps.

Lemma 7.4.

The claim of Lemma 6.3 holds if $\boldsymbol{\mu}$ is a counting measure a.s. and every automorphism of $(\boldsymbol{X},\boldsymbol{\mu})$ fixes every atom of $\boldsymbol{\mu}$ a.s.

In this case, we will use results on countable Borel equivalence relations to deduce the claim from invariant disintegration for group actions.

Proof.

Let AA be the event that 𝝁\boldsymbol{\mu} is a counting measure and every automorphism of (𝑿,𝝁)(\boldsymbol{X},\boldsymbol{\mu}) fixes every atom of 𝝁\boldsymbol{\mu}. Consider the following countable equivalence relation on \mathcal{M}_{*}: Outside AA or if oo is not an atom of μ\mu, [X,o,μ][X,o,\mu] is equivalent to only itself. On AA and if o1o_{1} and o2o_{2} are atoms of μ\mu, let [X,o1,μ][X,o_{1},\mu] be equivalent to [X,o2,μ][X,o_{2},\mu]. Then, this is a Borel equivalence relation on \mathcal{M}_{*} and every equivalence class is countable. Therefore, by Theorem 1 of [17], it is generated by the action of some countable group GG on \mathcal{M}_{*}. Note that on AA, the map y[X,y,μ]y\mapsto[X,y,\mu] maps the atoms of μ\mu bijectively to an equivalence class. So, on AA, GG acts on the set of atoms of μ\mu. Let gygy denote this action if gGg\in G and yy is an atom of μ\mu (outside AA or if yy is not an atom, one has gy=ygy=y). This property allows us to extend the action of GG to 2\mathcal{M}^{2}_{*} as follows (the assumption on automorphisms is essential for this goal): Given [Y,p,μ1,μ2]2[Y,p,\mu_{1},\mu_{2}]\in\mathcal{M}^{2}_{*} and gGg\in G, let g[Y,p,μ1,μ2]:=[Y,gp,μ1,μ2]g[Y,p,\mu_{1},\mu_{2}]:=[Y,gp,\mu_{1},\mu_{2}], where gpgp is defined by forgetting μ2\mu_{2} and using the above definition.

Let QQ and Q~\tilde{Q} be the distributions of [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] and [𝒀,𝒑,𝝁1,𝝁2][\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2}] respectively. The two actions of GG on \mathcal{M}_{*} and 2\mathcal{M}^{2}_{*} are compatible with the projection π:2\pi:\mathcal{M}^{2}_{*}\to\mathcal{M}_{*}, which is defined by forgetting the second measure. In addition, since every gGg\in G acts bijectively on the set of atoms of 𝝁\boldsymbol{\mu} and 𝝁1\boldsymbol{\mu}_{1}, the distributions QQ and Q~\tilde{Q} are invariant under these actions (by Theorem 3.8). Hence, we can use Kallenberg’s invariant disintegration theorem (Theorem 3.5 of [27]). This gives a disintegration kernel kk from \mathcal{M}_{*} to 2\mathcal{M}^{2}_{*} such that for all events B2B\subseteq\mathcal{M}^{2}_{*},

\[
\tilde{Q}(B)=\int_{\mathcal{M}_{*}}k([X,o,\mu],B)\,dQ([X,o,\mu]) \tag{7.1}
\]

and $k(g[X,o,\mu],gB)=k([X,o,\mu],B)$ for all $g\in G$. This implies that $k([X,o,\mu],\cdot)$ is concentrated on $\{[X,o,\mu,\varphi]:\varphi\in M(X)\}$, where $M(X)$ is the set of boundedly-finite measures on $X$. Assuming $[X,o,\mu]\in A$, by applying a random element of the automorphism group of $(X,\mu)$ (which is a compact group, since the atoms are fixed points) to $k([X,o,\mu],\cdot)$, one obtains an automorphism-invariant probability measure on $M(X)$, which we call $k^{\prime}(o,\cdot)$. Invariance under the action of $G$ implies that $k^{\prime}(o_{1},\cdot)=k^{\prime}(o_{2},\cdot)$ for all atoms $o_{1}$ and $o_{2}$ of $\mu$. So, for all $y\in X$, one can let $k^{\prime}(y,\cdot):=k^{\prime}(o,\cdot)$, where $o$ is an arbitrary atom of $\mu$. Now, by choosing $\Phi_{(X,y,\mu)}$ randomly with law $k^{\prime}(y,\cdot)$, one obtains an equivariant random measure which satisfies all of the assumptions of Definition 6.1. In addition, (7.1) implies that $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi]$ has law $\tilde{Q}$ and the claim is proved. ∎

Lemma 7.5.

The claim of Lemma 6.3 holds if there exists a factor point process 𝐒\boldsymbol{S} (depending only on 𝐗\boldsymbol{X} and 𝛍\boldsymbol{\mu}) with finite intensity such that every automorphism of (𝐗,𝛍)(\boldsymbol{X},\boldsymbol{\mu}) fixes every element of 𝐒\boldsymbol{S} a.s. and 𝐒\boldsymbol{S} is nonempty a.s.

Proof.

Let QQ and Q~\tilde{Q} be the distributions of [𝑿,𝒐,𝝁,𝑺][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\boldsymbol{S}] and [𝒀,𝒑,𝝁1,𝝁2,𝑺][\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2},\boldsymbol{S}], as random elements of 2\mathcal{M}^{2}_{*} and 3\mathcal{M}^{3}_{*} respectively, where in the latter, 𝑺=𝑺(𝒀,𝝁1)\boldsymbol{S}=\boldsymbol{S}(\boldsymbol{Y},\boldsymbol{\mu}_{1}). The Palm distributions of 𝑺\boldsymbol{S} in these two unimodular objects give probability measures Q0Q_{0} and Q~0\tilde{Q}_{0} respectively. By Theorem 6.17, under Q0Q_{0}, [𝑿,𝒐,𝝁,𝑺][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\boldsymbol{S}] is unimodular with respect to the counting measure on 𝑺\boldsymbol{S} (Definition 6.16) and the same holds for [𝒀,𝒑,𝝁1,𝝁2,𝑺][\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2},\boldsymbol{S}]. In addition, it is straightforward to show that, under Q~0\tilde{Q}_{0}, [𝒀,𝒑,𝝁1,𝑺][\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{1},\boldsymbol{S}] has the same distribution as Q0Q_{0}. Now, we can use Lemma 7.4 to find an equivariant disintegration of Q~0\tilde{Q}_{0} w.r.t. Q0Q_{0} (the same argument works if the counting measure of 𝑺\boldsymbol{S} is regarded as the base measure and 𝝁\boldsymbol{\mu} and 𝝁1\boldsymbol{\mu}_{1} are thought of as decorations). This gives an equivariant random measure Φ\Phi (whose distribution depends on (X,μ,S)(X,\mu,S)) such that

\[
\tilde{Q}_{0}(B)=\int\mathbb{P}\left[[X,o,\mu,\Phi,S]\in B\right]\,dQ_{0}([X,o,\mu,S])
\]

for all events BB. By using Palm inversion to reconstruct QQ and Q~\tilde{Q} (Subsection 6.3.3), it is straightforward to deduce that the above equation holds if Q~0\tilde{Q}_{0} and Q0Q_{0} are replaced by Q~\tilde{Q} and QQ respectively. This shows that Φ\Phi is the desired equivariant random measure and the claim is proved. ∎

We are now ready to prove Lemma 6.3.

Proof of Lemma 6.3.

First assume that 𝝁(𝑿)=\boldsymbol{\mu}(\boldsymbol{X})=\infty a.s. We start by adding a marked Poisson point process to ensure that the assumptions of Lemma 7.5 hold. Let Ψ=Ψ(𝑿,𝝁)\Psi=\Psi(\boldsymbol{X},\boldsymbol{\mu}) be the Poisson point process on 𝑿\boldsymbol{X} with intensity measure 𝝁\boldsymbol{\mu} and equip the points of Ψ\Psi with i.i.d. marks in [0,1][0,1] chosen with the uniform distribution. By a suitable extension of the GHP metric discussed in Subsection 5.1, one can think of [𝑿,𝒐,𝝁,Ψ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Psi] and [𝒀,𝒑,𝝁1,𝝁2,Ψ][\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2},\Psi] as unimodular rmm spaces equipped with additional structures, where in the latter, Ψ=Ψ(𝒀,𝝁1)\Psi=\Psi(\boldsymbol{Y},\boldsymbol{\mu}_{1}). In the former, the probability space is the set \mathcal{M}_{*}^{\prime} of tuples [X,o,μ,ψ][X,o,\mu,\psi], where (X,o,μ)(X,o,\mu) is a rmm space and ψ\psi is a marked measure on XX; i.e., a boundedly-finite measure on X×[0,1]X\times[0,1].

Note that the support of Ψ\Psi is a factor point process (depending on 𝝁\boldsymbol{\mu} and Ψ\Psi) such that every point of the subset is fixed under every automorphism of (𝑿,𝝁,Ψ)(\boldsymbol{X},\boldsymbol{\mu},\Psi) a.s. (due to the i.i.d. marks). In addition, the support is not empty a.s. since 𝝁(𝑿)=\boldsymbol{\mu}(\boldsymbol{X})=\infty a.s. So, we can use an argument similar to Lemma 7.5 to obtain an equivariant random measure Φ\Phi such that [𝑿,𝒐,𝝁,Φ,Ψ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi,\Psi] has the same distribution as [𝒀,𝒑,𝝁1,𝝁2,Ψ][\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2},\Psi].

Note that $\Phi=\Phi(X,\mu,\psi)$ is defined for deterministic spaces $(X,\mu,\psi)$ and not for spaces of the form $(X,\mu)$. To resolve this issue, one can take the expectation w.r.t. $\Psi$ as follows. Let $(X,o,\mu)$ be a realization of $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$. Given a realization $\psi$ of the marked Poisson point process on $(X,\mu)$, let $\alpha=\alpha_{(X,\mu,\psi)}$ be the distribution of $\Phi=\Phi(X,\mu,\psi)$. Here, $\alpha$ is a probability measure on $M(X)$. Let $\beta_{(X,\mu)}:=\mathbb{E}\left[\alpha_{(X,\mu,\Psi)}\right]$, where the expectation is w.r.t. the randomness of $\Psi$. It is straightforward to see that $\beta$ is invariant under the automorphisms of $(X,\mu)$. Now, by choosing a random measure $\Phi^{\prime}$ on $X$ with distribution $\beta_{(X,\mu)}$, one obtains an equivariant random measure such that $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi^{\prime}]$ has the same distribution as $[\boldsymbol{Y},\boldsymbol{p},\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2}]$, as desired. So the claim is proved.

Finally, assume 𝝁(𝑿)<\boldsymbol{\mu}(\boldsymbol{X})<\infty with positive probability. Conditioned on the event 𝝁(𝑿)=\boldsymbol{\mu}(\boldsymbol{X})=\infty, the above arguments provide the equivariant disintegration. Conditioned on the event 𝝁(𝑿)<\boldsymbol{\mu}(\boldsymbol{X})<\infty, the situation is easier. Under this conditioning, [𝑿,𝝁][\boldsymbol{X},\boldsymbol{\mu}] and [𝒀,𝝁1,𝝁2][\boldsymbol{Y},\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2}] make sense as random non-rooted measured metric spaces and the corresponding probability spaces are Polish (under suitable topologies which are skipped here). So, it is enough to consider the regular conditional distribution of [𝒀,𝝁1,𝝁2][\boldsymbol{Y},\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2}] w.r.t. [𝒀,𝝁1][\boldsymbol{Y},\boldsymbol{\mu}_{1}], and then apply a random automorphism of (𝒀,𝝁1)(\boldsymbol{Y},\boldsymbol{\mu}_{1}) to obtain an equivariant random measure. This automatically does not depend on the root and has the desired properties. ∎

7.3 Proof of the Amenability Theorem

In this subsection, we prove that the different notions of amenability defined in Subsection 5.6 are equivalent (Theorem 5.21).

Let [𝑿,𝒐,𝝁][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}] be a unimodular rmm space. First, we assume that 𝝁\boldsymbol{\mu} does not have atoms a.s. and 𝝁(𝑿)=\boldsymbol{\mu}(\boldsymbol{X})=\infty a.s. As in Subsection 7.2, let Φ\Phi be the Poisson point process on 𝑿\boldsymbol{X} with intensity measure 𝝁\boldsymbol{\mu} (Example 6.10), equipped with i.i.d. marks in [0,1][0,1] chosen with the uniform distribution. Then, Φ\Phi is infinite, there are no multiple points and Φ\Phi has no nontrivial automorphism a.s. Let Φ\mathbb{P}_{\Phi} be the Palm distribution of Φ\Phi. By Theorem 6.17, under Φ\mathbb{P}_{\Phi}, [𝑿,𝒐,𝝁,Φ][\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu},\Phi] is unimodular w.r.t. Φ\Phi.

The above definitions give a countable equivalence relation as follows. The probability measure Φ\mathbb{P}_{\Phi} is concentrated on the subset ′′\mathcal{M}_{*}^{\prime\prime}\subseteq\mathcal{M}_{*}^{\prime} of tuples [X,o,μ,φ][X,o,\mu,\varphi] in which μ\mu has no atom, μ(X)=\mu(X)=\infty, φ\varphi is the counting measure on an infinite discrete subset of XX, oφo\in\varphi and the marks of the points of φ\varphi are distinct. Let RR be the equivalence relation on \mathcal{M}_{*}^{\prime} defined as follows: Outside ′′\mathcal{M}_{*}^{\prime\prime}, everything is equivalent to only itself. If [X,o,μ,φ]′′[X,o,\mu,\varphi]\in\mathcal{M}_{*}^{\prime\prime}, then it is equivalent to [X,y,μ,φ][X,y,\mu,\varphi] for all yφy\in\varphi. Then, RR is a countable equivalence relation on the Polish space \mathcal{M}_{*}^{\prime}. In addition, unimodularity of Φ\mathbb{P}_{\Phi} is equivalent to invariance of Φ\mathbb{P}_{\Phi} under RR (see Subsection 4.6 and note that having no automorphism is important for this equivalence).

Lemma 7.6.

If 𝛍\boldsymbol{\mu} has infinite total mass and no atom a.s., then each of the conditions of the existence of local means, the existence of approximate means, and hyperfiniteness is equivalent to the analogous condition for the countable measured equivalence relation (,R,Φ)(\mathcal{M}_{*}^{\prime},R,\mathbb{P}_{\Phi}) defined above.

Proof.

An essential ingredient of the proof is the Voronoi tessellation: For every [X,o,μ,φ]′′[X,o,\mu,\varphi]\in\mathcal{M}_{*}^{\prime\prime}, let τ(o)\tau(o) be the closest point of φ\varphi to oo. If the closest point is not unique, choose the one with the smallest mark. For yφy\in\varphi, τ1(y)\tau^{-1}(y) is the Voronoi cell of yy. Another ingredient is the balancing transport density constructed in Subsection 7.1 (Example 7.3). This gives an equivariant function k(x,y)k(x,y) for x𝑿x\in\boldsymbol{X} and yΦy\in\Phi such that zΦk(x,z)=1\sum_{z\in\Phi}k(x,z)=1 and k(z,y)𝑑𝝁(z)=1\int k(z,y)d\boldsymbol{\mu}(z)=1 a.s. (for all xx and yy). From now on, we always assume [X,o,μ,φ]′′[X,o,\mu,\varphi]\in\mathcal{M}_{*}^{\prime\prime} and the above balancing property of kk holds. In the next paragraphs, we prove each equivalence claimed in the lemma.

Local Mean. Given fL(X,μ)f\in L^{\infty}(X,\mu), define fL(φ)f^{\prime}\in L^{\infty}(\varphi) by f(y):=f(z)k(z,y)𝑑μ(z)f^{\prime}(y):=\int f(z)k(z,y)d\mu(z) (the balancing property of kk is important here). Conversely, given gL(φ)g\in L^{\infty}(\varphi), define gL(X,μ)g^{\prime}\in L^{\infty}(X,\mu) by g(x):=zφg(z)k(x,z)g^{\prime}(x):=\sum_{z\in\varphi}g(z)k(x,z). Using this, every mean on L(φ)L^{\infty}(\varphi) gives a mean on L(X,μ)L^{\infty}(X,\mu) and vice versa. This implies the claim.

Approximate mean. Assume (λn)n(\lambda_{n})_{n} is an approximate mean. For x,yφx,y\in\varphi, define Λn(x,y):=λn(x,z)k(z,y)𝑑μ(z)\Lambda_{n}(x,y):=\int\lambda_{n}(x,z)k(z,y)d\mu(z). This gives approximate means for φ\varphi. Equivalently, (Λn)n(\Lambda_{n})_{n} satisfies the condition (AI) of [26] (it is important that φ\varphi has no nontrivial automorphism). Conversely, given approximate means Λn\Lambda_{n} for (R,Φ)(R,\mathbb{P}_{\Phi}), for x,yXx,y\in X, define λn(x,y):=ztk(x,z)Λn(z,t)k(t,y)\lambda_{n}(x,y):=\sum_{z}\sum_{t}k(x,z)\Lambda_{n}(z,t)k(t,y). It is left to the reader to prove that λn\lambda_{n} is an approximate mean.

Hyperfiniteness. In Lemma 5.23, it is proved that (HF2) $\Rightarrow$ (HF3) $\Rightarrow$ (HF1). Assume $(\Pi_{n})_{n}$ satisfies (HF1) and, conditioned on $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$, the extra randomness in the definition of $(\Pi_{n})_{n}$ is independent of $\Phi$ (use Example 6.11). Since each element of $\Pi_{n}$ has finite mass, it also contains finitely many points of the Poisson point process a.s. So, $(\Pi_{n})_{n}$ induces equivariant nested finite partitions of $\Phi$. If there is no extra randomness (i.e., $(\Pi_{n})_{n}$ is a factor), it also induces a nested sequence of Borel equivalence sub-relations of $R$, and $R$ is hyperfinite. If not, one can enlarge $\mathcal{M}_{*}^{\prime}$ according to the extra randomness of the partitions, but this does not affect the hyperfiniteness of the Borel equivalence relation (using Theorem 1 of [26], find a local mean and then take its expectation w.r.t. the extra randomness to obtain a local mean for $(R,\mathbb{P}_{\Phi})$).
Finally, assume (R,Φ)(R,\mathbb{P}_{\Phi}) is hyperfinite. This gives equivariant nested finite partitions Πn\Pi_{n} of φ\varphi such that yφ,r<,n:Br(y)φΠn(y)\forall y\in\varphi,\forall r<\infty,\exists n:B_{r}(y)\cap\varphi\subseteq\Pi_{n}(y). Define the partition Πn\Pi^{\prime}_{n} of XX as follows: xΠn(y)x\in\Pi^{\prime}_{n}(y) if and only if τ(x)Πn(τ(y))\tau(x)\in\Pi_{n}(\tau(y)). The sequence (Πn)n(\Pi^{\prime}_{n})_{n} satisfies (HF2) since τ(Br(𝒐))\tau(B_{r}(\boldsymbol{o})) is a finite set a.s. for every rr (since τ(Br(𝒐))B2r+s(𝒐)\tau(B_{r}(\boldsymbol{o}))\subseteq B_{2r+s}(\boldsymbol{o}), where s:=d(𝒐,τ(𝒐))s:=d(\boldsymbol{o},\tau(\boldsymbol{o})), which is straightforward to verify). This finishes the proof. ∎
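The tessellation map $\tau$ used in this proof can be sketched for a finite configuration on the line. This is a toy illustration with hypothetical data: among the closest points of the configuration, ties are broken by the smallest mark, exactly as above.

```python
import numpy as np

def tau(x, points, marks):
    """Voronoi assignment with mark tie-breaking: return the index of the
    closest point of the configuration; among equally close points,
    the one with the smallest mark wins."""
    d = np.abs(points - x)
    closest = np.flatnonzero(d == d.min())       # indices of all nearest points
    return closest[np.argmin(marks[closest])]    # smallest mark among them

points = np.array([0.0, 2.0, 4.0])
marks = np.array([0.7, 0.2, 0.9])
assert tau(0.4, points, marks) == 0   # strictly closest point wins
assert tau(1.0, points, marks) == 1   # tie between points 0 and 1: smaller mark
assert tau(3.5, points, marks) == 2
```

With i.i.d. uniform marks, ties have probability zero in the continuum setting, but the tie-breaking rule keeps the map well defined for every realization.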

Proof of Theorem 5.21.

The Følner conditions are already treated in Lemma 5.23. Here, we prove the equivalence of the remaining conditions. On the event $\boldsymbol{\mu}(\boldsymbol{X})<\infty$, all of the conditions hold, so it is enough to assume $\boldsymbol{\mu}(\boldsymbol{X})=\infty$ a.s. If $\boldsymbol{\mu}$ has no atoms a.s., then Lemma 7.6 implies that the different notions of amenability of $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$ are equivalent to those for $(\mathcal{M}_{*}^{\prime},R,\mathbb{P}_{\Phi})$ defined above. So the equivalence of the conditions is implied by Theorem 1 of [26].

Finally, if $\boldsymbol{\mu}$ is allowed to have atoms, we multiply $\boldsymbol{X}$ by $[0,1]$ to destroy the atoms as follows. Let $\boldsymbol{X}^{\prime}:=\boldsymbol{X}\times[0,1]$, equipped with the sum metric $d^{\prime}((x_{1},t_{1}),(x_{2},t_{2})):=d(x_{1},x_{2})+\left|t_{1}-t_{2}\right|$, and let $\boldsymbol{\mu}^{\prime}:=\boldsymbol{\mu}\times\mathrm{Leb}$. Then, by choosing $\boldsymbol{o}^{\prime}$ uniformly in $\{\boldsymbol{o}\}\times[0,1]$, $[\boldsymbol{X}^{\prime},\boldsymbol{o}^{\prime},\boldsymbol{\mu}^{\prime},\boldsymbol{X}\times\{0\}]$ is unimodular, where $\boldsymbol{X}\times\{0\}$ is kept as a distinguished closed subset of $\boldsymbol{X}^{\prime}$ (Example 2.5; this is in fact the Palm version of $\boldsymbol{\mu}^{\prime}$ by Theorem 6.14). Note that the product structure can be recovered from $(\boldsymbol{X}^{\prime},\boldsymbol{X}\times\{0\})$. It is left to the reader to show that the different notions of amenability for $[\boldsymbol{X},\boldsymbol{o},\boldsymbol{\mu}]$ are equivalent to those for $[\boldsymbol{X}^{\prime},\boldsymbol{o}^{\prime},\boldsymbol{\mu}^{\prime},\boldsymbol{X}\times\{0\}]$. Since $\boldsymbol{\mu}^{\prime}$ has no atoms (indeed, $\boldsymbol{\mu}^{\prime}(\{(x,t)\})=\boldsymbol{\mu}(\{x\})\cdot\mathrm{Leb}(\{t\})=0$ for every $(x,t)$), the first part of the proof implies the claim. ∎

Acknowledgments

This work was supported by the ERC NEMO grant, under the European Union’s Horizon 2020 research and innovation programme, grant agreement number 788851 to INRIA. A major part of the work was done when the author was affiliated with IPM. The research was in part supported by a grant from IPM (No. 98490118).

References

  • [1] D. Aldous. The continuum random tree. I. Ann. Probab., 19(1):1–28, 1991.
  • [2] D. Aldous. The continuum random tree. II. An overview. In Stochastic analysis (Durham, 1990), volume 167 of London Math. Soc. Lecture Note Ser., pages 23–70. Cambridge Univ. Press, Cambridge, 1991.
  • [3] D. Aldous and R. Lyons. Processes on unimodular random networks. Electron. J. Probab., 12:no. 54, 1454–1508, 2007.
  • [4] D. Aldous and J. M. Steele. The objective method: probabilistic combinatorial optimization and local weak convergence. In Probability on discrete structures, volume 110 of Encyclopaedia Math. Sci., pages 1–72. Springer, Berlin, 2004.
  • [5] F. Baccelli, M.-O. Haji-Mirsadeghi, and A. Khezeli. Eternal family trees and dynamics on unimodular random graphs. In Unimodularity in randomly generated graphs, volume 719 of Contemp. Math., pages 85–127. Amer. Math. Soc., Providence, RI, 2018.
  • [6] F. Baccelli, M.-O. Haji-Mirsadeghi, and A. Khezeli. Unimodular Hausdorff and Minkowski dimensions. Electron. J. Probab., 26:Paper No. 155, 64, 2021.
  • [7] I. Benjamini, R. Lyons, Y. Peres, and O. Schramm. Group-invariant percolation on graphs. Geom. Funct. Anal., 9(1):29–66, 1999.
  • [8] I. Benjamini and O. Schramm. Recurrence of distributional limits of finite planar graphs. Electron. J. Probab., 6:no. 23, 13, 2001.
  • [9] C. J. Bishop and Y. Peres. Fractals in probability and analysis, volume 162 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2017.
  • [10] T. Budzinski. The hyperbolic Brownian plane. Probab. Theory Related Fields, 171(1-2):503–541, 2018.
  • [11] G. Cannizzaro and M. Hairer. The Brownian web as a random \mathbb{R}-tree. arXiv preprint arXiv:2102.04068, 2021.
  • [12] A. Connes, J. Feldman, and B. Weiss. An amenable equivalence relation is generated by a single transformation. Ergodic Theory Dynam. Systems, 1(4):431–450 (1982), 1981.
  • [13] M. Deijfen, A. E. Holroyd, and J. B. Martin. Friendly frogs, stable marriage, and the magic of invariance. Amer. Math. Monthly, 124(5):387–402, 2017.
  • [14] T. Duquesne and J. F. Le Gall. Random trees, Lévy processes and spatial branching processes. Astérisque, (281):vi+147, 2002.
  • [15] T. Duquesne and J. F. Le Gall. Probabilistic and fractal aspects of Lévy trees. Probab. Theory Related Fields, 131(4):553–603, 2005.
  • [16] R. H. Farrell. Representation of invariant measures. Illinois J. Math., 6:447–467, 1962.
  • [17] J. Feldman and C. C. Moore. Ergodic equivalence relations, cohomology, and von Neumann algebras. I. Trans. Amer. Math. Soc., 234(2):289–324, 1977.
  • [18] L. R. G. Fontes, M. Isopi, C. M. Newman, and K. Ravishankar. The Brownian web: characterization and convergence. Ann. Probab., 32(4):2857–2883, 2004.
  • [19] H. Freudenthal. Über die Enden topologischer Räume und Gruppen. Math. Z., 33(1):692–713, 1931.
  • [20] O. Häggström. Infinite clusters in dependent automorphism invariant percolation on trees. Ann. Probab., 25(3):1423–1436, 1997.
  • [21] M. O. Haji-Mirsadeghi and A. Khezeli. Stable transports between stationary random measures. Electron. J. Probab., 21:Paper No. 51, 25, 2016.
  • [22] H. Hatami, L. Lovász, and B. Szegedy. Limits of locally-globally convergent graph sequences. Geom. Funct. Anal., 24(1):269–296, 2014.
  • [23] C. Hoffman, A. E. Holroyd, and Y. Peres. A stable marriage of Poisson and Lebesgue. Ann. Probab., 34(4):1241–1272, 2006.
  • [24] A. E. Holroyd and Y. Peres. Trees and matchings from point processes. Electron. Comm. Probab., 8:17–27, 2003.
  • [25] A. E. Holroyd and Y. Peres. Extra heads and invariant allocations. Ann. Probab., 33(1):31–52, 2005.
  • [26] V. A. Kaimanovich. Amenability, hyperfiniteness, and isoperimetric inequalities. C. R. Acad. Sci. Paris Sér. I Math., 325(9):999–1004, 1997.
  • [27] O. Kallenberg. Invariant measures and disintegrations with applications to Palm and related kernels. Probab. Theory Related Fields, 139(1-2):285–310, 2007.
  • [28] A. S. Kechris. The theory of countable Borel equivalence relations.
  • [29] A. Khezeli. A unified framework for generalizing the Gromov-Hausdorff metric. Preprint.
  • [30] A. Khezeli. Shift-coupling of random rooted graphs and networks. To appear in the special issue of Contemporary Mathematics on Unimodularity in Randomly Generated Graphs, 2018.
  • [31] A. Khezeli. Metrization of the Gromov-Hausdorff (-Prokhorov) topology for boundedly-compact metric spaces. Stochastic Process. Appl., 2019.
  • [32] A. Khezeli and S. Mellick. On the existence of balancing allocations and factor point processes. arXiv preprint arXiv:2303.05137, 2023.
  • [33] G. Last and H. Thorisson. Invariant transports of stationary random measures and mass-stationarity. Ann. Probab., 37(2):790–813, 2009.
  • [34] J. F. Le Gall. Brownian geometry. Jpn. J. Math., 14(2):135–174, 2019.
  • [35] L. Lovász. Compact graphings. Acta Math. Hungar., 161(1):185–196, 2020.
  • [36] R. Lyons and O. Schramm. Indistinguishability of percolation clusters. Ann. Probab., 27(4):1809–1836, 1999.
  • [37] J. Mecke. Stationäre zufällige Masse auf lokalkompakten Abelschen Gruppen. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 9:36–58, 1967.
  • [38] R. Schneider and W. Weil. Stochastic and integral geometry. Probability and its Applications (New York). Springer-Verlag, Berlin, 2008.
  • [39] H. Thorisson. Transforming random elements and shifting random fields. Ann. Probab., 24(4):2057–2064, 1996.
  • [40] A. Timár. Tree and grid factors for general point processes. Electron. Comm. Probab., 9:53–59, 2004.
  • [41] R. van der Hofstad. Infinite canonical super-Brownian motion and scaling limits. Comm. Math. Phys., 265(3):547–583, 2006.