Brownian loops on non-smooth surfaces
and the Polyakov-Alvarez formula
Abstract
Let be compactly supported on . Endow with the metric . As the set of Brownian loops centered in with length at least has measure
When is smooth, this follows from the classical Polyakov-Alvarez formula. We show that the above also holds if is not smooth, e.g. if is only Lipschitz. This fact can alternatively be expressed in terms of heat kernel traces, eigenvalue asymptotics, or zeta regularized determinants. Variants of this statement apply to more general non-smooth manifolds on which one considers all loops (not only those centered in a domain ).
We also show that the error is uniform for any family of satisfying certain conditions. This implies that if we weight a measure on this family by the (-truncated) Brownian loop soup partition function, and take the vague limit, we obtain a measure whose Radon-Nikodym derivative with respect to is . When the measure is a certain regularized Liouville quantum gravity measure, a companion work [APPS20] shows that this weighting has the effect of changing the so-called central charge of the surface.
Acknowledgments. We thank Morris Ang, Ewain Gwynne, Camillo De Lellis, Sung-jin Oh, and Peter Sarnak for helpful comments. The authors were partially supported by NSF grants DMS 1712862 and DMS 2153742. J.P. was partially supported by a NSF Postdoctoral Research Fellowship under grant 2002159.
1 Introduction
Let us first recall a few standard definitions and observations. On a compact surface with boundary, the heat kernel trace can be written where are the eigenvalues of the Laplace-Beltrami operator . If is the number of less than then
In other words, is the Laplace transform of . The asymptotics of (as ) are therefore closely related to the asymptotics of (as ). Weyl addressed the latter for bounded planar domains in 1911 [Wey11] (see discussion in [MS+67]) by showing as which is equivalent to
(1.1)
as . In 1966 Kac gave higher order correction terms for on domains with piecewise linear boundaries (accounting for boundary length and corners) in his famously titled paper “Can one hear the shape of a drum?”, which asks what features of the geometry of can be deduced from (or equivalently from ) [Kac66]. McKean and Singer extended these asymptotics from planar domains to smooth manifolds with non-zero curvature [MS+67], where the constant order correction term is a certain curvature integral. For two-dimensional surfaces, with metric given by times a flat metric, the integral turns out to be a natural quantity whose small asymptotics involve a constant order term that corresponds to the Dirichlet energy of (the so-called Polyakov-Alvarez formula, also known as the Polyakov-Ray-Singer or Weyl anomaly formula) [RS73, Pol81, Alv83, Sar87, OPS88]. This constant order Dirichlet energy term (which can also be formulated in terms of Brownian loop soups, see below) is the main concern of this paper.
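For orientation, here are the standard statements in the usual normalization for a bounded planar domain D with Dirichlet boundary conditions (the constants in (1.1) may differ by convention). Weyl's law and its Laplace-transformed form read
\[
N(\lambda)\ \sim\ \frac{\mathrm{area}(D)}{4\pi}\,\lambda \quad (\lambda\to\infty)
\qquad\Longleftrightarrow\qquad
Z(t)=\sum_{n\geq 1}e^{-\lambda_n t}\ \sim\ \frac{\mathrm{area}(D)}{4\pi t} \quad (t\to 0^+),
\]
and the Kac/McKean-Singer refinement for a domain with smooth boundary is
\[
Z(t)=\frac{\mathrm{area}(D)}{4\pi t}-\frac{\mathrm{length}(\partial D)}{8\sqrt{\pi t}}+\frac{\chi(D)}{6}+o(1),
\qquad t\to 0^+,
\]
while for a closed surface the boundary term is absent and the constant term is \(\frac{1}{12\pi}\int K\,dA=\frac{\chi}{6}\) by Gauss-Bonnet.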
Much of the literature assumes that is smooth and makes regular use of objects like curvature that are not well defined if is not . But it is known [Hör68] that if is only then Weyl’s law still holds, i.e. [AHT18, Example 4.9] and Weyl’s law has been established in certain less smooth settings as well.111We remark that there is a general theory of metric measure spaces with the so-called “Riemannian curvature-dimension” (RCD) condition, not necessarily confined to conformal changes of flat metrics. They include Ricci limit spaces [Stu06, LV09], weighted Riemannian manifolds [Gri06], Alexandrov spaces [Pet10], and many others. A lower bound on Ricci curvature is a key ingredient of many useful estimates in geometric analysis, so Sturm, Lott and Villani [Stu06, LV09] initiated the study of a class of metric measure spaces with a generalized lower-Ricci-bound condition. This has been an active research topic for the last decade; see [Gig18] for an overview. The classical Weyl’s law and the short time asymptotics for heat kernels on these non-smooth metric measure spaces still hold [ZZ17, AHT18]. Many aspects of the theory are stable under the pointed measured Gromov-Hausdorff topology; for example eigenvalues, heat kernels, and Green’s function converge uniformly [Din02, ZZ17], Brownian motions converge weakly [Suz19], etc. Therefore, Weyl’s law holds for any RCD space with a measure that can be reasonably approximated. On the other hand, the short time expansion used to define the functional determinant does not exist in this non-smooth setting, so it is not clear if the zeta regularization procedure is also stable. The problem is somewhat different when the regularity is below , since curvature is no longer well-defined everywhere and the relevant estimates no longer hold pointwise and instead hold in an average sense. The primary purpose of this note is to extend some of the basic results in this subject about the conformal anomaly (the Dirichlet energy of ) to that are less regular—e.g., only Lipschitz—and to show that the rates of convergence can be made to hold uniformly across certain families of values.
This paper is motivated in part by another work by the authors [APPS20] in which similar results are formulated in terms of the so-called Brownian loop measures which were introduced in [LW04] and are related to heat kernel traces on planar domains in e.g. [Dub09, Wan18] as well as [APPS20]. The results here are useful in the context of [APPS20] because they strengthen the sense in which one can say that “decorating” regularized Liouville quantum gravity surfaces by Brownian loop soups has the effect of changing their central charge. We will formulate the results in this paper solely in terms of Brownian loop measures and their generalizations. (The relationship to heat kernels is explained in [APPS20].)
In addition to the weaker regularity assumptions and the use of generalized loop measures, there are several smaller differences between our presentation and the classical approach in e.g. [MS+67]: we work in the conformal gauge throughout and do all our calculations in terms of , we index loops by their Euclidean center rather than by a typical point on the loop (which would be more similar to the heat kernel approach), and we establish the Polyakov-Alvarez formula in terms of Dirichlet energy directly rather than first establishing an equivalent curvature integral.
Although we encounter some complexity due to the non-smoothness of , we also take advantage of the extra simplicity of the two-dimensional setting, where the manifold is completely determined by a conformal factor.
Finally, we note that there is a great deal of additional work in this area, and we cannot begin to survey it all. For example, reference texts such as [BGV03, Gil18] explore heat kernel traces in greater generality: dimensions other than , operators other than the Laplacian, etc. Other works extend the behavior known for compact smooth manifolds to specific non-smooth manifolds such as those with conical singularities or boundary corners (which both correspond to logarithmic singularities in ) [Moo99, Kok13, She13, She15, AR18, Gre21, Kal21] or to non-compact surfaces [AAR13]. There are also many open problems in this subject, which spans probability, geometry, number theory, mathematical physics, and analysis. We present a few of these questions in Section 6. We hope that the techniques and perspectives presented here will facilitate progress on these problems and perhaps also find applications in other contexts where Weyl’s formula and the Polyakov-Alvarez term appear.
1.1 Main result
Let denote the set of zero-centered unit-length loops in . We define the Brownian loop measure in the plane by encoding each loop in the plane as an element of and formulating the Brownian loop measure as a measure on this product space.
Definition 1.1.
We express every loop in by the triple , where
• is the length of , where we define the length of a path as the length of its parametrizing interval.
• is the center of , i.e., the Euclidean center of mass of , which is equal to .
• is the zero-centered unit-length loop obtained from by translating the center to zero and rescaling time by and space by .
Definition 1.2.
We define the Brownian loop measure on as the measure on loops in given by
where denotes Lebesgue measure on , is Lebesgue measure on and is the probabilistic law of the random loop in obtained by first sampling a two-dimensional Brownian bridge on and then subtracting its mean.222Equivalently on the complex plane is the law of the complex-valued GFF indexed by the unit-length circle—with additive constant chosen to make the mean zero. In particular, is invariant under rotations of that circle.
The mass of the set of Brownian loops centered in with size greater than is given by
(1.2)
In particular, (1.2) implies that no matter how small is, most loops with length have length of order : half of them have , ninety-five percent have , and so forth. Also, the fact that (1.2) tends to as informally means that there are very few large loops centered in .
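As a sanity check on (1.2), assume the standard Lawler-Werner normalization in which the loop-length intensity is \(\frac{1}{2\pi t^{2}}\,dt\) per unit area of centers (this is consistent with Definition 1.2 up to conventions). Then
\[
\mu\bigl\{\text{loops centered in }D\text{ with length }t\geq\delta\bigr\}
=\int_{D}\int_{\delta}^{\infty}\frac{1}{2\pi t^{2}}\,dt\,dz
=\frac{\mathrm{Leb}(D)}{2\pi\delta},
\]
and, conditionally on \(t\geq\delta\), the length has density proportional to \(t^{-2}\), so
\[
\frac{\int_{2\delta}^{\infty}t^{-2}\,dt}{\int_{\delta}^{\infty}t^{-2}\,dt}=\frac12,
\qquad
\frac{\int_{\delta}^{20\delta}t^{-2}\,dt}{\int_{\delta}^{\infty}t^{-2}\,dt}=\frac{19}{20},
\]
which matches the “half” and “ninety-five percent” figures quoted above.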
Our main result describes how this mass of Brownian loops changes when we measure the length of loops with respect to a different metric on the plane, for a Lipschitz function supported in . We begin by defining the length of a Brownian loop in the metric , which we call its -length. If the loop were a smooth curve, we would compute its -length by integrating along the curve. Since Brownian loops have Hausdorff dimension , we instead define its -length by integrating along the loop, so that it has the same scale factor as area.
Definition 1.3.
Let be a smooth two-dimensional Riemannian manifold, and let be a function on . We define the -length of a loop as . We define the -volume form as the volume form associated to , and we write .
Except in Theorem 1.9 and Section 4, we always take in Definition 1.3 to be the Euclidean plane. We first describe the space of functions in the scope of this section.
Theorem 1.4.
Let be a bounded open subset of , and be the space of real-valued Lipschitz functions that vanish outside of . Suppose that is a collection of functions that (1) has uniformly bounded Lipschitz constants, and (2) is precompact in .
Then as , the -mass of loops centered in with -length at least , with respect to the Brownian loop measure, is given by
(1.3)
with the convergence uniform over .
Remark 1.5 (Uniform boundedness).
The conditions (1) and (2) imply that the functions in are uniformly bounded.
Remark 1.6 (General ).
Since we require for now that the Lipschitz constant is uniformly bounded and the domain is bounded, precompactness in is equivalent to precompactness in for any fixed . In particular, Theorem 1.4 could have been formulated using precompactness in instead of . Let us also remark that the space is equivalent to the space of for which is finite.
Remark 1.7 (Precompactness and uniform equicontinuity).
Recall the Fréchet-Kolmogorov theorem (e.g., see [BB11]): let be a bounded domain, and . A subset is precompact if and only if is bounded in and
(1.4)
where is extended to the function on whose value outside is zero.
We will use this equivalent characterization of precompactness in some of our proofs, usually referred to as the uniform equicontinuity condition in . In particular, we will apply this, in the case , to the set of the gradients of the functions in the set from the statement of Theorem 1.4.
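Concretely, after extending by zero to all of \(\mathbb{R}^{2}\) as above, the criterion reads as follows: a bounded family \(\mathcal{G}\subset L^{p}\) of functions supported in a fixed bounded set is precompact in \(L^{p}\) if and only if
\[
\lim_{|h|\to 0}\ \sup_{g\in\mathcal{G}}\ \bigl\|g(\cdot+h)-g\bigr\|_{L^{p}(\mathbb{R}^{2})}=0
\]
(the notation \(\mathcal{G}\) is ours); we will apply this with \(p=2\) and \(\mathcal{G}\) the set of gradients just mentioned.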
Remark 1.8 (The uniform equicontinuity condition is necessary).
As mentioned above, the precompactness hypothesis is equivalent to a type of uniform equicontinuity hypothesis. This hypothesis—or some similar condition on the functions —is necessary for the conclusion of Theorem 1.4 (or Theorem 1.12) to hold. Simply requiring all surfaces in to be with a universal bound on would not suffice. For example, in the Theorem 1.4 setting, could contain a sequence of functions that converge uniformly to zero with for each . We can construct such a sequence of functions by arranging for to oscillate between fixed opposite values, with the oscillation rate becoming faster as . (A simple example of such a family of functions on the torus is given by a constant multiple of ; we can define similarly on the planar domain by tapering the sine functions to zero near the boundary of .) We can also perturb the functions to arrange that for all . This set of functions does not satisfy (1.3): for any fixed , one can easily show that
(1.5)
even though for each fixed we have
(1.6)
If the limit in (1.6) were uniform in , we could choose a with for all , and (1.5) would not hold for that .333One might wonder whether it is enough to have in the Hilbert space defined by the inner product , i.e., the Sobolev space , with (say) zero boundary conditions. Such a can be nowhere differentiable [Ser61], so one would also need to modify the condition on . See Question 6.1.
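A minimal concrete instance of the construction just described, with our own choice of constants and stated on the torus \([0,2\pi]^{2}\) for simplicity: the functions
\[
\rho_{n}(x,y)=\frac{c}{n}\,\sin(nx)\sin(ny),\qquad n\in\mathbb{N},\ c>0,
\]
satisfy \(\|\rho_{n}\|_{\infty}=c/n\to 0\) and \(|\nabla\rho_{n}|\leq\sqrt{2}\,c\) everywhere, while
\[
\int_{[0,2\pi]^{2}}|\nabla\rho_{n}|^{2}\,dx\,dy
=c^{2}\int_{[0,2\pi]^{2}}\bigl(\cos^{2}(nx)\sin^{2}(ny)+\sin^{2}(nx)\cos^{2}(ny)\bigr)\,dx\,dy
=2\pi^{2}c^{2}
\]
for every \(n\). So the Dirichlet energies stay bounded away from zero even though \(\rho_{n}\to 0\) uniformly, and the gradients oscillate at rate \(n\) and hence are not uniformly equicontinuous in \(L^{2}\).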
We extend this result to general surfaces. The statement of the theorem involves the notion of the zeta-regularized determinant of the Laplacian, as defined, e.g., in [Alv83, Sar87]. (However, it is not necessary to understand the definition of the zeta-regularized determinant to follow the proof of Theorem 1.9.)
Theorem 1.9.
Let be a fixed compact smooth two-dimensional Riemannian manifold, and let denote the Brownian loop measure on . Let be the Gaussian curvature on , let be the Laplacian associated to , and let denote its zeta-regularized determinant. Let be a family of Lipschitz functions that (1) has uniformly bounded Lipschitz constants, and (2) is precompact in .
Then the -mass of loops with -length between and is given by
(1.7)
with the convergence as and uniform over , where is the Euler-Mascheroni constant.
For simplicity, we have addressed just the compact manifold case, but one could prove a similar result for manifolds with boundary; see Question 6.3, where we give a heuristic justification in the preceding paragraph. The resulting expression would include a boundary term of order .
Observe that, for smooth , the expression in the second line of (1.7) above is equal to , where is the Laplacian associated to . The expression (1.7) for is known as the Polyakov-Alvarez formula; see, e.g., [APPS20, Proposition 6.9]. Thus, for smooth , Theorem 1.9 reduces to a relation between the Brownian loop measure and the zeta-regularized Laplacian determinant, which was shown in [APPS20, Theorem 1.3].
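For reference, here is the Polyakov-Alvarez formula for a closed surface in the convention where the conformal change is written \(g_{1}=e^{2\varphi}g_{0}\) (so \(\varphi=\rho/2\) if, as in this paper, the conformal factor multiplying the flat metric is written \(e^{\rho}\)); see [OPS88, Alv83]. Writing \(\det{}'\) for the zeta-regularized determinant with the zero mode removed, \(K_{0},\nabla_{0},dA_{0}\) for the curvature, gradient and area form of \(g_{0}\), and \(A_{0},A_{1}\) for the total areas,
\[
\log\det{}'\Delta_{g_{1}}
=\log\det{}'\Delta_{g_{0}}
-\frac{1}{12\pi}\int_{M}\bigl(|\nabla_{0}\varphi|^{2}+2K_{0}\varphi\bigr)\,dA_{0}
+\log\frac{A_{1}}{A_{0}} .
\]
In terms of \(\rho=2\varphi\) the Dirichlet-energy term becomes \(-\frac{1}{48\pi}\int_{M}|\nabla_{0}\rho|^{2}\,dA_{0}\); up to sign and normalization conventions, this is the constant-order Dirichlet energy term discussed throughout this paper.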
In fact, we prove a slight generalization of Theorem 1.4 in which we consider a more general class of loop measures.
Definition 1.10.
The expected occupation measure of a random variable is the function such that, for each measurable set , the set has expected Lebesgue measure .
Definition 1.11.
We define a generalized loop measure as a measure on loops in given by
where denotes Lebesgue measure on , is Lebesgue measure on and is an arbitrary rotationally invariant measure on loops in whose expected occupation measure is a Schwartz distribution (but not necessarily Gaussian as for the Brownian loop measure). We denote by the second central moment of the first—or equivalently, second—coordinate of a random variable whose density is this expected occupation measure.
Definition 1.11 is the same as Definition 1.2, except that we no longer insist that be the Brownian bridge (and we have removed the factor as it is less natural for general ). The measure can be supported on the space of circular loops, square-shaped loops, or outer boundaries of Brownian loops, etc. The from Definition 1.11 need not have the same conformal symmetries as the Brownian loop measure. Even if is supported on smooth loops (rather than Brownian loops) we parameterize the space of loops as in Definition 1.1, so that represents the loop that traces over time duration . In the special case of the Brownian loop measure, . (See Proposition 3.9.) The Schwartz distribution assumption in Definition 1.11 does not seem necessary for Theorem 1.12 to hold, but we have included it to simplify the calculations in the proof of Proposition 3.1 below.
1.2 Proof outline
In this section, we let be a fixed collection of functions satisfying the hypotheses of Theorem 1.12. To prove Theorem 1.12, we compare to a simpler notion of the length of a loop with respect to , in which we approximate along the loop by its value at the center of the loop.
Definition 1.13.
We define the center -length of a loop in centered at a point as .
We observe that the cutoff corresponds to a unique value of :
Proposition 1.14.
Let , and , and set . The loop satisfies iff , and iff .
Proof.
The result follows trivially from the definition of center -length. ∎
Proposition 1.14 immediately implies the following trivial analogue of Theorem 1.12 for center -length.
Proposition 1.15.
The mass of loops centered in with with respect to the Brownian loop measure is given by
(1.9)
Proof.
The result is an immediate consequence of Proposition 1.14. ∎
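For concreteness, here is the computation behind (1.9), in the same normalization as the sketch after (1.2) and with our reading of Definition 1.13 (the center \(\rho\)-length of a loop of time-length \(t\) centered at \(z\) is \(t\,e^{\rho(z)}\), so the threshold of Proposition 1.14 is \(t_{0}=\delta e^{-\rho(z)}\)):
\[
\int_{D}\int_{\delta e^{-\rho(z)}}^{\infty}\frac{1}{2\pi t^{2}}\,dt\,dz
=\int_{D}\frac{e^{\rho(z)}}{2\pi\delta}\,dz ,
\]
i.e., the \(\rho\)-area of \(D\) divided by \(2\pi\delta\), which one expects to be the leading term in (1.3) as well.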
We can therefore restate Theorem 1.12 as the assertion that if we change our notion of loop length from to , the -mass of loops with length increases by , up to an error that is as uniformly in .
We divide the proof of Theorem 1.12 into two stages. First, in Section 2 we show that, up to a uniform error, replacing with has the effect of subtracting times the average discrepancy between the value of along a Brownian loop and the value of at its center.
Lemma 1.16.
Consider a loop sampled from conditioned to have its center in and length . Let denote its center, and let denote a point on the loop sampled uniformly with respect to length. Then the -mass of loops with center in and is equal to the -mass of loops with center in and , minus
(1.10)
with the error tending to 0 as at a rate that is uniform in . (In (1.10) the expectation is w.r.t. the overall law of and as described above.)
We then complete the proof of Theorem 1.12 in Section 3 by showing that the quantity (1.10) equals up to a uniform error.
Lemma 1.17.
The quantity (1.10) equals plus an error term that converges to as uniformly in .
2 Loop mass difference vs. expected length discrepancy
In this section, we prove Lemma 1.16 in three steps.
Step 1: Establishing a length threshold corresponding to -length . We saw in Proposition 1.14 that, with , we have if and only if , and if and only if . To prove Lemma 1.16, we establish a similar result for -length. We will show in Proposition 2.4 below that, for and with the diameter of sufficiently small, there exists a threshold such that if and only if , and if and only if . We note that, unlike , the threshold may depend on as well as and .
Step 2: Relating the difference in the masses of loops to the quantity . We saw in Proposition 1.15 that the -mass of loops with center -length can be expressed as an integral of . In Proposition 2.5 below, we similarly express the -mass of loops with -length as an integral of , plus a uniform error. This reduces the task of proving Lemma 1.16 to analyzing the difference of integrands .
Step 3: Expressing the difference in terms of a difference in lengths. We first express the difference in terms of a difference between the -length and center -length of a loop (Proposition 2.7). We then apply this result in Proposition 2.8 to derive a similar expression for , which immediately yields Lemma 1.16.
Having described the main steps of the proof of Lemma 1.16, we now proceed with Step 1—proving the existence of the threshold . As we observed in Proposition 1.14, the existence of the analogous threshold for center -length is trivial, since for fixed and , the function is linear with slope . This is not the case for -length, so we proceed by showing its derivative as a function of is positive on a sufficiently large interval. We first observe that, since the functions are uniformly bounded above and below, we can crudely bound the function between two linear functions uniformly in and .
Proposition 2.1.
There exists a constant such that, for each , and , the loop satisfies
(2.1)
In other words, and length agree up to a universal constant factor.
Proof.
The lemma follows immediately from the fact that is bounded from above and below by a constant uniform in , as noted in Remark 1.5. ∎
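Explicitly, writing \(t\) for the time-length of the loop \(\eta\) and \(M:=\sup_{\rho\in\mathcal{F}}\|\rho\|_{\infty}<\infty\) for the uniform bound from Remark 1.5 (notation ours), Definition 1.3 gives
\[
e^{-M}\,t\ \leq\ \ell^{\rho}(\eta)=\int_{0}^{t}e^{\rho(\eta(s))}\,ds\ \leq\ e^{M}\,t ,
\]
so (2.1) holds with, for instance, the constant \(e^{M}\).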
In addition, the collection of functions satisfies the same conditions as , possibly with different bounds.
Proposition 2.2.
The functions for also have uniformly bounded Lipschitz constants and are precompact in .
Proof.
Since the functions are uniformly bounded, their images are contained in some finite closed interval. The exponential function is Lipschitz on any finite closed interval, so the composition is also Lipschitz. Other conditions are straightforward to check. ∎
We now apply the uniform boundedness of the Lipschitz constants of for to show that, when is not too large, the derivative of the function is uniformly close to that of the linear function .
Proposition 2.3.
For any , there exists and a family of sets with such that the following is true. For each , , and , the derivative
exists and differs from
by at most for almost every with . Furthermore, there exists a constant such that the previous statement is true for arbitrary and when we choose .
Proof.
By Rademacher’s theorem, any Lipschitz function is differentiable almost everywhere. In particular, by Proposition 2.2, there exists some constant that does not depend on such that for almost every for all , where is a weak gradient of . In addition, as is precompact in , it follows from (1.4) that there exists some such that the set
(2.2)
satisfies for each .
For fixed , , and , we can write as , where . We express a weak -derivative of in terms of as
(2.3)
Since
(2.4)
(2.5)
we can bound from above by
for almost every , using (2.4). On the other hand, if , we use (2.2) and (2.5) to bound from above by
for almost every with .
Note that (2.3) gives
and we also have
(2.6)
Combining these two inequalities with the estimates for in two different cases gives the desired result. ∎
We use Proposition 2.3 to show that, just as for center -length, there is a unique value of corresponding to .
Proposition 2.4.
We can choose such that the following holds. For each and , if is a loop with less than , then there is a unique positive such that iff and iff . (If is , then we arbitrarily set , so that is defined for every loop .)
Proof.
Let be the constant in Proposition 2.1. If , then by Proposition 2.1. Thus, it suffices to show that, for sufficiently small, the function is strictly increasing in when . We prove this fact by analyzing using Proposition 2.3.
Let be the constant in Proposition 2.3 so that for all , , and almost every ,
(2.7)
We choose sufficiently small less than such that, for almost every , the bound in (2.7) is less than . This means is strictly positive for almost every . Since is a continuous function in , we conclude that is strictly increasing on . ∎
When and are fixed, gives the Euclidean value of that corresponds to a value of . We obtain the following analogue of Proposition 1.15 with in place of .
Proposition 2.5.
The -mass of the set of loops with center in and is equal to
(2.8)
plus an error term that tends to as at a rate that is uniform in .
The reason the mass of loops does not exactly equal (2.8) is that is only defined by an arbitrary convention when , with as in Proposition 2.4. Proposition 2.5 asserts that the resulting error is negligible in the limit.
To prove Proposition 2.5, we use the following bound on the -measure of loops with large diameter.
Proposition 2.6.
The -measure of loops with diameter greater than tends to as faster than any negative power of .
Proof.
Proof of Proposition 2.5.
By definition of , the integral in (2.8) gives the -mass of loops with and either
• and , or
• and .
Thus, the error term—i.e., the difference between (2.8) and the mass of loops we consider in the proposition statement—is the -mass of loops with and for which either
• and , or
• and .
To bound this error, we recall from Proposition 2.1 that is when and when . Thus, the error is at most the -mass of loops with center in , and . This mass is equal to the Lebesgue measure of times times the -measure of the set of loops with . From Proposition 2.6, we deduce that the error tends to as at a rate that is uniform in . ∎
Proposition 2.5 reduces the problem of proving Lemma 1.16 to the problem of analyzing the difference of integrands and in (2.8) and (1.9). We first derive an expression for in terms of the difference in the -length and center -length of the loop .
Proposition 2.7.
Let be fixed. Then, for all with diameter at most , as ,
(2.9)
with the error converging uniformly in the choice of , with diameter at most , and where is defined as in Proposition 2.3. If we remove the restriction on , then the error is uniformly in , , and .
[Figure 1: graphs of the two length functions of referenced in the proof of Proposition 2.7, with the red, blue, green, and purple points marked.]
Proof.
Let be as in Proposition 2.4. Observe that if we are restricting to with diameter at most some constant , we automatically have for uniformly small . If we do not impose the restriction , then we could have for arbitrarily small . However, the proposition statement easily holds for this range of and : by Proposition 2.1, the error in the proposition statement must be bounded by a uniform constant times , which is when because . Thus, we may assume for the rest of the proof that .
Throughout the proof, we refer to the graphs of the functions and for fixed , , and in Figure 1. See the caption of the figure for the definitions of the red, blue and green points. By Proposition 2.4, is the distance between the red and blue points, and the quantity on the right-hand side of (2.9) is the distance between the blue and purple points. We can express these two distances in terms of the slopes of the two curves:
• The distance between the blue and purple points is equal to the length of the green segment divided by the slope of the blue line (i.e., ).
• The distance between the blue and red points is equal to the length of the green segment divided by the average derivative of the red curve between the red and green points.
The error that we need to bound is the difference between these two distances—i.e., the distance between the red and purple points. By Proposition 2.3, the derivative of the red curve between the red and green points differs from the derivative of the blue line by at most , where is a constant bounded uniformly in and that can be made arbitrarily small for sufficiently small. By Proposition 2.1, this implies that the inverses of these two derivatives differ by a uniform constant multiplied by . Moreover, the length of the green segment is , and by (2.6),
(2.10)
with as above. Thus, the error term—i.e., the distance between the red and purple points—is bounded by a uniform constant times . If we do not restrict to , the error is uniformly in , , and . If we restrict to , then as at a uniform rate, so the error is uniformly in , , and with . ∎
Proposition 2.7 immediately yields the following expression for .
Proposition 2.8.
Let be fixed. Then, for all with diameter at most , as ,
(2.11)
with the error converging uniformly in the choice of , with diameter at most , and where is defined as in Proposition 2.3. If we remove the restriction on , then the error is + uniformly in , , and .
Proof.
Throughout the following proof, each and error converges uniformly as in the choice of , , and .
By the Taylor expansion of at , we have
for some between and . Proposition 2.7 gives
We now handle the term. From (2.1), we have ; therefore, . Next, by (2.10) and Proposition 2.7, , where is a constant bounded uniformly in that can be made arbitrarily small for sufficiently small. Hence, . The latter is with the restriction on , and otherwise. ∎
We also record a similar estimate for bookkeeping purposes, which might be useful for Question 6.5.
Proposition 2.9.
Under the same assumptions as in Proposition 2.8, but without the restriction on , we have uniformly in , , and , where is a constant bounded uniformly in that can be made arbitrarily small for sufficiently small.
Proof.
Proof of Lemma 1.16.
Let and be defined as in Proposition 2.3 so that . By Propositions 1.15 and 2.5, integrating over and yields the -mass of loops centered in with minus the -mass of loops centered in with , up to a uniform error. By Proposition 2.8, for each fixed and , the integrand is equal to
(2.12)
plus an error term that has the following limiting behavior as :
(a) The error is uniformly in and .
(b) For each fixed , the error is uniformly in and with .
We now integrate over and and take the limit. The integral of (2.12) is exactly equal to the term in (1.10), so we just need to show that the integral of the error term in the expression for tends to as uniformly in .
To analyze the integral of this error term, we partition the domain of integration. For any function of , we can partition into two subdomains: the set of pairs with , and the set of with . The bound (a) implies that the integral of the error over with equals a uniform constant times , which tends to zero as at a rate uniform in (small) and . Moreover, (b) implies that if is fixed, the integral of the error over with tends to as uniformly in . Hence, if we choose a function that tends to infinity sufficiently slowly as , the integrals of the error term over both subdomains of tend to as uniformly in .
Finally, the integral of over and is bounded by a uniform constant times because of (2.1), which finishes the proof. ∎
3 Expected length discrepancy vs. Dirichlet energy
We now complete the proof of Theorem 1.12 by proving Lemma 1.17. The main step in the proof of this lemma is the following proposition.
Proposition 3.1.
We prove Proposition 3.1 by a Fourier analysis argument. Throughout this section, we define the Fourier transform of a function as (using the convention of characteristic functions). To prove Proposition 3.1, we express the conditional density of given in terms of the expected occupation measure of a loop sampled from with given length and center. In the following proposition, we introduce some notation for this expected occupation measure and record some of its elementary properties.
Proposition 3.2.
For and , let be the expected occupation measure of a loop sampled from and conditioned to have length and center zero. For fixed , the measure is radially symmetric, Schwartz, and a probability measure. Each coordinate has second central moment . Moreover, satisfies the scaling relation for any . Finally, we have .
Proof.
Since is the law of with sampled from , the loop has law . It follows from Definition 1.11 that is radially symmetric, Schwartz, and a probability measure with the desired second central moments. The scaling relation also follows immediately. Finally, the second central moments imply that the measures converge weakly as to a point mass at , so their characteristic functions converge to . ∎
We first reduce the task of proving Proposition 3.1 to analyzing the Dirichlet energy of the inverse Laplacian of an appropriately chosen measure.
Proposition 3.3.
Suppose that is a function satisfying
(3.1)
where is defined in Proposition 3.2 and is the point mass at . Then
Proof.
The conditional density of given is . Therefore,
by the definition of . Integrating by parts (or applying Green’s identity) gives , and
by applying the Cauchy-Schwarz inequality. ∎
Therefore, for the proof it is enough to show that uniformly. To express a function satisfying (3.1) as a Fourier integral444Alternatively, such a function can be written in terms of a convolution with the Green's function., we first define a family of auxiliary functions, which we label , indexed by .
Proposition 3.4.
With the notation , the function for defined as the inverse Fourier transform of is a radially symmetric Schwartz function. In addition, and is uniformly bounded for all and .
Proof.
Fix . Recall that is Schwartz in , so we use the notation in this proof. The scaling relation and the radial symmetry of (Proposition 3.2) imply that . Hence, it is enough to show that , as a function of , is a Schwartz function. This will also imply that is uniformly bounded by , and , by Fourier inversion.
First we repeatedly differentiate both sides of the relation using the chain rule, and observe exists as a constant multiple of . Also, the first differentiation implies
Let be any polynomial in , and be any nonnegative integer. On one hand, for a fixed , we have
because and its derivatives with respect to are rapidly decreasing. On the other hand, note that is continuous on and bounded at 0. This proves that is Schwartz. ∎
Proposition 3.5.
Proof.
By the Fourier transform, we have
as we have defined . Therefore, the Fourier inversion of gives the desired result. Furthermore, the first identity shows that from Proposition 3.4. By the Riemann-Lebesgue lemma, we conclude that , and thus it follows from the dominated convergence theorem that . ∎
Proposition 3.6.
Proof.
Proposition 3.7.
For each , , and , let
(3.5)
where is defined in (3.3). Then and both converge to 0 in as uniformly over .
Proof.
Since is rapidly decreasing, given any , there exists such that
(3.6)
for all , , and since is uniformly bounded. By radial symmetry, note that for all and because is an odd function. Now we divide the domain of the integral (3.5) into two regions depending on the sign of when we write , so that the integral on each domain becomes
Since is Lipschitz (being Schwartz), the uniform equicontinuity of in (in the sense of Remark 1.7) implies that , as a function of , is also uniformly equicontinuous in . Therefore, denoting , there exists such that
for all whenever . Combined with (3.6), we conclude that converges to 0 in uniformly over as .
A similar argument applies to . In particular, as a function of and , note that is Lipschitz. Hence the uniform equicontinuity of in (in the sense of Remark 1.7) implies the uniform equicontinuity of in over as a function of . Since is rapidly decreasing, from integration by parts,
again because is odd. Repeating the previous argument, we conclude that converges to 0 in uniformly over as . ∎
Proposition 3.8.
For each , define
(3.7)
Then converges to 0 in uniformly over as .
Proof.
Proof of Proposition 3.1.
Proof of Lemma 1.17.
Let , so that
(3.8)
By Proposition 3.1, the term is with the convergence uniform in . We now analyze the second term in (3.8). Since the gradients for are uniformly equicontinuous in (in the sense of Remark 1.7), we can express as almost surely, where is an error term uniformly in expectation, as . Conditional on , each coordinate of has mean and variance . Therefore, with ,
Therefore,
with the error uniform in and . We can similarly show that is uniformly in and . Multiplying both sides by and taking the expectation, we deduce that the second term in (3.8) is equal to plus an error that converges uniformly in . ∎
Proof of Theorem 1.12.
Finally, to deduce Theorem 1.4 from Theorem 1.12, we explicitly characterize the expected occupation measure.
Proposition 3.9.
For the Brownian loop measure, the expected occupation measure of a loop sampled from is the density of a complex Gaussian with variance .
Proof.
The law of a loop sampled from is that of a Brownian bridge indexed by the circle minus its mean. The value of a Brownian bridge indexed by the circle at any given time minus the mean value is a complex mean-zero Gaussian random variable of variance ; this calculation appears, for example, in [She07].555One way to see it is to consider a Gaussian free field indexed by the circle and observe that the Dirichlet energy on the circle parameterized by of the function is given by , so . But for a function on the circle we have from integration by parts that . Then using the above and the definition of the GFF we have . By rotational symmetry, this holds if is replaced by any other number in . The number is also derived in [She07] by Fourier series, and we remark that comparing these two derivations is one way to prove . ∎
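For completeness, here is a direct computation of the constant using the covariance \(\operatorname{Cov}(b_{s},b_{t})=s\wedge t-st\) of a standard one-dimensional Brownian bridge \(b\) on \([0,1]\) (each coordinate is treated separately; the paper's time/variance normalization may introduce an additional constant factor). With \(\bar b=\int_{0}^{1}b_{s}\,ds\),
\[
\operatorname{Var}(b_{t})=t(1-t),\qquad
\operatorname{Cov}(b_{t},\bar b)=\frac{t(1-t)}{2},\qquad
\operatorname{Var}(\bar b)=\frac13-\frac14=\frac1{12},
\]
so that
\[
\operatorname{Var}\bigl(b_{t}-\bar b\bigr)
=t(1-t)-2\cdot\frac{t(1-t)}{2}+\frac1{12}
=\frac1{12}
\]
for every \(t\), which is consistent with the rotational invariance noted in Definition 1.2.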
4 Brownian loops on surfaces
In this section, we prove Theorem 1.9. Throughout the section, we let be a fixed compact smooth two-dimensional Riemannian manifold, and we let denote the Brownian loop measure on . We also fix as a precompact set of Lipschitz functions in with uniformly bounded Lipschitz constants.
To prove Theorem 1.9, we analyze the mass of “large” loops and the mass of “small” loops with respect to -length separately. We begin by analyzing the mass of large loops by proving a central limit theorem for -length along large loops:
Proposition 4.1.
We can choose a constant such that, for every and and every loop sampled from ,
(4.1)
with probability as , with the rate uniform in .
To prove Proposition 4.1, we apply the Markov central limit theorem to Brownian motion on , and then compare Brownian motion to a loop sampled from by using the following Radon-Nikodym estimate.
Proposition 4.2.
Let be fixed, and let be a loop sampled from . The Radon-Nikodym derivative of the law of with respect to Brownian motion restricted to is given by for some .
Proof.
By Proposition 4.3, for some , with the error uniform in . This implies the derivative bound. ∎
To apply the Markov central limit theorem to Brownian motion on , we need the following convergence result for the Brownian transition kernel as .
Proposition 4.3.
We have
for some constant .
Proof.
Let denote the eigenvalues of , and let be a corresponding Hilbert basis of eigenfunctions. It follows from the heat equation that can be written as
It is known that for some constant depending only on [Don01]. Hence, is bounded from above by a function of that decays exponentially as . ∎
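The expansion used in the proof is the standard spectral representation of the heat kernel: with \(0=\lambda_{0}<\lambda_{1}\leq\lambda_{2}\leq\cdots\) the eigenvalues and \((\phi_{n})\) an orthonormal basis of eigenfunctions, so that \(\phi_{0}\equiv\mathrm{Vol}(M)^{-1/2}\) (and up to the choice of time normalization for the Brownian motion, i.e., whether the generator is the Laplacian or half of it),
\[
p_{t}(x,y)=\sum_{n\geq 0}e^{-\lambda_{n}t}\phi_{n}(x)\phi_{n}(y)
=\frac{1}{\mathrm{Vol}(M)}+\sum_{n\geq 1}e^{-\lambda_{n}t}\phi_{n}(x)\phi_{n}(y),
\]
so that \(\sup_{x,y}\bigl|p_{t}(x,y)-\mathrm{Vol}(M)^{-1}\bigr|\leq\sum_{n\geq 1}e^{-\lambda_{n}t}\|\phi_{n}\|_{\infty}^{2}\); the polynomial-in-\(\lambda_{n}\) sup-norm bounds of [Don01], combined with Weyl's law, then give the exponential decay as \(t\to\infty\).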
Proof of Proposition 4.1.
(In the proof that follows, the rates of convergence of the errors are uniform in the choice of .) Let be a Brownian motion on started at a point sampled from the volume measure associated to . Let , and let . For , let be the expected value of , where is a Brownian motion on started at . By Proposition 4.3, the conditional expectation of given is for some . Hence, for some . Thus, we may apply the Markov chain central limit theorem to deduce that
converges in the limit to a centered Gaussian distribution whose variance is bounded uniformly in the choice of . Note that we have . Therefore, for each fixed ,
with probability .
Combining this with Proposition 4.2, we deduce that, if is an integer with , then
with probability as . Since is bounded from above by times a constant uniform in , this implies the proposition. ∎
We now apply Proposition 4.1 to analyze the mass of large loops.
Proposition 4.4.
For each , the symmetric difference between
• the set of loops with length and length , and
• the set of loops with length and -length ,
has -mass at most , with the rate of convergence uniform in .
Proof.
(In the proof that follows, the rates of convergence of the errors are uniform in the choice of .) Let denote the symmetric difference between the two sets. By Proposition 2.1, each loop in has length between and , with independent of the choice of . By Proposition 4.1, the subset of loops not satisfying (4.1) has -mass at most . Moreover, with , we can bound the -mass of the set of loops in by
Next, we analyze the mass of small loops.
Proposition 4.5.
For each , the difference in the masses of the sets
under the Brownian loop measure in is equal to the difference in the expressions
(4.2)
plus a term that tends to zero as at a rate that is uniform in the choice of .
To prove Proposition 4.5, we will apply the following pair of propositions.
Proposition 4.6.
Let be a smooth two-dimensional Riemannian manifold. Let , and let be a collection of loops in that intersects a closed set disjoint from . Then the following holds for all sufficiently small. Let be uniformly bounded functions that agree outside . Then, for each , we have iff .
Proof.
This is straightforward, since is uniformly bounded and the length of outside is bounded below. ∎
Proposition 4.7.
Let be a compactly supported function on a region with finite Dirichlet energy, and let be a smooth compactly supported function on , and let denote the gradient, volume form and Gaussian curvature associated to . Then
Proof.
It follows from
Proof of Proposition 4.5.
In the proof that follows, we consider -lengths and -volume forms (as defined in Definition 1.3) for functions on both and a region of the Euclidean plane. To avoid confusion between the two settings, we use the notation and in the setting, and and in the Euclidean setting.
Let be open sets in with and , such that we can find a homeomorphism . By a partition of unity argument, it suffices to prove the proposition under the assumption that agree on . So, we assume that this is the case. By Proposition 4.6, the difference in the masses of the sets
(4.3)
under the Brownian loop measure in is equal to the difference in masses with (4.3) replaced by
(4.4)
Since, by Proposition 2.1, automatically implies for sufficiently small (in a manner that does not depend on the choice of ), we can replace (4.4) by the sets
(4.5)
with the condition in (4.4) removed. Let be the pushforward of via , and let be the pushforward of the metric via . Also, set . By conformal invariance of the Brownian loop measure [APPS20, Lemma 3.3], the masses of the sets (4.5) under the Brownian loop measure in equals the masses of the sets
(4.6)
under the Brownian loop measure in the Euclidean plane. Now, let be a smooth function on that equals on and zero outside a compact subset of . Since on , (4.6) is equal to the set
(4.7)
By Proposition 4.6, the difference in these loop masses is unchanged if we replace the sets (4.7) by the sets
By Theorem 1.4, this difference in loop masses is given by the difference in the quantities
(4.8)
By Proposition 4.7 with and , together with the fact that outside and in , we can write the difference in the quantities (4.8) as the difference in the quantities
(4.9)
where are the gradient and Gauss curvature associated to . Pulling back via , we can rewrite the expressions (4.9) as
We complete the proof by noting that, since outside , we can replace by in (4.2). ∎
Proof of Theorem 1.9.
By conformal invariance of the Brownian loop measure [APPS20, Lemma 3.3], the statement of the theorem is equivalent to the assertion that (1.7) holds up to scaling. The result holds for by [APPS20, Theorem 1.3].666We remark that the in the statement of [APPS20, Theorem 1.3] represents the quadratic variation of Brownian loops, which is two times the time interval length we use in this paper. Thus, for our application, we need to substitute there into . We deduce the result for general by applying Propositions 4.4 and 4.5. ∎
5 Constructing square subdivision regularizations
Now what happens if is defined from a finite square subdivision as in [APPS20, Section 6], so that it has constant Laplacians on each square in a grid? These functions can be shown to be , but along the edges of the squares they fail to be . In this section, we explain enough about these functions to make it clear that they fit into the framework of this paper, at least if one restricts attention to those for which the square averages are restricted to a compact set.
One can construct functions with piecewise-constant Laplacian on squares somewhat explicitly. Consider the function on restricted to the quadrant . Note that on the boundary of the quadrant, is purely imaginary, equal to on the upper boundary ray and on the lower, while is on the upper boundary and on the lower, so that on both rays. Thus has constant Laplacian and equals zero on the boundary . We can extend the definition of to the other three quadrants (, , and ) by imposing the relation . In a small neighborhood of the origin, the defined this way is negative on and positive on . The complex derivatives of are first , second , third , etc. One can deduce from this that is differentiable as a real-valued function (with derivative at ) but that its second derivatives blow up slowly (logarithmically) near zero and also have discontinuities along the quadrant boundaries.
The function (where is a fixed eighth root of unity) is thus a function that has piecewise constant Laplacian on the four standard quadrants, while being equal to zero on the boundaries of these quadrants.
By taking linear combinations of and the four functions (with ) we can get a differentiable function whose Laplacian matches any function that is constant on each of the four standard quadrants — in particular a function whose Laplacian is on one quadrant and on the other three. By taking differences of translates of this function, we can obtain a function whose Laplacian is on a semi-strip or a rectangle (and elsewhere). Linear combinations of these allow us to describe a function whose Laplacian is any given function that is constant on the squares of the grid. Any other function with the same Laplacian has to then differ from by a harmonic function. For example, by subtracting the harmonic extension of the values of on one obtains a function with the desired Laplacian whose boundary values on are zero.
6 Open questions
In this section, we discuss some open questions. First, as already mentioned in Footnote 3, we expect that some of our main results can be extended to . In addition to the fact that the Dirichlet energy is well-defined for such functions, here is another reason that the arguments from the Lipschitz case might carry over to this setting. Suppose that . Then the function
is a Lipschitz function on with . Therefore, the celebrated theorem of Stampacchia [EG18, §4.2.2] asserts that implies is , and the chain rule for weak derivatives implies that
holds for almost every . This already implies that many parts of our proofs carry over to the general case. As an example, Lemma 1.17 follows almost immediately with the condition of precompactness in , or equivalently the uniform equicontinuity in , in the sense of Remark 1.7. However, there are a few technical issues where we need to improve our estimates. For example, our proof of Lemma 1.16 is based on a pointwise estimate of , which would have to be replaced by an estimate, and such an estimate does not immediately follow from our current arguments.
Question 6.1.
Prove that Theorem 1.4 holds for a larger class of functions. For example, one might consider functions in and take to be any precompact subset of , possibly with some additional conditions.
Next, we note that Lemma 1.17 is proven for Schwartz functions , which is not expected to be optimal in any sense other than making the proofs simple. In fact, we believe the result holds for a much more general class of functions. For instance, we apply the lemma to the expected occupation measure, but it should also be true for a single occupation measure in some sense.
Question 6.2.
Prove that Lemma 1.17 holds for a more general class of functions. For example, prove a similar result for a random function describing the occupation measure of a Brownian loop.
Third, we may consider the case when the manifold has a boundary, say when . In principle, one could formulate the boundary problem in a style similar to that presented above, and try to weaken the required regularity along the boundary as well. To start, imagine the boundary line is the horizontal real axis. Let be the measure on unit length loops that hit the real axis and are centered at a point on the positive imaginary axis.
This measure can be obtained from by starting with , then weighting by the gap between the minimum and maximum imaginary values obtained by the loop, and then shifting the loop vertically to a uniformly random height (within this range). The expected maximal height of a Brownian bridge is (as can be proved using the reflection principle). The expected minimal height is thus minus that, and the expected difference , which means that the expected vertical gap between the maximum height and the central height is again .
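For the standard one-dimensional Brownian bridge \(b\) on \([0,1]\) (the constant below may change under the paper's time/variance normalization), the reflection principle gives \(\mathbb{P}(\max_{0\leq s\leq 1}b_{s}\geq a)=e^{-2a^{2}}\) for \(a\geq 0\), hence
\[
\mathbb{E}\Bigl[\max_{0\leq s\leq 1}b_{s}\Bigr]
=\int_{0}^{\infty}e^{-2a^{2}}\,da
=\frac12\sqrt{\frac{\pi}{2}}
=\sqrt{\frac{\pi}{8}} ,
\]
and by symmetry the expected minimum is \(-\sqrt{\pi/8}\), so the expected gap between the maximum and the minimum is \(2\sqrt{\pi/8}=\sqrt{\pi/2}\).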
So formally, instead of being a probability measure, is a measure with weight . For real values , write . Note that the set of loops of length greater than is given by
We can imagine is defined in a neighborhood of a real line segment, and then try to measure the mass of loops (of size greater than ) that hit the boundary line itself. The relevant quantities are the gradient in the parallel direction and the gradient in the normal direction (with the latter affecting whether the real axis is hit by more loops centered above or below).
Question 6.3.
Prove that Theorem 1.9 also holds for manifolds with boundaries. In other words, generalize the boundary case of [APPS20, Theorem 1.3]:777Similar to the case without boundary, we plug instead of in the statement of [APPS20, Theorem 1.3], where there represents the quadratic variation of Brownian loops, which is two times the time interval length we use in this paper.
Conjecture 6.4.
Let be a fixed compact smooth two-dimensional Riemannian manifold with smooth boundary, and let denote the Brownian loop measure on . Let be the Gaussian curvature on , let be the Laplacian associated to , and let denote its zeta-regularized determinant. Let be a family of Lipschitz functions that (1) has uniformly bounded Lipschitz constants, and (2) is precompact in .
The -mass of loops with -length greater than is given by
with the convergence as uniform over , where is the Euler-Mascheroni constant.
Finally, one naive approach to generalizing [APPS20, Proposition 6.9] to lower-regularity settings is to follow the zeta-regularization procedure verbatim, hoping each step works for the generalized settings in a similar way. The first obstacle in this direction is the lack of a short-time expansion for the trace of the heat kernel. For our application, what we need is that if is a two-dimensional non-smooth manifold without boundary, and is the associated Laplacian defined in terms of Brownian loop mass, then
Question 6.5.
Prove that the short-time expansion for the heat kernel trace holds for lower-regularity metrics.
References
- [AAR13] P. Albin, C. L. Aldana, and F. Rochon. Ricci flow and the determinant of the Laplacian on non-compact surfaces. Communications in Partial Differential Equations, 38(4):711–749, 2013.
- [AHT18] L. Ambrosio, S. Honda, and D. Tewodrose. Short-time behavior of the heat kernel and Weyl's law on spaces. Annals of Global Analysis and Geometry, 53(1):97–119, 2018.
- [Alv83] O. Alvarez. Theory of strings with boundaries: Fluctuations, topology and quantum geometry. Nuclear Physics B, 216(1):125 – 184, 1983.
- [APPS20] M. Ang, M. Park, J. Pfeffer, and S. Sheffield. Brownian loops and the central charge of a Liouville random surface. arXiv preprint arXiv:2005.11845, 2020.
- [AR18] C. L. Aldana and J. Rowlett. A Polyakov formula for sectors. The Journal of Geometric Analysis, 28(2):1773–1839, 2018.
- [BB11] H. Brezis. Functional analysis, Sobolev spaces and partial differential equations. Springer, 2011.
- [BGV03] N. Berline, E. Getzler, and M. Vergne. Heat kernels and Dirac operators. Springer Science & Business Media, 2003.
- [Din02] Y. Ding. Heat kernels and Green's functions on limit spaces. Communications in Analysis and Geometry, 10(3):475–514, 2002.
- [Don01] H. Donnelly. Bounds for eigenfunctions of the Laplacian on compact Riemannian manifolds. Journal of Functional Analysis, 187(1):247–261, 2001.
- [Dub09] J. Dubédat. SLE and the free field: partition functions and couplings. J. Amer. Math. Soc., 22(4):995–1054, 2009, 0712.3018. MR2525778 (2011d:60242)
- [EG18] L. C. Evans and R. F. Garzepy. Measure theory and fine properties of functions. Routledge, 2018.
- [Gig18] N. Gigli. Lecture notes on differential calculus on RCD spaces. Publications of the Research Institute for Mathematical Sciences, 54(4):855–918, 2018.
- [Gil18] P. B. Gilkey. Invariance theory: the heat equation and the Atiyah-Singer index theorem, volume 16. CRC press, 2018.
- [Gre21] R. L. Greenblatt. Discrete and zeta-regularized determinants of the Laplacian on polygonal domains with Dirichlet boundary conditions. arXiv preprint arXiv:2102.04837, 2021.
- [Gri06] A. Grigor’yan. Heat kernels on weighted manifolds and applications. Cont. Math, 398(2006):93–191, 2006.
- [Hör68] L. Hörmander. The spectral function of an elliptic operator. Acta mathematica, 121(1):193–218, 1968.
- [Kac66] M. Kac. Can one hear the shape of a drum? The American Mathematical Monthly, 73(4P2):1–23, 1966.
- [Kal21] V. Kalvin. Polyakov-Alvarez type comparison formulas for determinants of Laplacians on Riemann surfaces with conical singularities. Journal of Functional Analysis, 280(7):108866, 2021.
- [Kok13] A. Kokotov. Polyhedral surfaces and determinant of Laplacian. Proceedings of the American Mathematical Society, 141(2):725–735, 2013.
- [LV09] J. Lott and C. Villani. Ricci curvature for metric-measure spaces via optimal transport. Annals of Mathematics, pages 903–991, 2009.
- [LW04] G. F. Lawler and W. Werner. The Brownian loop soup. Probability theory and related fields, 128(4):565–588, 2004.
- [Moo99] E. A. Mooers. Heat kernel asymptotics on manifolds with conic singularities. Journal d’Analyse Mathematique, 78(1):1–36, 1999.
- [MS+67] H. P. McKean, I. M. Singer, et al. Curvature and the eigenvalues of the Laplacian. J. Differential Geometry, 1(1):43–69, 1967.
- [OPS88] B. Osgood, R. Phillips, and P. Sarnak. Extremals of determinants of Laplacians. Journal of Functional Analysis, 80(1):148 – 211, 1988.
- [Pet10] A. Petrunin. Alexandrov meets Lott-Villani-Sturm. arXiv preprint arXiv:1003.5948, 2010.
- [Pol81] A. M. Polyakov. Quantum geometry of bosonic strings. Phys. Lett. B, 103(3):207–210, 1981. MR623209 (84h:81093a)
- [RS73] D. Ray and I. Singer. Analytic torsion for complex manifolds. Annals Math., 98:154–177, 1973.
- [Sar87] P. Sarnak. Determinants of Laplacians. Communications in mathematical physics, 110(1):113–120, 1987.
- [Ser61] J. Serrin. On the differentiability of functions of several variables. Archive for Rational Mechanics and Analysis, 7(1):359–372, 1961.
- [She07] S. Sheffield. Gaussian free fields for mathematicians. Probab. Theory Related Fields, 139(3-4):521–541, 2007, math/0312099. MR2322706 (2008d:60120)
- [She13] D. Sher. The heat kernel on an asymptotically conic manifold. Analysis & PDE, 6(7):1755–1791, 2013.
- [She15] D. A. Sher. Conic degeneration and the determinant of the Laplacian. Journal d’Analyse Mathématique, 126(1):175–226, 2015.
- [Stu06] K.-T. Sturm. On the geometry of metric measure spaces. Acta mathematica, 196(1):65–131, 2006.
- [Suz19] K. Suzuki. Convergence of Brownian motions on metric measure spaces under Riemannian curvature-dimension conditions. Electronic Journal of Probability, 24:1–36, 2019.
- [Wan18] Y. Wang. Equivalent descriptions of the Loewner energy. Inventiones mathematicae, pages 1–49, 2018.
- [Wey11] H. Weyl. Über die asymptotische Verteilung der Eigenwerte. Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse, 1911:110–117, 1911.
- [ZZ17] H.-C. Zhang and X.-P. Zhu. Weyl’s law on metric measure spaces. arXiv preprint arXiv:1701.01967, 2017.