Convergence of asymptotic costs for random Euclidean matching problems
Abstract.
We investigate the average minimum cost of a bipartite matching between two samples of $n$ independent random points uniformly distributed on the unit cube in $d$ dimensions, where the matching cost between two points is given by any power $p$ of their Euclidean distance. As $n$ grows, we prove convergence, after a suitable renormalization, towards a finite and positive constant. We also consider the analogous problem of optimal transport between $n$ points and the uniform measure. The proofs combine sub-additivity inequalities with a PDE ansatz similar to the one proposed in the context of the matching problem in two dimensions and later extended to obtain upper bounds in higher dimensions.
Key words and phrases:
Matching problem, optimal transport, geometric probability
2010 Mathematics Subject Classification:
60D05, 90C05, 39B62, 60F25, 35J05
1. Introduction
The aim of this paper is to extend the results of [12, 9, 4, 11] on the existence of the thermodynamic limit for some random optimal matching problems. Because of their relations to computer science, statistical physics and quantization of measures, optimal matching problems have been the subject of intense research both from the mathematical and physical communities. We refer for instance to [29, 27, 10] for more details in particular regarding the vast literature.
Probably the simplest and most studied variant of these problems is the bipartite (or Euclidean bipartite) matching on the unit cube in $d$ dimensions. Given $n \ge 1$ and two independent families $(X_i)_{i \le n}$ and $(Y_i)_{i \le n}$ of i.i.d. random variables with common law the uniform (Lebesgue) measure on $[0,1]^d$, the problem is to understand the behavior for large $n$ of
where the minimum is taken over all permutations $\sigma$ of $\{1, \dots, n\}$. It is by now well-known, see [1, 4, 6, 21], that for $d \ge 3$ and $n \gg 1$¹ (see [13] for some non-asymptotic bounds)
¹ The notation $n \gg 1$, which we only use in assumptions, means that there exists a constant depending only on the dimension $d$ and on $p$, such that if $n$ is larger than this constant then the conclusion holds. Similarly, the notation $a \lesssim b$, which we use in output statements, means that there exists a global constant $C$ depending on the dimension $d$ and on $p$ such that $a \le C b$. We write $a \sim b$ if both $a \lesssim b$ and $b \lesssim a$.
(1.1) $\mathbb{E}\Big[\min_{\sigma \in S_n} \sum_{i=1}^n |X_i - Y_{\sigma(i)}|^p\Big] \sim n^{1 - \frac{p}{d}}.$
Let us point out that while the case $d \ge 3$, $p > 1$ is not explicitly covered in the literature, the proof of [21] clearly extends to any $p \ge 1$ (see also [7]). Our main result is the following:
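To make the object concrete, here is a small numerical illustration of ours (not from the paper): the minimum over permutations is exactly a linear assignment problem, which scipy's Hungarian-type solver computes exactly; for $d \ge 3$ the renormalized cost $n^{p/d-1}$ times the matching cost should stay of order one.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matching_cost(X, Y, p):
    """Minimum bipartite matching cost min_sigma sum_i |X_i - Y_sigma(i)|^p,
    computed exactly by the Hungarian-type assignment solver."""
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** p
    rows, cols = linear_sum_assignment(C)
    return C[rows, cols].sum()

rng = np.random.default_rng(0)
d, p = 3, 1.0
for n in (100, 400):
    X, Y = rng.random((n, d)), rng.random((n, d))
    # For d >= 3 the renormalized cost n^(p/d - 1) * cost stays of order one.
    print(n, n ** (p / d - 1) * matching_cost(X, Y, p))
```

The function name `matching_cost` is ours; the example only illustrates the definition, not the proof.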
Theorem 1.1.
For every $d \ge 3$ and $p \ge 1$, there exists a constant $c_{p,d} > 0$ such that
(1.2) $\lim_{n \to \infty} n^{\frac{p}{d} - 1}\, \mathbb{E}\Big[\min_{\sigma \in S_n} \sum_{i=1}^n |X_i - Y_{\sigma(i)}|^p\Big] = c_{p,d}.$
This extends earlier results of [12, 9, 4, 11] where the same conclusion was obtained under the more restrictive condition $p < d/2$. See also [26] for bounds on the limit constant as the dimension becomes large. As in the previously quoted papers, our proof is based on a sub-additivity argument and makes use of the classical observation that, thanks to the Birkhoff-von Neumann Theorem, the bipartite matching problem is actually an optimal transport problem. Indeed, if $\mu^n$ and $\nu^n$ are the associated empirical measures, then
where $W_p$ denotes the Wasserstein distance of order $p$ (see [28, 25]). However, the papers [9, 4, 11] then rely upon combinatorial arguments, and in fact their results apply to a larger class of random optimization problems, while [12] strongly uses the dual formulation of optimal transport, which in the case $p = 1$ is quite specific, since it becomes a maximization over the set of $1$-Lipschitz functions.
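The Birkhoff-von Neumann reduction can be observed numerically: the Kantorovich linear program over couplings of the two empirical measures attains its optimum at a permutation matrix, so its value coincides with the assignment cost. A small sketch of ours (not from the paper), assuming scipy is available:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment, linprog

rng = np.random.default_rng(1)
n, d, p = 4, 2, 2.0
X, Y = rng.random((n, d)), rng.random((n, d))
C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** p

# Assignment (permutation) cost, normalized to probability measures.
r, c = linear_sum_assignment(C)
perm_cost = C[r, c].sum() / n

# Kantorovich LP: minimize <C, pi> over couplings pi of the two
# empirical measures (all row and column sums equal to 1/n).
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1.0  # mass leaving X_i
    A_eq[n + i, i::n] = 1.0           # mass arriving at Y_i
b_eq = np.full(2 * n, 1.0 / n)
lp = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))

print(abs(lp.fun - perm_cost))  # ~0: the LP optimum equals the matching cost
```

By the Birkhoff-von Neumann Theorem the doubly stochastic polytope has permutation matrices as extreme points, which is why the relaxation is tight.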
The optimal transport point of view allows us to treat the defect in sub-additivity as a defect in local distribution of mass rather than a defect in local distribution of points. More precisely, even if the two empirical measures have the same total mass, it is in general not true that, for a given partition of $[0,1]^d$ in sub-cubes, they give the same mass to each sub-cube. Therefore, in order to use sub-additivity, one needs to relax the definition of the matching problem to take this disparity into account. In [9, 4, 11] this is done by requiring that as many points as possible are matched. Here we allow instead for varying weights. That is, for two point configurations containing potentially a different number of points, we consider the problem
The main sub-additivity argument for this quantity is contained in Lemma 3.1. In order to estimate the error in sub-additivity, we then use in Lemma 3.4 a PDE ansatz similar to the one proposed in the context of the matching problem in [10] and then used in [3] (see also [21, 17, 5]) to show that when
Notice however that our use of this linearization ansatz is quite different from [10, 3]. Indeed, for us the main contribution to the transportation cost is given by the transportation at the smaller scales, and linearization is only used to estimate the higher order error term. On the other hand, in [10, 3] (see also [2]), the main contribution to the cost is given by the linearized, i.e. $H^{-1}$, cost. This is somewhat in line with the prediction by [10] that for $d \ge 3$, the first order contribution to the Wasserstein cost is not given by the linearized problem while higher order corrections are. In any case, we give in Proposition 5.3 an alternative argument to estimate the error term without relying on the PDE ansatz. There we use instead an elementary comparison argument with one-dimensional transport plans. We include both proofs since we believe that they could prove useful in other contexts.
Let us make a few more comments on the proof of Theorem 1.1. As in [12, 9, 4], the proof of (1.2) is first carried out on a variant of the problem where the number of points follows a Poisson distribution instead of being deterministic. This is due to the fact that the restriction of a Poisson point process is again a Poisson point process. For this variant of the problem, rather than working on a fixed cube with an increasing density of points, we prefer to make a blow-up and consider in Theorem 5.1 a fixed Poisson point process of intensity one on $\mathbb{R}^d$ but restricted to cubes of side $L$ with $L \gg 1$ (hence the terminology thermodynamic limit). We believe that the sub-additivity argument is slightly clearer in these variables (a similar rescaling is actually implicitly used in [9, 4]). This setting is somewhat reminiscent of [18], where super-additivity is used to construct an optimal coupling between the Poisson point process and the Lebesgue measure on $\mathbb{R}^d$. In order to pass from the Poisson version of the problem to the deterministic one, we prove a general de-Poissonization result in Proposition 6.1 which can hopefully be useful in other contexts.
Besides the bipartite matching, we also treat in Theorem 4.1 and Theorem 6.2 the case of the matching to the reference measure. We actually treat this problem first since the proof is a bit simpler. Indeed, while the general scheme of the proof is identical to the bipartite case, the PDE ansatz used in Lemma 3.4 works well for “regular” measures, and a more delicate argument is required for the bipartite matching. Notice that by the Jensen and triangle inequalities, (1.1) also holds for the matching to the reference measure.
We point out that in [12, 4, 11], it is more generally proven that if the points and have a common law supported in instead of the Lebesgue measure (for measures with unbounded support a condition on the moments is required), then for and ,
However, when and without additional assumptions on , the asymptotic rate may be different and thus this inequality may fail, see e.g. [13]. Positive results for specific densities can be obtained nonetheless. For instance, it is proven in [22] that for the standard Gaussian measure on , , the asymptotic bound
holds true also for .
Finally, we notice that usual results on concentration of measure allow us to improve from convergence of the expectations to strong convergence. However, we are able to cover only the case , see Remark 6.5.
The plan of the paper is the following. In Section 2, we fix some notation and recall basic moment and concentration bounds for Poisson random variables. In Section 3, we state and prove our two main lemmas, namely the sub-additivity estimate Lemma 3.1 and the error estimate Lemma 3.4. In Section 4, we then prove the existence of the thermodynamic limit for the matching problem of a Poisson point process to the reference measure. The analogous result for the bipartite matching between two Poisson point processes is obtained in Section 5. Finally, in Section 6 we pass from the Poissonized problem to the deterministic one and discuss stronger convergence results.
2. Notation and preliminary results
We use the notation $|A|$ for the Lebesgue measure of a Borel set $A \subseteq \mathbb{R}^d$, and $\int_A f$ for the Lebesgue integral of a function $f$ on $A$. For $L > 0$, we let $Q_L = [0, L]^d$. We denote by $|x|$ the Euclidean norm of a vector $x \in \mathbb{R}^d$. For a function $\phi$, we use the notation $\nabla \phi$ for the gradient, $\nabla \cdot \phi$ for the divergence, $\Delta \phi$ for the Laplacian and $\nabla^2 \phi$ for the Hessian.
2.1. Optimal transport
In this section we introduce some notation for the Wasserstein distance and recall a few simple properties that will be used throughout. Proofs can be found in any of the monographs [28, 25, 24], which expose the theory of optimal transport from different perspectives.
Given $p \ge 1$, a Borel subset $A \subseteq \mathbb{R}^d$ and two positive Borel measures $\mu$, $\nu$ on $A$ with $\mu(A) = \nu(A)$ and finite $p$-th moments, the Wasserstein distance of order $p$ between $\mu$ and $\nu$ is defined as the quantity
where $\Pi(\mu, \nu)$ is the set of couplings between $\mu$ and $\nu$.
Moreover, if , we define , while if , we let .
Let us recall that since $W_p$ is a distance, we have the triangle inequality
(2.1) |
We will also use the classical sub-additivity inequality
(2.2) |
for a finite set of positive measures $\mu_i$, $\nu_i$ with $\mu_i(A) = \nu_i(A)$. This follows from the observation that if $\pi_i \in \Pi(\mu_i, \nu_i)$ for every $i$, then $\sum_i \pi_i \in \Pi(\sum_i \mu_i, \sum_i \nu_i)$.
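As an illustration we add (not part of the paper): in one dimension, where the monotone (sorted) coupling between equal-size samples is optimal for the convex cost $|x-y|^p$ with $p \ge 1$, the sub-additivity (2.2) can be checked numerically by splitting samples into blocks:

```python
import numpy as np

def w_pp_1d(x, y, p):
    """Unnormalized transport cost W_p^p between the empirical measures of two
    equal-size 1d samples (unit mass per point): for the convex cost |x - y|^p
    with p >= 1, the monotone (sorted) coupling is optimal."""
    return float(np.sum(np.abs(np.sort(x) - np.sort(y)) ** p))

rng = np.random.default_rng(2)
p = 2.0
blocks = [(rng.random(50), rng.random(50)) for _ in range(4)]
sum_of_block_costs = sum(w_pp_1d(x, y, p) for x, y in blocks)
x_all = np.concatenate([x for x, _ in blocks])
y_all = np.concatenate([y for _, y in blocks])
# (2.2): transporting the merged measures globally is at most as expensive
# as transporting each block separately.
print(w_pp_1d(x_all, y_all, p) <= sum_of_block_costs)  # True
```

The helper `w_pp_1d` is ours; the inequality it exhibits is exactly the superposition-of-couplings argument above.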
Remark 2.1.
In fact, our results deal with the transportation cost $W_p^p$ rather than $W_p$. To keep notation simple, we write
Moreover, if a measure is absolutely continuous with respect to Lebesgue measure, we only write its density. For example,
denotes the transportation cost between and the uniform measure on with total mass .
Occasionally we may write instead of . This may lead to some ambiguity, but it should be clear from the context.
2.2. Poisson point processes
As in [12, 9, 4], we exploit invariance properties of Poisson point processes on $\mathbb{R}^d$ with uniform intensity in order to obtain simpler sub-additivity estimates. We refer e.g. to [19] for a general introduction to Poisson point processes. Here we only recall that a Poisson point process on $\mathbb{R}^d$ with intensity one can be defined as a random variable $\mu$ taking values in the set of locally finite atomic measures
such that, for every $k \ge 1$ and any disjoint Borel sets $A_1, \dots, A_k$, the random variables $\mu(A_1), \dots, \mu(A_k)$ are independent and $\mu(A_i)$ has a Poisson distribution of parameter $|A_i|$, for every $i$. In particular, if $A$ is Lebesgue negligible, then $\mu(A) = 0$ almost surely.
Existence of a Poisson point process of intensity one is obtained via a superposition argument, noticing that on every bounded subset $A$ the law of $\mu \llcorner A$ can be easily described: conditionally on $\{\mu(A) = n\}$, the measure $\mu \llcorner A$ has the same law as the random measure
where $(X_i)_{i \le n}$ are independent random variables with uniform law on $A$. Uniqueness in law can also be obtained, so that translation invariance of the Lebesgue measure entails that the process is stationary, i.e., any deterministic translation of the random measure leaves its law unchanged.
Let us finally recall the classical Cramér-Chernoff concentration bounds for Poisson random variables.
Lemma 2.2.
Let $N$ be a Poisson random variable with parameter $\lambda > 0$. Then, for every $t > 0$,
(2.3) $\mathbb{P}(N \ge \lambda + t) \le \exp\big(-\tfrac{t^2}{2(\lambda + t)}\big), \qquad \mathbb{P}(N \le \lambda - t) \le \exp\big(-\tfrac{t^2}{2\lambda}\big).$
As a consequence, for every ,
(2.4) |
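To make the lemma concrete, here is one standard form of the Cramér-Chernoff upper-tail bound for a Poisson variable, $\mathbb{P}(N \ge \lambda + t) \le \exp(-\lambda h(t/\lambda))$ with $h(u) = (1+u)\log(1+u) - u$; the paper's exact constants may differ. A quick Monte Carlo check of ours:

```python
import math
import numpy as np

def poisson_upper_tail_bound(lam, t):
    """Chernoff bound P(N >= lam + t) <= exp(-lam * h(t / lam)),
    where h(u) = (1 + u) log(1 + u) - u."""
    u = t / lam
    return math.exp(-lam * ((1.0 + u) * math.log(1.0 + u) - u))

rng = np.random.default_rng(4)
lam, t = 100.0, 30.0
samples = rng.poisson(lam, size=200_000)
empirical_tail = float(np.mean(samples >= lam + t))
bound = poisson_upper_tail_bound(lam, t)
print(empirical_tail, bound)  # the empirical tail sits below the bound
```

The bound follows by optimizing the exponential Markov inequality over the tilt parameter; this is the mechanism behind the "overwhelming probability" events used repeatedly below.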
3. The main lemmas
Our sub-additivity argument rests on a general but relatively simple lemma (which we only apply here for rectangles).
Lemma 3.1.
For every $p \ge 1$, there exists a constant depending only on $p$ such that the following holds. For every Borel set $A$, every Borel partition $(A_i)_i$ of $A$, all measures $\mu$, $\nu$ on $A$, and every $\varepsilon \in (0, 1)$,
(3.1) |
Proof.
Remark 3.2.
Lemma 3.1 shows that in order to estimate the defect in sub-additivity, it is enough to bound the local defect of mass distribution. This will be done here through a PDE argument.
Definition 3.3.
We say that a rectangle is of moderate aspect ratio if for every , . A partition of is called admissible if for every , is a rectangle of moderate aspect ratio and . Notice that in particular for every admissible partition.
Lemma 3.4.
Let be a rectangle of moderate aspect ratio, and be measures on with equal mass, both absolutely continuous with respect to Lebesgue and such that . Then, for every
(3.3) |
Proof.
Let be a solution of the Poisson equation with Neumann boundary conditions
(3.4) |
We first argue that
(3.5) |
Let us point out that this estimate is well-known
and has already been used in the context of the matching problem, see [3, 17] in the case $p = 2$ and [21, Th. 2] for general $p$.
Still, we give a proof for the reader’s convenience.
We first argue as in [17, Lem. 2.7], and use triangle inequality (2.1) and the monotonicity of (2.2) to get
and thus
We now recall that by the Benamou-Brenier formula (see [25, Th. 5.28])
Estimate (3.5) follows using
as competitor and noticing that for , .
We now claim that
(3.6) |
which together with (3.5) would conclude the proof of (3.3). Estimate (3.6) is a direct consequence of Poincaré inequality and Calderón-Zygmund estimates for the Laplacian.
However, since we did not find a precise reference for (global) Calderón-Zygmund estimates on rectangles with Neumann boundary conditions, we give here a short proof.
By scaling, we may assume that . Furthermore, using even reflections along the faces we may replace the Neumann boundary conditions by periodic ones in (3.4).
By Poincaré inequality [23, Prop. 12.29] (notice that thanks to the periodic boundary conditions we now have ),
By interior Calderón-Zygmund estimates (see for instance [15, Th. 7.3]), periodicity and the fact that has moderate aspect ratio, we get
By Bochner’s formula and Hölder inequality,
which concludes the proof of (3.6). ∎
4. Matching to the reference measure
In this section, we consider the optimal matching problem between a Poisson point process on with intensity one and the Lebesgue measure. More precisely, for every we let
where and
is the generic constant for which this is well defined.
Theorem 4.1.
For every and , the limit
exists and is strictly positive. Moreover, there exists depending on and such that for ,
(4.1) |
The proof follows the argument of [9] (see also [4]) and is mostly based on the following sub-additivity estimate.
Proposition 4.2.
For every and , there exists a constant such that for every and ,
(4.2) |
Proof.
We start by pointing out that since , it is not restrictive to assume that in the proof of (4.2).
Step 1. [The dyadic case] For the sake of clarity, we start with the simpler case $L = 2^k$ for some $k \in \mathbb{N}$. We claim that
(4.3) |
The desired estimate (4.2) would then follow by iterating (4.3) and using that for . In order to prove (4.3), we divide the cube $Q_{2L}$ into $2^d$ sub-cubes of side length $L$ and let (and ). Notice that we are considering a partition up to a Lebesgue negligible remainder, which gives no contribution almost surely. By (3.1), for every $\varepsilon > 0$,
Dividing by , taking expectations and using the fact that by translation invariance , we get
We now estimate . Using together with
which follows from the Cramér-Chernoff bounds (2.3), we may reduce ourselves to the event . Under this condition, by (3.3), we have
Recalling that are Poisson random variables of parameter and that , we get from (2.4)
Thus
(4.4) |
and we conclude that for ,
(4.5) |
Optimizing in by choosing , and using that is bounded
(by (1.1) and (2.3), see for instance [16, Prop. 2.7] for details) we conclude the proof of (4.3).
Let us point out that we used here boundedness of for simplicity but that, as shown below, it can also be obtained as a consequence of our proof.
Step 2. [The general case] We now consider the case when $L$ is not necessarily dyadic. We will partition $Q_L$ into rectangles of almost dyadic size and thus need to deal with slightly more general configurations than dyadic cubes. Let us introduce some notation. Let $R$ be a rectangle with moderate aspect ratio and let $(R_i)_i$ be an admissible partition of $R$ (recall Definition 3.3). Slightly abusing notation, we define
(4.6) |
Step 2.1. We claim that the following variant of (4.5) holds: for every rectangle of moderate aspect ratio with , every admissible partition of and every , we have
(4.7) |
Defining and using (3.1) as above, we get
The estimate
(4.8) |
is then obtained arguing exactly as for (4.4), using first the Cramér-Chernoff bound (2.3) to reduce to the event
and then (3.3) (recalling that since has moderate aspect ratio) in combination with (2.4) and the fact that
since is an admissible partition.
Step 2.2. Starting from the cube , let us construct a sequence of finer and finer partitions of by rectangles of moderate aspect ratios and
side-length given by integer multiples of .
We let and define inductively as follows.
Let . Up to translation we may assume that for some . We then split each interval into .
It is readily seen that this induces an admissible partition of . Let us point out that when for some , the corresponding interval is empty.
This procedure stops after a finite number of steps once . It is also readily seen that and that for every and every we have .
Let us prove by a downward induction that there exists such that for every and every ,
(4.9) |
This is clearly true for . Assume that it holds true for . Let . Applying (4.7) with , we get
If is large enough, then . Finally, choosing yields (4.9).
Applying (4.9) to and using that , we get
Since , noting that every may be written as for some and , we conclude that is bounded and thus (4.2) follows. ∎
Remark 4.3.
We can now prove Theorem 4.1.
Proof of Theorem 4.1.
The existence of a limit is obtained from (4.2) arguing exactly as in [9] using the continuity of (which can be obtained for instance by dominated convergence). The fact that and thus follows from (1.1) and (2.3). Finally, (4.1) follows from (4.2) by sending for fixed .
∎
Remark 4.4.
5. Bi-partite matching
We now turn to the bipartite matching. For $\mu$ and $\nu$ two independent Poisson point processes of intensity one, we want to study the asymptotic behavior as $L \to \infty$ of
with . Analogously to Theorem 4.1, we have:
Theorem 5.1.
For every and , the limit
exists and is strictly positive. Moreover, there exists depending on and such that for ,
The proof of Theorem 5.1 follows the same line of argument as for Theorem 4.1. We only detail the estimate of the sub-additivity defect, i.e. the counterpart of (4.8), since it is more delicate in the bipartite case.
Proposition 5.2.
Let be a rectangle of moderate aspect ratio with and be an admissible partition of (recall Definition 3.3). Defining and , we have for every and ,
(5.1) |
Proof.
As opposed to (4.8), since the target measure is atomic, we cannot directly use (3.3). The idea is to use as an intermediate step the matching to the reference measure.
Let us point out that since the transported mass is of order one, we cannot naively apply the triangle inequality.
The main observation is that, since the two processes have almost the same number of points with overwhelming probability, the amount of mass which actually needs to be transported is very small.
Let to be optimized later on. For every let
and assume that
(5.2) |
so that . Notice that thanks to the Cramér-Chernoff bounds (2.3), (5.2) is satisfied with overwhelming probability as long as . Using the triangle inequality (2.1) we have
Notice that by definition of and the fact that , the second term is well defined.
We now estimate the three terms separately. The first and third are estimated in a similar way and we thus focus only on the first one. By sub-additivity (2.2) of , we have
We turn to the middle term. Again by sub-additivity of ,
Using (3.3) in the event (which has overwhelming probability) and recalling that we assumed , we have
Putting these two estimates together, dividing by and taking expectations we find
Optimizing in by choosing (so that (5.2) is satisfied), this yields (5.1).
∎
Comparing (4.4) and (5.1), one may wonder whether (5.1) is suboptimal and could be improved. Let us prove that this is not the case, at least if we consider the slightly more regular situation of the matching to a deterministic grid.
Proposition 5.3.
Let and for every , be independent Poisson random variables of parameter . Writing for , where are disjoint cubes of sidelength and defining , , we have for and ,
Proof.
Step 1.[The lower bound] We start by proving that
(5.3) |
For this we notice that if $\pi$ is any admissible coupling, then for every pair in the support of $\pi$ with , and thus (recall that $W_p$ denotes the Wasserstein distance)
Let and . Using as test function in the dual formulation of , we obtain
Taking expectations we find (5.3).
Step 2.[The upper bound] We turn to the corresponding upper bound,
(5.4) |
One could argue exactly as for (5.1), but we provide an alternative proof which uses a one-dimensional argument instead of (3.3). Notice that this argument could also have been used to give a different proof of (4.4).
Step 2.1.[The one-dimensional estimate] Let and for , and , define . We claim that if , then
(5.5) |
Let us assume without loss of generality that . The optimal transport map is essentially symmetric around and is given in by sending a mass from position to (which is admissible since by hypothesis ) so that
This proves (5.5).
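The explicit map used above relies on the fact that on the line, monotone transport is optimal for convex costs. As a sanity check we add (not from the paper), the sorted coupling between equal-size samples matches a brute-force minimum over permutations:

```python
import itertools
import numpy as np

def sorted_cost(x, y, p):
    """Cost of the monotone (sorted) coupling between equal-size 1d samples."""
    return float(np.sum(np.abs(np.sort(x) - np.sort(y)) ** p))

def brute_cost(x, y, p):
    """Exact minimum over all permutations (feasible only for tiny samples)."""
    y = np.asarray(y)
    return min(float(np.sum(np.abs(x - y[list(s)]) ** p))
               for s in itertools.permutations(range(len(x))))

rng = np.random.default_rng(5)
x, y, p = rng.random(6), rng.random(6), 1.5
print(sorted_cost(x, y, p), brute_cost(x, y, p))  # the two values agree
```

Both helper names are ours; the agreement of the two values for any convex exponent $p \ge 1$ is the classical one-dimensional optimality of the monotone rearrangement.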
Step 2.2.[Proof of (5.4)] The proof is made recursively by layers. In the first step, we pass from cubes to rectangles of the form . For this we remark that for every cube there is exactly one cube such that (where is the canonical basis of ). Let us focus on and . Let . Define . We claim that
For this we notice that in , the measures and are constant in the directions orthogonal to and thus
Since , by the Cramér-Chernoff bounds (2.3), we have with overwhelming probability. Hence, if we may apply (5.5) to get
which proves the claim.
We finally iterate this argument along the remaining coordinate directions. At every step we have rectangles of the form and each iteration has a cost of the same order (namely ).
Using triangle inequality this concludes the proof of (5.4).
∎
6. De-Poissonization
In this section we discuss how to transfer limit results from matching problems with a Poisson point process, i.e., with a random number of points, to those with a deterministic number of points.
We use the following general result.
Proposition 6.1.
Let , , satisfy the following assumptions:
(1) (-homogeneity) , for every , ,
(2) (boundedness) , for every ,
(3) (monotonicity) , for every such that for .
Define
where are i.i.d. Poisson random variables with parameter . Then,
Proof.
Let and introduce the event
By independence and Poisson tail bounds (2.3),
(6.1) |
We decompose
If holds, we use monotonicity of to argue that
where , . Otherwise, we use and to obtain
Combining these inequalities, we have
(6.2) |
We now apply Proposition 6.1 to matching problems. Let us first consider the case of matching to the reference measure.
Theorem 6.2.
Proof.
Recalling the notation , we introduce the function
and . It is clearly bounded and -homogeneous in the sense of Proposition 6.1. To show monotonicity, let and use the identity
in combination with the convexity of the transportation cost, to obtain
Taking expectation yields , since for with , have the same law as .
Let be a Poisson point process of intensity one on and for let be a Poisson random variable of parameter . For , we notice that
(6.3) |
where we write, extending the notation from Section 4,
with (and ). Notice that
In order to apply Proposition 6.1 and conclude the proof, we argue that
(6.4) |
To this aim, we first bound uniformly from above as . For this we combine the following two simple facts. First, since , letting , we have
Second, since by (2.3) ,
where in the last inequality we used that is bounded as a consequence of Theorem 4.1. Therefore,
Using Hölder inequality with , and the fact that ,
Choosing close enough to , in particular if , we get (6.4). ∎
Arguing similarly, we obtain the corresponding result for the bi-partite matching on the unit cube, that is Theorem 1.1, which we restate for the reader’s convenience.
Theorem 6.3.
Proof.
The proof is very similar to that of Theorem 6.2, but in this case we define the function
for , , and let otherwise. It is clearly bounded and -homogeneous. Using the identities
we obtain monotonicity arguing analogously as in Theorem 6.2. The proof is then concluded as in Theorem 6.2 and we omit the details. ∎
Remark 6.4 (Convergence rates).
An inspection of the proof of Proposition 6.1 shows that one can transfer rates of convergence (even only one-sided) from the Poisson case to that of a fixed number of points. In the case of matching to the reference measure this leads to the inequality
while for the bi-partite matching we obtain
for some constant . Besides being one-sided bounds, these are still far from the conjectured rates in [10], which for read
with an explicit constant .
Remark 6.5 (Strong convergence).
If , standard concentration of measure arguments allow us to obtain strong convergence from convergence of the expected values (see also [3] for a similar argument in the case ). Let us consider for example the case of bipartite matching, and show that, both -a.s. and in ,
(6.5) |
with , independent uniform random variables on .
For any , the function
is -Lipschitz with respect to the Euclidean distance, where we write
This relies on the triangle inequality (2.1) and the fact that
if we endow the set of probability measures on with the Wasserstein distance of order . Indeed, for , we have
Gaussian concentration for the uniform measure on the unit cube [20, Prop. 2.8] yields that if
then, for ,
where is an absolute constant. A standard application of Borel-Cantelli Lemma gives that, if and , then
(6.6) |
Moreover, using the layer-cake formula, we obtain the inequality
(6.7) |
which is infinitesimal as . To conclude, it is sufficient to argue that , since it yields by (6.7) and (6.6) that in and -a.s. and hence (6.5). By Theorem 6.2, . By Jensen’s inequality,
By the elementary inequality (3.2), we have, for any ,
Taking expectation and letting , using (6.7), we obtain
Letting we conclude.
Acknowledgments
M.G. was partially supported by the project ANR-18-CE40-0013 SHAPO financed by the French Agence Nationale de la Recherche (ANR) and by the LYSM LIA AMU CNRS ECM INdAM. M.G. also thanks the Centro De Giorgi in Pisa for its hospitality. D.T. was partially supported by the University of Pisa, Project PRA 2018-49, and the GNAMPA project 2019 “Proprietà analitiche e geometriche di campi aleatori”.
References
- [1] M. Ajtai, J. Komlós, and G. Tusnády, On optimal matchings., Combinatorica 4 (1984), 259–264.
- [2] L. Ambrosio and F. Glaudo, Finer estimates on the 2-dimensional matching problem, J. Éc. polytech. Math. 6 (2019), 737–765.
- [3] L. Ambrosio, F. Stra, and D. Trevisan, A PDE approach to a 2-dimensional matching problem, Probab. Theory Relat. Fields 173 (2019), no. 1-2, 433–477.
- [4] F. Barthe and C. Bordenave, Combinatorial optimization over two random point sets., Séminaire de probabilités XLV, Cham: Springer, 2013, pp. 483–535.
- [5] D. Benedetto and E. Caglioti, Euclidean random matching in 2d for non-constant densities, Journal of Statistical Physics 181 (2020), no. 3, 854–869.
- [6] S. Bobkov and M. Ledoux, One-dimensional empirical measures, order statistics, and Kantorovich transport distances, Mem. Amer. Math. Soc. 261 (2019), no. 1259, v+126.
- [7] S. Bobkov and M. Ledoux, A simple Fourier analytic proof of the AKT optimal matching theorem, 2019.
- [8] S. Boucheron, G. Lugosi, and P. Massart, Concentration inequalities, Oxford University Press, Oxford, 2013.
- [9] J. H. Boutet de Monvel and O. C. Martin, Almost sure convergence of the minimum bipartite matching functional in Euclidean space, Combinatorica 22 (2002), no. 4, 523–530.
- [10] S. Caracciolo, C. Lucibello, G. Parisi, and G. Sicuro, Scaling hypothesis for the Euclidean bipartite matching problem, Physical Review E 90 (2014), no. 1.
- [11] S. Dereich, M. Scheutzow, and R. Schottstedt, Constructive quantization: approximation by empirical measures, Ann. Inst. Henri Poincaré Probab. Stat. 49 (2013), no. 4, 1183–1203.
- [12] V. Dobrić and J.E. Yukich, Asymptotics for transportation cost in high dimensions., J. Theor. Probab. 8 (1995), no. 1, 97–118.
- [13] N. Fournier and A. Guillin, On the rate of convergence in Wasserstein distance of the empirical measure, Probab. Theory Related Fields 162 (2015), no. 3-4, 707–738.
- [14] S. J. Fromm, Potential space estimates for Green potentials in convex domains, Proc. Amer. Math. Soc. 119 (1993), no. 1, 225–233.
- [15] M. Giaquinta and L. Martinazzi, An introduction to the regularity theory for elliptic systems, harmonic maps and minimal graphs, second ed., Appunti. Scuola Normale Superiore di Pisa (Nuova Serie), vol. 11, Edizioni della Normale, Pisa, 2012.
- [16] M. Goldman, M. Huesmann, and F. Otto, A large-scale regularity theory for the Monge-Ampere equation with rough data and application to the optimal matching problem, Aug 2018.
- [17] M. Goldman, M. Huesmann, and F. Otto, Quantitative linearization results for the Monge-Ampère equation, 2019.
- [18] M. Huesmann and K.-T. Sturm, Optimal transport from Lebesgue to Poisson, Ann. Probab. 41 (2013), no. 4, 2426–2478.
- [19] G. Last and M. Penrose, Lectures on the Poisson process, vol. 7, Cambridge University Press, 2017.
- [20] M. Ledoux, The concentration of measure phenomenon, no. 89, American Mathematical Soc., 2001.
- [21] M. Ledoux, On optimal matching of Gaussian samples, Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) 457 (2017), no. Veroyatnost' i Statistika. 25, 226–264.
- [22] M. Ledoux and J.-X. Zhu, On optimal matching of Gaussian samples III, arXiv preprint arXiv:1911.07579 (2019).
- [23] G. Leoni, A first course in Sobolev spaces, second ed., Graduate Studies in Mathematics, vol. 181, American Mathematical Society, Providence, RI, 2017.
- [24] G. Peyré, M. Cuturi, et al., Computational optimal transport: With applications to data science, Foundations and Trends® in Machine Learning 11 (2019), no. 5-6, 355–607.
- [25] F. Santambrogio, Optimal transport for applied mathematicians, Progress in Nonlinear Differential Equations and their Applications, vol. 87, Birkhäuser/Springer, Cham, 2015, Calculus of variations, PDEs, and modeling.
- [26] M. Talagrand, Matching random samples in many dimensions, Ann. Appl. Probab. 2 (1992), no. 4, 846–856.
- [27] by same author, Upper and lower bounds for stochastic processes: modern methods and classical problems, vol. 60, Springer Science & Business Media, 2014.
- [28] C. Villani, Topics in optimal transportation, Graduate Studies in Mathematics, vol. 58, American Mathematical Society, Providence, RI, 2003.
- [29] J. E. Yukich, Probability theory of classical Euclidean optimization problems., Berlin: Springer, 1998.