Sharp Bounds for the Concentration of the Resolvent
in Convex Concentration Settings
Abstract
Considering random matrices with independent columns satisfying the convex concentration properties issued from a famous theorem of Talagrand, we express the linear concentration of the resolvent around a classical deterministic equivalent, with a good observable diameter for the nuclear norm. The general proof relies on a decomposition of the resolvent as a series of powers of .
keywords:
random matrix theory; concentration of measure; covariance matrices; convex concentration; concentration of products and infinite series; Hanson-Wright theorem.
Mathematics Subject Classification 2000: 15A52, 60B12, 62J10
Introduction
Initiated by Milman in the 70s [Mil71], the theory of concentration of measure has provided a wide range of concentration inequalities, mainly concerning continuous distributions (i.e., with no atoms), in particular thanks to the beautiful interpretation through bounds on the Ricci curvature [Gro79]. To give a simple fundamental example (more examples can be found in [Led05]), a random vector having independent Gaussian entries with unit variance satisfies, for any -Lipschitz mapping :
(1)
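Since the displayed formula was lost in extraction, we recall for reference the classical form this inequality presumably takes (see [Led05]): for $Z \sim \mathcal{N}(0, I_n)$ and any $\lambda$-Lipschitz mapping $f : \mathbb{R}^n \to \mathbb{R}$,
\[
\mathbb{P}\big(\lvert f(Z) - \mathbb{E}[f(Z)]\rvert \geq t\big) \;\leq\; 2\, e^{-t^2/(2\lambda^2)}, \qquad \forall t > 0.
\]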
This inequality is powerful for two reasons: first, it is independent of the dimension ; second, it concerns any -Lipschitz mapping . It is then interesting to formalize this behavior and introduce a class of “Lipschitz concentrated random vectors” satisfying the same concentration inequality as the Gaussian distribution (in particular, all the Lipschitz transformations of a Gaussian vector). This was done in several books and papers where this approach proved its efficiency in the study of random matrices [Tao12, Nou09, LC21b].
We here want to extend those results to a new class of concentrated vectors discovered by Talagrand in [Tal95]. Although the concentration result looks similar, its nature is quite different as it concerns bounded distributions for which classical tools of differential geometry do not operate. In a sense, it could be seen as a combinatorial result of concentration. Given a random vector with independent entries, this result states that, for any -Lipschitz and convex mapping :
(2)
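Here again the display is missing; inequality (2) is presumably the classical Talagrand inequality, which in its usual form reads: for $X$ with independent entries valued in $[0,1]$, any $1$-Lipschitz and convex $f : \mathbb{R}^n \to \mathbb{R}$, and $m_f$ a median of $f(X)$,
\[
\mathbb{P}\big(\lvert f(X) - m_f \rvert \geq t\big) \;\leq\; 4\, e^{-t^2/4}, \qquad \forall t > 0.
\]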
We can mention here the recent results of [HT21] that extend this kind of inequality to random vectors with independent and subgaussian entries. Adopting the terminology of [VW14, MS11, Ada11], we call those vectors convexly concentrated random vectors (see Definition 1.2 below). The convexity required for the observations to concentrate makes the discussion on convexly concentrated random vectors far more delicate. There is no longer stability under Lipschitz transformations and, given a convexly concentrated random vector , only its affine transformations are sure to be concentrated. This issue arises naturally for one of the major objects of random matrix theory, namely the resolvent , which can provide important spectral properties of . In the case of convex concentration, the concentration of the resolvent is no longer a mere consequence of a bound on its differential on . Still, as first shown by [GZ00], it is possible to obtain concentration properties on sums of Lipschitz functionals of the eigenvalues. Here we pursue the study, looking at linear concentration properties of , for which inequalities similar to (1) or (2) are only satisfied by -Lipschitz and linear functionals . The well-known identity
(3)
holds for any analytic mapping defined on the interior of a contour enclosing the spectrum of (or any limit of such mappings); therefore, our results on the concentration of concern in particular the quantities studied in [GZ00].
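For the reader's convenience, identity (3) is presumably the classical Cauchy-integral functional calculus (the symbols $S$, $Q$, $\gamma$ below are ours): for a symmetric matrix $S$ with resolvent $Q(z) = (S - z I_p)^{-1}$, a contour $\gamma$ enclosing the spectrum of $S$, and $f$ analytic on the interior of $\gamma$,
\[
f(S) \;=\; \frac{-1}{2\pi i} \oint_{\gamma} f(z)\, Q(z)\, dz,
\qquad \text{so that} \qquad
\frac{1}{p}\operatorname{tr} f(S) \;=\; \frac{-1}{2\pi i} \oint_{\gamma} f(z)\, \frac{1}{p}\operatorname{tr} Q(z)\, dz.
\]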
Although it is weaker (Lipschitz concentrated random vectors are convexly and linearly concentrated, and convexly concentrated random vectors are linearly concentrated), the class of linearly concentrated vectors behaves very well towards dependence and sums, and allows us to obtain the concentration of the resolvent by expressing it as a sum . The linear concentration of the powers of was justified in [MS11] (we provide an alternative proof in the appendix) in the case of a convexly concentrated random matrix . We call this weakening of the concentration property “the degeneracy of the convex concentration through multiplication”. The linear concentration of the resolvent is though sufficient for most practical applications, which rely on an estimation of the Stieltjes transform .
We present below our main contribution without the useful but non-standard formalism introduced in the rest of the article. It concerns the concentration and the estimation of (the concentration of for a positive symmetric matrix is immediate from our approach; the estimation of its expectation is more laborious and goes beyond the scope of the paper, although it can be obtained with the same tools) for a random matrix . Following the formalism of random matrix theory, the computable estimation of will be called a “deterministic equivalent”. Its definition relies on a well-known result stating that, given a family of symmetric matrices , there exists a unique vector satisfying:
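The display of this system was lost; it presumably takes the form used in [LC21b] (the notations $\Sigma_i$, $\tilde{Q}$, $\Lambda$ below are ours and only illustrative): writing $\Sigma_i$ for the covariance of the $i$-th column, $\Lambda \in \mathbb{R}^n$ is the unique solution, for suitable $z$, of
\[
\Lambda_i \;=\; \frac{1}{n} \operatorname{tr}\Big(\Sigma_i\, \tilde{Q}(z)\Big),
\qquad
\tilde{Q}(z) \;=\; \Big(\frac{1}{n} \sum_{j=1}^{n} \frac{\Sigma_j}{1 + \Lambda_j} - z I_p\Big)^{-1},
\qquad 1 \leq i \leq n.
\]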
With those notations at hand, let us state:
Theorem 0.1 (Concentration of the resolvent).
Considering two sequences , and four constants , we suppose that we are given, for any , a random matrix such that:
•
• for all , are independent,
•
• for any quasi-convex mapping , -Lipschitz for the Euclidean norm:
Then, for any constant , there exist two constants such that, for all , for any deterministic matrix such that , and for any such that :
where . Besides, there exists a constant such that for all :
This theorem allows us to draw good inferences on the eigenvalue distribution through the identity (3) and the estimation of the Stieltjes transform , which satisfies the concentration inequality:
for two constants (and for ).
When the distribution of the spectrum of presents different bulks, this theorem also allows us to understand the eigenspaces associated with those different bulks. Indeed, considering a contour enclosing a bulk of eigenvalues , if we denote the associated random eigenspace and the orthogonal projector on , then for any deterministic matrix :
(4)
we can estimate this projection, defining , thanks to the concentration inequality (for the concentration to be valid on all the values of the contour , one must be careful to consider a contour staying at a distance from the bulk, which is why we only consider here multiple-bulk distributions):
for some constants and for any projector defined on .
The approach we present here allows us to establish not only the concentration of , but also the concentration of any polynomial of finite degree taking as variables combinations of , and . The general idea is to develop the polynomial as an infinite series of powers of in such a way that the observable diameters of the different terms of the series sum to the smallest possible value. As described in the proof of Proposition 4.6, the summation becomes slightly elaborate when gets close to the spectrum.
After presenting the definitions and basic properties of convex and linear concentration (Section 1), we express the concentration of sums of linearly concentrated random vectors (Section 2). Then we express the concentration of the entry-wise product and the matrix product of convexly concentrated random vectors and matrices (Section 3). Finally, we deduce the concentration of the resolvent and provide a computable deterministic equivalent (Section 4).
1 Definition and first properties
The concentration inequality (2) is actually also valid for quasi-convex functionals, defined as follows.
Definition 1.1.
Given a normed vector space , a mapping is said to be quasi-convex iff for any , the set is convex.
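Equivalently (a standard reformulation, added here for clarity): $f$ is quasi-convex iff, for all $x, y$ in its domain and all $\lambda \in [0,1]$,
\[
f\big(\lambda x + (1-\lambda) y\big) \;\leq\; \max\big(f(x), f(y)\big).
\]
In particular, every convex function, and every nondecreasing function of a convex function, is quasi-convex.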
The theory of concentration of measure becomes relevant only when dimensions get big. In the cases under study in this paper, the dimension is given either by the number of entries or by the number of columns of random matrices; the number of rows is then understood to depend on , and we will sometimes write . We then follow the approach of Lévy families [Lé51], whose aim is to track the concentration speed through dimensionality. Therefore, we do not talk about the static concentration of a single vector but about the concentration of a sequence of random vectors, as seen in the definition below. In this paper, will either be , , or .
There will generally be three possibilities for the norms defining the Lipschitz character of the concentrated observations. Talagrand's theorem gives the concentration for the Euclidean norm (i.e., the Frobenius norm for matrices), but we will see that some concentrations are expressed with the nuclear norm (the dual norm of the spectral norm). Given two integers , the Euclidean norm on is denoted , and the spectral, Frobenius and nuclear norms are respectively defined, for any , by the expressions:
Definition 1.2.
Given a sequence of normed vector spaces , a sequence of random vectors , and a sequence of positive reals , we say that is convexly concentrated with an observable diameter of order iff there exist two positive constants such that and, for any -Lipschitz and quasi-convex function (for the norms ) (in this inequality, one could have replaced the term “” by “”, with an independent copy of , or by “”, with a median of ; all three definitions are equivalent),
We write in that case (or more simply ). The index in “” here refers to the power of in the concentration bound ; we will see some examples where this exponent is , in particular in the Hanson-Wright Theorem 3.5, where the notation “” will appear.
Talagrand's theorem can then be stated as follows:
Theorem 1.3 ([Tal95]).
A (sequence of) random vector with independent entries satisfies .
Convex concentration is preserved through affine transformations (as for the class of linearly concentrated vectors). Given two vector spaces and , we denote the set of affine transformations from to , and given , we decompose , where is the linear part of and is the translation part. When , is simply denoted .
Proposition 1.4.
Given two normed vector spaces and , a random vector and an affine mapping such that :
We pursue our presentation with the introduction of linear concentration. It is the “minimal” hypothesis necessary on a random vector to be able to bound quantities of the form , as explained in [LC19]. Here we will need its stability under sums when we express as an infinite series.
Definition 1.5 (Linearly concentrated vectors).
Given a sequence of normed vector spaces , a sequence of random vectors , a sequence of deterministic vectors , a sequence of positive reals , is said to be linearly concentrated around the deterministic equivalent with an observable diameter of order iff there exist two constants such that and for any unit-normed linear form (, : ):
When the property holds, we write . If it is unnecessary to mention the deterministic equivalent, we will simply write ; and if we just need to control the order of the norm of the deterministic equivalent, we can write when .
In the literature [BLM13], those vectors are commonly called sub-Gaussian random vectors.
The notions of linear concentration, convex concentration (and Lipschitz concentration) are equivalent for random variables, and we have the following important characterization through the moments:
Proposition 1.6 ([Led05], Proposition 1.8., [LC19], Lemma 1.22.).
Given a sequence of random variables and a sequence of positive parameters , we have the equivalence:
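The displayed equivalence was lost; it is presumably the standard one relating tail bounds and moment growth: for a random variable $X$ and a parameter $\sigma > 0$,
\[
\exists\, C, c > 0 :\ \mathbb{P}\big(\lvert X - \mathbb{E}[X] \rvert \geq t\big) \leq C e^{-(t/c\sigma)^2}\ \ \forall t > 0
\quad \Longleftrightarrow \quad
\exists\, c' > 0 :\ \mathbb{E}\big[\lvert X - \mathbb{E}[X] \rvert^r\big]^{1/r} \leq c' \sigma \sqrt{r}\ \ \forall r \geq 1.
\]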
We end with a simple lemma allowing us to state that “every deterministic vector at a distance smaller than the observable diameter from a deterministic equivalent is also a deterministic equivalent”.
Lemma 1.7 ([LC19], Lemma 2.6.).
Given a sequence of random vectors and two sequences of deterministic vectors , if , then:
2 Linear concentration through sums and integrals
Independence is known to be a key element of most concentration inequalities. However, linear concentration behaves particularly well under the concatenation of random vectors whose dependence cannot be disentangled.
The next proposition states that the observable diameter for the norm remains unchanged through concatenation. Given a product , where are normed vector spaces, we define the norm on with the following identity:
(5)
Proposition 2.1.
Given two sequences and , a constant , sequences of normed vector spaces , sequences of deterministic vectors , and sequences of random vectors (possibly dependent) satisfying, for any , , we have the concentration :
In other words, the linear observable diameter of cannot be bigger than the observable diameter of , where is chosen as the worst possible random vector satisfying the hypotheses of .
Remark 2.2.
Example 2.27. in [LC19] shows that this stability towards concatenation is not true for Lipschitz and convex concentration.
Proof 2.3.
Let us consider a linear function , such that
Given , let us denote the function defined as (where is in the entry). For any , one can write:
where and (). We have the inequality:
With this bound at hand, we plan to employ the characterization with the centered moments. Let us conclude thanks to Proposition 1.6 and the convexity of , for any :
If we want to consider the concatenation of vectors with different observable diameters, it is more convenient to look at the concentration in a space , for any given , where, for any :
Corollary 1
Given two constants , , , sequences of , sequences of deterministic vectors , and sequences of random vectors (possibly dependent) satisfying, for any , , we have the concentration :
Remark 2.4.
Proof 2.5.
We already know from Proposition 2.1 that:
Let us then consider the linear mapping:
the Lipschitz character of is clearly , and we can deduce the concentration of .
Corollary 1 is very useful to establish the concentration of infinite series of concentrated random variables. This is settled thanks to an elementary result of [LC19] stating that the observable diameter of a limit of random vectors is equal to the limit of the observable diameters. Be careful that, rigorously, there are two indexes: one coming from Definition 1.5, which only describes the concentration of sequences of random vectors, and one particular to this lemma, which will tend to infinity. For clarity, we do not mention the index .
Lemma 2.6 ([LC19], Proposition 1.12.).
Given a sequence of random vectors , a sequence of positive reals and a sequence of deterministic vectors such that:
if we assume that converges in law (i.e., for any bounded continuous mapping , when tends to infinity) to a random vector , that and that , then:
(The result also holds for Lipschitz and convex concentration)
Corollary 2
Given two constants , , a (sequence of) normed vector spaces , deterministic, and random (possibly dependent) satisfying, for any , . If we assume that is pointwise convergent (for any , ), that , defined as , is well defined, and that , then we have the concentration:
Proof 2.7.
The concentration of infinite series directly implies the concentration of resolvents and other related operators (like for instance).
Corollary 3
Given a (sequence of) vector space , let be a (sequence of) random affine mappings such that there exist a constant satisfying and a (sequence of) integers satisfying, for all (sequences of) integers :
Then the random equation
admits a unique solution satisfying the linear concentration:
In practical examples, is rarely bounded by for all drawings of , and to obtain the concentration of with an observable diameter of order , one needs to place oneself on an event satisfying . Then, thanks to a simple adaptation of Lemma 4.2 below to the case of linear concentration, we have the concentration . When for and is sufficiently concentrated, it is generally possible to choose an event of overwhelming probability.
As will be seen in Section 4, this corollary finds its relevance under convex concentration hypotheses, where linear concentration seems to be the best concentration property obtainable on the resolvent .
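To fix ideas on how Corollary 3 produces resolvents, recall the classical Neumann series (the symbols $S$, $z$ are ours): for a square matrix $S$ and any $z$ with $\lvert z \rvert > \lVert S \rVert$,
\[
(z I_p - S)^{-1} \;=\; \frac{1}{z} \sum_{k=0}^{\infty} \Big(\frac{S}{z}\Big)^{k},
\]
each partial sum being an affine (polynomial) image of $S$, so that the linear concentration of the powers of $S$ transfers to the resolvent through Corollary 2.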
Proof 2.8.
In order to satisfy the hypotheses of Corollary 3, but also out of independent interest, we now express the concentration of products of convexly concentrated random matrices.
3 Degeneracy of convex concentration through product
Given two convexly concentrated random vectors satisfying , the convex concentration of the couple is ensured if:
1. and are independent,
2. with affine and .
We can then in particular state the concentration of as it is a linear transformation of . For the product , things are not as simple as for Lipschitz concentration; let us first consider the particular case of the entry-wise product in . Since this result is not important for the rest of the paper, we leave its proof to Appendix A.
Theorem 3.1.
Given a (sequence of) integers , a (sequence of) positive numbers such that , and a (sequence of) random vectors , if we suppose that
(with the notation defined in (5)) and that there exists a (sequence of) positive numbers such that , then:
And if , the constant is no longer needed and we get the concentration .
Remark 3.2.
If we replace the strong assumption with the bound , we can still deduce a result similar to [LC21a, Example 4.], stating the existence of a constant such that:
The concentration of a product of convexly concentrated matrices was already proven in [MS11], but since their formulation is slightly different, we reprove in Appendix A the following result with the formulation required for the study of the resolvent.
Theorem 3.3 ([MS11], Theorem 1).
Let us consider three sequences and , and a sequence of random matrices satisfying (the norm is defined on by the identity ):
in
In the particular case where , it is sufficient to assume that in (be careful that does not imply that ; this is only true when is endowed with the norm , satisfying for any ). If there exists a sequence of positive values such that , then the product is concentrated for the nuclear norm:
where, for any , (it is the dual norm of the spectral norm).
Remark 3.4.
The hypothesis might look quite strong; however, in classical settings where and , it has been shown that there exist three constants such that . Placing ourselves on the event , we can then show from Lemma 4.2 below that:
and
(here and ). The same inferences hold for the concentration of .
We end this section on the concentration of products of convexly concentrated random vectors with the Hanson-Wright theorem, which will find some use in the estimation of . This result was first proven in [Ada15]; an alternative proof with our notations is provided in [LC21a, Proposition 8] (that paper only studies the Lipschitz concentration case; however, since quadratic forms are convex, the arguments stay the same under convex concentration hypotheses).
Theorem 3.5 ([Ada15]).
Given two random matrices such that and , for any :
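The display of the statement is missing; for reference, the classical scalar form of the Hanson-Wright inequality (the matrix formulation of Theorem 3.5 differs, see [Ada15, LC21a]) reads: for a convexly concentrated random vector $X \in \mathbb{R}^n$ and a deterministic matrix $A$, there exist $C, c > 0$ such that
\[
\mathbb{P}\big(\lvert X^T A X - \mathbb{E}[X^T A X] \rvert \geq t\big)
\;\leq\;
C \exp\Big(-c \min\Big(\frac{t^2}{\lVert A \rVert_F^2}, \frac{t}{\lVert A \rVert}\Big)\Big),
\qquad \forall t > 0.
\]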
4 Concentration of the resolvent of the sample covariance matrix of convexly concentrated data
4.1 Assumptions on and “concentration zone” of the resolvent
Given data , to study the spectral behavior of the sample (non-centered) covariance matrix , where , one classically studies the resolvent for the values of where it is defined. Let us denote the eigenvalues of by , for ( then ), and the spectral distribution of :
which has Stieltjes transform .
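For reference (with our own notations, the displays being lost), the spectral distribution and its Stieltjes transform presumably read
\[
\mu \;=\; \frac{1}{p} \sum_{i=1}^{p} \delta_{\lambda_i},
\qquad
m_\mu(z) \;=\; \int \frac{d\mu(\lambda)}{\lambda - z} \;=\; \frac{1}{p} \operatorname{tr} Q(z),
\]
under the convention $Q(z) = (S - z I_p)^{-1}$ for the resolvent of $S = \frac{1}{n} X X^T$.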
The present study was already led in previous papers, in the case of Lipschitz concentration of [LC21b], or in the case of convex concentration of but with negative [LC19]. The goal of this section is mainly to present the consequences of Theorem 3.3 and to adapt the recent results of [LC21b] to the case of convex concentration. We adopt here classical hypotheses and assume a convex concentration for .

Assumption 1 (Convergence scheme). .

Assumption 2 (Independence). are independent.

Assumption 3 (Concentration). .

Assumption 4 (Bounding condition). . (As already done in [LC19], but with real negative , one can obtain the same conclusion assuming that there are a finite number of classes for the distribution of the columns and that .)

When gets big, distributes along a finite number of bulks. To describe them, let us consider a positive parameter , which could be chosen arbitrarily small (it will though be chosen independent of in most practical cases), and introduce, as in [LC21b], the sets:
One can show that and introducing the event:
the concentration of allows us to state the following (in [LC21b], the proof is conducted under Lipschitz concentration hypotheses on ; however, since only the linear concentration of is needed, the justifications are the same in a context of convex concentration, thanks to Theorem A.7):
Lemma 4.1 ([LC21b], Lemma 3.).
There exist two constants such that .
The following lemma allows us to conduct the concentration study on the highly probable event (when ).
Lemma 4.2.
Given a (sequence of) positive numbers , a (sequence of) random vectors satisfying , and a (sequence of) convex subsets , if there exists a constant such that , then (that is, there exist two constants such that, for any (sequence of) -Lipschitz and quasi-convex mappings : , and similar concentrations occur around any median of or any independent copy of , under ):
Proof 4.3.
The proof is the same as the one provided in [LC21b, Lemma 2.] except that this time one needs the additional argument that, since (for a median of ) is convex, the mappings and are both quasi-convex thanks to the triangle inequality.
We can deduce from Lemma 4.2 that for all , , and the random matrix is far easier to control because (we recall that ).
4.2 Concentration of the resolvent
Placing ourselves under the event , let us first show that the resolvent is concentrated when has a big enough modulus. Be careful that the following concentration is expressed for the nuclear norm (for any deterministic matrix such that , ). All the following results are provided under Assumptions 1-4. The next proposition is provided as a first direct application of Theorem 3.3 and Corollary 2; a stronger result is given in Proposition 4.6.
Proposition 4.4.
Given two parameters and such that :
Proof 4.5.
Let us now study the concentration of when gets close to the spectrum; for that, we now require to be a constant ().
Proposition 4.6.
Given , for all :
and we recall that there exist two constants such that .
Proof 4.7.
Proposition 4.4 already established the result for ; let us now suppose that .
With the notation , let us decompose:
(6)
We can then deduce the linear concentration of with the same justifications as previously thanks to the Taylor decomposition:
Indeed, and thus:
We therefore deduce from (6) that:
For the sake of completeness, we leave in the appendix an alternative, more laborious (but somewhat more direct) proof, already presented in [LC19].
4.3 Computable deterministic equivalent
We are going to look for a deterministic equivalent of . We mainly follow the lines of [LC21b] and thus allow ourselves to present the justifications rather succinctly. Although Proposition 4.6 gives us a concentration of in nuclear norm, we will provide a deterministic equivalent for the Frobenius norm with a better observable diameter. For any , let us introduce and recall that, for any , we write . We have the following first approximation of :
Proposition 4.8.
For any :
and
To prove this proposition, we will exploit the dependence of on , with the notation and:
To link to we will extensively use a direct application of the Schur identity:
(7)
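The display of (7) being lost, we recall the standard form this identity presumably takes (under the convention, assumed here, that $Q = (\frac{1}{n} X X^T - z I_p)^{-1}$ and that $Q_{-i}$ denotes the resolvent with the column $x_i$ removed):
\[
Q \;=\; Q_{-i} \;-\; \frac{\frac{1}{n}\, Q_{-i}\, x_i x_i^T\, Q_{-i}}{1 + \frac{1}{n}\, x_i^T Q_{-i}\, x_i},
\qquad \text{so in particular} \qquad
Q\, x_i \;=\; \frac{Q_{-i}\, x_i}{1 + \frac{1}{n}\, x_i^T Q_{-i}\, x_i}.
\]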
Proof 4.9.
All the estimations hold under ; therefore, the expectation should also be taken under to be fully rigorous. Note that even if and are independent on the whole universe, they are no longer independent under . However, since the probability of is overwhelming, the correction terms are negligible; we thus allow ourselves to set aside in this proof the independence and approximation issues related to . A rigorous justification is provided in [LC21b].
Let us bound for any deterministic matrix such that :
We can then develop with (7):
thanks to Lemma 4.13 and the independence between and . We can then bound, thanks to Hölder's inequality and Lemma 4.15 below:
indeed, since we know from Lemma 4.10 that , is a -Lipschitz transformation of and therefore follows the same concentration inequality (with a variance of order ). Since this inequality is true for any , we can bound:
which directly implies that and .
Lemma 4.10 ([LC21b], Lemmas 4., 8. ).
, under :
and
Lemma 4.11.
For any , any and any such that :
Proof 4.12.
We leave aside the independence issues brought by . Let us simply bound, for any and under :
for some constants . Besides, we can bound:
Lemma 4.13.
Under , for any and any :
Proof 4.14.
For any , we can bound thanks to Lemma 4.11:
Lemma 4.15.
For any deterministic matrix :
Proof 4.16.
Theorem 0.1 is then a consequence of the following proposition proven in [LC21b] (once Proposition 4.8 is proven, the convex concentration particularities do not intervene anymore). Recall that is defined as the unique solution to the equation:
where .
Proposition 4.17.
For all :
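To complement these statements, here is a minimal numerical sketch of the fixed-point iteration defining the deterministic equivalent, assuming the system takes the standard form of [LC21b] recalled earlier; all names (`deterministic_equivalent`, `Sigmas`, `Lam`) and the normalization by $n$ are our own illustrative choices, not the paper's.

```python
import numpy as np

def deterministic_equivalent(Sigmas, z, n_iter=500, tol=1e-10):
    """Hypothetical sketch of the fixed-point iteration for the
    deterministic equivalent of Q(z) = (X X^T / n - z I_p)^{-1},
    assuming columns x_i with covariances Sigmas[i] and the system
        Lambda_i = tr(Sigma_i Qtilde) / n,
        Qtilde   = (sum_j Sigma_j / (n (1 + Lambda_j)) - z I_p)^{-1}."""
    n, p = len(Sigmas), Sigmas[0].shape[0]
    Lam = np.zeros(n, dtype=complex)  # initial guess for Lambda
    for _ in range(n_iter):
        # build the candidate deterministic equivalent Qtilde(Lambda)
        M = sum(S / (1.0 + l) for S, l in zip(Sigmas, Lam)) / n - z * np.eye(p)
        Qt = np.linalg.inv(M)
        Lam_new = np.array([np.trace(S @ Qt) / n for S in Sigmas])
        if np.max(np.abs(Lam_new - Lam)) < tol:  # fixed point reached
            return Qt, Lam_new
        Lam = Lam_new
    return Qt, Lam

# Usage sketch: identical covariances recover the Marchenko-Pastur equivalent.
# p, n, z = 100, 200, -1.0 + 0.0j
# Sigmas = [np.eye(p)] * n
# Qt, Lam = deterministic_equivalent(Sigmas, z)
```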
Appendix A Proofs of the concentration of products of convexly concentrated random vectors and of convexly concentrated random matrices
We will use several times the following elementary result:
Lemma A.1.
Given a convex mapping , and a vector , the mapping is convex (so in particular quasi-convex).
To efficiently manage the concentration rate when multiplying a large number of random vectors, we will also need:
Lemma A.2.
Given commutative or non-commutative variables of a given algebra, we have the identity:
where is the cardinality of .
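The display being lost, the identity is presumably the standard polarization formula: for variables $a_1, \dots, a_n$ of an algebra,
\[
\sum_{\sigma \in \mathfrak{S}_n} a_{\sigma(1)} \cdots a_{\sigma(n)}
\;=\;
\sum_{S \subseteq \{1, \dots, n\}} (-1)^{\,n - \#S} \Big(\sum_{i \in S} a_i\Big)^{n},
\]
where $\#S$ is the cardinality of $S$; in the commutative case, the left-hand side reduces to $n!\, a_1 \cdots a_n$.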
Proof A.3.
The idea is to inverse the identity:
thanks to Rota's formula (see [Rol06]), which states, for any mappings defined on the set of subsets of and having values in a commutative group (for the sum):
where is an analogue of the Möbius function for the order relation induced by inclusion in . In our case, for any , if we set:
and
we see that for any , ; therefore, applying Rota's formula in the case , we obtain the result of the lemma (in that case, and ).
Proof A.4 (Proof of Theorem 3.1).
Let us first assume that all the are equal to a vector . Considering , we want to show the concentration of where are the entries of .
The mapping is not quasi-convex when is odd; in that case, we therefore decompose it into the difference of two convex mappings , where:
and (8)
(say that, if is even, then we set and ). For the same reasons, we decompose and into:
and
(for ), so that:
becomes a combination of quasi-convex functionals of . We now need to measure their Lipschitz parameter. Let us bound for any :
and the same holds for , and . Note then that , , and are all -Lipschitz to conclude on the concentration of .
Now, if we assume that the are different, we employ Lemma A.2 in this commutative case to write ():
(9)
Therefore, the sum being -Lipschitz for the norm , we know that , , and , so that . We can then exploit Proposition 2.1 to obtain
(note that ). Thus, summing the concentration inequalities, we can conclude from Equation (9) and Stirling's formula that:
For the concentration of the matrix product, we introduce a new notion of concentration, namely the transversal convex concentration. Let us give some definitions.
Definition A.5.
Given a sequence of normed vector spaces , a sequence of groups , each (for ) acting on , a sequence of random vectors , and a sequence of positive reals , we say that is convexly concentrated transversally to the action of with an observable diameter of order , and we write , iff there exist two constants such that and, for any -Lipschitz, quasi-convex and -invariant (for any and , ) function (once again, one could have replaced here by or ):
.
Remark A.6.
Given a normed vector space , a group acting on and a random vector , we have the implication chain:
Considering the actions:
• on , where for and , ,
• on , where for and , ,
the convex concentration in transversally to can be expressed as a concentration on transversally to thanks to the introduction of the mapping associating to any matrix the ordered sequence of its singular values :
(there exists such that , where has on the diagonal).
Theorem A.7 ([Led05], Corollary 8.23. [LC19], Theorem 2.44).
Given a random matrix :
(where the concentration inequalities are implicitly expressed for Euclidean norms: on and on ).
Proof A.8 (Proof of Theorem 3.3).
Let us start by studying the case where and in . We know from Theorem A.7 that:
and therefore, as a -Lipschitz linear observation of (see Theorem 3.1), follows the concentration:
Now, we consider the general setting where we are given matrices and a deterministic matrix satisfying , and we want to show the concentration of . First note that we stay within the hypotheses of the theorem if we replace with ; we are thus left to show the concentration of . We cannot employ Lemma A.2 again without a strong commutativity hypothesis on the matrices . Indeed, one could not have gone further than a concentration on the whole term . However, we can still introduce the random matrix
then
where for , . Since satisfies and , the first part of the proof provides the concentration in , which directly implies the concentration of .
Appendix B Alternative proof of Proposition 4.6
We are going to show the concentration of the real part and the imaginary part of , where:
Since it is harder, we will only prove the linear concentration of . For that, we are going to decompose, for any matrix with unit spectral norm, the random variable as a sum of convex and -Lipschitz mappings of . Let us introduce the two mappings and , defined for any and by:
We then have the identity .
We then look at the second derivative of to prove convexity properties of . Given , let us compute:
and given :
where:
•
•
• .
First, we deduce from the expression of the first derivative, and thanks to Lemma 4.10, that on , is a -Lipschitz transformation of (for the Frobenius norm).
Second, choosing :
In this identity the only term raising an issue is because is not nonnegative symmetric. We can however still bound:
for (in particular and ). Now, if we denote the mapping defined for any as , we see that is a quadratic functional on and is thus convex. It is besides -Lipschitz on (for the Frobenius norm). Assuming first that is nonnegative symmetric and choosing a constant sufficiently big, we show that is convex and -Lipschitz on , like . We thus have the concentration:
Now, given a general matrix , we decompose , where and are nonnegative symmetric and is anti-symmetric; in that case , and we can conclude in the same way. This eventually gives us the concentration stated in the proposition.
References
- [Ada11] Radosław Adamczak. On the Marchenko-Pastur and circular laws for some classes of random matrices with dependent entries. Electronic Journal of Probability, 16:1065–1095, 2011.
- [Ada15] Radosław Adamczak. A note on the Hanson-Wright inequality for random vectors with dependencies. Electronic Communications in Probability, 20(72):1–13, 2015.
- [BLM13] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.
- [Gro79] Mikhail Gromov. Paul Lévy's isoperimetric inequality. Preprint IHES, 1979.
- [GZ00] Alice Guionnet and Ofer Zeitouni. Concentration of the spectral measure for large matrices. Electronic Communications in Probability, 5:119–136, 2000.
- [HT21] Han Huang and Konstantin Tikhomirov. On dimension-dependent concentration for convex Lipschitz functions in product spaces. arXiv:2106.06121v1, 2021.
- [LC19] Cosme Louart and Romain Couillet. Concentration of measure and large random matrices with an application to sample covariance matrices. arXiv:1805.08295, 2019.
- [LC21a] Cosme Louart and Romain Couillet. Concentration of measure and generalized product of random vectors with an application to hanson-wright-like inequalities. arXiv preprint arXiv:2102.08020, 2021.
- [LC21b] Cosme Louart and Romain Couillet. Spectral properties of sample covariance matrices arising from random matrices with independent non identically distributed columns. arXiv preprint, 2021.
- [Led05] Michel Ledoux. The concentration of measure phenomenon. Number 89. American Mathematical Soc., 2005.
- [Lé51] Paul Lévy. Problèmes concrets d'analyse fonctionnelle. Gauthier-Villars, 1951.
- [Mil71] Vitali Milman. Asymptotic properties of functions of several variables that are defined on homogeneous spaces. Doklady Akademii Nauk SSSR, 199:1247–1250, 1971.
- [MS11] Mark W. Meckes and Stanisław J. Szarek. Concentration for noncommutative polynomials in random matrices. Proceedings of the American Mathematical Society, 2011.
- [Nou09] Noureddine El Karoui. Concentration of measure and spectra of random matrices: applications to correlation matrices, elliptical distributions and beyond. The Annals of Applied Probability, 19(6):2362–2405, 2009.
- [Rol06] Robert Rolland. Fonctions de Möbius – formule de Rota. 2006.
- [Tal95] Michel Talagrand. Concentration of measure and isoperimetric inequalities in product spaces. Publications Mathématiques de l'I.H.É.S., 81:73–205, 1995.
- [Tao12] Terence Tao. Topics in random matrix theory, volume 132. American Mathematical Soc., 2012.
- [VW14] Van Vu and Ke Wang. Random weighted projections, random quadratic forms and random eigenvectors. Random Structures and Algorithms, 2014.