-transform in finite free probability
Abstract.
We characterize the limiting root distribution of a sequence of polynomials with nonnegative roots and degree , in terms of their coefficients. Specifically, we relate the asymptotic behaviour of the ratio of consecutive coefficients of to Voiculescu’s -transform of .
In the framework of finite free probability, we interpret these ratios of coefficients as a new notion of finite -transform, which converges to in the large limit. It also satisfies several properties analogous to those of the -transform in free probability, including multiplicativity and monotonicity.
The proof of the main theorem is based on various ideas and new results relating finite free probability and free probability. In particular, we provide a simplified explanation of why free fractional convolution corresponds to the differentiation of polynomials, by finding how the finite free cumulants of a polynomial behave under differentiation.
This new insight has several applications that strengthen the connection between free and finite free probability. Most notably, we generalize the approximation of to and prove a finite approximation of the Tucci–Haagerup–Möller limit theorem in free probability, conjectured by two of the authors. We also provide finite analogues of the free multiplicative Poisson law, the free max-convolution powers and some free stable laws.
1. Introduction
The main object of interest in this paper is the finite free multiplicative convolution of polynomials. This is a binary operation on the set of polynomials of degree which can be defined explicitly in terms of the coefficients of the polynomials involved. For polynomials
(1.1) |
the finite free multiplicative convolution of and is defined as
(1.2) |
This operation is bilinear, commutative, and associative, and, more importantly, it is closed on the set of polynomials with all non-negative real roots. Moreover, it is known that the maximum root of is bounded above by the product of the maximum roots of and . These basic properties have been well understood for more than a century; see for instance [Wal22].
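For concreteness, in the normalized-coefficient notation of [MSS22] and [MMP24a] (the symbol $\tilde e_k(p)$ is used below only as shorthand for the $k$-th normalized coefficient of $p$), definitions (1.1) and (1.2) take the form
\[
p(x)=\sum_{k=0}^{d} x^{d-k}(-1)^{k}\binom{d}{k}\tilde e_k(p),\qquad
q(x)=\sum_{k=0}^{d} x^{d-k}(-1)^{k}\binom{d}{k}\tilde e_k(q),
\]
\[
(p\boxtimes_d q)(x)=\sum_{k=0}^{d} x^{d-k}(-1)^{k}\binom{d}{k}\,\tilde e_k(p)\,\tilde e_k(q).
\]
In words, once the coefficients are normalized by the binomial factors, the multiplicative convolution simply multiplies them entrywise.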
Recently, this convolution was rediscovered in the study of characteristic polynomials of certain random matrices [MSS22]. To elaborate, for -dimensional positive semidefinite matrices and with characteristic polynomials and , respectively, their finite free multiplicative convolution, , is the expected characteristic polynomial of , where is a random unitary matrix sampled according to the Haar measure on the unitary group of degree . One motivation behind this new interpretation in terms of random matrices was to derive new polynomial inequalities for the roots of the convolution. Marcus, Spielman, and Srivastava achieved a stronger bound on the maximum root using tools from free probability, in particular Voiculescu’s -transform. In light of this, Marcus pursued this connection further [Mar21] by suggesting and providing evidence that, as the degree tends to infinity, the finite free convolution of polynomials, , tends to the free multiplicative convolution of measures, , thus initiating the theory of finite free probability. This was rigorously proved in [AGP23].
The study of polynomial convolutions from the perspective of finite free probability has strengthened the connections between geometry of polynomials, random matrices, random polynomials and free probability theory, and has given new insights into these topics. In particular, after the original paper of Marcus, Spielman and Srivastava [MSS22] and the work of Marcus [Mar21], various papers have investigated limit theorems for important sequences of polynomials or their finite free convolutions, see [FU23, AP18, AGP23, MMP24a, Kab22, HK21, Kab21, AFU24].
In a certain sense, the present paper continues this line of research, understanding the relation of finite free probability with free probability in the large limit.
One of the motivations of this work is to give a concrete understanding of how the limiting behaviour of the coefficients of a polynomial is connected with convergence in distribution to a probability measure. More specifically, consider a sequence of polynomials of degree such that the empirical root distribution of the polynomials, denoted by , converges weakly to a measure as the degree tends to infinity, and assume that has coefficients given as in (1.1). Then a natural question is what happens to the coefficients in the limit, or, conversely, what conditions on the behaviour of the coefficients ensure that converges weakly to . The naive approach of fixing yields a trivial limit; see Corollary 10.3.
From their studies of the law of large numbers, Fujie and Ueda [FU23] observed that it may be more fruitful to look at the ratio of two consecutive coefficients, see Section 1.3 below. Our main result says that this is indeed the case. We find that one can get a meaningful limit when considering the ratio and taking a diagonal limit, i.e. letting and tend to infinity with approaching some constant . Furthermore, such a limit can be tracked down explicitly and has a precise relation to Voiculescu’s -transform.
1.1. -transform
Recall that the main analytic tool to study the free multiplicative convolution is the -transform, introduced by Voiculescu [Voi87] and studied in detail by Bercovici and Voiculescu [BV92, BV93]. Its importance in free probability stems from the fact that characterizes the measure and is multiplicative with respect to free multiplicative convolution:
We refer the reader to Section 2.4.3 for more details.
Our results can be nicely presented if we introduce a new finite free -transform. (Let us mention that Marcus already defined a modified -transform in [Mar21] that tends to the inverse of Voiculescu’s -transform. Our definition is different, and the relation with Marcus’ transform is not clear; we will address this in more detail in Remarks 6.2 and 6.8.) Consider again a polynomial as in (1.1) and, for simplicity, let us assume that all the roots of are strictly positive so that the coefficients are non-zero. In this case the finite -transform of is the map given by
It is straightforward from (1.2) that . Besides the multiplicative property, this map satisfies many other properties analogous to Voiculescu’s -transform, such as monotonicity, the same image and range, and a formula for the reversed polynomial; see Section 6.2.
More importantly, in the limit as the dimension goes to infinity, the convergence of the empirical root distribution of a sequence of polynomials to some measure is equivalent to the convergence of the finite -transform of to the -transform of . This is the content of our first main result.
Theorem 1.1.
Let be a sequence of polynomials with and . The following are equivalent:
-
(1)
The weak convergence: .
-
(2)
For every and every sequence with and , one has
(1.3)
The details of the proof are given in Section 7. Similarly, we can also define a finite symmetric -transform of a symmetric polynomial all of whose roots are non-zero as follows:
and prove an analogous result, see Section 8.1 for more details.
To show the effectiveness of the above result, let us give a few examples of well-known polynomials and their limiting distributions. Further related examples will be given in Section 10.
-
•
(Laguerre polynomials) Let be the normalized Laguerre polynomials of degree and with parameter , where and for and . Then its finite -transform satisfies:
where is the Marchenko–Pastur distribution with parameter . This gives an alternative proof of the well-known fact: . For the notation and details, see Example 10.10.
-
•
(Hermite polynomials) Let be the normalized Hermite polynomials of degree . We have the convergence of finite symmetric -transform:
where is the standard semicircular distribution. This gives an alternative proof of the well-known fact: ; see Example 10.16.
-
•
The (normalized) Chebyshev polynomials of the first kind may be written as Then we have
where is the arcsine law on . See [AH13] for the -transform of .
-
•
The (normalized) Chebyshev polynomials of the second kind can be expressed as . Then we obtain
1.2. Quick overview of the proof
While the proof of our main theorem is technical for the general case, it relies on simple but useful results that provide a deeper understanding of the relation between finite free convolution and differentiation.
As we prove in Section 3, the finite free cumulants of the derivatives of a polynomial can be directly related to the finite free cumulants of . That is, by using the notation
we have the relation
Using the fact that convergence of finite free cumulants implies weak convergence, see [AP18], the equation above allows us to give a new proof of the following result, relating derivatives to free convolution powers.
Theorem 1.2 ([HK21], [AGP23]).
Fix a compact subset of . Let and be a sequence of polynomials of degree such that every has only roots in and . Then, for a parameter and a sequence of integers such that and , we have as .
It is worth mentioning that if in the previous result we allow or , we can also draw some conclusions about the limiting distribution, see Remark 3.6. Also, in Theorem 7.7 of Section 7.4 we will show that the assertion of Theorem 1.2 still holds if we drop the assumption on the uniform compactness.
Once Theorem 1.2 is settled, the last ingredients that connect the finite -transform with the -transform are two observations. First, the -transform of a measure contained on for is related to the value of the Cauchy transform at of the free convolution powers:
Second, for the finite -transform, a similar relation holds but in this case for derivatives of a polynomial with strictly positive roots, namely,
Then, since weak convergence implies the convergence of the Cauchy transform on compact sets outside the support, from the convergence of we may conclude that
However, when attempting to upgrade these considerations to the case when the support of includes or is unbounded, one faces technical difficulties such as or possibly being undefined. To get around this, we use uniform bounds on the roots of polynomials after repeated differentiation. One important tool that we use is a partial order on polynomials; see Section 4.
1.3. Relation to Multiplicative Law of Large Numbers
Notice that the existence of the limit (1.3) amounts to the fact that the ratio of two consecutive coefficients approaches some limit.
The intuition for why this limit should exist comes from the law of large numbers for the free multiplicative convolution due to Tucci, and to Haagerup and Möller, as well as its finite counterpart due to Fujie and Ueda [FU23].
Recall that Tucci [Tuc10], Haagerup and Möller [HM13, Theorem 2] proved that, for every , there exists a measure such that
where for and the measure denotes the pushforward of by .
The measure is characterized by
provided that is not a Dirac measure. If for some , then .
Fujie and Ueda [FU23, Theorem 3.2] proved a finite free analogue of this result. Namely, for , there exists a limiting polynomial
Here, if has roots , then stands for the polynomial with roots . Moreover, for , the -th largest root of is given by , which is the multiplicative inverse of the finite -transform at .
In [FU23, Conjecture 4.4], it was conjectured that the map tends to the map as tends to . As a consequence of our main theorem, we will prove this conjecture in Section 9.
Theorem 1.3.
Let be a sequence of polynomials with and . The convergence is equivalent to the convergence .
1.4. Organization of the paper
Besides this introductory section and the upcoming section with basic preliminaries, the paper is roughly divided into three parts.
First, Sections 3, 4 and 5 contain key results used in the proof of the main theorem. Each topic might be of independent interest.
-
•
In Section 3, we show how the finite free cumulants of a polynomial behave under differentiation, which yields a more direct proof of Theorem 1.2.
-
•
In Section 4, we equip the set of real-rooted polynomials with a partial order that allows us to reduce our study to simpler polynomials with all its roots equal to 0 or 1.
-
•
We bound the roots obtained after differentiating these simple polynomials several times in Section 5 by using classic bounds on the roots of Jacobi polynomials.
Next, Sections 6, 7, and 8 concern our central object, the finite -transform and its relation to Voiculescu’s -transform.
-
•
In Section 6, we introduce the finite -transform and study its basic properties.
-
•
Section 7 is devoted to showing that the finite -transform tends to Voiculescu’s -transform.
-
•
In Section 8, we extend the definition of the -transform to symmetric polynomials and explore the case where the polynomials have roots only in the unit circle.
Finally, in Sections 9 and 10 we provide examples and applications.
-
•
In Section 9, we prove Theorem 1.3, showing that the limit theorem of Fujie and Ueda is a finite approximation of the Tucci–Haagerup–Möller limit theorem.
-
•
Section 10 contains examples and applications in various directions: an approximation of free convolution, a limit for the coefficients of a polynomial, examples with hypergeometric polynomials, finite analogues of some free stable laws, a finite free multiplicative Poisson law, and finite free max-convolution powers.
2. Preliminaries
2.1. Measures
We use to denote the family of all Borel probability measures on . When we want to specify that the support of the measure is contained in a subset we use the notation
In most cases we will let be a subset of the real line , such as the positive real line , the non-negative real line , or a compact interval .
Notation 2.1 (Basic transformations of measures).
We fix the following notation:
-
•
Dilation. For and , we let be the measure satisfying that
-
•
Shift. For , we let be the measure satisfying that
-
•
Power of a measure. For and , we denote by the measure satisfying that
-
•
Reversed measure. For such that , we denote by the measure satisfying that
2.2. Polynomials
We denote by the family of monic polynomials of degree . As with measures, for a subset we use the notation
Given we denote by the roots of (accounting for multiplicity). If we further assume that .
Given , the symbol denotes the normalized -th elementary symmetric polynomial on the roots of , namely
with the convention that . Hence, we can express any as
The empirical root distribution of is defined as
and we let the -th moment of be the -th moment of , that is,
Notice that the map provides a bijection from onto the set
Moreover, we have that for every , where the latter is defined as . Notice also that for every , the subset is invariant under all the transformations in Notation 2.1, thus we can use the bijection to define the analogous transformations on the set of polynomials:
Notation 2.2 (Basic transformations of polynomials).
Let and .
-
•
Dilation. For , we define .
-
•
Shift. For , we define .
-
•
Power of a polynomial. Given and , is the polynomial with roots for .
-
•
Reversed polynomial. For , the reversed polynomial of is the polynomial with coefficients
(2.1)
2.3. Weak convergence
In this section, we present well-known facts and results on weakly convergent sequences of probability measures on .
Given a compact interval and a probability measure , the Cauchy transform of is
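With the standard convention (assumed here), this is
\[
G_{\mu}(z)=\int_{\mathbb{R}}\frac{d\mu(t)}{z-t},\qquad z\in\mathbb{C}\setminus\operatorname{supp}(\mu).
\]
In particular, for the empirical root distribution $\mu_p$ of a polynomial $p$ of degree $d$ one has $G_{\mu_p}(z)=\frac{p'(z)}{d\,p(z)}$.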
Lemma 2.3.
Let us consider and measures . The following assertions are equivalent.
-
(1)
The weak convergence: as .
-
(2)
For all , we have .
-
(3)
Let . For all we have that .
Proof.
The proof is similar to [Bor19, Corollary 3.1], but we include the proof for the reader’s convenience. For , the real and imaginary parts of are bounded and continuous on . Thus the definition of weak convergence shows the implication . The implication is trivial. We are only left to show that . By Helly’s selection theorem (see e.g. [Dur19, Theorem 3.2.12]), for any subsequence of , there exists a further subsequence and a finite measure (on ) such that vaguely converges to , and therefore for all , where the Cauchy transform can be defined even if is a finite measure. By assumption (3), we have for all . Since and are analytic on , the principle of analytic extension shows that for all . Finally, we get by the Stieltjes inversion formula, and hence . Then [Dur19, Theorem 3.2.15] yields . ∎
Given , its cumulative distribution function is the function such that
Let and be probability measures on . It is well known that the weak convergence of to is equivalent to the convergence of their cumulative distribution functions to on the continuous points of . Actually, such convergence is locally uniform by Polya’s theorem if is continuous.
Lemma 2.4.
Pointwise convergence of to the continuous function implies locally uniform convergence.
The right-continuous inverse of is defined to be
see [Dur19, The proof of Theorem 1.2.2.] for example. The following lemma is usually used to show Skorohod’s representation theorem in dimension one.
Lemma 2.5 (Ref. [Dur19, Theorem 3.2.8.]).
The convergence of to on the continuous points of is equivalent to the convergence of their right-continuous inverse functions to on the continuous points of .
2.4. Free Probability
In this section we review some of the basics of free probability. For a comprehensive introduction to free probability, we recommend the monographs [VDN92, NS06, MS17].
Free additive and multiplicative convolutions, denoted by and respectively, correspond to the sum and product of free random variables, that is, and for free random variables and . In this paper, rather than using the notion of free independence we will work solely with the additive and multiplicative convolutions, which can be defined in terms of the and transforms, respectively.
2.4.1. Free additive convolution and -transform
Given measures , their free additive convolution is the distribution of , where and are freely independent non-commutative random variables distributed as and , respectively. The convolution was introduced by Voiculescu as a binary operation on compactly supported measures. The definition was extended to measures with unbounded support in [BV93].
The free convolution can be described analytically by the use of the -transform. For every , it is known that the Cauchy transform has a unique compositional inverse, , in a neighborhood of infinity, . Thus, one has
The -transform of is defined as .
Definition 2.6 (Free additive convolution).
Let and be probability measures on the real line. We define the free convolution of and , denoted by , as the unique measure satisfying
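In the standard conventions (assumed here), writing $K_\mu:=G_\mu^{-1}$ for the compositional inverse near $0$ and $R_\mu(z):=K_\mu(z)-\frac{1}{z}$, this defining relation reads
\[
R_{\mu\boxplus\nu}(z)=R_{\mu}(z)+R_{\nu}(z)
\]
on a neighborhood of $0$ where all three transforms are defined.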
2.4.2. Free cumulants
For any probability measure , we denote by its -th moment whenever is finite. The free cumulants [Spe94] of , denoted by , are recursively defined via the moment-cumulant formula
where denotes the set of noncrossing partitions of and are the multiplicative extension of . It is easy to see that the sequence fully determines and vice versa. In this case we can recover the cumulants from the -transform as follows:
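Both formulas are stated here with the standard conventions, which we assume:
\[
m_n(\mu)=\sum_{\pi\in NC(n)}\ \prod_{V\in\pi}\kappa_{|V|}(\mu),
\qquad
R_{\mu}(z)=\sum_{n\geq 1}\kappa_{n}(\mu)\,z^{\,n-1}.
\]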
Hence, we can define free convolutions of compactly supported measures on the real line via their free cumulants. Indeed, given two compactly supported probability measures and on the real line, we define to be the unique measure with cumulant sequence given by
If and are compactly supported on then is also a compactly supported probability measure on , as can be seen from [Spe94].
Let be the free convolution of copies of . From the above definition, it is clear that . In [NS96] Nica and Speicher discovered that one can extend this definition to non-integer powers; we refer the reader to [ST22, Section 1] for a discussion of fractional powers.
Definition 2.7 (Fractional free convolution powers).
Let be a compactly supported probability measure on the real line. For , the fractional convolution power is defined to be the unique measure with cumulants
2.4.3. Free multiplicative convolution and -transform
In this section, we introduce the free multiplicative convolution, the -transform and related results from [Voi87, BV93]. Given measures and , their free multiplicative convolution, denoted by , is the distribution of , where and are freely independent non-commutative random variables distributed as and , respectively. The convolution was introduced by Voiculescu as a binary operation on compactly supported measures. The definition was extended to measures with unbounded support in [BV93].
We now introduce Voiculescu’s -transform [Voi87], the main analytic tool used to study the multiplicative convolution . Given a probability measure , the moment generating function of is
For it is known that , the compositional inverse of , exists in a neighborhood of . The -transform of is defined as
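In the standard conventions (assumed here), for a measure $\mu$ on $[0,\infty)$ with $\mu(\{0\})<1$ these transforms read
\[
\psi_{\mu}(z)=\int_{0}^{\infty}\frac{tz}{1-tz}\,d\mu(t),
\qquad
S_{\mu}(z)=\frac{1+z}{z}\,\psi_{\mu}^{-1}(z),
\qquad z\in(\mu(\{0\})-1,\,0).
\]
As a worked illustration with these conventions, for the two-point measure $\mu=(1-t)\delta_0+t\delta_1$ one finds $\psi_{\mu}(z)=\frac{tz}{1-z}$, hence $\psi_{\mu}^{-1}(w)=\frac{w}{w+t}$ and $S_{\mu}(w)=\frac{1+w}{w+t}$.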
Remark 2.8.
In this paper, it is helpful for the intuition to think of as a function on the whole interval by allowing for . We can formalize this heuristic if we instead consider the multiplicative inverse of the -transform. See the definition of -transform in Equation (2.2).
According to [BV93, Corollary 6.6], for every the -transform satisfies an elegant product formula in some open neighborhood
For example, the formula holds on , and it is known that , see [Bel03].
Lemma 2.9.
Let us consider .
The multiplicative inverse of the -transform goes by the name of the -transform [Dyk07, Eq. (15)] and plays the same role as the -transform. For , we define its (shifted) -transform as the function such that
(2.2) |
The -transform is continuous on because
which follows from the fact that as approaches from the right when .
Remark 2.10.
Notice that the -transform contains the same information as the -transform and thus it determines the measure . Also, all the properties of the -transform mentioned above can be readily adapted to the -transform. We highlight two advantages of using the new transform. The first is that it can be defined over the whole interval , formalizing the intuition mentioned in Remark 2.8. In particular, we can define the -transform of as for all . The second is that the -transform can be understood as the inverse of the cumulative distribution function of the so-called law of large numbers for the multiplicative convolution that we introduce below.
Tucci [Tuc10], Haagerup and Möller [HM13, Theorem 2] proved that, for any , there exists such that
If is a Dirac measure, then . Otherwise, the distribution is determined by
(2.3) |
Moreover, the support of the measure is the closure of the interval
(2.4) |
In other words, and are inverse functions of each other. Moreover, as long as is not a Dirac measure, is strictly increasing, and these functions provide a one-to-one correspondence between and .
2.5. Finite free probability
In this section, we summarize some definitions and basic results on finite free probability.
2.5.1. Finite free convolutions
The finite free additive and multiplicative convolutions correspond to two classical polynomial convolutions studied a century ago by Szegö [Sze22] and Walsh [Wal22]; they were recently rediscovered in [MSS22] as expected characteristic polynomials of the sum and product of randomly rotated matrices.
Definition 2.11.
Let be polynomials of degree .
-
•
The finite free additive convolution of and is the polynomial uniquely determined by
(2.5) -
•
The finite free multiplicative convolution of and is the polynomial uniquely determined by
(2.6)
In many circumstances the finite free convolution of two real-rooted polynomials is also real-rooted. For , the following hold:
-
(i)
.
-
(ii)
.
-
(iii)
.
If we replace above the sets by strict inclusion the statements remain valid. Moreover we can use a rule of signs to determine the location of the roots when doing a multiplicative convolution of polynomials in or , see [MMP24a, Section 2.5].
Definition 2.12 (Interlacing).
Let . We say that interlaces , denoted by , if
(2.7) |
We use the notation when all inequalities in (2.7) are strict.
From the real-root preservation and the linearity of the finite free convolutions, one can derive an interlacing-preservation property for the finite free convolutions; see [MMP24a, Proposition 2.11].
Proposition 2.13 (Preservation of interlacing).
If and , then
and
The same statements hold if we replace all by .
The finite free cumulants were defined in [AP18] as an analogue of the free cumulants [Spe94]. Below we define the finite free cumulants using the coefficient-cumulant formula from [AP18, Remark 3.5] and briefly mention the basic facts that will be used in this paper. For a detailed explanation of these objects we refer the reader to [AP18].
Definition 2.14 (Finite free cumulants).
The finite free cumulants of a polynomial are the sequence defined in terms of the coefficients as
(2.8) |
where is the set of all partitions of , is the number of blocks of and is the size of the block .
The main property of finite free cumulants is that they linearize the finite free additive convolution .
Proposition 2.15 (Proposition 3.6 of [AP18]).
For it holds that
2.5.2. Limit theorems
The connection between free and finite free probability is revealed in the asymptotic regime when we take the degree . To simplify the presentation throughout this paper, we introduce some notation for a sequence of polynomials that converge to a measure.
Notation 2.16.
We say that a sequence of polynomials converges to a measure if
(2.9) |
Furthermore, given a degree sequence we say that is a diagonal sequence with ratio limit if
(2.10) |
Notice that if is a closed subset and converges to then necessarily .
In the case when the limiting distribution is compactly supported, we can characterize convergence in distribution via moments or cumulants. While not stated explicitly, the proof of the following proposition is implicit in [AP18].
Proposition 2.17.
Fix a compact subset of . Let and be a sequence of polynomials of degree such that every has only roots in . The following assertions are equivalent.
-
(1)
The sequence converges to in distribution.
-
(2)
Moment convergence: for all .
-
(3)
Cumulant convergence: for all .
Proposition 2.18 (Corollary 5.5 in [AP18] and Theorem 1.4 in [AGP23]).
Let be two sequences of polynomials converging to compactly supported measures , respectively. Then
-
(1)
The sequence converges to in distribution.
-
(2)
If additionally and then converges to in distribution.
A finite free version of the Tucci, Haagerup and Möller limit from (2.3) was studied by Fujie and Ueda [FU23]. Given a polynomial with multiplicity at the root , [FU23, Theorem 3.2] asserts that there exists a limiting polynomial
(2.11) |
Moreover, the roots of can be explicitly written in terms of the coefficients of :
(2.12) |
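A minimal numerical sketch of (2.12), under the reading that the roots of the limiting polynomial are the ratios of consecutive normalized coefficients (this convention and all function names below are ours and are not fixed by the text above):

```python
import numpy as np
from math import comb

def normalized_coeffs(roots):
    """Normalized coefficients e~_k of the monic polynomial with the given roots,
    so that p(x) = sum_k (-1)^k C(d,k) e~_k x^{d-k}."""
    d = len(roots)
    c = np.poly(roots)  # monic coefficients, highest power first
    return np.array([(-1) ** k * c[k] / comb(d, k) for k in range(d + 1)])

def phi_roots(roots):
    """Assumed reading of (2.12): the roots of the limiting polynomial are
    e~_k(p) / e~_{k-1}(p) for k = 1, ..., d."""
    e = normalized_coeffs(roots)
    return e[1:] / e[:-1]

p_roots = np.array([1.0, 2.0, 3.0, 4.0])
print(sorted(phi_roots(p_roots)))
```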
3. Finite free cumulants of derivatives
Since the work of Marcus [Mar21] and Marcus, Spielman and Srivastava [MSS22], it has been clear that the finite free convolutions behave well when applying differential operators, in particular, with respect to differentiation. One instance of such behaviour is the content of Theorem 1.2 from the introduction, stating that the asymptotic root distribution of polynomials after repeated differentiation tends to fractional free convolution.
In this section, we will collect some simple but powerful lemmas regarding finite free cumulants and differentiation that will be a source of useful insight for the rest of the paper. In particular, some of the ideas are key steps in the proof of our main theorem. Interestingly enough, the results of this section allow us to provide a more direct proof of Theorem 1.2, which is given at the end of this section. Note that later in Section 7.3 we will generalize this result to Theorem 7.7.
Notation 3.1.
We will denote by the operation
that differentiates times a polynomial of degree and then normalizes by to obtain a monic polynomial of degree .
Notice that directly from the definition, we have that
This can be understood as the finite free analogue of the following well-known property of the fractional convolution powers:
This will be clear from the next series of lemmas, in particular from Corollary 3.5.
Our first main claim is that the normalized coefficients of a polynomial are invariant under the operations that we just introduced.
Lemma 3.2.
If then
Proof.
Recall that we write the polynomial of degree as
Then
Thus for . If we now fix and iterate this procedure then we conclude that
as desired. ∎
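As a quick numerical sanity check of Lemma 3.2 (a sketch; the helper names are ours), one can verify that the normalized coefficients survive the normalized derivative:

```python
import numpy as np
from math import comb

def normalized_coeffs(c):
    """Normalized coefficients e~_k of a monic polynomial, given its coefficient
    array c with the highest power first: p(x) = sum_k (-1)^k C(d,k) e~_k x^{d-k}."""
    d = len(c) - 1
    return [(-1) ** k * c[k] / comb(d, k) for k in range(d + 1)]

p = np.poly([0.5, 1.0, 2.0, 3.0, 5.0])   # monic degree-5 polynomial with positive roots
d = len(p) - 1
dp = np.polyder(p) / d                    # p'(x)/d is again monic, now of degree d-1

print(normalized_coeffs(p)[:d])           # first d normalized coefficients of p
print(normalized_coeffs(dp))              # the same numbers, computed from the derivative
```

The agreement of the two printed lists is exactly the binomial identity $\binom{d}{k}\frac{d-k}{d}=\binom{d-1}{k}$ underlying the computation above.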
Remark 3.3.
Since additive and multiplicative convolutions only depend on these normalized coefficients, a direct implication is that convolutions are operations that commute with differentiation. Specifically, for and , one has
To the best of our knowledge, these identities have not appeared before in the literature. However, the formula for the additive convolution follows easily from two facts mentioned in [MSS22]: additive convolution commutes with differentiation (Section 1.1) and a relation between polynomials with different degrees (Lemma 1.16).
Lemma 3.2 can be recast as a similar statement, now in terms of finite free cumulants instead of coefficients.
Proposition 3.4.
Given a polynomial , one has
By basic properties of cumulants, the factor can be interpreted as doing a dilation and a fractional convolution:
Corollary 3.5.
Given a polynomial , one has
where the fractional finite free convolution power is the polynomial determined by for .
As a direct consequence of this result we will now prove that derivatives of a sequence of polynomials tend to the free fractional convolution of the limiting distribution. This result was conjectured by Steinerberger [Ste19, Ste21] and then proved formally by Hoskins and Kabluchko [HK21] using differential equations. A proof using finite free multiplicative convolution was given by Arizmendi, Garza-Vargas and Perales [AGP23]. Notice that the main upshot of this new proof is that we no longer need to use the fact that finite free multiplicative convolution tends to free multiplicative convolution. Instead, we will use this result later to give a new proof that finite free multiplicative convolution tends to free multiplicative convolution.
New proof of Theorem 1.2.
In Section 7.4, we will show that the assertion of Theorem 1.2 still holds if we drop the assumption that is compact, see Theorem 7.7.
Remark 3.6.
We should add a few words on Theorem 1.2 and its generalization, Theorem 7.7. We can always extend the result for but not for . Let us explain this case in detail. First we consider . Given and any interval by interlacing property, we can see that
Indeed, if the number of roots of included in is then the possibility for the number of roots of included in is only , , or . This together with the inequality , gives the bound above.
Hence, if then
when . This means that .
For , the situation becomes a bit more complicated. This case is essentially related to the law of large numbers of (finite) free probability. For instance, let us consider the sequence of polynomials . Clearly, , but and then . So some additional assumptions are needed. If the polynomials are uniformly supported on some compact set, we can use the finite free cumulants and their convergence. Precisely,
Hence, if and , the finite free cumulants satisfy for and . Thus, the limit measure of is , where is the mean of , which corresponds to the limit of as by the law of large numbers. In other words, it is necessary to assume that the limit measure has a first moment and also that the first moments of converge, i.e. , because the limit of should be .
Besides, let us consider the polynomials for . It is trivial that and but as , which means . Thus, we additionally need to assume .
Under these two assumptions, we may conclude by the following key formula:
Hence, if then for a fixed integer . That is, . For and , we need a bit stronger assumption on the boundedness of : if
that is, , then we have .
Finally, we conclude this remark by mentioning that the CLT for was recently dealt with by A. Campbell, S. O’Rourke, and D. Renfrew in [COR24]. The considerations above can be modified to give a proof of this result, using finite free cumulants.
4. A partial order on polynomials
In this section we equip with a partial order that compares the roots of a pair of polynomials. This partial order, defined through the bijection between and , comes from a partial order on measures which was studied in connection with free probability in [BV93].
Notation 4.1 (A partial order on measures).
Given we say that if their cumulative distribution functions satisfy
In [BV93, Propositions 4.15 and 4.16] it was shown that for measures such that ,
-
(i)
if , then , and
-
(ii)
if , then .
In particular, by considering in (ii) above, since , we readily obtain the following.
Corollary 4.2.
If then for .
The goal of this section is to prove finite analogues of the previous results. First we define a partial order in . Recall that given a polynomial its roots are denoted by .
Notation 4.3 (A partial order on polynomials).
Given we say that if the roots of are smaller than the roots of in the following sense:
It is readily seen that
Theorem 4.4 ( and preserve ).
Let such that .
-
(1)
If then .
-
(2)
If then .
Proof.
The case of is clear, so below we assume that .
First we assume that and are both simple. For every we construct the polynomial with roots given by
so that is a continuous interpolation from to . Consider the constant
Notice that the assumption, , implies that for every and it holds that
(4.1) |
We now consider the constant
Notice that for every and for , one has the lower bound
This bound, together with (4.1), guarantees that for and . Then, we have
By adding , we obtain that
which means that the following interlacing inequality holds:
Thus, the family of polynomials is monotonically increasing, i.e., their roots increase as increases.
We now explain how to prove part (1). Since additive convolution preserves interlacing, then
In particular, we get that . In other words, for every , the function defined as is increasing, and (1) follows from
Part (2) follows by a similar method.
For the case when or has multiple roots, we can simply approximate them with simple polynomials. For example, for consider the polynomials and with roots
respectively. Then are simple, and satisfy , and . The general result then follows from the continuity of and . ∎
As a direct corollary we get an inequality for the derivatives of polynomials, which can be seen as the finite free analogue of Corollary 4.2.
Corollary 4.5 (Differentiation preserves ).
Given such that , one has
Proof.
Fix a and recall that if then and . By Theorem 4.4 we get that and this is equivalent to . ∎
Another interesting consequence of the theorem, which will be useful later, is that the map from Equation (2.11) also preserves the partial order.
Proposition 4.6.
Given such that , one has .
Proof.
First notice that using induction we can prove . Indeed, the base case is just our assumption and the inductive step follows from applying Theorem 4.4 twice:
Letting , we conclude that
∎
5. Root bounds on Jacobi polynomials
The purpose of this section is to show a uniform bound on the extreme roots of Jacobi polynomials, which will be crucial in our proof of Theorem 1.1. The bound readily follows from a classic result of Moak, Saff and Varga [MSV79] after reparametrization of Jacobi polynomials, see Theorem 5.2.
The Jacobi polynomials have been studied thoroughly (notably by Szegö [Sze75]) and the literature on them is extensive. In this section we restrict ourselves to those results that are necessary to prove the bound. Notice that these polynomials are a particular case of the much larger class of hypergeometric polynomials. These polynomials will be reviewed in detail later in Section 10.3, where they will provide plenty of examples for our main theorem once it is proved.
5.1. Basic facts from a free probability perspective.
We will adopt a slightly different point of view on Jacobi polynomials to emphasize the intuition coming from free probability. Specifically, by making a simple change of variables we can make the parameters of the polynomials coincide with those parameters of the measures obtained as a weak limit of the polynomials. This provides interesting insights into the roles of these polynomials within finite free probability by drawing an analogy with the corresponding measures in free probability.
Following the notation from [MMP24a, Section 5], if we fix a degree and parameters and , the (modified) Jacobi polynomial of parameters is the polynomial with coefficients given by
(5.1) |
Notice that with a simple reparametrization this new notation can be readily translated into the more common expression in terms of hypergeometric functions or in terms of the standard notation used for Jacobi polynomials . In particular, this is the notation in the literature [Sze75, MSV79], from where we will import some results:
[MMP24a, Eq. (80)]
[MMP24a, Eq. (27)]
With the standard notation , the classical Jacobi polynomials correspond to parameters and are orthogonal on with respect to the weight function . In particular, they have only simple roots, all contained in . In our new notation, this means that for and we obtain that
5.2. Bernoulli and free-binomial distribution
For non-standard parameters, the Jacobi polynomials may have multiple roots. We are particularly interested in polynomials with all roots equal to or . From [MMP24a, Page 40] we know that
(5.3) |
are polynomials whose empirical root distribution follows a Bernoulli distribution. Clearly, if we let and then we can approximate an arbitrary Bernoulli distribution with atoms in and probability .
We want to understand the behaviour of these polynomials after repeated differentiation. From [AGP23, Lemma 3.5] we know that this is the same as studying multiplicative convolution of two of these polynomials. Moreover, using (5.2) we have that
(5.4) |
By Theorem 1.2, when tends to infinity with and , the limit of empirical root distributions is given by
The distribution has been studied in connection to free probability under the name of free binomial distribution. It is also related to the free beta distributions of Yoshida [Yos20], see also [SY01].
Remark 5.1.
Since the expression on the left-hand side of (5.4) is symmetric, the roles of and can be interchanged. In particular, the largest roots of them coincide:
(5.5) |
In the next section we will uniformly bound the roots of these polynomials.
5.3. Uniform bound on the extreme roots
Finally, our main ingredient for bounding the roots is the well-understood limiting behaviour of the empirical root distribution of the Jacobi polynomials. In particular, we will use the following result.
Theorem 5.2.
Let us consider a degree sequence and sequences and such that
(5.6) |
Then, by [MMP24a, page 42] the sequence of polynomials weakly converges to the measure with density
where and are the extremes of the support and depend on the parameters :
(5.7) |
Furthermore, [MSV79, Theorem 1] guarantees that the smallest and largest roots converge to the extremes of the support. Namely,
(5.8) |
Remark 5.3.
For , we define the values
in preparation for our next result. Notice that this value is symmetric with respect to and ;
Furthermore, it is easily seen that the following identities hold:
We are now ready to prove the main result of this section, which concerns the asymptotic behaviour of the extreme roots of after repeated differentiation.
Lemma 5.4.
Fix and let be a divergent sequence of integers and , diagonal sequences with limit , respectively.
-
(1)
If then the max roots converge to .
-
(2)
If then the min roots converge to .
Proof.
We first prove (1) under the assumption . By Equation (5.2), we have
Thus in Theorem 5.2 we shall consider the case and . Since
we have that for large enough. Similarly, since , we obtain that for large enough. So both inequalities in hypothesis (5.6) are satisfied for every larger than some . Since and , then the max root of converges to as desired.
In the case , since by Eq. (5.4), we have the same conclusion.
In the case , let such that and diagonal sequences , with limit , respectively. Then, by Corollary 4.5,
for large enough and
for . It implies
Letting , we obtain the result .
For the proof of (2), note that , which is just a change of variables. Then one has . Hence, if (equivalently ) then
from the result of (1) and Remark 5.3. ∎
Lemma 5.5.
Let be a sequence of polynomials converging to a measure . For every diagonal sequence with limit , there exists such that
Proof.
Let us take . Since , there exists such that and is a continuity point of . Then we consider some sequence of integers such that and , and use it to construct a sequence of polynomials
Corollary 5.6.
Let be a measure on . For any , there exists such that the measure is supported on .
6. Finite -transform
In this section we introduce our main object, the finite -transform. The development is parallel to that of Section 2.4.3, but in the finite free probability framework. After defining the finite -transform we also introduce a finite -transform, which contains the same information but is more suitable to deal with certain cases. Then we study all the basic properties of the finite -transform. These properties can be readily transferred to properties of the finite -transform.
6.1. Definition of Finite -transform and Finite -transform
We are now ready to introduce the finite -transform that was advertised in the introduction.
Definition 6.1 (Finite -transform).
Let such that and the multiplicity of the root in . We define the finite -transform of as the map
such that
(6.1) |
Remark 6.2.
Notice that since has all non-negative roots, then if and only if . Thus, the -transform is well defined, and cannot be extended to as it would produce a division by 0. Similar to what was pointed out in Remark 2.8, it is useful for the intuition to allow the values when . This will be formally explained below when considering the modified -transform.
Another natural question is why the domain is a discrete set. Notice that in principle the domain of can be extended to the whole interval so that it more closely resembles Voiculescu’s -transform. There are several ways to achieve this: defining a continuous piecewise linear function whose non-differentiable points are at ; using Lagrange interpolation to define a polynomial with values at ; or simply defining a step function. (The Lagrange interpolation option seems to produce a function that is closer to Marcus’ finite -transform in [Mar21, Eq. (14)]; we believe this because, looking at [Mar21, page 20], the function , which is related to Marcus’ -transform, is obtained as Lagrange interpolation on the same set. However, we were unable to devise a clear connection between our definition and Marcus’ definition.) Although the last option does not produce a continuous function, it seems to be in more agreement with the intuition coming from the -transform.
Since it is not clear which extension is best, we simply opted to restrict the definition to the discrete set . Recall that the values of the finite -transform at these points are enough to recover the coefficients. Indeed, we just need to multiply them:
(6.2) |
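As a small illustration of (6.2), here is a sketch of the telescoping recovery, under the assumption that the stored values are the ratios of consecutive normalized coefficients (variable names are ours):

```python
import numpy as np

# r[k-1] is assumed to store the finite S-transform value at the k-th evaluation point,
# read here as the ratio e~_{k-1}(p) / e~_k(p), for k = 1, ..., d.
r = np.array([2.0, 1.5, 1.25, 1.0])   # toy data for d = 4

# Since e~_0(p) = 1, the running product r[0]*...*r[k-1] telescopes to 1 / e~_k(p).
e = 1.0 / np.cumprod(r)
print(e)                               # recovers e~_1(p), ..., e~_d(p)
```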
Recall from Equation (2.2) that the -transform in free probability is a shift of the multiplicative inverse of the -transform. Alternatively, from Equation (2.3), the -transform of can be understood as the inverse of the cumulative distribution function of . Since the map from (2.11) is the finite version of map , we will use the last interpretation to define the finite counterpart of the -transform.
Definition 6.3 (Finite -transform).
Given a polynomial we define the finite -transform of as the function that is the right-continuous inverse of in .
Remark 6.4.
Using (2.12) it is straightforward that the finite -transform can be explicitly defined in terms of the coefficients of . Indeed, if is the multiplicity of the root of , then is the map such that
(6.3) |
Then, it is also clear that
(6.4) |
6.2. Basic properties of -transform
We now turn to the study of the basic properties of . All these basic facts are analogous to those of Voiculescu’s -transform. Notice also that these properties can be readily adapted to fit the finite -transform; after each result we will leave a brief comment pointing out the corresponding result.
First we notice that the -transform is non-increasing and compute its extreme values.
Proposition 6.5 (Monotonicity and extreme values).
Let such that . Let be the roots of , and denote by the multiplicity of the root in , namely . Then
-
(1)
If for some constant , then
(6.5) -
(2)
Otherwise, whenever has at least two distinct roots, the finite -transform is strictly decreasing:
(6.6)
Moreover the smallest and largest values are, respectively,
(6.7) |
When has no roots at 0 (when ), we can identify the latter as the value at 0 of the Cauchy transform of the empirical root distribution of
(6.8) |
Proof.
The coefficients of are of the form , which directly implies Equation (6.5).
Assertion (6.6) follows from Newton’s inequality:
This in turn implies that the smallest and largest values of are attained at and , respectively. Using the definition we can compute these values explicitly:
∎
Remark 6.6.
The previous result implies that the -transform is a non-decreasing step function with smallest value
and largest value
A fundamental property of Voiculescu’s -transform is that it is multiplicative with respect to . The analogous property for our finite -transform is an easy consequence of the fact that the coefficients of the polynomial are multiplicative with respect to .
Proposition 6.7.
Let with and let be the multiplicities of root in , respectively. Then we have
Proof.
From the definitions of finite -transform (6.1) and finite multiplicative convolution it follows that
for . ∎
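Sketching the computation behind Proposition 6.7 (with the evaluation points $-k/d$ and the ratio convention $\tilde e_{k-1}/\tilde e_k$ assumed here, roots at $0$ ignored for simplicity, and the notation ours), the cancellation is
\[
\widetilde S_{p\boxtimes_d q}\!\left(-\tfrac{k}{d}\right)
=\frac{\tilde e_{k-1}(p)\,\tilde e_{k-1}(q)}{\tilde e_{k}(p)\,\tilde e_{k}(q)}
=\widetilde S_{p}\!\left(-\tfrac{k}{d}\right)\widetilde S_{q}\!\left(-\tfrac{k}{d}\right),
\]
using that the normalized coefficients of $p\boxtimes_d q$ are the products $\tilde e_k(p)\,\tilde e_k(q)$; the same cancellation goes through with either orientation of the ratio.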
Notice that in terms of the finite -transform we do not need to worry about excluding or the multiplicity of the root 0. For one has that
(6.9) |
Remark 6.8.
We present here a direct application of the finite -transform (or -transform which is easier to handle in the situation below).
Proposition 6.9.
Let and assume for some . If , then or . If , there exist such that , , and .
Proof.
If , we have and hence or . Since and have only non-negative roots, it means or .
If , we have . Since the finite -transform is non-negative and weakly increasing, and for some such that . It implies and . ∎
Note that similar arguments lead to the corresponding result for free multiplicative convolution. Now we show a formula for the reversed polynomial.
Proposition 6.10 (Reversed polynomial).
Given a polynomial and its reversed polynomial , their -transforms satisfy the relation
(6.10) |
Proof.
By formula (2.1), the coefficients of the reversed polynomial are
In terms of the -transform, this yields
as desired. ∎
We then continue to study some other properties of the finite -transform in connection to the behaviour under taking derivatives and shifts of polynomials. We can study these operations using the Cauchy transform of the empirical root distribution of the polynomials. These properties can also be understood as finite counterparts of known facts in free probability, specifically Lemma 2.9.
Lemma 6.11 (Derivatives and shifts).
Consider and let be the multiplicity of root in .
-
(1)
For any , we have
-
(2)
Given , we obtain
If , then this can be extended to :
Remark 6.12.
The equation in part (1) has the following analogue in free probability
that follows easily from Lemma 2.9 (1).
Proof.
Part (1) follows from the fact that the normalized coefficients of a polynomial are preserved under differentiation, Lemma 3.2:
For part (2) we use that differentiation and the shift operator commute, and thus
(by part (1) above) | ||||
(by Equation (6.8)) | ||||
Notice that the assumption ensures the polynomial has no roots at and hence is well defined. In the case , the previous computation holds for as well. ∎
To finish this section we prove the interesting fact that the partial order studied in Section 4 implies an inequality for the finite -transforms of the polynomial.
Lemma 6.13.
Let with the multiplicity of the root 0 in . If then
7. Finite -transform tends to -transform
The goal of this section is to prove Theorem 1.1 advertised in the introduction, which is our main approximation theorem. In order to simplify the presentation, we will make use of the concept of converging sequence of polynomials and diagonal sequence introduced in Notation 2.16. Recall that given a sequence of polynomials we say converges to if
(7.1) |
Furthermore, if is a sequence of polynomials converging to with degree sequence we say that is a diagonal sequence with ratio limit if
(7.2) |
With this new terminology, Theorem 1.1 can be rephrased as follows. Given a measure , the following are equivalent:
-
(1)
The sequence of polynomials converges to .
-
(2)
For every diagonal sequence with ratio limit , it holds that
The following lemma relating the -transform of a measure evaluated at with the Cauchy transform of the -fractional convolution evaluated at will be useful throughout this section.
Lemma 7.1.
Let be a measure on . Then,
As a consequence, for all and .
Proof.
Fix , and consider the measure . By Corollary 5.6, is supported on for some .
Then, the Cauchy transform is well-defined in , in particular, we know that
On the other hand, since , [HM13, Lemma 4] yields
(7.3) |
In its simplest form, when and is a compact interval that does not contain 0, the proof of follows naturally from the basic properties of the -transform. However, in the most general case the same proof requires several steps, where we gradually generalize each result, building upon simpler cases. On the other hand, the proof of in the general case follows from using Helly’s selection theorem to guarantee that a limit exists, and then using the implication to ensure that the limit coincides with the given measure.
To guide the reader through how our claim is generalized at each step, this section is divided into several cases. Each case builds upon the previous one until we reach the most general case. In Section 7.1 we illustrate the simplicity of the ideas used in the case where all the roots of the polynomials lie on a compact interval that does not contain 0. Then, in Section 7.2 we use a uniform bound on the smallest root after differentiation to reduce the case where all the roots of the polynomials lie on a compact interval containing 0 to the previous case (where the interval does not contain 0). In Section 7.3 we use the reversed polynomial of a shift to generalize Theorem 1.2 to measures with unbounded support, and then the same is done for our main result, using the same ideas as in the previous sections. Finally, in Section 7.4 we explain how to obtain the converse statement and prove Theorem 1.1.
7.1. Compact interval not containing 0.
We first prove it for the case where all the roots of the polynomials lie on a compact interval that does not contain zero. In this case, the proof of the first implication follows easily from the basic properties of the -transform.
Proposition 7.2 (Compact interval not containing 0).
Fix a compact interval . If is a sequence of polynomials converging to , then for every diagonal sequence with ratio limit it holds that
7.2. Compact interval containing 0.
The goal of this section is to extend Proposition 7.2 to the case where the interval is allowed to contain . The approach is to reduce to the previous case by observing that after repeatedly differentiating polynomials with non-negative roots, we can find a uniform lower bound (away from zero) of the smallest roots. To achieve this we make use of the bounds obtained in Section 5 (that in turn rely on classical bounds of Jacobi polynomials), and then we use the partial order from Section 4 to extrapolate this bound to an arbitrary polynomial.
Proposition 7.3 (Compact support containing 0).
Fix a compact interval and let be a sequence of polynomials converging to . Then for every diagonal sequence with ratio limit it holds that
Proof.
Remark 7.4.
The authors thank Jorge Garza-Vargas for some insightful discussions regarding relations between the supports of and of that ultimately helped in the proof of Proposition 7.3.
7.3. Case with unbounded support
We now turn to the study of measures with unbounded support. The key idea here is that if , then shifting by a positive constant and taking the reversed measure yields a measure supported on a compact interval. In this way we can reduce the problem to the previous case. First, we introduce the cut-down and cut-up measures.
Notation 7.5 (Cut-up and cut-down measures).
Given a measure with cumulative distribution function , we define the cut-down measure at as the measure with cumulative distribution function
Similarly, we define the cut-up measure at as the measure with cumulative distribution function
We can define the corresponding cut-down and cut-up measures on polynomials using the bijection between and . Then for every and the cut-down polynomial and cut-up polynomial have roots given by
Remark 7.6.
We will use three basic properties of the cut-down and cut-up measures that follow directly from the definition.
-
(1)
as and as .
-
(2)
, so . And similarly, .
-
(3)
If is a sequence of polynomials converging to then converges to , and similarly converges to .
We are now ready to extend Theorem 1.2 on the limits of derivatives of polynomials to the case of measures with unbounded support.
Theorem 7.7.
Let be a sequence of polynomials converging to and let be a diagonal sequence with limit . Then,
(7.5) |
Proof.
We first prove the claim assuming that for some . We fix such that , so that . Thus, we can consider . By Proposition 6.10 and Lemma 6.11 (2) we know that
On the other hand, since the roots of the polynomials are contained in a compact interval and converges weakly to , Proposition 7.3 yields
Therefore,
and by Lemma 2.3 we conclude that as . Thus, the claim is proved in the case for some .
Notice that the claim is also true if for some . Indeed, we simply apply the reflection map to the polynomials, and use the previous case on the new sequence.
Now, the proof of the general case, when , follows from using the cut-down and cut-up measures from Notation 7.5 to reduce the problem to the previous case. Indeed, if we fix an then from Remark 7.6 (2) and Corollary 4.5 we know that
By Remark 7.6 (3), as we have the weak convergence
Therefore, at every point one has
Letting , we conclude that for every continuous point of and this is equivalent to (7.5). ∎
With this theorem, we finally upgrade our main result to measures with unbounded support.
Proposition 7.8 (Unbounded support).
Let be a sequence of polynomials converging to . Then for every diagonal sequence with limit it holds that
7.4. The converse and proof of Theorem 1.1
For the proof of our main theorem to be complete, we must prove that if the finite -transform converges to the -transform of a measure, then the sequence of polynomials converges to the measure.
Proposition 7.9 (Converse).
Let be a sequence of polynomials with . Assume there exist a and a function such that for every and every diagonal sequence with limit one has
Then there exists a measure such that , , and for all .
Proof.
Consider the sequence of cumulative distribution functions . By Helly’s selection theorem, every subsequence of functions has a further subsequence, denoted by , converging to some function . It is clear that is non-decreasing, is equal to on the negative real line, and the image of is contained in . In order to justify that is the cumulative distribution function of some probability measure , we just need to check that . For the sake of contradiction, we assume that there exists such that for all . Let and so that . Then the sequence of polynomials satisfies that for large enough . By Lemma 6.13, this implies
Since the converge to , where , in the limit we obtain
On the other hand, we can choose arbitrarily large so that , which yields a contradiction.
Therefore, is the cumulative distribution function of some probability measure . By Proposition 7.8 we obtain that for all .
Recall that this was obtained as the convergent subsequence of an arbitrary subsequence of the original sequence of polynomials. However, since the -transforms of any two such measures coincide in a small neighborhood , the measures are the same. Thus, there is a unique limiting measure and we conclude that . ∎
The proof of the main Theorem 1.1 now follows from the previous two results.
8. Symmetric and unitary case
8.1. Symmetric case
We say that a probability measure is symmetric if for all Borel sets of . We denote by the set of symmetric probability measures on the real line. There is a natural bijection from to given by taking the square of the measure. Specifically, we denote by the pushforward of by the map for . Arizmendi and Pérez-Abreu [AP09] used this map to extend the definition of the -transform to symmetric measures:
(8.1) |
A similar approach works to define a finite -transform for symmetric polynomials. We say that is a symmetric polynomial if its roots are of the form:
and denote by the subset of symmetric polynomials. Given we denote by the polynomial with roots
It is readily seen that . Moreover, and are easily related by the formula
In particular,
(8.2)
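As a quick sanity check of this relation, the following sketch verifies it symbolically; it assumes the relation takes the form p(x) = Q(x^2), where Q is the polynomial whose roots are the squares of the positive roots of p, and the roots used below are hypothetical, chosen only for illustration.

import sympy as sp

x = sp.symbols('x')
lam = [1, 3, 4]                                   # hypothetical positive roots of Q
Q = sp.expand(sp.prod([x - l for l in lam]))      # polynomial with roots lam_i
# symmetric polynomial with roots +/- sqrt(lam_i)
p = sp.expand(sp.prod([(x - sp.sqrt(l)) * (x + sp.sqrt(l)) for l in lam]))
assert sp.expand(p - Q.subs(x, x**2)) == 0        # p(x) = Q(x^2)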
With this in hand, we can extend our definition.
Definition 8.1 (-transform for symmetric polynomials).
Let with a multiplicity of at the root 0. We define its finite -transform map
such that
Remark 8.2.
We can use this new transform to study the multiplicative convolution of polynomials, one of which is symmetric.
Proposition 8.3.
Let , and let be the maximum of the multiplicities at the root 0 of and . Then
for .
Proof.
Using the definition, we compute
∎
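The mechanism behind product formulas of this kind can be illustrated numerically. The sketch below is not taken verbatim from the text: it assumes the standard conventions of the finite free literature, namely that the multiplicative convolution multiplies the coefficients normalized by binomial factors, and that the finite S-transform is a ratio of consecutive normalized coefficients; the root lists are hypothetical.

import numpy as np
from math import comb

def normalized_coeffs(roots):
    # e_k(roots) / binom(d, k) for k = 0, ..., d, via the elementary symmetric polynomials
    d = len(roots)
    c = np.array([1.0])
    for r in roots:
        c = np.convolve(c, [1.0, r])              # coefficients of prod_i (x + r_i), i.e. e_0, ..., e_d
    return np.array([c[k] / comb(d, k) for k in range(d + 1)])

p_roots, q_roots = [1.0, 2.0, 5.0, 7.0], [0.5, 3.0, 4.0, 6.0]   # hypothetical nonnegative roots
ep, eq = normalized_coeffs(p_roots), normalized_coeffs(q_roots)
epq = ep * eq                                      # assumed rule: normalized coefficients multiply under the convolution

d = len(p_roots)
for k in range(1, d + 1):
    lhs = epq[k - 1] / epq[k]                      # ratio of consecutive normalized coefficients for the convolution
    rhs = (ep[k - 1] / ep[k]) * (eq[k - 1] / eq[k])
    assert np.isclose(lhs, rhs)                    # ratios multiply, hence the transforms multiply

Once the coefficient-wise description of the convolution is granted, multiplicativity of the ratios is immediate; the symmetric case only adds the square reparametrization from the beginning of this subsection.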
It is also easy to check that our finite symmetric -transform tends to the symmetric -transform from [AP09] in the limit.
Proposition 8.4.
Let be a sequence of symmetric polynomials with degree sequence and assume converges to . Then for every diagonal sequence with limit , it holds that
8.2. Unitary case
It would be interesting to construct an -transform that can handle the set of polynomials with roots in the unit circle . However, if we naively try to apply the same approach used in the previous cases, we run into some problems. To illustrate such difficulties, let us consider the following example. When considering polynomials that resemble the Haar unitary measure in , namely polynomials with roots uniformly distributed in , there are at least two natural candidates:
Notice that has the same roots as with an extra root at 1. Thus, when , the empirical distributions of and both tend to , the uniform distribution on .
For , the only non-vanishing coefficients are and . Thus, our method of looking at the quotient of coefficients does not work at all simply because all the quotients are undefined. On the other hand, for ,
Thus, their ratio limit is
as with . On the other hand, since , the -transform of cannot be defined and thus there is no relation to the last limit.
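For concreteness, here is a small numerical check of the second candidate. It rests on two assumptions that are our reading of the discussion, not statements from the text: that the polynomial in question is q_d(x) = 1 + x + ... + x^d, and that the relevant ratios are those of consecutive binomially normalized coefficients (so both the sign and the orientation of the ratio depend on conventions not fixed here).

from math import comb

d = 200
coeffs = [1] * (d + 1)                             # hypothetical candidate q_d(x) = 1 + x + ... + x^d
e_tilde = [coeffs[d - k] / comb(d, k) for k in range(d + 1)]   # coefficient of x^(d-k), normalized by binom(d, k)

for t in (0.25, 0.5, 0.75):
    k = int(t * d)
    print(t, e_tilde[k - 1] / e_tilde[k], (1 - t) / t)   # the ratio behaves like (1 - t)/t for large d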
Even though our approach does not seem to work for every sequence of polynomials contained in , in some cases we do obtain the expected limit. For instance, fix and consider the unitary Hermite polynomials
that were studied in [AFU24, Section 6] and [Kab22]. Then, if we take the ratio of consecutive coefficients and take the corresponding diagonal limit approaching , we obtain the -transform of , the free normal distribution on :
We can also prove that approaches as in the unitary case without using the -transform; to the best of our knowledge, there is no literature which explicitly states this assertion:
Proposition 8.5.
Let for and . If and as , respectively, then .
Since the proof of this result is very similar to the proof in the real case (see Propositions 2.17 and 2.18), we only provide the idea of the proof. For the details, we refer the reader to [AP18, proof of Corollary 5.5].
Idea of the proof.
Since is compact, the moment convergence of the polynomial sequence is equivalent to its weak convergence. The proof then follows from the equivalence between the convergence of moments and the convergence of finite free cumulants of , similarly to the real case. ∎
9. Approximation of Tucci, Haagerup, and Möller
The purpose of this section is to prove Theorem 1.3 stating that Fujie and Ueda’s limit theorem [FU23] is an approximation of Tucci, Haagerup and Möller’s limit theorem [Tuc10, HM13].
The main idea is that this approximation is equivalent to the convergence of the finite -transform from Section 6 to the -transform introduced in (2.2). This, in turn, is almost equivalent to the convergence of the finite -transform to the -transform, except that the -transform is better suited to handle the case where the polynomial has roots at 0. Notice that we also need to include the case of . First, we will adapt Theorem 1.1 to a version with -transforms.
Theorem 9.1.
Given a measure and a sequence of polynomials , the following are equivalent:
(1) The weak convergence of to .
(2) For every diagonal sequence with limit , it holds that
(3) For every , it holds that
Proof.
Using the definition of -transform in terms of -transform from (2.2), its finite analogue (6.4) and our main Theorem 1.1, we obtain that with , if and only if for every diagonal sequence with limit one has that
Since the functions are positive and non-decreasing (see Remark 6.6) and is positive, continuous and increasing, the previous limit can be extended to the whole interval , and this is equivalent to part (2).
The equivalence between (2) and (3) follows by the increasing property of the -transform. Indeed, for any one can find diagonal sequences and with with limit and then
The converse statement follows by a similar argument.
Therefore, we are only left to check what happens when and for . If converges to , it is clear as for any . Thus,
because and by Proposition 4.6. Letting tend to we obtain .
For the converse, assume that for . For the sake of contradiction, we assume that does not converge to . Then there exist and such that
By taking a subsequence, we may assume for all . Let us set another sequence of polynomials where . Then it is clear that and for all by Proposition 4.6. However, . This is a contradiction. Therefore we conclude that converges to . ∎
Recall that given a measure the map from (2.3) yields a measure such that and are inverse functions
We are now ready to give a proof of Theorem 1.3, namely that
Proof of Theorem 1.3.
Notice that is equivalent to the convergence as for every that is a continuous point of . In turn, Lemma 2.5 and the definition of the -transform assure us that the latter is equivalent to the convergence for every continuous point of . Since is continuous on , the latter is equivalent to due to Theorem 9.1. ∎
10. Examples and applications
In this section, we present various limit theorems relating finite free probability to free probability. Thus, throughout the whole section, we will consider situations where the dimension or tends to infinity, and assume that the polynomials converge to a measure, as in Notation 2.16.
10.1. approximates
As announced, we present the first application of our main theorems: a new, independent proof of part (2) of Proposition 2.18, which includes the general case.
Proposition 10.1.
Let and be sequences of polynomials such that and let such that and weakly converge to and , respectively. Then weakly converges to .
10.2. A limit for the coefficients of a sequence of polynomials.
Our main theorem provides a limit for the ratio of consecutive coefficients in a converging sequence of polynomials. The convergence of these ratios can easily be translated into statements about other ratios, or about the behaviour of the coefficients alone.
Proposition 10.2.
Fix a measure and fix . Let be a sequence of polynomials converging to and let and be diagonal sequences with ratio limits and , respectively. Then
Additionally, unless for some ,
Proof.
By Theorem 9.1, one has the convergence of to . Note that this convergence is locally uniform by Pólya's theorem, since the functions involved are monotone. The same is true for the convergence of to . Hence,
(10.1)
Besides, if is not a Dirac measure, then is a strictly monotone function. Thus, by the change of variables , which is equivalent to , one has
For diagonal sequences with ratio limits and , the limit on the left-hand side of (10.1) coincides with the limit of
by Riemann sum approximation. Taking exponentials, we have
Finally, a simple change of parameter yields the desired result. ∎
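Schematically, and under the assumption (made here only for illustration) that the finite S-transform evaluated at -i/d is the ratio of the (i-1)-st to the i-th normalized coefficient, the telescoping and Riemann-sum step takes the following shape for diagonal sequences with j_d/d -> s and k_d/d -> t, s < t:
\[
\frac{\widetilde{e}_{j_d}(p_d)}{\widetilde{e}_{k_d}(p_d)}
=\prod_{i=j_d+1}^{k_d}\frac{\widetilde{e}_{i-1}(p_d)}{\widetilde{e}_{i}(p_d)},
\qquad
\frac{1}{d}\log\frac{\widetilde{e}_{j_d}(p_d)}{\widetilde{e}_{k_d}(p_d)}
=\frac{1}{d}\sum_{i=j_d+1}^{k_d}\log S^{(d)}_{p_d}\!\left(-\tfrac{i}{d}\right)
\;\longrightarrow\;\int_{s}^{t}\log S_{\mu}(-u)\,du ,
\]
where the convergence of the Riemann sums uses the locally uniform convergence noted above.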
Notice that if we may take in the previous result then we have the following:
Corollary 10.3.
Fix a measure and a sequence of polynomials converging to . Assume and
as , which means the first moment convergence of to . Then for every and diagonal sequence with ratio limit , we have
Additionally, unless for some ,
where we may replace by because the support of is included in , see Equation (2.4).
Remark 10.4.
In [HM13] it is shown that the following integrals are all equal:
whenever one of them is finite. Now, recall from [FK52] that the Fuglede-Kadison determinant of a positive operator in a tracial -algebra, with distribution is given by
In the case of a positive matrix of dimension with eigenvalues , the Fuglede-Kadison determinant can be written as
Hence, the statements above can be seen as a generalization of the convergence of the Fuglede-Kadison determinant for finite dimensional operators that converge weakly to an operator .
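For concreteness, the finite-dimensional formula reads as follows (a standard fact, written here with the normalized trace \operatorname{tr}_d=\frac{1}{d}\operatorname{Tr} and the eigenvalues \lambda_1,\dots,\lambda_d of a positive d\times d matrix A):
\[
\Delta(A)=\exp\big(\operatorname{tr}_d\log A\big)
=\exp\Big(\frac{1}{d}\sum_{i=1}^{d}\log\lambda_i\Big)
=\Big(\prod_{i=1}^{d}\lambda_i\Big)^{1/d}
=\det(A)^{1/d}.
\]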
10.3. Hypergeometric polynomials
The hypergeometric polynomials are a particular family of generalized hypergeometric functions. They were studied in connection with finite free probability in [MMP24a], where several families of parameters for which the polynomials have all positive roots were determined. This large family of polynomials contains as particular cases some important families of orthogonal polynomials, such as Laguerre, Bessel and Jacobi polynomials. In this section, we compute their finite -transform and use it to directly obtain the -transform of their limiting root distribution.
Definition 10.5 (Hypergeometric polynomials).
For and , we denote by the unique monic polynomial of degree with coefficients given by the formula below. (Our notation slightly differs from that in [MMP24a], by a dilation of on the roots; this simplifies the study of the asymptotic behaviour of the roots and does not change the fact that the roots are all real, or all positive.)
Notice that the reason why we do not allow a parameter below to be of the form is to avoid indeterminacy (a division by 0).
We also allow the cases where there is no parameter below or no parameter above ( or ); in these cases the coefficients are
Since the ratio of two consecutive coefficients of a hypergeometric polynomial is easily expressed in terms of the parameters, a direct computation yields their finite -transform.
Lemma 10.6.
For parameters and , the finite -transform of the polynomial is
Equivalently, the roots of the polynomial are given by:
Proof.
Directly from the definition we compute
as desired. ∎
As a direct corollary we can determine the -transform of the limiting measure whenever the hypergeometric polynomials have all positive real roots. Notice that a more general result, for hypergeometric polynomials that do not necessarily have real roots, was recently proved in [MMP24, Theorem 3.7]; there, the approach relies on the three-term recurrence relation that is specific to this family of polynomials. We would like to highlight that, with our approach, the computation of the limiting measure via our main theorem is straightforward.
Corollary 10.7.
For every , consider parameters and such that the following limits exist
Assume further for every . Then
where is the measure supported on the positive real line, with -transform given by
Remark 10.8.
Notice that in Corollary 10.7 one can also consider polynomials with all negative roots, or equivalently, one can let the sequence of polynomials be
In this case, the odd coefficients of the polynomials change sign, which means that the finite -transforms change sign; ultimately, we obtain that the -transform of the limiting measure is
Example 10.9 (Identity for ).
When there are no parameters above or below, we obtain the polynomial
which is the identity for the multiplicative convolution. Its finite -transform is given by
Thus the limiting distribution is the measure.
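To see why the transform is trivial here, recall (in the convention of [MSS22], possibly up to the normalization used in the text) that the identity element for the multiplicative convolution is (x-1)^d, all of whose binomially normalized coefficients equal 1, so every ratio of consecutive normalized coefficients equals 1 as well:
\[
(x-1)^d=\sum_{k=0}^{d}(-1)^k\binom{d}{k}x^{d-k},
\qquad
\widetilde{e}_k\big((x-1)^d\big)=\frac{e_k(1,\dots,1)}{\binom{d}{k}}=1
\quad\text{for }0\le k\le d .
\]
Since all roots of (x-1)^d sit at 1, its empirical root distribution is the Dirac mass at 1 for every degree.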
Example 10.10 (Laguerre polynomials).
In the case where we just consider one parameter above and no parameter below, the polynomial is the well-known Laguerre polynomial and has all positive roots whenever . By Lemma 10.6 the finite -transform is
With respect to the asymptotic behaviour, notice that using Corollary 10.7 we can retrieve the known result that the limiting zero counting measure of a sequence of Laguerre polynomials is the Marchenko-Pastur distribution. Indeed, for a sequence with , in the limit we obtain the Marchenko-Pastur distribution of parameter :
which is determined by the -transform
On the other hand, the spectral measure of is the uniform distribution on and, in the limit, we retrieve the fact that
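For reference, in the standard normalization of free probability (which may differ from the one used here by a dilation of the roots, cf. the footnote in Definition 10.5), the Marchenko-Pastur (free Poisson) law of rate \lambda\ge 1 has
\[
S_{\mathrm{MP}(\lambda)}(z)=\frac{1}{\lambda+z},\qquad z\in(-1,0),
\]
which is positive and decreasing on its domain, in line with the general properties of S-transforms of measures on the positive half-line.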
As an application of the previous example and Proposition 10.1, we know that the multiplicative convolution of a polynomial with a Laguerre polynomial provides us with a finite approximation of a compound free Poisson distribution.
Corollary 10.11.
Let be a sequence of polynomials with such that . Then weakly converges to the compound free Poisson distribution (with rate and jump distribution ).
We now turn our attention to the reversed Laguerre polynomials.
Example 10.12 (Bessel polynomials).
In the case where we just consider one parameter below and no parameter above, the polynomial goes by the name of Bessel polynomial and has all positive roots whenever . By Remark 10.8 the finite -transform is
If we consider a sequence with , then the limiting -transform is equal to
which corresponds to the measure that is the reciprocal distribution of a Marchenko-Pastur distribution of parameter ; see [Yos20, Proposition 3.1].
Example 10.13 (Jacobi polynomials).
In the case where we consider one parameter above and one parameter below, we obtain the Jacobi polynomials . Recall that we already encountered these polynomials in Section 5 and proved some bounds for specific parameters. We now complete the picture. There are several regions of parameters where the polynomial has all positive roots; below we highlight the three main ones. The reader is referred to [MMP24a, Section 5.2 and Table 2] for the complete description of all the regions. As in the previous examples, we can consider sequences , in those regions of parameters, with limits , . By Corollary 10.7 we can compute the -transform of the limiting measure, which we denote by .
• For and , then . The -transform of the limiting measure is given by
• For and , then . The -transform of the limiting measure is given by
Notice that in this case, is the free beta prime distribution studied by Yoshida [Yos20, Eq. (2)]. In other words, a simple change of variable yields that, for , the limiting measure of the polynomials tends to the measure .
• For and , then . The -transform of the limiting measure is given by
To finish this section, we highlight a case where our main result applies to hypergeometric polynomials with several parameters.
Proposition 10.14.
For every , let and such that the following limits exist
Let for every . Then
where is the measure supported on the positive real line with -transform given by
Proof.
10.4. Finite analogue of some free stable laws
The purpose of this section is to give finite analogues of some free stable laws. Free stable laws are defined as distributions satisfying that for every there exist and such that
Any free stable law can be uniquely (up to a scaling parameter) characterized by a pair , which is in the set of admissible parameters:
More precisely, the Voiculescu transform of the free stable law is given by
see [BP99, AH16, HSW20] for details. We then denote by the free stable law with an admissible parameter . In particular, the following cases are well-known.
• (Wigner's semicircle law)
• (Cauchy distribution) .
• (Positive free -stable law) .
In the following, we construct some finite analogues of free stable laws.
Example 10.16 (Hermite polynomials).
It is well known that the Hermite polynomials (with an appropriate normalization) converge in distribution to the semicircle law; see for instance [Mar21]. Specifically, if we let
denote the Hermite polynomial of degree , then as . Thus, we can interpret as the finite analogue of the symmetric free -stable law . The finite symmetric -transform of can be easily computed:
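The convergence to the semicircle law is easy to check numerically. The sketch below uses the probabilists' Hermite polynomial He_d and the scaling x -> x/sqrt(d); the normalization in the text may differ by a dilation, so this only illustrates the qualitative statement, not its exact constants.

import numpy as np
from numpy.polynomial import hermite_e

d = 400
roots = hermite_e.hermeroots([0] * d + [1])        # zeros of the probabilists' Hermite polynomial He_d
scaled = np.sort(np.real(roots)) / np.sqrt(d)

# compare the empirical CDF of the rescaled zeros with the semicircle CDF on [-2, 2]
grid = np.array([-1.5, -0.5, 0.5, 1.5])
empirical = np.searchsorted(scaled, grid) / d
semicircle = 0.5 + grid * np.sqrt(4 - grid**2) / (4 * np.pi) + np.arcsin(grid / 2) / np.pi
print(np.round(empirical, 3))
print(np.round(semicircle, 3))                     # the two rows should be close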
Example 10.17 (Positive finite -stable).
From [BP99] we know that is the positive free -stable law. We also have that the compound free Poisson distribution coincides with the positive boolean -stable law, see [SW97]. Now, we provide the finite counterparts as follows.
Recall from Example 10.10 that the Laguerre polynomial is the finite free analogue of the Marchenko-Pastur distribution . From [MMP24a, Eq. (81)] we know that its reversed polynomial is
(10.2)
and, letting , the empirical root distribution of these polynomials tends to the positive free -stable law . Clearly, we have
Example 10.18 (Symmetric finite -stable).
According to [AP09, Theorem 12], there is an interesting relation between positive free stable laws and symmetric free stable laws via the free multiplicative convolution. That is,
(10.3)
for any . Here we give the finite analogue of the symmetric -stable law.
10.5. Finite free multiplicative Poisson’s law of small numbers
In [BV92, Lemma 7.2] it was shown that for and there exists a measure with -transform given by
This measure can be understood as a free multiplicative Poisson’s law. The purpose of this section is to give a finite counterpart.
In this case, we can think of it as a limit of convolution powers of polynomials of the form
where the equality follows from a direct computation, see also [MMP24a, Eq (60)].
Proposition 10.19.
Let and , and for each consider the polynomial
Then
Proof.
Consider . Recall that the coefficients of are
so its finite -transform is given by
and the finite -transform of is given by
Then, if we let with and , we obtain that
The conclusion follows from Theorem 1.1. ∎
10.6. Finite max-convolution powers
In 2006, Ben Arous and Voiculescu [BV06] introduced the free analogue of max-convolution. Given two measures , their free max-convolution, denoted by , is the measure with cumulative distribution function given by
Similarly, given and , one can define the free max-convolution of to the power as the unique measure with cumulative distribution function given by
(10.5)
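For the reader's convenience, these distribution functions can be written explicitly in the convention of [BV06] and [Ued21] (the omitted displays above may use slightly different notation): for probability measures \mu,\nu on the real line and t\ge 1,
\[
F_{\mu\boxplus_{\max}\nu}(x)=\max\{F_\mu(x)+F_\nu(x)-1,\,0\},
\qquad
F_{\mu^{\boxplus_{\max}t}}(x)=\max\{t\,F_\mu(x)-(t-1),\,0\}.
\]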
This notion was introduced by Ueda, who used Tucci-Haagerup-Möller’s limit to relate it to free additive convolution powers in [Ued21, Theorem 1.1]: given and , one has that
(10.6)
The purpose of this section is to prove a finite free analogue of this relation and to show that it approximates (10.6) as the degree tends to .
Definition 10.20.
Given and , we define the finite max-convolution power of as the polynomial with roots given by
It is straightforward from the definition of the free max-convolution power in (10.5) that
Then, we have the following.
Proposition 10.21.
Let . Then
(10.7)
Proof.
From Equation (10.7) and Theorem 1.3 it readily follows that in the limit we obtain an approximation of Equation (10.6).
Corollary 10.22.
Fix and a measure . Let and a diagonal sequence with ratio limit . If is a sequence of polynomials converging to , then
We now look at an example related to free stable laws and their finite counterparts.
Example 10.23.
From [Ued21, Example 4.2] we know that coincides with the free Fréchet distribution (or Pareto distribution) with index .
To finish this section, we study the finite analogue of the map on positive measures that was defined in [HU21]. If we denote
then it holds that
(10.9)
We can define the finite free analogue of the map as
To obtain the finite analogue of (10.9), we first compute the derivatives of .
Lemma 10.24.
For , we get
Proof.
For , we compare the -th coefficients of both polynomials. One can see that
On the other hand, we have
(by Lemma 3.2)
as desired. ∎
Using this, we infer the following formula.
Proposition 10.25.
Let . Then
Thanks to Proposition 10.25, the last claim can be seen as an approximation of its free counterpart introduced in Equation (10.9).
Corollary 10.26.
Fix and a measure . Let and a diagonal sequence with ratio limit . If is a sequence of polynomials converging to , then
Acknowledgements
The authors thank Takahiro Hasebe and Jorge Garza-Vargas for fruitful discussions in relation to this project. We thank Andrew Campbell for useful comments that helped improve the presentation of the paper.
The authors gratefully acknowledge the financial support of the grant CONAHCYT A1-S-9764 and the JSPS Open Partnership Joint Research Projects grant no. JPJSBP120209921. We greatly appreciate the hospitality of Hokkaido University of Education during June 2023, where this project originated. K.F. was supported by the Hokkaido University Ambitious Doctoral Fellowship (Information Science and AI) and JSPS Research Fellowship for Young Scientists PD (KAKENHI Grant Number 24KJ1318). D.P. was partially supported by the AMS-Simons Travel Grant, and by the Simons Foundation via Michael Anshelevich's grant. Y.U. is supported by JSPS Grant-in-Aid for Young Scientists 22K13925 and for Scientific Research (C) 23K03133.
References
- [AFU24] Octavio Arizmendi, Katsunori Fujie and Yuki Ueda “New Combinatorial Identity for the Set of Partitions and Limit Theorems in Finite Free Probability Theory” In International Mathematics Research Notices, 2024, pp. 10450–10484 DOI: 10.1093/imrn/rnae089
- [AGP23] Octavio Arizmendi, Jorge Garza-Vargas and Daniel Perales “Finite free cumulants: Multiplicative convolutions, genus expansion and infinitesimal distributions” In Transactions of the American Mathematical Society 376.06, 2023, pp. 4383–4420
- [AH13] Octavio Arizmendi and Takahiro Hasebe “On a class of explicit Cauchy–Stieltjes transforms related to monotone stable and free Poisson laws” In Bernoulli 19(5B), 2013, pp. 2750–2767 DOI: 10.3150/12-BEJ473
- [AH16] Octavio Arizmendi and Takahiro Hasebe “Classical scale mixtures of Boolean stable laws” In Transactions of the American Mathematical Society 368.7, 2016, pp. 4873–4905
- [AP18] Octavio Arizmendi and Daniel Perales “Cumulants for finite free convolution” In Journal of Combinatorial Theory, Series A 155 Elsevier, 2018, pp. 244–266
- [AP09] Octavio Arizmendi E and Victor Pérez-Abreu “The S-transform of symmetric probability measures with unbounded supports” In Proceedings of the American Mathematical Society 137.9, 2009, pp. 3057–3066
- [Bel03] Serban T Belinschi “The Atoms of the Free Multiplicative Convolution of Two Probability Distributions” In Integral Equations and Operator Theory 46 Springer, 2003, pp. 377–386
- [BN08] Serban T. Belinschi and Alexandru Nica “On a remarkable semigroup of homomorphisms with respect to free multiplicative convolution” In Indiana University Mathematics Journal 57.4, 2008, pp. 1679–1713 DOI: 10.1512/iumj.2008.57.3285
- [BV06] G. Ben Arous and Dan V. Voiculescu “Free extreme values” In Annals of Probability 34.5 Institute of Mathematical Statistics, 2006, pp. 2037–2059
- [BP99] Hari Bercovici and Vittorino Pata “Stable laws and domains of attraction in free probability theory” With an appendix by Philippe Biane In Annals of Mathematics. Second Series 149.3, 1999, pp. 1023–1060 DOI: 10.2307/121080
- [BV93] Hari Bercovici and Dan Voiculescu “Free convolution of measures with unbounded support” In Indiana University Mathematics Journal 42.3, 1993, pp. 733–773 DOI: 10.1512/iumj.1993.42.42033
- [BV92] Hari Bercovici and Dan V. Voiculescu “Lévy-Hinčin type theorems for multiplicative and additive free convolution” In Pacific journal of mathematics 153.2 Mathematical Sciences Publishers, 1992, pp. 217–248
- [Bor19] Charles Bordenave “Lecture notes on random matrix theory” In https://www.math.univ-toulouse.fr/ bordenave/IMPA-RMT.pdf, 2019
- [COR24] Andrew Campbell, Sean O’Rourke and David Renfrew “Universality for roots of derivatives of entire functions via finite free probability” In arXiv preprint arXiv:2410.06403, 2024
- [Dur19] Rick Durrett “Probability: theory and examples” Cambridge university press, 2019
- [Dyk07] Kenneth J Dykema “Multilinear function series and transforms in free probability theory” In Advances in Mathematics 208.1 Elsevier, 2007, pp. 351–407
- [FK52] Bent Fuglede and Richard V Kadison “Determinant theory in finite factors” In Annals of Mathematics 55.3 JSTOR, 1952, pp. 520–530
- [FU23] Katsunori Fujie and Yuki Ueda “Law of large numbers for roots of finite free multiplicative convolution of polynomials” In SIGMA. Symmetry, Integrability and Geometry: Methods and Applications 19, 2023, pp. 004
- [HM13] Uffe Haagerup and Sören Möller “The law of large numbers for the free multiplicative convolution” In Operator Algebra and Dynamics. Springer Proc. Math. Stat. 58.2 Springer, Heidelberg, 2013, pp. 157–186
- [HS07] Uffe Haagerup and Hanne Schultz “Brown measures of unbounded operators affiliated with a finite von Neumann algebra” In Mathematica Scandinavica 100.2, 2007, pp. 209–263
- [HSW20] Takahiro Hasebe, Thomas Simon and Min Wang “Some properties of the free stable distributions” In Annales de l’Institut Henri Poincaré Probabilités et Statistiques 56.1, 2020, pp. 296–325 DOI: 10.1214/19-AIHP962
- [HU21] Takahiro Hasebe and Yuki Ueda “Homomorphisms relative to additive convolutions and max-convolutions: Free, boolean and classical cases” In Proceedings of the American mathematical society 149.11, 2021, pp. 4799–4814
- [HK21] Jeremy Hoskins and Zakhar Kabluchko “Dynamics of zeroes under repeated differentiation” In Experimental Mathematics Taylor & Francis, 2021, pp. 1–27
- [Kab21] Zakhar Kabluchko “Repeated differentiation and free unitary Poisson process” In arXiv preprint arXiv:2112.14729, 2021
- [Kab22] Zakhar Kabluchko “Lee-Yang zeroes of the Curie-Weiss ferromagnet, unitary Hermite polynomials, and the backward heat flow” In arXiv preprint arXiv:2203.05533, 2022
- [Mar21] Adam W Marcus “Polynomial convolutions and (finite) free probability” In arXiv preprint arXiv:2108.07054, 2021
- [MSS22] Adam W Marcus, Daniel A Spielman and Nikhil Srivastava “Finite free convolutions of polynomials” In Probability Theory and Related Fields 182.3 Springer, 2022, pp. 807–848
- [MMP24] Andrei Martinez-Finkelshtein, Rafael Morales and Daniel Perales “Zeros of generalized hypergeometric polynomials via finite free convolution. Applications to multiple orthogonality” In arXiv preprint arXiv:2404.11479, 2024
- [MMP24a] Andrei Martínez-Finkelshtein, Rafael Morales and Daniel Perales “Real Roots of Hypergeometric Polynomials via Finite Free Convolution” In International Mathematics Research Notices, 2024 DOI: 10.1093/imrn/rnae120
- [MS17] James A Mingo and Roland Speicher “Free probability and random matrices” Springer, 2017
- [MSV79] D. S. Moak, E. B. Saff and R. S. Varga “On the zeros of Jacobi polynomials ” In Transactions of the American Mathematical Society 249.1, 1979, pp. 159–162 DOI: 10.2307/1998916
- [NS96] Alexandru Nica and Roland Speicher “On the multiplication of free -tuples of noncommutative random variables” In American Journal of Mathematics JSTOR, 1996, pp. 799–837
- [NS06] Alexandru Nica and Roland Speicher “Lectures on the combinatorics of free probability” Cambridge University Press, 2006
- [SY01] N Saitoh and H Yoshida “The infinite divisibility and orthogonal polynomials with a constant recursion formula in free probability theory” In Probability and Mathematical Statistics 21.1, 2001, pp. 159–170
- [ST22] Dimitri Shlyakhtenko and Terence Tao “Fractional free convolution powers” In Indiana University Mathematics Journal 71.6, 2022
- [SW97] R. Speicher and R. Woroudi “Boolean convolution” In Free Probability Theory (Waterloo, ON, 1995), Fields Inst. Commun. 12, 1997, pp. 267–279
- [Spe94] Roland Speicher “Multiplicative functions on the lattice of non-crossing partitions and free convolution” In Mathematische Annalen 298.1 Springer, 1994, pp. 611–628
- [Ste19] Stefan Steinerberger “A nonlocal transport equation describing roots of polynomials under differentiation” In Proceedings of the American Mathematical Society 147.11, 2019, pp. 4733–4744
- [Ste21] Stefan Steinerberger “Free convolution powers via roots of polynomials” In Experimental Mathematics Taylor & Francis, 2021, pp. 1–6
- [Sze22] Gábor Szegö “Bemerkungen zu einem Satz von JH Grace über die Wurzeln algebraischer Gleichungen” In Mathematische Zeitschrift 13.1 Springer, 1922, pp. 28–55
- [Sze75] Gábor Szegö “Orthogonal Polynomials”, 4th edn., American Mathematical Society Colloquium Publications, vol. XXIII, Providence, 1975
- [Tuc10] Gabriel H. Tucci “Limits laws for geometric means of free random variables” In Indiana University Mathematics Journal 59.1, 2010, pp. 1–13 DOI: 10.1512/iumj.2010.59.3775
- [Ued21] Yuki Ueda “Max-convolution semigroups and extreme values in limit theorems for the free multiplicative convolution” In Bernoulli 27 Bernoulli, 2021, pp. 502–531
- [Voi87] Dan Voiculescu “Multiplication of certain non-commuting random variables” In Journal of Operator Theory JSTOR, 1987, pp. 223–235
- [VDN92] Dan V Voiculescu, Ken J Dykema and Alexandru Nica “Free random variables” American Mathematical Society, 1992
- [Wal22] Joseph L Walsh “On the location of the roots of certain types of polynomials” In Transactions of the American Mathematical Society 24.3 JSTOR, 1922, pp. 163–180
- [Yos20] Hiroaki Yoshida “Remarks on a Free Analogue of the Beta Prime Distribution” In Journal of Theoretical Probability 33 Springer, 2020, pp. 1363–1400