
$S$-transform in finite free probability

Octavio Arizmendi, Katsunori Fujie, Daniel Perales, and Yuki Ueda

Octavio Arizmendi: Centro de Investigación en Matemáticas, Guanajuato, Mexico. [email protected]
Katsunori Fujie: Department of Mathematics, Kyoto University, Kyoto, Japan. [email protected]
Daniel Perales: Department of Mathematics, Texas A&M University, TX, USA. [email protected]
Yuki Ueda: Department of Mathematics, Hokkaido University of Education, Hokkaido, Japan. [email protected]
Abstract.

We characterize the limiting root distribution $\mu$ of a sequence of polynomials $\{p_d\}_{d=1}^{\infty}$ with nonnegative roots and degree $d$, in terms of their coefficients. Specifically, we relate the asymptotic behaviour of the ratio of consecutive coefficients of $p_d$ to Voiculescu's $S$-transform $S_\mu$ of $\mu$.

In the framework of finite free probability, we interpret these ratios of coefficients as a new notion of finite $S$-transform, which converges to $S_\mu$ in the large $d$ limit. It also satisfies several properties analogous to those of the $S$-transform in free probability, including multiplicativity and monotonicity.

The proof of the main theorem is based on various ideas and new results relating finite free probability and free probability. In particular, we provide a simplified explanation of why free fractional convolution corresponds to differentiation of polynomials, by determining how the finite free cumulants of a polynomial behave under differentiation.

This new insight has several applications that strengthen the connection between free and finite free probability. Most notably, we generalize the approximation of $\boxtimes_d$ to $\boxtimes$, and we prove a finite approximation of the Tucci–Haagerup–Möller limit theorem in free probability, conjectured by two of the authors. We also provide finite analogues of the free multiplicative Poisson law, the free max-convolution powers, and some free stable laws.

1. Introduction

The main object of interest in this paper is the finite free multiplicative convolution of polynomials. This is a binary operation on the set $\mathcal{P}_d$ of polynomials of degree $d$, which can be defined explicitly in terms of the coefficients of the polynomials involved. For polynomials

(1.1) $p(x)=\sum_{k=0}^{d}(-1)^{k}\binom{d}{k}\widetilde{\mathsf{e}}_{k}^{(d)}(p)\,x^{d-k}$ and $q(x)=\sum_{k=0}^{d}(-1)^{k}\binom{d}{k}\widetilde{\mathsf{e}}_{k}^{(d)}(q)\,x^{d-k}$,

the finite free multiplicative convolution of $p$ and $q$ is defined as

(1.2) $[p\boxtimes_{d}q](x)=\sum_{k=0}^{d}(-1)^{k}\binom{d}{k}\widetilde{\mathsf{e}}_{k}^{(d)}(p)\,\widetilde{\mathsf{e}}_{k}^{(d)}(q)\,x^{d-k}.$

This operation is bilinear, commutative, and associative; more importantly, it is closed on the set $\mathcal{P}_d(\mathbb{R}_{\geq 0})$ of polynomials whose roots are all non-negative and real. Moreover, it is known that the maximum root of $p\boxtimes_d q$ is bounded above by the product of the maximum roots of $p$ and $q$. These basic properties have been well understood for more than a century; see for instance [Wal22].
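Definition (1.2) is easy to experiment with numerically. The sketch below (our own illustration; the helper names are not from the paper) computes $p\boxtimes_d q$ from the roots of $p$ and $q$ via the normalized elementary symmetric polynomials, so one can observe the closure on non-negative roots and the classical max-root bound on small examples.

```python
import itertools
import math

import numpy as np

def e_tilde(roots, k):
    """Normalized elementary symmetric polynomial e~_k^(d) of the given roots."""
    d = len(roots)
    return sum(math.prod(c) for c in itertools.combinations(roots, k)) / math.comb(d, k)

def finite_free_mult(roots_p, roots_q):
    """Roots of p ⊠_d q, computed from the coefficient formula (1.2)."""
    d = len(roots_p)
    assert len(roots_q) == d
    # coefficient of x^{d-k} is (-1)^k C(d,k) e~_k(p) e~_k(q)
    coeffs = [(-1) ** k * math.comb(d, k) * e_tilde(roots_p, k) * e_tilde(roots_q, k)
              for k in range(d + 1)]
    return np.sort(np.roots(coeffs).real)[::-1]  # roots in descending order

rp, rq = [1.0, 2.0, 3.0], [0.5, 1.0, 4.0]
conv_roots = finite_free_mult(rp, rq)
```

On this example the convolution again has non-negative roots, its largest root is at most $3\cdot 4=12$, and the product of its roots equals the product over both inputs, since $\widetilde{\mathsf{e}}_{d}$ is multiplicative under $\boxtimes_d$.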

Recently, this convolution was rediscovered in the study of characteristic polynomials of certain random matrices [MSS22]. To elaborate, for $d$-dimensional positive semidefinite matrices $A$ and $B$ with characteristic polynomials $p$ and $q$, respectively, their finite free multiplicative convolution $p\boxtimes_d q$ is the expected characteristic polynomial of $AUBU^{*}$, where $U$ is a random unitary matrix sampled according to the Haar measure on the unitary group of degree $d$. One motivation behind this new interpretation in terms of random matrices was to derive new polynomial inequalities for the roots of the convolution. Marcus, Spielman, and Srivastava achieved a stronger bound on the maximum root using tools from free probability, in particular Voiculescu's $S$-transform. In light of this, Marcus pursued the connection further [Mar21] by suggesting, and providing evidence, that as the degree $d$ tends to infinity, the finite free convolution of polynomials $\boxtimes_d$ tends to the free multiplicative convolution of measures $\boxtimes$, thus initiating the theory of finite free probability. This was rigorously proved in [AGP23].

The study of polynomial convolutions from the perspective of finite free probability has strengthened the connections between geometry of polynomials, random matrices, random polynomials and free probability theory, and has given new insights into these topics. In particular, after the original paper of Marcus, Spielman and Srivastava [MSS22] and the work of Marcus [Mar21], various papers have investigated limit theorems for important sequences of polynomials or their finite free convolutions, see [FU23, AP18, AGP23, MMP24a, Kab22, HK21, Kab21, AFU24].

In a certain sense, the present paper continues this line of research, understanding the relation of finite free probability with free probability in the large $d$ limit.

One of the motivations of this work is to give a concrete understanding of how the limiting behaviour of the coefficients of a polynomial is connected with convergence in distribution to a probability measure. More specifically, consider a sequence of polynomials $p_d$ of degree $d$, with coefficients as in (1.1), such that the empirical root distribution $\mu\llbracket p_d\rrbracket$ converges weakly to a measure $\mu$ as the degree $d$ tends to infinity. A natural question is what happens to the coefficients $\widetilde{\mathsf{e}}_k(p_d)$ in the limit, or, conversely, which conditions on the behaviour of the coefficients $\widetilde{\mathsf{e}}_k(p_d)$ guarantee that $\mu\llbracket p_d\rrbracket$ converges weakly to $\mu$. The naive approach of fixing $k$ yields a trivial limit; see Corollary 10.3.

From their study of the law of large numbers, Fujie and Ueda [FU23] observed that it may be more fruitful to look at the ratio of two consecutive coefficients; see Section 1.3 below. Our main result says that this is the case. We find that one can obtain a meaningful limit by considering the ratio $\widetilde{\mathsf{e}}_k(p_d)/\widetilde{\mathsf{e}}_{k+1}(p_d)$ and taking a diagonal limit, i.e. letting $k$ and $d$ tend to infinity with $k/d$ approaching some constant $t$. Furthermore, such a limit can be computed explicitly and has a precise relation to Voiculescu's $S$-transform.

1.1. $S$-transform

Recall that the main analytic tool for studying the free multiplicative convolution is the $S$-transform, introduced by Voiculescu [Voi87] and studied in detail by Bercovici and Voiculescu [BV92, BV93]. Its importance in free probability stems from the fact that $S_\mu$ characterizes the measure $\mu$ and is multiplicative with respect to free multiplicative convolution:

$S_{\mu\boxtimes\nu}=S_{\mu}S_{\nu}\qquad\text{for }\mu,\nu\in\mathcal{M}(\mathbb{R}_{\geq 0})\setminus\{\delta_{0}\}.$

We refer the reader to Section 2.4.3 for more details.

Our results can be nicely presented if we introduce a new finite free $S$-transform. (Let us mention that Marcus already defined a modified $S$-transform in [Mar21] that tends to the inverse of Voiculescu's $S$-transform. Our definition is different, and the relation with Marcus' transform is not clear; we address this in more detail in Remarks 6.2 and 6.8.) Consider again a polynomial $p$ as in (1.1) and, for simplicity, assume that all the roots of $p$ are strictly positive, so that the coefficients $\widetilde{\mathsf{e}}_k(p)$ are non-zero. In this case the finite $S$-transform of $p$ is the map $S_p^{(d)}:\{-k/d\mid k=1,2,\dots,d\}\to\mathbb{R}_{>0}$ given by

$S_{p}^{(d)}\left(-\frac{k}{d}\right):=\frac{\widetilde{\mathsf{e}}_{k-1}^{(d)}(p)}{\widetilde{\mathsf{e}}_{k}^{(d)}(p)}\qquad\text{for }k=1,2,\dots,d.$

It is straightforward from (1.2) that $S^{(d)}_{p\boxtimes_{d}q}=S^{(d)}_{p}S^{(d)}_{q}$. Besides this multiplicative property, the map satisfies many other properties analogous to those of Voiculescu's $S$-transform, such as monotonicity, an analogous image and range, and a formula for the reversed polynomial; see Section 6.2.
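The multiplicativity $S^{(d)}_{p\boxtimes_d q}=S^{(d)}_p S^{(d)}_q$ is immediate from (1.2), since $\widetilde{\mathsf{e}}_k(p\boxtimes_d q)=\widetilde{\mathsf{e}}_k(p)\,\widetilde{\mathsf{e}}_k(q)$; the following minimal sketch (our own illustration, with our own helper names) makes this concrete for a pair of quartic polynomials with strictly positive roots.

```python
import itertools
import math

def e_tilde(roots, k):
    """Normalized elementary symmetric polynomial e~_k of the roots."""
    d = len(roots)
    return sum(math.prod(c) for c in itertools.combinations(roots, k)) / math.comb(d, k)

def finite_S(roots, k):
    """S_p^{(d)}(-k/d) = e~_{k-1}(p)/e~_k(p) for p with strictly positive roots."""
    return e_tilde(roots, k - 1) / e_tilde(roots, k)

rp, rq, d = [1.0, 2.0, 3.0, 4.0], [0.5, 1.5, 2.5, 3.5], 4
# e~_k of the convolution is the product of the e~_k's, by (1.2)
S_conv = [(e_tilde(rp, k - 1) * e_tilde(rq, k - 1)) / (e_tilde(rp, k) * e_tilde(rq, k))
          for k in range(1, d + 1)]
S_prod = [finite_S(rp, k) * finite_S(rq, k) for k in range(1, d + 1)]
```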

More importantly, in the limit as the degree $d$ goes to infinity, the convergence of the empirical root distributions of a sequence of polynomials $p_d$ to some measure $\mu$ is equivalent to the convergence of the finite $S$-transforms of $p_d$ to the $S$-transform of $\mu$. This is the content of our first main result.

Theorem 1.1.

Let $(p_d)_{d\in\mathbb{N}}$ be a sequence of polynomials with $p_d\in\mathcal{P}_d(\mathbb{R}_{\geq 0})$, and let $\mu\in\mathcal{M}(\mathbb{R}_{\geq 0})\setminus\{\delta_0\}$. The following are equivalent:

  1. The weak convergence $\mu\llbracket p_{d}\rrbracket\xrightarrow{w}\mu$.

  2. For every $t\in(0,1-\mu(\{0\}))$ and every sequence $(k_d)_{d\in\mathbb{N}}$ with $1\leq k_d\leq d$ and $\lim_{d\to\infty}\frac{k_d}{d}=t$, one has

     (1.3) $\lim_{d\to\infty}S_{p_{d}}^{(d)}\left(-\frac{k_{d}}{d}\right)=S_{\mu}(-t).$

The details of the proof are given in Section 7. Similarly, we can define a finite symmetric $S$-transform $\widetilde{S}_p^{(2d)}$ of a symmetric polynomial $p$, all of whose roots are non-zero, as follows:

$\widetilde{S}_{p}^{(2d)}\left(-\frac{k}{d}\right):=\sqrt{\frac{\widetilde{\mathsf{e}}_{2(k-1)}^{(2d)}(p)}{\widetilde{\mathsf{e}}_{2k}^{(2d)}(p)}}\qquad\text{for }k=1,\dots,d,$

and prove an analogous result; see Section 8.1 for more details.

To show the effectiveness of the above result, let us give a few examples of well-known polynomials and their limiting distributions. Further related examples will be given in Section 10.

  • (Laguerre polynomials) Let $L_{d}^{(b)}(x)=\sum_{k=0}^{d}(-1)^{k}\binom{d}{k}\frac{(bd)_{k}}{d^{k}}x^{d-k}$ be the normalized Laguerre polynomial of degree $d$ with parameter $b\geq 1$, where $(a)_{0}:=1$ and $(a)_{k}:=a(a-1)\cdots(a-k+1)$ for $a\in\mathbb{R}$ and $k\geq 1$. Then its finite $S$-transform satisfies

    $S_{L_{d}^{(b)}}^{(d)}\left(-\frac{k}{d}\right)=\frac{1}{b-\frac{k-1}{d}}\to\frac{1}{b-t}=S_{\mathbf{MP}_{b}}(-t)\qquad\text{as}\quad\frac{k}{d}\to t,$

    where $\mathbf{MP}_{b}$ is the Marchenko–Pastur distribution with parameter $b$. This gives an alternative proof of the well-known fact that $\mu\llbracket L_{d}^{(b)}\rrbracket\xrightarrow{w}\mathbf{MP}_{b}$. For the notation and details, see Example 10.10.

  • (Hermite polynomials) Let $H_{2d}(x)=\sum_{k=0}^{d}(-1)^{k}\binom{2d}{2k}\frac{(2k)!}{k!(4d)^{k}}x^{2d-2k}$ be the normalized Hermite polynomial of degree $2d$. We have convergence of the finite symmetric $S$-transform:

    $\widetilde{S}_{H_{2d}}^{(2d)}\left(-\frac{k}{d}\right)=\frac{1}{\sqrt{\frac{k}{d}-\frac{1}{2d}}}\to\frac{1}{\sqrt{t}}=\widetilde{S}_{\mu_{\mathrm{sc}}}(-t)\qquad\text{as}\quad\frac{k}{d}\to t,$

    where $\mu_{\mathrm{sc}}$ is the standard semicircular distribution. This gives an alternative proof of the well-known fact that $\mu\llbracket H_{2d}\rrbracket\xrightarrow{w}\mu_{\mathrm{sc}}$; see Example 10.16.

  • The (normalized) Chebyshev polynomials of the first kind may be written as $T_{2d}(x)=\sum_{k=0}^{d}\binom{2d}{2k}(-1)^{k}\frac{(2k)_{k}}{(2d-1)_{k}2^{2k}}x^{2d-2k}$. Then we have

    $\widetilde{S}_{T_{2d}}^{(2d)}\left(-\frac{k}{d}\right)=\sqrt{\frac{2(2d-k)}{2k-1}}\to\sqrt{\frac{2-t}{t}}=S_{\mu_{\mathrm{arc}}}(-t)\qquad\text{as}\quad\frac{k}{d}\to t,$

    where $\mu_{\mathrm{arc}}$ is the arcsine law on $(-1,1)$. See [AH13] for the $S$-transform of $\mu_{\mathrm{arc}}$.

  • The (normalized) Chebyshev polynomials of the second kind can be expressed as $U_{2d}(x)=\sum_{k=0}^{d}\binom{2d}{2k}(-1)^{k}\frac{(2k)_{k}}{(2d)_{k}2^{2k}}x^{2d-2k}$. Then we obtain

    $\widetilde{S}_{U_{2d}}^{(2d)}\left(-\frac{k}{d}\right)=\sqrt{\frac{2(2d-k+1)}{2k-1}}\to\sqrt{\frac{2-t}{t}}=S_{\mu_{\mathrm{arc}}}(-t)\qquad\text{as}\quad\frac{k}{d}\to t.$

1.2. Quick overview of the proof

While the proof of our main theorem is technical in the general case, it relies on simple but useful results that provide a deeper understanding of the relation between finite free convolution and differentiation.

As we prove in Section 3, the finite free cumulants of the derivatives of a polynomial $p$ can be directly related to the finite free cumulants of $p$. That is, using the notation

$\partial_{j|d}\,p:=\frac{p^{(d-j)}}{(d)_{d-j}},$

we have the relation

$\kappa_{n}^{(j)}(\partial_{j|d}\,p)=\left(\frac{j}{d}\right)^{n-1}\kappa_{n}^{(d)}(p)\qquad\text{for }1\leq n\leq j\leq d.$

Using the fact that convergence of finite free cumulants implies weak convergence (see [AP18]), the equation above allows us to give a new proof of the following result relating derivatives to free convolution powers.

Theorem 1.2 ([HK21], [AGP23]).

Fix a compact subset $K\subset\mathbb{R}$. Let $\mu\in\mathcal{M}(\mathbb{R})$ and let $(p_d)_{d\in\mathbb{N}}$ be a sequence of polynomials of degree $d$ such that every $p_d\in\mathcal{P}_d(K)$ has all its roots in $K$ and $\mu\llbracket p_d\rrbracket\xrightarrow{w}\mu$. Then, for a parameter $t\in(0,1)$ and a sequence of integers $(j_d)_{d\in\mathbb{N}}$ such that $1\leq j_d\leq d$ and $\lim_{d\to\infty}\frac{j_d}{d}=t$, we have $\mu\llbracket\partial_{j_d|d}\,p_d\rrbracket\xrightarrow{w}\mathrm{Dil}_t(\mu^{\boxplus 1/t})$ as $d\to\infty$.

It is worth mentioning that if in the previous result we allow $t=0$ or $t=1$, we can also draw some conclusions about the limiting distribution; see Remark 3.6. Also, in Theorem 7.7 of Section 7.4 we will show that the assertion of Theorem 1.2 still holds if we drop the uniform compactness assumption.
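A concrete way to see why $\partial_{j|d}$ is the right normalization: expanding $p$ as in (1.1) and differentiating term by term shows that $\partial_{j|d}\,p$ is monic of degree $j$ and keeps the same normalized coefficients, $\widetilde{\mathsf{e}}_{k}^{(j)}(\partial_{j|d}\,p)=\widetilde{\mathsf{e}}_{k}^{(d)}(p)$ for $k\leq j$ (a short binomial computation). The sketch below is our own illustration of this, not the paper's code.

```python
import itertools
import math

import numpy as np

def e_tilde_from_roots(roots, k):
    d = len(roots)
    return sum(math.prod(c) for c in itertools.combinations(roots, k)) / math.comb(d, k)

def e_tilde_from_coeffs(c):
    """Read e~_k off a monic polynomial via p(x) = sum_k (-1)^k C(j,k) e~_k x^{j-k}."""
    j = len(c) - 1
    return [(-1) ** k * c[k] / math.comb(j, k) for k in range(j + 1)]

roots = [0.5, 1.0, 2.0, 3.0, 5.0]
d, j = 5, 3
monic = np.poly(roots)              # coefficients of p, descending powers
der = np.polyder(monic, d - j)      # p^{(d-j)}; its leading coefficient is (d)_{d-j}
der = der / der[0]                  # normalize: this is ∂_{j|d} p, monic of degree j
lhs = e_tilde_from_coeffs(der)      # e~_k^{(j)}(∂_{j|d} p)
rhs = [e_tilde_from_roots(roots, k) for k in range(j + 1)]
```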

Once Theorem 1.2 is settled, the last ingredients connecting the finite $S$-transform with the $S$-transform are two observations. First, for a measure supported on $[a,b]$ with $a>0$, the $S$-transform is related to the value at $0$ of the Cauchy transform of the free convolution powers:

$G_{\mathrm{Dil}_{t}(\mu^{\boxplus 1/t})}(0)=-S_{\mu}(-t)\qquad\text{(see Lemma \ref{lem:approx_box_times})}.$

Second, a similar relation holds for the finite $S$-transform, in this case for derivatives of a polynomial with strictly positive roots, namely,

$G_{\mu\llbracket\partial_{k|d}\,p_{d}\rrbracket}(0)=-S_{p_{d}}^{(d)}\left(-\frac{k}{d}\right)\qquad\text{(see Lemma \ref{lem:Strans_conditions} (2))}.$

Then, since weak convergence implies convergence of the Cauchy transform on compact sets outside the support, from the convergence $\mu\llbracket\partial_{k|d}\,p_{d}\rrbracket\to\mathrm{Dil}_{t}(\mu^{\boxplus 1/t})$ we may conclude that

$\lim_{\substack{d\to\infty\\ k/d\to t}}S_{p_{d}}^{(d)}\left(-\frac{k}{d}\right)=S_{\mu}(-t).$

However, when attempting to upgrade these considerations to the case where the support of $\mu$ includes $0$ or is unbounded, one faces technical difficulties, such as $G_{\mu\llbracket\partial_{k|d}\,p_{d}\rrbracket}(0)$ or $G_{\mathrm{Dil}_{t}(\mu^{\boxplus 1/t})}(0)$ possibly being undefined. To get around this, we use uniform bounds on the roots of polynomials after repeated differentiation. One important tool that we use is a partial order on polynomials; see Section 4.
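The identity $G_{\mu\llbracket\partial_{k|d}\,p\rrbracket}(0)=-S_p^{(d)}(-k/d)$ displayed above is easy to test numerically: the Cauchy transform of an empirical root distribution at $0$ is just minus the average of the reciprocal roots. The sketch below (our own illustration) does this for a quintic with strictly positive roots.

```python
import itertools
import math

import numpy as np

def e_tilde(roots, k):
    d = len(roots)
    return sum(math.prod(c) for c in itertools.combinations(roots, k)) / math.comb(d, k)

roots = [1.0, 2.0, 4.0, 5.0, 8.0]   # strictly positive roots of p
d, k = 5, 3

# G at 0 of the empirical root distribution of ∂_{k|d} p = p^{(d-k)}/(d)_{d-k}
der_roots = np.roots(np.polyder(np.poly(roots), d - k)).real
G0 = float(np.mean(1.0 / (0.0 - der_roots)))

# S_p^{(d)}(-k/d) = e~_{k-1}(p)/e~_k(p)
S = e_tilde(roots, k - 1) / e_tilde(roots, k)
```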

1.3. Relation to Multiplicative Law of Large Numbers

Notice that the existence of the limit (1.3) amounts to the fact that the ratio of two consecutive coefficients approaches some limit.

The intuition for why this limit should exist comes from the law of large numbers for the free multiplicative convolution due to Tucci and to Haagerup and Möller, as well as its finite counterpart due to Fujie and Ueda [FU23].

Recall that Tucci [Tuc10] and Haagerup and Möller [HM13, Theorem 2] proved that, for every $\mu\in\mathcal{M}(\mathbb{R}_{\geq 0})$, there exists a measure $\Phi(\mu)\in\mathcal{M}(\mathbb{R}_{\geq 0})$ such that

$(\mu^{\boxtimes n})^{\langle 1/n\rangle}\xrightarrow{w}\Phi(\mu)\qquad\text{as }n\to\infty,$

where for $\nu\in\mathcal{M}(\mathbb{R}_{\geq 0})$ and $c>0$ the measure $\nu^{\langle c\rangle}$ denotes the pushforward of $\nu$ by $t\mapsto t^{c}$.

If $\mu$ is not a Dirac measure, then $\Phi(\mu)$ is characterized by

$\Phi(\mu)\left(\left[0,\frac{1}{S_{\mu}(t-1)}\right]\right)=t\qquad\text{for all }t\in(0,1-\mu(\{0\})).$

If $\mu=\delta_{a}$ for some $a\geq 0$, then $\Phi(\mu)=\delta_{a}$.

Fujie and Ueda [FU23, Theorem 3.2] proved a finite free analogue of this result. Namely, for $p\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0})$, there exists a limiting polynomial

$\Phi_{d}(p):=\lim_{n\to\infty}(p^{\boxtimes_{d}n})^{\langle 1/n\rangle}.$

Here, if $q\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0})$ has roots $\lambda_{1},\dots,\lambda_{d}$, then $q^{\langle c\rangle}$ stands for the polynomial with roots $\lambda_{1}^{c},\dots,\lambda_{d}^{c}$. Moreover, for $k=1,\dots,d$, the $k$-th largest root of $\Phi_{d}(p)$ is given by $\lambda_{k}(\Phi_{d}(p))=\frac{\widetilde{\mathsf{e}}_{k}^{(d)}(p)}{\widetilde{\mathsf{e}}_{k-1}^{(d)}(p)}$, which is the multiplicative inverse $1/S_{p}^{(d)}\left(-\frac{k}{d}\right)$ of the finite $S$-transform at $-k/d$.
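The finite limit theorem just stated lends itself to a direct numerical experiment: by (1.2), $p^{\boxtimes_d n}$ has normalized coefficients $\widetilde{\mathsf{e}}_k(p)^n$, and the roots of $(p^{\boxtimes_d n})^{\langle 1/n\rangle}$ drift toward the ratios $\widetilde{\mathsf{e}}_k/\widetilde{\mathsf{e}}_{k-1}$. A minimal sketch (our own illustration; convergence in $n$ is slow, so only loose tolerances are meaningful):

```python
import itertools
import math

import numpy as np

def e_tilde(roots, k):
    d = len(roots)
    return sum(math.prod(c) for c in itertools.combinations(roots, k)) / math.comb(d, k)

roots, d, n = [1.0, 2.0, 4.0], 3, 100

# p^{⊠_d n} has normalized coefficients e~_k(p)^n, by (1.2)
coeffs = [(-1) ** k * math.comb(d, k) * e_tilde(roots, k) ** n for k in range(d + 1)]
power_roots = np.sort(np.roots(coeffs).real)[::-1]
approx = power_roots ** (1.0 / n)       # roots of (p^{⊠_d n})^{<1/n>}

# predicted roots of Φ_d(p): consecutive coefficient ratios, largest first
phi_roots = [e_tilde(roots, k) / e_tilde(roots, k - 1) for k in range(1, d + 1)]
```

With $n=100$ the relative deviation from the predicted roots is already at the percent level.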

In [FU23, Conjecture 4.4], it was conjectured that the map $\Phi_{d}$ tends to the map $\Phi$ as $d$ tends to $\infty$. As a consequence of our main theorem, we prove this conjecture in Section 9.

Theorem 1.3.

Let $(p_{d})_{d\in\mathbb{N}}$ be a sequence of polynomials with $p_{d}\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0})$ and let $\mu\in\mathcal{M}(\mathbb{R}_{\geq 0})$. The convergence $\mu\llbracket p_{d}\rrbracket\xrightarrow{w}\mu$ is equivalent to the convergence $\mu\llbracket\Phi_{d}(p_{d})\rrbracket\xrightarrow{w}\Phi(\mu)$.

1.4. Organization of the paper

Besides this introductory section and the upcoming section with basic preliminaries, the paper is roughly divided into three parts.

First, Sections 3, 4 and 5 contain key results used in the proof of the main theorem. Each topic might be of independent interest.

  • In Section 3, we investigate the coefficients, finite free cumulants, and limiting distributions of derivatives of polynomials; as a consequence, we provide a new proof of Theorem 1.2.

  • In Section 4, we equip the set of real-rooted polynomials with a partial order that allows us to reduce our study to simpler polynomials with all their roots equal to 0 or 1.

  • In Section 5, we bound the roots obtained after differentiating these simple polynomials several times, using classical bounds on the roots of Jacobi polynomials.

Next, Sections 6, 7, and 8 concern our central object, the finite $S$-transform, and its relation to Voiculescu's $S$-transform.

  • In Section 6, we introduce the finite $S$-transform and study its basic properties.

  • Section 7 is devoted to showing that the finite $S$-transform tends to Voiculescu's $S$-transform.

  • In Section 8, we extend the definition of the $S$-transform to symmetric polynomials and explore the case where the polynomials have roots only on the unit circle.

Finally, in Sections 9 and 10 we provide examples and applications.

  • Theorem 1.3, which relates the laws of large numbers for $\boxtimes_{d}$ and for $\boxtimes$, is proved in Section 9.

  • Section 10 contains examples and applications in various directions: an approximation of free convolution, a limit for the coefficients of polynomials, examples with hypergeometric polynomials, finite analogues of some free stable laws, a finite free multiplicative Poisson law, and finite free max-convolution powers.

2. Preliminaries

2.1. Measures

We use $\mathcal{M}$ to denote the family of all Borel probability measures on $\mathbb{C}$. When we want to specify that the support of the measure is contained in a subset $K\subset\mathbb{C}$, we use the notation

$\mathcal{M}(K):=\{\mu\in\mathcal{M}\mid\text{the support of }\mu\text{ is contained in }K\}.$

In most cases we will take $K$ to be a subset of the real line $\mathbb{R}$, such as the positive half-line $\mathbb{R}_{>0}:=(0,\infty)$, the non-negative half-line $\mathbb{R}_{\geq 0}:=[0,\infty)$, or a compact interval $C=[\alpha,\beta]$.

Notation 2.1 (Basic transformations of measures).

We fix the following notation:

  • Dilation. For $\mu\in\mathcal{M}$ and $c\neq 0$, we let $\mathrm{Dil}_{c}\mu\in\mathcal{M}$ be the measure satisfying

    $\mathrm{Dil}_{c}\mu(B):=\mu(\{x/c : x\in B\})\qquad\text{for every Borel set }B\subset\mathbb{C}.$

  • Shift. For $\mu\in\mathcal{M}$, we let $\mathrm{Shi}_{c}\mu\in\mathcal{M}$ be the measure satisfying

    $\mathrm{Shi}_{c}\mu(B):=\mu(\{x-c : x\in B\})\qquad\text{for every Borel set }B\subset\mathbb{C}.$

  • Power of a measure. For $\mu\in\mathcal{M}(\mathbb{R}_{\geq 0})$ and $c>0$, we denote by $\mu^{\langle c\rangle}$ the measure satisfying

    $\mu^{\langle c\rangle}(B):=\mu(\{x^{1/c} : x\in B\})\qquad\text{for every Borel set }B\subset\mathbb{R}_{\geq 0}.$

  • Reversed measure. For $\mu\in\mathcal{M}(\mathbb{R}_{\geq 0})$ such that $\mu(\{0\})=0$, we denote by $\mu^{\langle-1\rangle}$ the measure satisfying

    $\mu^{\langle-1\rangle}(B):=\mu(\{x^{-1} : x\in B\})\qquad\text{for every Borel set }B\subset\mathbb{R}_{>0}.$

2.2. Polynomials

We denote by $\mathcal{P}_{d}$ the family of monic polynomials of degree $d$. Similarly to measures, for a subset $K\subset\mathbb{C}$ we use the notation

$\mathcal{P}_{d}(K):=\{p\in\mathcal{P}_{d}\mid\text{all roots of }p\text{ are in }K\}.$

Given $p\in\mathcal{P}_{d}$ we denote by $\lambda_{1}(p),\lambda_{2}(p),\dots,\lambda_{d}(p)$ the $d$ roots of $p$ (counted with multiplicity). If $p\in\mathcal{P}_{d}(\mathbb{R})$, we further assume that $\lambda_{1}(p)\geq\lambda_{2}(p)\geq\dots\geq\lambda_{d}(p)$.

Given $p\in\mathcal{P}_{d}$, the symbol $\widetilde{\mathsf{e}}_{k}^{(d)}(p)$ denotes the normalized $k$-th elementary symmetric polynomial of the roots of $p$, namely

$\widetilde{\mathsf{e}}_{k}^{(d)}(p):=\binom{d}{k}^{-1}\sum_{1\leq i_{1}<\dots<i_{k}\leq d}\lambda_{i_{1}}(p)\cdots\lambda_{i_{k}}(p),\qquad k=1,\dots,d,$

with the convention that $\widetilde{\mathsf{e}}_{0}^{(d)}(p):=1$. Hence, we can express any $p\in\mathcal{P}_{d}$ as

$p(x)=\prod_{j=1}^{d}(x-\lambda_{j}(p))=\sum_{k=0}^{d}(-1)^{k}\binom{d}{k}\widetilde{\mathsf{e}}_{k}^{(d)}(p)\,x^{d-k}.$
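As a quick sanity check of this expansion (our own illustration), one can rebuild the monic coefficients from the $\widetilde{\mathsf{e}}_{k}$'s and compare them against a direct expansion of $\prod_{j}(x-\lambda_{j})$:

```python
import itertools
import math

import numpy as np

def e_tilde(roots, k):
    """Normalized elementary symmetric polynomial; e~_0 = 1 by convention."""
    d = len(roots)
    return sum(math.prod(c) for c in itertools.combinations(roots, k)) / math.comb(d, k)

roots = [-1.0, 0.5, 2.0, 3.0]
d = len(roots)
coeffs = [(-1) ** k * math.comb(d, k) * e_tilde(roots, k) for k in range(d + 1)]
expected = np.poly(roots)   # numpy's expansion of prod (x - λ_j), descending powers
```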

The empirical root distribution of $p\in\mathcal{P}_{d}$ is defined as

$\mu\llbracket p\rrbracket:=\frac{1}{d}\sum_{i=1}^{d}\delta_{\lambda_{i}(p)},$

and we let the $n$-th moment of $p$ be the $n$-th moment of $\mu\llbracket p\rrbracket$, that is,

$m_{n}(p):=\frac{1}{d}\sum_{i=1}^{d}\lambda_{i}(p)^{n},\qquad n=1,2,\dots.$

Notice that the map $\mu\llbracket\cdot\rrbracket$ provides a bijection from $\mathcal{P}_{d}$ onto the set

$\mathcal{M}_{d}:=\left\{\frac{1}{d}\sum_{i=1}^{d}\delta_{\lambda_{i}}\in\mathcal{M}\;\middle|\;\lambda_{1},\dots,\lambda_{d}\in\mathbb{C}\right\}.$

Moreover, we have that $\mu\llbracket\mathcal{P}_{d}(K)\rrbracket=\mathcal{M}_{d}(K)$ for every $K\subset\mathbb{C}$, where the latter is defined as $\mathcal{M}_{d}(K):=\mathcal{M}(K)\cap\mathcal{M}_{d}$. Notice also that for every $d$, the subset $\mathcal{M}_{d}$ is invariant under all the transformations in Notation 2.1; thus we can use the bijection $\mu\llbracket\cdot\rrbracket$ to define the analogous transformations on the set of polynomials:

Notation 2.2 (Basic transformations of polynomials).

Let $p\in\mathcal{P}_{d}$ and $c\in\mathbb{C}$.

  • Dilation. For $c\neq 0$, we define $[\mathrm{Dil}_{c}p](x):=c^{d}\,p\left(\frac{x}{c}\right)$.

  • Shift. For $c\in\mathbb{C}$, we define $[\mathrm{Shi}_{c}p](x):=p(x-c)$.

  • Power of a polynomial. Given $p\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0})$ and $c>0$, $p^{\langle c\rangle}$ is the polynomial with roots $\lambda_{i}(p^{\langle c\rangle})=(\lambda_{i}(p))^{c}$ for $i=1,\dots,d$.

  • Reversed polynomial. For $p\in\mathcal{P}_{d}(\mathbb{R}_{>0})$, the reversed polynomial of $p$ is the polynomial $p^{\langle-1\rangle}\in\mathcal{P}_{d}(\mathbb{R}_{>0})$ with coefficients

    (2.1) $\widetilde{\mathsf{e}}_{k}^{(d)}(p^{\langle-1\rangle})=\frac{\widetilde{\mathsf{e}}_{d-k}^{(d)}(p)}{\widetilde{\mathsf{e}}_{d}^{(d)}(p)}\qquad\text{for }k=0,\dots,d.$
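Formula (2.1) encodes the fact that reversing a polynomial inverts its roots; the sketch below (our own illustration) builds $p^{\langle-1\rangle}$ from (2.1) and checks that its roots are exactly the reciprocals $1/\lambda_{i}(p)$.

```python
import itertools
import math

import numpy as np

def e_tilde(roots, k):
    d = len(roots)
    return sum(math.prod(c) for c in itertools.combinations(roots, k)) / math.comb(d, k)

roots = [0.5, 1.0, 2.0, 4.0]
d = len(roots)

# normalized coefficients of the reversed polynomial, per (2.1)
e_rev = [e_tilde(roots, d - k) / e_tilde(roots, d) for k in range(d + 1)]
coeffs = [(-1) ** k * math.comb(d, k) * e_rev[k] for k in range(d + 1)]
rev_roots = np.sort(np.roots(coeffs).real)
```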

2.3. Weak convergence

In this section, we present well-known facts and results on weakly convergent sequences of probability measures on $\mathbb{R}$.

Given a compact interval $C\subset\mathbb{R}$ and a probability measure $\mu\in\mathcal{M}(C)$, the Cauchy transform of $\mu$ is

$G_{\mu}(z):=\int_{C}\frac{1}{z-x}\,\mu(\mathrm{d}x),\qquad z\in\mathbb{C}\setminus C.$
Lemma 2.3.

Let $L\in\mathbb{R}$ and let $\mu_{n},\mu\in\mathcal{M}(\mathbb{R}_{\geq L})$. The following assertions are equivalent.

  1. The weak convergence $\mu_{n}\xrightarrow{w}\mu$ as $n\to\infty$.

  2. For all $z\in\mathbb{C}\setminus\mathbb{R}_{\geq L}$, we have $\lim_{n\to\infty}G_{\mu_{n}}(z)=G_{\mu}(z)$.

  3. Let $r<s<L$. For all $a\in[r,s]$ we have $\lim_{n\to\infty}G_{\mu_{n}}(a)=G_{\mu}(a)$.

Proof.

The proof is similar to [Bor19, Corollary 3.1], but we include it for the reader's convenience. For $z\in\mathbb{C}\setminus\mathbb{R}_{\geq L}$, the real and imaginary parts of $x\mapsto(z-x)^{-1}$ are bounded and continuous on $\mathbb{R}_{\geq L}$. Thus the definition of weak convergence shows the implication (1) $\Rightarrow$ (2). The implication (2) $\Rightarrow$ (3) is trivial. We are only left to show that (3) $\Rightarrow$ (1). By Helly's selection theorem (see e.g. [Dur19, Theorem 3.2.12]), for any subsequence of $(\mu_{n})_{n\in\mathbb{N}}$, there exists a further subsequence $(\mu_{n_{k}})_{k\in\mathbb{N}}$ and a finite measure $\nu$ (on $\mathbb{R}_{\geq L}$) such that $\mu_{n_{k}}$ converges vaguely to $\nu$, and therefore $\lim_{k\to\infty}G_{\mu_{n_{k}}}(a)=G_{\nu}(a)$ for all $a\in[r,s]$, where the Cauchy transform $G_{\nu}$ can be defined even if $\nu$ is only a finite measure. By assumption (3), we have $G_{\mu}(a)=G_{\nu}(a)$ for all $a\in[r,s]$. Since $G_{\mu}$ and $G_{\nu}$ are analytic on $\mathbb{C}\setminus\mathbb{R}_{\geq L}$, the principle of analytic extension shows that $G_{\nu}(z)=G_{\mu}(z)$ for all $z\in\mathbb{C}\setminus\mathbb{R}_{\geq L}$. Finally, we get $\nu=\mu\in\mathcal{M}(\mathbb{R}_{\geq L})$ by the Stieltjes inversion formula, and hence $\mu_{n_{k}}\xrightarrow{w}\mu$. Then [Dur19, Theorem 3.2.15] yields $\mu_{n}\xrightarrow{w}\mu$. ∎

Given $\mu\in\mathcal{M}(\mathbb{R})$, its cumulative distribution function is the function $F_{\mu}:\mathbb{R}\to[0,1]$ such that

$F_{\mu}(x):=\mu((-\infty,x])\qquad\text{for all }x\in\mathbb{R}.$

Let $\{\mu_{n}\}_{n\in\mathbb{N}}$ and $\mu$ be probability measures on $\mathbb{R}$. It is well known that weak convergence of $\mu_{n}$ to $\mu$ is equivalent to convergence of the cumulative distribution functions $F_{\mu_{n}}$ to $F_{\mu}$ at the continuity points of $F_{\mu}$. In fact, by Pólya's theorem, if $F_{\mu}$ is continuous then such convergence is locally uniform.

Lemma 2.4.

Pointwise convergence of $F_{\mu_{n}}$ to the continuous function $F_{\mu}$ implies locally uniform convergence.

The right-continuous inverse of $F_{\mu}$ is defined by

$F_{\mu}^{-1}(t)=\sup\{x\in\mathbb{R}\mid F_{\mu}(x)<t\}\qquad\text{for }t\in(0,1);$

see [Dur19, proof of Theorem 1.2.2] for example. The following lemma is often used to prove Skorokhod's representation theorem in dimension one.

Lemma 2.5 (Ref. [Dur19, Theorem 3.2.8.]).

The convergence of $F_{\mu_{n}}$ to $F_{\mu}$ at the continuity points of $F_{\mu}$ is equivalent to the convergence of the right-continuous inverses $F_{\mu_{n}}^{-1}$ to $F_{\mu}^{-1}$ at the continuity points of $F_{\mu}^{-1}$.

2.4. Free Probability

In this section we review some of the basics of free probability. For a comprehensive introduction to free probability, we recommend the monographs [VDN92, NS06, MS17].

Free additive and multiplicative convolutions, denoted by $\boxplus$ and $\boxtimes$ respectively, correspond to the sum and product of free random variables; that is, $\mu_{a}\boxplus\mu_{b}=\mu_{a+b}$ and $\mu_{a}\boxtimes\mu_{b}=\mu_{ab}$ for free random variables $a$ and $b$. In this paper, rather than using the notion of free independence, we will work solely with the additive and multiplicative convolutions, which can be defined in terms of the $R$- and $S$-transforms, respectively.

2.4.1. Free additive convolution and $R$-transform

Given measures μ,ν()\mu,\nu\in\mathcal{M}(\mathbb{R}), their free additive convolution μν\mu\boxplus\nu is the distribution of X+YX+Y, where XX and YY are freely independent non-commutative random variables distributed as μ\mu and ν\nu, respectively. The convolution \boxplus was introduced by Voiculescu as a binary operation on compactly supported measures. The definition was extended to measures with unbounded support in [BV93].

The free convolution can be described analytically by means of the RR-transform. For every μ()\mu\in\mathcal{M}(\mathbb{R}), it is known that the Cauchy transform GμG_{\mu} is univalent in a neighborhood of infinity and thus admits a compositional inverse KμK_{\mu} on a suitable domain Γμ\Gamma_{\mu}. Thus, one has

Gμ(Kμ(z))=z,zΓμ.G_{\mu}(K_{\mu}(z))=z,\qquad z\in\Gamma_{\mu}.

The RR-transform of μ\mu is defined as Rμ(z)=Kμ(z)1zR_{\mu}(z)=K_{\mu}(z)-\frac{1}{z}.

Definition 2.6 (Free additive convolution).

Let μ\mu and ν\nu be probability measures on the real line. We define the free convolution of μ\mu and ν\nu, denoted by μν\mu\boxplus\nu, as the unique measure satisfying

Rμν(z)=Rμ(z)+Rν(z),zΓμΓν.R_{\mu\boxplus\nu}(z)=R_{\mu}(z)+R_{\nu}(z),\qquad z\in\Gamma_{\mu}\cap\Gamma_{\nu}.

2.4.2. Free cumulants

For any probability measure μ\mu, we denote by mn(μ):=tnμ(dt)m_{n}(\mu)\mathrel{\mathop{\ordinarycolon}}=\int_{\mathbb{R}}t^{n}\mu(dt) its nn-th moment whenever |t|nμ(dt)\int_{\mathbb{R}}|t|^{n}\mu(dt) is finite. The free cumulants [Spe94] of μ\mu, denoted by (𝒓n(μ))n=1({\bm{r}}_{n}\left(\mu\right))_{n=1}^{\infty}, are recursively defined via the moment-cumulant formula

mn(μ)=πNC(n)𝒓π(μ),m_{n}(\mu)=\sum_{\pi\in\mathrm{NC}(n)}{\bm{r}}_{\pi}\left(\mu\right),

where NC(n)\mathrm{NC}(n) is the set of noncrossing partitions of {1,,n}\{1,\dots,n\} and 𝒓π(μ){\bm{r}}_{\pi}\left(\mu\right) is the multiplicative extension of (𝒓n(μ))n=1({\bm{r}}_{n}\left(\mu\right))_{n=1}^{\infty}. It is easy to see that the sequence (mn(μ))n=1(m_{n}(\mu))_{n=1}^{\infty} fully determines (𝒓n(μ))n=1({\bm{r}}_{n}\left(\mu\right))_{n=1}^{\infty} and vice versa. We can then recover the cumulants from the RR-transform as follows:

Rμ(z)=n=0𝒓n+1(μ)zn.R_{\mu}(z)=\sum_{n=0}^{\infty}{\bm{r}}_{n+1}\left(\mu\right)z^{n}.
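As a quick illustration (ours, not from the paper), the moment-cumulant formula can be inverted order by order; for n4n\leq 4 the explicit expressions for the free cumulants in terms of the moments read as follows. The function name is ours.

```python
def free_cumulants_4(m1, m2, m3, m4):
    """First four free cumulants from the first four moments,
    obtained by inverting the moment-cumulant formula over NC(n)."""
    r1 = m1
    r2 = m2 - m1**2
    r3 = m3 - 3*m1*m2 + 2*m1**3
    r4 = m4 - 4*m1*m3 - 2*m2**2 + 10*m1**2*m2 - 5*m1**4
    return [r1, r2, r3, r4]
```

For the free Poisson law with rate 1 (moments 1, 2, 5, 14) all free cumulants equal 1, while for the standard semicircle law (moments 0, 1, 0, 2) only the second cumulant survives.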

Hence, we can define free convolutions of compactly supported measures on the real line via their free cumulants. Indeed, given two compactly supported probability measures μ\mu and ν\nu on the real line, we define μν\mu\boxplus\nu to be the unique measure with cumulant sequence given by

𝒓n(μν)=𝒓n(μ)+𝒓n(ν).{\bm{r}}_{n}\left(\mu\boxplus\nu\right)={\bm{r}}_{n}\left(\mu\right)+{\bm{r}}_{n}\left(\nu\right).

If μ\mu and ν\nu are compactly supported on \mathbb{R}, then μν\mu\boxplus\nu is also a compactly supported probability measure on \mathbb{R}, as can be seen from [Spe94].

Let μk=μμ\mu^{\boxplus k}=\mu\boxplus\cdots\boxplus\mu be the free convolution of kk copies of μ\mu. From the above definition, it is clear that 𝒓n(μk)=k𝒓n(μ){\bm{r}}_{n}\left(\mu^{\boxplus k}\right)=k\,{\bm{r}}_{n}\left(\mu\right). In [NS96], Nica and Speicher discovered that one can extend this definition to non-integer powers; we refer the reader to [ST22, Section 1] for a discussion of fractional powers.

Definition 2.7 (Fractional free convolution powers).

Let μ\mu be a compactly supported probability measure on the real line. For t1t\geq 1, the fractional convolution power μt\mu^{\boxplus t} is defined to be the unique measure with cumulants

𝒓n(μt)=t𝒓n(μ).{\bm{r}}_{n}\left(\mu^{\boxplus t}\right)=t\,{\bm{r}}_{n}\left(\mu\right).

2.4.3. Free multiplicative convolution and SS-transform

In this section, we introduce the free multiplicative convolution, the SS-transform and related results from [Voi87, BV93]. Given measures μ(0)\mu\in\mathcal{M}(\mathbb{R}_{\geq 0}) and ν()\nu\in\mathcal{M}(\mathbb{R}), their free multiplicative convolution, denoted by μν\mu\boxtimes\nu, is the distribution of XYX\sqrt{X}Y\sqrt{X}, where X0X\geq 0 and YY are freely independent non-commutative random variables distributed as μ\mu and ν\nu, respectively. The convolution \boxtimes was introduced by Voiculescu as a binary operation on compactly supported measures. The definition was extended to measures with unbounded support in [BV93].

We now introduce Voiculescu’s SS-transform [Voi87], the main analytic tool used to study the multiplicative convolution \boxtimes. Given a probability measure μ(0)\mu\in\mathcal{M}(\mathbb{R}_{\geq 0}), the moment generating function of μ\mu is

Ψμ(z):=0tz1tzμ(dt),z0.\Psi_{\mu}(z)\mathrel{\mathop{\ordinarycolon}}=\int_{0}^{\infty}\frac{tz}{1-tz}\mu({\rm d}t),\quad z\in\mathbb{C}\setminus\mathbb{R}_{\geq 0}.

For μ(0){δ0}\mu\in\mathcal{M}(\mathbb{R}_{\geq 0})\setminus\{\delta_{0}\} it is known that Ψμ1\Psi_{\mu}^{-1}, the compositional inverse of Ψμ\Psi_{\mu}, exists in a neighborhood of (1+μ({0}),0)(-1+\mu(\{0\}),0). The SS-transform of μ\mu is defined as

Sμ(z):=1+zzΨμ1(z),z(1+μ({0}),0).S_{\mu}(z)\mathrel{\mathop{\ordinarycolon}}=\frac{1+z}{z}\Psi_{\mu}^{-1}(z),\qquad z\in(-1+\mu(\{0\}),0).
Remark 2.8.

In this paper, it is helpful for intuition to think of SμS_{\mu} as a function on the whole interval (1,0)(-1,0) by allowing Sμ(z):=S_{\mu}(z)\mathrel{\mathop{\ordinarycolon}}=\infty for z(1,1+μ({0})]z\in(-1,-1+\mu(\{0\})]. We can formalize this heuristic if we instead consider the multiplicative inverse of the SS-transform; see the definition of the TT-transform in Equation (2.2).

According to [BV93, Corollary 6.6], for every μ,ν(0){δ0}\mu,\nu\in\mathcal{M}(\mathbb{R}_{\geq 0})\setminus\{\delta_{0}\} the SS-transform satisfies an elegant product formula on a common domain:

Sμν=SμSν.S_{\mu\boxtimes\nu}=S_{\mu}S_{\nu}.

For example, the formula holds on (1+[μν]({0}),0)(-1+[\mu\boxtimes\nu](\{0\}),0), and it is known that [μν]({0})=max{μ({0}),ν({0})}[\mu\boxtimes\nu](\{0\})=\max\{\mu(\{0\}),\nu(\{0\})\}, see [Bel03].

Lemma 2.9.

Let us consider μ(0){δ0}\mu\in\mathcal{M}(\mathbb{R}_{\geq 0})\setminus\{\delta_{0}\}.

  1. (1)

    ([BN08, Eq. (3.7) and (3.11)]; in principle, this is proved in [BN08] only for compactly supported measures, but their argument is also valid for all measures in (0){δ0}\mathcal{M}(\mathbb{R}_{\geq 0})\setminus\{\delta_{0}\}.) For t(0,1μ({0}))t\in(0,1-\mu(\{0\})) and z(0,1)z\in(0,1), we have

    Sμ(tz)=SDilt(μ1/t)(z).S_{\mu}(-tz)=S_{{\rm Dil}_{t}(\mu^{\boxplus 1/t})}(-z).
  2. (2)

    ([HS07, Proposition 3.13]) If μ({0})=0\mu(\{0\})=0, then

    Sμ(t)Sμ1(t1)=1,t(0,1).S_{\mu}(-t)S_{\mu^{\langle-1\rangle}}(t-1)=1,\qquad t\in(0,1).
  3. (3)

    ([HM13, Lemma 2]) If additionally, μ\mu is not a Dirac measure, then SμS_{\mu} is strictly decreasing on (1+μ({0}),0)(-1+\mu(\{0\}),0).

The multiplicative inverse of the SS-transform goes by the name of the TT-transform [Dyk07, Eq. (15)] and plays the same role as the SS-transform. For μ(0)\mu\in\mathcal{M}(\mathbb{R}_{\geq 0}), we define its (shifted) TT-transform as the function Tμ:(0,1)0T_{\mu}\mathrel{\mathop{\ordinarycolon}}(0,1)\to\mathbb{R}_{\geq 0} such that

(2.2) Tμ(t)={0if t(0,μ({0})],1Sμ(t1)if t(μ({0}),1).T_{\mu}(t)=\begin{cases}0&\text{if }t\in(0,\mu(\{0\})],\vspace{2mm}\\ \dfrac{1}{S_{\mu}(t-1)}&\text{if }t\in(\mu(\{0\}),1).\end{cases}

The TT-transform is continuous on (0,1)(0,1) because

limtμ({0})+Tμ(t)=0,\lim_{t\to\mu(\{0\})^{+}}T_{\mu}(t)=0,

which follows from the fact that Sμ(t)S_{\mu}(t)\to\infty as tt approaches 1+μ({0})-1+\mu(\{0\}) from the right when μ({0})>0\mu(\{0\})>0.

Remark 2.10.

Notice that the TT-transform contains the same information as the SS-transform and thus it determines the measure μ\mu. Also, all the properties of the SS-transform mentioned above can be readily adapted to the TT-transform. We highlight two advantages of the new transform. First, it can be defined on the whole interval (0,1)(0,1), formalizing the intuition mentioned in Remark 2.8. In particular, we can define the TT-transform of δ0\delta_{0} as Tδ0(t)=0T_{\delta_{0}}(t)=0 for all t(0,1)t\in(0,1). Moreover, the TT-transform can be understood as the inverse of the cumulative distribution function of the so-called law of large numbers for the multiplicative convolution that we introduce below.

Tucci [Tuc10], Haagerup and Möller [HM13, Theorem 2] proved that, for any μ(0)\mu\in\mathcal{M}(\mathbb{R}_{\geq 0}), there exists Φ(μ)(0)\Phi(\mu)\in\mathcal{M}(\mathbb{R}_{\geq 0}) such that

(μn)1/n𝑤Φ(μ)as n.(\mu^{\boxtimes n})^{\langle 1/n\rangle}\xrightarrow{w}\Phi(\mu)\qquad\text{as }n\to\infty.

If μ\mu is a Dirac measure, then Φ(μ)=μ\Phi(\mu)=\mu. Otherwise, the distribution Φ(μ)\Phi(\mu) is determined by

(2.3) Φ(μ)({0})=μ({0})andFΦ(μ)(Tμ(t))=tfor all t(μ({0}),1).\displaystyle\Phi(\mu)(\{0\})=\mu(\{0\})\quad\text{and}\quad F_{\Phi(\mu)}\left(T_{\mu}(t)\right)=t\qquad\text{for all }t\in(\mu(\{0\}),1).

Moreover, the support of the measure Φ(μ)(0)\Phi(\mu)\in\mathcal{M}(\mathbb{R}_{\geq 0}) is the closure of the interval

(2.4) (α,β)=((0x1μ(dx))1,0xμ(dx)).(\alpha,\beta)=\left(\left(\int_{0}^{\infty}x^{-1}\mu(dx)\right)^{-1},\int_{0}^{\infty}x\,\mu(dx)\right).

In other words, TμT_{\mu} and FΦ(μ)F_{\Phi(\mu)} are inverse functions of each other. Moreover, as long as μ\mu is not a Dirac measure, TμT_{\mu} is strictly increasing, and these functions provide a one-to-one correspondence between (μ({0}),1)(\mu(\{0\}),1) and (α,β)(\alpha,\beta).

2.5. Finite free probability

In this section, we summarize some definitions and basic results on finite free probability.

2.5.1. Finite free convolutions

The finite free additive and multiplicative convolutions that correspond to two classical polynomial convolutions were studied a century ago by Szegö [Sze22] and Walsh [Wal22] and they were recently rediscovered in [MSS22] as expected characteristic polynomials of the sum and product of randomly rotated matrices.

Definition 2.11.

Let p,q𝒫dp,q\in\mathcal{P}_{d} be polynomials of degree dd.

  • The finite free additive convolution of pp and qq is the polynomial pdq𝒫dp\boxplus_{d}q\in\mathcal{P}_{d} uniquely determined by

    (2.5) 𝖾~k(d)(pdq)=j=0k(kj)𝖾~j(d)(p)𝖾~kj(d)(q)for k=0,1,,d.\widetilde{\mathsf{e}}_{k}^{(d)}(p\boxplus_{d}q)=\sum_{j=0}^{k}\binom{k}{j}\widetilde{\mathsf{e}}_{j}^{(d)}(p)\widetilde{\mathsf{e}}_{k-j}^{(d)}(q)\qquad\text{for }k=0,1,\dots,d.
  • The finite free multiplicative convolution of pp and qq is the polynomial pdq𝒫dp\boxtimes_{d}q\in\mathcal{P}_{d} uniquely determined by

    (2.6) 𝖾~k(d)(pdq)=𝖾~k(d)(p)𝖾~k(d)(q)for k=0,1,,d.\widetilde{\mathsf{e}}_{k}^{(d)}(p\boxtimes_{d}q)=\widetilde{\mathsf{e}}_{k}^{(d)}(p)\widetilde{\mathsf{e}}_{k}^{(d)}(q)\qquad\text{for }k=0,1,\dots,d.
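Since both convolutions act coordinate-wise on the normalized coefficients, they are straightforward to compute. The following sketch (our illustration; the helper names are ours) extracts the normalized coefficients 𝖾~k(d)\widetilde{\mathsf{e}}_{k}^{(d)} from the roots and applies formulas (2.5) and (2.6).

```python
from math import comb, isclose, prod
from itertools import combinations

def e_tilde(roots):
    """Normalized coefficients e~_k = e_k(roots) / binom(d, k), k = 0..d."""
    d = len(roots)
    return [sum(prod(c) for c in combinations(roots, k)) / comb(d, k)
            for k in range(d + 1)]

def boxplus(ep, eq):
    """Finite free additive convolution on coefficient lists, Eq. (2.5)."""
    return [sum(comb(k, j) * ep[j] * eq[k - j] for j in range(k + 1))
            for k in range(len(ep))]

def boxtimes(ep, eq):
    """Finite free multiplicative convolution on coefficient lists, Eq. (2.6)."""
    return [a * b for a, b in zip(ep, eq)]
```

As sanity checks, x^d is the additive identity, (x-1)^d is the multiplicative identity, and additive convolution with (x-1)^d shifts every root by 1.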

In many circumstances the finite free convolution of two real-rooted polynomials is also real-rooted. For p,q𝒫dp,q\in\mathcal{P}_{d}, the following hold:

  1. (i)

    p,q𝒫d()pdq𝒫d()p,q\in\mathcal{P}_{d}(\mathbb{R})\ \Longrightarrow\ p\boxplus_{d}q\in\mathcal{P}_{d}(\mathbb{R}).

  2. (ii)

    p𝒫d(),q𝒫d(0)pdq𝒫d()p\in\mathcal{P}_{d}(\mathbb{R}),\ q\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0})\ \Longrightarrow\ p\boxtimes_{d}q\in\mathcal{P}_{d}(\mathbb{R}).

  3. (iii)

    p,q𝒫d(0)pdq𝒫d(0)p,q\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0})\ \Longrightarrow\ p\boxtimes_{d}q\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}).

If we replace the sets 0\mathbb{R}_{\geq 0} above by >0\mathbb{R}_{>0}, the statements remain valid. Moreover, we can use a rule of signs to determine the location of the roots when taking the multiplicative convolution of polynomials in 𝒫d(0)\mathcal{P}_{d}(\mathbb{R}_{\geq 0}) or 𝒫d(0)\mathcal{P}_{d}(\mathbb{R}_{\leq 0}), see [MMP24a, Section 2.5].

Definition 2.12 (Interlacing).

Let p,q𝒫d()p,q\in\mathcal{P}_{d}(\mathbb{R}). We say that qq interlaces pp, denoted by pqp\preccurlyeq q, if

(2.7) λd(p)λd(q)λd1(p)λd1(q)λ1(p)λ1(q).\quad\lambda_{d}(p)\leq\lambda_{d}(q)\leq\lambda_{d-1}(p)\leq\lambda_{d-1}(q)\leq\cdots\leq\lambda_{1}(p)\leq\lambda_{1}(q).

We use the notation pqp\prec q when all inequalities in (2.7) are strict.

From the real-root preservation and the linearity of the finite free convolutions, one can derive the following interlacing-preservation property, see [MMP24a, Proposition 2.11].

Proposition 2.13 (Preservation of interlacing).

If p,q𝒫d()p,q\in\mathcal{P}_{d}(\mathbb{R}) and pqp\preccurlyeq q, then

r𝒫d()pdrqdrr\in\mathcal{P}_{d}(\mathbb{R})\quad\Rightarrow\quad p\boxplus_{d}r\preccurlyeq q\boxplus_{d}r

and

r𝒫d(0)pdrqdr.r\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0})\quad\Rightarrow\quad p\boxtimes_{d}r\preccurlyeq q\boxtimes_{d}r.

The same statements hold if we replace all \preccurlyeq by \prec.

The finite free cumulants were defined in [AP18] as an analogue of the free cumulants [Spe94]. Below we define the finite free cumulants using the coefficient-cumulant formula from [AP18, Remark 3.5] and briefly mention the basic facts that will be used in this paper. For a detailed explanation of these objects we refer the reader to [AP18].

Definition 2.14 (Finite free cumulants).

The finite free cumulants of a polynomial p𝒫dp\in\mathcal{P}_{d} are the sequence (κn(d)(p))n=1d(\kappa_{n}^{(d)}(p))_{n=1}^{d} defined in terms of the coefficients as

(2.8) κn(d)(p):=(d)n1(n1)!πP(n)(1)|π|1(|π|1)!Vπ𝖾~|V|(d)(p) for n=1,2,,d,\kappa_{n}^{(d)}(p)\mathrel{\mathop{\ordinarycolon}}=\frac{(-d)^{n-1}}{(n-1)!}\sum_{\pi\in P(n)}\ (-1)^{|\pi|-1}(|\pi|-1)!\prod_{V\in\pi}\widetilde{\mathsf{e}}_{|V|}^{(d)}(p)\qquad\text{ for }n=1,2,\dots,d,

where P(n)P(n) is the set of all partitions of {1,,n}\{1,\dots,n\}, |π||\pi| is the number of blocks of π\pi and |V||V| is the size of the block VπV\in\pi.
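For small nn, formula (2.8) can be evaluated directly by enumerating set partitions. The sketch below (ours, not optimized) does exactly that; the checks verify two facts used repeatedly later: κ1(d)\kappa_{1}^{(d)} is the mean of the roots, and all higher cumulants of (xa)d(x-a)^{d} vanish.

```python
from math import comb, factorial, isclose, prod
from itertools import combinations

def set_partitions(s):
    """Yield all partitions of the list s into blocks."""
    if len(s) == 1:
        yield [s]
        return
    first, rest = s[0], s[1:]
    for smaller in set_partitions(rest):
        for i in range(len(smaller)):
            yield smaller[:i] + [[first] + smaller[i]] + smaller[i + 1:]
        yield [[first]] + smaller

def e_tilde_k(roots, k):
    """Normalized coefficient e~_k = e_k(roots) / binom(d, k)."""
    d = len(roots)
    return sum(prod(c) for c in combinations(roots, k)) / comb(d, k)

def finite_free_cumulant(roots, n):
    """kappa_n^{(d)} of the monic polynomial with the given roots, Eq. (2.8)."""
    d = len(roots)
    total = 0.0
    for pi in set_partitions(list(range(n))):
        b = len(pi)
        total += ((-1) ** (b - 1) * factorial(b - 1)
                  * prod(e_tilde_k(roots, len(V)) for V in pi))
    return (-d) ** (n - 1) / factorial(n - 1) * total
```

For the roots {1, 2, 3} one gets κ1 = 2 (the mean) and κ2 = 1, while for the constant polynomial (x-2)^4 all cumulants beyond the first vanish.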

The main property of finite free cumulants is that they linearize the finite free additive convolution d\boxplus_{d}.

Proposition 2.15 (Proposition 3.6 of [AP18]).

For p,q𝒫dp,q\in\mathcal{P}_{d} it holds that

κn(d)(pdq)=κn(d)(p)+κn(d)(q)for n=1,,d.\kappa_{n}^{(d)}(p\boxplus_{d}q)=\kappa_{n}^{(d)}(p)+\kappa_{n}^{(d)}(q)\qquad\text{for }n=1,\dots,d.

2.5.2. Limit theorems

The connection between free and finite free probability is revealed in the asymptotic regime when we take the degree dd\to\infty. To simplify the presentation throughout this paper, we introduce some notation for a sequence of polynomials that converge to a measure.

Notation 2.16.

We say that a sequence of polynomials (pj)j\left(p_{j}\right)_{j\in\mathbb{N}} converges to a measure μ\mu\in\mathcal{M} if

(2.9) pj𝒫djfor allj,limjdj=andμpj𝑤μasj.p_{j}\in\mathcal{P}_{d_{j}}\,\,\text{for all}\,\,j\in\mathbb{N},\qquad\lim_{j\to\infty}d_{j}=\infty\qquad\text{and}\qquad\mu\left\llbracket p_{j}\right\rrbracket\xrightarrow{w}\mu\,\,\text{as}\,\,j\to\infty.

Furthermore, given a degree sequence (dj)j(d_{j})_{j\in\mathbb{N}} we say that (kj)j(k_{j})_{j\in\mathbb{N}} is a diagonal sequence with ratio limit tt if

(2.10) kj{1,,dj}for alljandlimjkjdj=t.k_{j}\in\{1,\dots,d_{j}\}\,\,\text{for all}\,\,j\in\mathbb{N}\qquad\text{and}\qquad\lim_{j\to\infty}\frac{k_{j}}{d_{j}}=t.

Notice that if CC\subset\mathbb{C} is a closed subset and a sequence of polynomials pj𝒫dj(C)p_{j}\in\mathcal{P}_{d_{j}}(C) converges to μ\mu, then necessarily μ(C)\mu\in\mathcal{M}(C).

When the limiting distribution is compactly supported, we can characterize convergence in distribution via the moments or cumulants. While not stated explicitly, the proof of this proposition is implicit in [AP18].

Proposition 2.17.

Fix a compact subset KK\subset\mathbb{R}. Let μ()\mu\in\mathcal{M}(\mathbb{R}) and let (pd)d(p_{d})_{d\in\mathbb{N}} be a sequence of polynomials such that each pd𝒫d(K)p_{d}\in\mathcal{P}_{d}(K) has degree dd and all of its roots in KK. The following assertions are equivalent.

  1. (1)

    The sequence (μpd)d\left(\mu\left\llbracket p_{d}\right\rrbracket\right)_{d\in\mathbb{N}} converges to μ\mu in distribution.

  2. (2)

    Moment convergence: limdmn(pd)=mn(μ)\lim_{d\rightarrow\infty}m_{n}(p_{d})=m_{n}(\mu) for all nn\in\mathbb{N}.

  3. (3)

    Cumulant convergence: limdκn(d)(pd)=𝒓n(μ)\lim_{d\rightarrow\infty}\kappa_{n}^{(d)}(p_{d})={\bm{r}}_{n}\left(\mu\right) for all nn\in\mathbb{N}.

Proposition 2.18 (Corollary 5.5 in [AP18] and Theorem 1.4 in [AGP23]).

Let pd,qd𝒫d()p_{d},q_{d}\in\mathcal{P}_{d}(\mathbb{R}) be two sequences of polynomials converging to compactly supported measures μ,ν()\mu,\nu\in\mathcal{M}(\mathbb{R}), respectively. Then

  1. (1)

    The sequence (μpddqd)d\left(\mu\left\llbracket p_{d}\boxplus_{d}q_{d}\right\rrbracket\right)_{d\in\mathbb{N}} converges to μν\mu\boxplus\nu in distribution.

  2. (2)

    If additionally qd𝒫d(0)q_{d}\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}) and ν(0)\nu\in\mathcal{M}(\mathbb{R}_{\geq 0}) then (μpddqd)d\left(\mu\left\llbracket p_{d}\boxtimes_{d}q_{d}\right\rrbracket\right)_{d\in\mathbb{N}} converges to μν\mu\boxtimes\nu in distribution.

A finite free version of the Tucci–Haagerup–Möller limit from (2.3) was studied by Fujie and Ueda [FU23]. Given a polynomial p𝒫d(0)p\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}) with multiplicity rr at the root 00, [FU23, Theorem 3.2] asserts that there exists a limiting polynomial

(2.11) Φd(p):=limn(pdn)1/n.\Phi_{d}(p)\mathrel{\mathop{\ordinarycolon}}=\lim_{n\to\infty}(p^{\boxtimes_{d}n})^{\langle{1}/{n}\rangle}.

Moreover, the roots of Φd(p)\Phi_{d}(p) can be explicitly written in terms of the coefficients of pp:

(2.12) λk(Φd(p))={𝖾~k(d)(p)𝖾~k1(d)(p)if 1kdr,0if dr+1kd.\lambda_{k}\left(\Phi_{d}(p)\right)=\begin{cases}\dfrac{\widetilde{\mathsf{e}}_{k}^{(d)}(p)}{\widetilde{\mathsf{e}}_{k-1}^{(d)}(p)}&\text{if }1\leq k\leq d-r,\vspace{2mm}\\ 0&\text{if }d-r+1\leq k\leq d.\end{cases}
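Formula (2.12) is immediate to evaluate from the roots of pp. Below is a small numerical sketch (ours): for pp with strictly positive roots we compute the roots of Φd(p)\Phi_{d}(p) as ratios of consecutive normalized coefficients, and, for d=2d=2, verify directly that the iteration in (2.11) approaches them; the closed quadratic formula keeps the computation elementary, and n = 400 is an arbitrary large cutoff.

```python
from math import comb, isclose, prod
from itertools import combinations

def e_tilde(roots):
    """Normalized coefficients e~_k = e_k(roots) / binom(d, k), k = 0..d."""
    d = len(roots)
    return [sum(prod(c) for c in combinations(roots, k)) / comb(d, k)
            for k in range(d + 1)]

def phi_roots(roots):
    """Roots of Phi_d(p) via Eq. (2.12), assuming 0 is not a root (r = 0)."""
    et = e_tilde(roots)
    return [et[k] / et[k - 1] for k in range(1, len(roots) + 1)]

# Degree-2 check of the limit (2.11): p has roots {1, 3}, so e~1 = 2, e~2 = 3,
# and p^{boxtimes_d n} = x^2 - 2 * 2^n * x + 3^n.  Take n-th roots of its zeros.
n = 400
b, c = 2.0 ** n, 3.0 ** n
r_plus = b + (b * b - c) ** 0.5
approx = [r_plus ** (1 / n), (c / r_plus) ** (1 / n)]   # Vieta avoids cancellation
```

Note that the roots produced by (2.12) are automatically nonincreasing (a consequence of Newton's inequalities) and their product telescopes to the product of the roots of p.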

3. Finite free cumulants of derivatives

Since the work of Marcus [Mar21] and Marcus, Spielman and Srivastava [MSS22], it has been clear that the finite free convolutions behave well when applying differential operators, in particular, with respect to differentiation. One instance of such behaviour is the content of Theorem 1.2 from the introduction, stating that the asymptotic root distribution of polynomials after repeated differentiation tends to fractional free convolution.

In this section, we will collect some simple but powerful lemmas regarding finite free cumulants and differentiation that will be a source of useful insight for the rest of the paper. In particular, some of the ideas are key steps in the proof of our main theorem. Interestingly enough, the results of this section allow us to provide a more direct proof of Theorem 1.2, which is given at the end of this section. Note that later in Section 7.3 we will generalize this result to Theorem 7.7.

Notation 3.1.

We will denote by k|d:𝒫d𝒫k\partial_{k|d}\,\mathrel{\mathop{\ordinarycolon}}\mathcal{P}_{d}\to\mathcal{P}_{k} the operation

k|dp:=p(dk)(d)dk\partial_{k|d}\,p\mathrel{\mathop{\ordinarycolon}}=\frac{p^{(d-k)}}{\left(d\right)_{d-k}}

that differentiates dkd-k times a polynomial of degree dd and then normalizes by 1(d)dk\frac{1}{\left(d\right)_{d-k}} to obtain a monic polynomial of degree kk.

Notice that directly from the definition, we have that

j|kk|d=j|dfor jkd.\partial_{j|k}\,\circ\partial_{k|d}\,=\partial_{j|d}\,\qquad\text{for }j\leq k\leq d.

This can be understood as the finite free analogue of the following well-known property of the fractional convolution powers:

(μt)u=μtufor t,u1.\left(\mu^{\boxplus t}\right)^{\boxplus u}=\mu^{\boxplus tu}\qquad\text{for $t,u\geq 1$}.

This will be clear from the next series of lemmas, in particular from Corollary 3.5.

Our first main claim is that the normalized coefficients of a polynomial are invariant under the operations that we just introduced.

Lemma 3.2.

If p𝒫dp\in\mathcal{P}_{d} then

𝖾~j(k)(k|dp)=𝖾~j(d)(p) for 1jkd.\widetilde{\mathsf{e}}_{j}^{(k)}(\partial_{k|d}\,p)=\widetilde{\mathsf{e}}_{j}^{(d)}(p)\qquad\text{ for }1\leq j\leq k\leq d.
Proof.

Recall that we write the polynomial pp of degree dd as

p(x)=j=0d(1)j(dj)𝖾~j(d)(p)xdj.p(x)=\sum_{j=0}^{d}(-1)^{j}\binom{d}{j}\widetilde{\mathsf{e}}_{j}^{(d)}(p)x^{d-j}.

Then

p(x)d=j=0d1(1)j(d1j)𝖾~j(d)(p)xdj1.\displaystyle\frac{p^{\prime}(x)}{d}=\sum_{j=0}^{d-1}(-1)^{j}\binom{d-1}{j}\widetilde{\mathsf{e}}_{j}^{(d)}(p)x^{d-j-1}.

Thus 𝖾~j(d)(p)=𝖾~j(d1)(d1|dp)\widetilde{\mathsf{e}}_{j}^{(d)}(p)=\widetilde{\mathsf{e}}_{j}^{(d-1)}(\partial_{d-1|d}\,p) for j=1,,d1j=1,\dots,d-1. If we now fix jj and iterate this procedure then we conclude that

𝖾~j(d)(p)=𝖾~j(d1)(d1|dp)=𝖾~j(d2)(d2|dp)==𝖾~j(k)(k|dp)\widetilde{\mathsf{e}}_{j}^{(d)}(p)=\widetilde{\mathsf{e}}_{j}^{(d-1)}(\partial_{d-1|d}\,p)=\widetilde{\mathsf{e}}_{j}^{(d-2)}(\partial_{d-2|d}\,p)=\dots=\widetilde{\mathsf{e}}_{j}^{(k)}(\partial_{k|d}\,p)

as desired. ∎
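Lemma 3.2 is easy to check numerically. The sketch below (ours; helper names are ours) represents polynomials by monic coefficient lists, implements ∂_{k|d} by repeated differentiation and renormalization, and recovers the normalized coefficients; it also confirms the composition rule ∂_{j|k} ∘ ∂_{k|d} = ∂_{j|d} stated above.

```python
from math import comb, isclose

def poly_from_roots(roots):
    """Monic coefficients, highest degree first: p(x) = sum c[i] * x^(d - i)."""
    c = [1.0]
    for r in roots:
        c = [c[0]] + [c[i] - r * c[i - 1] for i in range(1, len(c))] + [-r * c[-1]]
    return c

def partial(k, d, c):
    """The map d_{k|d}: differentiate d - k times, then renormalize to monic."""
    for _ in range(d - k):
        deg = len(c) - 1
        c = [(deg - i) * c[i] for i in range(deg)]
    return [x / c[0] for x in c]       # leading coefficient is (d)_{d-k}

def e_tilde_from_coeffs(c):
    """Invert c[j] = (-1)^j * binom(k, j) * e~_j for a monic degree-k poly."""
    k = len(c) - 1
    return [(-1) ** j * c[j] / comb(k, j) for j in range(k + 1)]
```

For p with roots {1, 2, 3, 4}, two derivatives leave the first two normalized coefficients untouched, exactly as the lemma predicts.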

Remark 3.3.

Since additive and multiplicative convolutions only depend on these normalized coefficients, a direct implication is that convolutions are operations that commute with differentiation. Specifically, for kdk\leq d and p,q𝒫dp,q\in\mathcal{P}_{d}, one has

k|d(pdq)=(k|dp)k(k|dq) andk|d(pdq)=(k|dp)k(k|dq).\partial_{k|d}\,(p\boxplus_{d}q)=\left(\partial_{k|d}\,p\right)\boxplus_{k}\left(\partial_{k|d}\,q\right)\qquad\text{ and}\qquad\partial_{k|d}\,(p\boxtimes_{d}q)=\left(\partial_{k|d}\,p\right)\boxtimes_{k}\left(\partial_{k|d}\,q\right).

To the best of our knowledge, these identities have not appeared before in the literature. However, the formula for the additive convolution follows easily from two facts mentioned in [MSS22]: additive convolution commutes with differentiation (Section 1.1) and a relation between polynomials with different degrees (Lemma 1.16).

Lemma 3.2 can be translated into a similar statement in terms of finite free cumulants instead of coefficients.

Proposition 3.4.

Given a polynomial p𝒫dp\in\mathcal{P}_{d}, one has

κn(j)(j|dp)=(jd)n1κn(d)(p) for 1njd.\kappa_{n}^{(j)}(\partial_{j|d}\,p)=\left(\frac{j}{d}\right)^{n-1}\kappa_{n}^{(d)}(p)\quad\text{ for }1\leq n\leq j\leq d.
Proof.

For 1njd1\leq n\leq j\leq d, we compute

κn(j)(j|dp)\displaystyle\kappa_{n}^{(j)}(\partial_{j|d}\,p) =(j)n1(n1)!πP(n)(1)|π|1(|π|1)!Vπ𝖾~|V|(j)(j|dp)\displaystyle=\frac{(-j)^{n-1}}{(n-1)!}\sum_{\pi\in P(n)}\ (-1)^{|\pi|-1}(|\pi|-1)!\prod_{V\in\pi}\widetilde{\mathsf{e}}_{|V|}^{(j)}(\partial_{j|d}\,p) (by (2.8))
=(jd)n1(d)n1(n1)!π𝒫(n)(1)|π|1(|π|1)!Vπ𝖾~|V|(d)(p)\displaystyle=\left(\frac{j}{d}\right)^{n-1}\frac{(-d)^{n-1}}{(n-1)!}\sum_{\pi\in\mathcal{P}(n)}(-1)^{|\pi|-1}(|\pi|-1)!\prod_{V\in\pi}\widetilde{\mathsf{e}}_{|V|}^{(d)}(p) (by Lemma 3.2)
=(jd)n1κn(d)(p).\displaystyle=\left(\frac{j}{d}\right)^{n-1}\kappa_{n}^{(d)}(p). (by (2.8))

By basic properties of cumulants, the factor (j/d)n1(j/d)^{n-1} can be interpreted as doing a dilation and a fractional convolution:

Corollary 3.5.

Given a polynomial p𝒫dp\in\mathcal{P}_{d}, one has

κn(j)(j|dp)=κn(d)(Diljdpddj)for 1njd,\kappa_{n}^{(j)}\left(\partial_{j|d}\,p\right)=\kappa_{n}^{(d)}\left({\rm Dil}_{\frac{j}{d}}p^{\boxplus_{d}\frac{d}{j}}\right)\quad\text{for $1\leq n\leq j\leq d$,}

where the fractional finite free convolution power pdtp^{\boxplus_{d}t} is the polynomial determined by κn(d)(pdt)=tκn(d)(p)\kappa_{n}^{(d)}\left(p^{\boxplus_{d}t}\right)=t\kappa_{n}^{(d)}\left(p\right) for t1t\geq 1.

As a direct consequence of this result we will now prove that derivatives of a sequence of polynomials tend to the free fractional convolution of the limiting distribution. This result was conjectured by Steinerberger [Ste19, Ste21] and then proved formally by Hoskins and Kabluchko [HK21] using differential equations. A proof using finite free multiplicative convolution was given by Arizmendi, Garza-Vargas and Perales [AGP23]. Notice that the main upshot of this new proof is that we no longer need to use the fact that finite free multiplicative convolution tends to free multiplicative convolution. Instead, we will use this result later to give a new proof that finite free multiplicative convolution tends to free multiplicative convolution.

New proof of Theorem 1.2.

By Proposition 3.4 we know that

κn(ji)(ji|dipi)=(jidi)n1κn(di)(pi) for 1nji.\kappa_{n}^{(j_{i})}\left(\partial_{j_{i}|d_{i}}\,p_{i}\right)=\left(\frac{j_{i}}{d_{i}}\right)^{n-1}\kappa_{n}^{(d_{i})}(p_{i})\qquad\text{ for }1\leq n\leq j_{i}.

Since limijidi=t>0\lim_{i\to\infty}\frac{j_{i}}{d_{i}}=t>0, we have limiji=\lim_{i\to\infty}j_{i}=\infty. Thus, if we fix nn\in\mathbb{N}, the above equality holds for all sufficiently large ii. Then

limiκn(ji)(ji|dipi)=limi(jidi)n1κn(di)(pi)=tn1𝒓n(μ)=𝒓n(Dilt(μ1t)),\lim_{i\to\infty}\kappa_{n}^{(j_{i})}\left(\partial_{j_{i}|d_{i}}\,p_{i}\right)=\lim_{i\to\infty}\left(\frac{j_{i}}{d_{i}}\right)^{n-1}\kappa_{n}^{(d_{i})}(p_{i})=t^{n-1}{\bm{r}}_{n}\left(\mu\right)={\bm{r}}_{n}\left({\rm Dil}_{t}\left(\mu^{\boxplus\frac{1}{t}}\right)\right),

and convergence of all finite free cumulants for a measure with compact support is equivalent to weak convergence of measures by Proposition 2.17. ∎

In Section 7.4, we will show that the assertion of Theorem 1.2 still holds if we drop the assumption that μ\mu is compact, see Theorem 7.7.

Remark 3.6.

We should add a few words on Theorem 1.2 and its generalization, Theorem 7.7. We can always extend the result to t=1t=1 but not to t=0t=0. Let us explain these cases in detail. First we consider t=1t=1. Given pd𝒫d()p_{d}\in\mathcal{P}_{d}(\mathbb{R}) and any interval II\subset\mathbb{R}, the interlacing property shows that

|μpd(I)μ(d1)|dpd(I)|2d1.\left|\mu\left\llbracket p_{d}\right\rrbracket(I)-\mu\left\llbracket\partial_{(d-1)|d}\,p_{d}\right\rrbracket(I)\right|\leq\frac{2}{d-1}.

Indeed, if the number of roots of pdp_{d} in II is kk, then the number of roots of pdp_{d}^{\prime} in II is k1k-1, kk, or k+1k+1. This, together with the inequality k+1d1>kd>k1d1\frac{k+1}{d-1}>\frac{k}{d}>\frac{k-1}{d-1}, gives the bound above.

Hence, if j/d1j/d\to 1 then

|μpd(I)μj|dpd(I)|i=1dj2di2(dj)j0\left|\mu\left\llbracket p_{d}\right\rrbracket(I)-\mu\left\llbracket\partial_{j|d}\,p_{d}\right\rrbracket(I)\right|\leq\sum_{i=1}^{d-j}\frac{2}{d-i}\leq\frac{2(d-j)}{j}\to 0

when dd\to\infty. This means that μj|dpd𝑤μ\mu\left\llbracket\partial_{j|d}\,p_{d}\right\rrbracket\xrightarrow{w}\mu.

For t=0t=0, the situation becomes a bit more complicated. This case is essentially related to the law of large numbers of (finite) free probability. For instance, consider the sequence of polynomials pd(x)=xd1(xd)p_{d}(x)=x^{d-1}(x-d). Clearly, μpd𝑤δ0\mu\left\llbracket p_{d}\right\rrbracket\xrightarrow{w}\delta_{0}, but 1|dpd=(x1)\partial_{1|d}\,p_{d}=(x-1) and then μ1|dpd=δ1↛δ0\mu\left\llbracket\partial_{1|d}\,p_{d}\right\rrbracket=\delta_{1}\not\to\delta_{0}. So additional assumptions are needed. If the polynomials are uniformly supported on some compact set, we can use the finite free cumulants and their convergence. Precisely,

κn(j)(j|dpd)=(jd)n1κn(d)(pd).\kappa_{n}^{(j)}(\partial_{j|d}\,p_{d})=\left(\frac{j}{d}\right)^{n-1}\kappa_{n}^{(d)}(p_{d}).

Hence, if j/d0j/d\to 0 and jj\to\infty, the finite free cumulants satisfy κn(j)(j|dpd)0\kappa_{n}^{(j)}(\partial_{j|d}\,p_{d})\to 0 for n2n\geq 2, while κ1(j)(j|dpd)=κ1(d)(pd)=m1(pd)\kappa_{1}^{(j)}(\partial_{j|d}\,p_{d})=\kappa_{1}^{(d)}(p_{d})=m_{1}(p_{d}). Thus, the limit measure of μj|dpd\mu\left\llbracket\partial_{j|d}\,p_{d}\right\rrbracket is δa\delta_{a}, where aa is the mean of μ\mu, which corresponds to the limit of μt=Dilt(μ1/t)\mu_{t}={\rm Dil}_{t}(\mu^{\boxplus 1/t}) as t0t\to 0 by the law of large numbers. In other words, it is necessary to assume that the limit measure μ\mu has first moment aa and also that the first moments of pdp_{d} converge, i.e. 𝖾~1(d)(pd)a\widetilde{\mathsf{e}}_{1}^{(d)}(p_{d})\to a, because the limit of μ1|dpd\mu\left\llbracket\partial_{1|d}\,p_{d}\right\rrbracket should be δa\delta_{a}.

Besides, let us consider the polynomials pd(x)=xd2(xd)(x+d)p_{d}(x)=x^{d-2}(x-d)(x+d) for d3d\geq 3. It is clear that μpd𝑤δ0\mu\left\llbracket p_{d}\right\rrbracket\xrightarrow{w}\delta_{0} and 𝖾~1(d)(pd)=0\widetilde{\mathsf{e}}_{1}^{(d)}(p_{d})=0, but 𝖾~2(d)(pd)=2dd12\widetilde{\mathsf{e}}_{2}^{(d)}(p_{d})=-\frac{2d}{d-1}\to-2 as dd\to\infty, which means 2|dpdx22\partial_{2|d}\,p_{d}\to x^{2}-2. Thus, we additionally need to assume 𝖾~2(d)(pd)a2\widetilde{\mathsf{e}}_{2}^{(d)}(p_{d})\to a^{2}.

Under these two assumptions, we may conclude by the following key formula:

κ2(d)(pd)d=𝖾~1(d)(pd)2𝖾~2(d)(pd)=Var(pd)d1=Var(j|dpd)j1.\frac{\kappa_{2}^{(d)}(p_{d})}{d}=\widetilde{\mathsf{e}}_{1}^{(d)}(p_{d})^{2}-\widetilde{\mathsf{e}}_{2}^{(d)}(p_{d})=\frac{\operatorname{Var}(p_{d})}{d-1}=\frac{\operatorname{Var}(\partial_{j|d}\,p_{d})}{j-1}.

Hence, if 𝖾~2(d)(pd)a2\widetilde{\mathsf{e}}_{2}^{(d)}(p_{d})\to a^{2} then Var(j|dpd)0\operatorname{Var}(\partial_{j|d}\,p_{d})\to 0 for a fixed integer jj. That is, μj|dpd𝑤δa\mu\left\llbracket\partial_{j|d}\,p_{d}\right\rrbracket\xrightarrow{w}\delta_{a}. For jj\to\infty and j/d0j/d\to 0, we need a bit stronger assumption on the boundedness of Var(pd)\operatorname{Var}(p_{d}): if

Var(j|dpd)=j1d1Var(pd)0,\operatorname{Var}(\partial_{j|d}\,p_{d})=\frac{j-1}{d-1}\operatorname{Var}(p_{d})\to 0,

that is, κ2(d)(pd)=o(d/j)\kappa_{2}^{(d)}(p_{d})=o(d/j), then we have μj|dpdδa\mu\left\llbracket\partial_{j|d}\,p_{d}\right\rrbracket\to\delta_{a}.

Finally, we conclude this remark by mentioning that the CLT for t=0t=0 was recently treated by A. Campbell, S. O’Rourke, and D. Renfrew in [COR24]. The considerations above can be modified to give a proof of this result using finite free cumulants.
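The key variance formula in the remark above can be verified exactly on a small example (our sketch; we take d = 4 and j = 2). By Lemma 3.2, the normalized second derivative of pp is x22𝖾~1x+𝖾~2x^{2}-2\widetilde{\mathsf{e}}_{1}x+\widetilde{\mathsf{e}}_{2}, so its variance can be computed from the quadratic formula.

```python
from math import comb, prod
from itertools import combinations

roots = [0.0, 1.0, 2.0, 3.0]             # p(x) = x(x-1)(x-2)(x-3), d = 4
d = len(roots)

m1 = sum(roots) / d
var_p = sum(r * r for r in roots) / d - m1 ** 2

et1 = sum(roots) / d                      # e~_1 = e_1 / binom(d, 1)
et2 = sum(prod(c) for c in combinations(roots, 2)) / comb(d, 2)

# partial_{2|d} p = x^2 - 2*et1*x + et2 (Lemma 3.2); its roots and variance:
disc = (et1 ** 2 - et2) ** 0.5
deriv_roots = [et1 + disc, et1 - disc]
m1_deriv = sum(deriv_roots) / 2
var_deriv = sum(r * r for r in deriv_roots) / 2 - m1_deriv ** 2
```

Both sides of the chain kappa_2^{(d)}/d = e~_1^2 - e~_2 = Var(p)/(d-1) = Var(partial_{2|d} p)/(2-1) agree for this example.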

4. A partial order on polynomials

In this section we equip 𝒫d()\mathcal{P}_{d}(\mathbb{R}) with a partial order \ll that compares the roots of a pair of polynomials. This partial order, defined through the bijection between 𝒫d()\mathcal{P}_{d}(\mathbb{R}) and d()\mathcal{M}_{d}(\mathbb{R}), comes from a partial order in measures which was studied in connection to free probability in [BV93].

Notation 4.1 (A partial order on measures).

Given μ,ν()\mu,\nu\in\mathcal{M}(\mathbb{R}) we say that μν\mu\ll\nu if their cumulative distribution functions satisfy

Fμ(t)Fν(t) for all t.F_{\mu}(t)\geq F_{\nu}(t)\qquad\text{ for all }t\in\mathbb{R}.

In [BV93, Propositions 4.15 and 4.16] it was shown that for measures μ,ν()\mu,\nu\in\mathcal{M}(\mathbb{R}) such that μν\mu\ll\nu,

  1. (i)

    if ρ()\rho\in\mathcal{M}(\mathbb{R}), then (μρ)(νρ)(\mu\boxplus\rho)\ll(\nu\boxplus\rho), and

  2. (ii)

    if ρ(0)\rho\in\mathcal{M}(\mathbb{R}_{\geq 0}), then (μρ)(νρ)(\mu\boxtimes\rho)\ll(\nu\boxtimes\rho).

In particular, by considering ρ=(1t)δ0+tδ1\rho=(1-t)\delta_{0}+t\delta_{1} in (ii) above, since μρ=(1t)δ0+tDilt(μ1/t)\mu\boxtimes\rho=(1-t)\delta_{0}+t{\rm Dil}_{t}(\mu^{\boxplus 1/t}), we readily obtain the following.

Corollary 4.2.

If μν\mu\ll\nu then Dilt(μ1/t)Dilt(ν1/t){\rm Dil}_{t}(\mu^{\boxplus 1/t})\ll{\rm Dil}_{t}(\nu^{\boxplus 1/t}) for t(0,1)t\in(0,1).

The goal of this section is to prove finite analogues of the previous results. First we define a partial order in 𝒫d()\mathcal{P}_{d}(\mathbb{R}). Recall that given a polynomial p𝒫d()p\in\mathcal{P}_{d}(\mathbb{R}) its roots are denoted by λ1(p)λ2(p)λd(p)\lambda_{1}(p)\geq\lambda_{2}(p)\geq\dots\geq\lambda_{d}(p).

Notation 4.3 (A partial order on polynomials).

Given p,q𝒫d()p,q\in\mathcal{P}_{d}(\mathbb{R}) we say that pqp\ll q if the roots of pp are smaller than the roots of qq in the following sense:

λi(p)λi(q)for all i=1,2,,d.\lambda_{i}(p)\leq\lambda_{i}(q)\qquad\text{for all }i=1,2,\dots,d.

It is readily seen that

pqμpμq.p\ll q\qquad\Longleftrightarrow\qquad\mu\left\llbracket p\right\rrbracket\ll\mu\left\llbracket q\right\rrbracket.
Theorem 4.4 (d\boxplus_{d} and d\boxtimes_{d} preserve \ll).

Let p,q𝒫d()p,q\in\mathcal{P}_{d}(\mathbb{R}) such that pqp\ll q.

  1. (1)

    If r𝒫d()r\in\mathcal{P}_{d}(\mathbb{R}) then (pdr)(qdr)(p\boxplus_{d}r)\ll(q\boxplus_{d}r).

  2. (2)

    If r𝒫d(0)r\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}) then (pdr)(qdr)(p\boxtimes_{d}r)\ll(q\boxtimes_{d}r).

Proof.

The case p=qp=q is clear, so below we assume that pqp\neq q.

First we assume that pp and qq are both simple. For every t[0,1]t\in[0,1] we construct the polynomial pt𝒫d()p_{t}\in\mathcal{P}_{d}(\mathbb{R}) with roots given by

λi(pt)=(1t)λi(p)+tλi(q) for i=1,,d,\lambda_{i}(p_{t})=(1-t)\lambda_{i}(p)+t\lambda_{i}(q)\qquad\text{ for }i=1,\dots,d,

so that (pt)t[0,1](p_{t})_{t\in[0,1]} is a continuous interpolation from p0=pp_{0}=p to p1=qp_{1}=q. Consider the constant

M:=max{|λi(q)λi(p)|:i=1,2,,d}>0.M\mathrel{\mathop{\ordinarycolon}}=\max\left\{|\lambda_{i}(q)-\lambda_{i}(p)|\mathrel{\mathop{\ordinarycolon}}i=1,2,\dots,d\right\}>0.

Notice that the assumption $\lambda_{i}(p)\leq\lambda_{i}(q)$ implies that for every $0\leq s\leq t\leq 1$ and $i=1,\dots,d$ it holds that

(4.1) 0λi(pt)λi(ps)=(ts)(λi(q)λi(p))(ts)M.0\leq\lambda_{i}(p_{t})-\lambda_{i}(p_{s})=(t-s)(\lambda_{i}(q)-\lambda_{i}(p))\leq(t-s)M.

We now consider the constant

m:=min{|λi(p)λj(p)|,|λi(q)λj(q)|:1i<jd}.m\mathrel{\mathop{\ordinarycolon}}=\min\left\{|\lambda_{i}(p)-\lambda_{j}(p)|,|\lambda_{i}(q)-\lambda_{j}(q)|\mathrel{\mathop{\ordinarycolon}}1\leq i<j\leq d\right\}.

Notice that for every t[0,1]t\in[0,1] and for 1i<jd1\leq i<j\leq d, one has the lower bound

λi(pt)λj(pt)\displaystyle\lambda_{i}(p_{t})-\lambda_{j}(p_{t}) =(1t)λi(p)+tλi(q)((1t)λj(p)+tλj(q))\displaystyle=(1-t)\lambda_{i}(p)+t\lambda_{i}(q)-((1-t)\lambda_{j}(p)+t\lambda_{j}(q))
=(1t)(λi(p)λj(p))+t(λi(q)λj(q))\displaystyle=(1-t)(\lambda_{i}(p)-\lambda_{j}(p))+t(\lambda_{i}(q)-\lambda_{j}(q))
(1t)m+tm\displaystyle\geq(1-t)m+tm
=m.\displaystyle=m.

This bound, together with (4.1), guarantees that for $0\leq t\leq t+\varepsilon\leq 1$ with $\varepsilon<\frac{m}{M}$ we have

0λi+1(pt+ε)λi+1(pt)εM<mλi(pt)λi+1(pt).0\leq\lambda_{i+1}(p_{t+\varepsilon})-\lambda_{i+1}(p_{t})\leq\varepsilon M<m\leq\lambda_{i}(p_{t})-\lambda_{i+1}(p_{t}).

By adding λi+1(pt)\lambda_{i+1}(p_{t}), we obtain that

λi+1(pt)λi+1(pt+ε)λi(pt),\lambda_{i+1}(p_{t})\leq\lambda_{i+1}(p_{t+\varepsilon})\leq\lambda_{i}(p_{t}),

which means that the following interlacing inequality holds:

ptpt+εfor ε<mM and 0tt+ε1.p_{t}\preccurlyeq p_{t+\varepsilon}\qquad\text{for $\varepsilon<\frac{m}{M}$ and $0\leq t\leq t+\varepsilon\leq 1$.}

Thus, the family of polynomials (pt)t[0,1](p_{t})_{t\in[0,1]} is monotonically increasing, i.e., their roots increase as tt increases.

We now explain how to prove part (1). Since additive convolution preserves interlacing, we have

(ptdr)(pt+εdr)for ε<mM and 0tt+ε1.(p_{t}\boxplus_{d}r)\preccurlyeq(p_{t+\varepsilon}\boxplus_{d}r)\qquad\text{for }\varepsilon<\frac{m}{M}\text{ and }0\leq t\leq t+\varepsilon\leq 1.

In particular, we get that (ptdr)(pt+εdr)(p_{t}\boxplus_{d}r)\ll(p_{t+\varepsilon}\boxplus_{d}r). In other words, for every i=1,,di=1,\dots,d, the function fi:[0,1]f_{i}\mathrel{\mathop{\ordinarycolon}}[0,1]\to\mathbb{R} defined as fi(t)=λi(ptdr)f_{i}(t)=\lambda_{i}(p_{t}\boxplus_{d}r) is increasing, and (1) follows from

λi(pdr)=λi(p0dr)λi(p1dr)=λi(qdr).\lambda_{i}(p\boxplus_{d}r)=\lambda_{i}(p_{0}\boxplus_{d}r)\leq\lambda_{i}(p_{1}\boxplus_{d}r)=\lambda_{i}(q\boxplus_{d}r).

Part (2) follows by a similar method.

For the case when $p$ or $q$ has multiple roots, we can simply approximate them with simple polynomials. For example, for $n\in\mathbb{N}$ consider the polynomials $p_{n}$ and $q_{n}$ with roots

λ1(p)1n,λ2(p)2n,,λd(p)dnandλ1(q)1n,λ2(q)2n,,λd(q)dn,\lambda_{1}(p)-\tfrac{1}{n},\lambda_{2}(p)-\tfrac{2}{n},\dots,\lambda_{d}(p)-\tfrac{d}{n}\qquad\text{and}\qquad\lambda_{1}(q)-\tfrac{1}{n},\lambda_{2}(q)-\tfrac{2}{n},\dots,\lambda_{d}(q)-\tfrac{d}{n},

respectively. Then pn,qnp_{n},q_{n} are simple, and satisfy pnqnp_{n}\ll q_{n}, limnpn=p\lim_{n\to\infty}p_{n}=p and limnqn=q\lim_{n\to\infty}q_{n}=q. The general result then follows from the continuity of d\boxplus_{d} and d\boxtimes_{d}. ∎

As a direct corollary we get an inequality for the derivatives of polynomials, which can be seen as the finite free analogue of Corollary 4.2.

Corollary 4.5 (Differentiation preserves \ll).

Given p,q𝒫d()p,q\in\mathcal{P}_{d}(\mathbb{R}) such that pqp\ll q, one has

(k|dp)(k|dq)for k=1,,d.(\partial_{k|d}\,p)\ll(\partial_{k|d}\,q)\qquad\text{for }k=1,\dots,d.
Proof.

Fix $k\in\{1,\dots,d\}$ and recall that if $r(x)=x^{d-k}(x-1)^{k}$ then $p\boxtimes_{d}r(x)=x^{d-k}\partial_{k|d}\,p(x)$ and $q\boxtimes_{d}r(x)=x^{d-k}\partial_{k|d}\,q(x)$. By Theorem 4.4 we get that $(x^{d-k}\partial_{k|d}\,p)\ll(x^{d-k}\partial_{k|d}\,q)$, and this is equivalent to $(\partial_{k|d}\,p)\ll(\partial_{k|d}\,q)$. ∎
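As a sanity check (separate from the formal argument), the identity $p\boxtimes_{d}r=x^{d-k}\partial_{k|d}\,p$ can be verified numerically in exact rational arithmetic. The sketch below assumes the conventions used throughout: $p(x)=\sum_{j=0}^{d}(-1)^{j}\binom{d}{j}\widetilde{\mathsf{e}}_{j}^{(d)}(p)x^{d-j}$, the coefficient rule $\widetilde{\mathsf{e}}_{j}^{(d)}(p\boxtimes_{d}q)=\widetilde{\mathsf{e}}_{j}^{(d)}(p)\widetilde{\mathsf{e}}_{j}^{(d)}(q)$, and the normalized derivative $\partial_{k|d}\,p=\tfrac{k!}{d!}p^{(d-k)}$.

```python
from fractions import Fraction
from itertools import combinations
from math import comb, factorial, prod

def e_tilde(roots, j):
    """Normalized coefficient e~_j^{(d)} = e_j(roots)/binom(d, j)."""
    d = len(roots)
    return sum(prod(c) for c in combinations(roots, j)) / Fraction(comb(d, j))

def poly_from_e(et):
    """Coefficient list [x^d, ..., x^0] of the monic polynomial with e~-list et."""
    d = len(et) - 1
    return [(-1) ** j * comb(d, j) * et[j] for j in range(d + 1)]

def d_norm(coeffs, k):
    """Normalized derivative (k!/d!) p^(d-k); coefficients high-to-low."""
    d = len(coeffs) - 1
    out = list(coeffs)
    for _ in range(d - k):
        deg = len(out) - 1
        out = [out[i] * (deg - i) for i in range(deg)]
    return [c * Fraction(factorial(k), factorial(d)) for c in out]

d, k = 4, 2
p_roots = [1, 2, 3, 5]
r_roots = [0] * (d - k) + [1] * k          # r(x) = x^{d-k} (x-1)^k
ep = [e_tilde(p_roots, j) for j in range(d + 1)]
er = [e_tilde(r_roots, j) for j in range(d + 1)]
lhs = poly_from_e([a * b for a, b in zip(ep, er)])          # p boxtimes_d r
rhs = d_norm(poly_from_e(ep), k) + [Fraction(0)] * (d - k)  # x^{d-k} d_{k|d} p
```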

Another interesting consequence of the theorem, which will be useful later, is that the map Φd:𝒫d(0)𝒫d(0)\Phi_{d}\mathrel{\mathop{\ordinarycolon}}\mathcal{P}_{d}(\mathbb{R}_{\geq 0})\to\mathcal{P}_{d}(\mathbb{R}_{\geq 0}) from Equation (2.11) also preserves the partial order.

Proposition 4.6.

Given p,q𝒫d(0)p,q\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}) such that pqp\ll q, one has Φd(p)Φd(q)\Phi_{d}(p)\ll\Phi_{d}(q).

Proof.

First notice that using induction we can prove pdnqdnp^{\boxtimes_{d}n}\ll q^{\boxtimes_{d}n}. Indeed, the base case n=1n=1 is just our assumption and the inductive step follows from applying Theorem 4.4 twice:

pd(n+1)=(pdndp)(pdndq)(qdndq)=qd(n+1).p^{\boxtimes_{d}(n+1)}=\left(p^{\boxtimes_{d}n}\boxtimes_{d}p\right)\ll\left(p^{\boxtimes_{d}n}\boxtimes_{d}q\right)\ll\left(q^{\boxtimes_{d}n}\boxtimes_{d}q\right)=q^{\boxtimes_{d}(n+1)}.

Letting nn\to\infty, we conclude that

Φd(p)=limn(pdn)1/nlimn(qdn)1/n=Φd(q).\Phi_{d}(p)=\lim_{n\to\infty}\left(p^{\boxtimes_{d}n}\right)^{\langle{1}/{n}\rangle}\ll\lim_{n\to\infty}\left(q^{\boxtimes_{d}n}\right)^{\langle{1}/{n}\rangle}=\Phi_{d}(q).

5. Root bounds on Jacobi polynomials

The purpose of this section is to show a uniform bound on the extreme roots of Jacobi polynomials, which will be crucial in our proof of Theorem 1.1. The bound readily follows from a classic result of Moak, Saff and Varga [MSV79] after reparametrization of Jacobi polynomials, see Theorem 5.2.

The Jacobi polynomials have been studied thoroughly (notably by Szegö [Sze75]) and the literature on them is extensive. In this section we restrict ourselves to those results that are necessary to prove the bound. Notice that these polynomials are a particular case of the much larger class of hypergeometric polynomials, which will be reviewed in detail in Section 10.3, where they provide plenty of examples of our main theorem once it is proved.

5.1. Basic facts from a free probability perspective.

We will adopt a slightly different point of view on Jacobi polynomials to emphasize the intuition coming from free probability. Specifically, by making a simple change of variables we can make the parameters of the polynomials coincide with those of the measures obtained as their weak limits. This provides interesting insights into the roles of these polynomials within finite free probability, by drawing an analogy with the corresponding measures in free probability.

Following the notation from [MMP24a, Section 5], if we fix a degree dd\in\mathbb{N} and parameters a{1d,2d,,d1d}a\in\mathbb{R}\setminus\left\{\tfrac{1}{d},\tfrac{2}{d},\dots,\tfrac{d-1}{d}\right\} and bb\in\mathbb{R}, the (modified) Jacobi polynomial of parameters a,ba,b is the polynomial d[.ba.]𝒫d\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{b}{a}\right]}\in\mathcal{P}_{d} with coefficients given by

(5.1) 𝖾~j(d)(d[.ba.]):=(bd)j(ad)j for j=1,,d.\widetilde{\mathsf{e}}_{j}^{(d)}\left(\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{b}{a}\right]}\right)\mathrel{\mathop{\ordinarycolon}}=\frac{\left(bd\right)_{j}}{\left(ad\right)_{j}}\qquad\text{ for }j=1,\dots,d.

Notice that with a simple reparametrization this new notation can be readily translated into the more common expression in terms of ${}_{2}F_{1}$ hypergeometric functions, or in terms of the standard notation $P^{(\alpha,\beta)}_{d}$ used for Jacobi polynomials. In particular, this is the notation used in the literature [Sze75, MSV79], from which we will import some results:

d[.ba.](x)\displaystyle\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{b}{a}\right]}(x) =(1)d(bd)d(ad)d2F1(.d,add+1bdd+1.;x)\displaystyle=\frac{(-1)^{d}\left(bd\right)_{d}}{\left(ad\right)_{d}}{\ }_{2}F_{1}{\left(\genfrac{.}{.}{0.0pt}{}{-d,ad-d+1}{bd-d+1};x\right)} [MMP24a, Eq. (80)]
=d!(ad)dPd((1+ab)d,(b1)d)(2x1).\displaystyle=\frac{d!}{\left(ad\right)_{d}}\,P^{((-1+a-b)d,(b-1)d)}_{d}(2x-1). [MMP24a, Eq. (27)]

With the standard notation Pd(α,β)P^{(\alpha,\beta)}_{d}, the classical Jacobi polynomials correspond to parameters α,β[1,)\alpha,\beta\in[-1,\infty) and are orthogonal on [1,1][-1,1] with respect to the weight function (1x)α(1+x)β(1-x)^{\alpha}(1+x)^{\beta}. In particular, they have only simple roots, all contained in [1,1][-1,1]. In our new notation, this means that for b>11db>1-\tfrac{1}{d} and a>b+11da>b+1-\tfrac{1}{d} we obtain that

d[.ba.]𝒫d([0,1]).\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{b}{a}\right]}\in\mathcal{P}_{d}([0,1]).

The derivatives of Jacobi polynomials are again Jacobi polynomials [Sze75, Eq. (4.21.7)]. Specifically, from Eq. (5.1) and Lemma 3.2 we know that for any integer kdk\leq d and arbitrary parameters a,ba,b, one has

(5.2) k|dd[.ba.]=k[.bdkadk.].\partial_{k|d}\,\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{b}{a}\right]}=\mathcal{H}_{k}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{bd}{k}}{\frac{ad}{k}}\right]}.
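Equation (5.2) can be checked numerically. The sketch below assumes that $(x)_{j}$ in (5.1) denotes the falling factorial $x(x-1)\cdots(x-j+1)$ (consistent with (5.3) below) and that $\partial_{k|d}\,p=\tfrac{k!}{d!}p^{(d-k)}$; it expands both sides of (5.2) into coefficient lists and compares them exactly.

```python
from fractions import Fraction
from math import comb, factorial

def falling(x, j):
    """Falling factorial (x)_j = x (x-1) ... (x-j+1)."""
    out = Fraction(1)
    for i in range(j):
        out *= x - i
    return out

def jacobi_e(d, b, a):
    """Normalized coefficients e~_j of H_d[b; a] per Eq. (5.1)."""
    return [falling(b * d, j) / falling(a * d, j) for j in range(d + 1)]

def poly_from_e(et):
    """Coefficient list [x^d, ..., x^0] from the normalized coefficients."""
    d = len(et) - 1
    return [(-1) ** j * comb(d, j) * et[j] for j in range(d + 1)]

def d_norm(coeffs, k):
    """Normalized derivative (k!/d!) p^(d-k); coefficients high-to-low."""
    d = len(coeffs) - 1
    out = list(coeffs)
    for _ in range(d - k):
        deg = len(out) - 1
        out = [out[i] * (deg - i) for i in range(deg)]
    return [c * Fraction(factorial(k), factorial(d)) for c in out]

d, k = 5, 3
a, b = Fraction(3), Fraction(3, 2)
lhs = d_norm(poly_from_e(jacobi_e(d, b, a)), k)   # differentiate H_5[3/2; 3]
rhs = poly_from_e(jacobi_e(k, b * d / k, a * d / k))  # H_3[bd/k; ad/k]
```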

5.2. Bernoulli and free-binomial distribution

For non-standard parameters, the Jacobi polynomials may have multiple roots. We are particularly interested in polynomials with all roots equal to 0 or 11. From [MMP24a, Page 40] we know that

(5.3) d[.ld1.](x)=xdl(x1)lfor 0ld\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l}{d}}{1}\right]}(x)=x^{d-l}(x-1)^{l}\qquad\text{for }0\leq l\leq d

are polynomials whose empirical root distribution follows a Bernoulli distribution. Clearly, if we let dd\to\infty and l/du[0,1]l/d\to u\in[0,1] then we can approximate an arbitrary Bernoulli distribution βu:=(1u)δ0+uδ1\beta_{u}\mathrel{\mathop{\ordinarycolon}}=(1-u)\delta_{0}+u\delta_{1} with atoms in {0,1}\{0,1\} and probability uu.
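Equation (5.3) amounts to the identity $\widetilde{\mathsf{e}}_{j}^{(d)}=\binom{l}{j}/\binom{d}{j}=(l)_{j}/(d)_{j}$, which the following sketch verifies for all $0\leq l\leq d$ in a small degree, again assuming the falling-factorial reading of (5.1) and the usual coefficient normalization $\widetilde{\mathsf{e}}_{j}^{(d)}=e_{j}/\binom{d}{j}$.

```python
from fractions import Fraction
from itertools import combinations
from math import comb, prod

def falling(x, j):
    """Falling factorial (x)_j = x (x-1) ... (x-j+1)."""
    out = Fraction(1)
    for i in range(j):
        out *= x - i
    return out

def e_tilde(roots, j):
    """Normalized coefficient e~_j^{(d)} = e_j(roots)/binom(d, j)."""
    d = len(roots)
    return sum(prod(c) for c in combinations(roots, j)) / Fraction(comb(d, j))

d = 6
checks = []
for l in range(d + 1):
    roots = [0] * (d - l) + [1] * l   # roots of x^{d-l} (x-1)^l
    via_51 = [falling(Fraction(l), j) / falling(Fraction(d), j)
              for j in range(d + 1)]  # Eq. (5.1) with b = l/d, a = 1
    via_roots = [e_tilde(roots, j) for j in range(d + 1)]
    checks.append(via_51 == via_roots)
```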

We want to understand the behaviour of these polynomials after repeated differentiation. From [AGP23, Lemma 3.5] we know that this is the same as studying multiplicative convolution of two of these polynomials. Moreover, using (5.2) we have that

(5.4) d[.kd1.]dd[.ld1.](x)=xdkk|dd[.ld1.](x)=xdkk[.lkdk.](x)for k,l=1,,d.\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{k}{d}}{1}\right]}\boxtimes_{d}\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l}{d}}{1}\right]}(x)=x^{d-k}\partial_{k|d}\,\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l}{d}}{1}\right]}(x)=x^{d-k}\mathcal{H}_{k}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l}{k}}{\frac{d}{k}}\right]}(x)\qquad\text{for }k,l=1,\dots,d.

By Theorem 1.2, when dd tends to infinity with kdt\tfrac{k}{d}\to t and ldu\tfrac{l}{d}\to u, the limit of empirical root distributions is given by

$\beta_{t}\boxtimes\beta_{u}=(1-t)\delta_{0}+t\,{\rm Dil}_{t}(\beta_{u}^{\boxplus 1/t}).$

The distribution Dilt(βu1/t){\rm Dil}_{t}(\beta_{u}^{\boxplus 1/t}) has been studied in connection to free probability under the name of free binomial distribution. It is also related to the free beta distributions of Yoshida [Yos20], see also [SY01].

Remark 5.1.

Since the expression on the left-hand side of (5.4) is symmetric, the roles of $k$ and $l$ can be interchanged. In particular, their largest roots coincide:

(5.5) λ1(d[.kd1.]dd[.ld1.])=λ1(k[.lkdk.])=λ1(l[.kldl.]).\lambda_{1}\left(\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{k}{d}}{1}\right]}\boxtimes_{d}\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l}{d}}{1}\right]}\right)=\lambda_{1}\left(\mathcal{H}_{k}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l}{k}}{\frac{d}{k}}\right]}\right)=\lambda_{1}\left(\mathcal{H}_{l}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{k}{l}}{\frac{d}{l}}\right]}\right).

In the next section we will uniformly bound the roots of these polynomials.
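In fact, the symmetry holds at the level of the full polynomials: $x^{d-k}\,\mathcal{H}_{k}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l}{k}}{\frac{d}{k}}\right]}(x)=x^{d-l}\,\mathcal{H}_{l}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{k}{l}}{\frac{d}{l}}\right]}(x)$, since both sides equal the product in (5.4). A minimal exact check, under the same notational assumptions as before (falling-factorial Pochhammer in (5.1) and the standard coefficient convention):

```python
from fractions import Fraction
from math import comb

def falling(x, j):
    """Falling factorial (x)_j = x (x-1) ... (x-j+1)."""
    out = Fraction(1)
    for i in range(j):
        out *= x - i
    return out

def jacobi_poly(m, b, a):
    """Coefficient list [x^m, ..., x^0] of H_m[b; a], via Eq. (5.1)."""
    et = [falling(b * m, j) / falling(a * m, j) for j in range(m + 1)]
    return [(-1) ** j * comb(m, j) * et[j] for j in range(m + 1)]

d, k, l = 10, 3, 4
# Pad with zeros to multiply by x^{d-k} and x^{d-l}, respectively.
lhs = jacobi_poly(k, Fraction(l, k), Fraction(d, k)) + [Fraction(0)] * (d - k)
rhs = jacobi_poly(l, Fraction(k, l), Fraction(d, l)) + [Fraction(0)] * (d - l)
```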

5.3. Uniform bound on the extreme roots

Finally, our main ingredient for bounding the roots is the well-understood limiting behaviour of the empirical root distribution of the Jacobi polynomials. In particular, we will use the following result.

Theorem 5.2.

Let us consider a degree sequence (dj)j(d_{j})_{j\in\mathbb{N}} and sequences (aj)j(a_{j})_{j\in\mathbb{N}} and (bj)j(b_{j})_{j\in\mathbb{N}} such that

(5.6) bj>11djandaj>bj+11djfor all j,b_{j}>1-\tfrac{1}{d_{j}}\qquad\text{and}\qquad a_{j}>b_{j}+1-\tfrac{1}{d_{j}}\qquad\text{for all }j\in\mathbb{N},
and with limits $\lim_{j\to\infty}a_{j}=a$ and $\lim_{j\to\infty}b_{j}=b$.

Then, by [MMP24a, page 42] the sequence of polynomials pj:=dj[.bjaj.]p_{j}\mathrel{\mathop{\ordinarycolon}}=\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{b_{j}}{a_{j}}\right]} weakly converges to the measure μb,a\mu_{b,a} with density

$d\mu_{b,a}=\frac{a}{2\pi}\frac{\sqrt{(L_{+}-x)(x-L_{-})}}{x(1-x)}\,dx,$

where L+L_{+} and LL_{-} are the extremes of the support and depend on the parameters a,ba,b:

(5.7) L±(a,b)=(ab±(a1)ba)2.L_{\pm}(a,b)=\left(\frac{\sqrt{a-b}\pm\sqrt{(a-1)b}}{a}\right)^{2}.

Furthermore, [MSV79, Theorem 1] guarantees that the smallest and largest roots converge to the extremes of the support. Namely,

(5.8) limjλ1(pj)=L+(a,b)andlimjλdj(pj)=L(a,b).\lim_{j\to\infty}\lambda_{1}(p_{j})=L_{+}(a,b)\qquad\text{and}\qquad\lim_{j\to\infty}\lambda_{d_{j}}(p_{j})=L_{-}(a,b).
Remark 5.3.

For t,u(0,1)t,u\in(0,1), we define the values

α±(t,u):=L±(1t,ut)=(t(1u)±u(1t))2\alpha_{\pm}(t,u)\mathrel{\mathop{\ordinarycolon}}=L_{\pm}\left(\tfrac{1}{t},\tfrac{u}{t}\right)=\left(\sqrt{t(1-u)}\pm\sqrt{u(1-t)}\right)^{2}

in preparation for our next result. Notice that these values are symmetric with respect to $t$ and $u$:

α±(t,u)=α±(u,t).\alpha_{\pm}(t,u)=\alpha_{\pm}(u,t).

Furthermore, it is easily seen that the following identities hold:

α±(t,u)=α±(1t,1u),\alpha_{\pm}(t,u)=\alpha_{\pm}(1-t,1-u),
α±(t,u)=1α(t,1u).\alpha_{\pm}(t,u)=1-\alpha_{\mp}(t,1-u).
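These identities, together with the relation $\alpha_{\pm}(t,u)=L_{\pm}(\tfrac{1}{t},\tfrac{u}{t})$, are elementary algebra; a quick floating-point check:

```python
from math import isclose, sqrt

def L(a, b, sign):
    """Edge L_+ (sign=+1) or L_- (sign=-1) of the support, Eq. (5.7)."""
    return ((sqrt(a - b) + sign * sqrt((a - 1) * b)) / a) ** 2

def alpha(t, u, sign):
    """alpha_+ (sign=+1) or alpha_- (sign=-1) from Remark 5.3."""
    return (sqrt(t * (1 - u)) + sign * sqrt(u * (1 - t))) ** 2

ok = True
for t, u in [(0.2, 0.5), (0.37, 0.81), (0.6, 0.6)]:
    for s in (+1, -1):
        ok &= isclose(alpha(t, u, s), L(1 / t, u / t, s), abs_tol=1e-12)
        ok &= isclose(alpha(t, u, s), alpha(u, t, s), abs_tol=1e-12)
        ok &= isclose(alpha(t, u, s), alpha(1 - t, 1 - u, s), abs_tol=1e-12)
        ok &= isclose(alpha(t, u, s), 1 - alpha(t, 1 - u, -s), abs_tol=1e-12)
```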

We are now ready to prove the main result of this section, which concerns the asymptotic behaviour of the extreme roots of d[.ld1.]\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l}{d}}{1}\right]} after repeated differentiation.

Lemma 5.4.

Fix $t,u\in(0,1)$ and let $(d_{j})_{j=1}^{\infty}$ be a divergent sequence of integers and $(k_{j})_{j=1}^{\infty}$, $(l_{j})_{j=1}^{\infty}$ be diagonal sequences with limits $t$ and $u$, respectively.

  1. (1)

    If $t+u<1$ then the largest roots $\lambda_{1}\left(\partial_{k_{j}|d_{j}}\,\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l_{j}}{d_{j}}}{1}\right]}\right)$ converge to $\alpha_{+}(t,u)$.

  2. (2)

    If $t<u$ then the smallest roots $\lambda_{k_{j}}\left(\partial_{k_{j}|d_{j}}\,\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l_{j}}{d_{j}}}{1}\right]}\right)$ converge to $\alpha_{-}(t,u)$.

Proof.

We first prove (1) under the assumption t<ut<u. By Equation (5.2), we have

kj|djdj[.ljdj1.]=kj[.ljkjdjkj.].\partial_{k_{j}|d_{j}}\,\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l_{j}}{d_{j}}}{1}\right]}=\mathcal{H}_{k_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l_{j}}{k_{j}}}{\frac{d_{j}}{k_{j}}}\right]}.

Thus in Theorem 5.2 we shall consider the case aj=djkja_{j}=\tfrac{d_{j}}{k_{j}} and bj=ljkjb_{j}=\tfrac{l_{j}}{k_{j}}. Since

limjkjdj=t<u=limjljdj,\lim_{j\to\infty}\tfrac{k_{j}}{d_{j}}=t<u=\lim_{j\to\infty}\tfrac{l_{j}}{d_{j}},

we have that $k_{j}<l_{j}$ for $j$ large enough. Similarly, since $t+u<1$, we obtain that $k_{j}+l_{j}<d_{j}$ for $j$ large enough. So both inequalities in hypothesis (5.6) are satisfied for every $j$ larger than some $N$. Since $\lim_{j\to\infty}a_{j}=\frac{1}{t}$ and $\lim_{j\to\infty}b_{j}=\frac{u}{t}$, the largest root of $\mathcal{H}_{k_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l_{j}}{k_{j}}}{\frac{d_{j}}{k_{j}}}\right]}$ converges to $\alpha_{+}(t,u)$ as desired.

In the case t>ut>u, since λ1(lj|djdj[.kjdj1.])=λ1(kj|djdj[.ljdj1.])\lambda_{1}\left(\partial_{l_{j}|d_{j}}\,\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{k_{j}}{d_{j}}}{1}\right]}\right)=\lambda_{1}\left(\partial_{k_{j}|d_{j}}\,\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l_{j}}{d_{j}}}{1}\right]}\right) by Eq. (5.4), we have the same conclusion.

In the case $t=u$, choose $t_{1}<t<t_{2}$ such that $t_{2}+u<1$, and take diagonal sequences $(k_{j}^{(1)})_{j=1}^{\infty}$, $(k_{j}^{(2)})_{j=1}^{\infty}$ with limits $t_{1}$ and $t_{2}$, respectively. Then, by Corollary 4.5,

λ1(lj|djdj[.kj(1)dj1.])λ1(lj|djdj[.kjdj1.])λ1(lj|djdj[.kj(2)dj1.])\lambda_{1}\left(\partial_{l_{j}|d_{j}}\,\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{k_{j}^{(1)}}{d_{j}}}{\overset{\ }{\scalebox{1.01}{1}}}\right]}\right)\leq\lambda_{1}\left(\partial_{l_{j}|d_{j}}\,\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{k_{j}}{d_{j}}}{1}\right]}\right)\leq\lambda_{1}\left(\partial_{l_{j}|d_{j}}\,\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{k_{j}^{(2)}}{d_{j}}}{\overset{\ }{\scalebox{1.01}{1}}}\right]}\right)

for large enough jj and

$\lim_{j\to\infty}\lambda_{1}\left(\partial_{l_{j}|d_{j}}\,\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{k_{j}^{(i)}}{d_{j}}}{\overset{\ }{\scalebox{1.01}{1}}}\right]}\right)=\alpha_{+}(t_{i},u)$

for $i=1,2$. This implies

$\alpha_{+}(t_{1},u)\leq\liminf_{j\to\infty}\lambda_{1}\left(\partial_{l_{j}|d_{j}}\,\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{k_{j}}{d_{j}}}{1}\right]}\right)\leq\limsup_{j\to\infty}\lambda_{1}\left(\partial_{l_{j}|d_{j}}\,\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{k_{j}}{d_{j}}}{1}\right]}\right)\leq\alpha_{+}(t_{2},u).$

Letting $t_{1},t_{2}\to t$, we obtain $\lim_{j\to\infty}\lambda_{1}\left(\partial_{l_{j}|d_{j}}\,\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{k_{j}}{d_{j}}}{1}\right]}\right)=\alpha_{+}(t,u)$.

For the proof of (2), note that $\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l}{d}}{1}\right]}(x)=(-1)^{d}\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{d-l}{d}}{1}\right]}(1-x)$, which is just a change of variables. Then one has $\lambda_{k_{j}}\left(\partial_{k_{j}|d_{j}}\,\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l_{j}}{d_{j}}}{1}\right]}\right)=1-\lambda_{1}\left(\partial_{k_{j}|d_{j}}\,\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{d_{j}-l_{j}}{d_{j}}}{1}\right]}\right)$. Hence, if $t<u$ (equivalently, $t+(1-u)<1$) then

$\lim_{j\to\infty}\lambda_{k_{j}}\left(\partial_{k_{j}|d_{j}}\,\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l_{j}}{d_{j}}}{1}\right]}\right)=1-\alpha_{+}(t,1-u)=\alpha_{-}(t,u)$

from the result of (1) and Remark 5.3. ∎

Lemma 5.5.

Let (pj)j(p_{j})_{j\in\mathbb{N}} be a sequence of polynomials converging to a measure μ(0)\mu\in\mathcal{M}(\mathbb{R}_{\geq 0}). For every diagonal sequence (kj)j(k_{j})_{j\in\mathbb{N}} with limit t(0,1μ({0}))t\in(0,1-\mu(\{0\})), there exists ϵ>0\epsilon>0 such that

lim infjλkj(kj|djpj)ϵ.\liminf_{j\to\infty}\lambda_{k_{j}}(\partial_{k_{j}|d_{j}}\,p_{j})\geq\epsilon.
Proof.

Let us take u(t,1μ({0})).u\in(t,1-\mu(\{0\})). Since μ({0})<1u\mu(\{0\})<1-u, there exists a>0a>0 such that Fμ(a)<1uF_{\mu}(a)<1-u and aa is a continuous point of FμF_{\mu}. Then we consider some sequence of integers (lj)j(l_{j})_{j\in\mathbb{N}} such that kjljdjk_{j}\leq l_{j}\leq d_{j} and lj/djul_{j}/d_{j}\to u and use it to construct a sequence of polynomials

qj(x):=Diladj[.ljdj1.](x)=xdjlj(xa)ljfor j.q_{j}(x)\mathrel{\mathop{\ordinarycolon}}={\rm Dil}_{a}\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{\frac{l_{j}}{d_{j}}}{1}\right]}(x)=x^{d_{j}-l_{j}}(x-a)^{l_{j}}\qquad\text{for }j\in\mathbb{N}.

By construction we know $q_{j}\ll p_{j}$, and using Corollary 4.5 we obtain that $\partial_{k_{j}|d_{j}}\,q_{j}\ll\partial_{k_{j}|d_{j}}\,p_{j}$. Finally, Lemma 5.4 yields that $\lim_{j\to\infty}\lambda_{k_{j}}(\partial_{k_{j}|d_{j}}\,q_{j})=a\alpha_{-}(t,u)>0$. Thus, letting $\epsilon\mathrel{\mathop{\ordinarycolon}}=a\alpha_{-}(t,u)$ we conclude that $\liminf_{j\to\infty}\lambda_{k_{j}}(\partial_{k_{j}|d_{j}}\,p_{j})\geq\epsilon$. ∎

Corollary 5.6.

Let $\mu\in\mathcal{M}(\mathbb{R}_{\geq 0})\setminus\{\delta_{0}\}$. For any $t\in(0,1-\mu(\{0\}))$, there exists $\epsilon>0$ such that the measure $\mu^{\boxplus 1/t}$ is supported on $[\epsilon,\infty)$.

6. Finite SS-transform

In this section we introduce our main object, the finite SS-transform. The development is parallel to that of Section 2.4.3, but in the finite free probability framework. After defining the finite SS-transform we also introduce a finite TT-transform, which contains the same information but is more suitable to deal with certain cases. Then we study all the basic properties of the finite SS-transform. These properties can be readily transferred to properties of the finite TT-transform.

6.1. Definition of Finite SS-transform and Finite TT-transform

We are now ready to introduce the finite SS-transform that was advertised in the introduction.

Definition 6.1 (Finite SS-transform).

Let $p\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0})$ be such that $p(x)\neq x^{d}$, and let $r$ be the multiplicity of the root $0$ in $p$. We define the finite $S$-transform of $p$ as the map

Sp(d):{kd|k=1,2,,dr}>0S_{p}^{(d)}\mathrel{\mathop{\ordinarycolon}}\left\{\left.-\frac{k}{d}\ \right|\ k=1,2,\dots,d-r\right\}\to\mathbb{R}_{>0}

such that

(6.1) Sp(d)(kd):=𝖾~k1(d)(p)𝖾~k(d)(p) for k=1,2,,dr.S_{p}^{(d)}\left(-\frac{k}{d}\right)\mathrel{\mathop{\ordinarycolon}}=\frac{\widetilde{\mathsf{e}}_{k-1}^{(d)}(p)}{\widetilde{\mathsf{e}}_{k}^{(d)}(p)}\quad\text{ for }\quad k=1,2,\dots,d-r.
Remark 6.2.

Notice that since pp has all non-negative roots, then 𝖾~k(d)(p)>0\widetilde{\mathsf{e}}_{k}^{(d)}(p)>0 if and only if 0kdr0\leq k\leq d-r. Thus, the SS-transform is well defined, and cannot be extended to k=dr+1,,dk=d-r+1,\dots,d as it would produce a division by 0. Similar to what was pointed out in Remark 2.8, it is useful for the intuition to allow the values Sp(d)(kd):=S_{p}^{(d)}\left(-\tfrac{k}{d}\right)\mathrel{\mathop{\ordinarycolon}}=\infty when k=dr+1,,dk=d-r+1,\dots,d. This will be formally explained below when considering the modified TT-transform.

Another natural question is why the domain is a discrete set. Notice that in principle the domain of $S_{p}^{(d)}$ can be extended to the whole interval $\left(-1+\tfrac{r}{d},0\right)$ so that it more closely resembles Voiculescu's $S$-transform. There are several ways to achieve this: defining a continuous piece-wise linear function whose non-differentiable points are at $-\frac{k}{d}$; using Lagrange interpolation to define a polynomial with values $\frac{\widetilde{\mathsf{e}}_{k-1}^{(d)}(p)}{\widetilde{\mathsf{e}}_{k}^{(d)}(p)}$ at $-\frac{k}{d}$; or simply defining a step function. (The interpolation option seems to produce a function closer to Marcus' $m$-finite $S$-transform in [Mar21, Eq. (14)]: looking at [Mar21, page 20], the function $f_{A}$ related to Marcus' $S$-transform is obtained as Lagrange interpolation on the same set. However, we were unable to devise a clear connection between our definition and Marcus' definition.) Although the last option does not produce a continuous function, it seems to be in closer agreement with the intuition coming from the $T$-transform.

Since it is not clear which extension is the best, we simply opted to restrict the definition to the discrete set $\left\{\left.-\frac{k}{d}\ \right|\ k=1,2,\dots,d-r\right\}$. Recall that the values of the finite $S$-transform at these points are enough to recover the coefficients. Indeed, we just need to multiply them:

(6.2) 𝖾~k(d)(p)=1Sp(d)(kd)Sp(d)(k1d)Sp(d)(1d).\widetilde{\mathsf{e}}_{k}^{(d)}(p)=\frac{1}{S_{p}^{(d)}\left(-\frac{k}{d}\right)S_{p}^{(d)}\left(-\frac{k-1}{d}\right)\cdots S_{p}^{(d)}\left(-\frac{1}{d}\right)}.
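As an illustration, the following sketch computes the finite $S$-transform of a small example directly from Definition 6.1 and verifies the recovery formula (6.2) in exact arithmetic; the helper computes $\widetilde{\mathsf{e}}_{j}^{(d)}$ from the roots, assuming the usual normalization $\widetilde{\mathsf{e}}_{j}^{(d)}=e_{j}/\binom{d}{j}$.

```python
from fractions import Fraction
from itertools import combinations
from math import comb, prod

def e_tilde(roots, j):
    """Normalized coefficient e~_j^{(d)} = e_j(roots)/binom(d, j)."""
    d = len(roots)
    return sum(prod(c) for c in combinations(roots, j)) / Fraction(comb(d, j))

def finite_S(roots):
    """Values S_p^{(d)}(-k/d), k = 1, ..., d - r, per Eq. (6.1)."""
    d = len(roots)
    r = sum(1 for x in roots if x == 0)
    et = [e_tilde(roots, j) for j in range(d + 1)]
    return {Fraction(-k, d): et[k - 1] / et[k] for k in range(1, d - r + 1)}

p_roots = [1, 2, 3]          # p(x) = (x-1)(x-2)(x-3), d = 3, r = 0
S = finite_S(p_roots)
# Eq. (6.2): recover e~_k by multiplying the first k values of 1/S.
recovered = []
for k in range(1, 4):
    prod_S = prod(S[Fraction(-i, 3)] for i in range(1, k + 1))
    recovered.append(1 / prod_S)
```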

Recall from Equation (2.2) that the $T$-transform in free probability is a shift of the multiplicative inverse of the $S$-transform. Alternatively, from Equation (2.3), the $T$-transform of $\mu\in\mathcal{M}(\mathbb{R}_{\geq 0})$ can be understood as the inverse of the cumulative distribution function of $\Phi(\mu)$. Since the map $\Phi_{d}$ from (2.11) is the finite version of the map $\Phi$, we will use the latter interpretation to define the finite counterpart of the $T$-transform.

Definition 6.3 (Finite TT-transform).

Given a polynomial $p\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0})$ we define the finite $T$-transform of $p$ as the function $T_{p}^{(d)}\mathrel{\mathop{\ordinarycolon}}(0,1)\to\mathbb{R}_{\geq 0}$ that is the right-continuous inverse of $F_{\mu\left\llbracket\Phi_{d}(p)\right\rrbracket}$ in $(0,1)$.

Remark 6.4.

Using (2.12), it is straightforward to check that the finite $T$-transform can be explicitly defined in terms of the coefficients of $p$. Indeed, if $r$ is the multiplicity of the root $0$ of $p$, then $T_{p}^{(d)}\mathrel{\mathop{\ordinarycolon}}(0,1)\to\mathbb{R}_{\geq 0}$ is the map such that

(6.3) Tp(d)(t):={0if t(0,rd)𝖾~dk+1(d)(p)𝖾~dk(d)(p)if t[k1d,kd) for k=r+1,,d.T^{(d)}_{p}(t)\mathrel{\mathop{\ordinarycolon}}=\begin{cases}0&\text{if }t\in\left(0,\tfrac{r}{d}\right)\vspace{2mm}\\ \dfrac{\widetilde{\mathsf{e}}_{d-k+1}^{(d)}(p)}{\widetilde{\mathsf{e}}_{d-k}^{(d)}(p)}&\text{if }t\in\left[\tfrac{k-1}{d},\tfrac{k}{d}\right)\text{ for }k=r+1,\dots,d.\end{cases}

Then, it is also clear that

(6.4) Tp(d)(dkd)=1Sp(d)(kd)for k=1,,dr.T^{(d)}_{p}\left(\frac{d-k}{d}\right)=\frac{1}{S_{p}^{(d)}\left(-\frac{k}{d}\right)}\qquad\text{for }k=1,\dots,d-r.
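A small sketch implementing (6.3) and checking (6.4) on an example with a root at $0$, under the same coefficient conventions as before:

```python
from fractions import Fraction
from itertools import combinations
from math import comb, prod

def e_tilde(roots, j):
    """Normalized coefficient e~_j^{(d)} = e_j(roots)/binom(d, j)."""
    d = len(roots)
    return sum(prod(c) for c in combinations(roots, j)) / Fraction(comb(d, j))

def finite_T(roots, t):
    """Finite T-transform T_p^{(d)}(t) per Eq. (6.3); t a Fraction in (0,1)."""
    d = len(roots)
    r = sum(1 for x in roots if x == 0)
    if t < Fraction(r, d):
        return Fraction(0)
    et = [e_tilde(roots, j) for j in range(d + 1)]
    k = min(int(t * d) + 1, d)          # the k with t in [(k-1)/d, k/d)
    return et[d - k + 1] / et[d - k]

p_roots = [0, 1, 3]                     # d = 3, r = 1
d = 3
et = [e_tilde(p_roots, j) for j in range(d + 1)]
# Eq. (6.4): T((d-k)/d) = 1 / S(-k/d) for k = 1, ..., d - r.
checks = [finite_T(p_roots, Fraction(d - k, d)) == et[k] / et[k - 1]
          for k in range(1, 3)]
```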

6.2. Basic properties of SS-transform

We now turn to the study of the basic properties of $S_{p}^{(d)}$. All these basic facts are analogous to those of Voiculescu's $S$-transform. Notice also that these properties can be readily adapted to fit the finite $T$-transform; after each result we will leave a brief comment pointing out the corresponding statement.

First we notice that the SS-transform is non-increasing and compute its extreme values.

Proposition 6.5 (Monotonicity and extreme values).

Let p𝒫d(0)p\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}) such that p(x)xdp(x)\neq x^{d}. Let λ1λd\lambda_{1}\geq\dots\geq\lambda_{d} be the roots of pp, and denote by rr the multiplicity of the root 0 in pp, namely λdr>λdr+1=0\lambda_{d-r}>\lambda_{d-r+1}=0. Then

  1. (1)

    If p(x)=(xc)dp(x)=(x-c)^{d} for some constant c>0c>0, then

    (6.5) Sp(d)(kd)=1cfor k=1,,d.S_{p}^{(d)}\left(-\frac{k}{d}\right)=\frac{1}{c}\qquad\text{for }k=1,\dots,d.
  2. (2)

    Otherwise, i.e., when $p$ has at least two distinct roots, the finite $S$-transform is strictly decreasing:

    (6.6) Sp(d)(k+1d)>Sp(d)(kd)for k=1,,dr1.S_{p}^{(d)}\left(-\frac{k+1}{d}\right)>S_{p}^{(d)}\left(-\frac{k}{d}\right)\qquad\text{for }k=1,\dots,d-r-1.

Moreover the smallest and largest values are, respectively,

(6.7) Sp(d)(1d)=(1dj=1dλj)1andSp(d)(drd)=r+1drj=1dr1λj.S_{p}^{(d)}\left(-\frac{1}{d}\right)=\left(\frac{1}{d}\sum_{j=1}^{d}\lambda_{j}\right)^{-1}\quad\text{and}\quad S_{p}^{(d)}\left(-\frac{d-r}{d}\right)=\frac{r+1}{d-r}\sum_{j=1}^{d-r}\frac{1}{\lambda_{j}}.

When $p$ has no roots at $0$ (i.e., when $r=0$), we can identify the latter as minus the value at $0$ of the Cauchy transform of the empirical root distribution of $p$:

(6.8) Sp(d)(1)=1dj=1d1λj=Gμp(0).S_{p}^{(d)}(-1)=\frac{1}{d}\sum_{j=1}^{d}\frac{1}{\lambda_{j}}=-G_{\mu\left\llbracket p\right\rrbracket}(0).
Proof.

The coefficients of p(x)=(xc)dp(x)=(x-c)^{d} are of the form 𝖾~k(d)(p)=ck\widetilde{\mathsf{e}}_{k}^{(d)}(p)=c^{k}, which directly implies Equation (6.5).

Assertion (6.6) follows from Newton’s inequality:

Sp(d)(k+1d)=𝖾~k(d)(p)𝖾~k+1(d)(p)>𝖾~k1(d)(p)𝖾~k(d)(p)=Sp(d)(kd)for k=1,,dr1.S_{p}^{(d)}\left(-\frac{k+1}{d}\right)=\frac{\widetilde{\mathsf{e}}_{k}^{(d)}(p)}{\widetilde{\mathsf{e}}_{k+1}^{(d)}(p)}>\frac{\widetilde{\mathsf{e}}_{k-1}^{(d)}(p)}{\widetilde{\mathsf{e}}_{k}^{(d)}(p)}=S_{p}^{(d)}\left(-\frac{k}{d}\right)\qquad\text{for }k=1,\dots,d-r-1.

This in turn implies that the smallest and largest values of Sp(d)S_{p}^{(d)} are attained at 1d-\frac{1}{d} and drd-\frac{d-r}{d}, respectively. Using the definition we can compute these values explicitly:

Sp(d)(1d)\displaystyle S_{p}^{(d)}\left(-\frac{1}{d}\right) =𝖾~0(d)(p)𝖾~1(d)(p)=(1dj=1dλj)1and\displaystyle=\frac{\widetilde{\mathsf{e}}_{0}^{(d)}(p)}{\widetilde{\mathsf{e}}_{1}^{(d)}(p)}=\left(\frac{1}{d}\sum_{j=1}^{d}\lambda_{j}\right)^{-1}\quad\text{and}
Sp(d)(drd)\displaystyle S_{p}^{(d)}\left(-\frac{d-r}{d}\right) =𝖾~dr1(d)(p)𝖾~dr(d)(p)=(ddr)(ddr1)j=1drλ1λj1λj+1λdrλ1λdr=r+1drj=1dr1λj.\displaystyle=\frac{\widetilde{\mathsf{e}}_{d-r-1}^{(d)}(p)}{\widetilde{\mathsf{e}}_{d-r}^{(d)}(p)}=\frac{\binom{d}{d-r}}{\binom{d}{d-r-1}}\sum_{j=1}^{d-r}\frac{\lambda_{1}\cdots\lambda_{j-1}\lambda_{j+1}\cdots\lambda_{d-r}}{\lambda_{1}\cdots\lambda_{d-r}}=\frac{r+1}{d-r}\sum_{j=1}^{d-r}\frac{1}{\lambda_{j}}.
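Both parts of Proposition 6.5, together with the extreme values (6.7), can be checked on small examples; the sketch below uses exact arithmetic and the usual coefficient normalization $\widetilde{\mathsf{e}}_{j}^{(d)}=e_{j}/\binom{d}{j}$.

```python
from fractions import Fraction
from itertools import combinations
from math import comb, prod

def e_tilde(roots, j):
    """Normalized coefficient e~_j^{(d)} = e_j(roots)/binom(d, j)."""
    d = len(roots)
    return sum(prod(c) for c in combinations(roots, j)) / Fraction(comb(d, j))

def S_values(roots):
    """[S_p^{(d)}(-k/d)] for k = 1, ..., d - r, per Eq. (6.1)."""
    d = len(roots)
    r = sum(1 for x in roots if x == 0)
    et = [e_tilde(roots, j) for j in range(d + 1)]
    return [et[k - 1] / et[k] for k in range(1, d - r + 1)]

# (1) A single repeated root c gives the constant value 1/c.
const = S_values([Fraction(1, 2)] * 4)
# (2) Distinct roots give strictly increasing values in k (Newton's
# inequality), with extreme values as in Eq. (6.7).
roots = [Fraction(0), Fraction(1), Fraction(4)]     # d = 3, r = 1
vals = S_values(roots)
d, r = 3, 1
mean = sum(roots) / d
largest = Fraction(r + 1, d - r) * sum(1 / x for x in roots if x != 0)
```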

Remark 6.6.

The previous result implies that the finite $T$-transform is a non-decreasing step function with smallest value

limt0Tp(d)(t)={(1di=1d1λi)1if r=0,0if r>0,\lim_{t\to 0}T_{p}^{(d)}(t)=\begin{cases}\left(\frac{1}{d}\sum_{i=1}^{d}\frac{1}{\lambda_{i}}\right)^{-1}&\text{if }r=0,\\ 0&\text{if }r>0,\end{cases}

and largest value

limt1Tp(d)(t)=1dj=1dλj.\lim_{t\to 1}T_{p}^{(d)}(t)=\frac{1}{d}\sum_{j=1}^{d}\lambda_{j}.

A fundamental property of Voiculescu’s SS-transform is that it is multiplicative with respect to \boxtimes. The analogous property for our finite SS-transform is an easy consequence of the fact that the coefficients of the polynomial are multiplicative with respect to d\boxtimes_{d}.

Proposition 6.7.

Let p,q𝒫d(0)p,q\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}) with p(x)xdq(x)p(x)\neq x^{d}\neq q(x) and let r,sr,s be the multiplicities of root 0 in p,qp,q, respectively. Then we have

$S_{p\boxtimes_{d}q}^{(d)}\left(-\frac{k}{d}\right)=S_{p}^{(d)}\left(-\frac{k}{d}\right)S_{q}^{(d)}\left(-\frac{k}{d}\right)\qquad\text{for all }k=1,2,\dots,d-\max\{r,s\}.$
Proof.

From the definitions of finite SS-transform (6.1) and finite multiplicative convolution it follows that

Spdq(d)(kd)=𝖾~k1(d)(pdq)𝖾~k(d)(pdq)=𝖾~k1(d)(p)𝖾~k1(d)(q)𝖾~k(d)(p)𝖾~k(d)(q)=Sp(d)(kd)Sq(d)(kd)S_{p\boxtimes_{d}q}^{(d)}\left(-\frac{k}{d}\right)=\frac{\widetilde{\mathsf{e}}_{k-1}^{(d)}(p\boxtimes_{d}q)}{\widetilde{\mathsf{e}}_{k}^{(d)}(p\boxtimes_{d}q)}=\frac{\widetilde{\mathsf{e}}_{k-1}^{(d)}(p)\widetilde{\mathsf{e}}_{k-1}^{(d)}(q)}{\widetilde{\mathsf{e}}_{k}^{(d)}(p)\widetilde{\mathsf{e}}_{k}^{(d)}(q)}=S_{p}^{(d)}\left(-\frac{k}{d}\right)S_{q}^{(d)}\left(-\frac{k}{d}\right)

for $k=1,2,\dots,d-\max\{r,s\}$. ∎
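Since $\boxtimes_{d}$ acts on the normalized coefficients by entry-wise multiplication, the proposition is immediate; the sketch below nevertheless runs the check starting from the roots, including an example with a root at $0$ where the domain restriction $k\leq d-\max\{r,s\}$ matters.

```python
from fractions import Fraction
from itertools import combinations
from math import comb, prod

def e_tilde(roots, j):
    """Normalized coefficient e~_j^{(d)} = e_j(roots)/binom(d, j)."""
    d = len(roots)
    return sum(prod(c) for c in combinations(roots, j)) / Fraction(comb(d, j))

d = 3
p_roots = [0, 1, 2]        # r = 1
q_roots = [1, 1, 3]        # s = 0
ep = [e_tilde(p_roots, j) for j in range(d + 1)]
eq = [e_tilde(q_roots, j) for j in range(d + 1)]
epq = [a * b for a, b in zip(ep, eq)]   # coefficients of p boxtimes_d q
# S-transforms are compared for k = 1, ..., d - max{r, s} = 2.
checks = [epq[k - 1] / epq[k] == (ep[k - 1] / ep[k]) * (eq[k - 1] / eq[k])
          for k in range(1, 3)]
```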

Notice that in terms of the finite TT-transform we do not need to worry about excluding xdx^{d} or the multiplicity of the root 0. For p,q𝒫d(0)p,q\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}) one has that

(6.9) Tpdq(d)(t)=Tp(d)(t)Tq(d)(t)for all t(0,1).T_{p\boxtimes_{d}q}^{(d)}(t)=T_{p}^{(d)}(t)T_{q}^{(d)}(t)\qquad\text{for all }t\in(0,1).
Remark 6.8.

Let us mention that Marcus [Mar21, Lemma 4.10] defined a modified finite $S$-transform and proved its multiplicativity, similar to Proposition 6.7. However, while he relates his finite $S$-transform to Voiculescu's $S$-transform, to the best of our knowledge this relation cannot be used to connect $\boxtimes_{d}$ with $\boxtimes$.

We present here a direct application of the finite $S$-transform (or rather of the $T$-transform, which is easier to handle in the situation below).

Proposition 6.9.

Let p,q𝒫d(0)p,q\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}) and assume pdq(x)=(xc)dp\boxtimes_{d}q(x)=(x-c)^{d} for some c0c\in\mathbb{R}_{\geq 0}. If c=0c=0, then p(x)=xdp(x)=x^{d} or q(x)=xdq(x)=x^{d}. If c>0c>0, there exist a,b>0a,b\in\mathbb{R}_{>0} such that ab=cab=c, p(x)=(xa)dp(x)=(x-a)^{d}, and q(x)=(xb)dq(x)=(x-b)^{d}.

Proof.

If c = 0, we have ẽ_1^{(d)}(p) ẽ_1^{(d)}(q) = 0 and hence ẽ_1^{(d)}(p) = 0 or ẽ_1^{(d)}(q) = 0. Since p and q have only non-negative roots, this means that p(x) = x^d or q(x) = x^d.

If c > 0, we have T_p^{(d)}(t) T_q^{(d)}(t) ≡ c. Since the finite T-transform is non-negative and weakly increasing, T_p^{(d)}(t) ≡ a and T_q^{(d)}(t) ≡ b for some a, b > 0 such that ab = c. This implies p(x) = (x−a)^d and q(x) = (x−b)^d. ∎

Note that similar arguments lead to the corresponding result for free multiplicative convolution. Now we show a formula for the reversed polynomial.

Proposition 6.10 (Reversed polynomial).

Given a polynomial p𝒫(>0)p\in\mathcal{P}(\mathbb{R}_{>0}) and its reversed polynomial p1p^{\langle-1\rangle}, their SS-transforms satisfy the relation

(6.10) Sp(d)(kd)Sp1(d)(d+1kd)=1,for k=1,,d.S_{p}^{(d)}\left(-\frac{k}{d}\right)S_{p^{\langle-1\rangle}}^{(d)}\left(-\frac{d+1-k}{d}\right)=1,\qquad\text{for }k=1,\dots,d.
Proof.

By formula (2.1), the coefficients of the reversed polynomial are

𝖾~dk(d)(p1)=𝖾~k(d)(p)𝖾~d(d)(p).\widetilde{\mathsf{e}}_{d-k}^{(d)}(p^{\langle-1\rangle})=\frac{\widetilde{\mathsf{e}}_{k}^{(d)}(p)}{\widetilde{\mathsf{e}}_{d}^{(d)}(p)}.

In terms of the SS-transform, this yields

Sp1(d)(d+1kd)=𝖾~dk(d)(p1)𝖾~d+1k(d)(p1)=𝖾~k(d)(p)𝖾~k1(d)(p)=1Sp(d)(kd)S_{p^{\langle-1\rangle}}^{(d)}\left(-\frac{d+1-k}{d}\right)=\frac{\widetilde{\mathsf{e}}_{d-k}^{(d)}(p^{\langle-1\rangle})}{\widetilde{\mathsf{e}}_{d+1-k}^{(d)}(p^{\langle-1\rangle})}=\frac{\widetilde{\mathsf{e}}_{k}^{(d)}(p)}{\widetilde{\mathsf{e}}_{k-1}^{(d)}(p)}=\frac{1}{S_{p}^{(d)}\left(-\frac{k}{d}\right)}

as desired. ∎
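Since the reversed polynomial p^{⟨−1⟩} of a polynomial with strictly positive roots has the reciprocal roots, Proposition 6.10 can be verified numerically. The sketch below (helper names are ours) checks the identity exactly on a degree-4 example.

```python
import math
from itertools import combinations

def e_tilde(roots, k):
    """Normalized coefficient e~_k = e_k(roots) / C(d, k)."""
    d = len(roots)
    return sum(math.prod(c) for c in combinations(roots, k)) / math.comb(d, k)

def finite_S(roots, k):
    """Finite S-transform S_p^{(d)}(-k/d) = e~_{k-1} / e~_k."""
    return e_tilde(roots, k - 1) / e_tilde(roots, k)

p_roots = [0.5, 1.5, 2.0, 4.0]            # strictly positive roots
rev_roots = [1.0 / r for r in p_roots]    # roots of the reversed polynomial
d = 4

# Proposition 6.10: S_p(-k/d) * S_{p^<-1>}(-(d+1-k)/d) = 1
for k in range(1, d + 1):
    prod = finite_S(p_roots, k) * finite_S(rev_roots, d + 1 - k)
    assert abs(prod - 1.0) < 1e-12
```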

We continue by studying further properties of the finite S-transform, concerning its behaviour under differentiation and shifts of polynomials. These operations can be studied through the Cauchy transform of the empirical root distribution of the polynomials, and the resulting properties can be understood as finite counterparts of known facts in free probability, specifically Lemma 2.9.

Lemma 6.11 (Derivatives and shifts).

Consider p𝒫d(0)p\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}) and let rr be the multiplicity of root 0 in pp.

  1. (1)

    For any kldrk\leq l\leq d-r, we have

    Sp(d)(kd)=Sl|dp(l)(kl).S_{p}^{(d)}\left(-\frac{k}{d}\right)=S_{\partial_{l|d}\,p}^{(l)}\left(-\frac{k}{l}\right).
  2. (2)

    Given a>0a>0, we obtain

    SShiap(d)(kd)=Gμk|dp(a)for 1kd.S_{{\rm Shi}_{a}p}^{(d)}\left(-\frac{k}{d}\right)=-G_{\mu\left\llbracket\partial_{k|d}\,p\right\rrbracket}(-a)\qquad\text{for }1\leq k\leq d.

    If r=0r=0, then this can be extended to a=0a=0:

    Sp(d)(kd)=Gμk|dp(0).S_{p}^{(d)}\left(-\frac{k}{d}\right)=-G_{\mu\left\llbracket\partial_{k|d}\,p\right\rrbracket}(0).
Remark 6.12.

The equation in part (1) has the following analogue in free probability

SDilu(μ1/u)(tu)=Sμ(t)for 0<t<u<1μ({0}),S_{{\rm Dil}_{u}(\mu^{\boxplus 1/u})}\left(-\frac{t}{u}\right)=S_{\mu}(-t)\qquad\text{for }0<t<u<1-\mu(\{0\}),

that follows easily from Lemma 2.9 (1).

Proof.

Part (1) follows from Lemma 3.2, which states that the normalized coefficients of a polynomial are preserved under differentiation:

Sp(d)(kd)=𝖾~k1(d)(p)𝖾~k(d)(p)=𝖾~k1(l)(l|dp)𝖾~k(l)(l|dp)=Sl|dp(l)(kl)for all kldr.S_{p}^{(d)}\left(-\frac{k}{d}\right)=\frac{\widetilde{\mathsf{e}}_{k-1}^{(d)}(p)}{\widetilde{\mathsf{e}}_{k}^{(d)}(p)}=\frac{\widetilde{\mathsf{e}}_{k-1}^{(l)}(\partial_{l|d}\,p)}{\widetilde{\mathsf{e}}_{k}^{(l)}(\partial_{l|d}\,p)}=S_{\partial_{l|d}\,p}^{(l)}\left(-\frac{k}{l}\right)\qquad\text{for all }k\leq l\leq d-r.

For part (2) we use that differentiation and shift operator commute k|dShia=Shiak|d\partial_{k|d}\,\circ{\rm Shi}_{a}={\rm Shi}_{a}\circ\partial_{k|d}\,, and thus

SShiap(d)(kd)\displaystyle S_{{\rm Shi}_{a}p}^{(d)}\left(-\frac{k}{d}\right) =Sk|dShiap(k)(1)\displaystyle=S_{\partial_{k|d}\,{\rm Shi}_{a}p}^{(k)}(-1) (by part (1) above)
=SShiak|dp(k)(1)\displaystyle=S_{{\rm Shi}_{a}\partial_{k|d}\,p}^{(k)}(-1)
=GμShiak|dp(0)\displaystyle=-G_{\mu\left\llbracket{\rm Shi}_{a}\partial_{k|d}\,p\right\rrbracket}(0) (by Equation (6.8))
=Gμk|dp(a).\displaystyle=-G_{\mu\left\llbracket\partial_{k|d}\,p\right\rrbracket}(-a).

Notice that the assumption a>0a>0 ensures the polynomial has no roots at 0 and hence SShiak|dp(k)(1)S_{{\rm Shi}_{a}\partial_{k|d}\,p}^{(k)}(-1) is well defined. In the case r=0r=0, the previous computation holds for a=0a=0 as well. ∎
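The coefficient preservation behind part (1) can be checked numerically. The sketch below (helper names are ours; the normalized derivative ∂_{l|d} p is realized by differentiating d − l times and rescaling to a monic polynomial) confirms ẽ_k^{(l)}(∂_{l|d} p) = ẽ_k^{(d)}(p) on a degree-5 example.

```python
import math

def monic_from_roots(roots):
    """Coefficients of the monic polynomial with the given roots, descending powers."""
    coeffs = [1.0]
    for r in roots:
        coeffs = ([coeffs[0]]
                  + [coeffs[i] - r * coeffs[i - 1] for i in range(1, len(coeffs))]
                  + [-r * coeffs[-1]])
    return coeffs

def derivative(coeffs):
    """Derivative of a polynomial given in descending powers."""
    n = len(coeffs) - 1
    return [(n - i) * coeffs[i] for i in range(n)]

def e_tilde_list(coeffs):
    """Read off e~_k via c_k = (-1)^k C(l,k) e~_k, after making the polynomial monic."""
    l = len(coeffs) - 1
    lead = coeffs[0]
    return [(-1) ** k * coeffs[k] / lead / math.comb(l, k) for k in range(l + 1)]

d, l = 5, 3
roots = [1.0, 2.0, 3.0, 5.0, 8.0]
p = monic_from_roots(roots)
dp = p
for _ in range(d - l):      # \partial_{l|d} p: differentiate d - l times (up to a constant)
    dp = derivative(dp)

et_p = e_tilde_list(p)      # e~_k^{(d)}(p)
et_dp = e_tilde_list(dp)    # e~_k^{(l)}(\partial_{l|d} p)
for k in range(l + 1):
    assert abs(et_p[k] - et_dp[k]) < 1e-9
```

In particular, the finite S-transforms of p and ∂_{l|d} p agree on the common grid, which is exactly part (1) of Lemma 6.11.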

To finish this section we prove the interesting fact that the partial order studied in Section 4 implies an inequality for the finite SS-transforms of the polynomial.

Lemma 6.13.

Let p,q𝒫d(0)p,q\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}) with rr the multiplicity of the root 0 in pp. If pqp\ll q then

Sp(d)(kd)Sq(d)(kd)for all 1kdr.S_{p}^{(d)}\left(-\frac{k}{d}\right)\geq S_{q}^{(d)}\left(-\frac{k}{d}\right)\qquad\text{for all }1\leq k\leq d-r.
Proof.

By Proposition 4.6 we know that the map Φ_d preserves the partial order ≪. Thus we get that Φ_d(p) ≪ Φ_d(q), and using (2.12) we conclude that

Sp(d)(kd)=𝖾~k1(d)(p)𝖾~k(d)(p)=1λk(Φd(p))1λk(Φd(q))=𝖾~k1(d)(q)𝖾~k(d)(q)=Sq(d)(kd)S_{p}^{(d)}\left(-\frac{k}{d}\right)=\frac{\widetilde{\mathsf{e}}_{k-1}^{(d)}(p)}{\widetilde{\mathsf{e}}_{k}^{(d)}(p)}=\frac{1}{\lambda_{k}(\Phi_{d}(p))}\geq\frac{1}{\lambda_{k}(\Phi_{d}(q))}=\frac{\widetilde{\mathsf{e}}_{k-1}^{(d)}(q)}{\widetilde{\mathsf{e}}_{k}^{(d)}(q)}=S_{q}^{(d)}\left(-\frac{k}{d}\right)

for all 1kdr1\leq k\leq d-r. ∎
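Lemma 6.13 can be illustrated numerically. In the sketch below (helper names are ours) we take p ≪ q to mean λ_i(p) ≤ λ_i(q) for every i, as in the partial order of Section 4, and check the resulting inequality of finite S-transforms on a degree-4 example with strictly positive roots.

```python
import math
from itertools import combinations

def finite_S(roots, k):
    """S_p^{(d)}(-k/d) = e~_{k-1}/e~_k with e~_j = e_j(roots)/C(d,j)."""
    d = len(roots)
    def e_tilde(j):
        return sum(math.prod(c) for c in combinations(roots, j)) / math.comb(d, j)
    return e_tilde(k - 1) / e_tilde(k)

p_roots = [1.0, 2.0, 3.0, 4.0]   # lambda_i(p) <= lambda_i(q) for every i, so p << q
q_roots = [2.0, 2.5, 3.0, 6.0]

# Lemma 6.13: S_p^{(d)}(-k/d) >= S_q^{(d)}(-k/d)
for k in range(1, 5):
    assert finite_S(p_roots, k) >= finite_S(q_roots, k)
```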

7. Finite SS-transform tends to SS-transform

The goal of this section is to prove Theorem 1.1, our main approximation theorem, announced in the introduction. To simplify the presentation, we make use of the notions of a converging sequence of polynomials and of a diagonal sequence, introduced in Notation 2.16. Recall that a sequence of polynomials (p_j)_{j∈ℕ} converges to μ ∈ ℳ if

(7.1) {pj𝒫djfor all j,limjdj=,μpj𝑤μ as j.\begin{cases}p_{j}\in\mathcal{P}_{d_{j}}\quad\text{for all }j\in\mathbb{N},\\ \displaystyle\lim_{j\to\infty}d_{j}=\infty,\\ \mu\left\llbracket p_{j}\right\rrbracket\xrightarrow{w}\mu\text{ as }j\to\infty.\end{cases}

Furthermore, if (pj)j(p_{j})_{j\in\mathbb{N}} is a sequence of polynomials converging to μ\mu with degree sequence (dj)j(d_{j})_{j\in\mathbb{N}} we say that (kj)j(k_{j})_{j\in\mathbb{N}} is a diagonal sequence with ratio limit tt if

(7.2) {kj{1,,dj} for every j,limjkjdj=t.\begin{cases}k_{j}\in\{1,\dots,d_{j}\}\text{ for every $j$,}\\ \displaystyle\lim_{j\to\infty}\frac{k_{j}}{d_{j}}=t.\end{cases}

With this new terminology, Theorem 1.1 can be rephrased as follows. Given a measure μ(0){δ0}\mu\in\mathcal{M}(\mathbb{R}_{\geq 0})\setminus\{\delta_{0}\}, the following are equivalent:

  1. (1)

    The sequence of polynomials (pj)j𝒫(0)(p_{j})_{j\in\mathbb{N}}\subset\mathcal{P}(\mathbb{R}_{\geq 0}) converges to μ\mu.

  2. (2)

    For every diagonal sequence (kj)j(k_{j})_{j\in\mathbb{N}} with ratio limit t(0,1μ({0}))t\in(0,1-\mu(\{0\})), it holds that

    limjSpj(dj)(kjdj)=Sμ(t).\lim_{j\to\infty}S_{p_{j}}^{(d_{j})}\left(-\frac{k_{j}}{d_{j}}\right)=S_{\mu}(-t).

The following lemma relating the SS-transform of a measure evaluated at tt with the Cauchy transform of the tt-fractional convolution evaluated at 0 will be useful throughout this section.

Lemma 7.1.

Let μ ∈ ℳ(ℝ_{≥0})∖{δ_0}. Then,

Sμ(t)=GDilt(μ1/t)(0)for t(0,1μ({0})).S_{\mu}(-t)=-G_{{\rm Dil}_{t}(\mu^{\boxplus 1/t})}(0)\qquad\text{for }t\in(0,1-\mu(\{0\})).

As a consequence, Sμδa(t)=GDilt(μ1/t)(a)S_{\mu\boxplus\delta_{a}}(-t)=-G_{{\rm Dil}_{t}(\mu^{\boxplus 1/t})}(-a) for all t(0,1)t\in(0,1) and a>0a>0.

Proof.

Fix t(0,1μ({0}))t\in(0,1-\mu(\{0\})), and consider the measure μt:=Dilt(μ1/t)\mu_{t}\mathrel{\mathop{\ordinarycolon}}={\rm Dil}_{t}(\mu^{\boxplus 1/t}). By Corollary 5.6, μt\mu_{t} is supported on [ϵ,)[\epsilon,\infty) for some ϵ>0\epsilon>0.

Then the Cauchy transform is well defined on ℂ∖[ϵ,∞); in particular, we know that

-G_{\mu_{t}}(0)=\int^{\infty}_{0}x^{-1}\mu_{t}(dx)\leq\frac{1}{\epsilon}<\infty.

On the other hand, since μt({0})=0\mu_{t}(\{0\})=0, [HM13, Lemma 4] yields

(7.3) \lim_{z\to 1}S_{\mu_{t}}(-z)=\int^{\infty}_{0}x^{-1}\mu_{t}(dx)=-G_{\mu_{t}}(0).

Finally, using relation Sμ(tz)=Sμt(z)S_{\mu}(-tz)=S_{\mu_{t}}(-z) from Lemma 2.9 (1) and the continuity of the SS-transform on (0,1μ({0}))(0,1-\mu(\{0\})) we get

(7.4) Sμ(t)=limz1Sμ(zt)=limz1Sμt(z).S_{\mu}(-t)=\lim_{z\to 1}S_{\mu}(-zt)=\lim_{z\to 1}S_{\mu_{t}}(-z).

Putting (7.3) and (7.4) together, we obtain the desired result. ∎

In its simplest form, when (p_j)_{j∈ℕ} ⊂ 𝒫(C) and C = [α,β] ⊂ (0,∞) is a compact interval that does not contain 0, the proof of (1)⇒(2) follows naturally from the basic properties of the S-transform. The same proof in full generality, however, requires several steps, in which we gradually generalize each result, building upon the simpler cases. The proof of (2)⇒(1) in the general case uses Helly's selection theorem to guarantee that a limit exists, and then the implication (1)⇒(2) to ensure that this limit coincides with the given measure.

To guide the reader through these successive generalizations, this section is divided into several cases, each building upon the previous one until we reach the most general statement. In Section 7.1 we illustrate the simplicity of the ideas in the case where all the roots of the polynomials lie in a compact interval that does not contain 0. In Section 7.2 we use a uniform bound on the smallest root after differentiation to reduce the case of a compact interval containing 0 to the previous case. In Section 7.3 we use the reversed polynomial of a shift to generalize Theorem 1.2 to measures with unbounded support, and then do the same for our main result. Finally, in Section 7.4 we explain how to obtain the converse statement and prove Theorem 1.1.

7.1. Compact interval not containing 0.

We first prove the implication in the case where all the roots of the polynomials lie in a compact interval that does not contain zero. In this case, the proof follows easily from the basic properties of the S-transform.

Proposition 7.2 (Compact interval not containing 0).

Fix a compact interval C=[α,β](0,)C=[\alpha,\beta]\subset(0,\infty). If (pj)j𝒫(C)(p_{j})_{j\in\mathbb{N}}\subset\mathcal{P}(C) is a sequence of polynomials converging to μ(C)\mu\in\mathcal{M}(C), then for every diagonal sequence (kj)j(k_{j})_{j\in\mathbb{N}} with ratio limit t(0,1)t\in(0,1) it holds that

limjSpj(dj)(kjdj)=Sμ(t).\lim_{j\to\infty}S_{p_{j}}^{(d_{j})}\left(-\frac{k_{j}}{d_{j}}\right)=S_{\mu}(-t).
Proof.

By Lemma 6.11 (2) we know that

Spj(dj)(kjdj)=Gμkj|djpj(0).\displaystyle S_{p_{j}}^{(d_{j})}\left(-\frac{k_{j}}{d_{j}}\right)=-G_{\mu\left\llbracket\partial_{k_{j}|d_{j}}\,p_{j}\right\rrbracket}(0).

Since μpj𝑤μ\mu\left\llbracket p_{j}\right\rrbracket\xrightarrow{w}\mu, then Theorem 1.2 implies that μkj|djpj𝑤Dilt(μ1/t)\mu\left\llbracket\partial_{k_{j}|d_{j}}\,p_{j}\right\rrbracket\xrightarrow{w}{\rm Dil}_{t}{(\mu^{\boxplus 1/t})}.

Using Lemma 2.3 and Lemma 7.1, we conclude that

limjSpj(dj)(kjdj)=limjGμkj|djpj(0)=GDilt(μ1/t)(0)=Sμ(t)\lim_{j\to\infty}S_{p_{j}}^{(d_{j})}\left(-\frac{k_{j}}{d_{j}}\right)=-\lim_{j\to\infty}G_{\mu\left\llbracket\partial_{k_{j}|d_{j}}\,p_{j}\right\rrbracket}(0)=-G_{{\rm Dil}_{t}{(\mu^{\boxplus 1/t})}}(0)=S_{\mu}(-t)

for every t(0,1)t\in(0,1), as desired. ∎

7.2. Compact interval containing 0.

The goal of this section is to extend Proposition 7.2 to the case where the interval is allowed to contain 0. The approach is to reduce to the previous case by observing that after repeatedly differentiating polynomials with non-negative roots, we can find a uniform lower bound (away from zero) of the smallest roots. To achieve this we make use of the bounds obtained in Section 5 (that in turn rely on classical bounds of Jacobi polynomials), and then we use the partial order from Section 4 to extrapolate this bound to an arbitrary polynomial.

Proposition 7.3 (Compact interval containing 0).

Fix a compact interval C = [0,β] and let (p_j)_{j∈ℕ} ⊂ 𝒫(C) be a sequence of polynomials converging to μ ∈ ℳ(C)∖{δ_0}. Then for every diagonal sequence (k_j)_{j∈ℕ} with ratio limit t ∈ (0, 1−μ({0})) it holds that

limjSpj(dj)(kjdj)=Sμ(t).\lim_{j\to\infty}S_{p_{j}}^{(d_{j})}\left(-\frac{k_{j}}{d_{j}}\right)=S_{\mu}(-t).
Proof.

Since μpj𝑤μ\mu\left\llbracket p_{j}\right\rrbracket\xrightarrow{w}\mu, Theorem 1.2 implies that μkj|djpj𝑤Dilt(μ1/t)\mu\left\llbracket\partial_{k_{j}|d_{j}}\,p_{j}\right\rrbracket\xrightarrow{w}{\rm Dil}_{t}{(\mu^{\boxplus 1/t})}. Also, by Lemma 5.4, there exists some ϵ>0\epsilon>0 such that μkj|djpj(ϵ)\mu\left\llbracket\partial_{k_{j}|d_{j}}\,p_{j}\right\rrbracket\in\mathcal{M}(\mathbb{R}_{\geq\epsilon}) for large enough jj. Hence, by Lemmas 6.11 (2) and 2.3, we conclude that

\lim_{j\to\infty}S_{p_{j}}^{(d_{j})}\left(-\frac{k_{j}}{d_{j}}\right)=-\lim_{j\to\infty}G_{\mu\left\llbracket\partial_{k_{j}|d_{j}}\,p_{j}\right\rrbracket}(0)=-G_{{\rm Dil}_{t}(\mu^{\boxplus 1/t})}(0)=S_{\mu}(-t).\qquad∎
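A concrete instance of this convergence can be computed by hand. For ν = (1−u)δ_0 + uδ_1, one can check from the definition of the S-transform that S_ν(−t) = (1−t)/(u−t) for t ∈ (0, u). Taking p_d(x) = x^{d−l}(x−1)^l with l/d → u, the normalized coefficients are ẽ_k = C(l,k)/C(d,k), so the finite S-transform reduces to (d−k+1)/(l−k+1). The sketch below (helper name is ours) confirms the diagonal limit numerically.

```python
import math

def finite_S_bernoulli(d, l, k):
    """S_{p_d}^{(d)}(-k/d) for p_d(x) = x^(d-l) (x-1)^l, where
    e~_k = C(l,k)/C(d,k); the ratio simplifies to (d-k+1)/(l-k+1)."""
    return (d - k + 1) / (l - k + 1)

# sanity check of the simplification against the raw coefficient ratio
d0, l0, k0 = 10, 5, 2
raw = (math.comb(l0, k0 - 1) / math.comb(d0, k0 - 1)) / (math.comb(l0, k0) / math.comb(d0, k0))
assert abs(raw - finite_S_bernoulli(d0, l0, k0)) < 1e-12

u, t = 0.5, 0.25
S_limit = (1 - t) / (u - t)     # S_nu(-t) for nu = (1-u) delta_0 + u delta_1, here 3
errors = [abs(finite_S_bernoulli(d, int(u * d), int(t * d)) - S_limit)
          for d in (100, 1000, 10000)]
assert errors[0] > errors[1] > errors[2] and errors[2] < 1e-3
```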

Remark 7.4.

The authors thank Jorge Garza-Vargas for some insightful discussions regarding relations between the supports of μt\mu^{\boxplus t} and of μkj|djpj\mu\left\llbracket\partial_{k_{j}|d_{j}}\,p_{j}\right\rrbracket that ultimately helped in the proof of Proposition 7.3.

7.3. Case with unbounded support

We now turn to the study of measures with unbounded support. The key idea is that if we shift a measure μ ∈ ℳ(ℝ_{≥0}) by a positive constant a and then take the reversed measure, we obtain a measure supported in a compact interval. In this way we can reduce the problem to the previous case. First, we introduce the cut-down and cut-up measures.

Notation 7.5 (Cut-up and cut-down measures).

Given a measure μ()\mu\in\mathcal{M}(\mathbb{R}) with cumulative distribution function FμF_{\mu}, we define the cut-down measure at aa\in\mathbb{R} as the measure μ|a(a)\mu|_{a}\in\mathcal{M}(\mathbb{R}_{\geq a}) with cumulative distribution function

Fμ|a(x)={0if x<a,Fμ(x)if xa.F_{\mu|_{a}}(x)=\begin{cases}0&\text{if }x<a,\\ F_{\mu}(x)&\text{if }x\geq a.\end{cases}

Similarly, we define the cut-up measure at aa\in\mathbb{R} as the measure μ|a(a)\mu|^{a}\in\mathcal{M}(\mathbb{R}_{\leq a}) with cumulative distribution function

Fμ|a(x)={Fμ(x)if x<a,1if xa,F_{\mu|^{a}}(x)=\begin{cases}F_{\mu}(x)&\text{if }x<a,\\ 1&\text{if }x\geq a,\end{cases}

We can define the corresponding cut-down and cut-up measures on polynomials using the bijection μ\mu\left\llbracket\cdot\right\rrbracket between 𝒫d()\mathcal{P}_{d}(\mathbb{R}) and d()\mathcal{M}_{d}(\mathbb{R}). Then for every p𝒫d()p\in\mathcal{P}_{d}(\mathbb{R}) and aa\in\mathbb{R} the cut-down polynomial p|a𝒫d(a)p|_{a}\in\mathcal{P}_{d}(\mathbb{R}_{\geq a}) and cut-up polynomial p|a𝒫d(a)p|^{a}\in\mathcal{P}_{d}(\mathbb{R}_{\leq a}) have roots given by

λi(p|a)={aif λi(p)a,λi(p)if λi(p)>aandλi(p|a)={λi(p)if λi(p)<a,aif λi(p)a.\lambda_{i}(p|_{a})=\begin{cases}a&\text{if }\lambda_{i}(p)\leq a,\\ \lambda_{i}(p)&\text{if }\lambda_{i}(p)>a\end{cases}\qquad\text{and}\qquad\lambda_{i}(p|^{a})=\begin{cases}\lambda_{i}(p)&\text{if }\lambda_{i}(p)<a,\\ a&\text{if }\lambda_{i}(p)\geq a.\end{cases}
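At the level of roots, the cut operations are simple clamps. The following sketch (function names are ours) implements them and checks the root-by-root domination that underlies Remark 7.6 (2) below.

```python
def cut_down(roots, a):
    """Roots of p|_a: every root <= a is moved up to a."""
    return [max(r, a) for r in roots]

def cut_up(roots, a):
    """Roots of p|^a: every root >= a is moved down to a."""
    return [min(r, a) for r in roots]

roots = [-3.0, -1.0, 0.5, 2.0, 7.0]
assert cut_down(roots, 0.0) == [0.0, 0.0, 0.5, 2.0, 7.0]
assert cut_up(roots, 2.0) == [-3.0, -1.0, 0.5, 2.0, 2.0]

# root-by-root domination, matching p << p|_a and p|^a << p
assert all(c >= r for c, r in zip(cut_down(roots, 0.0), roots))
assert all(c <= r for c, r in zip(cut_up(roots, 2.0), roots))
```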
Remark 7.6.

We will use three basic properties of the cut-down and cut-up measures that follow directly from the definition.

  1. (1)

    μ|a𝑤μ\mu|_{a}\xrightarrow{w}\mu as aa\to-\infty and μ|a𝑤μ\mu|^{a}\xrightarrow{w}\mu as aa\to\infty.

  2. (2)

    F_{μ|_a} ≤ F_μ, so μ ≪ (μ|_a); similarly, (μ|^a) ≪ μ.

  3. (3)

    If (pj)j(p_{j})_{j\in\mathbb{N}} is a sequence of polynomials converging to μ\mu then (pj|a)j(p_{j}|_{a})_{j\in\mathbb{N}} converges to μ|a\mu|_{a}, and similarly (pj|a)j(p_{j}|^{a})_{j\in\mathbb{N}} converges to μ|a\mu|^{a}.

We are now ready to extend Theorem 1.2 on the limits of derivatives of polynomials to the case of measures with unbounded support.

Theorem 7.7.

Let (p_j)_{j∈ℕ} ⊂ 𝒫(ℝ) be a sequence of polynomials converging to μ ∈ ℳ(ℝ) and let (k_j)_{j∈ℕ} ⊂ ℕ be a diagonal sequence with ratio limit t ∈ (0,1). Then,

(7.5) μkj|djpj𝑤Dilt(μ1/t) as j.\mu\left\llbracket\partial_{k_{j}|d_{j}}\,p_{j}\right\rrbracket\xrightarrow{w}{\rm Dil}_{t}(\mu^{\boxplus 1/t})\qquad\text{ as }\qquad j\to\infty.
Proof.

We first prove the claim assuming that (p_j)_{j∈ℕ} ⊂ 𝒫(ℝ_{≥L}) for some L ∈ ℝ. We fix a ∈ ℝ such that a + L > 0, so that Shi_a p_j ∈ 𝒫_{d_j}([a+L, ∞)). Thus, we can consider q_j := (Shi_a p_j)^{⟨−1⟩} ∈ 𝒫_{d_j}((0, 1/(a+L)]). By Proposition 6.10 and Lemma 6.11 (2) we know that

Gμkj|djpj(a)=SShiapj(dj)(kjdj)=1Sqj(dj)(dj+1kjdj).-G_{\mu\left\llbracket\partial_{k_{j}|d_{j}}\,p_{j}\right\rrbracket}(-a)=S_{{\rm Shi}_{a}p_{j}}^{(d_{j})}\left(-\frac{k_{j}}{d_{j}}\right)=\frac{1}{S_{q_{j}}^{(d_{j})}\left(-\frac{d_{j}+1-k_{j}}{d_{j}}\right)}.

On the other hand, since the roots of the polynomials (q_j)_{j∈ℕ} are contained in a compact interval and μ⟦q_j⟧ converges weakly to (μ ⊞ δ_a)^{⟨−1⟩}, Proposition 7.3 yields

limj1Sqj(dj)(dj+1kjdj)=1S(μδa)1(t1)=Sμδa(t)=GDilt(μ1/t)(a).\lim_{j\to\infty}\frac{1}{S_{q_{j}}^{(d_{j})}\left(-\frac{d_{j}+1-k_{j}}{d_{j}}\right)}=\frac{1}{S_{(\mu\boxplus\delta_{a})^{\langle-1\rangle}}(t-1)}=S_{\mu\boxplus\delta_{a}}(-t)=-G_{{\rm Dil}_{t}(\mu^{\boxplus{1/t}})}(-a).

Therefore,

limjGμkj|djpj(a)=GDilt(μ1/t)(a),for all a<L\lim_{j\to\infty}G_{\mu\left\llbracket\partial_{k_{j}|d_{j}}\,p_{j}\right\rrbracket}(-a)=G_{{\rm Dil}_{t}(\mu^{\boxplus{1/t}})}(-a),\qquad\text{for all }-a<L

and by Lemma 2.3 we conclude that μkj|djpj𝑤Dilt(μ1/t)\mu\left\llbracket\partial_{k_{j}|d_{j}}\,p_{j}\right\rrbracket\xrightarrow{w}{\rm Dil}_{t}(\mu^{\boxplus{1/t}}) as jj\to\infty. Thus, the claim is proved in the case (pj)j𝒫(L)(p_{j})_{j\in\mathbb{N}}\subset\mathcal{P}(\mathbb{R}_{\geq L}) for some LL\in\mathbb{R}.

Notice that the claim is also true if (pj)j𝒫(L)(p_{j})_{j\in\mathbb{N}}\subset\mathcal{P}(\mathbb{R}_{\leq L}) for some LL\in\mathbb{R}. Indeed, we simply apply the reflection map Dil1{\rm Dil}_{-1} to the polynomials, and use the previous case on the new sequence.

Now, the proof of the general case, when (pj)j𝒫()(p_{j})_{j\in\mathbb{N}}\subset\mathcal{P}(\mathbb{R}), follows from using the cut-down and cut-up measures from Notation 7.5 to reduce the problem to the previous case. Indeed, if we fix an LL\in\mathbb{N} then from Remark 7.6 (2) and Corollary 4.5 we know that

(kj|djpj|L)(kj|djpj)(kj|djpj|L)for every j.\left(\partial_{k_{j}|d_{j}}\,\left.p_{j}\right|^{L}\right)\ll\left(\partial_{k_{j}|d_{j}}\,p_{j}\right)\ll\left(\partial_{k_{j}|d_{j}}\,p_{j}|_{-L}\right)\qquad\text{for every }j\in\mathbb{N}.

By Remark 7.6 (3), as jj\to\infty we have the weak convergence

μkj|djpj|L𝑤Dilt(μ|L)1/tandμkj|djpj|L𝑤Dilt(μ|L)1/t.\mu\left\llbracket\partial_{k_{j}|d_{j}}\,p_{j}|_{-L}\right\rrbracket\xrightarrow{w}{\rm Dil}_{t}(\mu|_{-L})^{\boxplus 1/t}\qquad\text{and}\qquad\mu\left\llbracket\partial_{k_{j}|d_{j}}\,p_{j}|^{L}\right\rrbracket\xrightarrow{w}{\rm Dil}_{t}(\mu|^{L})^{\boxplus 1/t}.

Therefore, at every point xx\in\mathbb{R} one has

FDilt(μ|L)1/t(x)lim infFμkj|djpj(x)lim supFμkj|djpj(x)FDilt(μ|L)1/t(x).F_{{\rm Dil}_{t}(\mu|_{-L})^{\boxplus 1/t}}(x)\leq\liminf F_{\mu\left\llbracket\partial_{k_{j}|d_{j}}\,p_{j}\right\rrbracket}(x)\leq\limsup F_{\mu\left\llbracket\partial_{k_{j}|d_{j}}\,p_{j}\right\rrbracket}(x)\leq F_{{\rm Dil}_{t}(\mu|^{L})^{\boxplus 1/t}}(x).

Letting LL\to\infty, we conclude that limjFμkj|djpj(x)=FDilt(μ1/t)(x)\displaystyle\lim_{j\to\infty}F_{\mu\left\llbracket\partial_{k_{j}|d_{j}}\,p_{j}\right\rrbracket}(x)=F_{{\rm Dil}_{t}(\mu^{\boxplus 1/t})}(x) for every continuous point xx\in\mathbb{R} of Dilt(μ1/t){\rm Dil}_{t}(\mu^{\boxplus 1/t}) and this is equivalent to (7.5). ∎

With this theorem, we finally upgrade our main result to measures with unbounded support.

Proposition 7.8 (Unbounded support).

Let (pj)j𝒫(0)(p_{j})_{j\in\mathbb{N}}\subset\mathcal{P}(\mathbb{R}_{\geq 0}) be a sequence of polynomials converging to μ(0){δ0}\mu\in\mathcal{M}(\mathbb{R}_{\geq 0})\setminus\{\delta_{0}\}. Then for every diagonal sequence (kj)j(k_{j})_{j\in\mathbb{N}} with limit t(0,1μ({0}))t\in(0,1-\mu(\{0\})) it holds that

limjSpj(dj)(kjdj)=Sμ(t).\lim_{j\to\infty}S_{p_{j}}^{(d_{j})}\left(-\frac{k_{j}}{d_{j}}\right)=S_{\mu}(-t).
Proof.

The proof is almost identical to the proof of Proposition 7.3, except that Theorem 1.2 is replaced by Theorem 7.7. ∎

7.4. The converse and proof of Theorem 1.1

For the proof of our main theorem to be complete, we must prove that if the finite S-transform converges to the S-transform of a measure, then the sequence of polynomials converges to that measure.

Proposition 7.9 (Converse).

Let (p_j)_{j∈ℕ} be a sequence of polynomials with p_j ∈ 𝒫_{d_j}(ℝ_{≥0}). Assume there exist t_0 ∈ [0,1) and a function S : (−1+t_0, 0) → ℝ_{>0} such that for every t ∈ (0, 1−t_0) and every diagonal sequence (k_j)_{j∈ℕ} with ratio limit t one has

limjSpj(dj)(kjdj)=S(t).\lim_{\begin{subarray}{c}j\to\infty\end{subarray}}S_{p_{j}}^{(d_{j})}\left(-\frac{k_{j}}{d_{j}}\right)=S(-t).

Then there exists a measure μ ∈ ℳ(ℝ_{≥0}) such that μ⟦p_j⟧ →^w μ, μ({0}) ≤ t_0, and S_μ(−t) = S(−t) for all t ∈ (0, 1−t_0).

Proof.

Consider the sequence of cumulative distribution functions (F_{μ⟦p_j⟧})_{j∈ℕ}. By Helly's selection theorem, every subsequence has a further subsequence, denoted by (F_i)_{i∈ℕ}, converging to some function F. It is clear that F is non-decreasing, vanishes on the negative real line, and takes values in [0,1]. In order to justify that F is the cumulative distribution function of some probability measure μ, we only need to check that lim_{x→∞} F(x) = 1. For the sake of contradiction, assume that there exists u ∈ (0,1) such that F(x) < 1−u for all x ∈ ℝ. Let a > 0 and l_j := ⌊ud_j⌋, so that lim_{j→∞} l_j/d_j = u. Then the polynomials q_j := x^{d_j−l_j}(x−a)^{l_j} satisfy q_j ≪ p_j for large enough j. By Lemma 6.13, this implies

S_{q_{j}}^{(d_{j})}\left(-\frac{k_{j}}{d_{j}}\right)\geq S_{p_{j}}^{(d_{j})}\left(-\frac{k_{j}}{d_{j}}\right).

The polynomials q_j converge to Dil_a ν = δ_a ⊠ ν, where ν = (1−u)δ_0 + uδ_1, so in the limit we obtain

Sν(t)aS(t)>0\frac{S_{\nu}(-t)}{a}\geq S(-t)>0

On the other hand, we can choose aa arbitrarily large so that Sν(t)a<S(t)\frac{S_{\nu}(-t)}{a}<S(-t), which yields a contradiction.

Therefore, F = F_μ is the cumulative distribution function of some probability measure μ. By Proposition 7.8 we obtain that S_μ(−t) = S(−t) for all t ∈ (0, min{1−μ({0}), 1−t_0}).

Recall that this μ was obtained as the limit of a convergent subsequence of an arbitrary subsequence of the original sequence of polynomials. Since the S-transforms of any two such limit measures coincide in a small neighborhood (−ϵ,0), all of them are the same. Thus, there is a unique limiting measure, and we conclude that μ⟦p_j⟧ →^w μ. ∎

The proof of the main Theorem 1.1 now follows from the previous two results.

Proof of Theorem 1.1.

(1) \Rightarrow (2). This implication follows from Proposition 7.8.

(2) \Rightarrow (1). This implication is a particular case of Proposition 7.9. ∎

8. Symmetric and unitary case

8.1. Symmetric case

We say that a probability measure μ ∈ ℳ(ℝ) is symmetric if μ(−B) = μ(B) for all Borel sets B ⊂ ℝ, and we denote by ℳ^S(ℝ) the set of symmetric probability measures on the real line. There is a natural bijection from ℳ^S(ℝ) to ℳ(ℝ_{≥0}) given by taking the square of the measure. Specifically, we denote by 𝐒𝐪(μ) ∈ ℳ(ℝ_{≥0}) the pushforward of μ by the map x ↦ x², for x ∈ ℝ. Arizmendi and Pérez-Abreu [AP09] used this map to extend the definition of the S-transform to symmetric measures:

(8.1) S~μ(z):=1+zzS𝐒𝐪(μ)(z)for z in some neighborhood of 0.\widetilde{S}_{\mu}(z)\mathrel{\mathop{\ordinarycolon}}=\sqrt{\frac{1+z}{z}S_{\mathbf{Sq}(\mu)}(z)}\qquad\text{for }z\text{ in some neighborhood of 0}.

A similar approach works to define a finite S-transform for symmetric polynomials. We say that p ∈ 𝒫_{2d}(ℝ) is a symmetric polynomial if its roots are of the form:

λ1(p)λ2(p)λd(p)0λd(p)λ2(p)λ1(p),\lambda_{1}(p)\geq\lambda_{2}(p)\geq\dots\geq\lambda_{d}(p)\geq 0\geq-\lambda_{d}(p)\geq\dots\geq-\lambda_{2}(p)\geq-\lambda_{1}(p),

and denote by 𝒫2dS()\mathcal{P}_{2d}^{S}(\mathbb{R}) the subset of symmetric polynomials. Given p𝒫2dS()p\in\mathcal{P}_{2d}^{S}(\mathbb{R}) we denote by 𝐒𝐪(p)𝒫d(0)\mathbf{Sq}(p)\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}) the polynomial with roots

(λ1(p))2(λ2(p))2(λd(p))20.(\lambda_{1}(p))^{2}\geq(\lambda_{2}(p))^{2}\geq\dots\geq(\lambda_{d}(p))^{2}\geq 0.

It is readily seen that 𝐒𝐪(μp)=μ𝐒𝐪(p)\mathbf{Sq}\left(\mu\left\llbracket p\right\rrbracket\right)=\mu\left\llbracket\mathbf{Sq}(p)\right\rrbracket. Moreover, pp and 𝐒𝐪(p)\mathbf{Sq}(p) are easily related by the formula

𝐒𝐪(p)(x2)=p(x).\mathbf{Sq}(p)(x^{2})=p(x).

In particular,

(8.2) \binom{2d}{2k}\widetilde{\mathsf{e}}_{2k}^{(2d)}(p)=(-1)^{k}\binom{d}{k}\widetilde{\mathsf{e}}_{k}^{(d)}(\mathbf{Sq}(p))\qquad\text{for }k=1,\dots,d.
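Cleared of the binomial factors, (8.2) amounts to the identity e_{2k}(roots of p) = (−1)^k e_k(roots of 𝐒𝐪(p)) for the elementary symmetric polynomials, which follows from p(x) = 𝐒𝐪(p)(x²). The sketch below (helper name is ours) verifies it on a small symmetric example.

```python
import math
from itertools import combinations

def e_k(vals, k):
    """Elementary symmetric polynomial e_k."""
    return sum(math.prod(c) for c in combinations(vals, k))

lam = [1.0, 2.0, 3.5]                 # the nonnegative roots lambda_i
sym_roots = lam + [-x for x in lam]   # roots of the symmetric polynomial p
sq_roots = [x * x for x in lam]       # roots of Sq(p)
d = len(lam)

# (8.2) without binomials: e_{2k}(roots of p) = (-1)^k e_k(roots of Sq(p))
for k in range(d + 1):
    assert abs(e_k(sym_roots, 2 * k) - (-1) ** k * e_k(sq_roots, k)) < 1e-9
```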

With this in hand, we can extend our definition.

Definition 8.1 (SS-transform for symmetric polynomials).

Let p ∈ 𝒫_{2d}^S(ℝ) with root 0 of multiplicity 2r. We define its finite S-transform as the map

S~p(2d):{kd|k=1,2,,dr}(1)0\widetilde{S}_{p}^{(2d)}\mathrel{\mathop{\ordinarycolon}}\left\{\left.-\frac{k}{d}\ \right|\ k=1,2,\dots,d-r\right\}\to(\sqrt{-1})\mathbb{R}_{\geq 0}

such that

S~p(2d)(kd):=𝖾~2(k1)(2d)(p)𝖾~2k(2d)(p)for k=1,,dr.\widetilde{S}_{p}^{(2d)}\left(-\frac{k}{d}\right)\mathrel{\mathop{\ordinarycolon}}=\sqrt{\frac{\widetilde{\mathsf{e}}_{2(k-1)}^{(2d)}(p)}{\widetilde{\mathsf{e}}_{2k}^{(2d)}(p)}}\qquad\text{for }k=1,\dots,d-r.
Remark 8.2.

Notice that the SS-transform is well defined because 𝖾~2k(2d)(p)0\widetilde{\mathsf{e}}_{2k}^{(2d)}(p)\neq 0 for k=1,,drk=1,\dots,d-r. Moreover, from (8.2) it follows that 𝖾~2(k1)(2d)(p)𝖾~2k(2d)(p)<0\frac{\widetilde{\mathsf{e}}_{2(k-1)}^{(2d)}(p)}{\widetilde{\mathsf{e}}_{2k}^{(2d)}(p)}<0 for k=1,,drk=1,\dots,d-r. Thus, the image is actually contained in the positive imaginary line (1)0(\sqrt{-1})\mathbb{R}_{\geq 0}. Using (8.2), one can also verify that

(8.3) S~p(2d)(kd)=1kd+12dkd+12dS𝐒𝐪(p)(d)(kd)\widetilde{S}_{p}^{(2d)}\left(-\frac{k}{d}\right)=\sqrt{\frac{1-\frac{k}{d}+\frac{1}{2d}}{-\frac{k}{d}+\frac{1}{2d}}S_{\mathbf{Sq}(p)}^{(d)}\left(-\frac{k}{d}\right)}

We can use this new transform to study the multiplicative convolution of polynomials, one of which is symmetric.

Proposition 8.3.

Let p𝒫2dS()p\in\mathcal{P}_{2d}^{S}(\mathbb{R}), q𝒫2d(0)q\in\mathcal{P}_{2d}(\mathbb{R}_{\geq 0}) and let rr be the maximum of the multiplicities at the root 0 of pp and qq. Then

(S~p2dq(2d)(kd))2=(S~p(2d)(kd))2Sq(2d)(2k2d)Sq(2d)(2k12d)\left(\widetilde{S}_{p\boxtimes_{2d}q}^{(2d)}\left(-\frac{k}{d}\right)\right)^{2}=\left(\widetilde{S}_{p}^{(2d)}\left(-\frac{k}{d}\right)\right)^{2}S_{q}^{(2d)}\left(-\frac{2k}{2d}\right)S_{q}^{(2d)}\left(-\frac{2k-1}{2d}\right)

for k=1,2,,drk=1,2,\dots,d-r.

Proof.

Using the definition, we compute

(S~p2dq(2d)(kd))2\displaystyle\left(\widetilde{S}_{p\boxtimes_{2d}q}^{(2d)}\left(-\frac{k}{d}\right)\right)^{2} =𝖾~2(k1)(2d)(p2dq)𝖾~2k(2d)(p2dq)\displaystyle=\frac{\widetilde{\mathsf{e}}_{2(k-1)}^{(2d)}(p\boxtimes_{2d}q)}{\widetilde{\mathsf{e}}_{2k}^{(2d)}(p\boxtimes_{2d}q)}
=𝖾~2k2(2d)(p)𝖾~2k(2d)(p)𝖾~2k2(2d)(q)𝖾~2k(2d)(q)\displaystyle=\frac{\widetilde{\mathsf{e}}_{2k-2}^{(2d)}(p)}{\widetilde{\mathsf{e}}_{2k}^{(2d)}(p)}\cdot\frac{\widetilde{\mathsf{e}}_{2k-2}^{(2d)}(q)}{\widetilde{\mathsf{e}}_{2k}^{(2d)}(q)}
=𝖾~2k2(2d)(p)𝖾~2k(2d)(p)𝖾~2k2(2d)(q)𝖾~2k1(2d)(q)𝖾~2k(2d)(q)𝖾~2k1(2d)(q)\displaystyle=\frac{\widetilde{\mathsf{e}}_{2k-2}^{(2d)}(p)}{\widetilde{\mathsf{e}}_{2k}^{(2d)}(p)}\cdot\frac{\widetilde{\mathsf{e}}_{2k-2}^{(2d)}(q)\widetilde{\mathsf{e}}_{2k-1}^{(2d)}(q)}{\widetilde{\mathsf{e}}_{2k}^{(2d)}(q)\widetilde{\mathsf{e}}_{2k-1}^{(2d)}(q)}
=(S~p(2d)(kd))2Sq(2d)(2k2d)Sq(2d)(2k12d).\displaystyle=\left(\widetilde{S}_{p}^{(2d)}\left(-\frac{k}{d}\right)\right)^{2}S_{q}^{(2d)}\left(-\frac{2k}{2d}\right)S_{q}^{(2d)}\left(-\frac{2k-1}{2d}\right).

It is also easy to check that our finite symmetric S-transform tends to the symmetric S-transform from [AP09] in the limit.

Proposition 8.4.

Let (p_j)_{j∈ℕ} ⊂ 𝒫^S(ℝ) be a sequence of symmetric polynomials with degree sequence (2d_j)_{j∈ℕ} and assume (p_j)_{j∈ℕ} converges to μ ∈ ℳ^S(ℝ). Then for every diagonal sequence (2k_j)_{j∈ℕ} ⊂ ℕ with ratio limit t ∈ (0, 1−μ({0})), it holds that

limjS~pj(2dj)(kjdj)=S~μ(t).\lim_{j\to\infty}\widetilde{S}_{p_{j}}^{(2d_{j})}\left(-\frac{k_{j}}{d_{j}}\right)=\widetilde{S}_{\mu}(-t).
Proof.

Using Theorem 1.1 with the sequence (𝐒𝐪(pj))j\left(\mathbf{Sq}(p_{j})\right)_{j\in\mathbb{N}}, Equation (8.3) tends to Equation (8.1) in the limit:

\lim_{j\to\infty}\widetilde{S}_{p_{j}}^{(2d_{j})}\left(-\frac{k_{j}}{d_{j}}\right)=\lim_{j\to\infty}\sqrt{\frac{1-\frac{k_{j}}{d_{j}}+\frac{1}{2d_{j}}}{-\frac{k_{j}}{d_{j}}+\frac{1}{2d_{j}}}S_{\mathbf{Sq}(p_{j})}^{(d_{j})}\left(-\frac{k_{j}}{d_{j}}\right)}=\sqrt{\frac{1-t}{-t}S_{\mathbf{Sq}(\mu)}(-t)}=\widetilde{S}_{\mu}(-t).\qquad∎
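As a concrete check of this convergence (a numerical sketch, not part of the formal development): take p_d(x) = (x²−1)^d, which converges to μ = ½(δ_1 + δ_{−1}); then 𝐒𝐪(μ) = δ_1 and (S̃_μ(−t))² = −(1−t)/t, while a short computation from (8.2) gives (S̃_{p_d}^{(2d)}(−k/d))² = −(2d−2k+1)/(2k−1) exactly. The helper name below is ours; exact rationals avoid overflow in the binomial ratios.

```python
import math
from fractions import Fraction

def S_tilde_sq(d, k):
    """(S~_p^{(2d)}(-k/d))^2 for p(x) = (x^2 - 1)^d, from the coefficient
    ratio e~_{2k-2}^{(2d)} / e~_{2k}^{(2d)} with e~_{2k} = (-1)^k C(d,k)/C(2d,2k)."""
    e2k = Fraction((-1) ** k * math.comb(d, k), math.comb(2 * d, 2 * k))
    e2km2 = Fraction((-1) ** (k - 1) * math.comb(d, k - 1), math.comb(2 * d, 2 * k - 2))
    return float(e2km2 / e2k)

t = 0.3
limit = -(1 - t) / t            # (S~_mu(-t))^2 for mu = (delta_1 + delta_{-1})/2

for d in (50, 500, 5000):
    k = int(t * d)
    # exact closed form -(2d - 2k + 1)/(2k - 1)
    assert abs(S_tilde_sq(d, k) + (2 * d - 2 * k + 1) / (2 * k - 1)) < 1e-9
assert abs(S_tilde_sq(5000, 1500) - limit) < 1e-2
```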

8.2. Unitary case

It would be interesting to construct an SS-transform that can handle the set 𝒫d(𝕋)\mathcal{P}_{d}(\mathbb{T}) of polynomials with roots in the unit circle 𝕋:={z:|z|=1}\mathbb{T}\mathrel{\mathop{\ordinarycolon}}=\{z\in\mathbb{C}\mathrel{\mathop{\ordinarycolon}}|z|=1\}. However, if we naively try to apply the same approach used in the previous cases, we run into some problems. To illustrate such difficulties, let us consider the following example. When considering polynomials that resemble the Haar unitary measure in \mathbb{C}, namely polynomials with roots uniformly distributed in 𝕋\mathbb{T}, there are at least two natural candidates:

hd(x)=xd1=k=1d(xe2πikd),h_{d}(x)=x^{d}-1=\prod_{k=1}^{d}(x-e^{\frac{2\pi ik}{d}}),
h^d1(x)=xd1x1=j=0d1xj=k=1d1(xe2πikd).\widehat{h}_{d-1}(x)=\frac{x^{d}-1}{x-1}=\sum_{j=0}^{d-1}x^{j}=\prod_{k=1}^{d-1}(x-e^{\frac{2\pi ik}{d}}).

Notice that h_d has the same roots as ĥ_{d−1}, together with an extra root at 1. Thus, when d→∞, the empirical root distributions of h_d and ĥ_{d−1} both tend to χ, the uniform distribution on 𝕋.

For h_d, the only non-vanishing coefficients are ẽ_0^{(d)}(h_d) and ẽ_d^{(d)}(h_d). Thus, our method of looking at quotients of the coefficients {ẽ_k^{(d)}(h_d)}_{k=0}^{d} does not work at all, simply because all the quotients are undefined. On the other hand, for ĥ_d,

𝖾~k(d)(h^d)=(1)k(dk)1.\widetilde{\mathsf{e}}_{k}^{(d)}(\widehat{h}_{d})=(-1)^{k}\binom{d}{k}^{-1}.

Thus, the ratio of consecutive coefficients satisfies

𝖾~k1(d)(h^d)𝖾~k(d)(h^d)=(dk)(dk1)=dk+1k1tt,\frac{\widetilde{\mathsf{e}}_{k-1}^{(d)}(\widehat{h}_{d})}{\widetilde{\mathsf{e}}_{k}^{(d)}(\widehat{h}_{d})}=-\frac{\binom{d}{k}}{\binom{d}{k-1}}=-\frac{d-k+1}{k}\to-\frac{1-t}{t},

as d→∞ with k/d→t. On the other hand, since m_1(χ)=0, the SS-transform of χ is not defined, and thus the last limit cannot be related to it.
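The finite ratio computation above is easy to verify numerically. The following sketch (Python, exact rational arithmetic; the helper `e` and the parameter choices are ours) checks the exact ratio for ĥ_d and its diagonal limit:

```python
from fractions import Fraction
from math import comb

# Normalized coefficients of h^_d(x) = 1 + x + ... + x^d are
# e~_k = (-1)^k / C(d, k), so consecutive ratios telescope to -(d-k+1)/k.
d = 2000
t = 0.3
k = 3 * d // 10                      # diagonal sequence with k/d -> t
e = lambda j: Fraction((-1) ** j, comb(d, j))
r = e(k - 1) / e(k)                  # exact rational ratio
print(float(r), -(1 - t) / t)        # -(d-k+1)/k is close to -(1-t)/t
```

The exact value −(d−k+1)/k differs from the limit −(1−t)/t by O(1/d), as expected from the diagonal convergence.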

Even though our approach does not seem to work for every sequence of polynomials in 𝒫(𝕋), in some cases we do obtain the expected limit. For instance, fix t≥0 and consider the unitary Hermite polynomials

Hd(z;t)=k=0d(1)k(dk)exp(tk(dk)2d)H_{d}(z;t)=\sum_{k=0}^{d}(-1)^{k}\binom{d}{k}\exp\left(-\frac{tk(d-k)}{2d}\right)

that were studied in [AFU24, Section 6] and [Kab22]. Then, taking the ratio of consecutive coefficients and the corresponding diagonal limit approaching t∈(0,1), we obtain the SS-transform of σ_t, the free normal distribution on 𝕋:

Sσt(z)=exp(t(z+12)).S_{\sigma_{t}}(z)=\exp\left(t\left(z+\frac{1}{2}\right)\right).

We can also prove that ⊠_d approaches ⊠ as d→∞ in the unitary case without using the SS-transform; to the best of our knowledge, there is no literature which explicitly states this assertion:

Proposition 8.5.

Let p_d, q_d ∈ 𝒫_d(𝕋) for d∈ℕ and μ, ν ∈ ℳ(𝕋). If μ⟦p_d⟧ → μ and μ⟦q_d⟧ → ν weakly as d→∞, then μ⟦p_d ⊠_d q_d⟧ → μ ⊠ ν weakly.

Since the proof of this result is very similar to the proof in the real case (see Propositions 2.17 and 2.18), we only provide the idea of the proof. For the details, we refer the reader to [AP18, proof of Corollary 5.5].

Idea of the proof.

Since 𝕋 is compact, moment convergence of the polynomial sequence (p_d)_{d∈ℕ} ⊂ 𝒫(𝕋) is equivalent to weak convergence. The proof then follows from the equivalence between convergence of moments and convergence of finite free cumulants of (p_d), similarly to the real case. ∎

9. Approximation of Tucci, Haagerup, and Möller

The purpose of this section is to prove Theorem 1.3 stating that Fujie and Ueda’s limit theorem [FU23] is an approximation of Tucci, Haagerup and Möller’s limit theorem [Tuc10, HM13].

The main idea is that this approximation is equivalent to the convergence of the finite TT-transform from Section 6 to the TT-transform introduced in (2.2). This, in turn, is almost equivalent to the convergence of the finite SS-transform to the SS-transform, except that the TT-transform is better suited to handle the case where the polynomial has roots at 0. Notice that we also need to include the case of δ_0. First, we will adapt Theorem 1.1 to a version with TT-transforms.

Theorem 9.1.

Given a measure μ(0)\mu\in\mathcal{M}(\mathbb{R}_{\geq 0}) and a sequence of polynomials (pj)j𝒫(0)(p_{j})_{j\in\mathbb{N}}\subset\mathcal{P}(\mathbb{R}_{\geq 0}), the following are equivalent:

  1. (1)

    The weak convergence of (pj)j(p_{j})_{j\in\mathbb{N}} to μ\mu.

  2. (2)

    For every diagonal sequence (kj)j(k_{j})_{j\in\mathbb{N}} with limit t(0,1)t\in(0,1), it holds that

    limjTpj(dj)(kjdj)=Tμ(t).\lim_{j\to\infty}T_{p_{j}}^{(d_{j})}\left(\frac{k_{j}}{d_{j}}\right)=T_{\mu}(t).
  3. (3)

    For every t(0,1)t\in(0,1), it holds that

    limjTpj(dj)(t)=Tμ(t).\lim_{j\to\infty}T_{p_{j}}^{(d_{j})}(t)=T_{\mu}(t).
Proof.

Using the definition of the TT-transform in terms of the SS-transform from (2.2), its finite analogue (6.4), and our main Theorem 1.1, we obtain that μ⟦p_j⟧ → μ weakly with μ ≠ δ_0 if and only if, for every diagonal sequence (k_j)_{j∈ℕ} with limit t ∈ (μ({0}), 1), one has that

limjTpj(dj)(kjdj)=limj1Spj(dj)(kjdjdj)=1Sμ(t1)=Tμ(t).\lim_{j\to\infty}T_{p_{j}}^{(d_{j})}\left(\frac{k_{j}}{d_{j}}\right)=\lim_{j\to\infty}\frac{1}{S_{p_{j}}^{(d_{j})}\left(\tfrac{k_{j}-d_{j}}{d_{j}}\right)}=\frac{1}{S_{\mu}(t-1)}=T_{\mu}(t).

Since the functions T_{p_j}^{(d_j)} are positive and non-decreasing (see Remark 6.6) and T_μ is positive, continuous and increasing, the previous limit extends to the whole interval (0,1), and this is equivalent to part (2).

The equivalence between (2) and (3) follows from the increasing property of the TT-transform. Indeed, for any t∈(0,1) one can find diagonal sequences (k_j)_j and (k′_j)_j, both with limit t, satisfying k_j/d_j ≤ t ≤ k′_j/d_j, and then

Tμ(t)=limjTpj(dj)(kjdj)limjTpj(dj)(t)limjTpj(dj)(kjdj)=Tμ(t).T_{\mu}(t)=\lim_{j\to\infty}T_{p_{j}}^{(d_{j})}\left(\frac{k_{j}}{d_{j}}\right)\leq\lim_{j\to\infty}T_{p_{j}}^{(d_{j})}\left(t\right)\leq\lim_{j\to\infty}T_{p_{j}}^{(d_{j})}\left(\frac{k_{j}^{\prime}}{d_{j}}\right)=T_{\mu}(t).

The converse statement follows by a similar argument.

Therefore, we are only left to check what happens when μ = δ_0, in which case T_μ(t) = 0 for t∈(0,1). If (p_j)_{j∈ℕ} converges to δ_0, it is clear that μ⟦Shi_ε(p_j)⟧ → δ_ε weakly as j→∞ for any ε>0. Thus,

lim supjTpj(dj)(t)ϵfor t(0,1)\limsup_{j\to\infty}T_{p_{j}}^{(d_{j})}(t)\leq\epsilon\qquad\text{for }t\in(0,1)

because limjTShiϵ(pj)(dj)(t)=Tδϵ(t)=ϵ\displaystyle\lim_{j\to\infty}T_{{\rm Shi}_{\epsilon}(p_{j})}^{(d_{j})}(t)=T_{\delta_{\epsilon}}(t)=\epsilon and Tpj(dj)(t)TShiϵ(pj)(dj)(t)T_{p_{j}}^{(d_{j})}(t)\leq T_{{\rm Shi}_{\epsilon}(p_{j})}^{(d_{j})}(t) by Proposition 4.6. Letting ϵ\epsilon tend to 0 we obtain limjTpj(dj)(t)=0\displaystyle\lim_{{j\to\infty}}T_{p_{j}}^{(d_{j})}(t)=0.

For the converse, assume that limjTpj(dj)(t)=0\displaystyle\lim_{j\to\infty}T_{p_{j}}^{(d_{j})}(t)=0 for t(0,1)t\in(0,1). For the sake of contradiction, we assume that (pj)j(p_{j})_{j\in\mathbb{N}} does not converge to δ0\delta_{0}. Then there exist ϵ>0\epsilon>0 and a>0a\in\mathbb{R}_{>0} such that

lim supjμpj(a)ϵ.\limsup_{j\to\infty}\mu\left\llbracket p_{j}\right\rrbracket(\mathbb{R}_{\geq a})\geq\epsilon.

By taking a subsequence, we may assume μpj(a)ϵ\mu\left\llbracket p_{j}\right\rrbracket(\mathbb{R}_{\geq a})\geq\epsilon for all jj. Let us set another sequence of polynomials qj(x):=xdjkj(xa)kjq_{j}(x)\mathrel{\mathop{\ordinarycolon}}=x^{d_{j}-k_{j}}(x-a)^{k_{j}} where kj=ϵdjk_{j}=\lfloor\epsilon d_{j}\rfloor. Then it is clear that μqj𝑤(1ϵ)δ0+ϵδa=:ν\mu\left\llbracket q_{j}\right\rrbracket\xrightarrow{w}(1-\epsilon)\delta_{0}+\epsilon\delta_{a}=\mathrel{\mathop{\ordinarycolon}}\nu and Tqj(dj)(t)Tpj(dj)(t)T_{q_{j}}^{(d_{j})}(t)\leq T_{p_{j}}^{(d_{j})}(t) for all t(0,1)t\in(0,1) by Proposition 4.6. However, Tqj(dj)(t)Tν(t)0T_{q_{j}}^{(d_{j})}(t)\to T_{\nu}(t)\not\equiv 0. This is a contradiction. Therefore we conclude that (pj)j1(p_{j})_{j\geq 1} converges to δ0\delta_{0}. ∎

Recall that, given a measure μ ∈ ℳ(ℝ_{≥0}), the map Φ from (2.3) yields a measure Φ(μ) ∈ ℳ(ℝ_{≥0}) such that T_μ and F_{Φ(μ)} are inverse functions.

We are now ready to give a proof of Theorem 1.3, namely that

μpd𝑤μμΦd(pd)𝑤Φ(μ).\mu\left\llbracket p_{d}\right\rrbracket\xrightarrow{w}\mu\qquad\Leftrightarrow\qquad\mu\left\llbracket\Phi_{d}(p_{d})\right\rrbracket\xrightarrow{w}\Phi(\mu).
Proof of Theorem 1.3.

Notice that μ⟦Φ_d(p_d)⟧ → Φ(μ) weakly is equivalent to the convergence F_{μ⟦Φ_d(p_d)⟧}(x) → F_{Φ(μ)}(x) as d→∞ for every x∈ℝ that is a continuity point of F_{Φ(μ)}. In turn, Lemma 2.5 and the definition of the TT-transform assure us that the latter is equivalent to the convergence T_{p_d}^{(d)}(x) → T_μ(x) for every continuity point x∈(0,1) of T_μ. Since T_μ is continuous on (0,1), the latter is equivalent to μ⟦p_d⟧ → μ weakly, due to Theorem 9.1. ∎

10. Examples and applications

In this section, we present various limit theorems relating finite free probability to free probability. Thus, throughout the whole section, we will consider situations where the dimension dd or djd_{j} tends to infinity, and assume that the polynomials converge to a measure, as in Notation 2.16.

10.1. pddqdp_{d}\boxtimes_{d}q_{d}  approximates μν\mu\boxtimes\nu

As announced, we present the first application of our main theorems: a new, independent proof of part (2) of Proposition 2.18 that covers the general case.

Proposition 10.1.

Let (pd)d(p_{d})_{d\in\mathbb{N}} and (qd)d(q_{d})_{d\in\mathbb{N}} be sequences of polynomials such that pd,qd𝒫d(0)p_{d},q_{d}\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}) and let μ,ν(0)\mu,\nu\in\mathcal{M}(\mathbb{R}_{\geq 0}) such that (pd)d(p_{d})_{d\in\mathbb{N}} and (qd)d(q_{d})_{d\in\mathbb{N}} weakly converge to μ\mu and ν\nu, respectively. Then (pddqd)d(p_{d}\boxtimes_{d}q_{d})_{d\in\mathbb{N}} weakly converges to μν\mu\boxtimes\nu.

Proof.

By Equation (6.9) and Theorem 9.1, we obtain

Tpddqd(d)(t)=Tpd(d)(t)Tqd(d)(t)Tμ(t)Tν(t)=Tμν(t)T^{(d)}_{p_{d}\boxtimes_{d}q_{d}}(t)=T_{p_{d}}^{(d)}(t)T_{q_{d}}^{(d)}(t)\rightarrow T_{\mu}(t)T_{\nu}(t)=T_{\mu\boxtimes\nu}(t)

as dd\to\infty for every t(0,1)t\in(0,1). Hence, μpddqd𝑤μν\mu\left\llbracket p_{d}\boxtimes_{d}q_{d}\right\rrbracket\xrightarrow{w}\mu\boxtimes\nu by Theorem 9.1 again. ∎
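Recall that Equation (6.9) comes from the fact that ⊠_d acts coefficientwise on the normalized coefficients, ẽ_k(p ⊠_d q) = ẽ_k(p)ẽ_k(q). A small numerical sketch (Python; the helper `laguerre_e` and the parameter choices are ours, with the Laguerre coefficients taken from Definition 10.5 below and the limit value T_{𝐌𝐏_b}(t) = b−1+t) illustrates both the multiplicativity of the finite TT-transform and its limit:

```python
def laguerre_e(d, b):
    """Normalized coefficients e~_k = d^{-k}(bd)(bd-1)...(bd-k+1) of H_d[b; -]."""
    e = [1.0]
    for k in range(1, d + 1):
        e.append(e[-1] * (b * d - k + 1) / d)
    return e

d, a, b = 300, 2.0, 3.0
ep, eq = laguerre_e(d, a), laguerre_e(d, b)
er = [x * y for x, y in zip(ep, eq)]      # p ⊠_d q acts coefficientwise
k = d // 2                                # diagonal point with k/d = t = 1/2
Tp = ep[d - k] / ep[d - k - 1]            # finite T-transform at k/d
Tq = eq[d - k] / eq[d - k - 1]
Tr = er[d - k] / er[d - k - 1]
print(Tr, Tp * Tq)                        # multiplicativity (up to rounding)
print((a - 0.5) * (b - 0.5))              # limit T_{MP_a}(1/2) T_{MP_b}(1/2)
```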

10.2. A limit for the coefficients of a sequence of polynomials.

Our main theorem provides a limit for the ratio of consecutive coefficients of a converging sequence of polynomials. This convergence is easily translated into limits for other ratios, or for the behaviour of individual coefficients.

Proposition 10.2.

Fix a measure μ ∈ ℳ(ℝ_{≥0}) and fix 0 < t < u < 1−μ({0}). Let p_j ∈ 𝒫_{d_j}(ℝ) be a sequence of polynomials converging to μ, and let (k_j)_{j∈ℕ}, (l_j)_{j∈ℕ} ⊂ ℕ be diagonal sequences with ratio limits t and u, respectively. Then

limj𝖾~lj(dj)(pj)𝖾~kj(dj)(pj)dj=exp(tulogSμ(x)𝑑x).\lim_{j\to\infty}\sqrt[d_{j}]{\frac{\widetilde{\mathsf{e}}_{l_{j}}^{(d_{j})}(p_{j})}{\widetilde{\mathsf{e}}_{k_{j}}^{(d_{j})}(p_{j})}}=\exp\left(-\int_{t}^{u}\log S_{\mu}(-x)dx\right).

Additionally, unless μ=δa\mu=\delta_{a} for some a>0a>0,

limj𝖾~lj(dj)(pj)𝖾~kj(dj)(pj)dj=exp(tulogSμ(x)𝑑x)=exp(Tμ(1u)Tμ(1t)logxΦ(μ)(dx)).\lim_{j\to\infty}\sqrt[d_{j}]{\frac{\widetilde{\mathsf{e}}_{l_{j}}^{(d_{j})}(p_{j})}{\widetilde{\mathsf{e}}_{k_{j}}^{(d_{j})}(p_{j})}}=\exp\left(-\int_{t}^{u}\log S_{\mu}(-x)dx\right)=\exp\left(\int_{T_{\mu}(1-u)}^{T_{\mu}(1-t)}\log x\,\Phi(\mu)(dx)\right).
Proof.

By Theorem 9.1, one has the convergence of T_{p_j}^{(d_j)} to T_μ. Note that the convergence is locally uniform by Pólya's theorem, since these are monotone functions; the same is true for the convergence of log T_{p_j}^{(d_j)} to log T_μ. Hence,

(10.1) limjtulogTpj(dj)(x)𝑑x=tulogTμ(x)𝑑x.\lim_{j\to\infty}\int_{t}^{u}\log T_{p_{j}}^{(d_{j})}(x)dx=\int_{t}^{u}\log T_{\mu}(x)dx.

Besides, if μ is not a Dirac measure, then T_μ(x) is a strictly monotone function. Thus, by the change of variables y = T_μ(x), which is equivalent to F_{Φ(μ)}(y) = x, one has

tulogTμ(x)𝑑x=Tμ(t)Tμ(u)logyΦ(μ)(dy).\int_{t}^{u}\log T_{\mu}(x)dx=\int_{T_{\mu}(t)}^{T_{\mu}(u)}\log y\;\Phi(\mu)(dy).

For diagonal sequences (kj)j,(lj)j(k_{j})_{j\in\mathbb{N}},(l_{j})_{j\in\mathbb{N}} with ratio limits limjkjdj=t\lim_{j\to\infty}\frac{k_{j}}{d_{j}}=t and limjljdj=u\lim_{j\to\infty}\frac{l_{j}}{d_{j}}=u, the left-hand limit of (10.1) coincides with the limit of

\begin{split}\int_{\frac{k_{j}}{d_{j}}}^{\frac{l_{j}}{d_{j}}}\log T_{p_{j}}^{(d_{j})}(x)\,dx&=\frac{1}{d_{j}}\sum_{i=k_{j}}^{l_{j}-1}\log T_{p_{j}}^{(d_{j})}\left(\frac{i}{d_{j}}\right)\\ &=\frac{1}{d_{j}}\sum_{i=k_{j}}^{l_{j}-1}\log\left(\frac{\widetilde{\mathsf{e}}_{d_{j}-i}^{(d_{j})}(p_{j})}{\widetilde{\mathsf{e}}_{d_{j}-i-1}^{(d_{j})}(p_{j})}\right)\\ &=\frac{1}{d_{j}}\log\left(\frac{\widetilde{\mathsf{e}}_{d_{j}-k_{j}}^{(d_{j})}(p_{j})}{\widetilde{\mathsf{e}}_{d_{j}-l_{j}}^{(d_{j})}(p_{j})}\right)\end{split}

by Riemann-sum approximation. Taking exponentials, we have

\lim_{j\to\infty}\sqrt[d_{j}]{\frac{\widetilde{\mathsf{e}}_{d_{j}-k_{j}}^{(d_{j})}(p_{j})}{\widetilde{\mathsf{e}}_{d_{j}-l_{j}}^{(d_{j})}(p_{j})}}=\exp\left(\int_{t}^{u}\log T_{\mu}(x)\,dx\right).

Finally, replacing (k_j, l_j) by (d_j−l_j, d_j−k_j) and using T_μ(1−x) = 1/S_μ(−x), we obtain the desired result. ∎
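As an illustration of Proposition 10.2, one can instantiate it with Laguerre-type coefficients ẽ_k = d^{−k}(bd)(bd−1)⋯(bd−k+1) (see Section 10.3 below), whose limiting measure is 𝐌𝐏_b with S_{𝐌𝐏_b}(−x) = 1/(b−x). The following sketch (Python; the parameter choices are ours) compares the coefficient ratio with the closed-form integral:

```python
from math import log, exp

# Laguerre-type normalized coefficients: log e~_k = sum_{i<k} log(b - i/d),
# so (e~_l / e~_k)^(1/d) is a Riemann sum for exp(∫_t^u log(b-x) dx).
b, d = 2.0, 20000
k, l = 3 * d // 10, 7 * d // 10            # k/d -> t = 0.3, l/d -> u = 0.7
t, u = k / d, l / d
lhs = exp(sum(log(b - i / d) for i in range(k, l)) / d)
F = lambda x: -(b - x) * (log(b - x) - 1)  # antiderivative of log(b - x)
rhs = exp(F(u) - F(t))                     # exp(-∫_t^u log S_{MP_b}(-x) dx)
print(lhs, rhs)
```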

Notice that if we can take t=0 in the previous result, then we obtain the following:

Corollary 10.3.

Fix a measure μ(0)\mu\in\mathcal{M}(\mathbb{R}_{\geq 0}) and a sequence of polynomials pj𝒫dj()p_{j}\in\mathcal{P}_{d_{j}}(\mathbb{R}) converging to μ\mu. Assume Sμ(0)=1/Tμ(1)=1/m1(μ)(0,)S_{\mu}(0)=1/T_{\mu}(1)=1/m_{1}(\mu)\in(0,\infty) and

Spj(dj)(1dj)=1Tpj(dj)(dj1dj)=1𝖾~1(dj)(pj)1m1(μ)S_{p_{j}}^{(d_{j})}\left(-\frac{1}{d_{j}}\right)=\frac{1}{T_{p_{j}}^{(d_{j})}\left(\frac{d_{j}-1}{d_{j}}\right)}=\frac{1}{\widetilde{\mathsf{e}}_{1}^{(d_{j})}(p_{j})}\to\frac{1}{m_{1}(\mu)}

as j→∞; that is, assume that the first moment of p_j converges to the first moment of μ. Then for every t ∈ (0, 1−μ({0})) and every diagonal sequence (k_j)_{j∈ℕ} ⊂ ℕ with ratio limit t, we have

limj(𝖾~kj(dj)(pj))1dj=exp(0tlogSμ(x)𝑑x).\lim_{j\to\infty}\left(\widetilde{\mathsf{e}}_{k_{j}}^{(d_{j})}(p_{j})\right)^{\frac{1}{d_{j}}}=\exp\left(-\int_{0}^{t}\log S_{\mu}(-x)dx\right).

Additionally, unless μ=δa\mu=\delta_{a} for some a>0a>0,

limj(𝖾~kj(dj)(pj))1dj=exp(0tlogSμ(x)𝑑x)=exp(Tμ(1t)Tμ(1)logxΦ(μ)(dx)),\lim_{j\to\infty}\left(\widetilde{\mathsf{e}}_{k_{j}}^{(d_{j})}(p_{j})\right)^{\frac{1}{d_{j}}}=\exp\left(-\int_{0}^{t}\log S_{\mu}(-x)dx\right)=\exp\left(\int_{T_{\mu}(1-t)}^{T_{\mu}(1)}\log x\,\Phi(\mu)(dx)\right),

where we may replace Tμ(1)=m1(μ)T_{\mu}(1)=m_{1}(\mu) by \infty because the support of Φ(μ)\Phi(\mu) is included in [Tμ(0),Tμ(1)][T_{\mu}(0),T_{\mu}(1)], see Equation (2.4).

Remark 10.4.

In [HM13] it is shown that the following integrals are all equal:

01logSμ(x)𝑑x=0logxμ(dx)=0logxΦ(μ)(dx),\int_{0}^{1}\log S_{\mu}(-x)dx=\int_{0}^{\infty}\log x\,\mu(dx)=\int_{0}^{\infty}\log x\Phi(\mu)(dx),

whenever one of them is finite. Now, recall from [FK52] that the Fuglede-Kadison determinant of a positive operator T in a tracial W^*-algebra, with distribution μ_T, is given by

Δ(T)=exp(0logxμT(dx)).\Delta(T)=\exp\left(\int_{0}^{\infty}\log x\,\mu_{T}(dx)\right).

In the case of a positive matrix AdA_{d} of dimension dd with eigenvalues {λj}j=1d\{\lambda_{j}\}_{j=1}^{d}, the Fuglede-Kadison determinant can be written as

\Delta(A_{d})=(\det A_{d})^{\frac{1}{d}}=\prod_{i=1}^{d}\lambda_{i}^{\frac{1}{d}}=\left(\widetilde{\mathsf{e}}_{d}^{(d)}\right)^{\frac{1}{d}}.

Hence, the statements above can be seen as a generalization of the convergence of the Fuglede-Kadison determinant for finite dimensional operators that converge weakly to an operator TT.
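This convergence can be checked numerically in the simplest nontrivial case: for the Laguerre polynomial with b=1 (the finite analogue of 𝐌𝐏_1, see Example 10.10 below) one has ẽ_d^{(d)} = d!/d^d, while Φ(𝐌𝐏_1) = 𝐔(0,1) gives ∫_0^∞ log x Φ(𝐌𝐏_1)(dx) = ∫_0^1 log x dx = −1. The sketch below (Python; our own setup) compares the finite Fuglede-Kadison determinant with e^{−1}:

```python
from math import exp, lgamma, log

# Laguerre polynomial with b = 1: e~_d^{(d)} = d!/d^d, so the finite
# Fuglede-Kadison determinant is (d!)^(1/d) / d, which should approach
# exp(∫_0^1 log x dx) = 1/e since Phi(MP_1) = U(0, 1).
d = 20000
fk = exp(lgamma(d + 1) / d - log(d))   # (d!)^(1/d) / d via log-gamma
print(fk, exp(-1.0))
```

By Stirling's formula the error is of order log(d)/d, consistent with the slow convergence of the determinant.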

10.3. Hypergeometric polynomials

The hypergeometric polynomials are a particular family of generalized hypergeometric functions. They were studied in connection with finite free probability in [MMP24a], where several families of parameters for which the polynomials have all positive roots were determined. This large family of polynomials contains as particular cases some important families of orthogonal polynomials, such as the Laguerre, Bessel and Jacobi polynomials. In this section, we compute their finite SS-transform and use it to directly obtain the SS-transform of their limiting root distribution.

Definition 10.5 (Hypergeometric polynomials).

For a_1, …, a_i ∈ ℝ∖{1/d, 2/d, …, (d−1)/d} and b_1, …, b_j ∈ ℝ, we denote by ℋ_d[b_1,…,b_j ; a_1,…,a_i] (our notation slightly differs from that in [MMP24a], by a dilation of d^{i−j} on the roots; this simplifies the study of the asymptotic behaviour of the roots and does not change the fact that the roots are all real, or all positive) the unique monic polynomial of degree d with coefficients given by

𝖾~k(d)(d[.b1,,bja1,,ai.]):=dk(ij)s=1j(bsd)kr=1i(ard)k for k=1,,d.\widetilde{\mathsf{e}}_{k}^{(d)}\left(\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{b_{1},\dots,b_{j}}{a_{1},\dots,a_{i}}\right]}\right)\mathrel{\mathop{\ordinarycolon}}=d^{k(i-j)}\frac{\prod_{s=1}^{j}\left(b_{s}d\right)_{k}}{\prod_{r=1}^{i}\left(a_{r}d\right)_{k}}\qquad\text{ for }k=1,\dots,d.

Notice that the reason why we do not allow a parameter below to be of the form kd\frac{k}{d} is to avoid indeterminacy (a division by 0).

We also allow the cases where there is no parameter below or no parameter above (i=0 or j=0); in these cases the coefficients are

𝖾~k(d)(d[.b1,,bj.]):=dkjs=1j(bsd)kand𝖾~k(d)(d[.a1,,ai.]):=dkir=1i1(ard)k.\widetilde{\mathsf{e}}_{k}^{(d)}\left(\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{b_{1},\dots,b_{j}}{-}\right]}\right)\mathrel{\mathop{\ordinarycolon}}=d^{-kj}\prod_{s=1}^{j}\left(b_{s}d\right)_{k}\quad\text{and}\quad\widetilde{\mathsf{e}}_{k}^{(d)}\left(\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{-}{a_{1},\dots,a_{i}}\right]}\right)\mathrel{\mathop{\ordinarycolon}}=d^{ki}\prod_{r=1}^{i}\frac{1}{\left(a_{r}d\right)_{k}}.

Since the ratio of two consecutive coefficients of a hypergeometric polynomial is easily expressed in terms of the parameters, a direct computation yields their finite SS-transform.

Lemma 10.6.

For parameters a1,,ai{1d,2d,,d1d}a_{1},\dots,a_{i}\in\mathbb{R}\setminus\left\{\tfrac{1}{d},\tfrac{2}{d},\dots,\tfrac{d-1}{d}\right\} and b1,,bjb_{1},\dots,b_{j}\in\mathbb{R}, the finite SS-transform of the polynomial p:=d[.b1,,bja1,,ai.]p\mathrel{\mathop{\ordinarycolon}}=\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{b_{1},\dots,b_{j}}{a_{1},\dots,a_{i}}\right]} is

Sp(d)(kd)=r=1i(ark1d)s=1j(bsk1d).S_{p}^{(d)}\left(-\frac{k}{d}\right)=\frac{\prod_{r=1}^{i}(a_{r}-\frac{k-1}{d})}{\prod_{s=1}^{j}(b_{s}-\frac{k-1}{d})}.

Equivalently, the roots of the polynomial Φd(p)\Phi_{d}(p) are given by:

s=1j(bsk1d)r=1i(ark1d)for k=1,,d.\frac{\prod_{s=1}^{j}(b_{s}-\frac{k-1}{d})}{\prod_{r=1}^{i}(a_{r}-\frac{k-1}{d})}\qquad\text{for }k=1,\dots,d.
Proof.

Directly from the definition we compute

S_{p}^{(d)}\left(-\frac{k}{d}\right)=\frac{\widetilde{\mathsf{e}}_{k-1}^{(d)}(p)}{\widetilde{\mathsf{e}}_{k}^{(d)}(p)}=\frac{d^{(k-1)(i-j)}\prod_{s=1}^{j}\left(b_{s}d\right)_{k-1}}{\prod_{r=1}^{i}\left(a_{r}d\right)_{k-1}}\cdot\frac{\prod_{r=1}^{i}\left(a_{r}d\right)_{k}}{d^{k(i-j)}\prod_{s=1}^{j}\left(b_{s}d\right)_{k}}=d^{j-i}\,\frac{\prod_{r=1}^{i}(a_{r}d-k+1)}{\prod_{s=1}^{j}(b_{s}d-k+1)}=\frac{\prod_{r=1}^{i}\left(a_{r}-\frac{k-1}{d}\right)}{\prod_{s=1}^{j}\left(b_{s}-\frac{k-1}{d}\right)}

as desired. ∎
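Lemma 10.6 can also be checked symbolically for sample parameters. The sketch below (Python, exact rational arithmetic, with the falling-factorial convention (x)_k = x(x−1)⋯(x−k+1) and our own helper names) compares the coefficient ratio with the product formula:

```python
from fractions import Fraction as F

def falling(x, k):
    """Falling factorial (x)_k = x(x-1)...(x-k+1)."""
    out = F(1)
    for i in range(k):
        out *= x - i
    return out

def e_tilde(d, k, bs, as_):
    """Normalized coefficient from Definition 10.5 (exact rationals)."""
    val = F(d) ** (k * (len(as_) - len(bs)))
    for b in bs:
        val *= falling(b * d, k)
    for a in as_:
        val /= falling(a * d, k)
    return val

d, k = 7, 4
bs, as_ = [F(3), F(5, 2)], [F(-2)]            # sample parameters b1, b2 and a1
ratio = e_tilde(d, k - 1, bs, as_) / e_tilde(d, k, bs, as_)
lemma = F(1)
for a in as_:
    lemma *= a - F(k - 1, d)
for b in bs:
    lemma /= b - F(k - 1, d)
print(ratio == lemma)                          # the two expressions agree exactly
```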

As a direct corollary, we can determine the SS-transform of the limiting measure whenever the hypergeometric polynomials have all positive real roots. Notice that a more general result, for hypergeometric polynomials that do not necessarily have real roots, was recently proved in [MMP24, Theorem 3.7]; there, the approach relies on the three-term recurrence relation that is specific to this family of polynomials. We would like to highlight that, with our approach, the computation of the limiting measure using our main theorem is straightforward.

Corollary 10.7.

For every d1d\geq 1, consider parameters a1(d),,ai(d){1d,2d,,d1d}a_{1}^{(d)},\dots,a^{(d)}_{i}\in\mathbb{R}\setminus\left\{\tfrac{1}{d},\tfrac{2}{d},\dots,\tfrac{d-1}{d}\right\} and b1(d),,bj(d)b_{1}^{(d)},\dots,b_{j}^{(d)}\in\mathbb{R} such that the following limits exist

limda1(d)=a1,,limdai(d)=ai,limdb1(d)=b1,,limdbj(d)=bj.\lim_{d\to\infty}a_{1}^{(d)}=a_{1},\quad\dots,\quad\lim_{d\to\infty}a_{i}^{(d)}=a_{i},\quad\lim_{d\to\infty}b_{1}^{(d)}=b_{1},\quad\dots,\quad\lim_{d\to\infty}b_{j}^{(d)}=b_{j}.

Assume further pd:=d[.b1(d),,bj(d)a1(d),,ai(d).]𝒫d(>0)p_{d}\mathrel{\mathop{\ordinarycolon}}=\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{b_{1}^{(d)},\ \dots,\ b_{j}^{(d)}}{a_{1}^{(d)},\ \dots,\ a^{(d)}_{i}}\right]}\in\mathcal{P}_{d}(\mathbb{R}_{>0}) for every dd. Then

limdμpd=μ,\lim_{d\to\infty}\mu\left\llbracket p_{d}\right\rrbracket=\mu,

where μ\mu is the measure supported in the positive real line, with SS-transform given by

Sμ(t)=r=1i(ar+t)s=1j(bs+t)for t(1,0).S_{\mu}(t)=\frac{\prod_{r=1}^{i}(a_{r}+t)}{\prod_{s=1}^{j}(b_{s}+t)}\qquad\text{for }t\in(-1,0).
Remark 10.8.

Notice that in Corollary 10.7 one can also consider polynomials with all negative roots, or equivalently, one can let the sequence of polynomials be

pd:=Dil1d[.b1(d),,bj(d)a1(d),,ai(d).]𝒫d(<0).p_{d}\mathrel{\mathop{\ordinarycolon}}={\rm Dil}_{-1}\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{b_{1}^{(d)},\ \dots,\ b_{j}^{(d)}}{a_{1}^{(d)},\ \dots,\ a^{(d)}_{i}}\right]}\in\mathcal{P}_{d}(\mathbb{R}_{<0}).

In this case, the odd coefficients of the polynomials change sign, so the finite SS-transform changes sign as well; ultimately, the SS-transform of the limiting measure is

Sμ(t)=r=1i(ar+t)s=1j(bs+t)for t(1,0).S_{\mu}(t)=-\frac{\prod_{r=1}^{i}(a_{r}+t)}{\prod_{s=1}^{j}(b_{s}+t)}\qquad\text{for }t\in(-1,0).
Example 10.9 (Identity for d\boxtimes_{d}).

When there are no parameters above nor below, we obtain the polynomial

pd=d[..](x)=(x1)d,p_{d}=\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{-}{-}\right]}(x)=(x-1)^{d},

which is the identity for the multiplicative convolution. Its finite SS-transform is given by

Spd(d)(kd)=1for k=1,,d.S_{p_{d}}^{(d)}\left(-\frac{k}{d}\right)=1\qquad\text{for }k=1,\dots,d.

Thus the limiting distribution is the δ1\delta_{1} measure.

Example 10.10 (Laguerre polynomials).

In the case where we just consider one parameter bb above and no parameter below, the polynomial p=d[.b.]p=\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{b}{-}\right]} is the well-known Laguerre polynomial and has all positive roots whenever b>11db>1-\frac{1}{d}. By Lemma 10.6 the finite SS-transform is

Sp(d)(kd)=1bk1dfor k=1,,d.S_{p}^{(d)}\left(-\frac{k}{d}\right)=\frac{1}{b-\frac{k-1}{d}}\qquad\text{for }k=1,\dots,d.

With respect to the asymptotic behaviour, notice that using Corollary 10.7 we can retrieve the known result that the limiting zero counting measure of a sequence of Laguerre polynomials is the Marchenko-Pastur distribution. Indeed, for a sequence (b_d)_{d≥1} ⊂ [1,∞) with lim_{d→∞} b_d = b ≥ 1, we obtain in the limit the Marchenko-Pastur distribution of parameter b:

limdμd[.bd.]=𝐌𝐏b,\lim_{d\to\infty}\mu\left\llbracket\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{b_{d}}{-}\right]}\right\rrbracket={\bf MP}_{b},

which is determined by the SS-transform

S𝐌𝐏b(t)=1b+tfor t(1,0).S_{{\bf MP}_{b}}(t)=\frac{1}{b+t}\qquad\text{for }t\in(-1,0).

On the other hand, the spectral measure of Φd(p)\Phi_{d}(p) is the uniform distribution on {b1,b1+1d,,b1d}\{b-1,b-1+\frac{1}{d},\dots,b-\frac{1}{d}\} and, in the limit, we retrieve the fact that

Φ(𝐌𝐏b)=𝐔(b1,b).\Phi({\bf MP}_{b})={\bf U}(b-1,b).
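As a quick numerical check of this convergence (Python; the moment formulas m_1 = ẽ_1 and m_2 = (e_1² − 2e_2)/d via Newton's identity are standard, while the parameter choices are ours), the second moment of the empirical root distribution of ℋ_d[b; −] approaches m_2(𝐌𝐏_b) = b² + b:

```python
from math import comb

# Moments of H_d[b; -] from normalized coefficients (Newton's identity):
# m1 = e~_1 = b and m2 = (e_1^2 - 2 e_2) / d with e_k = C(d, k) e~_k.
b, d = 3.0, 10**6
e1t = b                                # e~_1 = (bd)/d = b
e2t = (b * d) * (b * d - 1) / d**2     # e~_2 = (bd)(bd-1)/d^2
e1, e2 = d * e1t, comb(d, 2) * e2t
m2 = (e1**2 - 2 * e2) / d
print(m2, b**2 + b)                    # m2 = b^2 + b - b/d -> m2(MP_b)
```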

As an application of the previous example and Proposition 10.1, we know that the multiplicative convolution of a polynomial with a Laguerre polynomial provides a finite approximation of a compound free Poisson distribution.

Corollary 10.11.

Let (pj)j(p_{j})_{j\in\mathbb{N}} be a sequence of polynomials with pj𝒫dj(0)p_{j}\in\mathcal{P}_{d_{j}}(\mathbb{R}_{\geq 0}) such that μpj𝑤ν(0)\mu\left\llbracket p_{j}\right\rrbracket\xrightarrow{w}\nu\in\mathcal{M}(\mathbb{R}_{\geq 0}). Then (dj[.b.]djpj)j\left(\mathcal{H}_{d_{j}}{\left[\genfrac{.}{.}{0.0pt}{1}{b}{-}\right]}\boxtimes_{d_{j}}p_{j}\right)_{j\in\mathbb{N}} weakly converges to the compound free Poisson distribution 𝐌𝐏𝐛ν{\rm\bf MP_{b}}\boxtimes\nu (with rate bb and jump distribution ν\nu).

We now turn our attention to the reversed Laguerre polynomials.

Example 10.12 (Bessel polynomials).

In the case where we just consider one parameter aa below and no parameter above, the polynomial p(x)=d[.a.](x)p(x)=\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{-}{a}\right]}(-x) goes by the name of Bessel polynomial and has all positive roots whenever a<0a<0. By Remark 10.8 the finite SS-transform is

Sp(d)(kd)=a+k1dfor k=1,,d.S_{p}^{(d)}\left(-\frac{k}{d}\right)=-a+\frac{k-1}{d}\qquad\text{for }k=1,\dots,d.

If we consider a sequence (ad)d1(,0)(a_{d})_{d\geq 1}\subset(-\infty,0) with limdad=a<0\lim_{d\to\infty}a_{d}=a<0, then the limiting SS-transform is equal to

S𝐑𝐌𝐏1a(t)=tafor t(1,0)S_{{\bf RMP}_{1-a}}(t)=-t-a\qquad\text{for }t\in(-1,0)

which corresponds to the measure 𝐑𝐌𝐏_{1−a} = (𝐌𝐏_{1−a})^{⟨−1⟩}, the reciprocal distribution of a Marchenko-Pastur distribution of parameter 1−a; see [Yos20, Proposition 3.1].

Example 10.13 (Jacobi polynomials).

In the case where we consider one parameter b above and one parameter a below, we obtain the Jacobi polynomials p = ℋ_d[b; a]. Recall that we already encountered these polynomials in Section 5 and proved some bounds for specific parameters. We now complete the picture. There are several regions of parameters where the polynomial has all positive roots; below we highlight the three main ones. The reader is referred to [MMP24a, Section 5.2 and Table 2] for the complete description of all the regions. As in the previous examples, we can consider sequences (a_d)_{d≥1}, (b_d)_{d≥1} in those regions of parameters with limits lim_{d→∞} a_d = a, lim_{d→∞} b_d = b. By Corollary 10.7 we can compute the SS-transform of the limiting measure, which we denote by μ_{a,b}.

  • For b>1b>1 and a>b+1a>b+1 then d[.ba.]𝒫d([0,1])\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{b}{a}\right]}\in\mathcal{P}_{d}([0,1]). The SS-transform of the limiting measure μa,b\mu_{a,b} is given by

    Sμa,b(t)=a+tb+t=1+abb+tfor t(1,0).S_{\mu_{a,b}}(t)=\frac{a+t}{b+t}=1+\frac{a-b}{b+t}\qquad\text{for }t\in(-1,0).
  • For b>1b>1 and a<0a<0 then d[.ba.](x)𝒫d(0)\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{b}{a}\right]}(-x)\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}). The SS-transform of the limiting measure μa,b\mu_{a,b} is given by

    Sμa,b(t)=atb+tfor t(1,0).S_{\mu_{a,b}}(t)=\frac{-a-t}{b+t}\qquad\text{for }t\in(-1,0).

    Notice that in this case, μ_{a,b} = 𝐌𝐏_b ⊠ (𝐌𝐏_{1−a})^{⟨−1⟩} is the free beta prime distribution fβ′(b, 1−a) studied by Yoshida [Yos20, Eq (2)]. In other words, a simple change of variables shows that for a, b > 1 the polynomials ℋ_d[a_d ; 1−b_d](−x) converge to the measure fβ′(a, b).

  • For a<0a<0 and b<a1b<a-1, then d[.ba.]𝒫d(>0)\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{b}{a}\right]}\in\mathcal{P}_{d}(\mathbb{R}_{>0}). The SS-transform of the limiting measure μa,b\mu_{a,b} is given by

    Sμa,b(t)=a+tb+tfor t(1,0).S_{\mu_{a,b}}(t)=\frac{a+t}{b+t}\qquad\text{for }t\in(-1,0).

To finish this section we want to highlight a case where our main result applies for hypergeometric polynomials with several parameters.

Proposition 10.14.

For every d1d\geq 1, let a1(d),,ai(d)<0a_{1}^{(d)},\dots,a^{(d)}_{i}<0 and b1(d),,bj(d)>1b_{1}^{(d)},\dots,b_{j}^{(d)}>1 such that the following limits exist

limda1(d)=a1,,limdai(d)=ai,limdb1(d)=b1,,limdbj(d)=bj.\lim_{d\to\infty}a_{1}^{(d)}=a_{1},\quad\dots,\quad\lim_{d\to\infty}a_{i}^{(d)}=a_{i},\quad\lim_{d\to\infty}b_{1}^{(d)}=b_{1},\quad\dots,\quad\lim_{d\to\infty}b_{j}^{(d)}=b_{j}.

Let pd(x):=d[.b1(d),,bj(d)a1(d),,ai(d).]((1)ix)p_{d}(x)\mathrel{\mathop{\ordinarycolon}}=\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{b_{1}^{(d)},\ \dots,\ b_{j}^{(d)}}{a_{1}^{(d)},\ \dots,\ a^{(d)}_{i}}\right]}\left((-1)^{i}x\right) for every dd. Then

limdμpd=μ,\lim_{d\to\infty}\mu\left\llbracket p_{d}\right\rrbracket=\mu,

where μ\mu is the measure supported in the positive real line with SS-transform given by

Sμ(t)=r=1i(art)s=1j(bs+t)for t(1,0).S_{\mu}(t)=\frac{\prod_{r=1}^{i}(-a_{r}-t)}{\prod_{s=1}^{j}(b_{s}+t)}\qquad\text{for }t\in(-1,0).
Proof.

From [MMP24, Theorem 4.6] we know that all polynomials pdp_{d} have positive real roots, so we can apply Corollary 10.7 when ii is even (or Remark 10.8 when ii is odd) and the conclusion follows directly. ∎

Remark 10.15.

Notice that in the previous result, we can identify μ\mu as the free multiplicative convolution of Marchenko-Pastur and reversed Marchenko-Pastur distributions. Indeed, it follows from the multiplicativity of the SS-transform and Examples 10.10 and 10.12 that

μ=𝐌𝐏b1𝐌𝐏bj(𝐌𝐏1a1)1(𝐌𝐏1ai)1.\mu={\bf MP}_{b_{1}}\boxtimes\cdots\boxtimes{\bf MP}_{b_{j}}\boxtimes({\bf MP}_{1-a_{1}})^{\langle-1\rangle}\boxtimes\cdots\boxtimes({\bf MP}_{1-a_{i}})^{\langle-1\rangle}.

10.4. Finite analogue of some free stable laws

The purpose of this section is to give finite analogues of some free stable laws. Free stable laws are defined as the distributions μ satisfying that for every a, b > 0 there exist c > 0 and t ∈ ℝ such that

DilaμDilbμ=Dilcμδt.{\rm Dil}_{a}\mu\boxplus{\rm Dil}_{b}\mu={\rm Dil}_{c}\mu\boxplus\delta_{t}.

Any free stable law can be uniquely (up to a scaling parameter) characterized by a pair (α,ρ)(\alpha,\rho), which is in the set of admissible parameters:

𝒜:={(α,ρ):0<α1, 0ρ1}{(α,ρ):1<α2, 1α1ρα1}.\mathcal{A}\mathrel{\mathop{\ordinarycolon}}=\{(\alpha,\rho)\mathrel{\mathop{\ordinarycolon}}0<\alpha\leq 1,\ 0\leq\rho\leq 1\}\cup\{(\alpha,\rho)\mathrel{\mathop{\ordinarycolon}}1<\alpha\leq 2,\ 1-\alpha^{-1}\leq\rho\leq\alpha^{-1}\}.

More precisely, the Voiculescu transform φμ(z):=Gμ1(z1)z\varphi_{\mu}(z)\mathrel{\mathop{\ordinarycolon}}={G_{\mu}^{\langle-1\rangle}(z^{-1})-z} of the free stable law μ\mu is given by

φμ(z)=eiπαρzα+1,(α,ρ)𝒜,z+,\varphi_{\mu}(z)=-e^{i\pi\alpha\rho}z^{-\alpha+1},\qquad(\alpha,\rho)\in\mathcal{A},\quad z\in\mathbb{C}^{+},

see [BP99, AH16, HSW20] for details. We then denote by 𝐟𝐬α,ρ{\bf fs}_{\alpha,\rho} the free stable law with an admissible parameter (α,ρ)𝒜(\alpha,\rho)\in\mathcal{A}. In particular, the following cases are well-known.

  • (Wigner’s semicircle law) 𝐟𝐬2,12(dx)=μsc(dx):=12π4x2𝟏[2,2](x)dx.{\bf fs}_{2,\frac{1}{2}}(dx)=\mu_{\mathrm{sc}}(dx)\mathrel{\mathop{\ordinarycolon}}=\frac{1}{2\pi}\sqrt{4-x^{2}}\mathbf{1}_{[-2,2]}(x)dx.

  • (Cauchy distribution) 𝐟𝐬1,12(dx)=1π(1+x2)𝟏(x)dx{\bf fs}_{1,\frac{1}{2}}(dx)=\frac{1}{\pi(1+x^{2})}\mathbf{1}_{\mathbb{R}}(x)dx.

  • (Positive free 12\frac{1}{2}-stable law) 𝐟𝐬12,1(dx)=4x12πx2𝟏[14,)(x)dx{\bf fs}_{\frac{1}{2},1}(dx)=\frac{\sqrt{4x-1}}{2\pi x^{2}}\mathbf{1}_{[\frac{1}{4},\infty)}(x)dx.

In the following, we construct some finite analogues of free stable laws.

Example 10.16 (Hermite polynomials).

It is well known that the Hermite polynomials (with an appropriate normalization) converge in distribution to the semicircle law; see for instance [Mar21]. Specifically, if we let

H2d(x):=k=0d(2d2k)(1)k(2k)!k!(4d)kx2d2k𝒫2dS()H_{2d}(x)\mathrel{\mathop{\ordinarycolon}}=\sum_{k=0}^{d}\binom{2d}{2k}(-1)^{k}\frac{(2k)!}{k!(4d)^{k}}x^{2d-2k}\in\mathcal{P}_{2d}^{S}(\mathbb{R})

denote the Hermite polynomial of degree 2d2d, then μH2d𝑤μsc\mu\left\llbracket H_{2d}\right\rrbracket\xrightarrow{w}\mu_{\mathrm{sc}} as dd\to\infty. Thus, we can interpret H2dH_{2d} as the finite analogue of the symmetric free 22-stable law 𝐟𝐬2,12{\bf fs}_{2,\frac{1}{2}}. The finite symmetric SS-transform of H2dH_{2d} can be easily computed:

S~H2d(2d)(kd)=1kd+12d.\widetilde{S}_{H_{2d}}^{(2d)}\left(-\frac{k}{d}\right)=\frac{1}{\sqrt{-\tfrac{k}{d}+\tfrac{1}{2d}}}.
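As a numerical aside (an illustration, not part of the formal development), this closed form can be checked in exact arithmetic from the coefficients of H2dH_{2d}. The sketch below assumes the convention, consistent with the computation in (10.4) below, that the square of the finite symmetric SS-transform equals minus the ratio of consecutive even normalized coefficients, 𝖾~2k2/𝖾~2k-\widetilde{\mathsf{e}}_{2k-2}/\widetilde{\mathsf{e}}_{2k}; the helper name `e_tilde_even` is ours.

```python
from fractions import Fraction
from math import factorial

def e_tilde_even(d, k):
    """Normalized even coefficient of H_{2d}: the x^{2d-2k} coefficient is
    binom(2d,2k) (-1)^k (2k)!/(k! (4d)^k), so e~_{2k} = (2k)!/(k! (4d)^k)."""
    return Fraction(factorial(2 * k), factorial(k) * (4 * d) ** k)

# Assumed convention: squared finite symmetric S-transform at -k/d equals
# -e~_{2k-2}/e~_{2k}; it reproduces the closed form 1/(-k/d + 1/(2d)).
for d in range(1, 8):
    for k in range(1, d + 1):
        lhs = -e_tilde_even(d, k - 1) / e_tilde_even(d, k)
        rhs = 1 / (Fraction(-k, d) + Fraction(1, 2 * d))
        assert lhs == rhs
```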
Example 10.17 (Positive finite 12\frac{1}{2}-stable).

From [BP99] we know that 𝐌𝐏11=𝐟𝐬12,1{\bf MP}_{1}^{\langle-1\rangle}={\bf fs}_{\frac{1}{2},1} is the positive free 12\frac{1}{2}-stable law. We also have that the compound free Poisson distribution 𝐌𝐏1𝐟𝐬12,1{\bf MP}_{1}\boxtimes{\bf fs}_{\frac{1}{2},1} coincides with the positive boolean 12\frac{1}{2}-stable law, see [SW97]. Now, we provide the finite counterparts as follows.

Recall from Example 10.10 that the Laguerre polynomial d[.1.]\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{1}{-}\right]} is the finite free analogue of the Marchenko-Pastur distribution 𝐌𝐏1{\bf MP}_{1}. From [MMP24a, Eq. (81)] we know that its reversed polynomial is

(10.2) fd(x):=d[.1d.](x),f_{d}(x)\mathrel{\mathop{\ordinarycolon}}=\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{-}{-\frac{1}{d}}\right]}(-x),

and, letting dd\to\infty, the empirical root distribution of these polynomials tends to the positive free 12\frac{1}{2}-stable law 𝐌𝐏11=𝐟𝐬12,1{\bf MP}_{1}^{\langle-1\rangle}={\bf fs}_{\frac{1}{2},1}. Clearly, we have

Sfd(d)(kd)=kdfor k=1,,d.S_{f_{d}}^{(d)}\left(-\frac{k}{d}\right)=\frac{k}{d}\qquad\text{for }k=1,\dots,d.
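This identity can be verified in exact arithmetic. The sketch below (an illustration, not part of the proof; the helper name `e_tilde_f` is ours) uses the coefficient formula 𝖾~j(d)(fd)=dj/j!\widetilde{\mathsf{e}}_{j}^{(d)}(f_{d})=d^{j}/j!, verified later in the proof of Lemma 10.24, together with the expression of the finite SS-transform as a ratio of consecutive normalized coefficients used throughout this section.

```python
from fractions import Fraction
from math import factorial

def e_tilde_f(d, j):
    # e~_j^{(d)}(f_d) = d^j / j!, the coefficient formula verified in the
    # proof of Lemma 10.24 below.
    return Fraction(d ** j, factorial(j))

# Finite S-transform as the ratio of consecutive normalized coefficients:
# S_{f_d}^{(d)}(-k/d) = e~_{k-1}/e~_k = k/d.
for d in range(1, 10):
    for k in range(1, d + 1):
        assert e_tilde_f(d, k - 1) / e_tilde_f(d, k) == Fraction(k, d)
```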

By [MMP24a, Eq. (82)] and Corollary 10.11, this means that the Jacobi polynomials

d[.11d.](x)=(d[.1.](x))d(d[.1d.](x))\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{1}{-\frac{1}{d}}\right]}(-x)=\left(\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{1}{-}\right]}(x)\right)\boxtimes_{d}\left(\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{-}{-\frac{1}{d}}\right]}(-x)\right)

are the finite analogue of the positive boolean 12\frac{1}{2}-stable law.

Example 10.18 (Symmetric finite 23\frac{2}{3}-stable).

According to [AP09, Theorem 12], there is an interesting relation between positive free stable laws and symmetric free stable laws via the free multiplicative convolution. That is,

(10.3) 𝐟𝐬α,12=μsc𝐟𝐬2αα+2,1{\bf fs}_{\alpha,\frac{1}{2}}=\mu_{\mathrm{sc}}\boxtimes{\bf fs}_{\frac{2\alpha}{\alpha+2},1}

for any α(0,2)\alpha\in(0,2). Here we give the finite analogue of the symmetric 23\frac{2}{3}-stable law.

We define

g2d:=H2d2df2d.g_{2d}\mathrel{\mathop{\ordinarycolon}}=H_{2d}\boxtimes_{2d}f_{2d}.

By Proposition 8.3 and Lemma 10.6, we have

(10.4) (S~g2d(2d)(kd))2=(S~H2d(2d)(kd))2Sf2d(2d)(2k2d)Sf2d(2d)(2k12d)=2d2k12k2d2k12d=kd,\begin{split}\left(\widetilde{S}_{g_{2d}}^{(2d)}\left(-\frac{k}{d}\right)\right)^{2}&=\left(\widetilde{S}_{H_{2d}}^{(2d)}\left(-\frac{k}{d}\right)\right)^{2}S_{f_{2d}}^{(2d)}\left(-\frac{2k}{2d}\right)S_{f_{2d}}^{(2d)}\left(-\frac{2k-1}{2d}\right)\\ &=-\frac{2d}{2k-1}\cdot\frac{2k}{2d}\cdot\frac{2k-1}{2d}\\ &=-\frac{k}{d},\end{split}

and therefore

S~g2d(2d)(kd)=kd.\widetilde{S}_{g_{2d}}^{(2d)}\left(-\frac{k}{d}\right)=\sqrt{-\frac{k}{d}}.

By Proposition 8.4, if we let dd\to\infty with kdt(0,1)\frac{k}{d}\to t\in(0,1), then

S~g2d(2d)(kd)t=S~𝐟𝐬23,12(t),\widetilde{S}_{g_{2d}}^{(2d)}\left(-\frac{k}{d}\right)\to\sqrt{-t}=\widetilde{S}_{{\bf fs}_{\frac{2}{3},\frac{1}{2}}}(-t),

where the last equality follows from [AH16]. This example is thus the finite analogue of (10.3) in the case α=23\alpha=\frac{2}{3}.

10.5. Finite free multiplicative Poisson’s law of small numbers

In [BV92, Lemma 7.2] it was shown that for λ0\lambda\geq 0 and β[0,1]\beta\in\mathbb{R}\setminus[0,1] there exists a measure Πλ,β(0)\Pi_{\lambda,\beta}\in\mathcal{M}(\mathbb{R}_{\geq 0}) with SS-transform given by

SΠλ,β(t)=exp(λt+β).S_{\Pi_{\lambda,\beta}}(t)=\exp\left(\frac{\lambda}{t+\beta}\right).

This measure can be understood as a free multiplicative Poisson’s law. The purpose of this section is to give a finite counterpart.

This measure arises as a limit of finite free multiplicative convolution powers of polynomials of the form

(xβ1β)(x1)d1=d[.β1dβ.],(x-\tfrac{\beta-1}{\beta})(x-1)^{d-1}=\mathcal{H}_{d}{\left[\genfrac{.}{.}{0.0pt}{1}{\beta-\frac{1}{d}}{\beta}\right]},

where the equality follows from a direct computation; see also [MMP24a, Eq. (60)].

Proposition 10.19.

Let λ0\lambda\geq 0 and β[0,1]\beta\in\mathbb{R}\setminus[0,1], and for each dd consider the polynomial

pd(x):=(xβ1β)(x1)d1.p_{d}(x)\mathrel{\mathop{\ordinarycolon}}=(x-\tfrac{\beta-1}{\beta})(x-1)^{d-1}.

Then

μpddn𝑤Πλ,βas d with ndλ.\mu\left\llbracket p_{d}^{\boxtimes_{d}n}\right\rrbracket\xrightarrow{w}\Pi_{\lambda,\beta}\qquad\text{as }d\to\infty\quad\text{ with }\frac{n}{d}\to\lambda.
Proof.

Recall that the coefficients of pdp_{d} are

𝖾~k(d)(pd)=(dβ1)k(dβ)k=dβkdβ,\widetilde{\mathsf{e}}_{k}^{(d)}(p_{d})=\frac{\left(d\beta-1\right)_{k}}{\left(d\beta\right)_{k}}=\frac{d\beta-k}{d\beta},

so its finite SS-transform is given by

Spd(d)(kd)=dβk+1dβk=1+1dβkfor k=1,,d,S_{p_{d}}^{(d)}\left(-\frac{k}{d}\right)=\frac{d\beta-k+1}{d\beta-k}=1+\frac{1}{d\beta-k}\qquad\text{for }k=1,\dots,d,

and the finite SS-transform of qd:=pddnq_{d}\mathrel{\mathop{\ordinarycolon}}=p_{d}^{\boxtimes_{d}n} is given by

Sqd(d)(kd)=(1+1dβk)n=(1+1d1βkd)dndfor k=1,,d.S_{q_{d}}^{(d)}\left(-\frac{k}{d}\right)=\left(1+\frac{1}{d\beta-k}\right)^{n}=\left(1+\frac{1}{d}\frac{1}{\beta-\frac{k}{d}}\right)^{d\frac{n}{d}}\qquad\text{for }k=1,\dots,d.

Then, if we let dd\to\infty with kdt\frac{k}{d}\to t and ndλ\frac{n}{d}\to\lambda then we obtain that

limdSqd(d)(kd)=exp(λβt)=SΠλ,β(t).\lim_{d\to\infty}S_{q_{d}}^{(d)}\left(-\frac{k}{d}\right)=\exp\left(\frac{\lambda}{\beta-t}\right)=S_{\Pi_{\lambda,\beta}}(-t).

The conclusion follows from Theorem 1.1. ∎
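The computation in this proof can be reproduced numerically. The sketch below (an illustration with the concrete parameters β=2\beta=2, λ=1\lambda=1; the helper name `poly_coeffs` is ours) expands pdp_{d} by convolution, confirms the coefficient formula for 𝖾~k(d)(pd)\widetilde{\mathsf{e}}_{k}^{(d)}(p_{d}), and checks that the finite SS-transform of qdq_{d} approaches exp(λ/(βt))\exp(\lambda/(\beta-t)).

```python
from fractions import Fraction
from math import comb, exp

def poly_coeffs(d, beta):
    """Coefficients [c_0,...,c_d] of p_d(x) = (x - (beta-1)/beta)(x-1)^{d-1},
    with p_d(x) = sum_j c_j x^{d-j}."""
    # (x-1)^{d-1} has x^{d-1-j} coefficient (-1)^j binom(d-1, j).
    base = [Fraction((-1) ** j * comb(d - 1, j)) for j in range(d)]
    a = Fraction(beta - 1, beta)
    out = [Fraction(0)] * (d + 1)
    for j, c in enumerate(base):        # multiply by (x - a)
        out[j] += c
        out[j + 1] -= a * c
    return out

d, beta, lam = 400, 2, 1
c = poly_coeffs(d, beta)
# Normalized coefficients: e~_k = (-1)^k c_k / binom(d,k) = (d*beta - k)/(d*beta).
for k in range(d + 1):
    assert (-1) ** k * c[k] / comb(d, k) == Fraction(d * beta - k, d * beta)

# S-transform of q_d = p_d^{box_d n} at -k/d, with n = lam*d and k/d -> t = 1/2:
n, k = lam * d, d // 2
s = (1 + Fraction(1, d * beta - k)) ** n
assert abs(float(s) - exp(lam / (beta - 0.5))) < 1e-2
```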

10.6. Finite max-convolution powers

In 2006, Ben Arous and Voiculescu [BV06] introduced the free analogue of max-convolution. Given two measures ν1,ν2()\nu_{1},\nu_{2}\in\mathcal{M}(\mathbb{R}), their free max-convolution, denoted by ν1\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStyleν2\nu_{1}\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}\nu_{2}, is the measure with cumulative distribution function given by

Fν1\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStyleν2(x):=max{Fν1(x)+Fν2(x)1,0}for all x.F_{\nu_{1}\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}\nu_{2}}(x)\mathrel{\mathop{\ordinarycolon}}=\max\{F_{\nu_{1}}(x)+F_{\nu_{2}}(x)-1,0\}\qquad\text{for all }x\in\mathbb{R}.

Similarly, given ν()\nu\in\mathcal{M}(\mathbb{R}) and t1t\geq 1, one can define the free max-convolution of ν\nu to the power tt as the unique measure ν\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStylet\nu^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}t} with cumulative distribution function given by

(10.5) Fν\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStylet(x):=max{tFν(x)(t1),0},t1.\displaystyle F_{\nu^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}t}}(x)\mathrel{\mathop{\ordinarycolon}}=\max\{tF_{\nu}(x)-(t-1),0\},\qquad t\geq 1.

This notion was introduced by Ueda, who used the Tucci–Haagerup–Möller limit theorem to relate it to free additive convolution powers in [Ued21, Theorem 1.1]: given t1t\geq 1 and μ(0)\mu\in\mathcal{M}(\mathbb{R}_{\geq 0}), one has that

(10.6) Φ(Dil1/t(μt))=Φ(μ)\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStylet.\Phi({\rm Dil}_{1/t}(\mu^{\boxplus t}))=\Phi(\mu)^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}t}.

The purpose of this section is to prove a finite free analogue of this relation and to show that it approximates (10.6) as the degree dd tends to \infty.

Definition 10.20.

Given p𝒫d()p\in\mathcal{P}_{d}(\mathbb{R}) and 1kd1\leq k\leq d, we define the finite max-convolution power dk\frac{d}{k} of pp as the polynomial p\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStyledk𝒫k()p^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}\frac{d}{k}}\in\mathcal{P}_{k}(\mathbb{R}) with roots given by

λj(p\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStyledk)=λj(p)for j=1,,k.\lambda_{j}\left(p^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}\frac{d}{k}}\right)=\lambda_{j}(p)\qquad\text{for }j=1,\dots,k.

It is straightforward from the definition of free max-convolution in (10.5) that

μp\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStyledk=(μp)\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStyledk.\mu\left\llbracket p^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}\frac{d}{k}}\right\rrbracket=\left(\mu\left\llbracket p\right\rrbracket\right)^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}\frac{d}{k}}.
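Concretely, with roots listed in decreasing order (the convention we assume for λj\lambda_{j}), the polynomial p\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStyledkp^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}\frac{d}{k}} simply keeps the kk largest roots of pp, and the CDF identity above is an elementary count. A small numerical sketch (an illustration, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 12, 5
t = d / k
roots = np.sort(rng.uniform(0, 10, size=d))[::-1]  # decreasing: lambda_1 >= ... >= lambda_d
top_k = roots[:k]                                  # roots of p^{max-power d/k}

def cdf(sample, x):
    return np.mean(sample <= x)

# F_{p^{max-power t}}(x) = max{ t F_p(x) - (t - 1), 0 }, cf. (10.5)
for x in np.linspace(-1, 11, 200):
    lhs = cdf(top_k, x)
    rhs = max(t * cdf(roots, x) - (t - 1), 0.0)
    assert abs(lhs - rhs) < 1e-12
```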

Then, we have the following.

Proposition 10.21.

Let p𝒫d(0)p\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}). Then

(10.7) Φk(k|dp)=(Φd(p))\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStyledkfor k=1,,d.\Phi_{k}(\partial_{k|d}\,p)=\left(\Phi_{d}(p)\right)^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}\frac{d}{k}}\qquad\text{for }k=1,\dots,d.
Proof.

The proof uses two main ingredients: Equation (2.12) that expresses the roots of Φd(p)\Phi_{d}(p) in terms of the coefficients of pp; and Lemma 3.2 that relates the coefficients of k|dp\partial_{k|d}\,p and pp.

Let us denote by rr the multiplicity of 0 as a root of pp. Then, the multiplicity of 0 as a root of k|dp\partial_{k|d}\,p is max{k(dr),0}\max\{k-(d-r),0\}. Using Lemma 3.2 and Equation (2.12) twice, we compute

λj(Φk(k|dp))=𝖾~j(k)(k|dp)𝖾~j1(k)(k|dp)=𝖾~j(d)(p)𝖾~j1(d)(p)=λj(Φd(p))for 1jmin{dr,k},\lambda_{j}\left(\Phi_{k}(\partial_{k|d}\,p)\right)=\frac{\widetilde{\mathsf{e}}_{j}^{(k)}(\partial_{k|d}\,p)}{\widetilde{\mathsf{e}}_{j-1}^{(k)}(\partial_{k|d}\,p)}=\frac{\widetilde{\mathsf{e}}_{j}^{(d)}(p)}{\widetilde{\mathsf{e}}_{j-1}^{(d)}(p)}=\lambda_{j}\left(\Phi_{d}(p)\right)\qquad\text{for }1\leq j\leq\min\{d-r,k\},

and in the case where kdrk\geq d-r we can also check that

λj(Φk(k|dp))=0=λj(Φd(p))for dr+1jk,\displaystyle\lambda_{j}\left(\Phi_{k}(\partial_{k|d}\,p)\right)=0=\lambda_{j}\left(\Phi_{d}(p)\right)\qquad\text{for }d-r+1\leq j\leq k,

as desired. ∎
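The mechanism can be illustrated on a concrete polynomial. In the sketch below (an illustration, not part of the proof; all helper names are ours), k|d\partial_{k|d} is implemented as the (dk)(d-k)-fold derivative rescaled to be monic, which matches the coefficient identity of Lemma 3.2, and Φd\Phi_{d} via the root formula λj=𝖾~j/𝖾~j1\lambda_{j}=\widetilde{\mathsf{e}}_{j}/\widetilde{\mathsf{e}}_{j-1} of Equation (2.12); both sides of (10.7) then produce the same root multiset.

```python
from fractions import Fraction
from math import comb

def coeffs_from_roots(roots):
    """[c_0,...,c_d] with p(x) = sum_j c_j x^{d-j}, p monic."""
    c = [Fraction(1)]
    for r in roots:
        out = [Fraction(0)] * (len(c) + 1)
        for j, a in enumerate(c):
            out[j] += a
            out[j + 1] -= Fraction(r) * a
        c = out
    return c

def partial(c, d, k):
    """Finite differentiation: the (d-k)-fold derivative, rescaled to be monic."""
    deg = d
    for _ in range(d - k):
        c = [c[j] * (deg - j) for j in range(deg)]
        deg -= 1
    return [a / c[0] for a in c]

def phi_roots(c, deg):
    """Roots of Phi_deg(p): lambda_j = e~_j / e~_{j-1}, cf. Equation (2.12)."""
    et = [(-1) ** j * c[j] / comb(deg, j) for j in range(deg + 1)]
    return [et[j] / et[j - 1] for j in range(1, deg + 1)]

d, k = 4, 2
p = coeffs_from_roots([1, 2, 3, 4])
lhs = phi_roots(partial(p, d, k), k)              # roots of Phi_k of the derivative
rhs = sorted(phi_roots(p, d), reverse=True)[:k]   # k largest roots of Phi_d(p)
assert lhs == rhs                                 # (10.7) as a root-multiset identity
```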

From Equation (10.7) and Theorem 1.3 it readily follows that in the limit we obtain an approximation of Equation (10.6).

Corollary 10.22.

Fix t1t\geq 1 and a measure μ(0)\mu\in\mathcal{M}(\mathbb{R}_{\geq 0}). Let (dj)j(d_{j})_{j\in\mathbb{N}}\subset\mathbb{N} and (kj)j(k_{j})_{j\in\mathbb{N}}\subset\mathbb{N} be a diagonal sequence with ratio limit 1/t1/t. If pj𝒫dj(0)p_{j}\in\mathcal{P}_{d_{j}}(\mathbb{R}_{\geq 0}) is a sequence of polynomials converging to μ\mu, then

μΦkj(kj|djpj)𝑤Φ(μ)\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStylet as j.\mu\left\llbracket\Phi_{k_{j}}(\partial_{k_{j}|d_{j}}\,p_{j})\right\rrbracket\xrightarrow{w}\Phi(\mu)^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}t}\qquad\text{ as }j\to\infty.

We now look at an example related to free stable laws and their finite counterparts.

Example 10.23.

From [Ued21, Example 4.2] we know that Φ(𝐟𝐬12,1)\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStylet\Phi\left({\bf fs}_{\frac{1}{2},1}\right)^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}t} coincides with the free Fréchet distribution (or Pareto distribution) with index t1t\geq 1.

In the finite free framework, recall from Example 10.17 that these polynomials correspond to Φk(k|dfd)\Phi_{k}\left(\partial_{k|d}\,f_{d}\right), where fdf_{d} was defined in (10.2) as the reversed polynomial of the standard Laguerre polynomial. Therefore, if we let dd\to\infty and dkt1\frac{d}{k}\to t\geq 1, Corollary 10.22 yields

(10.8) μΦk(k|dfd)𝑤Φ(𝐟𝐬12,1)\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStylet.\mu\left\llbracket\Phi_{k}\left(\partial_{k|d}\,f_{d}\right)\right\rrbracket\xrightarrow{w}\Phi\left({\bf fs}_{\frac{1}{2},1}\right)^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}t}.

To finish this section, we study the finite analogue of the map Θ\Theta on positive measures that was defined in [HU21]. If we denote

Θ(μ):=Φ(μ𝐌𝐏11)=Φ(μ𝐟𝐬12,1)for μ(0),\Theta(\mu)\mathrel{\mathop{\ordinarycolon}}=\Phi(\mu\boxtimes{\bf MP}_{1}^{\langle-1\rangle})=\Phi\left(\mu\boxtimes{\bf fs}_{\frac{1}{2},1}\right)\qquad\text{for }\mu\in\mathcal{M}(\mathbb{R}_{\geq 0}),

then it holds that

(10.9) Θ(μt)=Θ(μ)\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStyletfor all t1.\Theta(\mu^{\boxplus t})=\Theta(\mu)^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}t}\qquad\text{for all }t\geq 1.

We can define the finite free analogue of the map Θ\Theta as

Θd(p):=Φd(pdfd)for p𝒫d(0).\Theta_{d}(p)\mathrel{\mathop{\ordinarycolon}}=\Phi_{d}\left(p\boxtimes_{d}f_{d}\right)\qquad\text{for }p\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}).

To obtain the finite analogue of (10.9), we first compute the derivatives of fdf_{d}.

Lemma 10.24.

For 1kd1\leq k\leq d, we get

Dilkd(k|dfd)=fk.{\rm Dil}_{\frac{k}{d}}\left(\partial_{k|d}\,f_{d}\right)=f_{k}.
Proof.

For 1jk1\leq j\leq k, we compare the jj-th coefficients of both polynomials. One can see that

𝖾~j(k)(fk)=kjj!.\widetilde{\mathsf{e}}_{j}^{(k)}(f_{k})=\frac{k^{j}}{j!}.

On the other hand, we have

𝖾~j(k)(Dilkd(k|dfd))\displaystyle\widetilde{\mathsf{e}}_{j}^{(k)}\left({\rm Dil}_{\frac{k}{d}}\left(\partial_{k|d}\,f_{d}\right)\right) =(kd)j𝖾~j(k)(k|dfd)\displaystyle=\left(\frac{k}{d}\right)^{j}\widetilde{\mathsf{e}}_{j}^{(k)}(\partial_{k|d}\,f_{d})
=(kd)j𝖾~j(d)(fd)\displaystyle=\left(\frac{k}{d}\right)^{j}\widetilde{\mathsf{e}}_{j}^{(d)}(f_{d}) (by Lemma 3.2)
=(kd)jdjj!=kjj!\displaystyle=\left(\frac{k}{d}\right)^{j}\cdot\frac{d^{j}}{j!}=\frac{k^{j}}{j!}

as desired. ∎
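The lemma can also be verified directly in exact arithmetic. In the sketch below (an illustration, not part of the proof; the helper names are ours), fdf_{d} is reconstructed from its normalized coefficients 𝖾~j(d)(fd)=dj/j!\widetilde{\mathsf{e}}_{j}^{(d)}(f_{d})=d^{j}/j!, and k|d\partial_{k|d} is implemented as the (dk)(d-k)-fold derivative rescaled to be monic, matching the coefficient identity of Lemma 3.2.

```python
from fractions import Fraction
from math import comb, factorial

def f_coeffs(d):
    """Coefficients of f_d, reconstructed from e~_j^{(d)}(f_d) = d^j/j!:
    f_d(x) = sum_j (-1)^j binom(d,j) e~_j x^{d-j}."""
    return [Fraction((-1) ** j * comb(d, j) * d ** j, factorial(j))
            for j in range(d + 1)]

def partial(c, d, k):
    """Finite differentiation: the (d-k)-fold derivative, rescaled to be monic."""
    deg = d
    for _ in range(d - k):
        c = [c[j] * (deg - j) for j in range(deg)]
        deg -= 1
    return [a / c[0] for a in c]

def dilate(c, s):
    """Dil_s scales every root by s: the x^{deg-j} coefficient gains s^j."""
    return [a * Fraction(s) ** j for j, a in enumerate(c)]

# Dil_{k/d}(partial_{k|d} f_d) = f_k, checked coefficient by coefficient.
for d in range(2, 8):
    for k in range(1, d + 1):
        assert dilate(partial(f_coeffs(d), d, k), Fraction(k, d)) == f_coeffs(k)
```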

Using this, we infer the following formula.

Proposition 10.25.

Let p𝒫d(0)p\in\mathcal{P}_{d}(\mathbb{R}_{\geq 0}). Then

Θk(Dildkk|dp)=(Θd(p))\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStyledkfor k=1,,d.\Theta_{k}\left({\rm Dil}_{\frac{d}{k}}\partial_{k|d}\,p\right)=\left(\Theta_{d}(p)\right)^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}\frac{d}{k}}\qquad\text{for }k=1,\dots,d.
Proof.

For 1kd1\leq k\leq d, we have

Θd(p)\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStyledk\displaystyle\Theta_{d}(p)^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}\frac{d}{k}} =(Φd(pdfd))\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStyledk\displaystyle=(\Phi_{d}(p\boxtimes_{d}f_{d}))^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}\frac{d}{k}}
=Φk(k|d(pdfd))\displaystyle=\Phi_{k}(\partial_{k|d}\,(p\boxtimes_{d}f_{d})) (by (10.7))
=Φk((k|dp)k(k|dfd))\displaystyle=\Phi_{k}((\partial_{k|d}\,p)\boxtimes_{k}(\partial_{k|d}\,f_{d})) (by Remark 3.3)
=Φk((Dildkk|dp)k(Dilkdk|dfd))\displaystyle=\Phi_{k}\left(\left({\rm Dil}_{\frac{d}{k}}\partial_{k|d}\,p\right)\boxtimes_{k}\left({\rm Dil}_{\frac{k}{d}}\partial_{k|d}\,f_{d}\right)\right)
=Φk((Dildkk|dp)fk)\displaystyle=\Phi_{k}\left(\left({\rm Dil}_{\frac{d}{k}}\partial_{k|d}\,p\right)\boxtimes f_{k}\right) (by Lemma 10.24)
=Θk(Dildkk|dp).\displaystyle=\Theta_{k}\left({\rm Dil}_{\frac{d}{k}}\partial_{k|d}\,p\right).

Thanks to Proposition 10.25, the following corollary can be seen as an approximation of its free counterpart introduced in Equation (10.9).

Corollary 10.26.

Fix t1t\geq 1 and a measure μ(0)\mu\in\mathcal{M}(\mathbb{R}_{\geq 0}). Let (dj)j(d_{j})_{j\in\mathbb{N}}\subset\mathbb{N} and (kj)j(k_{j})_{j\in\mathbb{N}}\subset\mathbb{N} be a diagonal sequence with ratio limit 1/t1/t. If pj𝒫dj(0)p_{j}\in\mathcal{P}_{d_{j}}(\mathbb{R}_{\geq 0}) is a sequence of polynomials converging to μ\mu, then

μΘkj(Dildjkjkj|djpj)𝑤Θ(μ)\ThisStyle\ensurestackMath\stackinsetc.05\LMptc.2\LMpt\SavedStyle\SavedStylet as j.\mu\left\llbracket\Theta_{k_{j}}({\rm Dil}_{\frac{d_{j}}{k_{j}}}\partial_{k_{j}|d_{j}}\,p_{j})\right\rrbracket\xrightarrow{w}\Theta(\mu)^{\mathop{\ThisStyle{\ensurestackMath{\stackinset{c}{-.05\LMpt}{c}{-.2\LMpt}{\SavedStyle\lor}{\SavedStyle\square}}}}t}\qquad\text{ as }j\to\infty.

Acknowledgements

The authors thank Takahiro Hasebe and Jorge Garza-Vargas for fruitful discussions in relation to this project. We thank Andrew Campbell for useful comments that helped improve the presentation of the paper.

The authors gratefully acknowledge the financial support of the grant CONAHCYT A1-S-9764 and the JSPS Open Partnership Joint Research Projects grant no. JPJSBP120209921. We greatly appreciate the hospitality of Hokkaido University of Education during June 2023, where this project originated. K.F. was supported by the Hokkaido University Ambitious Doctoral Fellowship (Information Science and AI) and the JSPS Research Fellowship for Young Scientists PD (KAKENHI Grant Number 24KJ1318). D.P. was partially supported by the AMS-Simons Travel Grant, and by the Simons Foundation via Michael Anshelevich’s grant. Y.U. is supported by JSPS Grant-in-Aid for Young Scientists 22K13925 and for Scientific Research (C) 23K03133.

References

  • [AFU24] Octavio Arizmendi, Katsunori Fujie and Yuki Ueda “New Combinatorial Identity for the Set of Partitions and Limit Theorems in Finite Free Probability Theory” In International Mathematics Research Notices, 2024, pp. 10450–10484 DOI: 10.1093/imrn/rnae089
  • [AGP23] Octavio Arizmendi, Jorge Garza-Vargas and Daniel Perales “Finite free cumulants: Multiplicative convolutions, genus expansion and infinitesimal distributions” In Transactions of the American Mathematical Society 376.06, 2023, pp. 4383–4420
  • [AH13] Octavio Arizmendi and Takahiro Hasebe “On a class of explicit Cauchy–Stieltjes transforms related to monotone stable and free Poisson laws” In Bernoulli 19(5B), 2013, pp. 2750–2767 DOI: https://doi.org/10.3150/12-BEJ473
  • [AH16] Octavio Arizmendi and Takahiro Hasebe “Classical scale mixtures of Boolean stable laws” In Transactions of the American Mathematical Society 368.7, 2016, pp. 4873–4905
  • [AP18] Octavio Arizmendi and Daniel Perales “Cumulants for finite free convolution” In Journal of Combinatorial Theory, Series A 155 Elsevier, 2018, pp. 244–266
  • [AP09] Octavio Arizmendi E and Victor Pérez-Abreu “The S-transform of symmetric probability measures with unbounded supports” In Proceedings of the American Mathematical Society 137.9, 2009, pp. 3057–3066
  • [Bel03] Serban T Belinschi “The Atoms of the Free Multiplicative Convolution of Two Probability Distributions” In Integral Equations and Operator Theory 46 Springer, 2003, pp. 377–386
  • [BN08] Serban T. Belinschi and Alexandru Nica “On a remarkable semigroup of homomorphisms with respect to free multiplicative convolution” In Indiana University Mathematics Journal 57.4, 2008, pp. 1679–1713 DOI: 10.1512/iumj.2008.57.3285
  • [BV06] G. Ben Arous and Dan V. Voiculescu “Free extreme values” In Annals of Probability 34.5 Institute of Mathematical Statistics, 2006, pp. 2037–2059
  • [BP99] Hari Bercovici and Vittorino Pata “Stable laws and domains of attraction in free probability theory” With an appendix by Philippe Biane In Annals of Mathematics. Second Series 149.3, 1999, pp. 1023–1060 DOI: 10.2307/121080
  • [BV93] Hari Bercovici and Dan Voiculescu “Free convolution of measures with unbounded support” In Indiana University Mathematics Journal 42.3, 1993, pp. 733–773 DOI: 10.1512/iumj.1993.42.42033
  • [BV92] Hari Bercovici and Dan V. Voiculescu “Lévy-Hinčin type theorems for multiplicative and additive free convolution” In Pacific journal of mathematics 153.2 Mathematical Sciences Publishers, 1992, pp. 217–248
  • [Bor19] Charles Bordenave “Lecture notes on random matrix theory” In https://www.math.univ-toulouse.fr/ bordenave/IMPA-RMT.pdf, 2019
  • [COR24] Andrew Campbell, Sean O’Rourke and David Renfrew “Universality for roots of derivatives of entire functions via finite free probability” In arXiv preprint arXiv:2410.06403, 2024
  • [Dur19] Rick Durrett “Probability: theory and examples” Cambridge university press, 2019
  • [Dyk07] Kenneth J Dykema “Multilinear function series and transforms in free probability theory” In Advances in Mathematics 208.1 Elsevier, 2007, pp. 351–407
  • [FK52] Bent Fuglede and Richard V Kadison “Determinant theory in finite factors” In Annals of Mathematics 55.3 JSTOR, 1952, pp. 520–530
  • [FU23] Katsunori Fujie and Yuki Ueda “Law of large numbers for roots of finite free multiplicative convolution of polynomials” In SIGMA. Symmetry, Integrability and Geometry: Methods and Applications 19, 2023, 004
  • [HM13] Uffe Haagerup and Sören Möller “The law of large numbers for the free multiplicative convolution” In Operator Algebra and Dynamics. Springer Proc. Math. Stat. 58.2 Springer, Heidelberg, 2013, pp. 157–186
  • [HS07] Uffe Haagerup and Hanne Schultz “Brown measures of unbounded operators affiliated with a finite von Neumann algebra” In Mathematica Scandinavica 100.2, 2007, pp. 209–263
  • [HSW20] Takahiro Hasebe, Thomas Simon and Min Wang “Some properties of the free stable distributions” In Annales de l’Institut Henri Poincaré Probabilités et Statistiques 56.1, 2020, pp. 296–325 DOI: 10.1214/19-AIHP962
  • [HU21] Takahiro Hasebe and Yuki Ueda “Homomorphisms relative to additive convolutions and max-convolutions: Free, boolean and classical cases” In Proceedings of the American mathematical society 149.11, 2021, pp. 4799–4814
  • [HK21] Jeremy Hoskins and Zakhar Kabluchko “Dynamics of zeroes under repeated differentiation” In Experimental Mathematics Taylor & Francis, 2021, pp. 1–27
  • [Kab21] Zakhar Kabluchko “Repeated differentiation and free unitary Poisson process” In arXiv preprint arXiv:2112.14729, 2021
  • [Kab22] Zakhar Kabluchko “Lee-Yang zeroes of the Curie-Weiss ferromagnet, unitary Hermite polynomials, and the backward heat flow” In arXiv preprint arXiv:2203.05533, 2022
  • [Mar21] Adam W Marcus “Polynomial convolutions and (finite) free probability” In arXiv preprint arXiv:2108.07054, 2021
  • [MSS22] Adam W Marcus, Daniel A Spielman and Nikhil Srivastava “Finite free convolutions of polynomials” In Probability Theory and Related Fields 182.3 Springer, 2022, pp. 807–848
  • [MMP24] Andrei Martinez-Finkelshtein, Rafael Morales and Daniel Perales “Zeros of generalized hypergeometric polynomials via finite free convolution. Applications to multiple orthogonality” In arXiv preprint arXiv:2404.11479, 2024
  • [MMP24a] Andrei Martínez-Finkelshtein, Rafael Morales and Daniel Perales “Real Roots of Hypergeometric Polynomials via Finite Free Convolution” In International Mathematics Research Notices, 2024 DOI: 10.1093/imrn/rnae120
  • [MS17] James A Mingo and Roland Speicher “Free probability and random matrices” Springer, 2017
  • [MSV79] D. S. Moak, E. B. Saff and R. S. Varga “On the zeros of Jacobi polynomials Pn(αn,βn)(x)P_{n}^{(\alpha_{n},\beta_{n})}(x) In Transactions of the American Mathematical Society 249.1, 1979, pp. 159–162 DOI: 10.2307/1998916
  • [NS96] Alexandru Nica and Roland Speicher “On the multiplication of free NN-tuples of noncommutative random variables” In American Journal of Mathematics JSTOR, 1996, pp. 799–837
  • [NS06] Alexandru Nica and Roland Speicher “Lectures on the combinatorics of free probability” Cambridge University Press, 2006
  • [SY01] N Saitoh and H Yoshida “The infinite divisibility and orthogonal polynomials with a constant recursion formula in free probability theory” In Probability and Mathematical Statistics 21.1, 2001, pp. 159–170
  • [ST22] Dimitri Shlyakhtenko and Terence Tao “Fractional free convolution powers” In Indiana University Mathematics Journal 71.6, 2022
  • [SW97] R. Speicher and R. Woroudi “Boolean convolution” In In Free Probability Theory (Waterloo,ON,1995). Fields Inst. Commun. 12, 1997, pp. 267–279
  • [Spe94] Roland Speicher “Multiplicative functions on the lattice of non-crossing partitions and free convolution” In Mathematische Annalen 298.1 Springer, 1994, pp. 611–628
  • [Ste19] Stefan Steinerberger “A nonlocal transport equation describing roots of polynomials under differentiation” In Proceedings of the American Mathematical Society 147.11, 2019, pp. 4733–4744
  • [Ste21] Stefan Steinerberger “Free convolution powers via roots of polynomials” In Experimental Mathematics Taylor & Francis, 2021, pp. 1–6
  • [Sze22] Gábor Szegö “Bemerkungen zu einem Satz von JH Grace über die Wurzeln algebraischer Gleichungen” In Mathematische Zeitschrift 13.1 Springer, 1922, pp. 28–55
  • [Sze75] Gábor Szegö “Orthogonal Polynomials”, 4th edn., American Mathematical Society Colloquium Publications, vol. XXIII, Providence, 1975
  • [Tuc10] Gabriel H. Tucci “Limits laws for geometric means of free random variables” In Indiana University Mathematics Journal 59.1, 2010, pp. 1–13 DOI: 10.1512/iumj.2010.59.3775
  • [Ued21] Yuki Ueda “Max-convolution semigroups and extreme values in limit theorems for the free multiplicative convolution” In Bernoulli 27 Bernoulli, 2021, pp. 502–531
  • [Voi87] Dan Voiculescu “Multiplication of certain non-commuting random variables” In Journal of Operator Theory JSTOR, 1987, pp. 223–235
  • [VDN92] Dan V Voiculescu, Ken J Dykema and Alexandru Nica “Free random variables” American Mathematical Society, 1992
  • [Wal22] Joseph L Walsh “On the location of the roots of certain types of polynomials” In Transactions of the American Mathematical Society 24.3 JSTOR, 1922, pp. 163–180
  • [Yos20] Hiroaki Yoshida “Remarks on a Free Analogue of the Beta Prime Distribution” In Journal of Theoretical Probability 33 Springer, 2020, pp. 1363–1400