
Asymptotic free independence and entry permutations for Gaussian random matrices.
Part II: infinitesimal freeness

Mihai Popa Department of Mathematics
University of Texas at San Antonio
One UTSA Circle San Antonio
Texas 78249, USA
and “Simion Stoilow” Institute of Mathematics of the Romanian Academy
P.O. Box 1-764
014700 Bucharest, Romania
[email protected]
Kamil Szpojankowski Wydział Matematyki i Nauk Informacyjnych
Politechnika Warszawska
ul. Koszykowa 75
00-662 Warszawa, Poland.
[email protected]
 and  Pei-Lun Tseng Department of Mathematics
New York University Abu Dhabi
Saadiyat Marina District, Abu Dhabi, United Arab Emirates
[email protected]
Abstract.

We study asymptotic infinitesimal distributions of Gaussian Unitary Ensembles with permuted entries. We show that, for a uniformly random permutation of entries, the permuted GUE matrix asymptotically has a null infinitesimal distribution. Moreover, we show that, asymptotically, different permutations of the same GUE matrix are infinitesimally free. Besides this, we study a particular example of an entry permutation, the transpose, and we show that while a GUE matrix is asymptotically free from its transpose, it is not asymptotically infinitesimally free from it.

KSz: This research was funded in part by National Science Centre, Poland WEAVE-UNISONO grant BOOMER 2022/04/Y/ST1/00008.
For the purpose of Open Access, the authors have applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.

1. Introduction

Free probability, introduced by D. Voiculescu [28, 30], is a non-commutative analogue of classical probability theory, where classical random variables are replaced by non-commutative operators. This theory was introduced in connection with some fundamental problems from the theory of operator algebras (such as the Free Group Factors isomorphism problem); however, it was noticed very quickly (see [29]) that the novel theory has deep connections with random matrix theory. Since then, the connections between free probability and random matrices have become a very active field with many advances in recent years (see [15, 3, 4]). A rough explanation of the phenomenon behind this is as follows: big unitarily invariant and independent random matrices are asymptotically free. By this we mean that for two sequences of random matrices the quantity $\tfrac{1}{N}E\left(\mathrm{Tr}(P(A_{N},B_{N}))\right)$, for a non-commutative polynomial $P$, converges, as the matrix size $N$ goes to infinity, to $\varphi\left(P(a,b)\right)$, where $a,b$ are free random variables with respect to $\varphi$, and $a,b$ have distributions equal to the weak limits of the expected empirical eigenvalue distributions of $A_{N}$ and $B_{N}$, respectively.

In recent years some attention has been given to the $1/N$ correction in the convergence above (see e.g. [25]), i.e. to look not only at $\varphi$ but also to consider another functional $\varphi^{\prime}$ which satisfies

(1) \frac{1}{N}E\left(\mathrm{Tr}(P(A_{N},B_{N}))\right)=\varphi(P(a,b))+\frac{1}{N}\varphi^{\prime}(P(a,b))+o\left(\frac{1}{N}\right)\ \text{ as }N\to\infty.
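To make the expansion (1) concrete, the following is a minimal numerical sketch (ours, not part of the paper; the helper names gue and mean_trace_power are illustrative). For a single GUE matrix with entry variance $1/N$ the standard genus expansion gives $E(\mathrm{tr}(G_{N}^{4}))=2+N^{-2}$, so the coefficient of $1/N$ vanishes; the Monte Carlo estimate below is consistent with this absence of a first-order correction.

import numpy as np

def gue(N, rng):
    # Sample an N x N GUE matrix: Hermitian, with independent Gaussian entries,
    # complex off the diagonal and real on it, each of variance 1/N.
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    return (A + A.conj().T) / (2.0 * np.sqrt(N))

def mean_trace_power(N, p, samples, rng):
    # Monte Carlo estimate of E[tr(G_N^p)], with tr the normalized trace.
    total = 0.0
    for _ in range(samples):
        G = gue(N, rng)
        total += np.trace(np.linalg.matrix_power(G, p)).real / N
    return total / samples

rng = np.random.default_rng(0)
for N in (20, 40, 80):
    # Compare with the genus expansion 2 + 1/N^2 (no 1/N term).
    print(N, mean_trace_power(N, 4, 2000, rng), 2 + 1.0 / N**2)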

Of special interest from the non-commutative probability point of view is the case when $a,b$ are infinitesimally free with respect to the pair of functionals $(\varphi,\varphi^{\prime})$ (we state the precise definition of infinitesimal freeness in the next section). This notion, in the framework of free probability of type B, was introduced in [6], and its interpretation in terms of infinitesimal freeness was found in [5]; see also [11] for combinatorial developments related to infinitesimal freeness. As mentioned above, asymptotic freeness connects directly to random matrix theory, and several classes of random matrices have been proved to be asymptotically infinitesimally free [17, 8, 23]; many related properties have been discovered [11, 26, 27, 7, 10, 12].

Another recent development in random matrix theory is that freeness also emerges when one looks at different entry permutations of a given random matrix. In order to explain this phenomenon let us fix some notation. For any $N\geq 1$ let $A_{N}=(a_{i,j})$ be an $N\times N$ matrix and $\sigma_{N}:[N]^{2}\to[N]^{2}$ be a bijection; that is, $\sigma_{N}$ is a permutation on $[N]^{2}:=\{1,\dots,N\}^{2}$. By $A_{N}^{\sigma_{N}}$ we denote the permuted matrix, that is, we have $[A_{N}^{\sigma_{N}}]_{i,j}=a_{\sigma_{N}(i,j)}$. Among permutations there are many interesting mappings such as partial transposes, which are of interest in quantum information theory [1, 2, 13], and the mixed map in quantum physics [9, 16]. The connections between matrix entry permutations and free probability were also explored in [18, 19, 20, 24, 22, 21]. In [24] the author showed that for a given sequence of Gaussian random matrices $(G_{N})_{N}$, and two sequences of permutations of these matrices $\left(G^{\sigma_{N}}_{N}\right)_{N}$ and $\left(G^{\tau_{N}}_{N}\right)_{N}$, the permuted matrices are asymptotically (as $N\to\infty$) circular and asymptotically free, whenever the pair of permutation sequences $(\sigma_{N})_{N}$ and $(\tau_{N})_{N}$ satisfies certain conditions. Moreover, following these developments, in [22] the authors proved that such conditions occur with probability one, which means that $G^{\sigma_{N}}_{N}$ and $G^{\tau_{N}}_{N}$ are asymptotically circular and asymptotically free for almost all pairs of independent permutation sequences $(\sigma_{N})_{N}$ and $(\tau_{N})_{N}$.

In the present paper we show that the framework described above gives not only asymptotic freeness but also asymptotic infinitesimal freeness. More precisely, we show that:

  • Asymptotically, the infinitesimal distribution of a randomly permuted (with a uniformly chosen permutation of entries) growing GUE matrix is zero.

  • Independent permutations of a sequence of growing GUE matrices are asymptotically infinitesimally free.

  • A GUE matrix and its transpose are not asymptotically infinitesimally free, even though they are asymptotically free [18]; nevertheless, we can explicitly compute their joint infinitesimal distribution.

Moreover, we show that the phenomenon described above does not hold for every sequence of matrix permutations: the sequence of GUE matrices is not asymptotically infinitesimally free from the sequence of its transposes, although asymptotic freeness holds, as was shown in [18].

The paper is organized as follows. We review the framework and properties of infinitesimal freeness in Section 2. In addition, some notation and a basic lemma on permuted Gaussian matrices are also included in Section 2. The infinitesimal distribution of the generic permuted Gaussian matrix is considered in Section 3.

In Section 4, we consider pairs of independent sequences of random permutations $(\sigma_{N})_{N}$ and $(\tau_{N})_{N}$, such that $\sigma_{N}$ and $\tau_{N}$ are uniformly distributed random permutations from $\mathcal{S}([N]^{2})$ for each $N$. We discuss the joint infinitesimal distribution of $G_{N},G_{N}^{\sigma_{N}},$ and $G_{N}^{\tau_{N}}$. From [22] and [24] one can deduce that almost surely $G_{N}^{\sigma_{N}}$ and $G_{N}^{\tau_{N}}$ are asymptotically circular and asymptotically free. Here we show that they have a zero infinitesimal distribution. Moreover, we prove that $\{G_{N},G_{N}^{\sigma_{N}},G_{N}^{\tau_{N}}\}$ are asymptotically infinitesimally free.

Recall that a Gaussian random matrix $G_{N}$ and its transpose $G_{N}^{\top}$ are asymptotically free [18]. However, $G_{N}$ and $G_{N}^{\top}$ are not asymptotically infinitesimally free. Indeed, we find the asymptotic joint infinitesimal law of $G_{N}$ and $G_{N}^{\top}$ in Section 5. More precisely, we show that the asymptotic values (as $N\rightarrow\infty$) of the infinitesimal free joint cumulants of $G_{N}$ and $G_{N}^{\top}$ are (here each $\varepsilon_{j}$ is either the identity or the matrix transpose):

\lim_{N\rightarrow\infty}\kappa^{\prime}_{p}(G_{N}^{\varepsilon_{1}},G_{N}^{\varepsilon_{2}},\dots,G_{N}^{\varepsilon_{p}})=\left\{\begin{array}{ll}1&\text{ if }p=2m,\ \varepsilon_{1}\neq\varepsilon_{m+1},\ \varepsilon_{m}\neq\varepsilon_{2m},\\ &\text{ and }\varepsilon_{s}\neq\varepsilon_{2m+1-s}\text{ for }s=2,\dots,m-1,\\ 0&\text{ otherwise.}\end{array}\right.

This shows that $G_{N}$ and $G_{N}^{\top}$ are not asymptotically infinitesimally free, but have very regular joint infinitesimal free cumulants.
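As a quick consistency check of the case $p=2$ (our remark): the off-diagonal entries of $G_{N}$ are complex Gaussian with $E(g_{i,j}^{2})=0$, while the diagonal ones are real with $E(g_{i,i}^{2})=\frac{1}{N}$ (see Definition 2.4 below), so

E\circ\mathrm{tr}(G_{N}G_{N}^{\top})=\frac{1}{N}\sum_{i,j=1}^{N}E(g_{i,j}^{2})=\frac{1}{N},

and since the limiting mixed moment of order two vanishes by asymptotic freeness, $N\cdot E\circ\mathrm{tr}(G_{N}G_{N}^{\top})\to 1$, in agreement with the value $\kappa^{\prime}_{2}(G_{N},G_{N}^{\top})\to 1$ given by the formula above.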

2. Preliminaries

In this section, we first introduce the framework of infinitesimal freeness, then we review the notion of Gaussian matrices and establish the notation that we use for studying permuted Gaussian matrices.

2.1. Infinitesimal Free Probability

Let us begin by recalling some notions from free probability theory. We say that $(\mathcal{A},\varphi)$ is a non-commutative probability space (ncps for short) whenever $\mathcal{A}$ is a unital $*$-algebra over $\mathbb{C}$ and $\varphi:\mathcal{A}\to\mathbb{C}$ is a linear functional such that $\varphi(1)=1$ and $\varphi(a^{*}a)\geq 0$ for all $a\in\mathcal{A}$. We say that unital subalgebras $\mathcal{A}_{1},\dots,\mathcal{A}_{s}$ are free if for all $a_{1},\dots,a_{n}\in\mathcal{A}$ with $\varphi(a_{j})=0$ for each $j=1,\dots,n$, $a_{j}\in\mathcal{A}_{i_{j}}$, and $i_{1}\neq i_{2}\neq\cdots\neq i_{n-1}\neq i_{n}$, we have

\varphi(a_{1}\cdots a_{n})=0.

If $(\mathcal{A},\varphi)$ is a ncps with an additional linear functional $\varphi^{\prime}:\mathcal{A}\to\mathbb{C}$ such that $\varphi^{\prime}(1)=0$ and $\varphi^{\prime}(a^{*})=\overline{\varphi^{\prime}(a)}$ for all $a\in\mathcal{A}$, then we call the triple $(\mathcal{A},\varphi,\varphi^{\prime})$ an infinitesimal probability space.

The natural framework of an infinitesimal probability space for algebras of random matrices was considered in [5, 25].

Denote by $\mathbb{C}\langle X_{1},\dots,X_{k}\rangle$ the complex unital algebra of polynomials in $k$ non-commuting indeterminates. Assume that $(A_{N}^{(1)},\dots,A_{N}^{(k)})_{N}$ is a sequence of $k$-tuples of random matrices such that each of $A_{N}^{(1)},\dots,A_{N}^{(k)}$ is an $N\times N$ random matrix, and consider the sequence of linear maps $(\varphi_{N})_{N}$ on $\mathbb{C}\langle X_{1},\dots,X_{k}\rangle$ defined by

\varphi_{N}(P)=\frac{1}{N}(E\circ\mathrm{Tr})\left(P(A_{N}^{(1)},\dots,A_{N}^{(k)})\right).

We say that the sequence $(A_{N}^{(1)},\dots,A_{N}^{(k)})_{N}$ has an asymptotic distribution if the limit $\varphi(P):=\lim_{N\to\infty}\varphi_{N}(P)$ exists for all $P\in\mathbb{C}\langle X_{1},\dots,X_{k}\rangle$. Furthermore, if

\varphi^{\prime}(P):=\lim_{N\to\infty}N[\varphi_{N}(P)-\varphi(P)]

exists for all $P$, then $\{A_{N}^{(1)},\dots,A_{N}^{(k)}\}$ is said to have an asymptotic infinitesimal distribution. We say that $\{A_{N}^{(1)},\dots,A_{N}^{(k)}\}$ are asymptotically infinitesimally free when, asymptotically, their joint moments with respect to the pair of functionals $(\varphi,\varphi^{\prime})$ are calculated according to the rule defined below.

Definition 2.1.

Suppose that $(\mathcal{A},\varphi,\varphi^{\prime})$ is an infinitesimal probability space. We say that unital subalgebras $\mathcal{A}_{1},\dots,\mathcal{A}_{s}$ are infinitesimally free if for every $n$ and for all $a_{1},\dots,a_{n}\in\mathcal{A}$ with $\varphi(a_{j})=0$ for each $j=1,\dots,n$, $a_{j}\in\mathcal{A}_{i_{j}}$, and $i_{1}\neq i_{2}\neq\cdots\neq i_{n-1}\neq i_{n}$, we have

\varphi(a_{1}\cdots a_{n})=0;
(2) \varphi^{\prime}(a_{1}\cdots a_{n})=\sum_{j=1}^{n}\varphi^{\prime}(a_{j})\varphi(a_{1}\cdots a_{j-1}a_{j+1}\cdots a_{n}).

It is easy to see that condition (2) in the definition of infinitesimal freeness is equivalent to (see [11] for a detailed explanation)

\varphi^{\prime}(a_{1}\cdots a_{n})=\left\{\begin{array}{lr}\varphi(a_{1}a_{n})\varphi(a_{2}a_{n-1})\cdots\varphi(a_{(n-1)/2}a_{(n+3)/2})\varphi^{\prime}(a_{(n+1)/2})\\ \ \text{if }n\text{ is odd and }i_{1}=i_{n},\ i_{2}=i_{n-1},\ \dots,\ i_{(n-1)/2}=i_{(n+3)/2},\\ 0\ \text{otherwise.}\end{array}\right.

Fix a unital algebra $\mathcal{A}$ and sequences of multilinear functionals $\{f_{n}:\mathcal{A}^{n}\to\mathbb{C}\}_{n\geq 1}$ and $\{f_{n}^{\prime}:\mathcal{A}^{n}\to\mathbb{C}\}_{n\geq 1}$. For a given partition $\pi\in NC(n)$ (the set of non-crossing partitions of $[n]$), we define

f_{\pi}(a_{1},\dots,a_{n})=\prod_{V\in\pi}f_{|V|}(a_{1},\dots,a_{n}\mid V)

where $(a_{1},\dots,a_{n}\mid V)=(a_{i_{1}},\dots,a_{i_{s}})$ whenever $V=\{i_{1}<\cdots<i_{s}\}$.

Moreover, we define $\partial f_{\pi}$ by

\partial f_{\pi}(a_{1},\dots,a_{n})=\sum_{V\in\pi}\partial f_{\pi,V}(a_{1},\dots,a_{n})

where

\partial f_{\pi,V}(a_{1},\dots,a_{n})=f^{\prime}_{|V|}(a_{1},\dots,a_{n}\mid V)\prod_{W\in\pi,\ W\neq V}f_{|W|}(a_{1},\dots,a_{n}\mid W).
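For example (an illustration added here), for $n=3$ and $\pi=\{\{1,3\},\{2\}\}\in NC(3)$ the definition above reads

\partial f_{\pi}(a_{1},a_{2},a_{3})=f^{\prime}_{2}(a_{1},a_{3})f_{1}(a_{2})+f_{2}(a_{1},a_{3})f^{\prime}_{1}(a_{2}),

so $\partial f_{\pi}$ is obtained from $f_{\pi}$ by replacing, one block at a time, the factor $f_{|V|}$ by $f^{\prime}_{|V|}$ and summing over the blocks of $\pi$.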
Definition 2.2.

Let $(\mathcal{A},\varphi,\varphi^{\prime})$ be an infinitesimal probability space. The free cumulants $\{\kappa_{n}:\mathcal{A}^{n}\to\mathbb{C}\}_{n}$ and the infinitesimal free cumulants $\{\kappa^{\prime}_{n}:\mathcal{A}^{n}\to\mathbb{C}\}_{n}$ are the sequences of multilinear functionals defined inductively via

\varphi(a_{1}\cdots a_{n})=\sum_{\pi\in NC(n)}\kappa_{\pi}(a_{1},\dots,a_{n}),\text{ and}
(4) \varphi^{\prime}(a_{1}\cdots a_{n})=\sum_{\pi\in NC(n)}\partial\kappa_{\pi}(a_{1},\dots,a_{n}).
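For instance (a worked case added for illustration), for $n=2$ the two partitions in $NC(2)$ give

\varphi(a_{1}a_{2})=\kappa_{2}(a_{1},a_{2})+\kappa_{1}(a_{1})\kappa_{1}(a_{2})\quad\text{and}\quad\varphi^{\prime}(a_{1}a_{2})=\kappa^{\prime}_{2}(a_{1},a_{2})+\kappa^{\prime}_{1}(a_{1})\kappa_{1}(a_{2})+\kappa_{1}(a_{1})\kappa^{\prime}_{1}(a_{2}),

which determines $\kappa_{2}$ and $\kappa^{\prime}_{2}$ in terms of moments of order at most two.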

In [11] the authors showed that infinitesimal freeness can be characterized by the vanishing of mixed cumulants $(\kappa_{n},\kappa_{n}^{\prime})_{n\geq 1}$. For the reader's convenience we recall here the precise statement.

Theorem 2.3.

Suppose that $(\mathcal{A},\varphi,\varphi^{\prime})$ is an infinitesimal probability space, and $\mathcal{A}_{1},\dots,\mathcal{A}_{n}$ are unital subalgebras. Then the following statements are equivalent.

  • (i)

    𝒜1,,𝒜n\mathcal{A}_{1},\dots,\mathcal{A}_{n} are infinitesimally free;

  • (ii)

    For each $s\geq 2$ and $i_{1},\dots,i_{s}\in[n]$ which are not all equal, and for $a_{1}\in\mathcal{A}_{i_{1}},\dots,a_{s}\in\mathcal{A}_{i_{s}}$, we have

    \kappa_{s}(a_{1},\dots,a_{s})=\kappa^{\prime}_{s}(a_{1},\dots,a_{s})=0.

2.2. Permutations for Gaussian Random Matrices

In this subsection we recall some notation and relevant facts about matrices with permuted entries. In particular we review results from [24] and [22] and we refer to these two papers for proofs.

Definition 2.4.

By an $N\times N$ Gaussian random matrix we will understand a matrix $G=[g_{ij}]_{1\leq i,j\leq N}$ whose entries satisfy the following conditions:

  • (i)

    $g_{i,j}=\overline{g_{j,i}}$ for all $i,j\in[N]$;

  • (ii)

    $\{g_{i,j}:1\leq i\leq j\leq N\}$ is a family of independent, identically distributed complex (if $i\neq j$) or real (if $i=j$) Gaussian random variables of mean $0$ and variance $\frac{1}{N}$.

If $n$ is a positive integer, we shall denote by $[n]$ the ordered set $\{1,2,\dots,n\}$. The set of pair partitions of $[n]$ is denoted by $P_{2}(n)$. In particular, if $n$ is odd, then $P_{2}(n)=\emptyset$.

The set of all permutations of $[N]\times[N]$ will be denoted by $\mathcal{S}([N]^{2})$. For $(i,j)\in[N]\times[N]$, we denote by $\top$ the transpose, i.e. we have $\top(i,j)=(j,i)$.

In addition, for a given $\sigma\in\mathcal{S}([N]^{2})$, we define

\sigma(A)=\{\sigma(i,j):(i,j)\in A\}\ \text{ for }A\subset[N]\times[N].

Recall that for an $N\times N$ Gaussian random matrix $G_{N}$ and $\sigma\in\mathcal{S}([N]^{2})$, we denote by $G_{N}^{\sigma}$ the random matrix whose $(i,j)$-entry equals the $\sigma(i,j)$-entry of $G_{N}$; i.e.

[G_{N}^{\sigma}]_{i,j}=g_{\sigma(i,j)}.
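The following is a minimal computational sketch (ours; the helper names random_entry_permutation and permuted are illustrative, and indices are 0-based) realizing $G_{N}^{\sigma}$ directly from this definition. Note that for a generic $\sigma$ the permuted matrix is no longer self-adjoint, which is why adjoints are handled via $\top\circ\sigma\circ\top$ in Section 3.

import random
import numpy as np

def random_entry_permutation(N, rng):
    # A uniformly random permutation of [N]^2, stored as a dictionary (i, j) -> sigma(i, j).
    pairs = [(i, j) for i in range(N) for j in range(N)]
    images = pairs.copy()
    rng.shuffle(images)
    return dict(zip(pairs, images))

def permuted(G, sigma):
    # Return G^sigma, whose (i, j) entry is the sigma(i, j) entry of G.
    N = G.shape[0]
    Gs = np.empty_like(G)
    for i in range(N):
        for j in range(N):
            Gs[i, j] = G[sigma[(i, j)]]
    return Gs

nprng = np.random.default_rng(0)
N = 50
A = nprng.normal(size=(N, N)) + 1j * nprng.normal(size=(N, N))
G = (A + A.conj().T) / (2.0 * np.sqrt(N))              # a GUE sample
Gs = permuted(G, random_entry_permutation(N, random.Random(0)))
print(np.allclose(Gs, Gs.conj().T))                    # typically False: G^sigma is not self-adjoint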

Assume that for each positive integer $N$ and each $k=1,2,\dots,m$, $\sigma_{k,N}$ is a permutation from $\mathcal{S}([N]^{2})$. With this notation, we have that

E\circ\mathrm{tr}\Big(G_{N}^{\sigma_{1,N}}\cdot G_{N}^{\sigma_{2,N}}\cdots G_{N}^{\sigma_{m,N}}\Big)=\sum_{1\leq i_{1},\dots,i_{m}\leq N}\frac{1}{N}E\big([G_{N}^{\sigma_{1,N}}]_{i_{1},i_{2}}\cdot[G_{N}^{\sigma_{2,N}}]_{i_{2},i_{3}}\cdots[G_{N}^{\sigma_{m,N}}]_{i_{m},i_{1}}\big).

As shown in [24], using Wick's formula (see [14]) for the right-hand side of the equation above, with the identification $i_{m+1}=i_{1}$, we obtain

(5) E\circ\mathrm{tr}\Big(G_{N}^{\sigma_{1,N}}\cdot G_{N}^{\sigma_{2,N}}\cdots G_{N}^{\sigma_{m,N}}\Big)=\sum_{\pi\in P_{2}(m)}\mathcal{V}_{\overrightarrow{\sigma_{N}}}(\pi)

where we use the short-hand notation $\overrightarrow{\sigma_{N}}=(\sigma_{1,N},\sigma_{2,N},\dots,\sigma_{m,N})$ and

\mathcal{V}_{\overrightarrow{\sigma_{N}}}(\pi)=\frac{1}{N}\sum_{1\leq i_{1},\dots,i_{m}\leq N}\prod_{(k,l)\in\pi}E\big([G_{N}^{\sigma_{k,N}}]_{i_{k},i_{k+1}}\cdot[G_{N}^{\sigma_{l,N}}]_{i_{l},i_{l+1}}\big).

Moreover, since $G_{N}$ is Gaussian, $E\big([G_{N}^{\sigma_{k,N}}]_{i_{k},i_{k+1}}\cdot[G_{N}^{\sigma_{l,N}}]_{i_{l},i_{l+1}}\big)=\frac{1}{N}\delta_{\sigma_{k,N}(i_{k},i_{k+1})}^{\top\circ\sigma_{l,N}(i_{l},i_{l+1})}$. Since it is important to keep track of which indices are equal to each other, it is standard to encode this with pair partitions. Therefore, for a given $\pi\in P_{2}(m)$, we denote by $\mathcal{A}_{\pi,\overrightarrow{\sigma}_{N}}$ the set of index sequences which respect $\overrightarrow{\sigma}_{N}$ and $\pi$ in the sense that they contribute to the sum above. To make this precise, we view any pair partition as a permutation, where each block becomes a cycle, and we write $\pi(k)$ for the image of $k$ under the permutation (induced by) $\pi$. With the identification $i_{m+1}=i_{1}$ we define

\mathcal{A}_{\pi,\overrightarrow{\sigma}_{N}}=\big\{(i_{s},j_{s})_{s\in[m]}:\ j_{k}=i_{k+1}\text{ and }\sigma_{k,N}(i_{k},j_{k})=\top\circ\sigma_{\pi(k),N}(i_{\pi(k)},j_{\pi(k)})\text{ for all }k\in[m]\big\}

and

\mathfrak{a}_{\pi,\overrightarrow{\sigma}_{N}}=\log_{N}|\mathcal{A}_{\pi,\overrightarrow{\sigma}_{N}}|-\Big(\frac{m}{2}+1\Big).

Then (5) can be rewritten as

E\circ\mathrm{tr}\Big(G_{N}^{\sigma_{1,N}}\cdot G_{N}^{\sigma_{2,N}}\cdots G_{N}^{\sigma_{m,N}}\Big)=\sum_{\pi\in P_{2}(m)}|\mathcal{A}_{\pi,\overrightarrow{\sigma}_{N}}|N^{-(\frac{m}{2}+1)}=\sum_{\pi\in P_{2}(m)}N^{\mathfrak{a}_{\pi,\overrightarrow{\sigma}_{N}}}.
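For instance, for $m=2$ and $\sigma_{1,N}=\sigma_{2,N}=\sigma_{N}$, the unique pairing $\pi=\{(1,2)\}$ gives $\mathcal{A}_{\pi,\overrightarrow{\sigma}_{N}}=\{(i,j):\ \sigma_{N}(i,j)=\top\circ\sigma_{N}(j,i)\}$ (after identifying $(i_{1},j_{1})$ with $(i,j)$), so $E\circ\mathrm{tr}\big((G_{N}^{\sigma_{N}})^{2}\big)=|\mathcal{A}_{\pi,\overrightarrow{\sigma}_{N}}|\,N^{-2}$. The brute-force sketch below (ours, with 0-based indices) counts this set for the identity permutation, where the count is $N^{2}$ (recovering $E\circ\mathrm{tr}(G_{N}^{2})=1$), and for a uniformly random $\sigma$, where the count typically stays bounded (compare with the proof of Lemma 3.2(i) below).

import itertools
import random

def count_A(N, sigma):
    # |{(i, j) in [N]^2 : sigma(i, j) = transpose of sigma(j, i)}|
    return sum(1 for i, j in itertools.product(range(N), repeat=2)
               if sigma[(i, j)] == sigma[(j, i)][::-1])

N = 30
pairs = list(itertools.product(range(N), repeat=2))
identity = dict(zip(pairs, pairs))
images = pairs.copy()
random.Random(0).shuffle(images)
sigma = dict(zip(pairs, images))                # a uniformly random permutation of [N]^2
print(count_A(N, identity), count_A(N, sigma))  # N^2 versus a small count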

For $B=\{\beta_{1},\beta_{2},\dots,\beta_{r}\}$ an ordered subset of $[m]$ and $(i_{s},j_{s})_{s\in[m]}\in[N]^{2m}$ we will be interested in the subsequences $(i_{s},j_{s})_{s\in B}$, i.e. $(i_{\beta_{1}},j_{\beta_{1}},\dots,i_{\beta_{r}},j_{\beta_{r}})$, which can be extended to an element of $\mathcal{A}_{\pi,\overrightarrow{\sigma}_{N}}$; thus we define

\mathcal{A}_{\pi,\overrightarrow{\sigma}_{N}}(B)=\big\{(i_{s},j_{s})_{s\in B}:\ \text{there exists some }(i_{s},j_{s})_{s\in[m]\setminus B}\text{ such that }(i_{s},j_{s})_{s\in[m]}\in\mathcal{A}_{\pi,\overrightarrow{\sigma}_{N}}\big\}

and let

\mathfrak{a}_{\pi,\overrightarrow{\sigma}_{N}}(B)=\log_{N}|\mathcal{A}_{\pi,\overrightarrow{\sigma}_{N}}(B)|-|B|+\frac{1}{2}|B^{2}\,_{|\pi}|-1,

where $B^{2}\,_{|\pi}$ can be understood as the restriction of $\pi$ to $B$, i.e. we have $B^{2}\,_{|\pi}=\{(k,l)\in B\times B:\ \pi(k)=l\}$.

The following results are shown in [24] (see Lemmas 2.2–2.4).

Lemma 2.5.

  1. (i)

    If $k\in B$, then $\mathfrak{a}_{\pi,\overrightarrow{\sigma}_{N}}(B\cup\{k-1\})\leq\mathfrak{a}_{\pi,\overrightarrow{\sigma}_{N}}(B)$ and $\mathfrak{a}_{\pi,\overrightarrow{\sigma}_{N}}(B\cup\{k+1\})\leq\mathfrak{a}_{\pi,\overrightarrow{\sigma}_{N}}(B)$.

    In particular, $\mathfrak{a}_{\pi,\overrightarrow{\sigma}_{N}}(B)\geq\mathfrak{a}_{\pi,\overrightarrow{\sigma}_{N}}$ for all $B\subset[m]$.

  2. (ii)

    If $k\in B$, then $\mathfrak{a}_{\pi,\overrightarrow{\sigma}_{N}}(B\cup\{\pi(k)\})\leq\mathfrak{a}_{\pi,\overrightarrow{\sigma}_{N}}(B)$.

  3. (iii)

    If $\{k,\pi(k)\}\cap B=\emptyset$ and $\{k-1,k+1\}\subseteq B$, then $\mathfrak{a}_{\pi,\overrightarrow{\sigma}_{N}}(B\cup\{k\})\leq\mathfrak{a}_{\pi,\overrightarrow{\sigma}_{N}}(B)-1$.

2.3. Basic probabilistic tools

We will repeatedly use the Borel–Cantelli lemma in order to prove almost sure convergence of some statistics of uniformly random permutations. We show several similar lemmas, which nevertheless have substantially different proofs. We prove each lemma separately, in each case invoking the following basic fact.

Lemma 2.6.

Let $(X_{N})_{N}$ be a sequence of non-negative random variables. In order to have $X_{N}\to 0$ almost surely, it suffices to show that for any $\varepsilon>0$ there exist $\delta>0$ and $C>0$ such that $\mathbb{P}(X_{N}>\varepsilon)\leq\tfrac{C}{N^{1+\delta}}$.
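Indeed, under this bound one has, for every fixed $\varepsilon>0$,

\sum_{N\geq 1}\mathbb{P}(X_{N}>\varepsilon)\leq\sum_{N\geq 1}\frac{C}{N^{1+\delta}}<\infty,

so by the Borel–Cantelli lemma, almost surely $X_{N}\leq\varepsilon$ for all $N$ large enough; intersecting these almost sure events over $\varepsilon=1/k$, $k\in\mathbb{N}$, yields $X_{N}\to 0$ almost surely.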

3. The generic permuted Gaussian random matrix has zero infinitesimal distribution

As explained in the previous section, in order to understand the joint moments of a Gaussian matrix with permuted entries it suffices to understand $\mathcal{V}_{\overrightarrow{\sigma_{N}}}(\pi)$ for any pairing $\pi$. In this section we consider random permutations and study the asymptotic behaviour of $\mathcal{V}_{\overrightarrow{\sigma_{N}}}$, where we assume that $(\sigma_{N})_{N}$ is a sequence of uniformly random permutations with $\sigma_{N}\in\mathcal{S}([N]^{2})$ for each $N\geq 1$. Note that in $\mathcal{V}_{\overrightarrow{\sigma_{N}}}(\pi)$ we integrate with respect to the distribution of the entries of the matrix, so the almost sure limits mentioned below are with respect to the sequence of random permutations only. Since a Gaussian matrix after a random permutation is most likely not self-adjoint, we have to take care of the complex conjugate together with the entry permutation. Observe that $(G^{\sigma})^{*}_{(i,j)}=\overline{G_{\sigma(j,i)}}$, and this is exactly the same as $G^{\top\circ\sigma_{N}\circ\top}_{(i,j)}$, which motivates the notation we introduce below. The goal of this section is to prove the following theorem.

Theorem 3.1.

Let $\big(\sigma_{N}\big)_{N}$ be a sequence of uniformly random permutations from $\mathcal{S}([N]^{2})$ and $\varepsilon(1),\dots,\varepsilon(m)\in\{1,\ast\}$. Define

\sigma_{k,N}=\left\{\begin{array}{ll}\sigma_{N}&\text{ if }\varepsilon(k)=1\\ \top\circ\sigma_{N}\circ\top&\text{ if }\varepsilon(k)=\ast.\end{array}\right.

With the notations from Section 2.2, almost surely we have that $\mathcal{V}_{\overrightarrow{\sigma_{N}}}(\pi)=o(N^{-1})$ unless $\pi$ is non-crossing and $\varepsilon(k)\neq\varepsilon(\pi(k))$ for all $k\in[m]$.

In order to prove Theorem 3.1 we need to establish several technical results.

Lemma 3.2.

Let $\big(\sigma_{N}\big)_{N}$ be a sequence of uniformly random permutations, with each $\sigma_{N}$ from $\mathcal{S}([N]^{2})$. For any constant $\theta>0$ we have the following almost sure limits:

  1. (i)

    \lim_{N\rightarrow\infty}N^{-(\frac{1}{2}+\theta)}\cdot\big|\{(i,j)\in[N]^{2}:\ \sigma_{N}(i,j)=\top\circ\sigma_{N}(j,i)\}\big|=0

  2. (ii)

    \lim_{N\rightarrow\infty}N^{-\theta}\cdot\sup_{1\leq i\leq N}\big|\{(j,k)\in[N]^{2}:\ \sigma_{N}(i,j)\in\{\top\circ\sigma_{N}(j,k),\ \sigma_{N}\circ\top(j,k)\}\}\big|=0.

Proof.

For part (i), for each $i,j\in[N]$, denote by $I_{i,j}$ the random variable on $\mathcal{S}([N]^{2})$ given by $I_{i,j}(\sigma)=\left\{\begin{array}{ll}1&\text{ if }\sigma(i,j)=\top\circ\sigma(j,i)\\ 0&\text{ if }\sigma(i,j)\neq\top\circ\sigma(j,i)\end{array}\right.$ and let $X_{N}=\sum_{i,j=1}^{N}I_{i,j}$.

Using Markov's inequality, we have that $\mathbb{P}(N^{-(\frac{1}{2}+\theta)}X_{N}>\varepsilon)\leq\frac{\mathbb{E}(X_{N}^{2})}{\varepsilon^{2}N^{1+2\theta}}$, so from Lemma 2.6 it suffices to show that $\mathbb{E}(X_{N}^{2})$ is bounded.

Since $I_{i,j}=I_{j,i}=I^{2}_{i,j}$, we have that

\mathbb{E}(X_{N}^{2})=\mathbb{E}\big(\big(\sum_{i,j=1}^{N}I_{i,j}\big)^{2}\big)=2\sum_{i,j=1}^{N}\mathbb{E}(I_{i,j})+\sum_{\substack{1\leq i,j,k,l\leq N\\ (i,j)\notin\{(k,l),(l,k)\}}}\mathbb{E}(I_{i,j}I_{k,l}).

To estimate $\mathbb{E}(I_{i,j})$, note that if $i\neq j$ and $I_{i,j}(\sigma)\neq 0$, the value of $\sigma(i,j)$ uniquely determines the value of $\sigma(j,i)$; there are $N^{2}$ possible choices for $\sigma(i,j)$, hence for $\sigma(j,i)$, and $(N^{2}-2)!$ possibilities for the rest of the values of $\sigma$. For elements on the diagonal, i.e. when $i=j$, we have that $I_{i,j}(\sigma)\neq 0$ if and only if $\sigma(j,i)$ is also on the diagonal. Therefore

\sum_{i,j=1}^{N}\mathbb{E}(I_{i,j})=\sum_{i\neq j}\mathbb{E}(I_{i,j})+\sum_{i}\mathbb{E}(I_{i,i})\leq 2\binom{N}{2}\frac{N^{2}\cdot(N^{2}-2)!}{N^{2}!}+N\cdot\frac{1}{N}\xrightarrow[N\rightarrow\infty]{}2.

Similarly, if $(i,j)\notin\{(k,l),(l,k)\}$, for $I_{i,j}(\sigma)I_{k,l}(\sigma)\neq 0$ there are $N^{2}$ possible choices for $\sigma(i,j)$, each uniquely determining at most one possible choice for $\sigma(j,i)$. Then there are $N^{2}-2$ possible choices for $\sigma(k,l)$, each determining at most one possible choice for $\sigma(l,k)$, and $(N^{2}-4)!$ possible choices for the rest of the values of the permutation $\sigma$. Hence

\sum_{\substack{1\leq i,j,k,l\leq N\\ (i,j)\notin\{(k,l),(l,k)\}}}\mathbb{E}(I_{i,j}I_{k,l})\leq N^{2}(N^{2}-2)\frac{N^{2}\cdot(N^{2}-2)\cdot(N^{2}-4)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}1.

For part (ii), fix $\varepsilon>0$ and observe that, using exchangeability of rows and subadditivity, we have for any fixed $a\in[N]$

\mathbb{P}\big(\sup_{1\leq i\leq N}\big|\{(j,k)\in[N]^{2}:\ \sigma_{N}(i,j)=\top\circ\sigma_{N}(j,k)\}\big|>N^{\theta}\cdot\varepsilon\big)\leq\sum_{i=1}^{N}\mathbb{P}\big(\big|\{(j,k)\in[N]^{2}:\ \sigma_{N}(i,j)=\top\circ\sigma_{N}(j,k)\}\big|>N^{\theta}\cdot\varepsilon\big)=N\cdot\mathbb{P}\big(\big|\{(j,k)\in[N]^{2}:\ \sigma_{N}(a,j)=\top\circ\sigma_{N}(j,k)\}\big|>N^{\theta}\cdot\varepsilon\big).

Denoting $Y_{N}=\sum_{1\leq i,j\leq N}F_{i,j}$, where each $F_{i,j}$ is the random variable on $\mathcal{S}([N]^{2})$ given by $F_{i,j}(\sigma)=\left\{\begin{array}{ll}1&\text{ if }\sigma(a,i)=\top\circ\sigma(i,j)\\ 0&\text{ if }\sigma(a,i)\neq\top\circ\sigma(i,j)\end{array}\right.$ and applying Markov's inequality, we obtain

\mathbb{P}\big(\sup_{1\leq i\leq N}\big|\{(j,k)\in[N]^{2}:\ \sigma_{N}(i,j)=\top\circ\sigma_{N}(j,k)\}\big|>N^{\theta}\cdot\varepsilon\big)\leq N\frac{\mathbb{E}\big(Y_{N}^{K}\big)}{(N^{\theta}\varepsilon)^{K}},

for any positive integer $K$, in particular for $K>\frac{3}{\theta}$.

Since $F_{i,j}^{2}=F_{i,j}$, we have that $\mathbb{E}\big(Y_{N}^{K}\big)=\sum_{s=1}^{K}\mathbb{E}\big(\sum F_{i_{1},j_{1}}\cdots F_{i_{s},j_{s}}\big)$, where the second sum goes over $\{(i_{1},j_{1},\dots,i_{s},j_{s})\in[N]^{2s}:\ (i_{1},j_{1}),\dots,(i_{s},j_{s})\text{ distinct}\}$. So Lemma 2.6 implies that it suffices to show the uniform boundedness in $N$ of the expression $\mathbb{E}\big(\sum_{D}F_{i_{1},j_{1}}\cdots F_{i_{s},j_{s}}\big)$, where $s$ is some positive integer and

D=\{(i_{1},j_{1},\dots,i_{s},j_{s})\in[N]^{2s}:\ (i_{1},j_{1}),\dots,(i_{s},j_{s})\text{ distinct}\}.

We write $D$ as a disjoint union $D=D_{1}\sqcup D_{2}$, where

D_{1}=\{(i_{1},j_{1},\dots,i_{s},j_{s})\in D:\ (i_{k},j_{k})=(a,a)\text{ for some }k\in[s]\}
D_{2}=\{(i_{1},j_{1},\dots,i_{s},j_{s})\in D:\ (i_{k},j_{k})\neq(a,a)\text{ for all }k\in[s]\}.
  •

    $(i_{1},j_{1},\ldots,i_{s},j_{s})\in D_{1}$ if there is $1\leq k\leq s$ such that $(i_{k},j_{k})=(a,a)$ (in particular, since the pairs of indices in $D$ are distinct, there can be at most one such $k$).

  •

    $(i_{1},j_{1},\ldots,i_{s},j_{s})\in D_{2}$ if for all $1\leq k\leq s$ we have $(i_{k},j_{k})\neq(a,a)$.

Under the condition $F_{i_{1},j_{1}}\cdots F_{i_{s},j_{s}}\neq 0$, for each $(i_{1},j_{1},\ldots,i_{s},j_{s})\in D_{2}$ there are at most $\frac{N^{2}!}{(N^{2}-s)!}$ possible choices for the $s$-tuple $(\sigma(a,i_{1}),\dots,\sigma(a,i_{s}))$, each of them determining at most one possible choice for the $s$-tuple $(\sigma(i_{1},j_{1}),\dots,\sigma(i_{s},j_{s}))$ and $(N^{2}-2s)!$ possible choices for the rest of the values of $\sigma$. Furthermore, for each $(i_{1},j_{1},\ldots,i_{s},j_{s})\in D_{1}$ there are at most $N$ choices for the value of $\sigma(a,a)$ and at most $\frac{(N^{2}-1)!}{(N^{2}-s)!}$ for the values of all other $\sigma(a,i_{k})$ such that $F_{i_{1},j_{1}}\cdots F_{i_{s},j_{s}}\neq 0$; each such choice determines at most one possible choice for $(\sigma(i_{1},j_{1}),\dots,\sigma(i_{s},j_{s}))$ and at most $(N^{2}-2s)!$ possible choices for the rest of the values of $\sigma$. Also, since $(i_{k},j_{k})=(a,a)$ for a unique $k\in[s]$, we have at most $N^{2s-1}$ choices for the $2s$-tuple $(i_{1},j_{1},\ldots,i_{s},j_{s})\in D_{1}$.

Therefore

\mathbb{E}\big(\sum_{D}F_{i_{1},j_{1}}\cdots F_{i_{s},j_{s}}\big)\leq N^{2s}\cdot\frac{1}{N^{2}!}\cdot\frac{N^{2}!}{(N^{2}-s)!}\cdot 1\cdot(N^{2}-2s)!+N^{2s-1}\cdot\frac{1}{N^{2}!}\cdot\frac{N(N^{2}-1)!}{(N^{2}-s)!}\cdot 1\cdot(N^{2}-2s)!\xrightarrow[N\rightarrow\infty]{}1,

where $N^{2s}$ comes from estimating the number of terms in the sum, hence the conclusion. The argument for the case $\sigma_{N}(i,j)=\sigma_{N}\circ\top(j,k)$ is similar. ∎

Lemma 3.3.

For each positive integer $N$ let $\phi_{N}$ be a fixed map from $[N]\times[N]$ to $[N]$, and let $(\sigma_{N})_{N}$ be a sequence of uniformly random permutations with $\sigma_{N}\in\mathcal{S}([N]^{2})$. Suppose that each $\omega_{s,N}$, $s\in\{1,2,3\}$, is either the transpose for each $N$, or the identity for each $N$, and denote $\sigma_{s,N}=\omega_{s,N}\circ\sigma_{N}\circ\omega_{s,N}$. With this notation, for any $\theta>0$ the following relations hold true almost surely:

  1. (i)

    \lim_{N\rightarrow\infty}N^{-(1+\theta)}\cdot\big|\{(i,j,k,l)\in[N]^{4}:\ \sigma_{N}(i,j)=\top\circ\sigma_{1,N}(k,l)\text{ and }\sigma_{2,N}(j,k)=\top\circ\sigma_{3,N}(l,i)\}\big|=0.

  2. (ii)

    \lim_{N\rightarrow\infty}N^{-(\frac{5}{2}+\theta)}\cdot\big|\big\{(i,j,a,b)\in[N]^{4}:\ \sigma_{N}(a,\phi_{N}(\sigma_{N}(i,j)))=\top\circ\sigma_{1,N}(b,i)\big\}\big|=0.

Proof.

To simplify the writing, $N$ shall be omitted within this proof, by writing $\sigma$, $\sigma_{s}$, $\omega_{s}$ for $\sigma_{N}$, $\sigma_{s,N}$, $\omega_{s,N}$ respectively (where $1\leq s\leq 3$).

For part (i), note first that if $i=j$, then there are at most $N$ possible choices for $i$, each determining at most one possible choice for $k,l$, hence

\big|\{(i,j,k,l)\in[N]^{4}:\ i=j,\ \sigma_{N}(i,j)=\top\circ\sigma_{1,N}(k,l),\ \sigma_{2,N}(j,k)=\top\circ\sigma_{3,N}(l,i)\}\big|=O(N).

Similar relations hold true for $k=l$, for $i=l$ and for $j=k$. Furthermore, if $i=k$ and $l=j$, then the equality $\sigma(i,j)=\top\circ\sigma_{1}(k,l)$ gives either $\sigma(i,j)=\top\circ\sigma(i,j)$ if $\omega_{1}=\text{Id}_{N}$, or $i=j$ if $\omega_{1}$ is the transpose. In the first case there are again at most $N$ possible choices for $(i,j)$, as it lies in the preimage of the diagonal under $\sigma_{N}$.

Therefore it remains to prove the statement for $(i,j,k,l)\in D$, where

D=\big\{(i,j,k,l)\in[N]^{4}:\ (i,j),\,\omega_{1}(k,l),\,\omega_{2}(j,k),\,\omega_{3}(l,i)\text{ are distinct}\big\}.

For each $(i,j,k,l)\in D$, consider a mapping $I_{i,j,k,l}$ on $\mathcal{S}([N]^{2})$ given by

I_{i,j,k,l}(\sigma)=\begin{cases}1&\text{ if condition }(\ast)\text{ holds true}\\ 0&\text{ otherwise,}\end{cases}

where condition $(\ast)$ is that $\sigma(i,j)=\top\circ\sigma_{1}(k,l)$ and $\sigma_{2}(j,k)=\top\circ\sigma_{3}(l,i)$.

From Markov’s inequality, we have

\mathbb{P}\Big(\sum_{(i,j,k,l)\in D}I_{i,j,k,l}>\varepsilon N^{1+\theta}\Big)\leq\varepsilon^{-1}\cdot\mathbb{E}\Big(\sum_{(i,j,k,l)\in D}I_{i,j,k,l}\Big)\Big/N^{1+\theta},

where the (not displayed) argument of $I_{i,j,k,l}$ is a uniformly random permutation on $\mathcal{S}([N]^{2})$. From Lemma 2.6 it suffices to show that the expectation above is bounded.

For $I_{i,j,k,l}(\sigma)\neq 0$, there are $N^{2}$ possible choices for $\sigma(i,j)$, each giving at most one possible choice for $\sigma(\omega_{1}(k,l))$, and $N^{2}-2$ choices for $\sigma(\omega_{2}(j,k))$, each of them determining at most one possible choice for $\sigma(\omega_{3}(l,i))$; finally, there are at most $(N^{2}-4)!$ possible choices for the rest of the values of $\sigma$. Therefore

\mathbb{E}\Big(\sum_{(i,j,k,l)\in D}I_{i,j,k,l}\Big)<N^{4}\cdot\frac{N^{2}\cdot(N^{2}-2)\cdot(N^{2}-4)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}1,

hence the conclusion.

For part (ii), let us denote by $(\ast\ast)$ the relation $\sigma(a,\phi_{N}(\sigma(i,j)))=\top\circ\sigma_{1}(b,i)$.

Remark that it suffices to show the property for tuples $(i,j,a,b)$ such that $a,b\notin\{i,j\}$. If $a\in\{i,j\}$, then there are $N^{2}$ possible choices for the triple $(i,j,a)$, each determining, via condition $(\ast\ast)$, at most one possible value for $b$. Similarly for $b\in\{i,j\}$.

Let $D_{1}=\{(i,j,a,b)\in[N]^{4}:\ a,b\notin\{i,j\}\}$. As before, using the Borel–Cantelli lemma, it suffices to show that for each $\varepsilon>0$,

\sum_{N=1}^{\infty}\mathbb{P}\big(\big|\{(i,j,a,b)\in D_{1}:\ (\ast\ast)\text{ holds true}\}\big|>\varepsilon\cdot N^{\frac{5}{2}+\theta}\big)<\infty,

which holds true if

N^{1+2\theta}\cdot\mathbb{P}\big(\big|\{(i,j,a,b)\in D_{1}:\ (\ast\ast)\text{ holds true}\}\big|>\varepsilon\cdot N^{\frac{5}{2}+\theta}\big)

is uniformly bounded in $N$.

For each $(i,j,a,b)\in D_{1}$ consider a mapping $F_{i,j,a,b}$ on $\mathcal{S}([N]^{2})$ given by

F_{i,j,a,b}(\sigma)=\begin{cases}1&\text{ if }(\ast\ast)\text{ holds true}\\ 0&\text{ otherwise}.\end{cases}

Markov’s inequality gives that

\mathbb{P}\big(\big|\{(i,j,a,b)\in D_{1}:\ (\ast\ast)\text{ holds true}\}\big|>\varepsilon\cdot N^{\frac{5}{2}+\theta}\big)\leq\frac{\mathbb{E}\big(\big(\sum_{(i,j,a,b)\in D_{1}}F_{i,j,a,b}\big)^{2}\big)}{\varepsilon^{2}N^{5+2\theta}},

where the implicit argument of $F_{i,j,a,b}$ is a uniformly random permutation on $\mathcal{S}([N]^{2})$. So by Lemma 2.6 it suffices to show that

(6) N^{-4}\cdot\mathbb{E}\big(\big(\sum_{(i,j,a,b)\in D_{1}}F_{i,j,a,b}\big)^{2}\big)\text{ is uniformly bounded in }N.

First, note that if $b\neq b^{\prime}$, then $F_{i,j,a,b}\cdot F_{i,j,a,b^{\prime}}=0$, since condition $(\ast\ast)$ implies that $\top\circ\sigma_{1}(b,i)=\sigma(a,\phi_{N}(\sigma(i,j)))=\top\circ\sigma_{1}(b^{\prime},i)$. Similarly, if $a\neq a^{\prime}$, then $F_{i,j,a,b}\cdot F_{i,j,a^{\prime},b}=0$.

Next, if $(i,j)\neq(i^{\prime},j^{\prime})$ but $(a,\phi_{N}(\sigma(i,j)))=(a^{\prime},\phi_{N}(\sigma(i^{\prime},j^{\prime})))$, then $(\ast\ast)$ gives that $(b,i)=(b^{\prime},i^{\prime})$.

Denote

D_{2}=\{(i,j,a,b,a^{\prime},b^{\prime}):\ a\neq a^{\prime},\ b\neq b^{\prime}\text{ and }(i,j,a,b),(i,j,a^{\prime},b^{\prime})\in D_{1}\}
D_{3}=\{(i,j,j^{\prime},a,b):\ (i,j,a,b),(i,j^{\prime},a,b)\in D_{1}\}
D_{4}=\{(i,j,i^{\prime},j^{\prime},a,b,a^{\prime},b^{\prime}):\ (i,j)\neq(i^{\prime},j^{\prime}),\ (a,b)\neq(a^{\prime},b^{\prime})\text{ and }(i,j,a,b),(i^{\prime},j^{\prime},a^{\prime},b^{\prime})\in D_{1}\}

Utilizing the observations above and that $F_{i,j,a,b}=F_{i,j,a,b}^{2}$, we have that

\mathbb{E}\big(\big(\sum_{(i,j,a,b)\in D_{1}}F_{i,j,a,b}\big)^{2}\big)=\mathbb{E}\big(\sum_{D_{1}}F_{i,j,a,b}\big)+\mathbb{E}\big(\sum_{D_{2}}F_{i,j,a,b}\cdot F_{i,j,a^{\prime},b^{\prime}}\big)+\mathbb{E}\big(\sum_{D_{3}}F_{i,j,a,b}\cdot F_{i,j^{\prime},a,b}\big)+\mathbb{E}\big(\sum_{D_{4}}F_{i,j,a,b}\cdot F_{i^{\prime},j^{\prime},a^{\prime},b^{\prime}}\big).

For $F_{i,j,a,b}(\sigma)=1$, there are at most $N^{2}$ possible choices for $\sigma(i,j)$, each giving at most $N^{2}-1$ possible choices for $\sigma(a,\phi_{N}(\sigma(i,j)))$, each giving at most one possible choice for $\sigma(\omega_{1}(b,i))$, and at most $(N^{2}-3)!$ possible choices for the rest of the values of $\sigma$. Therefore

(7) N^{-4}\cdot\mathbb{E}\big(\sum_{D_{1}}F_{i,j,a,b}\big)\leq N^{-4}\cdot N^{4}\cdot\frac{N^{2}\cdot(N^{2}-1)\cdot 1\cdot(N^{2}-3)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}0.

For $(i,j,a,b,a^{\prime},b^{\prime})\in D_{2}$ and $F_{i,j,a,b}\cdot F_{i,j,a^{\prime},b^{\prime}}(\sigma)=1$, there are $N^{2}$ possible choices for $\sigma(i,j)$, each giving less than $N^{4}$ possible choices for $\sigma(a,\phi_{N}(\sigma(i,j)))$, $\sigma(a^{\prime},\phi_{N}(\sigma(i,j)))$, each giving at most one choice for $\sigma(\omega_{1}(b,i))$, $\sigma(\omega_{1}(b^{\prime},i))$, and $(N^{2}-5)!$ possible choices for the rest of the values of $\sigma$. Therefore

(8) N^{-4}\cdot\mathbb{E}\big(\sum_{D_{2}}F_{i,j,a,b}\cdot F_{i,j,a^{\prime},b^{\prime}}\big)<N^{-4}\cdot N^{6}\cdot\frac{N^{2}\cdot N^{4}\cdot(N^{2}-5)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}0.

For $(i,j,j^{\prime},a,b)\in D_{3}$ and $F_{i,j,a,b}\cdot F_{i,j^{\prime},a,b}=1$, there are $N^{2}$ possible choices for $\sigma(i,j)$, each giving less than $N^{2}$ possible choices for $\sigma(a,\phi_{N}(\sigma(i,j)))$, each giving at most one possible choice for $\sigma(\omega_{1}(b,i))$, and at most $(N^{2}-3)!$ possible choices for the rest of the values of $\sigma$. Therefore

(9) N^{-4}\cdot\mathbb{E}\big(\sum_{D_{3}}F_{i,j,a,b}\cdot F_{i,j^{\prime},a,b}\big)<N^{-4}\cdot N^{5}\cdot\frac{N^{4}\cdot(N^{2}-3)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}0.

For $(i,j,i^{\prime},j^{\prime},a,b,a^{\prime},b^{\prime})\in D_{4}$ together with $F_{i,j,a,b}\cdot F_{i^{\prime},j^{\prime},a^{\prime},b^{\prime}}=1$, there are at most $N^{2}$ possible choices for $\sigma(i,j)$, each giving less than $N^{2}$ possible choices for $\sigma(a,\phi_{N}(\sigma(i,j)))$, each giving one possible choice for $\sigma(\omega_{1}(b,i))$; then less than $N^{4}$ possible choices for $\sigma(i^{\prime},j^{\prime})$, $\sigma(a^{\prime},\phi_{N}(\sigma(i^{\prime},j^{\prime})))$, each with at most one choice for $\sigma(\omega_{1}(b^{\prime},i^{\prime}))$, and at most $(N^{2}-6)!$ possible choices for the rest of the values of $\sigma$. Therefore

(10) N^{-4}\cdot\mathbb{E}\big(\sum_{D_{4}}F_{i,j,a,b}\cdot F_{i^{\prime},j^{\prime},a^{\prime},b^{\prime}}\big)<N^{-4}\cdot N^{8}\cdot\frac{N^{8}\cdot(N^{2}-6)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}1.

The conclusion follows by summing relations (7)–(10). ∎

Proof of Theorem 3.1.

According to Lemma 2.5(i), for any subset $S$ of $[m]$, we have that $\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}(S)\geq\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}$. Therefore it suffices to show that there is some $S\subseteq[m]$ and some $\varepsilon>0$ such that $\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}(S)<-1-\varepsilon$.

Next, we observe that removing from $\pi$ any block of the form $(k,k+1)$ such that $\varepsilon(k)\neq\varepsilon(k+1)$ does not change the value of $\mathcal{V}_{\overrightarrow{\sigma_{N}}}(\pi)$. Indeed, if $\pi(k)=k+1$ and $\varepsilon(k)\neq\varepsilon(k+1)$, that is $\sigma_{k+1,N}=\top\circ\sigma_{k,N}\circ\top$, then the relation $\delta_{\sigma_{k,N}(i_{k},i_{k+1})}^{\top\circ\sigma_{k+1,N}(i_{k+1},i_{k+2})}=1$ is equivalent to $i_{k}=i_{k+2}$. Hence

\mathcal{V}_{\overrightarrow{\sigma_{N}}}(\pi)=\frac{1}{N}\sum_{\substack{1\leq i_{\nu}\leq N\\ 1\leq\nu\leq m}}\frac{1}{N}\delta_{\sigma_{k,N}(i_{k},i_{k+1})}^{\top\circ\sigma_{k+1,N}(i_{k+1},i_{k+2})}\prod_{\substack{(s,t)\in\pi\\ s\neq k\neq t}}E\big([G_{N}^{\sigma_{s,N}}]_{i_{s},i_{s+1}}\cdot[G_{N}^{\sigma_{t,N}}]_{i_{t},i_{t+1}}\big)
(11) =\sum_{i_{k+1}=1}^{N}\frac{1}{N}\mathcal{V}_{\overrightarrow{\sigma_{N}}^{\prime}}(\pi^{\prime})=\mathcal{V}_{\overrightarrow{\sigma_{N}}^{\prime}}(\pi^{\prime})

where $\pi^{\prime}$ and $\overrightarrow{\sigma_{N}}^{\prime}$ are obtained by removing $(k,k+1)$ from $\pi$ and removing $\sigma_{k,N}$ and $\sigma_{k+1,N}$ from $\overrightarrow{\sigma_{N}}$.

Without loss of generality, from now on we assume that $\pi$ does not contain a block of the form $(k,k+1)$ such that $\varepsilon(k)\neq\varepsilon(k+1)$. Observe that non-crossing pairings which at the same time are alternating in $1$ and $\ast$ always contain such a pair. Next we shall show that for any other pairing (either crossing, or non-crossing but non-alternating) $\mathcal{V}_{\overrightarrow{\sigma_{N}}}(\pi)$ vanishes asymptotically.

Suppose that $\pi$ has a block of the form $(k,k+1)$ with $\varepsilon(k)=\varepsilon(k+1)$. Via a circular permutation of the set $[m]$, we can then assume that $\pi(1)=2$. If $m=2$, then the result follows from Lemma 3.2(i). If $m>2$, then from our first assumption, the restriction of $\pi$ to $\{3,4,\dots,m\}$ either has a crossing or a block of the form $(s,s+1)$.

For the case $\pi(1)=2$ and $\pi(s)=s+1$ for some $s>2$ (see Figure 1 below), Lemma 3.2(ii) gives that for any $\theta>0$, almost surely $|\mathcal{A}_{\pi,\overrightarrow{\sigma_{N}}}(\{1,2\})|=o(N^{\theta})$, therefore $\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}(\{1,2\})<-1+\theta$. Hence, taking $B=\{1,\dots,s-1\}$, part (i) of Lemma 2.5 gives that $\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}(B)<-1+\theta$.

Figure 1.

On the other hand, given $(i_{\nu},j_{\nu})_{\nu\in B}\in\mathcal{A}_{\pi,\overrightarrow{\sigma_{N}}}(B)$, the component $i_{s}=j_{s-1}$ is fixed, so according to Lemma 3.2(ii) almost surely there are $o(N^{\theta})$ triples $(i_{s},i_{s+1},j_{s+1})$ such that $\sigma_{s,N}(i_{s},j_{s})=\top\circ\sigma_{s+1,N}(i_{s+1},j_{s+1})$, that is

|\mathcal{A}_{\pi,\overrightarrow{\sigma_{N}}}(B\cup\{s,s+1\})|=o(N^{\theta})\cdot|\mathcal{A}_{\pi,\overrightarrow{\sigma_{N}}}(B)|.

We then get $\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}(B\cup\{s,s+1\})<-2+2\theta$ and the conclusion follows from Lemma 2.5(i).

The case when $\pi(1)=2$ and the set $\{3,4,\dots,m\}$ contains a crossing of $\pi$ is similar. Suppose that $a,b,c,d$ is such a crossing (see Figure 2 below).

Figure 2.

As above, Lemma 3.2(ii) and Lemma 2.5(i) give that $\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}([b-1])<-1+\theta$. Since $\pi(a)=c$, Lemma 2.5(ii) gives that $\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}([b-1]\cup\{c\})<-1+\theta$, and furthermore, utilizing again Lemma 2.5(i), we get

\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}([c]\setminus\{b\})<-1+\theta.

Finally, applying part (iii) of Lemma 2.5, we get that $\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}([c])<-2+\theta$, hence the conclusion.

Next we shall analyse the case when $\pi$ does not have any block with consecutive elements and hence has a crossing. Let $a<b<c<d$ be such that $\pi(a)=c$, $\pi(b)=d$ and $c-b$ is minimal. Since $\pi$ does not have any block with consecutive elements, the minimality of $c-b$ gives that $c=b+1$. We can also suppose, via a circular permutation of the set $[m]$, that $a=1$.

Furthermore, remark that if the set $[d]$ is not invariant under $\pi$, then the conclusion of the theorem holds true, that is, $\mathcal{V}_{\overrightarrow{\sigma_{N}}}(\pi)=o(N^{-1})$. To show this, suppose that there are some $t,s$ with $t<d<s$ and $\pi(t)=s$. If $b+1<t<d$ (see Figure 3 below), then $\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}(\{1,b+1\})=0$, so applying parts (i) and (iii) of Lemma 2.5 we get that $\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}([b+1])\leq-1$.

Figure 3.

On the other hand, part (ii) of Lemma 2.5 then gives that $\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}([b+1])\geq\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}([b+1]\cup\{d\})$ and another application of parts (i) and (iii) gives that

\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}([d])\leq\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}([d]\setminus\{t\})-1\leq\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}([b+1]\cup\{d\})-1\leq-2,

hence the conclusion. The argument for the case $1<t<b$ is similar.

We can then assume that $\pi([d])=[d]$. Furthermore, this allows us to assume that $d=m$: if $d<m$, then the restriction of $\pi$ to $[m]\setminus[d]$ has some crossing (since no block has consecutive elements), and again Lemma 2.5 gives that $\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}\leq-2$. Finally, without loss of generality, we can also suppose that $\varepsilon(1)=1$, that is, $\sigma_{1,N}=\sigma_{N}$.

If $m=4$, then, putting $\omega_{s,N}\in\{\text{Id}_{N},\top\}$ in part (i) of Lemma 3.3, we obtain that $|\mathcal{A}_{\pi,\overrightarrow{\sigma_{N}}}|=o(N^{1+\theta})$, hence

\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}<1+\theta-|\{1,2,3,4\}|+|\{(1,3),(2,4)\}|-1=-2+\theta.

If $m>4$, then applying part (ii) of Lemma 3.3 for $(i,j,a,b)=(i_{1},i_{b},i_{b+1},i_{m})$, $\omega_{s,N}\in\{\text{Id}_{N},\top\}$ and $\phi_{N}$ one of the canonical projections, we obtain that

|\mathcal{A}_{\pi,\overrightarrow{\sigma_{N}}}(\{1,b,b+1,m\})|=o(N^{\frac{5}{2}+\theta}),

hence

\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}(\{1,b,b+1,m\})<\frac{5}{2}+\theta-4+2-1=-\frac{1}{2}+\theta.

On the other hand, since $m>4$ and $\pi$ does not have any blocks with consecutive elements, either the sets $\{2,3,\dots,b-1\}$ and $\{b+2,b+3,\dots,m-1\}$ are invariant under $\pi$ and at least one of them is nonvoid, or there are some $t,s$ with $\pi(t)=s$, $1<t<b$ and $b+1<s<m$.

Let us suppose that the set $\{2,3,\dots,b-1\}$ is nonvoid and invariant under $\pi$. Since $\pi$ does not have any blocks with consecutive elements, the restriction of $\pi$ to the set $\{2,3,\dots,b-1\}$ contains at least one crossing $(a^{\prime},b^{\prime},c^{\prime},d^{\prime})$ (see Figure 4 below).

Figure 4.

Applying parts (i), then (ii) of Lemma 2.5, we have that:

\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}(\{1,b,b+1,m\})\geq\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}([a^{\prime}]\cup\{b,b+1,m\})\geq\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}([a^{\prime}]\cup\{c^{\prime},b,b+1,m\}).

Denoting $B=[c^{\prime}]\cup\{b,b+1,m\}$ and applying again Lemma 2.5(i), we further obtain

\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}(B\setminus\{b^{\prime}\})\leq\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}([a^{\prime}]\cup\{c^{\prime},b,b+1,m\}),

and another application of Lemma 2.5(iii) gives that

\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}(B)\leq\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}(B\setminus\{b^{\prime}\})-1\leq\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}(\{1,b,b+1,m\})-1<-\frac{3}{2}+\theta,

hence the conclusion. The case when $\{b+2,b+3,\dots,m-1\}$ is nonvoid is similar.

Finally, let us suppose that there are some $t,s$ with $\pi(t)=s$, $1<t<b$ and $b+1<s<m$ (see Figure 5 below).

Figure 5.

Applying part (i), then part (iii) of Lemma 2.5, we obtain that

-\frac{1}{2}+\theta>\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}(\{1,b,b+1,m\})\geq\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}(([b+1]\setminus\{t\})\cup\{m\})

respectively that

\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}(([b+1]\setminus\{t\})\cup\{m\})\geq\mathfrak{a}_{\pi,\overrightarrow{\sigma_{N}}}([b+1]\cup\{m\})+1,

hence the conclusion. ∎

4. Asymptotic Infinitesimal Freeness Relations

In [24], it is shown that $G_{N}^{\sigma_{N}}$ and $G_{N}^{\tau_{N}}$ are asymptotically circular and asymptotically free whenever the pair of permutation sequences $(\sigma_{N})_{N}$ and $(\tau_{N})_{N}$ satisfies certain conditions. Similar properties for Haar matrices were studied in [22], where we also allowed the sequences of permutations to be random. One of the consequences of [22] (not stated explicitly anywhere) is that the conditions from [24] hold almost surely, which implies that for almost all pairs of permutation sequences $(\sigma_{N})_{N}$ and $(\tau_{N})_{N}$, the matrices $G_{N}^{\sigma_{N}}$ and $G_{N}^{\tau_{N}}$ are asymptotically circular and asymptotically free.

In this section, we show that the property also holds on the level of infinitesimal distributions. More precisely, we show the following theorem:

Theorem 4.1.

Suppose that $G_{N}$ is a Gaussian random matrix for each positive integer $N$. Moreover, assume that the sequences of uniformly random permutations $\big(\sigma_{N}\big)_{N}$ and $\big(\tau_{N}\big)_{N}$ from $\mathcal{S}([N]^{2})$ are independent from each other and independent from the sequence $(G_{N})_{N}$. Then, almost surely, $G_{N}^{\sigma_{N}}$ and $G_{N}^{\tau_{N}}$ are asymptotically circular with zero infinitesimal distribution and asymptotically infinitesimally free from each other and from $G_{N}$.

Remark 4.2.

Let us clarify what we mean by the “almost sure” statement above. We consider asymptotic freeness with respect to the sequence of functionals $(1/N)\,\mathbb{E}\circ\mathrm{Tr}$, i.e. we take the expectation of the normalized trace. We will study, as in the previous section, the quantity $\mathcal{V}_{\overrightarrow{\sigma_{N}}}(\pi)$ defined as in equation (5), which becomes a random variable when we let $\overrightarrow{\sigma_{N}}$ be a sequence of random permutations. Hence the almost sure limit here refers to the limit with probability one with respect to the uniformly random permutations.

The proof of the theorem above is very similar to the argument from the previous section. More precisely, in equation (5) we consider that each $\sigma_{k,N}$ is one of the following permutations: $\text{Id}_{N},\sigma_{N},\top\circ\sigma_{N}\circ\top,\tau_{N},\top\circ\tau_{N}\circ\top$, and we prove results similar to the ones from Section 3.

Lemma 4.3.

Suppose that $\big(\omega_{N}\big)_{N}$ is a given sequence of permutations with $\omega_{N}\in\mathcal{S}([N]^{2})$ for each $N$. If $\theta>0$ is a constant, then for almost all $(\sigma_{N})_{N}$ we have that:

  1. (i)

    limNN(12+θ)|{(i,j)[N]2:ωN(i,j)=σN(j,i)}|=0\displaystyle\lim_{N\rightarrow\infty}N^{-(\frac{1}{2}+\theta)}\cdot|\{(i,j)\in[N]^{2}:\ \omega_{N}(i,j)=\sigma_{N}(j,i)\}|=0

  2. (ii)

\lim_{N\rightarrow\infty}N^{-\theta}\cdot\sup_{1\leq i\leq N}\big|\{(j,k)\in[N]^{2}:\ \omega_{N}(i,j)=\sigma_{N}(j,k)\}\big|=0.

Proof.

For part (i), given i,j[N]i,j\in[N], denote by Ii,jI_{i,j} the random variable on 𝒮([N]2)\mathcal{S}([N]^{2}) given by Ii,j(σ)={1 if σ(i,j)=ωN(j,i)0 if σ(i,j)ωN(j,i).I_{i,j}(\sigma)=\left\{\begin{array}[]{ll}1&\textrm{ if }\sigma(i,j)=\omega_{N}(j,i)\\ 0&\textrm{ if }\sigma(i,j)\neq\omega_{N}(j,i).\end{array}\right.

As in the proof of Lemma 3.2(i), it suffices to show that the sums i,j=1N𝔼(Ii,j)\displaystyle\sum_{i,j=1}^{N}\mathbb{E}(I_{i,j}) and 1i,j,k,lN(i,j){(k,l),(l,k)}𝔼(Ii,jIk,l)\displaystyle\sum_{\begin{subarray}{c}1\leq i,j,k,l\leq N\\ (i,j)\notin\{(k,l),(l,k)\}\end{subarray}}\mathbb{E}(I_{i,j}I_{k,l}) are bounded in NN.

For I_{i,j}(\sigma)\neq 0 there is only one possible choice for \sigma(i,j) and (N^{2}-1)! possibilities for the rest of the values of \sigma, therefore

i,j=1N𝔼(Ii,j)N21(N21)!N2!=1.\displaystyle\sum_{i,j=1}^{N}\mathbb{E}(I_{i,j})\leq N^{2}\frac{1\cdot(N^{2}-1)!}{N^{2}!}=1.

If (i,j)\notin\{(k,l),(l,k)\}, then for I_{i,j}(\sigma)I_{k,l}(\sigma)\neq 0 there is only one possible choice for \sigma(i,j) and for \sigma(k,l), and (N^{2}-2)! possible choices for the rest of the values of \sigma, therefore

1i,j,k,lN(i,j){(k,l),(l,k)}𝔼(Ii,jIk,l)N2(N22)1(N22)!N2!N1,\displaystyle\sum_{\begin{subarray}{c}1\leq i,j,k,l\leq N\\ (i,j)\notin\{(k,l),(l,k)\}\end{subarray}}\mathbb{E}(I_{i,j}I_{k,l})\leq N^{2}(N^{2}-2)\frac{1\cdot(N^{2}-2)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}1,

and the conclusion follows.
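The first-moment bound above is easy to test numerically. The following is a minimal Monte Carlo sketch (an illustration only, not part of the argument), which encodes a permutation of [N]^{2} as a permutation of the indices \{0,\dots,N^{2}-1\} via (i,j)\mapsto iN+j; the expected number of coincidences \sigma(i,j)=\omega_{N}(j,i) stays close to 1 for every N, in agreement with the estimate \sum_{i,j}\mathbb{E}(I_{i,j})\leq 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_coincidences(N, trials=2000):
    # Average of |{(i,j) : sigma(i,j) = omega(j,i)}| over uniformly random sigma.
    omega = rng.permutation(N * N)        # a fixed permutation omega_N of [N]^2
    idx = np.arange(N * N)                # (i,j) encoded as i*N + j
    i, j = divmod(idx, N)
    transposed = j * N + i                # index of the pair (j,i)
    counts = []
    for _ in range(trials):
        sigma = rng.permutation(N * N)    # uniformly random permutation of [N]^2
        counts.append(np.sum(sigma == omega[transposed]))
    return float(np.mean(counts))

print(mean_coincidences(10), mean_coincidences(30))   # both close to 1
```

Since the count is O(1) in expectation, Markov's inequality (together with Lemma 2.6) yields the almost sure o(N^{\frac{1}{2}+\theta}) bound of part (i).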

For part (ii), for a fixed a[N]a\in[N] denote by Fi,jF_{i,j} the random variable on 𝒮([N]2)\mathcal{S}([N]^{2}) given by Fi,j(σ)={1 if σ(i,j)=ωN(a,i)0 if σ(i,j)ωN(a,i)F_{i,j}(\sigma)=\left\{\begin{array}[]{ll}1&\textrm{ if }\sigma(i,j)=\omega_{N}(a,i)\\ 0&\textrm{ if }\sigma(i,j)\neq\omega_{N}(a,i)\end{array}\right. and, as in the proof of Lemma 3.2(ii), it suffices to show the boundedness (as NN\rightarrow\infty) of the expression

𝔼(DFi1,j1Fis,js)\mathbb{E}\big{(}\sum_{D}F_{i_{1},j_{1}}\cdots F_{i_{s},j_{s}}\big{)}

where ss is some positive integer and again

D={(i1,j1,,is,js)[N]2s:(i1,j1),,(is,js) distinct }.\displaystyle D=\{(i_{1},j_{1},\dots,i_{s},j_{s})\in[N]^{2s}:\ (i_{1},j_{1}),\dots,(i_{s},j_{s})\textrm{ distinct }\}.

If F_{i_{t},j_{t}}\neq 0 for each 1\leq t\leq s, then the s-tuple (\sigma(i_{1},j_{1}),\dots,\sigma(i_{s},j_{s})) must coincide with the s-tuple (\omega_{N}(a,i_{1}),\dots,\omega_{N}(a,i_{s})), and there are at most (N^{2}-s)! choices for the rest of the values of \sigma. Therefore

\mathbb{E}\big(\sum_{D}F_{i_{1},j_{1}}\cdots F_{i_{s},j_{s}}\big)\leq\frac{1}{N^{2}!}\cdot N^{2s}\cdot(N^{2}-s)!\xrightarrow[N\rightarrow\infty]{}1

hence the conclusion. ∎

Lemma 4.4.

Let \theta>0 be a constant. The following relations hold true almost surely for sequences (\sigma_{N})_{N} with each \sigma_{N} a uniformly random permutation from \mathcal{S}([N]^{2}):

  1. (i)

    if (fN)N,(gN)N(f_{N})_{N},(g_{N})_{N} are two given sequences of maps, fN,gN:[N]2[N]2f_{N},g_{N}:[N]^{2}\rightarrow[N]^{2}, then

    limNN(1+θ)|{(i,j):σN(fN(i,j))=gN(i,j)}|=0\displaystyle\lim_{N\rightarrow\infty}N^{-(1+\theta)}\cdot\big{|}\{(i,j):\ \sigma_{N}(f_{N}(i,j))=g_{N}(i,j)\}\big{|}=0
  2. (ii)

    if (ϕN)N,(ψN)N(\phi_{N})_{N},(\psi_{N})_{N} are sequences of maps, ϕN,ψN:[N]3[N]2\phi_{N},\psi_{N}:[N]^{3}\rightarrow[N]^{2}, such that (i,a,b)=(i,a,b)(i,a,b)=(i^{\prime},a^{\prime},b^{\prime}) whenever ϕN(i,j,a)=ϕN(i,j,a)\phi_{N}(i,j,a)=\phi_{N}(i^{\prime},j^{\prime},a^{\prime}) and ψN(i,j,b)=ψN(i,j,b)\psi_{N}(i,j,b)=\psi_{N}(i^{\prime},j^{\prime},b^{\prime}), then

\lim_{N\rightarrow\infty}N^{-(\frac{5}{2}+\theta)}\cdot\big|\{(i,j,a,b):\ \sigma_{N}(\phi_{N}(i,j,a))=\psi_{N}(i,j,b)\}\big|=0.
Proof.

Considering the random variable Ii,j(σ)={1 if σ(fN(i,j))=gN(i,j)0 otherwise I_{i,j}(\sigma)=\left\{\begin{array}[]{ll}1&\textrm{ if }\sigma(f_{N}(i,j))=g_{N}(i,j)\\ 0&\textrm{ otherwise }\end{array}\right. and using Markov’s inequality, we have that

\mathbb{P}\big(\big|\{(i,j):\ \sigma_{N}(f_{N}(i,j))=g_{N}(i,j)\}\big|>\varepsilon\cdot N^{1+\theta}\big)<\frac{\mathbb{E}\big(\sum_{i,j\in[N]}I_{i,j}\big)}{\varepsilon\cdot N^{1+\theta}}

so by Lemma 2.6 it suffices to show that 𝔼(i,j[N]Ii,j)=O(N0)\displaystyle\mathbb{E}\big{(}\sum_{i,j\in[N]}I_{i,j}\big{)}=O(N^{0}).

For Ii,j(σ)=1I_{i,j}(\sigma)=1, there is at most one possible choice for σ(fN(i,j))\sigma(f_{N}(i,j)) and at most (N21)!(N^{2}-1)! possible choices for the rest of the values of σ\sigma, hence

𝔼(i,j[N]Ii,j)N21(N21)!N2!N1.\displaystyle\mathbb{E}\big{(}\sum_{i,j\in[N]}I_{i,j}\big{)}\leq N^{2}\cdot\frac{1\cdot(N^{2}-1)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}1.

Similarly, for part (ii) considering the random variable

Ii,j,a,b(σ)={1 if σ(ϕN(i,j,a))=ψN(i,j,b)0 otherwise I_{i,j,a,b}(\sigma)=\left\{\begin{array}[]{ll}1&\textrm{ if }\sigma(\phi_{N}(i,j,a))=\psi_{N}(i,j,b)\\ 0&\textrm{ otherwise }\end{array}\right.

and applying again Markov’s inequality, we have that

(|{(i,j,a,b):σN(ϕN(i,j,a))=ψN(i,j,b)}|>εN52+θ)<𝔼((i,j,a,bIi,j,a,b)2)ε2N5+2θ\displaystyle\mathbb{P}\big{(}\big{|}\{(i,j,a,b):\sigma_{N}(\phi_{N}(i,j,a))=\psi_{N}(i,j,b)\}\big{|}>\varepsilon\cdot N^{\frac{5}{2}+\theta}\big{)}<\frac{\displaystyle\mathbb{E}\big{(}(\sum_{i,j,a,b}I_{i,j,a,b})^{2}\big{)}}{\varepsilon^{2}\cdot N^{5+2\theta}}

so by Lemma 2.6 it suffices to show that 𝔼((i,j,a,bIi,j,a,b)2)=O(N4)\displaystyle\mathbb{E}\big{(}(\sum_{i,j,a,b}I_{i,j,a,b})^{2}\big{)}=O(N^{4}).

For Ii,j,a,b(σ)=1I_{i,j,a,b}(\sigma)=1 there is only one possible choice for σ(ϕN(i,j,a))\sigma(\phi_{N}(i,j,a)) and (N21)!(N^{2}-1)! possible choices for the other values of σ\sigma, therefore

\frac{1}{N^{4}}\mathbb{E}\big(\sum_{i,j,a,b}I_{i,j,a,b}^{2}\big)=\frac{1}{N^{4}}\mathbb{E}\big(\sum_{i,j,a,b}I_{i,j,a,b}\big)\leq\frac{1}{N^{4}}\cdot N^{4}\cdot\frac{(N^{2}-1)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}0.

Denote D={(i,j,a,b,i,j,a,b):(i,a,b)(i,a,b)}D=\{(i,j,a,b,i^{\prime},j^{\prime},a^{\prime},b^{\prime}):\ (i,a,b)\neq(i^{\prime},a^{\prime},b^{\prime})\}. From the property of ϕN\phi_{N} and ψN\psi_{N}, it follows that if (i,j,a,b,i,j,a,b)D(i,j,a,b,i^{\prime},j^{\prime},a^{\prime},b^{\prime})\in D and ϕN(i,j,a)=ϕN(i,j,a)\phi_{N}(i,j,a)=\phi_{N}(i^{\prime},j^{\prime},a^{\prime}) then Ii,j,a,bIi,j,a,b=0I_{i,j,a,b}I_{i^{\prime},j^{\prime},a^{\prime},b^{\prime}}=0. Hence, if (i,j,a,b,i,j,a,b)D(i,j,a,b,i^{\prime},j^{\prime},a^{\prime},b^{\prime})\in D and the relation Ii,j,a,bIi,j,a,b(σ)0I_{i,j,a,b}\cdot I_{i^{\prime},j^{\prime},a^{\prime},b^{\prime}}(\sigma)\neq 0 holds true, then there is one possible choice for σ(ϕN(i,j,a))\sigma(\phi_{N}(i,j,a)) and σ(ϕN(i,j,a))\sigma(\phi_{N}(i^{\prime},j^{\prime},a^{\prime})) and (N22)!(N^{2}-2)! possible choices for the rest of the values of σ\sigma. Therefore

1N4𝔼(DIi,j,a,bIi,j,a,b)1N4N8(N22)!N2!N1.\displaystyle\frac{1}{N^{4}}\mathbb{E}\big{(}\sum_{D}I_{i,j,a,b}I_{i^{\prime},j^{\prime},a^{\prime},b^{\prime}}\big{)}\leq\frac{1}{N^{4}}\cdot N^{8}\cdot\frac{(N^{2}-2)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}1.

On the other hand, for Ii,j,a,bIi,j,a,b(σ)0I_{i,j,a,b}\cdot I_{i,j^{\prime},a,b}(\sigma)\neq 0, there is one possible choice for σ(ϕN(i,j,a))\sigma(\phi_{N}(i,j,a)) and at most (N21)!(N^{2}-1)! possible choices for the other values of σ\sigma, hence

1N4𝔼(i,a,b,j,jIi,j,a,bIi,j,a,b)1N4N5(N21)!N2!N0,\displaystyle\frac{1}{N^{4}}\mathbb{E}\big{(}\sum_{i,a,b,j,j^{\prime}}I_{i,j,a,b}\cdot I_{i,j^{\prime},a,b}\big{)}\leq\frac{1}{N^{4}}\cdot N^{5}\cdot\frac{(N^{2}-1)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}0,

and property (ii) follows. ∎

Lemma 4.5.

The following relations hold true almost surely for sequences (\sigma_{N})_{N} with each \sigma_{N} a uniformly random permutation from \mathcal{S}([N]^{2}) (here \theta>0 is a fixed constant):

  1. (i)

    Suppose that (fN)N(f_{N})_{N}, (gN)N(g_{N})_{N} are two given sequences of maps and (ηN)N(\eta_{N})_{N} is a sequence of permutations such that fN,gN:[N]2[N]2f_{N},g_{N}:[N]^{2}\rightarrow[N]^{2} and ηN𝒮([N]2).\eta_{N}\in\mathcal{S}([N]^{2}). Then

\lim_{N\rightarrow\infty}N^{-(1+\theta)}\cdot\big|\{(i,j)\in[N]^{2}:\ f_{N}(i,j)\neq g_{N}(i,j)\textrm{ and }\sigma_{N}(f_{N}(i,j))=\eta_{N}\circ\sigma_{N}(g_{N}(i,j))\}\big|=0.
  2. (ii)

    Suppose that (hN)N(h_{N})_{N} is a sequence of maps and (ωN)N(\omega_{N})_{N}, (ηN)N(\eta_{N})_{N} are sequences of permutations, hN:[N]2[N]h_{N}:[N]^{2}\rightarrow[N] and ωN,ηN𝒮([N]2)\omega_{N},\eta_{N}\in\mathcal{S}([N]^{2}) with (ωN)N(\omega_{N})_{N} either the identity permutation or the matrix transpose. Then

\lim_{N\rightarrow\infty}N^{-(\frac{5}{2}+\theta)}\cdot\big|\{(i,j,a,b)\in[N]^{4}:\ \omega_{N}(b,i)\neq(a,h_{N}(i,j))\textrm{ and }\sigma_{N}(a,h_{N}(i,j))=\eta_{N}\circ\sigma_{N}\circ\omega_{N}(b,i)\}\big|=0.
Proof.

For part (i), denote D={(i,j)[N]2:fN(i,j)gN(i,j)}D=\{(i,j)\in[N]^{2}:\ f_{N}(i,j)\neq g_{N}(i,j)\} and consider the random variable on 𝒮([N]2)\mathcal{S}([N]^{2}) given by

Ii,j(σ)={1if σ(fN(i,j))=ηNσ(gN(i,j))0otherwise.I_{i,j}(\sigma)=\left\{\begin{array}[]{ll}1&\textrm{if }\sigma(f_{N}(i,j))=\eta_{N}\circ\sigma(g_{N}(i,j))\\ 0&\textrm{otherwise}.\end{array}\right.

Via Markov's inequality and Lemma 2.6 it suffices to show that \mathbb{E}\big(\sum_{(i,j)\in D}I_{i,j}\big)=O(N^{0}).

For I_{i,j}(\sigma)=1, there are N^{2} possible choices for \sigma(f_{N}(i,j)), each giving one possible choice for \sigma(g_{N}(i,j)), and (N^{2}-2)! possible choices for the rest of the values of \sigma. Therefore:

𝔼((i,j)DIi,j)<N2N2(N22)!N2!N1,\displaystyle\mathbb{E}\big{(}\sum_{(i,j)\in D}I_{i,j}\big{)}<N^{2}\cdot\frac{N^{2}\cdot(N^{2}-2)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}1,

hence the conclusion.

For part (ii), denote V={(i,j,a,b)[N]4:ωN(b,i)(a,hN(i,j))}V=\{(i,j,a,b)\in[N]^{4}:\ \omega_{N}(b,i)\neq(a,h_{N}(i,j))\} and consider the random variable on 𝒮([N]2)\mathcal{S}([N]^{2}) given by

Fi,j,a,b(σ)={1 if σ(a,hN(i,j))=ηNσωN(b,i)0 otherwise. \displaystyle F_{i,j,a,b}(\sigma)=\left\{\begin{array}[]{ll}1&\textrm{ if }\sigma(a,h_{N}(i,j))=\eta_{N}\circ\sigma\circ\omega_{N}(b,i)\\ 0&\textrm{ otherwise. }\end{array}\right.

With these notations, it suffices to show that 𝔼((VFi,j,a,b)2)=O(N4)\displaystyle\mathbb{E}\big{(}(\sum_{V}F_{i,j,a,b})^{2}\big{)}=O(N^{4}).

To simplify the writing, we shall use the notations v\vec{v} (respectively v\vec{v}^{\prime}) for (i,j,a,b)(i,j,a,b) (respectively (i,j,a,b)(i^{\prime},j^{\prime},a^{\prime},b^{\prime})); also, let Z(v)={(a,hN(i,j)),ωN(b,i)}Z(\vec{v})=\{(a,h_{N}(i,j)),\omega_{N}(b,i)\} and introduce the sets W={(v,v)V×V:vv}W=\{(\vec{v},\vec{v}^{\prime})\in V\times V:\ \vec{v}\neq\vec{v}^{\prime}\}, and, for s=0,1,2s=0,1,2, let

W_{s}=\{(\vec{v},\vec{v}^{\prime})\in W:\ \big|Z(\vec{v})\cap Z(\vec{v}^{\prime})\big|=s\}.

We have that

𝔼((VFi,j,a,b)2)\displaystyle\mathbb{E}\big{(}(\sum_{V}F_{i,j,a,b})^{2}\big{)} =𝔼(VFi,j,a,b2)+𝔼(WFi,j,a,bFi,j,a,b)\displaystyle=\mathbb{E}\big{(}\sum_{V}F_{i,j,a,b}^{2}\big{)}+\mathbb{E}\big{(}\sum_{W}F_{i,j,a,b}F_{i^{\prime},j^{\prime},a^{\prime},b^{\prime}}\big{)}
=𝔼(VFi,j,a,b)+s=02𝔼(WsFi,j,a,bFi,j,a,b).\displaystyle=\mathbb{E}\big{(}\sum_{V}F_{i,j,a,b}\big{)}+\sum_{s=0}^{2}\mathbb{E}\big{(}\sum_{W_{s}}F_{i,j,a,b}F_{i^{\prime},j^{\prime},a^{\prime},b^{\prime}}\big{)}.

Since the random variables Fi,j,a,bF_{i,j,a,b} are identically distributed,

1N4𝔼(VFi,j,a,b)=|V|1N4𝔼(Fi,j,a,b)<N41N4=1.\displaystyle\frac{1}{N^{4}}\mathbb{E}\big{(}\sum_{V}F_{i,j,a,b}\big{)}=\big{|}V\big{|}\cdot\frac{1}{N^{4}}\mathbb{E}\big{(}F_{i,j,a,b}\big{)}<N^{4}\cdot\frac{1}{N^{4}}=1.

If (v,v)W0(\vec{v},\vec{v}^{\prime})\in W_{0}, then there are at most N2N^{2} possible choices for σ(a,hN(i,j))\sigma(a,h_{N}(i,j)), each giving one possible choice for σ(ωN(b,i)))\sigma(\omega_{N}(b,i))\big{)}, less than N2N^{2} possible choices for the pair (σ(a,hN(i,j)),σ(ωN(b,i)))\big{(}\sigma(a^{\prime},h_{N}(i^{\prime},j^{\prime})),\sigma(\omega_{N}(b^{\prime},i^{\prime}))\big{)} and at most (N24)!(N^{2}-4)! possible choices for the rest of the values of σ\sigma. Therefore:

1N4𝔼(W0Fi,j,a,bFi,j,a,b)1N4N8N4(N24)!N2!N1.\displaystyle\frac{1}{N^{4}}\cdot\mathbb{E}\big{(}\sum_{W_{0}}F_{i,j,a,b}F_{i^{\prime},j^{\prime},a^{\prime},b^{\prime}}\big{)}\leq\frac{1}{N^{4}}\cdot N^{8}\cdot\frac{N^{4}\cdot(N^{2}-4)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}1.

If (\vec{v},\vec{v}^{\prime})\in W_{1} then there are at most N^{2} possible choices for \sigma(a,h_{N}(i,j)), each giving one choice for \sigma on Z(\vec{v})\cup Z(\vec{v}^{\prime}), and at most (N^{2}-3)! possible choices for the rest of the values of \sigma. Also, since \omega_{N} is either the identity or the matrix transpose, we have that |W_{1}|\leq N^{7}. Therefore

1N4𝔼(W1Fi,j,a,bFi,j,a,b)1N4N7N2(N23)!N2!N0.\displaystyle\frac{1}{N^{4}}\cdot\mathbb{E}\big{(}\sum_{W_{1}}F_{i,j,a,b}F_{i^{\prime},j^{\prime},a^{\prime},b^{\prime}}\big{)}\leq\frac{1}{N^{4}}\cdot N^{7}\cdot\frac{N^{2}\cdot(N^{2}-3)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}0.

If (\vec{v},\vec{v}^{\prime})\in W_{2} then there are at most N^{2} possible choices for \sigma(a,h_{N}(i,j)), each giving one choice for \sigma on Z(\vec{v})\cup Z(\vec{v}^{\prime}), and at most (N^{2}-2)! possible choices for the rest of the values of \sigma. Moreover, if (a,h_{N}(i,j))=(a^{\prime},h_{N}(i^{\prime},j^{\prime})) and \omega_{N}(b,i)=\omega_{N}(b^{\prime},i^{\prime}), then (a,b,i)=(a^{\prime},b^{\prime},i^{\prime}), and |W_{2}|\leq N^{5}. If (a,h_{N}(i,j))=\omega_{N}(b^{\prime},i^{\prime}) and \omega_{N}(b,i)=(a^{\prime},h_{N}(i^{\prime},j^{\prime})), then a=b^{\prime} and b=a^{\prime} when \omega_{N} is the identity, respectively a=i^{\prime} and i=a^{\prime} when \omega_{N} is the matrix transpose. Hence |W_{2}|\leq N^{6}. We have that

1N4𝔼(W2Fi,j,a,bFi,j,a,b)1N4N6N2(N22)!N2!N1\displaystyle\frac{1}{N^{4}}\cdot\mathbb{E}\big{(}\sum_{W_{2}}F_{i,j,a,b}F_{i^{\prime},j^{\prime},a^{\prime},b^{\prime}}\big{)}\leq\frac{1}{N^{4}}\cdot N^{6}\cdot\frac{N^{2}\cdot(N^{2}-2)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}1

and the conclusion follows. ∎
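As with Lemma 4.3, the first-moment bound in part (i) can be checked numerically. Below is a minimal sketch (an illustration only) with the hypothetical choices f_{N}(i,j)=(i,j) and g_{N}(i,j)=(j,(i+1)\bmod N), for which f_{N}(i,j)\neq g_{N}(i,j) everywhere; the expected number of solutions of \sigma(f_{N}(i,j))=\eta_{N}\circ\sigma(g_{N}(i,j)) remains close to 1 as N grows, consistent with the bound \mathbb{E}\big(\sum_{(i,j)\in D}I_{i,j}\big)<N^{2}\cdot\frac{N^{2}(N^{2}-2)!}{N^{2}!}.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_solutions(N, trials=2000):
    # Average of |{(i,j) : sigma(f(i,j)) = eta(sigma(g(i,j)))}| over random sigma,
    # for the hypothetical maps f(i,j) = (i,j) and g(i,j) = (j, (i+1) mod N),
    # which satisfy f(i,j) != g(i,j) for every (i,j) when N >= 2.
    idx = np.arange(N * N)                # (i,j) encoded as i*N + j
    i, j = divmod(idx, N)
    f = idx
    g = j * N + (i + 1) % N
    eta = rng.permutation(N * N)          # a fixed permutation eta_N of [N]^2
    counts = []
    for _ in range(trials):
        sigma = rng.permutation(N * N)
        counts.append(np.sum(sigma[f] == eta[sigma[g]]))
    return float(np.mean(counts))

print(mean_solutions(10), mean_solutions(30))   # stays O(1) as N grows
```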

Lemma 4.6.

Suppose that (η1,N)N,(η2,N)N,(ωN)N(\eta_{1,N})_{N},(\eta_{2,N})_{N},(\omega_{N})_{N} are given sequences of permutations such that ωN\omega_{N} is either the identity or the matrix transpose. Then, almost surely for (σN)N(\sigma_{N})_{N} we have that

\lim_{N\rightarrow\infty}N^{-(\frac{5}{2}+\theta)}\cdot\big|\{(i,j,k,l,a,b)\in[N]^{6}:\ (a,k)\neq\omega_{N}(k,l)\textrm{ and }\sigma_{N}(a,k)=\eta_{1,N}(b,i),\ \sigma_{N}(\omega_{N}(k,l))=\eta_{2,N}(i,j)\}\big|=0.
Proof.

As in the proof of the preceding Lemma 4.5(ii), we use the notation v=(i,j,k,l,a,b)\vec{v}=(i,j,k,l,a,b), v=(i,j,k,l,a,b)\vec{v}^{\prime}=(i^{\prime},j^{\prime},k^{\prime},l^{\prime},a^{\prime},b^{\prime}), and we let Z(v)={(a,k),ωN(k,l)}Z(\vec{v})=\{(a,k),\omega_{N}(k,l)\} and Z(v)={(a,k),ωN(k,l)}Z(\vec{v}^{\prime})=\{(a^{\prime},k^{\prime}),\omega_{N}(k^{\prime},l^{\prime})\}. Consider the sets ( s=0,1,2s=0,1,2):

V={(i,j,k,l,a,b)[N]6:(a,k)ωN(k,l)}\displaystyle V=\{(i,j,k,l,a,b)\in[N]^{6}:\ (a,k)\neq\omega_{N}(k,l)\}
W={(v,v)V2:vv}\displaystyle W=\{(\vec{v},\vec{v}^{\prime})\in V^{2}:\ \vec{v}\neq\vec{v}^{\prime}\}
Ws={(v,v)W:|Z(v)Z(v)|=s}\displaystyle W_{s}=\{(\vec{v},\vec{v}^{\prime})\in W:\ |Z(\vec{v})\cap Z(\vec{v}^{\prime})|=s\}

Next, consider the random variable Fi,j,k,la,bF^{a,b}_{i,j,k,l} on 𝒮([N]2)\mathcal{S}([N]^{2}) given by

Fi,j,k,la,b(σ)={1, if σ(a,k)=η1(b,i) and σ(ωN(k,l))=η2(i,j)0, otherwise. \displaystyle F^{a,b}_{i,j,k,l}(\sigma)=\left\{\begin{array}[]{ll}1,&\textrm{ if }\sigma(a,k)=\eta_{1}(b,i)\textrm{ and }\sigma(\omega_{N}(k,l))=\eta_{2}(i,j)\\ 0,&\textrm{ otherwise. }\end{array}\right.

Using Lemma 2.6 and Markov's inequality, it suffices to show that

1N4𝔼((VFi,j,k,la,b)2)=O(N0).\displaystyle\frac{1}{N^{4}}\mathbb{E}\big{(}(\sum_{V}F_{i,j,k,l}^{a,b})^{2}\big{)}=O(N^{0}).

For Fi,j,k,la,b(σ)0F^{a,b}_{i,j,k,l}(\sigma)\neq 0, with (i,j,k,l,a,b)V(i,j,k,l,a,b)\in V, there is at most one possible choice for σ(a,k)\sigma(a,k) and σ(ωN(k,l))\sigma(\omega_{N}(k,l)) and (N22)!(N^{2}-2)! possible choices for the rest of the values of σ\sigma. Therefore

\frac{1}{N^{4}}\mathbb{E}\big(\sum_{V}(F_{i,j,k,l}^{a,b})^{2}\big)=\frac{1}{N^{4}}\mathbb{E}\big(\sum_{V}F_{i,j,k,l}^{a,b}\big)\leq\frac{1}{N^{4}}\cdot N^{6}\cdot\frac{(N^{2}-2)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}0.

If (v,v)Ws(\vec{v},\vec{v}^{\prime})\in W_{s}, then for Fi,j,k,la,bFi,j,k,la,b(σ)0F^{a,b}_{i,j,k,l}F^{a^{\prime},b^{\prime}}_{i^{\prime},j^{\prime},k^{\prime},l^{\prime}}(\sigma)\neq 0 there is at most one possible choice for σ(Z(v)Z(v))\sigma\big{(}Z(\vec{v})\cup Z(\vec{v}^{\prime})\big{)} and (N2|Z(v)Z(v)|)!(N^{2}-|Z(\vec{v})\cup Z(\vec{v}^{\prime})|)! possible choices for the rest of the values of σ\sigma. Therefore

1N4𝔼(WsFi,j,k,la,bFi,j,k,la,b)|Ws|N4(N2|Z(v)Z(v)|)!N2!\displaystyle\frac{1}{N^{4}}\mathbb{E}\big{(}\sum_{W_{s}}F^{a,b}_{i,j,k,l}F^{a^{\prime},b^{\prime}}_{i^{\prime},j^{\prime},k^{\prime},l^{\prime}}\big{)}\leq\frac{|W_{s}^{\ast}|}{N^{4}}\cdot\frac{(N^{2}-|Z(\vec{v})\cup Z(\vec{v}^{\prime})|)!}{N^{2}!}

where Ws={(v,v)Ws:Fi,j,k,la,bFi,j,k,la,b0}W_{s}^{\ast}=\{(\vec{v},\vec{v}^{\prime})\in W_{s}:F^{a,b}_{i,j,k,l}F^{a^{\prime},b^{\prime}}_{i^{\prime},j^{\prime},k^{\prime},l^{\prime}}\neq 0\}. Hence it suffices to show that

(12) |Ws|N4+2|Z(v)Z(v)|=N122|Z(v)Z(v)|.\displaystyle|W_{s}^{\ast}|\leq N^{4+2|Z(\vec{v})\cup Z(\vec{v}^{\prime})|}=N^{12-2|Z(\vec{v})\cap Z(\vec{v}^{\prime})|}.

For s=0, relation (12) is trivially verified, as W_{0}^{\ast}\subseteq V\times V\subseteq[N]^{12}. If s\geq 1, then either (a,k)\in Z(\vec{v}^{\prime}) or \omega_{N}(k,l)\in Z(\vec{v}^{\prime}). Suppose that (a,k)\in Z(\vec{v}^{\prime}). If (a,k)=(a^{\prime},k^{\prime}), then F^{a,b}_{i,j,k,l}F^{a^{\prime},b^{\prime}}_{i^{\prime},j^{\prime},k^{\prime},l^{\prime}}\neq 0 gives that \eta_{1}(b,i)=\eta_{1}(b^{\prime},i^{\prime}), so (b,i)=(b^{\prime},i^{\prime}). It follows that (\vec{v},\vec{v}^{\prime}) is uniquely determined by (i,j,k,l,a,b,j^{\prime},l^{\prime}), that is |W_{s}^{\ast}|\leq N^{8}, which implies (12). If (a,k)=\omega_{N}(k^{\prime},l^{\prime}), then again F^{a,b}_{i,j,k,l}F^{a^{\prime},b^{\prime}}_{i^{\prime},j^{\prime},k^{\prime},l^{\prime}}\neq 0 gives that \eta_{1}(b,i)=\eta_{2}(i^{\prime},j^{\prime}), so (i^{\prime},j^{\prime})=\eta_{2}^{-1}\circ\eta_{1}(b,i). It follows that (\vec{v},\vec{v}^{\prime}) is uniquely determined by (i,j,k,l,a,b,a^{\prime},b^{\prime}), that is |W_{s}^{\ast}|\leq N^{8}, which implies (12).

The case ωN(k,l)Z(v)\omega_{N}(k,l)\in Z(\vec{v}^{\prime}) is similar. ∎

Proof of Theorem 4.1 .

From Theorem 3.1, we can assume that G_{N}^{\sigma_{N}} and G_{N}^{\tau_{N}} are both asymptotically circular with individually zero infinitesimal \ast-distribution. Then, using the free moment-cumulant expansion, it suffices to show that \mathcal{V}_{\overrightarrow{\sigma_{N}}}(\pi)=o(N^{-1}) unless \pi is non-crossing and \sigma_{k,N}=\top\circ\sigma_{l,N}\circ\top whenever \pi(k)=l.

As in the proof of Theorem 3.1, possibly after modifying m, we can assume that \pi does not have any blocks of the type (k,k+1) such that \sigma_{k,N}=\top\circ\sigma_{k+1,N}\circ\top. Next, using Lemma 4.3 in the same way Lemma 3.2 was used in the proof of Theorem 3.1, we can furthermore assume that \pi does not have any blocks with consecutive elements. It follows that \pi must have at least one crossing, and, as shown in the proof of Theorem 3.1, we can suppose without loss of generality that there is some b\in[m-2] such that \pi(1)=b+1 and \pi(b)=m.

Let us denote S={(σN)N,(σN)N}S=\{(\sigma_{N})_{N},(\top\circ\sigma_{N}\circ\top)_{N}\}, T={(τN)N,(τN)N}T=\{(\tau_{N})_{N},(\top\circ\tau_{N}\circ\top)_{N}\} and T1={(IdN)N}TT_{1}=\{(\text{Id}_{N})_{N}\}\cup T.

Suppose first that m=4m=4. It suffices to show that the following result holds true for any sequences (σs,N)NST1(\sigma_{s,N})_{N}\in S\cup T_{1}, s=1,,4s=1,\dots,4:

(13)  \big|\{(i,j,k,l)\in[N]^{4}:\ \sigma_{1,N}(i,j)=\top\circ\sigma_{3,N}(k,l)\textrm{ and }\sigma_{2,N}(j,k)=\top\circ\sigma_{4,N}(l,i)\}\big|=o(N^{2}).

Moreover, at least one of the sequences (σs,N)N(\sigma_{s,N})_{N} should be from the set SS and at least one from the set T1T_{1}, so it suffices to show (13) for the following three cases:

  1. (a)

    one of the (σs,N)N(\sigma_{s,N})_{N} is from the set SS and three are from T1T_{1}

  2. (b)

    two of the (σs,N)N(\sigma_{s,N})_{N} are from the set SS and two are from T1T_{1}

  3. (c)

    three of the (σs,N)N(\sigma_{s,N})_{N} are from the set SS and one is (IdN)N(\text{Id}_{N})_{N}.

For case (a), via a circular permutation of the set {1,2,3,4}\{1,2,3,4\}, we can suppose that (σ2,N)NS(\sigma_{2,N})_{N}\in S. Then, for

f_{N}(i,j)=\left(j,(\pi_{1}\circ\sigma_{3,N}^{-1}\circ\top\circ\sigma_{1,N})(i,j)\right)
g_{N}(i,j)=\top\circ\sigma_{4,N}\left((\pi_{2}\circ\sigma_{3,N}^{-1}\circ\top\circ\sigma_{1,N})(i,j),\,i\right)

Lemma 4.4(i) gives that (13) holds true for any (τN)N(\tau_{N})_{N} and almost surely for (σN)N(\sigma_{N})_{N}.

For case (b), note first that it suffices to show (13) when {i,k}{j,l}=\{i,k\}\cap\{j,l\}=\emptyset. Indeed, if i=ji=j, then there are at most NN possible choices for the pair (i,j)(i,j), each determining at most one possible choice for (k,l)=σ3,N1σ1,N(i,j)(k,l)=\sigma_{3,N}^{-1}\circ\top\circ\sigma_{1,N}(i,j); the other cases are similar.

Next, without loss of generality, we can further assume that (σ1,N)NT1(\sigma_{1,N})_{N}\in T_{1} and (σ2,N)NS(\sigma_{2,N})_{N}\in S.

Suppose that (\sigma_{4,N})_{N}\in S; hence \sigma_{4,N}=\omega_{N}\circ\sigma_{N}\circ\omega_{N}, with \omega_{N} either the identity or the matrix transpose. Let

fN(i,j)=(j,k)=(j,π1σ3,N1σ1,N(i,j))\displaystyle f_{N}(i,j)=(j,k)=\left(j,\pi_{1}\circ\sigma_{3,N}^{-1}\circ\top\circ\sigma_{1,N}(i,j)\right)
gN(i,j)=ωN(l,i)=ωN((π2σ3,N1σ1,N)(i,j),i)\displaystyle g_{N}(i,j)=\omega_{N}(l,i)=\omega_{N}\left((\pi_{2}\circ\sigma_{3,N}^{-1}\circ\top\circ\sigma_{1,N})(i,j),i\right)
ηN=ωN\displaystyle\eta_{N}=\top\circ\omega_{N}

Lemma 4.5(i) gives that (13) holds true if f_{N}(i,j)\neq g_{N}(i,j), so it suffices to show (13) for (i,j,k,l) such that (j,k)=\omega_{N}(l,i), which, since i\neq j, implies that \omega_{N}=\text{Id}_{N}, i=k and j=l. It suffices then to show that

(14) |{(i,j)[N]2:σ1,N(i,j)=σ3,N(i,j)}|=o(N2)\displaystyle\big{|}\{(i,j)\in[N]^{2}:\ \sigma_{1,N}(i,j)=\top\circ\sigma_{3,N}(i,j)\}|=o(N^{2})

holds true almost surely for (σ1,N)N,(σ3,N)N{(IdN)N,(τN)N}(\sigma_{1,N})_{N},(\sigma_{3,N})_{N}\in\{(\text{Id}_{N})_{N},(\tau_{N})_{N}\}.

If one of (σ1,N)N,(σ3,N)N(\sigma_{1,N})_{N},(\sigma_{3,N})_{N} is (τN)N(\tau_{N})_{N}, and the other is (IdN)N(\text{Id}_{N})_{N}, then the conclusion follows from Lemma 4.3(i). If σ1,N=σ3,N\sigma_{1,N}=\sigma_{3,N}, then equation (14) gives that (i,j)σ3,N1({(t,t):t[N]})(i,j)\in\sigma_{3,N}^{-1}(\{(t,t):t\in[N]\}), hence there are at most NN possible choices for (i,j)(i,j), and the conclusion follows.

Finally, suppose that (\sigma_{3,N})_{N}\in S. Then let \sigma_{3,N}=\omega_{N}\circ\sigma_{N}\circ\omega_{N} with (\omega_{N})_{N} either the identity permutation or the matrix transpose. For tuples (i,j,k,l) with j=l, according to Lemma 4.3, the condition \sigma_{1,N}(i,j)=\top\circ\sigma_{3,N}(k,l) is satisfied by o(N^{2}) tuples for any (\sigma_{1,N})_{N} and almost surely for (\sigma_{N})_{N}.

Let D1={(i,j,k,l)[N]4:{i,k}{j,l}= and jl}D_{1}=\{(i,j,k,l)\in[N]^{4}:\ \{i,k\}\cap\{j,l\}=\emptyset\textrm{ and }j\neq l\} and define the random variable

Ii,j,k,l(σ)={1if σ(i,j)=ωNσωN(k,l) and σ(j,k)=σ4,N(l,i)0otherwise.\displaystyle I_{i,j,k,l}(\sigma)=\left\{\begin{array}[]{ll}1&\textrm{if }\sigma(i,j)=\top\circ\omega_{N}\circ\sigma\circ\omega_{N}(k,l)\textrm{ and }\sigma(j,k)=\top\circ\sigma_{4,N}(l,i)\\ 0&\textrm{otherwise}.\end{array}\right.

Using Markov's inequality and Lemma 2.6, the result is implied by

𝔼((i,j,k,l)D1Ii,j,k,l)=O(N0).\mathbb{E}\big{(}\sum_{(i,j,k,l)\in D_{1}}I_{i,j,k,l}\big{)}=O(N^{0}).

On the other hand, for Ii,j,k,l(σ)=1I_{i,j,k,l}(\sigma)=1, there is one possible choice for σ(ωN(k,l))\sigma(\omega_{N}(k,l)) and σ(j,k)\sigma(j,k) and (N22)!(N^{2}-2)! possible choices for the rest of the values of σ\sigma. Therefore

𝔼((i,j,k,l)D1Ii,j,k,l)<N4(N22)!N2!N1.\displaystyle\mathbb{E}\big{(}\sum_{(i,j,k,l)\in D_{1}}I_{i,j,k,l}\big{)}<N^{4}\cdot\frac{(N^{2}-2)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}1.

For case (c), we can suppose, without loss of generality, that \sigma_{1,N}=\text{Id}_{N} and \sigma_{3,N}=\sigma_{N}. That is, we have to show that

(15) |{(i,j,k,l):σN(k,l)=(j,i) and σ2,N(j,k)=σ4,N(l,i)}|=o(N2)\displaystyle\big{|}\{(i,j,k,l):\ \sigma_{N}(k,l)=(j,i)\textrm{ and }\sigma_{2,N}(j,k)=\top\circ\sigma_{4,N}(l,i)\}\big{|}=o(N^{2})

holds true almost surely for (\sigma_{N})_{N}, where \sigma_{2,N}=\omega_{1,N}\circ\sigma_{N}\circ\omega_{1,N} and \sigma_{4,N}=\omega_{2,N}\circ\sigma_{N}\circ\omega_{2,N}, with each (\omega_{s,N})_{N} either the sequence of identity permutations or of matrix transposes.

As discussed above, (15) is trivially verified if i=ji=j, j=kj=k or i=li=l. Furthermore, if k=ik=i, or l=jl=j, then (15) follows applying Lemma 4.3(i) for the condition σN(k,l)=(j,i)\sigma_{N}(k,l)=(j,i). Hence, denoting D2={(i,j,k,l)[N]4:i,j,k,l distinct}D_{2}=\{(i,j,k,l)\in[N]^{4}:\ i,j,k,l\textrm{ distinct}\} it suffices to show that (15) holds true for (i,j,k,l)D2(i,j,k,l)\in D_{2},

Define the random variable on 𝒮([N]2)\mathcal{S}([N]^{2}):

J_{i,j,k,l}(\sigma)=\left\{\begin{array}{ll}1&\textrm{ if }\sigma(k,l)=(j,i)\textrm{ and }\sigma_{2,N}(j,k)=\top\circ\sigma_{4,N}(l,i)\\ 0&\textrm{ otherwise }\end{array}\right.

where \sigma_{2,N}=\omega_{1,N}\circ\sigma\circ\omega_{1,N} and \sigma_{4,N}=\omega_{2,N}\circ\sigma\circ\omega_{2,N}.

As before it suffices to show that 𝔼((i,j,k,l)D2Ji,j,k,l)=O(N0)\displaystyle\mathbb{E}\big{(}\sum_{(i,j,k,l)\in D_{2}}J_{i,j,k,l}\big{)}=O(N^{0}).

Note that if (i,j,k,l)\in D_{2}, then (k,l), \omega_{1,N}(j,k) and \omega_{2,N}(l,i) are distinct. Hence, for J_{i,j,k,l}(\sigma)=1, there is one possible choice for \sigma(k,l), at most N^{2}-1 possible choices for \sigma(\omega_{1,N}(j,k)), one possible choice for \sigma(\omega_{2,N}(l,i)) and (N^{2}-3)! possible choices for the rest of the values of \sigma. Therefore

𝔼((i,j,k,l)D2Ji,j,k,l)<N4(N21)(N23)!N2!N1.\displaystyle\mathbb{E}\big{(}\sum_{(i,j,k,l)\in D_{2}}J_{i,j,k,l}\big{)}<N^{4}\cdot\frac{(N^{2}-1)\cdot(N^{2}-3)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}1.

Next, suppose that m>4m>4. As in the proof of Theorem 3.1, it suffices to show that 𝔞({1,b,b+1,m})<0\mathfrak{a}(\{1,b,b+1,m\})<0, i.e. that the following result holds true for any (σs,N)NST1(\sigma_{s,N})_{N}\in S\cup T_{1}, where s{1,b,b+1,m}s\in\{1,b,b+1,m\}:

(16)  \big|\{(i,j,k,l,a,b)\in[N]^{6}:\ \sigma_{1,N}(i,j)=\top\circ\sigma_{b+1,N}(k,l)\textrm{ and }\sigma_{b,N}(a,k)=\top\circ\sigma_{m,N}(b,i)\}\big|=o(N^{3}).

We shall prove the statement above by analysing the same cases (a), (b) and (c) as in the setting m=4m=4.

For case (a), via a circular permutation of the set [m][m] and taking adjoints, we can suppose that (σb,N)N=(σN)N(\sigma_{b,N})_{N}=(\sigma_{N})_{N}. Putting

\phi(i,j,a)=(a,k)=(a,\pi_{1}\circ\sigma_{b+1,N}^{-1}\circ\top\circ\sigma_{1,N}(i,j))
ψ(i,j,b)\displaystyle\psi(i,j,b) =σm,N(b,i)\displaystyle=\sigma_{m,N}(b,i)

note that \phi(i,j,a)=\phi(i^{\prime},j^{\prime},a^{\prime}) and \psi(i,j,b)=\psi(i^{\prime},j^{\prime},b^{\prime}) imply (i,a,b)=(i^{\prime},a^{\prime},b^{\prime}), and the conclusion follows from Lemma 4.4(ii).

For case (b) we can assume, without loss of generality, that (σ1,N)NT1(\sigma_{1,N})_{N}\in T_{1} and (σb,N)NS(\sigma_{b,N})_{N}\in S. It suffices to distinguish two subcases, when (σb+1,N)NS(\sigma_{b+1,N})_{N}\in S, respectively when (σm,N)NS(\sigma_{m,N})_{N}\in S.

Suppose that (\sigma_{m,N})_{N}\in S, that is \sigma_{m,N}=\omega_{N}\circ\sigma_{N}\circ\omega_{N} with \omega_{N} either the identity or the matrix transpose. Applying Lemma 4.5(ii) for h(i,j)=k=\pi_{1}\circ\sigma_{b+1,N}^{-1}\circ\top\circ\sigma_{1,N}(i,j), we obtain that relation (16) with the extra condition (a,k)\neq\omega_{N}(b,i) is satisfied for all (\sigma_{1,N})_{N},(\sigma_{b+1,N})_{N}\in T_{1} and almost surely for (\sigma_{N})_{N}.

Assume that (a,k)=\omega_{N}(b,i). If \omega_{N}=\text{Id}_{N}, then the equality \sigma_{b,N}(a,k)=\top\circ\sigma_{m,N}(b,i) gives that \sigma_{N}(a,k)=\top\circ\sigma_{N}(a,k), hence (a,k)\in\sigma_{N}^{-1}\{(s,s):s\in[N]\}, so there are at most N^{2} possible choices for (a,k,l), which uniquely determine (i,j,k,l,a,b). If \omega_{N} is the matrix transpose, then the equality \sigma_{b,N}(a,k)=\top\circ\sigma_{m,N}(b,i) gives that \sigma_{N}(a,k)=\sigma_{N}(k,a), that is a=k, so again there are at most N^{2} possible choices for the triple (a,k,l).

Suppose that (σb+1,N)NS(\sigma_{b+1,N})_{N}\in S, that is σb+1,N=ωNσNωN\sigma_{b+1,N}=\omega_{N}\circ\sigma_{N}\circ\omega_{N} with ωN\omega_{N} either the identity or the matrix transpose. Applying Lemma 4.6, we obtain that relation (16) with the extra condition (a,k)ωN(k,l)(a,k)\neq\omega_{N}(k,l) is satisfied for all (σ1,N)N,(σb+1,N)NT1(\sigma_{1,N})_{N},(\sigma_{b+1,N})_{N}\in T_{1} and almost surely for (σN)NS(\sigma_{N})_{N}\in S.

Assume that (a,k)=\omega_{N}(k,l). Then (a,k,l) is uniquely determined by (a,k). But (i,j,k,l,a,b) is uniquely determined by (a,k,l), so (16) follows.

For case (c), we can assume that \sigma_{1,N}=\text{Id}_{N} and \sigma_{b+1,N}=\sigma_{N}. That is, we have to show that the following relation holds true almost surely for (\sigma_{N})_{N}:

(17)  \big|\{(i,j,k,l,a,b)\in[N]^{6}:\ \sigma_{N}(k,l)=(j,i)\textrm{ and }\sigma_{b,N}(a,k)=\top\circ\sigma_{m,N}(b,i)\}\big|=o(N^{3})

where σb,N=ω1,NσNω1,N\sigma_{b,N}=\omega_{1,N}\circ\sigma_{N}\circ\omega_{1,N} and σm,N=ω2,NσNω2,N\sigma_{m,N}=\omega_{2,N}\circ\sigma_{N}\circ\omega_{2,N} with (ωs,N)N(\omega_{s,N})_{N} either the identity permutation or the matrix transpose.

Note that, given (\sigma_{N})_{N}, the tuple (i,j,k,l,a,b) is uniquely determined by either of the triples (k,l,a) and (k,l,b); hence property (17) is verified whenever (k,l)\in\{\omega_{1,N}(a,k),\omega_{2,N}(b,i)\}.

If \omega_{1,N}(a,k)=\omega_{2,N}(b,i)=(u,v), then \sigma_{b,N}(a,k)=\top\circ\sigma_{m,N}(b,i) gives that \omega_{1,N}(\sigma_{N}(u,v))=\top\circ\omega_{2,N}(\sigma_{N}(u,v)), so either \omega_{1,N}=\top\circ\omega_{2,N} or \top\circ\sigma_{N}(u,v)=\sigma_{N}(u,v). But \omega_{1,N}=\top\circ\omega_{2,N} gives (k,a)=(b,i), so (i,j,k,l,a,b) is uniquely determined by (k,l). Also, if \top\circ\sigma_{N}(u,v)=\sigma_{N}(u,v), then there are at most N possible choices for \sigma_{N}(u,v), that is for (a,k), so there are at most N^{2} possible choices for (k,l,a).

Denote V={(i,j,k,l,a,b)[N]6:(k,l),ω1,N(a,k),ω2,N(b,i) distinct}V=\{(i,j,k,l,a,b)\in[N]^{6}:\ (k,l),\omega_{1,N}(a,k),\omega_{2,N}(b,i)\textrm{ distinct}\}.

Consider the random variable

Ji,j,k,la,b(σ)={1 if σ(k,l)=(j,i) and σ1(a,k)=σ2(b,i)0 otherwise \displaystyle J^{a,b}_{i,j,k,l}(\sigma)=\left\{\begin{array}[]{ll}1&\textrm{ if }\sigma(k,l)=(j,i)\textrm{ and }\sigma_{1}(a,k)=\top\circ\sigma_{2}(b,i)\\ 0&\textrm{ otherwise }\end{array}\right.

where \sigma_{1}=\omega_{1,N}\circ\sigma\circ\omega_{1,N} and \sigma_{2}=\omega_{2,N}\circ\sigma\circ\omega_{2,N}, and applying Markov's inequality and Lemma 2.6, it suffices to show that

(18) N4𝔼((VJi,j,k,la,b)2)=O(N0).\displaystyle N^{-4}\mathbb{E}\big{(}(\sum_{V}J_{i,j,k,l}^{a,b})^{2}\big{)}=O(N^{0}).

For Ji,j,k,la,b(σ)0J^{a,b}_{i,j,k,l}(\sigma)\neq 0 there is one possible choice for σ(k,l)\sigma(k,l), less than N2N^{2} possible choices for σ(ω1,N(a,k))\sigma(\omega_{1,N}(a,k)), one choice for σ(ω2,N(b,i))\sigma(\omega_{2,N}(b,i)) and (N23)!(N^{2}-3)! possible choices for the other values of σ\sigma. Therefore

N4𝔼(V(Ji,j,k,la,b)2)=N4𝔼(VJi,j,k,la,b)<N4N6N2(N23)!N2!N0.\displaystyle N^{-4}\cdot\mathbb{E}\big{(}\sum_{V}(J_{i,j,k,l}^{a,b})^{2}\big{)}=N^{-4}\cdot\mathbb{E}\big{(}\sum_{V}J_{i,j,k,l}^{a,b}\big{)}<N^{-4}\cdot N^{6}\frac{N^{2}\cdot(N^{2}-3)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}0.

To simplify the writing, let us denote \vec{v}=(i,j,k,l,a,b), \vec{v}^{\prime}=(i^{\prime},j^{\prime},k^{\prime},l^{\prime},a^{\prime},b^{\prime}), Z(\vec{v})=\{(k,l),\omega_{1,N}(a,k),\omega_{2,N}(b,i)\} and Z(\vec{v}^{\prime})=\{(k^{\prime},l^{\prime}),\omega_{1,N}(a^{\prime},k^{\prime}),\omega_{2,N}(b^{\prime},i^{\prime})\}. Note that if Z(\vec{v})=Z(\vec{v}^{\prime}), then (k,l,a)=(k^{\prime},l^{\prime},a^{\prime}), so \vec{v}=\vec{v}^{\prime}.

Denoting

W={(v,v)V2:vv}\displaystyle W=\big{\{}(\vec{v},\vec{v}^{\prime})\in V^{2}:\ \vec{v}\neq\vec{v}^{\prime}\big{\}}
W1={(v,v)W:Z(v)Z(v)=}\displaystyle W_{1}=\big{\{}(\vec{v},\vec{v}^{\prime})\in W:\ Z(\vec{v})\cap Z(\vec{v}^{\prime})=\emptyset\big{\}}
W2={(v,v)W:|Z(v)Z(v)|=1}\displaystyle W_{2}=\big{\{}(\vec{v},\vec{v}^{\prime})\in W:\ |Z(\vec{v})\cap Z(\vec{v}^{\prime})|=1\big{\}}
W3={(v,v)W:|Z(v)Z(v)|=2}\displaystyle W_{3}=\big{\{}(\vec{v},\vec{v}^{\prime})\in W:\ |Z(\vec{v})\cap Z(\vec{v}^{\prime})|=2\big{\}}

it follows that

𝔼(WJi,j,k,la,bJi,j,k,la,b)=s=13𝔼(WsJi,j,k,la,bJi,j,k,la,b)\displaystyle\mathbb{E}\big{(}\sum_{W}J_{i,j,k,l}^{a,b}\cdot J_{i^{\prime},j^{\prime},k^{\prime},l^{\prime}}^{a^{\prime},b^{\prime}})=\sum_{s=1}^{3}\mathbb{E}\big{(}\sum_{W_{s}}J_{i,j,k,l}^{a,b}\cdot J_{i^{\prime},j^{\prime},k^{\prime},l^{\prime}}^{a^{\prime},b^{\prime}}\big{)}

Let us recall that, as discussed above, for J^{a,b}_{i,j,k,l}(\sigma)\neq 0 there are at most N^{2} possible choices for \sigma on Z(\vec{v}). So, for J_{i,j,k,l}^{a,b}\cdot J_{i^{\prime},j^{\prime},k^{\prime},l^{\prime}}^{a^{\prime},b^{\prime}}(\sigma)\neq 0 there are at most N^{4} possible choices for \sigma on Z(\vec{v})\cup Z(\vec{v}^{\prime}) and at most (N^{2}-|Z(\vec{v})\cup Z(\vec{v}^{\prime})|)! possible choices for the rest of the values of \sigma, therefore

N^{-4}\cdot\mathbb{E}\big(\sum_{W_{s}}J_{i,j,k,l}^{a,b}\cdot J_{i^{\prime},j^{\prime},k^{\prime},l^{\prime}}^{a^{\prime},b^{\prime}}\big)\leq|W_{s}|\cdot N^{-4}\cdot\frac{N^{4}\cdot(N^{2}-|Z(\vec{v})\cup Z(\vec{v}^{\prime})|)!}{N^{2}!}

but |Z(v)Z(v)|=6|Z(v)Z(v)||Z(\vec{v})\cup Z(\vec{v}^{\prime})|=6-|Z(\vec{v})\cap Z(\vec{v}^{\prime})|, so it suffices to show that

(19)  \log_{N}|W_{s}|\leq 12-2|Z(\vec{v})\cap Z(\vec{v}^{\prime})|.

For s=1, we have that W_{1}\subseteq V\times V\subseteq[N]^{12}, so (19) holds true. For s=2, since |Z(\vec{v})\cap Z(\vec{v}^{\prime})|=1, at least two components of \vec{v} are equal to two components of \vec{v}^{\prime}, hence |W_{2}|\leq N^{10}, which implies (19).

If s=3, note that every two-element subset of Z(\vec{v}) except \{(k,l),\omega_{1,N}(a,k)\} involves four components of \vec{v}, so (19) is satisfied in those cases. If \{(k,l),\omega_{1,N}(a,k)\}=\{(k^{\prime},l^{\prime}),\omega_{1,N}(a^{\prime},k^{\prime})\}, then, for J_{i,j,k,l}^{a,b}\cdot J_{i^{\prime},j^{\prime},k^{\prime},l^{\prime}}^{a^{\prime},b^{\prime}}(\sigma)\neq 0, there are at most N^{2} possible choices for \sigma on Z(\vec{v}), each giving one possible choice for \sigma on \{(k,l),\omega_{1,N}(a,k)\}, hence for \sigma on Z(\vec{v}^{\prime}), and (N^{2}-4)! possible choices for the rest of the values of \sigma. Therefore, denoting W_{4}=\big\{(\vec{v},\vec{v}^{\prime})\in W:\ \{(k,l),\omega_{1,N}(a,k)\}=\{(k^{\prime},l^{\prime}),\omega_{1,N}(a^{\prime},k^{\prime})\}\big\}, we have that

𝔼(W4Ji,j,k,la,bJi,j,k,la,b)<N9N4N2(N24)!N2!N0,\displaystyle\mathbb{E}\big{(}\sum_{W_{4}}J_{i,j,k,l}^{a,b}\cdot J_{i^{\prime},j^{\prime},k^{\prime},l^{\prime}}^{a^{\prime},b^{\prime}}\big{)}<N^{9}\cdot N^{-4}\cdot\frac{N^{2}\cdot(N^{2}-4)!}{N^{2}!}\xrightarrow[N\rightarrow\infty]{}0,

and the conclusion follows. ∎

5. Joint Infinitesimal Distribution of a Gaussian Random Matrix and its Transpose

In this section, we investigate the joint infinitesimal distribution of a Gaussian random matrix and its transpose; their joint infinitesimal free cumulants are described in the theorem below. In particular, we show that a Gaussian random matrix and its transpose are not asymptotically infinitesimally free.

Theorem 5.1.

For each positive integer N, consider a complex Gaussian random matrix G_{N} and its matrix transpose G_{N}^{\top}. The asymptotic values (as N\rightarrow\infty) of the infinitesimal free joint cumulants of G_{N} and G_{N}^{\top} are computed according to the following rule (here each \varepsilon_{j} is either the identity or the matrix transpose):

limNκp(GNε1,GNε2,,GNεp)={1 if p=2m,εsεm+s, for s[m]0 otherwise. \displaystyle\lim_{N\rightarrow\infty}\kappa^{\prime}_{p}(G_{N}^{\varepsilon_{1}},G_{N}^{\varepsilon_{2}},\dots,G_{N}^{\varepsilon_{p}})=\left\{\begin{array}[]{ll}1&\textrm{ if }p=2m,\varepsilon_{s}\neq\varepsilon_{m+s},\textrm{ for }s\in[m]\\ 0&\textrm{ otherwise. }\end{array}\right.
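In particular, \kappa^{\prime}_{2}(G_{N},G_{N}^{\top})\rightarrow 1\neq 0, which already witnesses the failure of infinitesimal freeness announced above. For p=2 the statement can be checked directly: with the normalization \mathbb{E}|(G_{N})_{i,j}|^{2}=1/N (the normalization assumed in the sketch below), one has \mathbb{E}\circ\textrm{Tr}(G_{N}G_{N}^{\top})=\sum_{i}\mathbb{E}[(G_{N})_{i,i}^{2}]=1 while \mathbb{E}\circ\mathrm{tr}(G_{N}G_{N}^{\top})=1/N\rightarrow 0. The following minimal Monte Carlo sketch (an illustration only, not part of the proof, and assuming the stated normalization of the Gaussian ensemble) shows both behaviours:

```python
import numpy as np

rng = np.random.default_rng(2)

def gue(N):
    # GUE matrix normalized so that E|G_ij|^2 = 1/N (assumed normalization).
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    return (A + A.conj().T) / (2.0 * np.sqrt(N))

N, trials = 80, 500
mixed, same = [], []
for _ in range(trials):
    G = gue(N)
    mixed.append(np.trace(G @ G.T).real)   # Tr(G G^T)
    same.append(np.trace(G @ G).real)      # Tr(G^2)

print(np.mean(mixed))       # ~ 1 : E Tr(G G^T), i.e. kappa'_2(G, G^T) in the limit
print(np.mean(same) / N)    # ~ 1 : E tr(G^2),  i.e. kappa_2(G, G) in the limit
```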

Before proceeding with the proof of the Theorem above, notice that simple computations give the following particular cases of Lemmata 4.4, 4.5, 4.6.

Remark 5.2.

Suppose that for 1s41\leq s\leq 4, σs,N\sigma_{s,N} is either the identity or the matrix transpose in 𝒮([N]2)\mathcal{S}([N]^{2}) and denote

C_{1}(\sigma_{1},\dots,\sigma_{4})=\big|\{(i,j,k,l)\in[N]^{4}:\ \sigma_{1}(i,j)=\top\circ\sigma_{3}(k,l),\ \sigma_{2}(j,k)=\top\circ\sigma_{4}(l,i)\}\big|
C_{2}(\sigma_{1},\dots,\sigma_{4})=\big|\{(i,j,k,l,a,b)\in[N]^{6}:\ \sigma_{1}(i,j)=\top\circ\sigma_{3}(k,l),\ \sigma_{2}(a,k)=\top\circ\sigma_{4}(b,i)\}\big|.

Then

C1(σ1,,σ4)={N2 if σ1σ3 and σ2σ4N otherwise,\displaystyle C_{1}(\sigma_{1},\dots,\sigma_{4})=\left\{\begin{array}[]{ll}N^{2}&\textrm{ if }\sigma_{1}\neq\sigma_{3}\textrm{ and }\sigma_{2}\neq\sigma_{4}\\ N&\textrm{ otherwise,}\end{array}\right.

and

C2(σ1,,σ4)={N3 if σ1σ3 and σ2σ4N2 otherwise.\displaystyle C_{2}(\sigma_{1},\dots,\sigma_{4})=\left\{\begin{array}[]{ll}N^{3}&\textrm{ if }\sigma_{1}\neq\sigma_{3}\textrm{ and }\sigma_{2}\neq\sigma_{4}\\ N^{2}&\textrm{ otherwise.}\end{array}\right.
Proof.

Since the identity commutes with the transpose, for 1\leq s,t\leq 4 we have that \sigma_{s}^{-1}\circ\top\circ\sigma_{t}(i,j)=\left\{\begin{array}{ll}(i,j)&\textrm{ if }\sigma_{s}\neq\sigma_{t}\\ (j,i)&\textrm{ if }\sigma_{s}=\sigma_{t}.\end{array}\right. So, if \sigma_{1}\neq\sigma_{3} and \sigma_{2}\neq\sigma_{4}, then

C1(σ1,,σ4)\displaystyle C_{1}(\sigma_{1},\dots,\sigma_{4}) =|{(i,j,k,l)[N]4:(i,j)=(k,l),(j,k)=(l,i)}|\displaystyle=\big{|}\{(i,j,k,l)\in[N]^{4}:\ (i,j)=(k,l),\ (j,k)=(l,i)\}\big{|}
=|{(i,j,k,l)[N]4:i=k,j=l}|=N2\displaystyle=\big{|}\{(i,j,k,l)\in[N]^{4}:\ i=k,\ j=l\}\big{|}=N^{2}

and

C2(σ1,,σ4)\displaystyle C_{2}(\sigma_{1},\dots,\sigma_{4}) =|{(i,j,k,l,a,b)[N]6:(i,j)=(k,l),(a,k)=(b,i)}|\displaystyle=\big{|}\{(i,j,k,l,a,b)\in[N]^{6}:\ (i,j)=(k,l),\ (a,k)=(b,i)\}\big{|}
=|{(i,j,k,l,a,b)[N]6:i=k,j=l,a=b}|=N3.\displaystyle=\big{|}\{(i,j,k,l,a,b)\in[N]^{6}:\ i=k,\ j=l,\ a=b\}\big{|}=N^{3}.

If \sigma_{1}=\sigma_{3}, then i=l and j=k, hence \sigma_{2}(j,k)=\top\circ\sigma_{4}(l,i) gives that i=j=k=l, that is C_{1}(\sigma_{1},\dots,\sigma_{4})=N. Also, \sigma_{2}(a,k)=\top\circ\sigma_{4}(b,i) gives that (b,i)=\sigma_{4}^{-1}\circ\top\circ\sigma_{2}(a,j), so (i,j,k,l,a,b) is uniquely determined by (a,j), that is C_{2}(\sigma_{1},\dots,\sigma_{4})=N^{2}. The argument for the case \sigma_{2}=\sigma_{4} is similar. ∎
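Because the \sigma_{s} range only over the identity and the transpose, Remark 5.2 can also be verified by brute force for small N. The following minimal sketch (an illustration only) checks the formula for C_{1}; the check for C_{2} is entirely analogous.

```python
import itertools

def apply(s, i, j):
    # sigma_s acts on index pairs: identity ('id') or matrix transpose ('t').
    return (i, j) if s == 'id' else (j, i)

def T(p):
    # The matrix transpose acting on an index pair.
    return (p[1], p[0])

def C1(N, s1, s2, s3, s4):
    count = 0
    for i, j, k, l in itertools.product(range(N), repeat=4):
        if apply(s1, i, j) == T(apply(s3, k, l)) and \
           apply(s2, j, k) == T(apply(s4, l, i)):
            count += 1
    return count

N = 5
for s in itertools.product(['id', 't'], repeat=4):
    expected = N**2 if (s[0] != s[2] and s[1] != s[3]) else N
    assert C1(N, *s) == expected
print("C_1 agrees with Remark 5.2 for all 16 choices of (sigma_1,...,sigma_4)")
```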

Lemma 5.3.

Let n,N be positive integers and suppose that for each s\in[n], \sigma_{s} is either the identity or the matrix transpose in \mathcal{S}([N]^{2}). Denote \overrightarrow{\sigma}=(\sigma_{1},\dots,\sigma_{n}), and, with the notations from Section 2, write

𝔼tr(GNσ1GNσ2GNσn)=πP2(n)𝒱σ(π).\displaystyle\mathbb{E}\circ\mathrm{tr}\big{(}G_{N}^{\sigma_{1}}\cdot G_{N}^{\sigma_{2}}\cdots G_{N}^{\sigma_{n}}\big{)}=\sum_{\pi\in P_{2}(n)}\mathcal{V}_{\overrightarrow{\sigma}}(\pi).

Also, write the set P_{2}(n) as the disjoint union

P2(n)=P2(σ,0)P2(σ,1)P2(σ,2)\displaystyle P_{2}(n)=P_{2}(\overrightarrow{\sigma},0)\cup P_{2}(\overrightarrow{\sigma},1)\cup P_{2}(\overrightarrow{\sigma},2)

where

P2(σ,0)\displaystyle P_{2}(\overrightarrow{\sigma},0) ={πP2(n):π is non-crossing and σs=σπ(s) for all s[n]}\displaystyle=\{\pi\in P_{2}(n):\ \pi\textrm{ is non-crossing and }\sigma_{s}=\sigma_{\pi(s)}\textrm{ for all }s\in[n]\}
P_{2}(\overrightarrow{\sigma},1)=\{\pi\in P_{2}(n):\ \textrm{there exists some }B=\{i(1),i(2),\dots,i(2m)\}\subseteq[n]\textrm{ with }i(1)<\dots<i(2m)\textrm{ such that }\pi(i(s))=i(m+s)\textrm{ and }\sigma_{i(s)}\neq\sigma_{i(m+s)}\textrm{ for all }s\in[m],\ \sigma_{k}=\sigma_{\pi(k)}\textrm{ whenever }k\in[n]\setminus B,\textrm{ and the partition }\pi\vee 1_{B}\textrm{ (obtained from }\pi\textrm{ by considering }B\textrm{ as a single block) is non-crossing}\}
P2(σ,2)\displaystyle P_{2}(\overrightarrow{\sigma},2) =P2(n)(P2(σ,0)P2(σ,1)).\displaystyle=P_{2}(n)\setminus(P_{2}(\overrightarrow{\sigma},0)\cup P_{2}(\overrightarrow{\sigma},1)).

With the notations above, we have that:

(23) 𝒱σ(π)={1 if πP2(σ,0)N1 if πP2(σ,1)O(N2) if πP2(σ,2).\displaystyle\mathcal{V}_{\overrightarrow{\sigma}}(\pi)=\left\{\begin{array}[]{ll}1&\textrm{ if }\pi\in P_{2}(\overrightarrow{\sigma},0)\\ N^{-1}&\textrm{ if }\pi\in P_{2}(\overrightarrow{\sigma},1)\\ O(N^{-2})&\textrm{ if }\pi\in P_{2}(\overrightarrow{\sigma},2).\end{array}\right.
Proof.

As in (3) from the proof of Theorem 3.1, if \pi(k)=k+1 and \sigma_{k}=\sigma_{k+1}, then \mathcal{V}_{\overrightarrow{\sigma}}(\pi)=\mathcal{V}_{\overrightarrow{\sigma}^{\prime}}(\pi^{\prime}), where \pi^{\prime} and \overrightarrow{\sigma}^{\prime} are obtained by removing the block (k,k+1) from \pi, respectively the entries \sigma_{k},\sigma_{k+1} from \overrightarrow{\sigma}. If \pi\in P_{2}(\overrightarrow{\sigma},0), then iterating (3) n/2 times gives the first part of (23). Moreover, if \pi\notin P_{2}(\overrightarrow{\sigma},0), then (3) allows us to assume, without loss of generality, that \sigma_{k}\neq\sigma_{k+1} whenever \pi(k)=k+1.

Under all the assumptions above, suppose first that π\pi is non-crossing. We shall prove (23) by induction on nn. If n=2n=2, then πP2(σ,1)\pi\in P_{2}(\overrightarrow{\sigma},1) and

𝒱σ(π)=N2|{(i,j)[N]2:σ1(i,j)=σ1(j,i)}|=N1,\displaystyle\mathcal{V}_{\overrightarrow{\sigma}}(\pi)=N^{-2}\cdot\big{|}\{(i,j)\in[N]^{2}:\ \sigma_{1}(i,j)=\top\circ\sigma_{1}\circ\top(j,i)\}\big{|}=N^{-1},

hence (23) holds true.

If n\geq 4, then \pi\in P_{2}(\overrightarrow{\sigma},2). On the other hand, since \pi is non-crossing, it has at least one block which is a segment. Via a circular permutation of the set [n], we can suppose without loss of generality that \pi(n-1)=n, which furthermore implies that \sigma_{n-1}=\top\circ\sigma_{n}. Then the condition

σn1(in1,jn1)=σn(in,jn)\displaystyle\sigma_{n-1}(i_{n-1},j_{n-1})=\sigma_{n}\circ\top(i_{n},j_{n})

gives that in2=ini_{n-2}=i_{n} . Therefore, denoting by π\pi^{\prime}, respectively by σ\overrightarrow{\sigma}^{\prime} the restrictions of π\pi, respectively of σ\overrightarrow{\sigma} to the set [n2][n-2], we then have that

𝔞π,σ𝔞π,σ([n2])\displaystyle\mathfrak{a}_{\pi^{\prime},\overrightarrow{\sigma}^{\prime}}\leq\mathfrak{a}_{\pi,\overrightarrow{\sigma}}([n-2])

and

𝔞π,σ𝔞π,σ([n2])1\displaystyle\mathfrak{a}_{\pi,\overrightarrow{\sigma}}\leq\mathfrak{a}_{\pi,\overrightarrow{\sigma}}([n-2])-1

hence (23) follows.

Next, suppose that \pi has a crossing. As in the proof of Theorem 3.1, we can assume that \pi(1)=b+1 and \pi(b)=d for 1<b<b+1<d\leq n. If d\neq n, then \pi\in P_{2}(\overrightarrow{\sigma},2) and, as in the proof of Theorem 3.1, in this case we have that \mathfrak{a}_{\pi,\overrightarrow{\sigma}}\leq-2, so (23) holds true.

If d=nd=n and σ1=σb+1\sigma_{1}=\sigma_{b+1} or σb=σd\sigma_{b}=\sigma_{d}, then again πP2(σ,2)\pi\in P_{2}(\overrightarrow{\sigma},2); Remark 5.2 and Lemma 2.5 give that 𝔞π,σ2\mathfrak{a}_{\pi,\overrightarrow{\sigma}}\leq-2, so (23) holds true.

Suppose that d=n, \sigma_{1}\neq\sigma_{b+1}, \sigma_{b}\neq\sigma_{d}, and that \sigma_{k}\neq\sigma_{k+1} whenever \pi(k)=k+1. If n=4, then \pi\in P_{2}(\overrightarrow{\sigma},1), and Remark 5.2 gives that \mathfrak{a}_{\pi,\overrightarrow{\sigma}}=-1, so (23) holds true in this case. Suppose then that n>4. If there is some s\in\{2,\dots,b-1\} or s\in\{b+2,\dots,n-1\} such that \pi(s)=s+1, then \pi\in P_{2}(\overrightarrow{\sigma},2); on the other hand, we assumed that \sigma_{s}\neq\sigma_{s+1}, so \mathfrak{a}_{\pi,\overrightarrow{\sigma}}(\{s,s+1\})=-1 and Lemma 2.5 gives that \mathfrak{a}_{\pi,\overrightarrow{\sigma}}\leq-2, so (23) holds true. The same argument remains valid if there exist elements a^{\prime}<b^{\prime}<c^{\prime}<d^{\prime}, either of \{2,\dots,b-1\} or of \{b+2,\dots,n-1\}, such that \pi(a^{\prime})=c^{\prime} and \pi(b^{\prime})=d^{\prime} (see Figure 4). So we can further assume that \pi([b])=[n]\setminus[b], in particular n=2b.

Note that the condition π([b])=[n][b]\pi([b])=[n]\setminus[b] gives that Aπ,σ=Aπ,σ([b])A_{\pi,\overrightarrow{\sigma}}=A_{\pi,\overrightarrow{\sigma}}([b]). Furthermore, since π(1)=b+1\pi(1)=b+1 and σ1σb+1\sigma_{1}\neq\sigma_{b+1} we have that ib+1=i1i_{b+1}=i_{1}, so Aπ,σ([b])=Aπ,σ([b1])A_{\pi,\overrightarrow{\sigma}}([b])=A_{\pi,\overrightarrow{\sigma}}([b-1]).

If πP2(σ,1)\pi\in P_{2}(\overrightarrow{\sigma},1), then

A_{\pi,\overrightarrow{\sigma}}=\{(i_{1},i_{2},\dots,i_{2b})\in[N]^{2b}:\ (i_{s},i_{s+1})=(i_{b+s},i_{b+s+1})\textrm{ for all }s\in[b]\}, hence \big|A_{\pi,\overrightarrow{\sigma}}\big|=N^{b},

and since 𝔞π,σ=logN|Aπ,σ|b1\mathfrak{a}_{\pi,\overrightarrow{\sigma}}=\log_{N}\big{|}A_{\pi,\overrightarrow{\sigma}}\big{|}-b-1 it follows that (23) holds true.

Figure 8.

If \pi\in P_{2}(\overrightarrow{\sigma},2), then the set \{s\in[b]:\ \pi(s)\neq b+s\textrm{ or }\sigma_{s}=\sigma_{b+s}\} is non-void. Denote by t its smallest element. Then \pi(t-1)=b+t-1 and \sigma_{t-1}\neq\sigma_{b+t-1}, so i_{t}=i_{b+t}. Let v=\pi(b+t). Then v\in[b-1]\setminus\{t-1\} and i_{t}=i_{b+t}\in\{i_{v},i_{v+1}\}. If v\neq t then, since also v\neq t-1, neither v nor v+1 equals t. If v=t, using that \sigma_{t}=\sigma_{b+t} we get i_{t}=i_{t+1}. Either way, there is some w\in[b-1]\setminus\{t\} such that (i_{1},\dots,i_{b})\in A_{\pi,\overrightarrow{\sigma}}([b-1]) implies that i_{w}=i_{t}, hence \big|A_{\pi,\overrightarrow{\sigma}}([b-1])\big|\leq N^{b-1}, that is \mathfrak{a}_{\pi,\overrightarrow{\sigma}}\leq-2, and the conclusion follows. ∎
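As an illustration of (23) (not needed in the sequel), take n=4 and \overrightarrow{\sigma}=(\mathrm{Id},\mathrm{Id},\top,\top). Then the non-crossing pairing \{(1,2),(3,4)\} lies in P_{2}(\overrightarrow{\sigma},0), the crossing pairing \{(1,3),(2,4)\} lies in P_{2}(\overrightarrow{\sigma},1) (take B=[4]), and \{(1,4),(2,3)\} lies in P_{2}(\overrightarrow{\sigma},2), so Lemma 5.3 predicts \mathbb{E}\circ\mathrm{tr}\big(G_{N}G_{N}G_{N}^{\top}G_{N}^{\top}\big)=1+\frac{1}{N}+O(N^{-2}). Assuming the GUE covariance \mathbb{E}\big[(G_{N})_{i,j}(G_{N})_{k,l}\big]=\frac{1}{N}\delta_{i,l}\delta_{j,k}, a direct Wick computation gives exactly 1+\frac{1}{N}+\frac{1}{N^{2}}, in agreement with (23). By contrast, for \overrightarrow{\sigma}=(\mathrm{Id},\top,\mathrm{Id},\top) all three pairings fall in P_{2}(\overrightarrow{\sigma},2) and the corresponding moment is O(N^{-2}) (in fact 3/N^{2} under the same covariance assumption).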

Proof of Theorem 5.1.

Since for each sns\leq n, we have that the sequence (σs,N)N(\sigma_{s,N})_{N} is either the sequence of identity permutations or the sequence of matrix transposes, we can simplify the writing by omitting the index NN and writing σs\sigma_{s} for σs,N\sigma_{s,N}. With this convention, it suffices to show that

limN[𝔼Tr(GNσ1GNσn)NlimN𝔼tr(GNσ1GNσn)]\displaystyle\lim_{N\rightarrow\infty}\big{[}\mathbb{E}\circ\textrm{Tr}(G_{N}^{\sigma_{1}}\cdots G_{N}^{\sigma_{n}})-N\lim_{N\rightarrow\infty}\mathbb{E}\circ\mathrm{tr}(G_{N}^{\sigma_{1}}\cdots G_{N}^{\sigma_{n}})\big{]}
=ρNC(n)[BρB=(i(1),,i(p))κp(σi(1),,σi(p))Dρ{B}D=(j(1),,j(r))κr(σj(1),,σj(r))]\displaystyle=\sum_{\rho\in NC(n)}\big{[}\sum_{\begin{subarray}{c}B\in\rho\\ B=(i(1),\dots,i(p))\end{subarray}}\kappa^{\prime}_{p}(\sigma_{i(1)},\dots,\sigma_{i(p)})\cdot\prod_{\begin{subarray}{c}D\in\rho\setminus\{B\}\\ D=(j(1),\dots,j(r))\end{subarray}}\kappa_{r}(\sigma_{j(1)},\dots,\sigma_{j(r)})\big{]}

where

κr(σ1,,σr)={1 if r=2 and σ1=σ20 otherwise\displaystyle\kappa_{r}(\sigma_{1},\dots,\sigma_{r})=\left\{\begin{array}[]{ll}1&\textrm{ if }r=2\textrm{ and }\sigma_{1}=\sigma_{2}\\ 0&\textrm{ otherwise}\end{array}\right.

and

κp(σ1,,σp)={1 if p=2m and σsσm+s for s[m]0 otherwise.\displaystyle\kappa^{\prime}_{p}(\sigma_{1},\dots,\sigma_{p})=\left\{\begin{array}[]{ll}1&\textrm{ if }p=2m\textrm{ and }\sigma_{s}\neq\sigma_{m+s}\textrm{ for }s\in[m]\\ 0&\textrm{ otherwise.}\end{array}\right.

On the other hand, from Lemma 5.3, we have that

\mathbb{E}\circ\mathrm{tr}(G_{N}^{\sigma_{1}}\cdots G_{N}^{\sigma_{n}})=\big|P_{2}(\overrightarrow{\sigma},0)\big|+\frac{1}{N}\big|P_{2}(\overrightarrow{\sigma},1)\big|+O(N^{-2})

which gives

\lim_{N\rightarrow\infty}\big[\mathbb{E}\circ\textrm{Tr}(G_{N}^{\sigma_{1}}\cdots G_{N}^{\sigma_{n}})-N\lim_{N\rightarrow\infty}\mathbb{E}\circ\mathrm{tr}(G_{N}^{\sigma_{1}}\cdots G_{N}^{\sigma_{n}})\big]=\big|P_{2}(\overrightarrow{\sigma},1)\big|=\sum_{\rho\in NC(n)}\chi_{P_{2}(\overrightarrow{\sigma},1)}(\rho)

where χP2(σ,1)(ρ)={1 if ρP2(σ,1)0 otherwise.\displaystyle\chi_{P_{2}(\overrightarrow{\sigma},1)}(\rho)=\left\{\begin{array}[]{ll}1&\textrm{ if }\rho\in P_{2}(\overrightarrow{\sigma},1)\\ 0&\textrm{ otherwise.}\end{array}\right.

But the definition of P2(σ,1)P_{2}(\overrightarrow{\sigma},1) reads

χP2(σ,1)(ρ)=BρB=(i(1),,i(p))κp(σi(1),,σi(p))Dρ{B}D=(j(1),,j(r))κr(σj(1),,σj(r))\chi_{P_{2}(\overrightarrow{\sigma},1)}(\rho)=\sum_{\begin{subarray}{c}B\in\rho\\ B=(i(1),\dots,i(p))\end{subarray}}\kappa^{\prime}_{p}(\sigma_{i(1)},\dots,\sigma_{i(p)})\cdot\prod_{\begin{subarray}{c}D\in\rho\setminus\{B\}\\ D=(j(1),\dots,j(r))\end{subarray}}\kappa_{r}(\sigma_{j(1)},\dots,\sigma_{j(r)})

and the conclusion follows. ∎
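As a quick numerical sanity check of the theorem (an illustration only, assuming the normalization \mathbb{E}|(G_{N})_{i,j}|^{2}=1/N), the case p=4 with (\varepsilon_{1},\dots,\varepsilon_{4})=(\mathrm{Id},\mathrm{Id},\top,\top) predicts \mathbb{E}\circ\textrm{Tr}(G_{N}G_{N}G_{N}^{\top}G_{N}^{\top})-N\rightarrow 1, since the only non-vanishing term in the moment-cumulant expansion is \kappa^{\prime}_{4}(G,G,G^{\top},G^{\top})=1.

```python
import numpy as np

rng = np.random.default_rng(3)

def gue(N):
    # GUE matrix with E|G_ij|^2 = 1/N (assumed normalization).
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    return (A + A.conj().T) / (2.0 * np.sqrt(N))

N, trials = 80, 2000
vals = []
for _ in range(trials):
    G = gue(N)
    vals.append(np.trace(G @ G @ G.T @ G.T).real)
print(np.mean(vals) - N)   # ~ 1, matching kappa'_4(G, G, G^T, G^T) = 1
```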

Acknowledgement

We would like to thank the anonymous referee for careful reading of the manuscript, and for important remarks which led to an improvement of this paper.

References

• [1] G. Aubrun, S. Szarek, and E. Werner. Hastings's additivity counterexample via Dvoretzky's theorem. Communications in Mathematical Physics, 305(1):85–97, 2011.
• [2] T. Banica and I. Nechita. Asymptotic eigenvalue distributions of block-transposed Wishart matrices. Journal of Theoretical Probability, 26(3):855–869, 2013.
  • [3] S. T. Belinschi, H. Bercovici, and M. Capitaine. On the outlying eigenvalues of a polynomial in large independent random matrices. Int. Math. Res. Not. IMRN, 2021(4):2588–2641, 2021.
  • [4] S. T. Belinschi, T. Mai, and R. Speicher. Analytic subordination theory of operator-valued free additive convolution and the solution of a general random matrix problem. J. Reine Angew. Math., 732:21–53, 2017.
• [5] S. T. Belinschi and D. Shlyakhtenko. Free probability of type B: analytic interpretation and applications. American Journal of Mathematics, 134(1):193–234, 2012.
• [6] P. Biane, F. Goodman, and A. Nica. Non-crossing cumulants of type B. Transactions of the American Mathematical Society, 355(6):2263–2303, 2003.
  • [7] A. Celestino, K. Ebrahimi-Fard, and D. Perales. Relations between infinitesimal non-commutative cumulants. Doc. Math., 26:1145–1185, 2021.
• [8] S. Dallaporta and M. Février. Fluctuations of linear spectral statistics of deformed Wigner matrices. arXiv preprint arXiv:1903.11324, 2019.
  • [9] T. Damour, S. De Buyl, M. Henneaux, and C. Schomblond. Einstein billiards and overextensions of finite-dimensional simple lie algebras. Journal of High Energy Physics, 2002(08):030, 2002.
• [10] M. Février. Higher order infinitesimal freeness. Indiana University Mathematics Journal, pages 249–295, 2012.
• [11] M. Février and A. Nica. Infinitesimal non-crossing cumulants and free probability of type B. Journal of Functional Analysis, 258(9):2983–3023, 2010.
  • [12] T. Hasebe. Differential independence via an associative product of infinitely many linear functionals. Colloq. Math., 124(1):79–94, 2011.
  • [13] M. Horodecki, P. Horodecki, and R. Horodecki. Separability of n-particle mixed states: necessary and sufficient conditions in terms of linear maps. Physics Letters A, 283(1-2):1–7, 2001.
  • [14] S. Janson. Gaussian Hilbert spaces, volume 129 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge, 1997.
  • [15] C. Male, J. A. Mingo, S. Péché, and R. Speicher. Joint global fluctuations of complex Wigner and deterministic matrices. Random Matrices Theory Appl., 11(2):Paper No. 2250015, 46, 2022.
• [16] A. Mandarino, T. Linowski, and K. Życzkowski. Bipartite unitary gates and billiard dynamics in the Weyl chamber. Physical Review A, 98(1):012335, 2018.
  • [17] J. A. Mingo. Non-crossing annular pairings and the infinitesimal distribution of the GOE. Journal of the London Mathematical Society, 100(3):987–1012, 2019.
  • [18] J. A. Mingo and M. Popa. Freeness and the transposes of unitarily invariant random matrices. Journal of Functional Analysis, 271(4):883–921, 2016.
• [19] J. A. Mingo and M. Popa. Freeness and the partial transposes of Wishart random matrices. Canadian Journal of Mathematics, 71(3):659–681, 2019.
  • [20] J. A. Mingo and M. Popa. The partial transpose and asymptotic free independence for Wishart random matrices, II. Pacific J. Math., 317(2):387–421, 2022.
  • [21] J. A. Mingo, M. Popa, and K. Szpojankowski. On the partial transpose of a Haar unitary matrix. Studia Math., 266(3):337–359, 2022.
  • [22] J. A. Mingo, M. Popa, and K. Szpojankowski. Asymptotic *-distribution of permuted Haar unitary matrices. Indiana Univ. Math. J., 72(3):927–967, 2023.
• [23] J. A. Mingo and J. Vazquez-Becerra. The asymptotic infinitesimal distribution of a real Wishart random matrix. arXiv preprint arXiv:2112.15231, 2021.
  • [24] M. Popa. Asymptotic free independence and entry permutations for Gaussian random matrices. Proc. Amer. Math. Soc., 150(8):3379–3394, 2022.
  • [25] D. Shlyakhtenko. Free probability of type-B and asymptotics of finite-rank perturbations of random matrices. Indiana University Mathematics Journal, 67(2):971–991, 2018.
  • [26] P.-L. Tseng. Operator-valued infinitesimal multiplicative convolutions. arXiv preprint arXiv:2201.13187, 2022.
  • [27] P.-L. Tseng. A unified approach to infinitesimal freeness with amalgamation. Internat. J. Math., 34(13):Paper No. 2350079, 26, 2023.
  • [28] D. Voiculescu. Addition of certain non-commuting random variables. Journal of functional analysis, 66(3):323–346, 1986.
  • [29] D. Voiculescu. Limit laws for random matrices and free products. Invent. Math., 104(1):201–220, 1991.
  • [30] D. V. Voiculescu, K. J. Dykema, and A. Nica. Free random variables, volume 1 of CRM Monograph Series. American Mathematical Society, Providence, RI, 1992. A noncommutative probability approach to free products with applications to random matrices, operator algebras and harmonic analysis on free groups.