
Limiting Distributions of Sums with Random Spectral Weights

Ángel Chávez, Department of Mathematics and Statistics, Pomona College, 610 N. College Ave., Claremont, CA 91711, angel.chavez@pomona.edu, and Jacob Waldor, jcwa2017@mymail.pomona.edu
Abstract.

This paper studies the asymptotic properties of weighted sums of the form $Z_n=\sum_{i=1}^n a_iX_i$, in which $X_1,X_2,\ldots,X_n$ are i.i.d. random variables and $a_1,a_2,\ldots,a_n$ correspond to either eigenvalues or singular values in the classic Erdős-Rényi-Gilbert model. In particular, we prove central limit-type theorems for the sequences $n^{-1}Z_n$ with varying conditions imposed on $X_1,X_2,\ldots,X_n$.

Key words and phrases:
random matrix, random graph, eigenvalue, singular value, central limit theorem, graph energy, Schatten norm, sub-gaussian, convolution, method of moments
2020 Mathematics Subject Classification:
60B20, 60F05, 05C80

1. Introduction

Suppose $X_1,X_2,\ldots$ is a sequence of random variables. A classic problem in probability is to understand the limiting behavior of the sum

$$Y_n = X_1 + X_2 + \cdots + X_n.$$

Early versions of this problem, which consider the case when $X_1,X_2,\ldots$ are independent Bernoulli trials, are rooted in the work of de Moivre [10] and Laplace [22] pertaining to normal approximations to the binomial distribution. These classic results mark the beginning of a long-standing problem of approximating laws of sums of random variables by normal distributions. This two-hundred-year problem ultimately culminated in what is today known as the central limit theorem. Major contributions towards our modern understanding of the central limit theorem are attributed, in particular, to Lévy [23], Lindeberg [24] and Lyapunov [25], among several others.

Suppose $X_1,X_2,\ldots$ are independent and identically distributed (i.i.d.) random variables with mean $\mu$ and finite positive variance $\sigma^2$. The version of the central limit theorem attributed to Lévy and Lindeberg [5, Thm. 27.1] asserts that the sequence of random variables $(Y_n-n\mu)/(\sigma\sqrt{n})$ converges in distribution to a standard normal. A theorem attributed to Lyapunov [5, Thm. 27.3] shows the assumption that the variables $X_1,X_2,\ldots$ be identically distributed can even be relaxed as long as the absolute moments of the $X_i$ satisfy a certain (Lyapunov) growth condition.

The present focus is to study the limiting behavior of sequences of the form

$$Z_n(\boldsymbol{a}) = a_1X_1 + a_2X_2 + \cdots + a_nX_n, \tag{1.1}$$

in which $X_1,X_2,\ldots$ are i.i.d. random variables and the weights $a_1,a_2,\ldots$ correspond to either the eigenvalues or the singular values of a random symmetric matrix. Specifically, we take eigenvalues and singular values corresponding to the Erdős-Rényi-Gilbert random graph model. A random graph in this model, which was developed independently by Erdős-Rényi [14, 15] and Gilbert [18], is constructed by attaching edges among a set of labeled vertices independently with probability $p$. The random variables $a_iX_i$ in this case are neither independent nor identically distributed, and there is no general method available to handle this situation. However, adjacency matrices of Erdős-Rényi-Gilbert graphs have bounded entries which, modulo the constraints imposed by symmetry, are independent. This simple fact, together with the almost sure convergence of their empirical spectral distributions to the semicircular law, allows us to establish central limit-type theorems for the sequences $n^{-1}Z_n(\boldsymbol{a})$.

1.1. Notation and Terminology

Graph theoretic terminology may be found in [6]. A graph $G$ of order $n$ is an ordered pair $(V,E)$ consisting of a set $V=V(G)$ of vertices and a set $E=E(G)$ of edges such that $|V|=n$. We adopt standard notation and let $m=|E(G)|$ denote the number of edges in a graph $G$. A graph is simple if it contains no loops or multiple edges, and it is connected if every pair of its vertices is joined by a path. If $G$ is a graph of order $n$, then its adjacency matrix is the $n\times n$ real symmetric matrix $A(G)$ whose entries are defined by setting $[A(G)]_{ij}=1$ if vertices $i$ and $j$ are connected by an edge and $[A(G)]_{ij}=0$ otherwise. The spectrum of $G$ is the spectrum of its adjacency matrix $A(G)$, and is therefore real since $A(G)$ is Hermitian. We adopt a standard convention and write the spectrum of $G$ in non-increasing order,

$$\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n.$$

Moreover, the singular spectrum of $G$ consists of the singular values of $A(G)$. We remark that the singular values of the Hermitian matrix $A(G)$ correspond to the moduli of its eigenvalues. Again, we adopt a standard convention and write the singular spectrum of $G$ in non-increasing order,

$$s_1 \geq s_2 \geq \cdots \geq s_n.$$

For $q\geq 1$, we let $L^q(\Omega,\mathcal{F},\mathbb{P}_\Omega)$ denote the vector space of random variables defined on the probability space $(\Omega,\mathcal{F},\mathbb{P}_\Omega)$ with finite $L^q$-norm defined by

$$\|X\|_q = \big(\mathbb{E}|X|^q\big)^{1/q}.$$

A random variable $X$ defined on a probability space $(\Omega,\mathcal{F},\mathbb{P}_\Omega)$ is called sub-gaussian if $\|X\|_q \leq \|X\|_{\psi_2}\sqrt{q}$ for all $q\geq 1$, where

$$\|X\|_{\psi_2} = \sup_{q\geq 1}\big\{q^{-1/2}\|X\|_q\big\}$$

is called the sub-gaussian norm of $X$ [32, Def. 2.5.6]. Gaussian, Bernoulli and bounded random variables are typical examples of sub-gaussian random variables [32, Ex. 2.5.8].

1.2. Statement of Results

This paper establishes central limit-type theorems for the sequences of weighted sums

$$W_n(\boldsymbol{a}) = \frac{1}{n}\sum_{j=1}^n a_jX_j, \tag{1.2}$$

in which $X_1,X_2,\ldots,X_n$ are i.i.d. random variables and $a_1,a_2,\ldots,a_n$ correspond to either the eigenvalues or the singular values of Erdős-Rényi-Gilbert graphs. Graphs in the Erdős-Rényi-Gilbert $\mathcal{G}(n,p)$ model are constructed by attaching edges between each vertex pair from a set of $n$ labeled vertices independently with probability $p\in[0,1]$. Here and henceforth we assume $p\neq 0,1$.
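A single realization of these weighted sums is straightforward to simulate. The following sketch is ours rather than the authors' released code; it assumes only numpy, samples an adjacency matrix from $\mathcal{G}(n,p)$, and evaluates $W_n(\boldsymbol{\lambda})$ and $W_n(\boldsymbol{s})$ for one draw of i.i.d. standard normal weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_adjacency(n, p):
    """Adjacency matrix of a G(n, p) graph: independent Bernoulli(p)
    entries above the diagonal, symmetrized, with zero diagonal."""
    upper = np.triu(rng.random((n, n)) < p, k=1)
    return (upper + upper.T).astype(float)

n, p = 1000, 0.5
A = sample_adjacency(n, p)
eigenvalues = np.linalg.eigvalsh(A)    # spectrum of the graph
singular_values = np.abs(eigenvalues)  # singular values are the moduli
X = rng.standard_normal(n)             # i.i.d. weights X_1, ..., X_n

W_lambda = eigenvalues @ X / n         # W_n(lambda) of (1.2)
W_s = singular_values @ X / n          # W_n(s) of (1.2)
print(W_lambda, W_s)
```

Since the $X_j$ are exchangeable, the ordering convention for the spectrum does not affect the distribution of either sum.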

The first two theorems illustrate the relatively simple limiting distributions of $W_n(\boldsymbol{\lambda})$ in the case when certain symmetry conditions are imposed on $X_1,X_2,\ldots$. The third theorem illustrates the simple limiting distributions of $W_n(\boldsymbol{s})$ in the case when $X_1,X_2,\ldots$ are sub-gaussian with mean zero but not necessarily symmetric.

Theorem 1.1.

Suppose $X$ is a normal random variable with mean $\mu$ and variance $\sigma^2$. If $X_1,X_2,\ldots,X_n\sim X$ are i.i.d. random variables, then $W_n(\boldsymbol{\lambda})/(\sigma\sqrt{p})$ converges in distribution to a standard normal.

Theorem 1.2.

Suppose $X$ is a symmetric sub-gaussian random variable with variance $\sigma^2$. If $X_1,X_2,\ldots,X_n\sim X$ are i.i.d. random variables, then

$$W_n(\boldsymbol{\lambda}) \to pX + N_p$$

in distribution, where $N_p$ is a normal random variable, independent of $pX$, with mean zero and variance $p(1-p)\sigma^2$.

Theorem 1.3.

Suppose $X$ is a sub-gaussian random variable with mean zero and variance $\sigma^2$. If $X_1,X_2,\ldots,X_n\sim X$ are i.i.d. random variables, then

$$W_n(\boldsymbol{s}) \to pX + N_p$$

in distribution, where $N_p$ is a normal random variable, independent of $pX$, with mean zero and variance $p(1-p)\sigma^2$.

The final theorem illustrates, in particular, how sensitive the sequences $W_n(\boldsymbol{s})$ are with respect to the conditions imposed on $X_1,X_2,\ldots,X_n$. Very distinct behavior emerges by simply choosing random variables with non-zero mean. Ultimately, this distinction is rooted in the asymptotic behavior of the Schatten $q$-norm of a graph, which is defined for $q\geq 1$ by

$$\|G\|_{S_q} = \big(s_1^q + s_2^q + \cdots + s_n^q\big)^{1/q}.$$

In particular, the large $n$ behavior of $\|G\|_{S_q}$ is very different for the cases $q=1$ and $q>1$ [27, Thm. 5]. Note the sub-gaussian condition is also removed in the following theorem.
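For concreteness, the Schatten $q$-norm is immediate to compute from the singular spectrum; the snippet below is a minimal sketch of ours (assuming numpy), with the case $q=1$ recovering the graph energy that reappears in Section 4.

```python
import numpy as np

def schatten_norm(A, q):
    """Schatten q-norm of a symmetric adjacency matrix: the l^q norm of
    the vector of singular values (the moduli of the eigenvalues)."""
    s = np.abs(np.linalg.eigvalsh(A))
    return (s ** q).sum() ** (1.0 / q)
```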

Theorem 1.4.

Suppose $X$ is any random variable with non-zero mean $\mu$ which admits a moment generating function. If $X_1,X_2,\ldots,X_n\sim X$ are i.i.d. random variables, then $W_n(\boldsymbol{s})/(\mu\sqrt{n})$ converges in distribution to a point mass at $\frac{8}{3\pi}\sqrt{p(1-p)}$.

There exist various central limit-type theorems in the literature pertaining to sums of eigenvalues of random matrices, e.g. [8, 21, 26, 28]. Among the earliest results in this direction are those due to Johansson [21]. These specific results concern random Hermitian matrices distributed according to the probability measure

$$d\mu_{n,\tau}(A) \propto \exp\big(-\tau\operatorname{tr}V(A)\big)\,dA, \tag{1.3}$$

in which $\tau>0$, $V(x)$ is a polynomial with positive even degree and a positive leading coefficient and $dA$ denotes Lebesgue measure on the space of $n\times n$ complex Hermitian matrices. It bears mentioning that setting $\tau=n/2$ and $V(x)=x^2$ gives rise to the Gaussian unitary ensemble introduced by Dyson [12, 13]. The main result of [21] establishes that the linear eigenvalue statistic $\sum_k f(\lambda_k)$ converges in distribution to a normal random variable with mean zero. The sums we consider in this paper can be thought of as randomized graph-theoretic versions of the sums originally considered by Johansson.

1.3. Examples and Simulations

The following examples and simulations illustrate a few of the main theorems. All code used to generate these plots is available from the authors upon request.

Example 1.5.

Suppose $X$ is a normal random variable with mean $\mu$ and variance $\sigma^2$. Theorem 1.1 ensures that $W_n(\boldsymbol{\lambda})/(\sigma\sqrt{p})$ converges in distribution to a standard normal. The simulation in Figure 1 is performed with $\mu=\sigma^2=1$.

Figure 1. Histogram plot for $W_n(\boldsymbol{\lambda})/(\sigma\sqrt{p})$ using 750 trials taken with $n=1000$ and $p=1/2$.
Example 1.6.

Suppose $X$ is a Rademacher random variable. In particular, one has $\mathbb{P}(X=1)=\mathbb{P}(X=-1)=1/2$. The random variable $X$ is sub-gaussian and Theorem 1.2 ensures that $W_n(\boldsymbol{\lambda})$ converges in distribution to the sum of $pX$ with an independent normal random variable with mean zero and variance $\sigma_p^2=p(1-p)$. The probability density of this sum is given by the convolution of the gaussian $f(x)=(\sigma_p\sqrt{2\pi})^{-1}\exp\big[-x^2/(2\sigma_p^2)\big]$ with $\big[\delta(x+p)+\delta(x-p)\big]/2$. Interestingly, this density corresponds to the gaussian mixture $\big[f(x+p)+f(x-p)\big]/2$, which is bimodal in the case $p>1/2$. Figure 2 shows histogram plots for $W_n(\boldsymbol{\lambda})$ in the cases $p=1/2$ and $p=3/4$.

Figure 2. Histogram plots for $W_n(\boldsymbol{\lambda})$ using 750 trials taken with $n=1000$ and $p=1/2$ (left) and $p=3/4$ (right).
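The limiting mixture density of Example 1.6, against which these histograms can be compared, is simple to tabulate; the snippet below is our own illustration (assuming numpy), not part of the authors' code.

```python
import numpy as np

def limiting_density(x, p):
    """Limiting density of W_n(lambda) in Example 1.6: the Gaussian
    mixture (f(x + p) + f(x - p)) / 2, where f is the N(0, p(1-p))
    density; bimodal precisely when p > 1/2."""
    var = p * (1.0 - p)
    f = lambda y: np.exp(-y**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return 0.5 * (f(x + p) + f(x - p))

print(limiting_density(np.array([0.0, 0.5]), 0.5))    # single mode at 0
print(limiting_density(np.array([0.0, 0.75]), 0.75))  # dip at 0, peaks near +/- p
```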
Example 1.7.

Suppose $X$ is a normal random variable with non-zero mean $\mu$ and variance $\sigma^2$. Theorem 1.4 ensures that $W_n(\boldsymbol{s})/(\mu\sqrt{n})$ converges in distribution to a point mass at $\frac{8}{3\pi}\sqrt{p(1-p)}$. The simulation in Figure 3 is performed with $\mu=\sigma^2=1$.

Figure 3. Histogram plot for $W_n(\boldsymbol{s})/(\mu\sqrt{n})$ using 750 trials with $n=1000$ and $p=3/4$.

1.4. Outline of the Paper

This paper, which is intended for a wide probabilistic audience, takes us on a short journey through the spectral analysis of large random matrices and is organized as follows. Section 2 highlights a few classic results in random matrix theory which serve as prerequisites for later sections. No background in random matrices is assumed. Section 3 provides a computational lemma that we use to expand the partial moments of $W_n(\boldsymbol{a})$ in terms of power sum symmetric functions. Section 4 establishes the asymptotics for the partial moments of $W_n(\boldsymbol{a})$ by analyzing the limiting behavior of the power sum symmetric functions. Theorems 1.1 and 1.2 are proved in Section 5. Theorem 1.3 is proved in Section 6 and Theorem 1.4 is proved in Section 7. Finally, we conclude with possible directions for future work and closing remarks.

2. Random Matrix Prerequisites

The limiting spectral analysis of large random matrices has become a widely studied topic in probability since the pioneering work of Eugene Wigner, who proved that the expected empirical spectral distribution of a normalized $n\times n$ (Wigner) matrix tends to the semicircular law $\mu_{\operatorname{sc}}$. To begin, suppose $A$ is an $n\times n$ Hermitian matrix with complex entries. The eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_n$ of $A$ are real and we can define the one-dimensional distribution function

$$\mu_A(x) = \frac{1}{n}\Big|\{i\leq n : \lambda_i\leq x\}\Big|$$

called the empirical spectral distribution (ESD) of $A$. The relation [3, Sec. 1.3.1]

$$\frac{1}{n}\operatorname{tr}(A^k) = \int_{-\infty}^{\infty}x^k\,d\mu_A(x) \tag{2.1}$$

plays a fundamental role in random matrix theory. Specifically, it turns the problem of establishing convergence, in whatever sense, for the ESD of a sequence $\{A_n\}$ of random matrices into the problem of establishing convergence of the sequence $\{\frac{1}{n}\operatorname{tr}(A_n^k)\}$ for each fixed $k$.
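Relation (2.1) is also easy to sanity check numerically. The snippet below is a small illustration of ours (assuming numpy): for a fixed symmetric matrix, the normalized trace of $A^k$ agrees with the $k$-th moment of the ESD.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 4
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                       # a symmetric test matrix
lam = np.linalg.eigvalsh(A)
lhs = np.trace(np.linalg.matrix_power(A, k)) / n
rhs = np.mean(lam ** k)                 # k-th moment of the ESD of A
print(np.isclose(lhs, rhs))             # True, up to rounding error
```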

An $n\times n$ symmetric Wigner matrix is an $n\times n$ real symmetric matrix whose entries, modulo the symmetry condition $\xi_{ij}=\xi_{ji}$, are independent. Specifically, we permit i.i.d. mean zero entries $\xi_{ij}$ above the main diagonal and i.i.d. mean zero entries $\xi_{ii}$ on the main diagonal. These two families need not share the same distribution, however. Moreover, we impose the condition that all entries have bounded moments and share a common second moment. If $B_n$ is an $n\times n$ symmetric Wigner matrix, then we denote $A_n=B_n/\sqrt{n}$. The pioneering work of Wigner [33, 34] establishes

$$\lim_{n\to\infty}\frac{1}{n}\mathbb{E}\operatorname{tr}(A_n^k) = \int_{-\infty}^{\infty}x^k\,d\mu_{\operatorname{sc}}(x) \tag{2.2}$$

for all integers $k\geq 1$. In particular, the expected ESD of a normalized $n\times n$ symmetric Wigner matrix tends to the semicircular law whose density is given by

$$f_{\operatorname{sc}}(x) = \frac{1}{2\pi}\sqrt{4-x^2}\;\mathbb{1}_{[-2,2]}.$$

This original result due to Wigner has been extended in several aspects. Grenander [20] proved the empirical spectral distribution converges to $\mu_{\operatorname{sc}}$ in probability. Arnold [1, 2] further improved this result by showing the empirical spectral distribution converges to the semicircular law almost surely. We remark that the matrix ensembles underlying (2.2) can be generalized beyond those originally considered by Wigner and refer the reader to [3] and [31] for excellent surveys on the rich and rapidly developing field of spectral analysis of large random matrices.

2.1. Almost Sure Convergence of the ESD

The form of (2.2) that we need for later sections is due to Arnold. In particular, suppose $B_n$ is an $n\times n$ real symmetric matrix whose entries, modulo the symmetry condition $\xi_{ij}=\xi_{ji}$, are independent. Assume the upper-triangular entries share a common distribution with finite positive variance $\sigma^2$. In addition, we assume the diagonal entries also share a common distribution. Furthermore, assume the entries $\xi_{ij}$ have finite fourth and sixth moments for $i>j$ and the diagonal entries $\xi_{ii}$ have finite second and fourth moments. Define the normalized matrix $A_n=B_n/(\sigma\sqrt{n})$. Arnold proves that $\mu_{A_n}\to\mu_{\operatorname{sc}}$ almost surely in the sense that

$$\int_{\mathbb{R}}\phi(x)\,d\mu_{A_n}(x) \to \int_{\mathbb{R}}\phi(x)\,d\mu_{\operatorname{sc}}(x) \tag{2.3}$$

for all continuous and compactly supported test functions $\phi:\mathbb{R}\to\mathbb{R}$ [2, Thm. 2].

2.2. Real Symmetric Matrices with Independent Bounded Entries

Here and throughout we adopt standard asymptotic notation. In particular, we write $f(n)=O\big(g(n)\big)$ if there exists a constant $C$ such that $|f(n)|\leq Cg(n)$ for all $n$ sufficiently large. Moreover, we write $f(n)=o\big(g(n)\big)$ whenever $f(n)/g(n)\to 0$ as $n\to\infty$.

A result due to Füredi and Komlós allows us to analyze the limiting behavior of polynomials in $a_1,a_2,\ldots,a_n$. Suppose $A_n$ is an $n\times n$ real symmetric matrix with bounded entries. Moreover, we assume the entries of $A_n$, modulo the constraint $\xi_{ij}=\xi_{ji}$, are independent. Let $\mu=\mathbb{E}\xi_{ij}>0$ denote the common mean of the upper-triangular entries and let $\sigma^2=\mathbb{E}(\xi_{ij}-\mu)^2$ denote their common variance. Furthermore, suppose the diagonal entries share a common mean, $\nu=\mathbb{E}\xi_{ii}$. Füredi and Komlós [17, Thm. 1] show that the distribution of the largest eigenvalue $\lambda_1$ of $A_n$ can be approximated in order $n^{-1/2}$ by a normal distribution with mean $(n-1)\mu+\nu+\sigma^2/\mu$ and variance $2\sigma^2$. Moreover, with high probability (w.h.p.) we have

$$|\lambda_i| < 2\sigma\sqrt{n} + O(n^{1/3}\log n)$$

whenever $i\neq 1$.
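These two facts are easy to probe empirically. The sketch below is ours (assuming numpy): it draws the adjacency matrix of a $\mathcal{G}(n,p)$ graph, for which $\mu=p$, $\nu=0$ and $\sigma^2=p(1-p)$, compares $\lambda_1$ with the Füredi-Komlós prediction, and checks that $\lambda_2$ is of order $\sqrt{n}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 2000, 0.5
upper = np.triu(rng.random((n, n)) < p, k=1)
A = (upper + upper.T).astype(float)

lam = np.sort(np.linalg.eigvalsh(A))[::-1]
predicted = (n - 1) * p + (1 - p)        # (n-1)mu + nu + sigma^2/mu
sigma = np.sqrt(p * (1 - p))
print(lam[0], predicted)                 # close; fluctuations are O(1)
print(lam[1], 2 * sigma * np.sqrt(n))    # lambda_2 on the order of sqrt(n)
```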

3. A Computational Lemma for the Partial Moments of $W_n(\boldsymbol{a})$

Suppose $j\geq 1$ and $X_1,X_2,\ldots,X_n$ are i.i.d. random variables defined on the probability space $(\Omega,\mathcal{F},\mathbb{P}_\Omega)$. The lemma we present is a simple, albeit useful, computational tool for evaluating the moments

$$f_j(\boldsymbol{a}) = f_j^{(n)}(\boldsymbol{a}) = \mathbb{E}_\Omega\big(a_1X_1 + a_2X_2 + \cdots + a_nX_n\big)^j,$$

in which $\mathbb{E}_\Omega$ denotes expectation with respect to $X_1,X_2,\ldots,X_n$. This lemma expresses $f_j(\boldsymbol{a})$ as a sum taken over all partitions of $j$ and involves power sum symmetric polynomials in the variables $a_1,a_2,\ldots,a_n$. We recall these definitions below and refer the reader to [29, Sec. 1.7] and [30, Sec. 7.7] for in-depth discussions.

A partition of an integer $j\geq 1$ is a non-increasing sequence $\boldsymbol{\pi}=(j_1,j_2,\ldots,j_r)$ of positive integers such that $j_1+j_2+\cdots+j_r=j$. If $j\geq 2$ is an even integer, then a partition $\boldsymbol{\pi}=(j_1,j_2,\ldots,j_r)$ is a partition into even parts if $j_1,j_2,\ldots,j_r$ are even integers. We let $P(j)$ and $E(j)$ denote the set of all partitions of $j$ and the set of all partitions of $j$ into even parts, respectively. We define

$$y_{\boldsymbol{\pi}} = \prod_{i\geq 1}(i!)^{m_i}\,m_i!,$$

in which $m_i=m_i(\boldsymbol{\pi})$ denotes the multiplicity of $i$ appearing in a partition $\boldsymbol{\pi}$. Lastly, the power sum symmetric polynomial of degree $j$ in the variables $a_1,a_2,\ldots,a_n$ is the homogeneous polynomial defined by setting

$$\mathpzc{p}_j(\boldsymbol{a}) = \mathpzc{p}_j^{(n)}(\boldsymbol{a}) = a_1^j + a_2^j + \cdots + a_n^j.$$

We often denote $\mathpzc{p}_j=\mathpzc{p}_j(\boldsymbol{a})$ for brevity when there is no risk of confusion.
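The partitions of $j$ and the quantities $y_{\boldsymbol{\pi}}$ are easy to enumerate programmatically; the following helper is a sketch of ours in plain Python.

```python
from math import factorial

def partitions(j, max_part=None):
    """Yield all partitions of j as non-increasing tuples."""
    max_part = j if max_part is None else max_part
    if j == 0:
        yield ()
        return
    for first in range(min(j, max_part), 0, -1):
        for rest in partitions(j - first, first):
            yield (first,) + rest

def y_pi(pi):
    """y_pi = prod over i of (i!)^(m_i) * m_i!, where m_i is the
    multiplicity of the part i in the partition pi."""
    out = 1
    for i in set(pi):
        m = pi.count(i)
        out *= factorial(i) ** m * factorial(m)
    return out

for pi in partitions(4):
    print(pi, y_pi(pi))   # e.g. (2, 2) gives (2!)^2 * 2! = 8
```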

Lemma 3.1.

Let $j\geq 1$ be any integer and suppose $X_1,X_2,\ldots,X_n$ are i.i.d. random variables which admit a moment generating function. If $\kappa_1,\kappa_2,\ldots$ denote the cumulants of the $X_i$, then

$$f_j(\boldsymbol{a}) = j!\sum_{\boldsymbol{\pi}\in P(j)}\frac{\kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}}\,\mathpzc{p}_{\boldsymbol{\pi}},$$

where $\kappa_{\boldsymbol{\pi}}=\kappa_{j_1}\kappa_{j_2}\cdots\kappa_{j_r}$ and $\mathpzc{p}_{\boldsymbol{\pi}}=\mathpzc{p}_{j_1}\mathpzc{p}_{j_2}\cdots\mathpzc{p}_{j_r}$ given $\boldsymbol{\pi}=(j_1,j_2,\ldots,j_r)$.

Proof.

The random variables $X_1,X_2,\ldots,X_n$ are i.i.d., which implies that the moment generating function of $\sum_{k=1}^n a_kX_k$ takes the form $\prod_{k=1}^n M(a_kt)$, where $M(t)$ denotes the moment generating function of the $X_i$ [5, Sec. 9]. Therefore,

$$\sum_{j=0}^{\infty}f_j(\boldsymbol{a})\frac{t^j}{j!} = \prod_{k=1}^n M(a_kt) = \exp\Big(K(a_1t) + K(a_2t) + \cdots + K(a_nt)\Big),$$

in which $K(t)=\log M(t)$ denotes the cumulant generating function of the $X_i$. The identity $K(t)=\sum_{\ell=1}^{\infty}\kappa_\ell t^\ell/\ell!$, which defines the cumulant sequence $\kappa_1,\kappa_2,\ldots$, implies

$$\sum_{j=0}^{\infty}f_j(\boldsymbol{a})\frac{t^j}{j!} = \exp\Big(\sum_{\ell=1}^{\infty}\kappa_\ell\,\mathpzc{p}_\ell\,\frac{t^\ell}{\ell!}\Big) = \sum_{j=0}^{\infty}B_j(\kappa_1\mathpzc{p}_1,\kappa_2\mathpzc{p}_2,\ldots,\kappa_j\mathpzc{p}_j)\frac{t^j}{j!}, \tag{3.1}$$

where $B_j(x_1,\ldots,x_j)$ denotes the complete Bell polynomial of degree $j$ in the variables $x_1,x_2,\ldots,x_j$ [4, Sec. II] defined via the generating function

$$\sum_{j=0}^{\infty}B_j(x_1,x_2,\ldots,x_j)\frac{t^j}{j!} = \exp\Big(\sum_{\ell=1}^{\infty}x_\ell\frac{t^\ell}{\ell!}\Big). \tag{3.2}$$

Comparing coefficients in the above expression and applying the identity

$$B_j(x_1,x_2,\ldots,x_j) = j!\sum_{\substack{i_1,i_2,\ldots,i_j\geq 0\\ i_1+2i_2+\cdots+ji_j=j}}\prod_{s=1}^j\frac{x_s^{i_s}}{(s!)^{i_s}\,i_s!} = j!\sum_{\boldsymbol{\pi}\in P(j)}\frac{x_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}} \tag{3.3}$$

completes the proof. ∎
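As a quick sanity check of Lemma 3.1, consider $X\sim N(\mu,\sigma^2)$, for which $\kappa_1=\mu$, $\kappa_2=\sigma^2$ and all higher cumulants vanish. For $j=4$ the only surviving partitions are $(2,2)$, $(2,1,1)$ and $(1,1,1,1)$, and the lemma reduces to $f_4(\boldsymbol{a})=3\sigma^4\mathpzc{p}_2^2+6\sigma^2\mu^2\mathpzc{p}_2\mathpzc{p}_1^2+\mu^4\mathpzc{p}_1^4$. The Monte Carlo comparison below is our own illustration (assuming numpy) with arbitrarily chosen test weights.

```python
import numpy as np

rng = np.random.default_rng(3)
a = np.array([1.0, -2.0, 0.5, 3.0])   # arbitrary test weights
mu, sigma = 1.0, 2.0
p1, p2 = a.sum(), (a ** 2).sum()      # power sums p_1(a) and p_2(a)

# Lemma 3.1 specialized to j = 4 and normal X (higher cumulants vanish).
formula = (3 * sigma**4 * p2**2
           + 6 * sigma**2 * mu**2 * p2 * p1**2
           + mu**4 * p1**4)

X = rng.normal(mu, sigma, size=(2_000_000, a.size))
monte_carlo = np.mean((X @ a) ** 4)   # empirical fourth moment of sum a_i X_i
print(formula, monte_carlo)           # agree to Monte Carlo accuracy
```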

Remark 3.2.

Since $X$ admits a moment generating function, its cumulant generating function $K(t)$ converges in a neighborhood of $t=0$ [5, Sec. 9]. Relation (3.1) therefore holds in a neighborhood of $t=0$. We use this fact repeatedly throughout the rest of the paper.

4. Asymptotics for Erdős-Rényi-Gilbert Graphs

Here and throughout we let $\mathbb{E}_{\mathcal{G}}=\mathbb{E}_{\mathcal{G}(n,p)}$ and $\mathbb{P}_{\mathcal{G}}=\mathbb{P}_{\mathcal{G}(n,p)}$ denote expectation and probability, respectively, with respect to $\mathcal{G}(n,p)$. A fundamental fact is that the number of edges in a random graph of order $n$ follows a binomial distribution,

$$\mathbb{P}_{\mathcal{G}}\big(|E(G)|=m\big) = \binom{N}{m}p^m(1-p)^{N-m},$$

where $N=\binom{n}{2}$. Moreover, the adjacency matrix $A_n$ of a random graph of order $n$ is a random $n\times n$ real symmetric matrix whose upper-triangular entries $\xi_{ij}$ are bounded independent random variables with mean $\mu=p$ and variance $\sigma_p^2=p(1-p)$. The diagonal entries of $A_n$ satisfy $\nu=\mathbb{E}_{\mathcal{G}}[\xi_{ii}]=0$ since loops are not permitted. The result of Füredi and Komlós [17, Thm. 1], which we outline in Section 2.2, implies that w.h.p.,

$$|\lambda_1| = (n-1)p + 1 - p + O(n^{-1/2}) = \big(p + o(1)\big)n \tag{4.1}$$

and

$$|\lambda_2| < 2\sigma_p\sqrt{n} + O(n^{1/3}\log n) = \big(2\sigma_p + o(1)\big)\sqrt{n}. \tag{4.2}$$

4.1. The Eigenvalue Case

We now establish the limiting behavior for the partial moments $f_j(\boldsymbol{\lambda})=\mathbb{E}_\Omega(\lambda_1X_1+\lambda_2X_2+\cdots+\lambda_nX_n)^j$ in which $X_1,X_2,\ldots,X_n$ are i.i.d. random variables defined on $(\Omega,\mathcal{F},\mathbb{P}_\Omega)$ and $\lambda_1,\lambda_2,\ldots,\lambda_n$ correspond to eigenvalues in the Erdős-Rényi-Gilbert model. We recall $\mathpzc{p}_k(\boldsymbol{\lambda})$ denotes the power sum symmetric polynomial of degree $k$ in the variables $\lambda_1,\lambda_2,\ldots,\lambda_n$.

Lemma 4.1.

Let $k\geq 1$ be an odd integer. If $\lambda_1,\lambda_2,\ldots,\lambda_n$ correspond to eigenvalues in the Erdős-Rényi-Gilbert $\mathcal{G}(n,p)$ model, then we have $\mathpzc{p}_k(\boldsymbol{\lambda})=o(n^{1+k/2})$ almost surely.

Proof.

Let $B_n$ be the adjacency matrix of a random graph of order $n$. This matrix satisfies the hypotheses in Section 2.1. Moreover, the variance of the upper-triangular entries is given by $\sigma_p^2=p(1-p)$. The ESD of the normalized matrix $A_n=B_n/(\sqrt{n}\sigma_p)$ converges almost surely to the semicircular law by [2, Thm. 2] as seen in Section 2.1. Therefore, we can use (2.1) and the identity $\operatorname{tr}(B_n^k)=\mathpzc{p}_k(\boldsymbol{\lambda})$ to conclude

$$\frac{\mathpzc{p}_k(\boldsymbol{\lambda})}{n(\sqrt{n}\sigma_p)^k} = \frac{1}{n}\operatorname{tr}(A_n^k) \to \int_{-\infty}^{\infty}x^k\,d\mu_{\operatorname{sc}}(x) \tag{4.3}$$

almost surely. The symmetry of the semicircular density and the fact that $k$ is odd imply that $\frac{\mathpzc{p}_k(\boldsymbol{\lambda})}{n(\sqrt{n}\sigma_p)^k}\to 0$ almost surely. The claim follows. ∎

Proposition 4.2.

Let $j\geq 1$ be any integer and let $X_1,X_2,\ldots,X_n$ be i.i.d. random variables with cumulant sequence $\kappa_1,\kappa_2,\ldots$. Define

$$c_j(p) = j!\sum_{\boldsymbol{\pi}\in E(j)}\frac{\kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}}\,p^{j-m_2},$$

where $\kappa_{\boldsymbol{\pi}}$ and $y_{\boldsymbol{\pi}}$ are defined as in Lemma 3.1 and $E(j)$ denotes the set of partitions of $j$ into even parts. The partial moments $f_j(\boldsymbol{\lambda})$ satisfy the following, where $o(1)$ denotes a term tending to zero as $n\to\infty$ with $j$ fixed.

(a) If $j$ is odd, then $n^{-j}f_j(\boldsymbol{\lambda}) = o(1)$ w.h.p.

(b) If $j$ is even, then $n^{-j}f_j(\boldsymbol{\lambda}) = c_j(p) + o(1)$ w.h.p.

Proof.

Denote $\mathpzc{p}_i=\mathpzc{p}_i(\boldsymbol{\lambda})$ for brevity. The number of edges in a random graph $G_n$ of order $n$ is a binomial random variable with parameters $N=\binom{n}{2}$ and $p$. The expected number of edges in $G_n$ is therefore given by $\mathbb{E}_{\mathcal{G}}[m]=pN$. The weak law of large numbers implies that $m$ is tightly concentrated around its mean for large $n$. Therefore, we have $m=\big(p/2+o(1)\big)n^2$ w.h.p. If $A_n$ denotes the adjacency matrix of $G_n$, then $\mathpzc{p}_2=\operatorname{tr}(A_n^2)=2m$ [9, Thm. 3.1.1], which implies that w.h.p.,

$$\mathpzc{p}_2 = \big(p + o(1)\big)n^2. \tag{4.4}$$

If $i>2$ is even, then consider the bound $|\lambda_1|^i\leq\mathpzc{p}_i\leq|\lambda_1|^i+n|\lambda_2|^i$. Inequalities (4.1) and (4.2) imply that w.h.p.,

$$\big(p^i + o(1)\big)n^i \leq \mathpzc{p}_i < \big(p^i + o(1)\big)n^i + O(n^{1+i/2}).$$

Observe that $n^{1-i/2}\to 0$ as $n\to\infty$ since $i>2$. We conclude that w.h.p.,

$$\mathpzc{p}_i = \big(p^i + o(1)\big)n^i. \tag{4.5}$$

Let $\boldsymbol{\pi}=(j_1,j_2,\ldots,j_r)$ be a partition of $j$. Lemma 4.1, together with relations (4.4) and (4.5), implies that w.h.p.,

$$\mathpzc{p}_{j_1}\mathpzc{p}_{j_2}\cdots\mathpzc{p}_{j_r} = o(n^j) \tag{4.6}$$

whenever $\boldsymbol{\pi}$ contains an odd integer larger than one. If $m_1(\boldsymbol{\pi})\neq 0$, then (4.6) still holds since $\mathpzc{p}_1=\operatorname{tr}(A_n)=0$ for simple graphs. Any partition of an odd integer must contain an odd part. Lemma 3.1 implies that w.h.p.,

$$f_j(\boldsymbol{\lambda}) = o(n^j)$$

whenever $j$ is odd. This proves (a). If $j$ is even, then $\mathpzc{p}_{j_1}\mathpzc{p}_{j_2}\cdots\mathpzc{p}_{j_r}=o(n^j)$ w.h.p. unless $\boldsymbol{\pi}$ is a partition of $j$ into even parts. Lemma 3.1, together with relations (4.4) and (4.5), implies that w.h.p.,

$$f_j(\boldsymbol{\lambda}) = \Bigg(j!\sum_{\boldsymbol{\pi}\in E(j)}\frac{\kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}}p^{j-m_2} + o(1)\Bigg)n^j,$$

which proves (b). We remark that the $m_2$ term appearing in the last expression occurs because of the discrepancy in the power of $p$ occurring in (4.4) and (4.5). ∎

4.2. The Singular Value Case

We now establish the limiting behavior for the partial moments $f_j(\boldsymbol{s})=\mathbb{E}_\Omega(s_1X_1+s_2X_2+\cdots+s_nX_n)^j$ in which $X_1,X_2,\ldots,X_n$ are i.i.d. random variables defined on $(\Omega,\mathcal{F},\mathbb{P}_\Omega)$ and $s_1,s_2,\ldots,s_n$ correspond to singular values in the Erdős-Rényi-Gilbert model.

Proposition 4.3.

Let $j\geq 1$ be any integer and let $X_1,X_2,\ldots,X_n$ be i.i.d. mean zero random variables with cumulant sequence $\kappa_1,\kappa_2,\ldots$. The partial moments $f_j(\boldsymbol{s})$ satisfy

$$n^{-j}f_j(\boldsymbol{s}) = j!\sum_{\boldsymbol{\pi}\in P_0(j)}\frac{\kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}}p^{j-m_2} + o(1)$$

w.h.p., where $\kappa_{\boldsymbol{\pi}}$ and $y_{\boldsymbol{\pi}}$ are defined as in Lemma 3.1 and $P_0(j)$ denotes the set of all partitions of $j$ for which $m_1=0$.

Proof.

The singular values of a random graph correspond to the moduli of its eigenvalues. Therefore, (4.4) and (4.5) hold for $\mathpzc{p}_i=\mathpzc{p}_i(\boldsymbol{s})$ when $i$ is even. The exact same reasoning behind (4.5) implies $\mathpzc{p}_i=\big(p^i+o(1)\big)n^i$ when $i\neq 1$ is odd. Finally, $\kappa_1=0$ since $X_1,X_2,\ldots,X_n$ have mean zero. Lemma 3.1 implies the claim. ∎

Proposition 4.4.

Let $j\geq 1$ be any integer and let $X_1,X_2,\ldots,X_n\sim X$ be i.i.d. random variables in which $X$ admits a moment generating function and has non-zero mean $\mu$. The partial moments $f_j(\boldsymbol{s})$ satisfy

$$\frac{f_j(\boldsymbol{s})}{(\mu n^{3/2})^j} = \Big(\frac{8\sigma_p}{3\pi}\Big)^j + o(1)$$

w.h.p., in which $\sigma_p^2=p(1-p)$.

Proof.

The quantity $\mathpzc{p}_1=\mathpzc{p}_1(\boldsymbol{s})$ is called the graph energy [19] and [11, Thm. 1] establishes that w.h.p.,

$$\frac{\mathpzc{p}_1}{n^{3/2}} = \frac{8\sigma_p}{3\pi} + o(1).$$

Lemma 3.1 asserts

$$f_j(\boldsymbol{s}) = j!\sum_{\boldsymbol{\pi}\in P(j)}\frac{\kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}}\mathpzc{p}_{\boldsymbol{\pi}} = \mu^j\mathpzc{p}_1^j + j!\sum_{\boldsymbol{\pi}\neq(1,1,\ldots,1)}\frac{\kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}}\mathpzc{p}_{\boldsymbol{\pi}} \tag{4.7}$$

since $\kappa_1=\mu$ and $y_{\boldsymbol{\pi}}=m_1!=j!$ when $\boldsymbol{\pi}=(1,1,\ldots,1)\in P(j)$. We have so far established that $\mathpzc{p}_2=\big(p+o(1)\big)n^2$ and $\mathpzc{p}_i=\big(p^i+o(1)\big)n^i$ for integers $i\geq 3$. Dividing both sides of (4.7) by $(\mu n^{3/2})^j$ implies the claim since $\mathpzc{p}_{\boldsymbol{\pi}}/(n^{3/2})^j=o(1)$ w.h.p. whenever $\boldsymbol{\pi}\neq(1,1,\ldots,1)$. ∎
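The graph-energy asymptotic driving this proposition can also be probed numerically; the sketch below is ours (assuming numpy) and compares $\mathpzc{p}_1(\boldsymbol{s})/n^{3/2}$ with $8\sigma_p/(3\pi)$ for a single large sample.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 3000, 0.75
upper = np.triu(rng.random((n, n)) < p, k=1)
A = (upper + upper.T).astype(float)

energy = np.abs(np.linalg.eigvalsh(A)).sum()       # graph energy p_1(s)
sigma_p = np.sqrt(p * (1 - p))
print(energy / n**1.5, 8 * sigma_p / (3 * np.pi))  # nearby for large n
```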

5. Proof of Theorems 1.1 and 1.2

The following general form of the Hoeffding inequality [32, Thm. 2.6.3] plays an important role in our proof. Suppose $\xi_1,\xi_2,\ldots,\xi_n$ are independent mean zero sub-gaussian random variables. There exists an absolute constant $c_1>0$ such that

$$\mathbb{P}\Bigg(\Big|\sum_{i=1}^n a_i\xi_i\Big|\geq t\Bigg) \leq 2\exp\Big(-\frac{c_1t^2}{K^2\,\mathpzc{p}_2(\boldsymbol{a})}\Big)$$

for all $a_1,a_2,\ldots,a_n\in\mathbb{R}$, in which $\mathpzc{p}_2(\boldsymbol{a})=\sum_i a_i^2$ and $K=\max_i\|\xi_i\|_{\psi_2}$. If in addition $\xi_1,\xi_2,\ldots,\xi_n$ have unit variances, then

$$\Big\|\sum_{i=1}^n a_i\xi_i\Big\|_1 \leq \sqrt{\mathpzc{p}_2(\boldsymbol{a})}. \tag{5.1}$$

Moreover, there exists an absolute constant $c_2>0$ such that for all $q\geq 2$,

$$\Big\|\sum_{i=1}^n a_i\xi_i\Big\|_q \leq c_2K\sqrt{q}\sqrt{\mathpzc{p}_2(\boldsymbol{a})}. \tag{5.2}$$

5.1. Proof of Theorem 1.1

Suppose $X_1,X_2,\ldots,X_n\sim X$ are i.i.d. random variables defined on a probability space $(\Omega,\mathcal{F}_\Omega,\mathbb{P}_\Omega)$ in which $X$ is a sub-gaussian random variable with mean $\mu$ and variance $\sigma^2$. Define the auxiliary variables $\xi_i=\sigma^{-1}(X_i-\mu)$ and let $j\geq 1$ be any integer. The sum $W_n(\boldsymbol{\lambda})$ satisfies

$$\begin{aligned}
\mathbb{E}_\Omega|W_n(\boldsymbol{\lambda})|^j &= \|W_n(\boldsymbol{\lambda})\|_j^j \\
&= n^{-j}\Big\|\sum_{k=1}^n\lambda_kX_k\Big\|_j^j \\
&= n^{-j}\Big\|\sum_{k=1}^n\lambda_k(X_k-\mu) + \mu\sum_{k=1}^n\lambda_k\Big\|_j^j \\
&= n^{-j}\sigma^j\Big\|\sum_{k=1}^n\lambda_k\xi_k\Big\|_j^j
\end{aligned}$$

since simple graphs are traceless. The random variables $\xi_1,\xi_2,\ldots,\xi_n$ are mean zero sub-gaussian random variables with unit variances. Relations (5.1) and (5.2), together with the inequality $2m\leq n(n-1)$, imply that there exists a constant $c>0$, which is independent of $n$, such that

$$\mathbb{E}_\Omega|W_n(\boldsymbol{\lambda})|^j \leq cn^{-j}\big[\mathpzc{p}_2(\boldsymbol{\lambda})\big]^{j/2} = cn^{-j}(2m)^{j/2} \leq cn^{-j}[n(n-1)]^{j/2} \leq c. \tag{5.3}$$

Therefore, $\mathbb{E}_{\mathcal{G}}\mathbb{E}_\Omega|W_n(\boldsymbol{\lambda})|^j$ is finite and the Fubini-Tonelli theorem [16, Thm. 2.16] ensures

$$\mathbb{E}\big[W_n(\boldsymbol{\lambda})\big]^j = \mathbb{E}_{\mathcal{G}}\mathbb{E}_\Omega\big[W_n(\boldsymbol{\lambda})\big]^j, \tag{5.4}$$

in which $\mathbb{E}[\cdot]$ denotes expectation with respect to the product measure $\mathbb{P}_{\mathcal{G}}\otimes\mathbb{P}_\Omega$.

Proposition 4.2 and the uniform boundedness of the variables $\mathbb{E}_\Omega|W_n(\boldsymbol{\lambda})|^j$ allow us to compute the limit for the total expectation of $[W_n(\boldsymbol{\lambda})]^j$, which we now highlight. Define $E_j=\{G : n^{-j}f_j(\boldsymbol{\lambda})=\alpha_j\}$, where $\alpha_j=\alpha_j(n)$ is a real number to be chosen momentarily. Relation (5.4) implies

$$\begin{aligned}
\mathbb{E}[W_n(\boldsymbol{\lambda})]^j &= \mathbb{E}_{\mathcal{G}}[n^{-j}f_j(\boldsymbol{\lambda})] \\
&= \int_{\mathcal{G}}n^{-j}f_j(\boldsymbol{\lambda})\,d\mathbb{P}_{\mathcal{G}} \\
&= \alpha_j\,\mathbb{P}_{\mathcal{G}}(E_j) + \int_{E_j^c}n^{-j}f_j(\boldsymbol{\lambda})\,d\mathbb{P}_{\mathcal{G}}.
\end{aligned}$$

The inequality $n^{-j}|f_j(\boldsymbol{\lambda})|\leq\mathbb{E}_\Omega|W_n(\boldsymbol{\lambda})|^j$, together with (5.3), implies that

$$\Big|\int_{E_j^c}n^{-j}f_j(\boldsymbol{\lambda})\,d\mathbb{P}_{\mathcal{G}}\Big| \leq \int_{E_j^c}\mathbb{E}_\Omega|W_n(\boldsymbol{\lambda})|^j\,d\mathbb{P}_{\mathcal{G}} \leq c\,\mathbb{P}_{\mathcal{G}}(E_j^c).$$

Therefore,

$$\lim_{n\to\infty}\mathbb{E}[W_n(\boldsymbol{\lambda})]^j = \lim_{n\to\infty}\Big(\alpha_j\,\mathbb{P}_{\mathcal{G}}(E_j)\Big)$$

provided $\mathbb{P}_{\mathcal{G}}(E_j)\to 1$ as $n\to\infty$ so that $\mathbb{P}_{\mathcal{G}}(E_j^c)\to 0$ as $n\to\infty$. We now choose $\alpha_j$ according to Proposition 4.2 to conclude

$$\lim_{n\to\infty}\mathbb{E}\big[W_n(\boldsymbol{\lambda})\big]^j = \begin{cases} 0 & \text{for } j \text{ odd},\\ c_j(p) & \text{for } j \text{ even}. \end{cases} \tag{5.5}$$

A useful criterion for determining when a distribution is determined by its moments is that it admits a moment generating function [5, Thm. 30.1]. Suppose that the distribution of a random variable $W$ is determined by its moments. The method of moments [5, Thm. 30.2] ensures that $W_n(\boldsymbol{\lambda})$ converges in distribution to $W$ provided $W_n(\boldsymbol{\lambda})$ has moments of all orders and for all $j\geq 1$,

$$\lim_{n\to\infty}\mathbb{E}\big[W_n(\boldsymbol{\lambda})\big]^j = \mathbb{E}[W^j].$$

Recall identity (3.3) to conclude, for $j\geq 1$,

$$\begin{aligned}
(2j)!\sum_{\boldsymbol{\pi}\in E(2j)}\frac{\kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}}p^{2j-m_2} &= (2j)!\,p^{2j}\sum_{\boldsymbol{\pi}\in E(2j)}\frac{1}{y_{\boldsymbol{\pi}}}\kappa_2^{m_2}\kappa_4^{m_4}\cdots\kappa_{2j}^{m_{2j}}\,p^{-m_2} && (5.6) \\
&= (2j)!\,p^{2j}\sum_{\boldsymbol{\pi}\in E(2j)}\frac{1}{y_{\boldsymbol{\pi}}}\kappa^{m_2}\kappa_4^{m_4}\cdots\kappa_{2j}^{m_{2j}} && (5.7) \\
&= p^{2j}B_{2j}(0,\kappa,0,\kappa_4,0,\ldots,0,\kappa_{2j}), && (5.8)
\end{aligned}$$

where $\kappa=\kappa_2/p$. The generating function (3.2) for the complete Bell polynomials implies

$$\exp\Big(\sum_{\ell=1}^{\infty}x_\ell\frac{t^\ell}{\ell!}\Big) + \exp\Big(\sum_{\ell=1}^{\infty}x_\ell\frac{(-t)^\ell}{\ell!}\Big) = 2\sum_{j=0}^{\infty}B_{2j}(x_1,x_2,\ldots,x_{2j})\frac{t^{2j}}{(2j)!}.$$

Setting $x_1=x_3=\cdots=x_{2j-1}=0$ in the above expression and then simplifying yields the identity

$$\sum_{j=0}^{\infty}B_{2j}(0,x_2,0,\ldots,0,x_{2j})\frac{(pt)^{2j}}{(2j)!} = \exp\Big(\sum_{\ell=1}^{\infty}x_{2\ell}\frac{(pt)^{2\ell}}{(2\ell)!}\Big).$$

Setting $x_2=\kappa$ and $x_{2\ell}=\kappa_{2\ell}$ for $\ell\geq 2$ in the above expression and then appealing to (5.8) implies

$$\begin{aligned}
\sum_{j=0}^{\infty}c_{2j}(p)\frac{t^{2j}}{(2j)!} &= \sum_{j=0}^{\infty}B_{2j}(0,\kappa,0,\ldots,0,\kappa_{2j})\frac{(pt)^{2j}}{(2j)!} \\
&= \exp\Big(\frac{\kappa p^2t^2}{2} + \sum_{\ell=2}^{\infty}\kappa_{2\ell}\frac{(pt)^{2\ell}}{(2\ell)!}\Big) \\
&= \exp\Big(\frac{\kappa p^2t^2}{2} - \frac{\kappa_2p^2t^2}{2} + \sum_{\ell=1}^{\infty}\kappa_{2\ell}\frac{(pt)^{2\ell}}{(2\ell)!}\Big) \\
&= \exp\Big(\frac{\kappa p^2t^2}{2} - \frac{\kappa_2p^2t^2}{2}\Big)\exp\Bigg(\frac{K(pt)+K(-pt)}{2}\Bigg) \\
&= \exp\Big(\frac{1}{2}p(1-p)\sigma^2t^2\Big)\sqrt{M(pt)M(-pt)} && (5.9)
\end{aligned}$$

in a neighborhood of $t=0$, where $M(t)$ and $K(t)$ denote the moment and cumulant generating functions of $X$, respectively. If $X$ is a normal random variable with mean $\mu$ and variance $\sigma^2$, then the moment generating function of $X$ is given by $M(t)=\exp\big(\mu t+\frac{1}{2}\sigma^2t^2\big)$ and (5.9) implies that

$$\sum_{j=0}^{\infty}c_{2j}(p)\frac{t^{2j}}{(2j)!} = \exp\Big(\frac{1}{2}p\sigma^2t^2\Big)$$

in a neighborhood of $t=0$. The method of moments and (5.5) imply Theorem 1.1 since the moment generating function for the sum of two independent random variables is given by the product of their moment generating functions.

5.2. Proof of Theorem 1.2

If $X$ is a symmetric sub-gaussian random variable, then $M(t)=M(-t)$ and (5.9) implies

$$\sum_{j=0}^{\infty}c_{2j}(p)\frac{t^{2j}}{(2j)!} = \exp\Big(\frac{1}{2}p(1-p)\sigma^2t^2\Big)M(pt).$$

The moment generating function of $pX$ converges for all $t\in\mathbb{R}$ since $pX$ is a sub-gaussian random variable with mean zero [32, Prop. 2.5.2]. The right-hand side is the moment generating function for the sum of $pX$ with an independent normal with mean zero and variance $p(1-p)\sigma^2$. The corresponding distribution is therefore determined by its moments. The method of moments and (5.5) now conclude the proof of Theorem 1.2.

6. Proof of Theorem 1.3

Suppose $X_1,X_2,\ldots,X_n\sim X$ are i.i.d. random variables defined on $(\Omega,\mathcal{F},\mathbb{P}_\Omega)$ in which $X$ is a sub-gaussian random variable with mean zero and variance $\sigma^2$. The same reasoning as in Section 5 implies that for all $j\geq 1$,

$$\lim_{n\to\infty}\mathbb{E}[W_n(\boldsymbol{s})]^j = j!\sum_{\boldsymbol{\pi}\in P_0(j)}\frac{\kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}}p^{j-m_2}.$$

Set $\kappa=\kappa_2/p$ and apply (3.3) to conclude, for $j\geq 1$,

$$\begin{aligned}
j!\sum_{\boldsymbol{\pi}\in P_0(j)}\frac{\kappa_{\boldsymbol{\pi}}}{y_{\boldsymbol{\pi}}}p^{j-m_2} &= j!\,p^j\sum_{\boldsymbol{\pi}\in P_0(j)}\frac{1}{y_{\boldsymbol{\pi}}}\kappa_2^{m_2}\kappa_3^{m_3}\cdots\kappa_j^{m_j}\,p^{-m_2} \\
&= j!\,p^j\sum_{\boldsymbol{\pi}\in P_0(j)}\frac{1}{y_{\boldsymbol{\pi}}}\kappa^{m_2}\kappa_3^{m_3}\cdots\kappa_j^{m_j} \\
&= p^j\,B_j(0,\kappa,\kappa_3,\ldots,\kappa_j).
\end{aligned}$$

The generating function (3.2) for the complete Bell polynomials implies

$$\begin{aligned}
\sum_{j=0}^{\infty}B_j(0,\kappa,\kappa_3,\ldots,\kappa_j)\frac{(pt)^j}{j!} &= \exp\Big(\frac{1}{2}\kappa p^2t^2 + \sum_{\ell=3}^{\infty}\kappa_\ell\frac{(pt)^\ell}{\ell!}\Big) \\
&= \exp\Big(\frac{1}{2}\kappa p^2t^2 - \frac{1}{2}\kappa_2p^2t^2 + \sum_{\ell=2}^{\infty}\kappa_\ell\frac{(pt)^\ell}{\ell!}\Big) \\
&= \exp\Big(\frac{1}{2}\kappa p^2t^2 - \frac{1}{2}\kappa_2p^2t^2\Big)\exp\big(K(pt)\big) \\
&= \exp\Big(\frac{1}{2}p(1-p)\sigma^2t^2\Big)M(pt)
\end{aligned}$$

in a neighborhood of $t=0$, where $M(t)$ and $K(t)$ denote the moment and cumulant generating functions of $X$, respectively. The method of moments concludes the proof of Theorem 1.3.

7. Proof of Theorem 1.4

Suppose $X$ is a random variable defined on $(\Omega,\mathcal{F},\mathbb{P}_\Omega)$ which has a non-zero mean and admits a moment generating function. Let $X_1,X_2,\ldots,X_n\sim X$ be i.i.d. random variables. Denote $V_n(\boldsymbol{s})=W_n(\boldsymbol{s})/(\mu\sqrt{n})$ for brevity. Minkowski's inequality [5, p. 242] and the fact that $X_1,X_2,\ldots,X_n\sim X$ are identically distributed imply

$$\begin{aligned}
\Big(\mathbb{E}_\Omega|V_n(\boldsymbol{s})|^j\Big)^{1/j} &= \mu^{-1}n^{-3/2}\Big(\mathbb{E}_\Omega|s_1X_1 + s_2X_2 + \cdots + s_nX_n|^j\Big)^{1/j} \\
&= \mu^{-1}n^{-3/2}\|s_1X_1 + s_2X_2 + \cdots + s_nX_n\|_j \\
&\leq \mu^{-1}n^{-3/2}\|X\|_j\sum_{k=1}^n s_k.
\end{aligned}$$

The Cauchy-Schwarz inequality yields

$$\Big(\sum_{k=1}^n s_k\Big)^2 = \Big(\sum_{k=1}^n 1\cdot s_k\Big)^2 \leq n\sum_{k=1}^n s_k^2 = n\,\mathpzc{p}_2(\boldsymbol{s}).$$

Appealing to the inequality $\mathpzc{p}_2(\boldsymbol{s})=\mathpzc{p}_2(\boldsymbol{\lambda})=2m\leq n(n-1)$ now implies that there exists a constant $c$, which is independent of $n$, for which

$$\mathbb{E}_\Omega|V_n(\boldsymbol{s})|^j \leq \big(\mu^{-1}n^{-3/2}\big)^j\|X\|_j^j\big(\sqrt{2nm}\big)^j \leq c. \tag{7.1}$$

Therefore, $\mathbb{E}_{\mathcal{G}}\mathbb{E}_\Omega|V_n(\boldsymbol{s})|^j$ is finite and the Fubini-Tonelli theorem ensures

$$\mathbb{E}\big[V_n(\boldsymbol{s})\big]^j = \mathbb{E}_{\mathcal{G}}\mathbb{E}_\Omega\big[V_n(\boldsymbol{s})\big]^j, \tag{7.2}$$

in which $\mathbb{E}[\cdot]$ denotes expectation with respect to the product measure $\mathbb{P}_{\mathcal{G}}\otimes\mathbb{P}_\Omega$.

Proposition 4.4 and the uniform boundedness of the variables $\mathbb{E}_\Omega|V_n(\boldsymbol{s})|^j$ now allow us to compute the limit for the total expectation of $[V_n(\boldsymbol{s})]^j$, which we now highlight. Define $E_j=\{G : \big(\mu^{-1}n^{-3/2}\big)^jf_j(\boldsymbol{s})=\alpha_j\}$, where $\alpha_j=\alpha_j(n)$ is a real number to be chosen momentarily. Relation (7.2) implies

$$\begin{aligned}
\mathbb{E}[V_n(\boldsymbol{s})]^j &= \mathbb{E}_{\mathcal{G}}\big[\big(\mu^{-1}n^{-3/2}\big)^jf_j(\boldsymbol{s})\big] \\
&= \int_{\mathcal{G}}\big(\mu^{-1}n^{-3/2}\big)^jf_j(\boldsymbol{s})\,d\mathbb{P}_{\mathcal{G}} \\
&= \alpha_j\,\mathbb{P}_{\mathcal{G}}(E_j) + \int_{E_j^c}\big(\mu^{-1}n^{-3/2}\big)^jf_j(\boldsymbol{s})\,d\mathbb{P}_{\mathcal{G}}.
\end{aligned}$$

The inequality $\big(\mu^{-1}n^{-3/2}\big)^j|f_j(\boldsymbol{s})|\leq\mathbb{E}_\Omega|V_n(\boldsymbol{s})|^j$, together with (7.1), implies that

$$\Big|\int_{E_j^c}\big(\mu^{-1}n^{-3/2}\big)^jf_j(\boldsymbol{s})\,d\mathbb{P}_{\mathcal{G}}\Big| \leq \int_{E_j^c}\mathbb{E}_\Omega|V_n(\boldsymbol{s})|^j\,d\mathbb{P}_{\mathcal{G}} \leq c\,\mathbb{P}_{\mathcal{G}}(E_j^c).$$

Therefore,

$$\lim_{n\to\infty}\mathbb{E}[V_n(\boldsymbol{s})]^j = \lim_{n\to\infty}\Big(\alpha_j\,\mathbb{P}_{\mathcal{G}}(E_j)\Big)$$

provided $\mathbb{P}_{\mathcal{G}}(E_j)\to 1$ as $n\to\infty$ so that $\mathbb{P}_{\mathcal{G}}(E_j^c)\to 0$ as $n\to\infty$. We now choose $\alpha_j$ according to Proposition 4.4 to conclude

$$\lim_{n\to\infty}\mathbb{E}\big[V_n(\boldsymbol{s})\big]^j = \Big(\frac{8\sigma_p}{3\pi}\Big)^j. \tag{7.3}$$

Finally, the series

$$\sum_{j=0}^{\infty}\Big(\frac{8\sigma_p}{3\pi}\Big)^j\frac{t^j}{j!} = \exp\Big(\frac{8\sigma_p}{3\pi}t\Big)$$

corresponds to the moment generating function of a point mass at $8\sigma_p/(3\pi)$. The method of moments concludes the proof of Theorem 1.4.

8. Closing Remarks and Open Questions

There are several possible paths to take with this project. However, none of these paths appear to be particularly easy. We argue, without the intention of undermining its potential for difficulty, that a natural step forward is to replace the weights appearing in (1.2) with the eigenvalues of the Laplacian and signless Laplacian of a random graph [9, Ch. 7]. We recall that the Laplacian of a simple graph $G$ of order $n$ is the symmetric $n\times n$ matrix $L(G)$ defined by

$$L(G) = D(G) - A(G),$$

where $D(G)$ denotes the degree matrix of $G$ defined by setting $[D(G)]_{ii}$ equal to the degree of vertex $i$ and $[D(G)]_{ij}=0$ when $i\neq j$. The signless Laplacian of a simple graph $G$ of order $n$ is the symmetric $n\times n$ matrix $Q(G)$ defined by

$$Q(G) = D(G) + A(G).$$
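Both matrices are immediate to form from the adjacency matrix; the snippet below is a minimal sketch of ours (assuming numpy), not part of the paper's code.

```python
import numpy as np

def laplacians(A):
    """Return (L, Q) = (D - A, D + A), where D is the degree matrix."""
    D = np.diag(A.sum(axis=1))         # vertex degrees on the diagonal
    return D - A, D + A

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)  # path on three vertices
L, Q = laplacians(A)
print(np.linalg.eigvalsh(L))           # non-negative: L is positive semi-definite
```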

If $G$ is a random graph, then the upper-triangular entries of $L(G)$ and $Q(G)$ satisfy the hypotheses of the theorems in Section 2. Unfortunately, the vertex degrees of a random graph, while not highly correlated, are not independent. The theorems of Section 2, therefore, do not apply. A result due to Bryc, Dembo and Jiang on spectral measures of large Markov matrices [7, Thm. 1.3], however, can likely be adapted to handle this new situation. Lastly, we remark that the Laplacian and signless Laplacian matrices are positive semi-definite and therefore have non-negative eigenvalues. This motivates the following.

Problem 8.1.

Find analogs of Theorems 1.3 and 1.4 for the Laplacian and signless Laplacian spectra of Erdős-Rényi-Gilbert graphs.

Perhaps the most natural question to consider pertains to the assumptions imposed on $X_1,X_2,\ldots,X_n$. Ultimately, the sub-gaussian assumption in Theorems 1.2 and 1.3 is needed to ensure the partial moments of $W_n(\boldsymbol{a})$ are uniformly bounded over all graphs. This uniform boundedness is crucial for evaluating the limit of $\mathbb{E}[W_n(\boldsymbol{a})]^j$. The authors leave it as an interesting task to try to drop the sub-gaussian assumption in these theorems.

Problem 8.2.

Can we relax the sub-gaussian condition imposed on the random variables $X_1,X_2,\ldots,X_n$ appearing in Theorems 1.2 and 1.3?

Acknowledgments. The authors thank Stephan Ramon Garcia and Ken McLaughlin for useful comments and suggestions in the preparation of this manuscript.

References

  • [1] L. Arnold, On the asymptotic distribution of the eigenvalues of random matrices, J. Math. Anal. Appl. 20 (1967), 262-268.
  • [2] L. Arnold, On Wigner’s semicircle law for the eigenvalues of random matrices, Z. Wahrsch. Verw. Gebiete 19 (1971), 191-198.
  • [3] Z. Bai and J. Silverstein, Spectral Analysis of Large Dimensional Random Matrices, Springer-Verlag, New York, 2010.
  • [4] E. Bell, Exponential Polynomials, Ann. Math. 35 (1934), 258-277.
  • [5] P. Billingsley, Probability and Measure, Third Edition, John Wiley and Sons, New York, 1995.
  • [6] B. Bollobás, Modern Graph Theory, Graduate Texts in Mathematics, Springer, New York, 1998.
  • [7] W. Bryc, A. Dembo, T. Jiang, Spectral Measure of Large Random Hankel, Markov and Toeplitz Matrices, Ann. Probab. 34 (2006), no. 1, 1-38.
  • [8] G. Cipolloni, L. Erdős, D. Schröder, Central limit theorem for linear eigenvalue statistics of non-Hermitian random matrices, Comm. Pure Appl. Math. (2021) https://doi.org/10.1002/cpa.22028
  • [9] D. Cvetković, P. Rowlinson and S. Simić, An Introduction to the Theory of Graph Spectra, Cambridge University Press, Cambridge, 2009.
  • [10] A. de Moivre, The Doctrine of Chances: or, a Method of Calculating the Probability of Events in Play, W. Pearson, London, 1718.
  • [11] W. Du, X. Li, Y. Li, The Laplacian energy of random graphs, J. Math. Anal. Appl. 368 (2010), no. 1, 311-319.
  • [12] F. Dyson, Statistical Theory of the Energy Levels of Complex Systems. I, J. Math. Phys. 3, 140 (1962).
  • [13] F. Dyson, The Threefold Way. Algebraic Structure of Symmetry Groups and Ensembles in Quantum Mechanics, J. Math. Phys. 3, 1199 (1962).
  • [14] P. Erdős, A. Rényi, On random graphs I, Publ. Math. 6 (1959), 290-297.
  • [15] P. Erdős, A. Rényi, On the evolution of random graphs, Magyar Tud. Akad. Mat. Kutató Int. Közl. 5 (1960), 17-61.
  • [16] G. Folland, A Guide to Advanced Real Analysis, Mathematical Association of America, 2009.
  • [17] Z. Füredi, J. Komlós, The eigenvalues of random symmetric matrices, Combinatorica. 1 (1981), no. 3, 233-241.
  • [18] E. Gilbert, Random Graphs, Ann. Math. Stat. 30 (1959), no. 4, 1141–1144.
  • [19] I. Gutman, The energy of a graph, Ber. Math. Stat. Sekt. Forschungszent. Graz. 103 (1978), 1-22.
  • [20] U. Grenander, Probabilities on Algebraic Structures, John Wiley and Sons, New York, 1963.
  • [21] K. Johansson, On fluctuations of eigenvalues of random Hermitian matrices, Duke Math. J. 91 (1998), no. 1, 151-204.
  • [22] P. Laplace, Théorie Analytique des Probabilités, Imprimerie Royale, Paris, 1847.
  • [23] P. Lévy, Théorie de l’Addition des Variables Aléatoires, Gauthier-Villars, Paris, 1937.
  • [24] J. Lindeberg, Eine neue Herleitung des Exponentialgesetzes in der Wahrscheinlichkeitsrechnung, Math. Z. 15 (1922), 211-225.
  • [25] A. Lyapunov, Une proposition générale du calcul de probabilités, Comp. Ren. Heb. de l’Académie des Sci. de Paris. 132 (1901), 814.
  • [26] A. Lytova, L. Pastur, Central limit theorem for linear eigenvalue statistics of random matrices with independent entries, Ann. Probab. 37 (2009), no. 5, 1778-1840.
  • [27] V. Nikiforov, Extremal norms of graphs and matrices, J. Math. Sci. 182 (2012), 164-174.
  • [28] M. Shcherbina, B. Tirozzi, Central limit theorem for fluctuations of linear eigenvalue statistics of large random graphs, J. Math. Phys. 51 (2010)
  • [29] R. Stanley, Enumerative Combinatorics, Volume 1, 2nd Edition, Cambridge University Press, New York, 2012.
  • [30] R. Stanley, Enumerative Combinatorics, Volume 2, Cambridge University Press, New York, 1999.
  • [31] T. Tao, Topics in Random Matrix Theory, Graduate Studies in Mathematics Volume 132, American Mathematical Society, 2012.
  • [32] R. Vershynin, High-Dimensional Probability: An Introduction with Applications to Data Science, Cambridge University Press, Cambridge, 2018.
  • [33] E. Wigner, Characteristic vectors of bordered matrices with infinite dimensions, Ann. Math. 62 (1955), 548-564.
  • [34] E. Wigner, On the distributions of the roots of certain symmetric matrices, Ann. Math. 67 (1958), 325-327.