Limiting Distributions of Sums with Random Spectral Weights
Abstract.
This paper studies the asymptotic properties of weighted sums of the form $S_n = X_1 w_1 + \cdots + X_n w_n$, in which $X_1, \ldots, X_n$ are i.i.d. random variables and the weights $w_1, \ldots, w_n$ correspond to either eigenvalues or singular values in the classic Erdős-Rényi-Gilbert model. In particular, we prove central limit-type theorems for the suitably normalized sequences $S_n$ with varying conditions imposed on $X_1, \ldots, X_n$.
Key words and phrases:
random matrix, random graph, eigenvalue, singular value, central limit theorem, graph energy, Schatten norm, sub-gaussian, convolution, method of moments
2020 Mathematics Subject Classification:
60B20, 60F05, 05C80
1. Introduction
Suppose $X_1, X_2, \ldots$ is a sequence of random variables. A classic problem in probability is to understand the limiting behavior of the sum
$S_n = X_1 + X_2 + \cdots + X_n$
as $n \to \infty$.
Early versions of this problem, which consider the case when $X_1, X_2, \ldots$ are independent Bernoulli trials, are rooted in the work of de Moivre [10] and Laplace [22] pertaining to normal approximations to the binomial distribution. These classic results mark the beginnings of a long-standing problem of approximating laws of sums of random variables by normal distributions. This two-hundred-year problem ultimately culminated in what is today known as the central limit theorem. Major contributions towards our modern understanding of the central limit theorem are attributed, in particular, to Lévy [23], Lindeberg [24] and Lyapunov [25], among several others.
Suppose $X_1, X_2, \ldots$ are independent and identically distributed (i.i.d.) random variables with mean $\mu$ and finite positive variance $\sigma^2$. The version of the central limit theorem attributed to Lévy and Lindeberg [5, Thm. 27.1] asserts that the sequence of random variables $(S_n - n\mu)/(\sigma\sqrt{n})$ converges in distribution to a standard normal. A theorem attributed to Lyapunov [5, Thm. 27.3] shows the assumption that the variables be identically distributed can even be relaxed as long as the absolute moments of the $X_i$ satisfy a certain (Lyapunov) growth condition.
The present focus is to study the limiting behavior of sequences of the form
(1.1)  $S_n = \sum_{i=1}^{n} X_i w_i,$
in which $X_1, \ldots, X_n$ are i.i.d. random variables and the weights $w_1, \ldots, w_n$ correspond to either the eigenvalues or the singular values of a random symmetric matrix. Specifically, we take eigenvalues and singular values corresponding to the Erdős-Rényi-Gilbert random graph model $G(n,p)$. A random graph in this model, which was developed independently by Erdős-Rényi [14, 15] and Gilbert [18], is constructed by attaching edges among a set of $n$ labeled vertices independently with probability $p$. The random variables $X_1 w_1, \ldots, X_n w_n$ in this case are neither independent nor identically distributed, and there is no general method available to handle this situation. However, adjacency matrices of Erdős-Rényi-Gilbert graphs have bounded entries which, modulo the constraints imposed by symmetry, are independent. This simple fact, together with the almost sure convergence of their empirical spectral distributions to the semicircular law, allows us to establish central limit-type theorems for the sequences $S_n$.
1.1. Notation and Terminology
Graph theoretic terminology may be found in [6]. A graph of order $n$ is an ordered pair $G = (V, E)$ consisting of a set $V$ of $n$ vertices and a set $E$ of edges such that $E \subseteq \{\{u,v\} : u, v \in V\}$. We adopt standard notation and let $m = m(G)$ denote the number of edges in a graph $G$. A graph is simple if it contains no loops or multiple edges, and it is connected if each pair of its vertices is joined by a path. If $G$ is a graph of order $n$, then its adjacency matrix is the real symmetric $n \times n$ matrix $A = A(G)$ whose entries are defined by setting $a_{ij} = 1$ if vertices $i$ and $j$ are connected by an edge and $a_{ij} = 0$ otherwise. The spectrum of $G$ is the spectrum of its adjacency matrix $A$, and is therefore real since $A$ is Hermitian. We adopt a standard convention and write the spectrum of $G$ in non-increasing order,
$\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n.$
Moreover, the singular spectrum of $G$ consists of the singular values $s_1, \ldots, s_n$ of $A$. We remark that the singular values of the Hermitian matrix $A$ correspond to the moduli of its eigenvalues. Again, we adopt a standard convention and write the singular spectrum of $G$ in non-increasing order,
$s_1 \geq s_2 \geq \cdots \geq s_n \geq 0.$
For $k \geq 1$, we let $L^k(\Omega, \mathcal{F}, \mathbb{P})$ denote the vector space of random variables defined on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with finite $L^k$-norm defined by
$\|X\|_{L^k} = \left(\mathbb{E}|X|^k\right)^{1/k}.$
A random variable $X$ defined on a probability space is called sub-gaussian if $\mathbb{E}\exp(X^2/t^2) \leq 2$ for some $t > 0$, where
$\|X\|_{\psi_2} = \inf\{t > 0 : \mathbb{E}\exp(X^2/t^2) \leq 2\}$
is called the sub-gaussian norm of $X$ [32, Def. 2.5.6]. Gaussian, Bernoulli and bounded random variables are typical examples of sub-gaussian random variables [32, Ex. 2.5.8].
1.2. Statement of Results
This paper establishes central limit-type theorems for the sequences of weighted sums
(1.2)  $S_n = \sum_{i=1}^{n} X_i w_i,$
in which $X_1, \ldots, X_n$ are i.i.d. random variables and the weights $w_1, \ldots, w_n$ correspond to either the eigenvalues $\lambda_i$ or the singular values $s_i$ of Erdős-Rényi-Gilbert graphs. Graphs in the Erdős-Rényi-Gilbert model are constructed by attaching edges between each vertex pair from a set of $n$ labeled vertices independently with probability $p$. Here and henceforth we assume $p \in (0,1)$ is fixed.
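To make the objects above concrete, the following minimal Python sketch (ours, not the authors' code; all names and parameter choices are illustrative) samples one Erdős-Rényi-Gilbert graph and forms the weighted sum $S_n$ from (1.2). Since the $X_i$ are i.i.d., the pairing between the $X_i$ and the ordered weights does not affect the distribution of $S_n$.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_spectral_sum(n, p, sample_X, use_singular_values=False):
    """Sample S_n = sum_i X_i w_i for one G(n, p) graph, where the weights
    w_i are the eigenvalues (or singular values) of the adjacency matrix."""
    # Symmetric 0/1 adjacency matrix with zero diagonal.
    A = np.triu(rng.random((n, n)) < p, k=1).astype(float)
    A = A + A.T
    w = np.linalg.eigvalsh(A)      # real eigenvalues of the symmetric matrix A
    if use_singular_values:
        w = np.abs(w)              # singular values = moduli of the eigenvalues
    return np.dot(sample_X(n), w)

# One sample with standard normal X and eigenvalue weights,
# normalized as in Theorem 1.1 (here sigma = 1).
n, p = 400, 0.5
print(weighted_spectral_sum(n, p, rng.standard_normal) / (n * np.sqrt(p)))
```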
The first two theorems illustrate the relatively simple limiting distributions of $S_n$ in the case when certain symmetry conditions are imposed on $X$ and the weights are eigenvalues. The third theorem illustrates the simple limiting distributions of $S_n$ in the case when the weights are singular values and the $X_i$ are sub-gaussian with mean zero but not necessarily symmetric.
Theorem 1.1.
Suppose $X$ is a normal random variable with mean $\mu$ and variance $\sigma^2 > 0$. If $X_1, X_2, \ldots$ are i.i.d. copies of $X$ and the weights are eigenvalues, then $S_n/(\sigma n \sqrt{p})$ converges in distribution to a standard normal.
Theorem 1.2.
Suppose $X$ is a symmetric sub-gaussian random variable with variance $\sigma^2 > 0$. If $X_1, X_2, \ldots$ are i.i.d. copies of $X$ and the weights are eigenvalues, then
$\frac{S_n}{n\sqrt{p}} \longrightarrow \sqrt{p}\,X + Y$
in distribution, where $Y$ is a normal random variable, independent of $X$, with mean zero and variance $(1-p)\sigma^2$.
Theorem 1.3.
Suppose $X$ is a sub-gaussian random variable with mean zero and variance $\sigma^2 > 0$. If $X_1, X_2, \ldots$ are i.i.d. copies of $X$ and the weights are singular values, then
$\frac{S_n}{n\sqrt{p}} \longrightarrow \sqrt{p}\,X + Y$
in distribution, where $Y$ is a normal random variable, independent of $X$, with mean zero and variance $(1-p)\sigma^2$.
The final theorem illustrates, in particular, how sensitive the sequences $S_n$ are with respect to the conditions imposed on $X$. Very distinct behavior emerges by simply choosing random variables with non-zero mean. Ultimately, this distinction is rooted in the asymptotic behavior of the Schatten norm of a graph, which is defined for $q \geq 1$ by
$\|G\|_{S_q} = \left(\sum_{i=1}^{n} s_i^q\right)^{1/q}.$
In particular, the large-$n$ behavior of $\|G\|_{S_q}$ is very different for the cases $1 \leq q < 2$ and $q > 2$ [27, Thm. 5]. Note the sub-gaussian condition is also removed in the following theorem.
Theorem 1.4.
Suppose $X$ is any random variable with non-zero mean $\mu$ which admits a moment generating function. If $X_1, X_2, \ldots$ are i.i.d. copies of $X$ and the weights are singular values, then $S_n/n^{3/2}$ converges in distribution to a point mass at $\frac{8\mu}{3\pi}\sqrt{p(1-p)}$.
There exist various central limit-type theorems in the literature pertaining to sums of eigenvalues of random matrices, e.g. [8, 21, 26, 28]. Among the earliest results in this direction are due to Johansson [21]. These specific results concern random Hermitian matrices distributed according to the probability measure
(1.3)  $d\mathbb{P}_n(M) = \frac{1}{Z_n}\exp\left(-\operatorname{tr} V(M)\right)dM,$
in which $Z_n$ is a normalizing constant, $V$ is a polynomial with positive even degree and a positive leading coefficient and $dM$ denotes Lebesgue measure on the space of $n \times n$ complex Hermitian matrices. It bears worth mentioning that setting $V(x) = x^2$ and normalizing appropriately gives rise to the Gaussian unitary ensemble introduced by Dyson [12, 13]. The main result of [21] establishes that the centered linear eigenvalue statistic $\sum_{i=1}^{n} f(\lambda_i) - \mathbb{E}\sum_{i=1}^{n} f(\lambda_i)$ converges in distribution to a normal random variable with mean zero for a suitable class of test functions $f$. The sums we consider in this paper can be thought of as randomized graph-theoretic versions of the sums originally considered by Johansson.
1.3. Examples and Simulations
The following examples and simulations illustrate a few of the main theorems. All code used to generate these plots is available from the authors upon request.
Example 1.5.
Suppose $X$ is a standard normal random variable. Theorem 1.1 ensures that $S_n/(n\sqrt{p})$ converges in distribution to a standard normal. Figure 1 shows a histogram plot for samples of $S_n/(n\sqrt{p})$.
Example 1.6.
Suppose $X$ is a Rademacher random variable. In particular, one has $\mathbb{P}(X = 1) = \mathbb{P}(X = -1) = \frac{1}{2}$. The random variable $X$ is sub-gaussian and Theorem 1.2 ensures that $S_n/(n\sqrt{p})$ converges in distribution to the sum of $\sqrt{p}\,X$ with an independent normal random variable $Y$ with mean zero and variance $1-p$. The probability density of this sum is given by the convolution of the gaussian density of $Y$ with the law of $\sqrt{p}\,X$. Interestingly, this density corresponds to the gaussian mixture
$\frac{1}{2\sqrt{2\pi(1-p)}}\left(e^{-(x-\sqrt{p})^2/(2(1-p))} + e^{-(x+\sqrt{p})^2/(2(1-p))}\right),$
which is bimodal in the case $p > 1/2$. Figure 2 shows histogram plots for $S_n/(n\sqrt{p})$ in the cases $p < 1/2$ and $p > 1/2$.


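A small simulation consistent with Example 1.6 (our illustrative sketch, not the authors' released code; the parameters are arbitrary): it histograms samples of $S_n/(n\sqrt{p})$ with Rademacher $X_i$ and compares against the two-component gaussian mixture above.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_normalized_sum(n, p):
    """One sample of S_n / (n sqrt(p)) with Rademacher X and eigenvalue weights."""
    A = np.triu(rng.random((n, n)) < p, k=1).astype(float)
    A = A + A.T
    lam = np.linalg.eigvalsh(A)
    X = rng.choice([-1.0, 1.0], size=n)
    return np.dot(X, lam) / (n * np.sqrt(p))

n, p, trials = 300, 0.75, 500
samples = np.array([sample_normalized_sum(n, p) for _ in range(trials)])

# Limiting mixture density: (1/2) N(-sqrt(p), 1-p) + (1/2) N(sqrt(p), 1-p).
v = 1.0 - p
hist, edges = np.histogram(samples, bins=30, density=True)
centers = (edges[:-1] + edges[1:]) / 2
mix = (np.exp(-(centers - np.sqrt(p))**2 / (2 * v))
       + np.exp(-(centers + np.sqrt(p))**2 / (2 * v))) / (2 * np.sqrt(2 * np.pi * v))
print(np.max(np.abs(hist - mix)))   # small for large n and many trials
```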
1.4. Outline of the Paper
This paper, which is intended for a wide probabilistic audience, takes us on a short journey through the spectral analysis of large random matrices and is organized as follows. Section 2 highlights a few classic results in random matrix theory which serve as prerequisites for later sections. No background in random matrices is assumed. Section 3 provides a computational lemma that we use to expand the partial moments of $S_n$ in terms of power sum symmetric functions. Section 4 establishes the asymptotics for the partial moments of $S_n$ by analyzing the limiting behavior of the power sum symmetric functions. Theorems 1.1 and 1.2 are proved in Section 5. Theorem 1.3 is proved in Section 6 and Theorem 1.4 is proved in Section 7. Finally, we conclude with possible directions for future work and closing remarks.
2. Random Matrix Prerequisites
The limiting spectral analysis for large random matrices has become a widely studied topic in probability since the pioneering work of Eugene Wigner, who proved that the expected empirical spectral distribution of a normalized (Wigner) matrix tends to the semicircular law $\mu_{sc}$ [33, 34]. To begin, suppose $A$ is an $n \times n$ Hermitian matrix with complex entries. The eigenvalues $\lambda_1, \ldots, \lambda_n$ of $A$ are real and we can define the one-dimensional distribution function
$F^{A}(x) = \frac{1}{n}\,\#\{1 \leq i \leq n : \lambda_i \leq x\},$
called the empirical spectral distribution (ESD) of $A$. The relation [3, Sec. 1.3.1]
(2.1)  $\int_{\mathbb{R}} x^k \, dF^{A}(x) = \frac{1}{n}\operatorname{tr}(A^k)$
plays a fundamental role in random matrix theory. Specifically, it turns the problem of establishing convergence, in whatever sense, for the ESD of a sequence of random matrices into the problem of establishing convergence of the sequence of normalized traces $\frac{1}{n}\operatorname{tr}(A_n^k)$ for each fixed $k \geq 1$.
An $n \times n$ symmetric Wigner matrix $A_n$ is an $n \times n$ real symmetric matrix whose entries, modulo the symmetry condition $a_{ij} = a_{ji}$, are independent. Specifically, we permit i.i.d. mean zero entries above the main diagonal and i.i.d. mean zero entries on the main diagonal. These two families need not share the same distribution, however. Moreover, we impose the condition that all entries have bounded moments and share a common second moment $\sigma^2$. If $A_n$ is an $n \times n$ symmetric Wigner matrix, then we denote $W_n = A_n/\sqrt{n}$. The pioneering work of Wigner [33, 34] establishes
(2.2)  $\lim_{n \to \infty} \mathbb{E}\int_{\mathbb{R}} x^k \, dF^{W_n}(x) = \int_{\mathbb{R}} x^k \, d\mu_{sc}(x)$
for all integers $k \geq 1$. In particular, the expected ESD of a normalized symmetric Wigner matrix tends to the semicircular law $\mu_{sc}$ whose density is given by
$\frac{d\mu_{sc}}{dx} = \frac{1}{2\pi\sigma^2}\sqrt{4\sigma^2 - x^2}\,\mathbb{1}_{[-2\sigma, 2\sigma]}(x).$
This original result due to Wigner has been extended in several aspects. Grenander [20] proved the empirical spectral distribution converges to $\mu_{sc}$ in probability. Arnold [1, 2] further improved this result by showing the empirical spectral distribution converges to the semicircular law almost surely. We remark that the matrix ensembles underlying (2.2) can be generalized beyond those originally considered by Wigner and refer the reader to [3] and [31] for excellent surveys on the rich and rapidly developing field of spectral analysis of large random matrices.
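The moment form (2.2) of Wigner's theorem is easy to illustrate numerically. The sketch below (ours; an illustration, not a proof) checks that for $W_n = A_n/\sqrt{n}$ with unit-variance entries, $\frac{1}{n}\operatorname{tr}(W_n^{2k})$ approaches the Catalan number $\frac{1}{k+1}\binom{2k}{k}$, which is the $2k$-th moment of the semicircular law, while the odd moments vanish.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)

n = 2000
A = rng.choice([-1.0, 1.0], size=(n, n))    # Rademacher entries, variance 1
A = np.triu(A) + np.triu(A, k=1).T           # impose the symmetry a_ij = a_ji
lam = np.linalg.eigvalsh(A / np.sqrt(n))
for k in (1, 2, 3):
    catalan = comb(2 * k, k) / (k + 1)       # 2k-th moment of the semicircle
    print(2 * k, np.mean(lam ** (2 * k)), catalan)
```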
2.1. Almost Sure Convergence of the ESD
The form of (2.2) that we need for later sections is due to Arnold. In particular, suppose $A_n$ is an $n \times n$ real symmetric matrix whose entries $a_{ij}$, modulo the symmetry condition $a_{ij} = a_{ji}$, are independent. Assume the upper-triangular entries share a common distribution with finite positive variance $\sigma^2$. In addition, we assume the diagonal entries also share a common distribution. Furthermore, assume the entries $a_{ij}$ have finite fourth and sixth moments for $i \neq j$ and the diagonal entries have finite second and fourth moments. Define the normalized matrix $W_n = A_n/\sqrt{n}$. Arnold proves that $F^{W_n}$ converges to $\mu_{sc}$ almost surely in the sense that
(2.3)  $\lim_{n \to \infty} \int_{\mathbb{R}} f(x) \, dF^{W_n}(x) = \int_{\mathbb{R}} f(x) \, d\mu_{sc}(x)$
almost surely for all continuous and compactly supported test functions $f$ [2, Thm. 2].
2.2. Real Symmetric Matrices with Independent Bounded Entries
Here and throughout we adopt standard asymptotic notation. In particular, we write $f(n) = O(g(n))$ if there exists a constant $C > 0$ such that $|f(n)| \leq C|g(n)|$ for all sufficiently large $n$. Moreover, we write $f(n) = o(g(n))$ whenever $f(n)/g(n) \to 0$ as $n \to \infty$.
A result due to Füredi and Komlós allows us to analyze the limiting behavior of polynomials in the eigenvalues $\lambda_1, \ldots, \lambda_n$. Suppose $A$ is an $n \times n$ real symmetric matrix with bounded entries $a_{ij}$. Moreover, we assume the entries of $A$, modulo the constraint $a_{ij} = a_{ji}$, are independent. Let $\mu$ denote the common mean of the upper-triangular entries and let $\sigma^2$ denote their common variance. Furthermore, suppose the diagonal entries share a common mean, $\nu$. Füredi and Komlós [17, Thm. 1] show that the distribution of the largest eigenvalue $\lambda_1(A)$ can be approximated in order $n^{-1/2}$ by a normal distribution with mean $(n-1)\mu + \nu + \sigma^2/\mu$ and variance $2\sigma^2$. Moreover, with high probability (w.h.p.) we have
$\max_{2 \leq i \leq n} |\lambda_i(A)| \leq 2\sigma\sqrt{n} + O(n^{1/3}\log n)$
whenever $\mu > 0$.
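A quick numerical illustration of the Füredi-Komlós picture for $G(n,p)$ (our sketch; the parameter choices are arbitrary): the top eigenvalue sits near $(n-1)p + (1-p)$, while every other eigenvalue is $O(\sqrt{n})$.

```python
import numpy as np

rng = np.random.default_rng(3)

n, p = 1500, 0.3
A = np.triu(rng.random((n, n)) < p, k=1).astype(float)
A = A + A.T
lam = np.linalg.eigvalsh(A)                  # ascending order
sigma = np.sqrt(p * (1 - p))                 # entrywise standard deviation
print("lambda_1:", lam[-1], "vs (n-1)p + (1-p):", (n - 1) * p + (1 - p))
print("max_{i>=2} |lambda_i|:", max(abs(lam[0]), abs(lam[-2])),
      "vs 2 sigma sqrt(n):", 2 * sigma * np.sqrt(n))
```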
3. A Computational Lemma for the Partial Moments of $S_n$
Suppose $\lambda_1, \ldots, \lambda_n$ are real numbers and $X_1, \ldots, X_n$ are i.i.d. random variables defined on the probability space $(\Omega, \mathcal{F}, \mathbb{P}_X)$. The lemma we present is a simple, albeit useful, computational tool for evaluating the moments
$\mathbb{E}_X\left[S_n^k\right] = \mathbb{E}_X\left[(X_1\lambda_1 + \cdots + X_n\lambda_n)^k\right],$
in which $\mathbb{E}_X$ denotes expectation with respect to $\mathbb{P}_X$. This lemma expresses $\mathbb{E}_X[S_n^k]$ as a sum taken over all partitions of $k$ and involves power sum symmetric polynomials in the variables $\lambda_1, \ldots, \lambda_n$. We recall these definitions below and refer the reader to [29, Sec. 1.7] and [30, Sec. 7.7] for in-depth discussions.
A partition of an integer $k \geq 1$ is a non-increasing sequence of positive integers $\pi = (\pi_1, \ldots, \pi_\ell)$ such that $\pi_1 + \cdots + \pi_\ell = k$. If $k$ is an even integer, then a partition $\pi$ is a partition into even parts if $\pi_1, \ldots, \pi_\ell$ are even integers. We let $P(k)$ and $P_{\mathrm{even}}(k)$ denote the set of all partitions of $k$ and the set of all partitions of $k$ into even parts, respectively. We define
$M_\pi = \frac{k!}{\prod_{j=1}^{k} (j!)^{m_j(\pi)}\, m_j(\pi)!},$
in which $m_j(\pi)$ denotes the multiplicity of $j$ appearing in a partition $\pi$. Lastly, the power sum symmetric polynomial of degree $j$ in the variables $\lambda_1, \ldots, \lambda_n$ is the homogeneous polynomial defined by setting
$p_j(\lambda_1, \ldots, \lambda_n) = \sum_{i=1}^{n} \lambda_i^j.$
We often denote $p_j = p_j(\lambda_1, \ldots, \lambda_n)$ for brevity when there is no risk of confusion.
Lemma 3.1.
Let $k \geq 1$ be any integer and suppose $X_1, \ldots, X_n$ are i.i.d. random variables which admit a moment generating function. If $(\kappa_j)_{j \geq 1}$ denotes the cumulants of the $X_i$, then
$\mathbb{E}_X\left[S_n^k\right] = \sum_{\pi \in P(k)} M_\pi\, \kappa_\pi\, p_\pi,$
where $\kappa_\pi = \kappa_{\pi_1} \cdots \kappa_{\pi_\ell}$ and $p_\pi = p_{\pi_1} \cdots p_{\pi_\ell}$ given $\pi = (\pi_1, \ldots, \pi_\ell)$.
Proof.
The random variables $X_1, \ldots, X_n$ are i.i.d., which implies that the moment generating function of $S_n$ takes the form $M_{S_n}(t) = \prod_{i=1}^{n} M(\lambda_i t)$, where $M$ denotes the moment generating function of the $X_i$ [5, Sec. 9]. Therefore,
$\log M_{S_n}(t) = \sum_{i=1}^{n} K(\lambda_i t),$
in which $K = \log M$ denotes the cumulant generating function of the $X_i$. The identity $K(t) = \sum_{j=1}^{\infty} \kappa_j t^j / j!$, which defines the cumulant sequence $(\kappa_j)_{j \geq 1}$, implies
(3.1)  $M_{S_n}(t) = \exp\left(\sum_{j=1}^{\infty} \frac{\kappa_j p_j}{j!} t^j\right) = \sum_{k=0}^{\infty} \frac{B_k(\kappa_1 p_1, \ldots, \kappa_k p_k)}{k!} t^k,$
where $B_k$ denotes the complete Bell polynomial of degree $k$ in the variables $x_1, \ldots, x_k$ [4, Sec. II] defined via the generating function
(3.2)  $\exp\left(\sum_{j=1}^{\infty} \frac{x_j}{j!} t^j\right) = \sum_{k=0}^{\infty} \frac{B_k(x_1, \ldots, x_k)}{k!} t^k.$
Comparing coefficients in the above expression and applying the identity
(3.3)  $B_k(x_1, \ldots, x_k) = \sum_{\pi \in P(k)} M_\pi \prod_{j=1}^{k} x_j^{m_j(\pi)}$
completes the proof. ∎
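Lemma 3.1 is easy to spot-check numerically. The sketch below (ours) verifies the case $k = 3$ for exponential variables, whose cumulants are $\kappa_j = (j-1)!$: the partitions of $3$ are $(3)$, $(2,1)$ and $(1,1,1)$ with $M_\pi = 1, 3, 1$ respectively, so the lemma predicts $\mathbb{E}_X[S_n^3] = \kappa_3 p_3 + 3\kappa_2\kappa_1 p_2 p_1 + \kappa_1^3 p_1^3$.

```python
import numpy as np

rng = np.random.default_rng(4)

lam = np.array([1.0, 2.0, 3.0])                # fixed weights lambda_i
p1, p2, p3 = lam.sum(), (lam**2).sum(), (lam**3).sum()
k1, k2, k3 = 1.0, 1.0, 2.0                     # cumulants of Exp(1)
predicted = k3 * p3 + 3 * k2 * k1 * p2 * p1 + k1**3 * p1**3   # Lemma 3.1

X = rng.exponential(scale=1.0, size=(10**6, lam.size))
S = X @ lam
print(predicted, np.mean(S**3))                # 540.0 vs Monte Carlo estimate
```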
4. Asymptotics for Erdős-Rényi-Gilbert Graphs
Here and throughout we let $\mathbb{E}_G$ and $\mathbb{P}_G$ denote expectation and probability, respectively, with respect to the Erdős-Rényi-Gilbert measure. A fundamental fact is that the number of edges $m = m(G)$ in a random graph of order $n$ follows a binomial distribution,
$\mathbb{P}_G(m = \ell) = \binom{N}{\ell} p^\ell (1-p)^{N - \ell},$
where $N = \binom{n}{2}$. Moreover, the adjacency matrix $A$ of a random graph of order $n$ is a random real symmetric matrix whose upper-triangular entries are bounded independent random variables which have mean $p$ and variance $p(1-p)$. The diagonal elements of $A$ satisfy $a_{ii} = 0$ since loops are not permitted. The result by Füredi and Komlós [17, Thm. 1], which we outline in Section 2.2, implies that w.h.p.,
(4.1)  $\lambda_1 = np\,(1 + o(1))$
and
(4.2)  $\max_{2 \leq i \leq n} |\lambda_i| = O(\sqrt{n}).$
4.1. The Eigenvalue Case
We now establish the limiting behavior for the partial moments $\mathbb{E}_X[S_n^k]$ in which $X_1, X_2, \ldots$ are i.i.d. random variables defined on $(\Omega, \mathcal{F}, \mathbb{P}_X)$ and the weights $\lambda_1, \ldots, \lambda_n$ correspond to eigenvalues in the Erdős-Rényi-Gilbert model. We recall $p_j$ denotes the power sum symmetric polynomial of degree $j$ in the variables $\lambda_1, \ldots, \lambda_n$.
Lemma 4.1.
Let $j \geq 1$ be an odd integer. If $\lambda_1, \ldots, \lambda_n$ correspond to eigenvalues in the Erdős-Rényi-Gilbert model, then we have $p_j(\lambda_2, \ldots, \lambda_n) = o(n^{j/2+1})$ almost surely.
Proof.
Let $A$ be the adjacency matrix of a random graph of order $n$. This matrix satisfies the hypotheses in Section 2.1. Moreover, the variance of the upper-triangular entries is given by $\sigma^2 = p(1-p)$. The ESD of the normalized matrix $W_n = A/\sqrt{n}$ converges almost surely to the semicircular law by [2, Thm. 2] as seen in Section 2.1. Therefore, we can use (2.1) and the identity $p_j(\lambda_1, \ldots, \lambda_n) = \operatorname{tr}(A^j)$ to conclude
(4.3)  $\lim_{n \to \infty} \frac{p_j(\lambda_2, \ldots, \lambda_n)}{n^{j/2+1}} = \int_{\mathbb{R}} x^j \, d\mu_{sc}(x)$
almost surely. The symmetry of the semicircular density and the fact that $j$ is odd imply that $\int_{\mathbb{R}} x^j \, d\mu_{sc}(x) = 0$. The claim follows. ∎
Proposition 4.2.
Let $k \geq 1$ be any integer and let $X_1, X_2, \ldots$ be i.i.d. symmetric random variables with cumulant sequence $(\kappa_j)_{j \geq 1}$. Define
$\Gamma_k = \sum_{\pi \in P_{\mathrm{even}}(k)} M_\pi\, \kappa_\pi\, p^{(k - 2m_2(\pi))/2},$
where $M_\pi$ and $\kappa_\pi$ are defined as in Lemma 3.1 and $P_{\mathrm{even}}(k)$ denotes the set of partitions of $k$ into even parts. The partial moments $\mathbb{E}_X[S_n^k]$ satisfy the following, where $o(1)$ denotes a term tending to zero as $n \to \infty$ with $k$ fixed.
(a) If $k$ is odd, then $\mathbb{E}_X[S_n^k] = o\big((n\sqrt{p})^k\big)$ w.h.p.
(b) If $k$ is even, then $\mathbb{E}_X[S_n^k] = (n\sqrt{p})^k\,(\Gamma_k + o(1))$ w.h.p.
Proof.
Denote $A = A(G)$ for brevity. The number of edges $m$ in a random graph of order $n$ is a binomial random variable with parameters $\binom{n}{2}$ and $p$. The expected number of edges in $G$ is therefore given by $\binom{n}{2}p$. The weak law of large numbers implies that $m$ is tightly concentrated around its mean for large $n$. Therefore, we have $2m = n^2 p\,(1 + o(1))$ w.h.p. If $A$ denotes the adjacency matrix of $G$, then $\operatorname{tr}(A^2) = 2m$ [9, Thm. 3.1.1], which implies that w.h.p.,
(4.4)  $p_2 = n^2 p\,(1 + o(1)).$
If $4 \leq j \leq k$ is even, then consider the bound $p_j \leq \lambda_1^j + n \max_{2 \leq i \leq n} |\lambda_i|^j$. Inequalities (4.1) and (4.2) imply that w.h.p.,
$p_j = (np)^j (1 + o(1)) + O\big(n^{j/2+1}\big).$
Observe that $n^{j/2+1} = o\big((np)^j\big)$ as $n \to \infty$ since $j/2 + 1 < j$. We conclude that w.h.p.,
(4.5)  $p_j = (np)^j (1 + o(1)).$
Let $\pi$ be a partition of $k$. Lemma 4.1, the symmetry of the $X_i$, and relations (4.4) and (4.5) imply that w.h.p.,
(4.6)  $M_\pi\, \kappa_\pi\, p_\pi = o\big((n\sqrt{p})^k\big)$
whenever $\pi$ contains an odd integer larger than one. If $\pi$ contains a part equal to one, then (4.6) still holds since $p_1 = \operatorname{tr}(A) = 0$ for simple graphs. Any partition of an odd integer must contain an odd part. Lemma 3.1 implies that w.h.p.,
$\mathbb{E}_X[S_n^k] = o\big((n\sqrt{p})^k\big)$
whenever $k$ is odd. This proves (a). If $k$ is even, then $M_\pi \kappa_\pi p_\pi = o\big((n\sqrt{p})^k\big)$ w.h.p. unless $\pi$ is a partition of $k$ into even parts. Lemma 3.1, together with relations (4.4) and (4.5), imply that w.h.p.,
$\mathbb{E}_X[S_n^k] = (n\sqrt{p})^k\,(\Gamma_k + o(1)),$
which proves (b). We remark that the $o(1)$ term appearing in the last expression occurs because of the discrepancy in the powers of $p$ occurring in (4.4) and (4.5). ∎
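The $k = 2$ case of Proposition 4.2(b) is transparent and easy to check numerically (our sketch): $P_{\mathrm{even}}(2) = \{(2)\}$ gives $\Gamma_2 = \kappa_2$, and for unit-variance $X$ one has $\mathbb{E}_X[S_n^2] = \sum_i \lambda_i^2 = 2m \approx n^2 p = (n\sqrt{p})^2$.

```python
import numpy as np

rng = np.random.default_rng(5)

n, p = 1000, 0.35
A = np.triu(rng.random((n, n)) < p, k=1).astype(float)
A = A + A.T
lam = np.linalg.eigvalsh(A)
# E_X[S_n^2] = sum_i lambda_i^2 for mean zero, unit variance X.
print((lam**2).sum() / (n * np.sqrt(p))**2)   # close to Gamma_2 = 1
```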
4.2. The Singular Value Case
We now establish the limiting behavior for the partial moments $\mathbb{E}_X[S_n^k]$ in which $X_1, X_2, \ldots$ are i.i.d. random variables defined on $(\Omega, \mathcal{F}, \mathbb{P}_X)$ and the weights $s_1, \ldots, s_n$ correspond to singular values in the Erdős-Rényi-Gilbert model.
Proposition 4.3.
Let $k \geq 1$ be any integer and let $X_1, X_2, \ldots$ be i.i.d. mean zero random variables with cumulant sequence $(\kappa_j)_{j \geq 1}$. The partial moments satisfy
$\mathbb{E}_X[S_n^k] = (n\sqrt{p})^k \left(\sum_{\pi \in P^*(k)} M_\pi\, \kappa_\pi\, p^{(k - 2m_2(\pi))/2} + o(1)\right)$
w.h.p., where $M_\pi$ and $\kappa_\pi$ are defined as in Lemma 3.1 and $P^*(k)$ denotes the set of all partitions of $k$ for which $m_1(\pi) = 0$.
Proof.
The argument is identical to the proof of Proposition 4.2 once we observe that $p_2(s_1, \ldots, s_n) = p_2(\lambda_1, \ldots, \lambda_n) = 2m$, that $p_j(s_1, \ldots, s_n) = (np)^j(1 + o(1))$ w.h.p. for every integer $j \geq 3$, and that every partition containing a part equal to one is annihilated by the assumption $\kappa_1 = 0$. ∎
Proposition 4.4.
Let $k \geq 1$ be any integer and let $X_1, X_2, \ldots$ be i.i.d. random variables in which $X$ admits a moment generating function and has non-zero mean $\mu$. The partial moments satisfy
$\mathbb{E}_X[S_n^k] = n^{3k/2}\big(\gamma^k + o(1)\big)$
w.h.p., in which $\gamma = \frac{8\mu}{3\pi}\sqrt{p(1-p)}$.
5. Proof of Theorems 1.1 and 1.2
The following general form of the Hoeffding inequality [32, Thm. 2.6.3] plays an important role in our proof. Suppose $Y_1, \ldots, Y_n$ are independent mean zero sub-gaussian random variables. There exists an absolute constant $C > 0$ such that
$\Big\|\sum_{i=1}^{n} a_i Y_i\Big\|_{\psi_2}^2 \leq C \sum_{i=1}^{n} a_i^2\, \|Y_i\|_{\psi_2}^2$
for all $(a_1, \ldots, a_n) \in \mathbb{R}^n$, in which $\|\cdot\|_{\psi_2}$ denotes the sub-gaussian norm and $K = \max_i \|Y_i\|_{\psi_2}$. If in addition $Y_1, \ldots, Y_n$ have unit variances, then
(5.1)  $\Big\|\sum_{i=1}^{n} a_i Y_i\Big\|_{\psi_2}^2 \leq C K^2 \sum_{i=1}^{n} a_i^2.$
Moreover, there exists an absolute constant $C' > 0$ such that for all $k \geq 1$,
(5.2)  $\|Y\|_{L^k} \leq C' \sqrt{k}\, \|Y\|_{\psi_2}.$
5.1. Proof of Theorem 1.1
Suppose $X_1, X_2, \ldots$ are i.i.d. random variables defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P}_X)$ in which $X$ is a sub-gaussian random variable with mean $\mu$ and variance $\sigma^2$. Define the auxiliary variables $Y_i = (X_i - \mu)/\sigma$ and let $k \geq 1$ be any integer. The sum $S_n$ satisfies
$S_n = \sigma \sum_{i=1}^{n} Y_i \lambda_i + \mu \sum_{i=1}^{n} \lambda_i = \sigma \sum_{i=1}^{n} Y_i \lambda_i$
since simple graphs are traceless. The random variables $Y_i$ are mean zero sub-gaussian random variables with unit variances. Relations (5.1) and (5.2), together with the inequality $p_2 = 2m \leq n^2$, imply that there exists a constant $c_k > 0$, which is independent of $n$, such that
(5.3)  $\mathbb{E}_X\left[\left|\frac{S_n}{\sigma n}\right|^k\right] \leq c_k.$
Therefore, $\mathbb{E}_G \mathbb{E}_X\big[|S_n/(\sigma n)|^k\big]$ is finite and the Fubini-Tonelli theorem [16, Thm. 2.16] ensures
(5.4)  $\mathbb{E}\left[\left(\frac{S_n}{\sigma n}\right)^k\right] = \mathbb{E}_G\, \mathbb{E}_X\left[\left(\frac{S_n}{\sigma n}\right)^k\right],$
in which $\mathbb{E}$ denotes expectation with respect to the product measure $\mathbb{P}_G \times \mathbb{P}_X$.
Proposition 4.2 and the uniform boundedness (5.3) of the partial moments allow us to compute the limit for the total expectation of $\widetilde{S}_n^k$, which we now highlight. Define $\widetilde{S}_n = S_n/(\sigma n \sqrt{p})$ and the event $E_n = \{|\mathbb{E}_X[\widetilde{S}_n^k] - \Gamma_k| \leq \epsilon\}$, where $\epsilon > 0$ is a real number to be chosen momentarily and $\Gamma_k$ is formed with the cumulants of the $Y_i$. Relation (5.4) implies
$\mathbb{E}\big[\widetilde{S}_n^k\big] = \mathbb{E}_G\big[\mathbb{1}_{E_n}\, \mathbb{E}_X[\widetilde{S}_n^k]\big] + \mathbb{E}_G\big[\mathbb{1}_{E_n^c}\, \mathbb{E}_X[\widetilde{S}_n^k]\big].$
The inequality
$\big|\mathbb{E}_G\big[\mathbb{1}_{E_n^c}\, \mathbb{E}_X[\widetilde{S}_n^k]\big]\big| \leq p^{-k/2} c_k\, \mathbb{P}_G(E_n^c),$
together with (5.3), implies that the second term above vanishes as $n \to \infty$. Therefore,
$\limsup_{n \to \infty} \big|\mathbb{E}[\widetilde{S}_n^k] - \Gamma_k\big| \leq \epsilon$
provided $\mathbb{P}_G(E_n^c) \to 0$ as $n \to \infty$ so that $\mathbb{P}_G(E_n) \to 1$ as $n \to \infty$. We now let $\epsilon \to 0$ and apply Proposition 4.2 to the symmetric variables $Y_i$ to conclude
(5.5)  $\lim_{n \to \infty} \mathbb{E}\big[\widetilde{S}_n^k\big] = \Gamma_k.$
A useful criterion for determining when a distribution is determined by its moments is that it admits a moment generating function [5, Thm. 30.1]. Suppose that the distribution of a random variable $Z$ is determined by its moments. The method of moments [5, Thm. 30.2] ensures that $\widetilde{S}_n$ converges in distribution to $Z$ provided $Z$ has moments of all orders and for all $k \geq 1$,
$\lim_{n \to \infty} \mathbb{E}\big[\widetilde{S}_n^k\big] = \mathbb{E}\big[Z^k\big].$
Recall identity (3.3) to conclude, for all $k \geq 1$,
(5.6)  $\Gamma_k = \sum_{\pi \in P_{\mathrm{even}}(k)} M_\pi\, \kappa_\pi\, p^{(k - 2m_2(\pi))/2}$
(5.7)  $= \sum_{\pi \in P(k)} M_\pi \prod_{j=1}^{k} x_j^{m_j(\pi)}$
(5.8)  $= B_k(x_1, \ldots, x_k),$
where $x_2 = p\kappa_2 + (1-p)$, $x_j = \kappa_j p^{j/2}$ for even $j \geq 4$, and $x_j = 0$ for odd $j$ (recall $\kappa_2 = 1$ since the $Y_i$ have unit variances). The generating function (3.2) for the complete Bell polynomials implies
$\sum_{k=0}^{\infty} \frac{B_k(x_1, \ldots, x_k)}{k!} t^k = \exp\left(\sum_{j=1}^{\infty} \frac{x_j}{j!} t^j\right).$
Setting $x_2 = p\kappa_2 + (1-p)$ and $x_j = \kappa_j p^{j/2}$ for $j \neq 2$ in the above expression and then simplifying yields the identity
$\exp\left(\sum_{j=1}^{\infty} \frac{x_j}{j!} t^j\right) = e^{(1-p)t^2/2} \exp\left(\sum_{j=1}^{\infty} \frac{\kappa_j}{j!} (\sqrt{p}\, t)^j\right) = e^{(1-p)t^2/2}\, e^{K(\sqrt{p}\, t)}.$
Since $M = e^{K}$, appealing to (5.8) implies
(5.9)  $\sum_{k=0}^{\infty} \frac{\Gamma_k}{k!} t^k = e^{(1-p)t^2/2}\, M(\sqrt{p}\, t)$
in a neighborhood of the origin, where $M$ and $K$ denote the moment and cumulant generating functions of the $Y_i$, respectively. If $X$ is a normal random variable with mean $\mu$ and variance $\sigma^2$, then the moment generating function of $Y_1 = (X_1 - \mu)/\sigma$ is given by $M(t) = e^{t^2/2}$ and (5.9) implies that
$\sum_{k=0}^{\infty} \frac{\Gamma_k}{k!} t^k = e^{(1-p)t^2/2}\, e^{p t^2/2} = e^{t^2/2}$
in a neighborhood of the origin. The method of moments and (5.5) imply Theorem 1.1 since the moment generating function for the sum of two independent random variables is given by the product of their moment generating functions.
5.2. Proof of Theorem 1.2
If $X$ is a symmetric sub-gaussian random variable with variance $\sigma^2$, then $\mu = 0$, $Y_1 = X_1/\sigma$ and (5.9) implies
$\sum_{k=0}^{\infty} \frac{\Gamma_k}{k!} t^k = e^{(1-p)t^2/2}\, M(\sqrt{p}\, t).$
The moment generating function of $X$ converges for all $t \in \mathbb{R}$ since $X$ is a sub-gaussian random variable with mean zero [32, Prop. 2.5.2]. The right hand side is the moment generating function for the sum of $\sqrt{p}\, Y_1$ with an independent normal with mean zero and variance $1-p$. The corresponding distribution is therefore determined by its moments. The method of moments and (5.5), together with the rescaling $S_n/(n\sqrt{p}) = \sigma \widetilde{S}_n$, now conclude the proof of Theorem 1.2.
6. Proof of Theorem 1.3
Suppose $X_1, X_2, \ldots$ are i.i.d. random variables defined on $(\Omega, \mathcal{F}, \mathbb{P}_X)$ in which $X$ is a sub-gaussian random variable with mean zero and variance $\sigma^2$, and suppose the weights $s_1, \ldots, s_n$ correspond to singular values. The same reasoning of Section 5, with Proposition 4.3 in place of Proposition 4.2, implies that for all $k \geq 1$,
$\lim_{n \to \infty} \mathbb{E}\left[\left(\frac{S_n}{n\sqrt{p}}\right)^k\right] = \Gamma_k^* = \sum_{\pi \in P^*(k)} M_\pi\, \kappa_\pi\, p^{(k - 2m_2(\pi))/2}.$
Set $x_1 = 0$, $x_2 = p\kappa_2 + (1-p)\sigma^2$ and $x_j = \kappa_j p^{j/2}$ for $j \geq 3$, and apply (3.3) to conclude, for all $k \geq 1$,
$\Gamma_k^* = B_k(x_1, x_2, \ldots, x_k).$
The generating function (3.2) for the complete Bell polynomials implies
$\sum_{k=0}^{\infty} \frac{\Gamma_k^*}{k!} t^k = e^{(1-p)\sigma^2 t^2/2}\, M(\sqrt{p}\, t)$
in a neighborhood of the origin, where $M$ and $K$ denote the moment and cumulant generating functions of $X$, respectively. The right hand side is the moment generating function of $\sqrt{p}\, X + Y$, where $Y$ is a normal random variable, independent of $X$, with mean zero and variance $(1-p)\sigma^2$. The method of moments concludes the proof of Theorem 1.3.
7. Proof of Theorem 1.4
Suppose $X$ is a random variable defined on $(\Omega, \mathcal{F}, \mathbb{P}_X)$ which has a non-zero mean $\mu$ and admits a moment generating function. Let $X_1, X_2, \ldots$ be i.i.d. copies of $X$ and let the weights $s_1, \ldots, s_n$ correspond to singular values. Denote $T_n = S_n/n^{3/2}$ for brevity. Minkowski's inequality [5, p. 242] and the fact that $X_1, \ldots, X_n$ are identically distributed imply
$\|T_n\|_{L^k} \leq \frac{\|X\|_{L^k}}{n^{3/2}} \sum_{i=1}^{n} s_i.$
The Cauchy-Schwarz inequality yields the inequality
$\sum_{i=1}^{n} s_i \leq \sqrt{n \sum_{i=1}^{n} s_i^2} = \sqrt{2mn}.$
Appealing to the inequality $2m \leq n^2$ now implies that there exists a constant $c_k > 0$, which is independent of $n$, for which
(7.1)  $\mathbb{E}_X\big[|T_n|^k\big] \leq c_k.$
Therefore, $\mathbb{E}_G \mathbb{E}_X\big[|T_n|^k\big]$ is finite and the Fubini-Tonelli theorem ensures
(7.2)  $\mathbb{E}\big[T_n^k\big] = \mathbb{E}_G\, \mathbb{E}_X\big[T_n^k\big],$
in which $\mathbb{E}$ denotes expectation with respect to the product measure $\mathbb{P}_G \times \mathbb{P}_X$.
Proposition 4.4 and the uniform boundedness (7.1) of the partial moments now allow us to compute the limit for the total expectation of $T_n^k$, which we now highlight. Define the event $E_n = \{|\mathbb{E}_X[T_n^k] - \gamma^k| \leq \epsilon\}$, where $\epsilon > 0$ is a real number to be chosen momentarily and $\gamma = \frac{8\mu}{3\pi}\sqrt{p(1-p)}$. Relation (7.2) implies
$\mathbb{E}\big[T_n^k\big] = \mathbb{E}_G\big[\mathbb{1}_{E_n}\, \mathbb{E}_X[T_n^k]\big] + \mathbb{E}_G\big[\mathbb{1}_{E_n^c}\, \mathbb{E}_X[T_n^k]\big].$
The inequality
$\big|\mathbb{E}_G\big[\mathbb{1}_{E_n^c}\, \mathbb{E}_X[T_n^k]\big]\big| \leq c_k\, \mathbb{P}_G(E_n^c),$
together with (7.1), implies that the second term above vanishes as $n \to \infty$. Therefore,
$\limsup_{n \to \infty} \big|\mathbb{E}[T_n^k] - \gamma^k\big| \leq \epsilon$
provided $\mathbb{P}_G(E_n^c) \to 0$ as $n \to \infty$ so that $\mathbb{P}_G(E_n) \to 1$ as $n \to \infty$. We now let $\epsilon \to 0$ and apply Proposition 4.4 to conclude
(7.3)  $\lim_{n \to \infty} \mathbb{E}\big[T_n^k\big] = \gamma^k.$
Finally, the series
$\sum_{k=0}^{\infty} \frac{\gamma^k}{k!} t^k = e^{\gamma t}$
corresponds to the moment generating function of a point mass at $\gamma$. The method of moments concludes the proof of Theorem 1.4.
8. Closing Remarks and Open Questions
There are several possible paths to take with this project. However, none of these paths appear to be particularly easy. We argue, without the intention of undermining its potential for difficulty, that a natural step forward is to replace the weights appearing in (1.2) with the eigenvalues of the Laplacian and signless Laplacian of a random graph [9, Ch. 7]. We recall that the Laplacian of a simple graph $G$ of order $n$ is the symmetric $n \times n$ matrix defined by
$L = D - A,$
where $D$ denotes the degree matrix of $G$ defined by setting $d_{ii}$ equal to the degree of vertex $i$ and $d_{ij} = 0$ when $i \neq j$. The signless Laplacian of a simple graph $G$ of order $n$ is the symmetric $n \times n$ matrix defined by
$Q = D + A.$
If $G$ is a random graph, then the upper-triangular entries of $L$ and $Q$ satisfy the hypotheses of the theorems in Section 2. Unfortunately, the vertex degrees of a random graph, while not highly correlated, are not independent. The theorems of Section 2, therefore, do not apply. A result due to Bryc, Dembo and Jiang on spectral measures of large Markov matrices [7, Thm. 1.3], however, can likely be adapted to handle this new situation. Lastly, we remark that the Laplacian and signless Laplacian matrices are positive semi-definite and therefore have non-negative eigenvalues. This motivates the following.
Problem 8.1.
Establish central limit-type theorems for the sums (1.2) in which the weights correspond to the eigenvalues of the Laplacian or signless Laplacian of a random graph.
Perhaps the most natural question to consider pertains to the assumptions imposed on $X$. Ultimately, the sub-gaussian assumption in Theorems 1.2 and 1.3 is needed to ensure the partial moments of $S_n$ are uniformly bounded over all graphs. This uniform boundedness is crucial for evaluating the limit of $\mathbb{E}[S_n^k]$. The authors leave it as an interesting task to try and drop the sub-gaussian assumption in these theorems.
Problem 8.2.
Determine whether the conclusions of Theorems 1.2 and 1.3 remain valid when the sub-gaussian assumption on $X$ is dropped.
Acknowledgments. The authors thank Stephan Ramon Garcia and Ken McLaughlin for useful comments and suggestions in the preparation of this manuscript.
References
- [1] L. Arnold, On the asymptotic distribution of the eigenvalues of random matrices, J. Math. Anal. Appl. 20 (1967), 262-268.
- [2] L. Arnold, On Wigner’s semicircle law for the eigenvalues of random matrices, Z. Wahrsch. Verw. Gebiete 19 (1971), 191-198.
- [3] Z. Bai and J. Silverstein, Spectral Analysis of Large Dimensional Random Matrices, Springer-Verlag, New York, 2010.
- [4] E. Bell, Exponential Polynomials, Ann. Math. 35 (1934), 258-277.
- [5] P. Billingsley, Probability and Measure, Third Edition, John Wiley and Sons, New York, 1995.
- [6] B. Bollobás, Modern Graph Theory, Graduate Texts in Mathematics, Springer, New York, 1998.
- [7] W. Bryc, A. Dembo, T. Jiang, Spectral Measure of Large Random Hankel, Markov and Toeplitz Matrices, Ann. Probab. 34 (2006), no. 1, 1-38.
- [8] G. Cipolloni, L. Erdős, D. Schröder, Central limit theorem for linear eigenvalue statistics of non-Hermitian random matrices, Comm. Pure Appl. Math. (2021) https://doi.org/10.1002/cpa.22028
- [9] D. Cvetković, P. Rowlinson and S. Simić, An Introduction to the Theory of Graph Spectra, Cambridge University Press, Cambridge, 2009.
- [10] A. de Moivre, The Doctrine of Chances: or, a Method of Calculating the Probability of Events in Play, W. Pearson, London, 1718.
- [11] W. Du, X. Li, Y. Li, The Laplacian energy of random graphs, J. Math. Anal. Appl. 368 (2010), no. 1, 311-319.
- [12] F. Dyson, Statistical Theory of the Energy Levels of Complex Systems. I, J. Math. Phys. 3, 140 (1962).
- [13] F. Dyson, The Threefold Way. Algebraic Structure of Symmetry Groups and Ensembles in Quantum Mechanics, J. Math. Phys. 3, 1199 (1962).
- [14] P. Erdős, A. Rényi, On random graphs I, Publ. Math. Debrecen 6 (1959), 290-297.
- [15] P. Erdős, A. Rényi, On the evolution of random graphs, Magyar Tud. Akad. Mat. Kutató Int. Közl. 5 (1960), 17-61.
- [16] G. Folland, A Guide to Advanced Real Analysis, Mathematical Association of America, 2009.
- [17] Z. Füredi, J. Komlós, The eigenvalues of random symmetric matrices, Combinatorica. 1 (1981), no. 3, 233-241.
- [18] E. Gilbert, Random Graphs, Ann. Math. Stat. 30 (1959), no. 4, 1141–1144.
- [19] I. Gutman, The energy of a graph, Ber. Math. Stat. Sekt. Forschungszent. Graz. 103 (1978), 1-22.
- [20] U. Grenander, Probabilities on Algebraic Structures, John Wiley and Sons, New York, 1963.
- [21] K. Johansson, On fluctuations of eigenvalues of random Hermitian matrices, Duke Math. J. 91 (1998), no. 1, 151-204.
- [22] P. Laplace, Théorie Analytique des Probabilités, Imprimerie Royale, Paris, 1847.
- [23] P. Lévy, Théorie de l’Addition des Variables Aléatoires, Gauthier-Villars, Paris, 1937.
- [24] J. Lindeberg, Eine neue Herleitung des Exponentialgesetzes in der Wahrscheinlichkeitsrechnung, Math. Z. 15 (1922), 211-225.
- [25] A. Lyapunov, Une proposition générale du calcul des probabilités, Comp. Ren. Heb. de l’Académie des Sci. de Paris 132 (1901), 814.
- [26] A. Lytova, L. Pastur, Central limit theorem for linear eigenvalue statistics of random matrices with independent entries, Ann. Probab. 37 (2009), no. 5, 1778-1840.
- [27] V. Nikiforov, Extremal norms of graphs and matrices, J. Math. Sci. 182 (2012), 164-174.
- [28] M. Shcherbina, B. Tirozzi, Central limit theorem for fluctuations of linear eigenvalue statistics of large random graphs, J. Math. Phys. 51 (2010)
- [29] R. Stanley, Enumerative Combinatorics, Volume 1, 2nd Edition, Cambridge University Press, New York, 2012.
- [30] R. Stanley, Enumerative Combinatorics, Volume 2, Cambridge University Press, New York, 1999.
- [31] T. Tao, Topics in Random Matrix Theory, Graduate Studies in Mathematics Volume 132, American Mathematical Society, 2012.
- [32] R. Vershynin, High-Dimensional Probability: An Introduction with Applications to Data Science, Cambridge University Press, Cambridge, 2018.
- [33] E. Wigner, Characteristic vectors of bordered matrices with infinite dimensions, Ann. Math. 62 (1955), 548-564.
- [34] E. Wigner, On the distributions of the roots of certain symmetric matrices, Ann. Math. 67 (1958), 325-327.