Perron similarities and the nonnegative inverse eigenvalue problem
Abstract.
The longstanding nonnegative inverse eigenvalue problem (NIEP) is to determine which multisets of complex numbers occur as the spectrum of an entry-wise nonnegative matrix. Although there are some well-known necessary conditions, a solution to the NIEP is far from known.
An invertible matrix is called a Perron similarity if it diagonalizes an irreducible, nonnegative matrix. Johnson and Paparella [Linear Algebra Appl., 493 (2016), 281–300] developed the theory of real Perron similarities. Here, we fully develop the theory of complex Perron similarities.
Each Perron similarity gives a nontrivial polyhedral cone and convex polytope of realizable spectra (thought of as vectors in complex Euclidean space). The extremals of these convex sets are finite in number, and their determination for each Perron similarity would solve the diagonalizable NIEP, a major portion of the entire problem. By considering Perron similarities of certain realizing matrices of Type I Karpelevič arcs, large portions of realizable spectra are generated for a given positive integer. This is demonstrated by producing a nearly complete geometrical representation of the spectra of four-by-four stochastic matrices.
Similar to the Karpelevič region, it is shown that the subset of complex Euclidean space comprising the spectra of stochastic matrices is compact and star-shaped. Extremal elements of the set are defined and shown to be on the boundary.
It is shown that the polyhedral cone and convex polytope of the discrete Fourier transform (DFT) matrix correspond to the conical hull and convex hull of its rows, respectively. Similar results are established for multifold Kronecker products of DFT matrices and multifold Kronecker products of DFT matrices and Walsh matrices. These polytopes are of great significance with respect to the NIEP because they are extremal in the region comprising the spectra of stochastic matrices.
Implications for further inquiry are also given.
2020 Mathematics Subject Classification:
Primary: 15A29; Secondary: 15B48, 15B51
1. Introduction
The nonnegative inverse eigenvalue problem (NIEP) asks, for each positive integer n, which multisets of n complex numbers occur as the eigenvalues of an n-by-n entry-wise nonnegative matrix? This has proven to be one of the most challenging problems in mathematics, and is certainly the most sought-after question in matrix analysis. Thus, a variety of sub-questions have been worthy goals.
If A ≥ 0 (entry-wise) is n-by-n and has spectrum σ, then σ is called realizable, and A is called a realizing matrix. If the realizing matrix is required to be diagonalizable, then the resulting subproblem is called the diagonalizable NIEP or the DNIEP. There are differences between the two problems [16, p. 214], and both are unsolved when n ≥ 5. A solution to either appears far off. For additional information about the NIEP, and its numerous variants, there is a recent survey [16].
It is known that if σ = {λ_1, …, λ_n} is realizable and A is a realizing matrix for σ, then
(1.1) ρ(σ) := max{ |λ_i| : 1 ≤ i ≤ n } ∈ σ;
(1.2) σ̄ := { λ̄_1, …, λ̄_n } = σ;
(1.3) s_k(σ) := λ_1^k + ⋯ + λ_n^k ≥ 0, for every k ∈ ℕ;
and
(1.4) s_k(σ)^m ≤ n^{m−1} s_{km}(σ), for all k, m ∈ ℕ.
These conditions are not independent: Loewy and London [25] showed that the moment condition (1.3) implies the self-conjugacy condition (1.2). Friedland [9, Theorem 1] showed that the eventual nonnegativity of the moments implies the spectral radius condition (1.1). Finally, if the trace is nonnegative, i.e., if s_1(σ) ≥ 0, then the JLL condition (1.4) (established independently by Johnson [15] and by Loewy and London [25]) implies the moment condition since s_m(σ) ≥ s_1(σ)^m / n^{m−1} ≥ 0 for every m ∈ ℕ.
Thus, the JLL condition and the nonnegativity of the trace imply the self-conjugacy, spectral radius, and moment conditions.
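For illustration, the following minimal sketch (ours; the function name, the tested ranges of k and m, and the tolerances are ad hoc choices) checks a candidate list of complex numbers against conditions (1.1)–(1.4):

```python
import numpy as np

def necessary_conditions(spectrum, kmax=6, tol=1e-10):
    lam = np.asarray(spectrum, dtype=complex)
    n = lam.size
    rho = np.max(np.abs(lam))
    # (1.1): the spectral radius itself must appear in the list
    perron = bool(np.any(np.abs(lam - rho) < tol))
    # (1.2): the list must be closed under complex conjugation
    self_conjugate = np.allclose(np.sort_complex(lam), np.sort_complex(lam.conj()), atol=1e-8)
    # moments s_k = lam_1^k + ... + lam_n^k, for k = 1, ..., kmax^2
    s = {k: np.sum(lam ** k) for k in range(1, kmax * kmax + 1)}
    # (1.3): every moment is (real and) nonnegative
    moment = all(abs(s[k].imag) < tol and s[k].real > -tol for k in range(1, kmax + 1))
    # (1.4): JLL, s_k^m <= n^(m-1) * s_{km}
    jll = all(s[k].real ** m <= n ** (m - 1) * s[k * m].real + tol
              for k in range(1, kmax + 1) for m in range(1, kmax + 1))
    return perron, self_conjugate, moment, jll

# The spectrum of a nonnegative (in fact stochastic) matrix passes all four tests.
A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.3, 0.3, 0.4]])
print(necessary_conditions(np.linalg.eigvals(A)))
```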
Holtz [11] showed that if σ is realizable, with λ_1 = ρ(σ), then the shifted spectrum λ_1 − σ := {λ_1 − λ_1, λ_1 − λ_2, …, λ_1 − λ_n} satisfies Newton's inequalities. Furthermore, Holtz demonstrated that these inequalities are independent of (1.3) and (1.4).
The characterization of the nonzero spectra of nonnegative matrices is due to Boyle and Handelman [3] (a constructive version of their main result was given by Laffey [23]). However, despite their remarkable achievement, and the stringent necessary conditions listed above, the NIEP and its variants are unsolved when n is greater than four.
Our focus here is upon the DNIEP (without loss of generality, when the realizing matrix is irreducible). We are able to explicitly characterize invertible matrices that diagonalize irreducible nonnegative matrices. We call them Perron similarities. We show also that, for each Perron similarity, there is a nontrivial polyhedral cone of realizable spectra, which we call the (Perron) spectracone. When the Perron similarity is properly normalized, the cross section of the spectracone, for which the spectral radius is one, is called the (Perron) spectratope and is a convex polytope. With the mentioned normalization, each matrix may be taken to be row stochastic. This focuses attention, for each Perron similarity, upon the extreme points that are finite in number. Their determination solves a dramatic portion of the NIEP. This program is carried out for particular types of matrices, such as those that correspond to Type I Karpelevič arcs (see below) and circulant and block circulant matrices.
2. Notation & Background
For ease of notation, denotes the set of positive integers and . If , then .
The set of -by- matrices with entries from a field is denoted by . If , then is abbreviated to . The set of nonsingular matrices in is denoted by .
If , then or denotes the -entry of and denotes the diagonal matrix whose -entry is . Notice that
Denote by , , , and the identity matrix, the all-ones vector, the canonical basis vector, and the zero vector, respectively. The size of each aforementioned object is implied by its context.
If , then:
• , , or denotes the -entry of ;
• denotes the transpose of ;
• denotes the entrywise conjugate of ;
• denotes the conjugate-transpose of ; and
• denotes the -row of as a column vector (when the context is clear, is abbreviated to ).
If , then denotes the spectrum of and denotes the spectral radius of .
If and , then is called reducible if there is a permutation matrix such that
where and are nonempty square matrices. If is not reducible, then A is called irreducible.
If , then the characteristic polynomial of , denoted by , is defined by . The companion matrix of a monic polynomial is the -by- matrix defined by
where . It is well-known that . Notice that is irreducible if and only if .
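For concreteness, the following sketch (ours) builds a companion matrix in one common layout, with the negated coefficients in the last column; other layouts are permutation-similar or transposed, and the text's convention may differ. Its eigenvalues are the roots of the chosen monic polynomial:

```python
import numpy as np

def companion(a):
    """Companion matrix of p(t) = t^n + a[n-1]*t^(n-1) + ... + a[1]*t + a[0],
    with the negated coefficients placed in the last column (one common layout)."""
    a = np.asarray(a, dtype=float)
    n = a.size
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)   # ones on the subdiagonal
    C[:, -1] = -a                # negated coefficients in the last column
    return C

# p(t) = t^3 - t^2 - t - 1: this companion matrix is nonnegative and irreducible.
C = companion([-1.0, -1.0, -1.0])
print(np.sort_complex(np.linalg.eigvals(C)))
print(np.sort_complex(np.roots([1.0, -1.0, -1.0, -1.0])))  # same multiset
```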
If , then denotes the canonical inner product of and , i.e.,
and
If , then
and .
The Hadamard product of , , denoted by , is the -by- matrix whose -entry is . If and , then denotes the -power of with respect to the Hadamard product, i.e., . If , then . If is totally nonzero, i.e., , then denotes the inverse of with respect to the Hadamard product, i.e., . Notice that if is totally nonzero, then .
The direct sum of , where , denoted by , or , is the -by- matrix
If and , then is the -by- vector such that and denotes the permutation matrix corresponding to , i.e., is the -by- matrix whose -entry is , where denotes the Kronecker delta. As is well-known, . When the context is clear, is abbreviated to . Notice that .
If , then denotes the matrix obtained by deleting the -row of and is the projection map defined by .
2.1. Nonnegative Matrices
If and or , then is called nonnegative or positive, respectively, and we write or , respectively. If (), then (respectively, ) if and (respectively, and ).
If and
then is called (row) stochastic. If , then is stochastic if and only if . Furthermore, if is stochastic, then and . It is known that the NIEP and the stochastic NIEP are equivalent (see, e.g., Johnson [15, p. 114]).
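When the realizing matrix is irreducible, one direction of this equivalence is a diagonal scaling by the Perron vector; the following sketch (the helper name and the example matrix are ours) illustrates it numerically:

```python
import numpy as np

def to_stochastic(A):
    """Given an irreducible nonnegative A with spectral radius rho and Perron
    vector x > 0, return (1/rho) * inv(diag(x)) @ A @ diag(x), which is row
    stochastic and carries the spectrum of A divided by rho."""
    w, V = np.linalg.eig(A)
    k = int(np.argmax(w.real))        # the Perron root maximizes the real part
    rho = w[k].real
    x = np.abs(V[:, k].real)          # Perron vector (entrywise positive)
    D = np.diag(x)
    return np.linalg.inv(D) @ A @ D / rho

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])
S = to_stochastic(A)
print(S.min() >= 0, np.allclose(S.sum(axis=1), 1.0))   # True True
```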
Observation 2.1.
stochasticconvex If are stochastic, then the matrix
is stochastic.
We recall the Perron–Frobenius theorem for irreducible matrices.
Theorem 2.2 ([13, Theorem 8.4.4]).
pftirr Let be irreducible and nonnegative, and suppose that . Then
(i) ;
(ii) is an algebraically simple eigenvalue of ;
(iii) there is a unique positive vector such that and ; and
(iv) there is a unique positive vector such that and .
The vector in part (iii) of \threfpftirr is called the (right) Perron vector of and the vector in part (iv) of \threfpftirr is called the left Perron vector of A [13].
3. Preliminary Results
In this section, we cover more background and establish some ancillary results that will be useful in the sequel.
3.1. Complex Polyhedral Regions
If , then the conical hull of , denoted by , is defined by
i.e., when is nonempty, consists of all conical combinations. Similarly, the convex hull of , denoted by is defined by
The conical hull (convex hull) of a finite list is abbreviated to (respectively, ).
A subset of is called
• a cone if ;
• convex if ;
• star-shaped at if ; and
• a convex cone if .
If , then denotes the Minkowski sum of and , i.e., and . If is a convex cone, then the dimension of is the quantity . If , then .
A point of a convex cone is called an extreme direction or extreme ray if or , whenever , with and linearly independent. A point of a convex set is called an extreme point (of ) if or , whenever , with and , i.e., does not lie in any open line segment contained in .
If , , and , then
is a closed half-space determined by the hyperplane . If , then the half-space is abbreviated to and contains the origin on its boundary.
Any set of the form
is called a polyhedron and a bounded polyhedron is called a polytope. Any set of the form
is called a polyhedral cone.
Since is a real inner product and is a -dimensional real vector space, it follows that is a -dimensional Euclidean space and the following celebrated result is applicable (see, e.g., [22, Corollaries 2.13 and 2.14] and [28, pp. 170–178]; or references in [1, Remark 3.3] or [30, p. 87]).
Theorem 3.1 (Farkas–Minkowski–Weyl).
If is a polytope or a polyhedral cone in a Euclidean space , then there are vectors such that
or
respectively.
The following four propositions will be useful in the sequel; because they are easy to establish, their proofs are omitted.
Proposition 3.2.
exptball If , then is an extreme point of the convex set .
If , then the standard (or unit) -simplex, denoted by , is defined by
Proposition 3.3.
exptconv Let be convex and suppose that is an extreme point of . If there are vectors and a vector such that , then .
Proposition 3.4.
cor:allonesextreme If , then if and only if .
Proposition 3.5.
conisub If is a finite subset of a convex cone , then .
Proposition 3.6.
conecontain Let and be convex cones. If , then if and only if .
3.2. Region Comprising Stochastic Lists
For , denote by the list and for every natural number , let
and
Clearly, is a cone that contains and a characterization of either set constitutes a solution to the NIEP. As such, we catalog various properties of each set.
Recall that if , then the angle between them is defined by
(3.1) |
Because non-real eigenvalues of real matrices occur in complex conjugate pairs, it follows that . Thus, whenever or . Furthermore, if , then there is a nonnegative matrix such that . Consequently,
(3.2) |
and
(3.3) |
since the realizing matrix is nonnegative.
Proposition 3.7.
SLangle If , then and .
Lemma 3.8.
seq If is a convergent sequence of stochastic -by- matrices with limit , then is stochastic.
Proof.
Routine analysis exercise. ∎
Theorem 3.9.
If is a positive integer, then is compact.
Proof.
Following \threfseq and the fact that the eigenvalues of a matrix are topologically continuous [24, pp. 620–621], it follows that is closed. Because the spectral radius of a stochastic matrix is one, we obtain , i.e., is bounded. ∎
Theorem 3.10.
If is a positive integer, then is star-shaped at .
Proof.
If , then there is a stochastic matrix such that . Write and let be an invertible matrix such that is a Jordan canonical form of . If and , then the matrix is stochastic (\threfstochasticconvex) and since is upper-triangular, it follows that , i.e., , . ∎
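The contraction used in this proof can be checked numerically; in the sketch below (the matrix A and the value of alpha are arbitrary choices of ours), the convex combination with the flat matrix J/n is stochastic and its non-Perron eigenvalues are those of A scaled by 1 − alpha:

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.2, 0.3, 0.5],
              [0.6, 0.0, 0.4]])              # a stochastic matrix
n, alpha = 3, 0.4
B = (1 - alpha) * A + alpha * np.ones((n, n)) / n
print(np.allclose(B.sum(axis=1), 1.0))       # B is stochastic
print(np.sort_complex(np.linalg.eigvals(A)))
print(np.sort_complex(np.linalg.eigvals(B))) # 1 together with (1 - alpha) times the other eigenvalues of A
```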
If , then and if is a permutation matrix corresponding to the permutation , then . In light of these two facts, there is no loss in generality in restricting attention to .
The following eigenvalue-perturbation result is due to Brauer [4, Theorem 27] (for more proofs, see [26] and references therein).
Theorem 3.11 (Brauer).
Brauer Let and suppose that
If is an eigenvector associated with and , then .
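Numerically, Brauer's theorem says that a rank-one update along an eigenvector moves only the corresponding eigenvalue; a minimal sketch (random data, any eigenpair) follows:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 4))
w, V = np.linalg.eig(A)
lam, x = w[0], V[:, 0]                  # any eigenpair: A @ x = lam * x
v = rng.random(4)                       # arbitrary vector

B = A + np.outer(x, v)                  # rank-one update along the eigenvector
print(np.sort_complex(np.linalg.eigvals(B)))
print(np.sort_complex(np.append(w[1:], lam + v @ x)))   # same multiset
```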
Theorem 3.12.
If , then is star-shaped at the origin.
Proof.
If , then there is a stochastic matrix such that . If and , then . By \threfstochasticconvex, the matrix
is stochastic. Since , it follows that
(by \threfBrauer)
Thus, . ∎
3.3. The Karpelevič Region
In 1938, Kolmogorov posed the question of characterizing the region in the complex plane, denoted by , comprising the points that occur as an eigenvalue of a stochastic matrix [32, p. 2]. Dmitriev and Dynkin [6] (see [32, Appendix A] or [31] for an English translation) obtained a partial solution, and Karpelevič [21] (see [31] for an English translation) solved the problem by showing that the boundary of consists of curvilinear arcs (hereinafter, Karpelevič arcs or K-arcs), whose points satisfy a polynomial equation that depends on the endpoints of the arc.
If , then denotes the set of Farey fractions. The following result is the celebrated Karpelevič theorem in a form due to Ito [14].
Theorem 3.13 (Karpelevič theorem).
karpito The region is symmetric with respect to the real axis, is included in the unit-disc , and intersects the unit-circle at the points . The boundary of consists of these points and of curvilinear arcs connecting them in circular order.
Let the endpoints of an arc be and (). Each of these arcs is given by the following parametric equation:
(3.4) |
Following [18], equation (3.4) is called the Ito equation and the polynomial
(3.5) |
is called the Ito polynomial. The Ito polynomials are divided into four types as follows (note that since ):
• If , then (3.6) is called a Type 0 (Ito) polynomial and corresponds to the Farey pair .
• If , then (3.7) is called a Type I (Ito) polynomial.
• If and , then (3.8) is called a Type II (Ito) polynomial.
• If and , then (3.9) is called a Type III (Ito) polynomial.
The polynomials given by equations (3.6)–(3.9) are called the reduced Ito polynomials. Johnson and Paparella [18, Theorem 3.2] showed that if , then there is a stochastic matrix such that , where .
Remark 3.14.
If is a polynomial and denotes its derivative, then has a multiple root if and only if the resultant vanishes (here denotes the Sylvester matrix). Since the coefficients of the polynomials and depend on the single parameter , it follows that is a univariate polynomial in of degree at most
Hence, there are at most zeros and at most values in such that does not have distinct zeros.
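The finitely many parameter values at which distinct zeros fail can be located symbolically; the sketch below (which uses SymPy and a sample parametric cubic of our own choosing, not one of the Ito polynomials) computes the resultant and its zeros in [0, 1]:

```python
import sympy as sp

t, a = sp.symbols('t alpha', real=True)
f = t**3 - (1 - a) * t**2 - a              # a sample polynomial depending on one parameter
res = sp.resultant(f, sp.diff(f, t), t)    # determinant of the Sylvester matrix of (f, f')
print(sp.factor(res))
# f has a multiple root exactly at the parameter values where the resultant vanishes
print(sp.solveset(sp.Eq(res, 0), a, domain=sp.Interval(0, 1)))
```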
More is known about the number of zeros of corresponding to the Type I arc .
Proposition 3.15.
[17, Proposition 4.1] For , let
(3.10) |
(i) If is even, then has distinct roots.
(ii) If is odd and , then has distinct roots.
(iii) If is odd and , then has a multiple root if and only if .
3.4. Extremal and Boundary Points
If , then is called extremal if whenever . It is an easy exercise to show that if is extremal, then . Karpelevič asserted that the converse follows from the closure of but this is incorrect in view of the two- and three-dimensional cases. However, an elementary proof was given recently by Munger et al. [27, Section 6].
Definition 3.16.
If , then is called extremal (in ) if , . The set of extremal points in is denoted by .
Theorem 3.17.
If is a positive integer, then .
Proof.
If , then and the result is clear.
Otherwise, assume that and, for contradiction, that , but . By definition, such that . If and
then
Because , it follows that and , i.e., . Since , it follows that is not extremal, a contradiction. ∎
Observation 3.18.
If and is extremal in , where , then .
Proof.
For contradiction, if , then such that . Thus, there is a stochastic matrix with spectrum . Consequently, , a contradiction. ∎
4. Spectral Polyhedra
Although the following results are specified for complex matrices, many of the definitions and results apply, with minimal alteration, to real matrices.
Definition 4.1.
spectratope If , then:
(i) is called the (Perron) spectracone of ;
(ii) is called the (Perron) spectratope of ;
(iii) .
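These sets can be tested numerically. The sketch below assumes the formulation used for real Perron similarities in [17]: a vector x belongs to the spectracone of S when S diag(x) S^{-1} is entrywise nonnegative, and to the spectratope when that matrix is additionally row stochastic; the function names and the DFT example are ours.

```python
import numpy as np

def in_spectracone(S, x, tol=1e-10):
    M = S @ np.diag(x) @ np.linalg.inv(S)
    return bool(np.all(np.abs(M.imag) < tol) and np.all(M.real > -tol))

def in_spectratope(S, x, tol=1e-10):
    M = S @ np.diag(x) @ np.linalg.inv(S)
    nonneg = np.all(np.abs(M.imag) < tol) and np.all(M.real > -tol)
    return bool(nonneg and np.allclose(M.real.sum(axis=1), 1.0, atol=1e-8))

# Example: the (unnormalized) 3-by-3 DFT matrix and the spectrum (1, -1/2, -1/2)
# of the stochastic circulant (1/2)(J - I).
w = np.exp(-2j * np.pi / 3)
F = np.array([[w ** (j * k) for k in range(3)] for j in range(3)])
print(in_spectratope(F, np.array([1.0, -0.5, -0.5])))   # True
```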
Remark 4.2.
Observation 4.3.
prodstochequalsstoch The product of stochastic matrices is stochastic.
Theorem 4.4.
hadamardcones If , then:
(i) is a nonempty convex cone that is closed with respect to the Hadamard product;
(ii) is a nonempty convex set that is closed with respect to the Hadamard product; and
(iii) is a nonempty convex cone that is closed with respect to matrix multiplication.
Proof.
Since and is stochastic, it follows that and all three sets are nonempty.
(i) If and , , then (4.1), i.e., is a convex cone. Furthermore, (4.2), i.e., the convex cone is closed with respect to the Hadamard product.
(ii)
(iii)
Remark 4.5.
If , , or , then , , and are called trivial; otherwise, they are called nontrivial.
Before we prove our next result, we note the following: if , then
and
Since and , it follows that
(4.3) |
Similarly, if , then and .
Theorem 4.6.
CSpolyconePSpolytope If , then is a polyhedral cone and is a polytope.
Proof.
In what follows, we let denote the -entry of . Via the mechanics of matrix multiplication and the complex inner product,
(4.4) |
where denotes the -row of (as a column vector) and denotes the -column of . Consequently,
Via the mechanics of matrix multiplication and the complex inner product, notice that
Thus, if and only if
and
Thus, is a polyhedron and since , it follows that is a polytope. ∎
Theorem 4.7.
If , then .
Proof.
If , then, since , it follows that by part (i) of \threfhadamardcones. Since
the result follows from \threfSLangle. ∎
4.1. Basic Transformations
As detailed in the sequel, the set is unchanged, or changes predictably, by certain basic transformations of .
The following result is a routine exercise.
Lemma 4.8.
nonnegsims If , is a permutation matrix, and , then
(i) if and only if ; and
(ii) if and only if .
The relative gain array was used to give a short proof of the following useful result [17, Lemma 3.3].
Lemma 4.9.
jplemma If is a permutation matrix and , then .
Theorem 4.10.
conetransforms If , is a permutation matrix, and is a totally nonzero vector, then
(i) ;
(ii) ;
(iii) , ;
(iv) ;
(v) , ;
(vi) ;
(vii) . In particular, , where ; and
(viii) . In particular, , where .
Proof.
(i) Follows from part (i) of \threfnonnegsims.
(ii) If is the permutation corresponding to , then (\threfjplemma).
(iii) Follows from part (ii) of \threfnonnegsims.
(iv) Follows from the fact that .
(v) Immediate from part (iv) since .
(vi) Follows from the fact that .
(vii) Follows from the fact that .
(viii)
Definition 4.11.
equivrel If , then is equivalent to , denoted by , if , where is a permutation matrix; is a permutation matrix; is a positive vector; and is a totally nonzero vector.
Theorem 4.12.
thmequivrel If is defined as in \threfequivrel, then is an equivalence relation on .
Proof.
If , then it is clear that .
If , then, by \threfjplemma,
Thus, whenever .
If and , then, by \threfjplemma,
Thus, whenever and . ∎
4.2. Complex Perron Similarities
Definition 4.13.
If , then is called a Perron similarity if there is an irreducible, nonnegative matrix and a diagonal matrix such that .
Theorem 4.14.
perronsimcharacterization If , then is a Perron similarity if and only if there is a unique positive integer such that and , where and are nonzero complex numbers such that , and and are positive vectors. Furthermore, if is a Perron similarity, then is nontrivial.
Proof.
If is a Perron similarity, then there is an irreducible, nonnegative matrix and a diagonal matrix such that . In view of \threfpftirr, there are (possibly empty) diagonal matrices and such that
where and . If , then since . Because the geometric multiplicity of an eigenvalue is less than or equal to its algebraic multiplicity [13, p. 181], it follows that . Hence, there is a nonzero complex number such that , where denotes the unique right Perron vector of .
Because the line of reasoning above applies to , it follows that there is a nonzero complex number such that , where and denotes the unique left Perron vector of .
Since , it follows that . As and are positive, we obtain .
Conversely, if there is a positive integer such that and , where and are nonzero complex numbers such that , and and are positive vectors, then . Thus, is a Perron similarity.
For uniqueness, suppose, for contradiction, that there is a positive integer such that and , where and are nonzero complex numbers such that , and and are positive vectors. Since , it follows that and are right and left eigenvectors corresponding to an eigenvalue . By the principle of biorthogonality [13, Theorem 1.4.7(a)], . However,
a contradiction.
Finally, suppose that is a Perron similarity. For contradiction, if is trivial, then is trivial and only contains nonnegative matrices of the form , a contradiction if is a Perron similarity. ∎
Remark 4.15.
It is known that if is nontrivial, then is not necessarily a Perron similarity (see Dockter et al. [7, Example 11]).
4.3. Normalization
If is a real matrix, then any eigenvector associated with a real eigenvalue of may be taken to be real ([13, p. 48, Problem 1.1.P3]), and if is an eigenvector corresponding to a nonreal eigenvalue of , then is an eigenvector corresponding to ([13, p. 45]). In view of these elementary facts, and taking into account parts (i)–(iv) of \threfconetransforms and \threfthmequivrel, if is a Perron similarity, then
(4.5) |
where , ; , for ; and , for . Hereinafter it is assumed that every Perron similarity is normalized.
The following simple, but useful fact was shown for real matrices [19, Lemma 2.1]. Although the proof extends to complex matrices without alteration, a demonstration is included for completeness.
Proposition 4.16.
simple If , then if and only if .
Proof.
Notice that
Definition 4.17.
Given an -by- matrix , the row cone of [19], denoted by , is the polyhedral cone generated by the rows of , i.e., , and the row polytope of , denoted by , is the polytope generated by the rows of , i.e., .
Johnson and Paparella [17] demonstrated that can coincide with for a class of Hadamard matrices called Walsh matrices (see \threfwalshmatrices below). We extend and generalize these results for complex matrices.
Definition 4.18.
If is a Perron similarity, then is called ideal if .
Lemma 4.19.
realizablerows If , then if and only if .
Proof.
Immediate from \threfconecontain. ∎
Lemma 4.20.
allonesrow If , then if and only if .
Proof.
The necessity of the condition is trivial given that .
For sufficiency, assume that and let . By definition, there is a nonnegative vector such that and . Since
it follows that by \threfsimple . ∎
Theorem 4.21.
thm:idealsims If is a Perron similarity, then is ideal if and only if and .
Proof.
Immediate from Lemmas \threfrealizablerows and \threfallonesrow. ∎
Theorem 4.22.
coneandpoly If is a Perron similarity, then is ideal if and only if .
Proof.
If is ideal, then . If , then and, by hypothesis, . Thus, and it suffices to demonstrate that . Since , it follows that . Furthermore, any convex combination of the rows of produces a vector whose first entry is 1. Thus, and
i.e., . If , then and . Since and has a positive eigenvector, it follows that [13, Corollary 8.1.30]. Notice that , i.e., such that . Thus,
i.e., .
Conversely, suppose that . Since , it follows from Propositions \threfexptball and \threfexptconv that one of the rows of must be . Thus, by \threfallonesrow. By hypothesis, every row of is realizable so that, following Lemma \threfrealizablerows, . ∎
Corollary 4.23.
If is a Perron similarity, then is ideal if and only if , , and such that .
Proof.
Immediate from \threfcor:allonesextreme and \threfthm:idealsims. ∎
Definition 4.24.
If is a Perron similarity, then is called extremal if contains an extremal point other than .
4.4. Kronecker Product & Walsh Matrices
If and , then the Kronecker product of and , denoted by , is the -by- matrix defined blockwise by .
If and , then
Although some of the definitions differ, the proofs of Dockter et al. [7] can be retooled to obtain the following results.
Theorem 4.25 ([7, Theorem 7]).
kronPS If and are Perron similarities, then is a Perron similarity.
Theorem 4.26 ([7, Theorem 13]).
kronideal If and are ideal, then is ideal.
Definition 4.27.
If , then is called a Hadamard matrix if .
Definition 4.28.
walshmatrices If , then the matrix
is called Sylvester’s Hadamard matrix or, for brevity, the Walsh matrix (of order ).
It is well-known that is a Hadamard matrix. Notice that , and
The theory of association schemes was used to prove that these matrices are ideal [17, Proposition 5.1 and Theorem 5.2]. However, a proof of this is readily available via induction coupled with Theorems \threfkronPS and \threfkronideal.
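For concreteness, the Walsh matrices can be generated by the Kronecker recursion and checked against the Hadamard identity; a small sketch (the function name is ours) follows:

```python
import numpy as np

def walsh(k):
    """Sylvester-Hadamard (Walsh) matrix of order 2**k via the Kronecker recursion."""
    W1 = np.array([[1, 1], [1, -1]])
    W = W1
    for _ in range(k - 1):
        W = np.kron(W1, W)
    return W

W3 = walsh(3)
print(np.array_equal(W3 @ W3.T, 8 * np.eye(8, dtype=int)))   # Hadamard identity: W W^T = 2^k I
```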
Theorem 4.29.
If , then is ideal, extremal, and .
Although every normalized Hadamard matrix is a Perron similarity, not every Hadamard matrix is ideal (it can be verified via the MATLAB command hadamard(12)
that only the first row, i.e., the all-ones vector, of the normalized Hadamard matrix of order twelve is realizable). However, it is known that if is a Hadamard matrix, then [17, Remark 6.4]. Furthermore, in view of \threfallonesrow, we have
Thus, every Hadamard matrix is extremal.
If and , then a matrix of the form
is called a permutative matrix. Notice that permutation matrices and circulant matrices are permutative matrices.
5. Circulant Matrices
In what follows, we let denote the -by- discrete Fourier transform matrix, i.e., is the -by- matrix such that
with
If , then
with
Because is symmetric and unitary [5, Theorem 2.5.1], it follows that .
Since , it follows that . Consequently,
(5.1) |
Thus,
Definition 5.1 ([5, p. 66]).
If and is a matrix such that
then is called a circulant or a circulant matrix with reference vector . In such a case, we write .
If , then is circulant if and only if there is a diagonal matrix such that [5, Theorems 3.2.2 and 3.2.3].
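To illustrate the diagonalization, the sketch below builds a circulant from its reference vector and conjugates it by a unitary DFT matrix; the indexing convention F[j,k] = ω^{jk}/√n with ω = e^{−2πi/n} is our assumption and may differ from the text by a conjugate or a normalization.

```python
import numpy as np

def circulant(c):
    """Circulant matrix whose first row is c and whose rows are successive cyclic shifts."""
    c = np.asarray(c, dtype=complex)
    return np.array([np.roll(c, j) for j in range(c.size)])

n = 4
omega = np.exp(-2j * np.pi / n)
F = np.array([[omega ** (j * k) for k in range(n)] for j in range(n)]) / np.sqrt(n)

c = np.array([0.1, 0.2, 0.3, 0.4])
A = circulant(c)
D = F @ A @ F.conj().T                                      # diagonal: F diagonalizes every circulant
print(np.allclose(D, np.diag(np.diag(D))))                  # True
print(np.allclose(np.diag(D), np.sqrt(n) * F.conj() @ c))   # the eigenvalues are a DFT of c
```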
Recall that , with is called (an -by-) block matrix. If with , then the block matrix is called block-circulant or a block-circulant matrix of type [5, §5.6]. The set of block-circulant matrices of type is denoted by .
If is an -by- block matrix and is circulant, then is called a circulant block matrix and the set of such matrices is denoted by [5, §5.7].
Combining the previous sets yields the class of block circulant matrices with circulant blocks, i.e., block matrices of the form , where are circulant. The set of such matrices is denoted by . All matrices in are simultaneously diagonalizable by the unitary matrix and any matrix of the form
with diagonal, belongs to [5, Theorem 5.8.1].
6. Perron Similarities arising from K-arcs
In this section, we examine Perron similarities from realizing matrices of points on Type I K-arcs. Attention is focused on Type I K-arcs because many Type II arcs and Type III arcs are pointwise powers of Type I arcs (see, e.g., Munger et al. [27, Section 5]).
6.1. Type 0
Points on the Type 0 arc are zeros of the reduced Ito polynomial , where and . In [18] it was showed (and it is easy to verify otherwise) that the matrix
realizes this arc, i.e., . Because is a circulant, there is a diagonal matrix such that . As is a scaled Vandermonde matrix, we defer its discussion to the subsequent section.
6.2. Type I
For Type I arcs, the reduced Ito polynomial is , with and . If
then is a nonnegative irreducible companion matrix and . Following Remark 3.14, there are at most complex values, and hence at most values in , such that does not have distinct roots. Notice that and if are the distinct zeros of , then the Vandermonde matrix
(6.1) |
is a Perron similarity satisfying . The Perron similarity is ideal because every row is realizable (the -row forms the spectrum of ) and the first row is . Furthermore, it is extremal because the second row is extremal.
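A hedged numerical sketch of this construction follows, with a generic nonnegative, irreducible companion matrix in place of the Type I realizing matrix and with the column-of-powers convention for the Vandermonde matrix (both our choices): every row of S is the spectrum of a power of C, so S diag(x) S^{-1} is entrywise nonnegative for each row x of S.

```python
import numpy as np

C = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.1, 0.2, 0.3, 0.4]])        # nonnegative, irreducible companion matrix
lam = np.linalg.eigvals(C)                  # distinct in this example
S = np.vander(lam, increasing=True).T       # column j is (1, lam_j, lam_j^2, lam_j^3)

for k in range(4):
    x = S[k, :]                             # k-th row = entrywise k-th power of the spectrum
    M = S @ np.diag(x) @ np.linalg.inv(S)   # equals the k-th power of C, hence nonnegative
    print(k, bool(np.all(M.real > -1e-9) and np.all(np.abs(M.imag) < 1e-9)))
```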
Corollary 6.1.
TypeIsims Let be defined as in (3.10), and let be the Vandermonde matrix defined as in corresponding to the zeros of .
(i) If is even, then is ideal and extremal.
(ii) If is odd and , then is ideal and extremal.
(iii) If is odd, , and , then is ideal and extremal.
7. Circulant and Block-Circulant NIEP
Since , with a slight abuse of notation we let denote the -by- matrix such that . Notice that is a Vandermonde matrix corresponding to the distinct -roots of unity, i.e.,
It is easy to show that
(7.1) |
(7.2) |
and is circulant if and only if there is a diagonal matrix such that .
Corollary 7.1 (extreme ray and vertex representation).
dftconetope If , then is ideal, extremal, and .
Proof.
The claim that is ideal is immediate from \threfTypeIsims or (7.3). Clearly, is extremal since every entry of is extremal in . Finally, because is invertible. ∎
Theorem 7.2 (half-space description).
If is a positive integer, then
where
and , .
Proof.
If , then, by \threfdftconetope, there is a nonnegative vector such that . Since is symmetric, it follows that . Since
(7.4) |
it follows that and . Because , it follows that , i.e., . As and were arbitrary, .
Let and suppose, for contradiction, that . Because is invertible, there is a unique complex vector such that , and since is symmetric, we have . As , it follows that or . Thus, such that or . Notice that because the calculations in (7.4) do not rely on the nonnegativity of . If , then implies , a contradiction since . Otherwise, if , then or ; if , then the equation
implies , a contradiction since ; if , then the equation
implies , a contradiction since . Thus, and the demonstration is complete. ∎
Proposition 7.3.
propF If , then:
(i) ;
(ii) is ideal; and
(iii) .
Proof.
(i) Notice that .
(ii) Follows from the observation that .
(iii) The assertion that is an immediate consequence of part (ii) and \threfconeandpoly. ∎
There is an easier test for feasibility, as follows.
Theorem 7.4.
If , then is realizable by an -by- circulant matrix if and only if .
Proof.
Corollary 7.5.
If , then is realizable by an -by- circulant matrix if and only if
(7.5) |
If satisfies the inequalities in (7.5), then the inequality corresponding to in yields
corresponding to the trace-condition.
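The test amounts to a single inverse transform. The sketch below assumes the candidate spectrum is listed in DFT order (with the Perron root first) and that ω = e^{−2πi/n}; under that convention the list is the spectrum of a nonnegative circulant precisely when the vector c computed below is real and nonnegative, in which case circ(c) realizes it.

```python
import numpy as np

def circulant_realizable(lam, tol=1e-10):
    """Test whether lam (in this fixed order) is the spectrum of a nonnegative circulant."""
    lam = np.asarray(lam, dtype=complex)
    n = lam.size
    omega = np.exp(-2j * np.pi / n)
    F = np.array([[omega ** (j * k) for k in range(n)] for j in range(n)])
    c = F @ lam / n                                   # reference vector of the candidate circulant
    ok = bool(np.all(np.abs(c.imag) < tol) and np.all(c.real > -tol))
    return ok, np.round(c.real, 12)

print(circulant_realizable([1.0, -0.2 - 0.2j, -0.2 + 0.0j, -0.2 + 0.2j]))  # realizable: c = (0.1, 0.2, 0.3, 0.4)
print(circulant_realizable([1.0, 0.9, -0.9, -0.9]))                        # not realizable in this order
```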
Remark 7.6.
rem_symm For , let
In the light of (5.1), notice that
As such, the vector is real if and only if .
Example 7.7.
In view of the inequalities given by (7.5) and \threfrem_symm, if and , then is realizable by a circulant matrix if and only if and
If and , then is realizable by a circulant matrix if and only if , , and
Theorem 7.8.
If , then is ideal, extremal, and .
Proof.
Immediate from \threfkronideal and \threfdftconetope. ∎
Theorem 7.9 (half-space description).
If , then
where
and , .
Proposition 7.10.
If , then:
(i) ;
(ii) is ideal; and
(iii) .
Proof.
Analogous to the proof of \threfpropF using (7.2) and properties of the Kronecker product. ∎
Corollary 7.11.
If , then is realizable by an block-circulant matrix with circulant blocks if and only if .
Theorem 7.12.
If and , then is ideal, extremal, and
Corollary 7.13.
If , then is realizable by a Klein block matrix with circulant blocks if and only if .
Remark 7.14.
Recall that if are matrices with , then
Theorem 7.15.
If are Perron similarities with , then is a Perron similarity.
Proof.
Follows by a straightforward proof by induction on in conjunction with \threfkronPS. ∎
Theorem 7.16.
If are ideal with , then is ideal.
Proof.
Follows by a straightforward proof by induction on in conjunction with \threfkronideal. ∎
Corollary 7.17.
If
then is ideal and extremal.
Example 7.18.
The matrices , , , , and are ideal and extremal Perron similarities of order .
8. Geometrical representation of the spectra of 4-by-4 matrices
The problem of finding a geometric representation of all vectors in such that is the spectrum of a 4-by-4 nonnegative matrix (we denote this region by ) was posed by Egleston et al. [8, Problem 1].
In 2007, Torre-Mayo et al. [33] characterized the coefficients of the characteristic polynomials of four-by-four nonnegative matrices and in 2014, Benvenuti [2] used these results to produce the region given in Figure 1. It is worth noting here that this approach is not applicable to any other dimension.

8.1. Region Generated by Spectratopes
Lemma 8.1.
If
then
Proof.
Notice that
and since , it follows that and the result follows. ∎
Theorem 8.2.
cartprodFn If
then . Furthermore, is extremal.
Proof.
Let and let . Since
it follows that
If , then the matrix above is stochastic. Thus, and , i.e., .
If , then . Since the first column of , it follows that and the matrix
is clearly stochastic, i.e., .
Finally, note that is extremal because is extremal. ∎
If is a Perron similarity, then
Furthermore, if is ideal and , then there are nonnegative scalars , , , and such that and
Consequently,
and
When , the K-arcs in the upper-half region are:
• (Type 0);
• (Type I); and
• (Type II).
However, it is known that [27, Remark 5.3]. As mentioned earlier, the Type 0 arc is subsumed in the Type I arc. Thus, it suffices to consider Perron similarities of realizing matrices corresponding to the arc . Figure 2 depicts the projected spectratope corresponding to .

Figure 4 contains spectra derived from the projected spectratopes of these Perron similarities. Notice that Figure 4 matches the Karpelevič region when .


If then by \threfcartprodFn. Figure 5 adds the projected spectratope of .

Notice that the missing region, which is small relative to the entire region, contains spectra such that , , and .
9. Implications for Further Inquiry
Theorems \threfhadamardcones and \threfCSpolyconePSpolytope demonstrate that () is a polyhedral cone (polytope) that is closed with respect to the Hadamard product. As such, we pose the following.
Question 9.1.
If is a polyhedral cone (polytope) that is closed with respect to the Hadamard product, is there an invertible matrix such that ()?
The following conjecture, which fails when and , would demonstrate that characterizing the extreme points is enough to characterize .
Conjecture 9.2.
If , then , i.e., points on the boundary are extremal.
References
- [1] Adi Ben-Israel, Linear equations and inequalities on finite dimensional, real or complex, vector spaces: A unified theory, J. Math. Anal. Appl. 27 (1969), 367–389. MR 242865
- [2] Luca Benvenuti, A geometrical representation of the spectra of four dimensional nonnegative matrices, Linear Algebra Appl. 445 (2014), 162–180. MR 3151269
- [3] Mike Boyle and David Handelman, The spectra of nonnegative matrices via symbolic dynamics, Ann. of Math. (2) 133 (1991), no. 2, 249–316. MR 1097240 (92d:58057)
- [4] Alfred Brauer, Limits for the characteristic roots of a matrix. IV. Applications to stochastic matrices, Duke Math. J. 19 (1952), 75–91. MR 47003
- [5] Philip J. Davis, Circulant matrices, John Wiley & Sons, New York-Chichester-Brisbane, 1979, A Wiley-Interscience Publication, Pure and Applied Mathematics. MR 543191
- [6] N. Dmitriev and E. Dynkin, On characteristic roots of stochastic matrices, Bull. Acad. Sci. URSS. Sér. Math. [Izvestia Akad. Nauk SSSR] 10 (1946), 167–184. MR 0017269
- [7] Janelle M. Dockter, Pietro Paparella, Robert L. Perry, and Jonathan D. Ta, Kronecker products of Perron similarities, Electron. J. Linear Algebra 38 (2022), 114–122. MR 4387576
- [8] Patricia D. Egleston, Terry D. Lenker, and Sivaram K. Narayan, The nonnegative inverse eigenvalue problem, Linear Algebra Appl. 379 (2004), 475–490, Tenth Conference of the International Linear Algebra Society. MR 2039754 (2005b:15040)
- [9] Shmuel Friedland, On an inverse problem for nonnegative and eventually nonnegative matrices, Israel J. Math. 29 (1978), no. 1, 43–60. MR 492634 (80h:15010)
- [10] I. J. Good, The interaction algorithm and practical Fourier analysis, J. Roy. Statist. Soc. Ser. B 20 (1958), 361–372. MR 102888
- [11] Olga Holtz, M-matrices satisfy Newton’s inequalities, Proc. Amer. Math. Soc. 133 (2005), no. 3, 711–717. MR 2113919
- [12] Roger A. Horn and Charles R. Johnson, Topics in matrix analysis, Cambridge University Press, Cambridge, 1994, Corrected reprint of the 1991 original. MR 1288752 (95c:15001)
- [13] by same author, Matrix analysis, second ed., Cambridge University Press, Cambridge, 2013. MR 2978290
- [14] Hisashi Ito, A new statement about the theorem determining the region of eigenvalues of stochastic matrices, Linear Algebra Appl. 267 (1997), 241–246. MR 1479122 (98i:15016)
- [15] Charles R. Johnson, Row stochastic matrices similar to doubly stochastic matrices, Linear and Multilinear Algebra 10 (1981), no. 2, 113–130. MR 618581 (82g:15016)
- [16] Charles R. Johnson, Carlos Marijuán, Pietro Paparella, and Miriam Pisonero, The NIEP, Operator theory, operator algebras, and matrix theory, Oper. Theory Adv. Appl., vol. 267, Birkhäuser/Springer, Cham, 2018, pp. 199–220. MR 3837638
- [17] Charles R. Johnson and Pietro Paparella, Perron spectratopes and the real nonnegative inverse eigenvalue problem, Linear Algebra Appl. 493 (2016), 281–300. MR 3452738
- [18] by same author, A matricial view of the Karpelevič Theorem, Linear Algebra Appl. 520 (2017), 1–15. MR 3611453
- [19] by same author, Row cones, perron similarities, and nonnegative spectra, Linear Multilinear Algebra 65 (2017), no. 10, 2124–2130. MR 3733402
- [20] Dan Kalman and James E. White, Polynomial equations and circulant matrices, Amer. Math. Monthly 108 (2001), no. 9, 821–840. MR 1864053
- [21] Fridrikh I. Karpelevič, On the characteristic roots of matrices with nonnegative elements, Izvestiya Akad. Nauk SSSR. Ser. Mat. 15 (1951), 361–383. MR 0043063 (13,201a)
- [22] Victor Klee, Some characterizations of convex polyhedra, Acta Math. 102 (1959), 79–107. MR 105651
- [23] Thomas J. Laffey, A constructive version of the Boyle-Handelman theorem on the spectra of nonnegative matrices, Linear Algebra Appl. 436 (2012), no. 6, 1701–1709. MR 2890950
- [24] Chi-Kwong Li and Fuzhen Zhang, Eigenvalue continuity and Geršgorin’s theorem, Electron. J. Linear Algebra 35 (2019), 619–625. MR 4044371
- [25] Raphael Loewy and David London, A note on an inverse problem for nonnegative matrices, Linear and Multilinear Algebra 6 (1978/79), no. 1, 83–90. MR 0480563 (58 #722)
- [26] Judith J. McDonald and Pietro Paparella, A short and elementary proof of Brauer’s theorem, The Teaching of Mathematics XXIV (2021), 85–86.
- [27] Devon N. Munger, Andrew L. Nickerson, and Pietro Paparella, Demystifying the Karpelevič theorem, Linear Algebra Appl. 702 (2024), 46–62. MR 4788244
- [28] R. Tyrrell Rockafellar, Convex analysis, Princeton Landmarks in Mathematics, Princeton University Press, Princeton, NJ, 1997, Reprint of the 1970 original, Princeton Paperbacks. MR 1451876
- [29] Donald J. Rose, Matrix identities of the fast Fourier transform, Linear Algebra Appl. 29 (1980), 423–443. MR 562772
- [30] Alexander Schrijver, Theory of linear and integer programming, Wiley-Interscience Series in Discrete Mathematics, John Wiley & Sons, Ltd., Chichester, 1986, A Wiley-Interscience Publication. MR 874114
- [31] Ben Silver (ed.), American Mathematical Society Translations. Series 2. Vol. 140, American Mathematical Society Translations, Series 2, vol. 140, American Mathematical Society, Providence, RI, 1988, Eleven papers translated from the Russian. MR 982759
- [32] Joanne Swift, The location of characteristic roots of stochastic matrices, Master’s thesis, McGill University, Montréal, 1972.
- [33] J. Torre-Mayo, M. R. Abril-Raymundo, E. Alarcia-Estévez, C. Marijuán, and M. Pisonero, The nonnegative inverse eigenvalue problem from the coefficients of the characteristic polynomial. EBL digraphs, Linear Algebra Appl. 426 (2007), no. 2-3, 729–773. MR 2350690 (2008k:15014)