Equivalences for Linearizations of Matrix Polynomials
Abstract
One useful standard method to compute eigenvalues of matrix polynomials $\mathbf{P}(z)$ of degree at most $\ell$ in $z$ (denoted of grade $\ell$, for short) is to first transform $\mathbf{P}(z)$ to an equivalent linear matrix polynomial $\mathbf{L}(z) = z\mathbf{B} - \mathbf{A}$, called a companion pencil, where $\mathbf{A}$ and $\mathbf{B}$ are usually of larger dimension than $\mathbf{P}(z)$ but $\mathbf{L}(z)$ is now only of grade $1$ in $z$. The eigenvalues and eigenvectors of $\mathbf{L}(z)$ can be computed numerically by, for instance, the QZ algorithm. The eigenvectors of $\mathbf{P}(z)$, including those for infinite eigenvalues, can also be recovered from eigenvectors of $\mathbf{L}(z)$ if $\mathbf{L}(z)$ is what is called a “strong linearization” of $\mathbf{P}(z)$. In this paper we show how to use algorithms for computing the Hermite Normal Form of a companion matrix for a scalar polynomial to direct the discovery of unimodular matrix polynomial cofactors $\mathbf{E}(z)$ and $\mathbf{F}(z)$ which, via the equation $\mathbf{E}(z)\,\mathbf{L}(z)\,\mathbf{F}(z) = \mathrm{diag}(\mathbf{P}(z), \mathbf{I}, \ldots, \mathbf{I})$, explicitly show the equivalence of $\mathbf{P}(z)$ and $\mathbf{L}(z)$. By this method we give new explicit constructions for several linearizations using different polynomial bases. We contrast these new unimodular pairs with those constructed by strict equivalence, some of which are also new to this paper. We discuss the limitations of this experimental, computational discovery method of finding unimodular cofactors.
Keywords— linearization, matrix polynomials, polynomial bases, equivalence, Hermite normal form, Smith normal form
2010 Mathematics Subject Classification— 65F15, 15A22, 65D05
1 Introduction
Given a field $\mathbb{F}$ and a set of polynomials $\phi_k(z)$ for $0 \le k \le \ell$ that define a basis for polynomials of grade $\ell$ (“grade” is short for “degree at most”), a square matrix polynomial $\mathbf{P}(z)$ is an element of $\mathbb{F}^{r \times r}[z]$ which we can write as
\[ \mathbf{P}(z) = \sum_{k=0}^{\ell} \mathbf{A}_k\,\phi_k(z). \]
The matrix coefficients $\mathbf{A}_k$ are elements of $\mathbb{F}^{r \times r}$. For concrete exposition, take $\mathbb{F}$ to be the field of complex numbers $\mathbb{C}$. The case when the “leading coefficient” is singular (meaning the matrix coefficient of $z^{\ell}$, once the polynomial is expressed in the monomial basis) can also be of special interest. Normally, only the case of regular square matrix polynomials, for which there exists some $z_0$ with $\det \mathbf{P}(z_0) \neq 0$, is considered. Although our intermediate results require certain nonsingularity conditions, our results using strict equivalence are ultimately valid even in the case when the determinant of $\mathbf{P}(z)$ is identically zero.
Matrix polynomials are of significant classical and current interest: see the surveys Güttel and Tisseur (2017) and Mackey et al. (2015) for more information about their theory and applications. In the present paper we use Maple to make small example computations in order to discover and prove certain facts about one method for finding eigenvalues of matrix polynomials, namely linearization, which means finding an equivalent grade-$1$ matrix polynomial when given a higher-grade matrix polynomial to start.
2 Definitions and Notation
A companion pencil $\mathbf{L}(z) = z\mathbf{B} - \mathbf{A}$ for a matrix polynomial $\mathbf{P}(z)$ has the property that $\det \mathbf{L}(z) = c\,\det \mathbf{P}(z)$ for some nonzero constant $c$. This means that the eigenvalues of the companion pencil are the eigenvalues of the matrix polynomial.
A linearization of a matrix polynomial has a stronger requirement: a linearization is a pencil $\mathbf{L}(z)$ which is equivalent to $\mathbf{P}(z)$ in the following sense: there exist unimodular matrix polynomial cofactors $\mathbf{E}(z)$ and $\mathbf{F}(z)$ (in this context, a matrix polynomial is unimodular if and only if its determinant is a nonzero constant field element) which satisfy $\mathbf{E}(z)\,\mathbf{L}(z)\,\mathbf{F}(z) = \mathrm{diag}(\mathbf{P}(z), \mathbf{I}_k)$. We write $\mathbf{I}_k$ for the $k \times k$ identity matrix here.
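For example, the determinant of the small matrix polynomial below is the nonzero constant $-1$, so it is unimodular; a minimal Maple check (our own toy example, not one of the cofactors studied later):

    with(LinearAlgebra):
    E := Matrix([[0, 1], [1, -z]]):   # a simple matrix polynomial cofactor
    Determinant(E);                   # returns -1: nonzero and constant, so E is unimodular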
Linearizations preserve information not only about eigenvalues, but also about eigenvectors and invariant factors. Going further, a strong linearization is one for which the reversed pencil $\mathbf{B} - z\mathbf{A}$ is a linearization for the reversal of $\mathbf{P}(z)$, namely $\mathrm{rev}\,\mathbf{P}(z) = z^{\ell}\,\mathbf{P}(1/z)$. Strong linearizations also preserve information about eigenstructure at infinity.
The Hermite normal form of a matrix polynomial $\mathbf{P}(z)$ is an upper triangular matrix polynomial $\mathbf{H}(z)$ unimodularly row equivalent to $\mathbf{P}(z)$, which is to say $\mathbf{H}(z) = \mathbf{E}(z)\,\mathbf{P}(z)$, where $\mathbf{E}(z)$ is a matrix polynomial with $\det\mathbf{E}(z)$ a nonzero constant. See Storjohann (1994) for properties and a description of a Hermite form algorithm. We will chiefly use the computation of the Hermite normal form as a step in the process of discovering unimodular matrix polynomials $\mathbf{E}(z)$ and $\mathbf{F}(z)$ that demonstrate that $\mathbf{L}(z)$ linearizes $\mathbf{P}(z)$.
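Maple provides this computation in its LinearAlgebra package. A minimal sketch (the two-by-two matrix here is our own toy example; we assume the calling sequence HermiteForm(A, z, output = ['H', 'U']), which returns H and U with H = U . A):

    with(LinearAlgebra):
    A := Matrix([[z, 1], [1, z]]):
    H, U := HermiteForm(A, z, output = ['H', 'U']):
    H;                        # upper triangular, unimodularly row equivalent to A
    map(normal, U . A - H);   # the zero matrix, confirming H = U . A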
Another technique, which gives a stronger result (when it can be done), is to prove strict equivalence. We say matrix pencils $\mathbf{L}_1(z)$ and $\mathbf{L}_2(z)$ are strictly equivalent if there exist nonsingular constant matrices $E$ and $F$ with $E\,\mathbf{L}_1(z)\,F = \mathbf{L}_2(z)$. We will cite and show some strict equivalence results in this paper, and prove a new strict equivalence for some Lagrange basis linearizations.
The paper Dopico et al. (2020) introduces a new notion in its Definition 2, that of a local linearization of rational matrix functions: this definition allows $\mathbf{E}(z)$ and $\mathbf{F}(z)$ to be rational unimodular transformations, and defines a local linearization only on a subset $\Sigma$ of $\mathbb{C}$. Specifically, by this definition, two rational matrices $\mathbf{G}_1(z)$ and $\mathbf{G}_2(z)$ are locally equivalent on $\Sigma$ if there exist rational matrices $\mathbf{R}_1(z)$ and $\mathbf{R}_2(z)$, invertible for all $z \in \Sigma$, such that $\mathbf{R}_1(z)\,\mathbf{G}_1(z)\,\mathbf{R}_2(z) = \mathbf{G}_2(z)$ for all $z \in \Sigma$.
This allows exclusion of the poles of the transformations, for instance. It turns out that several authors, including Amiraslani et al. (2008), had in effect been using rational matrices and local linearizations for matrix polynomials. For most purposes, in skilled hands, this notion provides all the analytical tools that one needs. However, as we will see, unimodular polynomial equivalence is superior in some ways. This paper is therefore motivated to see how far such local linearizations, in particular for the Bernstein basis and the Lagrange interpolational bases, can be strengthened to unimodular matrix polynomial equivalences. The point of this paper, as a symbolic computation paper, is to see how experiments with built-in code for the Hermite normal form can help us to discover such cofactors.
We write
\[ B_k^n(z) = \binom{n}{k} z^k (1 - z)^{n-k}, \qquad 0 \le k \le n, \tag{2.1} \]
for the Bernstein polynomials of degree $n$. This set of $n+1$ polynomials forms a basis for polynomials of grade $n$.
We will use the convention of writing $\mathbf{A}_k$ for the coefficients when the matrix polynomial is expressed in the monomial basis or in another basis with a three-term recurrence relation among its elements, $\mathbf{B}_k$ for the coefficients when the matrix polynomial is expressed in the Bernstein basis, and $\mathbf{P}_k$ for the coefficients when the matrix polynomial is expressed in a Lagrange basis. We will use lower-case letters for scalar polynomials ($a_k$, $b_k$, and $\rho_k$, respectively). We may write the (matrix) coefficient of an arbitrary basis element $\phi_k(z)$ as $[\phi_k]\,\mathbf{P}(z)$; for instance, the “leading coefficient” of $\mathbf{P}(z)$, considered as a grade-$\ell$ polynomial, is $[z^{\ell}]\,\mathbf{P}(z)$.
3 Explicit Cofactors
We find cofactors for orthogonal bases, the Bernstein basis, and Lagrange interpolational bases, all modulo certain exceptions. We believe all of these are new. These results are restricted to certain cases: say, for the Bernstein basis, to the case when the coefficient $\mathbf{B}_n$ (which is not the same as the leading coefficient $[z^n]\,\mathbf{P}(z)$) is nonsingular. The method of this paper does not succeed, when that coefficient is singular, in finding cofactors that demonstrate universal linearization. In fact, however, it is known that this Bernstein companion pencil is indeed a true linearization; we will give two references with two different proofs, and give our own proof by strict equivalence in section 4.2.
Similarly, this method produces new cofactors for Lagrange interpolational bases, but in this case restricted to the case when the coefficients $\mathbf{P}_k$ (which are actually the values of the matrix polynomial at the interpolational nodes) are all nonsingular. We also have an algebraic proof of equivalence, which we have cut for space reasons; this gives another proof that the Lagrange interpolational basis pencils used here are, in fact, linearizations. In section 5.3 we prove strict equivalence.
3.1 Monomial Basis
Given a potential linearization $\mathbf{L}(z)$, it is possible to discover the matrices $\mathbf{E}(z)$ and $\mathbf{F}(z)$ by computing the generic Hermite form with respect to the variable $z$, for instance by using a modestly sized example and a symbolic computation system such as Maple. (The Smith form, which is related, is also useful here; but we found that the Maple implementation of the Hermite form (Storjohann, 1994) gave a simpler answer, although we had to compute the cofactor matrix separately ourselves because the Hermite form is upper triangular, not diagonal.) The generic form that we obtain is of course not correct on specialization of the symbolic coefficients, and in particular is incorrect if the leading coefficient is zero; but we may adjust this by hand to find the following construction, which we will not detail in this section but will in the next.
By using the five-by-five scalar case, with symbolic coefficients which we write as $a_k$ instead of $\mathbf{A}_k$ because they are scalar, we find that if
(3.2)
where $h_0(z) = a_5$ and $h_k(z) = z\,h_{k-1}(z) + a_{5-k}$ for $k = 1$, $2$, $3$, $4$ are the partial Horner evaluations of $p(z)$, and if
(3.3)
and if
(3.4)
then we have $\mathbf{E}(z)\,\mathbf{L}(z)\,\mathbf{F}(z) = \mathrm{diag}(p(z), 1, 1, 1, 1)$. Moreover, $\det\mathbf{E}(z)$ and $\det\mathbf{F}(z)$ are nonzero constants ($\pm 1$), depending only on dimension.
Once this form is known, it is easily verified for general grades and quickly generalizes to matrix polynomials, establishing (as is well-known) that this form (known as the second companion form) is a linearization.
Note that the polynomial coefficients appear linearly in these cofactors, and the unimodular matrix polynomials $\mathbf{E}(z)$ and $\mathbf{F}(z)$ are thus universally valid.
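A sketch of such a verification in the scalar grade-5 case (our own small probe; the layout, with the coefficients in the top row, may differ from equation (3.2) in inessential ways):

    with(LinearAlgebra):
    n := 5:
    p := add(a[k]*z^k, k = 0 .. n):        # generic scalar polynomial of grade 5
    B := DiagonalMatrix([a[n], 1 $ (n - 1)]):
    A := Matrix(n, n, (i, j) ->
         `if`(i = 1, -a[n - j], `if`(j = i - 1, 1, 0))):
    expand(Determinant(z*B - A) - p);      # 0, so z*B - A is a companion pencil for p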
3.2 Three-term Recurrence Bases
The monomial basis, the shifted monomial basis, the Taylor basis, the Newton interpolational bases, and many common orthogonal polynomial bases all have three-term recurrence relations that, for $k \ge 0$ (taking $\phi_{-1}(z) := 0$), can be written
\[ z\,\phi_k(z) = \alpha_k\,\phi_{k+1}(z) + \beta_k\,\phi_k(z) + \gamma_k\,\phi_{k-1}(z). \tag{3.5} \]
All such polynomial bases require $\alpha_k \neq 0$. For instance, the Chebyshev polynomial recurrence is usually written $T_{k+1}(z) = 2z\,T_k(z) - T_{k-1}(z)$ but is easily rewritten in the above form by isolating $z\,T_k(z)$, giving $\alpha_k = \gamma_k = 1/2$ and $\beta_k = 0$ for all Chebyshev polynomials with $k \ge 1$. We refer the reader to section 18.9 of the Digital Library of Mathematical Functions (dlmf.nist.gov) for more. See also Gautschi (2016).
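This rewriting is trivial to confirm; a one-line Maple check:

    expand(z*ChebyshevT(3, z) - ChebyshevT(4, z)/2 - ChebyshevT(2, z)/2);   # 0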
For all such bases, we have the linearization $\mathbf{L}(z) = z\mathbf{B} - \mathbf{A}$ (for exposition, we follow Peter Lancaster’s dictum, namely that a small concrete case almost always gives the idea), where
(3.6)
and
(3.7)
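To make this concrete, here is a small probe of our own for the Chebyshev case (grade 4, scalar, with the polynomial coefficients placed in the top row as in the construction above); it checks that the determinant of the pencil is a nonzero constant multiple of $p(z)$:

    with(LinearAlgebra):
    p := add(a[k]*ChebyshevT(k, z), k = 0 .. 4):   # generic grade-4 Chebyshev expansion
    B := DiagonalMatrix([a[4], 1, 1, 1]):
    A := Matrix([[-a[3]/2, (a[4] - a[2])/2, -a[1]/2, -a[0]/2],
                 [   1/2,        0,            1/2,      0   ],
                 [    0,        1/2,            0,      1/2  ],
                 [    0,         0,             1,       0   ]]):
    normal(Determinant(z*B - A)/p);                # 1/8, a nonzero constant

The constant $1/8$ is just the ratio of leading coefficients, since $T_4(z) = 8z^4 - 8z^2 + 1$.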
In Maple we compute the Hermite normal form of this small example with symbolic coefficients. (One could instead start from the built-in CompanionMatrix procedure; that procedure does not use the same convention we use here, and so we apply the standard involutory permutation (SIP) matrix to it and transpose, placing the polynomial coefficients in the top row.) Now compute the generic Hermite normal form:
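Continuing the Chebyshev sketch above (reusing the matrices A and B), and again assuming the convention H = U . (z*B - A):

    with(LinearAlgebra):
    H, U := HermiteForm(z*B - A, z, output = ['H', 'U']):
    H;   # the generic Hermite normal form of the pencil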
That returns matrices $\mathbf{H}(z)$ and $\mathbf{U}(z)$ such that $\mathbf{H}(z) = \mathbf{U}(z)\,(z\mathbf{B} - \mathbf{A})$, or $z\mathbf{B} - \mathbf{A} = \mathbf{U}^{-1}(z)\,\mathbf{H}(z)$. We now look at their structure.
This produces
(3.8)
and a separate investigation shows the diagonal entries are (as is correct in the generic case) all just $1$, until the lower corner entry, which is a monic version of the original polynomial. The matrix $\mathbf{U}(z)$ has a simple enough shape in this case, but its inverse $\mathbf{V}(z) = \mathbf{U}^{-1}(z)$ is more convenient:
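Continuing the sketch, the command

    V := map(normal, U^(-1)):   # the inverse of the transformation matrix
    V;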
produces
(3.9)
Because the first $n-1$ columns of $\mathbf{H}(z)$ form a subset of the columns of the identity matrix, the first $n-1$ columns of $\mathbf{V}(z)$ are the same as the first $n-1$ columns of the pencil $z\mathbf{B} - \mathbf{A}$. Since the final column of $\mathbf{H}(z)$ has (apart from the corner) only one nonzero entry, at the top, call that entry $h(z)$. Call the entries in the final column of $\mathbf{V}(z)$, in descending order, $v_{n-1}(z)$, $v_{n-2}(z)$, $\ldots$, $v_0(z)$; let us suppose that the final entry is a multiple of the scalar polynomial.
Multiplying out $\mathbf{V}(z)\,\mathbf{H}(z)$ and equating its final column to the final column of $z\mathbf{B} - \mathbf{A}$ gives us a set of (triangular, as it happens) equations in the unknowns. These allow us to deduce a general form for $\mathbf{V}(z)$, and to prove it is unimodular. Once we have $\mathbf{H}(z)$ and $\mathbf{V}(z)$ with $z\mathbf{B} - \mathbf{A} = \mathbf{V}(z)\,\mathbf{H}(z)$, construction of unimodular $\mathbf{E}(z)$ and $\mathbf{F}(z)$ so that $\mathbf{E}(z)\,(z\mathbf{B} - \mathbf{A})\,\mathbf{F}(z) = \mathrm{diag}(p(z), 1, \ldots, 1)$ is straightforward.
The results are slightly simpler in this case to present as inverses; here we show the scalar Chebyshev case.
(3.10)
and
(3.11)
It is not immediately obvious that the first of these matrices is unimodular, but it is. For the other it is obvious.
Notice again that the result is linear in the unknown polynomial coefficients, and that we therefore have a universal equivalence (once generalized to the matrix polynomial case, and of arbitrary dimension, which is straightforward).
4 Bernstein Basis
The set of Bernstein polynomials in equation (2.1) is a set of $n+1$ polynomials, each of exact degree $n$, that together forms a basis for polynomials of grade $n$ over fields of characteristic zero. Bernstein polynomials have many applications, for example in Computer Aided Geometric Design (CAGD), and many important properties, including that of optimal condition number over all bases positive on $[0,1]$. They do not satisfy a simple three-term recurrence relation of the form discussed in Section 3.2, although they satisfy an interesting and useful “degree-elevation” recurrence, namely
\[ B_k^n(z) = \frac{n+1-k}{n+1}\,B_k^{n+1}(z) + \frac{k+1}{n+1}\,B_{k+1}^{n+1}(z), \tag{4.12} \]
which specifically demonstrates that a sum of Bernstein polynomials of degree $n+1$ might actually have degree strictly less than $n+1$. See Farouki (2012), Farouki and Goodman (1996), and Farouki and Rajan (1987) for more details of Bernstein bases.
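The degree-elevation identity (4.12), as reconstructed above, is easy to confirm symbolically; for instance, with $n = 5$ and $k = 2$ (the helper B below is our own, not a Maple builtin):

    B := (k, n, z) -> binomial(n, k)*z^k*(1 - z)^(n - k):      # Bernstein polynomial
    expand(B(2, 5, z) - (4/6)*B(2, 6, z) - (3/6)*B(3, 6, z));  # 0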
A Bernstein linearization for $\mathbf{P}(z) = \sum_{k=0}^{n} \mathbf{B}_k\,B_k^n(z)$ is
(4.13)
The pattern on the diagonal is written in unreduced fractions so that the general pattern is more clear.
For a construction of rational unimodular $\mathbf{E}(z)$ and $\mathbf{F}(z)$ that show that this is a local linearization, in the case that no eigenvalue occurs at the end of the Bernstein interval (at $z = 1$), see Amiraslani et al. (2008). The discussion in Mackey and Perović (2016) shows by strict equivalence that it is in fact a global strong linearization independently of any singularities, or indeed independently of regularity of the matrix polynomial. We will give here only an explicit construction of unimodular polynomial matrices $\mathbf{E}(z)$ and $\mathbf{F}(z)$, again unless $1$ is an eigenvalue, which requires that the “leading” block $\mathbf{B}_n = \mathbf{P}(1)$ be nonsingular.
The linearization used here was first analyzed in Jónsson (2001) and Jónsson and Vavasis (2004) (as a companion pencil for scalar polynomials) and has been further studied and generalized in Mackey and Perović (2016). One of the present authors independently invented and implemented a version of this linearization in the Maple command CompanionMatrix (flipped from the above form). For a review of Bernstein linearizations, see the aforementioned Mackey and Perović (2016). For a proof of their numerical stability, see the original thesis Jónsson (2001).
4.1 Hermite form of the linearization
We use the same approach as before, probing with a small example and computing its Hermite normal form explicitly to suggest the correct form.
The resulting suggested form for the inverse cofactor $\mathbf{V}(z)$ is
(4.14)
which is more complicated than before, because the final column is full (as before, the first columns merely copy the companion pencil $\mathbf{L}(z)$). Moreover, this time the entries in the final column are polynomial functions of the coefficients $\mathbf{B}_k$, not just linear, and they involve negative powers of $\mathbf{B}_n$ (this equivalence appears to be new: the treatment in Mackey and Perović (2016) is implicit, while the treatment in Amiraslani et al. (2008) only gives a local linearization). This means that in the case that $\mathbf{B}_n$ is singular, $1$ is an eigenvalue and something else must be done to linearize the matrix polynomial.
We find a recurrence relation for the unknown blocks in both the purported Hermite normal form and in the inverse of the cofactor. Put
(4.15)
which, apart from the final column, is the same as the linearization $\mathbf{L}(z)$, and the Hermite normal form analogue as
(4.16)
Multiplying out $\mathbf{V}(z)\,\mathbf{H}(z) = \mathbf{L}(z)$, we find that for the lower right corner block
(4.17)
A separate investigation shows that this corner block must be a constant matrix. Evaluating that equation at $z = 1$ identifies the constant in terms of $\mathbf{B}_n$, so $\mathbf{B}_n$ must be nonsingular for this form to hold. Once the constant is identified, equation (4.17) can be solved uniquely for the remaining unknown matrix polynomial:
(4.18)
Division is exact there, as can be checked by expanding the numerator of the right-hand side in Taylor series about $z = 1$.
The other block entries in the multiplied-out equation give a triangular set of recurrences for the remaining unknown blocks that are solvable in the same way and under the same condition, namely that $\mathbf{B}_n$ must be nonsingular.
Once we have $\mathbf{H}(z)$ and $\mathbf{V}(z)$ with $\mathbf{L}(z) = \mathbf{V}(z)\,\mathbf{H}(z)$, construction of unimodular $\mathbf{E}(z)$ and $\mathbf{F}(z)$ so that $\mathbf{E}(z)\,\mathbf{L}(z)\,\mathbf{F}(z) = \mathrm{diag}(\mathbf{P}(z), \mathbf{I}, \ldots, \mathbf{I})$ is straightforward.
4.2 Strict Equivalence
For the pencil in equation (4.13) we exhibit the strict equivalence by the constant matrices $E$ and $F$ given below; for ease of understanding, we actually show the inverse of one of them, which shows its relationship to the pencil more clearly:
(4.19)
and
(4.20)
where the auxiliary quantities appearing in these matrices are as displayed. Generalizing these to matrix polynomials is straightforward, as is generalizing to arbitrary grade.
The paper Mackey and Perović (2016) contains, in its equation (3.6), a five-by-five example including tensor products that shows how to find a strict equivalence of the pencil here to the strong linearizations they construct in their paper. Both the results of that paper and the construction given by example above demonstrate that the Bernstein linearization here is a strong linearization, independently of the singularity of $\mathbf{B}_n$ and indeed independently of the regularity of the matrix polynomial.
4.3 A new reversal
We here present a slightly different reversal of a polynomial of grade $n$ expressed in a Bernstein basis, instead of the standard reversal $z^n\,\mathbf{P}(1/z)$. This new reversal has a slight numerical advantage if all the coefficients of $\mathbf{P}(z)$ are of the same sign. We also give a proof that the linearization of this reversal is the corresponding reversal of the linearization, thus giving a new, independent proof that the linearization is a strong one. This provides the details of the entries in the unimodular matrix polynomials $\mathbf{E}(z)$ and $\mathbf{F}(z)$ with $\mathbf{E}(z)\,\mathbf{L}(z)\,\mathbf{F}(z) = \mathrm{diag}(\mathbf{P}(z), \mathbf{I}, \ldots, \mathbf{I})$ constructed above.
A short computation shows that if
(4.21)
then
(4.22)
where the new coefficients $c_k$ are
(4.23)
whereas the coefficients of the standard reversal are, in contrast,
(4.24)
which has introduced sign changes, which may fail to preserve numerical stability if all the $b_k$ are of one sign. A further observation is that the first new coefficient involves only $b_n$, while its standard counterpart involves all of the $b_k$; the second involves only two of the $b_k$ while its standard counterpart involves all but one, and so on; in that sense, this new reversal behaves more analogously to the monomial basis reversal, which simply reverses the list of coefficients.
For interest, we note that if $z\mathbf{B} - \mathbf{A}$ is a linearization for $\mathbf{P}(z)$, so that $\det(z\mathbf{B} - \mathbf{A})$ is a nonzero constant multiple of $\det\mathbf{P}(z)$, then reversing the linearization by this transformation is not a matter of simply interchanging $\mathbf{A}$ and $\mathbf{B}$:
(4.25)
and so the corresponding reversed linearization is as displayed. The sign change is of no importance.
Suppose that the Bernstein linearization of $\mathbf{P}(z)$ is $z\mathbf{B} - \mathbf{A}$ and that the Bernstein linearization of $\mathrm{rev}\,\mathbf{P}(z)$ is $z\mathbf{D} - \mathbf{C}$. That is, the matrices $\mathbf{C}$ and $\mathbf{D}$ have the same form as that of $\mathbf{A}$ and $\mathbf{B}$, but where $\mathbf{A}$ and $\mathbf{B}$ contain the Bernstein coefficients of $\mathbf{P}(z)$, the matrices $\mathbf{C}$ and $\mathbf{D}$ contain those of $\mathrm{rev}\,\mathbf{P}(z)$. To give a new proof that the Bernstein linearization is actually a strong linearization, then, we must find a pair of unimodular matrices $\mathbf{R}$ and $\mathbf{S}$ relating the reversal of the one pencil to the other, valid for all choices of coefficients (which determine the corresponding reversed coefficients by the formula above).
First, it simplifies matters to deal not with the reversed pencil directly but rather with its negative, in view of the sign change just noted. Then, our defining conditions become
(4.26)
(4.27)
which are linear in the unknowns (the entries of $\mathbf{R}$ and of $\mathbf{S}$). By inspection of the first few dimensions, we find that $\mathbf{R}$ and $\mathbf{S}$ have the following form (using the six-by-six case, for variation, to demonstrate). The anti-diagonal of the general $\mathbf{R}$ carries a regular pattern of entries.
(4.28)
and
(4.29)
One can then guess the explicit general formulae
(4.30)
(4.31)
(4.32)
and then prove that these are not only necessary for the equations above, but also sufficient. The matrix $\mathbf{S}$ is diagonal and gives a direct relationship between the triangular block in $\mathbf{R}$ and a corresponding portion of the pencil; the other equation gives a recurrence relation for the entries of $\mathbf{R}$. Comparison of the final columns of the products gives an explicit formula for the final column of $\mathbf{R}$, and an explicit formula for its entries follows by comparison of the coefficients of the symbolic coefficients; this formula can be seen to verify the recurrence relation found earlier, closing the circle and establishing sufficiency. Both matrices $\mathbf{R}$ and $\mathbf{S}$ have constant nonzero determinant: row permutations bring $\mathbf{R}$ to upper triangular form, and the determinant (up to the sign of the permutation) can be read off as the product of the formerly anti-diagonal elements; similarly for the triangular block of $\mathbf{S}$, which contributes only a sign.
5 Lagrange Interpolational Bases
A useful arrowhead companion matrix pencil for polynomials given in a Lagrange basis was given in Corless (2004); Corless and Watt (2004). Later, Piers Lawrence recognized that the mathematically equivalent pencil, re-ordered by a similarity transformation with the standard involutory permutation (SIP) matrix so that the arrow points up and to the left, is numerically superior, in that one of the spurious infinite eigenvalues will be immediately deflated, without rounding errors, by the standard QZ algorithm (Lawrence and Corless, 2012). We shall use that variant in this paper.
For expository purposes, consider interpolating a scalar polynomial $p(z)$ on the four distinct nodes $\tau_k$ for $0 \le k \le 3$. In the first barycentric form (Berrut and Trefethen, 2004) this is
\[ p(z) = w(z) \sum_{k=0}^{3} \frac{\beta_k\,\rho_k}{z - \tau_k}, \tag{5.33} \]
where the node polynomial is $w(z) = \prod_{k=0}^{3} (z - \tau_k)$, the values are $\rho_k = p(\tau_k)$, and the barycentric weights are
\[ \beta_k = \prod_{\substack{j = 0 \\ j \neq k}}^{3} \frac{1}{\tau_k - \tau_j}. \tag{5.34} \]
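These quantities are easy to set up symbolically; a minimal sketch with four hypothetical numeric nodes and unspecified values rho[k]:

    tau := [0, 1, 2, 4]:                                    # hypothetical distinct nodes
    w := unapply(mul(z - tau[k], k = 1 .. 4), z):           # node polynomial
    beta := [seq(1/mul(`if`(j = k, 1, tau[k] - tau[j]), j = 1 .. 4), k = 1 .. 4)]:
    p := w(z)*add(beta[k]*rho[k]/(z - tau[k]), k = 1 .. 4): # first barycentric form
    normal(p);   # a grade-3 polynomial taking the value rho[k] at tau[k]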
Then a Schur complement with respect to the bottom right block shows that if
(5.35)
then $\det \mathbf{L}(z)$ is a nonzero constant multiple of $p(z)$. By exhibiting a rational unimodular equivalence, the paper Amiraslani et al. (2008) showed that the general form of this pencil is at least a local linearization for matrix polynomials $\mathbf{P}(z)$, and also demonstrated that this was true for the reversal as well, showing that it was a strong (local) linearization. They also gave indirect arguments, equivalent to the notion of patched local linearizations introduced in Dopico et al. (2020), showing that the construction gave a genuine strong linearization. Here, we wish to see if we can explicitly construct unimodular matrix polynomials $\mathbf{E}(z)$ and $\mathbf{F}(z)$ which show equivalence, directly demonstrating that this is a linearization.
5.1 Hermite form of the linearization
As we did for the monomial basis, we start with a scalar version. We compute the Hermite normal form $\mathbf{H}(z)$, and the transformation matrix $\mathbf{U}(z)$ so that $\mathbf{H}(z) = \mathbf{U}(z)\,\mathbf{L}(z)$, with Maple, to give clues to find the general form. When we do this for the grade-3 example above, we find that the form of $\mathbf{U}(z)$ is not helpful, but that the forms of $\mathbf{H}(z)$ and of $\mathbf{U}^{-1}(z)$ are:
(5.36)
As is usual, the generic Hermite normal form contains a monic version of the polynomial in the lower right corner; on specialization of the polynomial coefficients this can change, of course. We return to this point later.
This gives us enough information to conjecture the general form in the theorem below, and a proof follows quickly.
Theorem 5.1.
If
(5.37)
and no $\mathbf{P}_k$ is singular, then $\mathbf{L}(z) = \mathbf{V}(z)\,\mathbf{H}(z)$, where
and $\mathbf{H}(z)$ is the identity matrix with its final column replaced with a block matrix that can be partitioned as
Moreover,
(5.38)
(5.39)
for each $k$, and
(5.40)
Note that the definition in equation (5.39) ensures that the division in equation (5.40) is exact, and that the quotient is therefore a matrix polynomial of the appropriate grade. Furthermore, $\mathbf{V}(z)$ is unimodular.
Proof.
Block multiplication of the forms of $\mathbf{V}(z)$ and $\mathbf{H}(z)$ gives the following block equations:
(5.41)
(5.42)
(5.43)
The last block equation identifies one unknown directly. Putting $z = \tau_k$ in each of the middle block equations identifies each constant block. Once that has been done, the middle block equations define each remaining polynomial block. All that remains is to show that these purported solutions satisfy equation (5.41) and that the resulting matrix $\mathbf{V}(z)$ is unimodular.
We will need the following facts about the node polynomial $w(z)$, the barycentric weights $\beta_k$, and the first barycentric form for $p(z)$:
\[ w(z) = \prod_{k=0}^{n} (z - \tau_k), \tag{5.44} \]
\[ \frac{1}{w(z)} = \sum_{k=0}^{n} \frac{\beta_k}{z - \tau_k}, \tag{5.45} \]
and
\[ p(z) = w(z) \sum_{k=0}^{n} \frac{\beta_k\,\rho_k}{z - \tau_k}. \tag{5.46} \]
By multiplying equation (5.45) through by $w(z)$ and comparing the coefficients of $z^n$ on both sides, we find that the sum of the barycentric weights is zero. (This is true even if one of them is zero, inductively, because one factor then cancels in the numerator and denominator on the left-hand side and the result is one degree lower.)
We now substitute equations (5.39) and (5.40) into the left-hand side of equation (5.41) to get
(5.47)
Expanding this, we have
(5.48)
This shows that we have found a successful factoring analogous to the scalar Hermite normal form.
All that remains is to show that $\mathbf{V}(z)$ is unimodular. We use the Schur determinantal formula, and identify the quantities that appear as the barycentric weights on the set of nodes with one node removed, and we see that, up to a sign (depending on dimension), the determinant simplifies to a nonzero constant. This completes the proof. ∎
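The key weight identity used above, that the barycentric weights sum to zero, is easy to confirm symbolically; a minimal sketch for three symbolic nodes:

    beta := [seq(1/mul(`if`(j = k, 1, tau[k] - tau[j]), j = 1 .. 3), k = 1 .. 3)]:
    normal(add(beta[k], k = 1 .. 3));   # 0, for any distinct symbolic nodes tau[1..3]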
As a corollary, we can explicitly construct unimodular matrix polynomials $\mathbf{E}(z)$ and $\mathbf{F}(z)$ from these factors, showing that, if each $\mathbf{P}_k$ is nonsingular, $\mathrm{diag}(\mathbf{P}(z), \mathbf{I}, \ldots, \mathbf{I})$ is equivalent to $\mathbf{L}(z)$.
5.2 Linearization equivalence via local Smith form
In this section we approach the equivalence of $\mathbf{P}(z)$ to its linearization via localizations. The argument does not require that $\mathbf{P}(z)$ or its evaluations at the nodes be nonsingular. However, this method is less explicit about the form of the unimodular cofactors. Also, a stronger result, strict equivalence, is shown in section 5.3. The line of argument here, that local information may be brought together to obtain a global equivalence, may be useful for other equivalence problems lacking strict equivalence.
The localizations here are in the algebraic sense. $\mathbb{F}[z]$ is a principal ideal ring, as is each localization $\mathbb{F}[z]_{(f)}$ at an irreducible $f$, obtained by allowing as denominators the multiplicatively closed set of all polynomials relatively prime to $f$. The ideals of $\mathbb{F}[z]_{(f)}$ are the multiples of a given power of $f$. Working over such a localization, unimodular cofactors may have denominators relatively prime to $f$. Such an algebraically local solution is then a local solution in the analytic sense of section 2, being valid for all $z$ that are not zeros of the denominators.
Lemma 1.
Matrix polynomials over $\mathbb{F}[z]$ are equivalent if and only if they are equivalent over every localization $\mathbb{F}[z]_{(f)}$ at an irreducible $f$.
Proof.
Let $S$ be the Smith normal form of $A$, so that there are unimodular $U$ and $V$ with $A = USV$. Let $f$ be an irreducible polynomial, and write each diagonal entry of $S$ as $f^{e_i} g_i$ with $g_i$ relatively prime to $f$. Then the diagonal matrix $G$ with entries $g_i$ is unimodular over $\mathbb{F}[z]_{(f)}$ and $S = DG$, where $D$, the diagonal matrix of the powers $f^{e_i}$, is in Smith normal form and is the unique Smith form of $A$ locally at $f$. Thus the powers of $f$ in the Smith form over $\mathbb{F}[z]$ form precisely the Smith form locally at $f$. Matrices are equivalent precisely when they have the same Smith form. As we’ve just shown, this happens if and only if they have the same Smith forms locally at each irreducible $f$. ∎
We remark that it would also work to approximate the localizations and do computations over $\mathbb{F}[z]/\langle f^N \rangle$ for sufficiently large $N$; taking $N$ larger than the largest degree of any minor of the matrices involved would do, since those degrees are bounded. For a thorough treatment of the local approach to the Smith form, see Wilkening and Yu (2011).
Assuming that $\mathbf{L}(z)$ is equivalent to $\mathrm{diag}(\mathbf{P}(z), \mathbf{I}, \ldots, \mathbf{I})$ (which we show next), one can readily produce unimodular cofactors using a Smith form algorithm. Let $\mathbf{S}(z)$ be the Smith form of $\mathbf{L}(z)$; then the Smith form of $\mathrm{diag}(\mathbf{P}(z), \mathbf{I}, \ldots, \mathbf{I})$ is $\mathbf{S}(z)$ as well. Let the unimodular cofactors $\mathbf{U}_1, \mathbf{V}_1, \mathbf{U}_2, \mathbf{V}_2$ be such that $\mathbf{S} = \mathbf{U}_1\,\mathbf{L}\,\mathbf{V}_1$ and $\mathbf{S} = \mathbf{U}_2\,\mathrm{diag}(\mathbf{P}, \mathbf{I}, \ldots, \mathbf{I})\,\mathbf{V}_2$. Then, using the cofactors $\mathbf{E} = \mathbf{U}_2^{-1}\,\mathbf{U}_1$ and $\mathbf{F} = \mathbf{V}_1\,\mathbf{V}_2^{-1}$,
one has $\mathbf{E}\,\mathbf{L}\,\mathbf{F} = \mathrm{diag}(\mathbf{P}, \mathbf{I}, \ldots, \mathbf{I})$.
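In Maple this recipe can be carried out with the built-in SmithForm; a sketch, assuming Matrices L (the pencil) and DiagP (the block diagonal matrix of $\mathbf{P}(z)$ and identity blocks) have already been constructed as Matrices of polynomials in z, and assuming the convention S = U . A . V:

    with(LinearAlgebra):
    S1, U1, V1 := SmithForm(L, z, output = ['S', 'U', 'V']):      # S1 = U1 . L . V1
    S2, U2, V2 := SmithForm(DiagP, z, output = ['S', 'U', 'V']):  # S2 = U2 . DiagP . V2
    # When S1 = S2 (the equivalence proved below), unimodular cofactors are:
    E := map(normal, U2^(-1) . U1):
    F := map(normal, V1 . V2^(-1)):
    # and then E . L . F = DiagP.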
Theorem 5.2.
Let $\tau_0, \tau_1, \ldots, \tau_n$ be a list of distinct nodes in $\mathbb{F}$. Let $\mathbf{P}(z)$ be a matrix polynomial over $\mathbb{F}$ of degree no larger than $n$. Then $\mathrm{diag}(\mathbf{P}(z), \mathbf{I}, \ldots, \mathbf{I})$ is equivalent to $\mathbf{L}(z)$, in which $\mathbf{L}(z)$ is the Lagrange interpolation linearization of $\mathbf{P}(z)$ and there are enough identity blocks to fill out the block diagonal equivalent.
Proof.
We’ll show the equivalence for each localization at an irreducible $f$; from this, the equivalence over $\mathbb{F}[z]$ follows by Lemma 1. We show the equivalence locally at any $f$ relatively prime to $\prod_{k \ge 1} (z - \tau_k)$. This captures the general case, where $f$ is relatively prime to all of the $z - \tau_k$, as well as the special case $f = z - \tau_0$. The cases for the other nodes follow by symmetry.
Write the pencil in the block form displayed below, separating the row and column associated with $\tau_0$ from the rest.
Note that each $z - \tau_k$ with $k \ge 1$ is a unit locally at $f$, so the corresponding diagonal block is unimodular there. Thus $\mathbf{L}(z)$ is locally equivalent to a block diagonal matrix whose nontrivial block is the following Schur complement.
Then we may manipulate this block into the desired form $\mathrm{diag}(\mathbf{P}(z), \mathbf{I}, \ldots, \mathbf{I})$.
Expanding the leading block, we see that it is $\mathbf{P}(z)$:
∎
5.3 Strict Equivalence
We prove the following theorem, establishing the strict equivalence of the companion pencil of equation (5.37), for a matrix polynomial determined to grade $n$ by interpolation at $n+1$ points, to the monomial basis linearization for $\mathbf{P}(z)$ considered as a matrix polynomial of higher grade (accounting for the spurious infinite eigenvalues of the Lagrange pencil). This establishes that the Lagrange basis pencil is in fact a linearization, indeed a strong linearization, independently of the singularity or not of any of the values of the matrix polynomial, and independently of the regularity of the matrix pencil.
Theorem 5.3.
Provided that the nodes are distinct, the Lagrange basis linearization $z\mathbf{B} - \mathbf{A}$ in equation (5.37) is strictly equivalent to the monomial basis linearization; in other words, there exist nonsingular constant matrices $E$ and $F$ such that both $E\,\mathbf{A}\,F$ and $E\,\mathbf{B}\,F$ are the corresponding coefficients of the monomial basis pencil. Explicitly, if $\mathbf{V}$ is the Vandermonde matrix with entry $\tau_i^{\,j-1}$ in row $i$ and column $j$, then we have
(5.49)
and, if the node polynomial $w(z)$ has coefficients $w_0, w_1, \ldots, w_{n+1}$, which we place in a row vector,
(5.50)
Proof.
Notice first that the determinant of the one matrix is not zero, and that the determinant of the other is the reciprocal of that. Both these matrices are therefore nonsingular if the nodes are distinct.
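The Vandermonde determinant claim is a classical fact; a quick symbolic check with three hypothetical nodes:

    with(LinearAlgebra):
    V := VandermondeMatrix([tau1, tau2, tau3]):
    factor(Determinant(V));   # the product of node differences: nonzero iff the nodes are distinct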
Next, it is straightforward to verify the first of the two equations by multiplying out, using the fact that the upper left corner block of each factor is the zero block, and that the rest is the identity because $\mathbf{V}\,\mathbf{V}^{-1} = \mathbf{I}$.
The remainder of the proof consists of a detailed examination of the consequences of multiplying out the more complicated second equation. From now on we drop the tensor-product notation as clutter, and the proof is written only for the scalar case, but the tensor products should be kept in mind as the operations are read. Writing
(5.51)
where the blocks are as displayed, we have
(5.52)
We seek to establish that
(5.53)
is the same matrix. This means establishing that
(5.54)
Begin by relating the monomial basis to the Lagrange basis:
(5.55)
where the $\ell_k(z)$ are the Lagrange basis polynomials $\ell_k(z) = \beta_k\,w(z)/(z - \tau_k)$. The matrix of the powers of the nodes is, of course, just the Vandermonde matrix $\mathbf{V}$.
Name the columns of this matrix. We thus have to show that the final column of equation (5.54) is as claimed, and that
(5.56)
for the remaining column indices, together with one further identity for the first column. But this is just a translation into matrix algebra of the polynomial coefficients of
(5.57)
The coefficients of different powers of $z$ in these identities provide the columns in the matrix equation (5.54). This completes the proof. ∎
6 Concluding Remarks
This paper shows how to use scalar matrix tools—specifically the Maple code for computing the Hermite normal form—and computation with low-dimensional examples to solve problems posed for matrix polynomials of arbitrary dimension and arbitrary block size. Current symbolic computation systems do not seem to offer facilities for direct computation with such objects.
The mathematical problems that we examined included the explicit construction of unimodular matrix polynomial cofactors $\mathbf{E}(z)$ and $\mathbf{F}(z)$ which would demonstrate that a given pencil for a matrix polynomial was, indeed, a linearization.
We showed that the approach discovered cofactors that were valid only generically. This is because the primary tool used here, namely the Hermite normal form, is discontinuous at special values of the parameters (and indeed the discovery of those places of discontinuity is one main purpose of the Hermite and Smith normal forms). Nonetheless, in the case of the monomial basis and others we were able to guess a universal form (by replacing a leading coefficient block by the identity block) from the HNF. A separate investigation, based on the change-of-basis matrix but also inspired by experimental computation, found new explicit universal cofactors for all bases considered here. We also introduced a new reversal for polynomials expressed in the Bernstein basis, which may have better numerical properties than the standard reversal does.
For the Bernstein basis, which is symmetric under $z \to 1-z$, we were initially puzzled that the Hermite analogue had a problem if $\mathbf{B}_n$ was singular but not when $\mathbf{B}_0$ was singular. This asymmetry disappears by considering instead a Hermite analogue of the form
(6.58)
This works only if $\mathbf{B}_0^{-1}$ exists.
We plan to apply this method to Hermite interpolational bases, where (as previously for Lagrange interpolational bases) only local linearizations are currently known in the literature.
Acknowledgements
RMC thanks Froilán Dopico for several very useful discussions on linearization, giving several references, and pointing out the difference between local linearization and linearization. The support of Western University’s New International Research Network grant is also gratefully acknowledged.
References
- Amiraslani et al. [2008] Amir Amiraslani, Robert M. Corless, and Peter Lancaster. Linearization of matrix polynomials expressed in polynomial bases. IMA Journal of Numerical Analysis, 29(1):141–157, 2008.
- Berrut and Trefethen [2004] Jean-Paul Berrut and Lloyd N. Trefethen. Barycentric Lagrange interpolation. SIAM Review, 46(3):501–517, 2004. doi: 10.1137/S0036144502417715. URL https://doi.org/10.1137/S0036144502417715.
- Cachadina et al. [2020] Maria Isabel Bueno Cachadina, Javier Perez, Anthony Akshar, Daria Mileeva, and Remy Kassem. Linearizations for interpolatory bases - a comparison: New families of linearizations. The Electronic Journal of Linear Algebra, 36(36):799–833, December 2020. doi: 10.13001/ela.2020.5183. URL https://doi.org/10.13001/ela.2020.5183.
- Corless [2004] Robert M Corless. Generalized companion matrices in the Lagrange basis. In Proceedings EACA, pages 317–322. Citeseer, 2004.
- Corless and Watt [2004] Robert M. Corless and Stephen M. Watt. Bernstein bases are optimal, but, sometimes, Lagrange bases are better. In Proceedings of SYNASC, Timisoara, pages 141–153. MIRTON Press, 2004.
- Dopico et al. [2020] Froilán M. Dopico, Silvia Marcaida, María C. Quintana, and Paul Van Dooren. Local linearizations of rational matrices with application to rational approximations of nonlinear eigenvalue problems. Linear Algebra and its Applications, 604:441–475, November 2020. doi: 10.1016/j.laa.2020.07.004. URL https://doi.org/10.1016/j.laa.2020.07.004.
- Farouki [2012] Rida T. Farouki. The Bernstein polynomial basis: A centennial retrospective. Computer Aided Geometric Design, 29(6):379–419, 2012.
- Farouki and Goodman [1996] Rida T. Farouki and T. Goodman. On the optimal stability of the Bernstein basis. Mathematics of Computation of the American Mathematical Society, 65(216):1553–1566, 1996.
- Farouki and Rajan [1987] Rida T. Farouki and V. T. Rajan. On the numerical condition of polynomials in Bernstein form. Computer Aided Geometric Design, 4(3):191–216, 1987.
- Gautschi [2016] Walter Gautschi. Orthogonal polynomials in MATLAB: Exercises and Solutions, volume 26. SIAM, 2016.
- Güttel and Tisseur [2017] Stefan Güttel and Françoise Tisseur. The nonlinear eigenvalue problem. Acta Numerica, 26:1–94, 2017.
- Jónsson [2001] Guðbjörn F. Jónsson. Eigenvalue methods for accurate solution of polynomial equations. PhD thesis, Center for Applied Mathematics, Cornell University, Ithaca, NY, 2001.
- Jónsson and Vavasis [2004] Guðbjörn F. Jónsson and Stephen Vavasis. Solving polynomials with small leading coefficients. SIAM Journal on Matrix Analysis and Applications, 26(2):400–414, 2004.
- Lawrence and Corless [2012] Piers W. Lawrence and Robert M. Corless. Numerical stability of barycentric Hermite root-finding. In Proceedings of the 2011 International Workshop on Symbolic-Numeric Computation, pages 147–148. ACM, 2012.
- Mackey and Perović [2016] D. Steven Mackey and Vasilije Perović. Linearizations of matrix polynomials in Bernstein bases. Linear Algebra and its Applications, 501:162–197, July 2016. doi: 10.1016/j.laa.2016.03.019. URL https://doi.org/10.1016/j.laa.2016.03.019.
- Mackey et al. [2015] D. Steven Mackey, Niloufer Mackey, and Françoise Tisseur. Polynomial eigenvalue problems: Theory, computation, and structure. In Numerical Algebra, Matrix Theory, Differential-Algebraic Equations and Control Theory, pages 319–348. Springer, 2015.
- Storjohann [1994] Arne Storjohann. Computation of Hermite and Smith normal forms of matrices. Master’s thesis, University of Waterloo, 1994.
- Wilkening and Yu [2011] Jon Wilkening and Jia Yu. A local construction of the smith normal form of a matrix polynomial. Journal of Symbolic Computation, 46(1):1–22, 2011. ISSN 0747-7171. doi: https://doi.org/10.1016/j.jsc.2010.06.025. URL https://www.sciencedirect.com/science/article/pii/S0747717110001136.