Analytical solutions to some generalized and polynomial eigenvalue problems
Abstract
It is well-known that the finite difference discretization of the Laplacian eigenvalue problem leads to a matrix eigenvalue problem (EVP) where the matrix is Toeplitz-plus-Hankel. Analytical solutions for tridiagonal matrices with various boundary conditions are given in a recent work of Strang and MacNamara. We generalize these results and develop analytical solutions to certain generalized matrix eigenvalue problems (GEVPs) which arise from the finite element method (FEM) and isogeometric analysis (IGA). The FEM matrices are corner-overlapped block-diagonal while the IGA matrices are almost Toeplitz-plus-Hankel. In fact, IGA with a correction that results in Toeplitz-plus-Hankel matrices gives a better numerical method. In this paper, our motivation is the development of better numerical methods, but the focus is on finding analytical eigenpairs of the GEVPs. Analytical solutions are also obtained for some polynomial eigenvalue problems (PEVPs). Lastly, we generalize the eigenvector-eigenvalue identity (rediscovered and coined recently for EVPs) to GEVPs and derive some trigonometric identities.
Keywords
eigenvalue, eigenvector, Toeplitz, Hankel, block-diagonal
1 Introduction
It is well-known that the following tridiagonal Toeplitz matrix
has analytical eigenpairs with where
with (we assume that is a nonzero constant throughout the paper); see, for example, [21, p. 514] for the general case of a tridiagonal Toeplitz matrix. Following the constructive technique in [21, p. 515] that finds the analytical solutions to the matrix eigenvalue problem (EVP) , one can derive analytical solutions to the generalized matrix eigenvalue problem (GEVP) where is an invertible tridiagonal Toeplitz matrix. For example, let
then, the GEVP has analytical eigenpairs with where (see [17, Sec. 4] for a scaled case; is scaled by while is scaled by )
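This classical formula is easy to check numerically. The following sketch (Python/NumPy; the values of a, b, and n are illustrative choices, not quantities fixed by the paper) verifies that the sine vectors are eigenvectors of the tridiagonal Toeplitz matrix with diagonal a and off-diagonals b:

```python
import numpy as np

# Classical eigenpairs of the n-by-n tridiagonal Toeplitz matrix
# (cf. [21, p. 514]):
#   lambda_j = a + 2 b cos(j pi/(n+1)),   v_j[i] = sin(i j pi/(n+1)).
# The values below are illustrative, not from the paper.
n, a, b = 8, 2.0, -1.0
T = (a * np.eye(n)
     + b * np.eye(n, k=1)
     + b * np.eye(n, k=-1))

h = np.pi / (n + 1)
for j in range(1, n + 1):
    lam = a + 2.0 * b * np.cos(j * h)
    v = np.sin(np.arange(1, n + 1) * j * h)
    assert np.allclose(T @ v, lam * v)   # T v = lambda v holds for every j
```

The boundary rows work out because the sine vector vanishes at the (fictitious) indices 0 and n+1.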
These matrices, or their versions scaled by constants, arise in various applications. For example, the matrix arises from the discretization of the 1D Laplace operator by the finite difference method (FDM, cf. [24, 26], scaled by ) or the spectral element method (SEM, cf. [23], scaled by ). The work [28] showed that functions of the (tridiagonal) matrices that arise from finite difference discretizations of the Laplacian with different boundary conditions are Toeplitz-plus-Hankel. Similar analytical results for tridiagonal matrices from finite difference discretizations are obtained in [4]. The matrix (and also ) arises from the discretization by the finite element method (FEM, cf. [5, 15, 27], scaled by ).
In general, it is difficult to find analytical solutions to the EVPs. The work by Losonczi [20] gave analytical eigenpairs to the EVPs for some symmetric and tridiagonal matrices. A new proof of these solutions was given by da Fonseca [6]. We refer to the recent work [7] for a survey on the analytical eigenpairs of tridiagonal matrices. For pentadiagonal and heptadiagonal matrices, finding analytical eigenpairs becomes more difficult. The work [2] derived asymptotic results for the eigenvalues of pentadiagonal symmetric Toeplitz matrices. Some spectral properties were found in [14] for some pentadiagonal symmetric matrices. A recent work by Anđelić and da Fonseca [1] presents some determinantal considerations for pentadiagonal matrices. The work [25] gave analytical eigenvalues (as the zeros of some complicated functions) for heptadiagonal symmetric Toeplitz matrices.
To the best of the author’s knowledge, there are no widely-known articles in the literature that address finding analytical solutions to either EVPs with more general matrices (other than the ones mentioned above) or GEVPs. The articles [17, 16, 3] present analytical solutions (somewhat implicitly) to GEVPs for some tridiagonal and/or pentadiagonal matrices that arise from the isogeometric analysis (IGA) of the 1D Laplace operator. For heptadiagonal and more general matrices, no analytical solutions are known, and the numerical approximations in [16, 3, 8] can be considered as asymptotic results for certain structured matrices arising from the numerical discretization of differential operators.
In this paper, we present analytical solutions to GEVPs for mainly two classes of matrices. The first class is the Toeplitz-plus-Hankel matrices while the second class is the corner-overlapped block-diagonal matrices. The main insights for the Toeplitz-plus-Hankel matrices come from numerical spectral approximation techniques, where a particular solution form, such as the well-known Bloch wave form, is sought. For the corner-overlapped block-diagonal matrices, we propose to decompose the original problem into a lower two-block matrix problem where one of the blocks is a quadratic eigenvalue problem (QEVP). We solve the QEVP by rewriting the problem and applying the analytical results for the tridiagonal Toeplitz matrices. A generalization to polynomial eigenvalue problems (PEVPs) is also given. Additionally, Denton, Parke, Tao, and Zhang in a recent work [13] rediscovered and coined the eigenvector-eigenvalue identity for certain EVPs. We generalize this identity to GEVPs. Based on these identities, we derive some interesting trigonometric identities.
The rest of the article is organized as follows. Section 2 presents the main results, i.e., the analytical solutions to the two classes of matrices. Several examples are given and discussed. Section 3 generalizes the eigenvector-eigenvalue identity and derives some trigonometric identities. Potential applications to the design of better numerical discretization methods for partial differential equations and other concluding remarks are presented in Section 4.
2 Main results
2.1 Notation and problem statement
Throughout the paper, we denote matrices and vectors by uppercase and lowercase letters, respectively. In particular, let be a square matrix with entries denoted as and be a vector with entries denoted as in the complex field. We denote by a matrix whose entries depend on a sequence of parameters The superscript is omitted when the context is clear. We denote by a symmetric and diagonal-structured Toeplitz matrix with entries
(2.1)
where specifies the matrix bandwidth. Explicitly, this matrix can be written as
where the empty spots are zeros. The Hankel matrix differs in each of the cases we consider in the paper, so we define it at each occurrence in context.
For matrices , the EVP is to find the eigenpairs such that
(2.2)
while the GEVP is to find the eigenpairs such that
(2.3)
Throughout the paper, we focus on GEVPs; since the analytical eigenpairs of GEVPs with small dimensions are relatively easy to find, we assume that the dimension is large enough for the generalization of the matrices to make sense. For simplicity, we slightly abuse notation such as for a matrix and for an eigenpair. Once analytical eigenpairs of a GEVP are found, the eigenpairs of the corresponding EVP follow naturally by setting as an identity matrix.
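As a small numerical illustration of the GEVP (2.3), the sketch below pairs the stiffness-type matrix tridiag(-1, 2, -1) with the mass-type matrix tridiag(1, 4, 1)/6 mentioned in the introduction; both share the sine eigenvectors, so the generalized eigenvalues have a closed form. The specific matrices and the use of SciPy's generalized symmetric solver are illustrative assumptions, not the paper's setting.

```python
import numpy as np
from scipy.linalg import eigh

# GEVP A x = lambda B x with FEM-type matrices A = tridiag(-1, 2, -1),
# B = tridiag(1, 4, 1)/6; the shared sine eigenvectors give
# lambda_j = 6 (1 - cos t_j)/(2 + cos t_j) with t_j = j pi/(n+1).
n = 10
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) / 6.0

t = np.arange(1, n + 1) * np.pi / (n + 1)
exact = 6.0 * (1.0 - np.cos(t)) / (2.0 + np.cos(t))

computed = eigh(A, B, eigvals_only=True)   # B is symmetric positive definite
assert np.allclose(np.sort(computed), np.sort(exact))
```

The closed form follows because both matrices act on a sine vector as multiplication by their symbol evaluated at t_j.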
2.2 Toeplitz-plus-Hankel matrices
In this section, we present analytical solutions for certain Toeplitz-plus-Hankel matrices. The main insights come from the proof of the analytical solutions for a tridiagonal Toeplitz matrix in [21, p. 514] and from the numerical spectral approximations of the Laplace operator (cf. [28, 17, 16, 3]). The main idea is to seek eigenvectors in a particular form, such as the Bloch wave form , where as in [21, 17], and the sinusoidal form as in [16]. Our main contribution is the generalization of these results to GEVPs, especially with larger matrix bandwidths.
We denote by a Hankel matrix with entries
(2.4)
which can be explicitly written as
where the entries at the bottom-right corner are defined such that the matrix is persymmetric. Herein, a persymmetric matrix is defined as a square matrix which is symmetric with respect to the anti-diagonal. With this in mind, we give the following result.
Theorem 2.1 (Analytical eigenvalues and eigenvectors, set 1).
Proof.
Following [21, p. 515], one seeks eigenvectors of the form . Using the trigonometric identity , one can verify that each row of the GEVP (2.3), , reduces to , which is independent of the row number . Thus, the eigenpairs given in (2.5) satisfy (2.3). The GEVP has at most eigenpairs and the eigenvectors are linearly independent. This completes the proof. ∎
We remark that all the matrices defined above are symmetric and persymmetric. For and , the matrix is of the form
where the Hankel matrix modifies the matrix at the first and last few rows. For , both and are tridiagonal Toeplitz matrices. For , both and are Toeplitz-plus-Hankel matrices. This result generalizes the finite-difference matrix results in Strang and MacNamara in [28].
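As a concrete bandwidth-2 (pentadiagonal) instance of this structure with the second matrix taken as the identity, one can use the well-known tau-algebra (DST-I) correction: the Hankel part subtracts the outermost stencil value at the (1,1) and (n,n) corners, after which the sine vectors are exact eigenvectors. The stencil values below are illustrative choices consistent with the sine-vector ansatz, not the paper's exact matrices.

```python
import numpy as np

# Pentadiagonal symmetric Toeplitz stencil (t2, t1, t0, t1, t2) plus a
# Hankel correction -t2 at the (1,1) and (n,n) corners; eigenvalues are
# the symbol values t0 + 2 t1 cos(theta) + 2 t2 cos(2 theta) at
# theta_j = j pi/(n+1), with sine eigenvectors. Values are illustrative.
n, t0, t1, t2 = 12, 6.0, -4.0, 1.0
T = (t0 * np.eye(n)
     + t1 * (np.eye(n, k=1) + np.eye(n, k=-1))
     + t2 * (np.eye(n, k=2) + np.eye(n, k=-2)))
H = np.zeros((n, n))
H[0, 0] = H[-1, -1] = -t2          # the Hankel boundary modification
A = T + H

th = np.arange(1, n + 1) * np.pi / (n + 1)
for j in range(n):
    lam = t0 + 2.0 * t1 * np.cos(th[j]) + 2.0 * t2 * np.cos(2.0 * th[j])
    v = np.sin(np.arange(1, n + 1) * th[j])
    assert np.allclose(A @ v, lam * v)
```

Without the corner correction the first and last rows fail the eigenvector relation, which is exactly the boundary-row modification discussed above.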
We present the following example. Let Then, we have the following matrices
The eigenpairs of the GEVP (2.3) with these matrices are
(2.6)
If we scale the matrix by and the matrix by , then the new system is a GEVP that arises from the IGA approximation of the Laplacian eigenvalue problem on ; see, for example, [16, eqns. 118–123] (the first two rows have slightly different entries).
Similarly, if we define a Hankel matrix with entries
(2.7)
one can shift the phase by a half and seek solutions of the form . As a consequence, we have the following result.
Theorem 2.2 (Analytical eigenvalues and eigenvectors, set 2).
Assume that is invertible. Then, the GEVP (2.3) has eigenpairs where
(2.8)
Herein, as an example, with and , the matrix is of the form
The proof can be established similarly and we omit it for brevity. An example is the GEVP with the matrices in [16, eqns. 118–123]. These matrices satisfy the assumption of the above theorem and the analytical solutions are given by (2.8) with a scaling of the eigenvalues. The eigenvectors in Theorems 2.1 and 2.2 correspond to the solutions of the Laplacian eigenvalue problem on the unit interval. When , the results in Theorems 2.1 and 2.2 provide insight into removing the outliers in high-order IGA approximations. We refer to [17, 16] for the outlier behavior in the approximate spectrum and to [9, 10] for their elimination. Similarly, we define a Hankel matrix with entries
(2.9)
With the insights from the numerical methods for the Neumann eigenvalue problem on the unit interval, we have the following two sets of analytical eigenpairs.
Theorem 2.3 (Analytical eigenvalues and eigenvectors, set 3).
The matrices can be complex-valued. For example, let Then, with the setting in Theorem 2.3, we have the following matrices
(2.11)
By direct calculations, the eigenpairs of (2.3) are
(2.12)
which verifies Theorem 2.3.
Now, we define a Hankel matrix with entries
(2.13)
We then have the following set of analytical eigenpairs.
Theorem 2.4 (Analytical eigenvalues and eigenvectors, set 4).
We present the following example. Let Then, with the setting in Theorem 2.4, we have the following matrices
(2.15)
By direct calculations, the eigenpairs of (2.3) are
(2.16)
which verifies Theorem 2.4.
Remark 2.5.
Theorems 2.1–2.4 give analytical eigenvalues and eigenvectors for four different sets of GEVPs. The eigenvalues are of the same form while the eigenvectors are of different forms. For a fixed bandwidth , the interior entries of the matrices defined in Theorems 2.1–2.4 are the same, while the modifications to the boundary rows differ slightly. This small discrepancy leads to different eigenvectors. It is well-known that for a complex-valued Hermitian matrix, the eigenvalues are real and the eigenvectors are complex. The matrices in (2.11) are complex-valued; they are symmetric but not Hermitian. Here the eigenvalues are complex while the eigenvectors are real. This property holds for these special matrices, and it would be interesting to generalize it to larger classes of matrices.
2.3 Corner-overlapped block-diagonal matrices
In this section, we consider the following type of matrix, that is, with
(2.17)
We assume that the dimension of the matrix is . It is a block-diagonal matrix in which the corners of adjacent blocks overlap. Therefore, we refer to such matrices as corner-overlapped block-diagonal matrices.
In this section, we derive their analytical eigenpairs. To illustrate the idea, we consider the following matrix
(2.18)
and its EVP (2.2). A direct symbolic calculation leads to the analytical eigenvalues
(2.19)
Alternatively, to find its eigenvalues, we note that the first and the last rows of (2.2) lead to
On one hand, we solve these equations to arrive at
(2.20)
which is then substituted into the third equation in (2.2) to get
(2.21)
On the other hand, we substitute (2.20) and the third equation of (2.2) into the second and the fourth equations in (2.2) to get
(2.22)
We now see that the EVP (2.2) is decomposed into two subproblems, namely, one EVP and one QEVP, as follows
(2.23)
and
(2.24)
where is the identity matrix (its dimension is inferred from context; in this case, it is ),
Both matrices and are symmetric. The EVP (2.23) has an analytical eigenvalue which is one of the eigenvalues in (2.19). The characteristic polynomial of the QEVP (2.24) is
(2.25)
which is a polynomial of order four. By the fundamental theorem of algebra, it has four roots. It is easy to verify that given in (2.19) are the four roots of the equation
Now, we propose the following idea. Assuming that , we rewrite the QEVP (2.24) as
(2.26)
where
(2.27)
which is a symmetric matrix. For a fixed the matrix can be viewed as a constant matrix. We apply Theorem 2.1 with bandwidth and dimension . Thus, the eigenvalues of satisfy
(2.28)
which is rewritten as a quadratic form in terms of
(2.29)
For , and we obtain the eigenvalues as in (2.19), while for , and we obtain the eigenvalues as in (2.19). If , then (2.27) is invalid; in that case, a shift (dividing (2.26) by for some nonzero ) leads to the same set of eigenvalues.
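The reduction idea can be sketched numerically: when every coefficient of a QEVP is a polynomial in a single tridiagonal Toeplitz matrix, each Toeplitz eigenvalue turns the QEVP into a scalar quadratic. The coefficient matrices below are illustrative choices, not the matrices of (2.24).

```python
import numpy as np

# QEVP (lam^2 A2 + lam A1 + A0) x = 0 whose coefficients are polynomials in
# one tridiagonal Toeplitz matrix T: each eigenvalue mu of T yields the
# scalar quadratic lam^2 + mu lam + (mu - 2) = 0. Coefficients illustrative.
n = 6
T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A2, A1, A0 = np.eye(n), T, T - 2.0 * np.eye(n)

mu = 2.0 - 2.0 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))  # spectrum of T
roots = np.concatenate([np.roots([1.0, m, m - 2.0]) for m in mu])

# Compare with a direct companion linearization of the QEVP.
L = np.block([[np.zeros((n, n)), np.eye(n)], [-A0, -A1]])
lam = np.linalg.eigvals(L)   # A2 = I, so the pencil reduces to a standard EVP
assert np.allclose(np.sort_complex(roots), np.sort_complex(lam))
```

Each scalar quadratic contributes two eigenvalues, which is the "one eigenpair becomes two" phenomenon exploited below.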
We now generalize the matrix defined in (2.18) and consider the GEVP. Based on the idea above, we have the following theorem, which gives analytical solutions to a class of GEVPs with corner-overlapped block-diagonal matrices.
Theorem 2.6 (Analytical eigenvalues and eigenvectors, set 5).
Let and . Assume that is invertible. Then, the GEVP (2.3) has eigenvalues
(2.30)
where
(2.31)
The corresponding eigenvectors are with
(2.32)
where is the ceiling function.
Proof.
For simplicity, we denote as a generic eigenpair of the GEVP (2.3). We assume that On one hand, the -th row of (2.3) leads to
(2.33)
which is simplified to
(2.34)
Using (2.34) recursively and , we calculate
(2.35)
On the other hand, the -th row of (2.3) leads to
(2.36)
where Using (2.34) and , this equation simplifies to
(2.37)
where
(2.38)
The GEVP (2.3) with the matrices defined in this theorem is then decomposed into a block form
(2.39)
where , , the first block is formed from (2.37), and the blocks and are formed from (2.34).
(2.40)
The roots of give the eigenvalues. Firstly, using (2.35), leads to the eigenpair, which we denote as , with the eigenvector and
(2.41)
This eigenvalue is simple due to the nonzero block matrix . Now, the first block part of (2.39) that leads to can be written as a GEVP with tridiagonal matrices defined using the parameters in (2.38). Applying Theorem 2.1 with , we obtain the eigenpairs with and
(2.42)
Using (2.38) and rearranging the indices of the eigenpairs accordingly (due to the quadratic dependence on the eigenvalue, each eigenpair gives rise to two eigenpairs and ), we have
where
We give an example which arises from the FEM discretization. The quadratic FEM of the Laplacian eigenvalue problem with uniform elements on the unit interval leads to the stiffness and mass matrices [15, 27]
(2.43)
which are of dimension . With these matrices in mind, we have the following analytical result.
Let and be defined in (2.43). Then, the GEVP has analytical eigenvalues
and analytical eigenvectors where
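As a hedged numerical companion to this example, the sketch below assembles the textbook quadratic Lagrange element matrices (they should agree with the matrices here up to scaling conventions) in the corner-overlapped fashion described above, and checks that the smallest generalized eigenvalue approximates the exact Laplacian eigenvalue pi^2. The element count ne is an illustrative choice.

```python
import numpy as np
from scipy.linalg import eigh

# Quadratic (p = 2) Lagrange FEM for -u'' = lam u on (0, 1) with homogeneous
# Dirichlet data; textbook element stiffness/mass matrices (cf. [15, 27]).
ne = 10                      # number of elements (illustrative)
h = 1.0 / ne
Ke = (1.0 / (3.0 * h)) * np.array([[7.0, -8.0, 1.0],
                                   [-8.0, 16.0, -8.0],
                                   [1.0, -8.0, 7.0]])
Me = (h / 30.0) * np.array([[4.0, 2.0, -1.0],
                            [2.0, 16.0, 2.0],
                            [-1.0, 2.0, 4.0]])
ndof = 2 * ne + 1
K = np.zeros((ndof, ndof))
M = np.zeros((ndof, ndof))
for e in range(ne):          # adjacent element blocks overlap at corner dofs
    idx = slice(2 * e, 2 * e + 3)
    K[idx, idx] += Ke
    M[idx, idx] += Me
K, M = K[1:-1, 1:-1], M[1:-1, 1:-1]    # Dirichlet: drop boundary dofs

lam = eigh(K, M, eigvals_only=True)
assert abs(lam[0] - np.pi ** 2) < 1e-2  # smallest eigenvalue approximates pi^2
```

The assembled K and M exhibit exactly the corner-overlapped block-diagonal pattern of (2.17): each 3-by-3 element block shares only its corner entries with its neighbors.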
A generalization of Theorem 2.6 is possible. The matrix defined in (2.17) is a corner-overlapped block-diagonal matrix where each block (except the first and last) is of dimension . One can generalize the block to be of dimension where . We give the following example where the block is . The cubic FEM of the Laplacian eigenvalue problem with uniform elements on the unit interval leads to the stiffness and mass matrices [15, 27]
(2.44)
which are of dimension . To find the analytical eigenvalues, we follow the same procedure as for the quadratic FEM. By static condensation, the GEVP is first rewritten in block-matrix form
(2.45)
where , . Similarly, leads to the eigenvalue . To find the other eigenvalues, we rewrite as a cubic EVP
(2.46)
which is of dimension . Herein, . Using the derivations from the proof of Theorem 2.6, we obtain the eigenvalues that are the roots of the equation
(2.47)
where . The eigenvectors can be obtained similarly.
2.4 Polynomial eigenvalue problems (PEVPs)
The PEVP is to find the eigenpairs such that
(2.48)
where . When and , the PEVP reduces to the linear eigenvalue problem (LEVP) and the QEVP, respectively. The LEVP is also referred to as the GEVP, as discussed above.
We note that the nonlinear eigenvalue problem defined in the proof of Theorem 2.6 is a QEVP. In particular, the problem can be written as follows
(2.49)
where with , with , and with . Herein, is defined as in (2.1). The analytical eigenpairs are then derived based on Theorem 2.1 with . The analytical eigenvalues of the cubic EVP (2.46) are derived in a similar fashion. We generalize these results and give the following theorem.
Theorem 2.7 (Analytical solutions to PEVP).
Proof.
Following [21, p. 515] and the generalization in Theorem 2.1, we seek eigenvectors with entries of the form . Then each row of the PEVP (2.48),
which boils down to
(2.51)
We note that the above form is independent of the row number and the eigenvector index number . This means that a root of (2.51) and a vector with always satisfy the PEVP. Hence, the root and give an eigenpair. This completes the proof. ∎
We remark that similar analytical results can be obtained if the Hankel matrix is defined as in (2.7), (2.9), or (2.13). The eigenvectors are of the same form as in each of Theorems 2.2–2.4, and there are eigenvectors. There are eigenvalues, and they are the roots of -th order polynomials. Two examples were given in the previous subsection, so we omit further examples for brevity.
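Numerically, PEVPs such as (2.48) are usually handled by companion linearization; the following quadratic (degree-2) sketch with arbitrary small random matrices shows the construction and checks the residual of each computed eigenpair. The matrices and tolerances are illustrative assumptions, not the paper's data.

```python
import numpy as np
from scipy.linalg import eig

# Solve (A0 + lam A1 + lam^2 A2) x = 0 via the first companion linearization
#   [[0, I], [-A0, -A1]] z = lam [[I, 0], [0, A2]] z,   z = [x; lam x].
rng = np.random.default_rng(0)
m = 4
A0, A1, A2 = (rng.standard_normal((m, m)) for _ in range(3))
Z, I = np.zeros((m, m)), np.eye(m)

L = np.block([[Z, I], [-A0, -A1]])
R = np.block([[I, Z], [Z, A2]])
lam, V = eig(L, R)

# Each finite eigenvalue should (nearly) annihilate P(lam) x, where x is
# the leading block of the linearized eigenvector.
for k in range(len(lam)):
    if np.isfinite(lam[k]):
        x = V[:m, k]
        r = (A0 + lam[k] * A1 + lam[k] ** 2 * A2) @ x
        assert np.linalg.norm(r) < 1e-8 * max(1.0, abs(lam[k]) ** 2)
```

For degree k the analogous linearization produces a pencil of dimension k times the original, matching the count of eigenvalues as roots of the degree-related polynomial.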
2.5 Generalizations
We now consider some other generalizations. The simplest case is constant scaling. Let be constants; then the GEVP has eigenvalues where is an eigenvalue of the GEVP . The eigenvectors remain the same. This constant scaling has applications in various numerical spectral approximations. For example, for FEM, the scaling is of the form , while for FDM, the scaling is , where is the mesh size (cf. [27, 26]).
Following the book of Strang [26, Section 5], one may generalize these results to powers and products of matrices. For an EVP of the form (2.2), the powers have eigenvalues , where are eigenvalues of (2.2). The eigenvectors remain the same. For the EVPs and , if commutes with , that is, , then the two EVPs have the same eigenvectors and the eigenvalues of (or ) are . Additionally, in this case, has eigenvalues . For GEVPs of the form (2.3), similar results can be obtained. If the matrices commute, then the GEVP has eigenvalues where is an eigenvalue of . Similar results can be obtained for products and sums.
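The commuting-matrix statements can be checked directly by building two matrices as polynomials in the same symmetric matrix, so they commute and share eigenvectors; the matrix S and the polynomial coefficients below are illustrative choices.

```python
import numpy as np

# Two commuting matrices built as polynomials in the same symmetric S:
# A = p(S) = S^2 + 2S + I and B = q(S) = 3S - I, so AB = BA and
# eigenvalues of A^2 are p(mu)^2, of AB are p(mu) q(mu), for mu in spec(S).
rng = np.random.default_rng(1)
S = rng.standard_normal((5, 5))
S = S + S.T                      # symmetric, hence diagonalizable
A = S @ S + 2.0 * S + np.eye(5)
B = 3.0 * S - np.eye(5)

mu = np.linalg.eigvalsh(S)
assert np.allclose(np.sort(np.linalg.eigvalsh(A @ A)),
                   np.sort((mu ** 2 + 2 * mu + 1) ** 2))
assert np.allclose(np.sort(np.linalg.eigvalsh(A @ B)),
                   np.sort((mu ** 2 + 2 * mu + 1) * (3 * mu - 1)))
```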
We now consider the tensor-product matrices from the FEM-based discretizations (see, for example, [3, 8]) of the Laplacian eigenvalue problem in multi-dimension. We have the following result.
Theorem 2.8 (Eigenvalues and eigenvectors for tensor-product matrices).
Let be the eigenpairs of the GEVP (2.3) and be the eigenpairs of the GEVP . Then, the GEVP
(2.52)
has eigenpairs with
(2.53)
Proof.
Let be a generic eigenpair of and be a generic eigenpair of . Let , we calculate that
(2.54)
which completes the proof. ∎
Remark 2.9.
Once the two sets of eigenpairs are found, either numerically or analytically, the eigenpairs for the GEVP of the form (2.52) can be derived. A FEM (FDM, SEM, or IGA) discretization of on the unit square domain with a uniform tensor-product mesh leads to a GEVP of the form (2.52). For three- or higher-dimensional problems, this result can be generalized in a similar fashion.
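A direct numerical check of the tensor-product result, with small random symmetric positive definite matrices standing in for the 1D discretization matrices (illustrative stand-ins, not the paper's discretization):

```python
import numpy as np
from scipy.linalg import eigh

# For the pencil (K1 x M2 + M1 x K2) u = lam (M1 x M2) u, the eigenvalues
# are all sums lam_i + mu_j of the two 1D generalized eigenvalue sets,
# with eigenvectors given by Kronecker products.
rng = np.random.default_rng(2)

def spd(n):
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)   # symmetric positive definite

K1, M1, K2, M2 = spd(4), spd(4), spd(5), spd(5)
lam1 = eigh(K1, M1, eigvals_only=True)
lam2 = eigh(K2, M2, eigvals_only=True)

A = np.kron(K1, M2) + np.kron(M1, K2)
B = np.kron(M1, M2)
lam2d = eigh(A, B, eigvals_only=True)

expected = np.sort((lam1[:, None] + lam2[None, :]).ravel())
assert np.allclose(np.sort(lam2d), expected)
```

The check mirrors the one-line proof: applying the pencil to a Kronecker product of 1D eigenvectors factors into the two 1D relations, producing the sum of the 1D eigenvalues.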
3 Trigonometric identities
In this section, we derive some trigonometric identities based on the eigenvector-eigenvalue identity that was rediscovered and coined recently in [13]. The eigenvector-eigenvalue identity for the EVP (2.2) is (see [13, Theorem 1])
(3.1)
where is a Hermitian matrix with dimension , are eigenpairs of with normalized eigenvectors , and is an eigenvalue of with being the minor of formed by removing the row and column. We generalize this identity for the GEVPs as follows.
Theorem 3.1 (Eigenvector-eigenvalue identity for the GEVP).
Let and be Hermitian matrices with dimension . Assume that is invertible. Let be the eigenpairs of the GEVP (2.3) with normalized eigenvectors . Then, there holds
(3.2)
where is an eigenvalue of with and being minors of and formed by removing the row and column, respectively, is an eigenvalue of , and is an eigenvalue of .
Proof.
We follow the proof of (3.1) for that uses perturbative analysis in [13, Sect. 2.4] (which first appeared in [22]). Firstly, since is invertible, and hence . Let be the characteristic polynomial of the GEVP (2.3). Then,
(3.3)
The derivative of at is
(3.4)
Similarly, let be the characteristic polynomial of the GEVP . Then,
(3.5)
Now, by a limiting argument, we may assume that has simple eigenvalues. Let be a small parameter and define the perturbed matrix
(3.6)
where is the standard basis. The perturbed GEVP is defined as
(3.7)
Using (3.3) and cofactor expansion, the characteristic polynomial of this perturbed GEVP can be expanded as
(3.8)
With being a normalized eigenvector, one has
(3.9)
Using this normalization, from perturbation theory [19], the eigenvalue of (3.7) can be expanded as
(3.10)
Applying the Taylor expansion and , we rewrite
(3.11)
where matching the linear term in leads to
(3.12)
The identity (3.2) can be rewritten in terms of the characteristic polynomials as (3.12). A similar identity in terms of determinants, eigenvalues, and rescaled eigenvectors was presented in [18, eqn. 18] for real-valued matrices. The identity (3.2) is in terms of only eigenvalues and eigenvectors.
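Both identities are straightforward to verify numerically. The sketch below checks the eigenvector-eigenvalue identity (3.1) of [13] for a random real symmetric matrix (an illustrative stand-in for a general Hermitian matrix): the squared eigenvector entry times the eigenvalue gaps equals the product of gaps to the eigenvalues of the corresponding minor.

```python
import numpy as np

# Check of (3.1): |v_{i,j}|^2 * prod_{k != i} (lam_i - lam_k)
#               = prod_k (lam_i - lam_k(M_j)),
# where M_j is A with its j-th row and column removed.
rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n))
A = (A + A.T) / 2.0               # real symmetric (hence Hermitian)

lam, V = np.linalg.eigh(A)        # columns of V are orthonormal eigenvectors
for i in range(n):
    for j in range(n):
        Mj = np.delete(np.delete(A, j, axis=0), j, axis=1)
        mu = np.linalg.eigvalsh(Mj)
        lhs = abs(V[j, i]) ** 2 * np.prod(lam[i] - np.delete(lam, i))
        rhs = np.prod(lam[i] - mu)
        assert np.isclose(lhs, rhs)
```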
Based on these two identities (3.1) and (3.2), one can easily derive the following trigonometric identities. For EVP, using (3.1) and applying Theorem 2.1 with and being an identity matrix, we have
(3.13)
We note that this identity is independent of the matrix entries and . The left-hand side can be written in terms of a cosine function as so as to obtain an identity in terms of cosine functions only. For example, let ; then the identity boils down to
(3.14)
Similarly, we have for
(3.15)
If we introduce the notation that , then (3.13) can be written as (3.15) with or .
It is obvious that (3.16) reduces to (3.15) when is an identity matrix (or multiplied by a nonzero constant).
Remark 3.2.
Other similar trigonometric identities can be established. Moreover, Theorems 2.1–2.6 give various analytical eigenpairs. An application of the eigenvector-eigenvalue identity (3.1) along with these analytical results sets up a system of equations governing the eigenvalues of the minors of the original matrices. Thus, the eigenvalues of these minors can be found by solving this system of equations.
4 Concluding remarks
We first remark that the ideas of finding analytical solutions to the GEVPs with Toeplitz-plus-Hankel and corner-overlapped block-diagonal matrices can be applied to other problems where a particular solution form is sought. Other applications include matrix representations of differential operators such as the Schrödinger operator in quantum mechanics [11] and the -order operators [12]. Moreover, the idea used to solve the QEVP can be applied to other nonlinear EVPs.
The boundary modifications in the Toeplitz-plus-Hankel matrices give new insights for designing better numerical methods. For example, high-order IGA (cf. [17]) produces outliers in the high-frequency region of the spectrum. A method that modifies the boundary terms to arrive at Toeplitz-plus-Hankel matrices is outlier-free [9]. For FDM, the structure of the Toeplitz-plus-Hankel matrices gives insight into designing better higher-order approximations near the domain boundaries. Lastly, we remark that the corner-overlapped block-diagonal matrices have applications in FEMs and discontinuous Galerkin methods.
Acknowledgments
The author thanks Professor Gilbert Strang for several discussions on the Toeplitz-plus-Hankel matrices and on the potential applications to the design of better numerical methods.
References
- [1] M. Anđelić and C. M. da Fonseca, Some determinantal considerations for pentadiagonal matrices, Linear and Multilinear Algebra, (2020), pp. 1–9.
- [2] M. Barrera and S. Grudsky, Asymptotics of eigenvalues for pentadiagonal symmetric Toeplitz matrices, in Large Truncated Toeplitz Matrices, Toeplitz Operators, and Related Topics, Springer, 2017, pp. 51–77.
- [3] V. Calo, Q. Deng, and V. Puzyrev, Dispersion optimized quadratures for isogeometric analysis, Journal of Computational and Applied Mathematics, 355 (2019), pp. 283–300.
- [4] H.-W. Chang, S.-E. Liu, and R. Burridge, Exact eigensystems for some matrices arising from discretizations, Linear Algebra and its Applications, 430 (2009), pp. 999–1006.
- [5] P. G. Ciarlet, The finite element method for elliptic problems, SIAM, 2002.
- [6] C. M. da Fonseca, On the eigenvalues of some tridiagonal matrices, Journal of Computational and Applied Mathematics, 200 (2007), pp. 283–286.
- [7] C. M. da Fonseca and V. Kowalenko, Eigenpairs of a family of tridiagonal matrices: three decades later, Acta Mathematica Hungarica, (2019), pp. 1–14.
- [8] Q. Deng and V. Calo, Dispersion-minimized mass for isogeometric analysis, Computer Methods in Applied Mechanics and Engineering, 341 (2018), pp. 71–92.
- [9] Q. Deng and V. Calo, A boundary penalization technique to remove outliers from isogeometric analysis on tensor-product meshes, arXiv preprint arXiv:2010.08159, (2020).
- [10] Q. Deng and V. Calo, Outlier removal for isogeometric spectral approximation with the optimally-blended quadratures, arXiv preprint arXiv:2102.07543, (2021).
- [11] Q. Deng, V. Puzyrev, and V. Calo, Isogeometric spectral approximation for elliptic differential operators, Journal of Computational Science, 36 (2019), p. 100879.
- [12] Q. Deng, V. Puzyrev, and V. Calo, Optimal spectral approximation of 2n-order differential operators by mixed isogeometric analysis, Computer Methods in Applied Mechanics and Engineering, 343 (2019), pp. 297–313.
- [13] P. Denton, S. Parke, T. Tao, and X. Zhang, Eigenvectors from eigenvalues: a survey of a basic identity in linear algebra, Bulletin of the American Mathematical Society, (2021).
- [14] D. Fasino, Spectral and structural properties of some pentadiagonal symmetric matrices, Calcolo, 25 (1988), pp. 301–310.
- [15] T. J. Hughes, The finite element method: linear static and dynamic finite element analysis, Courier Corporation, 2012.
- [16] T. J. R. Hughes, J. A. Evans, and A. Reali, Finite element and NURBS approximations of eigenvalue, boundary-value, and initial-value problems, Computer Methods in Applied Mechanics and Engineering, 272 (2014), pp. 290–320.
- [17] T. J. R. Hughes, A. Reali, and G. Sangalli, Duality and unified analysis of discrete approximations in structural dynamics and wave propagation: comparison of p-method finite elements with k-method NURBS, Computer Methods in Applied Mechanics and Engineering, 197 (2008), pp. 4104–4124.
- [18] E. Kausel, Normalized modes at selected points without normalization, Journal of Sound and Vibration, 420 (2018), pp. 261–268.
- [19] M. Konstantinov, D. W. Gu, V. Mehrmann, and P. Petkov, Perturbation theory for matrix equations, Gulf Professional Publishing, 2003.
- [20] L. Losonczi, Eigenvalues and eigenvectors of some tridiagonal matrices, Acta Mathematica Hungarica, 60 (1992), pp. 309–322.
- [21] C. D. Meyer, Matrix analysis and applied linear algebra, vol. 71, SIAM, 2000.
- [22] A. K. Mukherjee and K. K. Datta, Two new graph-theoretical methods for generation of eigenvectors of chemical graphs, in Proceedings of the Indian Academy of Sciences-Chemical Sciences, vol. 101, Springer, 1989, pp. 499–517.
- [23] A. T. Patera, A spectral element method for fluid dynamics: laminar flow in a channel expansion, Journal of Computational Physics, 54 (1984), pp. 468–488.
- [24] G. D. Smith, Numerical solution of partial differential equations: finite difference methods, Oxford University Press, 1985.
- [25] M. S. Solary, Finding eigenvalues for heptadiagonal symmetric Toeplitz matrices, Journal of Mathematical Analysis and Applications, 402 (2013), pp. 719–730.
- [26] G. Strang, Linear algebra and its applications, Cole Thomson Learning Inc, (1988).
- [27] G. Strang and G. J. Fix, An analysis of the finite element method, vol. 212, Prentice-Hall, Englewood Cliffs, NJ, 1973.
- [28] G. Strang and S. MacNamara, Functions of difference matrices are Toeplitz plus Hankel, SIAM Review, 56 (2014), pp. 525–546.