
Analytical solutions to some generalized and polynomial eigenvalue problems

Quanling Deng, Department of Mathematics, University of Wisconsin–Madison, Madison, WI 53706, USA. E-mail addresses: [email protected]; [email protected]. Online open access: Special Matrices, 9(1), 240-256. https://doi.org/10.1515/spma-2020-0135.
Abstract

It is well-known that the finite difference discretization of the Laplacian eigenvalue problem -\Delta u=\lambda u leads to a matrix eigenvalue problem (EVP) Ax=\lambda x where the matrix A is Toeplitz-plus-Hankel. Analytical solutions to tridiagonal matrices with various boundary conditions are given in a recent work of Strang and MacNamara. We generalize these results and develop analytical solutions to certain generalized matrix eigenvalue problems (GEVPs) Ax=\lambda Bx which arise from the finite element method (FEM) and isogeometric analysis (IGA). The FEM matrices are corner-overlapped block-diagonal while the IGA matrices are almost Toeplitz-plus-Hankel. In fact, IGA with a correction that results in Toeplitz-plus-Hankel matrices gives a better numerical method. In this paper, we focus on finding the analytical eigenpairs to the GEVPs; developing better numerical methods is our motivation. Analytical solutions are also obtained for some polynomial eigenvalue problems (PEVPs). Lastly, we generalize the eigenvector-eigenvalue identity (rediscovered and coined recently for EVPs) to GEVPs and derive some trigonometric identities.

Keywords

eigenvalue, eigenvector, Toeplitz, Hankel, block-diagonal

1 Introduction

It is well-known that the following tridiagonal Toeplitz matrix

A=\begin{bmatrix}2&-1&&&\\ -1&2&-1&&\\ &\ddots&\ddots&\ddots&\\ &&-1&2&-1\\ &&&-1&2\end{bmatrix}_{n\times n}

has analytical eigenpairs (\lambda_{j},x_{j}) with x_{j}=(x_{j,1},\cdots,x_{j,n})^{T} where

\lambda_{j}=2-2\cos(j\pi h),\quad x_{j,k}=c\sin(j\pi kh),\quad h=\frac{1}{n+1},\quad j,k=1,2,\cdots,n

with 0\neq c\in\mathbb{R} (we assume that c is a nonzero constant throughout the paper); see, for example, [21, p. 514] for a general case of tridiagonal Toeplitz matrix. Following the constructive technique in [21, p. 515] that finds the analytical solutions to the matrix eigenvalue problem (EVP) Ax=\lambda x, one can derive analytical solutions to the generalized matrix eigenvalue problem (GEVP) Ax=\lambda Bx where B is an invertible tridiagonal Toeplitz matrix. For example, let

B=\begin{bmatrix}\frac{2}{3}&\frac{1}{6}&&&\\ \frac{1}{6}&\frac{2}{3}&\frac{1}{6}&&\\ &\ddots&\ddots&\ddots&\\ &&\frac{1}{6}&\frac{2}{3}&\frac{1}{6}\\ &&&\frac{1}{6}&\frac{2}{3}\end{bmatrix}_{n\times n},

then, the GEVP Ax=\lambda Bx has analytical eigenpairs (\lambda_{j},x_{j}) with x_{j}=(x_{j,1},\cdots,x_{j,n})^{T} where (see [17, Sec. 4] for a scaled case; A is scaled by 1/h while B is scaled by h)

\lambda_{j}=-6+\frac{18}{2+\cos(j\pi h)},\quad x_{j,k}=c\sin(j\pi kh),\quad h=\frac{1}{n+1},\quad j,k=1,2,\cdots,n.

These matrices, or their constant multiples, arise from various applications. For example, the matrix A arises from the discretization of the 1D Laplace operator by the finite difference method (FDM, cf. [24, 26], scaled by 1/h^{2}) or the spectral element method (SEM, cf. [23], scaled by 1/h). The work [28] showed that functions of the (tridiagonal) matrices that arise from finite difference discretization of the Laplacian with different boundary conditions are Toeplitz-plus-Hankel. Similar analytical results for tridiagonal matrices from finite difference discretization are obtained in [4]. The matrix B (and also A) arises from the discretization by the finite element method (FEM, cf. [5, 15, 27], scaled by h).
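Both eigenvalue formulas above can be cross-checked numerically. The following sketch (not part of the original paper; the choice n = 8 is arbitrary) verifies the EVP formula for A and the GEVP formula for the pair (A, B) using NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import eigh

n = 8
h = 1.0 / (n + 1)
# Tridiagonal Toeplitz matrices: A is the FDM/SEM pattern, B the FEM mass pattern
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = (2 / 3) * np.eye(n) + (1 / 6) * (np.eye(n, k=1) + np.eye(n, k=-1))

j = np.arange(1, n + 1)
# EVP: eigenvalues of A alone are 2 - 2*cos(j*pi*h)
evp_match = np.allclose(np.sort(np.linalg.eigvalsh(A)),
                        np.sort(2 - 2 * np.cos(j * np.pi * h)))
# GEVP: eigenvalues of Ax = lambda*Bx are -6 + 18/(2 + cos(j*pi*h))
gevp_match = np.allclose(np.sort(eigh(A, B, eigvals_only=True)),
                         np.sort(-6 + 18 / (2 + np.cos(j * np.pi * h))))
```

Here B is symmetric positive definite (strictly diagonally dominant), so SciPy's symmetric-definite solver `eigh(A, B)` applies directly.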

In general, it is difficult to find analytical solutions to the EVPs. The work by Losonczi [20] gave analytical eigenpairs of the EVPs for some symmetric tridiagonal matrices. A new proof of these solutions was given by da Fonseca [6]. We refer to the recent work [7] for a survey on the analytical eigenpairs of tridiagonal matrices. For pentadiagonal and heptadiagonal matrices, finding analytical eigenpairs becomes more difficult. The work [2] derived asymptotic results for the eigenvalues of pentadiagonal symmetric Toeplitz matrices. Some spectral properties were found in [14] for some pentadiagonal symmetric matrices. A recent work by Anđelić and da Fonseca [1] presents some determinantal considerations for pentadiagonal matrices. The work [25] gave analytical eigenvalues (as the zeros of some complicated functions) for heptadiagonal symmetric Toeplitz matrices.

To the best of the author's knowledge, there are no widely-known articles in the literature which address the problem of finding analytical solutions to either the EVPs with more general matrices (other than the ones mentioned above) or the GEVPs. The articles [17, 16, 3] present analytical solutions (somewhat implicitly) to GEVPs for some tridiagonal and/or pentadiagonal matrices that arise from the isogeometric analysis (IGA) of the 1D Laplace operator. For heptadiagonal and more general matrices, no analytical solutions are known, and the numerical approximations in [16, 3, 8] can be considered as asymptotic results for certain structured matrices arising from the numerical discretizations of the differential operators.

In this paper, we present analytical solutions to GEVPs for mainly two classes of matrices. The first class is the Toeplitz-plus-Hankel matrices, while the second is the corner-overlapped block-diagonal matrices. The main insights for the Toeplitz-plus-Hankel matrices come from numerical spectral approximation techniques, where a particular solution form, such as the well-known Bloch wave form, is sought. For the corner-overlapped block-diagonal matrices, we propose to decompose the original problem into a lower two-block matrix problem where one of the blocks is a quadratic eigenvalue problem (QEVP). We solve the QEVP by rewriting the problem and applying the analytical results for the tridiagonal Toeplitz matrices. A generalization to polynomial eigenvalue problems (PEVPs) is also given. Additionally, Denton, Parke, Tao, and Zhang in a recent work [13] rediscovered and coined the eigenvector-eigenvalue identity for certain EVPs. We generalize this identity to GEVPs and, based on it, derive some interesting trigonometric identities.

The rest of the article is organized as follows. Section 2 presents the main results, i.e., the analytical solutions to the two classes of matrices. Several examples are given and discussed. Section 3 generalizes the eigenvector-eigenvalue identity and derives some trigonometric identities. Potential applications to the design of better numerical discretization methods for partial differential equations and other concluding remarks are presented in Section 4.

2 Main results

2.1 Notation and problem statement

Throughout the paper, we denote matrices and vectors by uppercase and lowercase letters, respectively. In particular, let A\in\mathbb{C}^{n\times n} be a square matrix with entries A_{jk}, j,k=1,\cdots,n, and x\in\mathbb{C}^{n} be a vector with entries x_{j}, j=1,\cdots,n. We denote by A^{(\alpha,m)} a matrix whose entries depend on a sequence of parameters \alpha=(\alpha_{0},\alpha_{1},\cdots,\alpha_{m}). The superscript \cdot^{(\alpha,m)} is omitted when the context is clear. We denote by T^{(\alpha,m)}=(T^{(\alpha,m)}_{j,k})\in\mathbb{C}^{n\times n} a symmetric, banded Toeplitz matrix with entries

T^{(\alpha,m)}_{j,j+k}=\begin{cases}\alpha_{|k|},\quad&|k|\leq m,\quad k\in\mathbb{Z},\quad j=1,\cdots,n,\\ 0,\quad&\text{otherwise},\end{cases} (2.1)

where mn1m\leq n-1 specifies the matrix bandwidth. Explicitly, this matrix can be written as

T^{(\alpha,m)}=\begin{bmatrix}\alpha_{0}&\alpha_{1}&\cdots&\alpha_{m}&&&&\\ \alpha_{1}&\alpha_{0}&\alpha_{1}&\cdots&\alpha_{m}&&&\\ \vdots&\alpha_{1}&\alpha_{0}&\alpha_{1}&\cdots&\alpha_{m}&&\\ \alpha_{m}&\cdots&\alpha_{1}&\alpha_{0}&\alpha_{1}&\cdots&\alpha_{m}&\\ &\ddots&\ddots&\ddots&\ddots&\ddots&\ddots&\ddots\\ &&\alpha_{m}&\cdots&\alpha_{1}&\alpha_{0}&\alpha_{1}&\cdots\\ &&&\alpha_{m}&\cdots&\alpha_{1}&\alpha_{0}&\alpha_{1}\\ &&&&\alpha_{m}&\cdots&\alpha_{1}&\alpha_{0}\end{bmatrix}_{n\times n},

where the empty spots are zeros. The Hankel matrix takes a different form in each of the cases we consider in the paper, and we define it at each occurrence.

For matrices A,B\in\mathbb{C}^{n\times n}, the EVP is to find the eigenpairs (\lambda\in\mathbb{C},x\in\mathbb{C}^{n}) such that

Ax=\lambda x (2.2)

while the GEVP is to find the eigenpairs (\lambda\in\mathbb{C},x\in\mathbb{C}^{n}) such that

Ax=\lambda Bx. (2.3)

Throughout the paper, we focus on GEVPs. Since the analytical eigenpairs for GEVPs of small dimensions are relatively easy to find, we assume that the dimension n is large enough for the generalization of the matrices to make sense. For simplicity, we slightly abuse notation such as A for a matrix and (\lambda,x) for an eigenpair. Once analytical eigenpairs for a GEVP are found, eigenpairs for the EVP Ax=\lambda x follow naturally by setting B to be the identity matrix.

2.2 Toeplitz-plus-Hankel matrices

In this section, we present analytical solutions to certain Toeplitz-plus-Hankel matrices. The main insights come from the proof of the analytical solutions for a tridiagonal Toeplitz matrix in [21, p. 514] and from the numerical spectral approximations of the Laplace operator (cf. [28, 17, 16, 3]). The main idea is to seek eigenvectors in a particular form, such as the Bloch wave form e^{a+\iota b}, where \iota^{2}=-1, as in [21, 17], or the sinusoidal form as in [16]. Our main contribution is the generalization of these results to GEVPs, especially with larger matrix bandwidths.

We denote by H^{(\alpha,m)}=(H^{(\alpha,m)}_{j,k})\in\mathbb{C}^{n\times n} a Hankel matrix with entries

H^{(\alpha,m)}_{j,k}=H^{(\alpha,m)}_{n-j+1,n-k+1}=\begin{cases}\alpha_{j+k},\quad&k=1,\cdots,m-j,\quad j=1,\cdots,m-1,\quad 2\leq m\leq n,\\ 0,\quad&\text{otherwise},\end{cases} (2.4)

which can be explicitly written as

H^{(\alpha,m)}=\begin{bmatrix}\alpha_{2}&\alpha_{3}&\alpha_{4}&\cdots&\alpha_{m}&0&\cdots\\ \alpha_{3}&\alpha_{4}&\cdots&\alpha_{m}&0&\ddots&\\ \alpha_{4}&\cdots&\alpha_{m}&0&\ddots&&\\ \vdots&\alpha_{m}&0&\ddots&&&\\ \alpha_{m}&0&\ddots&&&&\\ 0&\ddots&&&&&\\ \vdots&&&&&&\end{bmatrix}_{n\times n},

where the entries at the bottom-right corner are defined such that the matrix is persymmetric. Herein, a persymmetric matrix is defined as a square matrix which is symmetric with respect to the anti-diagonal. With this in mind, we give the following result.

Theorem 2.1 (Analytical eigenvalues and eigenvectors, set 1).

Let n\geq 2, 1\leq m\leq n-1 and A=T^{(\alpha,m)}-H^{(\alpha,m)}, B=T^{(\beta,m)}-H^{(\beta,m)} with T^{(\xi,m)} and H^{(\xi,m)}, \xi=\alpha,\beta, defined in (2.1) and (2.4), respectively. Assume that B is invertible. Then, the GEVP (2.3) has eigenpairs (\lambda_{j},x_{j}) where

\lambda_{j}=\frac{\alpha_{0}+2\sum_{l=1}^{m}\alpha_{l}\cos(lj\pi h)}{\beta_{0}+2\sum_{l=1}^{m}\beta_{l}\cos(lj\pi h)},\quad x_{j,k}=c\sin(j\pi kh),\quad h=\frac{1}{n+1},\quad j,k=1,2,\cdots,n. (2.5)
Proof.

Following [21, p. 515], one seeks eigenvectors of the form c\sin(j\pi kh). Using the trigonometric identity \sin(\phi\pm\psi)=\sin(\phi)\cos(\psi)\pm\cos(\phi)\sin(\psi), one can verify that each row of the GEVP (2.3), \sum_{k=1}^{n}A_{ik}x_{j,k}=\lambda\sum_{k=1}^{n}B_{ik}x_{j,k}, i=1,\cdots,n, reduces to \alpha_{0}+2\sum_{l=1}^{m}\alpha_{l}\cos(lj\pi h)=\lambda\big(\beta_{0}+2\sum_{l=1}^{m}\beta_{l}\cos(lj\pi h)\big), which is independent of the row number i. Thus, the eigenpairs (\lambda_{j},x_{j}) given in (2.5) satisfy (2.3). The GEVP has at most n eigenpairs and the n eigenvectors are linearly independent. This completes the proof. ∎
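Theorem 2.1 can also be checked numerically. The sketch below (not from the paper; the parameter values and the helper name `toeplitz_minus_hankel` are illustrative choices, with B taken diagonally dominant so that it is invertible) verifies the residual Ax_j = λ_j Bx_j for every eigenpair with m = 3:

```python
import numpy as np

def toeplitz_minus_hankel(n, xi):
    """Build T^(xi,m) - H^(xi,m), with H given by the corner rule (2.4)."""
    m = len(xi) - 1
    M = sum(xi[abs(k)] * np.eye(n, k=k) for k in range(-m, m + 1))
    for j in range(1, m):                # Hankel corrections at both corners
        for k in range(1, m - j + 1):
            M[j - 1, k - 1] -= xi[j + k]
            M[n - j, n - k] -= xi[j + k]
    return M

n, m = 10, 3
h = 1.0 / (n + 1)
alpha = [2.0, -1.0, 0.3, 0.1]            # arbitrary test values
beta = [6.0, 1.0, 0.5, 0.2]              # diagonally dominant -> invertible
A = toeplitz_minus_hankel(n, alpha)
B = toeplitz_minus_hankel(n, beta)

ok = True
for j in range(1, n + 1):
    th = j * np.pi * h
    lam = (alpha[0] + 2 * sum(alpha[l] * np.cos(l * th) for l in range(1, m + 1))) \
        / (beta[0] + 2 * sum(beta[l] * np.cos(l * th) for l in range(1, m + 1)))
    x = np.sin(j * np.pi * h * np.arange(1, n + 1))   # eigenvector (2.5)
    ok = ok and np.allclose(A @ x, lam * (B @ x))
```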

We remark that all the matrices defined above are symmetric and persymmetric. For m=3 and n\geq 4, the matrix A=T^{(\xi,m)}-H^{(\xi,m)} is of the form

A=\begin{bmatrix}\alpha_{0}-\alpha_{2}&\alpha_{1}-\alpha_{3}&\alpha_{2}&\alpha_{3}&&&&\\ \alpha_{1}-\alpha_{3}&\alpha_{0}&\alpha_{1}&\alpha_{2}&\alpha_{3}&&&\\ \alpha_{2}&\alpha_{1}&\alpha_{0}&\alpha_{1}&\alpha_{2}&\alpha_{3}&&\\ \alpha_{3}&\alpha_{2}&\alpha_{1}&\alpha_{0}&\alpha_{1}&\alpha_{2}&\alpha_{3}&\\ &\ddots&\ddots&\ddots&\ddots&\ddots&\ddots&\ddots\\ &&\alpha_{3}&\alpha_{2}&\alpha_{1}&\alpha_{0}&\alpha_{1}&\alpha_{2}\\ &&&\alpha_{3}&\alpha_{2}&\alpha_{1}&\alpha_{0}&\alpha_{1}-\alpha_{3}\\ &&&&\alpha_{3}&\alpha_{2}&\alpha_{1}-\alpha_{3}&\alpha_{0}-\alpha_{2}\end{bmatrix}_{n\times n},

where the Hankel matrix H^{(\xi,m)} modifies the first and last few rows. For m=1, both A and B are tridiagonal Toeplitz matrices. For m\geq 2, both A and B are Toeplitz-plus-Hankel matrices. This result generalizes the finite-difference matrix results of Strang and MacNamara [28].

We present the following example. Let m=2, \alpha_{0}=1, \alpha_{1}=-1/3, \alpha_{2}=-1/6, \beta_{0}=11/20, \beta_{1}=13/60, \beta_{2}=1/120. Then, we have the following matrices

A=\begin{bmatrix}\frac{7}{6}&-\frac{1}{3}&-\frac{1}{6}&&&&\\ -\frac{1}{3}&1&-\frac{1}{3}&-\frac{1}{6}&&&\\ -\frac{1}{6}&-\frac{1}{3}&1&-\frac{1}{3}&-\frac{1}{6}&&\\ &\ddots&\ddots&\ddots&\ddots&\ddots&\\ &&-\frac{1}{6}&-\frac{1}{3}&1&-\frac{1}{3}&-\frac{1}{6}\\ &&&-\frac{1}{6}&-\frac{1}{3}&1&-\frac{1}{3}\\ &&&&-\frac{1}{6}&-\frac{1}{3}&\frac{7}{6}\end{bmatrix},\qquad B=\begin{bmatrix}\frac{13}{24}&\frac{13}{60}&\frac{1}{120}&&&&\\ \frac{13}{60}&\frac{11}{20}&\frac{13}{60}&\frac{1}{120}&&&\\ \frac{1}{120}&\frac{13}{60}&\frac{11}{20}&\frac{13}{60}&\frac{1}{120}&&\\ &\ddots&\ddots&\ddots&\ddots&\ddots&\\ &&\frac{1}{120}&\frac{13}{60}&\frac{11}{20}&\frac{13}{60}&\frac{1}{120}\\ &&&\frac{1}{120}&\frac{13}{60}&\frac{11}{20}&\frac{13}{60}\\ &&&&\frac{1}{120}&\frac{13}{60}&\frac{13}{24}\end{bmatrix}.

The eigenpairs of the GEVP (2.3) with these matrices are

\lambda_{j}=-20+\frac{240(3+2\cos(j\pi h))}{33+26\cos(j\pi h)+\cos(2j\pi h)},\quad x_{j,k}=c\sin(j\pi kh),\quad j,k=1,2,\cdots,n. (2.6)

If we scale the matrix A by 1/h and the matrix B by h, then the new system is a GEVP that arises from the IGA approximation of the Laplacian eigenvalue problem -\Delta u=\lambda u on [0,1]; see, for example, [16, eqns. 118–123] (the first two rows have slightly different entries A_{11}=4/3, A_{12}=A_{21}=-1/6, B_{11}=1/3, B_{12}=B_{21}=5/24).
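The closed form (2.6) can be verified against a direct generalized eigensolve. This is a sketch assuming SciPy is available; the helper name `penta` and the choice n = 8 are illustrative:

```python
import numpy as np
from scipy.linalg import eigh

def penta(n, x0, x1, x2):
    # Pentadiagonal Toeplitz with the set-1 corner correction of Theorem 2.1:
    # subtract x2 from the (1,1) and (n,n) entries
    M = x0 * np.eye(n) + x1 * (np.eye(n, k=1) + np.eye(n, k=-1)) \
        + x2 * (np.eye(n, k=2) + np.eye(n, k=-2))
    M[0, 0] -= x2
    M[-1, -1] -= x2
    return M

n = 8
h = 1.0 / (n + 1)
A = penta(n, 1.0, -1 / 3, -1 / 6)
B = penta(n, 11 / 20, 13 / 60, 1 / 120)   # symmetric positive definite

j = np.arange(1, n + 1)
analytic = -20 + 240 * (3 + 2 * np.cos(j * np.pi * h)) \
    / (33 + 26 * np.cos(j * np.pi * h) + np.cos(2 * j * np.pi * h))
computed = np.sort(eigh(A, B, eigvals_only=True))
match = np.allclose(computed, np.sort(analytic))
```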

Similarly, if we define a Hankel matrix H^{(\alpha,m)}=(H^{(\alpha,m)}_{j,k})\in\mathbb{C}^{n\times n} with entries

H^{(\alpha,m)}_{j,k}=H^{(\alpha,m)}_{n-j+1,n-k+1}=\begin{cases}\alpha_{j+k-1},\quad&k=1,\cdots,m-j+1,\quad j=1,\cdots,m,\quad 1\leq m\leq n,\\ 0,\quad&\text{otherwise},\end{cases} (2.7)

one can shift the phase by a half and seek solutions of the form \sin\big(j\pi(k-\frac{1}{2})h\big). As a consequence, we have the following result.

Theorem 2.2 (Analytical eigenvalues and eigenvectors, set 2).

Let n\geq 2, 1\leq m\leq n-1 and A=T^{(\alpha,m)}-H^{(\alpha,m)}, B=T^{(\beta,m)}-H^{(\beta,m)} with T^{(\xi,m)} and H^{(\xi,m)}, \xi=\alpha,\beta, defined in (2.1) and (2.7), respectively.

Assume that B is invertible. Then, the GEVP (2.3) has eigenpairs (\lambda_{j},x_{j}) where

\lambda_{j}=\frac{\alpha_{0}+2\sum_{l=1}^{m}\alpha_{l}\cos(lj\pi h)}{\beta_{0}+2\sum_{l=1}^{m}\beta_{l}\cos(lj\pi h)},\quad x_{j,k}=c\sin\big(j\pi(k-\tfrac{1}{2})h\big),\quad h=\frac{1}{n},\quad j,k=1,2,\cdots,n. (2.8)

Herein, as an example, with m=2 and n\geq 3, the matrix T^{(\alpha,m)}-H^{(\alpha,m)} is of the form

\begin{bmatrix}\alpha_{0}-\alpha_{1}&\alpha_{1}-\alpha_{2}&\alpha_{2}&&&\\ \alpha_{1}-\alpha_{2}&\alpha_{0}&\alpha_{1}&\alpha_{2}&&\\ \alpha_{2}&\alpha_{1}&\alpha_{0}&\alpha_{1}&\alpha_{2}&\\ &\ddots&\ddots&\ddots&\ddots&\ddots\\ &&\alpha_{2}&\alpha_{1}&\alpha_{0}&\alpha_{1}-\alpha_{2}\\ &&&\alpha_{2}&\alpha_{1}-\alpha_{2}&\alpha_{0}-\alpha_{1}\end{bmatrix}_{n\times n}.
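Theorem 2.2 can likewise be spot-checked numerically for this m = 2 pattern. A sketch (not from the paper; the parameter values and the helper name `build_set2` are arbitrary illustrations, with B diagonally dominant so that it is invertible):

```python
import numpy as np

def build_set2(n, xi):
    # T^(xi,2) - H^(xi,2) with the Hankel rule (2.7):
    # corrections at (1,1), (1,2), (2,1) and their mirrored bottom-right entries
    x0, x1, x2 = xi
    M = x0 * np.eye(n) + x1 * (np.eye(n, k=1) + np.eye(n, k=-1)) \
        + x2 * (np.eye(n, k=2) + np.eye(n, k=-2))
    M[0, 0] -= x1; M[0, 1] -= x2; M[1, 0] -= x2
    M[-1, -1] -= x1; M[-1, -2] -= x2; M[-2, -1] -= x2
    return M

n = 9
h = 1.0 / n                      # note h = 1/n for this set
alpha = [2.0, -0.8, 0.1]         # arbitrary test values
beta = [5.0, 1.2, 0.3]           # diagonally dominant -> invertible
A, B = build_set2(n, alpha), build_set2(n, beta)

ok = True
for j in range(1, n + 1):
    th = j * np.pi * h
    lam = (alpha[0] + 2 * alpha[1] * np.cos(th) + 2 * alpha[2] * np.cos(2 * th)) \
        / (beta[0] + 2 * beta[1] * np.cos(th) + 2 * beta[2] * np.cos(2 * th))
    x = np.sin(j * np.pi * (np.arange(1, n + 1) - 0.5) * h)  # phase-shifted (2.8)
    ok = ok and np.allclose(A @ x, lam * (B @ x))
```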

The proof can be established similarly and we omit it for brevity. An example is the GEVP with the matrices in [16, eqns. 118–123]. These matrices satisfy the assumption of the above theorem, and the analytical solutions are given by (2.8) with the eigenvalues scaled by n^{2}. The eigenvectors of Theorems 2.1 and 2.2 correspond to the solutions of the Laplacian eigenvalue problem on the unit interval. When m\geq 3, the results in Theorems 2.1 and 2.2 provide insight into removing the outliers in high-order IGA approximations. We refer to [17, 16] for the outlier behavior in the approximate spectrum and to [9, 10] for their elimination. Similarly, we define a Hankel matrix H^{(\xi,m)}=(H^{(\xi,m)}_{j,k})\in\mathbb{C}^{n\times n} with entries

H^{(\xi,m)}_{j+1,k+1}=H^{(\xi,m)}_{n-j,n-k}=\begin{cases}-\xi_{0}/2,\quad&j,k=0,\\ \xi_{j+k},\quad&k=1,\cdots,m-j,\quad j=1,\cdots,m-1,\quad m\geq 2,\\ 0,\quad&\text{otherwise}.\end{cases} (2.9)

With the insights from the numerical methods for the Neumann eigenvalue problem on the unit interval, we have the following two sets of analytical eigenpairs.

Theorem 2.3 (Analytical eigenvalues and eigenvectors, set 3).

Let n\geq 2, 1\leq m\leq n-1 and A=T^{(\alpha,m)}+H^{(\alpha,m)}, B=T^{(\beta,m)}+H^{(\beta,m)} with T^{(\xi,m)} and H^{(\xi,m)}, \xi=\alpha,\beta, defined in (2.1) and (2.9), respectively. Assume that B is invertible. Then, the GEVP (2.3) has eigenpairs (\lambda_{j},x_{j}) where

\lambda_{j+1}=\frac{\alpha_{0}+2\sum_{l=1}^{m}\alpha_{l}\cos(lj\pi h)}{\beta_{0}+2\sum_{l=1}^{m}\beta_{l}\cos(lj\pi h)},\quad x_{j+1,k+1}=c\cos(j\pi kh),\quad j,k=0,1,\cdots,n-1 (2.10)

with h=\frac{1}{n-1}.

The matrices can be complex-valued. For example, let \iota^{2}=-1, n=5, m=2, \alpha_{0}=8+2\iota, \alpha_{1}=5-\iota, \alpha_{2}=2\iota, \beta_{0}=6, \beta_{1}=3\iota, \beta_{2}=1-\iota. Then, with the setting in Theorem 2.3, we have the following matrices

A=\begin{bmatrix}4+\iota&5-\iota&2\iota&0&0\\ 5-\iota&8+4\iota&5-\iota&2\iota&0\\ 2\iota&5-\iota&8+2\iota&5-\iota&2\iota\\ 0&2\iota&5-\iota&8+4\iota&5-\iota\\ 0&0&2\iota&5-\iota&4+\iota\end{bmatrix},\qquad B=\begin{bmatrix}3&3\iota&1-\iota&0&0\\ 3\iota&7-\iota&3\iota&1-\iota&0\\ 1-\iota&3\iota&6&3\iota&1-\iota\\ 0&1-\iota&3\iota&7-\iota&3\iota\\ 0&0&1-\iota&3\iota&3\end{bmatrix}. (2.11)

By direct calculations, the eigenpairs of (2.3) are

\lambda_{1,2,3,4,5}=2-\frac{\iota}{2},\ \frac{7-3\iota+(6-5\iota)\sqrt{2}}{9},\ \frac{7}{5}-\frac{6}{5}\iota,\ -\frac{5}{8}+\frac{3}{8}\iota,\ \frac{7-3\iota-(6-5\iota)\sqrt{2}}{9}, (2.12)
x_{1}=(1,1,1,1,1)^{T},
x_{2}=(1,\tfrac{1}{\sqrt{2}},0,-\tfrac{1}{\sqrt{2}},-1)^{T},
x_{3}=(1,0,-1,0,1)^{T},
x_{4}=(1,-1,1,-1,1)^{T},
x_{5}=(-1,\tfrac{1}{\sqrt{2}},0,-\tfrac{1}{\sqrt{2}},1)^{T},

which verifies Theorem 2.3.
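This verification can be reproduced in a few lines. The sketch below (not from the paper; the helper name `build` is an illustration) checks the residual Ax = λBx directly, since the matrices are complex symmetric rather than Hermitian and a standard Hermitian eigensolver does not apply:

```python
import numpy as np

n = 5
h = 1 / 4                        # h = 1/(n-1) for set 3
alpha = [8 + 2j, 5 - 1j, 2j]
beta = [6, 3j, 1 - 1j]

def build(n, xi):
    # T^(xi,2) + H^(xi,2) with the Hankel rule (2.9)
    M = xi[0] * np.eye(n, dtype=complex) \
        + xi[1] * (np.eye(n, k=1) + np.eye(n, k=-1)) \
        + xi[2] * (np.eye(n, k=2) + np.eye(n, k=-2))
    M[0, 0] -= xi[0] / 2         # -xi0/2 at both corners
    M[-1, -1] -= xi[0] / 2
    M[1, 1] += xi[2]             # xi2 at (2,2) and (n-1,n-1)
    M[-2, -2] += xi[2]
    return M

A, B = build(n, alpha), build(n, beta)

ok = True
for j in range(n):
    th = j * np.pi * h
    lam = (alpha[0] + 2 * alpha[1] * np.cos(th) + 2 * alpha[2] * np.cos(2 * th)) \
        / (beta[0] + 2 * beta[1] * np.cos(th) + 2 * beta[2] * np.cos(2 * th))
    x = np.cos(j * np.pi * h * np.arange(n))   # eigenvector (2.10)
    ok = ok and np.allclose(A @ x, lam * (B @ x))
```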

Now, we define a Hankel matrix H^{(\xi,m)}=(H^{(\xi,m)}_{j,k})\in\mathbb{C}^{n\times n} with entries

H^{(\xi,m)}_{j,k}=H^{(\xi,m)}_{n-j+1,n-k+1}=\begin{cases}\xi_{j+k-1},\quad&k=1,\cdots,m-j+1,\quad j=1,\cdots,m,\quad 1\leq m\leq n,\\ 0,\quad&\text{otherwise}.\end{cases} (2.13)

We then have the following set of analytical eigenpairs.

Theorem 2.4 (Analytical eigenvalues and eigenvectors, set 4).

Let n\geq 2, 1\leq m\leq n-1 and A=T^{(\alpha,m)}+H^{(\alpha,m)}, B=T^{(\beta,m)}+H^{(\beta,m)} with T^{(\xi,m)} and H^{(\xi,m)}, \xi=\alpha,\beta, defined in (2.1) and (2.13), respectively. Assume that B is invertible. Then, the GEVP (2.3) has eigenpairs (\lambda_{j},x_{j}) where

\lambda_{j+1}=\frac{\alpha_{0}+2\sum_{l=1}^{m}\alpha_{l}\cos(lj\pi h)}{\beta_{0}+2\sum_{l=1}^{m}\beta_{l}\cos(lj\pi h)},\quad x_{j+1,k}=c\cos\big(j\pi(k-\tfrac{1}{2})h\big) (2.14)

with h=\frac{1}{n},\ k=1,2,\cdots,n,\ j=0,1,\cdots,n-1.

We present the following example. Let n=4, m=2, \alpha_{0}=7, \alpha_{1}=5, \alpha_{2}=2, \beta_{0}=5, \beta_{1}=3, \beta_{2}=1. Then, with the setting in Theorem 2.4, we have the following matrices

A=\begin{bmatrix}12&7&2&0\\ 7&7&5&2\\ 2&5&7&7\\ 0&2&7&12\end{bmatrix},\qquad B=\begin{bmatrix}8&4&1&0\\ 4&5&3&1\\ 1&3&5&4\\ 0&1&4&8\end{bmatrix}. (2.15)

By direct calculations, the eigenpairs of (2.3) are

\lambda_{1,2,3,4}=\frac{21}{13},\ \frac{5+4\sqrt{2}}{7},\ 1,\ \frac{5-4\sqrt{2}}{7}, (2.16)
x_{1}=(1,1,1,1)^{T},
x_{2}=(1,\sqrt{2}-1,1-\sqrt{2},-1)^{T},
x_{3}=(1,-1,-1,1)^{T},
x_{4}=(-1,1+\sqrt{2},-1-\sqrt{2},1)^{T},

which verifies Theorem 2.4.
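A numerical cross-check of this example, as a sketch assuming SciPy's generalized eigensolver is available:

```python
import numpy as np
from scipy.linalg import eig

# The matrices (2.15)
A = np.array([[12, 7, 2, 0],
              [7, 7, 5, 2],
              [2, 5, 7, 7],
              [0, 2, 7, 12]], dtype=float)
B = np.array([[8, 4, 1, 0],
              [4, 5, 3, 1],
              [1, 3, 5, 4],
              [0, 1, 4, 8]], dtype=float)

# Generalized eigenvalues of Ax = lambda*Bx; all four are real here
computed = np.sort(eig(A, B, right=False).real)
expected = np.sort([21 / 13, (5 + 4 * np.sqrt(2)) / 7, 1.0, (5 - 4 * np.sqrt(2)) / 7])
match = np.allclose(computed, expected)
```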

Remark 2.5.

Theorems 2.1–2.4 give analytical eigenvalues and eigenvectors for four different sets of GEVPs. The eigenvalues are of the same form while the eigenvectors are of different forms. For a fixed bandwidth m, the interior entries of the matrices defined in Theorems 2.1–2.4 are the same while the modifications to the boundary rows are slightly different. This small discrepancy leads to different eigenvectors. It is well-known that for a complex-valued Hermitian matrix, the eigenvalues are real and the eigenvectors are complex. The matrices in (2.11) are complex-valued; they are symmetric but not Hermitian. The eigenvalues are complex while the eigenvectors are real. This property holds for these special matrices, and it would be interesting to generalize it to larger classes of matrices.

2.3 Corner-overlapped block-diagonal matrices

In this section, we consider the following type of matrix, namely G^{(\xi)}=(G^{(\xi)}_{j,k}) with

G^{(\xi)}=\begin{bmatrix}\xi_{3}&\xi_{1}&&&&&&\\ \xi_{1}&\xi_{0}&\xi_{1}&\xi_{2}&&&&\\ &\xi_{1}&\xi_{3}&\xi_{1}&&&&\\ &\xi_{2}&\xi_{1}&\xi_{0}&\xi_{1}&\xi_{2}&&\\ &&&\xi_{1}&\xi_{3}&\xi_{1}&&\\ &&&\xi_{2}&\xi_{1}&\xi_{0}&\cdots&\\ &&&&&\vdots&\ddots&\\ &&&&&&&\xi_{3}\end{bmatrix}. (2.17)

We assume that the dimension of the matrix is (2n+1)\times(2n+1). It is a block-diagonal matrix in which the corners of adjacent blocks overlap. Therefore, we refer to this type of matrix as a corner-overlapped block-diagonal matrix.

We now derive their analytical eigenpairs. To illustrate the idea, we consider the following matrix

A=\begin{bmatrix}\alpha_{3}&\alpha_{1}&&&\\ \alpha_{1}&\alpha_{0}&\alpha_{1}&\alpha_{2}&\\ &\alpha_{1}&\alpha_{3}&\alpha_{1}&\\ &\alpha_{2}&\alpha_{1}&\alpha_{0}&\alpha_{1}\\ &&&\alpha_{1}&\alpha_{3}\end{bmatrix} (2.18)

and its EVP (2.2). A direct symbolic calculation leads to the analytical eigenvalues

\lambda_{1}=\alpha_{3}, (2.19)
\lambda_{2,3}=\frac{1}{2}\Big(\alpha_{0}+\alpha_{2}+\alpha_{3}\pm\sqrt{12\alpha_{1}^{2}+(\alpha_{0}+\alpha_{2}-\alpha_{3})^{2}}\Big),
\lambda_{4,5}=\frac{1}{2}\Big(\alpha_{0}-\alpha_{2}+\alpha_{3}\pm\sqrt{4\alpha_{1}^{2}+(\alpha_{0}-\alpha_{2}-\alpha_{3})^{2}}\Big).
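These closed forms can be spot-checked numerically for arbitrary parameter values; the sketch below (not from the paper; the α values are arbitrary) compares them with a direct symmetric eigensolve:

```python
import numpy as np

a0, a1, a2, a3 = 2.0, -1.0, 0.5, 3.0   # arbitrary test values
A = np.array([[a3, a1, 0, 0, 0],
              [a1, a0, a1, a2, 0],
              [0, a1, a3, a1, 0],
              [0, a2, a1, a0, a1],
              [0, 0, 0, a1, a3]])

# The analytical eigenvalues (2.19)
r1 = np.sqrt(12 * a1**2 + (a0 + a2 - a3)**2)
r2 = np.sqrt(4 * a1**2 + (a0 - a2 - a3)**2)
expected = np.sort([a3,
                    (a0 + a2 + a3 + r1) / 2, (a0 + a2 + a3 - r1) / 2,
                    (a0 - a2 + a3 + r2) / 2, (a0 - a2 + a3 - r2) / 2])
computed = np.sort(np.linalg.eigvalsh(A))
match = np.allclose(computed, expected)
```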

Alternatively, to find its eigenvalues, we note that the first and the last rows of (2.2) lead to

\alpha_{3}x_{1}+\alpha_{1}x_{2}=\lambda x_{1},
\alpha_{1}x_{4}+\alpha_{3}x_{5}=\lambda x_{5}.

On one hand, we solve these equations to arrive at

\alpha_{1}x_{2}=(\lambda-\alpha_{3})x_{1}, (2.20)
\alpha_{1}x_{4}=(\lambda-\alpha_{3})x_{5},

which is then substituted into the third equation in (2.2) to get

(\lambda-\alpha_{3})(x_{5}-x_{3}+x_{1})=0. (2.21)

On the other hand, we substitute (2.20) and the third equation of (2.2) into the second and the fourth equations in (2.2) to get

\big(2\alpha_{1}^{2}+\alpha_{0}(\lambda-\alpha_{3})-\lambda(\lambda-\alpha_{3})\big)x_{2}+\big(\alpha_{1}^{2}+\alpha_{2}(\lambda-\alpha_{3})\big)x_{4}=0, (2.22)
\big(\alpha_{1}^{2}+\alpha_{2}(\lambda-\alpha_{3})\big)x_{2}+\big(2\alpha_{1}^{2}+\alpha_{0}(\lambda-\alpha_{3})-\lambda(\lambda-\alpha_{3})\big)x_{4}=0.

We now see that the EVP (2.2) is decomposed into two subproblems, namely an EVP and a QEVP:

\tilde{A}x=\lambda x,\qquad\text{where}\qquad\tilde{A}=[\alpha_{3}]_{1\times 1} (2.23)

and

\lambda^{2}Ix-\lambda Bx-Cx=0, (2.24)

where I is the identity matrix (we assume that the dimension of I adapts to its occurrence; in this case, it is 2\times 2),

B=\begin{bmatrix}\alpha_{0}+\alpha_{3}&\alpha_{2}\\ \alpha_{2}&\alpha_{0}+\alpha_{3}\end{bmatrix},\quad\text{and}\quad C=\begin{bmatrix}2\alpha_{1}^{2}-\alpha_{0}\alpha_{3}&\alpha_{1}^{2}-\alpha_{2}\alpha_{3}\\ \alpha_{1}^{2}-\alpha_{2}\alpha_{3}&2\alpha_{1}^{2}-\alpha_{0}\alpha_{3}\end{bmatrix}.

Both matrices B and C are symmetric. The EVP (2.23) has the analytical eigenvalue \lambda=\alpha_{3}, which is one of the eigenvalues in (2.19). The characteristic polynomial of the QEVP (2.24) is

\chi(\lambda)=\det(\lambda^{2}I-\lambda B-C), (2.25)

which is a polynomial of degree four. By the fundamental theorem of algebra, it has four roots. It is easy to verify that \lambda_{j}, j=2,3,4,5, given in (2.19) are the four roots of the equation \chi(\lambda)=0.

Now, we propose the following idea. Assuming that \lambda\neq 0, we rewrite the QEVP (2.24) as

\Xi x=\lambda x, (2.26)

where

\Xi=B+\frac{1}{\lambda}C=\begin{bmatrix}\alpha_{0}+\alpha_{3}+\frac{2\alpha_{1}^{2}-\alpha_{0}\alpha_{3}}{\lambda}&\alpha_{2}+\frac{\alpha_{1}^{2}-\alpha_{2}\alpha_{3}}{\lambda}\\ \alpha_{2}+\frac{\alpha_{1}^{2}-\alpha_{2}\alpha_{3}}{\lambda}&\alpha_{0}+\alpha_{3}+\frac{2\alpha_{1}^{2}-\alpha_{0}\alpha_{3}}{\lambda}\end{bmatrix}, (2.27)

which is a symmetric matrix. For a fixed \lambda, the matrix \Xi can be viewed as a constant matrix. We apply Theorem 2.1 with bandwidth m=1 and dimension n=2. Thus, the eigenvalues of \Xi satisfy

\lambda_{j}=\Big(\alpha_{0}+\alpha_{3}+\frac{2\alpha_{1}^{2}-\alpha_{0}\alpha_{3}}{\lambda_{j}}\Big)+2\Big(\alpha_{2}+\frac{\alpha_{1}^{2}-\alpha_{2}\alpha_{3}}{\lambda_{j}}\Big)\cos(j\pi/3),\qquad j=1,2, (2.28)

which can be rewritten as a quadratic equation in \lambda_{j}:

\lambda_{j}^{2}-\big(\alpha_{0}+\alpha_{3}+2\alpha_{2}\cos(j\pi/3)\big)\lambda_{j}-\big(2\alpha_{1}^{2}-\alpha_{0}\alpha_{3}+2(\alpha_{1}^{2}-\alpha_{2}\alpha_{3})\cos(j\pi/3)\big)=0,\quad j=1,2. (2.29)

For j=1, \cos(j\pi/3)=1/2 and we obtain the eigenvalues \lambda_{2,3} as in (2.19), while for j=2, \cos(j\pi/3)=-1/2 and we obtain the eigenvalues \lambda_{4,5} as in (2.19). If \lambda=0, then (2.27) is invalid, and we note that a shift (dividing (2.26) by \lambda-\alpha for some nonzero \alpha) leads to the same set of eigenvalues.

We now generalize the matrix A defined in (2.18) and consider the GEVP. Based on the idea above, we have the following theorem, which gives analytical solutions to a class of GEVPs with corner-overlapped block-diagonal matrices.

Theorem 2.6 (Analytical eigenvalues and eigenvectors, set 5).

Let A=G^{(\alpha)} and B=G^{(\beta)}. Assume that B is invertible. Then, the GEVP (2.3) has 2n+1 eigenvalues

\lambda_{2n+1}=\frac{\alpha_{3}}{\beta_{3}},\qquad\lambda_{2j-1,2j}=\frac{-\hat{b}\pm\sqrt{\hat{b}^{2}-4\hat{a}\hat{c}}}{2\hat{a}},\qquad j=1,2,\cdots,n, (2.30)

where

\hat{a}=\beta_{0}\beta_{3}-2\beta_{1}^{2}+2(\beta_{2}\beta_{3}-\beta_{1}^{2})\cos(j\pi h), (2.31)
\hat{b}=4\alpha_{1}\beta_{1}-\beta_{0}\alpha_{3}-\alpha_{0}\beta_{3}-2(\beta_{2}\alpha_{3}-2\alpha_{1}\beta_{1}+\alpha_{2}\beta_{3})\cos(j\pi h),
\hat{c}=\alpha_{0}\alpha_{3}-2\alpha_{1}^{2}+2(\alpha_{2}\alpha_{3}-\alpha_{1}^{2})\cos(j\pi h).

The corresponding eigenvectors are x_{j}=(x_{j,1},\cdots,x_{j,2n+1})^{T} with

x_{2n+1,2j+1}=c(-1)^{j},\quad x_{2n+1,2j}=0,\quad j=1,\cdots,n, (2.32)
x_{j,2k+1}=\frac{\alpha_{1}-\lambda_{j}\beta_{1}}{\lambda_{j}\beta_{3}-\alpha_{3}}(x_{j,2k}+x_{j,2k+2}),
x_{j,2k}=c\sin(\lceil\tfrac{j}{2}\rceil\pi kh),\quad h=\frac{1}{n+1},\quad j=1,\cdots,2n,\ k=0,1,\cdots,n,

where \lceil\cdot\rceil is the ceiling function.

Proof.

For simplicity, we denote by $(\lambda,x=(x_{1},\cdots,x_{2n+1})^{T})$ a generic eigenpair of the GEVP (2.3). We assume that $x_{0}=x_{2n+2}=0.$ On the one hand, the $(2j+1)$-th row of (2.3) leads to

α1x2j+α3x2j+1+α1x2j+2=λ(β1x2j+β3x2j+1+β1x2j+2),j=0,1,,n,\alpha_{1}x_{2j}+\alpha_{3}x_{2j+1}+\alpha_{1}x_{2j+2}=\lambda(\beta_{1}x_{2j}+\beta_{3}x_{2j+1}+\beta_{1}x_{2j+2}),\qquad j=0,1,\cdots,n, (2.33)

which is simplified to

(α1λβ1)(x2j+x2j+2)=(λβ3α3)x2j+1,j=0,1,,n.(\alpha_{1}-\lambda\beta_{1})(x_{2j}+x_{2j+2})=(\lambda\beta_{3}-\alpha_{3})x_{2j+1},\qquad j=0,1,\cdots,n. (2.34)

Using (2.34) recursively and x0=x2n+2=0x_{0}=x_{2n+2}=0, we calculate

0\displaystyle 0 =(α1λβ1)x2n+2\displaystyle=(\alpha_{1}-\lambda\beta_{1})x_{2n+2} (2.35)
=(λβ3α3)x2n+1(α1λβ1)x2n\displaystyle=(\lambda\beta_{3}-\alpha_{3})x_{2n+1}-(\alpha_{1}-\lambda\beta_{1})x_{2n}
=\displaystyle=\cdots
=(λβ3α3)(j=0n(1)njx2j+1).\displaystyle=(\lambda\beta_{3}-\alpha_{3})\Big{(}\sum_{j=0}^{n}(-1)^{n-j}x_{2j+1}\Big{)}.

On the other hand, the $(2j+2)$-th row of (2.3) leads to

α2x2j+α1x2j+1\displaystyle\alpha_{2}x_{2j}+\alpha_{1}x_{2j+1} +α0x2j+2+α1x2j+3+α2x2j+4\displaystyle+\alpha_{0}x_{2j+2}+\alpha_{1}x_{2j+3}+\alpha_{2}x_{2j+4} (2.36)
=λ(β2x2j+β1x2j+1+β0x2j+2+β1x2j+3+β2x2j+4),\displaystyle=\lambda(\beta_{2}x_{2j}+\beta_{1}x_{2j+1}+\beta_{0}x_{2j+2}+\beta_{1}x_{2j+3}+\beta_{2}x_{2j+4}),

where j=0,1,,n1.j=0,1,\cdots,n-1. Using (2.34) and x0=x2n+2=0x_{0}=x_{2n+2}=0, this equation simplifies to

α~1x2j+α~0x2j+2+α~1x2j+4=λ(β~1x2j+β~0x2j+2+β~1x2j+4),j=0,1,,n1,\tilde{\alpha}_{1}x_{2j}+\tilde{\alpha}_{0}x_{2j+2}+\tilde{\alpha}_{1}x_{2j+4}=\lambda(\tilde{\beta}_{1}x_{2j}+\tilde{\beta}_{0}x_{2j+2}+\tilde{\beta}_{1}x_{2j+4}),\quad j=0,1,\cdots,n-1, (2.37)

where

α~0\displaystyle\tilde{\alpha}_{0} =α0+2α1(α1λβ1)λβ3α3,\displaystyle=\alpha_{0}+\frac{2\alpha_{1}(\alpha_{1}-\lambda\beta_{1})}{\lambda\beta_{3}-\alpha_{3}}, β~0=β0+2β1(α1λβ1)λβ3α3,\displaystyle\tilde{\beta}_{0}=\beta_{0}+\frac{2\beta_{1}(\alpha_{1}-\lambda\beta_{1})}{\lambda\beta_{3}-\alpha_{3}}, (2.38)
α~1\displaystyle\tilde{\alpha}_{1} =α2+α1(α1λβ1)λβ3α3,\displaystyle=\alpha_{2}+\frac{\alpha_{1}(\alpha_{1}-\lambda\beta_{1})}{\lambda\beta_{3}-\alpha_{3}}, β~1=β2+β1(α1λβ1)λβ3α3.\displaystyle\tilde{\beta}_{1}=\beta_{2}+\frac{\beta_{1}(\alpha_{1}-\lambda\beta_{1})}{\lambda\beta_{3}-\alpha_{3}}.

The GEVP (2.3) with the matrices defined in this theorem is then decomposed into the block form

[Aee𝟎AoeAoo][xexo]=[𝟎𝟎],\begin{bmatrix}A_{ee}&\mathbf{0}\\ A_{oe}&A_{oo}\end{bmatrix}\begin{bmatrix}x_{e}\\ x_{o}\end{bmatrix}=\begin{bmatrix}\mathbf{0}\\ \mathbf{0}\end{bmatrix}, (2.39)

where $x_{e}=(x_{2},\cdots,x_{2n})^{T}$, $x_{o}=(x_{1},\cdots,x_{2n+1})^{T}$, the block $A_{ee}$ is formed from (2.37), and the blocks $A_{oe}$ and $A_{oo}$ are formed from (2.34). The characteristic polynomial of (2.39) is

χ(λ)=det(Aee)det(Aoo).\chi(\lambda)=\det(A_{ee})\det(A_{oo}). (2.40)

The roots of $\chi(\lambda)=0$ give the eigenvalues. Firstly, using (2.35), $\det(A_{oo})=0$ leads to an eigenpair, which we denote as $(\lambda_{2n+1},x_{2n+1})$ with the eigenvector $x_{2n+1}=(x_{2n+1,1},\cdots,x_{2n+1,2n+1})^{T}$ and

\lambda_{2n+1}=\frac{\alpha_{3}}{\beta_{3}},\qquad x_{2n+1,2j+1}=c(-1)^{j},\ j=0,1,\cdots,n,\quad x_{2n+1,2j}=0,\ j=1,\cdots,n. (2.41)

This eigenvalue $\lambda_{2n+1}$ is simple due to the nonzero block $A_{oe}$. Now, the first block row of (2.39), $A_{ee}x_{e}=0$, can be written as a GEVP with tridiagonal matrices defined using the parameters in (2.38). Applying Theorem 2.1 with $m=1$, we obtain the eigenpairs $(\lambda_{j},x_{j})$ with $x_{j}=(x_{j,2},\cdots,x_{j,2n})^{T}$, where

λj=α~0+2α~1cos(jπh)β~0+2β~1cos(jπh),xj,2k=csin(jπkh),h=1n+1,j,k=1,2,,n.\lambda_{j}=\frac{\tilde{\alpha}_{0}+2\tilde{\alpha}_{1}\cos(j\pi h)}{\tilde{\beta}_{0}+2\tilde{\beta}_{1}\cos(j\pi h)},\quad x_{j,2k}=c\sin(j\pi kh),\quad h=\frac{1}{n+1},\ j,k=1,2,\cdots,n. (2.42)

Substituting (2.38) into (2.42) and rearranging the indices of the eigenpairs accordingly (since the eigenvalue equation is quadratic in $\lambda$, each index $j$ yields two eigenpairs $(\lambda_{2j-1},x_{2j-1})$ and $(\lambda_{2j},x_{2j})$), we have

λ2j1,2j=b^±b^24a^c^2a^,j=1,2,,n,\lambda_{2j-1,2j}=\frac{-\hat{b}\pm\sqrt{\hat{b}^{2}-4\hat{a}\hat{c}}}{2\hat{a}},\qquad j=1,2,\cdots,n,

where

a^\displaystyle\hat{a} =β0β32β12+2(β2β3β12)cos(jπh),\displaystyle=\beta_{0}\beta_{3}-2\beta_{1}^{2}+2(\beta_{2}\beta_{3}-\beta_{1}^{2})\cos(j\pi h),
b^\displaystyle\hat{b} =4α1β1β0α3α0β32(β2α32α1β1+α2β3)cos(jπh),\displaystyle=4\alpha_{1}\beta_{1}-\beta_{0}\alpha_{3}-\alpha_{0}\beta_{3}-2(\beta_{2}\alpha_{3}-2\alpha_{1}\beta_{1}+\alpha_{2}\beta_{3})\cos(j\pi h),
c^\displaystyle\hat{c} =α0α32α12+2(α2α3α12)cos(jπh).\displaystyle=\alpha_{0}\alpha_{3}-2\alpha_{1}^{2}+2(\alpha_{2}\alpha_{3}-\alpha_{1}^{2})\cos(j\pi h).

Associated with the eigenvalues $\lambda_{2j-1}$ and $\lambda_{2j}$, the eigenvectors are denoted as $x_{2j-1}$ and $x_{2j}$, respectively. From (2.42), we have

x2j1,2k=x2j,2k=csin(jπkh),j,k=1,2,,n.x_{2j-1,2k}=x_{2j,2k}=c\sin(j\pi kh),\quad j,k=1,2,\cdots,n.

The odd entries of the eigenvectors are given by (2.34) as

\displaystyle x_{2j-1,2k+1}=\frac{\alpha_{1}-\lambda_{2j-1}\beta_{1}}{\lambda_{2j-1}\beta_{3}-\alpha_{3}}(x_{2j-1,2k}+x_{2j-1,2k+2}),
\displaystyle x_{2j,2k+1}=\frac{\alpha_{1}-\lambda_{2j}\beta_{1}}{\lambda_{2j}\beta_{3}-\alpha_{3}}(x_{2j,2k}+x_{2j,2k+2}),\qquad k=0,1,\cdots,n,\quad j=1,2,\cdots,n.

This completes the proof. ∎
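Theorem 2.6 can be checked numerically. The sketch below assembles the corner-overlapped block-diagonal matrices from the row patterns (2.33) and (2.36) (the builder `corner_block` and the parameter values are illustrative choices, not taken from the paper) and compares the spectrum of the pencil against (2.30):

```python
import numpy as np
from scipy.linalg import eigh

def corner_block(p, n):
    """(2n+1)x(2n+1) corner-overlapped block-diagonal matrix:
    odd rows follow (2.33) with entries (p1, p3, p1), even rows
    follow (2.36) with entries (p2, p1, p0, p1, p2)."""
    p0, p1, p2, p3 = p
    N = 2 * n + 1
    G = np.zeros((N, N))
    for i in range(N):                       # i is 0-based
        G[i, i] = p3 if i % 2 == 0 else p0   # 1-based odd rows carry p3
        if i + 1 < N:
            G[i, i + 1] = G[i + 1, i] = p1
        if i % 2 == 1 and i + 2 < N:         # second off-diagonal on even rows
            G[i, i + 2] = G[i + 2, i] = p2
    return G

n = 6
h = 1.0 / (n + 1)
alpha = (5.0, -2.0, 0.5, 4.0)   # (alpha_0, alpha_1, alpha_2, alpha_3), illustrative
beta = (4.0, 0.3, -0.1, 3.0)    # diagonally dominant, so B is invertible
A, B = corner_block(alpha, n), corner_block(beta, n)

exact = [alpha[3] / beta[3]]    # lambda_{2n+1} of (2.30)
for j in range(1, n + 1):
    z = np.cos(j * np.pi * h)
    a = beta[0]*beta[3] - 2*beta[1]**2 + 2*(beta[2]*beta[3] - beta[1]**2)*z
    b = (4*alpha[1]*beta[1] - beta[0]*alpha[3] - alpha[0]*beta[3]
         - 2*(beta[2]*alpha[3] - 2*alpha[1]*beta[1] + alpha[2]*beta[3])*z)
    c = alpha[0]*alpha[3] - 2*alpha[1]**2 + 2*(alpha[2]*alpha[3] - alpha[1]**2)*z
    exact += list(np.roots([a, b, c]).real)  # the pair lambda_{2j-1,2j}

numeric = eigh(A, B, eigvals_only=True)      # B is symmetric positive definite
print(np.allclose(np.sort(numeric), np.sort(exact)))
```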

We give an example which arises from the FEM discretization. The quadratic FEM of the Laplacian eigenvalue problem with nn uniform elements on the unit interval [0,1][0,1] leads to the stiffness and mass matrices [15, 27]

K=1h[16383831438313831638383163],M=h[815115115415115130115815115115815],K=\frac{1}{h}\begin{bmatrix}\frac{16}{3}&-\frac{8}{3}&\\[5.69046pt] -\frac{8}{3}&\frac{14}{3}&-\frac{8}{3}&\frac{1}{3}\\[5.69046pt] &-\frac{8}{3}&\frac{16}{3}&-\frac{8}{3}&\\[5.69046pt] &&\ddots&\ddots&\ddots\\ &&&-\frac{8}{3}&\frac{16}{3}\\ \end{bmatrix},\qquad M=h\begin{bmatrix}\frac{8}{15}&\frac{1}{15}&\\[5.69046pt] \frac{1}{15}&\frac{4}{15}&\frac{1}{15}&-\frac{1}{30}\\[5.69046pt] &\frac{1}{15}&\frac{8}{15}&\frac{1}{15}&\\[5.69046pt] &&\ddots&\ddots&\ddots\\ &&&\frac{1}{15}&\frac{8}{15}\\ \end{bmatrix}, (2.43)

which are of dimension (2n1)×(2n1){(2n-1)\times(2n-1)}. With these matrices in mind, we have the following analytical result.

Let KK and MM be defined in (2.43). Then, the GEVP Ku=λMuKu=\lambda Mu has analytical eigenvalues

λj={4(13+2cos(jπh)124+112cos(jπh)11cos2(jπh))3cos(jπh)n2,j=1,,n1,10n2,j=n,4(13+2cos(jπh)+124+112cos(jπh)11cos2(jπh))3cos(jπh)n2,j=n+1,,2n1,\lambda_{j}=\begin{cases}\frac{4\big{(}13+2\cos(j\pi h)-\sqrt{124+112\cos(j\pi h)-11\cos^{2}(j\pi h)}\big{)}}{3-\cos(j\pi h)}n^{2},\quad&j=1,\cdots,n-1,\\ 10n^{2},\quad&j=n,\\ \frac{4\big{(}13+2\cos(j\pi h)+\sqrt{124+112\cos(j\pi h)-11\cos^{2}(j\pi h)}\big{)}}{3-\cos(j\pi h)}n^{2},\quad&j=n+1,\cdots,2n-1,\\ \end{cases}

and analytical eigenvectors $u_{j}=(u_{j,1},\cdots,u_{j,2n-1})^{T}$, where $h=1/n$ and

\displaystyle u_{j,2k+1}=\frac{40+\lambda_{j}h^{2}}{80-8\lambda_{j}h^{2}}(u_{j,2k}+u_{j,2k+2}),\quad j\neq n,\ j=1,\cdots,2n-1,\ k=0,1,\cdots,n-1,
uj,2k\displaystyle u_{j,2k} ={csin(jπkh),j=1,,n1,k=0,1,,n,csin((jn)πkh),j=n+1,,2n1,k=0,1,,n,\displaystyle=\begin{cases}c\sin(j\pi kh),\quad j=1,\cdots,n-1,\ k=0,1,\cdots,n,\\ c\sin((j-n)\pi kh),\quad j=n+1,\cdots,2n-1,\ k=0,1,\cdots,n,\\ \end{cases}
un,2j+1\displaystyle u_{n,2j+1} =c(1)j,j=0,1,,n1,un,2j=0,j=1,,n1.\displaystyle=c(-1)^{j},j=0,1,\cdots,n-1,\quad u_{n,2j}=0,\quad j=1,\cdots,n-1.
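The closed-form eigenvalues above can be cross-checked against a direct numerical solution of the GEVP; a minimal sketch (the assembly routine is written from the matrix patterns displayed in (2.43)):

```python
import numpy as np
from scipy.linalg import eigh

def quadratic_fem_matrices(n):
    """Assemble the (2n-1)x(2n-1) stiffness K and mass M of (2.43)."""
    h = 1.0 / n
    N = 2 * n - 1
    K, M = np.zeros((N, N)), np.zeros((N, N))
    for i in range(N):
        mid = (i % 2 == 0)                  # 1-based odd index: element midpoint DOF
        K[i, i] = (16/3 if mid else 14/3) / h
        M[i, i] = (8/15 if mid else 4/15) * h
        if i + 1 < N:
            K[i, i+1] = K[i+1, i] = (-8/3) / h
            M[i, i+1] = M[i+1, i] = (1/15) * h
        if not mid and i + 2 < N:           # node-to-node coupling
            K[i, i+2] = K[i+2, i] = (1/3) / h
            M[i, i+2] = M[i+2, i] = (-1/30) * h
    return K, M

def exact_eigenvalues(n):
    h = 1.0 / n
    lams = [10.0 * n**2]                    # the middle eigenvalue, j = n
    for j in list(range(1, n)) + list(range(n + 1, 2 * n)):
        z = np.cos(j * np.pi * h)
        root = np.sqrt(124 + 112*z - 11*z**2)
        sign = -1 if j < n else 1
        lams.append(4 * (13 + 2*z + sign*root) / (3 - z) * n**2)
    return np.sort(np.array(lams))

n = 8
K, M = quadratic_fem_matrices(n)
numeric = eigh(K, M, eigvals_only=True)     # generalized symmetric EVP
print(np.allclose(numeric, exact_eigenvalues(n)))
```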

Generalization of Theorem 2.6 is possible. The matrix defined in (2.17) is a corner-overlapped block-diagonal matrix where each block (except the first and last) is of dimension 3×33\times 3. One can generalize the block to be of dimension k×kk\times k where k3k\geq 3. We give the following example where the block is 4×44\times 4. The cubic FEM of the Laplacian eigenvalue problem with nn uniform elements on the unit interval [0,1][0,1] leads to the stiffness and mass matrices [15, 27]

K\displaystyle K =1h[545297402720297405451894027201894037518940272013401894054529740272027202974054518940272029740545],\displaystyle=\frac{1}{h}\begin{bmatrix}\frac{54}{5}&-\frac{297}{40}&\frac{27}{20}\\[5.69046pt] -\frac{297}{40}&\frac{54}{5}&-\frac{189}{40}\\[5.69046pt] \frac{27}{20}&-\frac{189}{40}&\frac{37}{5}&-\frac{189}{40}&\frac{27}{20}&-\frac{13}{40}\\[5.69046pt] &&-\frac{189}{40}&\frac{54}{5}&-\frac{297}{40}&\frac{27}{20}\\[5.69046pt] &&\frac{27}{20}&-\frac{297}{40}&\frac{54}{5}&-\frac{189}{40}\\[5.69046pt] &&&&&\ddots&\ddots&\ddots\\[5.69046pt] &&&&&\frac{27}{20}&-\frac{297}{40}&\frac{54}{5}\\ \end{bmatrix}, (2.44)
M\displaystyle M =h[277027560314027560277033560314033560161053356031401916803356027702756031403140275602770335603140275602770],\displaystyle=h\begin{bmatrix}\frac{27}{70}&-\frac{27}{560}&-\frac{3}{140}\\[5.69046pt] -\frac{27}{560}&\frac{27}{70}&\frac{33}{560}\\[5.69046pt] -\frac{3}{140}&\frac{33}{560}&\frac{16}{105}&\frac{33}{560}&-\frac{3}{140}&\frac{19}{1680}\\[5.69046pt] &&\frac{33}{560}&\frac{27}{70}&-\frac{27}{560}&-\frac{3}{140}\\[5.69046pt] &&-\frac{3}{140}&-\frac{27}{560}&\frac{27}{70}&\frac{33}{560}\\[5.69046pt] &&&&&\ddots&\ddots&\ddots\\[5.69046pt] &&&&&-\frac{3}{140}&-\frac{27}{560}&\frac{27}{70}\\ \end{bmatrix},

which are of dimension $(3n-1)\times(3n-1)$. To find the analytical eigenvalues, we follow the same procedure as for the quadratic FEM. By static condensation, the GEVP $Ku=\lambda Mu$ is first rewritten in the block form

[Att𝟎AotAoo][utuo]=[𝟎𝟎],\begin{bmatrix}A_{tt}&\mathbf{0}\\ A_{ot}&A_{oo}\end{bmatrix}\begin{bmatrix}u^{t}\\ u^{o}\end{bmatrix}=\begin{bmatrix}\mathbf{0}\\ \mathbf{0}\end{bmatrix}, (2.45)

where $u^{t}=(u_{3},u_{6},\cdots,u_{3n-3})^{T}$, $u^{o}=(u_{1},u_{2},u_{4},u_{5},\cdots,u_{3n-2},u_{3n-1})^{T}$. Similarly, $\det(A_{oo})=0$ leads to the eigenvalues $\lambda_{n}=10n^{2}$ and $\lambda_{2n}=42n^{2}$. To find the other eigenvalues, we rewrite $A_{tt}u^{t}=\mathbf{0}$ as a cubic EVP

0\displaystyle 0 =(Λ)3[8118118118]+30(Λ)2[36113611361136]\displaystyle=(\Lambda)^{3}\begin{bmatrix}8&1&\\ 1&8&1&\\ &\ddots&\ddots&\ddots&\\ &&1&8&1\\ &&&1&8\\ \end{bmatrix}+30(\Lambda)^{2}\begin{bmatrix}-36&1&\\ 1&-36&1&\\ &\ddots&\ddots&\ddots&\\ &&1&-36&1\\ &&&1&-36\\ \end{bmatrix} (2.46)
+360(Λ)[64336433643364]25200[2112112112],\displaystyle\quad+360(\Lambda)\begin{bmatrix}64&3&\\ 3&64&3&\\ &\ddots&\ddots&\ddots&\\ &&3&64&3\\ &&&3&64\\ \end{bmatrix}-25200\begin{bmatrix}2&-1&\\ -1&2&-1&\\ &\ddots&\ddots&\ddots&\\ &&-1&2&-1\\ &&&-1&2\\ \end{bmatrix},

which is of dimension $(n-1)\times(n-1)$. Herein, $\Lambda=\lambda h^{2}$. Using the derivations from the proof of Theorem 2.6, we obtain the remaining eigenvalues as the roots of the equation

(4+ζ)Λ330(18ζ)Λ2+360(32+3ζ)Λ25200(1ζ)=0,(4+\zeta)\Lambda^{3}-30(18-\zeta)\Lambda^{2}+360(32+3\zeta)\Lambda-25200(1-\zeta)=0, (2.47)

where $\zeta=\cos(j\pi h)$, $h=1/n$, $j=1,\cdots,n-1$. The eigenvectors can be obtained similarly.
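One can check that the roots of (2.47) indeed solve the matrix cubic EVP (2.46), since each tridiagonal Toeplitz block acts on the sine vector as a scalar; a minimal sketch (the size n is an illustrative choice):

```python
import numpy as np

def tri(d, o, m):
    """m x m symmetric tridiagonal Toeplitz matrix: diagonal d, off-diagonal o."""
    return d * np.eye(m) + o * (np.eye(m, k=1) + np.eye(m, k=-1))

n = 9
m, h = n - 1, 1.0 / n
# the four coefficient matrices of the cubic EVP (2.46)
T3, T2, T1, T0 = tri(8, 1, m), 30*tri(-36, 1, m), 360*tri(64, 3, m), -25200*tri(2, -1, m)

residuals = []
for j in range(1, n):
    z = np.cos(j * np.pi * h)
    x = np.sin(j * np.pi * h * np.arange(1, m + 1))   # sine eigenvector
    for L in np.roots([4 + z, -30*(18 - z), 360*(32 + 3*z), -25200*(1 - z)]):
        # scaled residual of (2.46) at a root of (2.47)
        res = np.linalg.norm((L**3*T3 + L**2*T2 + L*T1 + T0) @ x)
        residuals.append(res / (1 + abs(L))**3)
print(max(residuals) < 1e-6)
```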

2.4 Polynomial eigenvalue problems (PEVPs)

The PEVP is to find the eigenpairs (λ,xn)(\lambda\in\mathbb{C},x\in\mathbb{C}^{n}) such that

P(λ)x=0withP(λ):=λqAq+λq1Aq1++A0,P(\lambda)x=0\qquad\text{with}\qquad P(\lambda):=\lambda^{q}A_{q}+\lambda^{q-1}A_{q-1}+\cdots+A_{0}, (2.48)

where $A_{j}\in\mathbb{C}^{n\times n}$, $j=0,1,\cdots,q$. When $q=1$ and $q=2$, the PEVP reduces to the linear eigenvalue problem (LEVP) and the QEVP, respectively. The LEVP is also referred to as the GEVP, as discussed above.

We note that the nonlinear eigenvalue problem Aeexe=0A_{ee}x_{e}=0 defined in the proof of Theorem 2.6 is a QEVP. In particular, the problem can be written as follows

λ2Bx+λCx+Dx=0,\lambda^{2}Bx+\lambda Cx+Dx=0, (2.49)

where $B=T^{(\gamma,1)}$ with $\gamma_{0}=\beta_{0}\beta_{3}-2\beta_{1}^{2}$, $\gamma_{1}=\beta_{2}\beta_{3}-\beta_{1}^{2}$; $C=T^{(\gamma,1)}$ with $\gamma_{0}=4\alpha_{1}\beta_{1}-\beta_{0}\alpha_{3}-\alpha_{0}\beta_{3}$, $\gamma_{1}=2\alpha_{1}\beta_{1}-\beta_{2}\alpha_{3}-\alpha_{2}\beta_{3}$; and $D=T^{(\gamma,1)}$ with $\gamma_{0}=\alpha_{0}\alpha_{3}-2\alpha_{1}^{2}$, $\gamma_{1}=\alpha_{2}\alpha_{3}-\alpha_{1}^{2}$. Herein, $T^{(\gamma,1)}$ is defined as in (2.1). The analytical eigenpairs then follow from Theorem 2.1 with $m=1$. The analytical eigenvalues of the cubic EVP (2.46) are derived in a similar fashion. We generalize these results in the following theorem.

Theorem 2.7 (Analytical solutions to PEVP).

Let $n\geq 2$, $1\leq m\leq n-1$, and $A_{j}=T^{(\alpha^{(j)},m)}-H^{(\alpha^{(j)},m)}$ with $T^{(\xi,m)}$ and $H^{(\xi,m)}$, $\xi=\alpha^{(j)}$, $j=0,1,\cdots,q$, defined in (2.1) and (2.4), respectively. Then, the PEVP (2.48) has eigenpairs $(\lambda_{j},x_{j})$, where the $x_{j}$ are given in (2.5) and the $\lambda_{j}$ are the roots of the polynomials

k=0qλjk(α0(k)+2l=1mαl(k)cos(ljπh))=0,j=1,2,,n.\sum_{k=0}^{q}\lambda_{j}^{k}\Big{(}\alpha_{0}^{(k)}+2\sum_{l=1}^{m}\alpha_{l}^{(k)}\cos(lj\pi h)\Big{)}=0,\qquad j=1,2,\cdots,n. (2.50)
Proof.

Following [21, p. 515] and the generalization in Theorem 2.1, we seek eigenvectors with entries of the form $c\sin(j\pi kh)$. The $i$-th row of the PEVP (2.48) then gives

s=0qk=1nλsAs,ikxj,k=s=0qλsk=1nAs,ikxj,k=0,i=1,,n,\sum_{s=0}^{q}\sum_{k=1}^{n}\lambda^{s}A_{s,ik}x_{j,k}=\sum_{s=0}^{q}\lambda^{s}\sum_{k=1}^{n}A_{s,ik}x_{j,k}=0,\quad i=1,\cdots,n,

which boils down to

s=0qλs(α0(s)+2l=1mαl(s)cos(ljπh))=0,i=1,,n.\sum_{s=0}^{q}\lambda^{s}\Big{(}\alpha_{0}^{(s)}+2\sum_{l=1}^{m}\alpha_{l}^{(s)}\cos(lj\pi h)\Big{)}=0,\quad i=1,\cdots,n. (2.51)

We note that the above form is independent of the row number $i$; it depends only on the eigenvector index $j$. This means that, for each $j$, a root $\lambda$ of (2.51) together with the vector $x_{j}=(x_{j,1},\cdots,x_{j,n})^{T}$, $x_{j,k}=c\sin(j\pi kh)$, $k=1,\cdots,n$, satisfies the PEVP. Hence, each such root and $x_{j}$ form an eigenpair. This completes the proof. ∎

We remark that similar analytical results can be obtained if the Hankel matrix $H$ is defined as in (2.7), (2.9), or (2.13). The eigenvectors are of the same form as in each of Theorems 2.2–2.4, and there are $n$ eigenvectors. There are $nq$ eigenvalues in total: the roots of the $n$ polynomials of order $q$ in (2.50). Two examples are given in the previous subsection, so we omit further examples for brevity.
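As an independent check of Theorem 2.7, a PEVP can also be solved numerically by companion-style linearization and compared against the roots of (2.50). A sketch for a quadratic pencil with illustrative tridiagonal ($m=1$) coefficients, taking the Hankel part to be zero so that the sine vectors (2.5) are eigenvectors of each tridiagonal Toeplitz coefficient:

```python
import numpy as np
from scipy.linalg import eig

def tri(d, o, m):
    return d * np.eye(m) + o * (np.eye(m, k=1) + np.eye(m, k=-1))

n = 6
h = 1.0 / (n + 1)
# P(lam) = lam^2 A2 + lam A1 + A0 with illustrative tridiagonal Toeplitz coefficients
A0, A1, A2 = tri(3, 1, n), tri(-7, 0.5, n), tri(2, -1, n)

# companion linearization: [[0, I], [-A0, -A1]] v = lam [[I, 0], [0, A2]] v
Z, I = np.zeros((n, n)), np.eye(n)
L = np.block([[Z, I], [-A0, -A1]])
R = np.block([[I, Z], [Z, A2]])
w = eig(L, R)[0]

exact = []
for j in range(1, n + 1):
    z = np.cos(j * np.pi * h)
    # scalar quadratic from (2.50): (2-2z) lam^2 + (-7+z) lam + (3+2z) = 0
    exact += list(np.roots([2 - 2*z, -7 + z, 3 + 2*z]))

match = (np.max(np.abs(w.imag)) < 1e-8 and
         np.allclose(np.sort(w.real), np.sort(np.real(exact))))
print(match)
```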

2.5 Generalizations

We now consider some other generalizations. The simplest case is constant scaling. Let $c_{1}\neq 0$ and $c_{2}\neq 0$ be constants; then the GEVP $c_{1}Ax=\lambda c_{2}Bx$ has eigenvalues $\lambda=\tilde{\lambda}c_{1}/c_{2}$, where $\tilde{\lambda}$ is an eigenvalue of the GEVP $Ax=\tilde{\lambda}Bx$. The eigenvectors remain the same. This constant scaling has applications in various numerical spectral approximations. For example, for FEM, the scaling is of the form $(1/h)Ax=\lambda hBx$, while for FDM, the scaling is $c_{1}=1/h^{2}$, $c_{2}=1$, where $h$ is the mesh size (cf. [27, 26]).

Following the book of Strang [26, Section 5], one may generalize these results to powers and products of matrices. For the EVP of the form (2.2), the power $A^{k}$, $k\in\mathbb{Z}$, has eigenvalues $\lambda^{k}_{j}$, where $\lambda_{j}$, $j=1,\cdots,n$, are the eigenvalues of (2.2). The eigenvectors remain the same. For the EVPs $Ax=\lambda x$ and $Bx=\mu x$, if $A$ commutes with $B$, that is, $AB=BA$, then the two EVPs have the same eigenvectors and the eigenvalues of $AB$ (or $BA$) are $\lambda\mu$. Additionally, in this case, $A+B$ has eigenvalues $\lambda+\mu$. For the GEVP of the form (2.3), similar results can be obtained. If $A$ and $B$ commute, then $B^{-1}A=AB^{-1}$, and the GEVP $A^{k}x=\mu B^{k}x$ has eigenvalues $\mu=\lambda^{k}$, where $\lambda$ is an eigenvalue of $Ax=\lambda Bx$. Similar results hold for products and sums.
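A quick numerical illustration of the commuting case follows; the matrices below are polynomials in the same tridiagonal matrix (an illustrative choice), so they commute:

```python
import numpy as np
from scipy.linalg import eigh

n = 5
S = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = S @ S + 3*S            # polynomials in S, hence AB = BA
B = S + 4*np.eye(n)        # eigenvalues of S lie in (0, 4), so B is SPD

lam = eigh(A, B, eigvals_only=True)         # Ax = lam Bx
mu = eigh(A @ A, B @ B, eigvals_only=True)  # A^2 x = mu B^2 x
print(np.allclose(mu, np.sort(lam**2)))     # mu = lam^2, as claimed
```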

We now consider the tensor-product matrices from the FEM-based discretizations (see, for example, [3, 8]) of the Laplacian eigenvalue problem in multi-dimension. We have the following result.

Theorem 2.8 (Eigenvalues and eigenvectors for tensor-product matrices).

Let (λj,xj),j=1,,n,(\lambda_{j},x_{j}),j=1,\cdots,n, be the eigenpairs of the GEVP (2.3) and (μk,yk),k=1,,m,(\mu_{k},y_{k}),k=1,\cdots,m, be the eigenpairs of the GEVP Cy=μDyCy=\mu Dy. Then, the GEVP

(AD+BC)z=η(BD)z(A\otimes D+B\otimes C)z=\eta(B\otimes D)z (2.52)

has eigenpairs (η(j,k),z(j,k))(\eta_{(j,k)},z_{(j,k)}) with

η(j,k)=λj+μk,z(j,k)=xjyk,j=1,,n,k=1,,m.\eta_{(j,k)}=\lambda_{j}+\mu_{k},\qquad z_{(j,k)}=x_{j}\otimes y_{k},\qquad j=1,\cdots,n,\ k=1,\cdots,m. (2.53)
Proof.

Let $(\lambda,x)$ be a generic eigenpair of $Ax=\lambda Bx$ and $(\mu,y)$ be a generic eigenpair of $Cy=\mu Dy$. Let $z=x\otimes y$; we calculate

(AD+BC)z\displaystyle(A\otimes D+B\otimes C)z =(AD+BC)(xy)\displaystyle=(A\otimes D+B\otimes C)(x\otimes y) (2.54)
=AxDy+BxCy\displaystyle=Ax\otimes Dy+Bx\otimes Cy
=λBxDy+BxμDy\displaystyle=\lambda Bx\otimes Dy+Bx\otimes\mu Dy
=(λ+μ)(BxDy)\displaystyle=(\lambda+\mu)(Bx\otimes Dy)
=(λ+μ)(BD)(xy)\displaystyle=(\lambda+\mu)(B\otimes D)(x\otimes y)
=(λ+μ)(BD)z,\displaystyle=(\lambda+\mu)(B\otimes D)z,

which completes the proof. ∎

Remark 2.9.

Once the two sets of eigenpairs are found, either numerically or analytically, the eigenpairs for the GEVP of the form (2.52) can be derived. A FEM (FDM, SEM, or IGA) discretization of $-\Delta u=\lambda u$ on the unit square with a uniform tensor-product mesh leads to a GEVP of the form (2.52). For three- or higher-dimensional problems, this result generalizes in a similar fashion.
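Theorem 2.8 can be verified directly on random symmetric pencils; a minimal sketch (the sizes, seed, and random pencils are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def pencil(k):
    """A random symmetric matrix and a random SPD matrix of size k."""
    A = rng.standard_normal((k, k)); A = A + A.T
    B = rng.standard_normal((k, k)); B = B @ B.T + k * np.eye(k)
    return A, B

n, m = 4, 3
A, B = pencil(n)
C, D = pencil(m)
lam, X = eigh(A, B)        # Ax = lam Bx
mu, Y = eigh(C, D)         # Cy = mu Dy

Lhs = np.kron(A, D) + np.kron(B, C)
Rhs = np.kron(B, D)
# check (A x D + B x C) z = (lam + mu)(B x D) z for z = x kron y
ok = all(np.allclose(Lhs @ np.kron(X[:, j], Y[:, k]),
                     (lam[j] + mu[k]) * (Rhs @ np.kron(X[:, j], Y[:, k])))
         for j in range(n) for k in range(m))
print(ok)
```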

3 Trigonometric identities

In this section, we derive some trigonometric identities based on the eigenvector-eigenvalue identity that was rediscovered and coined recently in [13]. The eigenvector-eigenvalue identity for the EVP (2.2) is (see [13, Theorem 1])

|xj,k|2l=1,ljn(λjλl)=l=1n1(λjμl(k)),j,k=1,,n,|x_{j,k}|^{2}\prod_{l=1,l\neq j}^{n}(\lambda_{j}-\lambda_{l})=\prod_{l=1}^{n-1}(\lambda_{j}-\mu_{l}^{(k)}),\quad j,k=1,\cdots,n, (3.1)

where $A$ is a Hermitian matrix of dimension $n\times n$, $(\lambda_{j},x_{j})$, $j=1,\cdots,n$, are the eigenpairs of $Ax=\lambda x$ with normalized eigenvectors $x_{j}=(x_{j,1},\cdots,x_{j,n})^{T}$, and $\mu_{l}^{(k)}$ is an eigenvalue of $A^{(k)}y=\mu^{(k)}y$ with $A^{(k)}$ being the minor of $A$ formed by removing the $k^{\text{th}}$ row and column. We generalize this identity to GEVPs as follows.

Theorem 3.1 (Eigenvector-eigenvalue identity for the GEVP).

Let $A$ and $B$ be Hermitian matrices of dimension $n\times n$. Assume that $B$ is invertible. Let $(\lambda_{j},x_{j})$, $j=1,\cdots,n$, be the eigenpairs of the GEVP (2.3) with normalized eigenvectors $x_{j}=(x_{j,1},\cdots,x_{j,n})^{T}$. Then, there holds

|x_{j,k}|^{2}\prod_{l=1,l\neq j}^{n}(\lambda_{j}-\lambda_{l})=\big(x_{j}^{*}Bx_{j}\big)\cdot\frac{\prod_{l=1}^{n-1}\eta_{l}^{(k)}}{\prod_{l=1}^{n}\eta_{l}}\cdot\prod_{l=1}^{n-1}(\lambda_{j}-\mu_{l}^{(k)}),\quad j,k=1,\cdots,n, (3.2)

where $\mu_{l}^{(k)}$ is an eigenvalue of $A^{(k)}y=\mu^{(k)}B^{(k)}y$ with $A^{(k)}$ and $B^{(k)}$ being the minors of $A$ and $B$ formed by removing the $k^{\text{th}}$ row and column, respectively, $\eta_{l}$ is an eigenvalue of $By=\eta y$ (so that $\prod_{l=1}^{n}\eta_{l}=\det(B)$), and $\eta_{l}^{(k)}$ is an eigenvalue of $B^{(k)}y=\eta^{(k)}y$. When $A$ and $B$ share a complete set of common eigenvectors (in particular, when $B=I$ or for the Toeplitz pencils considered below), the Rayleigh quotient $x_{j}^{*}Bx_{j}$ equals the eigenvalue $\eta_{j}$ of $B$ associated with $x_{j}$, and the prefactor simplifies to $\prod_{l=1}^{n-1}\eta_{l}^{(k)}\big/\prod_{l=1,l\neq j}^{n}\eta_{l}$; for $B=I$, the identity (3.2) recovers (3.1).

Proof.

We follow the proof of (3.1) for $Ax=\lambda x$ that uses perturbative analysis in [13, Sect. 2.4] (which first appeared in [22]). Firstly, since $B$ is invertible, $\det(B)\neq 0$ and hence $\eta_{l}\neq 0$, $l=1,\cdots,n$. Let $Q(\lambda)$ be the characteristic polynomial of the GEVP (2.3). Then,

Q(\lambda)=\det(\lambda B-A)=\det(B)\det(\lambda I-B^{-1}A)=\det(B)\prod_{l=1}^{n}(\lambda-\lambda_{l}). (3.3)

The derivative $Q^{\prime}(\lambda_{j})$ of $Q(\lambda)$ at $\lambda=\lambda_{j}$ is

Q^{\prime}(\lambda_{j})=\det(B)\prod_{l=1,l\neq j}^{n}(\lambda_{j}-\lambda_{l}). (3.4)

Similarly, let $P^{(k)}(\mu^{(k)})$ be the characteristic polynomial of the GEVP $A^{(k)}y=\mu^{(k)}B^{(k)}y$. Then,

P^{(k)}(\mu^{(k)})=\det(B^{(k)})\prod_{l=1}^{n-1}(\mu^{(k)}-\mu^{(k)}_{l}). (3.5)

Now, by a limiting argument, we may assume that $A$ has simple eigenvalues. Let $\epsilon$ be a small parameter and define the perturbed matrix

A^{\epsilon,k}=A+\epsilon e_{k}e_{k}^{T},\qquad k=1,\cdots,n, (3.6)

where $\{e_{k}\}_{k=1}^{n}$ is the standard basis. The perturbed GEVP is defined as

A^{\epsilon,k}x^{\epsilon}=\lambda^{\epsilon,k}Bx^{\epsilon}. (3.7)

Using (3.3) and cofactor expansion, the characteristic polynomial of this perturbed GEVP can be expanded as

Q^{\epsilon}(\lambda)=\det(\lambda B-A^{\epsilon,k})=Q(\lambda)-\epsilon P^{(k)}(\lambda)+\mathcal{O}(\epsilon^{2}). (3.8)

With $x_{j}$ being a normalized eigenvector, one has

x_{j}^{T}\cdot x_{j}=1,\qquad \rho_{j}:=x_{j}^{T}Bx_{j},\qquad j=1,\cdots,n. (3.9)

Using this normalization, from perturbation theory [19], the eigenvalue $\lambda^{\epsilon,k}$ of (3.7) associated with $\lambda_{j}$ can be expanded as

\lambda^{\epsilon,k}=\lambda_{j}+\frac{\epsilon}{\rho_{j}}|x_{j,k}|^{2}+\mathcal{O}(\epsilon^{2}). (3.10)

Applying the Taylor expansion and $Q(\lambda_{j})=0$, we rewrite

0=Q^{\epsilon}(\lambda^{\epsilon,k})=Q(\lambda^{\epsilon,k})-\epsilon P^{(k)}(\lambda^{\epsilon,k})+\mathcal{O}(\epsilon^{2})
=Q(\lambda_{j})+\frac{\epsilon}{\rho_{j}}|x_{j,k}|^{2}Q^{\prime}(\lambda_{j})-\epsilon P^{(k)}(\lambda_{j})+\mathcal{O}(\epsilon^{2})
=\frac{\epsilon}{\rho_{j}}|x_{j,k}|^{2}Q^{\prime}(\lambda_{j})-\epsilon P^{(k)}(\lambda_{j})+\mathcal{O}(\epsilon^{2}), (3.11)

whose linear term in $\epsilon$ leads to

|x_{j,k}|^{2}Q^{\prime}(\lambda_{j})=\rho_{j}P^{(k)}(\lambda_{j}). (3.12)

Since $\rho_{j}=x_{j}^{*}Bx_{j}$, applying (3.4) and (3.5) with $\det(B)=\prod_{l=1}^{n}\eta_{l}$ and $\det(B^{(k)})=\prod_{l=1}^{n-1}\eta_{l}^{(k)}$ to (3.12) completes the proof. ∎
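The determinant form (3.12), combined with (3.4) and (3.5), can be verified on a random Hermitian pencil; a minimal sketch (real symmetric matrices and the seed are illustrative):

```python
import numpy as np
from scipy.linalg import det, eigh, eigvalsh

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)); A = A + A.T
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)   # invertible

lam, X = eigh(A, B)                   # pencil eigenpairs of Ax = lam Bx
X = X / np.linalg.norm(X, axis=0)     # unit-normalize: x_j^T x_j = 1

ok = True
for j in range(n):
    for k in range(n):
        idx = [i for i in range(n) if i != k]
        Ak, Bk = A[np.ix_(idx, idx)], B[np.ix_(idx, idx)]
        mu = eigvalsh(Ak, Bk)         # eigenvalues of the minor pencil
        lhs = X[k, j]**2 * np.prod(lam[j] - np.delete(lam, j))
        rhs = (X[:, j] @ B @ X[:, j]) * det(Bk) / det(B) * np.prod(lam[j] - mu)
        ok = ok and np.isclose(lhs, rhs)
print(ok)
```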

The identity (3.2) can be rewritten in terms of the characteristic polynomials as (3.12). A similar identity in terms of determinants, eigenvalues, and rescaled eigenvectors was presented in [18, eqn. 18] for real-valued matrices. The identity (3.2) is in terms of only eigenvalues and eigenvectors.

Based on these two identities (3.1) and (3.2), one can easily derive the following trigonometric identities. For EVP, using (3.1) and applying Theorem 2.1 with m=1m=1 and BB being an identity matrix, we have

2n+1sin2kπn+1=j=1n1(coskπn+1cosjπn)j=1,jkn(coskπn+1cosjπn+1),n2,k=1,,n.\frac{2}{n+1}\sin^{2}\frac{k\pi}{n+1}=\dfrac{\prod_{j=1}^{n-1}\Big{(}\cos\frac{k\pi}{n+1}-\cos\frac{j\pi}{n}\Big{)}}{\prod_{j=1,j\neq k}^{n}\Big{(}\cos\frac{k\pi}{n+1}-\cos\frac{j\pi}{n+1}\Big{)}},\ n\geq 2,\ k=1,\cdots,n. (3.13)

We note that this identity is independent of the matrix entries $\alpha_{0}$ and $\alpha_{1}$. The left-hand side can be written in terms of a cosine function as $\frac{1}{n+1}(1-\cos\frac{2k\pi}{n+1})$ to obtain an identity in terms of only cosine functions. For example, for $n=2$ and $k=1$, the identity boils down to

12=23sin2π3=cos(π/3)cos(π/2)cos(π/3)cos(2π/3)=1/201/2+1/2=12.\frac{1}{2}=\frac{2}{3}\sin^{2}\frac{\pi}{3}=\frac{\cos(\pi/3)-\cos(\pi/2)}{\cos(\pi/3)-\cos(2\pi/3)}=\frac{1/2-0}{1/2+1/2}=\frac{1}{2}. (3.14)

Similarly, we have for n2,k=1,,n,l=2,,n1n\geq 2,\ k=1,\cdots,n,l=2,\cdots,n-1

2n+1sin2klπn+1=j=1l1(coskπn+1cosjπl)j=ln1(coskπn+1cos(jl+1)πnl+1)j=1,jkn(coskπn+1cosjπn+1).\frac{2}{n+1}\sin^{2}\frac{kl\pi}{n+1}=\dfrac{\prod_{j=1}^{l-1}\Big{(}\cos\frac{k\pi}{n+1}-\cos\frac{j\pi}{l}\Big{)}\prod_{j=l}^{n-1}\Big{(}\cos\frac{k\pi}{n+1}-\cos\frac{(j-l+1)\pi}{n-l+1}\Big{)}}{\prod_{j=1,j\neq k}^{n}\Big{(}\cos\frac{k\pi}{n+1}-\cos\frac{j\pi}{n+1}\Big{)}}. (3.15)

If we introduce the notation that j=10()=1\prod_{j=1}^{0}(\cdot)=1, then (3.13) can be written as (3.15) with l=1l=1 or l=nl=n.
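Identities (3.13) and (3.15) are straightforward to confirm numerically with the convention $\prod_{j=1}^{0}(\cdot)=1$; a minimal sketch:

```python
import numpy as np

def lhs(n, k, l):
    return 2 / (n + 1) * np.sin(k * l * np.pi / (n + 1))**2

def rhs(n, k, l):
    c = np.cos(k * np.pi / (n + 1))
    # minor spectra: blocks of sizes l-1 and n-l (empty products equal 1)
    num = np.prod([c - np.cos(j * np.pi / l) for j in range(1, l)])
    num *= np.prod([c - np.cos((j - l + 1) * np.pi / (n - l + 1)) for j in range(l, n)])
    den = np.prod([c - np.cos(j * np.pi / (n + 1)) for j in range(1, n + 1) if j != k])
    return num / den

n = 7
ok = all(np.isclose(lhs(n, k, l), rhs(n, k, l))
         for k in range(1, n + 1) for l in range(1, n + 1))
print(ok)
```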

For the GEVP, using (3.2) and applying Theorem 2.1 with m=1m=1, we have for n2,l,k=1,,n,n\geq 2,\ l,k=1,\cdots,n,

\frac{2}{n+1}\sin^{2}\frac{kl\pi}{n+1}=\dfrac{\prod_{j=1}^{l-1}\Big{(}\beta_{0}+2\beta_{1}\cos\frac{j\pi}{l}\Big{)}\prod_{j=l}^{n-1}\Big{(}\beta_{0}+2\beta_{1}\cos\frac{(j-l+1)\pi}{n-l+1}\Big{)}}{\prod_{j=1,j\neq k}^{n}\Big{(}\beta_{0}+2\beta_{1}\cos\frac{j\pi}{n+1}\Big{)}}\cdot\frac{\Pi_{1}\Pi_{2}}{\Pi_{3}}, (3.16)

where

Π1\displaystyle\Pi_{1} =j=1l1(α0+2α1coskπn+1β0+2β1coskπn+1α0+2α1cosjπlβ0+2β1cosjπl),\displaystyle=\prod_{j=1}^{l-1}\Big{(}\frac{\alpha_{0}+2\alpha_{1}\cos\frac{k\pi}{n+1}}{\beta_{0}+2\beta_{1}\cos\frac{k\pi}{n+1}}-\frac{\alpha_{0}+2\alpha_{1}\cos\frac{j\pi}{l}}{\beta_{0}+2\beta_{1}\cos\frac{j\pi}{l}}\Big{)}, (3.17)
Π2\displaystyle\Pi_{2} =j=ln1(α0+2α1coskπn+1β0+2β1coskπn+1α0+2α1cos(jl+1)πnl+1β0+2β1cos(jl+1)πnl+1),\displaystyle=\prod_{j=l}^{n-1}\Big{(}\frac{\alpha_{0}+2\alpha_{1}\cos\frac{k\pi}{n+1}}{\beta_{0}+2\beta_{1}\cos\frac{k\pi}{n+1}}-\frac{\alpha_{0}+2\alpha_{1}\cos\frac{(j-l+1)\pi}{n-l+1}}{\beta_{0}+2\beta_{1}\cos\frac{(j-l+1)\pi}{n-l+1}}\Big{)},
\displaystyle\Pi_{3}=\prod_{j=1,j\neq k}^{n}\Big{(}\frac{\alpha_{0}+2\alpha_{1}\cos\frac{k\pi}{n+1}}{\beta_{0}+2\beta_{1}\cos\frac{k\pi}{n+1}}-\frac{\alpha_{0}+2\alpha_{1}\cos\frac{j\pi}{n+1}}{\beta_{0}+2\beta_{1}\cos\frac{j\pi}{n+1}}\Big{)}.

It is obvious that (3.16) reduces to (3.15) when $B$ is the identity matrix (or a nonzero constant multiple of it).

Remark 3.2.

Other similar trigonometric identities can be established. Moreover, Theorems 2.1–2.6 give various analytical eigenpairs. An application of the eigenvector-eigenvalue identity (3.1) along with these analytical results sets up a system of equations governing the eigenvalues of the minors of the original matrices. Thus, the eigenvalues of these minors can be found by solving the system.

4 Concluding remarks

We first remark that the ideas for finding analytical solutions to the GEVPs with Toeplitz-plus-Hankel and corner-overlapped block-diagonal matrices can be applied to other problems where a particular solution form is sought. Other applications include matrix representations of differential operators such as the Schrödinger operator in quantum mechanics [11] and the $2n$-order operators [12]. Moreover, the idea used to solve the QEVP can be applied to other nonlinear EVPs.

The boundary modifications in the Toeplitz-plus-Hankel matrices give new insights for designing better numerical methods. For example, high-order IGA (cf. [17]) produces outliers in the high-frequency region of the spectrum. A method which modifies the boundary terms to arrive at Toeplitz-plus-Hankel matrices is outlier-free [9]. For FDM, the structure of the Toeplitz-plus-Hankel matrices gives insight into the design of better higher-order approximations near the domain boundaries. Lastly, we remark that the corner-overlapped block-diagonal matrices have applications in FEMs and discontinuous Galerkin methods.

Acknowledgments

The author thanks Professor Gilbert Strang for several discussions on the Toeplitz-plus-Hankel matrices and on the potential applications to the design of better numerical methods.

References

  • [1] M. Anđelić and C. M. da Fonseca, Some determinantal considerations for pentadiagonal matrices, Linear and Multilinear Algebra, (2020), pp. 1–9.
  • [2] M. Barrera and S. Grudsky, Asymptotics of eigenvalues for pentadiagonal symmetric Toeplitz matrices, in Large Truncated Toeplitz Matrices, Toeplitz Operators, and Related Topics, Springer, 2017, pp. 51–77.
  • [3] V. Calo, Q. Deng, and V. Puzyrev, Dispersion optimized quadratures for isogeometric analysis, Journal of Computational and Applied Mathematics, 355 (2019), pp. 283–300.
  • [4] H.-W. Chang, S.-E. Liu, and R. Burridge, Exact eigensystems for some matrices arising from discretizations, Linear Algebra and its Applications, 430 (2009), pp. 999–1006.
  • [5] P. G. Ciarlet, The finite element method for elliptic problems, SIAM, 2002.
  • [6] C. Da Fonseca, On the eigenvalues of some tridiagonal matrices, Journal of Computational and Applied Mathematics, 200 (2007), pp. 283–286.
  • [7] C. M. Da Fonseca and V. Kowalenko, Eigenpairs of a family of tridiagonal matrices: three decades later, Acta Mathematica Hungarica, (2019), pp. 1–14.
  • [8] Q. Deng and V. Calo, Dispersion-minimized mass for isogeometric analysis, Computer Methods in Applied Mechanics and Engineering, 341 (2018), pp. 71–92.
  • [9]  , A boundary penalization technique to remove outliers from isogeometric analysis on tensor-product meshes, arXiv preprint arXiv:2010.08159, (2020).
  • [10]  , Outlier removal for isogeometric spectral approximation with the optimally-blended quadratures, arXiv preprint arXiv:2102.07543, (2021).
  • [11] Q. Deng, V. Puzyrev, and V. Calo, Isogeometric spectral approximation for elliptic differential operators, Journal of Computational Science, 36 (2019), p. 100879.
  • [12]  , Optimal spectral approximation of 2n-order differential operators by mixed isogeometric analysis, Computer Methods in Applied Mechanics and Engineering, 343 (2019), pp. 297–313.
  • [13] P. Denton, S. Parke, T. Tao, and X. Zhang, Eigenvectors from eigenvalues: a survey of a basic identity in linear algebra, Bulletin of the American Mathematical Society, (2021).
  • [14] D. Fasino, Spectral and structural properties of some pentadiagonal symmetric matrices, Calcolo, 25 (1988), pp. 301–310.
  • [15] T. J. Hughes, The finite element method: linear static and dynamic finite element analysis, Courier Corporation, 2012.
  • [16] T. J. R. Hughes, J. A. Evans, and A. Reali, Finite element and NURBS approximations of eigenvalue, boundary-value, and initial-value problems, Computer Methods in Applied Mechanics and Engineering, 272 (2014), pp. 290–320.
  • [17] T. J. R. Hughes, A. Reali, and G. Sangalli, Duality and unified analysis of discrete approximations in structural dynamics and wave propagation: comparison of p-method finite elements with k-method NURBS, Computer Methods in Applied Mechanics and Engineering, 197 (2008), pp. 4104–4124.
  • [18] E. Kausel, Normalized modes at selected points without normalization, Journal of Sound and Vibration, 420 (2018), pp. 261–268.
  • [19] M. Konstantinov, D. W. Gu, V. Mehrmann, and P. Petkov, Perturbation theory for matrix equations, Gulf Professional Publishing, 2003.
  • [20] L. Losonczi, Eigenvalues and eigenvectors of some tridiagonal matrices, Acta Mathematica Hungarica, 60 (1992), pp. 309–322.
  • [21] C. D. Meyer, Matrix analysis and applied linear algebra, vol. 71, SIAM, 2000.
  • [22] A. K. Mukherjee and K. K. Datta, Two new graph-theoretical methods for generation of eigenvectors of chemical graphs, in Proceedings of the Indian Academy of Sciences-Chemical Sciences, vol. 101, Springer, 1989, pp. 499–517.
  • [23] A. T. Patera, A spectral element method for fluid dynamics: laminar flow in a channel expansion, Journal of Computational Physics, 54 (1984), pp. 468–488.
  • [24] G. D. Smith, Numerical solution of partial differential equations: finite difference methods, Oxford University Press, 1985.
  • [25] M. S. Solary, Finding eigenvalues for heptadiagonal symmetric Toeplitz matrices, Journal of Mathematical Analysis and Applications, 402 (2013), pp. 719–730.
  • [26] G. Strang, Linear algebra and its applications, Cole Thomson Learning Inc, (1988).
  • [27] G. Strang and G. J. Fix, An analysis of the finite element method, vol. 212, Prentice-hall Englewood Cliffs, NJ, 1973.
  • [28] G. Strang and S. MacNamara, Functions of difference matrices are Toeplitz plus Hankel, SIAM Review, 56 (2014), pp. 525–546.