
Ambarzumian-type problems for discrete Schrödinger operators

Jerik Eakins Department of Mathematics, Texas A&M University, College Station, TX 77843, U.S.A. [email protected] William Frendreiss Department of Mathematics, Texas A&M University, College Station, TX 77843, U.S.A. [email protected] Burak Hatinoğlu Department of Mathematics, UC Santa Cruz, Santa Cruz, CA 95064, U.S.A. [email protected] Lucille Lamb Department of Mathematics, Texas A&M University, College Station, TX 77843, U.S.A. [email protected] Sithija Manage Department of Mathematics, Texas A&M University, College Station, TX 77843, U.S.A. [email protected] and Alejandra Puente Department of Mathematics, Texas A&M University, College Station, TX 77843, U.S.A. [email protected]
Abstract.

We discuss the problem of unique determination of the finite free discrete Schrödinger operator from its spectrum, also known as the Ambarzumian problem, with various boundary conditions, namely any real constant boundary condition at zero and Floquet boundary conditions of any angle. Then we prove the following Ambarzumian-type mixed inverse spectral result: knowledge of all diagonal entries except the first two, together with two consecutive eigenvalues, uniquely determines the finite free discrete Schrödinger operator.

Key words and phrases:
inverse spectral theory, discrete Schrödinger operators, Jacobi matrices, Ambarzumian-type problems

1. Introduction

A Jacobi matrix is a tridiagonal matrix of the form

\begin{pmatrix}b_{1}&a_{1}&0&\cdots&0\\ a_{1}&b_{2}&a_{2}&\ddots&\vdots\\ 0&a_{2}&b_{3}&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&a_{n-1}\\ 0&\cdots&0&a_{n-1}&b_{n}\end{pmatrix}

where $n\in\mathbb{N}$, $a_{k}>0$ for any $k\in\{1,2,\dots,n-1\}$ and $b_{k}\in\mathbb{R}$ for any $k\in\{1,2,\dots,n\}$. When $a_{k}=1$ for each $k\in\{1,2,\dots,n-1\}$, this matrix defines the finite discrete Schrödinger operator.
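
For readers who wish to experiment, the following minimal sketch (in Python with NumPy; the function names are ours and only for illustration, not part of the paper) builds a Jacobi matrix and its discrete Schrödinger special case.

```python
import numpy as np

def jacobi_matrix(a, b):
    """Build the n x n Jacobi matrix from off-diagonal entries a (length n-1) and diagonal b (length n)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.diag(b) + np.diag(a, k=1) + np.diag(a, k=-1)

def schrodinger_matrix(b):
    """Discrete Schrödinger matrix: a Jacobi matrix with all off-diagonal entries equal to 1."""
    return jacobi_matrix(np.ones(len(b) - 1), b)

# Example: the 5 x 5 free discrete Schrödinger matrix (diagonal identically zero).
F5 = schrodinger_matrix(np.zeros(5))
print(np.sort(np.linalg.eigvalsh(F5)))  # its eigenvalues lie strictly between -2 and 2
```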

Direct spectral problems aim to get spectral information from the sequences $\{a_{k}\}_{k=1}^{n-1}$ and $\{b_{k}\}_{k=1}^{n}$. In inverse spectral problems one tries to recover these sequences from spectral information such as the spectrum, the spectral measure or the Weyl $m$-function.

Early inverse spectral problems for finite Jacobi matrices appear as discrete analogs of inverse spectral problems for the Schrödinger (Sturm-Liouville) equations

-u^{\prime\prime}(t)+q(t)u(t)=zu(t),

on the interval $[0,\pi]$ with the boundary conditions

u(0)\cos\alpha-u^{\prime}(0)\sin\alpha=0
u(\pi)\cos\beta+u^{\prime}(\pi)\sin\beta=0,

where the potential function $q\in L^{1}(0,\pi)$ is real-valued and $\alpha,\beta\in[0,\pi)$.

The first inverse spectral result on Schrödinger operators is due to Ambarzumian [AMB]. He considered a continuous potential with Neumann boundary conditions at both endpoints ($\alpha=\beta=\pi/2$) and showed that $q\equiv 0$ if the spectrum consists of squares of integers. Later Borg [BOR] realized that knowledge of one spectrum is sufficient for unique recovery only for the zero potential. He proved that an $L^{1}$-potential is uniquely recovered from two spectra which share the same boundary condition at $\pi$ ($\beta_{1}=\beta_{2}$), one of which corresponds to the Dirichlet boundary condition at $0$ ($\alpha_{1}=0$, $\alpha_{2}\in(0,\pi)$). A few years later, Levinson [LEVI] removed the Dirichlet boundary condition restriction from Borg's result. This famous theorem is also known as the two-spectra theorem. Then Marchenko [MAR] observed that the spectral measure (or Weyl-Titchmarsh $m$-function) uniquely recovers an $L^{1}$-potential. Another classical result is due to Hochstadt and Lieberman [HL], which says that if the first half of an $L^{1}$-potential is known, then one spectrum recovers the whole potential. The statements of these classical theorems and further results from the inverse spectral theory of Schrödinger operators can be found e.g. in [HAT] and references therein.

Finite Jacobi matrix analogs of Borg's and Hochstadt and Lieberman's theorems were considered by Hochstadt [HOC, HOC2, HOC3], where the potential $q$ is replaced by the sequences $\{a_{k}\}_{k=1}^{n-1}$ and $\{b_{k}\}_{k=1}^{n}$. These classical theorems led to various other inverse spectral results on finite Jacobi matrices (see [BD, DK, GES, GS, S, WW] and references therein) and in other settings such as semi-infinite, infinite, generalized Jacobi matrices and matrix-valued Jacobi operators (see e.g. [CGR, D, D2, DKS2, DKS3, DKS4, DS, GKM, GKT, HAT2, SW, SW2, TES2] and references therein). In general, these problems can be divided into two groups. In Borg-Marchenko-type spectral problems, one tries to recover the sequences $\{a_{k}\}_{k=1}^{n-1}$ and $\{b_{k}\}_{k=1}^{n}$ from the spectral data. On the other hand, Hochstadt-Lieberman-type (or mixed) spectral problems recover the sequences $\{a_{k}\}_{k=1}^{n-1}$ and $\{b_{k}\}_{k=1}^{n}$ from a mixture of partial information on these sequences and the spectral data.

Ambarzumian-type problems focus on inverse spectral problems for free discrete Schrödinger operators, i.e. $a_{k}=1$ and $b_{k}=0$ for every $k$, or on similar cases where $b_{k}=0$ for some $k$. In this paper, we first revisit the classical Ambarzumian problem for the finite discrete Schrödinger operator in Theorem 3.3, which says that the spectrum of the free operator uniquely determines the operator. Then we provide a counterexample, Example 3.6, which shows that knowledge of the spectrum of the free operator with a non-zero boundary condition is not sufficient for unique recovery. In Theorem 3.7, we observe that a non-zero boundary condition along with the corresponding spectrum of the free operator is needed for the uniqueness result. However, in Theorem 3.8 we prove that for the free operator with Floquet boundary conditions, the set of eigenvalues including multiplicities is sufficient to get uniqueness up to transpose.

We also answer the following mixed Ambarzumian-type inverse problem positively in Theorem 4.2.

Inverse Spectral Problem.

Let us define the discrete Schrödinger matrix $\textnormal{S}_{n}$ by $a_{k}=1$ for $k\in\{1,\dots,n-1\}$, $b_{1},b_{2}\in\mathbb{R}$ and $b_{k}=0$ for $k\in\{3,\dots,n\}$. Let us also denote by $\textnormal{F}_{n}$ the free discrete Schrödinger matrix, defined by $a_{k}=1$ for $k\in\{1,\dots,n-1\}$ and $b_{k}=0$ for $k\in\{1,\dots,n\}$. If $\textnormal{S}_{n}$ and $\textnormal{F}_{n}$ share two consecutive eigenvalues, does it follow that $b_{1}=b_{2}=0$, i.e. $\textnormal{S}_{n}=\textnormal{F}_{n}$?

The paper is organized as follows. In Section 2 we recall the necessary definitions and results used in our proofs. In Section 3 we consider the problem of unique determination of the finite free discrete Schrödinger operator from its spectrum with various boundary conditions, namely any real constant boundary condition at zero and Floquet boundary conditions of any angle. In Section 4 we prove the above-mentioned Ambarzumian-type mixed inverse spectral result.

2. Preliminaries

Let us start by fixing our notation. Let $\textbf{J}_{n}$ represent the finite Jacobi matrix of size $n\times n$

(2.1) \textbf{J}_{n}:=\begin{pmatrix}b_{1}&a_{1}&0&\cdots&0\\ a_{1}&b_{2}&a_{2}&\ddots&\vdots\\ 0&a_{2}&b_{3}&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&a_{n-1}\\ 0&\cdots&0&a_{n-1}&b_{n}\end{pmatrix},

where $a_{k}>0$ and $b_{k}\in\mathbb{R}$. Given $\textbf{J}_{n}$, let us consider the Jacobi matrix where all $a_{k}$'s and $b_{k}$'s are the same as in $\textbf{J}_{n}$ except that $b_{1}$ and $b_{n}$ are replaced by $b_{1}+b$ and $b_{n}+B$ respectively, for $b,B\in\mathbb{R}$, i.e.

(2.2) \textbf{J}_{n}+b(\delta_{1},\cdot)\delta_{1}+B(\delta_{n},\cdot)\delta_{n}.

The Jacobi matrix (2.2) is given by the Jacobi difference expression

a_{k-1}f_{k-1}+b_{k}f_{k}+a_{k}f_{k+1},\qquad k\in\{1,\dots,n\},

with the boundary conditions

f_{0}=bf_{1}\qquad\text{and}\qquad f_{n+1}=Bf_{n}.

Let us note that we assume $a_{0}=1$ and $a_{n}=1$.
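
The correspondence between the matrix (2.2) and the difference expression with these boundary conditions is easy to confirm numerically. The sketch below (Python/NumPy, our own illustration with arbitrary test data) extends an eigenvector by $f_{0}=bf_{1}$ and $f_{n+1}=Bf_{n}$ and checks the three-term relation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
a = rng.uniform(0.5, 2.0, n - 1)       # off-diagonal entries a_k > 0
bdiag = rng.normal(size=n)             # diagonal entries b_k
b, B = 0.7, -1.3                       # boundary conditions at 0 and n+1

J = np.diag(bdiag) + np.diag(a, 1) + np.diag(a, -1)
J[0, 0] += b
J[n - 1, n - 1] += B                   # the matrix (2.2)

lam, V = np.linalg.eigh(J)
f = V[:, 2]                            # any eigenvector of (2.2)

# Extend by the boundary conditions f_0 = b f_1 and f_{n+1} = B f_n (with a_0 = a_n = 1).
ext = np.concatenate(([b * f[0]], f, [B * f[-1]]))
aa = np.concatenate(([1.0], a, [1.0]))

# Check the difference expression a_{k-1} f_{k-1} + b_k f_k + a_k f_{k+1} = lambda f_k for all k.
lhs = aa[:-1] * ext[:-2] + bdiag * ext[1:-1] + aa[1:] * ext[2:]
print(np.allclose(lhs, lam[2] * f))    # True
```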

In order to get a unique Jacobi difference expression with boundary conditions for a given Jacobi matrix, we can view the first and the last diagonal entries of the matrix $\textbf{J}_{n}$, defined in (2.1), as the boundary conditions at $0$ and $n+1$ respectively. Therefore let $\textbf{J}_{n}(b,B)$ denote the Jacobi matrix $\textbf{J}_{n}$ satisfying $b_{1}=b$ and $b_{n}=B$.

If we consider the Jacobi difference expression with the Floquet boundary conditions

f_{0}=f_{n}e^{2\pi i\theta}\qquad\text{and}\qquad f_{n+1}=f_{1}e^{-2\pi i\theta},\qquad\theta\in[0,1),

then we get the following matrix representation, which we denote by $\textbf{J}_{n}(\theta)$:

(2.3) \textbf{J}_{n}(\theta):=\begin{pmatrix}b_{1}&a_{1}&0&\cdots&e^{2\pi i\theta}\\ a_{1}&b_{2}&a_{2}&\ddots&0\\ 0&a_{2}&b_{3}&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&a_{n-1}\\ e^{-2\pi i\theta}&0&\cdots&a_{n-1}&b_{n}\end{pmatrix}.

Let us denote the discrete Schrödinger matrix accordingly, i.e. $\textbf{S}_{n}(b,B)$ denotes the matrix $\textbf{J}_{n}$ such that $b_{1}=b$, $b_{n}=B$ and $a_{k}=1$ for each $k\in\{1,2,\dots,n-1\}$. Similarly $\textbf{S}_{n}(\theta)$ denotes the matrix $\textbf{J}_{n}(\theta)$ such that $a_{k}=1$ for each $k\in\{1,2,\dots,n-1\}$. Let us denote the free discrete Schrödinger matrix of size $n\times n$ by $\textbf{F}_{n}$:

\textbf{F}_{n}=\begin{pmatrix}0&1&0&\cdots&0\\ 1&0&1&\ddots&\vdots\\ 0&1&0&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&1\\ 0&\cdots&0&1&0\end{pmatrix},

so $\textbf{F}_{n}(b,B)$ and $\textbf{F}_{n}(\theta)$ denote the free discrete Schrödinger matrices with boundary conditions $b$ at $0$ and $B$ at $n+1$, and with Floquet boundary conditions of angle $\theta$, respectively.

Let us state some basic properties of the free discrete Schrödinger matrix. If $\lambda_{1},\lambda_{2},\dots,\lambda_{n}$ denote the eigenvalues of $\textbf{F}_{n}$, they have the following properties:

  • For all $k$, $\lambda_{k}\in[-2,2]$.

  • The free discrete Schrödinger matrix $\textbf{F}_{n}$ has $n$ distinct eigenvalues, so we can reorder the eigenvalues such that $\lambda_{1}<\lambda_{2}<\cdots<\lambda_{n}$.

  • Let $\textbf{F}_{n-1}$ be the $(n-1)\times(n-1)$ submatrix of $\textbf{F}_{n}$ obtained by removing the last row and the last column of $\textbf{F}_{n}$. If $\mu_{1},\mu_{2},\dots,\mu_{n-1}$ denote the eigenvalues of $\textbf{F}_{n-1}$ taken in increasing order, then we have the interlacing property of eigenvalues, i.e.

    \lambda_{1}<\mu_{1}<\lambda_{2}<\mu_{2}<\cdots<\lambda_{n-1}<\mu_{n-1}<\lambda_{n}.

The second and third properties are valid for any Jacobi matrix $\textbf{J}_{n}$. These basic properties can be found in [TES], which provides an extensive study of Jacobi operators.
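
These properties are easy to confirm numerically. The following sketch (Python/NumPy, added here purely for illustration and not part of the paper) checks them for $n=8$, together with the well-known closed form $2\cos(j\pi/(n+1))$, $j=1,\dots,n$, for the eigenvalues of $\textbf{F}_{n}$.

```python
import numpy as np

n = 8
F = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)   # F_n
F_minor = F[:-1, :-1]                                           # F_{n-1}

lam = np.sort(np.linalg.eigvalsh(F))        # eigenvalues of F_n
mu = np.sort(np.linalg.eigvalsh(F_minor))   # eigenvalues of F_{n-1}

assert np.all(np.abs(lam) <= 2)                    # spectrum lies in [-2, 2]
assert np.all(np.diff(lam) > 1e-12)                # eigenvalues are distinct
assert np.all((lam[:-1] < mu) & (mu < lam[1:]))    # strict interlacing

# Cross-check with the explicit formula lambda_j = 2*cos(j*pi/(n+1)).
explicit = np.sort(2 * np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))
assert np.allclose(lam, explicit)
print("all checks passed for n =", n)
```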

The following results show smoothness of simple eigenvalues and corresponding eigenvectors of a smooth matrix-valued function. We will use them in Section 4 in order to prove the mixed inverse spectral result mentioned in the Introduction.

Theorem 2.1.

([LAX], Theorem 9.7) Let $A(t)$ be a differentiable square matrix-valued function of the real variable $t$. Suppose that $A(0)$ has an eigenvalue $a_{0}$ of multiplicity one, in the sense that $a_{0}$ is a simple root of the characteristic polynomial of $A(0)$. Then for $t$ small enough, $A(t)$ has an eigenvalue $a(t)$ that depends differentiably on $t$ and equals $a_{0}$ at zero, that is, $a(0)=a_{0}$.

Theorem 2.2.

([LAX], Theorem 9.8) Let $A(t)$ be a differentiable matrix-valued function of $t$ and $a(t)$ an eigenvalue of $A(t)$ of multiplicity one. Then we can choose an eigenvector $X(t)$ of $A(t)$ pertaining to the eigenvalue $a(t)$ that depends differentiably on $t$.

Once we obtain smoothness of an eigenvalue and the corresponding eigenvector of a smooth self-adjoint matrix, the Hellmann-Feynman theorem relates the derivatives of the eigenvalue and the matrix with the corresponding eigenvector.

Theorem 2.3 (Hellmann-Feynman).

([SIM2], Theorem 1.4.7) Let $A(t)$ be a self-adjoint matrix-valued function, $X(t)$ a vector-valued function and $\lambda(t)$ a real-valued function. If $A(t)X(t)=\lambda(t)X(t)$ and $\|X(t)\|=1$, then

\lambda^{\prime}(t)=\langle X(t),A^{\prime}(t)X(t)\rangle.
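
As a quick illustration, not taken from the paper, the following Python/NumPy sketch compares the Hellmann-Feynman value with a centered finite-difference derivative of a tracked eigenvalue; the matrix family $A(t)$ here is an arbitrary choice for the test, and its eigenvalues are simple for this generic choice.

```python
import numpy as np

def A(t):
    # A smooth self-adjoint family: a fixed symmetric matrix plus t times a diagonal perturbation.
    base = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)   # the 4 x 4 free matrix
    return base + t * np.diag([1.0, 2.0, 0.0, -1.0])

def A_prime(_t):
    return np.diag([1.0, 2.0, 0.0, -1.0])

t, h, k = 0.3, 1e-6, 1                     # track the k-th eigenvalue near t = 0.3
w, V = np.linalg.eigh(A(t))
x = V[:, k]                                # normalized eigenvector for the eigenvalue w[k]

hf = x @ A_prime(t) @ x                    # Hellmann-Feynman value <X, A'X>
fd = (np.linalg.eigvalsh(A(t + h))[k] - np.linalg.eigvalsh(A(t - h))[k]) / (2 * h)
print(hf, fd)                              # the two numbers should agree closely
```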

3. Ambarzumian problem with various boundary conditions

In addition to the notations we introduced in the previous section, let $p_{n}(x)$ be the characteristic polynomial of $\textbf{F}_{n}$ with zeroes $\lambda_{1},\dots,\lambda_{n}$ and let $q_{n}(x)$ be the characteristic polynomial of $\textbf{S}_{n}$ with zeroes $\mu_{1},\dots,\mu_{n}$.

Let us start by obtaining the first three leading coefficients of $q_{n}(x)$. This is a well-known result, but we give a proof in order to make this section self-contained.

Lemma 3.1.

The characteristic polynomial $q_{n}(x)$ of the discrete Schrödinger matrix $\textbf{S}_{n}$ has the form

q_{n}(x)=x^{n}-\left(\sum_{i=1}^{n}b_{i}\right)x^{n-1}+\left(\sum_{1\leq i<j\leq n}b_{i}b_{j}-(n-1)\right)x^{n-2}+Q_{n-2}(x),

where $Q_{n-2}(x)$ is a polynomial of degree at most $n-3$.

Proof.

The characteristic polynomial $q_{n}(x)$ is given by $\det(x\textbf{I}_{n}-\textbf{S}_{n})$. Let us consider Leibniz's formula for the determinant

(3.1) \det(\textbf{A})=\sum_{\sigma\in\mathbb{S}_{n}}\textrm{sgn}(\sigma)\prod_{i=1}^{n}\alpha_{i,\sigma(i)},

where $\textbf{A}=[\alpha_{i,j}]$ is an $n\times n$ matrix and $\textrm{sgn}$ is the sign function of permutations in the permutation group $\mathbb{S}_{n}$, which returns $+1$ and $-1$ for even and odd permutations, respectively. In (3.1) the identity permutation contributes the product $\prod_{i=1}^{n}(x-b_{i})$ to $\det(x\textbf{I}_{n}-\textbf{S}_{n})$, so we get that $q_{n}(x)$ is a monic polynomial and the coefficient of $x^{n-1}$ is $-\operatorname{tr}(\textbf{S}_{n})$.

From the identity permutation, the coefficient of the $x^{n-2}$ term of $q_{n}(x)$ is formed from the sum of products of all disjoint pairs of the $b_{i}$'s. Hence we obtain $\sum_{1\leq i<j\leq n}b_{i}b_{j}$.

The only other permutations that yield an $x^{n-2}$ term are transpositions. However, for a transposition of $i$ and $j$ with $|i-j|\geq 2$, the product vanishes. Thus we are looking for transpositions with $|i-j|=1$. There are $n-1$ of these, namely $(1,2),(2,3),\dots,(n-2,n-1),(n-1,n)$. The corresponding product has the form

\prod_{i=1}^{n}\alpha_{i,\sigma(i)}=\frac{(-1)(-1)(x-b_{1})(x-b_{2})\cdots(x-b_{n})}{(x-b_{i})(x-b_{j})}=x^{n-2}+r_{n-3}(x),

where $r_{n-3}(x)$ is a polynomial of degree at most $n-3$. Since the signature of a transposition is negative, each such product contributes $-x^{n-2}$. Summing over all $n-1$ transpositions and adding $\sum_{1\leq i<j\leq n}b_{i}b_{j}$ yields the desired result. ∎
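
Lemma 3.1 can also be checked symbolically for small $n$; the following sketch (using Python with SymPy, our own illustration rather than part of the paper) verifies the stated coefficients of $x^{n-1}$ and $x^{n-2}$ for $n=5$.

```python
import sympy as sp

n = 5
b = sp.symbols(f'b1:{n+1}')                 # symbols b1, ..., bn
x = sp.symbols('x')

# Build the discrete Schrödinger matrix S_n with symbolic diagonal.
S = sp.zeros(n, n)
for i in range(n):
    S[i, i] = b[i]
    if i < n - 1:
        S[i, i + 1] = S[i + 1, i] = 1

q = sp.expand((x * sp.eye(n) - S).det())    # characteristic polynomial q_n(x)

# Coefficients predicted by Lemma 3.1.
e1 = sum(b)
e2 = sum(b[i] * b[j] for i in range(n) for j in range(i + 1, n))
assert sp.simplify(q.coeff(x, n - 1) + e1) == 0
assert sp.simplify(q.coeff(x, n - 2) - (e2 - (n - 1))) == 0
print("Lemma 3.1 verified for n =", n)
```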

Corollary 3.2.

The characteristic polynomial $p_{n}(x)$ of the free discrete Schrödinger matrix $\textbf{F}_{n}$ has the form

p_{n}(x)=x^{n}-(n-1)x^{n-2}+P_{n-2}(x),

where $P_{n-2}(x)$ is a polynomial of degree at most $n-3$.

Proof.

Simply set $b_{i}=0$ for each $i\in\{1,\dots,n\}$ and apply Lemma 3.1. ∎

Let us start by giving a proof of the Ambarzumian problem with Dirichlet-Dirichlet boundary conditions, i.e. for the matrix $\textbf{F}_{n}(0,0)=\textbf{F}_{n}$ in our notation.

Theorem 3.3.

Suppose $\textbf{S}_{n}$ shares all of its eigenvalues with $\textbf{F}_{n}$. Then $\textbf{S}_{n}=\textbf{F}_{n}$.

Proof.

In order for the two matrices to have all the same eigenvalues, they must have equal characteristic polynomials. Comparing the results from Lemma 3.1 to Corollary 3.2, we must have

(3.2) \sum_{i=1}^{n}b_{i}=0\qquad\text{and}\qquad\sum_{1\leq i<j\leq n}b_{i}b_{j}=0.

This leads us to conclude that

\sum_{i=1}^{n}b_{i}^{2}=\left(\sum_{i=1}^{n}b_{i}\right)^{2}-2\left(\sum_{1\leq i<j\leq n}b_{i}b_{j}\right)=0

which only occurs when all the $b_{i}$ are zero, i.e. $\textbf{S}_{n}=\textbf{F}_{n}$, since $b_{i}\in\mathbb{R}$ for each $i\in\{1,\dots,n\}$. ∎

A natural question to ask is whether or not we get uniqueness of the free operator with non-zero boundary conditions. At this point let us recall Borg and Levinson's famous two-spectra theorem [BOR, LEVI], which says that two spectra for different boundary conditions of a regular Schrödinger operator on a finite interval uniquely determine the operator. Finite Jacobi analogs of the two-spectra theorem were proved by Gesztesy and Simon.

Given a Jacobi matrix $\textbf{J}_{n}$, define $\textbf{J}_{n}^{(b)}$ as the Jacobi matrix where all $a_{k}$'s and $b_{k}$'s are the same as in $\textbf{J}_{n}$ except that $b_{1}$ is replaced by $b_{1}+b$ for $b\in\mathbb{R}$, i.e.

(3.3) \textbf{J}_{n}^{(b)}:=\textbf{J}_{n}+b(\delta_{1},\cdot)\delta_{1}.

Let us denote the eigenvalues of $\textbf{J}_{n}^{(b)}$ by $\lambda^{(b)}_{k}$ for $k\in\{1,2,\dots,n\}$. Note that $\textbf{J}_{n}$ and $\textbf{J}_{n}^{(b)}$ represent the same Jacobi difference equation with different boundary conditions, namely $\textbf{J}_{n}$ with boundary conditions $b_{1}$ at $0$ and $b_{n}$ at $n+1$, and $\textbf{J}_{n}^{(b)}$ with boundary conditions $b_{1}+b$ at $0$ and $b_{n}$ at $n+1$. The Jacobi matrix $\textbf{J}_{n}^{(b)}$ can also be seen as a rank-one perturbation of the Jacobi matrix $\textbf{J}_{n}$.

Gesztesy and Simon [GS] proved that if $b$ is known, then the spectrum of $\textbf{J}_{n}$ and the spectrum of $\textbf{J}_{n}^{(b)}$ except one eigenvalue uniquely determine the Jacobi matrix.

Theorem 3.4.

([GS], Theorem 5.1) The eigenvalues $\lambda_{1},\dots,\lambda_{n}$ of $\textbf{J}_{n}$, together with $b$ and $n-1$ eigenvalues $\lambda^{(b)}_{1},\dots,\lambda^{(b)}_{n-1}$ of $\textbf{J}_{n}^{(b)}$, determine $\textbf{J}_{n}$ uniquely.

Note that it is irrelevant which $n-1$ eigenvalues from the spectrum of $\textbf{J}_{n}^{(b)}$ are known. Gesztesy and Simon also showed that if two spectra are known, the uniqueness result is obtained without knowing $b$.

Theorem 3.5.

([GS], Theorem 5.2) The eigenvalues $\lambda_{1},\dots,\lambda_{n}$ of $\textbf{J}_{n}$, together with the $n$ eigenvalues $\lambda^{(b)}_{1},\dots,\lambda^{(b)}_{n}$ of some $\textbf{J}_{n}^{(b)}$ (with $b$ unknown), determine $\textbf{J}_{n}$ and $b$ uniquely.

After Theorems 3.3, 3.4 and 3.5 one may expect to get uniqueness of a free discrete Schrödinger operator from a spectrum with a non-zero boundary condition at $0$. However, this is not the case, as the following counterexample shows:

Example 3.6.

Let us define the discrete Schrödinger matrices
A:=\begin{pmatrix}2&1&0\\ 1&0&1\\ 0&1&0\end{pmatrix}\qquad\text{and}\qquad B:=\begin{pmatrix}-2/(1+\sqrt{5})&1&0\\ 1&1&1\\ 0&1&(1+\sqrt{5})/2\end{pmatrix}.
Both matrices have the same characteristic polynomial $x^{3}-2x^{2}-2x+2$, hence the same spectrum, although $A=\textbf{F}_{3}(2,0)$ while $B$ is not of the form $\textbf{F}_{3}(b,0)$.

This example shows that Theorem 3.3 was a special case: in order to get uniqueness of a rank-one perturbation of the free operator, we also need to know the non-zero boundary condition along with the spectrum.
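
The example is easy to verify numerically; the following sketch (Python/NumPy, ours rather than the authors') confirms that $A$ and $B$ are isospectral while being different matrices.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

golden = (1 + np.sqrt(5)) / 2
B = np.array([[-2 / (1 + np.sqrt(5)), 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, golden]])

# A = F_3(2,0) and B share their spectrum even though B has a non-zero middle diagonal entry.
print(np.sort(np.linalg.eigvalsh(A)))
print(np.sort(np.linalg.eigvalsh(B)))
print(np.allclose(np.sort(np.linalg.eigvalsh(A)), np.sort(np.linalg.eigvalsh(B))))  # True
```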

Theorem 3.7.

Suppose $\textbf{S}_{n}(b,b_{n})$ shares all of its eigenvalues with $\textbf{F}_{n}(b,0)$. Then $\textbf{S}_{n}(b,b_{n})=\textbf{F}_{n}(b,0)$.

Proof.

Comparing coefficients of the characteristic polynomials of $\textbf{S}_{n}(b,b_{n})$ and $\textbf{F}_{n}(b,0)$ as we did in the proof of Theorem 3.3, we get

(3.4) b+\sum_{i=2}^{n}b_{i}=b\qquad\text{and}\qquad b\sum_{j=2}^{n}b_{j}+\sum_{2\leq i<j\leq n}b_{i}b_{j}=0.

The first equation of (3.4) gives $\sum_{i=2}^{n}b_{i}=0$ and using this in the second equation of (3.4) we get $\sum_{2\leq i<j\leq n}b_{i}b_{j}=0$. This leads us to conclude that

\sum_{i=2}^{n}b_{i}^{2}=\left(\sum_{i=2}^{n}b_{i}\right)^{2}-2\left(\sum_{2\leq i<j\leq n}b_{i}b_{j}\right)=0

which only occurs when $b_{i}=0$ for each $i\in\{2,\dots,n\}$, i.e. $\textbf{S}_{n}(b,b_{n})=\textbf{F}_{n}(b,0)$. ∎

Now we approach the Ambarzumian problem with Floquet boundary conditions. Let us recall that $\textbf{S}_{n}(\theta)$ and $\textbf{F}_{n}(\phi)$ denote a discrete Schrödinger operator and the free discrete Schrödinger operator with Floquet boundary conditions for the angles $0\leq\theta<1$ and $0\leq\phi<1$, respectively:

\textbf{S}_{n}(\theta)=\begin{pmatrix}b_{1}&1&0&\cdots&e^{2\pi i\theta}\\ 1&b_{2}&1&\ddots&0\\ 0&1&b_{3}&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&1\\ e^{-2\pi i\theta}&0&\cdots&1&b_{n}\end{pmatrix},\quad\textbf{F}_{n}(\phi)=\begin{pmatrix}0&1&0&\cdots&e^{2\pi i\phi}\\ 1&0&1&\ddots&0\\ 0&1&0&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&1\\ e^{-2\pi i\phi}&0&\cdots&1&0\end{pmatrix}.

The following theorem shows that with Floquet boundary conditions, the knowledge of the spectrum of the free operator is sufficient for the uniqueness of the operator up to transpose.

Theorem 3.8.

Suppose that $\textbf{S}_{n}(\theta)$ shares all of its eigenvalues with $\textbf{F}_{n}(\phi)$, including multiplicity, for $0\leq\theta,\phi<1$. Then $b_{1}=\dots=b_{n}=0$ and $\theta=\phi$ or $\theta=1-\phi$, i.e. $\textbf{S}_{n}(\theta)=\textbf{F}_{n}(\phi)$ or $\textbf{S}_{n}(\theta)=\textbf{F}_{n}^{\mathsf{T}}(\phi)$.

Proof.

Let us define $D[k,l]$ as the following determinant of an $(l-k+1)\times(l-k+1)$ matrix for $1\leq k<l\leq n$:

D[k,l]:=\begin{vmatrix}x-b_{k}&-1&0&\cdots&0\\ -1&x-b_{k+1}&\ddots&\ddots&\vdots\\ 0&\ddots&\ddots&\ddots&0\\ \vdots&\ddots&\ddots&x-b_{l-1}&-1\\ 0&\cdots&0&-1&x-b_{l}\end{vmatrix}_{(l-k+1)}

Let us consider the characteristic polynomial of $\textbf{S}_{n}(\theta)$ by using cofactor expansion on the first row:

|x\textbf{I}_{n}-\textbf{S}_{n}(\theta)| =(x-b_{1})D[2,n]+\begin{vmatrix}-1&-1&0&\cdots&0\\ 0&x-b_{3}&-1&\ddots&\vdots\\ 0&-1&\ddots&\ddots&0\\ \vdots&\ddots&\ddots&x-b_{n-1}&-1\\ -e^{-2\pi i\theta}&0&\cdots&-1&x-b_{n}\end{vmatrix}_{(n-1)}
+(-1)^{n+1}\left(-e^{2\pi i\theta}\right)\begin{vmatrix}-1&x-b_{2}&-1&0&\cdots\\ 0&-1&x-b_{3}&\ddots&\ddots\\ \vdots&\ddots&\ddots&\ddots&-1\\ 0&\cdots&0&-1&x-b_{n-1}\\ -e^{-2\pi i\theta}&0&\cdots&0&-1\end{vmatrix}_{(n-1)}

Then by using cofactor expansions on the first row of the determinant in the second term and on the first column of the determinant in the third term we get

|x\textbf{I}_{n}-\textbf{S}_{n}(\theta)| =(x-b_{1})D[2,n]+(-1)D[3,n]+\begin{vmatrix}0&-1&0&\cdots&0\\ 0&x-b_{4}&-1&\ddots&\vdots\\ 0&-1&\ddots&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&-1\\ -e^{-2\pi i\theta}&0&\cdots&-1&x-b_{n}\end{vmatrix}_{(n-2)}
+(-1)^{n+1}\left(-e^{2\pi i\theta}\right)(-1)\begin{vmatrix}-1&x-b_{3}&-1&0&\cdots\\ 0&-1&x-b_{4}&\ddots&\ddots\\ \vdots&\ddots&\ddots&\ddots&-1\\ 0&\cdots&\ddots&-1&x-b_{n-1}\\ 0&\cdots&\cdots&0&-1\end{vmatrix}_{(n-2)}
+(-1)^{n+1}\left(-e^{2\pi i\theta}\right)(-1)^{n}\left(-e^{-2\pi i\theta}\right)D[2,n-1]

Now let us use cofactor expansion on the first column of the determinant in the third term. Also note that the determinant in the fourth term is the determinant of an upper triangular matrix. Therefore,

|x\textbf{I}_{n}-\textbf{S}_{n}(\theta)| =(x-b_{1})D[2,n]-D[3,n]
+(-1)^{n-1}\left(-e^{2\pi i\theta}\right)\begin{vmatrix}-1&0&0&\cdots&0\\ x-b_{4}&-1&\ddots&\ddots&\vdots\\ -1&x-b_{5}&\ddots&\ddots&0\\ 0&\ddots&\ddots&-1&0\\ -e^{-2\pi i\theta}&0&-1&x-b_{n-1}&-1\end{vmatrix}_{(n-3)}
+(-1)^{n+1}\left(-e^{2\pi i\theta}\right)\left[(-1)(-1)^{n-2}+(-1)^{n}\left(-e^{-2\pi i\theta}\right)D[2,n-1]\right]

Finally, noting again that the determinant in the third term is that of a lower triangular matrix, we get

(3.5) |x\textbf{I}_{n}-\textbf{S}_{n}(\theta)| =(x-b_{1})D[2,n]-D[3,n]+(-1)^{n-1}\left(-e^{2\pi i\theta}\right)(-1)^{n-3}
+(-1)^{2n}\left(-e^{-2\pi i\theta}\right)+(-1)^{2n+1}D[2,n-1]
=(x-b_{1})D[2,n]-D[3,n]-D[2,n-1]-e^{2\pi i\theta}-e^{-2\pi i\theta}

At this point note that $D[k,l]$ is the characteristic polynomial of the following discrete Schrödinger matrix:

\begin{pmatrix}b_{k}&1&0&\cdots&0\\ 1&b_{k+1}&1&\ddots&\vdots\\ 0&1&\ddots&\ddots&0\\ \vdots&\ddots&\ddots&b_{l-1}&1\\ 0&\cdots&0&1&b_{l}\end{pmatrix}.

Therefore using Lemma 3.1 and equation (3.5), we obtain

(3.6) |x\textbf{I}_{n}-\textbf{S}_{n}(\theta)| =x^{n}-\left(\sum_{i=1}^{n}b_{i}\right)x^{n-1}+\left(\sum_{1\leq i<j\leq n}b_{i}b_{j}-(n-1)\right)x^{n-2}
+f_{n-3}(x)-e^{2\pi i\theta}-e^{-2\pi i\theta},

where $f_{n-3}$ is a polynomial of degree at most $n-3$, which is independent of $\theta$. Using the same steps for $\textbf{F}_{n}(\phi)$, we obtain

(3.7) |x\textbf{I}_{n}-\textbf{F}_{n}(\phi)| =x^{n}-(n-1)x^{n-2}+g_{n-3}(x)-e^{2\pi i\phi}-e^{-2\pi i\phi},

where $g_{n-3}$ is a polynomial of degree at most $n-3$, which is independent of $\phi$.

Comparing equations (3.6) and (3.7), as we did in the proof of Theorem 3.3, we can conclude that the diagonal entries $\{b_{i}\}_{i=1}^{n}$ of $\textbf{S}_{n}(\theta)$ must be zero.

Note that the expression consisting of the first three terms on the right-hand side of (3.5), $(x-b_{1})D[2,n]-D[3,n]-D[2,n-1]$, is independent of $\theta$. In addition, we observed that $b_{1}=\dots=b_{n}=0$. Therefore, using the equality of the characteristic polynomials of $\textbf{S}_{n}(\theta)$ and $\textbf{F}_{n}(\phi)$, we obtain

e^{2\pi i\theta}+e^{-2\pi i\theta}=e^{2\pi i\phi}+e^{-2\pi i\phi},

which can be written using Euler’s identity as

(3.8) 2\cos(2\pi\theta)=2\cos(2\pi\phi).

Equation (3.8) is valid if and only if $\theta$ differs from $\phi$ or $-\phi$ by an integer. Since $0\leq\theta,\phi<1$, the only possible values for $\theta$ are $\phi$ and $1-\phi$. This completes the proof. ∎
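
A numerical sanity check of the $\theta=1-\phi$ ambiguity is straightforward; the sketch below (Python/NumPy, our own illustration) confirms, consistently with Theorem 3.8, that $\textbf{F}_{n}(\phi)$ and $\textbf{F}_{n}(1-\phi)=\textbf{F}_{n}^{\mathsf{T}}(\phi)$ are different matrices with identical spectra, while a third angle generically gives a different spectrum.

```python
import numpy as np

def F_floquet(n, phi):
    """Free discrete Schrödinger matrix with Floquet boundary conditions of angle phi."""
    M = (np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)).astype(complex)
    M[0, n - 1] = np.exp(2j * np.pi * phi)
    M[n - 1, 0] = np.exp(-2j * np.pi * phi)
    return M

n, phi = 7, 0.23
spec = lambda M: np.sort(np.linalg.eigvalsh(M))

print(np.allclose(spec(F_floquet(n, phi)), spec(F_floquet(n, 1 - phi))))   # True: same spectrum
print(np.allclose(F_floquet(n, phi), F_floquet(n, 1 - phi)))               # False: different matrices
print(np.allclose(spec(F_floquet(n, phi)), spec(F_floquet(n, 0.4))))       # False: other angles differ
```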

4. An Ambarzumian-type mixed inverse spectral problem

Let us introduce the following $n\times n$ discrete Schrödinger matrix for $1\leq m\leq n$:

\textnormal{S}_{n,m}:=\begin{pmatrix}b_{1}&1&0&0&\ldots&0\\ 1&\ddots&1&0&\ddots&0\\ 0&1&b_{m}&1&\ddots&\vdots\\ 0&0&1&0&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&\ddots&1\\ 0&\ldots&\ldots&0&1&0\end{pmatrix}

Let us also recall that $\textnormal{F}_{n}$ denotes the free discrete Schrödinger matrix of size $n\times n$. In this section our goal is to answer the following Ambarzumian-type mixed spectral problem positively for the case $m=2$.

Inverse Spectral Problem.

If $\textnormal{S}_{n,m}$ and $\textnormal{F}_{n}$ share $m$ consecutive eigenvalues, does it follow that $b_{1}=\cdots=b_{m}=0$, i.e. $\textnormal{S}_{n,m}=\textnormal{F}_{n}$?

When $m=1$, this problem becomes a special case of the following result of Gesztesy and Simon [GS]. For a Jacobi matrix given as in (2.1), let us consider the sequences $\{a_{k}\}$ and $\{b_{k}\}$ as a single sequence $b_{1},a_{1},b_{2},a_{2},\dots,a_{n-1},b_{n}=c_{1},c_{2},\dots,c_{2n-1}$, i.e. $c_{2k-1}:=b_{k}$ and $c_{2k}:=a_{k}$.

Theorem 4.1.

([GS], Theorem 4.2) Suppose that $1\leq k\leq n$ and $c_{k+1},\dots,c_{2n-1}$ are known, as well as $k$ of the eigenvalues. Then $c_{1},\dots,c_{k}$ are uniquely determined.

By letting $k=1$, we get the inverse spectral problem stated above for $m=1$. Now let us prove the $m=2$ case. Let $\lambda_{1}<\lambda_{2}<\dots<\lambda_{n}$ denote the eigenvalues of $\textnormal{F}_{n}$, and let $\tilde{\lambda}_{1}<\tilde{\lambda}_{2}<\dots<\tilde{\lambda}_{n}$ denote the eigenvalues of $\textnormal{S}_{n,2}$.

Theorem 4.2.

Let $\lambda_{k}=\tilde{\lambda}_{k}$ and $\lambda_{k+1}=\tilde{\lambda}_{k+1}$ for some $k\in\{1,2,\dots,n-1\}$. Then $b_{1}=0$ and $b_{2}=0$, i.e. $\textnormal{S}_{n,2}=\textnormal{F}_{n}$.

Proof.

For simplicity, let us write $\textnormal{S}_{n}$ instead of $\textnormal{S}_{n,2}$. We start by proving the following claim.

Claim: If $\lambda_{k}=\tilde{\lambda}_{k}$ and $\lambda_{k+1}=\tilde{\lambda}_{k+1}$, then either $b_{1}=b_{2}=0$, or $b_{1}=\lambda_{k}+\lambda_{k+1}$ and $b_{2}=1/\lambda_{k}+1/\lambda_{k+1}$.

Let us consider the characteristic polynomial of $\textnormal{S}_{n}$ using cofactor expansion on the first row of $\lambda I-\textnormal{S}_{n}$.

\det(\lambda I-\textnormal{S}_{n})=
(\lambda-b_{1})\begin{vmatrix}\lambda-b_{2}&-1&0&\ldots&0\\ -1&\lambda&-1&\ddots&\vdots\\ 0&-1&\lambda&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&-1\\ 0&\ldots&0&-1&\lambda\end{vmatrix}_{(n-1)}+\begin{vmatrix}-1&-1&0&\ldots&0\\ 0&\lambda&-1&\ddots&\vdots\\ 0&-1&\lambda&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&-1\\ 0&\ldots&0&-1&\lambda\end{vmatrix}_{(n-1)}

Using cofactor expansion on the first row for the first term and the first column for the second term, we get

\det(\lambda I-\textnormal{S}_{n}) =(\lambda-b_{1})(\lambda-b_{2})\det(\lambda I-\textnormal{F}_{n-2})
+(\lambda-b_{1})\begin{vmatrix}-1&-1&0&\ldots&0\\ 0&\lambda&-1&\ddots&\vdots\\ 0&-1&\lambda&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&-1\\ 0&\ldots&0&-1&\lambda\end{vmatrix}_{(n-2)}-\det(\lambda I-\textnormal{F}_{n-2})

Finally, using cofactor expansion on the first column of the second term, we get

(4.1) \det(\lambda I-\textnormal{S}_{n})=[(\lambda-b_{1})(\lambda-b_{2})-1]\det(\lambda I-\textnormal{F}_{n-2})-(\lambda-b_{1})\det(\lambda I-\textnormal{F}_{n-3}).

Since $\tilde{\lambda}_{k}=\lambda_{k}$ and $\tilde{\lambda}_{k+1}=\lambda_{k+1}$, the right-hand side of (4.1) is zero when $\lambda=\lambda_{k}$ or $\lambda=\lambda_{k+1}$. Therefore for $\lambda=\lambda_{k}$ or $\lambda=\lambda_{k+1}$ we get

(4.2) \frac{(\lambda-b_{1})(\lambda-b_{2})-1}{\lambda-b_{1}}=\frac{\det(\lambda I-\textnormal{F}_{n-3})}{\det(\lambda I-\textnormal{F}_{n-2})}.

Note that equation (4.2) is also valid for $\textnormal{F}_{n}$, i.e. when $b_{1}=b_{2}=0$, and the right-hand side of the equation does not depend on $b_{1}$ or $b_{2}$, hence is identical for $\textnormal{S}_{n}$ and $\textnormal{F}_{n}$. Therefore the left-hand side of (4.2) should also be identical for $\textnormal{S}_{n}$ and $\textnormal{F}_{n}$ when $\tilde{\lambda}_{k}=\lambda_{k}$ and $\tilde{\lambda}_{k+1}=\lambda_{k+1}$. Hence,

(4.3) \frac{(\lambda-b_{1})(\lambda-b_{2})-1}{\lambda-b_{1}}=\frac{(\lambda-0)(\lambda-0)-1}{\lambda-0}

for $\lambda=\lambda_{k}$ or $\lambda=\lambda_{k+1}$. Therefore,

\lambda(\lambda-b_{1})(\lambda-b_{2})-\lambda =(\lambda^{2}-1)(\lambda-b_{1})
\lambda^{3}-(b_{1}+b_{2})\lambda^{2}+b_{1}b_{2}\lambda-\lambda =\lambda^{3}-b_{1}\lambda^{2}-\lambda+b_{1}
-b_{2}\lambda^{2}+b_{1}b_{2}\lambda-b_{1} =0

for $\lambda=\lambda_{k}$ or $\lambda=\lambda_{k+1}$. If $b_{2}=0$, then $b_{1}=0$ from the last equation above, so we can assume $b_{2}\neq 0$. Then $\lambda^{2}-b_{1}\lambda+b_{1}/b_{2}=0$ for $\lambda=\lambda_{k}$ or $\lambda=\lambda_{k+1}$.

Since $x^{2}-b_{1}x+b_{1}/b_{2}$ is a monic polynomial with two distinct roots $x=\lambda_{k}$ and $x=\lambda_{k+1}$, we get

x^{2}-b_{1}x+b_{1}/b_{2}=(x-\lambda_{k})(x-\lambda_{k+1})

which implies

x^{2}-b_{1}x+b_{1}/b_{2}=x^{2}-(\lambda_{k}+\lambda_{k+1})x+\lambda_{k}\lambda_{k+1}.

Comparing coefficients we get our claim, since $b_{1}=\lambda_{k}+\lambda_{k+1}$, and $b_{1}/b_{2}=\lambda_{k}\lambda_{k+1}$ implies

b_{2}=\frac{b_{1}}{\lambda_{k}\lambda_{k+1}}=\frac{\lambda_{k}+\lambda_{k+1}}{\lambda_{k}\lambda_{k+1}}=\frac{1}{\lambda_{k}}+\frac{1}{\lambda_{k+1}}.

Now our goal is to get a contradiction in the second case of the claim, i.e. when $b_{1}=\lambda_{k}+\lambda_{k+1}$ and $b_{2}=1/\lambda_{k}+1/\lambda_{k+1}$, so let us assume

b_{1}=\lambda_{k}+\lambda_{k+1}\quad\text{and}\quad b_{2}=1/\lambda_{k}+1/\lambda_{k+1}.

First let us show that $b_{1}$ and $b_{2}$ have the same sign. If $n$ is even and $k=n/2$, then $\lambda_{k}=-\lambda_{k+1}$, hence $b_{1}=b_{2}=0$. If $n$ is odd and $k=(n-1)/2$ or $k=(n+1)/2$, then one of the eigenvalues $\lambda_{k}$ or $\lambda_{k+1}$ is zero, so $b_{2}$ is undefined. For all other values of $k$, the two consecutive eigenvalues $\lambda_{k}$ and $\lambda_{k+1}$, and hence $b_{1}$ and $b_{2}$, have the same sign.

Without loss of generality let us assume both $\lambda_{k}$ and $\lambda_{k+1}$ are negative and $b_{1}\leq b_{2}$. Let us define the matrices $\textnormal{C}_{n}$ and $\textnormal{M}_{n}(t)$ with the real parameter $t$ as follows:

\textnormal{C}_{n}:=\begin{pmatrix}b_{2}&1&0&\ldots&0\\ 1&b_{2}&1&\ddots&\vdots\\ 0&1&0&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&1\\ 0&\ldots&0&1&0\end{pmatrix}\quad\text{and}\quad\textnormal{M}_{n}(t):=\begin{pmatrix}-t&1&0&\ldots&0\\ 1&-t&1&\ddots&\vdots\\ 0&1&0&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&1\\ 0&\ldots&0&1&0\end{pmatrix}.

Note that the $k$th eigenvalue of $\textnormal{C}_{n}$, denoted by $\lambda_{k}(\textnormal{C}_{n})$, is greater than or equal to $\tilde{\lambda}_{k}$, since $\textnormal{C}_{n}\geq\textnormal{S}_{n}$. Let us also note that $\textnormal{M}_{n}(-b_{2})=\textnormal{C}_{n}$ and $\textnormal{M}_{n}(0)=\textnormal{F}_{n}$. Let us denote the $k$th eigenvalue of $\textnormal{M}_{n}(t)$ by $\lambda_{k}(t)$ and the corresponding eigenvector by $X(t)$, normalized as $\|X(t)\|=1$. Since $\textnormal{M}_{n}(t)$ is a smooth function of $t$ around $0$, the same is true for $\lambda_{k}(t)$ and $X(t)$ by Theorem 2.1 and Theorem 2.2. Let us recall that $\textnormal{M}_{n}(t)$ is self-adjoint, $\|X(t)\|=1$ and $\textnormal{M}_{n}(t)X(t)=\lambda_{k}(t)X(t)$. Therefore by Theorem 2.3, the Hellmann-Feynman theorem, we get

(4.4) \lambda_{k}^{\prime}(t)=\langle X(t),\textnormal{M}_{n}^{\prime}(t)X(t)\rangle=-X_{1}^{2}(t)-X_{2}^{2}(t),

where $X(t)^{T}=[X_{1}(t),X_{2}(t),\dots,X_{n}(t)]$. Since $X(t)$ is a non-zero eigenvector of the tridiagonal matrix $\textnormal{M}_{n}(t)$, at least one of $X_{1}(t)$ and $X_{2}(t)$ is non-zero. Therefore by equation (4.4), there exists an open interval $I\subset\mathbb{R}$ containing $0$ such that $\lambda_{k}^{\prime}(t)<0$ for $t\in I$, i.e. $\lambda_{k}(t)$ is decreasing on $I$. Choosing $t_{0}\in I$ with $0<t_{0}<-b_{2}$ and noting that $\textnormal{M}_{n}(-b_{2})\leq\textnormal{M}_{n}(t_{0})$, we get

\lambda_{k}(\textnormal{C}_{n})=\lambda_{k}(-b_{2})\leq\lambda_{k}(t_{0})<\lambda_{k}(0)=\lambda_{k}.

Since $\tilde{\lambda}_{k}\leq\lambda_{k}(\textnormal{C}_{n})$, this contradicts our assumption that $\lambda_{k}=\tilde{\lambda}_{k}$. Therefore only the first case of the claim is true, i.e. $b_{1}=b_{2}=0$ and hence $\textnormal{S}_{n}=\textnormal{F}_{n}$. ∎
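
As a sanity check, the key determinant identity (4.1) can be tested numerically; the sketch below (our own, in Python/NumPy, not part of the paper) compares both sides for randomly chosen $b_{1}$, $b_{2}$ and $\lambda$. The size $n=6$ and the random seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def det_free(lmbda, m):
    """Compute det(lambda*I - F_m); returns 1 for m = 0 by convention."""
    if m == 0:
        return 1.0
    F = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
    return np.linalg.det(lmbda * np.eye(m) - F)

n = 6
b1, b2, lam = rng.normal(size=3)

S = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
S[0, 0], S[1, 1] = b1, b2                      # the matrix S_{n,2}

lhs = np.linalg.det(lam * np.eye(n) - S)
rhs = ((lam - b1) * (lam - b2) - 1) * det_free(lam, n - 2) - (lam - b1) * det_free(lam, n - 3)
print(np.isclose(lhs, rhs))                    # True: identity (4.1) holds
```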

Acknowledgement

The authors would like to thank Wencai Liu for introducing them to this project and for his constant support. This work was partially supported by NSF DMS-2015683 and DMS-2000345.

References