
Jacobi matrices that realize perfect quantum state transfer and Early State Exclusion

Rachel Bailey, Department of Mathematical Sciences
Bentley University
175 Forest Street
Waltham, MA 02452, USA
[email protected]
Sara Costa, Department of Mathematics
University of Hartford
200 Bloomfield Avenue
West Hartford, CT 06117, USA
[email protected]
Maxim Derevyagin, Department of Mathematics
University of Connecticut
341 Mansfield Road, U-1009
Storrs, CT 06269-1009, USA
[email protected]
Caleb Findley, Department of Mathematics
University of Texas Arlington
701 S. Nedderman Drive
Arlington, TX 76019, USA
[email protected]
 and Kai Zuang, Department of Mathematics
Brown University
151 Thayer Street
Providence, RI 02912, USA
[email protected]
Abstract.

In this paper we show how to construct 1D Hamiltonians, that is, Jacobi matrices, that realize perfect quantum state transfer and also have the property that the overlap of the time-evolved state with the initial state is zero at some time before the transfer time. If the latter takes place, we call it early state exclusion. We also show that in some cases early state exclusion is impossible. The proofs rely on properties of Krawtchouk and Chebyshev polynomials.

Key words and phrases:
Jacobi matrices, inverse problems, quantum information, Krawtchouk polynomials, Chebyshev polynomials
1991 Mathematics Subject Classification:
Primary 15A29, 81P45; Secondary 47B36, 33C45

1. Introduction

In recent years there has been interest in studying various mathematical aspects of perfect quantum state transfer [2], [3], [5], [6], [8], and [9]. To give a quick account of what this is from the mathematical perspective, let us consider a Jacobi matrix J, that is, a symmetric tridiagonal matrix of the form

J=\begin{pmatrix}a_{0}&b_{0}&&\\ b_{0}&a_{1}&\ddots&\\ &\ddots&\ddots&b_{N-1}\\ &&b_{N-1}&a_{N}\\ \end{pmatrix},

where the a_{k}'s are real numbers and the b_{k}'s are positive numbers. Then J represents the Hamiltonian of a system of N+1 particles with nearest-neighbor interactions. The evolution of the system is given by e^{-itJ}. We say that J realizes Perfect State Transfer (or, shortly, PST) if there exist T_{0}>0 and \phi\in\mathbb{R} such that

(1.1) e^{-iT_{0}J}{\bf e}_{0}=e^{i\phi}{\bf e}_{N},

where {\bf e}_{0} and {\bf e}_{N} are the first and last elements of the standard basis \{{\bf e}_{k}\}_{k=0}^{N} of \mathbb{C}^{N+1}. In other words, this means that in such a system, qubits get transferred from site 1 to site N+1 in time T_{0}. Note that we can only know probabilities of quantum states, and (1.1) describes the situation of quantum transfer with probability 1, which is a desirable but rare phenomenon. Nevertheless, theoretically one can easily construct such Hamiltonians since (1.1) is equivalent to the two conditions (see [5] or [8])

  1. (i)

    J is persymmetric, that is,

    \begin{split}a_{k}=a_{N-k},\quad k=0,1,\dots,N,\\ b_{l}=b_{N-1-l},\quad l=0,1,\dots,N-1;\end{split}
  2. (ii)

    the eigenvalues \lambda_{0}<\lambda_{1}<\dots<\lambda_{N} of J satisfy

    (1.2) \lambda_{k+1}-\lambda_{k}=\frac{(2n_{k}+1)\pi}{T_{0}},\quad k=0,\dots,N-1,

    where the n_{k} are nonnegative integers.
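Conditions (i) and (ii) are easy to check numerically. The following Python sketch (an illustration added for convenience, assuming numpy is available) builds a persymmetric Jacobi matrix with the couplings b_{k}=\sqrt{(k+1)(N-k)}/2, whose equidistant spectrum reappears in Section 3, and verifies that PST occurs at T_{0}=\pi:

```python
import numpy as np

def krawtchouk_chain(N):
    # persymmetric couplings b_k = sqrt((k+1)(N-k))/2, zero diagonal
    b = np.array([np.sqrt((k + 1) * (N - k)) / 2 for k in range(N)])
    return np.diag(b, 1) + np.diag(b, -1)

def evolved_state(J, t):
    # e^{-itJ} e_0 via the spectral decomposition of the symmetric matrix J
    lam, V = np.linalg.eigh(J)
    return V @ (np.exp(-1j * t * lam) * V[0])

N = 4
J = krawtchouk_chain(N)
lam = np.linalg.eigvalsh(J)
gaps = np.diff(lam)             # condition (ii): all gaps equal 1 = (2*0+1)*pi/T0
x = evolved_state(J, np.pi)     # so the transfer time is T0 = pi
print(np.allclose(gaps, 1.0), abs(x[N]))
```

The last component of the evolved state has modulus 1, i.e. the qubit arrives at the last site with probability 1.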

This characterization gives some flexibility for engineering quantum wires. Once such a wire is built, one can potentially attempt to speed up the transfer of several qubits by initiating the second transfer at a time t earlier than T_{0}. This could be possible if we knew that the overlap of the time-evolved state with the initial state is zero at some time before the transfer time. Thus it makes sense to introduce the following concept.

Definition 1.1.

(Early State Exclusion) Let J be a Jacobi matrix that has earliest perfect state transfer at time T_{0}. If there is a time t such that 0<t<T_{0} and

(e^{-iJt}{\bf e}_{0},{\bf e}_{0})_{\mathbb{C}^{N+1}}=0

then we say that J has Early State Exclusion (ESE) at time t.

By looking at the above definition, it is not clear whether such a J exists; in fact, the third author learned from Christino Tamon, who attributed the question to Alastair Kay, that the existence of such a J was an open question. In this note we show that the answer to this question is positive and construct an infinite family of such Hamiltonians for any odd N\geq 3.

2. Trial and error

In this section we will consider the situation of Early State Exclusion for matrices of sizes 2, 3, and 4, which correspond to N=1, N=2, and N=3, respectively.

2.1. The case of 2×2-matrices.

Based on the characterization given in the previous section, any symmetric, persymmetric matrix

J=\begin{pmatrix}a_{0}&b_{0}\\ b_{0}&a_{0}\end{pmatrix},\quad a_{0}\in\mathbb{R},\quad b_{0}>0

realizes a PST since in this case we can always find T_{0} such that \lambda_{1}-\lambda_{0}=\pi/T_{0}, where \lambda_{1}>\lambda_{0} are the eigenvalues of J. However, since we are looking at the zeroes of (e^{-iJt}{\bf e}_{0},{\bf e}_{0}), we can simplify the form of J further. Namely, without loss of generality we can assume that the matrix J has the form

J=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}.

Indeed, the shift J\to J-a_{0}I simply brings a factor of e^{ia_{0}t}, which is never 0, to the function in question, and then rescaling b_{0} to 1 only changes the transfer time T_{0}. Next, one can find that

e^{-iJt}{\bf e}_{0}=e^{-it\begin{pmatrix}0&1\\ 1&0\end{pmatrix}}\begin{pmatrix}1\\ 0\end{pmatrix}=\begin{pmatrix}\cos t\\ -i\sin t\end{pmatrix}.

The latter relation shows that if (e^{-iJt}{\bf e}_{0},{\bf e}_{0})_{\mathbb{C}^{N+1}}=0 then the magnitude of the second component of e^{-iJt}{\bf e}_{0} is 1, which in turn implies that PST takes place at time t. As a result, 2×2 Jacobi matrices do not have ESE.
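The closed form above is easy to confirm numerically; the snippet below (an illustrative check, not from the paper, assuming numpy) compares e^{-itJ}{\bf e}_{0} with (\cos t,-i\sin t)^{\top} at a sample time:

```python
import numpy as np

J = np.array([[0.0, 1.0], [1.0, 0.0]])
lam, V = np.linalg.eigh(J)
t = 0.7  # arbitrary sample time
# e^{-itJ} e_0 computed through the eigendecomposition of J
x = V @ (np.exp(-1j * t * lam) * V[0])
print(np.allclose(x, [np.cos(t), -1j * np.sin(t)]))  # True
```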

2.2. The case of 3×3-matrices.

Here we need to do a bit more computation, so we will formulate the result first.

Proposition 2.1.

Let J be a 3×3 Jacobi matrix such that PST occurs for the first time at time T_{0}. Then J does not have ESE.

Proof.

Assume that there exists t<T_{0} such that (e^{-iJt}{\bf e}_{0},{\bf e}_{0})_{\mathbb{C}^{N+1}}=0. Note that since PST occurs, J must be persymmetric and hence takes the form

J=\begin{pmatrix}a_{0}&b_{0}&0\\ b_{0}&a_{1}&b_{0}\\ 0&b_{0}&a_{0}\end{pmatrix}

for a_{0},a_{1}\in\mathbb{R} and b_{0}>0. As in the case of 2×2-matrices, without loss of generality one can assume that J has the form

J=\begin{pmatrix}0&1&0\\ 1&c&1\\ 0&1&0\end{pmatrix}.

Let \lambda_{1}<\lambda_{2}<\lambda_{3} be the eigenvalues of J. Note that since we assume PST occurs, the distinctness of the eigenvalues follows from (1.2). It is evident that \det J=0 and so one of the eigenvalues of J must be 0. Moreover, since (J{\bf e}_{0},{\bf e}_{0})=0 and {\bf e}_{0}\not\in\ker J, the matrix J must have both positive and negative eigenvalues. Taking into account (1.2) we get that

\lambda_{1}=-(2m+1)\frac{\pi}{T_{0}},\quad\lambda_{2}=0,\quad\lambda_{3}=(2n+1)\frac{\pi}{T_{0}}

for some nonnegative integers n, m. Consequently, we get that

c=\mathrm{Tr}\,J=2(n-m)\frac{\pi}{T_{0}}

and therefore

J=\begin{pmatrix}0&1&0\\ 1&\frac{2(n-m)\pi}{T_{0}}&1\\ 0&1&0\end{pmatrix}.

Now we can compute e^{-iJt} using Sylvester's formula,

e^{-iJt}=\frac{e^{-i\lambda_{1}t}}{\lambda_{1}(\lambda_{1}-\lambda_{3})}J(J-\lambda_{3}I)+\frac{1}{\lambda_{1}\lambda_{3}}(J-\lambda_{1}I)(J-\lambda_{3}I)+\frac{e^{-i\lambda_{3}t}}{(\lambda_{3}-\lambda_{1})\lambda_{3}}(J-\lambda_{1}I)J,

where we took into account that \lambda_{2}=0. We do not need to compute the entire matrix; in order to get to the function that we want to analyze, we just need to know that

J(J-\lambda_{3}I){\bf e}_{0}=\begin{pmatrix}1\\ \lambda_{1}\\ 1\end{pmatrix},\quad(J-\lambda_{1}I)(J-\lambda_{3}I){\bf e}_{0}=\begin{pmatrix}\lambda_{1}\lambda_{3}+1\\ 0\\ 1\end{pmatrix},\quad(J-\lambda_{1}I)J{\bf e}_{0}=\begin{pmatrix}1\\ \lambda_{3}\\ 1\end{pmatrix}.

The latter yields

(e^{-iJt}{\bf e}_{0},{\bf e}_{0})=\frac{e^{-i\lambda_{1}t}}{\lambda_{1}(\lambda_{1}-\lambda_{3})}+\frac{\lambda_{1}\lambda_{3}+1}{\lambda_{1}\lambda_{3}}+\frac{e^{-i\lambda_{3}t}}{(\lambda_{3}-\lambda_{1})\lambda_{3}},

or, for simplicity, we set

(e^{-iJt}{\bf e}_{0},{\bf e}_{0})=\alpha e^{-i\lambda_{1}t}+\beta+\gamma e^{-i\lambda_{3}t},

and notice that \alpha, \beta, \gamma are positive numbers. Since we know that the transfer time is T_{0}, it implies (e^{-iJT_{0}}{\bf e}_{0},{\bf e}_{0})=0; moreover, since \lambda_{1}T_{0} and \lambda_{3}T_{0} are odd multiples of \pi, we have e^{-i\lambda_{1}T_{0}}=e^{-i\lambda_{3}T_{0}}=-1, and thus

\beta=\alpha+\gamma.

Due to our assumption there is a t such that 0<t<T_{0} and

\alpha e^{-i\lambda_{1}t}+\beta+\gamma e^{-i\lambda_{3}t}=0,

which, according to the triangle inequality, implies that e^{-i\lambda_{1}t}=e^{-i\lambda_{3}t}=-1. Furthermore, we get that

e^{-iJt}{\bf e}_{0}=\begin{pmatrix}0\\ 0\\ -1\end{pmatrix},

which shows that PST takes place at time t<T_{0} and this contradicts our assumption. ∎
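As a numerical aside (not part of the original argument, assuming numpy), one can sample the overlap for a concrete 3×3 PST matrix, here the one with m=n and c=0, and observe that it never vanishes before T_{0}:

```python
import numpy as np

# one admissible 3x3 PST matrix (m = n, c = 0); eigenvalues -sqrt(2), 0, sqrt(2)
J = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
lam, V = np.linalg.eigh(J)
T0 = np.pi / np.sqrt(2)  # earliest transfer time: the gap sqrt(2) equals pi/T0

ts = np.linspace(1e-3, T0 - 1e-3, 2000)
# x_0(t) = sum_s V[0,s]^2 e^{-i lambda_s t}
overlap = np.abs(np.exp(-1j * np.outer(ts, lam)) @ (V[0] ** 2))
print(overlap.min() > 0)  # the overlap has no zero before T0, hence no ESE
```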

2.3. The case of 4×4-matrices.

It is not possible to reproduce the previous arguments in this case, so one has to consult Mathematica. After some computations, one can get the feeling that this case is not hopeless, and after some more computations one can produce an example of a Jacobi matrix that realizes PST and has ESE. As a product of such experiments, let us consider the following Jacobi matrix

(2.1) J=\begin{pmatrix}0&\sqrt{15}/2&0&0\\ \sqrt{15}/2&0&1&0\\ 0&1&0&\sqrt{15}/2\\ 0&0&\sqrt{15}/2&0\end{pmatrix}.

Let us denote the components of the evolved state by x_{j}(t), that is,

e^{-iJt}{\bf e}_{0}=\begin{pmatrix}x_{0}(t)\\ x_{1}(t)\\ x_{2}(t)\\ x_{3}(t)\end{pmatrix}

and plot x_{0} and x_{3} using Mathematica; see Figure 1. Notice that Figure 1 demonstrates that the Jacobi matrix J defined by (2.1) gives an example of a 4×4 Jacobi matrix with PST and ESE. We prove this statement rigorously below.

Figure 1. This picture demonstrates the occurrence of ESE between t=0.5 and t=1 and PST at time t=\pi.
Proposition 2.2.

There is a 4×4 Jacobi matrix J that realizes PST and has ESE.

Proof.

Let J be the 4×4 matrix defined by (2.1). Then the eigenvalues of J are

-\frac{5}{2},\quad-\frac{3}{2},\quad\frac{3}{2},\quad\frac{5}{2}

and hence the transfer time is T_{0}=\pi due to (1.2). Then, we compute

(2.2) x_{0}(t)=\cos^{3}\left(\frac{t}{2}\right)(3\cos(t)-2),\quad x_{3}(t)=-i\sin^{3}\left(\frac{t}{2}\right)(3\cos(t)+2),

from which we clearly see that if 3\cos(t)-2=0 then

\sin^{2}\left(\frac{t}{2}\right)=\frac{1-\cos(t)}{2}=\frac{1}{6}\Rightarrow|x_{3}(t)|=\frac{4}{6^{3/2}}<\frac{4}{4^{3/2}}=\frac{1}{2}<1,

which confirms that J has ESE at t=\arccos(2/3)<\pi. ∎
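The computations above can be reproduced numerically; the following sketch (assuming numpy) checks the spectrum of (2.1), the ESE time t=\arccos(2/3), and the PST at t=\pi:

```python
import numpy as np

r = np.sqrt(15) / 2
J = np.diag([r, 1.0, r], 1) + np.diag([r, 1.0, r], -1)  # the matrix (2.1)
lam, V = np.linalg.eigh(J)

def state(t):
    # e^{-itJ} e_0 via the eigendecomposition of J
    return V @ (np.exp(-1j * t * lam) * V[0])

t_ese = np.arccos(2.0 / 3.0)                 # the zero of 3cos(t) - 2 on (0, pi)
print(np.allclose(lam, [-2.5, -1.5, 1.5, 2.5]))
print(abs(state(t_ese)[0]) < 1e-12)           # x_0 vanishes: ESE occurs
print(abs(state(t_ese)[3]) < 1)               # but no transfer happens yet
print(abs(abs(state(np.pi)[3]) - 1) < 1e-12)  # PST at T0 = pi
```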

3. Absence of Early State Exclusion for Equidistant Spectra

In this section we will show that a Jacobi matrix that realizes PST and whose spectrum is equidistant, that is, the quantity \lambda_{k+1}-\lambda_{k} is a constant that does not depend on k, cannot have ESE. This fact is based on the properties of Krawtchouk polynomials and, thus, we start with their basics.

For p\in(0,1) and a positive integer N, the monic Krawtchouk polynomials in x are defined by

K_{n}(x;p,N)=\sum_{j=0}^{n}\frac{(-n)_{j}(-N+j)_{n-j}}{j!}p^{n-j}(-x)_{j},\quad n=0,1,\dots,N+1,

where

(x)_{n}=\begin{cases}1,&n=0,\\ x(x+1)\dots(x+n-1),&n=1,2,\dots\end{cases}

is the Pochhammer symbol. Observe that

K_{-1}(x;p,N)=0,\quad K_{0}(x;p,N)=1,\quad K_{N+1}(x;p,N)=x(x-1)(x-2)\dots(x-N).
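These identities are easy to test numerically. The sketch below (illustrative only, using the Python standard library) evaluates the defining sum at a sample point and checks both the product formula for K_{N+1} and the three-term recurrence stated next for p=1/2:

```python
import math

def pochhammer(x, n):
    # (x)_n = x(x+1)...(x+n-1), with (x)_0 = 1
    out = 1.0
    for j in range(n):
        out *= x + j
    return out

def K(n, x, p, N):
    # monic Krawtchouk polynomial from the defining sum in the text
    return sum(
        pochhammer(-n, j) * pochhammer(-N + j, n - j) / math.factorial(j)
        * p ** (n - j) * pochhammer(-x, j)
        for j in range(n + 1)
    )

N, p, x = 5, 0.5, 2.3  # sample parameters and evaluation point
# K_{N+1}(x) = x(x-1)...(x-N)
lhs = K(N + 1, x, p, N)
rhs = math.prod(x - k for k in range(N + 1))
print(abs(lhs - rhs) < 1e-9)

# recurrence for p = 1/2: x K_n = K_{n+1} + (N/2) K_n + ((N+1-n)n/4) K_{n-1}
n = 3
rec = K(n + 1, x, p, N) + N / 2 * K(n, x, p, N) + (N + 1 - n) * n / 4 * K(n - 1, x, p, N)
print(abs(x * K(n, x, p, N) - rec) < 1e-9)
```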

In what follows we will only need the case p=1/2, so let us set K_{n}(x):=K_{n}(x;1/2,N). Then the monic Krawtchouk polynomials K_{n}(x) satisfy the following three-term recurrence relation

xK_{n}(x)=K_{n+1}(x)+\frac{N}{2}K_{n}(x)+\frac{(N+1-n)n}{4}K_{n-1}(x)

for n=0,1,\dots,N, which can be rewritten in the matrix form

(3.1) x\begin{pmatrix}K_{0}(x)\\ K_{1}(x)\\ \vdots\\ K_{N-1}(x)\\ K_{N}(x)\end{pmatrix}=\begin{pmatrix}N/2&1&&\\ N/4&\ddots&\ddots&\\ &\ddots&\ddots&1\\ &&N/4&N/2\\ \end{pmatrix}\begin{pmatrix}K_{0}(x)\\ K_{1}(x)\\ \vdots\\ K_{N-1}(x)\\ K_{N}(x)\end{pmatrix}+\begin{pmatrix}0\\ 0\\ \vdots\\ 0\\ K_{N+1}(x)\end{pmatrix}.

The latter can be symmetrized by multiplying by the diagonal matrix

D={\rm diag\,}(1,c_{1}^{-1/2},(c_{1}c_{2})^{-1/2},\dots,(c_{1}c_{2}\dots c_{N})^{-1/2}),

where

c_{k}=\frac{(N+1-k)k}{4},\quad k=1,2,\dots,N,

on the left and by introducing the normalized Krawtchouk polynomials \widehat{K}_{n}(x):

\widehat{K}_{0}(x):=K_{0}(x),\quad\widehat{K}_{n}(x):=\frac{K_{n}(x)}{\sqrt{c_{1}}\sqrt{c_{2}}\dots\sqrt{c_{n}}},\quad n=1,\dots,N.

In addition, we can make the diagonal of the underlying Jacobi matrix vanish by making the shift:

\mathcal{X}(x):=(\widehat{K}_{0}(x+N/2),\dots,\widehat{K}_{N}(x+N/2))^{\top}=D(K_{0}(x+N/2),\dots,K_{N}(x+N/2))^{\top}.

Then (3.1) takes the form

(3.2) x\mathcal{X}(x)=J\mathcal{X}(x)+(c_{1}c_{2}\dots c_{N})^{-1/2}K_{N+1}(x+N/2){\bf e}_{N},

where

(3.3) J=\begin{pmatrix}0&\sqrt{N}/2&&\\ \sqrt{N}/2&\ddots&\ddots&\\ &\ddots&\ddots&\sqrt{N}/2\\ &&\sqrt{N}/2&0\\ \end{pmatrix}.

From (3.2) we see that the zeroes

x_{s}=s-\frac{N}{2},\quad s=0,1,2,\dots,N

of K_{N+1}(x+N/2) are the eigenvalues of the Jacobi matrix J given by (3.3) and therefore the vectors \mathcal{X}(x_{s}) are the eigenvectors of J. Now we are in the position to prove the desired result.
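A quick numerical check of (3.2) and (3.3) (an illustration added here, assuming numpy): generate \mathcal{X}(x_{s}) by the symmetrized recurrence, with couplings \sqrt{c_{k}}=\sqrt{(N+1-k)k}/2, and verify that it is an eigenvector of J for the eigenvalue x_{s}=s-N/2:

```python
import numpy as np

N = 5
b = np.array([np.sqrt((N - k) * (k + 1)) / 2 for k in range(N)])  # sqrt(c_{k+1})
J = np.diag(b, 1) + np.diag(b, -1)

def X(x):
    # normalized Krawtchouk values via the symmetrized recurrence
    # x v_n = b_{n-1} v_{n-1} + b_n v_{n+1} (zero diagonal after the shift)
    v = np.zeros(N + 1)
    v[0] = 1.0
    v[1] = x / b[0]
    for n in range(1, N):
        v[n + 1] = (x * v[n] - b[n - 1] * v[n - 1]) / b[n]
    return v

for s in range(N + 1):
    xs = s - N / 2
    v = X(xs)
    assert np.allclose(J @ v, xs * v)  # X(x_s) is an eigenvector for x_s
print("all eigenvector checks passed")
```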

Theorem 3.1.

Assume that a persymmetric Jacobi matrix J of order N+1 realizes PST and that its eigenvalues satisfy

\lambda_{1}-\lambda_{0}=\lambda_{2}-\lambda_{1}=\dots=\lambda_{N}-\lambda_{N-1},

then ESE is impossible.

Proof.

Applying the same argument we used in Section 2, we can assume without loss of generality that

\lambda_{1}-\lambda_{0}=\lambda_{2}-\lambda_{1}=\dots=\lambda_{N}-\lambda_{N-1}=1.

In addition, as before, the choice of \lambda_{0} essentially functions as a phase factor for the unitary matrix in question. As such, we can freely set \lambda_{0}=-\frac{N}{2}, which yields

\lambda_{s}=x_{s}=s-\frac{N}{2},\quad s=0,1,2,\dots,N.

Since J realizes PST, it is persymmetric. As a result, it coincides with the Jacobi matrix J given by (3.3) due to the Hochstadt uniqueness theorem [7, Theorem 3]. Then, we can write

{\bf e}_{0}=\sum_{s=0}^{N}f_{s}\frac{\mathcal{X}(x_{s})}{\|\mathcal{X}(x_{s})\|},\quad f_{s}=\left({\bf e}_{0},\frac{\mathcal{X}(x_{s})}{\|\mathcal{X}(x_{s})\|}\right).

Clearly, we have that

f_{s}=\frac{1}{\|\mathcal{X}(x_{s})\|}\left({\bf e}_{0},\mathcal{X}(x_{s})\right)=\frac{1}{\|\mathcal{X}(x_{s})\|}K_{0}(x_{s}+N/2)=\frac{1}{\|\mathcal{X}(x_{s})\|}.

As a result we get that

{\bf e}_{0}=\sum_{s=0}^{N}\frac{\mathcal{X}(x_{s})}{\|\mathcal{X}(x_{s})\|^{2}}.

Next, by the Christoffel-Darboux formula we obtain that

\|\mathcal{X}(x_{s})\|^{2}=\sum_{l=0}^{N}\widehat{K}_{l}(s)^{2}=h_{N}^{-1}\widehat{K}_{N}(s)K_{N+1}^{\prime}(s),\quad s=0,1,\dots,N,

where h_{N} is a positive constant whose value is not important for now. Consequently, we arrive at the following relation

(3.4) e^{-iJt}{\bf e}_{0}=\sum_{s=0}^{N}w(x_{s})e^{-ix_{s}t}\mathcal{X}(x_{s})

where, for convenience, we have set

w(x_{s})=\frac{h_{N}}{\widehat{K}_{N}(s)K_{N+1}^{\prime}(s)},\quad s=0,1,2,\dots,N.

In particular, we have

(3.5) x_{0}(t)=(e^{-iJt}{\bf e}_{0},{\bf e}_{0})=\sum_{s=0}^{N}w(x_{s})e^{-ix_{s}t}.

Since J is persymmetric, we know that \widehat{K}_{N}(s)=(-1)^{N+s} (see [4, Corollary 6.2] or [8]) and so we can write

(3.6) w(x_{s})=\frac{(-1)^{N+s}h_{N}}{K_{N+1}^{\prime}(s)},\quad s=0,1,2,\dots,N.

Since

K_{N+1}(x)=\prod_{k=0}^{N}(x-k),

we have

K_{N+1}^{\prime}(x)=\sum_{s=0}^{N}\prod_{k=0,k\neq s}^{N}(x-k)

and hence

K_{N+1}^{\prime}(s)=\prod_{k=0,k\neq s}^{N}(s-k).

By induction we get that

\frac{(-1)^{N+s}}{K_{N+1}^{\prime}(s)}=\frac{(-1)^{N+s}}{\prod_{k=0,k\neq s}^{N}(s-k)}=\frac{1}{s!(N-s)!}

and thus we can write

x_{0}(t)=\sum_{s=0}^{N}\frac{h_{N}}{s!(N-s)!}e^{-ix_{s}t}.

Now, if J has ESE then x_{0}(t)=0 for some 0<t<T_{0}, that is,

x_{0}(t)=\sum_{s=0}^{N}\frac{h_{N}}{s!(N-s)!}e^{-ix_{s}t}=\sum_{s=0}^{N}\frac{h_{N}}{s!(N-s)!}e^{-i(s-N/2)t}=0,

and multiplying the above relation by \frac{N!}{h_{N}\cdot 2^{N}} yields

\sum_{s=0}^{N}\frac{1}{2^{N}}{N\choose s}(e^{i\frac{t}{2}})^{N-s}(e^{-i\frac{t}{2}})^{s}=\cos^{N}\left(\frac{t}{2}\right)=0.

Therefore, we see that \cos\left(\frac{t}{2}\right)=0. We now claim that this implies that

|x_{N}(t)|=|(e^{-iJt}{\bf e}_{0},{\bf e}_{N})|=1.

Firstly, note that if \cos(\frac{t}{2})=0 then \sin(\frac{t}{2})=\pm 1. Then, similarly to what we did to get the expansion of {\bf e}_{0}, we obtain

{\bf e}_{N}=\sum_{s=0}^{N}(-1)^{N+s}\frac{\mathcal{X}(x_{s})}{\|\mathcal{X}(x_{s})\|^{2}}

and hence

(3.7) x_{N}(t)=(e^{-iJt}{\bf e}_{0},{\bf e}_{N})=\sum_{s=0}^{N}(-1)^{N+s}w(x_{s})e^{-ix_{s}t}=\sum_{s=0}^{N}(-1)^{N+s}\frac{h_{N}}{s!(N-s)!}e^{-ix_{s}t}.

Then, evaluating (3.5) at t=0, we get

x_{0}(0)=\sum_{s=0}^{N}w(x_{s})=1

and so, using the formula for w(x_{s}) derived above, we have

h_{N}=\frac{1}{\sum_{s=0}^{N}\frac{1}{s!(N-s)!}}

so that

\frac{h_{N}2^{N}}{N!}=\frac{2^{N}}{\sum_{s=0}^{N}\frac{N!}{s!(N-s)!}}=1

and hence h_{N}=\frac{N!}{2^{N}}. Therefore, multiplying (3.7) by \left(\frac{-1}{i}\right)^{N}, we have

\left(\frac{-1}{i}\right)^{N}x_{N}(t)=\sum_{s=0}^{N}\left(\frac{1}{2i}\right)^{N}{N\choose s}(-e^{-i\frac{t}{2}})^{s}(e^{i\frac{t}{2}})^{N-s}=\sin^{N}\left(\frac{t}{2}\right)=(\pm 1)^{N}.

Therefore, for a persymmetric matrix with equidistant spectrum exhibiting PST, if there exists a time t such that x_{0}(t)=0 then |x_{N}(t)|=1, that is, PST occurs at time t. This directly implies that early state exclusion is impossible. ∎
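The key identity x_{0}(t)=\cos^{N}(t/2) obtained in the proof can be confirmed numerically (illustrative sketch, assuming numpy); since \cos^{N}(t/2) has no zeros on (0,\pi), the absence of ESE is then visible directly:

```python
import numpy as np

N = 5
b = np.array([np.sqrt((N - k) * (k + 1)) / 2 for k in range(N)])
J = np.diag(b, 1) + np.diag(b, -1)  # the matrix (3.3)
lam, V = np.linalg.eigh(J)

ts = np.linspace(0.0, np.pi, 101)
# x_0(t) = sum_s V[0,s]^2 e^{-i lambda_s t}
x0 = np.array([V[0] @ (np.exp(-1j * t * lam) * V[0]) for t in ts])
print(np.allclose(x0, np.cos(ts / 2) ** N))  # True
```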

4. Appearance of Early State Exclusion for Some Non-Equidistant Spectra

In this section we construct a family of Jacobi matrices that realize PST and have ESE. To this end, we need to recall some basics of Chebyshev polynomials, which are defined by the relation

T_{n}(x)=\cos(n\theta),\quad x=\cos\theta

for n=0,1,\dots. If the range of the variable x is the interval [-1,1], then the range of the corresponding variable \theta can be taken as [0,\pi]. It is not so difficult to verify that they satisfy the orthogonality relation

(4.1) \int_{-1}^{1}T_{n}(x)T_{k}(x)\frac{dx}{\sqrt{1-x^{2}}}=0,\quad n\neq k.

It is well known that all the zeroes of T_{n} belong to (-1,1). Moreover, it is also known that at most one zero of the quasi-orthogonal polynomial

T_{n+1}(x)+AT_{n}(x)

of rank 1 lies outside (-1,1), see [1, Chapter II, Theorem 5.2]. One can easily adapt one of the available proofs of this fact to a more general linear combination, which we will do here for the convenience of the reader.

Lemma 4.1.

The quasi-orthogonal polynomial

Q(x):=A_{n+1}T_{n+1}(x)+A_{n}T_{n}(x)+\dots+A_{k}T_{k}(x),\quad A_{k}\neq 0,\quad k\leq n+1,

has at least k distinct zeros on (-1,1).

Proof.

Assume that y_{r}, r=1,\dots,l, are the points of the interval (-1,1) at which Q changes sign and also assume that l<k. Then the polynomial

Q(x)\prod_{r=1}^{l}(x-y_{r})

does not change sign on (-1,1) and hence

(4.2) \int_{-1}^{1}Q(x)\prod_{r=1}^{l}(x-y_{r})\frac{dx}{\sqrt{1-x^{2}}}\neq 0.

On the other hand, since \deg\prod_{r=1}^{l}(x-y_{r})<k, from (4.1) we get

\int_{-1}^{1}Q(x)\prod_{r=1}^{l}(x-y_{r})\frac{dx}{\sqrt{1-x^{2}}}=\sum_{j=k}^{n+1}A_{j}\int_{-1}^{1}T_{j}(x)\prod_{r=1}^{l}(x-y_{r})\frac{dx}{\sqrt{1-x^{2}}}=0,

which contradicts (4.2). Therefore, Q must change sign at least k times on (-1,1). ∎
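A quick numerical spot-check of Lemma 4.1 (illustrative only; the coefficients A_{5}=1 and A_{3}=0.3 are an arbitrary choice): count the sign changes of Q=T_{5}+0.3\,T_{3} on a fine grid of (-1,1).

```python
import numpy as np

xs = np.linspace(-1, 1, 20001)[1:-1]
# Q = A_5 T_5 + A_3 T_3, so k = 3 in the notation of the lemma
Q = np.cos(5 * np.arccos(xs)) + 0.3 * np.cos(3 * np.arccos(xs))
sign_changes = np.count_nonzero(np.diff(np.sign(Q)))
print(sign_changes >= 3)  # Lemma 4.1 predicts at least 3 zeros in (-1, 1)
```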

The above statement allows us to prove the following result.

Theorem 4.2.

Let N=2n-1, where n>1 is a positive integer. Then there exists a Jacobi matrix of order N+1 that realizes PST at time T_{0} and has ESE. Moreover, for a given positive integer m, it is possible to construct such a Jacobi matrix so that it has m cases of ESE, i.e. there exist m times \{t_{j}\}_{j=1}^{m} such that x_{0}(t_{j})=0 and t_{j}<T_{0}.

Proof.

Fix two positive integers n>1 and m, and define a finite sequence of numbers \{\lambda_{s}\}_{s=0}^{2n-1} in the following way:

(4.3) \lambda_{n+k}=\frac{2m+2k+1}{2},\quad k=0,1,\dots,n-1

and

(4.4) \lambda_{k}=-\lambda_{2n-1-k},\quad k=0,1,\dots,n-1.

In other words, this sequence has the property that the gaps between the eigenvalues are all the same except the middle one:

\lambda_{n}-\lambda_{n-1}=2m+1,\quad\lambda_{k+1}-\lambda_{k}=1,\quad k\neq n-1.

We note that equations (4.3) and (4.4) give

\lambda_{s}=-\frac{2n+2m-2s-1}{2},\quad s=0,1,\dots,n-1.

Next, by the Hochstadt theorem [7, Theorem 3], there is a unique persymmetric Jacobi matrix J of order 2n whose eigenvalues are the \lambda_{s}'s. Evidently, the eigenvalues satisfy (1.2) with T_{0}=\pi and thus J realizes PST with the earliest transfer time T_{0}=\pi. This matrix also defines a finite family of orthogonal polynomials like in the case of Krawtchouk polynomials discussed in Section 3. In fact, following similar arguments, we can establish that

(4.5) x_{0}(t)=(e^{-iJt}{\bf e}_{0},{\bf e}_{0})=\sum_{s=0}^{2n-1}w(\lambda_{s})e^{-i\lambda_{s}t}

with w(\lambda_{s})=w(\lambda_{2n-1-s}) for s=0,1,2,\dots,2n-1. As a result, we get

x_{0}(t)=\sum_{s=0}^{n-1}2\,w(\lambda_{s})\cos\left((2n+2m-2s-1)\frac{t}{2}\right),

which is a linear combination of Chebyshev polynomials. Namely, setting x=\cos(t/2), the linear combination takes the form

x_{0}(t)=A_{2m+1}T_{2m+1}(x)+\dots+A_{2n+2m-1}T_{2n+2m-1}(x),

where A_{2m+1}=2w(\lambda_{n-1})\neq 0. By Lemma 4.1, the linear combination, as a polynomial in x, has at least 2m+1 distinct zeroes on (-1,1). In turn, this implies that x_{0}(t), as a function of t, has at least 2m+1 zeroes on (0,2\pi). We know that one of them is at t=T_{0}=\pi and since

|x_{0}(t+\pi)|=|x_{0}(t-\pi)|,

we have that there are at least m zeroes on (0,\pi), meaning that there are at least m cases of ESE. ∎
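The construction can be illustrated numerically without solving the inverse problem explicitly (a sketch under the assumption that, as in (3.6), the persymmetric weights are given by w(\lambda_{s})\propto(-1)^{N+s}/P^{\prime}(\lambda_{s}) with P(\lambda)=\prod_{k}(\lambda-\lambda_{k})); for the sample values n=3, m=2 one finds at least m zeroes of x_{0} on (0,\pi):

```python
import numpy as np

n, m = 3, 2  # sample parameters
N = 2 * n - 1
# spectrum (4.3)-(4.4): unit gaps except a middle gap of size 2m+1
lam = np.array([(2 * m + 2 * k + 1) / 2 for k in range(n)])
lam = np.concatenate((-lam[::-1], lam))

# persymmetric weights, analogous to (3.6), with P the characteristic polynomial
Pp = np.array([np.prod([l - mu for mu in lam if mu != l]) for l in lam])
w = np.array([(-1) ** (N + s) for s in range(N + 1)]) / Pp
w = w / w.sum()  # normalize so that x_0(0) = 1

ts = np.linspace(1e-4, np.pi - 1e-4, 20001)
x0 = np.real(np.exp(-1j * np.outer(ts, lam)) @ w)
zeros = np.count_nonzero(np.diff(np.sign(x0)))
print(zeros >= m)  # at least m early exclusion times before T0 = pi
```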

Remark 4.3.

It is worth noting that one can establish the existence of Jacobi matrices with ESE by applying a method of spectral surgery (see [8]) to Krawtchouk polynomials. Specifically, for an odd integer N, one may "remove" the two innermost eigenvalues of an (N+3)×(N+3) Jacobi matrix corresponding to Krawtchouk polynomials and obtain a new (N+1)×(N+1) Jacobi matrix whose initial state probability amplitude, \tilde{x}_{0}(t), is given in terms of the probability amplitude, x_{0}(t), of an (N+1)×(N+1) Jacobi matrix corresponding to Krawtchouk polynomials. Namely,

(4.6) \tilde{x}_{0}(t)=\left(\left(\frac{N+1}{2}+1\right)\cos(t)-\left(\frac{N+1}{2}\right)\right)\cos^{N}\left(\frac{t}{2}\right),

where \cos^{N}\left(\frac{t}{2}\right)=x_{0}(t). Then we can see the presence of ESE from equation (4.6). In particular, if we set N=3 we get the matrix (2.1) and the corresponding probability amplitude (2.2). Nevertheless, the proof that we propose in this section is very general and gives a great variety of outcomes. It relies only on the symmetry of the eigenvalues and a gap in the middle. It should be noted that the proof does not require the rest of the gaps to be uniform.
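For N=3, formula (4.6) can be compared against the amplitude computed directly from the matrix (2.1); the following check (illustrative, assuming numpy) confirms the agreement with (2.2):

```python
import numpy as np

r = np.sqrt(15) / 2
J = np.diag([r, 1.0, r], 1) + np.diag([r, 1.0, r], -1)  # the matrix (2.1)
lam, V = np.linalg.eigh(J)

ts = np.linspace(0.0, np.pi, 101)
# x_0(t) from the spectral decomposition of J
x0 = np.exp(-1j * np.outer(ts, lam)) @ (V[0] ** 2)
formula = (3 * np.cos(ts) - 2) * np.cos(ts / 2) ** 3  # (4.6) with (N+1)/2 = 2
print(np.allclose(x0, formula))  # True
```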

Acknowledgments. The authors acknowledge the support of the NSF DMS grant 2349433. They are also indebted to Prof. Luke Rogers and Prof. Sasha Teplyaev for the opportunity to participate in the UConn REU site in Summer 2024 and for many helpful and productive discussions. M.D. was also partially supported by the NSF DMS grant 2008844. Finally, M.D. is extremely grateful to Prof. Christino Tamon for bringing the ESE problem to his attention.

References

  • [1] T.S. Chihara. An Introduction to Orthogonal Polynomials. Vol. 13. New York-London-Paris: Gordon and Breach Science Publishers, 1978.
  • [2] E. Connelly, N. Grammel, M. Kraut, L. Serazo, C. Tamon, Universality in perfect state transfer, Linear Algebra and its Applications, Volume 531, 2017, p. 516-532.
  • [3] M. Derevyagin, G. V. Dunne, G. Mograby, A. Teplyaev, Perfect quantum state transfer on diamond fractal graphs, Quantum Inf. Process. 19 (2020), no. 9, 328.
  • [4] M. Derevyagin, A. Minenkova, N. Sun, A Theorem of Joseph-Alfred Serret and its Relation to Perfect Quantum State Transfer, Expo. Math. 39 (2021), no. 3, 480–499.
  • [5] A. Kay, A Review of Perfect State Transfer and its Application as a Constructive Tool, Int. J. Quantum Inf. 8 (2010), 641–676.
  • [6] S. Kirkland, D. McLaren, R. Pereira, S. Plosker and X. Zhang, Perfect quantum state transfer in weighted paths with potentials (loops) using orthogonal polynomials, Linear and Multilinear Algebra 67 (2019), 1043–1061.
  • [7] H. Hochstadt, On the construction of a Jacobi matrix from spectral data, Linear Algebra and its Applications, 8 (5), 1974, p. 435-446.
  • [8] L. Vinet, A. Zhedanov, How to construct spin chains with perfect state transfer, Physical Review A 85, 012323 (2012).
  • [9] W. Xie, A. Kay, and C. Tamon, Breaking the speed limit for perfect quantum state transfer, Phys. Rev. A 108, 012408