
Factorization for the full-line matrix Schrödinger equation and a unitary transformation to the half-line scattering
(Research partially supported by project PAPIIT-DGAPA UNAM IN100321)

Tuncay Aktosun
Department of Mathematics
University of Texas at Arlington
Arlington, TX 76019-0408, USA

Ricardo Weder
Departamento de Física Matemática
Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas
Universidad Nacional Autónoma de México
Apartado Postal 20-126, IIMAS-UNAM
Ciudad de México 01000, México
Emeritus Fellow, Sistema Nacional de Investigadores
Abstract

The scattering matrix for the full-line matrix Schrödinger equation is analyzed when the corresponding matrix-valued potential is selfadjoint, integrable, and has a finite first moment. The matrix-valued potential is decomposed into a finite number of fragments, and a factorization formula is presented expressing the matrix-valued scattering coefficients in terms of the matrix-valued scattering coefficients for the fragments. Using the factorization formula, some explicit examples are provided illustrating that in general the left and right matrix-valued transmission coefficients are unequal. A unitary transformation is established between the full-line matrix Schrödinger operator and the half-line matrix Schrödinger operator with a particular selfadjoint boundary condition, by relating the full-line and half-line potentials appropriately. Using that unitary transformation, relations are established between the full-line and half-line quantities such as the Jost solutions, the physical solutions, and the scattering matrices. Exploiting the connection between the corresponding full-line and half-line scattering matrices, Levinson’s theorem on the full line is proved and is related to Levinson’s theorem on the half line.

Dedicated to Vladimir Aleksandrovich Marchenko to commemorate his 100th birthday

AMS Subject Classification (2020): 34L10, 34L25, 34L40, 47A40, 81U99

Keywords: matrix-valued Schrödinger equation on the line, factorization of scattering data, matrix-valued scattering coefficients, Levinson’s theorem, unitary transformation to the half-line scattering

1 Introduction

In this paper we consider certain aspects of the matrix-valued Schrödinger equation on the full line

-\psi^{\prime\prime}+V(x)\,\psi=k^{2}\psi,\qquad x\in\mathbb{R}, (1.1)

where x represents the spatial coordinate, \mathbb{R}:=(-\infty,+\infty), the prime denotes the x-derivative, and the wavefunction \psi may be an n\times n matrix or a column vector with n components. Here, n can be chosen as any fixed positive integer, including the special value n=1, which corresponds to the scalar case. The potential V is assumed to be an n\times n matrix-valued function of x satisfying the selfadjointness condition

V(x)^{\dagger}=V(x),\qquad x\in\mathbb{R}, (1.2)

with the dagger denoting the matrix adjoint (matrix transpose and complex conjugation), and also belonging to the Faddeev class, i.e. satisfying the condition

\int_{-\infty}^{\infty}dx\,(1+|x|)\,|V(x)|<+\infty, (1.3)

with |V(x)| denoting the operator norm of the matrix V(x). Since all matrix norms are equivalent for n\times n matrices, any other matrix norm can be used in (1.3). We use the conventions and notations from [5] and refer the reader to that reference for further details.
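For instance, in the scalar case n=1 the exponential potential V(x)=e^{-|x|} belongs to the Faddeev class, since

\int_{-\infty}^{\infty}dx\,(1+|x|)\,e^{-|x|}=2\int_{0}^{\infty}dx\,(1+x)\,e^{-x}=4<+\infty,

whereas a slowly decaying tail such as V(x)=(1+|x|)^{-1} violates (1.3) because it is not even integrable on \mathbb{R}.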

Let us decompose the potential V into two pieces V_{1} and V_{2} as

V(x)=V_{1}(x)+V_{2}(x),\qquad x\in\mathbb{R}, (1.4)

where we have defined

V_{1}(x):=\begin{cases}V(x),\qquad x<0,\\ 0,\qquad x>0,\end{cases}\quad V_{2}(x):=\begin{cases}0,\qquad x<0,\\ V(x),\qquad x>0.\end{cases} (1.5)

We refer to V1V_{1} and V2V_{2} as the left and right fragments of V,V, respectively. We are interested in relating the n×nn\times n matrix-valued scattering coefficients corresponding to VV to the n×nn\times n matrix-valued scattering coefficients corresponding to V1V_{1} and V2,V_{2}, respectively. This is done in Theorem 3.3 by presenting a factorization formula in terms of the transition matrix Λ(k)\Lambda(k) defined in (2.24) and an equivalent factorization formula in terms of Σ(k)\Sigma(k) defined in (2.25). In fact, in Theorem 3.6 the scattering coefficients for VV themselves are expressed in terms of the scattering coefficients for the fragments V1V_{1} and V2.V_{2}.

The factorization result of Theorem 3.3 corresponds to the case where the potential V is fragmented into two pieces at the fragmentation point x=0 as in (1.5). In Theorem 3.4 the factorization result of Theorem 3.3 is generalized by showing that the fragmentation point can be chosen arbitrarily. In Corollary 3.5 the factorization formula is further generalized to the case where the matrix potential is arbitrarily decomposed into any finite number of fragments, by expressing the transition matrices \Lambda(k) and \Sigma(k) in terms of the respective transition matrices corresponding to the fragments.

Since the potential fragments are either supported on a half line or compactly supported, the corresponding reflection coefficients have meromorphic extensions in kk from the real axis to the upper-half or lower-half complex plane or to the whole complex plane, respectively. Thus, it is more efficient to deal with the scattering coefficients of the fragments than the scattering coefficients of the whole potential. Furthermore, it is easier to determine the scattering coefficients when the corresponding potential is compactly supported or supported on a half line. We refer the reader to [2] for a proof of the factorization formula for the full-line scalar Schrödinger equation and to [4] for a generalization of that factorization formula. A composition rule has been presented in [9] to express the factorization of the scattering matrix of a quantum graph in terms of the scattering matrices of its subgraphs.

The factorization formulas are useful in the analysis of direct and inverse scattering problems because they help us to understand the scattering from the whole potential in terms of the scattering from the fragments of that potential. We recall that the direct scattering problem on the half line consists of the determination of the scattering matrix and the bound-state information when the potential and the boundary condition are known. The goal in the inverse scattering problem on the half line is to recover the potential and the boundary condition when the scattering matrix and the bound-state information are available. The direct and inverse scattering problems on the full line are similar to those on the half line except for the absence of a boundary condition. For the direct and inverse scattering theory for the half-line matrix Schrödinger equation, we refer the reader to the seminal monograph [1] of Agranovich and Marchenko when the Dirichlet boundary condition is used and to our recent monograph [6] when the general selfadjoint boundary condition is used.

The factorization formulas yield an efficient method to determine the scattering coefficients for the whole potential by first determining the scattering coefficients for the potential fragments. For example, in Section 4 we provide some explicit examples to illustrate that the matrix-valued transmission coefficients from the left and from the right are not necessarily equal even though the equality holds in the scalar case. In our examples, we determine the left and right transmission coefficients explicitly with the help of the factorization result of Theorem 3.6. Since the resulting explicit expressions for those transmission coefficients are extremely lengthy, we use the symbolic software Mathematica on the first author’s personal computer in order to obtain those lengthy expressions. Even though those transmission coefficients could in principle be determined without using the factorization result, it is difficult or impossible to compute them directly and to demonstrate that they differ by using Mathematica on the same personal computer.
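The phenomenon can also be observed numerically. The sketch below is a minimal illustration, not the symbolic computation of Section 4: it assumes a 2\times 2 piecewise-constant potential built from two hermitian fragments A on (0,1) and B on (1,2) (an illustrative choice of ours), propagates the Jost solutions exactly across each constant layer with a matrix exponential, extracts the scattering coefficients from (2.5) and (2.6), and then checks the determinant equality (2.22) and the unitarity (2.49) while exhibiting that the two transmission coefficients differ.

import numpy as np
from scipy.linalg import expm

n = 2
I = np.eye(n)
A = np.array([[2.0, 1.0], [1.0, 0.0]])   # hermitian fragment supported on (0,1); illustrative choice
B = np.array([[1.0, 0.0], [0.0, 3.0]])   # hermitian fragment supported on (1,2); does not commute with A
k = 1.3                                  # a fixed real value of the spectral parameter

def propagate(Y, Yp, V0, d):
    # Exact propagation of (Y, Y') across a layer of width |d| with constant potential V0,
    # using the first-order system (2.14) and a matrix exponential; d < 0 propagates backwards.
    M = np.block([[np.zeros((n, n)), I], [V0 - k**2 * I, np.zeros((n, n))]])
    W = expm(M * d) @ np.vstack([Y, Yp])
    return W[:n, :], W[n:, :]

# Left Jost solution: f_l(k,x) = e^{ikx} I for x >= 2; propagate back to x = 0,
# where (2.5) holds exactly because the potential vanishes for x < 0.
Y, Yp = np.exp(2j * k) * I, 1j * k * np.exp(2j * k) * I
Y, Yp = propagate(Y, Yp, B, -1.0)
Y, Yp = propagate(Y, Yp, A, -1.0)
Tl = np.linalg.inv((Y + Yp / (1j * k)) / 2)
L = ((Y - Yp / (1j * k)) / 2) @ Tl

# Right Jost solution: f_r(k,x) = e^{-ikx} I for x <= 0; propagate forward to x = 2,
# where (2.6) holds exactly because the potential vanishes for x > 2.
Y, Yp = I + 0j, -1j * k * I
Y, Yp = propagate(Y, Yp, A, 1.0)
Y, Yp = propagate(Y, Yp, B, 1.0)
Tr = np.linalg.inv(np.exp(2j * k) * (Y - Yp / (1j * k)) / 2)
R = np.exp(-2j * k) * ((Y + Yp / (1j * k)) / 2) @ Tr

S = np.block([[Tl, R], [L, Tr]])
print("||Tl - Tr||        =", np.linalg.norm(Tl - Tr))                        # nonzero in general
print("det Tl - det Tr    =", np.linalg.det(Tl) - np.linalg.det(Tr))          # vanishes, cf. (2.22)
print("||S S^dagger - I|| =", np.linalg.norm(S @ S.conj().T - np.eye(2 * n))) # vanishes, cf. (2.49)

For commuting fragments, for instance when B is a polynomial in A, the printed difference between the left and right transmission coefficients collapses to zero within rounding, in line with the scalar intuition; for noncommuting fragments it is generically nonzero.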

In Section 2.4 of [6] we have presented a unitary transformation between the half-line 2n×2n2n\times 2n matrix Schrödinger operator with a specific selfadjoint boundary condition and the full-line n×nn\times n matrix Schrödinger operator with a point interaction at x=0.x=0. Using that unitary transformation and by introducing a full-line physical solution with a point interaction, in [13] the relation between the half-line and full-line physical solutions and the relation between the half-line and full-line scattering matrices have been found. In our current paper, we elaborate on such relations in the absence of a point interaction on the full line, and we show how the half-line physical solution and the standard full-line physical solution are related to each other and also show how the half-line and full-line scattering matrices are related to each other. We also show how some other relevant half-line and full-line quantities are related to each other. For example, in Theorem 5.2 we establish the relationship between the determinant of the half-line Jost matrix and the determinant of the full-line transmission coefficient, and in Theorem 5.3 we provide the relationship between the determinant of the half-line scattering matrix and the determinant of the full-line transmission coefficient. Those results help us establish in Section 6 Levinson’s theorem for the full-line n×nn\times n matrix Schrödinger operator and compare it with Levinson’s theorem for the corresponding half-line 2n×2n2n\times 2n matrix Schrödinger operator.

We have the following remark on the notation we use. There are many equations in our paper of the form

a(k)=b(k),\qquad k\in\mathbb{R}\setminus\{0\}, (1.6)

where a(k)a(k) and b(k)b(k) are continuous in k{0},k\in\mathbb{R}\setminus\{0\}, the quantity a(k)a(k) is also continuous at k=0,k=0, but b(k)b(k) is not necessarily well defined at k=0.k=0. We write (1.6) as

a(k)=b(k),\qquad k\in\mathbb{R}, (1.7)

with the understanding that we interpret a(0)=b(0)a(0)=b(0) in the sense that by the continuity of a(k)a(k) at k=0,k=0, the limit of b(k)b(k) at k=0k=0 exists and we have

a(0)=\displaystyle\lim_{k\to 0}b(k). (1.8)

Our paper is organized as follows. In Section 2 we provide the relevant results related to the scattering problem for (1.1), and this is done by presenting the Jost solutions, the physical solutions, the scattering coefficients, the scattering matrix for (1.1), and the relevant properties of those quantities. In Section 3 we establish our factorization formula by relating the scattering coefficients for the full-line potential VV to the scattering coefficients for the fragments of the potential. We also provide an alternate version of the factorization formula. In Section 4 we elaborate on the relation between the matrix-valued left and right transmission coefficients, and we provide some explicit examples to illustrate that they are in general not equal to each other. In Section 5, we elaborate on the unitary transformation connecting the half-line and full-line matrix Schrödinger operators, and we establish the connections between the half-line and full-line scattering matrices, the half-line and full-line Jost solutions, the half-line and full-line physical solutions, the half-line Jost matrix and the full-line transmission coefficients, and the half-line and full-line zero-energy solutions that are bounded. Finally, in Section 6 we present Levinson’s theorem for the full-line matrix Schrödinger operator and compare it with Levinson’s theorem for the half-line matrix Schrödinger operator with a selfadjoint boundary condition.

2 The full-line matrix Schrödinger equation

In this section we provide a summary of the results relevant to the scattering problem for the full-line matrix Schrödinger equation (1.1). In particular, for (1.1) we introduce the pair of Jost solutions fl(k,x)f_{\rm{l}}(k,x) and fr(k,x);f_{\rm{r}}(k,x); the pair of physical solutions Ψl(k,x)\Psi_{\rm{l}}(k,x) and Ψr(k,x);\Psi_{\rm{r}}(k,x); the four n×nn\times n matrix-valued scattering coefficients Tl(k),T_{\rm{l}}(k), Tr(k),T_{\rm{r}}(k), L(k),L(k), and R(k);R(k); the 2n×2n2n\times 2n scattering matrix S(k);S(k); the three relevant 2n×2n2n\times 2n matrices Fl(k,x),F_{\rm{l}}(k,x), Fr(k,x),F_{\rm{r}}(k,x), and G(k,x);G(k,x); and the pair of 2n×2n2n\times 2n transition matrices Λ(k)\Lambda(k) and Σ(k).\Sigma(k). For the preliminaries needed in this section, we refer the reader to [5]. For some earlier results on the full-line matrix Schrödinger equation, the reader can consult [10, 11, 12].

When the potential V in (1.1) satisfies (1.2) and (1.3), there are two particular n\times n matrix-valued solutions to (1.1), known as the left and right Jost solutions and denoted by f_{\rm{l}}(k,x) and f_{\rm{r}}(k,x), respectively, satisfying the respective spatial asymptotics

f_{\rm{l}}(k,x)=e^{ikx}\left[I+o(1)\right],\quad f^{\prime}_{\rm{l}}(k,x)=ik\,e^{ikx}\left[I+o(1)\right],\qquad x\to+\infty, (2.1)
f_{\rm{r}}(k,x)=e^{-ikx}\left[I+o(1)\right],\quad f^{\prime}_{\rm{r}}(k,x)=-ik\,e^{-ikx}\left[I+o(1)\right],\qquad x\to-\infty. (2.2)

For each x,x\in\mathbb{R}, the Jost solutions have analytic extensions in kk from the real axis \mathbb{R} of the complex plane \mathbb{C} to the upper-half complex plane +\mathbb{C}^{+} and they are continuous in k+¯,k\in\overline{\mathbb{C}^{+}}, where we have defined +¯:=+.\overline{\mathbb{C}^{+}}:=\mathbb{C}^{+}\cup\mathbb{R}. As listed in (2.1)–(2.3) of [5], we have the integral representations for fl(k,x)f_{\rm{l}}(k,x) and fr(k,x),f_{\rm{r}}(k,x), which are respectively given by

e^{-ikx}f_{\rm{l}}(k,x)=I+\displaystyle\frac{1}{2ik}\displaystyle\int_{x}^{\infty}dy\left[e^{2ik(y-x)}-1\right]V(y)\,e^{-iky}f_{\rm{l}}(k,y), (2.3)
e^{ikx}f_{\rm{r}}(k,x)=I+\displaystyle\frac{1}{2ik}\displaystyle\int_{-\infty}^{x}dy\left[e^{2ik(x-y)}-1\right]V(y)\,e^{iky}f_{\rm{r}}(k,y). (2.4)
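A single formal iteration of (2.3), obtained by replacing e^{-iky}f_{\rm{l}}(k,y) under the integral sign by its zeroth-order value I, gives the first Born-type approximation

e^{-ikx}f_{\rm{l}}(k,x)\approx I+\displaystyle\frac{1}{2ik}\displaystyle\int_{x}^{\infty}dy\left[e^{2ik(y-x)}-1\right]V(y),

and the analogous iteration applies to (2.4); such approximations are only mentioned here for orientation and are not used in the sequel.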

For each fixed k\in\mathbb{R}\setminus\{0\}, the combined 2n columns of f_{\rm{l}}(k,x) and f_{\rm{r}}(k,x) form a fundamental set for (1.1), and any solution to (1.1) can be expressed as a linear combination of those column-vector solutions. The n\times n matrix-valued scattering coefficients are defined [5] in terms of the spatial asymptotics of the Jost solutions via

f_{\rm{l}}(k,x)=e^{ikx}\,T_{\rm{l}}(k)^{-1}+e^{-ikx}L(k)\,T_{\rm{l}}(k)^{-1}+o(1),\qquad x\to-\infty, (2.5)
f_{\rm{r}}(k,x)=e^{-ikx}\,T_{\rm{r}}(k)^{-1}+e^{ikx}R(k)\,T_{\rm{r}}(k)^{-1}+o(1),\qquad x\to+\infty, (2.6)

where Tl(k)T_{\rm{l}}(k) is the left transmission coefficient, Tr(k)T_{\rm{r}}(k) is the right transmission coefficient, L(k)L(k) is the left reflection coefficient, and R(k)R(k) is the right reflection coefficient. With the help of (2.3)–(2.6), it can be shown that

f^{\prime}_{\rm{l}}(k,x)=ik\,e^{ikx}\,T_{\rm{l}}(k)^{-1}-ik\,e^{-ikx}L(k)\,T_{\rm{l}}(k)^{-1}+o(1),\qquad x\to-\infty, (2.7)
f^{\prime}_{\rm{r}}(k,x)=-ik\,e^{-ikx}\,T_{\rm{r}}(k)^{-1}+ik\,e^{ikx}R(k)\,T_{\rm{r}}(k)^{-1}+o(1),\qquad x\to+\infty. (2.8)

As a result, the leading asymptotics in (2.7) and (2.8) are obtained by taking the xx-derivatives of the leading asymptotics in (2.5) and (2.6), respectively. From (2.16) and (2.17) of [5] it follows that the matrices Tl(k)T_{\rm{l}}(k) and Tr(k)T_{\rm{r}}(k) are invertible for k{0}.k\in\mathbb{R}\setminus\{0\}. We remark that (2.5) and (2.6) hold in the limit k0k\to 0 as their left-hand sides are continuous at k=0k=0 even though each of the four matrices Tl(k)1,T_{\rm{l}}(k)^{-1}, Tr(k)1,T_{\rm{r}}(k)^{-1}, L(k)Tl(k)1,L(k)\,T_{\rm{l}}(k)^{-1}, and R(k)Tr(k)1R(k)\,T_{\rm{r}}(k)^{-1} generically behaves as O(1/k)O(1/k) when k0.k\to 0. The 2n×2n2n\times 2n scattering matrix for (1.1) is defined as

S(k):=\begin{bmatrix}T_{\rm{l}}(k)&R(k)\\ L(k)&T_{\rm{r}}(k)\end{bmatrix},\qquad k\in\mathbb{R}. (2.9)

As seen from Theorem 3.1 and Theorem 4.6 of [5], the scattering coefficients can be defined first via (2.5) and (2.6) for k{0}k\in\mathbb{R}\setminus\{0\} and then their domain can be extended in a continuous way to include k=0.k=0. When the potential VV in (1.1) satisfies (1.2) and (1.3), from [11, 12] and the comments below (2.21) in [5] it follows that the n×nn\times n matrix-valued transmission coefficients Tl(k)T_{\rm{l}}(k) and Tr(k)T_{\rm{r}}(k) have meromorphic extensions in kk from \mathbb{R} to +\mathbb{C}^{+} where any possible poles are simple and can only occur on the positive imaginary axis. On the other hand, the domains of the n×nn\times n matrix-valued reflection coefficients L(k)L(k) and R(k)R(k) cannot be extended from kk\in\mathbb{R} unless the potential VV in (1.1) satisfies further restrictions besides (1.2) and (1.3).

The left and right physical solutions to (1.1), denoted by Ψl(k,x)\Psi_{\rm{l}}(k,x) and Ψr(k,x),\Psi_{\rm{r}}(k,x), are the two particular n×nn\times n matrix-valued solutions that are related to the Jost solutions fl(k,x)f_{\rm{l}}(k,x) and fr(k,x),f_{\rm{r}}(k,x), respectively, as

\Psi_{\rm{l}}(k,x):=f_{\rm{l}}(k,x)\,T_{\rm{l}}(k),\quad\Psi_{\rm{r}}(k,x):=f_{\rm{r}}(k,x)\,T_{\rm{r}}(k), (2.10)

and, as seen from (2.1), (2.2), (2.5), (2.6), and (2.10), they satisfy the spatial asymptotics

\Psi_{\rm{l}}(k,x)=e^{ikx}T_{\rm{l}}(k)+o(1),\quad\Psi_{\rm{r}}(k,x)=e^{-ikx}I+e^{ikx}R(k)+o(1),\qquad x\to+\infty, (2.11)
\Psi_{\rm{l}}(k,x)=e^{ikx}I+e^{-ikx}L(k)+o(1),\quad\Psi_{\rm{r}}(k,x)=e^{-ikx}T_{\rm{r}}(k)+o(1),\qquad x\to-\infty. (2.12)

Using (2.11) and (2.12), we can interpret Ψl(k,x)\Psi_{\rm{l}}(k,x) in terms of the matrix-valued plane wave eikxIe^{ikx}I of unit amplitude sent from x=,x=-\infty, the matrix-valued reflected plane wave eikxL(k)e^{-ikx}L(k) of amplitude L(k)L(k) at x=,x=-\infty, and the matrix-valued transmitted plane wave eikxTl(k)e^{ikx}\,T_{\rm{l}}(k) of amplitude Tl(k)T_{\rm{l}}(k) at x=+.x=+\infty. Similarly, the physical solution Ψr(k,x)\Psi_{\rm{r}}(k,x) can be interpreted in terms of the matrix-valued plane wave eikxIe^{-ikx}I of unit amplitude sent from x=+,x=+\infty, the matrix-valued reflected plane wave eikxR(k)e^{ikx}R(k) of amplitude R(k)R(k) at x=+,x=+\infty, and the matrix-valued transmitted plane wave eikxTr(k)e^{-ikx}\,T_{\rm{r}}(k) of amplitude Tr(k)T_{\rm{r}}(k) at x=.x=-\infty.

The following proposition is a useful consequence of Theorem 7.3 on p. 28 of [7], where we use det\det and tr to denote the matrix determinant and the matrix trace, respectively.

Proposition 2.1.

Assume that the n×nn\times n matrix-valued potential VV in (1.1) satisfies (1.2) and (1.3). Then, for any pair of n×nn\times n matrix-valued solutions φ(k,x)\varphi(k,x) and ψ(k,x)\psi(k,x) to (1.1), the 2n×2n2n\times 2n matrix determinant given by

\det\begin{bmatrix}\varphi(k,x)&\psi(k,x)\\ \varphi^{\prime}(k,x)&\psi^{\prime}(k,x)\end{bmatrix}, (2.13)

is independent of xx and can only depend on k.k.

Proof.

The second-order matrix-valued systems (1.1) for φ(k,x)\varphi(k,x) and ψ(k,x)\psi(k,x) can be expressed as a first-order 2n×2n2n\times 2n matrix-valued system as

\displaystyle\frac{d}{dx}\begin{bmatrix}\varphi(k,x)&\psi(k,x)\\ \varphi^{\prime}(k,x)&\psi^{\prime}(k,x)\end{bmatrix}=\begin{bmatrix}0&I\\ V(x)-k^{2}&0\end{bmatrix}\begin{bmatrix}\varphi(k,x)&\psi(k,x)\\ \varphi^{\prime}(k,x)&\psi^{\prime}(k,x)\end{bmatrix}. (2.14)

From Theorem 7.3 on p. 28 of [7] we know that (2.14) implies

\begin{split}\displaystyle\frac{d}{dx}&\left(\det\begin{bmatrix}\varphi(k,x)&\psi(k,x)\\ \varphi^{\prime}(k,x)&\psi^{\prime}(k,x)\end{bmatrix}\right)\\ &=\left(\text{tr}\begin{bmatrix}0&I\\ V(x)-k^{2}&0\end{bmatrix}\right)\left(\det\begin{bmatrix}\varphi(k,x)&\psi(k,x)\\ \varphi^{\prime}(k,x)&\psi^{\prime}(k,x)\end{bmatrix}\right).\end{split} (2.15)

Since the coefficient matrix in (2.14) has zero trace, the right-hand side of (2.15) is zero, and hence the vanishing of the left-hand side of (2.15) shows that the determinant in (2.13) cannot depend on x. ∎
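Written in integrated form, (2.15) is the Abel–Liouville identity: denoting the 2n\times 2n matrix appearing in (2.13) by W(k,x), we have

\det[W(k,x)]=\det[W(k,x_{0})]\,\exp\left(\displaystyle\int_{x_{0}}^{x}dy\,\text{tr}\begin{bmatrix}0&I\\ V(y)-k^{2}&0\end{bmatrix}\right)=\det[W(k,x_{0})],\qquad x,x_{0}\in\mathbb{R},

since the trace under the integral sign vanishes identically.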

Since kk appears as k2k^{2} in (1.1) and we already know that fl(k,x)f_{\rm{l}}(k,x) and fr(k,x)f_{\rm{r}}(k,x) are solutions to (1.1), it follows that fl(k,x)f_{\rm{l}}(-k,x) and fr(k,x)f_{\rm{r}}(-k,x) are also solutions to (1.1). From the known properties of the Jost solutions fl(k,x)f_{\rm{l}}(k,x) and fr(k,x)f_{\rm{r}}(k,x) we conclude that, for each x,x\in\mathbb{R}, the solutions fl(k,x)f_{\rm{l}}(-k,x) and fr(k,x)f_{\rm{r}}(-k,x) have analytic extensions in kk from the real axis \mathbb{R} to the lower-half complex plane \mathbb{C}^{-} and they are continuous in k¯,k\in\overline{\mathbb{C}^{-}}, where we have defined ¯:=.\overline{\mathbb{C}^{-}}:=\mathbb{C}^{-}\cup\mathbb{R}. In terms of the four solutions fl(k,x),f_{\rm{l}}(k,x), fr(k,x),f_{\rm{r}}(k,x), fl(k,x),f_{\rm{l}}(-k,x), fr(k,x)f_{\rm{r}}(-k,x) to (1.1), we introduce three useful 2n×2n2n\times 2n matrices as

F_{\rm{l}}(k,x):=\begin{bmatrix}f_{\rm{l}}(k,x)&f_{\rm{l}}(-k,x)\\ f^{\prime}_{\rm{l}}(k,x)&f^{\prime}_{\rm{l}}(-k,x)\end{bmatrix},\qquad x\in\mathbb{R}, (2.16)
F_{\rm{r}}(k,x):=\begin{bmatrix}f_{\rm{r}}(-k,x)&f_{\rm{r}}(k,x)\\ f^{\prime}_{\rm{r}}(-k,x)&f^{\prime}_{\rm{r}}(k,x)\end{bmatrix},\qquad x\in\mathbb{R}, (2.17)
G(k,x):=\begin{bmatrix}f_{\rm{l}}(k,x)&f_{\rm{r}}(k,x)\\ f^{\prime}_{\rm{l}}(k,x)&f^{\prime}_{\rm{r}}(k,x)\end{bmatrix},\qquad x\in\mathbb{R}. (2.18)

Since the kk-domains of those four solutions are known, from (2.16) and (2.17) we see that Fl(k,x)F_{\rm{l}}(k,x) and Fr(k,x)F_{\rm{r}}(k,x) are defined when kk\in\mathbb{R} and from (2.18) we see that G(k,x)G(k,x) is defined when k+¯.k\in\overline{\mathbb{C}^{+}}.

In the next proposition, with the help of Proposition 2.1 we show that the determinant of each of the three matrices defined in (2.16), (2.17), and (2.18), respectively, is independent of x, and in fact we determine the values of those determinants explicitly. We also establish the equality of the determinants of the left and right transmission coefficients.

Proposition 2.2.

Assume that the n×nn\times n matrix-valued potential VV in (1.1) satisfies (1.2) and (1.3). We then have the following:

  1. (a)

    The determinants of the 2n×2n2n\times 2n matrices Fl(k,x),F_{\rm{l}}(k,x), Fr(k,x),F_{\rm{r}}(k,x), and G(k,x)G(k,x) defined in (2.16), (2.17), and (2.18), respectively, are independent of x,x, and we have

    \det\left[F_{\rm{l}}(k,x)\right]=(-2ik)^{n},\qquad k\in\mathbb{R}, (2.19)
    \det\left[F_{\rm{r}}(k,x)\right]=(-2ik)^{n},\qquad k\in\mathbb{R}, (2.20)
    \det\left[G(k,x)\right]=\frac{(-2ik)^{n}}{\det\left[T_{\rm{r}}(k)\right]},\qquad k\in\overline{\mathbb{C}^{+}}. (2.21)
  2. (b)

    The n×nn\times n matrix-valued left transmission coefficient Tl(k)T_{\rm{l}}(k) and right transmission coefficient Tr(k)T_{\rm{r}}(k) have the same determinant, i.e. we have

    \det[T_{\rm{l}}(k)]=\det[T_{\rm{r}}(k)],\qquad k\in\overline{\mathbb{C}^{+}}. (2.22)
  3. (c)

    We have

    \det[\Lambda(k)]=1,\quad\det[\Sigma(k)]=1,\qquad k\in\mathbb{R}, (2.23)

    where the 2n×2n2n\times 2n matrices Λ(k)\Lambda(k) and Σ(k)\Sigma(k) are defined in terms of the scattering coefficients in (2.9) as

    \Lambda(k):=\begin{bmatrix}T_{\rm{l}}(k)^{-1}&L(-k)\,T_{\rm{l}}(-k)^{-1}\\ L(k)\,T_{\rm{l}}(k)^{-1}&T_{\rm{l}}(-k)^{-1}\end{bmatrix},\qquad k\in\mathbb{R}\setminus\{0\}, (2.24)
    \Sigma(k):=\begin{bmatrix}T_{\rm{r}}(-k)^{-1}&R(k)\,T_{\rm{r}}(k)^{-1}\\ R(-k)\,T_{\rm{r}}(-k)^{-1}&T_{\rm{r}}(k)^{-1}\end{bmatrix},\qquad k\in\mathbb{R}\setminus\{0\}. (2.25)
Proof.

As already mentioned, the four quantities f_{\rm{l}}(k,x), f_{\rm{r}}(k,x), f_{\rm{l}}(-k,x), f_{\rm{r}}(-k,x) are all solutions to (1.1), and hence from Proposition 2.1 it follows that the determinants of F_{\rm{l}}(k,x), F_{\rm{r}}(k,x), and G(k,x) are independent of x. In their respective k-domains, we can evaluate each of those determinants as x\to+\infty and as x\to-\infty, and the two evaluations must yield the same value. Using (2.1) in (2.16) we get

\det\left[F_{\rm{l}}(k,x)\right]=\det\begin{bmatrix}e^{ikx}I&e^{-ikx}I\\ ik\,e^{ikx}I&-ik\,e^{-ikx}I\end{bmatrix}+o(1),\qquad x\to+\infty. (2.26)

Using an elementary row block operation on the block matrix appearing on the right-hand side of (2.26), i.e. by multiplying the first row block by ikIikI and subtracting the resulting row block from the second row block, we obtain (2.19). In a similar way, using (2.2) in (2.17), we have

\det\left[F_{\rm{r}}(k,x)\right]=\det\begin{bmatrix}e^{ikx}I&e^{-ikx}I\\ ik\,e^{ikx}I&-ik\,e^{-ikx}I\end{bmatrix}+o(1),\qquad x\to-\infty, (2.27)

and again using the aforementioned elementary row block operation on the matrix appearing on the right-hand side of (2.27), we get (2.20). For k{0}k\in\mathbb{R}\setminus\{0\} using (2.1) and (2.6)–(2.8) in (2.18), we obtain

G(k,x)=K_{\rm{r}}(k,x)+o(1),\qquad x\to+\infty, (2.28)

where we have defined

K_{\rm{r}}(k,x):=\begin{bmatrix}e^{ikx}I&e^{-ikx}\,T_{\rm{r}}(k)^{-1}+e^{ikx}R(k)\,T_{\rm{r}}(k)^{-1}\\ ik\,e^{ikx}I&-ik\,e^{-ikx}\,T_{\rm{r}}(k)^{-1}+ik\,e^{ikx}R(k)\,T_{\rm{r}}(k)^{-1}\end{bmatrix}.

Using the aforementioned elementary row block operation on the matrix Kr(k,x),K_{\rm{r}}(k,x), we get

\det\left[K_{\rm{r}}(k,x)\right]=\det\begin{bmatrix}e^{ikx}I&e^{-ikx}\,T_{\rm{r}}(k)^{-1}+e^{ikx}R(k)\,T_{\rm{r}}(k)^{-1}\\ 0&-2ik\,e^{-ikx}\,T_{\rm{r}}(k)^{-1}\end{bmatrix}, (2.29)

where 0 denotes the n×nn\times n zero matrix. From (2.28) and (2.29) we get (2.21). Thus, the proof of (a) is complete. Let us now turn to the proof of (b). Using (2.2), (2.5), and (2.7) in (2.18), we obtain

G(k,x)=K_{\rm{l}}(k,x)+o(1),\qquad x\to-\infty, (2.30)

where we have defined

K_{\rm{l}}(k,x):=\begin{bmatrix}e^{ikx}\,T_{\rm{l}}(k)^{-1}+e^{-ikx}L(k)\,T_{\rm{l}}(k)^{-1}&e^{-ikx}I\\ ik\,e^{ikx}\,T_{\rm{l}}(k)^{-1}-ik\,e^{-ikx}L(k)\,T_{\rm{l}}(k)^{-1}&-ik\,e^{-ikx}I\end{bmatrix}.

Using an elementary row block operation on the matrix Kl(k,x),K_{\rm{l}}(k,x), i.e. by multiplying the first row block by ikIikI and adding the resulting row block to the second row block, we get

\det\left[K_{\rm{l}}(k,x)\right]=\det\begin{bmatrix}e^{ikx}\,T_{\rm{l}}(k)^{-1}+e^{-ikx}L(k)\,T_{\rm{l}}(k)^{-1}&e^{-ikx}I\\ 2ik\,e^{ikx}\,T_{\rm{l}}(k)^{-1}&0\end{bmatrix}. (2.31)

Interchanging the first and second row blocks of the matrix appearing on the right-hand side of (2.31), we have

\det\left[K_{\rm{l}}(k,x)\right]=(-1)^{n}\det\begin{bmatrix}2ik\,e^{ikx}\,T_{\rm{l}}(k)^{-1}&0\\ e^{ikx}\,T_{\rm{l}}(k)^{-1}+e^{-ikx}L(k)\,T_{\rm{l}}(k)^{-1}&e^{-ikx}I\end{bmatrix}. (2.32)

Then, from (2.30) and (2.32) we conclude that

\det\left[G(k,x)\right]=\frac{(-2ik)^{n}}{\det\left[T_{\rm{l}}(k)\right]},\qquad k\in\mathbb{R}\setminus\{0\}, (2.33)

and by comparing (2.21) and (2.33) we obtain (2.22) for k\in\mathbb{R}\setminus\{0\}. However, for each fixed x\in\mathbb{R} the quantity G(k,x) is analytic in k\in\mathbb{C}^{+} and continuous in k\in\overline{\mathbb{C}^{+}}. Thus, (2.22) holds for k\in\overline{\mathbb{C}^{+}} and the proof of (b) is also complete. Using (2.5) and (2.7) in (2.16), and by exploiting the fact that \det[F_{\rm{l}}(k,x)] is independent of x, we evaluate \det[F_{\rm{l}}(k,x)] as x\to-\infty, and we get the equality

\det\left[F_{\rm{l}}(k,x)\right]=\det\begin{bmatrix}q_{1}&q_{2}\\ q_{3}&q_{4}\end{bmatrix}, (2.34)

where we have defined

q_{1}:=e^{ikx}T_{\rm{l}}(k)^{-1}+e^{-ikx}L(k)\,T_{\rm{l}}(k)^{-1},
q_{2}:=e^{-ikx}T_{\rm{l}}(-k)^{-1}+e^{ikx}L(-k)\,T_{\rm{l}}(-k)^{-1},
q_{3}:=ik\,e^{ikx}T_{\rm{l}}(k)^{-1}-ik\,e^{-ikx}L(k)\,T_{\rm{l}}(k)^{-1},
q_{4}:=-ik\,e^{-ikx}T_{\rm{l}}(-k)^{-1}+ik\,e^{ikx}L(-k)\,T_{\rm{l}}(-k)^{-1}.

Using two consecutive elementary row block operations on the matrix on the right-hand side of (2.34), i.e. by multiplying the first row block by ikIikI and adding the resulting row block to the second row block and then by dividing the resulting second row block by 2ik2ik and subtracting the resulting row block from the first row block, we can write (2.34) as

\det\left[F_{\rm{l}}(k,x)\right]=\det\begin{bmatrix}e^{-ikx}L(k)\,T_{\rm{l}}(k)^{-1}&e^{-ikx}T_{\rm{l}}(-k)^{-1}\\ 2ik\,e^{ikx}T_{\rm{l}}(k)^{-1}&2ik\,e^{ikx}L(-k)\,T_{\rm{l}}(-k)^{-1}\end{bmatrix}. (2.35)

By interchanging the first and second row blocks of the matrix appearing on the right-hand side of (2.35) and simplifying the determinant of the resulting matrix, from (2.35) we obtain

\det\left[F_{\rm{l}}(k,x)\right]=(-2ik)^{n}\det\begin{bmatrix}T_{\rm{l}}(k)^{-1}&L(-k)\,T_{\rm{l}}(-k)^{-1}\\ L(k)\,T_{\rm{l}}(k)^{-1}&T_{\rm{l}}(-k)^{-1}\end{bmatrix}. (2.36)

Comparing (2.19), (2.24), and (2.36), we see that the first equality in (2.23) holds. We remark that the matrix Λ(k)\Lambda(k) defined in (2.24) behaves as O(1/k)O(1/k) as k0.k\to 0. On the other hand, we observe that the first equality of (2.23) holds at k=0k=0 by the continuity argument based on (1.6)–(1.8). In a similar way, using (2.6) and (2.8) in (2.17), and by exploiting the fact that det[Fr(k,x)]\det[F_{\rm{r}}(k,x)] is independent of x,x, we evaluate det[Fr(k,x)]\det[F_{\rm{r}}(k,x)] as x+x\to+\infty and we get the equality

\det\left[F_{\rm{r}}(k,x)\right]=\det\begin{bmatrix}q_{5}&q_{6}\\ q_{7}&q_{8}\end{bmatrix}, (2.37)

where we have defined

q_{5}:=e^{ikx}\,T_{\rm{r}}(-k)^{-1}+e^{-ikx}R(-k)\,T_{\rm{r}}(-k)^{-1},
q_{6}:=e^{-ikx}\,T_{\rm{r}}(k)^{-1}+e^{ikx}R(k)\,T_{\rm{r}}(k)^{-1},
q_{7}:=ik\,e^{ikx}\,T_{\rm{r}}(-k)^{-1}-ik\,e^{-ikx}R(-k)\,T_{\rm{r}}(-k)^{-1},
q_{8}:=-ik\,e^{-ikx}\,T_{\rm{r}}(k)^{-1}+ik\,e^{ikx}R(k)\,T_{\rm{r}}(k)^{-1}.

Using two elementary row block operations on the matrix on the right-hand side of (2.37), i.e. by multiplying the first row block by ikIikI and adding the resulting row block to the second row block and then by dividing the resulting second row block by 2ik2ik and subtracting the resulting row block from the first row block, we can write (2.37) as

\det\left[F_{\rm{r}}(k,x)\right]=\det\begin{bmatrix}e^{-ikx}R(-k)\,T_{\rm{r}}(-k)^{-1}&e^{-ikx}\,T_{\rm{r}}(k)^{-1}\\ 2ik\,e^{ikx}\,T_{\rm{r}}(-k)^{-1}&2ik\,e^{ikx}R(k)\,T_{\rm{r}}(k)^{-1}\end{bmatrix}. (2.38)

By interchanging the first and second row blocks of the matrix appearing on the right-hand side of (2.38) and simplifying the determinant of the resulting matrix, from (2.38) we obtain

\det\left[F_{\rm{r}}(k,x)\right]=(-2ik)^{n}\det\begin{bmatrix}T_{\rm{r}}(-k)^{-1}&R(k)\,T_{\rm{r}}(k)^{-1}\\ R(-k)\,T_{\rm{r}}(-k)^{-1}&T_{\rm{r}}(k)^{-1}\end{bmatrix}. (2.39)

Comparing (2.20), (2.25), and (2.39), we see that the second equality in (2.23) holds. We remark that the matrix Σ(k)\Sigma(k) defined in (2.25) behaves as O(1/k)O(1/k) as k0,k\to 0, but the second equality of (2.23) holds at k=0k=0 by the continuity argument expressed in (1.6)–(1.8). ∎

We note that, in the proof of Proposition 2.2, instead of using elementary row block operations, we could alternatively make use of the matrix factorization formula involving a Schur complement. Such a factorization formula is given by

\begin{bmatrix}M_{1}&M_{2}\\ M_{3}&M_{4}\end{bmatrix}=\begin{bmatrix}I&0\\ M_{3}\,M_{1}^{-1}&I\end{bmatrix}\begin{bmatrix}M_{1}&0\\ 0&M_{4}-M_{3}\,M_{1}^{-1}\,M_{2}\end{bmatrix}\begin{bmatrix}I&M_{1}^{-1}\,M_{2}\\ 0&I\end{bmatrix}, (2.40)

which corresponds to (1.11) on p. 17 of [8]. In the alternative proof of Proposition 2.2, it is sufficient to use (2.40) in the special case where the block matrices M1,M_{1}, M2,M_{2}, M3,M_{3}, M4M_{4} have the same size n×nn\times n and M1M_{1} is invertible.
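In particular, since the first and third factors on the right-hand side of (2.40) are block triangular with unit diagonal blocks, taking determinants in (2.40) yields

\det\begin{bmatrix}M_{1}&M_{2}\\ M_{3}&M_{4}\end{bmatrix}=\det[M_{1}]\,\det\left[M_{4}-M_{3}\,M_{1}^{-1}\,M_{2}\right],

which is the form of the Schur-complement identity that would replace the elementary row block operations in such an alternative proof.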

Let us use [f(x);g(x)] to denote the Wronskian of two n\times n matrix-valued functions of x, where we have defined

[f(x);g(x)]:=f(x)\,g^{\prime}(x)-f^{\prime}(x)\,g(x).

Given any two n\times n matrix-valued solutions \xi(k,x) and \psi(k,x) to (1.1), one can directly verify that the Wronskian [\xi(\pm k^{*},x)^{\dagger};\psi(k,x)] is independent of x, where we use an asterisk to denote complex conjugation. Evaluating the Wronskians involving the Jost solutions to (1.1) as x\to+\infty and also as x\to-\infty, respectively, for k\in\mathbb{R} we obtain

[f_{\rm{l}}(k,x)^{\dagger};f_{\rm{l}}(k,x)]=2ikI=2ik\,[T_{\rm{l}}(k)^{\dagger}]^{-1}\left[I-L(k)^{\dagger}L(k)\right]T_{\rm{l}}(k)^{-1}, (2.41)
[f_{\rm{l}}(-k,x)^{\dagger};f_{\rm{l}}(k,x)]=0=2ik\,[T_{\rm{l}}(-k)^{\dagger}]^{-1}\left[L(-k)^{\dagger}-L(k)\right]T_{\rm{l}}(k)^{-1}, (2.42)
[f_{\rm{r}}(k,x)^{\dagger};f_{\rm{r}}(k,x)]=-2ik\,[T_{\rm{r}}(k)^{\dagger}]^{-1}\left[I-R(k)^{\dagger}R(k)\right]T_{\rm{r}}(k)^{-1}=-2ikI, (2.43)
[f_{\rm{r}}(-k,x)^{\dagger};f_{\rm{r}}(k,x)]=2ik\,[T_{\rm{r}}(-k)^{\dagger}]^{-1}\left[R(k)-R(-k)^{\dagger}\right]T_{\rm{r}}(k)^{-1}=0, (2.44)
[f_{\rm{l}}(k,x)^{\dagger};f_{\rm{r}}(k,x)]=2ik\,R(k)\,T_{\rm{r}}(k)^{-1}=-2ik\,[T_{\rm{l}}(k)^{\dagger}]^{-1}L(k)^{\dagger}, (2.45)

and for k+¯k\in\overline{\mathbb{C}^{+}} we get

[f_{\rm{l}}(-k^{*},x)^{\dagger};f_{\rm{r}}(k,x)]=-2ik\,T_{\rm{r}}(k)^{-1}=-2ik\,[T_{\rm{l}}(-k^{\ast})^{\dagger}]^{-1}, (2.46)
[f_{\rm{r}}(-k^{*},x)^{\dagger};f_{\rm{l}}(k,x)]=2ik\,[T_{\rm{r}}(-k^{\ast})^{\dagger}]^{-1}=2ik\,T_{\rm{l}}(k)^{-1}. (2.47)

With the help of (2.41), (2.43), and (2.45), we can prove that

S(k)^{\dagger}S(k)=\begin{bmatrix}I&0\\ 0&I\end{bmatrix},\qquad k\in\mathbb{R}, (2.48)

where S(k) is the scattering matrix defined in (2.9). From (2.48) we conclude that S(k) is unitary, and hence we also have

S(k)\,S(k)^{\dagger}=\begin{bmatrix}I&0\\ 0&I\end{bmatrix},\qquad k\in\mathbb{R}. (2.49)

From (2.42), (2.44), and (2.46) we obtain

L(-k)=L(k)^{\dagger},\quad R(-k)=R(k)^{\dagger},\quad T_{\rm{l}}(-k)=T_{\rm{r}}(k)^{\dagger},\qquad k\in\mathbb{R}, (2.50)

which can equivalently be expressed as

S(k)^{\dagger}=QS(-k)Q,\qquad k\in\mathbb{R},

where Q is the constant 2n\times 2n matrix given by

Q:=\begin{bmatrix}0&I\\ I&0\end{bmatrix}. (2.51)

We remark that the matrix Q is equal to its own inverse.

For easy referencing, for k\in\mathbb{R} we write (2.48) and (2.49) explicitly as

\begin{bmatrix}T_{\rm{l}}(k)^{\dagger}\,T_{\rm{l}}(k)+L(k)^{\dagger}\,L(k)&T_{\rm{l}}(k)^{\dagger}\,R(k)+L(k)^{\dagger}\,T_{\rm{r}}(k)\\ R(k)^{\dagger}\,T_{\rm{l}}(k)+T_{\rm{r}}(k)^{\dagger}\,L(k)&T_{\rm{r}}(k)^{\dagger}\,T_{\rm{r}}(k)+R(k)^{\dagger}\,R(k)\end{bmatrix}=\begin{bmatrix}I&0\\ 0&I\end{bmatrix}, (2.52)
\begin{bmatrix}T_{\rm{l}}(k)\,T_{\rm{l}}(k)^{\dagger}+R(k)\,R(k)^{\dagger}&T_{\rm{l}}(k)\,L(k)^{\dagger}+R(k)\,T_{\rm{r}}(k)^{\dagger}\\ L(k)\,T_{\rm{l}}(k)^{\dagger}+T_{\rm{r}}(k)\,R(k)^{\dagger}&T_{\rm{r}}(k)\,T_{\rm{r}}(k)^{\dagger}+L(k)\,L(k)^{\dagger}\end{bmatrix}=\begin{bmatrix}I&0\\ 0&I\end{bmatrix}. (2.53)
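For orientation, in the scalar case n=1 the diagonal entries of (2.52) reduce to the familiar conservation relations

|T_{\rm{l}}(k)|^{2}+|L(k)|^{2}=1,\quad|T_{\rm{r}}(k)|^{2}+|R(k)|^{2}=1,\qquad k\in\mathbb{R},

and the off-diagonal entries encode the corresponding phase relations among the scattering coefficients.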

In Proposition 2.2(a) we have evaluated the determinants of the matrices Fl(k,x),F_{\rm{l}}(k,x), Fr(k,x),F_{\rm{r}}(k,x), and G(k,x)G(k,x) appearing in (2.16), (2.17), and (2.18), respectively. In the next theorem we determine their matrix inverses explicitly in terms of the Jost solutions fl(k,x)f_{\rm{l}}(k,x) and fr(k,x).f_{\rm{r}}(k,x).

Theorem 2.3.

Assume that the n×nn\times n matrix-valued potential VV in (1.1) satisfies (1.2) and (1.3). We then have the following:

  1. (a)

    The 2n×2n2n\times 2n matrix Fl(k,x)F_{\rm{l}}(k,x) defined in (2.16) is invertible when k{0},k\in\mathbb{R}\setminus\{0\}, and we have

    F_{\rm{l}}(k,x)^{-1}=\displaystyle\frac{1}{2ik}\begin{bmatrix}-f^{\prime}_{\rm{l}}(k,x)^{\dagger}&f_{\rm{l}}(k,x)^{\dagger}\\ f^{\prime}_{\rm{l}}(-k,x)^{\dagger}&-f_{\rm{l}}(-k,x)^{\dagger}\end{bmatrix},\qquad k\in\mathbb{R}\setminus\{0\}. (2.54)
  2. (b)

    The 2n×2n2n\times 2n matrix Fr(k,x)F_{\rm{r}}(k,x) defined in (2.17) is invertible when k{0},k\in\mathbb{R}\setminus\{0\}, and we have

    F_{\rm{r}}(k,x)^{-1}=\displaystyle\frac{1}{2ik}\begin{bmatrix}-f^{\prime}_{\rm{r}}(-k,x)^{\dagger}&f_{\rm{r}}(-k,x)^{\dagger}\\ f^{\prime}_{\rm{r}}(k,x)^{\dagger}&-f_{\rm{r}}(k,x)^{\dagger}\end{bmatrix},\qquad k\in\mathbb{R}\setminus\{0\}. (2.55)
  3. (c)

    The 2n×2n2n\times 2n matrix G(k,x)G(k,x) given in (2.18) is invertible for k+¯{0}k\in\overline{\mathbb{C}^{+}}\setminus\{0\} except at the poles of the determinant of the transmission coefficient Tl(k),T_{\rm{l}}(k), where such poles can only occur on the positive imaginary axis in the complex kk-plane, those kk-values correspond to the bound states of (1.1), and the number of such poles is finite. Furthermore, for k+¯{0}k\in\overline{\mathbb{C}^{+}}\setminus\{0\} we have

    G(k,x)^{-1}=-\displaystyle\frac{1}{2ik}\begin{bmatrix}T_{\rm{l}}(k)&0\\ 0&T_{\rm{r}}(k)\end{bmatrix}\begin{bmatrix}f^{\prime}_{\rm{r}}(-k^{\ast},x)^{\dagger}&-f_{\rm{r}}(-k^{\ast},x)^{\dagger}\\ -f^{\prime}_{\rm{l}}(-k^{\ast},x)^{\dagger}&f_{\rm{l}}(-k^{\ast},x)^{\dagger}\end{bmatrix}. (2.56)
Proof.

From (2.19) we see that the determinant of Fl(k,x)F_{\rm{l}}(k,x) vanishes only at k=0k=0 on the real axis. We confirm (2.54) by direct verification. This is done by first postmultiplying the right-hand side of (2.54) with the matrix Fl(k,x)F_{\rm{l}}(k,x) given in (2.16) and then by simplifying the block entries of the resulting matrix product with the help of (2.41) and (2.42). Thus, the proof of (a) is complete. We prove (b) in a similar manner. From (2.20) we observe that det[Fr(k,x)]\det[F_{\rm{r}}(k,x)] is nonzero when kk\in\mathbb{R} except at k=0.k=0. Consequently, the matrix Fr(k,x)F_{\rm{r}}(k,x) is invertible when k{0}.k\in\mathbb{R}\setminus\{0\}. By postmultiplying the right-hand side of (2.55) by the matrix Fr(k,x)F_{\rm{r}}(k,x) given in (2.17), we simplify the resulting matrix product with the help of (2.43) and (2.44) and verify that we obtain the 2n×2n2n\times 2n identity matrix as the product. Thus, the proof of (b) is also complete. For the proof of (c) we proceed as follows. From (2.21) we observe that det[G(k,x)]\det[G(k,x)] is nonzero when k+¯k\in\overline{\mathbb{C}^{+}} except when k=0k=0 and when det[Tr(k)]\det[T_{\rm{r}}(k)] has poles. From (2.22) we know that the determinants of Tl(k)T_{\rm{l}}(k) and Tr(k)]T_{\rm{r}}(k)] coincide, and from [5] we know that the poles of det[Tl(k)]\det[T_{\rm{l}}(k)] correspond to the kk-values at which the bound states of (1.1) occur. It is also known [5] that the bound-state kk-values can only occur on the positive imaginary axis in the complex kk-plane and that the number of such kk-values is finite. We verify (2.56) directly. That is done by postmultiplying both sides of (2.56) with the matrix G(k,x)G(k,x) defined in (2.18), by simplifying the matrix product by using (2.42), (2.44), (2.46), and (2.47), and by showing that the corresponding product is equal to the 2n×2n2n\times 2n identity matrix. ∎

3 The factorization formulas

In this section we provide a factorization formula for the full-line matrix Schrödinger equation (1.1), by relating the matrix-valued scattering coefficients corresponding to the potential VV appearing in (1.1) to the matrix-valued scattering coefficients corresponding to the fragments of V.V. We also present an alternate version of the factorization formula, which is equivalent to the original version.

We already know that fl(k,x)f_{\rm{l}}(k,x) and fl(k,x)f_{\rm{l}}(-k,x) are both n×nn\times n matrix-valued solutions to (1.1), and from (2.1) we conclude that their combined 2n2n columns form a fundamental set of column-vector solutions to (1.1) when k{0}.k\in\mathbb{R}\setminus\{0\}. Hence, we can express fr(k,x)f_{\rm{r}}(k,x) as a linear combination of those 2n2n columns, and we get

f_{\rm{r}}(k,x)=f_{\rm{l}}(k,x)\,R(k)\,T_{\rm{r}}(k)^{-1}+f_{\rm{l}}(-k,x)\,T_{\rm{r}}(k)^{-1},\qquad k\in\mathbb{R}, (3.1)

where the coefficient matrices are obtained by letting x+x\to+\infty in (3.1) and using (2.1) and (2.6). Note that we have included k=0k=0 in (3.1) by using the continuity of fl(k,x)f_{\rm{l}}(k,x) at k=0k=0 for each fixed x.x\in\mathbb{R}. Similarly, fr(k,x)f_{\rm{r}}(k,x) and fr(k,x)f_{\rm{r}}(-k,x) are both n×nn\times n matrix-valued solutions to (1.1), and from (2.2) we conclude that their combined 2n2n columns form a fundamental set of column-vector solutions to (1.1) when k{0}.k\in\mathbb{R}\setminus\{0\}. Thus, we have

f_{\rm{l}}(k,x)=f_{\rm{r}}(k,x)\,L(k)\,T_{\rm{l}}(k)^{-1}+f_{\rm{r}}(-k,x)\,T_{\rm{l}}(k)^{-1},\qquad k\in\mathbb{R}, (3.2)

where we have obtained the coefficient matrices by letting xx\to-\infty in (3.2) and by using (2.2) and (2.5). Note that we can write (3.1) and (3.2), respectively, as

f_{\rm{l}}(-k,x)=f_{\rm{r}}(k,x)\,T_{\rm{r}}(k)-f_{\rm{l}}(k,x)\,R(k),\qquad k\in\mathbb{R}, (3.3)
f_{\rm{r}}(-k,x)=f_{\rm{l}}(k,x)\,T_{\rm{l}}(k)-f_{\rm{r}}(k,x)\,L(k),\qquad k\in\mathbb{R}. (3.4)

Using (3.2) in (2.16) and comparing the result with (2.17), we see that the matrices Fl(k,x)F_{\rm{l}}(k,x) and Fr(k,x)F_{\rm{r}}(k,x) defined in (2.16) and (2.17), respectively, are related to each other as

F_{\rm{l}}(k,x)=F_{\rm{r}}(k,x)\,\Lambda(k),\qquad k\in\mathbb{R}, (3.5)

where Λ(k)\Lambda(k) is the matrix defined in (2.24). We remark that, even though Λ(k)\Lambda(k) has the behavior O(1/k)O(1/k) as k0,k\to 0, by the continuity the equality in (3.5) holds also at k=0.k=0. In a similar way, by using (3.1) in (2.17) and comparing the result with (2.16), we obtain

F_{\rm{r}}(k,x)=F_{\rm{l}}(k,x)\,\Sigma(k),\qquad k\in\mathbb{R}, (3.6)

where Σ(k)\Sigma(k) is the matrix defined in (2.25). By (2.19) and (2.20), we know that the matrices Fl(k,x)F_{\rm{l}}(k,x) and Fr(k,x)F_{\rm{r}}(k,x) are invertible when k{0}.k\in\mathbb{R}\setminus\{0\}. Thus, from (3.5) and (3.6) we conclude that Λ(k)\Lambda(k) and Σ(k)\Sigma(k) are inverses of each other for each k{0},k\in\mathbb{R}\setminus\{0\}, i.e. we have

\Lambda(k)\,\Sigma(k)=\Sigma(k)\,\Lambda(k)=\begin{bmatrix}I&0\\ 0&I\end{bmatrix},\qquad k\in\mathbb{R}, (3.7)

where the result in (3.7) holds also at k=0k=0 by the continuity. We already know from (2.23) that the determinants of the matrices Λ(k)\Lambda(k) and Σ(k)\Sigma(k) are both equal to 1.1. We remark that (3.7) yields a wealth of relations among the left and right matrix-valued scattering coefficients, which are similar to those given in (2.50), (2.52), and (2.53).
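For instance, equating the (1,1) block entry of \Lambda(k)\,\Sigma(k) in (3.7) to I gives the relation

T_{\rm{l}}(k)^{-1}\,T_{\rm{r}}(-k)^{-1}+L(-k)\,T_{\rm{l}}(-k)^{-1}\,R(-k)\,T_{\rm{r}}(-k)^{-1}=I,\qquad k\in\mathbb{R}\setminus\{0\},

and the remaining block entries of (3.7) can be unraveled in the same manner.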

Using (3.3) and (3.4) in (2.18), we get

G(-k,x)=G(k,x)\begin{bmatrix}-R(k)&T_{\rm{l}}(k)\\ T_{\rm{r}}(k)&-L(k)\end{bmatrix},\qquad k\in\mathbb{R}. (3.8)

We can write (3.8) in terms of the scattering matrix S(k)S(k) appearing in (2.9) as

G(-k,x)=G(k,x)\,J\,S(k)\,J\,Q,\qquad k\in\mathbb{R}, (3.9)

where QQ is the 2n×2n2n\times 2n constant matrix defined in (2.51) and JJ is the 2n×2n2n\times 2n involution matrix defined as

J:=\begin{bmatrix}I&0\\ 0&-I\end{bmatrix}. (3.10)

Note that JJ is equal to its own inverse. By taking the determinants of both sides of (3.9) and using (2.21) and the fact that det[J]=(1)n\det[J]=(-1)^{n} and det[Q]=(1)n,\det[Q]=(-1)^{n}, we obtain the determinant of the scattering matrix as

\det\left[S(k)\right]=\frac{\det\left[T_{\rm{r}}(k)\right]}{\det\left[T_{\rm{r}}(-k)\right]},\qquad k\in\mathbb{R}. (3.11)

Because of (2.22), we can write (3.11) also as

\det\left[S(k)\right]=\frac{\det\left[T_{\rm{l}}(k)\right]}{\det\left[T_{\rm{l}}(-k)\right]},\qquad k\in\mathbb{R}.

From (2.22) and (2.50) we see that

\det\left[T_{\rm{l}}(-k)\right]=\left(\det\left[T_{\rm{l}}(k)\right]\right)^{*},\quad\det\left[T_{\rm{r}}(-k)\right]=\left(\det\left[T_{\rm{r}}(k)\right]\right)^{*},\qquad k\in\mathbb{R}, (3.12)

where we recall that an asterisk is used to denote complex conjugation. Hence, (3.11) and (3.12) imply that

\det\left[S(k)\right]=\frac{\det\left[T_{\rm{l}}(k)\right]}{\left(\det\left[T_{\rm{l}}(k)\right]\right)^{\ast}},\qquad k\in\mathbb{R}. (3.13)

The next proposition indicates how the relevant quantities related to the full-line Schrödinger equation (1.1) are affected when the potential is shifted by b units to the right, i.e. when we replace V(x) in (1.1) by V^{(b)}(x) defined as

V^{(b)}(x):=V(x+b),\qquad b\in\mathbb{R}. (3.14)

We use the superscript (b) to denote the corresponding transformed quantities. The result will be useful in showing that the factorization formulas (3.44) and (3.45) remain unchanged if the potential is decomposed into two pieces at any fragmentation point instead of the fragmentation point x=0 used in (1.4).

Proposition 3.1.

Consider the full-line n×nn\times n matrix Schrödinger equation (1.1) with the n×nn\times n matrix potential VV satisfying (1.2) and (1.3). Under the transformation V(x)V(b)(x)V(x)\mapsto V^{(b)}(x) described in (3.14), the quantities relevant to (1.1) are transformed as follows:

  1. (a)

    The n×nn\times n Jost solutions fl(k,x)f_{\rm{l}}(k,x) and fr(k,x)f_{\rm{r}}(k,x) are transformed into fl(b)(k,x)f_{\rm{l}}^{(b)}(k,x) and fr(b)(k,x),f_{\rm{r}}^{(b)}(k,x), respectively, where we have defined

    f_{\rm{l}}^{(b)}(k,x):=e^{-ikb}f_{\rm{l}}(k,x+b),\quad f_{\rm{r}}^{(b)}(k,x):=e^{ikb}f_{\rm{r}}(k,x+b). (3.15)
  2. (b)

    The n×nn\times n matrix-valued left and right transmission coefficients appearing in (2.5) and (2.6) remain unchanged, i.e. we have

    T_{\rm{l}}^{(b)}(k)=T_{\rm{l}}(k),\quad T_{\rm{r}}^{(b)}(k)=T_{\rm{r}}(k). (3.16)

    The n×nn\times n matrix-valued left and right reflection coefficients L(k)L(k) and R(k)R(k) appearing in (2.5) and (2.6), respectively, are transformed into L(b)(k)L^{(b)}(k) and R(b)(k),R^{(b)}(k), which are defined as

    L^{(b)}(k):=L(k)\,e^{-2ikb},\quad R^{(b)}(k):=R(k)\,e^{2ikb}. (3.17)
  3. (c)

    The 2n×2n2n\times 2n transition matrix Λ(k)\Lambda(k) appearing in (2.24) is transformed into Λ(b)(k)\Lambda^{(b)}(k) given by

    \Lambda^{(b)}(k):=\begin{bmatrix}e^{ikb}I&0\\ 0&e^{-ikb}I\end{bmatrix}\Lambda(k)\begin{bmatrix}e^{-ikb}I&0\\ 0&e^{ikb}I\end{bmatrix}. (3.18)

    The 2n×2n2n\times 2n transition matrix Σ(k)\Sigma(k) appearing in (2.25) is transformed into Σ(b)(k)\Sigma^{(b)}(k) given by

    \Sigma^{(b)}(k):=\begin{bmatrix}e^{ikb}I&0\\ 0&e^{-ikb}I\end{bmatrix}\Sigma(k)\begin{bmatrix}e^{-ikb}I&0\\ 0&e^{ikb}I\end{bmatrix}. (3.19)
Proof.

The matrix fl(b)(k,x)f_{\rm{l}}^{(b)}(k,x) defined in the first equality of (3.15) is the transformed left Jost solution because it satisfies the transformed matrix-valued Schrödinger equation

-\psi^{\prime\prime}(k,x)+V(x+b)\,\psi(k,x)=k^{2}\,\psi(k,x),\qquad x\in\mathbb{R}, (3.20)

and is asymptotic to eikx[I+o(1)]e^{ikx}[I+o(1)] as x+.x\to+\infty. Similarly, the matrix fr(b)(k,x)f_{\rm{r}}^{(b)}(k,x) defined in the second equality of (3.15) is the transformed right Jost solution because it satisfies (3.20) and is asymptotic to eikx[I+o(1)]e^{-ikx}[I+o(1)] as x.x\to-\infty. Thus, the proof of (a) is complete. Using fl(b)(k,x)f_{\rm{l}}^{(b)}(k,x) in the analog of (2.5), as xx\to-\infty we have

f_{\rm{l}}^{(b)}(k,x)=e^{ikx}\,[T_{\rm{l}}^{(b)}(k)]^{-1}+e^{-ikx}L^{(b)}(k)\,[T_{\rm{l}}^{(b)}(k)]^{-1}+o(1). (3.21)

Using the right-hand side of the first equality of (3.15) in (3.21) and comparing the result with (2.5) we get the first equalities of (3.16) and (3.17), respectively. Similarly, using fr(b)(k,x)f_{\rm{r}}^{(b)}(k,x) in the analog of (2.6), as x+x\to+\infty we get

f_{\rm{r}}^{(b)}(k,x)=e^{-ikx}\,[T_{\rm{r}}^{(b)}(k)]^{-1}+e^{ikx}R^{(b)}(k)\,[T_{\rm{r}}^{(b)}(k)]^{-1}+o(1). (3.22)

Using the right-hand side of the second equality of (3.15) in (3.22) and comparing the result with (2.6), we obtain the second equalities of (3.16) and (3.17), respectively. Hence, the proof of (b) is also complete. Using (3.16) and (3.17) in the analogs of (2.24) and (2.25) corresponding to the shifted potential V(b),V^{(b)}, we obtain (3.18) and (3.19). ∎

When the potential V in (1.1) satisfies (1.2) and (1.3), let us decompose it as in (1.4). For the left fragment V_{1} and the right fragment V_{2} defined in (1.5), let us use the subscripts 1 and 2, respectively, to denote the corresponding relevant quantities. Thus, analogous to (2.9) we define the 2n\times 2n scattering matrices S_{1}(k) and S_{2}(k) corresponding to V_{1} and V_{2}, respectively, as

S_{1}(k):=\begin{bmatrix}T_{\rm{l}1}(k)&R_{1}(k)\\ L_{1}(k)&T_{\rm{r}1}(k)\end{bmatrix},\quad S_{2}(k):=\begin{bmatrix}T_{\rm{l}2}(k)&R_{2}(k)\\ L_{2}(k)&T_{\rm{r}2}(k)\end{bmatrix},\qquad k\in\mathbb{R}, (3.23)

where Tl1(k)T_{\rm{l}1}(k) and Tl2(k)T_{\rm{l}2}(k) are the respective left transmission coefficients, Tr1(k)T_{\rm{r}1}(k) and Tr2(k)T_{\rm{r}2}(k) are the respective right transmission coefficients, L1(k)L_{1}(k) and L2(k)L_{2}(k) are the respective left reflection coefficients, and R1(k)R_{1}(k) and R2(k)R_{2}(k) are the respective right reflection coefficients. In terms of the scattering coefficients for the respective fragments, we use Λ1(k)\Lambda_{1}(k) and Λ2(k)\Lambda_{2}(k) as in (2.24) and use Σ1(k)\Sigma_{1}(k) and Σ2(k)\Sigma_{2}(k) as in (2.25) to denote the transition matrices corresponding to the left and right potential fragments V1V_{1} and V2.V_{2}. Thus, we have

\Lambda_{1}(k):=\begin{bmatrix}T_{\rm{l}1}(k)^{-1}&L_{1}(-k)\,T_{\rm{l}1}(-k)^{-1}\\ L_{1}(k)\,T_{\rm{l}1}(k)^{-1}&T_{\rm{l}1}(-k)^{-1}\end{bmatrix},\qquad k\in\mathbb{R}\setminus\{0\}, (3.24)
\Lambda_{2}(k):=\begin{bmatrix}T_{\rm{l}2}(k)^{-1}&L_{2}(-k)\,T_{\rm{l}2}(-k)^{-1}\\ L_{2}(k)\,T_{\rm{l}2}(k)^{-1}&T_{\rm{l}2}(-k)^{-1}\end{bmatrix},\qquad k\in\mathbb{R}\setminus\{0\}, (3.25)
\Sigma_{1}(k):=\begin{bmatrix}T_{\rm{r}1}(-k)^{-1}&R_{1}(k)\,T_{\rm{r}1}(k)^{-1}\\ R_{1}(-k)\,T_{\rm{r}1}(-k)^{-1}&T_{\rm{r}1}(k)^{-1}\end{bmatrix},\qquad k\in\mathbb{R}\setminus\{0\}, (3.26)
\Sigma_{2}(k):=\begin{bmatrix}T_{\rm{r}2}(-k)^{-1}&R_{2}(k)\,T_{\rm{r}2}(k)^{-1}\\ R_{2}(-k)\,T_{\rm{r}2}(-k)^{-1}&T_{\rm{r}2}(k)^{-1}\end{bmatrix},\qquad k\in\mathbb{R}\setminus\{0\}. (3.27)

In preparation for the proof of our factorization formula, in the next proposition we express the value at x=0x=0 of the matrix Fl(k,x)F_{\rm{l}}(k,x) defined in (2.16) in terms of the left scattering coefficients for the right fragment V2,V_{2}, and similarly we express the value at x=0x=0 of the matrix Fr(k,x)F_{\rm{r}}(k,x) defined in (2.17) in terms of the right scattering coefficients for the left fragment V1.V_{1}.

Proposition 3.2.

Consider the full-line n×nn\times n matrix Schrödinger equation (1.1), where the potential VV satisfies (1.2) and (1.3) and is fragmented as in (1.4) into the left fragment V1V_{1} and the right fragment V2V_{2} defined in (1.5). We then have the following:

  1. (a)

    For k,k\in\mathbb{R}, the 2n×2n2n\times 2n matrix Fl(k,x)F_{\rm{l}}(k,x) defined in (2.16) satisfies

    F_{\rm{l}}(k,0)=\begin{bmatrix}\left[I+L_{2}(k)\right]T_{\rm{l}2}(k)^{-1}&\left[I+L_{2}(-k)\right]T_{\rm{l}2}(-k)^{-1}\\ ik\left[I-L_{2}(k)\right]T_{\rm{l}2}(k)^{-1}&-ik\left[I-L_{2}(-k)\right]T_{\rm{l}2}(-k)^{-1}\end{bmatrix}, (3.28)

    where Tl2(k)T_{\rm{l}2}(k) and L2(k)L_{2}(k) are the left transmission and left reflection coefficients, respectively, for the right fragment V2.V_{2}.

  2. (b)

    For k,k\in\mathbb{R}, the 2n×2n2n\times 2n matrix Fr(k,x)F_{\rm{r}}(k,x) defined in (2.17) satisfies

    Fr(k,0)=[[I+R1(k)]Tr1(k)1[I+R1(k)]Tr1(k)1ik[IR1(k)]Tr1(k)1ik[I+R1(k)]Tr1(k)1],F_{\rm{r}}(k,0)=\begin{bmatrix}\left[I+R_{1}(-k)\right]T_{\rm{r}1}(-k)^{-1}&\left[I+R_{1}(k)\right]T_{\rm{r}1}(k)^{-1}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr ik\left[I-R_{1}(-k)\right]T_{\rm{r}1}(-k)^{-1}&ik\left[-I+R_{1}(k)\right]T_{\rm{r}1}(k)^{-1}\end{bmatrix}, (3.29)

    where Tr1(k)T_{\rm{r}1}(k) and R1(k)R_{1}(k) are the right transmission and right reflection coefficients, respectively, for the left fragment V1.V_{1}.

  3. (c)

    The matrix Fl(k,0)F_{\rm{l}}(k,0) appearing in (3.28) can be written as a matrix product as

    Fl(k,0)=q9q10q11,k,F_{\rm{l}}(k,0)=q_{9}\,q_{10}\,q_{11},\qquad k\in\mathbb{R}, (3.30)

    where we have defined

    q9:=[I00ikI][II0I][2I00I][I00I][I0II],k,q_{9}:=\begin{bmatrix}I&0\\ 0&ikI\end{bmatrix}\begin{bmatrix}I&-I\\ 0&I\end{bmatrix}\begin{bmatrix}2I&0\\ 0&I\end{bmatrix}\begin{bmatrix}I&0\\ 0&-I\end{bmatrix}\begin{bmatrix}I&0\\ -I&I\end{bmatrix},\qquad k\in\mathbb{R}, (3.31)
    q10:=[IL2(k)0I][IL2(k)L2(k)00I],k,q_{10}:=\begin{bmatrix}I&L_{2}(-k)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0&I\end{bmatrix}\begin{bmatrix}I-L_{2}(-k)\,L_{2}(k)&0\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0&I\end{bmatrix},\qquad k\in\mathbb{R}, (3.32)
    q11:=[I0L2(k)I][Tl2(k)100Tl2(k)1],k{0}.q_{11}:=\begin{bmatrix}I&0\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr L_{2}(k)&I\end{bmatrix}\begin{bmatrix}T_{\rm{l}2}(k)^{-1}&0\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0&T_{\rm{l}2}(-k)^{-1}\end{bmatrix},\qquad k\in\mathbb{R}\setminus\{0\}. (3.33)

    In fact, we have

    q9=[IIikIikI],k,q_{9}=\begin{bmatrix}I&I\\ ikI&-ikI\end{bmatrix},\qquad k\in\mathbb{R}, (3.34)
    q10q11=Λ2(k),k{0},q_{10}\,q_{11}=\Lambda_{2}(k),\qquad k\in\mathbb{R}\setminus\{0\}, (3.35)

    where Λ2(k)\Lambda_{2}(k) is the transition matrix given in (3.25) corresponding to the right potential fragment V2.V_{2}.

  4. (d)

    The matrix Fr(k,0)F_{\rm{r}}(k,0) appearing in (3.29) can be written as a matrix product as

    Fr(k,0)=q9q12q13,k,F_{\rm{r}}(k,0)=q_{9}\,q_{12}\,q_{13},\qquad k\in\mathbb{R}, (3.36)

    where q9q_{9} is the matrix defined in (3.31) and we have let

    q12:=[IR1(k)0I][IR1(k)R1(k)00I],k,q_{12}:=\begin{bmatrix}I&R_{1}(k)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0&I\end{bmatrix}\begin{bmatrix}I-R_{1}(k)\,R_{1}(-k)&0\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0&I\end{bmatrix},\qquad k\in\mathbb{R}, (3.37)
    q13:=[I0R1(k)I][Tr1(k)100Tr1(k)1],k{0}.q_{13}:=\begin{bmatrix}I&0\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr R_{1}(-k)&I\end{bmatrix}\begin{bmatrix}T_{\rm{r}1}(-k)^{-1}&0\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0&T_{\rm{r}1}(k)^{-1}\end{bmatrix},\qquad k\in\mathbb{R}\setminus\{0\}. (3.38)

    In fact, we have

    q12q13=Σ1(k),k{0},\quad q_{12}\,q_{13}=\Sigma_{1}(k),\qquad k\in\mathbb{R}\setminus\{0\}, (3.39)

    where Σ1(k)\Sigma_{1}(k) is the transition matrix in (3.26) corresponding to the left potential fragment V1.V_{1}.

Proof.

We remark that, although the matrices q11q_{11} and q13q_{13} behave as O(1/k)O(1/k) as k0,k\to 0, the equalities in (3.30) and (3.36) hold also at k=0k=0 by the continuity. Let us use fl1(k,x)f_{\rm{l}1}(k,x) and fr1(k,x)f_{\rm{r}1}(k,x) to denote the left and right Jost solutions, respectively, corresponding to the potential fragment V1.V_{1}. Similarly, let us use fl2(k,x)f_{\rm{l}2}(k,x) and fr2(k,x)f_{\rm{r}2}(k,x) to denote the left and right Jost solutions, respectively, corresponding to the potential fragment V2.V_{2}. Since fl(k,x)f_{\rm{l}}(k,x) and fl2(k,x)f_{\rm{l}2}(k,x) both satisfy (1.1) and the asymptotics (2.1), we have

fl2(k,x)=fl(k,x),fl2(k,x)=fl(k,x),x[0,+).f_{\rm{l}2}(k,x)=f_{\rm{l}}(k,x),\quad f^{\prime}_{\rm{l}2}(k,x)=f^{\prime}_{\rm{l}}(k,x),\qquad x\in[0,+\infty). (3.40)

Similarly, since fr(k,x)f_{\rm{r}}(k,x) and fr1(k,x)f_{\rm{r}1}(k,x) both satisfy (1.1) and the asymptotics (2.2), we have

fr1(k,x)=fr(k,x),fr1(k,x)=fr(k,x),x(,0].f_{\rm{r}1}(k,x)=f_{\rm{r}}(k,x),\quad f^{\prime}_{\rm{r}1}(k,x)=f^{\prime}_{\rm{r}}(k,x),\qquad x\in(-\infty,0]. (3.41)

From (1.5), (2.5), (2.6), (3.40), and (3.41), we get

fl2(k,x)=eikxTl2(k)1+eikxL2(k)Tl21(k),x(,0],f_{\rm{l}2}(k,x)=e^{ikx}\,T_{\rm{l}2}(k)^{-1}+e^{-ikx}L_{2}(k)\,T_{\rm{l}2}^{-1}(k),\qquad x\in(-\infty,0], (3.42)
fr1(k,x)=eikxTr1(k)1+eikxR1(k)Tr11(k),x[0,+).f_{\rm{r}1}(k,x)=e^{-ikx}\,T_{\rm{r}1}(k)^{-1}+e^{ikx}R_{1}(k)\,T_{\rm{r}1}^{-1}(k),\qquad x\in[0,+\infty). (3.43)

Comparing (3.40) and (3.42) at x=0x=0 and using the result in (2.16), we establish (3.28), which completes the proof of (a). Similarly, comparing (3.41) and (3.43) at x=0x=0 and using the result in (2.17), we establish (3.29). Hence, the proof of (b) is also complete. We can confirm (3.30) directly by evaluating the matrix product on its right-hand side with the help of (3.31)–(3.33). Similarly, (3.34) and (3.35) can directly be confirmed by evaluating the matrix products on the right-hand sides of (3.31)–(3.33). Thus, the proof of (c) is complete. Let us now prove (d). We can verify (3.36) directly by evaluating the matrix product on its right-hand side with the help of (3.34), (3.37), and (3.38). In the same manner, (3.39) can be directly confirmed by evaluating the matrix products on the right-hand sides of (3.37) and (3.38) and by comparing the result with (3.26). Thus, the proof of (d) is also complete. ∎

In the next theorem we present our factorization formula corresponding to the potential fragmentation given in (1.4).

Theorem 3.3.

Consider the full-line n×nn\times n matrix Schrödinger equation (1.1) with the potential VV satisfying (1.2) and (1.3). Let V1V_{1} and V2V_{2} denote the left and right fragments of VV described in (1.4) and (1.5). Let Λ(k),\Lambda(k), Λ1(k),\Lambda_{1}(k), and Λ2(k)\Lambda_{2}(k) be the 2n×2n2n\times 2n transition matrices defined in (2.24), (3.24), and (3.25) corresponding to V,V, V1,V_{1}, and V2,V_{2}, respectively. Similarly, let Σ(k),\Sigma(k), Σ1(k),\Sigma_{1}(k), and Σ2(k)\Sigma_{2}(k) denote the 2n×2n2n\times 2n transition matrices defined in (2.25), (3.26), and (3.27) corresponding to V,V, V1,V_{1}, and V2,V_{2}, respectively. Then, we have the following:

  1. (a)

    The transition matrix Λ(k)\Lambda(k) is equal to the ordered matrix product Λ1(k)Λ2(k),\Lambda_{1}(k)\,\Lambda_{2}(k), i.e. we have

    Λ(k)=Λ1(k)Λ2(k),k{0}.\Lambda(k)=\Lambda_{1}(k)\,\Lambda_{2}(k),\qquad k\in\mathbb{R}\setminus\{0\}. (3.44)
  2. (b)

    The factorization formula (3.44) can also be expressed in terms of the transition matrices Σ(k),\Sigma(k), Σ1(k),\Sigma_{1}(k), and Σ2(k)\Sigma_{2}(k) as

    Σ(k)=Σ2(k)Σ1(k),k{0}.\Sigma(k)=\Sigma_{2}(k)\,\Sigma_{1}(k),\qquad k\in\mathbb{R}\setminus\{0\}. (3.45)
Proof.

Evaluating (3.5) at x=0,x=0, we get

Fl(k,0)=Fr(k,0)Λ(k),k.F_{\rm{l}}(k,0)=F_{\rm{r}}(k,0)\,\Lambda(k),\qquad k\in\mathbb{R}. (3.46)

Using (3.30) and (3.36) on the left and right-hand sides of (3.46), respectively, we obtain

q9q10q11=q9q12q13Λ(k),k{0}.q_{9}\,q_{10}\,q_{11}=q_{9}\,q_{12}\,q_{13}\,\Lambda(k),\qquad k\in\mathbb{R}\setminus\{0\}. (3.47)

From (3.34) we see that q9q_{9} is invertible when k{0}.k\in\mathbb{R}\setminus\{0\}. Thus, using (3.35) and (3.39) in (3.47), we get

Λ2(k)=Σ1(k)Λ(k),k{0},\Lambda_{2}(k)=\Sigma_{1}(k)\,\Lambda(k),\qquad k\in\mathbb{R}\setminus\{0\},

or equivalently

Σ1(k)1Λ2(k)=Λ(k),k{0}.\Sigma_{1}(k)^{-1}\,\Lambda_{2}(k)=\Lambda(k),\qquad k\in\mathbb{R}\setminus\{0\}. (3.48)

From (3.7) we already know that Σ1(k)1=Λ1(k),\Sigma_{1}(k)^{-1}=\Lambda_{1}(k), and hence (3.48) yields (3.44). Thus, the proof of (a) is complete. Taking the matrix inverses of both sides of (3.44) and then making use of (3.7), we obtain (3.45). Thus, the proof of (b) is also complete. ∎

The following theorem shows that the factorization formulas (3.44) and (3.45) also hold if the potential VV in (1.1) is decomposed into V1V_{1} and V2V_{2} by choosing the fragmentation point anywhere on the real axis, not necessarily at x=0.x=0.

Theorem 3.4.

Consider the full-line n×nn\times n matrix Schrödinger equation (1.1) with the potential VV satisfying (1.2) and (1.3). Let V1V_{1} and V2V_{2} denote the left and right fragments of VV described as in (1.4), but with (1.5) replaced by

V1(x):={V(x),x<b,0,x>b,V2(x):={0,x<b,V(x),x>b,V_{1}(x):=\begin{cases}V(x),\qquad x<b,\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0,\qquad x>b,\end{cases}\quad V_{2}(x):=\begin{cases}0,\qquad x<b,\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr V(x),\qquad x>b,\end{cases} (3.49)

where bb is a fixed real constant. Let Λ(k),\Lambda(k), Λ1(k),\Lambda_{1}(k), and Λ2(k)\Lambda_{2}(k) appearing in (2.24), (3.24), (3.25), respectively, be the transition matrices corresponding to V,V, V1,V_{1}, and V2,V_{2}, respectively. Similarly, let Σ(k),\Sigma(k), Σ1(k),\Sigma_{1}(k), and Σ2(k)\Sigma_{2}(k) appearing in (2.25), (3.26), (3.27) be the transition matrices corresponding to V,V, V1,V_{1}, and V2,V_{2}, respectively. Then, we have the following:

  1. (a)

    The transition matrix Λ(k)\Lambda(k) is equal to the ordered matrix product Λ1(k)Λ2(k),\Lambda_{1}(k)\,\Lambda_{2}(k), i.e. we have

    Λ(k)=Λ1(k)Λ2(k),k{0}.\Lambda(k)=\Lambda_{1}(k)\,\Lambda_{2}(k),\qquad k\in\mathbb{R}\setminus\{0\}. (3.50)
  2. (b)

    The factorization formula (3.50) can also be expressed in terms of the transition matrices Σ(k),\Sigma(k), Σ1(k),\Sigma_{1}(k), and Σ2(k)\Sigma_{2}(k) as

    Σ(k)=Σ2(k)Σ1(k),k{0}.\Sigma(k)=\Sigma_{2}(k)\,\Sigma_{1}(k),\qquad k\in\mathbb{R}\setminus\{0\}. (3.51)
Proof.

Let us translate the potential VV and its fragments V1V_{1} and V2V_{2} as in (3.14). The shifted potentials satisfy

V(b)(x)=V1(b)(x)+V2(b)(x),x,V^{(b)}(x)=V_{1}^{(b)}(x)+V_{2}^{(b)}(x),\qquad x\in\mathbb{R},

where we have defined

V(b)(x):=V(x+b),V1(b)(x):=V1(x+b),V2(b)(x):=V2(x+b),x.V^{(b)}(x):=V(x+b),\quad V_{1}^{(b)}(x):=V_{1}(x+b),\quad V_{2}^{(b)}(x):=V_{2}(x+b),\qquad x\in\mathbb{R}.

Note that the shifted potential fragments V1(b)V_{1}^{(b)} and V2(b)V_{2}^{(b)} correspond to the fragments of V(b)V^{(b)} with the fragmentation point x=0,x=0, i.e. we have

V1(b)(x)={V(b)(x),x<0,0,x>0,V2(b)(x)={0,x<0,V(b)(x),x>0.V_{1}^{(b)}(x)=\begin{cases}V^{(b)}(x),\qquad x<0,\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0,\qquad x>0,\end{cases}\quad V_{2}^{(b)}(x)=\begin{cases}0,\qquad x<0,\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr V^{(b)}(x),\qquad x>0.\end{cases}

Since the shifted potential V(b)V^{(b)} is fragmented at x=0,x=0, we can apply Theorem 3.3 to V(b).V^{(b)}. Let us use Tlj(b)(k),T_{\rm{l}j}^{(b)}(k), Trj(b)(k),T_{\rm{r}j}^{(b)}(k), Lj(b)(k),L_{j}^{(b)}(k), Rj(b)(k),R_{j}^{(b)}(k), Λj(b)(k),\Lambda_{j}^{(b)}(k), and Σj(b)(k)\Sigma_{j}^{(b)}(k) to denote the left and right transmission coefficients, the left and right reflection coefficients, and the transition matrices, respectively, for the shifted potentials Vj(b)(x)V^{(b)}_{j}(x) with j=1j=1 and j=2.j=2. In analogy with (3.24)–(3.27), for k{0}k\in\mathbb{R}\setminus\{0\} we have

Λj(b)(k):=[[Tlj(b)(k)]1Lj(b)(k)[Tlj(b)(k)]1Lj(b)(k)[Tlj(b)(k)]1[Tlj(b)(k)]1],j=1,2,\Lambda_{j}^{(b)}(k):=\begin{bmatrix}[T_{\rm{l}j}^{(b)}(k)]^{-1}&L_{j}^{(b)}(-k)\,[T_{\rm{l}j}^{(b)}(-k)]^{-1}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr L_{j}^{(b)}(k)\,[T^{(b)}_{\rm{l}j}(k)]^{-1}&[T_{\rm{l}j}^{(b)}(-k)]^{-1}\end{bmatrix},\qquad j=1,2,
Σj(b)(k):=[[Trj(b)(k)]1Rj(b)(k)[Trj(b)(k)]1Rj(b)(k)[Trj(b)(k)]1[Trj(b)(k)]1],j=1,2.\Sigma_{j}^{(b)}(k):=\begin{bmatrix}[T_{\rm{r}j}^{(b)}(-k)]^{-1}&R_{j}^{(b)}(k)\,[T_{\rm{r}j}^{(b)}(k)]^{-1}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr R_{j}^{(b)}(-k)\,[T_{\rm{r}j}^{(b)}(-k)]^{-1}&[T_{\rm{r}j}^{(b)}(k)]^{-1}\end{bmatrix},\qquad j=1,2.

From Theorem 3.3 we have

Λ(b)(k)=Λ1(b)(k)Λ2(b)(k),Σ(b)(k)=Σ2(b)(k)Σ1(b)(k),k{0}.\Lambda^{(b)}(k)=\Lambda_{1}^{(b)}(k)\,\Lambda_{2}^{(b)}(k),\quad\Sigma^{(b)}(k)=\Sigma_{2}^{(b)}(k)\,\Sigma_{1}^{(b)}(k),\qquad k\in\mathbb{R}\setminus\{0\}. (3.52)

Using (3.18) and (3.19) in (3.52), after some minor simplification we get (3.50) and (3.51). ∎

The result of Theorem 3.4 can easily be extended from two fragments to any finite number of fragments. This is because any existing fragment can be decomposed into further subfragments by applying the factorization formulas (3.50) and (3.51) to each fragment and to its subfragments. Since a proof can be obtained by induction on the number of fragments, we state the result as a corollary without proof.

Corollary 3.5.

Consider the full-line n×nn\times n matrix Schrödinger equation (1.1) with the matrix-valued potential VV satisfying (1.2) and (1.3). Let S(k),S(k), Λ(k),\Lambda(k), and Σ(k)\Sigma(k) defined in (2.9), (2.24), and (2.25), respectively be the corresponding scattering matrix and the transition matrices, with Tl(k),T_{\rm{l}}(k), Tr(k),T_{\rm{r}}(k), L(k),L(k), and R(k)R(k) denoting the corresponding left and right transmission coefficients and the left and right reflection coefficients, respectively. Assume that VV is partitioned into P+1P+1 fragments VjV_{j} at the fragmentation points bjb_{j} with 1jP1\leq j\leq P as

V(x)=j=1P+1Vj(x),V(x)=\sum_{j=1}^{P+1}V_{j}(x),

where PP is any fixed positive integer and

Vj(x):={V(x),x(bj1,bj),0,x(bj1,bj),V_{j}(x):=\begin{cases}V(x),\qquad x\in(b_{j-1},b_{j}),\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0,\qquad x\notin(b_{j-1},b_{j}),\end{cases}

with b0:=,b_{0}:=-\infty, bP+1:=+,b_{P+1}:=+\infty, and bj<bj+1.b_{j}<b_{j+1}. Let Tlj(k),T_{\rm{l}j}(k), Trj(k),T_{\rm{r}j}(k), Lj(k),L_{j}(k), Rj(k)R_{j}(k) be the corresponding left and right transmission coefficients and the left and right reflection coefficients, respectively, for the potential fragment Vj.V_{j}. Let Sj(k),S_{j}(k), Λj(k),\Lambda_{j}(k), and Σj(k)\Sigma_{j}(k) denote the scattering matrix and the transition matrices for the corresponding fragment Vj,V_{j}, which are defined as

Sj(k):=[Tlj(k)Rj(k)Lj(k)Trj(k)],k,S_{j}(k):=\begin{bmatrix}T_{\rm{l}j}(k)&R_{j}(k)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr L_{j}(k)&T_{\rm{r}j}(k)\end{bmatrix},\qquad k\in\mathbb{R},
Λj(k):=[Tlj(k)1Lj(k)Tlj(k)1Lj(k)Tlj(k)1Tlj(k)1],k{0},\Lambda_{j}(k):=\begin{bmatrix}T_{\rm{l}j}(k)^{-1}&L_{j}(-k)\,T_{\rm{l}j}(-k)^{-1}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr L_{j}(k)\,T_{\rm{l}j}(k)^{-1}&T_{\rm{l}j}(-k)^{-1}\end{bmatrix},\qquad k\in\mathbb{R}\setminus\{0\},
Σj(k):=[Trj(k)1Rj(k)Trj(k)1Rj(k)Trj(k)1Trj(k)1],k{0}.\Sigma_{j}(k):=\begin{bmatrix}T_{\rm{r}j}(-k)^{-1}&R_{j}(k)\,T_{\rm{r}j}(k)^{-1}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr R_{j}(-k)\,T_{\rm{r}j}(-k)^{-1}&T_{\rm{r}j}(k)^{-1}\end{bmatrix},\qquad k\in\mathbb{R}\setminus\{0\}.

Then, the transition matrices Λ(k)\Lambda(k) and Σ(k)\Sigma(k) for the whole potential VV are expressed as ordered matrix products of the corresponding transition matrices for the fragments as

Λ(k)=Λ1(k)Λ2(k)ΛP(k)ΛP+1(k),k{0},\Lambda(k)=\Lambda_{1}(k)\,\Lambda_{2}(k)\cdots\Lambda_{P}(k)\,\Lambda_{P+1}(k),\qquad k\in\mathbb{R}\setminus\{0\},
Σ(k)=ΣP+1(k)ΣP(k)Σ2(k)Σ1(k),k{0}.\Sigma(k)=\Sigma_{P+1}(k)\,\Sigma_{P}(k)\cdots\Sigma_{2}(k)\,\Sigma_{1}(k),\qquad k\in\mathbb{R}\setminus\{0\}.
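
The ordered products in Corollary 3.5 translate directly into an accumulation loop. The following sketch (again our own illustration, assuming NumPy and that the per-fragment transition matrices have already been formed, for instance with the hypothetical helpers introduced after (3.27)) multiplies the fragment transition matrices from left to right for Λ(k) and from right to left for Σ(k); the left transmission coefficient of the whole potential can then be recovered by inverting the (1,1) block of the product, since that block equals Tl(k)^{-1}.

```python
import numpy as np

def total_Lambda(fragment_Lambdas):
    # fragment_Lambdas = [Lambda_1(k), ..., Lambda_{P+1}(k)], each a 2n x 2n array.
    total = np.eye(fragment_Lambdas[0].shape[0], dtype=complex)
    for Lam in fragment_Lambdas:              # ordered product Lambda_1 Lambda_2 ... Lambda_{P+1}
        total = total @ Lam
    return total

def total_Sigma(fragment_Sigmas):
    # fragment_Sigmas = [Sigma_1(k), ..., Sigma_{P+1}(k)], each a 2n x 2n array.
    total = np.eye(fragment_Sigmas[0].shape[0], dtype=complex)
    for Sig in reversed(fragment_Sigmas):     # ordered product Sigma_{P+1} ... Sigma_2 Sigma_1
        total = total @ Sig
    return total

# If Lam = total_Lambda([...]) has been formed for n x n fragments, then the left
# transmission coefficient of the whole potential is T_l(k) = inv(Lam[:n, :n]).
```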

In Theorem 3.4 the transition matrix for a potential on the full line is expressed as a matrix product of the transition matrices for the left and right potential fragments. In the next theorem, we express the scattering coefficients of a potential on the full line in terms of the scattering coefficients of the two potential fragments.

Theorem 3.6.

Consider the full-line n×nn\times n matrix Schrödinger equation (1.1) with the potential VV satisfying (1.2) and (1.3). Assume that VV is fragmented at an arbitrary point x=bx=b into the two pieces V1V_{1} and V2V_{2} as described in (1.4) and (3.49). Let S(k)S(k) given in (2.9) be the scattering matrix for the potential V,V, and let S1(k)S_{1}(k) and S2(k)S_{2}(k) given in (3.23) be the scattering matrices corresponding to the potential fragments V1V_{1} and V2,V_{2}, respectively. Then, for kk\in\mathbb{R} the scattering coefficients in S(k)S(k) are related to the right scattering coefficients in S1(k)S_{1}(k) and the left scattering coefficients in S2(k)S_{2}(k) as

Tl(k)=Tl2(k)[IR1(k)L2(k)]1Tr1(k),T_{\rm{l}}(k)=T_{\rm{l}2}(k)\left[I-R_{1}(k)\,L_{2}(k)\right]^{-1}T_{\rm{r}1}(-k)^{\dagger}, (3.53)
L(k)=[Tr1(k)]1[L2(k)R1(k)][IR1(k)L2(k)]1Tr1(k),L(k)=[T_{\rm{r}1}(k)^{\dagger}]^{-1}\left[L_{2}(k)-R_{1}(-k)\right]\left[I-R_{1}(k)\,L_{2}(k)\right]^{-1}T_{\rm{r}1}(-k)^{\dagger}, (3.54)
Tr(k)=Tr1(k)[IL2(k)R1(k)]1Tl2(k),T_{\rm{r}}(k)=T_{\rm{r}1}(k)\left[I-L_{2}(k)\,R_{1}(k)\right]^{-1}T_{\rm{l}2}(-k)^{\dagger}, (3.55)
R(k)=Tl2(k)[IR1(k)L2(k)]1[R1(k)L2(k)]Tl2(k)1,R(k)=T_{\rm{l}2}(k)\left[I-R_{1}(k)\,L_{2}(k)\right]^{-1}\left[R_{1}(k)-L_{2}(-k)\right]T_{\rm{l}2}(-k)^{-1}, (3.56)

where we recall that II is the n×nn\times n identity matrix and the dagger denotes the matrix adjoint.

Proof.

From the (1,1)(1,1) entry in (3.50), for k{0}k\in\mathbb{R}\setminus\{0\} we have

Tl(k)1=Tl1(k)1Tl2(k)1+L1(k)Tl1(k)1L2(k)Tl2(k)1.T_{\rm{l}}(k)^{-1}=T_{\rm{l}1}(k)^{-1}\,T_{\rm{l}2}(k)^{-1}+L_{1}(-k)\,T_{\rm{l}1}(-k)^{-1}\,L_{2}(k)\,T_{\rm{l}2}(k)^{-1}. (3.57)

From the (2,1)(2,1) entry of (2.52) we know that

R1(k)Tl1(k)+Tr1(k)L1(k)=0,k.R_{1}(k)^{\dagger}\,T_{\rm{l}1}(k)+T_{\rm{r}1}(k)^{\dagger}\,L_{1}(k)=0,\qquad k\in\mathbb{R}. (3.58)

Multiplying both sides of (3.58) by [Tr1(k)]1[T_{\rm{r}1}^{\dagger}(k)]^{-1} from the left and by Tl1(k)1T_{\rm{l}1}(k)^{-1} from the right, we get

[Tr1(k)]1R1(k)+L1(k)Tl1(k)1=0,k{0}.[T_{\rm{r}1}(k)^{\dagger}]^{-1}R_{1}^{\dagger}(k)+L_{1}(k)\,T_{\rm{l}1}(k)^{-1}=0,\qquad k\in\mathbb{R}\setminus\{0\}. (3.59)

In (3.59), after replacing kk by k-k we obtain

L1(k)Tl1(k)1=[Tr1(k)]1R1(k),k{0}.L_{1}(-k)\,T_{\rm{l}1}(-k)^{-1}=-[T_{\rm{r}1}(-k)^{\dagger}]^{-1}R_{1}(-k)^{\dagger},\qquad k\in\mathbb{R}\setminus\{0\}. (3.60)

Next, using (3.60) and the second equality of (2.50) in (3.57), for k{0}k\in\mathbb{R}\setminus\{0\} we get

Tl(k)1=Tl1(k)1Tl2(k)1[Tr1(k)]1R1(k)L2(k)Tl2(k)1.T_{\rm{l}}(k)^{-1}=T_{\rm{l}1}(k)^{-1}\,T_{\rm{l}2}(k)^{-1}-[T_{\rm{r}1}(-k)^{\dagger}]^{-1}R_{1}(k)\,L_{2}(k)\,T_{\rm{l}2}(k)^{-1}. (3.61)

Using the third equality of (2.50) in the first term on the right-hand side of (3.61), for k{0}k\in\mathbb{R}\setminus\{0\} we have

Tl(k)1=[Tr1(k)]1Tl2(k)1[Tr1(k)]1R1(k)L2(k)Tl2(k)1.T_{\rm{l}}(k)^{-1}=[T_{\rm{r}1}(-k)^{\dagger}]^{-1}T_{\rm{l}2}(k)^{-1}-[T_{\rm{r}1}(-k)^{\dagger}]^{-1}R_{1}(k)\,L_{2}(k)\,T_{\rm{l}2}(k)^{-1}. (3.62)

Factoring the right-hand side of (3.62) and then taking the inverses of both sides of the resulting equation, we obtain (3.53). Let us next prove (3.54). From the (2,1)(2,1) entry of (3.50), for k{0}k\in\mathbb{R}\setminus\{0\} we have

L(k)Tl(k)1=L1(k)Tl1(k)1Tl2(k)1+Tl1(k)1L2(k)Tl2(k)1.L(k)\,T_{\rm{l}}(k)^{-1}=L_{1}(k)\,T_{\rm{l}1}(k)^{-1}\,T_{\rm{l}2}(k)^{-1}+T_{\rm{l}1}(-k)^{-1}\,L_{2}(k)\,T_{\rm{l}2}(k)^{-1}. (3.63)

Replacing kk by k-k in (3.60) and using the resulting equality in (3.63), for k{0}k\in\mathbb{R}\setminus\{0\} we get

L(k)Tl(k)1=[Tr1(k)]1R1(k)Tl2(k)1+Tl1(k)1L2(k)Tl2(k)1.L(k)\,T_{\rm{l}}(k)^{-1}=-[T_{\rm{r}1}(k)^{\dagger}]^{-1}R_{1}(k)^{\dagger}\,T_{\rm{l}2}(k)^{-1}+T_{\rm{l}1}(-k)^{-1}\,L_{2}(k)\,T_{\rm{l}2}(k)^{-1}. (3.64)

Next, using the third equality of (2.50) in the second term on the right-hand side of (3.64), for k{0}k\in\mathbb{R}\setminus\{0\} we obtain

L(k)Tl(k)1=[Tr1(k)]1R1(k)Tl2(k)1+[Tr1(k)]1L2(k)Tl2(k)1,L(k)\,T_{\rm{l}}(k)^{-1}=-[T_{\rm{r}1}(k)^{\dagger}]^{-1}R_{1}(k)^{\dagger}\,T_{\rm{l}2}(k)^{-1}+[T_{\rm{r}1}(k)^{\dagger}]^{-1}L_{2}(k)\,T_{\rm{l}2}(k)^{-1},

which is equivalent to

L(k)Tl(k)1=[Tr1(k)]1[L2(k)R1(k)]Tl2(k)1,k{0}.L(k)\,T_{\rm{l}}(k)^{-1}=[T_{\rm{r}1}(k)^{\dagger}]^{-1}\left[L_{2}(k)-R_{1}(k)^{\dagger}\right]T_{\rm{l}2}(k)^{-1},\qquad k\in\mathbb{R}\setminus\{0\}. (3.65)

Using the second equality of (2.50) in (3.65), we have

L(k)Tl(k)1=[Tr1(k)]1[L2(k)R1(k)]Tl2(k)1,k{0}.L(k)\,T_{\rm{l}}(k)^{-1}=[T_{\rm{r}1}(k)^{\dagger}]^{-1}\left[L_{2}(k)-R_{1}(-k)\right]T_{\rm{l}2}(k)^{-1},\qquad k\in\mathbb{R}\setminus\{0\}. (3.66)

Then, multiplying (3.66) from the right by the respective sides of (3.53), we obtain (3.54). Let us now prove (3.55). From the third equality of (2.50) we have Tr(k)=Tl(k).T_{\rm{r}}(k)=T_{\rm{l}}(-k)^{\dagger}. Thus, by taking the matrix adjoint of both sides of (3.53), then replacing kk by k-k in the resulting equality, and then using the first two equalities of (2.50), we get (3.55). Let us finally prove (3.56). From the (1,2)(1,2) entry of (2.53), we have

R(k)=Tl(k)L(k)[Tr(k)]1,k.R(k)=-T_{\rm{l}}(k)\,L(k)^{\dagger}\,[T_{\rm{r}}(k)^{\dagger}]^{-1},\qquad k\in\mathbb{R}. (3.67)

Using the first and third equalities of (2.50) in (3.67), we get

R(k)=Tl(k)L(k)Tl(k)1,k.R(k)=-T_{\rm{l}}(k)\,L(-k)\,T_{\rm{l}}(-k)^{-1},\qquad k\in\mathbb{R}. (3.68)

Then, on the right-hand side of (3.68), we replace Tl(k)T_{\rm{l}}(k) by the right-hand side of (3.53) and we also replace L(k)[Tl(k)]1L(-k)\,[T_{\rm{l}}(-k)]^{-1} by the right-hand side of (3.66) after the substitution kk.k\mapsto-k. After simplifying the resulting modified version of (3.68), we obtain (3.56). ∎

4 The matrix-valued scattering coefficients

The choice n=1n=1 in (1.1) corresponds to the scalar case. In the scalar case, the potential VV satisfying (1.2) and (1.3) is real valued and the corresponding left and right transmission coefficients are equal to each other. However, when n2n\geq 2 the matrix-valued transmission coefficients are in general not equal to each other. In the matrix case, as seen from (2.46) we have

Tl(k)=Tr(k),k+¯.T_{\rm{l}}(-k^{\ast})^{\dagger}=T_{\rm{r}}(k),\qquad k\in\overline{\mathbb{C}^{+}}. (4.1)

On the other hand, from (2.22) we know that the determinants of the left and right transmission coefficients are always equal to each other for n1.n\geq 1. In this section, we first provide some relevant properties of the scattering coefficients for (1.1), and then we present some explicit examples demonstrating the unequivalence of the matrix-valued left and right transmission coefficients.

The following theorem summarizes the large kk-asymptotics of the scattering coefficients for the full-line matrix Schrödinger equation.

Theorem 4.1.

Consider the full-line matrix Schrödinger equation (1.1) with the n×nn\times n matrix potential VV satisfying (1.2) and (1.3). The large kk-asymptotics of the corresponding n×nn\times n matrix-valued scattering coefficients are given by

Tl(k)=I+12ik𝑑xV(x)+O(1k2),k in +¯,T_{\rm{l}}(k)=I+\displaystyle\frac{1}{2ik}\displaystyle\int_{-\infty}^{\infty}dx\,V(x)+O\left(\displaystyle\frac{1}{k^{2}}\right),\qquad k\to\infty\text{\rm{ in }}\overline{\mathbb{C}^{+}}, (4.2)
Tr(k)=I+12ik𝑑xV(x)+O(1k2),k in +¯,T_{\rm{r}}(k)=I+\displaystyle\frac{1}{2ik}\displaystyle\int_{-\infty}^{\infty}dx\,V(x)+O\left(\displaystyle\frac{1}{k^{2}}\right),\qquad k\to\infty\text{\rm{ in }}\overline{\mathbb{C}^{+}}, (4.3)
L(k)=12ik𝑑xV(x)e2ikx+O(1k2),k± in ,L(k)=\displaystyle\frac{1}{2ik}\displaystyle\int_{-\infty}^{\infty}dx\,V(x)\,e^{2ikx}+O\left(\displaystyle\frac{1}{k^{2}}\right),\qquad k\to\pm\infty\text{\rm{ in }}\mathbb{R}, (4.4)
R(k)=12ik𝑑xV(x)e2ikx+O(1k2),k± in .R(k)=\displaystyle\frac{1}{2ik}\displaystyle\int_{-\infty}^{\infty}dx\,V(x)\,e^{-2ikx}+O\left(\displaystyle\frac{1}{k^{2}}\right),\qquad k\to\pm\infty\text{\rm{ in }}\mathbb{R}. (4.5)
Proof.

By solving (2.3) and (2.4) via the method of successive approximation, for each fixed xx\in\mathbb{R} we obtain the large kk-asymptotics of the Jost solutions as

eikxfl(k,x)=I+O(1k),k in +¯,e^{-ikx}\,f_{\rm{l}}(k,x)=I+O\left(\displaystyle\frac{1}{k}\right),\qquad k\to\infty\text{ \rm{in} }\overline{\mathbb{C}^{+}}, (4.6)
eikxfr(k,x)=I+O(1k),k in +¯.e^{ikx}f_{\rm{r}}(k,x)=I+O\left(\displaystyle\frac{1}{k}\right),\qquad k\to\infty\text{ \rm{in} }\overline{\mathbb{C}^{+}}. (4.7)

The integral representations involving the scattering coefficients are obtained from (2.3) and (2.4) with the help of (2.5) and (2.6). As listed in (2.10)–(2.13) of [5], we have

Tl(k)1=I12ik𝑑xV(x)eikxfl(k,x),T_{\rm{l}}(k)^{-1}=I-\displaystyle\frac{1}{2ik}\displaystyle\int_{-\infty}^{\infty}dx\,V(x)\,e^{-ikx}f_{\rm{l}}(k,x), (4.8)
Tr(k)1=I12ik𝑑xV(x)eikxfr(k,x),T_{\rm{r}}(k)^{-1}=I-\displaystyle\frac{1}{2ik}\displaystyle\int_{-\infty}^{\infty}dx\,V(x)\,e^{ikx}f_{\rm{r}}(k,x), (4.9)
L(k)Tl(k)1=12ik𝑑xV(x)eikxfl(k,x),L(k)\,T_{\rm{l}}(k)^{-1}=\displaystyle\frac{1}{2ik}\displaystyle\int_{-\infty}^{\infty}dx\,V(x)\,e^{ikx}f_{\rm{l}}(k,x), (4.10)
R(k)Tr(k)1=12ik𝑑xV(x)eikxfr(k,x).R(k)\,T_{\rm{r}}(k)^{-1}=\displaystyle\frac{1}{2ik}\displaystyle\int_{-\infty}^{\infty}dx\,V(x)\,e^{-ikx}f_{\rm{r}}(k,x). (4.11)

With the help of (4.6) and (4.7), from (4.8)–(4.11) we obtain (4.2)–(4.5) in their appropriate domains. ∎

We observe that it is impossible to tell the unequivalence of Tl(k)T_{\rm{l}}(k) and Tr(k)T_{\rm{r}}(k) from the large kk-limits given in (4.2) and (4.3). However, as the next example shows, the small kk-asymptotics of Tl(k)T_{\rm{l}}(k) and Tr(k)T_{\rm{r}}(k) may be used to see their unequivalence in the matrix case with n2.n\geq 2.

Example 4.2.

Consider the full-line Schrödinger equation (1.1) when the potential VV is a 2×22\times 2 matrix and fragmented as in (1.4) and (1.5), where the fragments V1V_{1} and V2V_{2} are compactly supported and given by

{V1(x)=[32+i2i5],2<x<0,V2(x)=[21+i1i2],0<x<1,\begin{cases}V_{1}(x)=\begin{bmatrix}3&-2+i\\ -2-i&-5\end{bmatrix},\qquad-2<x<0,\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr V_{2}(x)=\begin{bmatrix}2&1+i\\ 1-i&-2\end{bmatrix},\qquad 0<x<1,\end{cases} (4.12)

with the understanding that V1V_{1} vanishes when x(2,0)x\not\in(-2,0) and that V2V_{2} vanishes when x(0,1).x\not\in(0,1). Thus, the support of the potential VV is confined to the interval (2,1).(-2,1). Since V2V_{2} vanishes when x>1,x>1, as seen from (2.1) the corresponding Jost solution fl2(k,x)f_{\rm{l}2}(k,x) is equal to eikxIe^{ikx}I there. The evaluation of the corresponding scattering coefficients for the potential specified in (4.12) is not trivial. A relatively efficient way for that evaluation is accomplished as follows. For 0<x<10<x<1 we construct the Jost solution fl2(k,x)f_{\rm{l}2}(k,x) corresponding to the constant 2×22\times 2 matrix V2V_{2} by diagonalizing V2,V_{2}, obtaining the corresponding eigenvalues and eigenvectors of V2,V_{2}, and then constructing the general solution to (1.1) with the help of those eigenvalues and eigenvectors. Next, by using the continuity of fl2(k,x)f_{\rm{l}2}(k,x) and fl2(k,x)f^{\prime}_{\rm{l}2}(k,x) at the point x=1,x=1, we construct fl2(k,x)f_{\rm{l}2}(k,x) explicitly when 0<x<1.0<x<1. Then, by using (3.42) and its derivative at x=0x=0 we obtain the corresponding 2×22\times 2 scattering coefficients Tl2(k)T_{\rm{l}2}(k) and L2(k).L_{2}(k). By using a similar procedure, we construct the right Jost solution fr1(k,x)f_{\rm{r}1}(k,x) corresponding to the potential V1V_{1} as well as the corresponding 2×22\times 2 scattering coefficients Tr1(k)T_{\rm{r}1}(k) and R1(k).R_{1}(k). Next, we construct the transmission coefficients Tl(k)T_{\rm{l}}(k) and Tr(k)T_{\rm{r}}(k) corresponding to the whole potential VV by using (3.53) and (3.55), respectively. In fact, with the help of the symbolic software Mathematica, we evaluate Tl(k)T_{\rm{l}}(k) and Tr(k)T_{\rm{r}}(k) explicitly in a closed form. However, the corresponding explicit expressions are extremely lengthy, and hence it is not feasible to display them here. Instead, we present the small kk-limits of Tl(k)T_{\rm{l}}(k) and Tr(k),T_{\rm{r}}(k), which shows that Tl(k)Tr(k).T_{\rm{l}}(k)\not\equiv T_{\rm{r}}(k). We have

Tl(k)1=[al(k)bl(k)cl(k)dl(k)]+O(k3),k0 in +¯,T_{\rm{l}}(k)^{-1}=\begin{bmatrix}a_{\rm{l}}(k)&b_{\rm{l}}(k)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr c_{\rm{l}}(k)&d_{\rm{l}}(k)\end{bmatrix}+O(k^{3}),\qquad k\to 0\text{\rm{ in }}\overline{\mathbb{C}^{+}}, (4.13)
Tr(k)1=[ar(k)br(k)cr(k)dr(k)]+O(k3),k0 in +¯,T_{\rm{r}}(k)^{-1}=\begin{bmatrix}a_{\rm{r}}(k)&b_{\rm{r}}(k)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr c_{\rm{r}}(k)&d_{\rm{r}}(k)\end{bmatrix}+O(k^{3}),\qquad k\to 0\text{\rm{ in }}\overline{\mathbb{C}^{+}}, (4.14)

where we have defined

al(k):=7.05652¯74.1352¯ik(133.844¯+14.3522¯i)+(19.0756¯170.827¯i)k+(160.955¯+18.1729¯i)k2,\begin{split}a_{\rm{l}}(k):=&-\displaystyle\frac{7.0565\overline{2}-74.135\overline{2}\,i}{k}-\left(133.84\overline{4}+14.352\overline{2}\,i\right)\\ &+\left(19.075\overline{6}-170.82\overline{7}\,i\right)k+\left(160.95\overline{5}+18.172\overline{9}\,i\right)k^{2},\end{split} (4.15)
bl(k):=19.9839¯22.4176¯ik(50.5188¯+39.0893¯i)+(51.289¯68.6615¯i)k+(65.0516¯+48.4089¯i)k2,\begin{split}b_{\rm{l}}(k):=&-\displaystyle\frac{19.983\overline{9}-22.417\overline{6}\,i}{k}-\left(50.518\overline{8}+39.089\overline{3}\,i\right)\\ &+\left(51.28\overline{9}-68.661\overline{5}\,i\right)k+\left(65.051\overline{6}+48.408\overline{9}\,i\right)k^{2},\end{split} (4.16)
cl(k):=10.5752¯15.2172¯ik+(26.0265¯+19.9529¯i)(25.8548¯33.0752¯i)k(31.8705¯+24.1783¯i)k2,\begin{split}c_{\rm{l}}(k):=&\displaystyle\frac{10.575\overline{2}-15.217\overline{2}\,i}{k}+\left(26.026\overline{5}+19.952\overline{9}\,i\right)\\ &-\left(25.854\overline{8}-33.075\overline{2}\,i\right)k-\left(31.870\overline{5}+24.178\overline{3}\,i\right)k^{2},\end{split} (4.17)
dl(k):=7.05652¯2.55528¯ik+(7.27335¯+14.3522¯i)(19.0756¯11.1659¯i)k(10.4903¯+18.1729¯i)k2,\begin{split}d_{\rm{l}}(k):=&\displaystyle\frac{7.0565\overline{2}-2.5552\overline{8}\,i}{k}+\left(7.2733\overline{5}+14.352\overline{2}\,i\right)\\ &-\left(19.075\overline{6}-11.165\overline{9}\,i\right)k-\left(10.490\overline{3}+18.172\overline{9}\,i\right)k^{2},\end{split} (4.18)
ar(k):=7.05652¯+74.1352¯ik(133.844¯14.3522¯i)(19.0756¯+170.827¯i)k+(160.955¯18.1729¯i)k2,\begin{split}a_{\rm{r}}(k):=&\displaystyle\frac{7.0565\overline{2}+74.135\overline{2}\,i}{k}-\left(133.84\overline{4}-14.352\overline{2}\,i\right)\\ &-\left(19.075\overline{6}+170.82\overline{7}\,i\right)k+\left(160.95\overline{5}-18.172\overline{9}\,i\right)k^{2},\end{split} (4.19)
br(k):=10.5752¯+15.2172¯ik+(26.0265¯19.9529¯i)+(25.8548¯+33.0752¯i)k(31.8705¯24.1783¯i)k2,\begin{split}b_{\rm{r}}(k):=&-\displaystyle\frac{10.575\overline{2}+15.217\overline{2}\,i}{k}+\left(26.026\overline{5}-19.952\overline{9}\,i\right)\\ &+\left(25.854\overline{8}+33.075\overline{2}\,i\right)k-\left(31.870\overline{5}-24.178\overline{3}\,i\right)k^{2},\end{split} (4.20)
cr(k):=19.9839¯+22.4176¯ik(50.5188¯39.0893¯i)(51.289¯+68.6615¯i)k+(65.0516¯48.4089¯i)k2,\begin{split}c_{\rm{r}}(k):=&\displaystyle\frac{19.983\overline{9}+22.417\overline{6}\,i}{k}-\left(50.518\overline{8}-39.089\overline{3}\,i\right)\\ &-\left(51.28\overline{9}+68.661\overline{5}\,i\right)k+\left(65.051\overline{6}-48.408\overline{9}\,i\right)k^{2},\end{split} (4.21)
dr(k):=7.05652¯+2.55528¯ik+(7.27335¯14.3522¯i)+(19.0756¯+11.1659¯i)k(10.4903¯18.1729¯i)k2.\begin{split}d_{\rm{r}}(k):=&-\displaystyle\frac{7.0565\overline{2}+2.5552\overline{8}\,i}{k}+\left(7.2733\overline{5}-14.352\overline{2}\,i\right)\\ &+\left(19.075\overline{6}+11.165\overline{9}\,i\right)k-\left(10.490\overline{3}-18.172\overline{9}\,i\right)k^{2}.\end{split} (4.22)

Note that we use an overbar on a digit to denote a roundoff on that digit. From (4.13)–(4.22) we see that Tl(k)Tr(k),T_{\rm{l}}(k)\not\equiv T_{\rm{r}}(k), and in fact we confirm (4.1) up to O(k3)O(k^{3}) as k0.k\to 0.
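
The computation described in Example 4.2 was carried out symbolically with Mathematica. As a purely illustrative numerical alternative, the following sketch (our own, in Python, assuming NumPy and SciPy; the function names are ours) mimics the same steps for the piecewise-constant fragments in (4.12): each constant piece is crossed with a matrix exponential, which plays the role of the diagonalization used in the example, the fragment coefficients Tl2(k), L2(k) and Tr1(k), R1(k) are read off from (3.42) and (3.43) at x = 0, and the factorization formula (3.53) then yields Tl(k) for the whole potential.

```python
import numpy as np
from scipy.linalg import expm

def transfer(V, a, b, k):
    # Transfer matrix of -f'' + V f = k^2 f across [a, b] for a constant n x n matrix V:
    # [f(b); f'(b)] = expm((b - a) M) [f(a); f'(a)],  with  M = [[0, I], [V - k^2 I, 0]].
    n = V.shape[0]
    M = np.block([[np.zeros((n, n)), np.eye(n)],
                  [V - k**2 * np.eye(n), np.zeros((n, n))]]).astype(complex)
    return expm((b - a) * M)

def left_coeffs(V2, k, b=1.0):
    # T_l2(k) and L_2(k) for the right fragment: V2 constant on (0, b), zero elsewhere.
    n = V2.shape[0]
    # f_l2(k, x) = e^{ikx} I for x >= b; propagate its value and derivative back to x = 0.
    Yb = np.vstack([np.exp(1j * k * b) * np.eye(n),
                    1j * k * np.exp(1j * k * b) * np.eye(n)])
    Y0 = transfer(V2, b, 0.0, k) @ Yb
    f0, df0 = Y0[:n, :], Y0[n:, :]
    Tinv = (f0 + df0 / (1j * k)) / 2       # T_l2(k)^{-1}, from (3.42) at x = 0
    LTinv = (f0 - df0 / (1j * k)) / 2      # L_2(k) T_l2(k)^{-1}
    T = np.linalg.inv(Tinv)
    return T, LTinv @ T

def right_coeffs(V1, k, a=-2.0):
    # T_r1(k) and R_1(k) for the left fragment: V1 constant on (a, 0), zero elsewhere.
    n = V1.shape[0]
    # f_r1(k, x) = e^{-ikx} I for x <= a; propagate its value and derivative up to x = 0.
    Ya = np.vstack([np.exp(-1j * k * a) * np.eye(n),
                    -1j * k * np.exp(-1j * k * a) * np.eye(n)])
    Y0 = transfer(V1, a, 0.0, k) @ Ya
    f0, df0 = Y0[:n, :], Y0[n:, :]
    Tinv = (f0 - df0 / (1j * k)) / 2       # T_r1(k)^{-1}, from (3.43) at x = 0
    RTinv = (f0 + df0 / (1j * k)) / 2      # R_1(k) T_r1(k)^{-1}
    T = np.linalg.inv(Tinv)
    return T, RTinv @ T

V1 = np.array([[3, -2 + 1j], [-2 - 1j, -5]], dtype=complex)   # left fragment in (4.12)
V2 = np.array([[2, 1 + 1j], [1 - 1j, -2]], dtype=complex)     # right fragment in (4.12)

k = 0.7
Tl2, L2 = left_coeffs(V2, k)
Tr1, R1 = right_coeffs(V1, k)
Tr1_minus, _ = right_coeffs(V1, -k)
# Factorization formula (3.53): T_l(k) = T_l2(k) [I - R_1(k) L_2(k)]^{-1} T_r1(-k)^dagger.
Tl = Tl2 @ np.linalg.inv(np.eye(2) - R1 @ L2) @ Tr1_minus.conj().T
print(Tl)
```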

In Example 4.2, we have illustrated that the matrix-valued left and right transmission coefficients are not equal to each other in general, and that has been done when the potential is selfadjoint but not real. In order to demonstrate that the unequivalence of Tl(k)T_{\rm{l}}(k) and Tr(k)T_{\rm{r}}(k) is not caused by the potential being complex valued, we would like to show that, in general, when n2n\geq 2 we have Tl(k)Tr(k)T_{\rm{l}}(k)\not\equiv T_{\rm{r}}(k) even when the matrix potential VV in (1.1) is real valued. Suppose that, in addition to (1.2) and (1.3), the potential VV is real valued, i.e. we have

V(x)=V(x),x,V(x)^{\ast}=V(x),\qquad x\in\mathbb{R},

where we recall that we use an asterisk to denote complex conjugation. Then, from (1.1) we see that if ψ(k,x)\psi(k,x) is a solution to (1.1) then ψ(±k,x)\psi(\pm k^{\ast},x)^{\ast} is also a solution. In particular, using (2.1) and (2.2) we observe that, when the potential is real valued, we have

fl(k,x)=fl(k,x),fr(k,x)=fr(k,x),k+¯.f_{\rm{l}}(-k^{\ast},x)^{\ast}=f_{\rm{l}}(k,x),\quad f_{\rm{r}}(-k^{\ast},x)^{\ast}=f_{\rm{r}}(k,x),\qquad k\in\overline{\mathbb{C}^{+}}. (4.23)

Then, using (4.23), from (2.5) and (2.9) we obtain

Tl(k)=Tl(k),Tr(k)=Tr(k),k+¯,T_{\rm{l}}(-k^{\ast})^{\ast}=T_{\rm{l}}(k),\quad T_{\rm{r}}(-k^{\ast})^{\ast}=T_{\rm{r}}(k),\qquad k\in\overline{\mathbb{C}^{+}}, (4.24)
R(k)=R(k),L(k)=L(k),k.R(-k)^{\ast}=R(k),\quad L(-k)^{\ast}=L(k),\qquad k\in\mathbb{R}. (4.25)

Comparing (4.1) and (4.24), we have

Tl(k)t=Tr(k),k+¯,T_{\rm{l}}(k)^{t}=T_{\rm{r}}(k),\qquad k\in\overline{\mathbb{C}^{+}}, (4.26)

and similarly, by comparing the first two equalities in (2.50) with (4.25) we get

R(k)t=R(k),L(k)t=L(k),k,R(k)^{t}=R(k),\quad L(k)^{t}=L(k),\qquad k\in\mathbb{R},

where we use the superscript tt to denote the matrix transpose. We remark that, in the scalar case, since the potential VV in (1.1) is real valued, the equality of the left and right transmission coefficients directly follows from (4.26). Let us mention that, if the matrix potential is real valued, then (3.53)–(3.56) of Theorem 3.6 yield

Tl(k)=Tl2(k)[IR1(k)L2(k)]1Tr1(k)t,k,T_{\rm{l}}(k)=T_{\rm{l}2}(k)\left[I-R_{1}(k)\,L_{2}(k)\right]^{-1}T_{\rm{r}1}(k)^{t},\qquad k\in\mathbb{R}, (4.27)
L(k)=[Tr1(k)]1[L2(k)R1(k)][IR1(k)L2(k)]1Tr1(k)t,k,L(k)=[T_{\rm{r}1}(k)^{\dagger}]^{-1}\left[L_{2}(k)-R_{1}(k)^{\ast}\right]\left[I-R_{1}(k)\,L_{2}(k)\right]^{-1}T_{\rm{r}1}(k)^{t},\qquad k\in\mathbb{R},
Tr(k)=Tr1(k)[IL2(k)R1(k)]1Tl2(k)t,k,T_{\rm{r}}(k)=T_{\rm{r}1}(k)\left[I-L_{2}(k)\,R_{1}(k)\right]^{-1}T_{\rm{l}2}(k)^{t},\qquad k\in\mathbb{R}, (4.28)
R(k)=Tl2(k)[IR1(k)L2(k)]1[R1(k)L2(k)]Tl2(k)1,k.R(k)=T_{\rm{l}2}(k)\left[I-R_{1}(k)\,L_{2}(k)\right]^{-1}\left[R_{1}(k)-L_{2}(k)^{\ast}\right]T_{\rm{l}2}(-k)^{-1},\qquad k\in\mathbb{R}.

In the next example, we illustrate the unequivalence of the left and right transmission coefficients when the matrix potential VV in (1.1) is real valued. As in the previous example, we construct Tl(k)T_{\rm{l}}(k) and Tr(k)T_{\rm{r}}(k) in terms of the scattering coefficients corresponding to the fragments V1V_{1} and V2V_{2} specified in (1.4) and (1.5), and then we check the unequivalence of the resulting expressions.

Example 4.3.

Consider the full-line Schrödinger equation (1.1) with the matrix potential VV consisting of the fragments V1V_{1} and V2V_{2} as in (1.4) and (1.5), where those fragments are real valued and given by

V1(x)=[322c],2<x<0,V_{1}(x)=\begin{bmatrix}3&2\\ 2&c\end{bmatrix},\qquad-2<x<0, (4.29)
V2(x)=[2111],0<x<1.\quad V_{2}(x)=\begin{bmatrix}2&1\\ 1&1\end{bmatrix},\qquad 0<x<1. (4.30)

The parameter cc appearing in (4.29) is a real constant, and it is understood that the support of V1V_{1} is the interval x(2,0)x\in(-2,0) and the support of V2V_{2} is x(0,1).x\in(0,1). The procedure to evaluate the corresponding scattering coefficients is not trivial. As described in Example 4.2, we evaluate the scattering coefficients corresponding to V1V_{1} and V2V_{2} by diagonalizing each of the constant 2×22\times 2 matrices V1V_{1} and V2V_{2} and by constructing the corresponding Jost solutions fr1(k,x)f_{\rm{r}1}(k,x) and fl2(k,x).f_{\rm{l}2}(k,x). We then use (4.27) and (4.28) to obtain Tl(k)T_{\rm{l}}(k) and Tr(k).T_{\rm{r}}(k). As in Example 4.2 we have prepared a Mathematica notebook to evaluate Tl(k)T_{\rm{l}}(k) and Tr(k)T_{\rm{r}}(k) explicitly in a closed form. The resulting expressions are extremely lengthy, and hence it is not practical to display them in our paper. Because the potential VV is real valued in this example, in order to check if Tl(k)Tr(k),T_{\rm{l}}(k)\not\equiv T_{\rm{r}}(k), as seen from (4.26) it is sufficient to check whether the 2×22\times 2 matrix Tl(k)T_{\rm{l}}(k) is symmetric or not. We remark that it is also possible to evaluate Tl(k)T_{\rm{l}}(k) directly by constructing the Jost solution fl(k,x)f_{\rm{l}}(k,x) when x<0x<0 and then by using (2.5). However, the evaluations in that case are much more involved, and Mathematica is not capable of carrying out the computations properly unless a more powerful computer is used. This indicates the usefulness of the factorization formula in evaluating the matrix-valued scattering coefficients for the full-line matrix Schrödinger equation. For various values of the parameter c,c, by evaluating the difference Tl(k)Tl(k)tT_{\rm{l}}(k)-T_{\rm{l}}(k)^{t} at any kk-value we confirm that Tl(k)T_{\rm{l}}(k) is not a symmetric matrix. At any particular kk-value, we have Tl(k)=Tl(k)tT_{\rm{l}}(k)=T_{\rm{l}}(k)^{t} if and only if the matrix norm of Tl(k)Tl(k)tT_{\rm{l}}(k)-T_{\rm{l}}(k)^{t} is zero. Using Mathematica, we are able to plot that matrix norm as a function of kk in the interval k[0,b]k\in[0,b] for any positive b.b. We then observe that that norm is strictly positive, and hence we are able to confirm that in general we have Tl(k)Tr(k)T_{\rm{l}}(k)\not\equiv T_{\rm{r}}(k) when n2.n\geq 2. One other reason for us to use the parameter cc in (4.29) is the following. The value of cc affects the number of eigenvalues for the full-line Schrödinger operator with the specified potential V,V, and hence by using various different values of cc as input we can check the unequivalence of Tl(k)T_{\rm{l}}(k) and Tr(k)T_{\rm{r}}(k) as the number of eigenvalues changes. The eigenvalues occur at the kk-values on the positive imaginary axis in the complex plane where the determinant of Tl(k)T_{\rm{l}}(k) has poles. Thus, we are able to identify the eigenvalues by locating the zeros of det[Tl(iκ)1]\det[T_{\rm{l}}(i\kappa)^{-1}] when κ>0.\kappa>0. Let us recall that we use an overbar on a digit to indicate a roundoff. We find that the numerically approximate value c=1.13725¯c=1.1372\overline{5} corresponds to an exceptional case, where the number of eigenvalues changes by 1.1. For example, when c>1.13725¯c>1.1372\overline{5} we observe that there are no eigenvalues and that there is exactly one eigenvalue when 0<c<1.13725¯.0<c<1.1372\overline{5}. When c=0c=0 we have an eigenvalue at k=0.551¯i,k=0.55\overline{1}i, the eigenvalue shifts to k=0.0695¯ik=0.069\overline{5}i when c=1.c=1. 
When c=1.3c=1.3 we do not have any eigenvalues and the zero of det[Tl(k)1]\det[T_{\rm{l}}(k)^{-1}] occurs on the negative imaginary axis at k=0.794¯i.k=-0.79\overline{4}i. We repeat our examination of the unequivalence of Tl(k)T_{\rm{l}}(k) and Tr(k)T_{\rm{r}}(k) by also changing other entries of the matrices V1(x)V_{1}(x) and V2(x)V_{2}(x) appearing in (4.29) and (4.30), respectively. For example, by letting

V1(x)=[3225],2<x<0,V_{1}(x)=\begin{bmatrix}3&-2\\ -2&-5\end{bmatrix},\qquad-2<x<0, (4.31)
V2(x)=[2112],0<x<1,\quad V_{2}(x)=\begin{bmatrix}2&1\\ 1&-2\end{bmatrix},\qquad 0<x<1, (4.32)

we evaluate Tl(k)T_{\rm{l}}(k) using (4.27). In this case we observe that there are three eigenvalues occurring when

k=0.005635¯i,k=1.2737¯i,k=2.08802¯i,k=0.00563\overline{5}\,i,\quad k=1.273\overline{7}\,i,\quad k=2.0880\overline{2}\,i,

and we still observe that Tl(k)Tr(k)T_{\rm{l}}(k)\not\equiv T_{\rm{r}}(k) in this case. In fact, using V1V_{1} and V2V_{2} appearing in (4.31) and (4.32), respectively, as input, we observe that the corresponding transmission coefficients satisfy

2ikTl(k)1=[130.267¯28.3984¯43.555¯9.46508¯]+O(k),k0,2ik\,T_{\rm{l}}(k)^{-1}=\begin{bmatrix}-130.26\overline{7}&28.398\overline{4}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr-43.55\overline{5}&9.4650\overline{8}\end{bmatrix}+O(k),\qquad k\to 0,
2ikTr(k)1=[130.267¯43.555¯28.3984¯9.46508¯]+O(k),k0,2ik\,T_{\rm{r}}(k)^{-1}=\begin{bmatrix}-130.26\overline{7}&-43.55\overline{5}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 28.398\overline{4}&9.4650\overline{8}\end{bmatrix}+O(k),\qquad k\to 0,

confirming that we cannot have Tl(k)Tr(k).T_{\rm{l}}(k)\equiv T_{\rm{r}}(k). In Figure 4.1 we present the plot of the matrix norm of Tl(k)Tr(k)T_{\rm{l}}(k)-T_{\rm{r}}(k) as a function of k,k, which also shows that Tl(k)Tr(k).T_{\rm{l}}(k)\not\equiv T_{\rm{r}}(k).

Figure 4.1: The matrix norm |Tl(k)Tr(k)||T_{\rm{l}}(k)-T_{\rm{r}}(k)| vs kk in Example 4.3 for the potential fragmented as in (1.4) and (1.5) with V1V_{1} and V2V_{2} given in (4.31) and (4.32), respectively.
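
A curve such as the one in Figure 4.1 can be approximated with the numerical sketch given after Example 4.2 (again our own illustration rather than the authors' Mathematica computation, and it relies on the hypothetical helpers left_coeffs and right_coeffs defined there): using the real fragments (4.31) and (4.32), one evaluates Tl(k) and Tr(k) through (4.27) and (4.28) on a grid of k values and records the norm of their difference.

```python
import numpy as np
# Assumes left_coeffs and right_coeffs from the sketch given after Example 4.2.

V1 = np.array([[3.0, -2.0], [-2.0, -5.0]])   # (4.31), supported on (-2, 0)
V2 = np.array([[2.0, 1.0], [1.0, -2.0]])     # (4.32), supported on (0, 1)

ks = np.linspace(0.05, 5.0, 200)
norms = []
for k in ks:
    Tl2, L2 = left_coeffs(V2, k)
    Tr1, R1 = right_coeffs(V1, k)
    # Real potential: (4.27)-(4.28), with T_r1(-k)^dagger = T_r1(k)^t and T_l2(-k)^dagger = T_l2(k)^t.
    Tl = Tl2 @ np.linalg.inv(np.eye(2) - R1 @ L2) @ Tr1.T
    Tr = Tr1 @ np.linalg.inv(np.eye(2) - L2 @ R1) @ Tl2.T
    norms.append(np.linalg.norm(Tl - Tr, 2))
print(max(norms))   # strictly positive on the grid, so Tl(k) and Tr(k) differ
```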

5 The connection to the half-line Schrödinger operator

In this section we explore an important connection between the full-line n×nn\times n matrix Schrödinger equation (1.1) and the half-line 2n×2n2n\times 2n matrix Schrödinger equation

ϕ′′+𝐕(x)ϕ=k2ϕ,x+,-\phi^{\prime\prime}+\mathbf{V}(x)\,\phi=k^{2}\phi,\qquad x\in\mathbb{R}^{+}, (5.1)

where +:=(0,+)\mathbb{R}^{+}:=(0,+\infty) and the potential 𝐕\mathbf{V} is a 2n×2n2n\times 2n matrix-valued function of x.x. The connection will be made by choosing the potential 𝐕\mathbf{V} in terms of the fragments V1V_{1} and V2V_{2} appearing in (1.4) and (1.5) for the full-line potential VV in an appropriate way and also by supplementing (5.1) with an appropriate boundary condition. To make a distinction between the quantities relevant to the full-line Schrödinger equation (1.1) and the quantities relevant to the half-line Schrödinger equation (5.1), we use boldface to denote some of the quantities related to (5.1).

Before making the connection between the half-line and full-line Schrödinger operators, we first provide a summary of some basic relevant facts related to (5.1). Since our interest in the half-line Schrödinger operator is restricted to its connection to the full-line Schrödinger operator, we consider (5.1) when the matrix potential has size 2n×2n,2n\times 2n, where nn is the positive integer related to the matrix size n×nn\times n of the potential VV in (1.1). We refer the reader to [6] for the analysis of (5.1) when the size of the matrix potential 𝐕\mathbf{V} is n×n,n\times n, where nn can be chosen as any positive integer.

We now present some basic relevant facts related to (5.1) by assuming that the half-line 2n×2n2n\times 2n matrix potential 𝐕\mathbf{V} in (5.1) satisfies

𝐕(x)=𝐕(x),x+,\mathbf{V}(x)^{\dagger}=\mathbf{V}(x),\qquad x\in\mathbb{R}^{+}, (5.2)
0𝑑x(1+x)|𝐕(x)|<+.\int_{0}^{\infty}dx\,(1+x)\,|\mathbf{V}(x)|<+\infty. (5.3)

To construct the half-line matrix Schrödinger operator related to (5.1), we supplement (5.1) with the general selfadjoint boundary condition

Bϕ(0)+Aϕ(0)=0,-B^{\dagger}\phi(0)+A^{\dagger}\phi^{\prime}(0)=0, (5.4)

where AA and BB are two constant 2n×2n2n\times 2n matrices satisfying

AA+BB>0,BA=AB.A^{\dagger}A+B^{\dagger}B>0,\quad B^{\dagger}A=A^{\dagger}B. (5.5)

We recall that a matrix is positive (also called positive definite) when all its eigenvalues are positive.

It is possible to express the 2n2n boundary conditions listed in (5.4) in an uncoupled form. We refer the reader to Section 3.4 of [6] for the explicit steps to transform from any pair (A,B)(A,B) of boundary matrices appearing in the general selfadjoint boundary condition described in (5.4) and (5.5) to the diagonal boundary matrix pair (A~,B~)(\tilde{A},\tilde{B}) given by

{A~=diag{sinθ1,sinθ2,,sinθ2n},B~=diag{cosθ1,cosθ2,,cosθ2n},\begin{cases}\tilde{A}=-\text{\rm{diag}}\left\{\sin\theta_{1},\sin\theta_{2},\cdots,\sin\theta_{2n}\right\},\\ \tilde{B}=\text{\rm{diag}}\left\{\cos\theta_{1},\cos\theta_{2},\cdots,\cos\theta_{2n}\right\},\end{cases} (5.6)

for some appropriate real parameters θj(0,π].\theta_{j}\in(0,\pi]. It can directly be verified that the matrix pair (A~,B~)(\tilde{A},\tilde{B}) satisfies the two equalities in (5.5) and that (5.4) is equivalent to the uncoupled system of 2n2n equations given by

(cosθj)ϕj(0)+(sinθj)ϕj(0)=0,1j2n.\left(\cos\theta_{j}\right)\phi_{j}(0)+\left(\sin\theta_{j}\right)\phi^{\prime}_{j}(0)=0,\qquad 1\leq j\leq 2n. (5.7)

We remark that the case θj=π\theta_{j}=\pi corresponds to the Dirichlet boundary condition and the case θj=π/2\theta_{j}=\pi/2 corresponds to the Neumann boundary condition. Let us use nDn_{\text{\rm D}} and nNn_{\text{\rm N}} to denote the number of Dirichlet and Neumann boundary conditions in (5.7), and let us use nMn_{\text{\rm M}} to denote the number of mixed boundary conditions in (5.7), where a mixed boundary condition occurs when θ(0,π/2)\theta\in(0,\pi/2) or θ(π/2,π).\theta\in(\pi/2,\pi). It is clear that we have

nD+nN+nM=2n.n_{\text{\rm D}}+n_{\text{\rm N}}+n_{\text{\rm M}}=2n.

In Sections 3.3 and 3.5 of [6] we have constructed a selfadjoint realization of the matrix Schrödinger operator d2/dx2+𝐕(x)-d^{2}/dx^{2}+\mathbf{V}(x) with the boundary condition described in (5.4) and (5.5), and we have used HA,B,𝐕H_{A,B,\mathbf{V}} to denote it. In Section 3.6 of [6] we have shown that that selfadjoint realization with the boundary matrices (A,B)(A,B) and the selfadjoint realization with the boundary matrices (A~,B~)(\tilde{A},\tilde{B}) are related to each other as

HA,B,𝐕=MHA~,B~,M𝐕MM.H_{A,B,\mathbf{V}}=MH_{\tilde{A},\tilde{B},M^{\dagger}\mathbf{V}M}M^{\dagger}. (5.8)

for some 2n×2n2n\times 2n unitary matrix M.M.

In Section 2.4 of [6] we have established a unitary transformation between the half-line 2n×2n2n\times 2n matrix Schrödinger operator and the full-line n×nn\times n matrix Schrödinger operator by choosing the boundary matrices AA and BB in (5.4) appropriately so that the full-line potential VV at x=0x=0 includes a point interaction. Using that unitary transformation, in [13] the half-line physical solution to (5.1) and the full-line physical solutions to (1.1) are related to each other, and also the corresponding half-line scattering matrix and full-line scattering matrix are related to each other. In this section of our current paper, we analyze, via a unitary transformation and in the absence of a point interaction, the connection between various half-line quantities and the corresponding full-line quantities.

By proceeding as in Section 2.4 of [6], we establish our unitary operator 𝐔\mathbf{U} from L2(+;2n)L^{2}(\mathbb{R}^{+};\mathbb{C}^{2n}) onto L2(;n)L^{2}(\mathbb{R};\mathbb{C}^{n}) as follows. We decompose any complex-valued, square-integrable column vector ϕ(x)\phi(x) with 2n2n components into two pieces of column vectors ϕ+(x)\phi_{+}(x) and ϕ(x),\phi_{-}(x), each with nn components, as

ϕ(x)=:[ϕ+(x)ϕ(x)],x+.\phi(x)=:\begin{bmatrix}\phi_{+}(x)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\phi_{-}(x)\end{bmatrix},\qquad x\in\mathbb{R}^{+}.

Then, our unitary transformation 𝐔\mathbf{U} maps ϕ(x)\phi(x) onto the complex-valued, square-integrable column vector ψ(x)\psi(x) with nn components in such a way that

ψ(x)={ϕ+(x),x>0,ϕ(x),x<0.\psi(x)=\begin{cases}\phi_{+}(x),\qquad x>0,\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\phi_{-}(-x),\qquad x<0.\end{cases}

We use the decomposition of the full-line n×nn\times n matrix potential VV into the potential fragments V1V_{1} and V2V_{2} as described in (1.4) and (1.5). We choose the half-line 2n×2n2n\times 2n matrix potential 𝐕\mathbf{V} in terms of V1V_{1} and V2V_{2} by letting

𝐕(x)=[V2(x)00V1(x)],x+.\mathbf{V}(x)=\begin{bmatrix}V_{2}(x)&0\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0&V_{1}(-x)\end{bmatrix},\qquad x\in\mathbb{R}^{+}. (5.9)

The inverse transformation 𝐔1,\mathbf{U}^{-1}, which equals 𝐔,\mathbf{U}^{\dagger}, maps ψ(x)\psi(x) onto ϕ(x)\phi(x) via

ϕ(x)=[ψ(x)ψ(x)],x+.\phi(x)=\begin{bmatrix}\psi(x)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\psi(-x)\end{bmatrix},\qquad x\in\mathbb{R}^{+}.
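
As a concrete illustration of the folding just described, the following elementary sketch (ours, assuming NumPy and a grid 0 < x_1 < ... < x_J on the half line) realizes 𝐔 and 𝐔^{-1} on discretized functions: the two components ϕ+ and ϕ− of a half-line column vector function are unfolded into a single full-line function and folded back, and the discrete L² norm is preserved, which is the discrete analogue of unitarity.

```python
import numpy as np

def U_map(phi_plus, phi_minus):
    # phi_plus[j], phi_minus[j]: values at grid points 0 < x_1 < ... < x_J.
    # Returns psi sampled on (-x_J, ..., -x_1, x_1, ..., x_J), that is,
    # psi(x) = phi_plus(x) for x > 0 and psi(x) = phi_minus(-x) for x < 0.
    return np.concatenate([phi_minus[::-1], phi_plus])

def U_inverse(psi):
    # Fold a full-line sample back into the pair (phi_plus, phi_minus) on x_j > 0.
    J = psi.shape[0] // 2
    return psi[J:], psi[:J][::-1]

x = np.linspace(0.1, 5.0, 50)
phi_plus, phi_minus = np.exp(-x), np.sin(x) * np.exp(-x)
psi = U_map(phi_plus, phi_minus)
assert np.isclose(np.linalg.norm(psi)**2,
                  np.linalg.norm(phi_plus)**2 + np.linalg.norm(phi_minus)**2)
```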

Under the action of the unitary transformation 𝐔,\mathbf{U}, the half-line Hamiltonian HA,B,𝐕H_{A,B,\mathbf{V}} appearing on the left-hand side of (5.8) is unitarily transformed onto the full-line Hamiltonian HV,H_{V}, where HVH_{V} is related to HA,B,𝐕H_{A,B,\mathbf{V}} as

HV=𝐔HA,B,𝐕𝐔,H_{V}=\mathbf{U}H_{A,B,\mathbf{V}}\mathbf{U}^{\dagger}, (5.10)

and its domain D[HV]D[H_{V}] is given by

D[HV]={ψL2(;n):𝐔ψD[HA,B,𝐕]},D[H_{V}]=\left\{\psi\in L^{2}(\mathbb{R};\mathbb{C}^{n}):\mathbf{U}^{\dagger}\psi\in D[H_{A,B,\mathbf{V}}]\right\},

where D[HA,B,𝐕]D[H_{A,B,\mathbf{V}}] denotes the domain of HA,B,𝐕.H_{A,B,\mathbf{V}}. The operator HVH_{V} specified in (5.10) is a selfadjoint realization in L2(;n)L^{2}(\mathbb{R};\mathbb{C}^{n}) of the formal differential operator d2/dx2+V(x),-d^{2}/dx^{2}+V(x), where VV is the full-line potential appearing in (1.4) and satisfying (1.2) and (1.3).

The boundary condition (5.4) at x=0x=0 of +\mathbb{R}^{+} satisfied by the functions in D[HA,B,𝐕]D[H_{A,B,\mathbf{V}}] implies that the functions in D[HV]D[H_{V}] themselves satisfy a transmission condition at x=0x=0 of the full line .\mathbb{R}. In order to determine that transmission condition, we express the boundary matrices AA and BB appearing in (5.4) in terms of n×2nn\times 2n block matrices A1,A_{1}, A2,A_{2}, B1,B_{1}, and B2B_{2} as

A=:[A1A2],B=:[B1B2].A=:\begin{bmatrix}A_{1}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr A_{2}\end{bmatrix},\quad B=:\begin{bmatrix}B_{1}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr B_{2}\end{bmatrix}. (5.11)

Using (5.11) in (5.4) we see that any function ψ(x)\psi(x) in D[HV]D[H_{V}] satisfies the transmission condition at x=0x=0 given by

B1ψ(0+)B2ψ(0)+A1ψ(0+)A2ψ(0)=0.-B_{1}^{\dagger}\,\psi(0^{+})-B_{2}^{\dagger}\,\psi(0^{-})+A_{1}^{\dagger}\,\psi^{\prime}(0^{+})-A_{2}^{\dagger}\,\psi^{\prime}(0^{-})=0. (5.12)

For example, let us choose the boundary matrices AA and BB as

A=[0I0I],B=[I0I0],A=\begin{bmatrix}0&I\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0&I\end{bmatrix},\quad B=\begin{bmatrix}-I&0\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr I&0\end{bmatrix}, (5.13)

where we recall that II is the n×nn\times n identity matrix and 0 denotes the n×nn\times n zero matrix. It can directly be verified that the matrices AA and BB appearing in (5.13) satisfy the two matrix equalities in (5.5). Then, the transmission condition at x=0x=0 of \mathbb{R} given in (5.12) is equivalent to the two conditions

ψ(0+)=ψ(0),ψ(0+)=ψ(0),\psi(0^{+})=\psi(0^{-}),\quad\psi^{\prime}(0^{+})=\psi^{\prime}(0^{-}),

which indicate that the functions ψ\psi and ψ\psi^{\prime} are continuous at x=0.x=0. In this case, HVH_{V} is the standard matrix Schrödinger operator on the full line without a point interaction. In the special case when the boundary matrices AA and BB are chosen as in (5.13), as shown in Proposition 5.9 of [13], the corresponding transformed boundary matrices A~\tilde{A} and B~\tilde{B} in (5.6) yield precisely nn Dirichlet and nn Neumann boundary conditions.
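
The two requirements in (5.5) are straightforward to verify for the particular boundary matrices in (5.13); the following short sketch (ours, written for n = 1 so that I is the 1×1 identity) checks numerically that A†A + B†B is a positive matrix and that B†A = A†B.

```python
import numpy as np

n = 1
I = np.eye(n)
O = np.zeros((n, n))
A = np.block([[O, I], [O, I]])     # boundary matrices of (5.13) for n = 1
B = np.block([[-I, O], [I, O]])

print(np.linalg.eigvalsh(A.conj().T @ A + B.conj().T @ B))   # [2., 2.], a positive matrix
print(np.allclose(B.conj().T @ A, A.conj().T @ B))           # True: B^dagger A = A^dagger B
```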

We now introduce some relevant quantities for the half-line Schrödinger equation (5.1) with the boundary condition (5.4). We assume that the half-line potential 𝐕\mathbf{V} is chosen as in (5.9), where the full-line potential VV satisfies (1.2) and (1.3). Hence, 𝐕\mathbf{V} satisfies (5.2) and (5.3). As already mentioned, we use boldface to denote some of the half-line quantities in order to make a contrast with the corresponding full-line quantities. For example, 𝐕\mathbf{V} is the half-line 2n×2n2n\times 2n matrix potential whereas VV is the full-line n×nn\times n matrix potential, 𝐒(k)\mathbf{S}(k) is the half-line 2n×2n2n\times 2n scattering matrix while S(k)S(k) is the full-line 2n×2n2n\times 2n scattering matrix, 𝐟(k,x)\mathbf{f}(k,x) is the half-line 2n×2n2n\times 2n matrix-valued Jost solution whereas fl(k,x)f_{\rm{l}}(k,x) and fr(k,x)f_{\rm{r}}(k,x) are the full-line n×nn\times n matrix-valued Jost solutions, 𝚽(k,x)\mathbf{\Phi}(k,x) is the half-line 2n×2n2n\times 2n matrix-valued regular solution, the quantity 𝚿(k,x)\mathbf{\Psi}(k,x) denotes the half-line 2n×2n2n\times 2n matrix-valued physical solution whereas Ψl(k,x)\Psi_{\rm{l}}(k,x) and Ψr(k,x)\Psi_{\rm{r}}(k,x) are the full-line n×nn\times n matrix-valued physical solutions. We use II for the n×nn\times n identity matrix and use 𝐈\mathbf{I} for the 2n×2n2n\times 2n identity matrix. We recall that JJ denotes the 2n×2n2n\times 2n constant matrix defined in (3.10) and it should not be confused with the half-line 2n×2n2n\times 2n Jost matrix 𝐉(k).\mathbf{J}(k).

The Jost solution 𝐟(k,x)\mathbf{f}(k,x) is the solution to (5.1) satisfying the spacial asymptotics

𝐟(k,x)=eikx[𝐈+o(1)],𝐟(k,x)=ikeikx[𝐈+o(1)],x+.\mathbf{f}(k,x)=e^{ikx}\left[\mathbf{I}+o(1)\right],\quad\mathbf{f}^{\prime}(k,x)=ik\,e^{ikx}\left[\mathbf{I}+o(1)\right],\qquad x\to+\infty. (5.14)

In terms of the 2n×2n2n\times 2n boundary matrices AA and BB appearing in (5.4) and the Jost solution 𝐟(k,x),\mathbf{f}(k,x), the 2n×2n2n\times 2n Jost matrix 𝐉(k)\mathbf{J}(k) is defined as

𝐉(k):=𝐟(k,0)B𝐟(k,0)A,k+¯,\mathbf{J}(k):=\mathbf{f}(-k^{*},0)^{\dagger}B-\mathbf{f}^{\prime}(-k^{*},0)^{\dagger}A,\qquad k\in\overline{\mathbb{C}^{+}}, (5.15)

where k-k^{*} is used because 𝐉(k)\mathbf{J}(k) has [6] an analytic extension in kk from \mathbb{R} to +\mathbb{C}^{+} and 𝐉(k)\mathbf{J}(k) is continuous in +¯.\overline{\mathbb{C}^{+}}. The half-line 2n×2n2n\times 2n scattering matrix 𝐒(k)\mathbf{S}(k) is defined in terms of the Jost matrix 𝐉(k)\mathbf{J}(k) as

𝐒(k):=𝐉(k)𝐉(k)1,k.\mathbf{S}(k):=-\mathbf{J}(-k)\,\mathbf{J}(k)^{-1},\qquad k\in\mathbb{R}. (5.16)

When the potential 𝐕\mathbf{V} satisfies (5.2) and (5.3), in the so-called exceptional case the matrix 𝐉(k)1\mathbf{J}(k)^{-1} has a singularity at k=0k=0 even though the limit of the right-hand side of (5.16) exists when k0.k\to 0. Thus, 𝐒(k)\mathbf{S}(k) is continuous in kk\in\mathbb{R} including k=0.k=0. It is known that 𝐒(k)\mathbf{S}(k) satisfies

𝐒(k)1=𝐒(k)=𝐒(k),k.\mathbf{S}(k)^{-1}=\mathbf{S}(k)^{\dagger}=\mathbf{S}(-k),\qquad k\in\mathbb{R}.

We refer the reader to Theorem 3.8.15 of [6] regarding the small kk-behavior of 𝐉(k)1\mathbf{J}(k)^{-1} and 𝐒(k).\mathbf{S}(k).

The 2n×2n2n\times 2n matrix-valued physical solution 𝚿(k,x)\mathbf{\Psi}(k,x) to (5.1) is defined in terms of the Jost solution 𝐟(k,x)\mathbf{f}(k,x) and the scattering matrix 𝐒(k)\mathbf{S}(k) as

𝚿(k,x):=𝐟(k,x)+𝐟(k,x)𝐒(k).\mathbf{\Psi}(k,x):=\mathbf{f}(-k,x)+\mathbf{f}(k,x)\,\mathbf{S}(k). (5.17)

It is known [6] that 𝚿(k,x)\mathbf{\Psi}(k,x) has a meromorphic extension from kk\in\mathbb{R} to k+k\in\mathbb{C}^{+} and it also satisfies the boundary condition (5.4), i.e. we have

B𝚿(k,0)+A𝚿(k,0)=0.-B^{\dagger}\mathbf{\Psi}(k,0)+A^{\dagger}\mathbf{\Psi}^{\prime}(k,0)=0. (5.18)

There is also a particular 2n×2n2n\times 2n matrix-valued solution 𝚽(k,x)\mathbf{\Phi}(k,x) to (5.1) satisfying the initial conditions

𝚽(k,0)=A,𝚽(k,0)=B.\mathbf{\Phi}(k,0)=A,\quad\mathbf{\Phi}^{\prime}(k,0)=B.

Because 𝚽(k,x)\mathbf{\Phi}(k,x) is entire in kk for each fixed x+,x\in\mathbb{R}^{+}, it is usually called the regular solution. The physical solution 𝚿(k,x)\mathbf{\Psi}(k,x) and the regular solution 𝚽(k,x)\mathbf{\Phi}(k,x) are related to each other via the Jost matrix 𝐉(k)\mathbf{J}(k) as

𝚿(k,x)=2ik𝚽(k,x)𝐉(k)1.\mathbf{\Psi}(k,x)=-2ik\,\mathbf{\Phi}(k,x)\,\mathbf{J}(k)^{-1}.

In the next theorem, we consider the special case where the half-line Schrödinger operator HA,B,𝐕H_{A,B,\mathbf{V}} and the full-line Schrödinger operator HVH_{V} are related to each other as in (5.10), with the potentials 𝐕\mathbf{V} and VV being related as in (1.4) and (5.9) and with the boundary matrices AA and BB chosen as in (5.13). We show how the corresponding half-line Jost solution, half-line physical solution, half-line scattering matrix, and half-line Jost matrix are related to the appropriate full-line quantities. We remark that the results in (5.21) and (5.22) have already been proved in [13] by using a different method.

Theorem 5.1.

Consider the full-line matrix Schrödinger equation (1.1) with the n×nn\times n matrix potential VV satisfying (1.2) and (1.3). Assume that the corresponding full-line Hamiltonian HVH_{V} and the half-line Hamiltonian HA,B,𝐕H_{A,B,\mathbf{V}} are unitarily connected as in (5.10) by relating the half-line 2n×2n2n\times 2n matrix potential 𝐕\mathbf{V} to VV as in (1.4) and (5.9) and by choosing the boundary matrices AA and BB as in (5.13). Then, we have the following:

  1. (a)

    The half-line 2n×2n2n\times 2n matrix-valued Jost solution 𝐟(k,x)\mathbf{f}(k,x) to (5.1) appearing in (5.14) is related to the full-line n×nn\times n matrix-valued Jost solutions fl(k,x)f_{\rm{l}}(k,x) and fr(k,x)f_{\rm{r}}(k,x) appearing in (2.1) and (2.2), respectively, as

    𝐟(k,x)=[fl(k,x)00fr(k,x)],x+,k+¯.\mathbf{f}(k,x)=\begin{bmatrix}f_{\rm{l}}(k,x)&0\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0&f_{\rm{r}}(k,-x)\end{bmatrix},\qquad x\in\mathbb{R}^{+},\quad k\in\overline{\mathbb{C}^{+}}. (5.19)
  2. (b)

    The half-line 2n×2n2n\times 2n scattering matrix 𝐒(k)\mathbf{S}(k) defined in (5.16) is related to the full-line 2n×2n2n\times 2n scattering matrix S(k)S(k) defined in (2.9) as

    𝐒(k)=S(k)Q,k,\mathbf{S}(k)=S(k)\,Q,\qquad k\in\mathbb{R}, (5.20)

    where QQ is the 2n×2n2n\times 2n constant matrix defined in (2.51). Hence, the half-line scattering matrix 𝐒(k)\mathbf{S}(k) is related to the full-line n×nn\times n matrix-valued scattering coefficients as

    𝐒(k)=[R(k)Tl(k)Tr(k)L(k)],k.\mathbf{S}(k)=\begin{bmatrix}R(k)&T_{\rm{l}}(k)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr T_{\rm{r}}(k)&L(k)\end{bmatrix},\qquad k\in\mathbb{R}. (5.21)
  3. (c)

    The half-line 2n×2n2n\times 2n physical solution 𝚿(k,x)\mathbf{\Psi}(k,x) defined in (5.17) is related to the full-line n×nn\times n matrix-valued physical solutions Ψl(k,x)\Psi_{\rm{l}}(k,x) and Ψr(k,x)\Psi_{\rm{r}}(k,x) appearing in (2.10) as

    𝚿(k,x)=[Ψr(k,x)Ψl(k,x)Ψr(k,x)Ψl(k,x)],x+,k+¯.\mathbf{\Psi}(k,x)=\begin{bmatrix}\Psi_{\rm{r}}(k,x)&\Psi_{\rm{l}}(k,x)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\Psi_{\rm{r}}(k,-x)&\Psi_{\rm{l}}(k,-x)\end{bmatrix},\qquad x\in\mathbb{R}^{+},\quad k\in\overline{\mathbb{C}^{+}}. (5.22)
  4. (d)

    The half-line 2n×2n2n\times 2n Jost matrix 𝐉(k)\mathbf{J}(k) defined in (5.15) and its inverse 𝐉(k)1\mathbf{J}(k)^{-1} are related to the full-line n×nn\times n matrix-valued Jost solutions fl(k,x)f_{\rm{l}}(k,x) and fr(k,x)f_{\rm{r}}(k,x) appearing in (2.1) and (2.2) and the n×nn\times n matrix-valued transmission coefficients Tl(k)T_{\rm{l}}(k) and Tr(k)T_{\rm{r}}(k) appearing in (2.5) and (2.6) as

    𝐉(k)=[fl(k,0)fl(k,0)fr(k,0)fr(k,0)],k+¯,\mathbf{J}(k)=\begin{bmatrix}-f_{\rm{l}}(-k^{*},0)^{\dagger}&-f^{\prime}_{\rm{l}}(-k^{*},0)^{\dagger}\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr f_{\rm{r}}(-k^{*},0)^{\dagger}&f^{\prime}_{\rm{r}}(-k^{*},0)^{\dagger}\end{bmatrix},\qquad k\in\overline{\mathbb{C}^{+}}, (5.23)
    𝐉(k)1=12ik[fr(k,0)fl(k,0)fr(k,0)fl(k,0))][Tr(k)00Tl(k)],k+¯{0}.\mathbf{J}(k)^{-1}=\displaystyle\frac{1}{2ik}\begin{bmatrix}f^{\prime}_{\rm{r}}(k,0)&f^{\prime}_{\rm{l}}(k,0)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr-f_{\rm{r}}(k,0)&-f_{\rm{l}}(k,0))\end{bmatrix}\begin{bmatrix}T_{\rm{r}}(k)&0\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0&T_{\rm{l}}(k)\end{bmatrix},\qquad k\in\overline{\mathbb{C}^{+}}\setminus\{0\}. (5.24)
Proof.

The Jost solutions fl(k,x)f_{\rm{l}}(k,x) and fr(k,x)f_{\rm{r}}(k,x) satisfy (1.1). Since kk appears as k2k^{2} in (1.1), the quantities fl(k,x),f_{\rm{l}}(-k,x), and fr(k,x)f_{\rm{r}}(-k,x) also satisfy (1.1). Furthermore, fl(k,x)f_{\rm{l}}(k,x) and fr(k,x)f_{\rm{r}}(k,x) satisfy the respective spacial asymptotics in (2.1) and (2.2). Then, with the help of (1.5) and (5.9) we see that the half-line Jost solution 𝐟(k,x)\mathbf{f}(k,x) given in (5.19) satisfies the half-line Schrödinger equation (5.1) and the spacial asymptotics (5.14). Thus, the proof of (a) is complete. Let us now prove (b). We evaluate (5.17) and its xx-derivative at the point x=0,x=0, and then we use the result in (5.18), where AA and BB are the matrices in (5.13). This yields

[fl(k,0)fr(k,0)fl(k,0)fr(k,0)]+[fl(k,0)fr(k,0)fl(k,0)fr(k,0)]𝐒(k)=0.\begin{bmatrix}f_{\rm{l}}(-k,0)&-f_{\rm{r}}(-k,0)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr f^{\prime}_{\rm{l}}(-k,0)&-f^{\prime}_{\rm{r}}(-k,0)\end{bmatrix}+\begin{bmatrix}f_{\rm{l}}(k,0)&-f_{\rm{r}}(k,0)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr f^{\prime}_{\rm{l}}(k,0)&-f^{\prime}_{\rm{r}}(k,0)\end{bmatrix}\mathbf{S}(k)=0. (5.25)

Using the matrices G(k,x)G(k,x) and JJ defined in (2.18) and (3.10), respectively, we write (5.25) as

G(k,0)J+G(k,0)J𝐒(k)=0,k,G(-k,0)\,J+G(k,0)\,J\,\mathbf{S}(k)=0,\qquad k\in\mathbb{R},

which yields

𝐒(k)=JG(k,0)1G(k,0)J,k.\mathbf{S}(k)=-J\,G(k,0)^{-1}\,G(-k,0)\,J,\qquad k\in\mathbb{R}. (5.26)

We remark that the invertibility of G(k,0)G(k,0) for k{0}k\in\mathbb{R}\setminus\{0\} is assured by Theorem 2.3(c) and that (5.26) holds also at k=0k=0 as a consequence of the continuity of 𝐒(k)\mathbf{S}(k) in k.k\in\mathbb{R}. From (3.9) we have

G(k,0)1G(k,0)=JS(k)JQ,k,G(k,0)^{-1}\,G(-k,0)=J\,S(k)\,J\,Q,\qquad k\in\mathbb{R}, (5.27)

where we recall that S(k)S(k) is the full-line scattering matrix in (2.9) and QQ is the constant matrix in (2.51). Using (5.27) in (5.26) we get

𝐒(k)=J[JS(k)JQ]J,k.\mathbf{S}(k)=-J\left[J\,S(k)\,J\,Q\right]J,\qquad k\in\mathbb{R}. (5.28)

Using J2=IJ^{2}=I and JQ=QJJQ=-QJ in (5.28), we obtain (5.20). Then, we get (5.21) by using (2.9) in (5.20). Thus, the proof of (b) is complete. Let us now prove (c). Using (5.19) and (5.21) in (5.17), we obtain

𝚿(k,x)=[fl(k,x)+fl(k,x)R(k)fl(k,x)Tl(k)fr(k,x)Tr(k)fr(k,x)+fr(k,x)L(k)].\mathbf{\Psi}(k,x)=\begin{bmatrix}f_{\rm{l}}(-k,x)+f_{\rm{l}}(k,x)\,R(k)&f_{\rm{l}}(k,x)\,T_{\rm{l}}(k)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr f_{\rm{r}}(k,-x)\,T_{\rm{r}}(k)&f_{\rm{r}}(-k,-x)+f_{\rm{r}}(k,-x)\,L(k)\end{bmatrix}. (5.29)

The use of (3.3) and (3.4) in (5.29) yields

𝚿(k,x)=[fr(k,x)Tr(k)fl(k,x)Tl(k)fr(k,x)Tr(k)fl(k,x)Tl(k)].\mathbf{\Psi}(k,x)=\begin{bmatrix}f_{\rm{r}}(k,x)\,T_{\rm{r}}(k)&f_{\rm{l}}(k,x)\,T_{\rm{l}}(k)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr f_{\rm{r}}(k,-x)\,T_{\rm{r}}(k)&f_{\rm{l}}(k,-x)\,T_{\rm{l}}(k)\end{bmatrix}. (5.30)

Then, using (2.10) on the right-hand side of (5.30), we obtain (5.22), which completes the proof of (c). We now turn to the proof of (d). In (5.15) we use (5.13), (5.19), and the xx-derivative of (5.19). This gives us (5.23). Finally, by postmultiplying both sides of (5.24) with the respective sides of (5.23), one can verify that 𝐉(k)𝐉(k)1=𝐈.\mathbf{J}(k)\,\mathbf{J}(k)^{-1}=\mathbf{I}. In the simplification of the left-hand side of the last equality, one uses (2.42), (2.44), (2.46), and (2.47). Thus, the proof of (d) is complete. ∎

We recall that, as (5.10) indicates, the full-line and half-line Hamiltonians HVH_{V} and HA,B,𝐕,H_{A,B,\mathbf{V}}, respectively, are unitarily equivalent. However, as seen from (5.20), the corresponding full-line and half-line scattering matrices S(k)S(k) and 𝐒(k),\mathbf{S}(k), respectively, are not unitarily equivalent. For an elaboration on this issue, we refer the reader to [13].

Let us also mention that it is possible to establish (5.24) without the direct verification used in the proof of Theorem 5.1. This can be accomplished as follows. Comparing (2.18) and (5.23), we see that

𝐉(k)=JG(k,0),k+¯.\mathbf{J}(k)=-J\,G(-k^{\ast},0)^{\dagger},\qquad k\in\overline{\mathbb{C}^{+}}. (5.31)

By taking the matrix inverse of both sides of (5.31), we obtain

𝐉(k)1=[G(k,0)1]J,k+¯{0}.\mathbf{J}(k)^{-1}=-\left[G(-k^{\ast},0)^{-1}\right]^{\dagger}J,\qquad k\in\overline{\mathbb{C}^{+}}\setminus\{0\}. (5.32)

With the help of (2.18), (2.51), and (3.10), we can write (2.56) as

G(k,x)1=12ik[Tl(k)00Tr(k)]JQG(k,x)QJ,k+¯{0}.G(k,x)^{-1}=-\displaystyle\frac{1}{2ik}\,\begin{bmatrix}T_{\rm{l}}(k)&0\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0&T_{\rm{r}}(k)\end{bmatrix}\,J\,Q\,G(-k^{\ast},x)^{\dagger}\,Q\,J,\qquad k\in\overline{\mathbb{C}^{+}}\setminus\{0\}. (5.33)

From (5.33), for k+¯{0}k\in\overline{\mathbb{C}^{+}}\setminus\{0\} we get

[G(k,x)1]=12ikJQG(k,x)QJ[Tl(k)00Tr(k)].\left[G(-k^{\ast},x)^{-1}\right]^{\dagger}=-\displaystyle\frac{1}{2ik}\,J\,Q\,G(k,x)\,Q\,J\,\begin{bmatrix}T_{\rm{l}}(-k^{\ast})^{\dagger}&0\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0&T_{\rm{r}}(-k^{\ast})^{\dagger}\end{bmatrix}. (5.34)

Using (4.1) in the last matrix factor on the right-hand side of (5.34), we can write (5.34) as

[G(k,x)1]=12ikJQG(k,x)QJ[Tr(k)00Tl(k)],k+¯{0}.\left[G(-k^{\ast},x)^{-1}\right]^{\dagger}=-\displaystyle\frac{1}{2ik}\,J\,Q\,G(k,x)\,Q\,J\,\begin{bmatrix}T_{\rm{r}}(k)&0\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0&T_{\rm{l}}(k)\end{bmatrix},\qquad k\in\overline{\mathbb{C}^{+}}\setminus\{0\}. (5.35)

Finally, using (5.35) on the right-hand side of (5.32), we obtain

𝐉(k)1=12ikJQG(k,0)Q[Tr(k)00Tl(k)],k+¯{0},\mathbf{J}(k)^{-1}=\displaystyle\frac{1}{2ik}\,J\,Q\,G(k,0)\,Q\,\begin{bmatrix}T_{\rm{r}}(k)&0\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0&T_{\rm{l}}(k)\end{bmatrix},\qquad k\in\overline{\mathbb{C}^{+}}\setminus\{0\}, (5.36)

which is equivalent to (5.24).

In the next theorem, we relate the determinant of the half-line 2n×2n2n\times 2n Jost matrix 𝐉(k)\mathbf{J}(k) to the determinant of the full-line n×nn\times n matrix-valued transmission coefficient Tl(k).T_{\rm{l}}(k).

Theorem 5.2.

Consider the full-line matrix Schrödinger equation (1.1) with the n×nn\times n matrix potential VV satisfying (1.2) and (1.3). Assume that the corresponding full-line Hamiltonian HVH_{V} is unitarily connected to the half-line Hamiltonian HA,B,𝐕H_{A,B,\mathbf{V}} as in (5.10) by relating the half-line 2n×2n2n\times 2n matrix potential 𝐕\mathbf{V} to VV as in (1.4) and (5.9) and by choosing the boundary matrices AA and BB as in (5.13). Then, the determinant of the half-line 2n×2n2n\times 2n Jost matrix 𝐉(k)\mathbf{J}(k) defined in (5.15) is related to the determinant of the corresponding full-line n×nn\times n matrix-valued transmission coefficient Tl(k)T_{\rm{l}}(k) appearing in (2.5) as

det[𝐉(k)]=(2ik)ndet[Tl(k)],k+¯.\det[\mathbf{J}(k)]=\displaystyle\frac{(2ik)^{n}}{\det[T_{\rm{l}}(k)]},\qquad k\in\overline{\mathbb{C}^{+}}. (5.37)
Proof.

By taking the determinants of both sides of (5.36), we obtain

1det[𝐉(k)]=(1)n(2ik)2ndet[G(k,0)]det[Tl(k)]det[Tr(k)],k+¯{0},\displaystyle\frac{1}{\det[\mathbf{J}(k)]}=\displaystyle\frac{(-1)^{n}}{(2ik)^{2n}}\,\det[G(k,0)]\,\det[T_{\rm{l}}(k)]\,\det[T_{\rm{r}}(k)],\qquad k\in\overline{\mathbb{C}^{+}}\setminus\{0\}, (5.38)

where we have used Q2=𝐈Q^{2}=\mathbf{I} and det[J]=(1)n,\det[J]=(-1)^{n}, which follow from (2.51) and (3.10), respectively. Using (2.21) in (5.38), we see that (5.37) holds. ∎
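The relations established in Theorems 5.1 and 5.2 can be tested directly in the simplest setting. The following is a minimal numerical sketch in Python, assuming the free case V≡0 with n=1, for which the standard expressions f_l(k,x)=e^{ikx}, f_r(k,x)=e^{-ikx}, T_l(k)=T_r(k)=1, L(k)=R(k)=0, and S(k)=I hold, and for which Q is the 2×2 matrix of (2.51) that interchanges the two blocks; these explicit free-case formulas are not restated in this section and are used here as assumptions. The sketch builds 𝐉(k) from (5.23) and checks det[𝐉(k)]=(2ik)^n as in (5.37), as well as 𝐒(k)=−𝐉(−k)𝐉(k)^{-1}=S(k)Q from (5.16) and (5.20).

import numpy as np

k = 1.3  # any fixed nonzero real wavenumber

# Free case, n = 1 (assumed standard formulas): f_l(k,x) = e^{ikx}, f_r(k,x) = e^{-ikx}.
def fl(k, x):  return np.exp(1j * k * x)
def flp(k, x): return 1j * k * np.exp(1j * k * x)
def fr(k, x):  return np.exp(-1j * k * x)
def frp(k, x): return -1j * k * np.exp(-1j * k * x)

def J(k):
    # Half-line Jost matrix (5.23); for real k we have -k* = -k.
    return np.array([[-np.conj(fl(-k, 0)), -np.conj(flp(-k, 0))],
                     [ np.conj(fr(-k, 0)),  np.conj(frp(-k, 0))]])

Q = np.array([[0.0, 1.0], [1.0, 0.0]])   # the constant matrix of (2.51) for n = 1
S_full = np.eye(2)                       # full-line scattering matrix S(k) for V = 0

S_half = -J(-k) @ np.linalg.inv(J(k))    # half-line scattering matrix (5.16)

print(np.allclose(np.linalg.det(J(k)), 2j * k))  # (5.37) with det T_l = 1: prints True
print(np.allclose(S_half, S_full @ Q))           # (5.20): prints True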

When the potential 𝐕\mathbf{V} satisfies (5.2) and (5.3), from Theorems 3.11.1 and 3.11.6 of [6] we know that the zeros of det[𝐉(k)]\det[\mathbf{J}(k)] in +¯{0}\overline{\mathbb{C}^{+}}\setminus\{0\} can only occur on the positive imaginary axis and the number of such zeros is finite. Assume that the full-line Hamiltonian HVH_{V} and the half-line Hamiltonian HA,B,𝐕H_{A,B,\mathbf{V}} are connected to each other through the unitary transformation 𝐔\mathbf{U} as described in (5.10), where VV and 𝐕\mathbf{V} are related as in (1.4) and (5.9) and the boundary matrices are chosen as in (5.13). We then have the following important consequence. The number of eigenvalues of HVH_{V} coincides with the number of eigenvalues of HA,B,𝐕,H_{A,B,\mathbf{V}}, and the multiplicities of the corresponding eigenvalues also coincide. The eigenvalues of HA,B,𝐕H_{A,B,\mathbf{V}} occur at the kk-values on the positive imaginary axis in the complex kk-plane where det[𝐉(k)]\det[\mathbf{J}(k)] vanishes, and the multiplicity of each of those eigenvalues is equal to the order of the corresponding zero of det[𝐉(k)].\det[\mathbf{J}(k)]. Thus, as seen from (5.37) the eigenvalues of HVH_{V} occur on the positive imaginary axis in the complex kk-plane where det[Tl(k)]\det[T_{\rm{l}}(k)] has poles, and the multiplicity of each of those eigenvalues is equal to the order of the corresponding pole of det[Tl(k)].\det[T_{\rm{l}}(k)]. Let us use NN to denote the number of such poles without counting their multiplicities, and let us assume that those poles occur at k=iκjk=i\kappa_{j} for 1jN.1\leq j\leq N. We use mjm_{j} to denote the multiplicity of the pole at k=iκj.k=i\kappa_{j}. The nonnegative integer NN corresponds to the number of distinct eigenvalues κj2-\kappa_{j}^{2} of the corresponding full-line Schrödinger operator HVH_{V} associated with (1.1). If N=0,N=0, then there are no eigenvalues. Let us use 𝒩\mathcal{N} to denote the number of eigenvalues including the multiplicities. Hence, 𝒩\mathcal{N} is related to NN as

𝒩:=j=1Nmj.\mathcal{N}:=\displaystyle\sum_{j=1}^{N}m_{j}. (5.39)

From (5.37) it is seen that the zeros of the determinant of the corresponding Jost matrix 𝐉(k)\mathbf{J}(k) occur at k=iκjk=i\kappa_{j} with multiplicity mjm_{j} for 1jN.1\leq j\leq N. It is known [6] that 𝐉(k)\mathbf{J}(k) is analytic in +\mathbb{C}^{+} and continuous in +¯.\overline{\mathbb{C}^{+}}. Thus, from (5.37) we also see that det[Tl(k)]\det[T_{\rm{l}}(k)] cannot vanish in +¯{0}.\overline{\mathbb{C}^{+}}\setminus\{0\}.

The unitary equivalence given in (5.10) between the half-line matrix Schrödinger operator HA,B,𝐕H_{A,B,\mathbf{V}} and the full-line matrix Schrödinger operator HVH_{V} has other important consequences. Let us comment on the connection between the half-line and full-line cases when k=0.k=0. From Corollary 3.8.16 of [6] it follows that

det[𝐉(k)]=c1kμ[1+o(1)],k0 in +¯,\det[\mathbf{J}(k)]=c_{1}\,k^{\mu}\left[1+o(1)\right],\qquad k\to 0\text{\rm{ in }}\overline{\mathbb{C}^{+}}, (5.40)

where μ\mu is the geometric multiplicity of the zero eigenvalue of 𝐉(0)\mathbf{J}(0) and c1c_{1} is a nonzero constant. Furthermore, from Proposition 3.8.18 of [6] we know that μ\mu coincides with the geometric and algebraic multiplicity of the eigenvalue +1+1 of 𝐒(0).\mathbf{S}(0). In fact, the nonnegative integer μ\mu is related to (5.1) when k=0,k=0, i.e. related to the zero-energy Schrödinger equation given by

ϕ′′+𝐕(x)ϕ=0,x+.-\phi^{\prime\prime}+\mathbf{V}(x)\,\phi=0,\qquad x\in\mathbb{R}^{+}. (5.41)

When the 2n×2n2n\times 2n matrix potential 𝐕\mathbf{V} satisfies (5.2) and (5.3), from (3.2.157) and Proposition 3.2.6 of [6] it follows that, among any fundamental set of 4n4n linearly independent column-vector solutions to (5.41), exactly 2n2n of them are bounded and 2n2n are unbounded. In fact, the 2n2n columns of the zero-energy Jost solution 𝐟(0,x)\mathbf{f}(0,x) make up the 2n2n linearly independent bounded solutions to (5.41). Certainly, not all of those 2n2n column-vector solutions necessarily satisfy the selfadjoint boundary condition (5.4) with the 2n×2n2n\times 2n boundary matrices AA and BB chosen as in (5.5). From Remark 3.8.10 of [6] it is known that μ\mu coincides with the number of linearly independent bounded solutions to (5.41) satisfying the boundary condition (5.4) and that we have

0μ2n.0\leq\mu\leq 2n. (5.42)

From (5.37) and (5.40) we obtain

det[Tl(k)]=1c1(2i)nknμ[1+o(1)],k0 in +¯.\det[T_{\rm{l}}(k)]=\frac{1}{c_{1}}\,(2i)^{n}\,k^{n-\mu}\left[1+o(1)\right],\qquad k\to 0\text{\rm{ in }}\overline{\mathbb{C}^{+}}. (5.43)

We have the following additional remarks. Let us consider (1.1) when k=0,k=0, i.e. let us consider the full-line zero-energy Schrödinger equation

ψ′′+V(x)ψ=0,x,-\psi^{\prime\prime}+V(x)\,\psi=0,\qquad x\in\mathbb{R}, (5.44)

where VV is the n×nn\times n matrix potential appearing in (1.1) and satisfying (1.2) and (1.3). We recall that [4], for the full-line Schrödinger equation in the scalar case, i.e. when n=1,n=1, the generic case occurs when there are no bounded solutions to (5.44) and that the exceptional case occurs when there exists a bounded solution to (5.44). In the n×nn\times n matrix case, let us assume that (5.44) has ν\nu linearly independent bounded solutions. It is known [5] that

0νn.0\leq\nu\leq n. (5.45)

We can call ν\nu the degree of exceptionality. From the unitary connection (5.10) it follows that the number of linearly independent bounded solutions to (5.44) is equal to the number of linearly independent bounded solutions to (5.41). Thus, we have

μ=ν.\mu=\nu. (5.46)

As a result, we can write (5.43) also as

det[Tl(k)]=1c1(2i)nknν[1+o(1)],k0 in +¯.\det[T_{\rm{l}}(k)]=\frac{1}{c_{1}}\,(2i)^{n}\,k^{n-\nu}\left[1+o(1)\right],\qquad k\to 0\text{\rm{ in }}\overline{\mathbb{C}^{+}}. (5.47)

From (5.42), (5.45), and (5.46) we conclude that

0μn.0\leq\mu\leq n. (5.48)

The restriction from (5.42) to (5.48), when we have the unitary equivalence (5.10) between the half-line matrix Schrödinger operator HA,B,𝐕H_{A,B,\mathbf{V}} and the full-line matrix Schrödinger operator HV,H_{V}, can be made plausible as follows. As seen from (5.9), the 2n×2n2n\times 2n matrix potential 𝐕(x)\mathbf{V}(x) is a block diagonal matrix consisting of the two n×nn\times n square matrices V2(x)V_{2}(x) and V1(x)V_{1}(-x) for x+.x\in\mathbb{R}^{+}. Thus, we can decompose any column-vector solution ϕ(x)\phi(x) to (5.41) with 2n2n components into two column vectors each having nn components as

ϕ(x)=[ϕ2(x)ϕ1(x)],x+.\phi(x)=\begin{bmatrix}\phi_{2}(x)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\phi_{1}(x)\end{bmatrix},\qquad x\in\mathbb{R}^{+}. (5.49)

Consequently, we can decompose the 2n×2n2n\times 2n matrix system (5.41) into the two n×nn\times n matrix subsystems

{ϕ2′′+V2(x)ϕ2=0,x+,ϕ1′′+V1(x)ϕ1=0,x+,\begin{cases}-\phi_{2}^{\prime\prime}+V_{2}(x)\,\phi_{2}=0,\qquad x\in\mathbb{R}^{+},\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr-\phi_{1}^{\prime\prime}+V_{1}(-x)\,\phi_{1}=0,\qquad x\in\mathbb{R}^{+},\end{cases} (5.50)

in such a way that the two n×nn\times n subsystems given in the first and second lines, respectively, in (5.50) are uncoupled from each other. We now look for n×nn\times n matrix solutions to each of the two n×nn\times n subsystems in (5.50). Let us first solve the first subsystem in (5.50). From (3.2.157) and Proposition 3.2.6 of [6], we know that there are nn linearly independent bounded solutions to the first subsystem in (5.50). With the help of (5.49) and the boundary matrices AA and BB in (5.13), we see that the boundary condition (5.4) is expressed as two n×nn\times n systems as

ϕ1(0)=ϕ2(0),ϕ1(0)=ϕ2(0).\phi_{1}(0)=\phi_{2}(0),\quad\phi^{\prime}_{1}(0)=-\phi^{\prime}_{2}(0). (5.51)

Thus, once we have ϕ2(x)\phi_{2}(x) at hand, the initial values for ϕ1(0)\phi_{1}(0) and ϕ1(0)\phi^{\prime}_{1}(0) are uniquely determined from (5.51). Then, we solve the second subsystem in (5.50) with the already determined initial conditions given in (5.51). Since the first subsystem in (5.50) can have at most nn linearly independent bounded solutions ϕ2(x)\phi_{2}(x) and we have ϕ1(x)\phi_{1}(x) uniquely determined using ϕ2(x),\phi_{2}(x), from the decomposition (5.49) we conclude that we can have at most nn linearly independent bounded column-vector solutions ϕ(x)\phi(x) to the 2n×2n2n\times 2n system (5.41).

In (3.13) we have expressed the determinant of the scattering matrix S(k)S(k) for the full-line Schrödinger equation (1.1) in terms of the determinant of the left transmission coefficient Tl(k).T_{\rm{l}}(k). When the full-line Hamiltonian HVH_{V} and the half-line Hamiltonian HA,B,𝐕H_{A,B,\mathbf{V}} are related to each other unitarily as in (5.10), in the next theorem we relate the determinant of the corresponding half-line scattering matrix 𝐒(k)\mathbf{S}(k) to det[S(k)].\det[S(k)].

Theorem 5.3.

Consider the full-line matrix Schrödinger equation (1.1) with the n×nn\times n matrix potential VV satisfying (1.2) and (1.3). Assume that the corresponding full-line Hamiltonian HVH_{V} and the half-line Hamiltonian HA,B,𝐕H_{A,B,\mathbf{V}} are unitarily connected as in (5.10) by relating the half-line 2n×2n2n\times 2n matrix potential 𝐕\mathbf{V} to VV as in (1.4) and (5.9) and by choosing the boundary matrices AA and BB as in (5.13). Then, the determinant of the half-line scattering matrix 𝐒(k)\mathbf{S}(k) defined in (5.16) is related to the determinant of the full-line scattering matrix S(k)S(k) defined in (2.9) as

det[𝐒(k)]=(1)ndet[S(k)],k,\det\left[\mathbf{S}(k)\right]=(-1)^{n}\det\left[S(k)\right],\qquad k\in\mathbb{R}, (5.52)

and hence we have

det[𝐒(k)]=(1)ndet[Tl(k)](det[Tl(k)]),k,\det\left[\mathbf{S}(k)\right]=(-1)^{n}\displaystyle\frac{\det[T_{\rm{l}}(k)]}{\left(\det[T_{\rm{l}}(k)]\right)^{*}},\qquad k\in\mathbb{R}, (5.53)

where we recall that Tl(k)T_{\rm{l}}(k) is the n×nn\times n matrix-valued left transmission coefficient appearing in (2.5).

Proof.

The scattering matrices 𝐒(k)\mathbf{S}(k) and S(k)S(k) are related to each other as in (5.20), where QQ is the 2n×2n2n\times 2n constant matrix defined in (2.51). By interchanging the first and second row blocks in QQ we get the 2n×2n2n\times 2n identity matrix, and hence we have det[Q]=(1)n.\det[Q]=(-1)^{n}. Thus, by taking the determinant of both sides of (5.20), we obtain (5.52). Then, using (3.13) on the right-hand side of (5.52), we get (5.53). ∎

6 Levinson’s theorem

The main goal in this section is to prove Levinson’s theorem for the full-line matrix Schrödinger equation (1.1). In the scalar case, we recall that Levinson’s theorem connects the continuous spectrum and the discrete spectrum to each other, and it relates the number of discrete eigenvalues including the multiplicities to the change in the phase of the scattering matrix when the independent variable kk changes from k=0k=0 to k=+.k=+\infty. In the matrix case, in Levinson’s theorem one needs to use the phase of the determinant of the scattering matrix. In order to prove Levinson’s theorem for (1.1), we exploit the unitary transformation given in (5.10) relating the half-line and full-line matrix Schrödinger operators and we use Levinson’s theorem presented in Theorem 3.12.3 of [6] for the half-line matrix Schrödinger operator. We also provide an independent proof with the help of the argument principle applied to the determinant of the matrix-valued left transmission coefficient appearing in (2.5).

In preparation for the proof of Levinson’s theorem for (1.1), in the next theorem we relate the large kk-asymptotics of the half-line scattering matrix 𝐒(k)\mathbf{S}(k) to the full-line matrix potential VV in (1.1). Recall that the half-line potential 𝐕\mathbf{V} is related to the full-line potential VV as in (5.9), where V1V_{1} and V2V_{2} are the fragments of VV appearing in (1.4) and (1.5). We also recall that the boundary matrices AA and BB appearing in (5.13) are used in the boundary condition (5.4) in the formation of the half-line scattering matrix 𝐒(k).\mathbf{S}(k).

Theorem 6.1.

Consider the full-line matrix Schrödinger equation (1.1) with the n×nn\times n matrix potential VV satisfying (1.2) and (1.3). Assume that the corresponding full-line Hamiltonian HVH_{V} and the half-line Hamiltonian HA,B,𝐕H_{A,B,\mathbf{V}} are unitarily connected by relating the half-line 2n×2n2n\times 2n matrix potential 𝐕\mathbf{V} to VV as in (1.4) and (5.9) and by choosing the 2n×2n2n\times 2n boundary matrices AA and BB as in (5.13). Then, the half-line scattering matrix 𝐒(k)\mathbf{S}(k) in (5.16) has the large kk-asymptotics given by

𝐒(k)=𝐒+𝐆1ik+o(1k),k± in ,\mathbf{S}(k)=\mathbf{S}_{\infty}+\displaystyle\frac{\mathbf{G}_{1}}{ik}+o\left(\displaystyle\frac{1}{k}\right),\qquad k\to\pm\infty\text{ \rm{in} }\mathbb{R}, (6.1)

where we have

𝐒=Q,𝐆1=12[0𝑑xV(x)𝑑xV(x)0],\mathbf{S}_{\infty}=Q,\quad\mathbf{G}_{1}=\displaystyle\frac{1}{2}\begin{bmatrix}0&\int_{-\infty}^{\infty}dx\,V(x)\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\int_{-\infty}^{\infty}dx\,V(x)&0\end{bmatrix}, (6.2)

with QQ being the constant 2n×2n2n\times 2n matrix defined in (2.51).

Proof.

We already know that the half-line scattering matrix 𝐒(k)\mathbf{S}(k) is related to the full-line scattering coefficients as in (5.21). With the help of (4.2)–(4.5) expressing the large kk-asymptotics of the scattering coefficients, from (5.21) we obtain (6.1). ∎
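Written out in terms of the individual blocks in (5.21), the expansion (6.1)–(6.2) states that, as k→±∞ in ℝ,

T_{\rm{l}}(k)=I+\displaystyle\frac{1}{2ik}\int_{-\infty}^{\infty}dx\,V(x)+o\left(\displaystyle\frac{1}{k}\right),\qquad T_{\rm{r}}(k)=I+\displaystyle\frac{1}{2ik}\int_{-\infty}^{\infty}dx\,V(x)+o\left(\displaystyle\frac{1}{k}\right),

L(k)=o\left(\displaystyle\frac{1}{k}\right),\qquad R(k)=o\left(\displaystyle\frac{1}{k}\right),

which matches the large k-asymptotics of the scattering coefficients invoked from (4.2)–(4.5) in the proof above.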

Remark 6.2.

We have the following comments related to the first equality in (6.2). One can directly evaluate the eigenvalues of the matrix QQ defined in (2.51) and confirm that it has the eigenvalue 1-1 with multiplicity nn and has the eigenvalue +1+1 with multiplicity n.n. By Theorem 3.10.6 of [6] we know that the number of Dirichlet boundary conditions associated with the boundary matrices A~\tilde{A} and B~\tilde{B} in (5.13) is equal to the (algebraic and geometric) multiplicity of the eigenvalue 1-1 of the matrix 𝐒,\mathbf{S}_{\infty}, and that the number of non-Dirichlet boundary conditions is equal to the (algebraic and geometric) multiplicity of the eigenvalue +1+1 of the matrix 𝐒.\mathbf{S}_{\infty}. Hence, the first equality in (6.2) implies that the boundary matrices A~\tilde{A} and B~\tilde{B} in (5.13) correspond to nn Dirichlet and nn non-Dirichlet boundary conditions. In fact, it is already proved in Proposition 5.9 of [13] with a different method that the number of Dirichlet boundary conditions is equal to n,n, and that the aforementioned nn non-Dirichlet boundary conditions are actually all Neumann boundary conditions.

Next, we provide a review of some relevant facts on the small kk-asymptotics related to the full-line matrix Schrödinger equation (1.1) with the n×nn\times n matrix potential VV satisfying (1.2) and (1.3). We refer the reader to [5] for an elaborate analysis including the proofs and further details.

Let us consider the full-line n×nn\times n matrix-valued zero-energy Schrödinger equation given in (5.44) when the n×nn\times n matrix potential VV satisfies (1.2) and (1.3). There are two n×nn\times n matrix-valued solutions to (5.44) whose 2n2n columns form a fundamental set. We use ϕl(x)\phi_{\rm{l}}(x) to denote one of those two n×nn\times n matrix-valued solutions, where ϕl(x)\phi_{\rm{l}}(x) satisfies the asymptotic conditions

ϕl(x)=x[I+o(1)],ϕl(x)=I+o(1),x+.\phi_{\rm{l}}(x)=x\left[I+o(1)\right],\quad\phi^{\prime}_{\rm{l}}(x)=I+o(1),\qquad x\to+\infty.

Thus, all the nn columns of ϕl(x)\phi_{\rm{l}}(x) correspond to unbounded solutions to (5.44). The other n×nn\times n matrix-valued solution to (5.44) is given by fl(0,x),f_{\rm{l}}(0,x), where fl(k,x)f_{\rm{l}}(k,x) is the left Jost solution to (1.1) appearing in (2.1). As seen from (2.1), the function fl(0,x)f_{\rm{l}}(0,x) satisfies the asymptotics

fl(0,x)=I+o(1),fl(0,x)=o(1),x+,f_{\rm{l}}(0,x)=I+o(1),\quad f^{\prime}_{\rm{l}}(0,x)=o(1),\qquad x\to+\infty,

and hence fl(0,x)f_{\rm{l}}(0,x) remains bounded as x+.x\to+\infty. On the other hand, some or all the nn columns of fl(0,x)f_{\rm{l}}(0,x) may be unbounded as x.x\to-\infty. In fact, we have [5]

fl(0,x)=x[Δl+o(1)],fl(0,x)=Δl+o(1),x,f_{\rm{l}}(0,x)=x\left[\Delta_{\rm{l}}+o(1)\right],\quad f^{\prime}_{\rm{l}}(0,x)=\Delta_{\rm{l}}+o(1),\qquad x\to-\infty,

where Δl\Delta_{\rm{l}} is the n×nn\times n constant matrix defined as

Δl:=limk02ikTl(k)1,\Delta_{\rm{l}}:=\displaystyle\lim_{k\to 0}2ik\,T_{\rm{l}}(k)^{-1}, (6.3)

with the limit taken from within +¯.\overline{\mathbb{C}^{+}}.

From Section 5, we recall that the degree of exceptionality, denoted by ν,\nu, is defined as the number of linearly independent bounded column-vector solutions to (5.44), and we know that ν\nu satisfies (5.45). From (5.45) we see that we have the purely generic case for (1.1) when ν=0\nu=0 and we have the purely exceptional case when ν=n.\nu=n. Hence, nνn-\nu corresponds to the degree of genericity for (1.1). Since the value of ν\nu is uniquely determined by only the potential VV in (1.1), we can also say that the n×nn\times n matrix potential VV is exceptional with degree ν.\nu.

We can characterize the value of ν\nu in various different ways. For example, ν\nu corresponds to the geometric multiplicity of the zero eigenvalue of the n×nn\times n matrix Δl\Delta_{\rm{l}} defined in (6.3). The value of ν\nu is equal to the nullity of the matrix Δl.\Delta_{\rm{l}}. It is also equal to the number of linearly independent bounded columns of the zero-energy left Jost solution fl(0,x).f_{\rm{l}}(0,x). Thus, the remaining nνn-\nu columns of fl(0,x)f_{\rm{l}}(0,x) are all unbounded solutions to (5.44). Hence, (5.44) has 2nν2n-\nu linearly independent unbounded column-vector solutions and it has ν\nu linearly independent bounded column-vector solutions.

The degree of exceptionality ν\nu can also be related to the zero-energy right Jost solution fr(0,x)f_{\rm{r}}(0,x) to (5.44). As we see from (2.2), the function fr(0,x)f_{\rm{r}}(0,x) satisfies the asymptotics

fr(0,x)=I+o(1),fr(0,x)=o(1),x,f_{\rm{r}}(0,x)=I+o(1),\quad f^{\prime}_{\rm{r}}(0,x)=o(1),\qquad x\to-\infty, (6.4)

and hence fr(0,x)f_{\rm{r}}(0,x) remains bounded as x.x\to-\infty. On the other hand, we have

fr(0,x)=x[Δr+o(1)],fr(0,x)=Δr+o(1),x+,f_{\rm{r}}(0,x)=-x\left[\Delta_{\rm{r}}+o(1)\right],\quad f^{\prime}_{\rm{r}}(0,x)=-\Delta_{\rm{r}}+o(1),\qquad x\to+\infty, (6.5)

where Δr\Delta_{\rm{r}} is the constant n×nn\times n matrix defined as

Δr:=limk02ikTr(k)1,\Delta_{\rm{r}}:=\displaystyle\lim_{k\to 0}2ik\,T_{\rm{r}}(k)^{-1}, (6.6)

with the limit taken from within +¯.\overline{\mathbb{C}^{+}}. From (4.1) it follows that the n×nn\times n matrices Δl\Delta_{\rm{l}} and Δr\Delta_{\rm{r}} satisfy

Δr=Δl.\Delta_{\rm{r}}=\Delta_{\rm{l}}^{\dagger}. (6.7)

With the help of (6.4)–(6.7) we observe that the degree of exceptionality ν\nu is equal to the geometric multiplicity of the zero eigenvalue of the n×nn\times n matrix Δr,\Delta_{\rm{r}}, to the nullity of Δr,\Delta_{\rm{r}}, and also to the number of linearly independent bounded columns of fr(0,x).f_{\rm{r}}(0,x).

One may ask whether the degree of exceptionality for the full-line potential VV can be determined from the degrees of exceptionality for its fragments. The answer is no unless all the fragments are purely exceptional, in which case the full-line potential must also be exceptional. This answer can be obtained by first considering the scalar case and then by generalizing it to the matrix case by arguing with a diagonal matrix potential. We refer the reader to [3] for an explicit example in the scalar case, where it is demonstrated that the potential VV may be generic or exceptional if at least one of the fragments is generic. In the next example, we illustrate this fact in a different way.

Example 6.3.

We recall that, in the full-line scalar case, if the potential VV in (1.1) is real valued and satisfies (1.3) then generically the transmission coefficient T(k)T(k) vanishes linearly as k0k\to 0 and we have T(0)0T(0)\neq 0 in the exceptional case. In the generic case we have L(0)=R(0)=1L(0)=R(0)=-1 and in the exceptional case we have |L(0)|=|R(0)|<1,|L(0)|=|R(0)|<1, where we use L(k)L(k) and R(k)R(k) for the left and right reflection coefficients. Hence, from (3.51) we observe that, in the scalar case, if the two fragments of a full-line potential are both exceptional then the potential itself must be exceptional. By using induction, the result can be proved to hold also in the case where the number of fragments is arbitrary. In this example, we demonstrate that if at least one of the fragments is generic, then the potential itself can be exceptional or generic. In the full-line scalar case, let us consider the square-well potential of width aa and depth b,b, where aa is a positive parameter and bb is a nonnegative parameter. Without any loss of generality, because the transmission coefficient is not affected by shifting the potential, we can assume that the potential VV is given by

V(x)={b,a2<x<a2,0,elsewhere.V(x)=\begin{cases}-b,\qquad-\displaystyle\frac{a}{2}<x<\displaystyle\frac{a}{2},\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0,\qquad\text{\rm{elsewhere}}.\end{cases} (6.8)

The transmission coefficient corresponding to the scalar potential VV in (6.8) is given by

T(k)=4kb+k2eikaq14+q15+q16,T(k)=\displaystyle\frac{4k\sqrt{b+k^{2}}\,e^{-ika}}{q_{14}+q_{15}+q_{16}}, (6.9)

where we have defined

q14:=[bk2+kb+k2]exp(iab+k2),q_{14}:=\left[-b-k^{2}+k\sqrt{b+k^{2}}\right]\exp\left(ia\sqrt{b+k^{2}}\right),
q15:=[k2+kb+k2]exp(iab+k2),q_{15}:=\left[-k^{2}+k\sqrt{b+k^{2}}\right]\exp\left(ia\sqrt{b+k^{2}}\right),
q16:=[b+2k2+2kb+k2]exp(iab+k2).q_{16}:=\left[b+2k^{2}+2k\sqrt{b+k^{2}}\right]\exp\left(-ia\sqrt{b+k^{2}}\right).

From (6.9) we obtain

1T(k)=b4k(eiabeiab)+O(1)=ibsin(ab)2k+O(1),k0.\displaystyle\frac{1}{T(k)}=\displaystyle\frac{\sqrt{b}}{4k}\left(e^{-ia\sqrt{b}}-e^{ia\sqrt{b}}\right)+O(1)=-\displaystyle\frac{i\sqrt{b}\,\sin(a\sqrt{b})}{2k}+O(1),\qquad k\to 0. (6.10)

Hence, (6.10) implies that the exceptional case occurs if and only if aba\sqrt{b} is an integer multiple of π,\pi, in which case T(k)T(k) does not vanish at k=0.k=0. Thus, the potential VV in (6.8) is exceptional if and only if the depth bb of the square-well potential is related to the width aa as

b=p2π2a2,b=\displaystyle\frac{p^{2}\pi^{2}}{a^{2}}, (6.11)

for some nonnegative integer p.p. For simplicity, let us use a=2a=2 and choose our potential fragments V1V_{1} and V2V_{2} as

V1(x)={b,1<x<c,0,elsewhere,V2(x)={b,c<x<1,0,elsewhere,V_{1}(x)=\begin{cases}-b,\qquad-1<x<c,\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0,\qquad\text{\rm{elsewhere}},\end{cases}\quad V_{2}(x)=\begin{cases}-b,\qquad c<x<1,\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0,\qquad\text{\rm{elsewhere}},\end{cases}

where cc is a parameter in [1,1][-1,1] so that the potential VV in (6.8) is given by

V(x)={b,1<x<1,0,elsewhere.V(x)=\begin{cases}-b,\qquad-1<x<1,\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr 0,\qquad\text{\rm{elsewhere}}.\end{cases} (6.12)

We remark that the zero potential is exceptional. With the help of (6.11), we conclude that V1V_{1} is exceptional if and only if there exists a nonnegative integer p1p_{1} satisfying

bπ=p11+c,\displaystyle\frac{\sqrt{b}}{\pi}=\displaystyle\frac{p_{1}}{1+c}, (6.13)

that V2V_{2} is exceptional if and only if there exists a nonnegative integer p2p_{2} satisfying

bπ=p21c,\displaystyle\frac{\sqrt{b}}{\pi}=\displaystyle\frac{p_{2}}{1-c}, (6.14)

and that VV is exceptional if and only if there exists a nonnegative integer pp satisfying

bπ=p2.\displaystyle\frac{\sqrt{b}}{\pi}=\displaystyle\frac{p}{2}. (6.15)

In fact, (6.15) happens if and only if p=p1+p2.p=p_{1}+p_{2}. From (6.13)–(6.15) we have the following conclusions:

  1. (a)

    If V1V_{1} and V2V_{2} are both exceptional, then VV is also exceptional. This follows from the restriction p=p1+p2p=p_{1}+p_{2}: if p1p_{1} and p2p_{2} are both nonnegative integers, then pp must also be a nonnegative integer. As argued earlier, if V1V_{1} and V2V_{2} are both exceptional then VV cannot be generic.

  2. (b)

    If the nonnegative integer p1p_{1} satisfying (6.13) exists but the nonnegative integer p2p_{2} satisfying (6.14) does not exist, then the nonnegative integer pp satisfying (6.15) cannot exist because of the restriction p=p1+p2.p=p_{1}+p_{2}. Thus, we can conclude that if V1V_{1} is exceptional then V2V_{2} and VV are either simultaneously exceptional or simultaneously generic. Similarly, we can conclude that if V2V_{2} is exceptional then V1V_{1} and VV are either simultaneously exceptional or simultaneously generic.

  3. (c)

    If both V1V_{1} and V2V_{2} are generic, then VV can be generic or exceptional. This can be seen easily by arguing that in the equation p=p1+p2,p=p_{1}+p_{2}, it may happen that p1+p2p_{1}+p_{2} is a nonnegative integer or not a nonnegative integer even though neither p1p_{1} nor p2p_{2} is a nonnegative integer.
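The dichotomy in this example can also be observed numerically. The following is a minimal sketch in Python that evaluates T(k) of (6.9) for the square-well potential (6.12) with a=2; the two sample depths are chosen via (6.11) and are assumptions made only for this illustration. For the exceptional choice b=π²/4 (so that a√b=π) the quantity |T(k)| approaches a nonzero limit as k→0⁺, whereas for the generic choice b=1 it vanishes linearly in k. The last lines also check the scalar reduction of (6.2), namely that 2ik[T(k)−1] approaches ∫V dx = −ab for large k.

import numpy as np

a = 2.0   # width of the square well in (6.12)

def T(k, b):
    # Transmission coefficient (6.9) for the square well (6.8) of width a and depth b.
    q = np.sqrt(b + k**2)
    q14 = (-b - k**2 + k*q) * np.exp(1j*a*q)
    q15 = (-k**2 + k*q) * np.exp(1j*a*q)
    q16 = (b + 2*k**2 + 2*k*q) * np.exp(-1j*a*q)
    return 4*k*q*np.exp(-1j*k*a) / (q14 + q15 + q16)

for b, label in [(np.pi**2/4, "exceptional (a*sqrt(b) = pi)"), (1.0, "generic")]:
    for k in [1e-2, 1e-3, 1e-4]:
        print(label, k, abs(T(k, b)))
# exceptional: |T(k)| tends to a nonzero limit;  generic: |T(k)| decreases proportionally to k

# Scalar reduction of (6.2): 2ik[T(k) - 1] -> integral of V = -a*b as k -> +infinity.
k_large = 1.0e4
print(2j*k_large*(T(k_large, 1.0) - 1.0), -a*1.0)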

Finally in this section, we consider Levinson’s theorem for (1.1). We refer the reader to Theorem 3.12.3 of [6] for Levinson’s theorem for the half-line matrix Schrödinger operator with the selfadjoint boundary condition. With the help of that theorem, we have the following result for the half-line matrix Schrödinger operator associated with (5.1) and (5.4). As mentioned in Remark 6.2, the result given in (6.17) in the next theorem has been proved in [13] by using a different method.

Theorem 6.4.

Consider the half-line matrix Schrödinger operator corresponding to (5.1) with the 2n×2n2n\times 2n matrix potential 𝐕\mathbf{V} satisfying (5.2) and (5.3), and with the selfadjoint boundary condition (5.4) with the boundary matrices AA and BB satisfying (5.5). Let 𝒩h\mathcal{N}_{\rm{h}} denote the number of eigenvalues including the multiplicities, 𝐒(k)\mathbf{S}(k) be the corresponding scattering matrix defined as in (5.16), and 𝐒\mathbf{S}_{\infty} be the constant matrix appearing in (6.1) and (6.2). Then, we have the following:

  1. (a)

    The nonnegative integer 𝒩h\mathcal{N}_{\rm{h}} is related to the argument of the determinant of 𝐒(k)\mathbf{S}(k) as

    arg[det[𝐒(0+)]]arg[det[𝐒]]=π[2𝒩h+μ2n+nD],\arg[\det[\mathbf{S}(0^{+})]]-\arg[\det[\mathbf{S}_{\infty}]]=\pi\left[2\,\mathcal{N}_{\rm{h}}+\mu-2\,n+n_{\text{\rm{D}}}\right], (6.16)

    where arg\arg is any single-valued branch of the argument function, the nonnegative integer μ\mu is the algebraic and geometric multiplicity of the eigenvalue +1+1 of 𝐒(0),\mathbf{S}(0), and the nonnegative integer nDn_{\text{\rm{D}}} is the algebraic and geometric multiplicity of the eigenvalue 1-1 of 𝐒.\mathbf{S}_{\infty}.

  2. (b)

    Assume further that the boundary matrices AA and BB appearing in (5.4) are chosen as in (5.13). Then, we have

    nD=n,n_{\text{\rm{D}}}=n, (6.17)

    and hence in that special case, (6.16) yields

    arg[det[𝐒(0+)]]arg[det[𝐒]]=π[2𝒩h+μn].\arg[\det[\mathbf{S}(0^{+})]]-\arg[\det[\mathbf{S}_{\infty}]]=\pi\left[2\,\mathcal{N}_{\rm{h}}+\mu-n\right]. (6.18)
Proof.

We remark that (6.16) directly follows from (3.12.15) of [6] by using the fact that the matrix potential 𝐕\mathbf{V} has size 2n×2n.2n\times 2n. Hence, the proof of (a) is complete. In order to prove (b), it is sufficient to prove (6.17). We can explicitly evaluate the large kk-asymptotics of 𝐒(k)\mathbf{S}(k) when AA and BB are chosen as in (5.13). From (3.10.37) of [6] we know that 𝐒\mathbf{S}_{\infty} is unchanged if the potential 𝐕\mathbf{V} is replaced by the zero potential, which is not surprising because the potential cannot affect the leading term in the large kk-asymptotics of the scattering matrix. Thus, 𝐒\mathbf{S}_{\infty} and its eigenvalues are solely determined by the boundary matrices AA and B.B. When the potential is zero, as seen from (3.7.2) of [6] we have

𝐒(k)=(B+ikA)(BikA)1.\mathbf{S}(k)=-(B+ikA)(B-ikA)^{-1}. (6.19)

When we use AA and BB given in (5.13), we get

BikA=[IikIIikI].B-ikA=\begin{bmatrix}-I&-ikI\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr I&-ikI\end{bmatrix}. (6.20)

Using (6.20) in (6.19) we evaluate the large kk-asymptotics of 𝐒(k)\mathbf{S}(k) given in (6.19) as

𝐒(k)=[IikIIikI](12[IIIikIik]),\mathbf{S}(k)=-\begin{bmatrix}-I&ikI\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr I&ikI\end{bmatrix}\left(-\displaystyle\frac{1}{2}\,\begin{bmatrix}I&-I\\ \vskip 6.0pt plus 2.0pt minus 2.0pt\cr\displaystyle\frac{I}{ik}&\displaystyle\frac{I}{ik}\end{bmatrix}\right),

which yields the exact result

𝐒(k)=Q,\mathbf{S}(k)=Q, (6.21)

where we recall that QQ is the 2n×2n2n\times 2n constant matrix defined in (2.51). Thus, we have 𝐒=Q.\mathbf{S}_{\infty}=Q. As indicated in Remark 6.2, the matrix QQ has eigenvalue 1-1 with multiplicity n.n. Hence, (6.17) holds, and the proof of (b) is complete. ∎
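For a quick numerical confirmation of (6.19)–(6.21), the boundary matrices can be read off from (6.20); in the case n=1 this gives A=[[0,1],[0,1]] and B=[[−1,0],[1,0]], which are taken here as an assumption inferred from (6.20) rather than restated from (5.13). The sketch below verifies that −(B+ikA)(B−ikA)^{-1} equals Q for several values of k, in agreement with the exact result (6.21).

import numpy as np

A = np.array([[0.0, 1.0], [0.0, 1.0]])   # read off from (6.20) for n = 1 (assumed)
B = np.array([[-1.0, 0.0], [1.0, 0.0]])
Q = np.array([[0.0, 1.0], [1.0, 0.0]])   # the constant matrix of (2.51) for n = 1

for k in [0.5, 1.0, 7.3]:
    S = -(B + 1j*k*A) @ np.linalg.inv(B - 1j*k*A)   # zero-potential scattering matrix (6.19)
    print(k, np.allclose(S, Q))                     # prints True: S(k) = Q exactly, as in (6.21)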

We remark that (6.21) also follows from the first equality in (6.2) of Theorem 6.1 by using the unitary transformation (5.10) between the half-line and full-line Schrödinger operators. However, we have established Theorem 6.4(b) without using that unitary transformation and without making any connection to the full-line Schrödinger equation (1.1).

Using (4.2)–(4.5) in (2.9), we see that the full-line scattering matrix S(k)S(k) satisfies

S=𝐈,S_{\infty}=\mathbf{I},

where we have defined

S:=limk±S(k).S_{\infty}:=\displaystyle\lim_{k\to\pm\infty}S(k).

In the next theorem we state and prove Levinson’s theorem for the full-line matrix-valued Schrödinger equation (1.1). As in Theorem 6.4, we again use arg\arg to denote any single-valued branch of the argument function.

Theorem 6.5.

Consider the full-line matrix Schrödinger operator corresponding to (1.1) with the n×nn\times n matrix potential VV satisfying (1.2) and (1.3). Let S(k)S(k) be the corresponding 2n×2n2n\times 2n scattering matrix defined in (2.9). Then, the corresponding number 𝒩\mathcal{N} of eigenvalues including the multiplicities, which appears in (5.39), is related to the argument of the determinant of S(k)S(k) as

arg[det[S(0+)]]arg[det[S]]=π[2𝒩n+ν],\arg[\det[S(0^{+})]]-\arg[\det[S_{\infty}]]=\pi\left[2\mathcal{N}-n+\nu\right], (6.22)

where we recall that the nonnegative integer ν\nu is the degree of exceptionality appearing in (5.45) and denoting the number of linearly independent bounded solutions to (5.44).

Proof.

Even though we can prove (6.22) independently without using any connection to the half-line 2n×2n2n\times 2n matrix Schrödinger equation (5.1), it is illuminating to prove it by exploiting the unitary connection (5.10) between the full-line Hamiltonian HVH_{V} and the half-line Hamiltonian HA,B,𝐕,H_{A,B,\mathbf{V}}, where the half-line 2n×2n2n\times 2n matrix potential 𝐕\mathbf{V} is related to VV as in (1.4) and (5.9) and the boundary matrices AA and BB are chosen as in (5.13). Then, from (5.52) it follows that the left-hand side of (6.22) is equal to the left-hand side of (6.18), i.e. we have

arg[det[S(0+)]]arg[det[S]]=arg[det[𝐒(0+)]]arg[det[𝐒]],\arg[\det[S(0^{+})]]-\arg[\det[S_{\infty}]]=\arg[\det[\mathbf{S}(0^{+})]]-\arg[\det[\mathbf{S}_{\infty}]], (6.23)

where 𝐒(k)\mathbf{S}(k) is the 2n×2n2n\times 2n scattering matrix corresponding to HA,B,𝐕.H_{A,B,\mathbf{V}}. Furthermore, because of (5.10) we know that both the number of eigenvalues and their multiplicities for HVH_{V} and HA,B,𝐕H_{A,B,\mathbf{V}} coincide, and hence we have

𝒩h=𝒩,\mathcal{N}_{\rm{h}}=\mathcal{N}, (6.24)

where we recall that 𝒩h\mathcal{N}_{\rm{h}} is the number of eigenvalues of HA,B,𝐕H_{A,B,\mathbf{V}} including the multiplicities. We also know that (5.46) holds, where μ\mu is the geometric and algebraic multiplicity of the eigenvalue +1+1 of 𝐒(0).\mathbf{S}(0). Thus, using (5.46) and (6.24) on the right-hand side of (6.18), with the help of (6.23) we obtain (6.22). Hence, the proof is complete. ∎

An independent proof of (6.22) without using the unitary connection (5.10) can be given by applying the argument principle to the determinant of Tl(k).T_{\rm{l}}(k). For this, we can proceed as follows. As indicated in Section 5, the Hamiltonian HVH_{V} has NN distinct eigenvalues κj2,-\kappa_{j}^{2}, each with multiplicity mjm_{j} for 1jN.1\leq j\leq N. In case N=0,N=0, there are no eigenvalues. The quantity det[Tl(k)]\det[T_{\rm{l}}(k)] has a meromorphic extension from kk\in\mathbb{R} to +\mathbb{C}^{+} such that the only poles there occur at k=iκjk=i\kappa_{j} with multiplicity mjm_{j} for 1jN.1\leq j\leq N. Furthermore, except for those poles, det[Tl(k)]\det[T_{\rm{l}}(k)] is continuous in +¯\overline{\mathbb{C}^{+}} and nonzero in +¯{0},\overline{\mathbb{C}^{+}}\setminus\{0\}, and it has a zero at k=0k=0 with order nν,n-\nu, as indicated in (5.47). We note that det[Tl(k)]\det[T_{\rm{l}}(k)] is nonzero at k=0k=0 when ν=n.\nu=n. To apply the argument principle, we choose our contour 𝒞ε,r\mathcal{C}_{\varepsilon,r} as the positively oriented closed curve given by

𝒞ε,r:=(r,ε)𝒞ε(ε,r)𝒞r.\mathcal{C}_{\varepsilon,r}:=(-r,-\varepsilon)\cup\mathcal{C}_{\varepsilon}\cup(\varepsilon,r)\cup\mathcal{C}_{r}. (6.25)

Note that the first piece (r,ε)(-r,-\varepsilon) on the right-hand side of (6.25) is the directed line segment on the real axis oriented from r-r to ε-\varepsilon for some small positive ε\varepsilon and some large positive rr. The second piece 𝒞ε\mathcal{C}_{\varepsilon} is the upper semicircle centered at the origin with radius ε\varepsilon and oriented from ε-\varepsilon to ε.\varepsilon. The third piece is the real line segment (ε,r)(\varepsilon,r) oriented from ε\varepsilon to r.r. Finally, the fourth piece 𝒞r\mathcal{C}_{r} is the upper semicircle centered at the origin with radius rr and oriented from rr to r.-r. From (4.2) we see that the argument of det[Tl(k)]\det[T_{\rm{l}}(k)] does not change along the piece 𝒞r\mathcal{C}_{r} when r+.r\to+\infty. In the limit ε0+\varepsilon\to 0^{+} and r+,r\to+\infty, the application of the argument principle to det[Tl(k)]\det[T_{\rm{l}}(k)] along the contour 𝒞ε,r\mathcal{C}_{\varepsilon,r} yields

arg[det[Tl(+)]]arg[det[Tl(0+)]]+arg[det[Tl(0)]]arg[det[Tl()]]=2π[nν2𝒩],\begin{split}\arg[\det[T_{\rm{l}}(+\infty)]]-&\arg[\det[T_{\rm{l}}(0^{+})]]+\arg[\det[T_{\rm{l}}(0^{-})]]\\ &-\arg[\det[T_{\rm{l}}(-\infty)]]=2\pi\left[\displaystyle\frac{n-\nu}{2}-\mathcal{N}\right],\end{split} (6.26)

where the factor 1/21/2 in the brackets on the right-hand side is due to the fact that we integrate along the semicircle 𝒞ε.\mathcal{C}_{\varepsilon}. From the first equality in (3.12), we conclude that

arg[det[Tl(0)]]arg[det[Tl()]]=arg[det[Tl(+)]]arg[det[Tl(0+)]].\arg[\det[T_{\rm{l}}(0^{-})]]-\arg[\det[T_{\rm{l}}(-\infty)]]=\arg[\det[T_{\rm{l}}(+\infty)]]-\arg[\det[T_{\rm{l}}(0^{+})]]. (6.27)

Using (6.27) in (6.26), we obtain

arg[det[Tl(+)]]arg[det[Tl(0+)]]=π[nν2𝒩].\arg[\det[T_{\rm{l}}(+\infty)]]-\arg[\det[T_{\rm{l}}(0^{+})]]=\pi\left[\displaystyle\frac{n-\nu}{2}-\mathcal{N}\right]. (6.28)

Finally, using (3.13) in (6.28) we get

arg[det[S]]arg[det[S(0+)]]=2π[nν2𝒩],\arg[\det[S_{\infty}]]-\arg[\det[S(0^{+})]]=2\pi\left[\displaystyle\frac{n-\nu}{2}-\mathcal{N}\right],

which is equivalent to (6.22).
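A concrete scalar illustration of (6.22) and (6.28) can be given with the square well (6.12) of Example 6.3. The following is a numerical sketch, not part of the proof, under the assumptions a=2 and b=1; since a√b=2 is not an integer multiple of π the potential is generic (ν=0), and since a√b<π the well supports exactly one bound state (𝒩=1). Then (6.28) predicts arg[det T_l(0⁺)]−arg[det T_l(+∞)]=π(𝒩−1/2)=π/2, and correspondingly (6.22) gives a change of π in the argument of det S. The code tracks a continuous branch of arg T(k) for the transmission coefficient (6.9) on a fine grid and compares the total change with π/2.

import numpy as np

a, b = 2.0, 1.0   # square well (6.12): generic case, with exactly one bound state (assumed)

def T(k):
    # Transmission coefficient (6.9) for the square well of width a and depth b.
    q = np.sqrt(b + k**2)
    den = ((-b - k**2 + k*q)*np.exp(1j*a*q) + (-k**2 + k*q)*np.exp(1j*a*q)
           + (b + 2*k**2 + 2*k*q)*np.exp(-1j*a*q))
    return 4*k*q*np.exp(-1j*k*a) / den

k = np.geomspace(1e-5, 1e3, 200000)   # fine grid so that the phase can be unwrapped reliably
phase = np.unwrap(np.angle(T(k)))     # continuous branch of arg T(k) along the grid
change = phase[0] - phase[-1]         # approximates arg T(0+) - arg T(+infinity)

print(change, np.pi/2)                # both approximately pi/2, as (6.28) predicts for N = 1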

References

  • [1] Z. S. Agranovich and V. A. Marchenko, The inverse problem of scattering theory, Gordon and Breach, New York, 1963.
  • [2] T. Aktosun, A factorization of the scattering matrix for the Schrödinger equation and for the wave equation in one dimension, J. Math. Phys. 33, 3865–3869 (1992).
  • [3] T. Aktosun, Bound states and inverse scattering for the Schrödinger equation in one dimension, J. Math. Phys. 35, 6231–6236 (1994).
  • [4] T. Aktosun, M. Klaus, and C. van der Mee, Factorization of the scattering matrix due to partitioning of potentials in one-dimensional Schrödinger type equations, J. Math. Phys. 37, 5897–5915 (1996).
  • [5] T. Aktosun, M. Klaus, and C. van der Mee, Small-energy asymptotics of the scattering matrix for the matrix Schrödinger equation on the line, J. Math. Phys. 42, 4627–4652 (2001).
  • [6] T. Aktosun and R. Weder, Direct and inverse scattering for the matrix Schrödinger equation, Springer Nature, Switzerland, 2021.
  • [7] E. A. Coddington and N. Levinson, Theory of ordinary differential equations, McGraw Hill, New York, 1955.
  • [8] H. Dym, Linear algebra in action, Am. Math. Soc. Publ., Providence, RI, 2006.
  • [9] V. Kostrykin and R. Schrader, Kirchhoff’s rule for quantum wires, J. Phys. A 32, 596–630 (1999).
  • [10] A. Laptev and T. Weidl, Sharp Lieb-Thirring inequalities in high dimensions, Acta Math. 184, 87–111 (2000).
  • [11] I. Martínez Alonso and E. Olmedilla, Trace identities in the inverse scattering transform method associated with matrix Schrödinger operators, J. Math. Phys. 23, 2116–2121 (1982).
  • [12] E. Olmedilla, Inverse scattering transform for general matrix Schrödinger operators and the related symplectic structure, Inverse Probl. 1, 219–236 (1985).
  • [13] R. Weder, The LpL^{p} boundedness of the wave operators for matrix Schrödinger equations, J. Spectr. Theory 12, 707–744 (2022).