
Splitting Quantum Graphs

Nathaniel Smith and Alim Sukhtayev
Abstract

We derive a counting formula for the eigenvalues of Schrödinger operators with self-adjoint boundary conditions on quantum star graphs. More specifically, we develop Evans-function techniques that reduce the eigenvalue problem on a full quantum graph to eigenvalue problems on smaller subgraphs. These methods provide a simple way to calculate the spectra of operators with localized potentials.

1 Introduction

This paper studies the eigenvalues of Schrödinger operators on quantum star graphs with $n$ edges. We will refer to the $i$th edge of a quantum graph $\Omega$ as $\epsilon_{i}$. Each edge is parameterized to have coordinates between $0$ and $\ell_{i}>0$, where $0$ is assigned to the shared origin of all edges and $\ell_{i}$ is the other endpoint.

A function on a quantum graph can be written as a vector function $\vec{f}$, where the $i$th component $f_{i}$ represents the portion of the function defined over the $i$th edge. We assign a separate variable $x_{i}$ to each edge $\epsilon_{i}$, which takes values on the interval $[0,\ell_{i}]$. The collection of all these $x_{i}$ is abbreviated as $\vec{x}$.

1.1 Basic Definitions and Notations

We are particularly interested in sets of separated and self-adjoint boundary conditions $\Gamma=\{\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}\}$, where the $\alpha_{i}$ and $\beta_{i}$ are $n\times n$ matrices satisfying the following:

\begin{split}\text{rank}(\begin{bmatrix}\alpha_{1}&\alpha_{2}\end{bmatrix})=\text{rank}(\begin{bmatrix}\beta_{1}&\beta_{2}\end{bmatrix})=n\,,\\ \alpha_{1}\alpha_{2}^{*}=\alpha_{2}\alpha_{1}^{*}\quad\text{and}\quad\beta_{1}\beta_{2}^{*}=\beta_{2}\beta_{1}^{*}\,,\end{split} (1)

where the $*$ superscript represents the matrix adjoint. We say a function $\vec{u}$ satisfies $\Gamma$ boundary conditions if it satisfies the equations

\begin{split}\alpha_{1}\vec{u}(\vec{0})+\alpha_{2}\vec{u}^{\prime}(\vec{0})=\vec{0}\,,\\ \beta_{1}\vec{u}(\vec{\ell})+\beta_{2}\vec{u}^{\prime}(\vec{\ell})=\vec{0}\,.\end{split} (2)

In this paper, $\vec{u}^{\prime}$ refers to a vector whose $i$th component is $u_{i}^{\prime}$, which is defined to be the derivative of the $i$th component $u_{i}$ of $\vec{u}$ with respect to the relevant spatial variable $x_{i}$.

We will follow the reasonable convention that the matrices $\beta_{1}$ and $\beta_{2}$ should both be diagonal. This comes from the natural assumption that the only interaction between functions should happen at shared vertices (in this case, only at the origin).
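As a quick illustration of conditions (1), the following sketch (our own, not taken from the text) encodes the standard Kirchhoff conditions at the shared origin of an $n$-edge star graph, namely continuity $u_{1}(0)=\cdots=u_{n}(0)$ together with $\sum_{i}u_{i}^{\prime}(0)=0$, in the $\alpha$-matrix form and verifies the rank and self-adjointness requirements numerically.

```python
import numpy as np

# Hypothetical sanity check (not from the paper): Kirchhoff ("natural")
# conditions at the shared origin, written via alpha matrices as in (1).
n = 4
alpha1 = np.zeros((n, n))
for i in range(n - 1):          # rows enforcing u_i(0) - u_{i+1}(0) = 0
    alpha1[i, i], alpha1[i, i + 1] = 1.0, -1.0
alpha2 = np.zeros((n, n))
alpha2[n - 1, :] = 1.0          # row enforcing u_1'(0) + ... + u_n'(0) = 0

# conditions (1): full rank, and alpha1 alpha2* = alpha2 alpha1*
assert np.linalg.matrix_rank(np.hstack([alpha1, alpha2])) == n
assert np.allclose(alpha1 @ alpha2.T, alpha2 @ alpha1.T)
```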

Definition 1.1.

Given a set $\Gamma$ of separated and self-adjoint boundary conditions as described above, we define the $\Gamma$-trace of a function $\vec{u}$ as the following vector in $\mathbb{C}^{2n}$:

\gamma_{\Gamma}\vec{u}=\begin{bmatrix}\beta_{1}\vec{u}(\vec{\ell})+\beta_{2}\vec{u}^{\prime}(\vec{\ell})\\ \alpha_{1}\vec{u}(\vec{0})+\alpha_{2}\vec{u}^{\prime}(\vec{0})\end{bmatrix}\,, (3)

provided $\vec{u}$ is sufficiently smooth.

As an example, by choosing $\alpha_{1}$ and $\beta_{1}$ to be identity matrices while letting $\alpha_{2}$ and $\beta_{2}$ be zero matrices, $\Gamma$ would be equivalent to Dirichlet boundary conditions at all vertices. The Dirichlet trace would then be the column vector

\gamma_{D}\vec{u}=\begin{bmatrix}u_{1}(\ell_{1})&\ldots&u_{n}(\ell_{n})&u_{1}(0)&\ldots&u_{n}(0)\end{bmatrix}^{t}\,, (4)

where $t$ represents the matrix (nonconjugating) transpose. One other notable trace that will show up often is the Neumann trace, defined as

\gamma_{N}\vec{u}=\begin{bmatrix}u_{1}^{\prime}(\ell_{1})&\ldots&u_{n}^{\prime}(\ell_{n})&-u_{1}^{\prime}(0)&\ldots&-u_{n}^{\prime}(0)\end{bmatrix}^{t}\,. (5)

We are interested in solving eigenvalue problems of the form

\begin{cases}(H^{\Omega}-\lambda)\vec{u}=\vec{0}\,,\\ \gamma_{\Gamma}\vec{u}=\vec{0}\,,\end{cases} (6)

where $\vec{u}(\vec{x},\lambda)$ is a function over $\Omega$ and $H^{\Omega}$ is a differential expression defined as

H^{\Omega}=-\Delta+V:={\rm diag\,}(-\partial^{2}_{x_{i}}+V_{i}(x_{i})\,|\,1\leq i\leq n)\,, (7)

where ${\rm diag\,}$ denotes a diagonal matrix whose $i$th diagonal entry is the $i$th argument specified. We assume that each potential function $V_{i}$ is in $L^{1}(0,\ell_{i})$.

Definition 1.2.

$H^{\Omega}_{\Gamma}$ is a Schrödinger operator which is defined over the domain ${\text{dom}\,}(H^{\Omega}_{\Gamma}):=\{\vec{u}\in H^{2}(\Omega):\gamma_{\Gamma}\vec{u}=\vec{0}\}$, where $\vec{u}\in H^{2}(\Omega)$ is shorthand to denote $(\vec{u})_{i}\in H^{2}(0,\ell_{i})$ for all $i$, and $H^{2}(0,\ell_{i})$ is a Sobolev space defined over $\epsilon_{i}$. Then, for functions in this domain,

H^{\Omega}_{\Gamma}\vec{u}:=H^{\Omega}\vec{u}.

We denote the spectrum of $H^{\Omega}_{\Gamma}$ as $\sigma(H^{\Omega}_{\Gamma})$, and the resolvent set as $\rho(H^{\Omega}_{\Gamma})$. Note that our assumptions on the potentials $V_{i}$ in $H^{\Omega}$ guarantee that $\sigma(H^{\Omega}_{\Gamma})$ consists only of the discrete spectrum [10].

Remark 1.3.

We can also introduce smaller associated operators over subgraphs. Let $x_{j}=s_{1}$ be a splitting point (though not a vertex) on edge $\epsilon_{j}$, which will play the role of a vertex in two connected subgraphs of $\Omega$, denoted $\Omega_{1}$ and $\Omega_{2}$, which satisfy the following conditions:

\begin{split}\Omega_{1}\cup\Omega_{2}=\Omega&\text{ and }\Omega_{1}\cap\Omega_{2}=\{s_{1}\},\\ \ell_{j}\in\Omega_{1}&\text{ and }0\in\Omega_{2}.\end{split}

An example of such a partition is shown in Figure 1. Let $H^{\Omega_{1}}$ and $H^{\Omega_{2}}$ be the restrictions of $H^{\Omega}$ to $\Omega_{1}$ and $\Omega_{2}$, respectively. Next, define a new set of boundary conditions $\Gamma_{1}$ over $\Omega_{1}$ which shares $\Gamma$ boundary conditions at all common vertices with $\Omega$, and which includes a Dirichlet boundary condition $u(s_{1})=0$ at $s_{1}$. Similarly, we define $\Gamma_{2}$ as a set of boundary conditions over $\Omega_{2}$ which is the same as $\Gamma$ at all common vertices with $\Omega$, and which includes the Dirichlet condition $u_{j}(s_{1})=0$ at $s_{1}$.

Figure 1: A partition of $\Omega$ into $\Omega_{1}$ and $\Omega_{2}$ at $s_{1}$.

Then, we will consider the operators $H^{\Omega_{1}}_{\Gamma_{1}}$ and $H^{\Omega_{2}}_{\Gamma_{2}}$, which are defined as follows:

\begin{split}H^{\Omega_{1}}_{\Gamma_{1}}u:=H^{\Omega_{1}}u\text{ for }u\in{\text{dom}\,}(H^{\Omega_{1}}_{\Gamma_{1}}):=\{u\in H^{2}(\Omega_{1}):\gamma_{\Gamma_{1}}u=\vec{0}\},\\ H^{\Omega_{2}}_{\Gamma_{2}}\vec{u}:=H^{\Omega_{2}}\vec{u}\text{ for }\vec{u}\in{\text{dom}\,}(H^{\Omega_{2}}_{\Gamma_{2}}):=\{\vec{u}\in H^{2}(\Omega_{2}):\gamma_{\Gamma_{2}}\vec{u}=\vec{0}\}.\end{split} (8)

Note that $\gamma_{\Gamma_{1}}u$ and $\gamma_{\Gamma_{2}}\vec{u}$ have nearly identical definitions to that in formula (3) for their respective $\alpha$ and $\beta$ matrices. However, in the case of $\gamma_{\Gamma_{1}}u$ the new vertex $s_{1}$ will take the place of the origin $0$, and in the case of $\gamma_{\Gamma_{2}}\vec{u}$ it will take the place of $\ell_{j}$.

We will make heavy use of the Evans function, a powerful and popular tool in the stability theory of traveling waves [1, 4, 6, 8, 9]. In general, Evans functions are not uniquely defined. To avoid this issue, we specify a unique construction of an Evans function $E^{\Omega}_{\Gamma}$ associated with a given operator $H^{\Omega}_{\Gamma}$ in Definition 2.2. Thus, for the remainder of the paper, we will simply refer to the Evans function of an operator, always assuming that it is constructed by Definition 2.2.

If $\Omega$ is split at some non-vertex point $x_{j}=s_{1}$ into regions $\Omega_{1}$ and $\Omega_{2}$ as discussed in Remark 1.3, then a one-sided map is defined to be a Dirichlet-to-Neumann map associated with $s_{1}$. In particular, we specify two such maps given our splitting.

Definition 1.4.

Let $\lambda\in\rho(H^{\Omega_{1}}_{\Gamma_{1}})$. Fix $f\in\mathbb{C}$. Then we define a map $M_{1}(\lambda):\mathbb{C}\rightarrow\mathbb{C}$ as

M_{1}(\lambda)f=-u^{\prime}(s_{1},\lambda), (9)

where $u$ is the unique solution to the boundary value problem

\begin{cases}(H^{\Omega_{1}}_{\Gamma_{1}}-\lambda)u=0,\\ u(s_{1})=f,\\ \text{$u$ satisfies all other $\Gamma_{1}$ conditions.}\end{cases}

Due to the definition of $u$, we have that

M_{1}(\lambda)=-\frac{u^{\prime}(s_{1},\lambda)}{u(s_{1},\lambda)}.

We suppress the $s_{1}$ dependence of the map in this paper.

Similarly, for $\lambda\in\rho(H^{\Omega_{2}}_{\Gamma_{2}})$ and $f\in\mathbb{C}$ we define a map $M_{2}(\lambda):\mathbb{C}\rightarrow\mathbb{C}$ as

M_{2}(\lambda)f=u_{j}^{\prime}(s_{1},\lambda), (10)

where $u_{j}$ is the $j$th component of the unique solution $\vec{u}$ to the boundary value problem

\begin{cases}(H^{\Omega_{2}}_{\Gamma_{2}}-\lambda)\vec{u}=\vec{0},\\ u_{j}(s_{1})=f,\\ \text{$\vec{u}$ satisfies all other $\Gamma_{2}$ conditions.}\end{cases}

Due to the definition of $\vec{u}$, we can exactly express $M_{2}(\lambda)$ as

M_{2}(\lambda)=\frac{u_{j}^{\prime}(s_{1},\lambda)}{u_{j}(s_{1},\lambda)}.

Again, we choose to suppress the $s_{1}$ dependence of this map. Note that the right-hand sides of equations (9) and (10) are the portions of the Neumann trace over $\Omega_{1}$ and $\Omega_{2}$, respectively, which correspond to $s_{1}$. The sum $M_{1}+M_{2}$ is called a two-sided map associated with $s_{1}$.
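For a concrete special case (worked out here for illustration, not taken from the text), take a single edge ($n=1$) of length $\ell$ with $V\equiv 0$ and Dirichlet conditions at both endpoints. Solving the two boundary value problems above explicitly with $u\propto\sin(\sqrt{\lambda}\,(\ell-x))$ on $\Omega_{1}=[s_{1},\ell]$ and $u\propto\sin(\sqrt{\lambda}\,x)$ on $\Omega_{2}=[0,s_{1}]$ gives

M_{1}(\lambda)=\sqrt{\lambda}\,\cot\big(\sqrt{\lambda}\,(\ell-s_{1})\big),\qquad M_{2}(\lambda)=\sqrt{\lambda}\,\cot\big(\sqrt{\lambda}\,s_{1}\big),

so that

(M_{1}+M_{2})(\lambda)=\frac{\sqrt{\lambda}\,\sin(\sqrt{\lambda}\,\ell)}{\sin\big(\sqrt{\lambda}\,(\ell-s_{1})\big)\sin\big(\sqrt{\lambda}\,s_{1}\big)}\,.

The poles of the two-sided map sit exactly at the Dirichlet eigenvalues of the two subintervals, while its zeros sit at the eigenvalues $(m\pi/\ell)^{2}$ of the full problem.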

1.2 Summary of Paper

In this paper we develop methods for finding the spectra of Schrödinger differential operators on quantum star graphs. In particular, we wish to be able to split a quantum star graph into smaller subgraphs, and then find the spectrum of the original problem by examining the spectra of the related subgraph problems. This is particularly helpful for dealing with operators with localized potentials, since it allows us to isolate the zero potential and nonzero potential portions from one another, resulting in simpler subproblems.

In Section 2, we find a general formula for $R_{\lambda}^{\Gamma}\vec{v}$, the resolvent of $H^{\Omega}_{\Gamma}$ applied to an arbitrary $L^{2}(\Omega)$ function $\vec{v}$.

In Section 3, we express $\vec{u}_{\Gamma,i}$ (a solution to an inhomogeneous $\Gamma$ boundary value problem) in terms of $R^{\Gamma}_{\lambda}\vec{v}$. Such functions are the building blocks of the two-sided maps. This expression (Theorem 3.4) involves three key projection matrices: the Dirichlet, Neumann, and Robin projections. The various properties of these matrices allow us to algebraically apply boundary conditions to solutions in a general way, without relying on any knowledge of the specific type of condition (beyond being representable by the $\alpha$ and $\beta$ matrices discussed earlier).

In Section 4, we prove the main results of this paper. They relate the eigenvalues of $H^{\Omega}_{\Gamma}$ to those of the smaller operators that come from splitting $\Omega$ as discussed in Remark 1.3. We state them here:

Theorem 1.5.

Suppose $\Omega$ is split into $\Omega_{1}$ and $\Omega_{2}$ at some non-vertex point $s_{1}$, and let $\Gamma_{1}$ and $\Gamma_{2}$ be the new boundary conditions as constructed in Remark 1.3. Additionally, suppose $\lambda\in\rho(H^{\Omega_{1}}_{\Gamma_{1}})\cap\rho(H^{\Omega_{2}}_{\Gamma_{2}})$, and let $M_{1}+M_{2}$ be the two-sided map at $s_{1}$. Then

\frac{E^{\Omega}_{\Gamma}(\lambda)}{E^{\Omega_{1}}_{\Gamma_{1}}(\lambda)E^{\Omega_{2}}_{\Gamma_{2}}(\lambda)}=(M_{1}+M_{2})(\lambda), (11)

where $E^{\Omega}_{\Gamma}$, $E^{\Omega_{1}}_{\Gamma_{1}}$, and $E^{\Omega_{2}}_{\Gamma_{2}}$ are the Evans functions associated with $H^{\Omega}_{\Gamma}$, $H^{\Omega_{1}}_{\Gamma_{1}}$, and $H^{\Omega_{2}}_{\Gamma_{2}}$, respectively.

Remark 1.6.

Although neither side of equation (11) is defined on the spectra of the operators involved, both sides are equal as meromorphic functions over $\mathbb{C}$. Since we only care about the poles and zeros in this equation, this type of equality is entirely sufficient.

A similar result has already been proven for the interval case in [7].

Since zeros of Evans functions locate eigenvalues, Theorem 1.5 can be used to easily prove another one of the main results.

Theorem 1.7.

Let $\mathcal{N}_{X}(\lambda)$ count the number of eigenvalues less than or equal to $\lambda$ of any operator $X$, and let $N(\lambda)$ be the difference between the number of zeros and the number of poles (including multiplicities) of $M_{1}+M_{2}$ less than or equal to $\lambda$. Then

\mathcal{N}_{H^{\Omega}_{\Gamma}}(\lambda)=\mathcal{N}_{H^{\Omega_{1}}_{\Gamma_{1}}}(\lambda)+\mathcal{N}_{H^{\Omega_{2}}_{\Gamma_{2}}}(\lambda)+N(\lambda). (12)

In [2], a similar eigenvalue counting theorem has already been proven for the case of dividing multidimensional domains by hypersurfaces and defining two-sided maps with respect to these hypersurfaces.
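To make the counting concrete, here is a small numerical sketch (our own construction, under the simplifying assumptions $n=1$, $V\equiv 0$, Dirichlet conditions on a single edge $[0,1]$, split at $s_{1}=0.3$). Writing $\lambda=k^{2}$, the two-sided map reduces to $k\sin(k)/\big(\sin(ks_{1})\sin(k(1-s_{1}))\big)$, so its zeros come from $\sin(k)$ and its poles from the denominator factors:

```python
import math

def count_zeros(f, a, b, n=200000):
    """Count zeros of f on (a, b] by scanning for sign changes."""
    count, prev = 0, f(a)
    for i in range(1, n + 1):
        cur = f(a + (b - a) * i / n)
        if prev * cur < 0:
            count += 1
        prev = cur
    return count

s1, lam = 0.3, 500.0
kmax = math.sqrt(lam)

# zeros and poles of (M1 + M2)(k^2) = k*sin(k)/(sin(k*s1)*sin(k*(1-s1)))
zeros = count_zeros(math.sin, 1e-9, kmax)
poles = (count_zeros(lambda k: math.sin(k * s1), 1e-9, kmax)
         + count_zeros(lambda k: math.sin(k * (1 - s1)), 1e-9, kmax))
N = zeros - poles

# eigenvalue counts: Dirichlet eigenvalues of [0, L] are (m*pi/L)^2
count = lambda L: math.floor(kmax * L / math.pi)
full, sub1, sub2 = count(1.0), count(1.0 - s1), count(s1)
assert full == sub1 + sub2 + N   # the counting formula (12)
```

Here the zeros of the two-sided map reproduce the eigenvalues of the full problem and its poles reproduce those of the two subproblems, which is exactly the mechanism behind (12).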

We will also demonstrate how repeated applications of Theorem 1.5 can be used to split a quantum graph into three subdomains over which the eigenvalues can be counted separately, which is useful for more general barrier and well problems.

Definition 1.8.

Suppose $\Omega$ has been split into $\Omega_{1}$ and $\Omega_{2}$ as described in Remark 1.3 at a non-vertex splitting point $x_{j}=s_{1}$. We can apply the exact same process to split $\Omega_{2}$ at some non-vertex point $x_{i}=s_{2}$ into two connected subgraphs $\tilde{\Omega}_{1}$ and $\tilde{\Omega}_{2}$ which satisfy the following conditions:

\tilde{\Omega}_{1}\cup\tilde{\Omega}_{2}=\Omega_{2}\quad\text{ and }\quad\tilde{\Omega}_{1}\cap\tilde{\Omega}_{2}=\{s_{2}\}.

Additionally, we ensure that $0\in\tilde{\Omega}_{2}$. If $s_{2}$ is on the same wire as $s_{1}$, then $s_{1}\in\tilde{\Omega}_{1}$. If the two splitting points are on different wires, then $\ell_{i}\in\tilde{\Omega}_{1}$.

Remark 1.9.

Again, such a splitting comes with some associated operators. Let $H^{\tilde{\Omega}_{1}}$ and $H^{\tilde{\Omega}_{2}}$ be the restrictions of $H^{\Omega_{2}}$ to $\tilde{\Omega}_{1}$ and $\tilde{\Omega}_{2}$, respectively. We can define new boundary conditions $\tilde{\Gamma}_{1}$ on $\tilde{\Omega}_{1}$ which match $\Gamma_{2}$ conditions at all common vertices with $\Omega_{2}$, and which include the Dirichlet condition $u(s_{2})=0$ at $x_{i}=s_{2}$. Likewise, we can define $\tilde{\Gamma}_{2}$ boundary conditions over $\tilde{\Omega}_{2}$ which are identical to $\Gamma_{2}$ conditions at all common vertices, and with the Dirichlet condition $u_{i}(s_{2})=0$ at $s_{2}$. Finally, using these pieces we can define the operators $H^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}$ and $H^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}$ as follows:

\begin{split}H^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}u:=H^{\tilde{\Omega}_{1}}u\text{ for }u\in{\text{dom}\,}(H^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}):=\{u\in H^{2}(\tilde{\Omega}_{1}):\gamma_{\tilde{\Gamma}_{1}}u=\vec{0}\},\\ H^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}\vec{u}:=H^{\tilde{\Omega}_{2}}\vec{u}\text{ for }\vec{u}\in{\text{dom}\,}(H^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}):=\{\vec{u}\in H^{2}(\tilde{\Omega}_{2}):\gamma_{\tilde{\Gamma}_{2}}\vec{u}=\vec{0}\}.\end{split} (13)

We can also construct new Dirichlet-to-Neumann maps $\mathcal{M}_{1}$ and $\mathcal{M}_{2}$, now associated with the two vertices $s_{1}$ and $s_{2}$, and sum them to make another two-sided map. The construction of $\mathcal{M}_{1}$ and $\mathcal{M}_{2}$ is different for splittings on the same wire versus splittings on different wires. The specific constructions are detailed in Definitions 4.5 and 4.8. In either case, we can utilize the determinant of the two-sided map $|(\mathcal{M}_{1}+\mathcal{M}_{2})(\lambda)|$ in the following theorem:

Figure 2: Two example partitions of $\Omega$ into three subgraphs.
Theorem 1.10.

Suppose $\Omega$ is split into three quantum subgraphs $\Omega_{1}$, $\tilde{\Omega}_{1}$, and $\tilde{\Omega}_{2}$, as described in Definition 1.8. Additionally, suppose that $\lambda\in\rho(H^{\Omega_{1}}_{\Gamma_{1}})\cap\rho(H^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}})\cap\rho(H^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}})$, and let $\mathcal{M}_{1}+\mathcal{M}_{2}$ be the two-sided map associated with $s_{1}$ and $s_{2}$. Then

\frac{E^{\Omega}_{\Gamma}}{E^{\Omega_{1}}_{\Gamma_{1}}E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}}=|(\mathcal{M}_{1}+\mathcal{M}_{2})(\lambda)|, (14)

where $E^{\Omega}_{\Gamma}$, $E^{\Omega_{1}}_{\Gamma_{1}}$, $E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}$, and $E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}$ are the Evans functions associated with the operators $H^{\Omega}_{\Gamma}$, $H^{\Omega_{1}}_{\Gamma_{1}}$, $H^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}$, and $H^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}$, respectively.

Just as with Theorem 1.5, the equality in Theorem 1.10 is one of meromorphic functions, so the desired behavior is preserved. Also, we again obtain a corresponding eigenvalue counting theorem.

Theorem 1.11.

Let $\mathcal{N}_{X}(\lambda)$ count the number of eigenvalues less than or equal to $\lambda$ of any operator $X$, and let $N(\lambda)$ be the difference between the number of zeros and the number of poles (including multiplicities) of $|\mathcal{M}_{1}+\mathcal{M}_{2}|$ less than or equal to $\lambda$. Then

\mathcal{N}_{H^{\Omega}_{\Gamma}}(\lambda)=\mathcal{N}_{H^{\Omega_{1}}_{\Gamma_{1}}}(\lambda)+\mathcal{N}_{H^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}}(\lambda)+\mathcal{N}_{H^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}}(\lambda)+N(\lambda). (15)

Finally, in Section 5, we illustrate some applications of the main theorems to quantum star graphs with potential barriers and wells.

2 General Resolvent Formula

2.1 Motivation for Using the Resolvent

While it is possible to construct Dirichlet-to-Neumann maps column by column by solving several boundary value problems, this method does little to illuminate the process by which the zeros and poles containing spectral information appear in the map. Provided that $\lambda\in\rho(H^{\Omega}_{\Gamma})$, this map-making consists of finding functions $\vec{u}_{\Gamma,i}\in H^{2}(\Omega)$ which serve as solutions to the problem:

\begin{cases}(H^{\Omega}-\lambda)\vec{u}_{\Gamma,i}=\vec{0}\,,\\ \gamma_{\Gamma}\vec{u}_{\Gamma,i}=\vec{e}_{i},\end{cases} (16)

where $\vec{e}_{i}$ is the $i$th standard basis vector for $\mathbb{C}^{2n}$, then applying the desired trace to such a solution. Note that we will use $\vec{e}_{i}$ to represent standard basis vectors for $\mathbb{C}^{n}$ and $\mathbb{C}^{2n}$ interchangeably, since the dimension will be clear from context. Instead of solving for $\vec{u}_{\Gamma,i}$ via its defining boundary value problem, we will construct it using the resolvent of $H^{\Omega}_{\Gamma}$.

For $\lambda\in\rho(H^{\Omega}_{\Gamma})$, a key property of the resolvent $R_{\lambda}^{\Gamma}$ of $H^{\Omega}_{\Gamma}$ is that it acts as the inverse operator of $H^{\Omega}_{\Gamma}-\lambda$. That is,

R_{\lambda}^{\Gamma}(H^{\Omega}_{\Gamma}-\lambda)\vec{v}=(H^{\Omega}_{\Gamma}-\lambda)R_{\lambda}^{\Gamma}\vec{v}=\vec{v}, (17)

where the first equality holds for any $\vec{v}\in{\text{dom}\,}(H^{\Omega}_{\Gamma})$ and the second for any $\vec{v}\in L^{2}(\Omega)$.

Equation (17) actually gives us a way to find $R_{\lambda}^{\Gamma}\vec{v}$ for an arbitrary $L^{2}(\Omega)$ function $\vec{v}$ by solving the inhomogeneous boundary value problem:

\begin{cases}(H^{\Omega}_{\Gamma}-\lambda)\vec{y}=\vec{v}\,,\\ \gamma_{\Gamma}\vec{y}=\vec{0}.\end{cases} (18)

The unique solution $\vec{y}$ to this problem must be $R_{\lambda}^{\Gamma}\vec{v}$. We will solve for $R_{\lambda}^{\Gamma}\vec{v}$ by adding together a complementary and a particular solution. Normally, we could generate a particular solution via the method of Variation of Parameters, but since each component of a function on $\Omega$ has a different spatial variable, we need to make some slight adjustments to the method.

2.2 Complementary Homogeneous Solution

We will start by obtaining a complementary solution. First, suppose $\lambda\in\rho(H^{\Omega}_{\Gamma})$. Next, let the entries of $\alpha_{1}$ and $\alpha_{2}$ be named as follows:

\alpha_{1}=\begin{bmatrix}a_{1,1}&\ldots&a_{1,n}\\ \vdots&\ddots&\vdots\\ a_{n,1}&\ldots&a_{n,n}\end{bmatrix}\qquad\alpha_{2}=\begin{bmatrix}b_{1,1}&\ldots&b_{1,n}\\ \vdots&\ddots&\vdots\\ b_{n,1}&\ldots&b_{n,n}\end{bmatrix}\,, (19)

and similarly let the diagonal entries of $\beta_{1}$ and $\beta_{2}$ be named

\beta_{1}={\rm diag\,}(g_{1},...,g_{n})\qquad\beta_{2}={\rm diag\,}(h_{1},...,h_{n})\,. (20)

Consider the matrix $\begin{bmatrix}-\alpha_{2}^{*}\\ \alpha_{1}^{*}\end{bmatrix}$. We can use each one of the columns as initial conditions for complementary homogeneous problems to get solutions $\vec{y}_{i}$ which solve:

\begin{cases}(H^{\Omega}-\lambda)\vec{y}_{i}=\vec{0}\,,\\ y_{i,j}(0)=-\overline{b_{i,j}}\text{ for }1\leq j\leq n\,,\\ y^{\prime}_{i,j}(0)=\overline{a_{i,j}}\text{ for }1\leq j\leq n\,,\end{cases} (21)

where $y_{i,j}$ is the $j$th component of $\vec{y}_{i}$ and the overline represents complex conjugation. Observe that each of these $n$ solutions satisfies $\Gamma$ boundary conditions at the origin. We denote the set of all $\vec{y}_{i}$ as $\mathcal{Y}$.

Similarly, we can define another $n$ solutions by using the columns of $\begin{bmatrix}-\beta_{2}^{*}\\ \beta_{1}^{*}\end{bmatrix}$ as initial conditions for homogeneous problems to generate solutions $\vec{z}_{i}$ which satisfy:

\begin{cases}(H^{\Omega}-\lambda)\vec{z}_{i}=\vec{0}\,,\\ \vec{z}_{i}(\vec{\ell})=-\overline{h_{i}}\vec{e}_{i}\,,\\ \vec{z}_{i}^{\prime}(\vec{\ell})=\overline{g_{i}}\vec{e}_{i}\,.\end{cases} (22)

First, note that these solutions $\vec{z}_{i}$ satisfy $\Gamma$ boundary conditions at the outer endpoints of the quantum graph. Also, we have that $z_{i,j}=0$ for all $j\neq i$ due to the zero conditions. We denote the set of all $\vec{z}_{i}$ as $\mathcal{Z}$.

Due to the rank conditions on the $\alpha$ and $\beta$ matrices imposed in equation (1), we have that $\mathcal{Y}$ is a linearly independent set and $\mathcal{Z}$ is a linearly independent set. Additionally, it can be seen that the combined set $\mathcal{Y}\cup\mathcal{Z}$ is linearly dependent if and only if $\lambda\in\sigma(H^{\Omega}_{\Gamma})$. Thus, since $\lambda\in\rho(H^{\Omega}_{\Gamma})$, we must have that $\mathcal{Y}\cup\mathcal{Z}$ forms a linearly independent set of $2n$ functions.

Definition 2.1.

Let $Y$ and $Z$ be $n\times n$ matrices such that the $i$th column of $Y$ is $\vec{y}_{i}$ and the $i$th column of $Z$ is $\vec{z}_{i}$. Observe that the ordering of these columns implies that

\begin{bmatrix}Y(\vec{0},\lambda)\\ Y^{\prime}(\vec{0},\lambda)\end{bmatrix}=\begin{bmatrix}-\alpha_{2}^{*}\\ \alpha_{1}^{*}\end{bmatrix}\quad\text{and}\quad\begin{bmatrix}Z(\vec{\ell},\lambda)\\ Z^{\prime}(\vec{\ell},\lambda)\end{bmatrix}=\begin{bmatrix}-\beta_{2}^{*}\\ \beta_{1}^{*}\end{bmatrix}. (23)
Definition 2.2.

In this paper, we will define the Evans function for $H^{\Omega}_{\Gamma}$ (denoted $E^{\Omega}_{\Gamma}$) as the determinant of the following fundamental solution matrix:

F(\vec{x},\lambda):=\begin{bmatrix}Y(\vec{x},\lambda)&Z(\vec{x},\lambda)\\ Y^{\prime}(\vec{x},\lambda)&Z^{\prime}(\vec{x},\lambda)\end{bmatrix}. (24)

That is, $E^{\Omega}_{\Gamma}(\lambda)=|F(\vec{x},\lambda)|$. The Evans function only depends on $\lambda$ due to Theorem A.1.

This definition also extends to the operators defined over subgraphs like $H^{\Omega_{1}}_{\Gamma_{1}}$ and $H^{\Omega_{2}}_{\Gamma_{2}}$; we simply alter the initial value problems defining $Y$ and $Z$ to incorporate $s_{1}$ as reflected in the new boundary conditions.
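As a numerical illustration of Definition 2.2 (a sketch of ours, under the simplifying assumptions $n=1$, $V\equiv 0$, $\ell=1$, Dirichlet conditions at both ends), $Y$ solves $y(0)=0$, $y^{\prime}(0)=1$ and $Z$ solves $z(1)=0$, $z^{\prime}(1)=1$, so that $|F|=YZ^{\prime}-Y^{\prime}Z$ works out in closed form to $\sin(\sqrt{\lambda})/\sqrt{\lambda}$, whose zeros are exactly the Dirichlet eigenvalues $(m\pi)^{2}$:

```python
import math

def integrate(lam, x0, x1, u, du, steps=2000):
    """RK4 for u'' = -lam*u, carrying (u, u') from x0 to x1 (x1 < x0 allowed)."""
    h = (x1 - x0) / steps
    f = lambda u, du: (du, -lam * u)
    for _ in range(steps):
        k1 = f(u, du)
        k2 = f(u + h / 2 * k1[0], du + h / 2 * k1[1])
        k3 = f(u + h / 2 * k2[0], du + h / 2 * k2[1])
        k4 = f(u + h * k3[0], du + h * k3[1])
        u += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        du += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return u, du

def evans(lam, x=0.5):
    Y, dY = integrate(lam, 0.0, x, 0.0, 1.0)   # alpha conditions at the origin
    Z, dZ = integrate(lam, 1.0, x, 0.0, 1.0)   # beta conditions at the far end
    return Y * dZ - dY * Z                      # |F(x, lambda)|, independent of x

k = math.sqrt(40.0)
assert abs(evans(40.0) - math.sin(k) / k) < 1e-6
assert abs(evans(math.pi ** 2)) < 1e-6          # lambda = pi^2 is an eigenvalue
```

The same scheme extends to a star graph by integrating one such system per edge and assembling the $2n\times 2n$ determinant (24).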

2.3 Particular Solution

We wish to emulate Variation of Parameters so as to obtain a particular solution from our complementary solution set. We cannot apply ordinary Variation of Parameters to the whole set of complementary solutions, since each solution has multiple spatial variables, but we can apply it edge by edge. So, we construct a particular solution $\vec{y}_{p}$ component by component. The $i$th component of the particular solution $\vec{y}_{p}$ can be built by looking at all the components $y_{k,i}$ and $z_{i,i}$ (we call such components subvectors), finding a linearly independent pair, and using Variation of Parameters on the pair to find a particular solution to the one-dimensional second order problem:

\Big(-\frac{\partial^{2}}{\partial x_{i}^{2}}+V_{i}-\lambda\Big)u=v_{i}\,,\qquad x_{i}\in(0,\ell_{i})\,. (25)
Theorem 2.3.

Fix $\lambda\in\rho(H^{\Omega}_{\Gamma})$. Consider the sets of functions $\mathcal{Y}$ and $\mathcal{Z}$ made up of solutions from equations (21) and (22). For all $1\leq j\leq n$, there exists at least one index $1\leq\tau_{j}\leq n$ such that $y_{\tau_{j},j}$ is linearly independent from $z_{j,j}$.

Proof.

Suppose for contradiction we could not find another linearly independent function for some index $j$. Then there exist constants $\chi_{i}$ such that $y_{i,j}=\chi_{i}z_{j,j}$ for all $i$. Since $\mathcal{Y}\cup\mathcal{Z}$ is linearly independent, the set $S=\mathcal{Z}\cup\{\vec{y}_{i}-\chi_{i}\vec{z}_{j}:\vec{y}_{i}\in\mathcal{Y}\}$ is also a linearly independent set. We can construct a new set of functions $S^{\prime}$ by removing the $j$th component of every function in $S\setminus\{\vec{z}_{j}\}$. Since the $j$th component of every vector in $S\setminus\{\vec{z}_{j}\}$ is $0$, we lose no information by deleting them, so $S^{\prime}$ is a linearly independent set of size $2n-1$ containing vectors with $n-1$ components. But this cannot be possible, since the solutions in $S^{\prime}$ are linearly independent solutions to an $(n-1)$-dimensional second order ODE, of which there should be at most $2n-2$. Thus, there must be some $y_{\tau_{j},j}$ meeting our desired specifications. ∎

This proof shows that our componentwise construction of $\vec{y}_{p}$ is well defined for all $\lambda\in\rho(H^{\Omega}_{\Gamma})$. We also see in the proof that the choice of each $y_{\tau_{j},j}$ is local with respect to $\lambda$. Thus, for specific $\lambda$ we can use Variation of Parameters to define the $i$th entry of $\vec{y}_{p}$ as

(\vec{y}_{p})_{i}(x_{i},\lambda):=-\frac{1}{D_{i}(\lambda)}\Big(y_{\tau_{i},i}(x_{i},\lambda)\int_{x_{i}}^{\ell_{i}}v_{i}(t_{i})z_{i,i}(t_{i},\lambda)\,dt_{i}+z_{i,i}(x_{i},\lambda)\int_{0}^{x_{i}}v_{i}(t_{i})y_{\tau_{i},i}(t_{i},\lambda)\,dt_{i}\Big)\,, (26)

where $D_{i}:=W_{0}(y_{\tau_{i},i},z_{i,i})$ is a Wronskian for solutions on the $i$th wire.
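As a sanity check (a routine computation we include for completeness, under the Wronskian convention $W_{0}(f,g)=fg^{\prime}-f^{\prime}g$), differentiating (26) once shows that the boundary terms produced by the two integrals cancel:

(\vec{y}_{p})_{i}^{\prime}=-\frac{1}{D_{i}}\Big(y_{\tau_{i},i}^{\prime}\int_{x_{i}}^{\ell_{i}}v_{i}z_{i,i}\,dt_{i}+z_{i,i}^{\prime}\int_{0}^{x_{i}}v_{i}y_{\tau_{i},i}\,dt_{i}\Big)\,,

and differentiating once more, using $y_{\tau_{i},i}^{\prime\prime}=(V_{i}-\lambda)y_{\tau_{i},i}$ and $z_{i,i}^{\prime\prime}=(V_{i}-\lambda)z_{i,i}$,

-(\vec{y}_{p})_{i}^{\prime\prime}+(V_{i}-\lambda)(\vec{y}_{p})_{i}=\frac{v_{i}}{D_{i}}\big(y_{\tau_{i},i}z_{i,i}^{\prime}-y_{\tau_{i},i}^{\prime}z_{i,i}\big)=v_{i}\,,

since $D_{i}$ is exactly this Wronskian (constant in $x_{i}$, as the equation has no first-order term). So (26) indeed solves (25).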

2.4 Constant Tuning

Let $S_{0}=\begin{bmatrix}Y&Z\end{bmatrix}$. Then our general solution (which is $R_{\lambda}^{\Gamma}\vec{v}$) will be of the form

\vec{u}=S_{0}\vec{c}+\vec{y}_{p}\,, (27)

where $\vec{c}$ is a vector of scalars in $\mathbb{C}^{2n}$. To make sure $\vec{u}$ satisfies the $\Gamma$ boundary conditions, we will need to adjust the components of $\vec{c}$ so that

\gamma_{\Gamma}(S_{0}\vec{c})=-\gamma_{\Gamma}\vec{y}_{p}\,. (28)

By the linearity of $\gamma_{\Gamma}$, we can modify equation (28) by taking the trace of every column of $S_{0}$, arranging these in a matrix $\tilde{S}_{0}$, and then multiplying this matrix by $\vec{c}$. We define $\tilde{S}_{0}$ as

\begin{split}\tilde{S}_{0}(\lambda)&=\begin{bmatrix}\beta_{1}S_{0}(\vec{\ell},\lambda)+\beta_{2}S_{0}^{\prime}(\vec{\ell},\lambda)\\ \alpha_{1}S_{0}(\vec{0},\lambda)+\alpha_{2}S_{0}^{\prime}(\vec{0},\lambda)\end{bmatrix}\\ &=\begin{bmatrix}\beta_{1}Y(\vec{\ell},\lambda)+\beta_{2}Y^{\prime}(\vec{\ell},\lambda)&\beta_{1}Z(\vec{\ell},\lambda)+\beta_{2}Z^{\prime}(\vec{\ell},\lambda)\\ \alpha_{1}Y(\vec{0},\lambda)+\alpha_{2}Y^{\prime}(\vec{0},\lambda)&\alpha_{1}Z(\vec{0},\lambda)+\alpha_{2}Z^{\prime}(\vec{0},\lambda)\end{bmatrix}\\ &=\begin{bmatrix}\beta_{1}Y(\vec{\ell},\lambda)+\beta_{2}Y^{\prime}(\vec{\ell},\lambda)&0\\ 0&\alpha_{1}Z(\vec{0},\lambda)+\alpha_{2}Z^{\prime}(\vec{0},\lambda)\end{bmatrix}\,.\end{split} (29)

The zero blocks in (29) appear since $Y$ and $Z$ are defined to satisfy the $\alpha$ and $\beta$ conditions, respectively. We can rewrite equation (28) as

\tilde{S}_{0}(\lambda)\vec{c}=-\gamma_{\Gamma}\vec{y}_{p}\,. (30)

Due to the integral bounds we chose, we find that

\gamma_{\Gamma}\vec{y}_{p}=\begin{bmatrix}\vec{0}\\ \alpha_{1}\vec{y}_{p}(\vec{0},\lambda)+\alpha_{2}\vec{y}_{p}^{\prime}(\vec{0},\lambda)\end{bmatrix}\,. (31)

Equations (30) and (31) imply that we can always choose $c_{1}=c_{2}=...=c_{n}=0$, and thus we can determine $c_{n+i}$ for $1\leq i\leq n$ by solving the smaller system:

C(\lambda)\begin{bmatrix}c_{n+1}\\ \vdots\\ c_{2n}\end{bmatrix}=-\alpha_{1}\vec{y}_{p}(\vec{0},\lambda)-\alpha_{2}\vec{y}_{p}^{\prime}(\vec{0},\lambda)\,, (32)

where $C(\lambda)$ is given by:

C(\lambda)=\alpha_{1}Z(\vec{0},\lambda)+\alpha_{2}Z^{\prime}(\vec{0},\lambda)\,. (33)

That is, $C(\lambda)$ is the lower right block of $\tilde{S}_{0}$. We can find an exact formula for $C(\lambda)$. First, we give names to the values of the various $\vec{z}_{i}$ at the origin:

\begin{split}-s_{i}(\lambda)&=z_{i,i}(0,\lambda)\,,\\ r_{i}(\lambda)&=z_{i,i}^{\prime}(0,\lambda)\,.\end{split} (34)

Then from equation (33), $C(\lambda)$ is given by

\begin{split}C(\lambda)&=\begin{bmatrix}\alpha_{1}&\alpha_{2}\end{bmatrix}\begin{bmatrix}{\rm diag\,}(-s_{1}(\lambda),...,-s_{n}(\lambda))\\ {\rm diag\,}(r_{1}(\lambda),...,r_{n}(\lambda))\end{bmatrix}\\ &=\begin{bmatrix}\begin{vmatrix}-s_{1}(\lambda)&-b_{1,1}\\ r_{1}(\lambda)&a_{1,1}\end{vmatrix}&\ldots&\begin{vmatrix}-s_{n}(\lambda)&-b_{1,n}\\ r_{n}(\lambda)&a_{1,n}\end{vmatrix}\\ \vdots&\ddots&\vdots\\ \begin{vmatrix}-s_{1}(\lambda)&-b_{n,1}\\ r_{1}(\lambda)&a_{n,1}\end{vmatrix}&\ldots&\begin{vmatrix}-s_{n}(\lambda)&-b_{n,n}\\ r_{n}(\lambda)&a_{n,n}\end{vmatrix}\end{bmatrix}\,.\end{split} (35)

If C(λ)C(\lambda) is invertible, then we can solve for the coefficients cn+1,,c2nc_{n+1},...,c_{2n} by inversion in equation (32). Invertibility is guaranteed by the following lemma.

Lemma 2.4.

Let AA and BB be n×nn\times n matrices, C=α1A+α2BC=\alpha_{1}A+\alpha_{2}B, where αi\alpha_{i} are defined in (1), and

F=[α2Aα1B].F=\begin{bmatrix}-\alpha_{2}^{*}&A\\ \alpha_{1}^{*}&B\end{bmatrix}.

Then,

(1)n|F|=|C|.(-1)^{n}|F|=|C|\,.
Proof.

We know from equation (1) that there exists a set of nn linearly independent columns in the matrix [α1α2]\begin{bmatrix}\alpha_{1}&\alpha_{2}\end{bmatrix}. Thus, by swapping columns between α1\alpha_{1} and α2\alpha_{2}, we can generate a new block matrix [α~1α~2]\begin{bmatrix}\tilde{\alpha}_{1}&\tilde{\alpha}_{2}\end{bmatrix} such that rank(α~2)=n\text{rank}(\tilde{\alpha}_{2})=n. Let k{0,1,,n}k\in\{0,1,...,n\} be the number of columns that we swap between α1\alpha_{1} and α2\alpha_{2} to generate these new matrices (different column-swapping configurations may require different values of kk). Now, for every pair of columns swapped, swap the rows with the same indices in FF to generate a new matrix F^\hat{F}. We immediately observe that

|F^|=(1)k|F|.|\hat{F}|=(-1)^{k}|F|. (36)

We label the blocks of F^\hat{F} as follows:

F^=[α^2A^α^1B^],\hat{F}=\begin{bmatrix}\hat{\alpha}_{2}&\hat{A}\\ \hat{\alpha}_{1}&\hat{B}\end{bmatrix}, (37)

and observe that

|α^2|=(1)nk|α~2|,|\hat{\alpha}_{2}|=(-1)^{n-k}|\tilde{\alpha}_{2}|, (38)

since the transformation from α2-\alpha_{2}^{*} to α^2\hat{\alpha}_{2} swaps away kk of the nn rows, and each swapped row carries away the negative sign introduced by the initial scaling of α2\alpha_{2}^{*}.

Note that matrix multiplication is defined entrywise in terms of inner products of rows with columns. Since we switch rows and columns of the same indices to obtain F^\hat{F} and [α~1α~2]\begin{bmatrix}\tilde{\alpha}_{1}&\tilde{\alpha}_{2}\end{bmatrix}, respectively, these inner products are preserved, and so the entries of the matrix product between [α1α2]\begin{bmatrix}\alpha_{1}&\alpha_{2}\end{bmatrix} and FF are unchanged.

The final object we need for this proof is the invertible block triangular matrix JJ defined as

J=[In0α~1α~2],J=\begin{bmatrix}I_{n}&0\\ \tilde{\alpha}_{1}&\tilde{\alpha}_{2}\end{bmatrix}, (39)

with determinant |J|=|α~2||J|=|\tilde{\alpha}_{2}|, which is nonzero since we specifically constructed α~2\tilde{\alpha}_{2} to consist of nn linearly independent columns. We can calculate the product JF^J\hat{F} as

JF^=[In0α~1α~2][α^2A^α^1B^]\displaystyle J\hat{F}=\begin{bmatrix}I_{n}&0\\ \tilde{\alpha}_{1}&\tilde{\alpha}_{2}\end{bmatrix}\begin{bmatrix}\hat{\alpha}_{2}&\hat{A}\\ \hat{\alpha}_{1}&\hat{B}\end{bmatrix} =[α^2A^α~1α^2+α~2α^1α~1A^+α~2B^]\displaystyle=\begin{bmatrix}\hat{\alpha}_{2}&\hat{A}\\ \tilde{\alpha}_{1}\hat{\alpha}_{2}+\tilde{\alpha}_{2}\hat{\alpha}_{1}&\tilde{\alpha}_{1}\hat{A}+\tilde{\alpha}_{2}\hat{B}\end{bmatrix}
=[α^2A^α1α2+α2α1α1A+α2B]\displaystyle=\begin{bmatrix}\hat{\alpha}_{2}&\hat{A}\\ -\alpha_{1}\alpha_{2}^{*}+\alpha_{2}\alpha_{1}^{*}&\alpha_{1}A+\alpha_{2}B\end{bmatrix}
=(1)[α^2A^0C].\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:3}}}}}}{{=}}}\,\,\,\begin{bmatrix}\hat{\alpha}_{2}&\hat{A}\\ 0&C\end{bmatrix}. (40)

So, on the one hand, we have that

|JF^|=|α^2||C|=(38)(1)nk|α~2||C|,|J\hat{F}|=|\hat{\alpha}_{2}||C|\,\,\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:58}}}}}}{{=}}}\,(-1)^{n-k}|\tilde{\alpha}_{2}||C|\,, (41)

while on the other hand, we have

|JF^|=|J||F^|=(36)|α~2||F|(1)k.|J\hat{F}|=|J||\hat{F}|\,\,\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:56}}}}}}{{=}}}\,\,|\tilde{\alpha}_{2}||F|(-1)^{k}\,. (42)

Thus, by equating equations (41) and (42), multiplying both sides by (1)nk(-1)^{n-k}, and dividing by |α~2||\tilde{\alpha}_{2}|, we obtain the equality we set out to prove. ∎
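As a sanity check, the identity of Lemma 2.4 can be verified numerically. The sketch below is our own illustration, not part of the paper's argument: it takes real boundary matrices alpha1, alpha2 (constructed so that condition (1) holds automatically) together with arbitrary complex blocks A and B; all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4

# Real boundary matrices satisfying condition (1): take alpha2 invertible
# and alpha1 = S @ inv(alpha2.T) with S symmetric, so that
# alpha1 @ alpha2^* = S is self-adjoint.
alpha2 = rng.standard_normal((n, n))
S = rng.standard_normal((n, n))
S = S + S.T
alpha1 = S @ np.linalg.inv(alpha2.T)

# Arbitrary complex blocks A and B.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

C = alpha1 @ A + alpha2 @ B
F = np.block([[-alpha2.T, A],
              [alpha1.T, B]])

lhs = (-1) ** n * np.linalg.det(F)  # (-1)^n |F|
rhs = np.linalg.det(C)              # |C|
print(np.allclose(lhs, rhs))  # True
```

The construction of alpha1 from a symmetric S is only one convenient way to realize condition (1); any pair satisfying (1) with real entries works equally well here.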

In particular, using C(λ)C(\lambda) as defined in (33) and F(x,λ)F({\vec{x}},\lambda) defined to be the fundamental matrix in equation (24), we can apply Lemma 2.4 to obtain the following relation:

(1)n|C(λ)|=|F(x,λ)|=EΓΩ(λ).(-1)^{n}|C(\lambda)|=|F({\vec{x}},\lambda)|=E^{\Omega}_{\Gamma}(\lambda). (43)

Hence, |C(λ)||C(\lambda)| is zero if and only if λσ(HΓΩ)\lambda\in\sigma(H^{\Omega}_{\Gamma}). Since we have supposed λρ(HΓΩ)\lambda\in\rho(H^{\Omega}_{\Gamma}), we have that C(λ)C(\lambda) is invertible and we can apply Cramer’s rule to solve equation (32). Of course, the right side of this equation is the bottom half of γΓyp-\gamma_{\Gamma}{\vec{y}}_{p}, which can be calculated to be

α1yp(0,λ)α2yp(0,λ)\displaystyle-\alpha_{1}{\vec{y}}_{p}(\vec{0},\lambda)-\alpha_{2}{\vec{y}}_{p}^{\prime}(\vec{0},\lambda)
=[i=1n|bτi,i¯b1,iaτi,i¯a1,i|0ivi(ti)zi,i(ti,λ)𝑑tiDi(λ)i=1n|bτi,i¯bn,iaτi,i¯an,i|0ivi(ti)zi,i(ti,λ)𝑑tiDi(λ)].\displaystyle=\begin{bmatrix}\sum_{i=1}^{n}\begin{vmatrix}-\overline{b_{\tau_{i},i}}&-b_{1,i}\\ \overline{a_{\tau_{i},i}}&a_{1,i}\end{vmatrix}\frac{\int_{0}^{\ell_{i}}v_{i}(t_{i})z_{i,i}(t_{i},\lambda)\,dt_{i}}{D_{i}(\lambda)}\\ \vdots\\ \sum_{i=1}^{n}\begin{vmatrix}-\overline{b_{\tau_{i},i}}&-b_{n,i}\\ \overline{a_{\tau_{i},i}}&a_{n,i}\end{vmatrix}\frac{\int_{0}^{\ell_{i}}v_{i}(t_{i})z_{i,i}(t_{i},\lambda)\,dt_{i}}{D_{i}(\lambda)}\end{bmatrix}\,. (44)

So, applying Cramer’s rule to solve equation (32), we obtain

cn+i=|Ci(λ,v)||C(λ)|,\displaystyle c_{n+i}=\frac{|C_{i}(\lambda,{\vec{v}})|}{|C(\lambda)|}\,, (45)

where CiC_{i} is defined as CC with the iith column replaced by the right side of equation (44). It is this replaced column that introduces the v{\vec{v}} dependence, which will be vital later, so we make special note of it.

Thus, the resolvent is given by:

(RλΓv)(x)=(i=1ncn+i(λ,v)zi(x,λ))+yp(x,λ,v).(R_{\lambda}^{\Gamma}{\vec{v}})({\vec{x}})=\left(\sum_{i=1}^{n}c_{n+i}(\lambda,{\vec{v}}){\vec{z}}_{i}({\vec{x}},\lambda)\right)+{\vec{y}}_{p}({\vec{x}},\lambda,{\vec{v}})\,. (46)

As previously mentioned, the choice of the nn different DiD_{i} matrices depends on λ\lambda locally. Nevertheless, our construction gives the resolvent globally, for all λρ(HΓΩ)\lambda\in\rho(H^{\Omega}_{\Gamma}): the constructed RλΓvR_{\lambda}^{\Gamma}{\vec{v}} is a meromorphic function which agrees with the resolvent on some disc over which our choice of DiD_{i} matrices is valid, so by unique analytic continuation (suitably extended from holomorphic to meromorphic functions) it equals the resolvent globally.

3 From Resolvents to Maps

Let λρ(HΓΩ)\lambda\in\rho(H^{\Omega}_{\Gamma}), and fix f2n{\vec{f}}\in{\mathbb{C}}^{2n}. If uΓH2(Ω){\vec{u}}_{\Gamma}\in H^{2}(\Omega) solves

{(HΩλ)uΓ=0,γΓu=f,\begin{cases}(H^{\Omega}-\lambda){\vec{u}}_{\Gamma}=\vec{0}\,,\\ \gamma_{\Gamma}{\vec{u}}={\vec{f}}\,,\end{cases} (47)

then, as seen in [5], one can use integration by parts to show that

(uΓ,v)\displaystyle({\vec{u}}_{\Gamma},{\vec{v}}) =(uΓ,(HΓΩλ¯)Rλ¯Γv)\displaystyle=({\vec{u}}_{\Gamma},(H^{\Omega}_{\Gamma}-\bar{\lambda})R_{\bar{\lambda}}^{\Gamma}{\vec{v}})
=((HΓΩλ¯)uΓ,Rλ¯Γv)+γNuΓ,γDRλ¯ΓvγDuΓ,γNRλ¯Γv\displaystyle=((H^{\Omega}_{\Gamma}-\bar{\lambda}){\vec{u}}_{\Gamma},R_{\bar{\lambda}}^{\Gamma}{\vec{v}})+\langle\gamma_{N}{\vec{u}}_{\Gamma},\gamma_{D}R_{\bar{\lambda}}^{\Gamma}{\vec{v}}\rangle-\langle\gamma_{D}{\vec{u}}_{\Gamma},\gamma_{N}R_{\bar{\lambda}}^{\Gamma}{\vec{v}}\rangle
=γNuΓ,γDRλ¯ΓvγDuΓ,γNRλ¯Γv\displaystyle=\langle\gamma_{N}{\vec{u}}_{\Gamma},\gamma_{D}R_{\bar{\lambda}}^{\Gamma}{\vec{v}}\rangle-\langle\gamma_{D}{\vec{u}}_{\Gamma},\gamma_{N}R_{\bar{\lambda}}^{\Gamma}{\vec{v}}\rangle\,
=γNuΓ,γDRλΓv¯γDuΓ,γNRλΓv¯,\displaystyle=\langle\gamma_{N}{\vec{u}}_{\Gamma},\gamma_{D}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle-\langle\gamma_{D}{\vec{u}}_{\Gamma},\gamma_{N}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle\,, (48)

where (,)(\cdot,\cdot) is the L2L^{2} inner product, ,{\langle}\cdot,\cdot\rangle is the 2n{\mathbb{C}}^{2n} inner product, and vL2(Ω){\vec{v}}\in L^{2}(\Omega). Both inner products are linear in their first component and antilinear in their second. We wish to incorporate f{\vec{f}} into this formula. Taking inspiration from [3], we introduce several objects. They can be constructed from either the α\alpha or the β\beta matrices, and we denote both constructions at once in terms of the block diagonal matrices:

δi=[βi00αi]i{1,2}.\delta_{i}=\begin{bmatrix}\beta_{i}&0\\ 0&\alpha_{i}\end{bmatrix}\hskip 18.06749pti\in\{1,2\}\,. (49)

First, we define a function η\eta by:

η(k)=(δ1+ikδ2)1(δ1ikδ2).\eta(k)=-(\delta_{1}+ik\delta_{2})^{-1}(\delta_{1}-ik\delta_{2})\,. (50)

We use this function to define the unitary matrix 𝒰\mathcal{U}:

𝒰=η(1)=(δ1iδ2)1(δ1+iδ2).\mathcal{U}=\eta(-1)=-(\delta_{1}-i\delta_{2})^{-1}(\delta_{1}+i\delta_{2})\,. (51)
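For boundary matrices satisfying condition (1), η(k) is unitary for every nonzero real k, and in particular so is 𝒰 = η(−1). A minimal numerical sketch of this fact (our own illustration, with real delta1, delta2 generated so that delta1 @ delta2^* is self-adjoint; the names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 6  # size of the delta blocks (2n in the text)

# Real delta1, delta2 with delta1 @ delta2^* self-adjoint and delta2 invertible.
delta2 = rng.standard_normal((m, m))
S = rng.standard_normal((m, m))
S = S + S.T
delta1 = S @ np.linalg.inv(delta2.T)

def eta(k):
    # eta(k) = -(delta1 + i k delta2)^{-1} (delta1 - i k delta2), equation (50)
    return -np.linalg.solve(delta1 + 1j * k * delta2, delta1 - 1j * k * delta2)

U = eta(-1)  # the matrix U of equation (51)
I = np.eye(m)
for k in (-1.0, 0.5, 2.0):
    Uk = eta(k)
    assert np.allclose(Uk @ Uk.conj().T, I)  # eta(k) is unitary for real k != 0
print(np.allclose(U @ U.conj().T, I))  # True
```

Unitarity follows from the computation (δ1 ± ikδ2)(δ1 ∓ ikδ2)* = δ1δ1* + k²δ2δ2*, which uses the self-adjointness of δ1δ2*.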

We will now modify the following basic equation:

δ1[uΓ(,λ)uΓ(0,λ)]+δ2[uΓ(,λ)uΓ(0,λ)]=f.\delta_{1}\begin{bmatrix}{\vec{u}}_{\Gamma}(\vec{\ell},\lambda)\\ {\vec{u}}_{\Gamma}(\vec{0},\lambda)\end{bmatrix}+\delta_{2}\begin{bmatrix}{\vec{u}}_{\Gamma}^{\prime}(\vec{\ell},\lambda)\\ {\vec{u}}_{\Gamma}^{\prime}(\vec{0},\lambda)\end{bmatrix}={\vec{f}}\,. (52)

Multiplying equation (52) by 2i(δ1iδ2)1-2i(\delta_{1}-i\delta_{2})^{-1} from the left gives us:

i(𝒰I2n)[uΓ(,λ)uΓ(0,λ)]+(𝒰+I2n)[uΓ(,λ)uΓ(0,λ)]=2i(δ1iδ2)1f.i(\mathcal{U}-I_{2n})\begin{bmatrix}{\vec{u}}_{\Gamma}(\vec{\ell},\lambda)\\ {\vec{u}}_{\Gamma}(\vec{0},\lambda)\end{bmatrix}+(\mathcal{U}+I_{2n})\begin{bmatrix}{\vec{u}}_{\Gamma}^{\prime}(\vec{\ell},\lambda)\\ {\vec{u}}_{\Gamma}^{\prime}(\vec{0},\lambda)\end{bmatrix}=-2i(\delta_{1}-i\delta_{2})^{-1}{\vec{f}}\,. (53)

We can get more useful identities from this one by introducing the Dirichlet, Neumann, and Robin projections.

Definition 3.1.

The Dirichlet Projection PDP_{D} is the projection onto the kernel of δ2\delta_{2}. It can alternatively be defined as the projection onto the eigenspace of 𝒰\mathcal{U} corresponding to the eigenvalue 1-1.

The Neumann Projection PNP_{N} is the projection onto the kernel of δ1\delta_{1}. It can alternatively be defined as the projection onto the eigenspace of 𝒰\mathcal{U} corresponding to the eigenvalue 11.

The Robin Projection PRP_{R} is defined in terms of the other two by the equation

PD+PN+PR=I2n.P_{D}+P_{N}+P_{R}=I_{2n}\,. (54)

It is also the projection onto the direct sum of the eigenspaces of 𝒰\mathcal{U} corresponding to all eigenvalues other than ±1\pm 1.

Remark 3.2.

All three projectors commute with 𝒰\mathcal{U}, (𝒰+I2n)(\mathcal{U}+I_{2n}), and (𝒰I2n)(\mathcal{U}-I_{2n}). We also get from the definitions that:

(𝒰+I2n)PD\displaystyle(\mathcal{U}+I_{2n})P_{D} =02n,\displaystyle=0_{2n}\,,
(𝒰I2n)PD\displaystyle(\mathcal{U}-I_{2n})P_{D} =2PD,\displaystyle=-2P_{D}\,,
(𝒰+I2n)PN\displaystyle(\mathcal{U}+I_{2n})P_{N} =2PN,\displaystyle=2P_{N}\,,
(𝒰I2n)PN\displaystyle(\mathcal{U}-I_{2n})P_{N} =02n.\displaystyle=0_{2n}\,. (55)

One more important object is Λ:ranPRranPR\Lambda:\text{ran}\,P_{R}\rightarrow\text{ran}\,P_{R}. It is an invertible self-adjoint operator constructed as follows:

Λ:=i(𝒰+I2n)R1(𝒰I2n),\Lambda:=-i(\mathcal{U}+I_{2n})^{-1}_{R}(\mathcal{U}-I_{2n})\,, (56)

where (𝒰+I2n)R(\mathcal{U}+I_{2n})_{R} is the restriction of (𝒰+I2n)(\mathcal{U}+I_{2n}) to the range of PRP_{R} (this restriction has domain and range ranPR{\text{ran}\,}P_{R}, and is invertible). Since δ1\delta_{1} and δ2\delta_{2} are both block diagonal, these new objects are all block diagonal as well.

With the introduction of Λ\Lambda, we can now list one more set of defining properties of these objects. Since RλΓR^{\Gamma}_{\lambda} satisfies Γ\Gamma boundary conditions, we have that:

PD[(RλΓv)(,λ)(RλΓv)(0,λ)]\displaystyle P_{D}\begin{bmatrix}(R^{\Gamma}_{\lambda}{\vec{v}})(\vec{\ell},\lambda)\\ (R^{\Gamma}_{\lambda}{\vec{v}})(\vec{0},\lambda)\end{bmatrix} =0,\displaystyle=\vec{0}\,,
PN[(RλΓv)(,λ)(RλΓv)(0,λ)]\displaystyle P_{N}\begin{bmatrix}(R^{\Gamma}_{\lambda}{\vec{v}})^{\prime}(\vec{\ell},\lambda)\\ (R^{\Gamma}_{\lambda}{\vec{v}})^{\prime}(\vec{0},\lambda)\end{bmatrix} =0,\displaystyle=\vec{0}\,,
PR[(RλΓv)(,λ)(RλΓv)(0,λ)]\displaystyle P_{R}\begin{bmatrix}(R^{\Gamma}_{\lambda}{\vec{v}})^{\prime}(\vec{\ell},\lambda)\\ (R^{\Gamma}_{\lambda}{\vec{v}})^{\prime}(\vec{0},\lambda)\end{bmatrix} =ΛPR[(RλΓv)(,λ)(RλΓv)(0,λ)].\displaystyle=\Lambda P_{R}\begin{bmatrix}(R^{\Gamma}_{\lambda}{\vec{v}})(\vec{\ell},\lambda)\\ (R^{\Gamma}_{\lambda}{\vec{v}})(\vec{0},\lambda)\end{bmatrix}\,. (57)
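These projections and Λ are easy to inspect in a decoupled model case. The sketch below is our own toy example, not from the paper: it takes delta1 = diag(1, 0, c) and delta2 = diag(0, 1, 1), so the three coordinates carry a Dirichlet, a Neumann, and a Robin condition c·u + u′ = 0, respectively. It checks the eigenvalue characterizations of Definition 3.1, the identities (55), and that Λ reduces to −c on the Robin coordinate, consistent with the last line of (57).

```python
import numpy as np

c = 2.0  # Robin coefficient (hypothetical example value)
delta1 = np.diag([1.0, 0.0, c])
delta2 = np.diag([0.0, 1.0, 1.0])

# U = -(delta1 - i delta2)^{-1} (delta1 + i delta2), equation (51)
U = -np.linalg.solve(delta1 - 1j * delta2, delta1 + 1j * delta2)
I = np.eye(3)

# Dirichlet / Neumann / Robin projections onto coordinates 0, 1, 2.
PD = np.diag([1.0, 0.0, 0.0])  # ker(delta2) = eigenspace of U for -1
PN = np.diag([0.0, 1.0, 0.0])  # ker(delta1) = eigenspace of U for +1
PR = I - PD - PN               # the rest, equation (54)

assert np.allclose(U @ PD, -PD)            # so (U + I)PD = 0, (U - I)PD = -2 PD
assert np.allclose(U @ PN, PN)             # so (U - I)PN = 0, (U + I)PN = 2 PN
assert np.allclose((U + I) @ PD, 0 * I)
assert np.allclose((U - I) @ PN, 0 * I)

# Lambda = -i (U + I)_R^{-1} (U - I) on the Robin coordinate, here a scalar.
u3 = U[2, 2]
lam = -1j * (u3 - 1) / (u3 + 1)
print(np.isclose(lam, -c))  # True: the Robin condition c*u + u' = 0 gives u' = -c*u
```

In this diagonal case Λ is the real number −c, which is consistent with Λ being self-adjoint and with the relation PR u′ = Λ PR u in (57).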

Here our derivations diverge from those in [3], where equations (58), (59), and (60) are found with f=0{\vec{f}}=\vec{0}. Multiplying equation (53) by PDP_{D} from the left, we obtain:

PD[uΓ(,λ)uΓ(0,λ)]=PD(δ1iδ2)1f.P_{D}\begin{bmatrix}{\vec{u}}_{\Gamma}(\vec{\ell},\lambda)\\ {\vec{u}}_{\Gamma}(\vec{0},\lambda)\end{bmatrix}=P_{D}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}}\,. (58)

Multiplying equation (53) by PNP_{N} from the left yields:

PN[uΓ(,λ)uΓ(0,λ)]=iPN(δ1iδ2)1f.P_{N}\begin{bmatrix}{\vec{u}}_{\Gamma}^{\prime}(\vec{\ell},\lambda)\\ {\vec{u}}_{\Gamma}^{\prime}(\vec{0},\lambda)\end{bmatrix}=-iP_{N}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}}\,. (59)

Multiplying equation (53) by PRP_{R} from the left gives:

i(𝒰I2n)PR[uΓ(,λ)uΓ(0,λ)]+(𝒰+I2n)PR[uΓ(,λ)uΓ(0,λ)]=2iPR(δ1iδ2)1f\displaystyle i({\mathcal{U}}-I_{2n})P_{R}\begin{bmatrix}{\vec{u}}_{\Gamma}(\vec{\ell},\lambda)\\ {\vec{u}}_{\Gamma}(\vec{0},\lambda)\end{bmatrix}+({\mathcal{U}}+I_{2n})P_{R}\begin{bmatrix}{\vec{u}}_{\Gamma}^{\prime}(\vec{\ell},\lambda)\\ {\vec{u}}_{\Gamma}^{\prime}(\vec{0},\lambda)\end{bmatrix}=-2iP_{R}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}}
(𝒰+I2n)PR[uΓ(,λ)uΓ(0,λ)]=i(𝒰I2n)PR[uΓ(,λ)uΓ(0,λ)]\displaystyle\Rightarrow({\mathcal{U}}+I_{2n})P_{R}\begin{bmatrix}{\vec{u}}_{\Gamma}^{\prime}(\vec{\ell},\lambda)\\ {\vec{u}}_{\Gamma}^{\prime}(\vec{0},\lambda)\end{bmatrix}=-i({\mathcal{U}}-I_{2n})P_{R}\begin{bmatrix}{\vec{u}}_{\Gamma}(\vec{\ell},\lambda)\\ {\vec{u}}_{\Gamma}(\vec{0},\lambda)\end{bmatrix}
2iPR(δ1iδ2)1f\displaystyle\hskip 130.08621pt-2iP_{R}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}}
PR[uΓ(,λ)uΓ(0,λ)]=i(𝒰+I2n)R1(𝒰I2n)PR[uΓ(,λ)uΓ(0,λ)]\displaystyle\Rightarrow P_{R}\begin{bmatrix}{\vec{u}}_{\Gamma}^{\prime}(\vec{\ell},\lambda)\\ {\vec{u}}_{\Gamma}^{\prime}(\vec{0},\lambda)\end{bmatrix}=-i({\mathcal{U}}+I_{2n})^{-1}_{R}({\mathcal{U}}-I_{2n})P_{R}\begin{bmatrix}{\vec{u}}_{\Gamma}(\vec{\ell},\lambda)\\ {\vec{u}}_{\Gamma}(\vec{0},\lambda)\end{bmatrix}
2i(𝒰+I2n)R1PR(δ1iδ2)1f\displaystyle\hskip 86.72377pt-2i({\mathcal{U}}+I_{2n})_{R}^{-1}P_{R}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}}
PR[uΓ(,λ)uΓ(0,λ)]=ΛPR[uΓ(,λ)uΓ(0,λ)]2i(𝒰+I2n)R1PR(δ1iδ2)1f.\displaystyle\Rightarrow P_{R}\begin{bmatrix}{\vec{u}}_{\Gamma}^{\prime}(\vec{\ell},\lambda)\\ {\vec{u}}_{\Gamma}^{\prime}(\vec{0},\lambda)\end{bmatrix}=\Lambda P_{R}\begin{bmatrix}{\vec{u}}_{\Gamma}(\vec{\ell},\lambda)\\ {\vec{u}}_{\Gamma}(\vec{0},\lambda)\end{bmatrix}-2i({\mathcal{U}}+I_{2n})_{R}^{-1}P_{R}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}}\,. (60)

We can rewrite the previous three equations in terms of the Dirichlet and Neumann traces if we account for the normal vector scaling in Neumann traces. So, given some 2n×2n2n\times 2n matrix MM, we introduce the new matrix M~\tilde{M} defined as

M~=M[In00In].\tilde{M}=M\begin{bmatrix}I_{n}&0\\ 0&-I_{n}\end{bmatrix}\,. (61)

Then we may rewrite equations (58), (59), (60), and the last line of (57) as

PDγDuΓ\displaystyle P_{D}\gamma_{D}{\vec{u}}_{\Gamma} =PD(δ1iδ2)1f,\displaystyle=P_{D}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}}\,,
PNγNuΓ\displaystyle P_{N}\gamma_{N}{\vec{u}}_{\Gamma} =iP~N(δ1iδ2)1f,\displaystyle=-i\tilde{P}_{N}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}}\,,
PRγNuΓ\displaystyle P_{R}\gamma_{N}{\vec{u}}_{\Gamma} =ΛP~RγDuΓ2i(𝒰+I2n)R1P~R(δ1iδ2)1f,\displaystyle=\Lambda\tilde{P}_{R}\gamma_{D}{\vec{u}}_{\Gamma}-2i({\mathcal{U}}+I_{2n})^{-1}_{R}\tilde{P}_{R}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}}\,,
PRγNRλΓv\displaystyle P_{R}\gamma_{N}R_{\lambda}^{\Gamma}{\vec{v}} =ΛP~RγDRλΓv.\displaystyle=\Lambda\tilde{P}_{R}\gamma_{D}R_{\lambda}^{\Gamma}{\vec{v}}\,. (62)

Note that, due to the block diagonal nature of all of the new matrices involved, we can safely rewrite the domain and range of Λ\Lambda as ranP~R\text{ran}\,\tilde{P}_{R}. Using all this machinery, we can alter our inner product formula.

Theorem 3.3.

Let λρ(HΓΩ)\lambda\in\rho(H^{\Omega}_{\Gamma}), f2n{\vec{f}}\in{\mathbb{C}}^{2n}, and vL2(Ω){\vec{v}}\in L^{2}(\Omega). Let QQ be the image of ranP~R{\text{ran}\,}\tilde{P}_{R} under δ2\delta_{2}. If uΓH2(Ω){\vec{u}}_{\Gamma}\in H^{2}(\Omega) solves equation (47), then

(uΓ,v)=\displaystyle({\vec{u}}_{\Gamma},{\vec{v}})= iP~N(δ1iδ2)1f,γDRλΓv¯\displaystyle\langle-i\tilde{P}_{N}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}},\gamma_{D}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle
+(δ2)Q1(δ1iδ2)RP~R(δ1iδ2)1f,γDRλΓv¯\displaystyle+\langle(\delta_{2})_{Q}^{-1}(\delta_{1}-i\delta_{2})_{R}\tilde{P}_{R}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}},\gamma_{D}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle
PD(δ1iδ2)1f,γNRλΓv¯.\displaystyle-\langle P_{D}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}},\gamma_{N}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle\,. (63)
Proof.

By utilizing our new self-adjoint projections, we find that:

(uΓ,v)=(3)\displaystyle({\vec{u}}_{\Gamma},{\vec{v}})\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:ibp}}}}}}{{=}}}\,\, γNuΓ,γDRλΓv¯γDuΓ,γNRλΓv¯\displaystyle\langle\gamma_{N}{\vec{u}}_{\Gamma},\gamma_{D}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle-\langle\gamma_{D}{\vec{u}}_{\Gamma},\gamma_{N}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle
=(54)\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:robin_proj}}}}}}{{=}}}\,\, γNuΓ,(PD+PN+PR)γDRλΓv¯\displaystyle\langle\gamma_{N}{\vec{u}}_{\Gamma},(P_{D}+P_{N}+P_{R})\gamma_{D}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle
γDuΓ,(PD+PN+PR)γNRλΓv¯\displaystyle-\langle\gamma_{D}{\vec{u}}_{\Gamma},(P_{D}+P_{N}+P_{R})\gamma_{N}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle
=(3)\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:projection_equations}}}}}}{{=}}}\,\, PNγNuΓ,γDRλΓv¯+PRγNuΓ,γDRλΓv¯\displaystyle\langle P_{N}\gamma_{N}{\vec{u}}_{\Gamma},\gamma_{D}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle+\langle P_{R}\gamma_{N}{\vec{u}}_{\Gamma},\gamma_{D}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle
PDγDuΓ,γNRλΓv¯γDuΓ,PRγNRλΓv¯\displaystyle-\langle P_{D}\gamma_{D}{\vec{u}}_{\Gamma},\gamma_{N}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle-\langle\gamma_{D}{\vec{u}}_{\Gamma},P_{R}\gamma_{N}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle
=(3)\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:37}}}}}}{{=}}}\,\, iP~N(δ1iδ2)1f,γDRλΓv¯\displaystyle\langle-i\tilde{P}_{N}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}},\gamma_{D}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle
+ΛP~RγDuΓ2i(𝒰+I2n)R1P~R(δ1iδ2)1f,γDRλΓv¯\displaystyle+\langle\Lambda\tilde{P}_{R}\gamma_{D}{\vec{u}}_{\Gamma}-2i({\mathcal{U}}+I_{2n})^{-1}_{R}\tilde{P}_{R}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}},\gamma_{D}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle
PD(δ1iδ2)1f,γNRλΓv¯γDuΓ,ΛP~RγDRλΓv¯\displaystyle-\langle P_{D}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}},\gamma_{N}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle-\langle\gamma_{D}{\vec{u}}_{\Gamma},\Lambda\tilde{P}_{R}\gamma_{D}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle
=\displaystyle= iP~N(δ1iδ2)1f,γDRλΓv¯\displaystyle\langle-i\tilde{P}_{N}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}},\gamma_{D}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle
+P~RΛP~RγDuΓ2i(𝒰+I2n)R1P~R(δ1iδ2)1f,γDRλΓv¯\displaystyle+\langle\tilde{P}_{R}\Lambda\tilde{P}_{R}\gamma_{D}{\vec{u}}_{\Gamma}-2i({\mathcal{U}}+I_{2n})^{-1}_{R}\tilde{P}_{R}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}},\gamma_{D}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle
PD(δ1iδ2)1f,γNRλΓv¯γDuΓ,P~RΛP~RγDRλΓv¯\displaystyle-\langle P_{D}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}},\gamma_{N}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle-\langle\gamma_{D}{\vec{u}}_{\Gamma},\tilde{P}_{R}\Lambda\tilde{P}_{R}\gamma_{D}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle
=\displaystyle= iP~N(δ1iδ2)1f,γDRλΓv¯\displaystyle\langle-i\tilde{P}_{N}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}},\gamma_{D}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle
+2i(𝒰+I2n)R1P~R(δ1iδ2)1f,γDRλΓv¯\displaystyle+\langle-2i({\mathcal{U}}+I_{2n})^{-1}_{R}\tilde{P}_{R}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}},\gamma_{D}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle
PD(δ1iδ2)1f,γNRλΓv¯\displaystyle-\langle P_{D}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}},\gamma_{N}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle
+[ΛΛ]P~RγDuΓ,P~RγDRλΓv¯.\displaystyle+\langle[\Lambda-\Lambda^{*}]\tilde{P}_{R}\gamma_{D}{\vec{u}}_{\Gamma},\tilde{P}_{R}\gamma_{D}\overline{R_{\lambda}^{\Gamma}{\vec{v}}}\rangle\,. (64)

As the Cayley transform of the unitary matrix 𝒰\mathcal{U}, Λ\Lambda is self-adjoint as a matrix from ranP~R{\text{ran}\,}\tilde{P}_{R} to ranP~R{\text{ran}\,}\tilde{P}_{R}. Thus, the last term in the last equality of equation (64) is 0.

As an additional simplification, we wish to eliminate 𝒰{\mathcal{U}} from the term 2i(𝒰+I2n)R1P~R(δ1iδ2)1f-2i({\mathcal{U}}+I_{2n})^{-1}_{R}\tilde{P}_{R}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}}. We observe that 𝒰+I2n{\mathcal{U}}+I_{2n} can be rewritten as

𝒰+I2n=2i(δ1iδ2)1δ2.{\mathcal{U}}+I_{2n}=-2i(\delta_{1}-i\delta_{2})^{-1}\delta_{2}\,.

Since (𝒰+I2n)R:ranP~RranP~R({\mathcal{U}}+I_{2n})_{R}:{\text{ran}\,}\tilde{P}_{R}\rightarrow{\text{ran}\,}\tilde{P}_{R} is invertible, if the image of ranP~R{\text{ran}\,}\tilde{P}_{R} under δ2\delta_{2} is some Q2nQ\subseteq{\mathbb{C}}^{2n}, then the image of QQ under (δ1iδ2)1(\delta_{1}-i\delta_{2})^{-1} must be ranP~R{\text{ran}\,}\tilde{P}_{R}. Thus, (δ1iδ2)R(\delta_{1}-i\delta_{2})_{R}, the restriction of (δ1iδ2)(\delta_{1}-i\delta_{2}) to ranP~R{\text{ran}\,}\tilde{P}_{R}, maps ranP~R{\text{ran}\,}\tilde{P}_{R} to QQ, and (δ2)Q1(\delta_{2})_{Q}^{-1} is defined as a map from QQ to ranP~R{\text{ran}\,}\tilde{P}_{R}, so we see that

2i(𝒰+I2n)R1P~R(δ1iδ2)1f=(δ2)Q1(δ1iδ2)RP~R(δ1iδ2)1f.-2i({\mathcal{U}}+I_{2n})^{-1}_{R}\tilde{P}_{R}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}}=(\delta_{2})_{Q}^{-1}(\delta_{1}-i\delta_{2})_{R}\tilde{P}_{R}(\delta_{1}-i\delta_{2})^{-1}{\vec{f}}\,.

With these simplifications to equation (64), we obtain the desired formula. ∎
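The algebraic identity 𝒰 + I_{2n} = −2i(δ1 − iδ2)^{-1}δ2 used in the proof can be checked directly. A minimal sketch with real delta1, delta2 chosen so that delta1 @ delta2^* is self-adjoint (all names ours):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 6

# Real delta1, delta2 with delta1 @ delta2^* self-adjoint, delta2 invertible.
delta2 = rng.standard_normal((m, m))
S = rng.standard_normal((m, m))
delta1 = (S + S.T) @ np.linalg.inv(delta2.T)

# U = -(delta1 - i delta2)^{-1} (delta1 + i delta2), equation (51)
U = -np.linalg.solve(delta1 - 1j * delta2, delta1 + 1j * delta2)

lhs = U + np.eye(m)
rhs = -2j * np.linalg.solve(delta1 - 1j * delta2, delta2)  # -2i (d1 - i d2)^{-1} d2
print(np.allclose(lhs, rhs))  # True
```

The identity itself is pure algebra, valid whenever δ1 − iδ2 is invertible; the self-adjointness condition guarantees that invertibility here.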

Theorem 3.3 can be used to construct uΓ{\vec{u}}_{\Gamma} with an arbitrary trace value f{\vec{f}}, by using RλΓvR_{\lambda}^{\Gamma}{\vec{v}}. In particular, by setting v{\vec{v}} to constant functions taking the form of the various standard basis vectors of n{\mathbb{R}}^{n}, we can solve for the components of uΓ,i{\vec{u}}_{\Gamma,i} one by one. Part of this process involves the integrands produced by two different inner products. Note that the various Ci(λ,ej)C_{i}(\lambda,\vec{e}_{j}) include a column (from equation (44)) in which every entry carries a definite integral of zj,jDj\frac{z_{j,j}}{D_{j}} from 0 to j\ell_{j} with respect to the placeholder tjt_{j}. Thus, taking the determinant and expanding down this column, we see that every term in the sum defining |Ci(λ,ej)||C_{i}(\lambda,\vec{e}_{j})| still includes a factor carrying this integral. By absorbing all constants and combining integral terms, we can write the whole determinant as one definite integral from 0 to j\ell_{j}. If we define 𝒞i(λ,ej)\mathcal{C}_{i}(\lambda,\vec{e}_{j}) as Ci(λ,ej)C_{i}(\lambda,\vec{e}_{j}) without these integrals (that is, leaving behind a column of only the scaling constants), then we easily see that

|Ci(λ,ej)|=0j|𝒞i(λ,ej)|zj,j(tj,λ)Dj(λ)𝑑tj.|C_{i}(\lambda,\vec{e}_{j})|=\int_{0}^{\ell_{j}}|\mathcal{C}_{i}(\lambda,\vec{e}_{j})|\frac{z_{j,j}(t_{j},\lambda)}{D_{j}(\lambda)}\,dt_{j}. (65)
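Equation (65) is an instance of the multilinearity of the determinant in its columns: a column given by a definite integral can be pulled out of the determinant. A small self-contained illustration with a generic matrix and integrand (all names are ours; g stands in for the integrand z_{j,j}/D_j):

```python
import numpy as np

rng = np.random.default_rng(5)
n, j = 3, 1                      # matrix size and index of the replaced column
M = rng.standard_normal((n, n))  # stands in for C_i without the integral column
w = rng.standard_normal(n)       # fixed column direction
ts = np.linspace(0.0, 1.0, 2001)
g = np.cos(3.0 * ts)             # stands in for z_{j,j}(t, lambda) / D_j(lambda)

def itrapz(y, x):
    # simple trapezoidal quadrature for a sampled scalar function
    y = np.asarray(y, dtype=float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def det_with_col(col):
    A = M.copy()
    A[:, j] = col
    return np.linalg.det(A)

# Determinant whose j-th column is the integral of g(t) * w ...
lhs = det_with_col(itrapz(g, ts) * w)
# ... equals the integral of the determinants with column g(t) * w.
rhs = itrapz([det_with_col(g_t * w) for g_t in g], ts)
print(np.allclose(lhs, rhs))  # True
```

Since both sides use the same quadrature rule, they agree up to floating-point roundoff, mirroring how the integral factors out of each term of the cofactor expansion in (65).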

Before we proceed, we define the following adjustment vectors:

Li\displaystyle\vec{L}_{i} =iP~N(δ1iδ2)1ei,\displaystyle=-i\tilde{P}_{N}(\delta_{1}-i\delta_{2})^{-1}\vec{e}_{i}\,,
Mi\displaystyle\vec{M}_{i} =(δ2)Q1(δ1iδ2)RP~R(δ1iδ2)1ei,\displaystyle=(\delta_{2})_{Q}^{-1}(\delta_{1}-i\delta_{2})_{R}\tilde{P}_{R}(\delta_{1}-i\delta_{2})^{-1}\vec{e}_{i}\,,
Ni\displaystyle\vec{N}_{i} =PD(δ1iδ2)1ei.\displaystyle=-P_{D}(\delta_{1}-i\delta_{2})^{-1}\vec{e}_{i}\,. (66)
Theorem 3.4.

Let λρ(HΓΩ)\lambda\in\rho(H^{\Omega}_{\Gamma}), and let uΓ,iH2(Ω){\vec{u}}_{\Gamma,i}\in H^{2}(\Omega) be the unique solution to

{(HΩλ)uΓ,i=0γΓuΓ,i=ei.\begin{cases}(H^{\Omega}-\lambda){\vec{u}}_{\Gamma,i}=\vec{0}\\ \gamma_{\Gamma}{\vec{u}}_{\Gamma,i}=\vec{e}_{i}.\end{cases}

Then, the jjth component (uΓ,i)j({\vec{u}}_{\Gamma,i})_{j} can be calculated using the formula:

(uΓ,i)j(xj,λ)=\displaystyle({\vec{u}}_{\Gamma,i})_{j}(x_{j},\lambda)= Li+Mi,[zj,j(xj,λ)Dj(λ)|𝒞1(λ,ej)||C(λ)|z1,1(1,λ)zj,j(j,λ)(zj,j(xj,λ)|𝒞j(λ,ej)||C(λ)|yτj,j(xj,λ))|C(λ)|Dj(λ)zj,j(xj,λ)Dj(λ)|𝒞n(λ,ej)||C(λ)|zn,n(n,λ)zj,j(xj,λ)Dj(λ)|𝒞1(λ,ej)||C(λ)|z1,1(0,λ)zj,j(xj,λ)(zj,j(0,λ)|𝒞j(λ,ej)||C(λ)|yτj,j(0,λ))|C(λ)|Dj(λ)zj,j(xj,λ)Dj(λ)|𝒞n(λ,ej)||C(λ)|zn,n(0,λ)]\displaystyle{\langle}\vec{L}_{i}+\vec{M}_{i},\begin{bmatrix}\frac{z_{j,j}(x_{j},\lambda)}{D_{j}(\lambda)}\frac{|\mathcal{C}_{1}(\lambda,\vec{e}_{j})|}{|C(\lambda)|}z_{1,1}(\ell_{1},\lambda)\\ \vdots\\ \frac{z_{j,j}(\ell_{j},\lambda)(z_{j,j}(x_{j},\lambda)|\mathcal{C}_{j}(\lambda,\vec{e}_{j})|-|C(\lambda)|y_{\tau_{j},j}(x_{j},\lambda))}{|C(\lambda)|D_{j}(\lambda)}\\ \vdots\\ \frac{z_{j,j}(x_{j},\lambda)}{D_{j}(\lambda)}\frac{|\mathcal{C}_{n}(\lambda,\vec{e}_{j})|}{|C(\lambda)|}z_{n,n}(\ell_{n},\lambda)\\ \frac{z_{j,j}(x_{j},\lambda)}{D_{j}(\lambda)}\frac{|\mathcal{C}_{1}(\lambda,\vec{e}_{j})|}{|C(\lambda)|}z_{1,1}(0,\lambda)\\ \vdots\\ \frac{z_{j,j}(x_{j},\lambda)(z_{j,j}(0,\lambda)|\mathcal{C}_{j}(\lambda,\vec{e}_{j})|-|C(\lambda)|y_{\tau_{j},j}(0,\lambda))}{|C(\lambda)|D_{j}(\lambda)}\\ \vdots\\ \frac{z_{j,j}(x_{j},\lambda)}{D_{j}(\lambda)}\frac{|\mathcal{C}_{n}(\lambda,\vec{e}_{j})|}{|C(\lambda)|}z_{n,n}(0,\lambda)\end{bmatrix}\rangle
+Ni,[zj,j(xj,λ)Dj(λ)|𝒞1(λ,ej)||C(λ)|z1,1(1,λ)zj,j(j,λ)(zj,j(xj,λ)|𝒞j(λ,ej)||C(λ)|yτj,j(xj,λ))|C(λ)|Dj(λ)zj,j(xj,λ)Dj(λ)|𝒞n(λ,ej)||C(λ)|zn,n(n,λ)zj,j(xj,λ)Dj(λ)|𝒞1(λ,ej)||C(λ)|z1,1(0,λ)zj,j(xj,λ)(zj,j(0,λ)|𝒞j(λ,ej)||C(λ)|yτj,j(0,λ))|C(λ)|Dj(λ)zj,j(xj,λ)Dj(λ)|𝒞n(λ,ej)||C(λ)|zn,n(0,λ)],\displaystyle+{\langle}\vec{N}_{i},\begin{bmatrix}\frac{z_{j,j}(x_{j},\lambda)}{D_{j}(\lambda)}\frac{|\mathcal{C}_{1}(\lambda,\vec{e}_{j})|}{|C(\lambda)|}z_{1,1}^{\prime}(\ell_{1},\lambda)\\ \vdots\\ \frac{z_{j,j}^{\prime}(\ell_{j},\lambda)(z_{j,j}(x_{j},\lambda)|\mathcal{C}_{j}(\lambda,\vec{e}_{j})|-|C(\lambda)|y_{\tau_{j},j}(x_{j},\lambda))}{|C(\lambda)|D_{j}(\lambda)}\\ \vdots\\ \frac{z_{j,j}(x_{j},\lambda)}{D_{j}(\lambda)}\frac{|\mathcal{C}_{n}(\lambda,\vec{e}_{j})|}{|C(\lambda)|}z_{n,n}^{\prime}(\ell_{n},\lambda)\\ -\frac{z_{j,j}(x_{j},\lambda)}{D_{j}(\lambda)}\frac{|\mathcal{C}_{1}(\lambda,\vec{e}_{j})|}{|C(\lambda)|}z_{1,1}^{\prime}(0,\lambda)\\ \vdots\\ -\frac{z_{j,j}(x_{j},\lambda)(z_{j,j}^{\prime}(0,\lambda)|\mathcal{C}_{j}(\lambda,\vec{e}_{j})|-|C(\lambda)|y_{\tau_{j},j}^{\prime}(0,\lambda))}{|C(\lambda)|D_{j}(\lambda)}\\ \vdots\\ -\frac{z_{j,j}(x_{j},\lambda)}{D_{j}(\lambda)}\frac{|\mathcal{C}_{n}(\lambda,\vec{e}_{j})|}{|C(\lambda)|}z_{n,n}^{\prime}(0,\lambda)\end{bmatrix}\rangle\,, (67)

where the terms with |C(λ)||C(\lambda)| in the numerator occur in the jjth and (n+j)(n+j)th components only.

Proof.

First we calculate the general Dirichlet and Neumann traces of RλΓvR_{\lambda}^{\Gamma}{\vec{v}}. Since each column in ZZ is only nonzero in one component (see formula (22)), by using the definition of the resolvent from equation (46) (which itself references the particular solution formula from equation (2.3)), we find that for k{1,,n}k\in\{1,...,n\}:

(γDRλΓv)k=|Ck(λ,v)||C(λ)|zk,k(k,λ)zk,k(k,λ)0kvk(tk)yτk,k(tk,λ)Dk(λ)𝑑tk,(\gamma_{D}R_{\lambda}^{\Gamma}{\vec{v}})_{k}=\frac{|C_{k}({\lambda},{\vec{v}})|}{|C({\lambda})|}z_{k,k}(\ell_{k},{\lambda})-z_{k,k}(\ell_{k},{\lambda})\int_{0}^{\ell_{k}}\frac{v_{k}(t_{k})y_{\tau_{k},k}(t_{k},{\lambda})}{D_{k}({\lambda})}\,dt_{k},
(γDRλΓv)n+k=|Ck(λ,v)||C(λ)|zk,k(0,λ)yτk,k(0,λ)0kvk(tk)zk,k(tk,λ)Dk(λ)𝑑tk,(\gamma_{D}R_{\lambda}^{\Gamma}{\vec{v}})_{n+k}=\frac{|C_{k}(\lambda,{\vec{v}})|}{|C(\lambda)|}z_{k,k}(0,\lambda)-y_{\tau_{k},k}(0,\lambda)\int_{0}^{\ell_{k}}\frac{v_{k}(t_{k})z_{k,k}(t_{k},\lambda)}{D_{k}(\lambda)}\,dt_{k},
(γNRλΓv)k=|Ck(λ,v)||C(λ)|zk,k(k,λ)zk,k(k,λ)0kvk(tk)yτk,k(tk,λ)Dk(λ)𝑑tk,(\gamma_{N}R_{\lambda}^{\Gamma}{\vec{v}})_{k}=\frac{|C_{k}(\lambda,{\vec{v}})|}{|C(\lambda)|}z_{k,k}^{\prime}(\ell_{k},\lambda)-z_{k,k}^{\prime}(\ell_{k},\lambda)\int_{0}^{\ell_{k}}\frac{v_{k}(t_{k})y_{\tau_{k},k}(t_{k},\lambda)}{D_{k}(\lambda)}\,dt_{k},
(γNRλΓv)n+k=(|Ck(λ,v)||C(λ)|zk,k(0,λ)yτk,k(0,λ)0kvk(tk)zk,k(tk,λ)Dk(λ)𝑑tk).(\gamma_{N}R_{\lambda}^{\Gamma}{\vec{v}})_{n+k}=-(\frac{|C_{k}(\lambda,{\vec{v}})|}{|C(\lambda)|}z_{k,k}^{\prime}(0,\lambda)-y_{\tau_{k},k}^{\prime}(0,\lambda)\int_{0}^{\ell_{k}}\frac{v_{k}(t_{k})z_{k,k}(t_{k},\lambda)}{D_{k}(\lambda)}\,dt_{k}).

When v=ej{\vec{v}}=\vec{e}_{j}, the components of these traces can be simplified as follows:

(γDRλΓej)k={|Ck(λ,ej)||C(λ)|zk,k(k,λ),kj|Cj(λ,ej)||C(λ)|zj,j(j,λ)zj,j(j,λ)0jyτj,j(tj,λ)Dj(λ)𝑑tj,k=j(\gamma_{D}R_{\lambda}^{\Gamma}\vec{e}_{j})_{k}=\begin{cases}\frac{|C_{k}({\lambda},\vec{e}_{j})|}{|C({\lambda})|}z_{k,k}(\ell_{k},{\lambda}),&k\neq j\\ \frac{|C_{j}({\lambda},\vec{e}_{j})|}{|C({\lambda})|}z_{j,j}(\ell_{j},{\lambda})-z_{j,j}(\ell_{j},{\lambda})\int_{0}^{\ell_{j}}\frac{y_{\tau_{j},j}(t_{j},{\lambda})}{D_{j}({\lambda})}\,dt_{j},&k=j\end{cases}
(γDRλΓej)n+k={|Ck(λ,ej)||C(λ)|zk,k(0,λ),kj|Cj(λ,ej)||C(λ)|zj,j(0,λ)yτj,j(0,λ)0jzj,j(tj,λ)Dj(λ)𝑑tj,k=j(\gamma_{D}R_{\lambda}^{\Gamma}\vec{e}_{j})_{n+k}=\begin{cases}\frac{|C_{k}({\lambda},\vec{e}_{j})|}{|C({\lambda})|}z_{k,k}(0,{\lambda}),&k\neq j\\ \frac{|C_{j}({\lambda},\vec{e}_{j})|}{|C({\lambda})|}z_{j,j}(0,{\lambda})-y_{\tau_{j},j}(0,{\lambda})\int_{0}^{\ell_{j}}\frac{z_{j,j}(t_{j},{\lambda})}{D_{j}({\lambda})}\,dt_{j},&k=j\end{cases}
(γNRλΓej)k={|Ck(λ,ej)||C(λ)|zk,k(k,λ),kj|Cj(λ,ej)||C(λ)|zj,j(j,λ)zj,j(j,λ)0jyτj,j(tj,λ)Dj(λ)𝑑tj,k=j(\gamma_{N}R_{\lambda}^{\Gamma}\vec{e}_{j})_{k}=\begin{cases}\frac{|C_{k}(\lambda,\vec{e}_{j})|}{|C(\lambda)|}z_{k,k}^{\prime}(\ell_{k},\lambda),&k\neq j\\ \frac{|C_{j}(\lambda,\vec{e}_{j})|}{|C(\lambda)|}z_{j,j}^{\prime}(\ell_{j},\lambda)-z_{j,j}^{\prime}(\ell_{j},\lambda)\int_{0}^{\ell_{j}}\frac{y_{\tau_{j},j}(t_{j},\lambda)}{D_{j}(\lambda)}\,dt_{j},&k=j\end{cases}
(γNRλΓej)n+k={|Ck(λ,ej)||C(λ)|zk,k(0,λ),kj(|Cj(λ,ej)||C(λ)|zj,j(0,λ)yτj,j(0,λ)0jzj,j(tj,λ)Dj(λ)𝑑tj),k=j(\gamma_{N}R_{\lambda}^{\Gamma}\vec{e}_{j})_{n+k}=\begin{cases}-\frac{|C_{k}(\lambda,\vec{e}_{j})|}{|C(\lambda)|}z_{k,k}^{\prime}(0,\lambda),&k\neq j\\ -(\frac{|C_{j}(\lambda,\vec{e}_{j})|}{|C(\lambda)|}z_{j,j}^{\prime}(0,\lambda)-y_{\tau_{j},j}^{\prime}(0,\lambda)\int_{0}^{\ell_{j}}\frac{z_{j,j}(t_{j},\lambda)}{D_{j}(\lambda)}\,dt_{j}),&k=j\end{cases}

Thus, we can recover the jjth component of uΓ,i{\vec{u}}_{\Gamma,i} by setting v=ej{\vec{v}}=\vec{e}_{j}, equating the integrands on both sides of equation (63), and using formula (65) together with the trace formulas above, completing the proof. ∎

4 Eigenvalue Counting Formulas

4.1 Single Split

We can split Ω\Omega into two subgraphs Ω1\Omega_{1} and Ω2\Omega_{2} at some non-vertex point xj=s1x_{j}=s_{1}, over which new conditions and operators are defined as in Remark 1.3. In addition, we consider boundary conditions Γ1\Gamma_{1}^{\prime} and Γ2\Gamma_{2}^{\prime}, which are identical to Γ1\Gamma_{1} and Γ2\Gamma_{2}, respectively, at all vertices except s1s_{1}. In Γ1\Gamma_{1}^{\prime}, we impose the Neumann condition u(s1)=0u^{\prime}(s_{1})=0, and in Γ2\Gamma_{2}^{\prime} we impose the Neumann condition uj(s1)=0u_{j}^{\prime}(s_{1})=0. From these new conditions we construct the operators HΓ1Ω1H^{\Omega_{1}}_{\Gamma_{1}^{\prime}} and HΓ2Ω2H^{\Omega_{2}}_{\Gamma_{2}^{\prime}}, defined as follows:

HΓ1Ω1u:=HΩ1u for udom(HΓ1Ω1):={uH2(Ω1):γΓ1u=0},HΓ2Ω2u:=HΩ2u for udom(HΓ2Ω2):={uH2(Ω2):γΓ2u=0}.\begin{split}H^{\Omega_{1}}_{\Gamma_{1}^{\prime}}u:=H^{\Omega_{1}}u\text{ for }u\in{\text{dom}\,}(H^{\Omega_{1}}_{\Gamma_{1}^{\prime}}):=\{u\in H^{2}(\Omega_{1}):\gamma_{\Gamma_{1}^{\prime}}u=\vec{0}\},\\ H^{\Omega_{2}}_{\Gamma_{2}^{\prime}}{\vec{u}}:=H^{\Omega_{2}}{\vec{u}}\text{ for }{\vec{u}}\in{\text{dom}\,}(H^{\Omega_{2}}_{\Gamma_{2}^{\prime}}):=\{{\vec{u}}\in H^{2}(\Omega_{2}):\gamma_{\Gamma_{2}^{\prime}}{\vec{u}}=\vec{0}\}.\end{split} (68)

We will now demonstrate a way to count eigenvalues of HΓΩH^{\Omega}_{\Gamma} by instead working with HΓ1Ω1H^{\Omega_{1}}_{\Gamma_{1}} and HΓ2Ω2H^{\Omega_{2}}_{\Gamma_{2}}.

Remark 4.1.

Throughout the rest of this section, we will use two functions, ϕ\vec{\phi} and θ\vec{\theta}, satisfying Dirichlet and Neumann conditions, respectively. They are solutions to (HΩλ)u=0(H^{\Omega}-\lambda){\vec{u}}=\vec{0} and satisfy the following boundary conditions along the jjth edge:

θj(s1)=1\displaystyle\theta_{j}(s_{1})=1\hskip 36.135pt ϕj(s1)=0,\displaystyle\phi_{j}(s_{1})=0\,,
θj(s1)=0\displaystyle\theta_{j}^{\prime}(s_{1})=0\hskip 36.135pt ϕj(s1)=1.\displaystyle\phi_{j}^{\prime}(s_{1})=1\,. (69)

For convenience, all components of ϕ\vec{\phi} and θ\vec{\theta} other than the jjth are taken to be identically 0.
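For orientation (an illustrative special case, not used in the proofs), if the potential vanishes on the jjth edge and λ>0\lambda>0, these normalized solutions are explicit:

```latex
\theta_j(x_j,\lambda) = \cos\!\big(\sqrt{\lambda}\,(x_j - s_1)\big), \qquad
\phi_j(x_j,\lambda) = \frac{\sin\!\big(\sqrt{\lambda}\,(x_j - s_1)\big)}{\sqrt{\lambda}},
```

so that W0(θj,ϕj)=θjϕjθjϕj=1W_{0}(\theta_{j},\phi_{j})=\theta_{j}\phi_{j}^{\prime}-\theta_{j}^{\prime}\phi_{j}=1 identically, consistent with the normalization at s1s_{1}; for λ0\lambda\leq 0 one uses the hyperbolic analogues.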

4.1.1 M1M_{1} Construction

When constructing M1M_{1}, we suppose that λρ(HΓ1Ω1)\lambda\in\rho(H^{\Omega_{1}}_{\Gamma_{1}}) and ff\in{\mathbb{C}}. Let uH2(Ω1)u\in H^{2}({\Omega_{1}}) solve the inhomogeneous problem

{(HΩ1λ)u=0,u(s1)=f,u satisfies the Γ condition at j.\begin{cases}(H^{\Omega_{1}}-\lambda)u=0,\\ u(s_{1})=f,\\ \text{$u$ satisfies the $\Gamma$ condition at $\ell_{j}$}.\end{cases} (70)

Since uu does not satisfy the Dirichlet condition at s1s_{1}, we find that u(xj,λ)=czj,j|Ω1(xj,λ)u(x_{j},\lambda)=cz_{j,j}|_{\Omega_{1}}(x_{j},\lambda) for some cc, where zj,jz_{j,j} is the solution satisfying the β\beta conditions (and thus also βΓ1\beta^{\Gamma_{1}} conditions) on ϵj\epsilon_{j} at xj=jx_{j}=\ell_{j}. Then, by equation (1.4), we have that

M1(λ)=u(s1,λ)u(s1,λ)=czj,j|Ω1(s1,λ)czj,j|Ω1(s1,λ)=zj,j(s1,λ)zj,j(s1,λ).M_{1}(\lambda)=-\frac{u^{\prime}(s_{1},\lambda)}{u(s_{1},\lambda)}=-\frac{cz_{j,j}|_{\Omega_{1}}^{\prime}(s_{1},\lambda)}{cz_{j,j}|_{\Omega_{1}}(s_{1},\lambda)}=-\frac{z_{j,j}^{\prime}(s_{1},\lambda)}{z_{j,j}(s_{1},\lambda)}\,. (71)

Of course, we can rewrite this as a quotient of Wronskians by utilizing ϕj\phi_{j} and θj\theta_{j} from equation (4.1):

M1(λ)=W0(θj(s1,λ),zj,j(s1,λ))W0(zj,j(s1,λ),ϕj(s1,λ))=W0(θj(xj,λ),zj,j(xj,λ))W0(ϕj(xj,λ),zj,j(xj,λ)).M_{1}(\lambda)=-\frac{W_{0}(\theta_{j}(s_{1},\lambda),z_{j,j}(s_{1},\lambda))}{W_{0}(z_{j,j}(s_{1},\lambda),\phi_{j}(s_{1},\lambda))}=\frac{W_{0}(\theta_{j}(x_{j},\lambda),z_{j,j}(x_{j},\lambda))}{W_{0}(\phi_{j}(x_{j},\lambda),z_{j,j}(x_{j},\lambda))}\,. (72)

These Wronskians are precisely the determinants of the fundamental matrices for the Γ1\Gamma_{1} and Γ1\Gamma_{1}^{\prime} problems. Thus, rewritten as Evans functions, we have

M1(λ)=EΓ1Ω1(λ)EΓ1Ω1(λ).M_{1}(\lambda)=\frac{E^{\Omega_{1}}_{\Gamma_{1}^{\prime}}(\lambda)}{E^{\Omega_{1}}_{\Gamma_{1}}(\lambda)}. (73)
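The identity chain (71)–(73) is easy to test numerically in a toy setting. The sketch below uses our own illustrative choices (zero potential, a single edge Ω1=[s1,]\Omega_{1}=[s_{1},\ell] with a Dirichlet condition at \ell, and hypothetical names like `M1_direct`) to check that the Wronskian quotient (72) is independent of the evaluation point and agrees with z(s1)/z(s1)-z^{\prime}(s_{1})/z(s_{1}):

```python
import math

# Assumed toy setting: zero potential on the edge, Dirichlet condition at x = ell,
# so z(x) = sin(sqrt(lam) * (ell - x)) solves (H - lam) z = 0 with z(ell) = 0.
lam, ell, s1 = 2.0, 1.0, 0.3
k = math.sqrt(lam)

def z(x):  return math.sin(k * (ell - x))
def dz(x): return -k * math.cos(k * (ell - x))

# theta, phi normalized at s1 as in (69): theta(s1)=1, theta'(s1)=0, phi(s1)=0, phi'(s1)=1
def theta(x):  return math.cos(k * (x - s1))
def dtheta(x): return -k * math.sin(k * (x - s1))
def phi(x):  return math.sin(k * (x - s1)) / k
def dphi(x): return math.cos(k * (x - s1))

def W(f, df, g, dg, x):  # Wronskian f g' - f' g; constant in x for two solutions
    return f(x) * dg(x) - df(x) * g(x)

M1_direct = -dz(s1) / z(s1)                                      # equation (71)
x = 0.77                                                         # any evaluation point works
M1_wronski = W(theta, dtheta, z, dz, x) / W(phi, dphi, z, dz, x)  # equation (72)
print(M1_direct, M1_wronski)
```

The two printed values agree to machine precision, mirroring the passage from (71) to (73).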

4.1.2 M2M_{2} Construction

The construction of M2M_{2} as a quotient of Evans functions must use a different argument since it is defined by a problem over Ω2\Omega_{2}, which has a star structure.

Theorem 4.2.

Let λρ(HΓ2Ω2)\lambda\in\rho(H^{\Omega_{2}}_{\Gamma_{2}}). Then

M2(λ)=EΓ2Ω2(λ)EΓ2Ω2(λ).M_{2}(\lambda)=-\frac{E^{\Omega_{2}}_{\Gamma_{2}^{\prime}}(\lambda)}{E^{\Omega_{2}}_{\Gamma_{2}}(\lambda)}.
Proof.

Let λρ(HΓ2Ω2)\lambda\in\rho(H^{\Omega_{2}}_{\Gamma_{2}}). Let uΓ2,jH2(Ω2){\vec{u}}_{\Gamma_{2},j}\in H^{2}({\Omega_{2}}) be the unique solution to the problem

{(HΩ2λ)uΓ2,j=0,γΓ2uΓ2,j=ej.\begin{cases}(H^{\Omega_{2}}-\lambda){\vec{u}}_{\Gamma_{2},j}=\vec{0},\\ \gamma_{\Gamma_{2}}{\vec{u}}_{\Gamma_{2},j}=\vec{e}_{j}.\end{cases} (74)

That is, uΓ2,j{\vec{u}}_{\Gamma_{2},j} satisfies all Γ2\Gamma_{2} boundary conditions except the Dirichlet condition at s1s_{1}. Then by equation (1.4), we can express the map as follows:

M2(λ)=(uΓ2,j)j(s1,λ)(uΓ2,j)j(s1,λ)=(uΓ2,j)j(s1,λ)1.M_{2}(\lambda)=\frac{({\vec{u}}_{\Gamma_{2},j})^{\prime}_{j}(s_{1},\lambda)}{({\vec{u}}_{\Gamma_{2},j})_{j}(s_{1},\lambda)}=\frac{({\vec{u}}_{\Gamma_{2},j})_{j}^{\prime}(s_{1},\lambda)}{1}\,. (75)

To find a more informative way of writing (uΓ2,j)j(s1,λ)({\vec{u}}_{\Gamma_{2},j})^{\prime}_{j}(s_{1},\lambda), we can use Theorem 3.4 with i=ji=j, j=s1\ell_{j}=s_{1}, and fundamental YY and ZZ solutions defined on Ω2\Omega_{2} using Γ2\Gamma_{2} conditions.

We introduce δ1Γ2\delta^{\Gamma_{2}}_{1} and δ2Γ2\delta^{\Gamma_{2}}_{2}, which are the δ\delta matrices (see formula (49)) corresponding to Γ2\Gamma_{2} boundary conditions. Due to the Dirichlet condition imposed at s1s_{1}, we have that the jjth row and jjth column of δ1Γ2\delta^{\Gamma_{2}}_{1} are both ej\vec{e}_{j}, while the jjth row and column of δ2Γ2\delta^{\Gamma_{2}}_{2} consist of zero entries. Therefore, (δ1Γ2iδ2Γ2)ej=ej(\delta^{\Gamma_{2}}_{1}-i\delta^{\Gamma_{2}}_{2})\vec{e}_{j}=\vec{e}_{j}, which means that (δ1Γ2iδ2Γ2)1ej=ej(\delta^{\Gamma_{2}}_{1}-i\delta^{\Gamma_{2}}_{2})^{-1}\vec{e}_{j}=\vec{e}_{j}. Additionally, since PDP_{D} is a projection onto kerδ2\ker\delta_{2}, which contains ej\vec{e}_{j}, we have PDej=ejP_{D}\vec{e}_{j}=\vec{e}_{j}. Since PDP_{D}, P~N\tilde{P}_{N}, and P~R\tilde{P}_{R} are mutually orthogonal, it follows that P~Nej=P~Rej=0\tilde{P}_{N}\vec{e}_{j}=\tilde{P}_{R}\vec{e}_{j}=\vec{0}. This means that ej\vec{e}_{j} is sent to zero by Lj+MjL_{j}+M_{j}, while NjN_{j} merely changes its sign. Hence, we can reduce Theorem 3.4 for i=ji=j in this case to

(uΓ2,j)j(xj,λ)=zj,j(s1,λ)(zj,j(xj,λ)|𝒞j(λ,ej)||C(λ)|yτj,j(xj,λ))|C(λ)|Dj(λ).({\vec{u}}_{\Gamma_{2},j})_{j}(x_{j},\lambda)=-\frac{z_{j,j}^{\prime}(s_{1},\lambda)(z_{j,j}(x_{j},\lambda)|\mathcal{C}_{j}(\lambda,\vec{e}_{j})|-|C(\lambda)|y_{\tau_{j},j}(x_{j},\lambda))}{|C(\lambda)|D_{j}(\lambda)}\,. (76)

Note that in this formula, and for the rest of Section 4.1.2, zj,jz_{j,j}, yτj,jy_{\tau_{j},j}, CC, DjD_{j}, etc. are all those objects related to the construction of RλΓ2vR_{\lambda}^{\Gamma_{2}}{\vec{v}}, as opposed to those from RλΓvR_{\lambda}^{\Gamma}{\vec{v}}. However, note that all of the functions except zj,jz_{j,j} constructed using Γ2\Gamma_{2} conditions are equal to their Γ\Gamma condition counterparts.

The only portion of the Neumann trace we need is (uΓ2,j)j(s1,λ)({\vec{u}}_{\Gamma_{2},j})_{j}^{\prime}(s_{1},\lambda):

(uΓ2,j)j(s1,λ)=zj,j(s1,λ)(zj,j(s1,λ)|𝒞j(λ,ej)||C(λ)|yτj,j(s1,λ))|C(λ)|Dj(λ).({\vec{u}}_{\Gamma_{2},j})^{\prime}_{j}(s_{1},\lambda)=-\frac{z_{j,j}^{\prime}(s_{1},\lambda)(z_{j,j}^{\prime}(s_{1},\lambda)|\mathcal{C}_{j}(\lambda,\vec{e}_{j})|-|C(\lambda)|y_{\tau_{j},j}^{\prime}(s_{1},\lambda))}{|C(\lambda)|D_{j}(\lambda)}\,. (77)

By Lemma 2.4 we have that |C|=(1)n|F||C|=(-1)^{n}|F|, where FF is the fundamental matrix associated with the HΓ2Ω2H^{\Omega_{2}}_{\Gamma_{2}} operator. Note that EΓ2Ω2E^{\Omega_{2}}_{\Gamma_{2}} and EΓ2Ω2E^{\Omega_{2}}_{\Gamma_{2}^{\prime}} are determinants of matrices identical to FF except for the (n+j)(n+j)th column, which is replaced by ϕ\vec{\phi} and θ\vec{\theta}, respectively.

If we replace zj,jz_{j,j} everywhere in FF and CC with yτj,jy_{\tau_{j},j}, we can again apply Lemma 2.4 and get an analogous result. Of course, C(λ)C(\lambda) with all the zj,jz_{j,j} contributions replaced by yτj,jy_{\tau_{j},j} contributions is simply 𝒞j(λ,ej)\mathcal{C}_{j}(\lambda,\vec{e}_{j}) (that is, zj,jz_{j,j} and zj,jz_{j,j}^{\prime} are swapped out everywhere for yτj,jy_{\tau_{j},j} and yτj,jy_{\tau_{j},j}^{\prime}, respectively) as in equation (65). Thus, we have that |𝒞j(λ,ej)|=(1)n|j(λ,ej)||\mathcal{C}_{j}(\lambda,\vec{e}_{j})|=(-1)^{n}|\mathcal{F}_{j}(\lambda,\vec{e}_{j})| where j\mathcal{F}_{j} is FF with all zj,jz_{j,j} contributions replaced with equivalent yτj,jy_{\tau_{j},j} contributions. Then, equation (77) can be rewritten as

M2(λ)\displaystyle M_{2}(\lambda) =(uΓ2,j)j(s1,λ)\displaystyle=({\vec{u}}_{\Gamma_{2},j})^{\prime}_{j}(s_{1},\lambda)
=zj,j(s1,λ)(zj,j(s1,λ)(1)n|j(λ,ej)|(1)n|F(λ)|yτj,j(s1,λ))(1)n|F(λ)|Dj(λ)\displaystyle=-\frac{z_{j,j}^{\prime}(s_{1},\lambda)(z_{j,j}^{\prime}(s_{1},\lambda)(-1)^{n}|\mathcal{F}_{j}(\lambda,\vec{e}_{j})|-(-1)^{n}|F(\lambda)|y_{\tau_{j},j}^{\prime}(s_{1},\lambda))}{(-1)^{n}|F(\lambda)|D_{j}(\lambda)}
=zj,j(s1,λ)(zj,j(s1,λ)|j(λ,ej)||F(λ)|yτj,j(s1,λ))|F(λ)|Dj(λ).\displaystyle=-\frac{z_{j,j}^{\prime}(s_{1},\lambda)(z_{j,j}^{\prime}(s_{1},\lambda)|\mathcal{F}_{j}(\lambda,\vec{e}_{j})|-|F(\lambda)|y_{\tau_{j},j}^{\prime}(s_{1},\lambda))}{|F(\lambda)|D_{j}(\lambda)}\,. (78)

Since there is a Dirichlet condition at s1s_{1}, we see that zj,j=ϕj|Ω2z_{j,j}=\phi_{j}|_{\Omega_{2}} (the restriction of ϕj\phi_{j} to Ω2ϵj\Omega_{2}\cap\epsilon_{j}), and thus |F|=EΓ2Ω2|F|=E^{\Omega_{2}}_{\Gamma_{2}}. Observe that |F||F| can be rewritten by Laplace expansion down the ϕ\phi column, evaluated at xj=0x_{j}=0, to obtain:

|F(λ)|=ϕj(0,λ)B1(λ)ϕj(0,λ)B2(λ),|F(\lambda)|=\phi_{j}(0,\lambda)B_{1}(\lambda)-\phi_{j}^{\prime}(0,\lambda)B_{2}(\lambda)\,, (79)

where B1B_{1} and B2B_{2} are the complementary minors picked up in Laplace expansion. Additionally, by our assumed initial conditions:

θj(xj)ϕj(xj)θj(xj)ϕj(xj)=W0(θj,ϕj)=W0(θj(s1),ϕj(s1))=|1001|=1,\theta_{j}(x_{j})\phi_{j}^{\prime}(x_{j})-\theta_{j}^{\prime}(x_{j})\phi_{j}(x_{j})=W_{0}(\theta_{j},\phi_{j})=W_{0}(\theta_{j}(s_{1}),\phi_{j}(s_{1}))=\begin{vmatrix}1&0\\ 0&1\end{vmatrix}=1\,, (80)

and thus

ϕj(xj)=1+θj(xj)ϕj(xj)θj(xj).\phi_{j}^{\prime}(x_{j})=\frac{1+\theta_{j}^{\prime}(x_{j})\phi_{j}(x_{j})}{\theta_{j}(x_{j})}\,. (81)

Then, by multiplying equation (78) by |F(λ)|-|F(\lambda)| we see:

|F(λ)|M2(λ)=zj,j(s1,λ)(zj,j(s1,λ)|j(λ,ej)||F(λ)|yτj,j(s1,λ))Dj(λ)\displaystyle-|F(\lambda)|M_{2}(\lambda)=\frac{z_{j,j}^{\prime}(s_{1},\lambda)(z_{j,j}^{\prime}(s_{1},\lambda)|\mathcal{F}_{j}(\lambda,\vec{e}_{j})|-|F(\lambda)|y_{\tau_{j},j}^{\prime}(s_{1},\lambda))}{D_{j}(\lambda)}
=\displaystyle= ϕj(s1,λ)(ϕj(s1,λ)|j(λ,ej)||F(λ)|yτj,j(s1,λ))W0(yτj,j,zj,j)\displaystyle\frac{\phi_{j}^{\prime}(s_{1},\lambda)(\phi_{j}^{\prime}(s_{1},\lambda)|\mathcal{F}_{j}(\lambda,\vec{e}_{j})|-|F(\lambda)|y_{\tau_{j},j}^{\prime}(s_{1},\lambda))}{W_{0}(y_{\tau_{j},j},z_{j,j})}
=(4.1)\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:42}}}}}}{{=}}} |j(λ,ej)||F(λ)|yτj,j(s1,λ)W0(yτj,j,ϕj|Ω2)=|j(λ,ej)||F(λ)||1yτj,j(s1,λ)0yτj,j(s1,λ)|yτj,j(0,λ)ϕj(0,λ)yτj,j(0,λ)ϕj(0,λ)\displaystyle\,\,\frac{|\mathcal{F}_{j}(\lambda,\vec{e}_{j})|-|F(\lambda)|y_{\tau_{j},j}^{\prime}(s_{1},\lambda)}{W_{0}(y_{\tau_{j},j},\phi_{j}|_{\Omega_{2}})}=\frac{|\mathcal{F}_{j}(\lambda,\vec{e}_{j})|-|F(\lambda)|\begin{vmatrix}1&y_{\tau_{j},j}(s_{1},\lambda)\\ 0&y_{\tau_{j},j}^{\prime}(s_{1},\lambda)\end{vmatrix}}{y_{\tau_{j},j}(0,\lambda)\phi_{j}^{\prime}(0,\lambda)-y_{\tau_{j},j}^{\prime}(0,\lambda)\phi_{j}(0,\lambda)}
=(79),(81)\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:71},\eqref{eq:72}}}}}}{{=}}} |j(λ,ej)|(ϕj(0,λ)B1(λ)ϕj(0,λ)B2(λ))|θj(0,λ)yτj,j(0,λ)θj(0,λ)yτj,j(0,λ)|yτj,j(0,λ)1+θj(0,λ)ϕj(0,λ)θj(0,λ)yτj,j(0,λ)ϕj(0,λ)\displaystyle\,\,\,\,\,\,\,\frac{|\mathcal{F}_{j}(\lambda,\vec{e}_{j})|-(\phi_{j}(0,\lambda)B_{1}(\lambda)-\phi_{j}^{\prime}(0,\lambda)B_{2}(\lambda))\begin{vmatrix}\theta_{j}(0,\lambda)&y_{\tau_{j},j}(0,\lambda)\\ \theta_{j}^{\prime}(0,\lambda)&y_{\tau_{j},j}^{\prime}(0,\lambda)\end{vmatrix}}{y_{\tau_{j},j}(0,\lambda)\frac{1+\theta_{j}^{\prime}(0,\lambda)\phi_{j}(0,\lambda)}{\theta_{j}(0,\lambda)}-y_{\tau_{j},j}^{\prime}(0,\lambda)\phi_{j}(0,\lambda)}
(From here on, all functions are evaluated at (0,λ)(0,\lambda), so we omit the arguments)
=\displaystyle= |j|(ϕjB1ϕjB2)(θjyτj,jyτj,jθj)yτj,jθj+ϕj(θjθjyτj,jyτj,j)\displaystyle\frac{|\mathcal{F}_{j}|-(\phi_{j}B_{1}-\phi_{j}^{\prime}B_{2})(\theta_{j}y_{\tau_{j},j}^{\prime}-y_{\tau_{j},j}\theta_{j}^{\prime})}{\frac{y_{\tau_{j},j}}{\theta_{j}}+\phi_{j}(\frac{\theta_{j}^{\prime}}{\theta_{j}}y_{\tau_{j},j}-y_{\tau_{j},j}^{\prime})}
=\displaystyle= |j|(ϕjB1ϕjB2)(θj)(yτj,jθjθjyτj,j)yτj,jθj+ϕj(θjθjyτj,jyτj,j)\displaystyle\frac{|\mathcal{F}_{j}|-(\phi_{j}B_{1}-\phi_{j}^{\prime}B_{2})(-\theta_{j})(y_{\tau_{j},j}\frac{\theta_{j}^{\prime}}{\theta_{j}}-y_{\tau_{j},j}^{\prime})}{\frac{y_{\tau_{j},j}}{\theta_{j}}+\phi_{j}(\frac{\theta_{j}^{\prime}}{\theta_{j}}y_{\tau_{j},j}-y_{\tau_{j},j}^{\prime})}
=\displaystyle= |j|+θjB1ϕj(yτj,jθjθjyτj,j)θjB2ϕj(yτj,jθjθjyτj,j)yτj,jθj+ϕj(θjθjyτj,jyτj,j)\displaystyle\frac{|\mathcal{F}_{j}|+\theta_{j}B_{1}\phi_{j}(y_{\tau_{j},j}\frac{\theta_{j}^{\prime}}{\theta_{j}}-y_{\tau_{j},j}^{\prime})-\theta_{j}B_{2}\phi_{j}^{\prime}(y_{\tau_{j},j}\frac{\theta_{j}^{\prime}}{\theta_{j}}-y_{\tau_{j},j}^{\prime})}{\frac{y_{\tau_{j},j}}{\theta_{j}}+\phi_{j}(\frac{\theta_{j}^{\prime}}{\theta_{j}}y_{\tau_{j},j}-y_{\tau_{j},j}^{\prime})}
=\displaystyle= |j|+θjB1(yτj,jθjyτj,jθj+ϕj(yτj,jθjθjyτj,j))yτj,jθj+ϕj(θjθjyτj,jyτj,j)\displaystyle\frac{|\mathcal{F}_{j}|+\theta_{j}B_{1}(\frac{y_{\tau_{j},j}}{\theta_{j}}-\frac{y_{\tau_{j},j}}{\theta_{j}}+\phi_{j}(y_{\tau_{j},j}\frac{\theta_{j}^{\prime}}{\theta_{j}}-y_{\tau_{j},j}^{\prime}))}{\frac{y_{\tau_{j},j}}{\theta_{j}}+\phi_{j}(\frac{\theta_{j}^{\prime}}{\theta_{j}}y_{\tau_{j},j}-y_{\tau_{j},j}^{\prime})}
B2θj(1+θjϕjθj)(yτj,jθjθjyτj,j)yτj,jθj+ϕj(θjθjyτj,jyτj,j)\displaystyle-\frac{B_{2}\theta_{j}(\frac{1+\theta_{j}^{\prime}\phi_{j}}{\theta_{j}})(y_{\tau_{j},j}\frac{\theta_{j}^{\prime}}{\theta_{j}}-y_{\tau_{j},j}^{\prime})}{\frac{y_{\tau_{j},j}}{\theta_{j}}+\phi_{j}(\frac{\theta_{j}^{\prime}}{\theta_{j}}y_{\tau_{j},j}-y_{\tau_{j},j}^{\prime})}
=\displaystyle= θjB1+|j|θjB1yτj,jθjB2(yτj,jθjθjyτj,j)θjB2ϕj(yτj,jθjθjyτj,j)yτj,jθj+ϕj(θjθjyτj,jyτj,j)\displaystyle\theta_{j}B_{1}+\frac{|\mathcal{F}_{j}|-\theta_{j}B_{1}\frac{y_{\tau_{j},j}}{\theta_{j}}-B_{2}(y_{\tau_{j},j}\frac{\theta_{j}^{\prime}}{\theta_{j}}-y_{\tau_{j},j}^{\prime})-\theta_{j}^{\prime}B_{2}\phi_{j}(y_{\tau_{j},j}\frac{\theta_{j}^{\prime}}{\theta_{j}}-y_{\tau_{j},j}^{\prime})}{\frac{y_{\tau_{j},j}}{\theta_{j}}+\phi_{j}(\frac{\theta_{j}^{\prime}}{\theta_{j}}y_{\tau_{j},j}-y_{\tau_{j},j}^{\prime})}
=\displaystyle= θjB1+|j|θjB1yτj,jθjB2(yτj,jθjθjyτj,j)yτj,jθj+ϕj(θjθjyτj,jyτj,j)\displaystyle\theta_{j}B_{1}+\frac{|\mathcal{F}_{j}|-\theta_{j}B_{1}\frac{y_{\tau_{j},j}}{\theta_{j}}-B_{2}(y_{\tau_{j},j}\frac{\theta_{j}^{\prime}}{\theta_{j}}-y_{\tau_{j},j}^{\prime})}{\frac{y_{\tau_{j},j}}{\theta_{j}}+\phi_{j}(\frac{\theta_{j}^{\prime}}{\theta_{j}}y_{\tau_{j},j}-y_{\tau_{j},j}^{\prime})}
θjB2(yτj,jθjyτj,jθj+ϕj(yτj,jθjθjyτj,j))yτj,jθj+ϕj(θjθjyτj,jyτj,j)\displaystyle-\frac{\theta_{j}^{\prime}B_{2}(\frac{y_{\tau_{j},j}}{\theta_{j}}-\frac{y_{\tau_{j},j}}{\theta_{j}}+\phi_{j}(y_{\tau_{j},j}\frac{\theta_{j}^{\prime}}{\theta_{j}}-y_{\tau_{j},j}^{\prime}))}{\frac{y_{\tau_{j},j}}{\theta_{j}}+\phi_{j}(\frac{\theta_{j}^{\prime}}{\theta_{j}}y_{\tau_{j},j}-y_{\tau_{j},j}^{\prime})}
=\displaystyle= θjB1θjB2+|j|θjB1yτj,jθjB2(yτj,jθjθjyτj,j)+yτj,jθjθjB2yτj,jθj+ϕj(θjθjyτj,jyτj,j)\displaystyle\theta_{j}B_{1}-\theta_{j}^{\prime}B_{2}+\frac{|\mathcal{F}_{j}|-\theta_{j}B_{1}\frac{y_{\tau_{j},j}}{\theta_{j}}-B_{2}(y_{\tau_{j},j}\frac{\theta_{j}^{\prime}}{\theta_{j}}-y_{\tau_{j},j}^{\prime})+\frac{y_{\tau_{j},j}}{\theta_{j}}\theta_{j}^{\prime}B_{2}}{\frac{y_{\tau_{j},j}}{\theta_{j}}+\phi_{j}(\frac{\theta_{j}^{\prime}}{\theta_{j}}y_{\tau_{j},j}-y_{\tau_{j},j}^{\prime})}
=\displaystyle= θjB1θjB2+|j|yτj,jB1+yτj,jB2yτj,jθj+ϕj(θjθjyτj,jyτj,j).\displaystyle\theta_{j}B_{1}-\theta_{j}^{\prime}B_{2}+\frac{|\mathcal{F}_{j}|-y_{\tau_{j},j}B_{1}+y_{\tau_{j},j}^{\prime}B_{2}}{\frac{y_{\tau_{j},j}}{\theta_{j}}+\phi_{j}(\frac{\theta_{j}^{\prime}}{\theta_{j}}y_{\tau_{j},j}-y_{\tau_{j},j}^{\prime})}\,. (82)

As discussed previously, j\mathcal{F}_{j} is simply FF with all the zj,jz_{j,j} contributions swapped for yτj,jy_{\tau_{j},j} contributions. By the definition of B1B_{1} and B2B_{2}, the difference yτj,jB1yτj,jB2y_{\tau_{j},j}B_{1}-y_{\tau_{j},j}^{\prime}B_{2} is precisely the determinant of the fundamental solution matrix with zj,jz_{j,j} swapped out for yτj,jy_{\tau_{j},j}. So, we certainly have |j|(yτj,jB1yτj,jB2)=0|\mathcal{F}_{j}|-(y_{\tau_{j},j}B_{1}-y_{\tau_{j},j}^{\prime}B_{2})=0, and thus we can represent M2M_{2} as our desired quotient. Indeed, dividing equation (82) by |F(λ)|-|F(\lambda)| tells us that

M2(λ)=θj(0,λ)B1(λ)θj(0,λ)B2(λ)|F(λ)|=EΓ2Ω2(λ)EΓ2Ω2(λ).M_{2}(\lambda)=-\frac{\theta_{j}(0,\lambda)B_{1}(\lambda)-\theta_{j}^{\prime}(0,\lambda)B_{2}(\lambda)}{|F(\lambda)|}=-\frac{E^{\Omega_{2}}_{\Gamma_{2}^{\prime}}(\lambda)}{E^{\Omega_{2}}_{\Gamma_{2}}(\lambda)}\,. (83)
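As with M1M_{1}, Theorem 4.2 is simple to check numerically when Ω2\Omega_{2} degenerates to a single edge [0,s1][0,s_{1}] with zero potential and a Dirichlet condition at 0 (a minimal sketch under our own simplifying assumptions; the genuine star case requires the resolvent argument above):

```python
import math

# Assumed toy setting: Omega_2 is the single edge [0, s1] with zero potential and a
# Dirichlet condition at 0, so y(x) = sin(sqrt(lam) * x) carries the outer condition.
lam, s1 = 2.0, 0.3
k = math.sqrt(lam)

def y(x):  return math.sin(k * x)
def dy(x): return k * math.cos(k * x)

# phi, theta normalized at s1 as in (69); with a Dirichlet condition at s1,
# z_{j,j} coincides with phi restricted to Omega_2, as used in the proof.
def phi(x):  return math.sin(k * (x - s1)) / k
def dphi(x): return math.cos(k * (x - s1))
def theta(x):  return math.cos(k * (x - s1))
def dtheta(x): return -k * math.sin(k * (x - s1))

def W(f, df, g, dg, x):  # Wronskian, constant in x
    return f(x) * dg(x) - df(x) * g(x)

x = 0.11
E_gamma2  = W(y, dy, phi, dphi, x)      # Evans function of the Dirichlet problem
E_gamma2p = W(y, dy, theta, dtheta, x)  # Evans function with Neumann condition at s1

# Direct construction of M2 from (75): u = y / y(s1) solves the problem with u(s1) = 1
M2_direct = dy(s1) / y(s1)
M2_evans  = -E_gamma2p / E_gamma2       # Theorem 4.2
print(M2_direct, M2_evans)
```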

4.1.3 An Evans Function Relation

We are now prepared to prove Theorem 1.5.

Proof.

We know from equation (73) and Theorem 4.2 that we can represent M1M_{1} and M2M_{2} as the quotients of Evans functions:

M1(λ)=EΓ1Ω1(λ)EΓ1Ω1(λ),M2(λ)=EΓ2Ω2(λ)EΓ2Ω2(λ),\displaystyle\begin{split}M_{1}(\lambda)=\frac{E^{\Omega_{1}}_{\Gamma_{1}^{\prime}}(\lambda)}{E^{\Omega_{1}}_{\Gamma_{1}}(\lambda)}\,,\\ M_{2}(\lambda)=-\frac{E^{\Omega_{2}}_{\Gamma_{2}^{\prime}}(\lambda)}{E^{\Omega_{2}}_{\Gamma_{2}}(\lambda)}\,,\end{split} (84)

Then, multiplying M1+M2M_{1}+M_{2} by EΓ1Ω1EΓ2Ω2E^{\Omega_{1}}_{\Gamma_{1}}E^{\Omega_{2}}_{\Gamma_{2}} and applying equation (84), we see:

(M1(λ)+M2(λ))EΓ1Ω1(λ)EΓ2Ω2(λ)=EΓ1Ω1(λ)EΓ2Ω2(λ)EΓ2Ω2(λ)EΓ1Ω1(λ).(M_{1}(\lambda)+M_{2}(\lambda))E^{\Omega_{1}}_{\Gamma_{1}}(\lambda)E^{\Omega_{2}}_{\Gamma_{2}}(\lambda)=E^{\Omega_{1}}_{\Gamma_{1}^{\prime}}(\lambda)E^{\Omega_{2}}_{\Gamma_{2}}(\lambda)-E^{\Omega_{2}}_{\Gamma_{2}^{\prime}}(\lambda)E^{\Omega_{1}}_{\Gamma_{1}}(\lambda)\,. (85)

Since zj,j(xj,λ)z_{j,j}(x_{j},\lambda) is the solution of the initial value problem satisfying the Γ\Gamma condition associated with the vertex xj=jx_{j}=\ell_{j}, and ϕ\vec{\phi} and θ\vec{\theta} are as defined in equation (69), we can evaluate the 2×22\times 2 determinants EΓ1Ω1E^{\Omega_{1}}_{\Gamma_{1}} and EΓ1Ω1E^{\Omega_{1}}_{\Gamma_{1}^{\prime}} at xj=s1x_{j}=s_{1} to obtain:

EΓ1Ω1(λ)EΓ2Ω2(λ)EΓ2Ω2(λ)EΓ1Ω1(λ)\displaystyle E^{\Omega_{1}}_{\Gamma_{1}^{\prime}}(\lambda)E^{\Omega_{2}}_{\Gamma_{2}}(\lambda)-E^{\Omega_{2}}_{\Gamma_{2}^{\prime}}(\lambda)E^{\Omega_{1}}_{\Gamma_{1}}(\lambda)
=\displaystyle= |θj(s1,λ)zj,j(s1,λ)θj′(s1,λ)zj,j′(s1,λ)|EΓ2Ω2(λ)EΓ2Ω2(λ)|ϕj(s1,λ)zj,j(s1,λ)ϕj′(s1,λ)zj,j′(s1,λ)|\begin{vmatrix}\theta_{j}(s_{1},\lambda)&z_{j,j}(s_{1},\lambda)\\ \theta_{j}^{\prime}(s_{1},\lambda)&z_{j,j}^{\prime}(s_{1},\lambda)\end{vmatrix}E^{\Omega_{2}}_{\Gamma_{2}}(\lambda)-E^{\Omega_{2}}_{\Gamma_{2}^{\prime}}(\lambda)\begin{vmatrix}\phi_{j}(s_{1},\lambda)&z_{j,j}(s_{1},\lambda)\\ \phi_{j}^{\prime}(s_{1},\lambda)&z_{j,j}^{\prime}(s_{1},\lambda)\end{vmatrix}
=\displaystyle= zj,j(s1,λ)EΓ2Ω2(λ)+EΓ2Ω2(λ)zj,j(s1,λ).\displaystyle z_{j,j}^{\prime}(s_{1},\lambda)E^{\Omega_{2}}_{\Gamma_{2}}(\lambda)+E^{\Omega_{2}}_{\Gamma_{2}^{\prime}}(\lambda)z_{j,j}(s_{1},\lambda)\,. (86)

We recall that EΓ2Ω2E^{\Omega_{2}}_{\Gamma_{2}} and EΓ2Ω2E^{\Omega_{2}}_{\Gamma_{2}^{\prime}} are both determinants whose every column equals the same column in the fundamental matrix FF for the Γ\Gamma problem except the (n+j)(n+j)th, which is either ϕ\vec{\phi} or θ\vec{\theta}, respectively. By evaluating these determinants at xj=s1x_{j}=s_{1}, we can multiply the zj,jz^{\prime}_{j,j} and zj,jz_{j,j} terms into the (n+j)(n+j)th columns of EΓ2E_{\Gamma_{2}} and EΓ2E_{\Gamma_{2}^{\prime}}, respectively, turning the expression from equation (86) into the determinant of FF. So

(M1+M2)EΓ1Ω1EΓ2Ω2=EΓ1Ω1EΓ2Ω2EΓ2Ω2EΓ1Ω1=|F|=EΓΩ.(M_{1}+M_{2})E^{\Omega_{1}}_{\Gamma_{1}}E^{\Omega_{2}}_{\Gamma_{2}}=E^{\Omega_{1}}_{\Gamma_{1}^{\prime}}E^{\Omega_{2}}_{\Gamma_{2}}-E^{\Omega_{2}}_{\Gamma_{2}^{\prime}}E^{\Omega_{1}}_{\Gamma_{1}}=|F|=E^{\Omega}_{\Gamma}. (87)

Dividing this equation by EΓ1Ω1EΓ2Ω2E^{\Omega_{1}}_{\Gamma_{1}}E^{\Omega_{2}}_{\Gamma_{2}} proves our desired equality. ∎

Since there is a one-to-one correspondence between the zeros of EΓΩE^{\Omega}_{\Gamma} and the eigenvalues of HΓΩH^{\Omega}_{\Gamma} (including algebraic multiplicities), Theorem 1.7 clearly follows from Theorem 1.5.
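The resulting relation EΓΩ=EΓ1Ω1EΓ2Ω2(M1+M2)E^{\Omega}_{\Gamma}=E^{\Omega_{1}}_{\Gamma_{1}}E^{\Omega_{2}}_{\Gamma_{2}}(M_{1}+M_{2}) can be verified numerically in the simplest setting: a single edge [0,][0,\ell] with zero potential and Dirichlet conditions at both ends, split at an interior point ss. All names and parameter values below are our own illustrative choices:

```python
import math

# Assumed toy setting: Omega = [0, ell], zero potential, Dirichlet ends, cut at s.
lam, ell, s = 2.0, 1.0, 0.4
k = math.sqrt(lam)

# y carries the condition at 0, z the condition at ell; theta, phi normalized at s as in (69)
def y(x):  return math.sin(k * x)
def dy(x): return k * math.cos(k * x)
def z(x):  return math.sin(k * (ell - x))
def dz(x): return -k * math.cos(k * (ell - x))
def phi(x):  return math.sin(k * (x - s)) / k
def dphi(x): return math.cos(k * (x - s))
def theta(x):  return math.cos(k * (x - s))
def dtheta(x): return -k * math.sin(k * (x - s))

def W(f, df, g, dg, x):  # Wronskian, constant in x
    return f(x) * dg(x) - df(x) * g(x)

x = 0.63                                  # any point; all Wronskians are constant
E_full = W(y, dy, z, dz, x)               # Evans function of the unsplit problem
E1  = W(phi, dphi, z, dz, x)              # Dirichlet problem on [s, ell]
E1p = W(theta, dtheta, z, dz, x)          # Neumann-at-s problem on [s, ell]
E2  = W(y, dy, phi, dphi, x)              # Dirichlet problem on [0, s]
E2p = W(y, dy, theta, dtheta, x)          # Neumann-at-s problem on [0, s]

M1 = E1p / E1                             # equation (73)
M2 = -E2p / E2                            # Theorem 4.2
print(E_full, E1 * E2 * (M1 + M2))        # the two agree, as in Theorem 1.5
```

The agreement here is exact rather than approximate: expanding the right-hand side in Wronskians reproduces the Lagrange-identity cancellation used in the proof.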

4.2 Double Split on One Wire

Figure 3: Splitting Ω\Omega into three subgraphs with two cuts on one wire.

So far, our eigenvalue counting theorem has only been proven for cases in which Ω1\Omega_{1} and Ω2\Omega_{2} meet at one point. We will now extend this result to cases where we split twice on one wire.

Let s1s_{1} and s2s_{2} be interior points of the jjth wire of Ω\Omega such that s1>s2s_{1}>s_{2}. We can split Ω\Omega into three subgraphs as in Definition 1.8. Then by Theorem 1.5, we have that

EΓΩ=EΓ1Ω1EΓ2Ω2(M1+M2),E^{\Omega}_{\Gamma}=E^{\Omega_{1}}_{\Gamma_{1}}E^{\Omega_{2}}_{\Gamma_{2}}(M_{1}+M_{2}), (88)

where M1+M2M_{1}+M_{2} is the two-sided map constructed at s1s_{1}. In a later proof, it will be beneficial to break up maps and Evans functions built for Ω2\Omega_{2}. This will require the introduction of some new boundary conditions.

Definition 4.3.

Let Γ~1DD\tilde{\Gamma}_{1}^{DD}, Γ~1DN\tilde{\Gamma}_{1}^{DN}, Γ~1ND\tilde{\Gamma}_{1}^{ND}, and Γ~1NN\tilde{\Gamma}_{1}^{NN} be boundary conditions defined over Ω~1\tilde{\Omega}_{1}. A superscript of DD assigns the Dirichlet condition u(si)=0u(s_{i})=0, while a superscript of NN assigns the Neumann condition u(si)=0u^{\prime}(s_{i})=0, where ii corresponds to the position in which the letter sits. For example, Γ~1DN\tilde{\Gamma}_{1}^{DN} represents the boundary conditions u(s1)=u(s2)=0u(s_{1})=u^{\prime}(s_{2})=0. Note that Γ~1DD\tilde{\Gamma}_{1}^{DD} is identical to the set of conditions called Γ~1\tilde{\Gamma}_{1} in Definition 1.8. We also have Γ~2\tilde{\Gamma}_{2} and Γ~2\tilde{\Gamma}_{2}^{\prime} defined over Ω~2\tilde{\Omega}_{2}, which are identical to Γ2\Gamma_{2} at all common vertices. For Γ~2\tilde{\Gamma}_{2}, the Dirichlet condition uj(s2)=0u_{j}(s_{2})=0 is applied; for Γ~2\tilde{\Gamma}_{2}^{\prime}, the Neumann condition uj(s2)=0u_{j}^{\prime}(s_{2})=0 is applied.

Remark 4.4.

At this point we have a standard way of building operators and Evans functions based on sets of boundary conditions on subgraphs. Given boundary conditions Γ0\Gamma_{0} defined over some subgraph Ω\Omega^{*}, we can let HΩH^{\Omega^{*}} be the restriction of HΩH^{\Omega} to Ω\Omega^{*}. Then we can define the operator HΓ0ΩH^{\Omega^{*}}_{\Gamma_{0}} as follows:

HΓ0Ωu:=HΩu for udom(HΓ0Ω):={uH2(Ω):γΓ0u=0}.H^{\Omega^{*}}_{\Gamma_{0}}{\vec{u}}:=H^{\Omega^{*}}{\vec{u}}\text{ for }{\vec{u}}\in{\text{dom}\,}(H^{\Omega^{*}}_{\Gamma_{0}}):=\{{\vec{u}}\in H^{2}(\Omega^{*}):\gamma_{\Gamma_{0}}{\vec{u}}=\vec{0}\}.

We let the Evans function associated with HΓ0ΩH^{\Omega^{*}}_{\Gamma_{0}} be denoted by EΓ0ΩE^{\Omega^{*}}_{\Gamma_{0}}.

With this new notation, we can split Ω2\Omega_{2} at s2s_{2} to get the three desired subgraphs, and another application of Theorem 1.5 gives us that

EΓ2Ω2=EΓ~1DDΩ~1EΓ~2Ω~2(M~1+M~2),E^{\Omega_{2}}_{\Gamma_{2}}=E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}(\tilde{M}_{1}+\tilde{M}_{2}), (89)

where M~1+M~2\tilde{M}_{1}+\tilde{M}_{2} is a two-sided map defined at s2s_{2} with a Dirichlet condition at s1s_{1}.

Combining equations (88) and (89), we find that:

EΓΩ=EΓ1Ω1EΓ2Ω2(M1+M2)=EΓ1Ω1EΓ~1DDΩ~1EΓ~2Ω~2(M~1+M~2)(M1+M2).\displaystyle\begin{split}E^{\Omega}_{\Gamma}&=E^{\Omega_{1}}_{\Gamma_{1}}E^{\Omega_{2}}_{\Gamma_{2}}(M_{1}+M_{2})\\ &=E^{\Omega_{1}}_{\Gamma_{1}}E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}(\tilde{M}_{1}+\tilde{M}_{2})(M_{1}+M_{2}).\end{split} (90)
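Equation (90) can likewise be checked numerically in the single-edge toy setting (zero potential, Dirichlet ends, two interior cuts s2<s1s_{2}<s_{1}); this is an illustrative sketch with our own naming, not the paper's general construction:

```python
import math

# Assumed toy setting: a single edge [0, ell], zero potential, Dirichlet ends,
# cut twice at interior points 0 < s2 < s1 < ell.
lam, ell, s1, s2 = 2.0, 1.0, 0.7, 0.3
k = math.sqrt(lam)

def y(x):  return math.sin(k * x)            # carries the condition at 0
def dy(x): return k * math.cos(k * x)
def z(x):  return math.sin(k * (ell - x))    # carries the condition at ell
def dz(x): return -k * math.cos(k * (ell - x))

def phi(s):    # Dirichlet-normalized solution at a cut point s, as in (95)
    return (lambda x: math.sin(k * (x - s)) / k,
            lambda x: math.cos(k * (x - s)))
def theta(s):  # Neumann-normalized solution at a cut point s, as in (95)
    return (lambda x: math.cos(k * (x - s)),
            lambda x: -k * math.sin(k * (x - s)))

phi1, dphi1 = phi(s1); theta1, dtheta1 = theta(s1)
phi2, dphi2 = phi(s2); theta2, dtheta2 = theta(s2)

def W(f, df, g, dg, x=0.5):   # Wronskian, constant in x
    return f(x) * dg(x) - df(x) * g(x)

E_full = W(y, dy, z, dz)
E1  = W(phi1, dphi1, z, dz)                    # [s1, ell], Dirichlet at s1
Et1 = W(phi2, dphi2, phi1, dphi1)              # [s2, s1], Dirichlet at both cuts
Et2 = W(y, dy, phi2, dphi2)                    # [0, s2], Dirichlet at s2

M1  = W(theta1, dtheta1, z, dz) / E1                          # right map at s1
M2  = -W(y, dy, theta1, dtheta1) / W(y, dy, phi1, dphi1)      # left map at s1, over [0, s1]
Mt1 = W(theta2, dtheta2, phi1, dphi1) / Et1                   # right map at s2, equation (97)
Mt2 = -W(y, dy, theta2, dtheta2) / Et2                        # left map at s2

print(E_full, E1 * Et1 * Et2 * (Mt1 + Mt2) * (M1 + M2))
```

The factorization amounts to applying the single-split identity once at s1s_{1} and once more, inside Ω2\Omega_{2}, at s2s_{2}.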

We will now define the maps 1\mathcal{M}_{1} and 2\mathcal{M}_{2}, the two-dimensional extensions of the one-sided maps.

Definition 4.5.

Let λρ(HΓ~1DDΩ~1)\lambda\in\rho(H^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}) and fix f2{\vec{f}}\in{\mathbb{C}}^{2}. Then, we define 1(λ):22\mathcal{M}_{1}(\lambda):{\mathbb{C}}^{2}\rightarrow{\mathbb{C}}^{2} as follows:

1(λ)f:=[u(s1,λ)u(s2,λ)],\mathcal{M}_{1}(\lambda){\vec{f}}:=\begin{bmatrix}u^{\prime}(s_{1},\lambda)\\ -u^{\prime}(s_{2},\lambda)\end{bmatrix},

where uu is the unique solution to the boundary value problem

{(HΩ~1λ)u=0,γΓ~1DDu=f.\begin{cases}(H^{\tilde{\Omega}_{1}}-\lambda)u=0,\\ \gamma_{\tilde{\Gamma}_{1}^{DD}}u={\vec{f}}.\end{cases}

Likewise, for λρ(HΓ1Ω1)ρ(HΓ~2Ω~2)\lambda\in\rho(H^{\Omega_{1}}_{\Gamma_{1}})\cap\rho(H^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}) and f2{\vec{f}}\in{\mathbb{C}}^{2}, we can instead define 2(λ):22\mathcal{M}_{2}(\lambda):{\mathbb{C}}^{2}\rightarrow{\mathbb{C}}^{2} as follows

2(λ)f:=[u(s1,λ)uj(s2,λ)],\mathcal{M}_{2}(\lambda){\vec{f}}:=\begin{bmatrix}-u^{\prime}(s_{1},\lambda)\\ u_{j}^{\prime}(s_{2},\lambda)\end{bmatrix},

where uu is the unique solution to the boundary value problem

{(HΩ1λ)u=0,γΓ1u=[0f1].\begin{cases}(H^{\Omega_{1}}-\lambda)u=0,\\ \gamma_{\Gamma_{1}}u=\begin{bmatrix}0\\ f_{1}\end{bmatrix}.\end{cases}

and uju_{j} is the jjth component of the unique solution u{\vec{u}} to the boundary value problem

{(HΩ~2λ)u=0,uj(s2)=f2,u satisfies all other Γ~2 conditions.\begin{cases}(H^{\tilde{\Omega}_{2}}-\lambda){\vec{u}}=\vec{0},\\ u_{j}(s_{2})=f_{2},\\ \text{${\vec{u}}$ satisfies all other $\tilde{\Gamma}_{2}$ conditions.}\end{cases}
Remark 4.6.

1\mathcal{M}_{1} and 2\mathcal{M}_{2} are meant to be 2×22\times 2 analogs of the one-sided maps M1M_{1} and M2M_{2}. We can utilize some of the previously constructed maps to construct 2\mathcal{M}_{2} as follows:

2(λ):=[M1(λ)00M~2(λ)].\mathcal{M}_{2}(\lambda):=\begin{bmatrix}M_{1}(\lambda)&0\\ 0&\tilde{M}_{2}(\lambda)\end{bmatrix}. (91)

Additionally, we can get a more concrete handle on 1\mathcal{M}_{1} using the following construction

1(λ):=[u(s1,λ)u(s1,λ)w(s1,λ)w(s2,λ)u(s2,λ)u(s1,λ)w(s2,λ)w(s2,λ)],\mathcal{M}_{1}(\lambda):=\begin{bmatrix}\frac{u^{\prime}(s_{1},\lambda)}{u(s_{1},\lambda)}&\frac{w^{\prime}(s_{1},\lambda)}{w(s_{2},\lambda)}\\ -\frac{u^{\prime}(s_{2},\lambda)}{u(s_{1},\lambda)}&-\frac{w^{\prime}(s_{2},\lambda)}{w(s_{2},\lambda)}\end{bmatrix}, (92)

where uu and ww are the solutions to the boundary value problems:

{(HΩ~1λ)u=0,u(s1)=1,u(s2)=0,{(HΩ~1λ)w=0,w(s1)=0,w(s2)=1.\begin{split}\begin{cases}(H^{\tilde{\Omega}_{1}}-\lambda)u=0,\\ u(s_{1})=1,\\ u(s_{2})=0,\end{cases}\\ \begin{cases}(H^{\tilde{\Omega}_{1}}-\lambda)w=0,\\ w(s_{1})=0,\\ w(s_{2})=1.\end{cases}\end{split} (93)

Note that these constructions of 1\mathcal{M}_{1} and 2\mathcal{M}_{2} are identical to those produced by the definitions.

Theorem 4.7.

Suppose Ω\Omega is split (as described in Definition 1.8) into three quantum subgraphs Ω1\Omega_{1}, Ω~1\tilde{\Omega}_{1}, and Ω~2\tilde{\Omega}_{2}, at the points 0<s2<s1<j0<s_{2}<s_{1}<\ell_{j} for some 1jn1\leq j\leq n. Additionally, suppose that λρ(HΓ1Ω1)ρ(HΓ~1Ω~1)ρ(HΓ~2Ω~2)\lambda\in\rho(H^{\Omega_{1}}_{\Gamma_{1}})\cap\rho(H^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}})\cap\rho(H^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}), and let 1+2\mathcal{M}_{1}+\mathcal{M}_{2} be the two-sided map associated with s1s_{1} and s2s_{2}. Then

EΓΩ(λ)EΓ1Ω1(λ)EΓ~1Ω~1(λ)EΓ~2Ω~2(λ)=|(1+2)(λ)|,\frac{E^{\Omega}_{\Gamma}(\lambda)}{E^{\Omega_{1}}_{\Gamma_{1}}(\lambda)E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}(\lambda)E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}(\lambda)}=|(\mathcal{M}_{1}+\mathcal{M}_{2})(\lambda)|, (94)

where EΓΩE^{\Omega}_{\Gamma}, EΓ1Ω1E^{\Omega_{1}}_{\Gamma_{1}}, EΓ~1Ω~1E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}, and EΓ~2Ω~2E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}} are the Evans functions associated with the operators HΓΩH^{\Omega}_{\Gamma}, HΓ1Ω1H^{\Omega_{1}}_{\Gamma_{1}}, HΓ~1Ω~1H^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}, and HΓ~2Ω~2H^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}, respectively.

Proof.

The desired result can be obtained from equation (90). We claim that the corrective term (M~1+M~2)(M1+M2)(\tilde{M}_{1}+\tilde{M}_{2})(M_{1}+M_{2}) is precisely the determinant of the two-sided map 1+2\mathcal{M}_{1}+\mathcal{M}_{2}. First, we construct several key functions. Let ϕ1,ϕ2,θ1,θ2\phi_{1},\phi_{2},\theta_{1},\theta_{2} all be solutions to (HΩ~1λ)u=0(H^{\tilde{\Omega}_{1}}-\lambda)u=0 defined over Ω~1\tilde{\Omega}_{1}, which satisfy the following conditions:

ϕ1(s1)=0\displaystyle\phi_{1}(s_{1})=0 θ1(s1)=1\displaystyle\theta_{1}(s_{1})=1 ϕ2(s2)=0\displaystyle\phi_{2}(s_{2})=0 θ2(s2)=1,\displaystyle\theta_{2}(s_{2})=1,
ϕ1(s1)=1\displaystyle\phi_{1}^{\prime}(s_{1})=1 θ1(s1)=0\displaystyle\theta_{1}^{\prime}(s_{1})=0 ϕ2(s2)=1\displaystyle\phi_{2}^{\prime}(s_{2})=1 θ2(s2)=0.\displaystyle\theta_{2}^{\prime}(s_{2})=0. (95)

Consider the functions uu and ww from equation (93). Since uu satisfies the Dirichlet condition at s2s_{2} but not s1s_{1}, it is parallel to ϕ2\phi_{2}, so there exists σ2\sigma_{2} such that u(xj)=σ2ϕ2(xj)u(x_{j})=\sigma_{2}\phi_{2}(x_{j}). Similarly, since ww satisfies the Dirichlet condition at s1s_{1} but not s2s_{2}, there exists some σ1\sigma_{1} such that w(xj)=σ1ϕ1(xj)w(x_{j})=\sigma_{1}\phi_{1}(x_{j}). Then we can rewrite 1\mathcal{M}_{1} as

1(λ)\displaystyle\mathcal{M}_{1}(\lambda) =[u(s1,λ)u(s1,λ)w(s1,λ)w(s2,λ)u(s2,λ)u(s1,λ)w(s2,λ)w(s2,λ)]=[ϕ2(s1,λ)ϕ2(s1,λ)ϕ1(s1,λ)ϕ1(s2,λ)ϕ2(s2,λ)ϕ2(s1,λ)ϕ1(s2,λ)ϕ1(s2,λ)]\displaystyle=\begin{bmatrix}\frac{u^{\prime}(s_{1},\lambda)}{u(s_{1},\lambda)}&\frac{w^{\prime}(s_{1},\lambda)}{w(s_{2},\lambda)}\\ -\frac{u^{\prime}(s_{2},\lambda)}{u(s_{1},\lambda)}&-\frac{w^{\prime}(s_{2},\lambda)}{w(s_{2},\lambda)}\end{bmatrix}=\begin{bmatrix}\frac{\phi_{2}^{\prime}(s_{1},\lambda)}{\phi_{2}(s_{1},\lambda)}&\frac{\phi_{1}^{\prime}(s_{1},\lambda)}{\phi_{1}(s_{2},\lambda)}\\ -\frac{\phi_{2}^{\prime}(s_{2},\lambda)}{\phi_{2}(s_{1},\lambda)}&-\frac{\phi_{1}^{\prime}(s_{2},\lambda)}{\phi_{1}(s_{2},\lambda)}\end{bmatrix}
=[W(θ1,ϕ2)W(ϕ1,ϕ2)1W(ϕ1,ϕ2)1W(ϕ1,ϕ2)W(ϕ1,θ2)W(ϕ1,ϕ2)]=[W(ϕ2,θ1)W(ϕ2,ϕ1)1W(ϕ2,ϕ1)1W(ϕ2,ϕ1)W(θ2,ϕ1)W(ϕ2,ϕ1)].\displaystyle=\begin{bmatrix}-\frac{W(\theta_{1},\phi_{2})}{W(\phi_{1},\phi_{2})}&\frac{1}{W(\phi_{1},\phi_{2})}\\ \frac{1}{W(\phi_{1},\phi_{2})}&\frac{W(\phi_{1},\theta_{2})}{W(\phi_{1},\phi_{2})}\end{bmatrix}=\begin{bmatrix}-\frac{W(\phi_{2},\theta_{1})}{W(\phi_{2},\phi_{1})}&-\frac{1}{W(\phi_{2},\phi_{1})}\\ -\frac{1}{W(\phi_{2},\phi_{1})}&\frac{W(\theta_{2},\phi_{1})}{W(\phi_{2},\phi_{1})}\end{bmatrix}. (96)

We quickly note that, by definition,

EΓ~1DDΩ~1=W(ϕ2,ϕ1)\displaystyle E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}=W(\phi_{2},\phi_{1}) EΓ~1DNΩ~1=W(θ2,ϕ1),\displaystyle E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DN}}=W(\theta_{2},\phi_{1}),
EΓ~1NDΩ~1=W(ϕ2,θ1)\displaystyle E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}=W(\phi_{2},\theta_{1}) EΓ~1NNΩ~1=W(θ2,θ1).\displaystyle E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{NN}}=W(\theta_{2},\theta_{1}).

By equation (73) adjusted to a subgraph Ω~1\tilde{\Omega}_{1}, we also have that

M~1=EΓ~1DNΩ~1EΓ~1DDΩ~1=W(θ2,ϕ1)W(ϕ2,ϕ1).\tilde{M}_{1}=\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DN}}}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}}=\frac{W(\theta_{2},\phi_{1})}{W(\phi_{2},\phi_{1})}. (97)

We additionally note that M~1=w(s2,λ)w(s2,λ)\tilde{M}_{1}=-\frac{w^{\prime}(s_{2},\lambda)}{w(s_{2},\lambda)}, and can be substituted as such into 1\mathcal{M}_{1}. The final note we make is that

1=|1001|=|θ2(s2)ϕ2(s2)θ2(s2)ϕ2(s2)|=W(θ2,ϕ2)=θ2ϕ2θ2ϕ2.1=\begin{vmatrix}1&0\\ 0&1\end{vmatrix}=\begin{vmatrix}\theta_{2}(s_{2})&\phi_{2}(s_{2})\\ \theta_{2}^{\prime}(s_{2})&\phi_{2}^{\prime}(s_{2})\end{vmatrix}=W(\theta_{2},\phi_{2})=\theta_{2}\phi_{2}^{\prime}-\theta_{2}^{\prime}\phi_{2}. (98)

We can now begin calculating the determinant of the two-sided map.

|1+2|=(91),(4.2)|W(ϕ2,θ1)W(ϕ2,ϕ1)+M11W(ϕ2,ϕ1)1W(ϕ2,ϕ1)M~1+M~2|\displaystyle|\mathcal{M}_{1}+\mathcal{M}_{2}|\,\,\,\,\,\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:120},\eqref{eq:125}}}}}}{{=}}}\,\,\,\,\,\begin{vmatrix}-\frac{W(\phi_{2},\theta_{1})}{W(\phi_{2},\phi_{1})}+M_{1}&-\frac{1}{W(\phi_{2},\phi_{1})}\\ -\frac{1}{W(\phi_{2},\phi_{1})}&\tilde{M}_{1}+\tilde{M}_{2}\end{vmatrix}
=(W(ϕ2,θ1)W(ϕ2,ϕ1)+M1)(M~1+M~2)1W(ϕ2,ϕ1)2\displaystyle=(-\frac{W(\phi_{2},\theta_{1})}{W(\phi_{2},\phi_{1})}+M_{1})(\tilde{M}_{1}+\tilde{M}_{2})-\frac{1}{W(\phi_{2},\phi_{1})^{2}}
=W(ϕ2,θ1)W(ϕ2,ϕ1)(M~1+M~2)+M1(M~1+M~2)1W(ϕ2,ϕ1)2\displaystyle=-\frac{W(\phi_{2},\theta_{1})}{W(\phi_{2},\phi_{1})}(\tilde{M}_{1}+\tilde{M}_{2})+M_{1}(\tilde{M}_{1}+\tilde{M}_{2})-\frac{1}{W(\phi_{2},\phi_{1})^{2}}
=(97)W(ϕ2,θ1)W(ϕ2,ϕ1)W(θ2,ϕ1)W(ϕ2,ϕ1)W(ϕ2,θ1)W(ϕ2,ϕ1)M~2+M1(M~1+M~2)1W(ϕ2,ϕ1)2\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:127}}}}}}{{=}}}-\frac{W(\phi_{2},\theta_{1})}{W(\phi_{2},\phi_{1})}\frac{W(\theta_{2},\phi_{1})}{W(\phi_{2},\phi_{1})}-\frac{W(\phi_{2},\theta_{1})}{W(\phi_{2},\phi_{1})}\tilde{M}_{2}+M_{1}(\tilde{M}_{1}+\tilde{M}_{2})-\frac{1}{W(\phi_{2},\phi_{1})^{2}}
=EΓ~1NDΩ~1EΓ~1DDΩ~1M~2+M1(M~1+M~2)1EΓ~1DDΩ~1(W(ϕ2,θ1)W(θ2,ϕ1)+1W(ϕ2,ϕ1))\displaystyle=-\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}}\tilde{M}_{2}+M_{1}(\tilde{M}_{1}+\tilde{M}_{2})-\frac{1}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}}\left(\frac{W(\phi_{2},\theta_{1})W(\theta_{2},\phi_{1})+1}{W(\phi_{2},\phi_{1})}\right)
=EΓ~1NDΩ~1EΓ~1DDΩ~1M~2+M1(M~1+M~2)1EΓ~1DDΩ~1((ϕ2(s1))θ2(s1)+1ϕ2(s1))\displaystyle=-\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}}\tilde{M}_{2}+M_{1}(\tilde{M}_{1}+\tilde{M}_{2})-\frac{1}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}}\left(\frac{(-\phi_{2}^{\prime}(s_{1}))\theta_{2}(s_{1})+1}{\phi_{2}(s_{1})}\right)
=(98)EΓ~1NDΩ~1EΓ~1DDΩ~1M~2+M1(M~1+M~2)1EΓ~1DDΩ~1(ϕ2(s1)θ2(s1)ϕ2(s1))\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:129}}}}}}{{=}}}-\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}}\tilde{M}_{2}+M_{1}(\tilde{M}_{1}+\tilde{M}_{2})-\frac{1}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}}\left(\frac{-\phi_{2}(s_{1})\theta_{2}^{\prime}(s_{1})}{\phi_{2}(s_{1})}\right)
=EΓ~1NDΩ~1EΓ~1DDΩ~1M~2+M1(M~1+M~2)EΓ~1NNΩ~1EΓ~1DDΩ~1.\displaystyle=-\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}}\tilde{M}_{2}+M_{1}(\tilde{M}_{1}+\tilde{M}_{2})-\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{NN}}}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}}. (99)

On the other hand, we can also simplify the corrective term. By Theorem 4.2, we have that M2=EΓ2Ω2EΓ2Ω2M_{2}=-\frac{E^{\Omega_{2}}_{\Gamma_{2}^{\prime}}}{E^{\Omega_{2}}_{\Gamma_{2}}}, which can be further decomposed. In particular, another application of Theorem 1.5 allows us to obtain

EΓ2Ω2=EΓ~1NDΩ~1EΓ~2Ω~2(M^1+M^2),E^{\Omega_{2}}_{\Gamma_{2}^{\prime}}=E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}(\hat{M}_{1}+\hat{M}_{2}), (100)

where M^1+M^2\hat{M}_{1}+\hat{M}_{2} is the associated two-sided map at s2s_{2} (here there is a Neumann condition at s1s_{1}). Observe that M^2=M~2\hat{M}_{2}=\tilde{M}_{2} (the reasoning is clear from Figure 3). Thus, by equations (89) and (100) we have

M2=EΓ2Ω2EΓ2Ω2=EΓ~1NDΩ~1EΓ~2Ω~2(M^1+M^2)EΓ~1DDΩ~1EΓ~2Ω~2(M~1+M~2)=EΓ~1NDΩ~1(M^1+M~2)EΓ~1DDΩ~1(M~1+M~2).M_{2}=-\frac{E^{\Omega_{2}}_{\Gamma_{2}^{\prime}}}{E^{\Omega_{2}}_{\Gamma_{2}}}=-\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}(\hat{M}_{1}+\hat{M}_{2})}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}(\tilde{M}_{1}+\tilde{M}_{2})}=-\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}(\hat{M}_{1}+\tilde{M}_{2})}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}(\tilde{M}_{1}+\tilde{M}_{2})}. (101)

Finally, noting that, by equation (73) adjusted to a subgraph Ω~1\tilde{\Omega}_{1}, M^1\hat{M}_{1} can be rewritten as Evans functions as follows:

M^1=EΓ~1NNΩ~1EΓ~1NDΩ~1=W(θ2,θ1)W(ϕ2,θ1).\hat{M}_{1}=\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{NN}}}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}}=\frac{W(\theta_{2},\theta_{1})}{W(\phi_{2},\theta_{1})}. (102)

We can decompose the corrective term into a workable state. Indeed,

(M1+M2)(M~1+M~2)\displaystyle(M_{1}+M_{2})(\tilde{M}_{1}+\tilde{M}_{2}) =(101)(M1EΓ~1NDΩ~1(M^1+M~2)EΓ~1DDΩ~1(M~1+M~2))(M~1+M~2)\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:131}}}}}}{{=}}}\left(M_{1}-\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}(\hat{M}_{1}+\tilde{M}_{2})}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}(\tilde{M}_{1}+\tilde{M}_{2})}\right)(\tilde{M}_{1}+\tilde{M}_{2})
=M1(M~1+M~2)EΓ~1NDΩ~1(M^1+M~2)EΓ~1DDΩ~1\displaystyle=M_{1}(\tilde{M}_{1}+\tilde{M}_{2})-\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}(\hat{M}_{1}+\tilde{M}_{2})}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}}
=M1(M~1+M~2)EΓ~1NDΩ~1M~2EΓ~1DDΩ~1EΓ~1NDΩ~1M^1EΓ~1DDΩ~1\displaystyle=M_{1}(\tilde{M}_{1}+\tilde{M}_{2})-\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}\tilde{M}_{2}}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}}-\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}\hat{M}_{1}}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}}
=(102)M1(M~1+M~2)EΓ~1NDΩ~1M~2EΓ~1DDΩ~1EΓ~1NDΩ~1EΓ~1NNΩ~1EΓ~1NDΩ~1EΓ~1DDΩ~1\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:128}}}}}}{{=}}}M_{1}(\tilde{M}_{1}+\tilde{M}_{2})-\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}\tilde{M}_{2}}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}}-\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{NN}}}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}}}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}}
=M1(M~1+M~2)EΓ~1NDΩ~1M~2EΓ~1DDΩ~1EΓ~1NNΩ~1EΓ~1DDΩ~1.\displaystyle=M_{1}(\tilde{M}_{1}+\tilde{M}_{2})-\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{ND}}\tilde{M}_{2}}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}}-\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{NN}}}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}^{DD}}}. (103)

Comparing equations (99) and (103), we indeed find that $|\mathcal{M}_{1}+\mathcal{M}_{2}|=(M_{1}+M_{2})(\tilde{M}_{1}+\tilde{M}_{2})$ as claimed. This means we can substitute $|\mathcal{M}_{1}+\mathcal{M}_{2}|$ into equation (90), completing the proof of this theorem. ∎

4.3 Splitting on Two Wires

Figure 4: Splitting $\Omega$ into three subgraphs with two cuts on separate wires.

Suppose we wish to split Ω\Omega into three subgraphs, where the cuts are made on two different wires. We can extend our Evans function equivalence to this case as well.

Without loss of generality, we assume that these cuts are made on the first and second wires at vertices $s_{1}$ and $s_{2}$ satisfying $0<s_{1}<\ell_{1}$ and $0<s_{2}<\ell_{2}$. The first split is done on the first wire, splitting $\Omega$ into $\Omega_{1}$ and $\Omega_{2}$. We define our usual $\Gamma_{1}$, $\Gamma_{2}$, $\Gamma_{1}^{\prime}$, and $\Gamma_{2}^{\prime}$ conditions as described in Remark 1.3. The second split is done on the second wire, splitting $\Omega_{2}$ into $\tilde{\Omega}_{1}$ and $\tilde{\Omega}_{2}$. Over these subgraphs we define six sets of boundary conditions: $\tilde{\Gamma}_{1}$, $\tilde{\Gamma}_{1}^{\prime}$, $\tilde{\Gamma}_{2}^{DD}$, $\tilde{\Gamma}_{2}^{DN}$, $\tilde{\Gamma}_{2}^{ND}$, and $\tilde{\Gamma}_{2}^{NN}$. Both $\tilde{\Gamma}_{1}$ and $\tilde{\Gamma}_{1}^{\prime}$ include the $\Gamma_{2}$ condition at $\ell_{2}$, but $\tilde{\Gamma}_{1}$ includes the Dirichlet condition $u_{2}(s_{2})=0$ at $s_{2}$, while $\tilde{\Gamma}_{1}^{\prime}$ includes the Neumann condition $u_{2}^{\prime}(s_{2})=0$. The various $\tilde{\Gamma}_{2}$ sets share $\Gamma_{2}$ conditions at all common vertices between $\Omega_{2}$ and $\tilde{\Omega}_{2}$ except $s_{1}$. The condition at $s_{i}$ is either Dirichlet, $u_{i}(s_{i})=0$ (associated with superscript $D$), or Neumann, $u_{i}^{\prime}(s_{i})=0$ (associated with superscript $N$), where $i$ is determined by the position of the signifying letter in the superscript. For example, $\tilde{\Gamma}_{2}^{ND}$ includes a Neumann condition at $s_{1}$ and a Dirichlet condition at $s_{2}$. Note that $\tilde{\Gamma}_{2}^{DD}$ is exactly the same as $\tilde{\Gamma}_{2}$ as defined in Definition 1.8. We define $H$ operators and corresponding Evans functions for all of these boundary conditions according to Remark 4.4.

We observe that two applications of Theorem 1.5 give us

EΓΩ=EΓ1Ω1EΓ2Ω2(M1+M2),E^{\Omega}_{\Gamma}=E^{\Omega_{1}}_{\Gamma_{1}}E^{\Omega_{2}}_{\Gamma_{2}}(M_{1}+M_{2}), (104)

and

EΓ2Ω2=EΓ~1Ω~1EΓ~2DDΩ~2(M~1+M~2),E^{\Omega_{2}}_{\Gamma_{2}}=E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}(\tilde{M}_{1}+\tilde{M}_{2}), (105)

which combine to give us

EΓΩ=EΓ1Ω1EΓ~1Ω~1EΓ~2DDΩ~2(M~1+M~2)(M1+M2).E^{\Omega}_{\Gamma}=E^{\Omega_{1}}_{\Gamma_{1}}E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}(\tilde{M}_{1}+\tilde{M}_{2})(M_{1}+M_{2}).

As in the previous section, we can extend our counting equivalence if the following equality holds: |1+2|=(M~1+M~2)(M1+M2)|\mathcal{M}_{1}+\mathcal{M}_{2}|=(\tilde{M}_{1}+\tilde{M}_{2})(M_{1}+M_{2}). Of course, for this new case we need to slightly adjust the definitions of 1\mathcal{M}_{1} and 2\mathcal{M}_{2} from those in Definition 4.5.

Definition 4.8.

Let λρ(HΓ1Ω1)ρ(HΓ~1Ω~1)\lambda\in\rho(H^{\Omega_{1}}_{\Gamma_{1}})\cap\rho(H^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}) and f2{\vec{f}}\in{\mathbb{C}}^{2}. Then we can define 1(λ):22\mathcal{M}_{1}(\lambda):{\mathbb{C}}^{2}\rightarrow{\mathbb{C}}^{2} as follows:

1(λ)f:=[u1(s1,λ)u2(s2,λ)],\mathcal{M}_{1}(\lambda){\vec{f}}:=\begin{bmatrix}-u_{1}^{\prime}(s_{1},\lambda)\\ -u_{2}^{\prime}(s_{2},\lambda)\end{bmatrix},

where u{\vec{u}} solves

{(HΩ1λ)u1=0,(HΩ~1λ)u2=0,u1(s1)=f1,u2(s2)=f2,u satisfies all other Γ1 and Γ~1 conditions.\begin{cases}(H^{\Omega_{1}}-\lambda)u_{1}=0,\\ (H^{\tilde{\Omega}_{1}}-\lambda)u_{2}=0,\\ u_{1}(s_{1})=f_{1},\\ u_{2}(s_{2})=f_{2},\\ \text{${\vec{u}}$ satisfies all other $\Gamma_{1}$ and $\tilde{\Gamma}_{1}$ conditions}.\end{cases}

Note that in this case u{\vec{u}} consists of only two components, since its domain is two disconnected wires.

Likewise, for λρ(HΓ~2DDΩ~2)\lambda\in\rho(H^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}) and f2{\vec{f}}\in{\mathbb{C}}^{2}, we define 2(λ):22\mathcal{M}_{2}(\lambda):{\mathbb{C}}^{2}\rightarrow{\mathbb{C}}^{2} as follows:

2(λ)f:=[u1(s1,λ)u2(s2,λ)],\mathcal{M}_{2}(\lambda){\vec{f}}:=\begin{bmatrix}u_{1}^{\prime}(s_{1},\lambda)\\ u_{2}^{\prime}(s_{2},\lambda)\end{bmatrix},

where u{\vec{u}} is the unique solution to the boundary value problem

\begin{cases}(H^{\tilde{\Omega}_{2}}-\lambda){\vec{u}}=\vec{0},\\ u_{1}(s_{1})=f_{1},\\ u_{2}(s_{2})=f_{2},\\ \text{${\vec{u}}$ satisfies all other $\tilde{\Gamma}_{2}^{DD}$ conditions}.\end{cases}
Remark 4.9.

We can find a helpful matrix representation of 2\mathcal{M}_{2} by observing its action on the standard 2{\mathbb{C}}^{2} basis vectors e1\vec{e}_{1} and e2\vec{e}_{2}. That is, to construct 2\mathcal{M}_{2}, we introduce the functions u{\vec{u}} and w{\vec{w}} over Ω~2\tilde{\Omega}_{2} that represent the unique solutions to the following boundary value problems:

{(HΩ~2λ)u=0,u1(s1)=1,u2(s2)=0,u satisfies all Γ conditions at the origin,\begin{cases}(H^{\tilde{\Omega}_{2}}-\lambda){\vec{u}}=\vec{0},\\ u_{1}(s_{1})=1,\\ u_{2}(s_{2})=0,\\ \text{${\vec{u}}$ satisfies all $\Gamma$ conditions at the origin},\end{cases} (106)

and

{(HΩ~2λ)w=0,w1(s1)=0,w2(s2)=1,w satisfies all Γ conditions at the origin.\begin{cases}(H^{\tilde{\Omega}_{2}}-\lambda){\vec{w}}=\vec{0},\\ w_{1}(s_{1})=0,\\ w_{2}(s_{2})=1,\\ \text{${\vec{w}}$ satisfies all $\Gamma$ conditions at the origin}.\end{cases} (107)

Then 2\mathcal{M}_{2} can be constructed as follows:

2(λ)=[u1(s1,λ)u1(s1,λ)w1(s1,λ)w2(s2,λ)u2(s2,λ)u1(s1,λ)w2(s2,λ)w2(s2,λ)].\mathcal{M}_{2}(\lambda)=\begin{bmatrix}\frac{u_{1}^{\prime}(s_{1},\lambda)}{u_{1}(s_{1},\lambda)}&\frac{w_{1}^{\prime}(s_{1},\lambda)}{w_{2}(s_{2},\lambda)}\\ \frac{u_{2}^{\prime}(s_{2},\lambda)}{u_{1}(s_{1},\lambda)}&\frac{w_{2}^{\prime}(s_{2},\lambda)}{w_{2}(s_{2},\lambda)}\end{bmatrix}. (108)
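To make this construction concrete, here is a minimal numerical sketch for a hypothetical two-edge graph $\tilde{\Omega}_2$ with zero potential and Kirchhoff conditions at the origin (both are illustrative assumptions, not part of the general setup). With $\lambda=k^{2}$ the solutions are trigonometric, and $\mathcal{M}_2(\lambda)$ is assembled column by column from the solutions $\vec{u}$ and $\vec{w}$:

```python
import numpy as np

# Illustrative sketch: Omega~2 taken to be two edges [0, s1] and [0, s2]
# joined at the origin, with q = 0 and Kirchhoff conditions there (assumed
# choices).  With lam = k^2, write u_i(x) = a_i cos(kx) + b_i sin(kx);
# continuity gives a_1 = a_2 = a, and u_1'(0) + u_2'(0) = 0 gives b_2 = -b_1.
k, s1, s2 = 1.3, 0.7, 1.1

def neumann_data(f1, f2):
    # Solve the 2x2 system u_1(s1) = f1, u_2(s2) = f2 for (a, b), then
    # return the derivatives (u_1'(s1), u_2'(s2)) -- one column of M_2.
    A = np.array([[np.cos(k*s1),  np.sin(k*s1)],
                  [np.cos(k*s2), -np.sin(k*s2)]])
    a, b = np.linalg.solve(A, [f1, f2])
    du1 = k*(-a*np.sin(k*s1) + b*np.cos(k*s1))   # u_1'(s1)
    du2 = k*(-a*np.sin(k*s2) - b*np.cos(k*s2))   # u_2'(s2), using b_2 = -b
    return du1, du2

# Columns: action on e_1 (the function u) and on e_2 (the function w).
M2 = np.column_stack([neumann_data(1.0, 0.0), neumann_data(0.0, 1.0)])
print(M2)
```

In this free case both off-diagonal entries equal $-k/\sin(k(s_1+s_2))$, and the matrix comes out symmetric, as expected of a Dirichlet-to-Neumann map.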

Also, we will build 1\mathcal{M}_{1} in terms of maps already encountered:

1(λ)=[M1(λ)00M~1(λ)].\mathcal{M}_{1}(\lambda)=\begin{bmatrix}M_{1}(\lambda)&0\\ 0&\tilde{M}_{1}(\lambda)\end{bmatrix}.

We now have all the necessary objects defined, and can prove our extension of Theorem 1.5.

Theorem 4.10.

Suppose $\Omega$ is split (as described in Definition 1.8) into three quantum subgraphs $\Omega_{1}$, $\tilde{\Omega}_{1}$, and $\tilde{\Omega}_{2}$, at non-vertex points $s_{1}$ and $s_{2}$ on separate edges of $\Omega$. Additionally, suppose that $\lambda\in\rho(H^{\Omega_{1}}_{\Gamma_{1}})\cap\rho(H^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}})\cap\rho(H^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}})$, and let $\mathcal{M}_{1}+\mathcal{M}_{2}$ be the two-sided map associated with $s_{1}$ and $s_{2}$. Then

EΓΩ(λ)EΓ1Ω1(λ)EΓ~1Ω~1(λ)EΓ~2Ω~2(λ)=|(1+2)(λ)|,\frac{E^{\Omega}_{\Gamma}(\lambda)}{E^{\Omega_{1}}_{\Gamma_{1}}(\lambda)E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}(\lambda)E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}(\lambda)}=|(\mathcal{M}_{1}+\mathcal{M}_{2})(\lambda)|, (109)

where EΓΩE^{\Omega}_{\Gamma}, EΓ1Ω1E^{\Omega_{1}}_{\Gamma_{1}}, EΓ~1Ω~1E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}, and EΓ~2Ω~2E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}} are the Evans functions associated with the operators HΓΩH^{\Omega}_{\Gamma}, HΓ1Ω1H^{\Omega_{1}}_{\Gamma_{1}}, HΓ~1Ω~1H^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}, and HΓ~2Ω~2H^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}, respectively.

Proof.

Consider the w{\vec{w}} and u{\vec{u}} functions as discussed in Remark 4.9. We note that, by Theorem 4.2 adjusted to a subgraph Ω~2\tilde{\Omega}_{2},

w2(s2,λ)w2(s2,λ)=M~2(λ)=EΓ~2DNΩ~2(λ)EΓ~2DDΩ~2(λ).\frac{w_{2}^{\prime}(s_{2},\lambda)}{w_{2}(s_{2},\lambda)}=\tilde{M}_{2}(\lambda)=-\frac{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DN}}(\lambda)}{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}(\lambda)}. (110)

Additionally, walking through the argument leading to equation (83), we find that

u1(s1,λ)u1(s1,λ)=EΓ~2NDΩ~2(λ)EΓ~2DDΩ~2(λ).\frac{u_{1}^{\prime}(s_{1},\lambda)}{u_{1}(s_{1},\lambda)}=-\frac{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}(\lambda)}{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}(\lambda)}. (111)

We still need to compute the off-diagonal entries of $\mathcal{M}_{2}$. By applying Theorem 3.4 on $\tilde{\Omega}_{2}$ with $i=1$, $j=2$, and $\tilde{\Gamma}_{2}$ boundary conditions, we find $u_{2}$ to be

u2(x2,λ)=z2,2(x2,λ)D2(λ)|𝒞1(λ,e2)||C(λ)|z1,1(s1,λ),u_{2}(x_{2},\lambda)=-\frac{z_{2,2}(x_{2},\lambda)}{D_{2}(\lambda)}\frac{|\mathcal{C}_{1}(\lambda,\vec{e}_{2})|}{|C(\lambda)|}z_{1,1}^{\prime}(s_{1},\lambda),

and the same theorem with i=2i=2, j=1j=1 tells us that w1w_{1} is

w1(x1,λ)=z1,1(x1,λ)D1(λ)|𝒞2(λ,e1)||C(λ)|z2,2(s2,λ).w_{1}(x_{1},\lambda)=-\frac{z_{1,1}(x_{1},\lambda)}{D_{1}(\lambda)}\frac{|\mathcal{C}_{2}(\lambda,\vec{e}_{1})|}{|C(\lambda)|}z_{2,2}^{\prime}(s_{2},\lambda).

There is only one component of the trace vectors taken in Theorem 3.4, for reasons nearly identical to those at the start of the proof of Theorem 4.2: because of the Dirichlet conditions of $\tilde{\Gamma}_{2}$ at $s_{1}$ and $s_{2}$, we find that $\vec{N}_{1}\vec{e}_{2}=-\vec{e}_{2}$ and $\vec{N}_{2}\vec{e}_{1}=-\vec{e}_{1}$, and therefore $(\vec{L}_{1}+\vec{M}_{1})\vec{e}_{2}=(\vec{L}_{2}+\vec{M}_{2})\vec{e}_{1}=\vec{0}$.

As in previous cases, we specify some well-behaved archetypal functions. Let ϕ1,θ1\phi_{1},\theta_{1} on edge ϵ1\epsilon_{1} and ϕ2,θ2\phi_{2},\theta_{2} on edge ϵ2\epsilon_{2} be components of solutions to (HΩλ)u=0(H^{\Omega}-\lambda){\vec{u}}=\vec{0} which satisfy the following conditions:

ϕ1(s1)=0\displaystyle\phi_{1}(s_{1})=0 θ1(s1)=1\displaystyle\theta_{1}(s_{1})=1 ϕ2(s2)=0\displaystyle\phi_{2}(s_{2})=0 θ2(s2)=1,\displaystyle\theta_{2}(s_{2})=1,
ϕ1(s1)=1\displaystyle\phi_{1}^{\prime}(s_{1})=1 θ1(s1)=0\displaystyle\theta_{1}^{\prime}(s_{1})=0 ϕ2(s2)=1\displaystyle\phi_{2}^{\prime}(s_{2})=1 θ2(s2)=0.\displaystyle\theta_{2}^{\prime}(s_{2})=0. (112)
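In the free case $q\equiv 0$ with $\lambda=k^{2}$ (an illustrative assumption), these archetypal solutions are explicit: $\theta_i(x)=\cos(k(x-s_i))$ and $\phi_i(x)=\sin(k(x-s_i))/k$. A short check of the conditions in (112) and of the normalized Wronskian:

```python
import math

# Free-case archetypal solutions (q = 0, lam = k^2; illustrative only):
#   theta(x) = cos(k(x - s)),   phi(x) = sin(k(x - s)) / k.
k, s = 1.7, 0.4

theta  = lambda x: math.cos(k*(x - s))
dtheta = lambda x: -k*math.sin(k*(x - s))
phi    = lambda x: math.sin(k*(x - s))/k
dphi   = lambda x: math.cos(k*(x - s))

# Conditions of type (112): theta(s) = 1, theta'(s) = 0, phi(s) = 0, phi'(s) = 1.
print(theta(s), dtheta(s), phi(s), dphi(s))
# The Wronskian W(theta, phi) equals 1 at every x, not just at x = s.
x = 1.3
print(theta(x)*dphi(x) - dtheta(x)*phi(x))
```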

We observe that $z_{1,1}$ and $z_{2,2}$ equal $\phi_{1}$ and $\phi_{2}$, respectively. Thus, $D_{1}=W(y_{\tau_{1},1},\phi_{1})$ and $D_{2}=W(y_{\tau_{2},2},\phi_{2})$. Recognizing that $u_{1}(s_{1},\lambda)$ and $w_{2}(s_{2},\lambda)$ are both specified to be $1$, the off-diagonal terms in $\mathcal{M}_{2}(\lambda)$ are just $w_{1}^{\prime}(s_{1},\lambda)$ and $u_{2}^{\prime}(s_{2},\lambda)$. Taking the derivative of our current expressions, and noting that the trailing factors $z_{1,1}^{\prime}(s_{1},\lambda)=\phi_{1}^{\prime}(s_{1},\lambda)$ and $z_{2,2}^{\prime}(s_{2},\lambda)=\phi_{2}^{\prime}(s_{2},\lambda)$ both equal $1$ by (112), we obtain

u2(x2,λ)=ϕ2(x2,λ)D2(λ)|𝒞1(λ,e2)||C(λ)|,u_{2}^{\prime}(x_{2},\lambda)=-\frac{\phi_{2}^{\prime}(x_{2},\lambda)}{D_{2}(\lambda)}\frac{|\mathcal{C}_{1}(\lambda,\vec{e}_{2})|}{|C(\lambda)|}, (113)

and

w1(x1,λ)=ϕ1(x1,λ)D1(λ)|𝒞2(λ,e1)||C(λ)|.w_{1}^{\prime}(x_{1},\lambda)=-\frac{\phi_{1}^{\prime}(x_{1},\lambda)}{D_{1}(\lambda)}\frac{|\mathcal{C}_{2}(\lambda,\vec{e}_{1})|}{|C(\lambda)|}. (114)

We know that by using Lemma 2.4, |C(λ)||C(\lambda)| in both of these equations can be transformed into (1)n|F(x,λ)|(-1)^{n}|F({\vec{x}},\lambda)| for a corresponding fundamental matrix FF. By definition, |F|=EΓ~2DDΩ~2|F|=E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}. One can use Lemma 2.4 to see that |𝒞1(λ,e2)|=(1)n|1(x,λ,e2)||\mathcal{C}_{1}(\lambda,\vec{e}_{2})|=(-1)^{n}|\mathcal{F}_{1}({\vec{x}},\lambda,\vec{e}_{2})| where 1(x,λ,e2)\mathcal{F}_{1}({\vec{x}},\lambda,\vec{e}_{2}) is simply the fundamental matrix FF with its z1{\vec{z}}_{1} column replaced by a vector which is yτ2,2y_{\tau_{2},2} in the second wire row, yτ2,2y_{\tau_{2},2}^{\prime} in the second wire derivative row, and zeros in all others. Similarly, |𝒞2(λ,e1)|=(1)n|2(x,λ,e1)||\mathcal{C}_{2}(\lambda,\vec{e}_{1})|=(-1)^{n}|\mathcal{F}_{2}({\vec{x}},\lambda,\vec{e}_{1})| where 2(x,λ,e1)\mathcal{F}_{2}({\vec{x}},\lambda,\vec{e}_{1}) is simply the fundamental matrix FF with its z2{\vec{z}}_{2} column replaced by a vector which is yτ1,1y_{\tau_{1},1} in the first wire row, yτ1,1y_{\tau_{1},1}^{\prime} in the first wire derivative row, and zeros in all others.

Notice that by expanding down the z1{\vec{z}}_{1} and z2{\vec{z}}_{2} columns in |1(x,λ,e2)||\mathcal{F}_{1}({\vec{x}},\lambda,\vec{e}_{2})| with 2×22\times 2 blocks, we find that

|1(x,λ,e2)|=(1)qB1(λ)D2(λ),|\mathcal{F}_{1}({\vec{x}},\lambda,\vec{e}_{2})|=(-1)^{q}B_{1}(\lambda)D_{2}(\lambda), (115)

where B1B_{1} is the complementary minor and (1)q(-1)^{q} is the sign of the defining permutation of this expansion term. Similarly, we can expand down the z1{\vec{z}}_{1} and z2{\vec{z}}_{2} columns in |2(x,λ,e1)||\mathcal{F}_{2}({\vec{x}},\lambda,\vec{e}_{1})| with 2×22\times 2 blocks to obtain

|2(x,λ,e1)|=(1)q+2B2(λ)(D1(λ)),|\mathcal{F}_{2}({\vec{x}},\lambda,\vec{e}_{1})|=(-1)^{q+2}B_{2}(\lambda)(-D_{1}(\lambda)), (116)

where B2B_{2} is the complementary minor, and (1)q+2(-1)^{q+2} again serves to denote the permutation sign. We can now rewrite our off-diagonal u2u_{2} and w1w_{1} entries as follows:

u2(s2,λ)\displaystyle u_{2}^{\prime}(s_{2},\lambda) =(113)ϕ2(s2,λ)D2(λ)|𝒞1(λ,e2)||C(λ)|=Lemma 2.4(1)n|1(x,λ,e2)|D2(λ)(1)n|F(x,λ)|\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:510}}}}}}{{=}}}-\frac{\phi_{2}^{\prime}(s_{2},\lambda)}{D_{2}(\lambda)}\frac{|\mathcal{C}_{1}(\lambda,\vec{e}_{2})|}{|C(\lambda)|}\,\,\,\,\,\,\,\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\text{Lemma }\ref{thm:C_and_fundamental}}}}}}{{=}}}\,\,\,\,\,\,\,-\frac{(-1)^{n}|\mathcal{F}_{1}({\vec{x}},\lambda,\vec{e}_{2})|}{D_{2}(\lambda)(-1)^{n}|F({\vec{x}},\lambda)|}
=(115)(1)qB1(λ)D2(λ)D2(λ)|F(x,λ)|=(1)q+1B1(λ)EΓ~2DDΩ~2(λ)\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:512}}}}}}{{=}}}-\frac{(-1)^{q}B_{1}(\lambda)D_{2}(\lambda)}{D_{2}(\lambda)|F({\vec{x}},\lambda)|}=\frac{(-1)^{q+1}B_{1}(\lambda)}{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}(\lambda)}
w1(s1,λ)\displaystyle w_{1}^{\prime}(s_{1},\lambda) =(114)ϕ1(s1,λ)D1(λ)|𝒞2(λ,e1)||C(λ)|=Lemma 2.4(1)n|2(x,λ,e1)|D1(λ)(1)n|F(x,λ)|\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:511}}}}}}{{=}}}-\frac{\phi_{1}^{\prime}(s_{1},\lambda)}{D_{1}(\lambda)}\frac{|\mathcal{C}_{2}(\lambda,\vec{e}_{1})|}{|C(\lambda)|}\,\,\,\,\,\,\,\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\text{Lemma }\ref{thm:C_and_fundamental}}}}}}{{=}}}\,\,\,\,\,\,\,-\frac{(-1)^{n}|\mathcal{F}_{2}({\vec{x}},\lambda,\vec{e}_{1})|}{D_{1}(\lambda)(-1)^{n}|F({\vec{x}},\lambda)|}
=(116)(1)q+2B2(λ)(D1(λ))D1(λ)|F(x,λ)|=(1)q+2B2(λ)EΓ~2DDΩ~2(λ).\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:513}}}}}}{{=}}}-\frac{(-1)^{q+2}B_{2}(\lambda)(-D_{1}(\lambda))}{D_{1}(\lambda)|F({\vec{x}},\lambda)|}=\frac{(-1)^{q+2}B_{2}(\lambda)}{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}(\lambda)}. (117)

Then the determinant of the two-sided map $\mathcal{M}_{1}+\mathcal{M}_{2}$ can be expanded using our new expressions for the off-diagonal terms. Note that the objects we work with for the rest of this proof depend only on $\lambda$, so we suppress this dependence to make formulas more readable. With this in mind, using Remark 4.9 and equations (110), (111), and (117), the determinant expansion is

|1+2|=|M1EΓ~2NDΩ~2EΓ~2DDΩ~2(1)q+2B2EΓ~2DDΩ~2(1)q+1B1EΓ~2DDΩ~2M~1+M~2|\displaystyle|\mathcal{M}_{1}+\mathcal{M}_{2}|=\begin{vmatrix}M_{1}-\frac{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}}{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}}&\frac{(-1)^{q+2}B_{2}}{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}}\\ \frac{(-1)^{q+1}B_{1}}{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}}&\tilde{M}_{1}+\tilde{M}_{2}\end{vmatrix}
=EΓ~2NDΩ~2EΓ~2DDΩ~2(M~1+M~2)+M1(M~1+M~2)+B1B2(EΓ~2DDΩ~2)2\displaystyle=-\frac{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}}{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}}(\tilde{M}_{1}+\tilde{M}_{2})+M_{1}(\tilde{M}_{1}+\tilde{M}_{2})+\frac{B_{1}B_{2}}{(E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}})^{2}}
=(110)EΓ~2NDΩ~2EΓ~2DDΩ~2M~1+EΓ~2NDΩ~2EΓ~2DNΩ~2(EΓ~2DDΩ~2)2+M1(M~1+M~2)+B1B2(EΓ~2DDΩ~2)2.\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:516}}}}}}{{=}}}-\frac{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}}{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}}\tilde{M}_{1}+\frac{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DN}}}{(E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}})^{2}}+M_{1}(\tilde{M}_{1}+\tilde{M}_{2})+\frac{B_{1}B_{2}}{(E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}})^{2}}. (118)

Using Theorem 1.5 adjusted to a subgraph Ω2\Omega_{2}, we can decompose EΓ2Ω2E^{\Omega_{2}}_{\Gamma_{2}^{\prime}} as follows:

EΓ2Ω2=EΓ~1Ω~1EΓ~2NDΩ~2(M^1+M^2),E^{\Omega_{2}}_{\Gamma_{2}^{\prime}}=E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}(\hat{M}_{1}+\hat{M}_{2}), (119)

where M^1+M^2\hat{M}_{1}+\hat{M}_{2} is the relevant two-sided map associated with s2s_{2}. Note that M^1=M~1\hat{M}_{1}=\tilde{M}_{1}. Additionally, using Theorem 4.2 adjusted to a subgraph Ω~2\tilde{\Omega}_{2}, we can rewrite M^2\hat{M}_{2} as

M^2=EΓ~2NNΩ~2EΓ~2NDΩ~2.\hat{M}_{2}=-\frac{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{NN}}}{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}}. (120)

Therefore, we can expand out (M1+M2)(M~1+M~2)(M_{1}+M_{2})(\tilde{M}_{1}+\tilde{M}_{2}) as

(M1+M2)(M~1+M~2)=M1(M~1+M~2)EΓ2Ω2EΓ2Ω2(M~1+M~2)\displaystyle(M_{1}+M_{2})(\tilde{M}_{1}+\tilde{M}_{2})=M_{1}(\tilde{M}_{1}+\tilde{M}_{2})-\frac{E^{\Omega_{2}}_{\Gamma_{2}^{\prime}}}{E^{\Omega_{2}}_{\Gamma_{2}}}(\tilde{M}_{1}+\tilde{M}_{2})
=(105),(119)M1(M~1+M~2)EΓ~1Ω~1EΓ~2NDΩ~2(M^1+M^2)EΓ~1Ω~1EΓ~2DDΩ~2(M~1+M~2)(M~1+M~2)\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:501},\eqref{eq:515}}}}}}{{=}}}\,\,\,\,M_{1}(\tilde{M}_{1}+\tilde{M}_{2})-\frac{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}(\hat{M}_{1}+\hat{M}_{2})}{E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}(\tilde{M}_{1}+\tilde{M}_{2})}(\tilde{M}_{1}+\tilde{M}_{2})
=(120)M1(M~1+M~2)EΓ~2NDΩ~2EΓ~2DDΩ~2M~1+EΓ~2NDΩ~2EΓ~2DDΩ~2EΓ~2NNΩ~2EΓ~2NDΩ~2\displaystyle\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:hM}}}}}}{{=}}}\,\,\,\,M_{1}(\tilde{M}_{1}+\tilde{M}_{2})-\frac{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}}{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}}\tilde{M}_{1}+\frac{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}}{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}}\frac{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{NN}}}{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}}
=M1(M~1+M~2)EΓ~2NDΩ~2EΓ~2DDΩ~2M~1+EΓ~2NNΩ~2EΓ~2DDΩ~2.\displaystyle=M_{1}(\tilde{M}_{1}+\tilde{M}_{2})-\frac{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}}{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}}\tilde{M}_{1}+\frac{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{NN}}}{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}}. (121)

Comparing the terms in equations (118) and (121), we see that we have the desired equality if

EΓ~2NNΩ~2EΓ~2DDΩ~2=B1B2(EΓ~2DDΩ~2)2+EΓ~2NDΩ~2EΓ~2DNΩ~2(EΓ~2DDΩ~2)2.\frac{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{NN}}}{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}}=\frac{B_{1}B_{2}}{(E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}})^{2}}+\frac{E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DN}}}{(E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}})^{2}}. (122)

An equivalent condition can be obtained by rewriting the above equation as

EΓ~2DDΩ~2EΓ~2NNΩ~2EΓ~2NDΩ~2EΓ~2DNΩ~2=B1B2.E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{NN}}-E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DN}}=B_{1}B_{2}. (123)

Note that, since every determinant in this proposed equality is being multiplied by another determinant with an underlying matrix of the same shape, we can prove this statement for slightly modified matrices. That is, we can rearrange rows and columns as long as the same swaps are performed for every determinant involved. For the remainder of this proof, we let all matrices involved have row ordering of the form y1,y1,y2,y2,,yn,yny_{1},y_{1}^{\prime},y_{2},y_{2}^{\prime},...,y_{n},y_{n}^{\prime} instead of the standard y1,y2,,yn,y1,y2,,yny_{1},y_{2},...,y_{n},y_{1}^{\prime},y_{2}^{\prime},...,y_{n}^{\prime}. These row swapped objects will be denoted by calligraphic letters. For example, EΓ~2DDΩ~2E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}} and B1B_{1} would be rewritten as Γ~2DDΩ~2{\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}} and 1{\mathcal{B}}_{1}, respectively. Therefore, equation (123) is equivalent to

Γ~2DDΩ~2Γ~2NNΩ~2Γ~2NDΩ~2Γ~2DNΩ~2=12.{\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}{\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{NN}}-{\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}{\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DN}}={\mathcal{B}}_{1}{\mathcal{B}}_{2}. (124)

We now begin a brute-force expansion of the various Evans functions on the left-hand side of equation (124). Note that they are entirely identical except in the $\vec{z}_{1}$ and $\vec{z}_{2}$ columns. In these columns, playing the role of $z_{1}$ and $z_{2}$ (and thus their derivatives), are: $\phi_{1}$ and $\phi_{2}$ for ${\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}$, $\phi_{1}$ and $\theta_{2}$ for ${\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DN}}$, $\theta_{1}$ and $\phi_{2}$ for ${\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}$, and $\theta_{1}$ and $\theta_{2}$ for ${\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{NN}}$.

We choose to expand each Evans function along these two columns in 2×22\times 2 blocks. This means that, letting fif_{i} stand in for the ϕi\phi_{i} or θi\theta_{i} in the zi{\vec{z}}_{i} column, these determinants expand in the form

|f100f2|M2,4+|f100f2|M2,3+|f100f2|M1,4|f100f2|M1,3\displaystyle-\begin{vmatrix}f_{1}&0\\ 0&f_{2}\end{vmatrix}M_{2,4}+\begin{vmatrix}f_{1}&0\\ 0&f_{2}^{\prime}\end{vmatrix}M_{2,3}+\begin{vmatrix}f_{1}^{\prime}&0\\ 0&f_{2}\end{vmatrix}M_{1,4}-\begin{vmatrix}f_{1}^{\prime}&0\\ 0&f_{2}^{\prime}\end{vmatrix}M_{1,3}
=f1f2M2,4+f1f2M2,3+f1f2M1,4f1f2M1,3,\displaystyle=-f_{1}f_{2}M_{2,4}+f_{1}f_{2}^{\prime}M_{2,3}+f_{1}^{\prime}f_{2}M_{1,4}-f_{1}^{\prime}f_{2}^{\prime}M_{1,3}, (125)

where each $M_{j,k}$ is some complementary minor. The $j$ and $k$ indicate which two of the first four rows are inherited as the first two rows of the underlying matrix of the minor. We note that, using such notation, ${\mathcal{B}}_{1}{\mathcal{B}}_{2}=M_{1,2}M_{3,4}$ (defined in the same way as the other $M_{j,k}$).

Now, using the expansion from equation (125), we can rewrite the left-hand side of equation (124) as

Γ~2DDΩ~2Γ~2NNΩ~2Γ~2NDΩ~2Γ~2DNΩ~2\displaystyle{\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}{\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{NN}}-{\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}{\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DN}}
=(ϕ1ϕ2M2,4+ϕ1ϕ2M2,3+ϕ1ϕ2M1,4ϕ1ϕ2M1,3)\displaystyle=(-\phi_{1}\phi_{2}M_{2,4}+\phi_{1}\phi_{2}^{\prime}M_{2,3}+\phi_{1}^{\prime}\phi_{2}M_{1,4}-\phi_{1}^{\prime}\phi_{2}^{\prime}M_{1,3})
×(θ1θ2M2,4+θ1θ2M2,3+θ1θ2M1,4θ1θ2M1,3)\displaystyle\times(-\theta_{1}\theta_{2}M_{2,4}+\theta_{1}\theta_{2}^{\prime}M_{2,3}+\theta_{1}^{\prime}\theta_{2}M_{1,4}-\theta_{1}^{\prime}\theta_{2}^{\prime}M_{1,3})
(θ1ϕ2M2,4+θ1ϕ2M2,3+θ1ϕ2M1,4θ1ϕ2M1,3)\displaystyle-(-\theta_{1}\phi_{2}M_{2,4}+\theta_{1}\phi_{2}^{\prime}M_{2,3}+\theta_{1}^{\prime}\phi_{2}M_{1,4}-\theta_{1}^{\prime}\phi_{2}^{\prime}M_{1,3})
×(ϕ1θ2M2,4+ϕ1θ2M2,3+ϕ1θ2M1,4ϕ1θ2M1,3)\displaystyle\times(-\phi_{1}\theta_{2}M_{2,4}+\phi_{1}\theta_{2}^{\prime}M_{2,3}+\phi_{1}^{\prime}\theta_{2}M_{1,4}-\phi_{1}^{\prime}\theta_{2}^{\prime}M_{1,3})
=(θ1ϕ2ϕ1θ2+ϕ1ϕ2θ1θ2θ1ϕ2ϕ1θ2+ϕ1ϕ2θ1θ2)M1,3M2,4\displaystyle=(-\theta_{1}\phi_{2}\phi_{1}^{\prime}\theta_{2}^{\prime}+\phi_{1}\phi_{2}\theta_{1}^{\prime}\theta_{2}^{\prime}-\theta_{1}^{\prime}\phi_{2}^{\prime}\phi_{1}\theta_{2}+\phi_{1}^{\prime}\phi_{2}^{\prime}\theta_{1}\theta_{2})M_{1,3}M_{2,4}
+(θ1ϕ2ϕ1θ2+ϕ1ϕ2θ1θ2θ1ϕ2ϕ1θ2+ϕ1ϕ2θ1θ2)M1,4M2,3\displaystyle+(-\theta_{1}\phi_{2}^{\prime}\phi_{1}^{\prime}\theta_{2}+\phi_{1}\phi_{2}^{\prime}\theta_{1}^{\prime}\theta_{2}-\theta_{1}^{\prime}\phi_{2}\phi_{1}\theta_{2}^{\prime}+\phi_{1}^{\prime}\phi_{2}\theta_{1}\theta_{2}^{\prime})M_{1,4}M_{2,3}
=|θ1ϕ1θ1ϕ1||θ2ϕ2θ2ϕ2|(M1,3M2,4M1,4M2,3)\displaystyle=\begin{vmatrix}\theta_{1}&\phi_{1}\\ \theta_{1}^{\prime}&\phi_{1}^{\prime}\end{vmatrix}\begin{vmatrix}\theta_{2}&\phi_{2}\\ \theta_{2}^{\prime}&\phi_{2}^{\prime}\end{vmatrix}(M_{1,3}M_{2,4}-M_{1,4}M_{2,3})
=M1,3M2,4M1,4M2,3.\displaystyle=M_{1,3}M_{2,4}-M_{1,4}M_{2,3}. (126)

We can now prove equation (124) by showing that $M_{1,3}M_{2,4}-M_{1,4}M_{2,3}=M_{1,2}M_{3,4}$. This can be done by constructing a special $(4n-4)\times(4n-4)$ matrix $A$, each of whose rows is split into two halves of length $2n-2$, with each half being either a zero row or a row of the fundamental matrix $F$ with its $\vec{z}_{1}$ and $\vec{z}_{2}$ columns removed. Let $R_{1}$ represent the first row of this modified $F$, $R_{2}$ the second, and so on down to $R_{2n}$. Then, listing zero rows as $0$, we construct $A$ like so:

A=[R10R2R2R3R3R4R4R50R2n00R50R2n].A=\begin{bmatrix}R_{1}&0\\ R_{2}&R_{2}\\ R_{3}&R_{3}\\ R_{4}&R_{4}\\ R_{5}&0\\ \vdots&\vdots\\ R_{2n}&0\\ 0&R_{5}\\ \vdots&\vdots\\ 0&R_{2n}\end{bmatrix}. (127)

The rows R_{5} through R_{2n} are those common to all of the minors M_{i,j}. Thus, the relevant minors M_{i,j} can be written as

Mi,j=|RiRjR5R2n|.M_{i,j}=\begin{vmatrix}R_{i}\\ R_{j}\\ R_{5}\\ \vdots\\ R_{2n}\end{vmatrix}. (128)

By expanding in (2n2)×(2n2)(2n-2)\times(2n-2) blocks down the first 2n22n-2 columns, we arrive at the following expression:

|A|=M1,2M3,4M1,3M2,4+M1,4M2,3.|A|=M_{1,2}M_{3,4}-M_{1,3}M_{2,4}+M_{1,4}M_{2,3}. (129)

Alternatively, we can perform some row and column operations before this expansion to alter A without changing its determinant. First, for the jth column of A (where 1\leq j\leq 2n-2), subtract the (2n-2+j)th column from it. Next, for each k such that 2n+1\leq k\leq 4n-4, add the (k-2n+4)th row to the kth row. The determinant |A| can now be written as

|A|=|R100R20R30R4R50R2n00R50R2n|.|A|=\begin{vmatrix}R_{1}&0\\ 0&R_{2}\\ 0&R_{3}\\ 0&R_{4}\\ R_{5}&0\\ \vdots&\vdots\\ R_{2n}&0\\ 0&R_{5}\\ \vdots&\vdots\\ 0&R_{2n}\end{vmatrix}. (130)

This determinant is zero: expanding again in (2n-2)\times(2n-2) blocks down the first 2n-2 columns, there are only 2n-3 nonzero rows available for the minors in the left column collection, namely R_{1},R_{5},...,R_{2n}. Thus every term in the expansion contains a zero row and vanishes. So, by our two calculations of |A|, we have

M1,2M3,4M1,3M2,4+M1,4M2,3=0,M_{1,2}M_{3,4}-M_{1,3}M_{2,4}+M_{1,4}M_{2,3}=0,

which implies

M1,2M3,4=M1,3M2,4M1,4M2,3=(4.3)Γ~2DDΩ~2Γ~2NNΩ~2Γ~2NDΩ~2Γ~2DNΩ~2.M_{1,2}M_{3,4}=M_{1,3}M_{2,4}-M_{1,4}M_{2,3}\mathrel{\stackrel{{\scriptstyle\makebox[0.0pt]{\mbox{\tiny{\eqref{eq:200}}}}}}{{=}}}{\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}{\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{NN}}-{\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}{\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DN}}.

Since M_{1,2}M_{3,4}={\mathcal{B}}_{1}{\mathcal{B}}_{2}, the above equation gives us the following

12=Γ~2DDΩ~2Γ~2NNΩ~2Γ~2NDΩ~2Γ~2DNΩ~2,{\mathcal{B}}_{1}{\mathcal{B}}_{2}={\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DD}}{\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{NN}}-{\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{ND}}{\mathcal{E}}^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}^{DN}},

which proves equation (123), thus completing the proof. ∎
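The identity at the heart of this argument, M_{1,2}M_{3,4}-M_{1,3}M_{2,4}+M_{1,4}M_{2,3}=0 for minors sharing the rows R_{5},...,R_{2n}, is a Plücker-type relation that holds for arbitrary rows, so it can be spot-checked numerically. A minimal sketch, assuming n = 3 and a random matrix standing in for the modified fundamental matrix F:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3                                          # small star graph, n = 3 edges
R = rng.standard_normal((2 * n, 2 * n - 2))    # stand-in rows R_1, ..., R_{2n}

def M(i, j):
    """Minor from rows R_i, R_j together with the shared rows R_5, ..., R_{2n}."""
    return np.linalg.det(np.vstack([R[i - 1], R[j - 1], R[4:]]))

plucker = M(1, 2) * M(3, 4) - M(1, 3) * M(2, 4) + M(1, 4) * M(2, 3)
print(abs(plucker))                            # zero up to rounding error
```

The names M and R here are illustrative; any choice of rows leaves a residual at the level of floating-point rounding.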

Theorems 4.7 and 4.10 are the two cases making up the proof of Theorem 1.10. Because the counting functions in Theorem 1.11 are related in a straightforward way to the Evans functions and maps of Theorem 1.10, the proof of the counting relationship follows immediately.

5 Applications

Here we consider a quantum star graph with two edges of length 1. We impose the Kirchhoff conditions u_{1}(0)=u_{2}(0) and u_{1}^{\prime}(0)+u_{2}^{\prime}(0)=0 at the origin. We wish to see how the eigenvalues change as various potentials and endpoint conditions are introduced. In particular, we will graph Evans functions and map curves as \lambda varies over a given interval. Since our eigenvalue counting formulas follow directly from our Evans function equalities, they hold over any interval [\lambda_{1},\lambda_{2}]. Thus, we denote the count of eigenvalues of an operator, or of zeros minus poles of a two-sided map, for \lambda\in[\lambda_{1},\lambda_{2}] by passing the interval as an argument to the relevant counting function from Theorem 1.7 or 1.11. For example, \mathcal{N}_{X}([\lambda_{1},\lambda_{2}]) equals the number of eigenvalues of X for \lambda\in[\lambda_{1},\lambda_{2}].

5.1 Potential Barrier/Well at Wire’s End

A potential barrier or well is a localized constant potential at the end of one wire. We can model these by setting V_{i}(x_{i})=0 on every edge of H^{\Omega} except the first, and letting V_{1}(x_{1}) be defined piecewise as:

V1(x1)={ν,s1x110,0x1<s1,V_{1}(x_{1})=\begin{cases}\nu,&s_{1}\leq x_{1}\leq 1\\ 0,&0\leq x_{1}<s_{1}\end{cases}, (131)

where ν\nu\in{\mathbb{R}}. In this example, the boundary conditions Γ\Gamma will be represented by the following α\alpha and β\beta matrices:

β1=I2,β2=02,α1=[1100],α2=[0011].\displaystyle\beta_{1}=I_{2},\,\,\beta_{2}=0_{2},\,\,\alpha_{1}=\begin{bmatrix}1&-1\\ 0&0\end{bmatrix},\,\,\alpha_{2}=\begin{bmatrix}0&0\\ 1&1\end{bmatrix}. (132)

These are Kirchhoff at the origin, and Dirichlet at the outer vertices. Using Theorem 1.5, we can count the eigenvalues of HΓΩH^{\Omega}_{\Gamma} via Evans functions of its subproblems.

First, we split the problem in two by placing a new vertex at x_{1}=s_{1} and imposing a Dirichlet condition there. The operator H^{\Omega_{1}}_{\Gamma_{1}} defined over \Omega_{1} is a Dirichlet Laplacian shifted by the constant potential \nu, while the operator H^{\Omega_{2}}_{\Gamma_{2}} over \Omega_{2} is still a Kirchhoff Laplacian, but with a shorter edge \epsilon_{1}.

To find the eigenvalues of HΓ1Ω1H^{\Omega_{1}}_{\Gamma_{1}}, we construct the corresponding Evans function. For λν>0\lambda-\nu>0, it is equal to

EΓ1Ω1(λ)=sin((1s1)λν)λν.E^{\Omega_{1}}_{\Gamma_{1}}(\lambda)=\frac{\sin((1-s_{1})\sqrt{\lambda-\nu})}{\sqrt{\lambda-\nu}}. (133)

To find the eigenvalues of HΓ2Ω2H^{\Omega_{2}}_{\Gamma_{2}}, we will construct the Evans function EΓ2Ω2E^{\Omega_{2}}_{\Gamma_{2}}. For λ>0\lambda>0, we have

EΓ2Ω2(λ)=sin(λ(1+s1))λ.\displaystyle E^{\Omega_{2}}_{\Gamma_{2}}(\lambda)=-\frac{\sin({\sqrt{\lambda}}(1+s_{1}))}{{\sqrt{\lambda}}}\,. (134)

Next we solve for the one-sided maps M1M_{1} and M2M_{2}. For λ>0\lambda>0, we have

M1(λ)=EΓ1Ω1(λ)EΓ1Ω1(λ)=λνcos(λν(s11))sin(λν(s11)),M_{1}(\lambda)=\frac{E^{\Omega_{1}}_{\Gamma_{1}^{\prime}}(\lambda)}{E^{\Omega_{1}}_{\Gamma_{1}}(\lambda)}=\frac{\sqrt{\lambda-\nu}\cos(\sqrt{\lambda-\nu}(s_{1}-1))}{\sin(\sqrt{\lambda-\nu}(s_{1}-1))}\,, (135)

and, for \lambda>0, we have

M2(λ)=EΓ2Ω2(λ)EΓ2Ω2(λ)=λcos(λ(1+s1))sin(λ(1+s1)).M_{2}(\lambda)=-\frac{E^{\Omega_{2}}_{\Gamma_{2}^{\prime}}(\lambda)}{E^{\Omega_{2}}_{\Gamma_{2}}(\lambda)}=-\frac{{\sqrt{\lambda}}\cos({\sqrt{\lambda}}(1+s_{1}))}{\sin({\sqrt{\lambda}}(1+s_{1}))}. (136)

Our method so far has been quite simple, consisting of a few determinants and some easy initial value problems defined by the various boundary condition matrices. We would like to check our results against the actual eigenvalues of HΓΩH^{\Omega}_{\Gamma}. It can be shown that the positive eigenvalues of HΓΩH^{\Omega}_{\Gamma} are precisely the zeros of the function

\displaystyle\begin{split}E(\lambda)=&{\sqrt{\lambda}}\cos({\sqrt{\lambda}}(1+s_{1}))\sin(\sqrt{\lambda-\nu}(s_{1}-1))\\ &-\sqrt{\lambda-\nu}\cos(\sqrt{\lambda-\nu}(s_{1}-1))\sin({\sqrt{\lambda}}(1+s_{1})).\end{split} (137)

Note that E(λ)E(\lambda) is an Evans function for HΓΩH^{\Omega}_{\Gamma}, although not necessarily the one specified by our usual construction.

One can graphically see (for example in Figure 5) that the eigenvalues of HΓΩH^{\Omega}_{\Gamma} are counted accurately by plotting EE, M1+M2M_{1}+M_{2}, EΓ1Ω1E^{\Omega_{1}}_{\Gamma_{1}}, and EΓ2Ω2E^{\Omega_{2}}_{\Gamma_{2}}.

Figure 5: A single split with s1=13s_{1}=\frac{1}{3} and ν=10\nu=-10 for λ[5,60]\lambda\in[5,60]. The red curve is 10EΓ2Ω210E^{\Omega_{2}}_{\Gamma_{2}}, the blue is 15EΓ1Ω115E^{\Omega_{1}}_{\Gamma_{1}}, the purple curve is 125(M1+M2)\frac{1}{25}(M_{1}+M_{2}), and the black curve is EE. These objects are vertically rescaled to accentuate the zeros and poles.

In particular, the zeros of EΓ1Ω1E^{\Omega_{1}}_{\Gamma_{1}} and EΓ2Ω2E^{\Omega_{2}}_{\Gamma_{2}} coincide with poles of M1+M2M_{1}+M_{2}, representing cancellation. This is expected, since EΓΩE^{\Omega}_{\Gamma} has no zeros at these points. Also as expected, the zeros of M1+M2M_{1}+M_{2} coincide with those of EΓΩE^{\Omega}_{\Gamma}. In terms of counting functions, we find that

𝒩HΓΩ[5,60]=4,𝒩HΓ1Ω1[5,60]=1,𝒩HΓ2Ω2[5,60]=3,\displaystyle\mathcal{N}_{H^{\Omega}_{\Gamma}}[5,60]=4,\hskip 14.45377pt\mathcal{N}_{H^{\Omega_{1}}_{\Gamma_{1}}}[5,60]=1,\hskip 14.45377pt\mathcal{N}_{H^{\Omega_{2}}_{\Gamma_{2}}}[5,60]=3,
\displaystyle N[5,60]=4-4=0.\hskip 72.26999pt

So the counting functions work in this case as predicted.
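These counts can be reproduced numerically by tallying sign changes of each Evans function on a fine grid, which suffices when the zeros are simple, as here. A sketch, assuming formulas (133), (134), and (137) with s_1 = 1/3 and ν = −10 as in Figure 5 (the function names are ours):

```python
import numpy as np

s1, nu = 1 / 3, -10.0   # the values used in Figure 5

def E_full(lam):   # eq. (137): Evans function of the full problem
    a, b = np.sqrt(lam), np.sqrt(lam - nu)
    return (a * np.cos(a * (1 + s1)) * np.sin(b * (s1 - 1))
            - b * np.cos(b * (s1 - 1)) * np.sin(a * (1 + s1)))

def E_1(lam):      # eq. (133): Dirichlet piece on (s1, 1) with potential nu
    return np.sin((1 - s1) * np.sqrt(lam - nu)) / np.sqrt(lam - nu)

def E_2(lam):      # eq. (134): Kirchhoff piece with edge lengths s1 and 1
    return -np.sin(np.sqrt(lam) * (1 + s1)) / np.sqrt(lam)

def count_zeros(f, lo, hi, m=200_000):
    """Count sign changes of f on [lo, hi]; adequate for simple zeros."""
    v = f(np.linspace(lo, hi, m))
    return int(np.sum(np.sign(v[:-1]) != np.sign(v[1:])))

# expected: 4, 1, and 3 zeros on [5, 60], matching the counts above
print(count_zeros(E_full, 5, 60), count_zeros(E_1, 5, 60), count_zeros(E_2, 5, 60))
```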

5.2 Potential Barrier/Well on Wire’s Interior

In this example, we suppose that the boundary conditions at x1=1x_{1}=1 and x2=1x_{2}=1 are both Neumann (ui(1)=0u_{i}^{\prime}(1)=0) for i=1,2i=1,2. We additionally suppose we have a potential localized on the interior of wire one. Define the nonzero potential V1V_{1} in HΓΩH^{\Omega}_{\Gamma} as:

V1(x1)={ν,x1(s2,s1)0,x1(s2,s1)V_{1}(x_{1})=\begin{cases}\nu,&x_{1}\in(s_{2},s_{1})\\ 0,&x_{1}\not\in(s_{2},s_{1})\end{cases}

where 0<s2<s1<10<s_{2}<s_{1}<1, and V2(x2)=0V_{2}(x_{2})=0. We wish to apply Theorem 1.11 to count the eigenvalues of HΓΩH^{\Omega}_{\Gamma}. For λ>0\lambda>0, we find that the relevant Evans functions are:

EΓ1Ω1(λ)=cos(λ(1s1)),E^{\Omega_{1}}_{\Gamma_{1}}(\lambda)=-\cos({\sqrt{\lambda}}(1-s_{1})),
EΓ~1Ω~1(λ)=sin(λν(s1s2))λν,E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}(\lambda)=\frac{\sin(\sqrt{\lambda-\nu}(s_{1}-s_{2}))}{\sqrt{\lambda-\nu}},
EΓ~2Ω~2(λ)=cos(λ(1+s2)),E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}(\lambda)=-\cos({\sqrt{\lambda}}(1+s_{2})),

and the relevant one-sided maps are:

1(λ)=[λsin(λ(s11))cos(λ(s11))00λsin(λ(s2+1))cos(λ(s2+1))]\mathcal{M}_{1}(\lambda)=\begin{bmatrix}\frac{{\sqrt{\lambda}}\sin({\sqrt{\lambda}}(s_{1}-1))}{\cos({\sqrt{\lambda}}(s_{1}-1))}&0\\ 0&-\frac{{\sqrt{\lambda}}\sin({\sqrt{\lambda}}(s_{2}+1))}{\cos({\sqrt{\lambda}}(s_{2}+1))}\end{bmatrix}

and

2(λ)=[λνcos(λν(s1s2))sin(λν(s1s2))λνsin(λν(s2s1))λνsin(λν(s1s2))λνcos(λν(s2s1))sin(λν(s2s1))].\mathcal{M}_{2}(\lambda)=\begin{bmatrix}\frac{\sqrt{\lambda-\nu}\cos(\sqrt{\lambda-\nu}(s_{1}-s_{2}))}{\sin(\sqrt{\lambda-\nu}(s_{1}-s_{2}))}&\frac{\sqrt{\lambda-\nu}}{\sin(\sqrt{\lambda-\nu}(s_{2}-s_{1}))}\\ -\frac{\sqrt{\lambda-\nu}}{\sin(\sqrt{\lambda-\nu}(s_{1}-s_{2}))}&-\frac{\sqrt{\lambda-\nu}\cos(\sqrt{\lambda-\nu}(s_{2}-s_{1}))}{\sin(\sqrt{\lambda-\nu}(s_{2}-s_{1}))}\end{bmatrix}.

Therefore, the map determinant is equal to

|(1+2)(λ)|=\displaystyle|(\mathcal{M}_{1}+\mathcal{M}_{2})(\lambda)|= (λtan(λ(s11))+λνcot(λν(s1s2)))\displaystyle\left({\sqrt{\lambda}}\tan({\sqrt{\lambda}}(s_{1}-1))+\sqrt{\lambda-\nu}\cot(\sqrt{\lambda-\nu}(s_{1}-s_{2}))\right)
×(λtan(λ(s2+1))λνcot(λν(s2s1)))\displaystyle\times\left(-{\sqrt{\lambda}}\tan({\sqrt{\lambda}}(s_{2}+1))-\sqrt{\lambda-\nu}\cot(\sqrt{\lambda-\nu}(s_{2}-s_{1}))\right)
\displaystyle-\left(\frac{\sqrt{\lambda-\nu}}{\sin(\sqrt{\lambda-\nu}(s_{1}-s_{2}))}\right)^{2}.
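This expansion can be cross-checked against the entrywise matrices \mathcal{M}_{1}(\lambda) and \mathcal{M}_{2}(\lambda) by comparing determinants at a few sample values of \lambda. A sketch, assuming the illustrative values s_1 = 3/4, s_2 = 1/4, ν = −10 from Figure 6 and sample points chosen away from the poles:

```python
import numpy as np

s1, s2, nu = 0.75, 0.25, -10.0   # illustrative values, as in Figure 6

def one_sided_maps(lam):
    a, b = np.sqrt(lam), np.sqrt(lam - nu)
    M1 = np.diag([a * np.sin(a * (s1 - 1)) / np.cos(a * (s1 - 1)),
                  -a * np.sin(a * (s2 + 1)) / np.cos(a * (s2 + 1))])
    M2 = np.array([[b * np.cos(b * (s1 - s2)) / np.sin(b * (s1 - s2)),
                    b / np.sin(b * (s2 - s1))],
                   [-b / np.sin(b * (s1 - s2)),
                    -b * np.cos(b * (s2 - s1)) / np.sin(b * (s2 - s1))]])
    return M1, M2

def expanded_det(lam):
    a, b = np.sqrt(lam), np.sqrt(lam - nu)
    return ((a * np.tan(a * (s1 - 1)) + b / np.tan(b * (s1 - s2)))
            * (-a * np.tan(a * (s2 + 1)) - b / np.tan(b * (s2 - s1)))
            - (b / np.sin(b * (s1 - s2))) ** 2)

# compare the matrix determinant with the expanded formula at sample points
residual = max(abs(np.linalg.det(sum(one_sided_maps(lam))) - expanded_det(lam))
               for lam in (7.0, 23.0, 47.0))
print(residual)   # zero up to rounding error
```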

Finally, one can show that the eigenvalues of HΓΩH^{\Omega}_{\Gamma} are given by the zeros of the following curve:

E(λ)=\displaystyle E(\lambda)= (2λν)cos(λ(2+s1s2))sin(λν(s1s2))\displaystyle(2\lambda-\nu)\cos({\sqrt{\lambda}}(2+s_{1}-s_{2}))\sin(\sqrt{\lambda-\nu}(s_{1}-s_{2}))
+νcos(λ(s1+s2))sin(λν(s1s2))\displaystyle+\nu\cos({\sqrt{\lambda}}(s_{1}+s_{2}))\sin(\sqrt{\lambda-\nu}(s_{1}-s_{2}))
2λλνcos(λν(s1s2))sin(λ(2+s1s2)),\displaystyle-2{\sqrt{\lambda}}\sqrt{\lambda-\nu}\cos(\sqrt{\lambda-\nu}(s_{1}-s_{2}))\sin({\sqrt{\lambda}}(2+s_{1}-s_{2})), (138)

which, as in the previous example, is an Evans function for HΓΩH^{\Omega}_{\Gamma}.

Again, we can check our result visually by plotting these objects against one another and making sure the poles and zeros line up in the right places. An example is shown in Figure 6.

Figure 6: Two splits on one wire with s1=34s_{1}=\frac{3}{4}, s2=14s_{2}=\frac{1}{4}, and ν=10\nu=-10. The green curve is 20EΓ1Ω120E^{\Omega_{1}}_{\Gamma_{1}}, the red is 20EΓ~1Ω~120E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}, the blue is EΓ~2Ω~2E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}, the purple curve is 11000|1+2|\frac{1}{1000}|\mathcal{M}_{1}+\mathcal{M}_{2}|, and the black curve is 110E\frac{1}{10}E. These objects are vertically rescaled to accentuate the zeros and poles.

This figure not only has poles and zeros lining up in the right places, it also has proper multiplicity: order one poles when only one Evans function has a zero of order one, and order two poles when two Evans functions have a zero of order one at the same λ\lambda. Indeed, we find that

𝒩HΓΩ[5,60]=4,𝒩HΓ1Ω1[5,60]=1,𝒩HΓ~1Ω~1[5,60]=1,𝒩HΓ~2Ω~2[5,60]=2,\displaystyle\mathcal{N}_{H^{\Omega}_{\Gamma}}[5,60]=4,\hskip 14.45377pt\mathcal{N}_{H^{\Omega_{1}}_{\Gamma_{1}}}[5,60]=1,\hskip 14.45377pt\mathcal{N}_{H^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}}[5,60]=1,\hskip 14.45377pt\mathcal{N}_{H^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}}[5,60]=2,
N[5,60]=44=0.\displaystyle N[5,60]=4-4=0.\hskip 101.17755pt

5.3 Potential Over Multiple Wires

To demonstrate our last result, we consider a potential extending over both wires, and suppose that the boundary conditions at x1=1x_{1}=1 and x2=1x_{2}=1 are Dirichlet. Let

Vi(xi)={ν,xi(0,si)0,xi(0,si)V_{i}(x_{i})=\begin{cases}\nu,&x_{i}\in(0,s_{i})\\ 0,&x_{i}\not\in(0,s_{i})\end{cases}

for both i=1i=1 and i=2i=2. To apply our theorem, we must construct the corresponding Evans functions.

EΓ1Ω1(λ)=sin(λ(1s1))λ,E^{\Omega_{1}}_{\Gamma_{1}}(\lambda)=-\frac{\sin({\sqrt{\lambda}}(1-s_{1}))}{{\sqrt{\lambda}}},
EΓ~1Ω~1(λ)=sin(λ(1s2))λ,E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}(\lambda)=-\frac{\sin({\sqrt{\lambda}}(1-s_{2}))}{{\sqrt{\lambda}}},

and

EΓ~2Ω~2(λ)=sin(λν(s1+s2))λν.E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}(\lambda)=-\frac{\sin(\sqrt{\lambda-\nu}(s_{1}+s_{2}))}{\sqrt{\lambda-\nu}}.

It can be shown that, for the values s_{1}=s_{2}=\frac{1}{2} and \nu=-10 used in Figure 7, the determinant of \mathcal{M}_{1}+\mathcal{M}_{2} is given by

\displaystyle|(\mathcal{M}_{1}+\mathcal{M}_{2})(\lambda)|=\left(\sqrt{\lambda+10}\cot\left(\sqrt{\lambda+10}\right)+{\sqrt{\lambda}}\cot\left(\frac{\sqrt{\lambda}}{2}\right)+\frac{\sqrt{\lambda+10}}{\sin\left(\sqrt{\lambda+10}\right)}\right)
\displaystyle\times\left(\sqrt{\lambda+10}\cot\left(\sqrt{\lambda+10}\right)+{\sqrt{\lambda}}\cot\left(\frac{\sqrt{\lambda}}{2}\right)-\frac{\sqrt{\lambda+10}}{\sin\left(\sqrt{\lambda+10}\right)}\right).

It can also be shown that, for s_{1}=s_{2}=\frac{1}{2} and \nu=-10, the function

\displaystyle E(\lambda)={\sqrt{\lambda}}\sqrt{\lambda+10}\cos\left(\sqrt{\lambda+10}\right)\sin\left({\sqrt{\lambda}}\right)
\displaystyle+\left(-5+\left(5+\lambda\right)\cos\left({\sqrt{\lambda}}\right)\right)\sin\left(\sqrt{\lambda+10}\right)

indicates the location of the positive eigenvalues of HΓΩH^{\Omega}_{\Gamma}. Once again, one can use a plot to confirm our formula accurately predicts the location of eigenvalues, as demonstrated in Figure 7.

Figure 7: Two splits on two distinct wires with s1=s2=12s_{1}=s_{2}=\frac{1}{2} and ν=10\nu=-10. The blue curve is 100EΓ1Ω1100E^{\Omega_{1}}_{\Gamma_{1}} as well as 100EΓ~1Ω~1100E^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}} (they are identical in this case), the red is 100EΓ~2Ω~2100E^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}, the purple curve is 10|1+2|10|\mathcal{M}_{1}+\mathcal{M}_{2}|, and the black curve is EE. These objects are vertically rescaled to accentuate the zeros and poles.

Again, this figure demonstrates proper cancellation between the subproblem eigenvalues and the poles of the map determinant, and the proper correspondence between eigenvalues of HΓΩH^{\Omega}_{\Gamma} and the zeros of the map determinant. Examining the counting functions, we find that

𝒩HΓΩ[3,60]=4,𝒩HΓ1Ω1[3,60]=1,𝒩HΓ~1Ω~1[3,60]=1,𝒩HΓ~2Ω~2[3,60]=1,\displaystyle\mathcal{N}_{H^{\Omega}_{\Gamma}}[3,60]=4,\hskip 14.45377pt\mathcal{N}_{H^{\Omega_{1}}_{\Gamma_{1}}}[3,60]=1,\hskip 14.45377pt\mathcal{N}_{H^{\tilde{\Omega}_{1}}_{\tilde{\Gamma}_{1}}}[3,60]=1,\hskip 14.45377pt\mathcal{N}_{H^{\tilde{\Omega}_{2}}_{\tilde{\Gamma}_{2}}}[3,60]=1,
\displaystyle N[3,60]=4-3=1.\hskip 101.17755pt
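As before, the counts can be reproduced by tallying sign changes on a fine grid. A sketch assuming the three Evans functions above, with s_1 = s_2 = 1/2 and ν = −10 already substituted as in the displayed formulas (the function names are ours):

```python
import numpy as np

nu = -10.0   # s1 = s2 = 1/2 and nu = -10, as in Figure 7

def E_half(lam):   # Evans function of each outer piece, edge length 1 - s_i = 1/2
    return -np.sin(np.sqrt(lam) / 2) / np.sqrt(lam)

def E_mid(lam):    # Evans function of the middle star; involves s1 + s2 = 1
    return -np.sin(np.sqrt(lam - nu)) / np.sqrt(lam - nu)

def E_full(lam):   # the full Evans function displayed above (nu = -10 built in)
    a, b = np.sqrt(lam), np.sqrt(lam - nu)
    return a * b * np.cos(b) * np.sin(a) + (-5 + (5 + lam) * np.cos(a)) * np.sin(b)

def count_zeros(f, lo, hi, m=200_000):
    """Count sign changes of f on [lo, hi]; adequate for simple zeros."""
    v = f(np.linspace(lo, hi, m))
    return int(np.sum(np.sign(v[:-1]) != np.sign(v[1:])))

# expected: 4, 1, and 1 zeros on [3, 60], matching the counts above
print(count_zeros(E_full, 3, 60), count_zeros(E_half, 3, 60), count_zeros(E_mid, 3, 60))
```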

Appendix A Note on Linear Algebraic Concepts

Despite introducing separate spatial variables on each wire, we often still wish to refer to notions of linear dependence and independence of solution sets.

Theorem A.1.

Let {u1,,u2n}\{{\vec{u}}_{1},...,{\vec{u}}_{2n}\} be a set of solutions to the equation (HΩλ)u=0(H^{\Omega}-\lambda){\vec{u}}=\vec{0}, where HΩH^{\Omega} is the differential expression from (7). Let UU be a 2n×2n2n\times 2n matrix whose iith column is ui{\vec{u}}_{i} followed by ui{\vec{u}}_{i}^{\prime}. Then |U||U| is a function of λ\lambda only.

Proof.

We can calculate |U||U| using Laplace expansion by complementary minors. In particular, we can choose to expand along the pair of rows 11 and n+1n+1, which contain information about (ui)1({\vec{u}}_{i})_{1} and (ui)1({\vec{u}}_{i})_{1}^{\prime}, respectively. Let HH be the set of two-element combinations {s,t}\{s,t\} from the set {1,,2n}\{1,...,2n\}. Then the determinant of UU is

|U(λ)|=(1)n1{s,t}Hπ(s,t)|(us)1(x1,λ)(ut)1(x1,λ)(us)1(x1,λ)(ut)1(x1,λ)|cs,t,|U(\lambda)|=(-1)^{n-1}\sum_{\{s,t\}\in H}\pi(s,t)\begin{vmatrix}({\vec{u}}_{s})_{1}(x_{1},\lambda)&({\vec{u}}_{t})_{1}(x_{1},\lambda)\\ ({\vec{u}}_{s})_{1}^{\prime}(x_{1},\lambda)&({\vec{u}}_{t})_{1}^{\prime}(x_{1},\lambda)\end{vmatrix}c_{s,t}\,, (139)

where \pi(s,t) is the sign of the permutation sending 1 to s, 2 to t, and the remaining elements of \{1,...,2n\}\setminus\{1,2\} to the elements of \{1,...,2n\}\setminus\{s,t\} in increasing order, and c_{s,t} is the (2n-2)\times(2n-2) complementary minor obtained from U by deleting the 1st and (n+1)th rows and the sth and tth columns. The (-1)^{n-1} factor accounts for the row permutation sending 1 to 1, n+1 to 2, and pairing all remaining rows in increasing order. Since the 2\times 2 determinants involved in this expansion are Wronskians of solutions to a second order ordinary differential equation with no first order term, Abel's Identity shows that they depend only on \lambda, and not on the choice of x_{1}. A similar expansion applies to each |c_{s,t}| in 2\times 2 blocks of functions sharing one spatial variable, so none of those terms depend on {\vec{x}} either. Thus |U| is {\vec{x}}-independent, just as in the single variable case. ∎
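Theorem A.1 can be illustrated numerically: on a two-edge graph with zero potential, take random linear combinations of the four basis solutions of -u''=\lambda u and evaluate |U| at two different points {\vec{x}}=(x_{1},x_{2}). A sketch, with \lambda=2 and the mixing matrix C chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0                          # arbitrary positive spectral parameter
k = np.sqrt(lam)
C = rng.standard_normal((4, 4))    # random mixing of the four basis solutions

def U_det(x1, x2):
    # Basis solutions of -u'' = lam * u on a two-edge graph (V = 0):
    # columns are (cos(k x1), 0), (sin(k x1), 0), (0, cos(k x2)), (0, sin(k x2));
    # rows are u on edge 1, u on edge 2, then u' on edge 1, u' on edge 2.
    B = np.array([
        [np.cos(k * x1), np.sin(k * x1), 0.0, 0.0],
        [0.0, 0.0, np.cos(k * x2), np.sin(k * x2)],
        [-k * np.sin(k * x1), k * np.cos(k * x1), 0.0, 0.0],
        [0.0, 0.0, -k * np.sin(k * x2), k * np.cos(k * x2)],
    ])
    return np.linalg.det(B @ C)    # columns of U = B C are mixed solutions

d1, d2 = U_det(0.3, 0.9), U_det(1.0, 0.2)
print(d1, d2)                      # agree up to rounding: |U| depends on lam only
```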

As with a set of single variable vector functions, we say that {u1,,u2n}\{{\vec{u}}_{1},...,{\vec{u}}_{2n}\} forms a linearly independent set if the only solution to

i=12nciui(x,λ)=0,\sum_{i=1}^{2n}c_{i}{\vec{u}}_{i}({\vec{x}},\lambda)=\vec{0}\,, (140)

is the one with ci=0c_{i}=0 for all ii. This definition needs no adjustment from the single-variable case since vector addition acts componentwise, and thus the separate spatial variables do not interact at all in this sum. Indeed, we are free to vary each xix_{i} individually without altering the result of the sum. Since the typical definition of linear dependence applies for functions on a quantum graph, and since Abel’s Identity still holds, one can use the standard proof to show that |U|=0|U|=0 if and only if there is a linearly dependent set of columns in UU.

Acknowledgements

A.S. acknowledges the support of an AIM workshop on Computer assisted proofs for stability analysis of nonlinear waves, where some of the key ideas were discussed. A.S. acknowledges support from the National Science Foundation under grant DMS-1910820.
