Splitting Quantum Graphs
Abstract
We derive a counting formula for the eigenvalues of Schrödinger operators with self-adjoint boundary conditions on quantum star graphs. More specifically, we develop techniques using Evans functions to reduce the full quantum graph eigenvalue problem to smaller subgraph eigenvalue problems. These methods provide a simple way to calculate the spectra of operators with localized potentials.
1 Introduction
This paper studies the eigenvalues of Schrödinger operators on quantum star graphs with edges. We will refer to the th edge of a quantum graph as . Each edge is parameterized to have coordinates between and , where is assigned to the shared origin of all edges and is the other endpoint.
A function on a quantum graph can be written as a vector function , where the th component represents the portion of the function defined over the th edge. We assign a separate variable to each edge , which takes values on the interval . The collection of all these is abbreviated as .
1.1 Basic Definitions and Notations
We are interested particularly in sets of separated and self-adjoint boundary conditions , where these and are matrices satisfying the following:
(1) |
where the superscript represents the matrix adjoint. We say a function satisfies boundary conditions if it satisfies the equations
(2) |
In this paper, refers to a vector whose th component is , which is defined to be the derivative of the th component of with respect to the relevant spatial variable .
We will follow the reasonable convention that the matrices and should both be diagonal. This comes from the natural assumption that the only interaction between functions should happen at shared vertices (in this case, only at the origin).
Definition 1.1.
Given a set of separated and self-adjoint boundary conditions as described above, we define the -trace of a function as the following vector in :
(3) |
provided is sufficiently smooth.
As an example, by choosing and to be identity matrices while letting and be zero matrices, would be equivalent to Dirichlet boundary conditions at all vertices. The Dirichlet trace would then be the column vector
(4) |
where represents the matrix (nonconjugating) transpose. One other notable trace that will show up often is the Neumann trace, defined as
(5) |
We are interested in solving eigenvalue problems of the form
(6) |
where is a function over , is a differential expression defined as
(7) |
where denotes a diagonal matrix whose th diagonal entry is the th argument specified. We assume that each potential function is in .
Definition 1.2.
is a Schrödinger operator which is defined over the domain , where is shorthand to denote for all , and is a Sobolev space defined over . Then, for functions in this domain,
We denote the spectrum of as , and the resolvent set as . Note that our assumptions on the potentials in guarantee that consists only of the discrete spectrum [10].
Remark 1.3.
We can also introduce smaller associated operators over subgraphs. Let be a splitting point (though not a vertex) on edge , which will play the role of a vertex in two connected subgraphs of , denoted and , which satisfy the following conditions:
An example of such a partition is shown in Figure 5. Let and be the restrictions of to and , respectively. Next, define a new set of boundary conditions over which shares boundary conditions at all common vertices with , and which includes a Dirichlet boundary condition at . Similarly, we define as a set of boundary conditions over which is the same as at all common vertices with , and which includes the Dirichlet condition at .
Then, we will consider the operators and , which are defined as follows:
(8) |
Note that and have a nearly identical definition to that in formula (3) for their respective and matrices. However, in the case of , the new vertex will take the place of the origin , and in the case of will take the place of .
We will make heavy use of the Evans function, a powerful and popular tool in the stability theory of traveling waves [1, 4, 6, 8, 9]. In general, Evans functions are not uniquely defined. To avoid this issue, we specify a unique construction of an Evans function associated with a given operator in Definition 2.2. Thus, for the remainder of the paper, we will simply refer to the Evans function of an operator, always assuming that it is constructed by Definition 2.2.
If is split at some non-vertex point into regions and as discussed in Remark 1.3, then a one-sided map is defined to be a Dirichlet-to-Neumann map associated with . In particular, we specify two such maps given our splitting.
Definition 1.4.
Let . Fix . Then we define a map as
(9) |
where is the unique solution to the boundary value problem
Due to the definition of , we have that
We suppress the dependence of the map on the spectral parameter in this paper.
Similarly, for and we define a map as
(10) |
where is the th component of the unique solution to the boundary problem
Due to the definition of , we can exactly express as
Again, we choose to suppress the dependence of this map on the spectral parameter. Note that the right-hand sides of equations (9) and (10) are the portions of the Neumann trace over and , respectively, which correspond to . The sum is called a two-sided map associated with .
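As a concrete illustration, consider the simplest interval analogue (an assumed special case with zero potential, not the paper's general star-graph construction): split $[0,L]$ at an interior point $z$ and impose Dirichlet conditions at both outer endpoints. The unique solution on $[z,L]$ with data $u(z)=f$ and $u(L)=0$ is $u(x) = f\,\sin(\sqrt{\lambda}(L-x))/\sin(\sqrt{\lambda}(L-z))$, so the one-sided map from that side is multiplication by $\sqrt{\lambda}\cot(\sqrt{\lambda}(L-z))$; likewise the map from the $[0,z]$ side is multiplication by $\sqrt{\lambda}\cot(\sqrt{\lambda}\,z)$. Their sum, the two-sided map, has poles at the Dirichlet eigenvalues of the two subintervals and zeros at the eigenvalues of the full interval, previewing the counting results of Section 4.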
1.2 Summary of Paper
In this paper we develop methods for finding the spectra of Schrödinger differential operators on quantum star graphs. In particular, we wish to be able to split a quantum star graph into smaller subgraphs, and then find the spectrum of the original problem by examining the spectra of the related subgraph problems. This is particularly helpful for dealing with operators with localized potentials, since it allows us to isolate the zero potential and nonzero potential portions from one another, resulting in simpler subproblems.
In Section 2, we find a general formula for , the resolvent of applied to an arbitrary function .
In Section 3, we express (a solution to an inhomogeneous boundary value problem) in terms of . Such functions are the building blocks of the two-sided maps. This expression (Theorem 3.4) involves three key projection matrices: the Dirichlet, Neumann, and Robin projections. The various properties of these matrices allow us to algebraically apply boundary conditions to solutions in a general way, without relying on any knowledge of the specific type of condition (beyond being representable by the and matrices discussed earlier).
In Section 4, we prove the main results of this paper. They relate the eigenvalues of to those of the smaller operators that come from splitting as discussed in Remark 1.3. We state them here:
Theorem 1.5.
Suppose is split into and at some non-vertex point , and let and be the new boundary conditions as constructed in Remark 1.3. Additionally, suppose , and let be the two-sided map at . Then
(11) |
where , , and are the Evans functions associated with , , and , respectively.
Remark 1.6.
Despite the fact that the left and right sides of equation (11) are not defined on the spectra of any of the operators involved, both sides are equal as meromorphic functions over . Since we only care about the poles and zeros in this equation, this type of equality is entirely sufficient.
A similar result has already been proven for the interval case in [7].
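This interval case is also easy to verify numerically. The sketch below (our own illustration, not taken from [7]) checks the factorization in equation (11) for the free Laplacian on $[0,1]$ with Dirichlet endpoints, split at an assumed point $a = 0.3$, using the closed-form interval maps recorded after Definition 1.4.

```python
import numpy as np

# Interval check of Theorem 1.5 (assumed setup: free Laplacian on [0,1],
# Dirichlet endpoints, split at a = 0.3; k = sqrt(lambda)).
lam = np.array([0.5, 3.0, 7.0, 12.5, 21.0, 33.0, 47.5])   # sample points
k, a = np.sqrt(lam), 0.3

E   = np.sin(k) / k                      # Evans function of the full problem
E1  = np.sin(k * a) / k                  # Dirichlet subproblem on [0, a]
E2  = np.sin(k * (1 - a)) / k            # Dirichlet subproblem on [a, 1]
Lam = k / np.tan(k * a) + k / np.tan(k * (1 - a))    # two-sided map at a

print(np.max(np.abs(E - E1 * E2 * Lam)))   # ~1e-14: E = E1 * E2 * Lambda
```

Because the identity is exact in this setting, the printed difference is at machine precision.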
Since zeros of Evans functions locate eigenvalues, Theorem 1.5 can be used to easily prove another one of the main results.
Theorem 1.7.
Let count the number of eigenvalues less than or equal to of any operator , and let be the difference between the number of zeros and number of poles (including multiplicities) of less than or equal to . Then
(12) |
In [2], a similar eigenvalue counting theorem has already been proven for the case of dividing multidimensional domains by hypersurfaces and defining two-sided maps with respect to these hypersurfaces.
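The same interval setup gives a quick numerical test of formula (12). In the sketch below (assumed parameters: split point $a = 0.3$, spectral window $\lambda \le 92$), the zeros of the two-sided map are bracketed between its poles, which are the Dirichlet eigenvalues of the two subintervals; with this sign convention the map runs from $+\infty$ to $-\infty$ on each cell, so each cell holds at most one zero.

```python
import numpy as np
from scipy.optimize import brentq

a, lam_max = 0.3, 92.0                    # assumed split point and window
kmax = np.sqrt(lam_max)
Lam = lambda k: k / np.tan(k * a) + k / np.tan(k * (1 - a))

# poles of the two-sided map: Dirichlet eigenvalues of the two subintervals
poles = sorted([n * np.pi / a for n in range(1, int(kmax * a / np.pi) + 1)]
             + [n * np.pi / (1 - a) for n in range(1, int(kmax * (1 - a) / np.pi) + 1)])

# bracket at most one zero per cell between consecutive poles
grid = [1e-6] + poles + [kmax]
zeros = [brentq(Lam, lo + 1e-9, hi - 1e-9)
         for lo, hi in zip(grid[:-1], grid[1:])
         if Lam(lo + 1e-9) * Lam(hi - 1e-9) < 0]

P  = len(zeros) - len(poles)              # zeros minus poles of the map
N1 = int(kmax * a / np.pi)                # eigenvalues (n pi / a)^2 <= lam_max
N2 = int(kmax * (1 - a) / np.pi)          # eigenvalues (n pi / (1-a))^2 <= lam_max
N  = int(kmax / np.pi)                    # full interval [0, 1]
print(N, N1 + N2 + P)                     # prints "3 3", matching formula (12)
```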
We will also demonstrate how repeated applications of Theorem 1.5 can be used to split a quantum graph into three subdomains over which the eigenvalues can be counted separately, which is useful for more general barrier and well problems.
Definition 1.8.
Suppose has been split into and as described in Remark 1.3 at a non-vertex splitting point . We can apply the exact same process to split at some non-vertex point into two connected subgraphs and which satisfy the following conditions:
Additionally, we ensure that . If is on the same wire as , then . If the two splitting points are on different wires, then .
Remark 1.9.
Again, such a splitting comes with some associated operators. Let and be the restrictions of to and , respectively. We can define new boundary conditions on which match conditions at all common vertices with , and which include the Dirichlet condition at . Likewise, we can define boundary conditions over which are identical to conditions at all common vertices, and with the Dirichlet condition at . Finally, using these pieces we can define the operators and as follows:
(13) |
We can also construct new Dirichlet-to-Neumann maps and , now associated with two vertices and , and sum them to make another two-sided map. The construction of and is different for splittings on the same wire versus splittings on different wires. The specific constructions are detailed in Definitions 4.5 and 4.8. In either case, we can utilize the determinant of the two-sided map in the following theorem:
Theorem 1.10.
Suppose is split into three quantum subgraphs , , and , as described in Definition 1.8. Additionally, suppose that , and let be the two-sided map associated with and . Then
(14) |
where , , , and are the Evans functions associated with the operators , , , and , respectively.
Just as with Theorem 1.5, the equality in Theorem 1.10 is one of meromorphic functions, so the desired behavior is preserved. Also, we again obtain a corresponding eigenvalue counting theorem.
Theorem 1.11.
Let count the number of eigenvalues less than or equal to of any operator , and let be the difference between the number of zeros and number of poles (including multiplicities) of less than or equal to . Then
(15) |
Finally, in Section 5, we illustrate some applications of the main theorems to quantum star graphs with potential barriers and wells.
2 General Resolvent Formula
2.1 Motivation for Using the Resolvent
While it is possible to construct Dirichlet-to-Neumann maps column-by-column by solving several boundary-value problems, this method does not do much to illuminate the processes by which the zeros and poles containing spectral information appear in the map. Provided that , this map-making consists of finding functions which serve as solutions to the problem:
(16) |
where is the th standard basis vector for , then applying the desired trace to such a solution. Note that we will have representing standard basis vectors for and interchangeably, since the dimension will be clear from context. Instead of solving for by its defining boundary-value problem, we will construct it using the resolvent of .
For , a key property of the resolvent of is that, for , it acts as the inverse operator of . That is,
(17) |
for any function on .
Equation (17) actually gives us a way to find for an arbitrary function by solving the inhomogeneous boundary-value problem:
(18) |
The unique solution to this problem must be the resolvent. We will solve for by adding together a complementary and a particular solution. Normally, we could generate a particular solution via the method of Variation of Parameters, but since each component of a function on has a different spatial variable, we need to make some slight adjustments to the method.
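Before carrying this out in general, it may help to see the plan in the simplest setting: a single edge with an assumed smooth localized potential and Dirichlet conditions at both endpoints. In the sketch below, two initial value problems supply the complementary solutions, a variation-of-parameters integral (packaged as the usual Green's function) supplies the particular solution, and a residual check confirms that the constructed function inverts the shifted operator. All parameters are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

lam = 2.0                                             # assumed; not an eigenvalue
q = lambda x: 5.0 * np.exp(-((x - 0.5) / 0.1) ** 2)   # assumed localized potential
f = lambda x: np.sin(3.0 * x)                         # arbitrary right-hand side

def rhs(x, y):
    # first-order system for -u'' + q u = lam u, i.e. u'' = (q - lam) u
    return [y[1], (q(x) - lam) * y[0]]

xs = np.linspace(0.0, 1.0, 2001)
# complementary solutions from initial value problems:
# u0 satisfies the Dirichlet condition at x = 0, u1 the one at x = 1
u0 = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0], t_eval=xs,
               rtol=1e-10, atol=1e-12).y
u1 = solve_ivp(rhs, (1.0, 0.0), [0.0, -1.0], t_eval=xs[::-1],
               rtol=1e-10, atol=1e-12).y[:, ::-1]

W = u0[0, 0] * u1[1, 0] - u0[1, 0] * u1[0, 0]   # Wronskian; constant in x (Abel)

# variation of parameters, packaged as the Green's function G = -u0 u1 / W
fx, u = f(xs), np.empty_like(xs)
for i in range(len(xs)):
    left = trapezoid(u0[0, :i + 1] * fx[:i + 1], xs[:i + 1]) * u1[0, i]
    right = trapezoid(u1[0, i:] * fx[i:], xs[i:]) * u0[0, i]
    u[i] = -(left + right) / W

# check that u inverts the shifted operator: -u'' + (q - lam) u should equal f
upp = np.gradient(np.gradient(u, xs), xs)
print(np.max(np.abs(-upp + (q(xs) - lam) * u - fx)[50:-50]))   # small residual
```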
2.2 Complementary Homogeneous Solution
We will start by obtaining a complementary solution. First, suppose . Next, let the entries of and be named as follows:
(19) |
and similarly let the nonzero diagonal entries of and be named
(20) |
Consider the matrix . We can use each one of the columns as initial conditions for complementary homogeneous problems to get solutions which solve:
(21) |
where is the th component of and the overline represents complex conjugation. Observe that each of these solutions satisfies boundary conditions at the origin. We denote the set of all as .
Similarly, we can define another solutions by using the columns of as initial conditions for homogeneous problems to generate solutions which satisfy:
(22) |
First, note that these solutions satisfy boundary conditions at the outer endpoints of the quantum graph. Also, we have that for all due to the zero conditions. We denote the set of all as .
Due to the rank conditions of the and matrices imposed in equation (1), we have that is a linearly independent set and is a linearly independent set. Additionally, it can be seen that the combined set is linearly dependent if and only if . Thus, since , we must have that forms a linearly independent set of functions.
Definition 2.1.
Let and be matrices such that the th column of is and the th column of is . Observe that the ordering of these columns implies that
(23) |
Definition 2.2.
In this paper, we will define the Evans function for (denoted ) as the determinant of the following fundamental solution matrix:
(24) |
That is, . The Evans function depends only on the spectral parameter, due to Theorem A.1.
This definition also extends to the operators defined over subgraphs like and ; we simply alter the initial value problems defining and to incorporate as reflected in the new boundary conditions.
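As a concrete instance (assumed setup: free Laplacian on a two-edge star with edge lengths $1$, Kirchhoff conditions at the origin, Dirichlet outer ends), the sketch below computes a determinant that agrees with the Evans function up to a nonzero factor (cf. Lemma 2.4) by applying the boundary conditions to a basis of edgewise solutions. Its sign changes locate the known eigenvalues $(n\pi/2)^2$.

```python
import numpy as np

def evans_star(lam, lengths=(1.0, 1.0)):
    # determinant of the boundary-condition matrix: basis cos(kx), sin(kx)/k
    # on each edge; rows impose continuity and a Kirchhoff current balance
    # at the origin, plus Dirichlet conditions at the outer endpoints.
    k = np.sqrt(lam)
    l1, l2 = lengths
    M = np.array([
        [1.0,             0.0,                -1.0,            0.0],
        [0.0,             1.0,                 0.0,            1.0],
        [np.cos(k * l1),  np.sin(k * l1) / k,  0.0,            0.0],
        [0.0,             0.0,                 np.cos(k * l2), np.sin(k * l2) / k],
    ])
    return np.linalg.det(M)

lams = np.linspace(0.1, 65.0, 4000)
vals = np.array([evans_star(l) for l in lams])
zeros = [0.5 * (lams[i] + lams[i + 1]) for i in range(len(vals) - 1)
         if vals[i] * vals[i + 1] < 0]
print(np.round(zeros, 1))   # ~ (n pi / 2)^2: 2.5, 9.9, 22.2, 39.5, 61.7
```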
2.3 Particular Solution
We wish to emulate Variation of Parameters so as to obtain a particular solution from our complementary solution set. We cannot do normal Variation of Parameters on the whole set of complementary solutions since each solution has multiple spatial variables, but we can do Variation of Parameters for each edge. So, we construct a particular solution component by component. The th component of the particular solution can be built by looking at all the and (we call such components subvectors), finding a linearly independent pair, and using Variation of Parameters on them to find a particular solution to the one-dimensional second order problem:
(25) |
Theorem 2.3.
For each edge, there exist two functions among the complementary solutions whose components on that edge are linearly independent.
Proof.
Suppose for contradiction we could not find another linearly independent function for some index . Then there exist constants such that for all . Since is linearly independent, the set is also a linearly independent set. We can construct a new set of functions by removing the th component of every function in . Since the th component for every vector in is , we lose no information by deleting them, so is a linearly independent set of size containing vectors with components. But this cannot be possible, since the solutions in are linearly independent solutions to an -dimensional second order ODE, of which there should be at most . Thus, there must be some meeting our desired specifications. ∎
This proof shows that our componentwise construction of is well defined for all . We also see in the proof that the choice of each is local with respect to . Thus, for specific we can use Variation of Parameters to define the th entry of as
(26) |
where is a Wronskian for solutions on the th wire.
2.4 Constant Tuning
Let . Then our general solution (which is the resolvent applied to our given function) will be of the form
(27) |
where is a vector of scalars in . To make sure satisfies the boundary conditions, we will need to adjust the components of so that
(28) |
By the linearity of , we can modify equation (28) by taking the trace of every column of , arranging these in a matrix , and then multiplying this matrix by . We define as
(29) |
The zero blocks in (2.4) appear since and are defined to satisfy and conditions, respectively. We can rewrite the equation as
(30) |
Due to the integral bounds we chose, we find that
(31) |
Equations (30) and (31) imply that we can always choose , and thus we can determine for by solving the smaller system:
(32) |
where is given by:
(33) |
That is, is the lower right block of . We can find an exact formula for . First, we give names to the various at the origin:
(34) |
Then from equation (33), is given by
(35) |
If is invertible then we can solve for the coefficients by inversion in equation (32). The invertibility is guaranteed by the following Lemma.
Lemma 2.4.
Up to a nonzero constant factor, the determinant of equals the determinant of the corresponding fundamental solution matrix.
Proof.
We know from equation (1) that there exists a set of linearly independent columns in the matrix . Thus, by swapping columns between and , we can generate a new block matrix such that . Let represent the number of columns that we swap between and to generate these new matrices (although there may be different for different column swapping configurations). Now, for every pair of columns swapped, swap the rows with the same indices in to generate a new matrix . We immediately observe that
(36) |
We label the blocks of as follows:
(37) |
and observe that
(38) |
since the transformation from to involves swapping away of the rows, each of which carries away the added negative from the initial scaling of .
Note that, since matrix multiplication is defined in terms of vector inner products, by switching rows and columns of the same index to get and , respectively, we preserve the values of the inner products defining the entries of the matrix product between and .
The final object we need for this proof is the invertible block triangular matrix defined as
(39) |
with determinant , which is nonzero since we specifically constructed to consist of linearly independent columns. We can calculate the product as
(40) |
So, on the one hand, we have that
(41) |
while on the other hand, we have
(42) |
Thus, by equating equations (41) and (42), multiplying both sides by , and dividing by , we obtain the equality we set out to prove. ∎
In particular, using as defined in (33) and defined to be the fundamental matrix in equation (24), we can apply Lemma 2.4 to obtain the following relation:
(43) |
Hence, is zero if and only if . Since we have supposed , we have that is invertible and we can apply Cramer’s rule to solve equation (32). Of course, the right side of this equation is the bottom half of , which can be calculated to be
(44) |
So, applying Cramer’s rule to solve equation (32), we obtain
(45) |
where is defined as with the th column replaced by the right side of equation (2.4). It is this replaced column that introduces dependence which will be vital later, so we make special note of it.
Thus, the resolvent is given by:
(46) |
As previously mentioned, the choice of the different matrices depends on locally. However, our construction method overall will actually give us the resolvent globally, for all . Since our constructed is a meromorphic function which is identical to the resolvent on some disc over which our choice of matrices is valid, it is in fact equal to the resolvent globally due to the unique analytic continuation of holomorphic functions (which can be adjusted to extend to meromorphic functions).
3 From Resolvents to Maps
Let , and fix . If solves
(47) |
then, as seen in [5], one can use integration by parts to show that
(48) |
where is the inner product, is the inner product, and . Both are linear in their first component and antilinear in their second. We wish to somehow incorporate in this formula. Taking inspiration from [3], we introduce several objects. They can be constructed based on or matrices, but we can denote the construction of both at once in terms of the block diagonal matrices:
(49) |
First, we define a function by:
(50) |
We use this function to define the unitary matrix :
(51) |
We will now modify the following basic equation:
(52) |
Multiplying equation (52) by from the left gives us:
(53) |
We can get more useful identities from this one by introducing the Dirichlet, Neumann, and Robin projections.
Definition 3.1.
The Dirichlet Projection is the projection onto the kernel of . It can alternatively be defined as the projection onto the eigenspace of corresponding to the eigenvalue .
The Neumann Projection is the projection onto the kernel of . It can alternatively be defined as the projection onto the eigenspace of corresponding to the eigenvalue .
The Robin Projection is defined in terms of the other two by the equation
(54) |
It is also the projection onto the eigenspace of corresponding to all other eigenvalues not .
Remark 3.2.
All three projectors commute with , , and . We also get from the definitions that:
(55) |
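A small numerical sketch may help fix ideas. It assumes the common normalization (cf. [3]) in which conditions with matrices and correspond to the unitary matrix $U = -(A+iB)^{-1}(A-iB)$; the paper's exact scaling may differ, but the three projections behave the same way. The example takes three edges carrying Dirichlet, Neumann, and Robin conditions respectively.

```python
import numpy as np

# Assumed normalization (cf. [3]): conditions A u + B u' = 0 correspond to
# the unitary matrix U = -(A + iB)^{-1}(A - iB).  Example: three edges with
# Dirichlet, Neumann, and Robin (2u + u' = 0) conditions respectively.
A = np.diag([1.0, 0.0, 2.0])
B = np.diag([0.0, 1.0, 1.0])
U = -np.linalg.inv(A + 1j * B) @ (A - 1j * B)

eigvals, eigvecs = np.linalg.eig(U)

def spectral_projection(mask):
    cols = eigvecs[:, mask]
    return cols @ cols.conj().T

P_D = spectral_projection(np.isclose(eigvals, -1.0))   # Dirichlet projection
P_N = spectral_projection(np.isclose(eigvals, 1.0))    # Neumann projection
P_R = np.eye(3) - P_D - P_N                            # Robin projection, (54)

assert np.allclose(P_D + P_N + P_R, np.eye(3))         # resolution of identity
assert np.allclose(P_D @ P_N, np.zeros((3, 3)))        # mutual orthogonality
print(np.diag(P_D).real, np.diag(P_N).real, np.diag(P_R).real)
# -> [1. 0. 0.] [0. 1. 0.] [0. 0. 1.]
```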
One more important object is . It is an invertible self-adjoint operator constructed as follows:
(56) |
where is the restriction of to the range of (this restriction has domain and range , and is invertible). Since and are both block diagonal, these new objects are all block diagonal as well.
With the introduction of , we can now list one more set of defining properties of these objects. Since satisfies boundary conditions, we have that:
(57) |
Here our derivations diverge from those in [3], where equations (58), (59), and (60) are found with . Multiplying equation by from the left, we obtain:
(58) |
Multiplying equation (53) by from the left yields:
(59) |
Multiplying equation (53) by from the left gives:
(60) |
We can rewrite the previous three equations in terms of the Dirichlet and Neumann traces if we account for the normal vector scaling in Neumann traces. So, given some matrix , we introduce the new matrix defined as
(61) |
Then we may rewrite equations (58), (59), (60), and the last line of (59) as
(62) |
Note that, due to the block diagonal nature of all of the new matrices involved, we can safely rewrite the domain and range of as . Using all this machinery, we can alter our inner product formula.
Theorem 3.3.
Let , , and . Let be the image of under . If solves equation (47), then
(63) |
Proof.
By utilizing our new self-adjoint projections, we find that:
(64) |
As the Cayley Transform of the unitary matrix , is self-adjoint as a matrix from to . Thus, the last term in the last equality of equation (3) is .
As an additional simplification, we wish to eliminate from the term . We observe that can be rewritten as
Since is invertible, we have that if the image of under is some , then the image of under must be . Thus, , the restriction of to will map from to , and is defined as a map from to , so we see that
With these simplifications to equation (3), we obtain the desired formula. ∎
Theorem 3.3 can be used to construct with an arbitrary trace value , by using . In particular, by setting to constant functions taking the form of the various standard basis vectors of , we can solve for the components of one by one. Part of this process will involve the integrands produced by two different inner products. Note that the various include a column (from equation (2.4)) in which all entries carry definite integrals of from to with respect to the placeholder . Thus, taking a determinant and expanding down this column, we see that every term in this sum defining still includes a factor carrying this integral. By absorbing all constants and combining integral terms, we can write the whole determinant as one definite integral from to . If we define as without these integrals (that is, leaving behind a column of only the scaling constants), then we easily see that
(65) |
Before we proceed, we define the following adjustment vectors:
(66) |
Theorem 3.4.
Let , and let be the unique solution to
Then, the th component can be calculated using the formula:
(67) |
where the terms with in the numerator occur in the th and th components only.
Proof.
First we calculate the general Dirichlet and Neumann traces of . Since each column in is only nonzero in one component (see formula (22)), by using the definition of the resolvent from equation (46) (which itself references the particular solution formula from equation (2.3)), we find that for :
When , the components of these traces can be simplified as follows:
Thus, we can recover the th component of by setting , equating the integrands on both sides of equation (3.3) and using formulas (65), (3), completing the proof. ∎
4 Eigenvalue Counting Formulas
4.1 Single Split
We can split into two subgraphs at some non-vertex point , and , over which new conditions and operators are defined as in Remark 1.3. In addition, we consider boundary conditions and , which are identical to and , respectively, at all vertices except . In , we impose the Neumann condition , and in we impose the Neumann condition . Of course, from these new conditions we can construct operators and , defined as follows:
(68) |
We will now demonstrate a way to count eigenvalues of by instead working with and .
Remark 4.1.
We will use the following functions satisfying Dirichlet and Neumann conditions throughout the rest of this section: and . They are solutions to and satisfy the following boundary conditions along the th edge:
(69) |
For convenience, all other components (except for the jth components) of and are made identically zero.
4.1.1 Construction
When constructing , we suppose that and . Let solve the inhomogeneous problem
(70) |
Since does not satisfy the Dirichlet condition at , we find that for some , where is the solution satisfying the conditions (and thus also conditions) on at . Then, by equation (1.4), we have that
(71) |
Of course, we can rewrite this as a quotient of Wronskians by utilizing and from equation (4.1):
(72) |
Of course, these Wronskians are simply the determinants of the fundamental matrices for the and problems. Thus, rewritten as Evans functions, we have
(73) |
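On a single interval edge this quotient can be checked directly (an assumed example with zero potential on $[0,a]$, Dirichlet condition at $0$, and split point at $a = 0.3$): the one-sided map equals the Evans function of the Neumann-at-split problem divided by that of the Dirichlet-at-split problem.

```python
import numpy as np

# Interval sanity check of the quotient representation (assumed setup:
# zero potential on [0, a], Dirichlet at 0, split point z at a = 0.3).
a = 0.3
lam = np.array([0.5, 3.0, 7.0, 12.5, 21.0])
k = np.sqrt(lam)

map_direct  = k / np.tan(k * a)     # k * cot(k a), computed directly
E_neumann   = np.cos(k * a)         # Evans function, Neumann condition at z
E_dirichlet = np.sin(k * a) / k     # Evans function, Dirichlet condition at z
print(np.max(np.abs(map_direct - E_neumann / E_dirichlet)))   # ~1e-16
```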
4.1.2 Construction
The construction of as a quotient of Evans functions must use a different argument since it is defined by a problem over , which has a star structure.
Theorem 4.2.
Let . Then
Proof.
Let . Let be the unique solution to the problem
(74) |
That is, satisfies all boundary conditions except the Dirichlet condition at . Then by equation (1.4), we can express the map as follows:
(75) |
To find a more informative way of writing , we can use Theorem 3.4 with , , and fundamental and solutions defined on using conditions.
We introduce and , which are the matrices (see formula (49)) corresponding to boundary conditions. Due to the Dirichlet condition imposed at , we have that the th row and th column of are both , while the th row and column of consist of zero entries. Therefore, , which means that . Additionally, since is a projection onto , which is decidedly in, we have . However, since , , and are mutually orthogonal, we have that . This means that is sent to zero by , while it is passed through with nothing more than a sign swap. Hence, we can reduce Theorem 3.4 for in this case to
(76) |
Note that in this formula, and for the rest of Section 4.1.2, , , , , etc. are all those objects related to the construction of , as opposed to those from . However, note that all of the functions except constructed using conditions are equal to their condition counterparts.
The only portion of the Neumann trace we care about is .
(77) |
By Lemma 2.4 we have that , where is the fundamental matrix associated with the operator. Note that and are determinants of matrices identical to except for the th column, which is replaced by and , respectively.
If we replace everywhere in and with , we can again apply Lemma 2.4 and get an analogous result. Of course, with all the contributions replaced by contributions is simply (that is, and are swapped out everywhere for and , respectively) as in equation (65). Thus, we have that where is with all contributions replaced with equivalent contributions. Then, equation (77) can be rewritten as
(78) |
Since there is a Dirichlet condition at , we see that (the restriction of to ), and thus . Observe that can be rewritten by expanding down the column when to give us:
(79) |
where and are the complementary minors picked up in Laplace expansion. Additionally, by our assumed initial conditions:
(80) |
and thus
(81) |
Then, by multiplying equation by we see:
(From here on out, all variables stay the same, so we omit dependencies.)
(82) |
As discussed previously, is simply with all the contributions swapped for contributions. By the definition of and , the difference is precisely the determinant of the fundamental solution matrix with swapped out for . So, we certainly have , and thus we can represent as our desired quotient. Indeed, dividing equation (4.1.2) by tells us that
(83) |
∎
4.1.3 An Evans Function Relation
We are now prepared to prove Theorem 1.5.
Proof.
We know from equation (73) and Theorem 4.2 that we can represent and as the quotients of Evans functions:
(84) |
Then, multiplying by and applying equation (84), we see:
(85) |
Since is the initial value problem solution satisfying the condition associated with the vertex , and and are as defined in Equation (4.1), we can evaluate the determinants and at to obtain:
(86) |
We recall that and are both determinants whose every column is equal to the same column in the fundamental matrix for the problem except the th, which is either or , respectively. By evaluating these determinants at , we can multiply the and terms into the th columns of and , respectively, to turn the expression from equation (4.1.3) into the determinant of . So
(87) |
Dividing this equation by proves our desired equality. ∎
4.2 Double Split on One Wire
So far our eigenvalue counting theorem is only proven for cases in which and meet at one point. We will now extend this result to cases where we split twice on one wire.
Let and be interior points of the th wire of such that . We can split into three subgraphs as in Definition 1.8. Then by Theorem 1.5, we have that
(88) |
where is the two-sided map constructed at . In a later proof, it will be beneficial to break up maps and Evans functions built for . This will require the introduction of some new boundary conditions.
Definition 4.3.
Let , , , and be boundary conditions defined over . A superscript of assigns the Dirichlet condition , while a superscript of assigns the Neumann condition , where corresponds to the position in which the letter sits. So, for example, represents the boundary conditions . Note that is identical to the set of conditions called in Definition 1.8. We also have and defined over , which are identical to at all common vertices. For , the Dirichlet condition is applied; for , the Neumann condition is applied.
Remark 4.4.
At this point we have a standard way of building operators and Evans functions based on sets of boundary conditions on subgraphs. Given boundary conditions defined over some subgraph , we can let be the restriction of to . Then we can define the operator as follows:
We let the Evans function associated with be denoted by .
With this new notation, we can split at to get the three desired subgraphs, and another application of Theorem 1.5 gives us that
(89) |
where is a two-sided map defined at with a Dirichlet condition at .
Combining equations (88) and (89), we find that:
(90) |
We will now define the maps and , the two-dimensional extensions of the one-sided maps.
Definition 4.5.
Let and fix . Then, we define as follows:
where is the unique solution to the boundary value problem
Likewise, for and , we can instead define as follows
where is the unique solution to the boundary value problem
and is the th component of the unique solution to the boundary value problem
Remark 4.6.
and are meant to be analogs of the one-sided maps and . We can utilize some of the previously constructed maps to write as follows:
(91) |
Additionally, we can get a more concrete handle on using the following construction
(92) |
where and are the solutions to the boundary value problems:
(93) |
Note that these constructions of and are identical to those produced by the definitions.
Theorem 4.7.
Suppose is split (as described in Definition 1.8) into three quantum subgraphs , , and , at the points for some . Additionally, suppose that , and let be the two-sided map associated with and . Then
(94) |
where , , , and are the Evans functions associated with the operators , , , and , respectively.
Proof.
The desired result can be obtained from equation (90). We claim that the corrective term is precisely the determinant of the two-sided map . First, we construct several key functions. Let all be solutions to defined over , which satisfy the following conditions:
(95) |
Consider the functions and from equation (93). Since satisfies the Dirichlet condition at but not , it is parallel to , so there exists such that . Similarly, since satisfies the Dirichlet condition at but not , there exists some such that . Then we can rewrite as
(96) |
We quickly note that, by definition,
By equation (73) adjusted to a subgraph , we also have that
(97) |
We additionally note that , and can be substituted as such into . The final note we make is that
(98) |
We can now begin calculating the determinant of the two-sided map.
(99) |
On the other hand, we can also simplify the corrective term. By Theorem 4.2, we have that , which can be further decomposed. In particular, another application of Theorem 1.5 allows us to obtain
(100) |
where is the associated two-sided map at (here there is a Neumann condition at ). Observe that (the reasoning is clear from Figure 3). Thus, by equations (89) and (100) we have
(101) |
Finally, we note that, by equation (73) adjusted to a subgraph , can be rewritten in terms of Evans functions as follows:
(102) |
We can now decompose the corrective term into a workable form. Indeed,
(103) |
Comparing equations (4.2) and (4.2), we indeed find that as claimed. This means we can substitute into equation (90), completing the proof of this theorem. ∎
4.3 Splitting on Two Wires
Suppose we wish to split into three subgraphs, where the cuts are made on two different wires. We can extend our Evans function equivalence to this case as well.
Without loss of generality, we assume that these cuts are made on the first and second wires at vertices and satisfying and . The first split is done on the first wire, splitting into and . We define our usual , , , and conditions as described in Remark 1.3. The second split is done on the second wire, splitting into and . Over these subgraphs we define six sets of boundary conditions: , , , , , and . and both include the condition at , but includes the Dirichlet condition at , while includes the Neumann condition . The various sets share conditions at all common vertices between and except . The conditions at are either Dirichlet (associated with superscript ) or Neumann (associated with superscript ), where is determined by the position of the signifying letter in the superscript. For example, includes a Neumann condition at and a Dirichlet condition at . Note that is exactly the same as as defined in Definition 1.8. We define operators and corresponding Evans functions for all of these boundary conditions according to Remark 4.4.
As in the previous section, we can extend our counting equivalence if the following equality holds: . Of course, for this new case we need to slightly adjust the definitions of and from those in Definition 4.5.
Definition 4.8.
Let and . Then we can define as follows:
where solves
Note that in this case consists of only two components, since its domain is two disconnected wires.
Likewise, for and , we define as follows:
where is the unique solution to the boundary value problem
Remark 4.9.
We can find a helpful matrix representation of by observing its action on the standard basis vectors and . That is, to construct , we introduce the functions and over that represent the unique solutions to the following boundary value problems:
(106) |
and
(107) |
Then can be constructed as follows:
(108) |
Also, we will build in terms of maps already encountered:
We now have all the necessary objects defined, and can prove our extension of Theorem 1.5.
Theorem 4.10.
Suppose is split (as described in Definition 1.8) into three quantum subgraphs , , and , at non-vertex points and on separate edges of . Additionally, suppose that , and let be the two-sided map associated with and . Then
(109) |
where , , , and are the Evans functions associated with the operators , , , and , respectively.
Proof.
Consider the and functions as discussed in Remark 4.9. We note that, by Theorem 4.2 adjusted to a subgraph ,
(110) |
Additionally, walking through the argument leading to equation (83), we find that
(111) |
We still need to classify the off-diagonal entries of . By applying Theorem 3.4 on with , , and boundary conditions, we can find to be
and the same theorem with , tells us that is
There is only one component of the trace vectors taken in Theorem 3.4, by reasoning similar to that at the start of the proof of Theorem 4.2. That is, because of the Dirichlet conditions of at and , we find that and , and therefore (the reason why is nearly identical to that at the start of the proof of Theorem 4.2).
As in previous cases, we specify some well-behaved archetypal functions. Let on edge and on edge be components of solutions to which satisfy the following conditions:
(112) |
We observe that and equal and , respectively. Thus, and . Recognizing that and are both specified to be , the off-diagonal terms in are just and . Taking the derivative of our current expressions, we obtain
(113) |
and
(114) |
We know that by using Lemma 2.4, in both of these equations can be transformed into for a corresponding fundamental matrix . By definition, . One can use Lemma 2.4 to see that where is simply the fundamental matrix with its column replaced by a vector which is in the second wire row, in the second wire derivative row, and zeros in all others. Similarly, where is simply the fundamental matrix with its column replaced by a vector which is in the first wire row, in the first wire derivative row, and zeros in all others.
Notice that by expanding down the and columns in with blocks, we find that
(115) |
where is the complementary minor and is the sign of the defining permutation of this expansion term. Similarly, we can expand down the and columns in with blocks to obtain
(116) |
where is the complementary minor, and again serves to denote the permutation sign. We can now rewrite our off-diagonal and entries as follows:
(117) |
Then the determinant of the two-sided map can be expanded out using our new expressions of the off-diagonal terms. Note that the objects we work with for the rest of this proof depend on only, so we suppress this dependence to make formulas more readable. With this in mind, using Remark 4.9 and equations (110), (111) and (4.3), the determinant expansion is
(118) |
Using Theorem 1.5 adjusted to a subgraph , we can decompose as follows:
(119) |
where is the relevant two-sided map associated with . Note that . Additionally, using Theorem 4.2 adjusted to a subgraph , we can rewrite as
(120) |
Therefore, we can expand out as
(121) |
Comparing the terms in equations (4.3) and (4.3), we see that we have the desired equality if
(122) |
An equivalent condition can be obtained by rewriting the above equation as
(123) |
Note that, since every determinant in this proposed equality is being multiplied by another determinant with an underlying matrix of the same shape, we can prove this statement for slightly modified matrices. That is, we can rearrange rows and columns as long as the same swaps are performed for every determinant involved. For the remainder of this proof, we let all matrices involved have row ordering of the form instead of the standard . These row swapped objects will be denoted by calligraphic letters. For example, and would be rewritten as and , respectively. Therefore, equation (123) is equivalent to
(124) |
We now begin a brute-force expansion of the various Evans functions on the left-hand side of equation (124). Note that they are entirely identical except in the and columns. In these columns, playing the role of and (and thus their derivatives), are: and for , and for , and for , and and for .
We choose to expand each Evans function along these two columns in blocks. This means that, letting stand in for the or in the column, these determinants expand in the form
(125) |
where each is some complementary minor. The and indicate which two of the first four rows are inherited as the first two rows of the underlying matrix of the minor. We note that, using such notation, (defined the same way as the other ).
Now, using the expansion from equation (4.3), we can rewrite equation (124) as
(126) |
We can now prove equation (124) by showing that . This can be done via the construction of a special matrix . is a matrix in which each row is split into two halves, each half being either a zero row or a length- row from the fundamental matrix with its and columns removed. Let represent the first row of this modified , the second, and so on down to . Then, listing zero rows as , we construct like so:
(127) |
The rows from to are those which are common in all the various . Thus, the relevant minors can be written as
(128) |
By expanding in blocks down the first columns, we arrive at the following expression:
(129) |
Alternatively, we can do some row and column operations before performing this expansion to alter without changing its determinant. First, for the th column of (where ), subtract the th column. Next, for each such that , add the th row to the th row. So, the determinant can now be written as
(130) |
This determinant is zero, since when we expand in blocks there are only nonzero rows available to be selected for the minors in the left column collection: . Thus, every term in the expansion will have a zero row, and will vanish. So, by our two calculations of , we have
which implies
Since , the above equation gives us the following
which proves equation (123), thus completing the proof. ∎
5 Applications
Here we will consider a quantum star graph with two edges of length . We will impose the Kirchhoff conditions and at the origin. We wish to see how the eigenvalues change as various potentials and endpoint conditions are introduced. In particular, we will graph Evans functions and map curves as varies over a certain interval. Since our eigenvalue counting formulas follow directly from our Evans function equalities, they are true over any interval . Thus, we represent the count of eigenvalues of an operator, or of zeros minus poles of a two-sided map, for by passing the interval as an argument in the relevant counting function from Theorem 1.7 or 1.11. For example, equals the number of eigenvalues of for .
5.1 Potential Barrier/Well at Wire’s End
A potential barrier or well is a localized constant potential at the end of one wire. We can model these by setting for all potentials in except when , and letting be defined piecewise as:
(131) |
where . In this example, the boundary conditions will be represented by the following and matrices:
(132) |
These are Kirchhoff at the origin, and Dirichlet at the outer vertices. Using Theorem 1.5, we can count the eigenvalues of via Evans functions of its subproblems.
First, we split the problem in two by placing a new vertex at and imposing a Dirichlet condition there. The operator defined over is a Dirichlet Laplacian (with nonzero potential ), while the operator over is still a Kirchhoff Laplacian, but with a smaller edge .
To find the eigenvalues of , we construct the corresponding Evans function. For , it is equal to
(133) |
To find the eigenvalues of , we will construct the Evans function . For , we have
(134) |
Next we solve for the one-sided maps and . For , we have
(135) |
Finally, for we have
(136) |
Our method so far has been quite simple, consisting of a few determinants and some easy initial value problems defined by the various boundary condition matrices. We would like to check our results against the actual eigenvalues of . It can be shown that the positive eigenvalues of are precisely the zeros of the function
(137) |
Note that is an Evans function for , although not necessarily the one specified by our usual construction.
One can graphically see (for example in Figure 5) that the eigenvalues of are counted accurately by plotting , , , and .
In particular, the zeros of and coincide with poles of , representing cancellation. This is expected, since has no zeros at these points. Also as expected, the zeros of coincide with those of . In terms of counting functions, we find that
So the counting functions work in this case as predicted.
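The bookkeeping above can also be reproduced numerically. The sketch below is our own illustration with assumed parameters (edge lengths $1$, barrier of height $30$ on $[0.6, 1]$ of wire one, window $\lambda \le 120$): it counts sign changes of a shooting-based Evans-type determinant for the full star and compares against closed-form counts for the two subproblems; the barrier-free subgraph, a two-edge star joined by a degree-two Kirchhoff vertex, is equivalent to a single Dirichlet interval of length $1 + a$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed parameters: edge lengths 1, barrier q0 = 30 on [a, 1] = [0.6, 1]
# of wire one, Kirchhoff at the origin, Dirichlet at the outer vertices.
q0, a, lam_max = 30.0, 0.6, 120.0
q1 = lambda x: q0 * (x >= a)

def evans_full(lam):
    k = np.sqrt(lam)
    rhs = lambda x, y: [y[1], (q1(x) - lam) * y[0]]
    # wire one: shoot from the Dirichlet end, v1(1) = 0, v1'(1) = -1
    v1 = solve_ivp(rhs, (1.0, 0.0), [0.0, -1.0], rtol=1e-9, atol=1e-12).y[:, -1]
    # wire two (zero potential): v2 = sin(k(1 - x))/k, so v2(1) = 0
    v2, dv2 = np.sin(k) / k, -np.cos(k)
    # Kirchhoff matching at the origin, as a 2x2 Evans-type determinant
    return v1[0] * dv2 + v1[1] * v2

lams = np.linspace(0.5, lam_max, 1200)
vals = np.array([evans_full(l) for l in lams])
N = int(np.sum(vals[:-1] * vals[1:] < 0))   # sign changes (simple zeros assumed)

# closed-form counts for the two subproblems
ns = np.arange(1, 50)
N1 = int(np.sum(q0 + (ns * np.pi / (1 - a)) ** 2 <= lam_max))  # [a,1], potential q0
N2 = int(np.sum((ns * np.pi / (1 + a)) ** 2 <= lam_max))       # interval, length 1+a
print(N, N1, N2, N - (N1 + N2))   # last entry: the map's zeros minus poles
```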
5.2 Potential Barrier/Well on Wire’s Interior
In this example, we suppose that the boundary conditions at and are both Neumann () for . We additionally suppose we have a potential localized on the interior of wire one. Define the nonzero potential in as:
where , and . We wish to apply Theorem 1.11 to count the eigenvalues of . For , we find that the relevant Evans functions are:
and the relevant one-sided maps are:
and
Therefore, the map determinant is equal to
Finally, one can show that the eigenvalues of are given by the zeros of the following curve:
(138) |
which, as in the previous example, is an Evans function for .
Again, we can check our result visually by plotting these objects against one another and making sure the poles and zeros line up in the right places. An example is shown in Figure 6.
This figure not only has poles and zeros lining up in the right places, it also has proper multiplicity: order one poles when only one Evans function has a zero of order one, and order two poles when two Evans functions have a zero of order one at the same . Indeed, we find that
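A similar numerical check applies to the three-subgraph count of Theorem 1.11 (assumed parameters: barrier of height $40$ on $(0.3, 0.7)$ of wire one, Neumann outer ends, window $\lambda \le 150$). The two outer subgraphs reduce to mixed Dirichlet–Neumann intervals, the middle piece is a Dirichlet interval with constant potential, and the full count again comes from sign changes of a shooting determinant.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed parameters: barrier q0 = 40 on (a, b) = (0.3, 0.7) of wire one,
# Neumann conditions at both outer vertices, Kirchhoff at the origin.
q0, a, b, lam_max = 40.0, 0.3, 0.7, 150.0
q1 = lambda x: q0 * ((x > a) & (x < b))

def evans_full(lam):
    k = np.sqrt(lam)
    rhs = lambda x, y: [y[1], (q1(x) - lam) * y[0]]
    # wire one: shoot from the Neumann end, v1(1) = 1, v1'(1) = 0
    v1 = solve_ivp(rhs, (1.0, 0.0), [1.0, 0.0], rtol=1e-9, atol=1e-12).y[:, -1]
    # wire two (zero potential, Neumann at x = 1): v2 = cos(k(1 - x))
    v2, dv2 = np.cos(k), k * np.sin(k)
    return v1[0] * dv2 + v1[1] * v2          # Kirchhoff matching at the origin

lams = np.linspace(0.05, lam_max, 1500)
vals = np.array([evans_full(l) for l in lams])
N = int(np.sum(vals[:-1] * vals[1:] < 0))    # full-graph count (simple zeros)

ns = np.arange(1, 100)
N1 = int(np.sum(((ns - 0.5) * np.pi / (1 + a)) ** 2 <= lam_max))  # D at a, N outer
N2 = int(np.sum(q0 + (ns * np.pi / (b - a)) ** 2 <= lam_max))     # barrier piece
N3 = int(np.sum(((ns - 0.5) * np.pi / (1 - b)) ** 2 <= lam_max))  # [b,1], D at b
print(N, N1, N2, N3, N - (N1 + N2 + N3))     # last entry plays the role of P in (15)
```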
5.3 Potential Over Multiple Wires
To demonstrate our last result, we consider a potential extending over both wires, and suppose that the boundary conditions at and are Dirichlet. Let
for both and . To apply our theorem, we must construct the corresponding Evans functions.
and
It can be shown that the determinant of is given by
It can also be shown that the function
indicates the location of the positive eigenvalues of . Once again, one can use a plot to confirm our formula accurately predicts the location of eigenvalues, as demonstrated in Figure 7.
Again, this figure demonstrates proper cancellation between the subproblem eigenvalues and the poles of the map determinant, and the proper correspondence between eigenvalues of and the zeros of the map determinant. Examining the counting functions, we find that
Appendix A Note on Linear Algebraic Concepts
Despite introducing separate spatial variables on each wire, we still often wish to refer to notions of linear dependence and independence of solution sets.
Theorem A.1.
Let be a set of solutions to the equation , where is the differential expression from (7). Let be a matrix whose th column is followed by . Then is a function of only.
Proof.
We can calculate using Laplace expansion by complementary minors. In particular, we can choose to expand along the pair of rows and , which contain information about and , respectively. Let be the set of two-element combinations from the set . Then the determinant of is
(139) |
where is the sign of a permutation sending to , to , and all other numbers in are sent to the elements of in increasing order, and is the complementary minor obtained from by deleting the st and th rows, and the th and th columns. The term represents the row permutation sending to , to , and all else paired in increasing order. Since the determinants involved in this expansion are Wronskians of solutions to a second order ordinary differential equation with no first order term, by Abel's Identity they depend only on , and not on the choice of . We can clearly use a similar method to expand each by blocks of functions sharing one spatial variable, so none of these terms depend on either. Thus is independent of the spatial variables, just as in the single variable case. ∎
As with a set of single variable vector functions, we say that forms a linearly independent set if the only solution to
(140) |
is the one with for all . This definition needs no adjustment from the single-variable case since vector addition acts componentwise, and thus the separate spatial variables do not interact at all in this sum. Indeed, we are free to vary each individually without altering the result of the sum. Since the typical definition of linear dependence applies for functions on a quantum graph, and since Abel’s Identity still holds, one can use the standard proof to show that if and only if there is a linearly dependent set of columns in .
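Theorem A.1 can also be observed numerically. In the sketch below (assumed example: free Laplacian on a two-edge star with Kirchhoff origin and Dirichlet outer ends, evaluated at $k = \sqrt{\lambda} = 1.7$), the columns are two origin-condition solutions and two outer-condition solutions, and the determinant is unchanged as the evaluation point on each edge varies independently.

```python
import numpy as np

# Assumed example: free Laplacian on a two-edge star (lengths 1), Kirchhoff
# at the origin, Dirichlet outer ends, k = sqrt(lambda) = 1.7.  Columns are
# two origin-condition solutions and two outer-condition solutions; rows
# hold values and derivatives on each edge, evaluated at (x1, x2).
k = 1.7
c = lambda x: np.cos(k * x)
s = lambda x: np.sin(k * x) / k

def fundamental_det(x1, x2):
    M = np.array([
        [c(x1),           s(x1),  s(1 - x1),   0.0],         # values, edge 1
        [c(x2),          -s(x2),  0.0,         s(1 - x2)],   # values, edge 2
        [-k**2 * s(x1),   c(x1), -c(1 - x1),   0.0],         # derivatives, edge 1
        [-k**2 * s(x2),  -c(x2),  0.0,        -c(1 - x2)],   # derivatives, edge 2
    ])
    return np.linalg.det(M)

for x1 in (0.0, 0.4):
    for x2 in (0.2, 0.9):
        print(round(fundamental_det(x1, x2), 10))   # same value every time
```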
Acknowledgements
A.S. acknowledges the support of an AIM workshop on Computer assisted proofs for stability analysis of nonlinear waves, where some of the key ideas were discussed. A.S. acknowledges support from the National Science Foundation under grant DMS-1910820.
References
- [1] J. Alexander, R. Gardner and C. Jones “A topological invariant arising in the stability analysis of travelling waves” In J. reine angew. Math. 410, 1990, pp. 167–212
- [2] G. Berkolaiko, G. Cox and J. L. Marzuola “Nodal deficiency, spectral flow, and the Dirichlet-to-Neumann map” In Lett. Math. Phys. 109.7, 2019
- [3] G. Berkolaiko and P. Kuchment “Introduction to quantum graphs” 186, Mathematical Surveys and Monographs American Mathematical Society, Providence, RI, 2013
- [4] G. Cox, Y. Latushkin and A. Sukhtayev “Fredholm determinants, Evans functions and Maslov indices for partial differential equations” In Mathematische Annalen, 2023
- [5] F. Gesztesy and M. Mitrea “Generalized Robin boundary conditions, Robin-to-Dirichlet maps, and Krein-type resolvent formulas for Schrödinger operators on bounded Lipschitz domains” In Perspectives in partial differential equations, harmonic analysis and applications 79, Proc. Sympos. Pure Math. Amer. Math. Soc., Providence, RI, 2008
- [6] T. Kapitula and K. Promislow "Spectral and dynamical stability of nonlinear waves", Applied Mathematical Sciences, Springer, New York, 2013
- [7] Y. Latushkin and A. Sukhtayev “The Evans function and the Weyl-Titchmarsh function” In Discrete Contin. Dyn. Syst. Ser. S 5.5, 2012
- [8] R. L. Pego and M. I. Weinstein "Eigenvalues, and instabilities of solitary waves" In Philos. Trans. Roy. Soc. London Ser. A 340, 1992, pp. 47–94
- [9] B. Sandstede "Stability of travelling waves" In Handbook of dynamical systems, North-Holland, Amsterdam, 2002
- [10] J. Weidmann "Spectral Theory of Ordinary Differential Operators" Lecture Notes in Mathematics 1258, Springer-Verlag, Berlin, 1987