A Two-Level Preconditioned Helmholtz-Jacobi-Davidson Method for the Maxwell Eigenvalue Problem
Abstract: In this paper, based on a domain decomposition (DD) method, we propose an efficient two-level preconditioned Helmholtz-Jacobi-Davidson (PHJD) method for solving the algebraic eigenvalue problem resulting from the edge element approximation of the Maxwell eigenvalue problem. In order to eliminate the components in the orthogonal complement space of the eigenvector, we solve a parallel preconditioned system and a Helmholtz projection system together in the fine space. After one coarse space correction in each iteration and minimizing the Rayleigh quotient in a small-dimensional Davidson space, we finally obtain the error reduction $\gamma$ of this two-level PHJD method, where the constant in the bound is independent of the fine mesh size $h$ and the diameter $H$ of the subdomains, $\delta$ is the overlapping size among the subdomains, and $\gamma$ decreases as $H$ decreases, which means the greater the number of subdomains, the better the convergence rate. Numerical results supporting our theory are given.
Keywords: Maxwell eigenvalue problem, edge element, Helmholtz projection, Jacobi-Davidson method, domain decomposition.
1 Introduction
In this paper, we develop an efficient numerical algorithm for solving the Maxwell eigenvalue problem, which plays an important role in computational electromagnetism (see, e.g., [12, 4, 18, 15, 16, 3]). The governing time-harmonic equations are
(1.1) $\nabla\times\boldsymbol{E}=-\mathrm{i}\omega\mu\boldsymbol{H},\qquad \nabla\times\boldsymbol{H}=\mathrm{i}\omega\varepsilon\boldsymbol{E}\ \ \text{in }\Omega,\qquad \boldsymbol{E}\times\boldsymbol{n}=\boldsymbol{0}\ \ \text{on }\partial\Omega,$
and eliminating the magnetic field $\boldsymbol{H}$, we obtain the equivalent eigenvalue problem for the electric field $\boldsymbol{E}$ as follows:
(1.2) $\nabla\times(\mu^{-1}\nabla\times\boldsymbol{E})=\omega^{2}\varepsilon\boldsymbol{E}\ \ \text{in }\Omega,\qquad \nabla\cdot(\varepsilon\boldsymbol{E})=0\ \ \text{in }\Omega,\qquad \boldsymbol{E}\times\boldsymbol{n}=\boldsymbol{0}\ \ \text{on }\partial\Omega,$
where the resonant cavity $\Omega$ is a bounded convex polyhedral domain, the coefficients $\varepsilon$ and $\mu$ are the real electric permittivity and magnetic permeability, respectively, $\mathrm{i}$ is the imaginary unit, $\boldsymbol{n}$ is the unit outward normal on $\partial\Omega$, and $\omega$ is the resonant angular frequency of the electromagnetic wave for the cavity $\Omega$. For convenience, we denote
$\varepsilon=\varepsilon_{r}\varepsilon_{0},\qquad \mu=\mu_{r}\mu_{0},\qquad \lambda=\omega^{2}\varepsilon_{0}\mu_{0},$
where $\varepsilon_{0}$ and $\mu_{0}$ are the electric permittivity and magnetic permeability in vacuum, respectively. So equations (1.2) may be rewritten as:
(1.3) $\nabla\times(\mu_{r}^{-1}\nabla\times\boldsymbol{E})=\lambda\varepsilon_{r}\boldsymbol{E}\ \ \text{in }\Omega,\qquad \nabla\cdot(\varepsilon_{r}\boldsymbol{E})=0\ \ \text{in }\Omega,\qquad \boldsymbol{E}\times\boldsymbol{n}=\boldsymbol{0}\ \ \text{on }\partial\Omega,$
where the coefficients $\varepsilon_{r}$ and $\mu_{r}$ are the real relative electric permittivity and magnetic permeability, respectively. For simplicity, we assume that the medium is isotropic and homogeneous, i.e., the coefficients $\varepsilon_{r}$ and $\mu_{r}$ are both positive constants.
Edge elements were introduced by Nédélec (see, e.g., [19, 20]). The lowest-order edge elements are often referred to as Whitney elements (see, e.g., [12, 4] and the references therein). Since edge elements avoid nonphysical spurious nonzero eigenvalues, they have been widely used and studied for the Maxwell eigenvalue problem. It is known that the main challenge in solving the Maxwell eigenvalue problem is how to deal with the infinite-dimensional kernel of the curl operator. One approach to this difficulty is the so-called penalty method, which imposes the divergence-free condition by introducing a penalty term (see [7]), but it usually produces spurious eigenvalues. An alternative approach is the mixed variational method, introduced by Kikuchi (see [11]) and Boffi (see, e.g., [5, 4]). This method handles the kernel of the curl operator but simultaneously leads to a larger-scale computational problem and a difficult saddle-point problem. The standard method drops the divergence-free condition; although this may induce spurious zero frequencies, doing so does not contaminate the nonzero eigenvalues (see, e.g., [12, 4]). Two-grid methods have been widely used to solve elliptic eigenvalue problems (see, e.g., [27, 14, 28]), and their application to the Maxwell eigenvalue problem has been considered in [30]. The idea of two-grid methods is to first solve an eigenvalue problem on the coarse grid and then solve a boundary value problem on the fine grid; the Rayleigh quotient may be used to accelerate the approximation of the eigenvalue. Under suitable assumptions relating the coarse and fine mesh sizes (see [27], [30], [28]), asymptotically optimal error estimates may be obtained. Based on the de Rham complex, Hiptmair and Neymeyr (see [13]) proposed a projected preconditioned inverse iteration method, which handles the kernel of the curl operator by the Helmholtz projection.
A multilevel preconditioning technique is used to accelerate the inverse iteration procedure. This idea has also been extended to the adaptive discretization of the Maxwell eigenvalue problem (see [9]).
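The two-grid idea recalled above (coarse eigenvalue solve, one fine boundary-value solve, Rayleigh quotient update) can be illustrated on a one-dimensional model problem. The sketch below uses a finite-difference Laplacian rather than the Maxwell edge-element setting of this paper, and all grid sizes are illustrative choices.

```python
import numpy as np
from numpy.linalg import eigh, solve

def laplacian_1d(n):
    """FD matrix for -u'' on (0,1) with Dirichlet BCs, n interior points."""
    h = 1.0 / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return A

def two_grid_eig(nH=7, nh=63):
    AH, Ah = laplacian_1d(nH), laplacian_1d(nh)
    # Step 1: solve the eigenvalue problem on the coarse grid.
    lamH, UH = eigh(AH)
    uH = UH[:, 0]
    # Step 2: interpolate the coarse eigenvector to the fine grid.
    xH = np.linspace(0.0, 1.0, nH + 2)
    xh = np.linspace(0.0, 1.0, nh + 2)
    u0 = np.interp(xh, xH, np.concatenate(([0.0], uH, [0.0])))[1:-1]
    # Step 3: one boundary-value solve on the fine grid.
    w = solve(Ah, lamH[0] * u0)
    # Step 4: accelerate with the Rayleigh quotient on the fine grid.
    lam = (w @ Ah @ w) / (w @ w)
    return lamH[0], lam, Ah

lamH, lam_tg, Ah = two_grid_eig()
lam_h = eigh(Ah)[0][0]   # fine-grid eigenvalue, for comparison only
print(abs(lam_tg - lam_h), abs(lamH - lam_h))
```

The two-grid approximation is far more accurate than the plain coarse-grid eigenvalue, at the cost of a single linear solve on the fine grid.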
The two-level preconditioned Jacobi-Davidson methods for eigenvalue problems were proposed by Zhao, Hwang and Cai (see [29]) and further developed and analyzed by Wang and Xu (see [25, 26]) for $2m$-th order elliptic eigenvalue problems, where the convergence rate of the first eigenvalue is bounded independently of the fine mesh size. The first eigenvalue of a partial differential operator is usually called the principal eigenvalue (see, e.g., [10]). In this paper, based on a domain decomposition method, we shall propose an efficient and highly parallel two-level preconditioned Helmholtz-Jacobi-Davidson (PHJD) method for solving the Maxwell eigenvalue problem. We shall prove that the convergence rate $\gamma$ of the principal eigenvalue is bounded as follows:
(1.4) |
where $C$ is a constant independent of the mesh size $h$ and the diameter $H$ of the subdomains, $\delta$ is the overlapping size among the subdomains, $\lambda_{1,h}$ is the first discretized eigenvalue of the Maxwell eigenvalue problem, and the iterate after $k$ steps of the two-level PHJD method appears on the left-hand side. In addition, our algorithm works very well without a restrictive relation between $h$ and $H$, in contrast to the two-grid method, which needs such a relation for the Maxwell eigenvalue problem (see [30]). Meanwhile, our PHJD method exhibits good scalability, which shall be verified by our numerical experiments.
We must emphasize that applying the two-level domain decomposition preconditioned algorithm to the Maxwell eigenvalue problem is nontrivial. The first difficulty is that the condition
(1.5) |
holds for $2m$-th order elliptic problems (see [26]), but it is not true for the first-type edge elements because the local polynomial space is incomplete. The second difficulty is that the kernel of the curl operator in $H_{0}(\mathrm{curl};\Omega)$, which equals $\nabla H_{0}^{1}(\Omega)$ on a domain with trivial topology, is very large, while the kernel of the gradient operator on $H_{0}^{1}(\Omega)$ is the zero space. Hence, if we apply the Jacobi-Davidson method directly to the algebraic system resulting from the edge element approximation of the Maxwell eigenvalue problem, it fails to work, due to the fact that the iterative solution may fall into the kernel of the curl operator. The third difficulty is that the discrete divergence-free spaces are non-nested (the coarse space defined in the following is not contained in the fine one), which means that the discrete edge element space is essentially nonconforming for the divergence-free space. In this paper, we shall overcome these difficulties and prove that the convergence of the principal eigenvalue is nearly optimal.
This paper is organized as follows: some preliminaries are introduced in Section 2. In Section 3, our two-level preconditioned Helmholtz-Jacobi-Davidson method for the Maxwell eigenvalue problem is proposed. Some useful lemmas and the main convergence analysis are given in Section 4. Finally, we present our numerical results in Section 5.
2 The model problem and preliminary
Let $\Omega$ be a bounded convex polygonal or polyhedral domain. We use the standard notations for the Sobolev spaces and denote
(2.1) $H(\mathrm{curl};\Omega):=\{\boldsymbol{v}\in (L^{2}(\Omega))^{3}:\ \nabla\times\boldsymbol{v}\in (L^{2}(\Omega))^{3}\},$
equipped with the norm $\|\boldsymbol{v}\|_{\mathrm{curl}}:=(\|\boldsymbol{v}\|_{0}^{2}+\|\nabla\times\boldsymbol{v}\|_{0}^{2})^{1/2}$, where a plain letter denotes a scalar variable (derivatives are understood in the weak sense), a bold letter denotes a vector, $\boldsymbol{n}$ denotes the outer unit normal vector, $\boldsymbol{t}$ denotes the unit tangential vector along the boundary $\partial\Omega$, and $\|\cdot\|_{0}$, $(\cdot,\cdot)$ denote the usual $L^{2}$-norm and the $L^{2}$-inner product. We also denote
$H_{0}(\mathrm{curl};\Omega):=\{\boldsymbol{v}\in H(\mathrm{curl};\Omega):\ \boldsymbol{v}\times\boldsymbol{n}=\boldsymbol{0}\ \text{on }\partial\Omega\}$
and
(2.2) $H(\mathrm{div};\Omega):=\{\boldsymbol{v}\in (L^{2}(\Omega))^{3}:\ \nabla\cdot\boldsymbol{v}\in L^{2}(\Omega)\},$
equipped with the norm $\|\boldsymbol{v}\|_{\mathrm{div}}:=(\|\boldsymbol{v}\|_{0}^{2}+\|\nabla\cdot\boldsymbol{v}\|_{0}^{2})^{1/2}$. For notational convenience, we only consider the 3-dimensional Maxwell eigenvalue problem in the following, but our algorithm also works very well and the theoretical results also hold for the 2-dimensional Maxwell eigenvalue problem.
2.1 Maxwell eigenvalue problem
We focus on the governing equations (1.3). It is known that the equivalent variational form is as follows (see, e.g., [4]): find $(\lambda,\boldsymbol{u})\in\mathbb{R}\times H_{0}(\mathrm{curl};\Omega)$ with $\boldsymbol{u}\neq\boldsymbol{0}$ such that
(2.3) $a(\boldsymbol{u},\boldsymbol{v}):=(\mu_{r}^{-1}\nabla\times\boldsymbol{u},\nabla\times\boldsymbol{v})=\lambda(\varepsilon_{r}\boldsymbol{u},\boldsymbol{v})=:\lambda\, b(\boldsymbol{u},\boldsymbol{v})\quad\forall\,\boldsymbol{v}\in H_{0}(\mathrm{curl};\Omega),$
where the divergence-free space is $X:=\{\boldsymbol{v}\in H_{0}(\mathrm{curl};\Omega):\ b(\boldsymbol{v},\nabla q)=0\ \forall\, q\in H_{0}^{1}(\Omega)\}$. Define the norm $\|\cdot\|_{a}:=a(\cdot,\cdot)^{1/2}$ in $X$; due to the fact that the Poincaré inequality holds in $X$, this indeed defines a norm. For any $\boldsymbol{f}\in X$, define $T\boldsymbol{f}\in X$ such that
(2.4) $a(T\boldsymbol{f},\boldsymbol{v})=b(\boldsymbol{f},\boldsymbol{v})\quad\forall\,\boldsymbol{v}\in X.$
Combining with the Poincaré inequality, we know that the definition of the operator $T$ is meaningful. It is obvious that $T$ is symmetric in the sense of $a(\cdot,\cdot)$ because of the definitions of $a(\cdot,\cdot)$ and $b(\cdot,\cdot)$. Due to the fact that $X$ is embedded compactly in $(L^{2}(\Omega))^{3}$ (see, e.g., [1, 12]), we know that the operator $T$ is a compact operator. Hence, owing to the well-known Riesz-Schauder theorem, the eigenvalues of (2.3) may be arranged as
$0<\lambda_{1}\le\lambda_{2}\le\cdots\nearrow+\infty.$
Meanwhile, the corresponding eigenvectors are
$\boldsymbol{u}_{1},\boldsymbol{u}_{2},\ldots,$
which satisfy
$b(\boldsymbol{u}_{i},\boldsymbol{u}_{j})=\delta_{ij},$
where $\delta_{ij}$ is the Kronecker notation. Obviously the eigenpairs of (2.3) and the eigenpairs of the operator $T$ are in one-to-one correspondence (see [5, 30]).
2.2 Finite element discretization
We use the standard edge elements for the discretization, and our ultimate conclusions are true for all other edge element families (see [19, 20]). For simplicity, we only consider the lowest-order edge element. The local polynomial space on a tetrahedron $K$ is
(2.5) $\mathcal{R}_{K}:=\{\boldsymbol{a}+\boldsymbol{b}\times\boldsymbol{x}:\ \boldsymbol{a},\boldsymbol{b}\in\mathbb{R}^{3}\},$
and the moments (degrees of freedom) are the edge integrals $M_{e}(\boldsymbol{v}):=\int_{e}\boldsymbol{v}\cdot\boldsymbol{t}_{e}\,\mathrm{d}s$ over the six edges $e$ of $K$,
where $\boldsymbol{t}_{e}$ is the unit tangential vector along edge $e$, and the local basis vector fields are
$\boldsymbol{w}_{ij}=\lambda_{i}\nabla\lambda_{j}-\lambda_{j}\nabla\lambda_{i},$
where $\lambda_{i}$ is the barycentric coordinate corresponding to the $i$-th node on the tetrahedron $K$. So our edge element space $V_{h}\subset H_{0}(\mathrm{curl};\Omega)$ consists of fields that belong to $\mathcal{R}_{K}$ on each element, have continuous tangential components across element faces, and have vanishing tangential trace on $\partial\Omega$.
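As a concrete illustration of the lowest-order (Whitney) basis fields $\boldsymbol{w}_{ij}=\lambda_{i}\nabla\lambda_{j}-\lambda_{j}\nabla\lambda_{i}$, the following sketch evaluates them on a reference tetrahedron and verifies numerically the duality between basis fields and edge moments; the vertex numbering and the quadrature rule are our own illustrative choices.

```python
import numpy as np

# Reference tetrahedron and the (constant) gradients of the barycentric
# coordinates lambda_0..lambda_3.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
grads = np.array([[-1., -1., -1.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def barycentric(x):
    lam = np.empty(4)
    lam[1:] = x
    lam[0] = 1.0 - x.sum()
    return lam

def whitney(i, j, x):
    """Edge basis field w_ij = lambda_i grad(lambda_j) - lambda_j grad(lambda_i)."""
    lam = barycentric(np.asarray(x, dtype=float))
    return lam[i] * grads[j] - lam[j] * grads[i]

def edge_moment(i, j, k, l, nq=200):
    """Approximate int_e w_kl . t ds over the edge e = (i, j) (midpoint rule)."""
    a, b = verts[i], verts[j]
    t = (b - a) / np.linalg.norm(b - a)          # unit tangent of the edge
    s = (np.arange(nq) + 0.5) / nq               # quadrature points on [0,1]
    pts = a[None, :] + s[:, None] * (b - a)[None, :]
    vals = np.array([whitney(k, l, p) @ t for p in pts])
    return vals.mean() * np.linalg.norm(b - a)

# Duality: the moment of w_kl over edge (i,j) is 1 if (i,j) = (k,l), else 0.
M = np.array([[edge_moment(i, j, k, l) for (k, l) in edges] for (i, j) in edges])
print(np.round(M, 8))
```

The computed moment matrix is the identity, which is exactly the unisolvence of the edge degrees of freedom for the lowest-order space.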
Then the discrete standard variational form of (2.3) may be written as follows: find $(\lambda_{h},\boldsymbol{u}_{h})\in\mathbb{R}\times V_{h}$ with $\boldsymbol{u}_{h}\neq\boldsymbol{0}$ such that
(2.6) $a(\boldsymbol{u}_{h},\boldsymbol{v}_{h})=\lambda_{h}\, b(\boldsymbol{u}_{h},\boldsymbol{v}_{h})\quad\forall\,\boldsymbol{v}_{h}\in V_{h}.$
Define the discrete divergence-free space $X_{h}:=\{\boldsymbol{v}_{h}\in V_{h}:\ b(\boldsymbol{v}_{h},\nabla q_{h})=0\ \forall\, q_{h}\in S_{h}\}$, with $S_{h}$ denoting the continuous piecewise linear polynomial space in $H^{1}(\Omega)$ with vanishing trace on $\partial\Omega$, and define the discrete operator $T_{h}$ as follows: for any $\boldsymbol{f}\in X_{h}$,
(2.7) $a(T_{h}\boldsymbol{f},\boldsymbol{v}_{h})=b(\boldsymbol{f},\boldsymbol{v}_{h})\quad\forall\,\boldsymbol{v}_{h}\in X_{h}.$
Due to the discrete Poincaré inequality (see, e.g., [12]), we know that the definition of $T_{h}$ is meaningful and that $\|\cdot\|_{a}$ defines a norm in $X_{h}$. It is easy to see that $T_{h}$ is symmetric and compact. Because of the well-known Riesz-Schauder theorem, the discrete eigenvalues may be arranged as
$0<\lambda_{1,h}\le\lambda_{2,h}\le\cdots\le\lambda_{N_{h},h},$
and the corresponding eigenvectors are
$\boldsymbol{u}_{1,h},\boldsymbol{u}_{2,h},\ldots,\boldsymbol{u}_{N_{h},h},$
which satisfy
$b(\boldsymbol{u}_{i,h},\boldsymbol{u}_{j,h})=\delta_{ij},$
where $N_{h}=\dim X_{h}$. Similar to the continuous case, the eigenpairs of (2.6) and the eigenpairs of the operator $T_{h}$ are in one-to-one correspondence. For the convenience of the following error estimate, we define an auxiliary operator, the discrete analog of the corresponding operator in the continuous setting.
Remark 2.1
In this paper, we are interested in the case that the principal eigenvalue is simple, i.e.,
and for the corresponding discrete version, we also have
The principal eigenvalue with simple algebraic multiplicity is common. For example, let the resonant domain be a cuboid $(0,L_{1})\times(0,L_{2})\times(0,L_{3})$ in $\mathbb{R}^{3}$ whose edge lengths are pairwise distinct. Specifically, when the resonant cavity is filled with vacuum, the eigenvalues are $\lambda=(k_{1}\pi/L_{1})^{2}+(k_{2}\pi/L_{2})^{2}+(k_{3}\pi/L_{3})^{2}$, where $k_{1},k_{2},k_{3}$ are nonnegative integers and at most one of them is zero (see [3]).
It is known that we have the following space decomposition properties (see [4]):
(2.8) $H_{0}(\mathrm{curl};\Omega)=\nabla H_{0}^{1}(\Omega)\oplus X,$
(2.9) $V_{h}=\nabla S_{h}\oplus X_{h},$
where $\oplus$ denotes the $b(\cdot,\cdot)$-orthogonal (also $a(\cdot,\cdot)$-orthogonal) direct sum. For the convenience of the following notation, we denote the associated $b(\cdot,\cdot)$-orthogonal projections onto the two components. Similarly, we may define the corresponding operator on the coarse level.
It is obvious that $X_{H}\not\subset X_{h}$. Denote the corresponding $b(\cdot,\cdot)$-orthogonal projection onto the discrete divergence-free space, which is usually called the Hodge operator. It is easy to see that the elements of the resulting space are not finite element functions, but they are closely related to a well-known Raviart-Thomas element space with vanishing normal trace along the boundary $\partial\Omega$. We define an operator as follows:
(2.10) |
Furthermore, we may also define by a trivial extension.
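The Helmholtz (Hodge) splitting underlying this construction — separating a field into a gradient part and a divergence-free part — can be sketched in a simple periodic FFT setting. This is only an analogue of the edge-element projection, not the operator defined in (2.10); the test field below is our own illustrative choice.

```python
import numpy as np

def helmholtz_project(u, v):
    """Remove the gradient part of a periodic 2D vector field via FFT:
    solve Laplacian(p) = div(u, v), then return (u, v) - grad(p)."""
    n = u.shape[0]
    k = 2j * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # spectral wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div = kx * uh + ky * vh
    lap = kx**2 + ky**2
    lap[0, 0] = 1.0                                  # avoid dividing the mean mode
    ph = div / lap
    ph[0, 0] = 0.0
    return (np.real(np.fft.ifft2(uh - kx * ph)),
            np.real(np.fft.ifft2(vh - ky * ph)))

# A field = divergence-free part + gradient part grad(sin(2*pi*x)).
n = 64
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.cos(2 * np.pi * Y) + 2 * np.pi * np.cos(2 * np.pi * X)
v = -np.cos(2 * np.pi * X)
ud, vd = helmholtz_project(u, v)
```

The projection removes exactly the gradient part and leaves the divergence-free part (cos 2πy, −cos 2πx) unchanged, which is the defining property of the Hodge projection.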
According to the definition of , we may obtain the following lemma (see [23]).
Lemma 2.1
Let be a convex bounded polyhedral domain, we have
(2.11) |
The following a priori error estimates (see Theorem 5.4 in [5], Subsections 4.3 and 4.4 in [12]) are useful in our convergence analysis. The proof of the estimates essentially relies on strengthened discrete compactness properties and standard approximation properties. Interested readers may refer to Theorem 19.6 in [4] or [21] for details.
Theorem 2.2
Let be a bounded convex polyhedral domain. There exists such that for any and for any , we have
where C is independent of but not and is the algebraic multiplicity of . Moreover,
where , , the notation denotes the eigenvector space corresponding to the eigenvalue , the denotes the gap between subspace and subspace , i.e.,
or
where the norm is equivalent to the norm in .
Remark 2.2
In general spaces, the gap between subspaces does not define a distance, due to the fact that it does not satisfy the triangle inequality, but it does satisfy
where the denotes the Hausdorff distance between subspace and subspace . For interested readers, please refer to [17] for details on the distinction between the gap and Hausdorff distance.
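For readers who wish to compute the gap numerically, a small sketch in terms of orthogonal projectors is given below. It uses one common definition of the one-sided gap in the Euclidean norm; the paper's precise definition may differ in the choice of norm.

```python
import numpy as np

def proj(Q):
    """Orthogonal projector onto the column span of Q (orthonormal columns)."""
    return Q @ Q.T

def delta(X, Y):
    """One-sided gap: sup over unit x in span(X) of dist(x, span(Y))."""
    QX, _ = np.linalg.qr(X)
    QY, _ = np.linalg.qr(Y)
    return np.linalg.norm((np.eye(len(X)) - proj(QY)) @ proj(QX), 2)

def gap(X, Y):
    """Symmetric gap between the two subspaces."""
    return max(delta(X, Y), delta(Y, X))

e1 = np.array([[1.0], [0.0], [0.0]])
e2 = np.array([[0.0], [1.0], [0.0]])
mix = np.array([[1.0], [1.0], [0.0]])   # span{e1 + e2}
print(gap(e1, e1), gap(e1, e2), gap(e1, mix))
```

Identical subspaces have gap 0, orthogonal one-dimensional subspaces have gap 1, and the gap between span{e1} and span{e1+e2} is sin(45°) = 1/√2.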
From Theorem 2.2 and Remark 2.2, we obtain the following corollary.
Corollary 2.3
Under the assumption of Theorem 2.2, let be the first eigen-pair of , . Then there exists such that when , it holds that
(2.12) |
and there exists , , such that
(2.13) |
3 The two-level PHJD method
3.1 Domain decomposition
In this subsection, we shall introduce the domain decomposition and the corresponding subspaces decomposition.
Let the coarse quasi-uniform triangulation be $\mathcal{T}_{H}=\{\Omega_{i}\}_{i=1}^{N}$, whose subdomains form a nonoverlapping partition of $\Omega$. The fine shape-regular and quasi-uniform triangulation $\mathcal{T}_{h}$ is obtained by subdividing $\mathcal{T}_{H}$. We may construct the edge element spaces on $\mathcal{T}_{H}$ and $\mathcal{T}_{h}$, but it is well known that the corresponding discrete divergence-free spaces are not nested. To get the overlapping subdomains $\{\Omega_{i}'\}_{i=1}^{N}$, we enlarge each subdomain $\Omega_{i}$ by adding fine elements layer by layer such that $\partial\Omega_{i}'$ does not cut through any fine element. To measure the overlapping width between neighboring subdomains, we define the overlap parameter and denote it by $\delta$. We also assume that $H$ is the diameter of the subdomains $\Omega_{i}$.
The local subspaces may be defined by
(3.1) |
(3.2) |
which are associated with the local fine mesh in , where . We denote to be the -orthogonal projection and let .
Assumption 1
The partition $\{\Omega_{i}'\}_{i=1}^{N}$ may be colored using at most $N_{c}$ colors, in such a way that subdomains with the same color are disjoint; $N_{c}$ is independent of the number of subdomains $N$.
According to Assumption 1, we know that every point of $\Omega$ belongs to at most $N_{c}$ subdomains in $\{\Omega_{i}'\}$. Besides, we may obtain a partition of unity: there exists a family of functions $\{\theta_{i}\}_{i=1}^{N}$, which are continuous piecewise linear polynomials, satisfying the following properties (see [24]):
(3.3) |
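A one-dimensional sketch of such a partition of unity, with hypothetical subdomain size `H` and overlap `delta` (illustrative values, not from the paper), is given below; it checks the two properties in (3.3): the functions sum to one and their gradients are bounded by a constant times $1/\delta$.

```python
import numpy as np

def pu_functions(x, centers, H, delta):
    """Piecewise-linear partition of unity subordinate to the overlapping
    subdomains (c - H/2 - delta, c + H/2 + delta); hypothetical 1D sketch."""
    raw = []
    for c in centers:
        a, b = c - H / 2 - delta, c + H / 2 + delta
        t = np.minimum((x - a) / delta, (b - x) / delta)  # ramp over width delta
        raw.append(np.clip(t, 0.0, 1.0))
    raw = np.array(raw)
    return raw / raw.sum(axis=0)                          # normalize to sum to 1

H, delta = 0.25, 0.05
centers = np.array([0.125, 0.375, 0.625, 0.875])
x = np.linspace(0.0, 1.0, 2001)
theta = pu_functions(x, centers, H, delta)
grad = np.gradient(theta, x, axis=1)
print(theta.sum(axis=0).min(), np.abs(grad).max() * delta)
```

The second printed quantity stays bounded by a modest constant as `delta` shrinks, which is exactly the $|\nabla\theta_i|\lesssim \delta^{-1}$ bound used in the analysis.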
The strengthened Cauchy-Schwarz inequality is true over the local subspaces , i.e., for and , , there exists such that
(3.4) |
Let be the spectral radius of the matrix and we have (see [24])
Lemma 3.1
Let . If Assumption 1 holds, then
Moreover, for any ,
(3.5) |
3.2 The Jacobi-Davidson method
In this subsection, we focus on the Jacobi-Davidson method proposed by Sleijpen and van der Vorst (see [22]), which is an efficient algebraic method for solving eigenvalue problems. The idea of the Jacobi-Davidson method is to first solve the Jacobi correction equation in the orthogonal complement space of the current approximation of the eigenvector, and then update the new iterative solution in the expanding Davidson subspace. That is, the correction variable is computed in the orthogonal complement space of the current approximation, and the subspace in which the new eigenpair approximation may be found is expanded.
Let be -orthogonal projection and we present the detailed Jacobi-Davidson algorithm as follows:
Algorithm 1 The Jacobi-Davidson algorithm |
---|
1. Given the initial approximation of the first eigen-pair, and stopping |
tolerance . |
2. For 1, 2, 3, …, denote , solving the Jacobi correction equation: (3.6) 3. Minimizing the Rayleigh quotient in |
(3.7) where . |
4. If or , then return , otherwise goto step 2. |
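Algorithm 1 can be sketched in dense linear algebra as follows. In this sketch the correction equation (3.6) is solved exactly with a pseudo-inverse, whereas in practice it is only solved approximately, and the Euclidean inner product stands in for the $b(\cdot,\cdot)$-inner product of the paper; the test matrix is our own illustrative choice.

```python
import numpy as np

def jacobi_davidson(A, u0, tol=1e-10, max_it=50):
    """Basic Jacobi-Davidson iteration for the smallest eigenpair of a
    symmetric matrix A (dense-algebra sketch)."""
    n = A.shape[0]
    V = u0[:, None] / np.linalg.norm(u0)
    theta, u = 0.0, V[:, 0]
    for _ in range(max_it):
        # Rayleigh-Ritz: minimize the Rayleigh quotient over the Davidson space.
        S = V.T @ A @ V
        w, Y = np.linalg.eigh(S)
        theta, u = w[0], V @ Y[:, 0]
        r = A @ u - theta * u
        if np.linalg.norm(r) < tol:
            break
        # Jacobi correction equation in the orthogonal complement of u:
        # (I - u u^T)(A - theta I)(I - u u^T) t = -r,  with t orthogonal to u.
        P = np.eye(n) - np.outer(u, u)
        t = np.linalg.pinv(P @ (A - theta * np.eye(n)) @ P) @ (-r)
        # Expand the Davidson space with the orthonormalized correction.
        t = t - V @ (V.T @ t)
        tn = np.linalg.norm(t)
        if tn < 1e-14:
            break
        V = np.hstack([V, (t / tn)[:, None]])
    return theta, u

rng = np.random.default_rng(0)
B = rng.standard_normal((40, 40))
A = B @ B.T + 40 * np.eye(40)       # symmetric positive definite test matrix
theta, u = jacobi_davidson(A, rng.standard_normal(40))
print(abs(theta - np.linalg.eigvalsh(A)[0]))
```

With an exact correction solve the iteration converges in a handful of steps; the whole point of the preconditioned variants discussed next is to replace that exact solve by something cheap and parallel.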
For the above algorithm, it is easy to see that is equivalent to the following formulation
(3.8) |
The coefficient operator of the Jacobi correction equation is indefinite and nearly singular as the approximation approaches the desired eigenpair. From the fast and parallel computing point of view, we may try to design some efficient preconditioners for it. Given such a preconditioner, we may get the preconditioned correction equation:
The correction variable may be obtained by
where $\alpha$ is the so-called orthogonal parameter.
3.3 The two-level PHJD algorithm based on a domain decomposition
Next, we shall propose our two-level preconditioned Helmholtz-Jacobi-Davidson (PHJD) algorithm. In our case, the preconditioner may be chosen as
(3.9) |
where denote the -orthogonal projections, denotes and denotes , i.e., they satisfy that for any ,
(3.10) |
Combining a scaling argument, the Poincaré inequality and an inverse estimate, it is easy to see that the preconditioner is symmetric and we may obtain the following properties:
(3.11) |
Algorithm 2 The two-level PHJD algorithm |
---|
1. Given the initial approximation of the first eigen-pair. |
(3.12) Let |
2. Solving preconditioned Jacobi correction equation, i.e., for 1, 2, 3, …, |
denote and let (3.13) Then define (3.14) where |
3. Solving the Helmholtz projection system, (3.15) Let . |
4. Minimizing the Rayleigh quotient in |
(3.16) where |
5. If or , then return , otherwise goto step 2 |
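The preconditioner (3.9) in step 2 — one coarse solve plus overlapping local solves — is the classical two-level additive Schwarz operator. The following sketch assembles it explicitly for a 1D model Laplacian (not the Maxwell operator, and without the Helmholtz projection of step 3) and checks that the preconditioned spectrum is well conditioned; all sizes are illustrative choices.

```python
import numpy as np

def laplacian(n):
    """1D FD Laplacian with Dirichlet BCs, a stand-in for the fine operator."""
    h = 1.0 / (n + 1)
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def two_level_additive_schwarz(A, n_sub, overlap):
    """Explicit matrix B^{-1} = sum_i R_i^T A_i^{-1} R_i + P A_c^{-1} P^T."""
    n = A.shape[0]
    Binv = np.zeros((n, n))
    for i in range(n_sub):                       # overlapping local solves
        lo = max(0, (i * n) // n_sub - overlap)
        hi = min(n, ((i + 1) * n) // n_sub + overlap)
        idx = np.arange(lo, hi)
        R = np.zeros((len(idx), n))
        R[np.arange(len(idx)), idx] = 1.0
        Binv += R.T @ np.linalg.inv(R @ A @ R.T) @ R
    # Coarse correction: hat-function interpolation from n_sub - 1 coarse nodes.
    xf = np.linspace(0.0, 1.0, n + 2)[1:-1]
    xc = np.linspace(0.0, 1.0, n_sub + 1)[1:-1]
    P = np.maximum(0.0, 1.0 - np.abs(xf[:, None] - xc[None, :]) * n_sub)
    Binv += P @ np.linalg.inv(P.T @ A @ P) @ P.T
    return Binv

A = laplacian(127)
Binv = two_level_additive_schwarz(A, n_sub=8, overlap=4)
lam = np.sort(np.real(np.linalg.eigvals(Binv @ A)))
print(lam[0], lam[-1], lam[-1] / lam[0])
```

The spectrum of $B^{-1}A$ stays within a modest interval independent of the fine mesh size, which is what makes this operator a good preconditioner inside the correction step; in the PHJD method the local and coarse solves additionally act on divergence-free components only.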
Remark 3.1
We are interested in simply connected domains, where we may handle the kernel of the curl operator by solving a Poisson equation based on the de Rham complex. If a domain with nontrivial topology is considered, we need to take into account the space of harmonic forms, which is isomorphic to the de Rham cohomology group, as well as the quotient space whose dimension is the corresponding Betti number (see, e.g., [2, 4]).
Remark 3.2
For the first case, it is obvious that the operator is well defined. For the second case, the operator may be singular in our iterative procedure. In this case, we only need to refine the initial grid in step 1 of Algorithm 2, which ensures that the operator is well defined.
4 Convergence analysis
In this section, we focus on giving a convergence analysis of the two-level PHJD method. First, we present some useful lemmas needed in our main proof. The first lemma illustrates that when the coarse grid size is sufficiently small, the distance, in the sense of the norm, between the first approximation of the eigenvector and the discrete eigenvector is sufficiently small too, as is the distance between the first approximation of the eigenvalue and the discrete eigenvalue. In fact, the proof of the first inequality in the first lemma essentially relies on the Hodge operator lemma (see [12]); a complete proof may be found in [30]. So next, we only prove the second and the third inequalities in Lemma 4.1.
Lemma 4.1
There exists a constant C such that
(4.1) |
(4.2) |
(4.3) |
where the notations were defined in the above two-level PHJD algorithm.
Proof. We first prove . Due to and , we have
Next, we compute the formula directly. Owing to and , we have
(4.4) |
Because of , we obtain
(4.5) | ||||
Moreover, from , we know that .
We now show that all the norms defined above are equivalent, i.e., the topologies induced by these norms coincide.
Lemma 4.2
The following bilinear forms construct the inner product in ,
Moreover, the norms induced by above inner products and the norm in are equivalent.
Proof. Combining and the fact that the Rayleigh quotient is a monotonic decreasing function with expanding Davidson space , for sufficiently small , we have
where the constant is independent of . Furthermore, for any , we have
(4.6) | ||||
Hence, we know that defines an inner product in . For any , we have
Owing to , we know . Hence,
that is
It is easy to see that as
, there exists a constant which is independent of , such that . Other conclusions in this lemma can be proved by a similar argument.
The following lemma, which may be proved by using the same technique as in the case of nodal element spaces (see [24, 23] and [6]), may be regarded as the 'bridge' between the coarse space and the fine space.
Lemma 4.3
Let be a shape-regular and quasi-uniform triangulation. Then
(4.7) |
(4.8) |
Furthermore, we have
Lemma 4.4
For any , it holds that
(4.9) |
(4.10) |
(4.11) |
where is independent of .
Proof. For any , let . According to the Theorem 2.2 and Corollary 2.3, we know that there exists a vector such that
Multiplying both sides of the above inequality, we have
Define , we have . Moreover,
(4.12) |
For the , similarly, we may find a such that
(4.13) |
Hence, combining with the triangle inequality, we obtain
(4.14) |
For , there exists a such that
Then
Finally,
Note that the norm is equivalent to the norm in the discrete divergence-free space. By Corollary 2.3, we may prove and by a similar argument.
For the convenience of the following convergence analysis, we choose a special case to analyze the error reduction. Let and . Next, we minimize the in and choose the in () to analyze the error reduction, i.e.,
(4.15) |
and set
(4.16) |
where is an undetermined parameter depending on the . By the above analysis, we know that , where .
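Minimizing the Rayleigh quotient over the two-dimensional space spanned by the current iterate and the correction, as in (4.15), reduces to a 2×2 generalized eigenvalue problem. A sketch with the Euclidean inner product playing the role of $b(\cdot,\cdot)$ and an illustrative diagonal test matrix is:

```python
import numpy as np

def rayleigh_min_2d(A, u, t):
    """Minimize the Rayleigh quotient over span{u, t}: reduces to a 2x2
    generalized eigenvalue problem via a Cholesky transformation."""
    V = np.column_stack([u, t])
    S = V.T @ A @ V                       # projected stiffness
    M = V.T @ V                           # projected mass (Euclidean product)
    L = np.linalg.cholesky(M)
    Linv = np.linalg.inv(L)
    w, Y = np.linalg.eigh(Linv @ S @ Linv.T)
    y = Linv.T @ Y[:, 0]                  # back-transform the minimizer
    x = V @ y
    return w[0], x / np.linalg.norm(x)

rng = np.random.default_rng(1)
A = np.diag(np.arange(1.0, 11.0))         # eigenvalues 1..10 (illustrative)
u = rng.standard_normal(10)
t = rng.standard_normal(10)
lam, x = rayleigh_min_2d(A, u, t)
rq = lambda v: (v @ A @ v) / (v @ v)
print(lam, rq(u), rq(t))
```

The minimizer never increases the Rayleigh quotient of either input vector, which is precisely the monotonicity of the Rayleigh quotient over expanding subspaces used repeatedly in this section.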
Owing to the Helmholtz projection, we know that and we denote
(4.17) |
From , , it is easy to see that
We substitute the expression of the preconditioner into the above formula, and then obtain
where the last equality has used the partitions of the identity operator defined on and . For any , , , so we have
Furthermore, by using the partitions of the identity operator defined on , we obtain
which, together with the definition of residual vector and , yields
(4.18) |
Here we call the first part the error principal term, the corresponding operator the error principal operator, and the remaining part the almost counterbalanced term. It is then easy to prove that
(4.19) |
which is useful in the following error estimates.
Moreover, we note that and . In fact, as the Rayleigh quotient is a monotonically decreasing function of the expanding Davidson subspace, we have . For the first case, by Corollary 2.3 and , we know that
(4.20) |
For the second case, by Corollary 2.3 and , we have
(4.21) |
4.1 Estimate of the error principal term
In this subsection, we analyze the error principal term .
Theorem 4.5
For sufficiently small , there exists a constant such that
(4.22) |
where the constant is independent of and .
First of all, we give two useful lemmas.
Lemma 4.6
The error principal operator is symmetric in the sense of . Furthermore, if is sufficiently small, the operator is also positive definite.
Proof. It is easy to see that and are symmetric in the sense of , which result in
(4.23) |
and
(4.24) |
here . Combining , with the definition of , we obtain
(4.25) |
which means that is symmetric in the sense of .
To obtain the second conclusion in this lemma, we define an operator as follows:
(4.26) |
where . For sufficiently small , we know that . By the Lax-Milgram theorem in , we may see that the above definition is meaningful. Similarly, we may also define operator as follows:
(4.27) |
where . Because the Poincaré inequality holds in the local fine spaces, the above definitions are also meaningful.
It is easy to check that . In fact, for any , we have
Similarly, .
Hence, for any , we have
(4.28) |
For the second term of , owing to the definition of and Cauchy-Schwarz inequality, we have
(4.29) |
Meanwhile,
(4.30) |
Combining and together, we have
(4.31) |
For the third term of , we have
(4.32) |
Owing to , Lemma 3.1, Lemma 4.2 and the Poincaré inequality, we get
(4.33) |
Combining and together, we get
(4.34) |
which, together with , yields
where . If we take , then we may obtain the conclusion of this lemma.
Lemma 4.7
For the error space , it holds that
and there exist and such that
(4.35) |
Proof. For any , we take and , where is the standard edge element interpolation operator. Then
(4.36) |
which, together with the fact that for any , yields
Next, we prove . For the component in the coarse space, owing to Lemma 4.2, the orthogonality on and in the sense of and , the definition of and the Poincar inequality, we get
(4.37) |
Using Lemma 4.3, the definition of and the embedding theorem , we get
(4.38) |
Then
(4.39) |
For the local fine component, we may use the properties of the partition of unity and the property of the interpolation operator to prove that
(4.40) |
where the last inequality holds (see [23] or Lemma 10.9 in [24]). We first estimate the first term of . Combining the triangle inequality, , Lemma 2.1 and Lemma 4.3, we have
(4.41) |
On the one hand, it is easy to see that . So by using the orthogonality of , we know that the second term of may be estimated as follows:
Then, combining Lemma 2.1 and Lemma 4.3, we obtain
that is
(4.42) |
On the other hand, using the same argument as in the proof of Lemma 4.4, we know that there exists such that
Hence, by the orthogonality of , Cauchy-Schwarz inequality, Lemma 2.1 and Lemma 4.3, we obtain
Then, by the definition of , we get
(4.43) |
which, together with , , yields
(4.44) |
For the second term of , by and Lemma 4.2, we have
(4.45) |
Combining and , we know
(4.46) |
Finally, by Lemma 4.2, and , we may obtain the proof of .
Proof of Theorem 4.5: Because of Lemma 4.7, we know that for any , there exist and such that
(4.47) |
By , and Cauchy-Schwarz inequality, we may obtain
(4.48) |
Moreover,
Based on Lemma 4.6, we may define and know that is also a symmetric positive definite operator on . Hence, we have
Then
(4.49) |
which completes the proof of this theorem.
4.2 Estimate of the almost counterbalanced term
Before we analyze the estimate of the almost counterbalanced term , we first estimate the orthogonal parameter .
Lemma 4.8
For sufficiently small , it holds that
where is independent of .
Proof. First, we decompose the numerator of . Due to the definition of and the fact , we get
(4.50) |
where the last inequality holds owing to the Cauchy-Schwarz inequality and . In fact, for , it is obvious in Algorithm 2. As is the minimum point of the Rayleigh quotient in Algorithm 2, without loss of generality, let . Next, we estimate the terms one by one. For the first term in , we have
(4.51) |
By the definition of , , and Assumption , we know that for sufficiently small ,
(4.52) |
For the second term in , by the definition of , Lemma 4.4 and , we get
(4.53) |
For the third term in , by the Poincaré inequality, and , we obtain
(4.54) |
For the fourth term in , by Lemma 3.1, the Poincaré inequality, and , we get
(4.55) |
Combining and together, for sufficiently small , we have
(4.56) |
On the other hand, we analyze the denominator of
(4.57) |
Similarly, we estimate the terms in one by one. Note that by Lemma 4.4, we have
(4.58) |
and
(4.59) |
Meanwhile,
(4.60) |
where the last inequality holds due to the following estimate
where a similar argument as in is used. Hence, for the first two terms of , combining and , for sufficiently small , we have
(4.61) |
and
(4.62) |
For the third term of , by and Assumption , we know
(4.63) |
Combining together, we get
which, together with , proves this lemma.
Next, we estimate the almost counterbalanced term . For convenience of analysis, we set the notation as follows. It is easy to see that
For , we have
(4.64) |
Denote
(4.65) |
Using Lemma 4.2, the Pythagorean theorem on in the sense of and Lemma 4.4, we have
(4.66) |
Moreover,
Then
By , we may obtain
Furthermore, combining and , we have
(4.67) |
Using the orthogonality property of the Jacobi-Davidson method and the Helmholtz orthogonal decomposition, we get
By , and the Poincaré inequality, for sufficiently small , we have
(4.68) |
Then
It is known that
we have
Furthermore,
(4.69) |
Combining , , and , we obtain
Then
(4.70) |
Next, we estimate the terms and in . For the first term in , by the Poincaré inequality, and , we get
(4.71) |
For the second term in , by Lemma 3.1, the Poincaré inequality, and , we have
(4.72) |
For the third term in , by , Lemma 4.4 and Lemma 4.8, we obtain
(4.73) |
For the fourth term in , by and Lemma 4.8, we know
(4.74) |
Finally, we may finish the estimate of the almost counterbalanced term by combining Lemma 4.4, , and , i.e.,
(4.75) |
4.3 The main result of this paper
In this subsection, we first estimate the term in , and then we shall give our main result of this paper. For convenience, we denote . So
For any , we have
(4.76) |
Combining Lemma 4.4, and , we obtain
Hence,
Moreover, we may obtain the following estimate
that is
(4.77) |
For the term in , by the similar argument with , it is easy to see that
Hence, by Lemma 4.4, Lemma 4.8 and , we know
(4.78) |
For the terms and in , we denote . For any , combining the Poincaré inequality and , we have
Then
(4.79) |
Specially, taking , we obtain
(4.80) |
and similarly taking , we then have
(4.81) |
Combining Theorem 4.5, Lemma 4.8, , , , and , we may obtain the main convergence result of this paper.
Theorem 4.9
For the Algorithm 2, the discrete principal eigenvalue of the Maxwell eigenvalue problem satisfies
(4.82) |
and
(4.83) |
where , as and is independent of .
Proof. By , and Lemma 4.2, we have
where , as . Combining the orthogonality property of the Jacobi-Davidson method and the Helmholtz orthogonal decomposition, we have . Due to and , we have
Then
By Lemma 4.2, we may obtain
Moreover,
We know that this is a particular vector in the Davidson space, but not the one minimizing the Rayleigh quotient. Hence,
Then
Moreover,
which completes the proof of the theorem.
5 Numerical experiments
In this section, we shall present several numerical experiments in two and three dimensions to support our theoretical findings. We compute in double precision but only display four digits after the decimal point in the tables, except where more digits are shown to illustrate the convergence process. In the 2D cases, the stopping tolerance is set to . In the 3D cases, the stopping tolerance is set to .
5.1 2D Maxwell eigenvalue problems
In this subsection, we shall present some numerical results for the 2D Maxwell eigenvalue problem in rectangle domain, square domain and L-shaped domain.
Example 5.1
We consider the Maxwell eigenvalue problem on the two-dimensional domain with and use the lowest-order triangular edge element to compute the principal eigenvalue, which is with algebraic multiplicity . First, we choose an initial uniform partition with the number of subdomains and coarse grid size . We refine the grid uniformly layer by layer and fix the ratio . Next, we test the optimality and scalability of our PHJD algorithm.
1488 | 8 | 8.8778e-10 | 1.1214e-04 | 0.249933057840449 | - | |
6048 | 7 | 1.1871e-09 | 2.5439e-04 | 0.249983266630354 | 2.0002 | |
24384 | 7 | 1.2434e-09 | 2.1708e-04 | 0.249995816760741 | 2.0000 | |
97920 | 7 | 7.5604e-10 | 1.3077e-04 | 0.249998954192258 | 2.0000 | |
392448 | 7 | 6.9670e-10 | 1.0839e-04 | 0.249999738755104 | 2.0011 | |
1571328 | 7 | 1.3278e-10 | 8.8096e-04 | 0.249999934697985 | 2.0002 | |
6288384 | 7 | 1.4611e-10 | 9.4527e-04 | 0.249999983669919 | 1.9996 |
64 | 6288384 | 7 |
256 | 6288384 | 6 |
1024 | 6288384 | 6 |
In the tables, $h$ denotes the fine grid size, DOF the number of degrees of freedom, ITER the number of iterative steps of our algorithm, and 'order' the convergence order of the principal eigenvalue. It is shown in Table 1 that the convergence rate does not deteriorate as $h$ decreases. Meanwhile, the PHJD algorithm works very well for small $h$, which coincides with our theory. In order to verify the scalability of our PHJD method, we vary the number of subdomains and count the iterative steps for a fixed fine grid. The decreasing iterative steps in Table 2 illustrate that our method based on domain decomposition is scalable, which coincides with our theory. Actually, the near optimality and scalability of our method hold not only for the simple eigenvalue case but also for the multiple eigenvalue case.
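The order column is presumably computed from the eigenvalue errors on successive uniformly refined meshes; a minimal sketch of that computation, with synthetic errors behaving like $Ch^{2}$ (illustrative values, not the paper's data), is:

```python
import numpy as np

def observed_orders(errors):
    """Convergence order between successive uniform refinements (h -> h/2):
    order_k = log2(e_{k-1} / e_k)."""
    e = np.asarray(errors)
    return np.log2(e[:-1] / e[1:])

# Synthetic eigenvalue errors on meshes h, h/2, h/4, ...
h = 2.0 ** -np.arange(3, 9)
errors = 3.0 * h**2
print(observed_orders(errors))
```

Errors decaying like $Ch^{2}$ produce observed orders equal to 2, matching the second-order columns reported in Tables 1 and 3.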
Example 5.2
We consider the Maxwell eigenvalue problem in with and use the lowest-order triangular edge element to compute the principal eigenvalue, which is with algebraic multiplicity . First, we choose an initial uniform partition with the number of subdomains and coarse grid size . We refine the grid uniformly layer by layer and fix the ratio . Next, we also test the optimality and scalability of our PHJD algorithm.
Table 3: optimality of the PHJD method.

| DOF | its |  |  | eigenvalue | order |
| --- | --- | --- | --- | --- | --- |
| 736 | 7 | 2.1972e-09 | 2.0136e-04 | 0.998065902158390 | - |
| 3008 | 7 | 3.2392e-09 | 1.8341e-04 | 0.999515561847595 | 1.9973 |
| 12160 | 7 | 1.0886e-09 | 3.0479e-04 | 0.999878833291020 | 1.9993 |
| 48896 | 7 | 1.5651e-09 | 1.8827e-04 | 0.999969704716447 | 1.9998 |
| 196096 | 7 | 1.1096e-09 | 8.5103e-05 | 0.999992425964469 | 2.0000 |
| 785408 | 7 | 1.6332e-09 | 2.1306e-04 | 0.999998106489943 | 2.0000 |
| 3143680 | 7 | 1.2116e-10 | 6.8551e-04 | 0.999999526630577 | 2.0000 |
| 12578816 | 7 | 1.0379e-10 | 9.3310e-04 | 0.999999881662115 | 2.0001 |
Table 4: scalability of the PHJD method.

| subdomains | DOF | its |
| --- | --- | --- |
| 8 | 12578816 | 7 |
| 32 | 12578816 | 6 |
| 128 | 12578816 | 6 |
| 512 | 12578816 | 6 |
Although we only give the theoretical analysis for the first simple eigenvalue case, our algorithm still works well and keeps good scalability for the first multiple eigenvalue case. It is shown in Table 3 that the convergence rate does not deteriorate as the fine grid size decreases. Meanwhile, our PHJD algorithm works very well even on the finest grids. Similarly, we increase the number of subdomains (8, 32, 128, and 512) and count the iterative steps on the finest grid. Table 4 shows that the iterative steps remain stable as the number of subdomains increases, which illustrates that our PHJD method is scalable for the multiple principal eigenvalue case.
Example 5.3
We consider the 2D Maxwell eigenvalue problem in the L-shaped domain and use the lowest-order triangular edge element to compute the principal eigenvalue (see [8, 30]). First, we choose an initial uniform partition with a given number of subdomains and coarse grid size. We refine the grid uniformly layer by layer and fix the ratio. Next, we test the optimality and scalability of our PHJD algorithm.
Table 5: optimality of the PHJD method.

| DOF | its |  | eigenvalue | order |
| --- | --- | --- | --- | --- |
| 9088 | 8 | 2.4067e-09 | 1.472164089752624 | - |
| 36608 | 7 | 9.7419e-09 | 1.474258883109039 | 1.3431 |
| 146944 | 7 | 2.8876e-09 | 1.475083316185965 | 1.3397 |
| 588800 | 7 | 4.1055e-10 | 1.475408720712429 | 1.3374 |
| 2357248 | 7 | 8.3497e-11 | 1.475537406431712 | 1.3360 |
| 9433088 | 7 | 2.8653e-11 | 1.475588361248540 | 1.3351 |
Table 6: scalability of the PHJD method.

| subdomains | DOF | its |
| --- | --- | --- |
| 96 | 9433088 | 7 |
| 384 | 9433088 | 7 |
| 1536 | 9433088 | 7 |
We note that the first Maxwell eigenvector in the L-shaped domain has a strong unbounded singularity at the re-entrant corner, but our PHJD method still works very well. It is shown in Table 5 that the iterative steps remain stable as the mesh is refined, which illustrates that the convergence rate of our algorithm is independent of the fine grid size. The fact that the iterative steps remain stable as the number of subdomains increases in Table 6 illustrates that our algorithm is scalable. Although we only give the theoretical analysis for convex domains, our PHJD method works well and may be extended to more general Lipschitz domains.
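When the exact eigenvalue has no closed form, as on the L-shaped domain, the convergence order can still be estimated from the computed eigenvalues alone via successive differences across three consecutive refinement levels. A minimal sketch using the values in Table 5 (an illustration only, not the authors' procedure; the order column of Table 5 is computed against a reference eigenvalue, so these estimates agree with it only approximately):

```python
import math

# Eigenvalue approximations from Table 5 (L-shaped domain), coarsest to finest.
lam_h = [1.472164089752624, 1.474258883109039, 1.475083316185965,
         1.475408720712429, 1.475537406431712, 1.475588361248540]

# If lam_h - lam ≈ C * h^p and h halves at each level, then the ratio of
# successive differences (lam[k] - lam[k-1]) / (lam[k+1] - lam[k]) ≈ 2^p.
diffs = [lam_h[i + 1] - lam_h[i] for i in range(len(lam_h) - 1)]
orders = [math.log2(diffs[i - 1] / diffs[i]) for i in range(1, len(diffs))]
print([round(p, 3) for p in orders])
```

The estimates cluster near 1.34, consistent with the reduced convergence order in Table 5 caused by the corner singularity.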
5.2 3D Maxwell eigenvalue problems
In this subsection, we shall present some numerical results for the 3D Maxwell eigenvalue problem in a cuboid domain and a cube domain.
Example 5.4
We consider the 3D Maxwell eigenvalue problem and use the lowest-order cuboid edge element to compute the principal eigenvalue, which is simple. First, we choose an initial uniform partition with a given number of subdomains and coarse grid size. We refine the grid uniformly layer by layer and fix the ratio. Next, we test the optimality and scalability of our PHJD algorithm.
Table 7: optimality of the PHJD method.

| DOF | its |  | eigenvalue | order |
| --- | --- | --- | --- | --- |
| 22080 | 9 | 5.4339e-06 | 0.946879258942834 | - |
| 186496 | 8 | 3.1060e-06 | 0.945052693133526 | 2.0011 |
| 1532160 | 8 | 7.6311e-06 | 0.944598398293954 | 1.9822 |
Table 8: scalability of the PHJD method.

| subdomains | DOF | its |
| --- | --- | --- |
| 128 | 1532160 | 8 |
| 1024 | 1532160 | 7 |
It is shown in Table 7 that the iterative steps remain stable as the mesh is refined, which illustrates that the convergence rate of our algorithm is independent of the fine grid size. The fact that the iterative steps decrease in Table 8 illustrates that our algorithm is scalable. These results coincide with our theory. Actually, the near optimality and scalability of our method hold not only for the simple eigenvalue case but also for the multiple eigenvalue case in 3D.
Example 5.5
We consider the 3D Maxwell eigenvalue problem and use the lowest-order cuboid edge element to compute the principal eigenvalue, which is a multiple eigenvalue. First, we choose an initial uniform partition with a given number of subdomains and coarse grid size. We refine the grid uniformly layer by layer and fix the ratio. Next, we also test the optimality and scalability of our PHJD algorithm.
Table 9: optimality of the PHJD method.

| DOF | its |  | eigenvalue | order |
| --- | --- | --- | --- | --- |
| 10800 | 9 | 7.6503e-06 | 2.006433745425857 | - |
| 92256 | 8 | 4.5546e-06 | 2.001607079135022 | 2.0012 |
| 762048 | 7 | 9.8823e-06 | 2.000419881720004 | 1.9364 |
| 6193536 | 7 | 4.1799e-06 | 2.000105199582128 | 1.9969 |
Table 10: scalability of the PHJD method.

| subdomains | DOF | its |
| --- | --- | --- |
| 64 | 6193536 | 7 |
| 512 | 6193536 | 6 |
Although we only give the theoretical analysis for the first simple eigenvalue case, our algorithm still works well and keeps good scalability for the first multiple eigenvalue case. The iterative steps remain stable in Table 9 as the mesh is refined, which illustrates that the convergence rate is independent of the fine grid size. It is shown in Table 10 that the iterative steps decrease as the number of subdomains increases, which illustrates that the PHJD method is scalable.
6 Conclusions
In this paper, based on the domain decomposition method, we propose a new and robust two-level PHJD method for solving the Maxwell eigenvalue problem. The two-level PHJD method has good scalability and is asymptotically optimal without any assumption on the relation between the coarse grid size and the fine grid size. Numerical results confirm our theoretical findings.
References
- [1] Cherif Amrouche, Christine Bernardi, Monique Dauge, and Vivette Girault. Vector potentials in three-dimensional non-smooth domains. Mathematical Methods in the Applied Sciences, 21(9):823–864, 1998.
- [2] Douglas N Arnold, Richard S Falk, and Ragnar Winther. Finite element exterior calculus, homological techniques, and applications. Acta Numerica, 15:1–155, 2006.
- [3] Constantine A Balanis. Advanced Engineering Electromagnetics 2nd ed. John Wiley & Sons, 2012.
- [4] Daniele Boffi. Finite element approximation of eigenvalue problems. Acta Numerica, 19:1–120, 2010.
- [5] Daniele Boffi, Paolo Fernandes, Lucia Gastaldi, and Ilaria Perugia. Computational models of electromagnetic resonators: analysis of edge element approximation. SIAM Journal on Numerical Analysis, 36(4):1264–1290, 1999.
- [6] James H Bramble and Jinchao Xu. Some estimates for a weighted projection. Mathematics of Computation, 56(194):463–476, 1991.
- [7] Annalisa Buffa, Patrick Ciarlet, and Erell Jamelot. Solving electromagnetic eigenvalue problems in polyhedral domains with nodal finite elements. Numerische Mathematik, 113(4):497–518, 2009.
- [8] Annalisa Buffa, Paul Houston, and Ilaria Perugia. Discontinuous Galerkin computation of the Maxwell eigenvalues on simplicial meshes. Journal of Computational and Applied Mathematics, 204(2):317–333, 2007.
- [9] Junqing Chen, Yifeng Xu, and Jun Zou. An adaptive inverse iteration for Maxwell eigenvalue problem based on edge elements. Journal of Computational Physics, 229(7):2649–2658, 2010.
- [10] Lawrence C Evans. Partial Differential Equations: Second Edition, volume 19. American Mathematical Society, 2010.
- [11] Fumio Kikuchi. Mixed and penalty formulations for finite element analysis of an eigenvalue problem in electromagnetism. Computer Methods in Applied Mechanics and Engineering, 64(1-3):509–521, 1987.
- [12] Ralf Hiptmair. Finite elements in computational electromagnetism. Acta Numerica, 11:237–339, 2002.
- [13] Ralf Hiptmair and Klaus Neymeyr. Multilevel method for mixed eigenproblems. SIAM Journal on Scientific Computing, 23(6):2141–2164, 2002.
- [14] Xiaozhe Hu and Xiaoliang Cheng. Acceleration of a two-grid method for eigenvalue problems. Mathematics of Computation, 80(275):1287–1301, 2011.
- [15] Wei Jiang, Na Liu, Yifa Tang, and Qing Huo Liu. Mixed finite element method for 2d vector Maxwell’s eigenvalue problem in anisotropic media. Progress In Electromagnetics Research, 148:159–170, 2014.
- [16] Wei Jiang, Na Liu, Yongqing Yue, and Qing Huo Liu. Mixed finite-element method for resonant cavity problem with complex geometric topology and anisotropic lossless media. IEEE Transactions on Magnetics, 52(2):1–8, 2015.
- [17] Tosio Kato. Perturbation Theory for Linear Operators, volume 132. Springer Science & Business Media, 2013.
- [18] Jie Liu, Wei Jiang, Fubiao Lin, Na Liu, and Qing Huo Liu. A two-grid vector discretization scheme for the resonant cavity problem with anisotropic media. IEEE Transactions on Microwave Theory and Techniques, 65(8):2719–2725, 2017.
- [19] Jean-Claude Nédélec. Mixed finite elements in R^3. Numerische Mathematik, 35(3):315–341, 1980.
- [20] Jean-Claude Nédélec. A new family of mixed finite elements in R^3. Numerische Mathematik, 50(1):57–81, 1986.
- [21] John E Osborn. Spectral approximation for compact operators. Mathematics of Computation, 29(131):712–725, 1975.
- [22] Gerard LG Sleijpen and Henk A van der Vorst. A Jacobi–Davidson iteration method for linear eigenvalue problems. SIAM Review, 42(2):267–293, 2000.
- [23] Andrea Toselli. Overlapping Schwarz methods for Maxwell’s equations in three dimensions. Numerische Mathematik, 86(4):733–752, 2000.
- [24] Andrea Toselli and Olof Widlund. Domain Decomposition Methods: Algorithms and Theory, volume 34. Springer Science & Business Media, 2006.
- [25] Wei Wang and Xuejun Xu. A two-level overlapping hybrid domain decomposition method for eigenvalue problems. SIAM Journal on Numerical Analysis, 56(1):344–368, 2018.
- [26] Wei Wang and Xuejun Xu. On the convergence of a two-level preconditioned Jacobi–Davidson method for eigenvalue problems. Mathematics of Computation, 88(319):2295–2324, 2019.
- [27] Jinchao Xu and Aihui Zhou. A two-grid discretization scheme for eigenvalue problems. Mathematics of Computation, 70(233):17–25, 2001.
- [28] Yidu Yang and Hai Bi. Two-grid finite element discretization schemes based on shifted-inverse power method for elliptic eigenvalue problems. SIAM Journal on Numerical Analysis, 49(4):1602–1624, 2011.
- [29] Tao Zhao, Feng-Nan Hwang, and Xiao-Chuan Cai. A domain decomposition based Jacobi-Davidson algorithm for quantum dot simulation. In Domain Decomposition Methods in Science and Engineering XXII, pages 415–423. Springer, 2016.
- [30] J Zhou, X Hu, L Zhong, S Shu, and L Chen. Two-grid methods for Maxwell eigenvalue problems. SIAM Journal on Numerical Analysis, 52(4):2027–2047, 2014.