
Single-step triangular splitting iteration method for a class of two-by-two block linear system

Jie Wu [email protected] Xi-An Li [email protected],[email protected] 214 Institute of China North Industries Group, Anhui Province, Bengbu 233000, China School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai 200240, China
Abstract

For solving a class of block two-by-two real linear systems, a new single-step iteration method based on a triangular splitting scheme (SSTS) is proposed in this paper. The convergence properties of this method are carefully investigated; in addition, we determine its optimal iteration parameters and give the corresponding optimal convergence factor. It is worth mentioning that the SSTS iteration method is robust and outperforms the SBTS and PSBTS iteration methods under suitable conditions. Finally, some numerical experiments are carried out to validate the theoretical results and evaluate this new method.

keywords:
Two-by-two; Positive semidefinite; SSTS; Parameters; Preconditioner
MSC:
[2010]65F10 , 65F50

1 Introduction

We consider the numerical solution for a class of block two-by-two linear system of the form

𝒜z=[WTTW][xy]=[pq],\mathcal{A}z=\left[\begin{array}[]{cc}W&-T\\ T&W\\ \end{array}\right]\left[\begin{array}[]{c}x\\ y\end{array}\right]=\left[\begin{array}[]{c}p\\ q\end{array}\right], (1.1)

where W,Tn×nW,T\in\mathbb{R}^{n\times n} are both symmetric positive semidefinite matrices and satisfy null(W)null(T)={𝟎}\text{null}(W)\bigcap\text{null}(T)=\{\bm{0}\} with null()\text{null}(\cdot) denoting the null space of a given matrix, x,ynx,y\in\mathbb{R}^{n} are unknown vectors and p,qnp,q\in\mathbb{R}^{n} are given vectors. The solution of linear system (1.1)\eqref{0101} is unique [1] and it is a real equivalent formulation of the following complex symmetric system of linear equations

Au=bwithA=W+iTn×nandu=x+iy,b=p+iq,Au=b~{}\mathrm{with}~{}A=W+iT\in\mathbb{C}^{n\times n}~{}\mathrm{and}~{}u=x+iy,b=p+iq, (1.2)

where i=\sqrt{-1}. Systems of the form (1.1) or (1.2) frequently arise in scientific and engineering fields [2, 3, 4, 5].

Many effective iterative methods have been proposed in the literature for solving linear system (1.1). For example, Bai et al. presented the HSS method [6] based on the Hermitian/skew-Hermitian (HS) splitting of the matrix A, and developed the MHSS method [7] to accelerate the convergence of the HSS method. The MHSS method is described algorithmically as follows.

Method 1 (The MHSS iteration method).

Given an initial guess x(0)nx^{(0)}\in\mathbb{C}^{n}, for k=0,1,2,k=0,1,2,\cdots, until {x(k)}\{x^{(k)}\} converges, compute

{(αI+W)x(k+1/2)=(αIiT)x(k)+b,(αI+T)x(k+1)=(αI+iW)x(k+1/2)ib.\begin{cases}(\alpha I+W)x^{(k+1/2)}=(\alpha I-iT)x^{(k)}+b,\\ (\alpha I+T)x^{(k+1)}=(\alpha I+iW)x^{(k+1/2)}-ib.\end{cases}

where \alpha is a positive constant and I is the identity matrix.
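For reference, the two half-steps of Method 1 can be sketched with dense NumPy solves. This is a minimal illustration, not the authors' implementation: the test matrices, the stopping rule, and the dense `numpy.linalg.solve` calls stand in for the sparse or inexact inner solvers used in practice.

```python
import numpy as np

def mhss(W, T, b, alpha, tol=1e-6, maxit=500):
    """Sketch of the MHSS iteration for A u = b with A = W + i*T.

    Each sweep performs the two shifted solves of Method 1:
        (alpha*I + W) x^{k+1/2} = (alpha*I - i*T) x^k + b
        (alpha*I + T) x^{k+1}   = (alpha*I + i*W) x^{k+1/2} - i*b
    """
    n = W.shape[0]
    I = np.eye(n)
    A = W + 1j * T
    x = np.zeros(n, dtype=complex)          # zero initial guess
    r0 = np.linalg.norm(b - A @ x)
    for k in range(1, maxit + 1):
        x_half = np.linalg.solve(alpha * I + W, (alpha * I - 1j * T) @ x + b)
        x = np.linalg.solve(alpha * I + T, (alpha * I + 1j * W) @ x_half - 1j * b)
        if np.linalg.norm(b - A @ x) < tol * r0:
            return x, k
    return x, maxit
```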

They also proved that, for any positive parameter \alpha, the HSS and MHSS methods converge unconditionally to the unique solution of linear system (1.1) when T is symmetric positive semi-definite. Afterwards, variants of these two methods were generalized and discussed by many researchers; see [8, 9, 10] and the references therein. In 2016, an efficient scale-splitting (SCSP) iteration method [11] was constructed by multiplying both sides of (1.2) by the complex number (\alpha-i), and generalized versions of this method were developed later [12, 13]. These methods are effective for solving linear system (1.2).

To solve (1.1), a block preconditioned MHSS (PMHSS) iteration method [1] and its alternating-direction version [14] were presented following the idea of MHSS-like methods. In recent years, some other effective techniques for solving linear system (1.1) have also been proposed, such as C-to-R iteration methods [15, 5] and GSOR-like iteration methods [16, 17]. Very recently, Li et al. [18] established a symmetric block triangular splitting (SBTS) method for linear system (1.1) based on upper and lower triangular splittings of the matrix \mathcal{A}. It can be expressed simply as follows.

Method 2 (The SBTS iteration method).

Given initial vectors x^{(0)}\in\mathbb{R}^{n} and y^{(0)}\in\mathbb{R}^{n}, and a real relaxation factor \alpha>0. For k=0,1,2,\cdots, until \left\{(x^{(k)^{T}},y^{(k)^{T}})^{T}\right\} converges to the exact solution of (1.1), compute

{Wx(k+1/2)=Ty(k)+p,αWy(k+1/2)=(α1)Wy(k)Tx(k+1/2)+q,αWy(k+1)=(α1)Wy(k+1/2)Tx(k+1/2)+q,Wx(k+1)=Ty(k+1)+p,\begin{cases}Wx^{(k+1/2)}=Ty^{(k)}+p,\\ \alpha Wy^{(k+1/2)}=(\alpha-1)Wy^{(k)}-Tx^{(k+1/2)}+q,\\ \alpha Wy^{(k+1)}=(\alpha-1)Wy^{(k+1/2)}-Tx^{(k+1/2)}+q,\\ Wx^{(k+1)}=Ty^{(k+1)}+p,\end{cases}

where \alpha is a positive constant.
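The four sub-steps of Method 2 translate directly into code; note that each sweep solves four linear systems with the same coefficient matrix W (a point revisited in Remark 3). The sketch below is illustrative: the dense solves and the particular choice of \alpha are ours, and in general \alpha must be chosen as in [18].

```python
import numpy as np

def sbts(W, T, p, q, alpha, tol=1e-6, maxit=1000):
    """Sketch of the SBTS iteration (Method 2).

    Each sweep solves four systems with W (dense solves here;
    in practice a factorization of W would be reused)."""
    n = W.shape[0]
    x, y = np.zeros(n), np.zeros(n)
    bnorm = np.linalg.norm(np.concatenate([p, q]))
    for k in range(1, maxit + 1):
        x_h = np.linalg.solve(W, T @ y + p)
        y_h = np.linalg.solve(W, (alpha - 1) * (W @ y) - T @ x_h + q) / alpha
        y = np.linalg.solve(W, (alpha - 1) * (W @ y_h) - T @ x_h + q) / alpha
        x = np.linalg.solve(W, T @ y + p)
        # residual of the original system (1.1)
        res = np.concatenate([p - (W @ x - T @ y), q - (T @ x + W @ y)])
        if np.linalg.norm(res) < tol * bnorm:
            return x, y, k
    return x, y, maxit
```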

Numerical tests indicate that it is more powerful than the HSS and MHSS methods for solving (1.1). Analogous to the GSOR and PGSOR iteration methods, a preconditioned SBTS (PSBTS) iteration method was designed by Zhang et al. [19], and it outperforms the SBTS method under some restrictions. The SBTS and PSBTS methods resemble the iterative scheme of the SSOR method, with a symmetric structure in every iteration step. In view of the PGSOR and SSOR iteration methods, the SBTS method should still have room for improvement.

In this work, a single-step triangular splitting (SSTS) iteration method is developed for solving (1.1), which stems from a class of complex symmetric linear systems. We then investigate its convergence properties and provide approaches for choosing the optimal iteration parameters. Moreover, the new method has the same optimal convergence factor as the PSBTS method, yet it is faster than the latter.

The paper is structured as follows. In Section 2, we derive the process of establishing the SSTS iteration method for solving (1.1). In Section 3, the convergence properties of the SSTS iteration method are discussed. In Section 4, some numerical experiments are implemented to test this novel method. Finally, some brief conclusions are made in Section 5.

2 The SSTS iteration method

In this section, we describe the construction of our SSTS iteration method for linear system (1.1). Following the preconditioning technique of [17, 19], we first transform (1.1) by means of the following matrix

𝒫ω=[ωIIIωI],\mathcal{P}_{\omega}=\left[\begin{array}[]{cc}\omega I&I\\ -I&\omega I\end{array}\right],

where ω\omega is a positive real constant. Multiplying by matrix 𝒫ω\mathcal{P}_{\omega} on both sides of (1.1) gives

𝒜~z:=[ωW+T(ωTW)ωTWωW+T][xy]=[ωp+qωqp]=:b~.\widetilde{\mathcal{A}}z:=\left[\begin{array}[]{cc}\omega W+T&-(\omega T-W)\\ \omega T-W&\omega W+T\end{array}\right]\left[\begin{array}[]{c}x\\ y\end{array}\right]=\left[\begin{array}[]{c}\omega p+q\\ \omega q-p\end{array}\right]=:\widetilde{b}. (2.1)

For ease of discussion, we denote W~ω=ωW+T,T~ω=ωTW\widetilde{W}_{\omega}=\omega W+T,\widetilde{T}_{\omega}=\omega T-W and p~=ωp+q,q~=ωqp\widetilde{p}=\omega p+q,\widetilde{q}=\omega q-p throughout this paper. In light of the PSBTS iteration method, we split the coefficient matrix 𝒜~\widetilde{\mathcal{A}} of (2.1) into two matrices:

𝒜~=[W~ωOT~ωαW~ω][OT~ωO(α1)W~ω]:=α,ω𝒩α,ω,\widetilde{\mathcal{A}}=\left[\begin{array}[]{cc}\widetilde{W}_{\omega}&O\\ \widetilde{T}_{\omega}&\alpha\widetilde{W}_{\omega}\end{array}\right]-\left[\begin{array}[]{cc}O&\widetilde{T}_{\omega}\\ O&(\alpha-1)\widetilde{W}_{\omega}\end{array}\right]:=\mathcal{M}_{\alpha,\omega}-\mathcal{N}_{\alpha,\omega}, (2.2)

where α\alpha is a positive real parameter. Then, we consider the following SSTS iteration scheme according to the above splitting form.

[x(k+1)y(k+1)]=α,ω[x(k)y(k)]+𝒢α,ω[pq],\left[\begin{array}[]{c}x^{(k+1)}\\ y^{(k+1)}\end{array}\right]=\mathcal{H}_{\alpha,\omega}\left[\begin{array}[]{c}x^{(k)}\\ y^{(k)}\end{array}\right]+\mathcal{G}_{\alpha,\omega}\left[\begin{array}[]{c}p\\ q\end{array}\right], (2.3)

where

α,ω\displaystyle\mathcal{H}_{\alpha,\omega} =α,ω1𝒩α,ω\displaystyle=\mathcal{M}^{-1}_{\alpha,\omega}\mathcal{N}_{\alpha,\omega}
=[W~ωOT~ωαW~ω]1[OT~ωO(α1)W~ω]\displaystyle=\left[\begin{array}[]{cc}\widetilde{W}_{\omega}&O\\ \widetilde{T}_{\omega}&\alpha\widetilde{W}_{\omega}\end{array}\right]^{-1}\left[\begin{array}[]{cc}O&\widetilde{T}_{\omega}\\ O&(\alpha-1)\widetilde{W}_{\omega}\end{array}\right]
=[OW~ω1T~ωOα1αI1αW~ω1T~ωW~ω1T~ω]\displaystyle=\left[\begin{array}[]{cc}O&\widetilde{W}_{\omega}^{-1}\widetilde{T}_{\omega}\\ O&\frac{\alpha-1}{\alpha}I-\frac{1}{\alpha}\widetilde{W}_{\omega}^{-1}\widetilde{T}_{\omega}\widetilde{W}_{\omega}^{-1}\widetilde{T}_{\omega}\end{array}\right]

is the iteration matrix and 𝒢α,ω1=α,ω\mathcal{G}_{\alpha,\omega}^{-1}=\mathcal{M}_{\alpha,\omega}.

Analogous to the classical stationary iteration scheme, we obtain the SSTS iteration method for solving linear system (1.1) by utilizing (2.2). It is described as follows.

Method 3 (The SSTS iteration method).

Given any two initial vectors x^{(0)},y^{(0)}\in\mathbb{R}^{n} and two real relaxation factors \alpha,\omega>0, use the following procedure to update the iteration sequence \left\{(x^{(k)^{T}},y^{(k)^{T}})^{T}\right\} (k=0,1,2,\cdots) until it converges to the exact solution of (1.1).

{W~ωx(k+1)=T~ωy(k)+p~,αW~ωy(k+1)=(α1)W~ωy(k)T~ωx(k+1)+q~.\begin{cases}\widetilde{W}_{\omega}x^{(k+1)}=\widetilde{T}_{\omega}y^{(k)}+\widetilde{p},\\ \alpha\widetilde{W}_{\omega}y^{(k+1)}=(\alpha-1)\widetilde{W}_{\omega}y^{(k)}-\widetilde{T}_{\omega}x^{(k+1)}+\widetilde{q}.\\ \end{cases}

Since W,T\in\mathbb{R}^{n\times n} are symmetric positive semi-definite and satisfy \textup{null}(W)\cap\textup{null}(T)=\{\bm{0}\}, the matrix \widetilde{W}_{\omega} is symmetric positive definite. Hence each step of the SSTS iteration can be performed effectively, mostly in real arithmetic, either exactly by a Cholesky factorization or inexactly by conjugate gradient or multigrid methods.
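A minimal sketch of Method 3 follows. Since \widetilde{W}_{\omega} is symmetric positive definite, a single Cholesky factorization (here SciPy's `cho_factor`) can be reused for the two solves of every sweep; the test matrices and the choice \alpha=\omega=1 are illustrative, with the optimal parameters given later in Theorem 3.6.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def ssts(W, T, p, q, alpha, omega, tol=1e-6, maxit=500):
    """Sketch of the SSTS iteration (Method 3) for system (1.1).

    Per sweep: two solves with W_omega = omega*W + T, reusing one
    Cholesky factorization (W_omega is SPD by Lemma 3.1)."""
    n = W.shape[0]
    Wo = omega * W + T            # \widetilde{W}_omega
    To = omega * T - W            # \widetilde{T}_omega
    pt, qt = omega * p + q, omega * q - p
    c = cho_factor(Wo)
    x, y = np.zeros(n), np.zeros(n)
    bnorm = np.linalg.norm(np.concatenate([p, q]))
    for k in range(1, maxit + 1):
        x = cho_solve(c, To @ y + pt)
        y = cho_solve(c, (alpha - 1) * (Wo @ y) - To @ x + qt) / alpha
        # residual of the original system (1.1)
        res = np.concatenate([p - (W @ x - T @ y), q - (T @ x + W @ y)])
        if np.linalg.norm(res) < tol * bnorm:
            return x, y, k
    return x, y, maxit
```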

In light of the matrix splitting (2.2), the block lower triangular matrix \mathcal{M}_{\alpha,\omega}, whose diagonal blocks are symmetric positive definite, can be used as a preconditioner for the linear system (2.1). Each application of the preconditioner \mathcal{M}_{\alpha,\omega} within a preconditioned iteration requires solving a generalized residual equation of the form

[W~ωOT~ωαW~ω][ef]=[rs],\left[\begin{array}[]{cc}\widetilde{W}_{\omega}&O\\ \widetilde{T}_{\omega}&\alpha\widetilde{W}_{\omega}\end{array}\right]\left[\begin{array}[]{c}e\\ f\end{array}\right]=\left[\begin{array}[]{c}r\\ s\end{array}\right], (2.4)

In the sequel, the matrix \mathcal{M}_{\alpha,\omega} will be referred to as the SSTS preconditioner; it can be used to accelerate Krylov subspace methods such as GMRES. The system (2.4) can be solved by the following steps:
(1) Solve W~ωe=r\widetilde{W}_{\omega}e=r for ee;
(2) Solve W~ωf=1α(sT~ωe)\widetilde{W}_{\omega}f=\frac{1}{\alpha}\left(s-\widetilde{T}_{\omega}e\right) for ff.

Remark 1.

Observing the above procedure, we need to solve two linear systems with the same coefficient matrix \widetilde{W}_{\omega} at each iteration of our algorithm. In actual computations, \widetilde{W}_{\omega} is generally sparse, so fast numerical algorithms can be used for the linear systems in steps (1) and (2).
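Steps (1)-(2) amount to one reusable factorization of \widetilde{W}_{\omega}. A possible sketch of the preconditioner application (the function name is ours) is:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def make_ssts_preconditioner(W, T, alpha, omega):
    """Build a function applying M_{alpha,omega}^{-1} to a vector [r; s]
    via steps (1)-(2); the Cholesky factorization of W_omega is computed
    once and reused for both solves."""
    n = W.shape[0]
    Wo = omega * W + T
    To = omega * T - W
    c = cho_factor(Wo)
    def apply(rs):
        r, s = rs[:n], rs[n:]
        e = cho_solve(c, r)                      # step (1)
        f = cho_solve(c, s - To @ e) / alpha     # step (2)
        return np.concatenate([e, f])
    return apply
```

Wrapping `apply` in a `scipy.sparse.linalg.LinearOperator` makes it directly usable as the preconditioner argument of GMRES.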

3 Convergence discussion for the SSTS iteration method

In this section, we turn to study the convergence properties of the SSTS iteration method. Firstly, some useful lemmas are introduced to support our theories.

Lemma 3.1.

Let matrices W,Tn×nW,T\in\mathbb{R}^{n\times n} be symmetric positive semi-definite and satisfy null(W)null(T)={𝟎}{null}(W)\cap null(T)=\{\bm{0}\}. Then the matrices W~ω\widetilde{W}_{\omega} and T~ω\widetilde{T}_{\omega} with a real constant ω>0\omega>0 are symmetric positive definite and symmetric, respectively.

Lemma 3.2.

[16] Let matrices W~ω\widetilde{W}_{\omega}, T~ωn×n\widetilde{T}_{\omega}\in\mathbb{R}^{n\times n} be symmetric positive definite and symmetric, respectively. Then the eigenvalues of the matrix S~=W~ω1T~ω\widetilde{S}=\widetilde{W}_{\omega}^{-1}\widetilde{T}_{\omega} are all real.

Lemma 3.3.

[11] Let matrices W,Tn×nW,T\in\mathbb{R}^{n\times n} be symmetric positive semi-definite and satisfy null(W)null(T)={𝟎}{null}(W)\cap null(T)=\{\bm{0}\}. If μ\mu is an eigenvalue of S~=W~ω1T~ω\widetilde{S}=\widetilde{W}_{\omega}^{-1}\widetilde{T}_{\omega} with 0<ω0<\omega\in\mathbb{R}, then there is a generalized eigenvalue η\eta of matrix pair (W,T)(W,T) that satisfies μ=ωη1ω+η\mu=\frac{\omega\eta-1}{\omega+\eta} and the spectral radius of S~\widetilde{S} holds

ρ(S~)=max{1ωηminω+ηmin,ωηmax1ω+ηmax},\rho(\widetilde{S})={\max}\left\{\frac{1-\omega\eta_{\min}}{\omega+\eta_{\min}},\frac{\omega\eta_{\max}-1}{\omega+\eta_{\max}}\right\}, (3.1)

here and thereafter ηmin\eta_{\min} and ηmax\eta_{\max} are the extreme generalized eigenvalues of matrix pair (W,T)(W,T).

Based on the above lemmas, the following main conclusions about our new method are induced.

Theorem 3.4.

Let matrices W,T\in\mathbb{R}^{n\times n} be symmetric positive semi-definite and satisfy \text{null}(W)\cap\text{null}(T)=\{\bm{0}\}. Also let 0<\alpha,\omega\in\mathbb{R} and \widetilde{S}=\widetilde{W}_{\omega}^{-1}\widetilde{T}_{\omega}. Then \lambda=0 is an eigenvalue of \mathcal{H}_{\alpha,\omega} with multiplicity n, and the remaining n eigenvalues of \mathcal{H}_{\alpha,\omega} satisfy the following equation:

λ1+1+μi2α=0\lambda-1+\frac{1+\mu_{i}^{2}}{\alpha}=0 (3.2)

with μi(i=1,2,,n)\mu_{i}(i=1,2,\cdots,n) being the eigenvalues of S~\widetilde{S}. Furthermore, the spectral radius of α,ω\mathcal{H}_{\alpha,\omega} satisfies

ρ(α,ω)=max{|11+μmin2α|,|11+μmax2α|}.\rho(\mathcal{H}_{\alpha,\omega})=\max\left\{\left|1-\frac{1+\mu_{\min}^{2}}{\alpha}\right|,\left|1-\frac{1+\mu_{\max}^{2}}{\alpha}\right|\right\}. (3.3)

Here and thereinafter, μmin=minμisp(S~){|μi|}\mu_{\min}=\underset{\mu_{i}\in\textup{sp}(\widetilde{S})}{\min}\{|\mu_{i}|\} and μmax=maxμisp(S~){|μi|}\mu_{\max}=\underset{\mu_{i}\in\textup{sp}(\widetilde{S})}{\max}\{|\mu_{i}|\} with sp()\textup{sp}(\cdot) representing the spectrum for a given matrix.

Proof.

By Lemma 3.2, and since \widetilde{S}=\widetilde{W}_{\omega}^{-1}\widetilde{T}_{\omega} is similar to the symmetric matrix \widetilde{W}_{\omega}^{-1/2}\widetilde{T}_{\omega}\widetilde{W}_{\omega}^{-1/2}, the matrix \widetilde{S} has a spectral decomposition \widetilde{S}=P\Lambda P^{-1}, where P\in\mathbb{R}^{n\times n} is an invertible matrix and \Lambda=\mathrm{diag}(\mu_{1},\mu_{2},\cdots,\mu_{n}) is the diagonal matrix of the eigenvalues of \widetilde{S}. Since

^α,ω=𝒱1α,ω𝒱=[OΛOα1αI1αΛ2]with𝒱=[POOP],\hat{\mathcal{H}}_{\alpha,\omega}=\mathcal{V}^{-1}\mathcal{H}_{\alpha,\omega}\mathcal{V}=\left[\begin{matrix}O&\Lambda\\ O&\frac{\alpha-1}{\alpha}I-\frac{1}{\alpha}\Lambda^{2}\end{matrix}\right]~{}~{}\text{with}~{}~{}\mathcal{V}=\left[\begin{matrix}P&O\\ O&P\end{matrix}\right],

is similar to \mathcal{H}_{\alpha,\omega}, the two matrices have the same spectrum. Suppose \lambda is an eigenvalue of \mathcal{H}_{\alpha,\omega}; then we have

det(λIα,ω)\displaystyle det(\lambda I-\mathcal{H}_{\alpha,\omega}) =det(λI^α,ω)\displaystyle=det(\lambda I-\hat{\mathcal{H}}_{\alpha,\omega}) (3.4)
=det(λI[OΛOα1αI1αΛ2])\displaystyle=det\left(\lambda I-\left[\begin{array}[]{cc}O&\Lambda\\ O&\frac{\alpha-1}{\alpha}I-\frac{1}{\alpha}\Lambda^{2}\end{array}\right]\right)
=λndet(λIα1αI+1αΛ2)\displaystyle=\lambda^{n}det\left(\lambda I-\frac{\alpha-1}{\alpha}I+\frac{1}{\alpha}\Lambda^{2}\right)
=0.\displaystyle=0.

Here, II represents the nn-by-nn identity matrix. Obviously, λ=0\lambda=0 is an eigenvalue of α,ω\mathcal{H}_{\alpha,\omega} with multiplicity nn and the remaining nn eigenvalues satisfy (3.2) with respect to μi(i=1,2,,n)\mu_{i}(i=1,2,\cdots,n). Moreover,

(μi2)=11+μi2α\ell(\mu_{i}^{2})=1-\frac{1+\mu_{i}^{2}}{\alpha} (3.5)

is a decreasing function with respect to μi2\mu^{2}_{i}. According to the definition of spectral radius for a given matrix, we clearly have (3.3). ∎

In the following theorem, we derive the convergence domain of the SSTS iteration method.

Theorem 3.5.

Let matrices W,Tn×nW,T\in\mathbb{R}^{n\times n} be symmetric positive semi-definite and satisfy null(W)null(T)={𝟎}{null}(W)\cap null(T)=\{\bm{0}\}, ω>0\omega>0. Then the SSTS iteration method is convergent if and only if

α>1+μmax22.\alpha>\frac{1+\mu^{2}_{\max}}{2}. (3.6)
Proof.

In light of Theorem 3.4, \lambda=0 is an eigenvalue of \mathcal{H}_{\alpha,\omega} with multiplicity n, and the remaining n eigenvalues satisfy

λ1+1+μi2α=0.\lambda-1+\frac{1+\mu_{i}^{2}}{\alpha}=0.

The SSTS iteration method is convergent if and only if \rho(\mathcal{H}_{\alpha,\omega})<1, that is,

|λ|=|11+μi2α|<1,|\lambda|=\left|1-\frac{1+\mu_{i}^{2}}{\alpha}\right|<1, (3.7)

or equivalently,

{2α>1+μi2,1+μi2>0.\begin{cases}2\alpha>1+\mu_{i}^{2},\\ 1+\mu_{i}^{2}>0.\end{cases} (3.8)

Since the \mu_{i} are real by Lemma 3.2, 1+\mu_{i}^{2}>0 always holds, so (3.8) is satisfied for all i if and only if 2\alpha>1+\mu^{2}_{\max}. ∎

Remark 2.

From the above result and Theorem 2 in [19], the SSTS iteration method has a larger convergence domain for the parameter \alpha than the PSBTS iteration method, and it is insensitive to the parameter \omega. Thus, our method is more robust than the PSBTS iteration method for solving linear system (1.1).

Next, we provide the optimal selections of the parameters \alpha and \omega, which minimize the spectral radius of the iteration matrix \mathcal{H}_{\alpha,\omega} of the SSTS iteration method. Meanwhile, we also give the corresponding optimal convergence factor of our method.

Theorem 3.6.

Let the conditions of Theorems 3.4 and 3.5 be satisfied. Then the optimal values of the relaxation parameters α\alpha and ω\omega for the SSTS iteration method are given by

αopt=2+μmin2+μmax22andωopt=1ηminηmax+(1+ηmin2)(1+ηmax2)ηmin+ηmax,\alpha_{opt}=\frac{2+\mu^{2}_{\min}+\mu^{2}_{\max}}{2}~{}\textup{and}~{}\omega_{opt}=\frac{1-\eta_{\min}\eta_{\max}+\sqrt{(1+\eta^{2}_{\min})(1+\eta^{2}_{\max})}}{\eta_{\min}+\eta_{\max}}, (3.9)

respectively, then the corresponding optimal convergence factor is

ρ(αopt,ωopt)=μmax2μmin22+μmin2+μmax2.\rho(\mathcal{H}_{\alpha_{opt},\omega_{opt}})=\frac{\mu^{2}_{\max}-\mu^{2}_{\min}}{2+\mu^{2}_{\min}+\mu^{2}_{\max}}. (3.10)
Proof.

Based on (3.2) and (3.3), we choose the optimal parameter αopt\alpha_{opt} by addressing the following problem

min𝛼maxμisp(S~)|11+μi2α|.\underset{\alpha}{\min}\underset{\mu_{i}\in\textup{sp}(\widetilde{S})}{\max}\left|1-\frac{1+\mu^{2}_{i}}{\alpha}\right|.

Let

f(α)=11+μmin2αandg(α)=11+μmax2α,f{(\alpha)}=1-\frac{1+\mu^{2}_{\min}}{\alpha}~{}\textup{and}~{}g{(\alpha)}=1-\frac{1+\mu^{2}_{\max}}{\alpha},

then the optimal parameter αopt\alpha_{opt} is attained when f(α)=g(α)f{(\alpha)}=-g{(\alpha)}. By simple calculations, we have the former result of (3.9). Substituting the first equality of (3.9) into (3.3) can easily lead to (3.10). Since

h(μmin2,μmax2)=μmax2μmin22+μmin2+μmax2h(\mu^{2}_{\min},\mu^{2}_{\max})=\frac{\mu^{2}_{\max}-\mu^{2}_{\min}}{2+\mu^{2}_{\min}+\mu^{2}_{\max}}

is decreasing in \mu^{2}_{\min} and increasing in \mu^{2}_{\max}, we choose the parameter \omega that minimizes \mu^{2}_{\max}. Based on Theorem 1 of [11], the latter result of (3.9) is easy to obtain. ∎

Remark 3.

By the above theorem and Theorem 3 in [19], the SSTS and PSBTS iteration methods have the same optimal convergence factor. However, the PSBTS iteration method requires solving four sub-systems with coefficient matrix \widetilde{W}_{\omega} per step, while our method only requires solving two. Thus, the SSTS iteration method will be more practical and effective in certain situations.

Remark 4.

By (3.9), the optimal parameter \alpha_{opt} belongs to [1+\frac{1}{2}\mu^{2}_{\max},1+\mu^{2}_{\max}], which makes it convenient to locate \alpha_{opt} once \mu_{\max} is known.
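The formulas of Theorem 3.6 translate into a short computation. The sketch below assumes W is positive definite so that the generalized eigenproblem is well posed, and it reads the pair (W,T) of Lemma 3.3 in the orientation Tv=\eta Wv, which reproduces \mu=(\omega\eta-1)/(\omega+\eta); the function name and test matrices are ours.

```python
import numpy as np
from scipy.linalg import eigh

def ssts_optimal_parameters(W, T):
    """Sketch of (3.9)-(3.10): optimal alpha, omega and the resulting
    convergence factor rho, assuming W is symmetric positive definite."""
    # generalized eigenvalues eta with T v = eta W v (cf. Lemma 3.3)
    eta = eigh(T, W, eigvals_only=True)
    e_min, e_max = eta.min(), eta.max()
    omega = (1 - e_min * e_max
             + np.sqrt((1 + e_min**2) * (1 + e_max**2))) / (e_min + e_max)
    mu2 = ((omega * eta - 1) / (omega + eta)) ** 2  # mu_i^2 via Lemma 3.3
    alpha = (2 + mu2.min() + mu2.max()) / 2         # first part of (3.9)
    rho = (mu2.max() - mu2.min()) / (2 + mu2.min() + mu2.max())  # (3.10)
    return alpha, omega, rho
```

On a small test pair one can confirm that the spectral radius of the iteration matrix \mathcal{H}_{\alpha,\omega} assembled from these parameters equals the predicted \rho.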

In Section 2, the splitting matrix \mathcal{M}_{\alpha,\omega} was shown to serve as a preconditioner, so it is necessary to study its properties. Here, we analyze and describe the spectral properties of the preconditioned matrix \mathcal{M}_{\alpha,\omega}^{-1}\widetilde{\mathcal{A}}.

Corollary 3.7.

Let \widetilde{\mathcal{A}}\in\mathbb{R}^{2n\times 2n} be the nonsingular block two-by-two matrix defined in (2.1), with \widetilde{W}_{\omega}\in\mathbb{R}^{n\times n} being symmetric positive definite and \widetilde{T}_{\omega}\in\mathbb{R}^{n\times n} being symmetric. Also let \alpha,\omega be two positive constants and \mathcal{M}_{\alpha,\omega} be defined in (2.2). Then, the following results of the preconditioned matrix \Gamma_{\alpha,\omega}=\mathcal{M}_{\alpha,\omega}^{-1}\widetilde{\mathcal{A}} hold.

  • (i)

    Γα,ω\Gamma_{\alpha,\omega} has an eigenvalue 11 with multiplicity at least nn;

  • (ii)

    the remaining nonunit eigenvalues of Γα,ω\Gamma_{\alpha,\omega} satisfy the equation

    σ=1+ξα,\sigma=\frac{1+\xi}{\alpha},

    where \xi=\frac{v^{*}\widetilde{S}^{2}v}{v^{*}v}, with v a corresponding eigenvector and \widetilde{S}=\widetilde{W}_{\omega}^{-1}\widetilde{T}_{\omega}.

Proof.

According to (2.1), we have

\Gamma_{\alpha,\omega}=\mathcal{M}_{\alpha,\omega}^{-1}\widetilde{\mathcal{A}}=\left[\begin{array}[]{cc}\widetilde{W}_{\omega}&O\\ \widetilde{T}_{\omega}&\alpha\widetilde{W}_{\omega}\end{array}\right]^{-1}\left[\begin{array}[]{cc}\widetilde{W}_{\omega}&-\widetilde{T}_{\omega}\\ \widetilde{T}_{\omega}&\widetilde{W}_{\omega}\end{array}\right]=\left[\begin{array}[]{cc}I&-\widetilde{S}\\ O&\frac{1}{\alpha}I+\frac{1}{\alpha}\widetilde{S}^{2}\end{array}\right], (3.11)

where S~=W~ω1T~ω\widetilde{S}=\widetilde{W}_{\omega}^{-1}\widetilde{T}_{\omega}. It is clear that the matrix Γα,ω\Gamma_{\alpha,\omega} has an eigenvalue 11 with multiplicity nn. Additionally, the remaining nn eigenvalues satisfy the following eigenvalue problem

σv=1α(I+S~2)v,\sigma v=\frac{1}{\alpha}(I+\widetilde{S}^{2})v, (3.12)

Premultiplying both sides of (3.12) by \frac{v^{*}}{v^{*}v}, we further have

σ=1+ξα.\sigma=\frac{1+\xi}{\alpha}.

Lemma 3.2 tells us that α>0\alpha>0 and ξ0\xi\geq 0, then the eigenvalue σ\sigma of Γα,ω\Gamma_{\alpha,\omega} is positive. ∎
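Corollary 3.7 can be checked numerically on a small example: the snippet below assembles \Gamma_{\alpha,\omega}=\mathcal{M}_{\alpha,\omega}^{-1}\widetilde{\mathcal{A}} for illustrative matrices and parameters and inspects its spectrum, which should contain the eigenvalue 1 with multiplicity at least n and otherwise only positive values.

```python
import numpy as np

# Illustrative matrices and parameters (not from the paper's experiments).
W = np.array([[4., 1.], [1., 3.]])
T = np.array([[2., 0.], [0., 1.]])
alpha, omega = 1.2, 0.8

Wo, To = omega * W + T, omega * T - W
A_tilde = np.block([[Wo, -To], [To, Wo]])                 # coefficient matrix of (2.1)
M = np.block([[Wo, np.zeros((2, 2))], [To, alpha * Wo]])  # SSTS preconditioner
Gamma = np.linalg.solve(M, A_tilde)                       # M^{-1} * A_tilde
eigs = np.sort(np.linalg.eigvals(Gamma).real)
```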

4 Numerical experiments

In this section, we introduce two numerical examples to test and verify the feasibility and efficiency of the SSTS iteration method for solving linear system (1.1). Meanwhile, we report the iteration counts (denoted IT) and the elapsed CPU time in seconds (denoted CPU) of the MHSS, SBTS, PGSOR, PSBTS and SSTS iteration methods. The numerical experiments are performed in MATLAB [version 9.0.0.341360 (R2016a)] with machine precision 10^{-16}.

In our implementations, the initial guess is chosen to be the zero vector, and the iteration is terminated once the relative residual error satisfies

RES:=r(k)2r(0)2<106.\mathrm{RES}:=\frac{\left\|r^{(k)}\right\|_{2}}{\left\|r^{(0)}\right\|_{2}}<10^{-6}.

where r^{(0)}=b and r^{(k)}=b-\mathcal{A}z^{(k)}, with z^{(k)}=(x^{(k)^{T}},y^{(k)^{T}})^{T} being the current approximate solution.

Example 1.

[7, 16] Consider the complex symmetric linear system of the form

[(K+33τI)+i(K+3+3τI)]u=b,\left[\left(K+\frac{3-\sqrt{3}}{\tau}I\right)+i\left(K+\frac{3+\sqrt{3}}{\tau}I\right)\right]u=b, (4.1)

where \tau is the time step-size and K is the five-point centered difference approximation of the negative Laplacian L=-\Delta with homogeneous Dirichlet boundary conditions, on a uniform mesh in the unit square [0,1]\times[0,1]. Hence, K=I_{m}\otimes V_{m}+V_{m}\otimes I_{m} with V_{m}=h^{-2}\mathrm{tridiag}(-1,2,-1)\in\mathbb{R}^{m\times m}, where \otimes denotes the Kronecker product and h=1/(m+1) is the discretization mesh-size.

This complex symmetric linear system arises in the centered difference discretization of R_{22}-Padé approximations in the time integration of parabolic partial differential equations [15]. In this example, K is an n\times n block tridiagonal matrix with n=m^{2}. In our tests, we take \tau=h. Furthermore, we normalize the coefficient matrix and the right-hand side of (4.1) by multiplying both by h^{2}. We take

W=K+\frac{3-\sqrt{3}}{\tau}I~~~\mathrm{and}~~~T=K+\frac{3+\sqrt{3}}{\tau}I.

The right-hand vector bb is given with its jjth entry

bj=(1i)jτ(1+j)2,j=1,2,,n.b_{j}=\frac{(1-i)j}{\tau(1+j)^{2}},j=1,2,...,n.
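For reference, the test problem of Example 1 can be assembled in a few lines of sparse code. The function name and the SciPy-based construction are ours, but the formulas (K, \tau=h, the h^{2} scaling, and b) follow the description above.

```python
import numpy as np
import scipy.sparse as sp

def example1_matrices(m):
    """Assemble W, T and the complex right-hand side b of Example 1
    on an m-by-m grid (n = m^2), scaled by h^2 as in the text."""
    h = 1.0 / (m + 1)
    tau = h                                    # time step-size tau = h
    V = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m)) / h**2
    I = sp.identity(m)
    K = sp.kron(I, V) + sp.kron(V, I)          # 2-D negative Laplacian
    n = m * m
    In = sp.identity(n)
    W = h**2 * (K + (3 - np.sqrt(3)) / tau * In)
    T = h**2 * (K + (3 + np.sqrt(3)) / tau * In)
    j = np.arange(1, n + 1)
    b = h**2 * (1 - 1j) * j / (tau * (1 + j) ** 2)
    return W.tocsr(), T.tocsr(), b
```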
Example 2.

[7, 5] Consider the complex symmetric linear system of the form

[(-\theta^{2}M+K)+i(\theta C_{V}+C_{H})]u=b, (4.2)

where MM and KK are the inertia and the stiffness matrices, CVC_{V} and CHC_{H} are the viscous and hysteretic damping matrices, respectively. θ\theta is the driving circular frequency and KK is defined the same as in Example 1.

In this example, K is an n\times n block tridiagonal matrix with n=m^{2}. We choose C_{H}=\varsigma K with \varsigma a damping coefficient, M=I_{n}, and C_{V}=10I_{n}. Additionally, we set \theta=\pi and \varsigma=0.02, and the right-hand side vector is taken as b=(1+i)A\bm{1}, so that the exact solution of the linear system (4.2) is u=(1+i)\bm{1}. Similar to Example 1, the linear system is normalized by multiplying both sides by h^{2}.

First, the optimal iteration parameters of the MHSS, SBTS, PGSOR, PSBTS and SSTS iteration methods for Examples 1 and 2 are listed in Table 1. Except for the MHSS iteration method, the parameters of the tested methods are the theoretical optimal ones. The parameters of the SBTS, PGSOR and PSBTS iteration methods are chosen according to Theorem 3.5 in [18], Theorem 2.4 in [17], and Theorems 3 and 4 in [19], respectively. For the SSTS iteration method, we choose the theoretical optimal parameters by Theorem 3.6 and also provide the experimentally optimal ones.

Table 1: The parameters for the MHSS, SBTS, PGSOR, PSBTS and SSTS iteration methods
Example Method Grid
16×1616\times 16 32×3232\times 32 64×6464\times 64 128×128128\times 128 256×256256\times 256
No.1 MHSS αopt\alpha_{opt} 1.06 0.75 0.54 0.40 0.30
SBTS αopt\alpha_{opt} 0.532 0.525 0.520 0.518 0.517
PGSOR αopt/ωopt\alpha_{opt}/\omega_{opt} 0.990/0.657 0.988/0.624 0.986/0.602 0.984/0.590 0.983/0.583
PSBTS αopt/ωopt\alpha_{opt}/\omega_{opt} 0.881/0.657 0.864/0.624 0.854/0.602 0.849/0.590 0.844/0.583
SSTS αopt/ωopt\alpha_{opt}/\omega_{opt} 1.019/0.657 1.025/0.624 1.030/0.602 1.033/0.590 1.035/0.583
αexp/ωexp\alpha_{exp}/\omega_{exp} 1.04/0.601 1.04/0.602 1.045/0.605 1.05/0.61 1.05/0.61
No.2 MHSS αopt\alpha_{opt} 0.21 0.08 0.04 0.02 0.01
SBTS αopt\alpha_{opt} 11.986 11.898 11.875 11.868 11.863
PGSOR αopt/ωopt\alpha_{opt}/\omega_{opt} 0.898/1.308 0.896/1.324 0.896/1.328 0.895/1.330 0.895/1.330
PSBTS αopt/ωopt\alpha_{opt}/\omega_{opt} 0.689/1.308 0.688/1.324 0.687/1.328 0.687/1.330 0.687/1.330
SSTS αopt/ωopt\alpha_{opt}/\omega_{opt} 1.254/1.308 1.259/1.324 1.261/1.328 1.262/1.330 1.262/1.330
αexp/ωexp\alpha_{exp}/\omega_{exp} 1.34/1.38 1.38/1.32 1.38/1.33 1.40/1.33 1.41/1.38

Tables 2 and 3 show the numerical results, including iteration counts and CPU times, of the above five methods on the two examples. In our implementation, the MHSS iteration method is employed to solve the complex symmetric linear system (1.2), while the SBTS, PGSOR, PSBTS and SSTS iteration methods are employed to solve the block two-by-two linear system (1.1).

Table 2: Numerical results for Example 1
Method 16×1616\times 16 32×3232\times 32 64×6464\times 64 128×128128\times 128 256×256256\times 256
MHSS IT 40 54 73 98 133
CPU 0.0165 0.0727 0.4019 3.2004 23.0072
SBTS IT 24 32 39 45 48
CPU 0.0168 0.0806 0.3929 2.3942 13.1121
PGSOR IT 4 4 5 5 5
CPU 0.0035 0.0088 0.0381 0.1806 0.9131
PSBTS IT 4 4 4 4 4
CPU 0.0041 0.0132 0.0464 0.2445 1.2345
SSTSopt IT 4 5 5 5 5
CPU 0.0031 0.0099 0.0386 0.1952 0.9409
SSTSexp IT 4 4 4 4 4
CPU 0.0021 0.0082 0.0326 0.1752 0.8163
Table 3: Numerical results for Example 2
Method 16×1616\times 16 32×3232\times 32 64×6464\times 64 128×128128\times 128 256×256256\times 256
MHSS IT 34 38 50 81 139
CPU 0.0153 0.0261 0.1182 1.3595 12.7184
SBTS IT 78 77 77 77 77
CPU 0.0185 0.0686 0.2725 1.8316 9.9311
PGSOR IT 8 7 8 8 8
CPU 0.0016 0.0055 0.0231 0.1277 0.6957
PSBTS IT 8 9 9 9 9
CPU 0.0037 0.0115 0.0398 0.2481 1.4364
SSTSopt IT 9 9 10 10 10
CPU 0.0026 0.0069 0.0286 0.1665 0.9509
SSTSexp IT 8 8 7 7 6
CPU 0.0015 0.0059 0.0223 0.1259 0.6823

From Tables 2 and 3, the following conclusions can be drawn. First, the ITs of the SSTS iteration method remain almost unchanged as the problem size increases, for both the theoretical and the experimentally optimal iteration parameters; hence the SSTS iteration method is stable with respect to the problem size. Second, the PGSOR, PSBTS and SSTS iteration methods outperform the MHSS and SBTS iteration methods on the tested examples in both IT and CPU time. It is necessary to note that our method is slightly weaker than the PGSOR iteration method when both use their theoretical optimal parameters. In addition, the ITs of the SSTS and PSBTS iteration methods are not identical, which is inconsistent with our analysis; this may be caused by the selection of parameters and by computer rounding errors. Nevertheless, the SSTS iteration method needs less CPU time than the PSBTS method. With the experimentally optimal parameters, the SSTS iteration method surpasses the other four methods in both IT and CPU time. Hence, our method with optimal parameters can be used to solve (1.1) effectively.

Table 4: Numerical results for GMRES(10) with different preconditioners
Method 16×1616\times 16 32×3232\times 32 64×6464\times 64 128×128128\times 128 256×256256\times 256
GMRES(10) IT 5(4) 8(1) 12(6) 20(4) 35(3)
CPU 0.0108 0.0201 0.0864 0.3126 1.7025
MHSS-GMRES(10) IT 1(9) 2(1) 2(3) 2(5) 2(8)
CPU 0.0115 0.0348 0.1512 1.2079 4.5305
SBTS-GMRES(10) IT 1(8) 1(10) 2(2) 2(6) 2(7)
CPU 0.0099 0.0358 0.1937 1.2692 6.6211
PSBTS-GMRES(10) IT 1(3) 1(4) 1(4) 1(4) 1(4)
CPU 0.0123 0.0211 0.0652 0.4059 1.9511
PGSOR-GMRES(10) IT 1(4) 1(4) 1(4) 1(4) 1(4)
CPU 0.0065 0.0108 0.0369 0.2253 0.9987
SSTSopt-GMRES(10) IT 1(4) 1(4) 1(4) 1(4) 1(4)
CPU 0.0051 0.0109 0.0382 0.2202 0.9967
SSTSexp-GMRES(10) IT 1(4) 1(4) 1(4) 1(5) 1(5)
CPU 0.0050 0.0105 0.0372 0.2571 1.1897

Tables 2–4 show that the ITs of the SSTS iteration method remain almost unchanged, for both the theoretical and the experimentally optimal iteration parameters, as the mesh grid is refined, which means our method is stable. In addition, the SSTS variants outperform the MHSS, SBTS and PSBTS methods whether they serve as solvers or as preconditioners. Note that our method is slightly weaker than the PGSOR iteration method with the theoretical optimal parameters, but the experimentally tuned version is superior. Hence, our method with optimal parameters can be used to solve linear system (1.1) effectively.

5 Conclusion

We present a practical and effective single-step triangular splitting (SSTS) method for a class of block two-by-two linear systems (1.1) and investigate its convergence properties. Under suitable convergence conditions, the optimal iteration parameters and the corresponding convergence factor of this method are derived. For block two-by-two systems arising from complex symmetric linear systems, numerical experiments show that this method is more powerful than the MHSS, SBTS and PSBTS iteration methods and is competitive with the PGSOR iteration method. Moreover, when SSTS is used as a preconditioner for a Krylov subspace method such as GMRES(ℓ), it clearly improves the computing efficiency of the GMRES(ℓ) method.

References

  • Bai et al. [2013] Z.-Z. Bai, M. Benzi, F. Chen, Z.-Q. Wang, Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems, IMA J. Numer. Anal. 33 (2013) 343–369.
  • Arridge [1999] S. R. Arridge, Optical tomography in medical imaging, Inverse Probl. 15 (1999) 41–93.
  • Poirier [2000] B. Poirier, Efficient preconditioning scheme for block partitioned matrices with structured sparsity, Numer. Linear Algebra Appl. 7 (2000) 715–726.
  • Bertaccini [2004] D. Bertaccini, Efficient preconditioning for sequences of parametric complex symmetric linear systems, Electron. Trans. Numer. Anal. 18 (2004) 49–64.
  • Benzi and Bertaccini [2010] M. Benzi, D. Bertaccini, Block preconditioning of real-valued iterative algorithms for complex linear systems, IMA J. Numer. Anal. 28 (2010) 598–618.
  • Bai et al. [2003] Z.-Z. Bai, G. H. Golub, M. K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl. 24 (2003) 603–626.
  • Bai et al. [2010] Z.-Z. Bai, M. Benzi, F. Chen, Modified HSS iteration methods for a class of complex symmetric linear systems, Computing 87 (2010) 93–111.
  • Bai et al. [2011] Z.-Z. Bai, M. Benzi, F. Chen, On preconditioned MHSS iteration methods for complex symmetric linear systems, Numer. Algorithms 56 (2011) 297–317.
  • Dehghan et al. [2013] M. Dehghan, M. Dehghanimadiseh, M. Hajarian, A generalized preconditioned MHSS method for a class of complex symmetric linear systems, Math. Model. Anal. 18 (2013) 561–576.
  • Wu et al. [2017] Y.-J. Wu, X. Li, J.-Y. Yuan, A non-alternating preconditioned HSS iteration method for non-Hermitian positive definite linear systems, Comput. Appl. Math. 36 (2017) 367–381.
  • Hezari et al. [2016] D. Hezari, D. K. Salkuyeh, V. Edalatpour, A new iterative method for solving a class of complex symmetric system of linear equations, Numer. Algorithms 73 (2016) 1–29.
  • Zheng et al. [2017] Z. Zheng, F.-L. Huang, Y.-C. Peng, Double-step scale splitting iteration method for a class of complex symmetric linear systems, Appl. Math. Lett. 73 (2017).
  • Huang et al. [2019] Z.-G. Huang, L.-G. Wang, Z. Xu, J.-J. Cui, The generalized double steps scale-SOR iteration method for solving complex symmetric linear systems, J. Comput. Appl. Math. 346 (2019) 284–306.
  • Wang and Lu [2016] T. Wang, L.-Z. Lu, Alternating-directional PMHSS iteration method for a class of two-by-two block linear systems, Appl. Math. Lett. 58 (2016) 159–164.
  • Axelsson and Kucherov [2000] O. Axelsson, A. Kucherov, Real valued iterative methods for solving complex symmetric linear systems, Numer. Linear Algebra Appl. 7 (2000) 197–218.
  • Salkuyeh et al. [2014] D. K. Salkuyeh, D. Hezari, V. Edalatpour, Generalized SOR iterative method for a class of complex symmetric linear system of equations, Int. J. Comput. Math. 92 (2014).
  • Hezari et al. [2015] D. Hezari, V. Edalatpour, D. K. Salkuyeh, Preconditioned GSOR iterative method for a class of complex symmetric system of linear equations, Numer. Linear Algebra Appl. 22 (2015) 761–776.
  • Li et al. [2018] X.-A. Li, W.-H. Zhang, Y.-J. Wu, On symmetric block triangular splitting iteration method for a class of complex symmetric system of linear equations, Appl. Math. Lett. 79 (2018) 131–137.
  • Zhang et al. [2018] J. Zhang, Z. Wang, J. Zhao, Preconditioned symmetric block triangular splitting iteration method for a class of complex symmetric linear systems, Appl. Math. Lett. (2018) 95–102.