Single-step triangular splitting iteration method for a class of two-by-two block linear system
Abstract
For solving a class of block two-by-two real linear systems, a new single-step triangular splitting (SSTS) iteration method is proposed in this paper. The convergence properties of this method are then carefully investigated. In addition, we determine its optimal iteration parameters and give the corresponding optimal convergence factor. It is worth mentioning that the SSTS iteration method is robust and superior to the SBTS and PSBTS iteration methods under suitable conditions. Finally, some numerical experiments are carried out to validate the theoretical results and evaluate the new method.
keywords:
Two-by-two; Positive semidefinite; SSTS; Parameters; Preconditioner
MSC[2010]: 65F10, 65F50
1 Introduction
We consider the numerical solution of a class of block two-by-two linear systems of the form
(1.1) |
where the two coefficient blocks are symmetric positive semidefinite matrices whose null spaces intersect only in the zero vector, the unknowns are real vectors, and the right-hand sides are given real vectors. The solution of this linear system is unique [1], and (1.1) is a real equivalent formulation of the following complex symmetric system of linear equations
(1.2) |
where the complex coefficient matrix and right-hand side correspond to the blocks and vectors in (1.1). Systems of the form (1.1) or (1.2) frequently arise in scientific and engineering applications [2, 3, 4, 5].
Many effective iterative methods have been proposed in the literature for solving the linear system (1.1). For example, Bai et al. presented the HSS method [6] based on the Hermitian/skew-Hermitian (HS) splitting of the coefficient matrix, and later developed the MHSS method [7] to accelerate the convergence of the HSS method. The MHSS method is described algorithmically as follows.
Method 1 (The MHSS iteration method).
Given an initial guess, for k = 0, 1, 2, …, until the iteration sequence converges, compute
where the iteration parameter is a positive constant and I is the identity matrix.
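For concreteness, the following Python sketch implements the MHSS iteration for a complex system (W + iT)x = b with W, T symmetric positive semi-definite, following the two half-steps of [7]; the names W, T, alpha, tol and the default values are illustrative assumptions, not quantities fixed by this paper.

```python
import numpy as np

def mhss(W, T, b, alpha, x0=None, tol=1e-6, max_it=500):
    """MHSS iteration for (W + 1j*T) x = b, following the scheme of [7].

    Half-step 1: (alpha*I + W) x_half = (alpha*I - 1j*T) x      + b
    Half-step 2: (alpha*I + T) x_new  = (alpha*I + 1j*W) x_half - 1j*b
    alpha > 0; tol and max_it are illustrative defaults.
    """
    n = W.shape[0]
    I = np.eye(n)
    x = np.zeros(n, dtype=complex) if x0 is None else x0.astype(complex)
    A = W + 1j * T
    bnorm = np.linalg.norm(b)
    for k in range(max_it):
        x_half = np.linalg.solve(alpha * I + W, (alpha * I - 1j * T) @ x + b)
        x = np.linalg.solve(alpha * I + T, (alpha * I + 1j * W) @ x_half - 1j * b)
        if np.linalg.norm(b - A @ x) <= tol * bnorm:
            return x, k + 1
    return x, max_it
```

Note that both half-steps only require solving real symmetric positive definite systems, which is the main attraction of the MHSS approach.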
They also proved that, for any positive iteration parameter, the HSS and MHSS methods converge unconditionally to the unique solution of the linear system (1.1) when the relevant matrix block is symmetric positive semi-definite. Subsequently, variants of these two methods were generalized and discussed by many researchers; see [8, 9, 10] and the references therein. In 2016, an efficient scale-splitting (SCSP) iteration method [11] was constructed by multiplying both sides of (1.2) by a complex number, and several generalized versions of this method were later developed [12, 13]. These methods are effective for dealing with the linear system (1.2).
To solve (1.1), a block preconditioned MHSS (PMHSS) iteration method [1] and its alternating-direction version [14] were presented following the idea of MHSS-like methods. In recent years, some other effective techniques for solving the linear system (1.1) have also been proposed, such as C-to-R iteration methods [15, 5] and GSOR-like iteration methods [16, 17]. Very recently, Li et al. [18] established a symmetric block triangular splitting (SBTS) method for the linear system (1.1) based on upper and lower triangular splittings of the coefficient matrix. It can be expressed simply as follows.
Method 2 (The SBTS iteration method).
Given two initial vectors and a real relaxation factor, for k = 0, 1, 2, …, until the iteration sequence converges to the exact solution of (1.1), compute
where the iteration parameter is a positive constant and I is the identity matrix.
Numerical tests indicate that it is more powerful than the HSS and MHSS methods for solving (1.1). Analogous to the GSOR and PGSOR iteration methods, a preconditioned SBTS (PSBTS) iteration method was designed by Zhang et al. [19], and its performance is better than that of the SBTS method under some restrictions. The SBTS and PSBTS methods are similar to the iterative scheme of the SSOR method, with a symmetric structure in every iteration step. In view of the relation between the PGSOR and SSOR iteration methods, the SBTS method should have some room for improvement.
In this work, a single-step triangular splitting (SSTS) iteration method is developed for solving (1.1), which stems from a class of complex symmetric linear systems. We then investigate its convergence properties and provide an approach for choosing the optimal iteration parameters. Moreover, the new method and the PSBTS method have the same optimal convergence factor, but the former is faster than the latter.
The paper is structured as follows. In Section 2, we describe the construction of the SSTS iteration method for solving (1.1). In Section 3, the convergence properties of the SSTS iteration method are discussed. In Section 4, some numerical experiments are carried out to test the new method. Finally, some brief conclusions are drawn in Section 5.
2 The SSTS iteration method
In this section, we describe the construction of the SSTS iteration method for the linear system (1.1). Following the preconditioning technique of [17, 19], we first transform (1.1) by means of the following matrix
where the scaling parameter is a positive real constant. Multiplying both sides of (1.1) by this matrix gives
(2.1) |
For ease of discussion, we adopt this notation throughout the paper. In light of the PSBTS iteration method, we split the coefficient matrix of (2.1) into two matrices:
(2.2) |
where the splitting parameter is a positive real number. Then, we consider the following SSTS iteration scheme based on the above splitting.
(2.3) |
where
is the iteration matrix and .
Analogous to classical stationary iteration schemes, the SSTS iteration method for solving the linear system (1.1) is obtained from the splitting (2.2); it is described as follows.
Method 3 (The SSTS iteration method).
Given any two initial vectors and two real relaxation factors, use the following procedure to update the iteration sequence until it converges to the exact solution of (1.1).
Since the two blocks are symmetric positive semi-definite with trivially intersecting null spaces, the coefficient matrix of each sub-system is symmetric positive definite, so each step of the SSTS iteration can be carried out effectively in mostly real arithmetic, either exactly by a Cholesky factorization or inexactly by a conjugate gradient or multigrid scheme.
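As a minimal illustration of this point, the Python sketch below factors the symmetric positive definite sub-system matrix once and reuses the factorization in every SSTS step; taking the sub-system matrix as S = ωW + T (consistent with Lemma 3.1, but stated here as an assumption, since the paper's explicit formula is not reproduced above), and the names W, T, omega are illustrative.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_subsystem_solver(W, T, omega):
    """Factor the SPD sub-system matrix once and reuse it in every SSTS step.

    Illustrative assumption: the sub-system matrix is S = omega*W + T, which is
    symmetric positive definite when W, T are symmetric positive semi-definite,
    null(W) and null(T) intersect only in {0}, and omega > 0 (cf. Lemma 3.1).
    """
    S = sp.csc_matrix(omega * W + T)
    exact_solve = spla.factorized(S)      # sparse direct factorization, done once

    def inexact_solve(rhs):
        # inexact alternative: conjugate gradient sweeps on the same SPD matrix
        x, _info = spla.cg(S, rhs)
        return x

    return exact_solve, inexact_solve
```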
In light of the matrix splitting (2.2), the block lower triangular matrix, whose diagonal blocks are symmetric positive definite, can be used as a preconditioner for the linear system (2.1). At each application of the preconditioner within a preconditioned Krylov method, it is necessary to solve generalized residual equations of the form
(2.4) |
In the sequel, this matrix will be referred to as the SSTS preconditioner and is used to accelerate Krylov subspace methods such as GMRES. The system (2.4) can be solved by the following two steps:
(1) Solve the first block equation for the first component of the correction;
(2) Solve the second block equation for the second component, using the result of step (1).
Remark 1.
From the above procedure, two linear systems with the same coefficient matrix need to be solved at each application of the preconditioner. In actual computations this coefficient matrix is generally sparse, so one can use fast numerical algorithms to solve the linear systems in steps (1) and (2); a generic sketch of this two-step solve is given below.
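The following Python sketch illustrates the two-step (forward-substitution) solve with a block lower triangular preconditioner of the generic form [[S, 0], [B, S]]; the block names S and B, and the assumption that both diagonal blocks coincide (so one factorization serves both steps, as in Remark 1), are illustrative choices rather than the paper's exact matrices.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_ssts_preconditioner_apply(S, B):
    """Return a function applying P^{-1} for a block lower triangular P = [[S, 0], [B, S]].

    S : sparse SPD diagonal block (the same coefficient matrix in steps (1) and (2))
    B : sparse (2,1) off-diagonal block
    """
    n = S.shape[0]
    solve_S = spla.factorized(sp.csc_matrix(S))   # one factorization serves both steps

    def apply(r):
        r1, r2 = r[:n], r[n:]
        z1 = solve_S(r1)                # step (1): solve S z1 = r1
        z2 = solve_S(r2 - B @ z1)       # step (2): solve S z2 = r2 - B z1
        return np.concatenate([z1, z2])

    return apply
```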
3 Convergence discussion for the SSTS iteration method
In this section, we study the convergence properties of the SSTS iteration method. First, some useful lemmas are introduced to support the analysis.
Lemma 3.1.
Let matrices be symmetric positive semi-definite and satisfy . Then the matrices and with a real constant are symmetric positive definite and symmetric, respectively.
Lemma 3.2.
[16] Let matrices , be symmetric positive definite and symmetric, respectively. Then the eigenvalues of the matrix are all real.
Lemma 3.3.
[11] Let matrices be symmetric positive semi-definite and satisfy . If is an eigenvalue of with , then there is a generalized eigenvalue of matrix pair that satisfies and the spectral radius of holds
(3.1) |
here and thereafter and are the extreme generalized eigenvalues of matrix pair .
Based on the above lemmas, the following main conclusions about our new method can be obtained.
Theorem 3.4.
Let matrices be symmetric positive semi-definite and satisfy . Also let and . Assuming is an eigenvalue of , then with multiplicity and the remaining eigenvalues of satisfy the following equation:
(3.2) |
with being the eigenvalues of . Furthermore, the spectral radius of satisfies
(3.3) |
Here and thereinafter, and with representing the spectrum for a given matrix.
Proof.
By Lemma 3.2, it is not difficult to verify that has a spectral decomposition , where is an invertible matrix and is a diagonal matrix formed by the spectrum of . Since
is similar to , the two matrices have the same spectrum. Supposing that is an eigenvalue of , we have
(3.4) |
Here, represents the -by- identity matrix. Obviously, is an eigenvalue of with multiplicity and the remaining eigenvalues satisfy (3.2) with respect to . Moreover,
(3.5) |
is a decreasing function with respect to . According to the definition of spectral radius for a given matrix, we clearly have (3.3). ∎
In the following theorem, we derive the convergence domain of the SSTS iteration method.
Theorem 3.5.
Let matrices be symmetric positive semi-definite and satisfy , . Then the SSTS iteration method is convergent if and only if
(3.6) |
Proof.
Remark 2.
Next, we provide the optimal selections of the two iteration parameters, which minimize the spectral radius of the iteration matrix of the SSTS method. Meanwhile, we also give the corresponding optimal convergence factor of our method.
Theorem 3.6.
Proof.
Based on (3.2) and (3.3), we choose the optimal parameter by addressing the following problem
Let
then the optimal parameter is attained when . By simple calculations, we obtain the former result of (3.9). Substituting the first equality of (3.9) into (3.3) easily leads to (3.10). Since
is increasing and decreasing with respect to and , respectively, we choose a proper parameter that minimizes this quantity. Based on Theorem 1 of [11], the latter result of (3.9) is easy to obtain. ∎
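When closed-form optima such as those in Theorem 3.6 are not available, or one wants the experimentally optimal parameters reported later in Section 4, the spectral radius of the iteration matrix can be scanned numerically. The sketch below assumes only that the method is a stationary splitting iteration with splitting matrices M(α, ω) and N(α, ω) supplied by the user; the function build_splitting is a hypothetical placeholder, not part of the paper.

```python
import numpy as np

def spectral_radius(M, N):
    """Spectral radius of the iteration matrix M^{-1} N (dense, for small test problems)."""
    G = np.linalg.solve(M, N)
    return np.max(np.abs(np.linalg.eigvals(G)))

def scan_parameters(build_splitting, alphas, omegas):
    """Evaluate rho(M^{-1}N) over a parameter grid and return the best pair.

    build_splitting(alpha, omega) -> (M, N) is a user-supplied (hypothetical)
    function returning the splitting A = M - N of the coefficient matrix.
    """
    best = (np.inf, None, None)
    for a in alphas:
        for w in omegas:
            M, N = build_splitting(a, w)
            rho = spectral_radius(M, N)
            if rho < best[0]:
                best = (rho, a, w)
    return best  # (smallest spectral radius, alpha, omega)
```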
Remark 3.
According to the above theorem and Theorem 3 in [19], the SSTS and PSBTS iteration methods have the same optimal convergence factor. However, the PSBTS iteration method requires solving four sub-systems with the same coefficient matrix per iteration, whereas our method only needs to deal with two such sub-systems. Thus, the SSTS iteration method will be more practical and effective in certain situations.
Remark 4.
The optimal parameter given by (3.9) belongs to a fixed interval, so it is convenient to determine the range of this parameter once the required spectral quantities are known.
As noted in Section 2, the splitting matrix can serve as a preconditioner, so it is necessary to study its properties. Here, we analyze and describe the spectral properties of the preconditioned matrix.
Corollary 3.7.
Let be the nonsingular block two-by-two matrix defined in (2.1), with being symmetric positive definite and being symmetric. Also let be two positive constants and be defined in (2.2). Then, the following results of the preconditioned matrix hold.
- (i) has an eigenvalue with multiplicity at least ;
- (ii) the remaining nonunit eigenvalues of satisfy the equation
where with .
Proof.
According to (2.1), we have
(3.11) |
where . It is clear that the matrix has an eigenvalue with multiplicity . Additionally, the remaining eigenvalues satisfy the following eigenvalue problem
(3.12) |
Premultiplying both sides of (3.12), we further have
Lemma 3.2 tells us that and , so the eigenvalues of the preconditioned matrix are positive. ∎
4 Numerical experiments
In this section, two numerical examples are introduced to test and verify the feasibility and efficiency of the SSTS iteration method for solving the linear system (1.1). We compare the numerical results of the MHSS, SBTS, PGSOR, PSBTS and SSTS iteration methods, including iteration counts (denoted IT) and elapsed CPU time in seconds (denoted CPU). The numerical experiments are performed in MATLAB [version 9.0.0.341360 (R2016a)] with machine precision .
In our implementation, the initial guess is chosen to be the zero vector and the iteration is terminated once the relative residual error satisfies
where and with being the current approximate solution.
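A minimal sketch of this stopping test in Python, assuming a coefficient matrix A, right-hand side b, and a hypothetical tolerance tol (the paper's actual tolerance value is not reproduced above):

```python
import numpy as np

def relative_residual(A, b, u):
    """RES = ||b - A u|| / ||b|| for the current approximate solution u."""
    return np.linalg.norm(b - A @ u) / np.linalg.norm(b)

def run_stationary_iteration(step, A, b, u0, tol=1e-6, max_it=1000):
    """Iterate u <- step(u) from the initial guess until RES <= tol.

    `step` is one update of a stationary scheme (e.g. one SSTS sweep);
    tol and max_it are illustrative values, not taken from the paper.
    """
    u, it = u0.copy(), 0
    while relative_residual(A, b, u) > tol and it < max_it:
        u = step(u)
        it += 1
    return u, it
```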
Example 1.
[7, 16] Consider the complex symmetric linear system of the form
(4.1) |
where is the time step-size, and is the five-point centered difference approximation of the negative Laplacian operator with homogeneous Dirichlet boundary conditions on a uniform mesh in the unit square . Hence, with , where denotes the Kronecker product and is the discretization mesh-size.
This complex symmetric linear system arises in the centered difference discretization of -Padé approximations in the time integration of parabolic partial differential equations [15]. In this example, is an block diagonal matrix with . In our tests, we take . Furthermore, we normalize the coefficient matrix and the right-hand side of (4.1) by multiplying both by . We take
The right-hand side vector is given with its th entry
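The explicit formulas for this test problem are not reproduced above; the following Python sketch assembles the two real blocks using the standard setup of this example in [7, 16], namely K = I⊗Vm + Vm⊗I with Vm = h⁻²·tridiag(−1, 2, −1), W = K + (3 − √3)/τ·I and T = K + (3 + √3)/τ·I, which are stated here as assumptions.

```python
import numpy as np
import scipy.sparse as sp

def example1_blocks(m, tau):
    """Assemble W and T for the parabolic test problem, following [7, 16].

    Assumed formulas: K = I (x) Vm + Vm (x) I with Vm = h^{-2} tridiag(-1, 2, -1),
    h = 1/(m+1), W = K + (3 - sqrt(3))/tau * I, T = K + (3 + sqrt(3))/tau * I.
    """
    h = 1.0 / (m + 1)
    Vm = (1.0 / h**2) * sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
    I_m = sp.identity(m)
    K = sp.kron(I_m, Vm) + sp.kron(Vm, I_m)   # 2D negative Laplacian, five-point stencil
    n = m * m
    W = K + (3.0 - np.sqrt(3.0)) / tau * sp.identity(n)
    T = K + (3.0 + np.sqrt(3.0)) / tau * sp.identity(n)
    return W.tocsr(), T.tocsr()
```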
Example 2.
[7, 5] Consider the complex symmetric linear system of the form
(4.2) |
where and are the inertia and stiffness matrices, and and are the viscous and hysteretic damping matrices, respectively; is the driving circular frequency, and the stiffness matrix is defined in the same way as in Example 1.
In this example, is an block diagonal matrix with . We choose with being a damping coefficient, . Additionally, we set , , and the right-hand side vector is chosen such that the exact solution of the linear system (4.2) is prescribed. Similar to Example 1, the linear system is normalized by multiplying both sides by .
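The specific matrix and parameter choices are not reproduced above; the sketch below assembles the two real blocks using the common setup of this example in [7, 5] (M = I, C_V = 10 I, C_H = μK with μ = 0.02, ω = π, and K as in Example 1), all of which are assumptions for illustration only.

```python
import numpy as np
import scipy.sparse as sp

def example2_blocks(m, mu=0.02, omega=np.pi):
    """Assemble W = -omega^2*M + K and T = omega*C_V + C_H, following [7, 5].

    Assumed choices: M = I, C_V = 10*I, C_H = mu*K, with K the 2D negative
    Laplacian of Example 1; mu and omega are illustrative defaults.
    """
    h = 1.0 / (m + 1)
    Vm = (1.0 / h**2) * sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
    I_m = sp.identity(m)
    K = sp.kron(I_m, Vm) + sp.kron(Vm, I_m)
    n = m * m
    M = sp.identity(n)
    C_V = 10.0 * sp.identity(n)
    C_H = mu * K
    W = -omega**2 * M + K          # real part of the complex coefficient matrix
    T = omega * C_V + C_H          # imaginary part
    return W.tocsr(), T.tocsr()
```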
First, the optimal iteration parameters of the MHSS, SBTS, PGSOR, PSBTS and SSTS iteration methods for Examples 1 and 2 are listed in Table 1. Except for the MHSS iteration method, the parameters of the tested methods are the theoretical optimal ones. The parameters of the SBTS, PGSOR and PSBTS iteration methods are chosen according to Theorem 3.5 in [18], Theorem 2.4 in [17] and Theorems 3 and 4 in [19], respectively. For the SSTS iteration method, we choose the theoretical optimal parameters by Theorem 3.6 and also provide the experimentally optimal ones.
| Example | Method | Grid |  |  |  |  |
|---|---|---|---|---|---|---|
| No.1 | MHSS | 1.06 | 0.75 | 0.54 | 0.40 | 0.30 |
|  | SBTS | 0.532 | 0.525 | 0.520 | 0.518 | 0.517 |
|  | PGSOR | 0.990/0.657 | 0.988/0.624 | 0.986/0.602 | 0.984/0.590 | 0.983/0.583 |
|  | PSBTS | 0.881/0.657 | 0.864/0.624 | 0.854/0.602 | 0.849/0.590 | 0.844/0.583 |
|  | SSTSopt | 1.019/0.657 | 1.025/0.624 | 1.030/0.602 | 1.033/0.590 | 1.035/0.583 |
|  | SSTSexp | 1.04/0.601 | 1.04/0.602 | 1.045/0.605 | 1.05/0.61 | 1.05/0.61 |
| No.2 | MHSS | 0.21 | 0.08 | 0.04 | 0.02 | 0.01 |
|  | SBTS | 11.986 | 11.898 | 11.875 | 11.868 | 11.863 |
|  | PGSOR | 0.898/1.308 | 0.896/1.324 | 0.896/1.328 | 0.895/1.330 | 0.895/1.330 |
|  | PSBTS | 0.689/1.308 | 0.688/1.324 | 0.687/1.328 | 0.687/1.330 | 0.687/1.330 |
|  | SSTSopt | 1.254/1.308 | 1.259/1.324 | 1.261/1.328 | 1.262/1.330 | 1.262/1.330 |
|  | SSTSexp | 1.34/1.38 | 1.38/1.32 | 1.38/1.33 | 1.40/1.33 | 1.41/1.38 |
Tables 3 and 4 show the numerical results, including iteration counts and CPU times, of the above five methods for the two examples. In our implementation, the MHSS iteration method is applied to the complex symmetric linear system (1.2), while the SBTS, PGSOR, PSBTS and SSTS iteration methods are applied to the block two-by-two linear system (1.1).
| Method |  |  |  |  |  |  |
|---|---|---|---|---|---|---|
| MHSS | IT | 40 | 54 | 73 | 98 | 133 |
|  | CPU | 0.0165 | 0.0727 | 0.4019 | 3.2004 | 23.0072 |
| SBTS | IT | 24 | 32 | 39 | 45 | 48 |
|  | CPU | 0.0168 | 0.0806 | 0.3929 | 2.3942 | 13.1121 |
| PGSOR | IT | 4 | 4 | 5 | 5 | 5 |
|  | CPU | 0.0035 | 0.0088 | 0.0381 | 0.1806 | 0.9131 |
| PSBTS | IT | 4 | 4 | 4 | 4 | 4 |
|  | CPU | 0.0041 | 0.0132 | 0.0464 | 0.2445 | 1.2345 |
| SSTSopt | IT | 4 | 5 | 5 | 5 | 5 |
|  | CPU | 0.0031 | 0.0099 | 0.0386 | 0.1952 | 0.9409 |
| SSTSexp | IT | 4 | 4 | 4 | 4 | 4 |
|  | CPU | 0.0021 | 0.0082 | 0.0326 | 0.1752 | 0.8163 |
| Method |  |  |  |  |  |  |
|---|---|---|---|---|---|---|
| MHSS | IT | 34 | 38 | 50 | 81 | 139 |
|  | CPU | 0.0153 | 0.0261 | 0.1182 | 1.3595 | 12.7184 |
| SBTS | IT | 78 | 77 | 77 | 77 | 77 |
|  | CPU | 0.0185 | 0.0686 | 0.2725 | 1.8316 | 9.9311 |
| PGSOR | IT | 8 | 7 | 8 | 8 | 8 |
|  | CPU | 0.0016 | 0.0055 | 0.0231 | 0.1277 | 0.6957 |
| PSBTS | IT | 8 | 9 | 9 | 9 | 9 |
|  | CPU | 0.0037 | 0.0115 | 0.0398 | 0.2481 | 1.4364 |
| SSTSopt | IT | 9 | 9 | 10 | 10 | 10 |
|  | CPU | 0.0026 | 0.0069 | 0.0286 | 0.1665 | 0.9509 |
| SSTSexp | IT | 8 | 8 | 7 | 7 | 6 |
|  | CPU | 0.0015 | 0.0059 | 0.0223 | 0.1259 | 0.6823 |
From Tables 3 and 4, the following conclusions can be drawn. First, the iteration counts of the SSTS method remain almost unchanged for both the theoretical and the experimentally optimal parameters; hence, the SSTS iteration method is stable as the problem size increases. Second, the PGSOR, PSBTS and SSTS iteration methods outperform the MHSS and SBTS iteration methods on the tested examples in both iteration counts and CPU time. It is necessary to note that our method is slightly weaker than the PGSOR iteration method when the theoretical optimal parameters are used. Moreover, possibly because of the parameter selection and computer rounding errors, the iteration counts of the SSTS and PSBTS iteration methods are not identical, which is not fully consistent with our analysis. However, the SSTS iteration method needs less CPU time than the PSBTS method. With the experimentally optimal parameters, the SSTS iteration method exceeds the other four methods in both iteration counts and CPU time. Hence, our method with optimal parameters can be used to solve (1.1) effectively.
| Method |  |  |  |  |  |  |
|---|---|---|---|---|---|---|
| GMRES(10) | IT | 5(4) | 8(1) | 12(6) | 20(4) | 35(3) |
|  | CPU | 0.0108 | 0.0201 | 0.0864 | 0.3126 | 1.7025 |
| MHSS-GMRES(10) | IT | 1(9) | 2(1) | 2(3) | 2(5) | 2(8) |
|  | CPU | 0.0115 | 0.0348 | 0.1512 | 1.2079 | 4.5305 |
| SBTS-GMRES(10) | IT | 1(8) | 1(10) | 2(2) | 2(6) | 2(7) |
|  | CPU | 0.0099 | 0.0358 | 0.1937 | 1.2692 | 6.6211 |
| PSBTS-GMRES(10) | IT | 1(3) | 1(4) | 1(4) | 1(4) | 1(4) |
|  | CPU | 0.0123 | 0.0211 | 0.0652 | 0.4059 | 1.9511 |
| PGSOR-GMRES(10) | IT | 1(4) | 1(4) | 1(4) | 1(4) | 1(4) |
|  | CPU | 0.0065 | 0.0108 | 0.0369 | 0.2253 | 0.9987 |
| SSTSopt-GMRES(10) | IT | 1(4) | 1(4) | 1(4) | 1(4) | 1(4) |
|  | CPU | 0.0051 | 0.0109 | 0.0382 | 0.2202 | 0.9967 |
| SSTSexp-GMRES(10) | IT | 1(4) | 1(4) | 1(4) | 1(5) | 1(5) |
|  | CPU | 0.0050 | 0.0105 | 0.0372 | 0.2571 | 1.1897 |
Tables 2 – 5 show that the iteration counts of the SSTS iteration method remain almost unchanged, for both the theoretical and the experimentally optimal parameters, as the mesh is refined, which means our method is stable. In addition, the SSTS variants outperform the MHSS, SBTS and PSBTS methods whether these are used as solvers or as preconditioners. Note that our method is slightly weaker than the PGSOR iteration method with the theoretical optimal parameters, but the experimentally tuned variant is superior. Hence, our method with optimal parameters can be used to solve the linear system (1.1) effectively.
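For reference, the following Python sketch shows one way to use a splitting matrix of the block lower triangular form described in Section 2 as a preconditioner for restarted GMRES. SciPy's gmres is used purely for illustration (the experiments reported above were run in MATLAB), and apply_prec stands for the hypothetical two-step SSTS-preconditioner solve sketched earlier.

```python
import scipy.sparse.linalg as spla

def solve_with_preconditioned_gmres(A, b, apply_prec, restart=10):
    """Solve A u = b with preconditioned GMRES(restart).

    A          : sparse block two-by-two coefficient matrix (2n x 2n)
    apply_prec : callable r -> P^{-1} r, e.g. the two-step SSTS-preconditioner solve
    """
    n2 = A.shape[0]
    M = spla.LinearOperator((n2, n2), matvec=apply_prec)   # preconditioner as an operator
    u, info = spla.gmres(A, b, M=M, restart=restart)
    return u, info   # info == 0 indicates successful convergence
```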
5 Conclusion
We have presented a practical and effective single-step triangular splitting (SSTS) method for the class of block two-by-two linear systems (1.1) and investigated its convergence properties. Under suitable convergence conditions, the optimal iteration parameters and the corresponding convergence factor of this method are also derived. For block two-by-two systems arising from complex symmetric linear systems, numerical experiments show that the new method is more powerful than the MHSS, SBTS and PSBTS iteration methods and is competitive with the PGSOR iteration method. Moreover, when the SSTS splitting is used as a preconditioner for a Krylov subspace method such as restarted GMRES, it clearly improves the computing efficiency of the GMRES method.
References
- Bai et al. [2013] Z.-Z. Bai, M. Benzi, F. Chen, Z.-Q. Wang, Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems, IMA J. Numer. Anal. 33 (2013) 343–369.
- Arridge [1999] S. R. Arridge, Optical tomography in medical imaging, Inverse Probl. 15 (1999) 41–93.
- Poirier [2000] B. Poirier, Efficient preconditioning scheme for block partitioned matrices with structured sparsity, Numer. Linear Algebra Appl. 7 (2000) 715–726.
- Bertaccini [2004] D. Bertaccini, Efficient preconditioning for sequences of parametric complex symmetric linear systems, Electron. Trans. Numer. Anal. 18 (2004) 49–64.
- Benzi and Bertaccini [2010] M. Benzi, D. Bertaccini, Block preconditioning of real-valued iterative algorithms for complex linear systems, IMA J. Numer. Anal. 28 (2010) 598–618.
- Bai et al. [2003] Z.-Z. Bai, G. H. Golub, M. K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl. 24 (2003) 603–626.
- Bai et al. [2010] Z.-Z. Bai, M. Benzi, F. Chen, Modified HSS iteration methods for a class of complex symmetric linear systems, Computing 87 (2010) 93–111.
- Bai et al. [2011] Z.-Z. Bai, M. Benzi, F. Chen, On preconditioned MHSS iteration methods for complex symmetric linear systems, Numer. Algorithms 56 (2011) 297–317.
- Dehghan et al. [2013] M. Dehghan, M. Dehghanimadiseh, M. Hajarian, A generalized preconditioned MHSS method for a class of complex symmetric linear systems, Math. Model. Anal. 18 (2013) 561–576.
- Wu et al. [2017] Y.-J. Wu, X. Li, J.-Y. Yuan, A non-alternating preconditioned HSS iteration method for non-Hermitian positive definite linear systems, Comput. Appl. Math. 36 (2017) 367–381.
- Hezari et al. [2016] D. Hezari, D. K. Salkuyeh, V. Edalatpour, A new iterative method for solving a class of complex symmetric system of linear equations, Numer. Algorithms 73 (2016) 1–29.
- Zheng et al. [2017] Z. Zheng, F.-L. Huang, Y.-C. Peng, Double-step scale splitting iteration method for a class of complex symmetric linear systems, Appl. Math. Lett. 73 (2017).
- Huang et al. [2019] Z.-G. Huang, L.-G. Wang, Z. Xu, J.-J. Cui, The generalized double steps scale-SOR iteration method for solving complex symmetric linear systems, J. Comput. Appl. Math. 346 (2019) 284–306.
- Wang and Lu [2016] T. Wang, L.-Z. Lu, Alternating-directional PMHSS iteration method for a class of two-by-two block linear systems, Appl. Math. Lett. 58 (2016) 159–164.
- Axelsson and Kucherov [2000] O. Axelsson, A. Kucherov, Real valued iterative methods for solving complex symmetric linear systems, Numer. Linear Algebra Appl. 7 (2000) 197–218.
- Salkuyeh et al. [2014] D. K. Salkuyeh, D. Hezari, V. Edalatpour, Generalized SOR iterative method for a class of complex symmetric linear system of equations, Int. J. Comput. Math. 92 (2014).
- Hezari et al. [2015] D. Hezari, V. Edalatpour, D. K. Salkuyeh, Preconditioned GSOR iterative method for a class of complex symmetric system of linear equations, Numer. Linear Algebra Appl. 22 (2015) 761–776.
- Li et al. [2018] X.-A. Li, W.-H. Zhang, Y.-J. Wu, On symmetric block triangular splitting iteration method for a class of complex symmetric system of linear equations, Appl. Math. Lett. 79 (2018) 131–137.
- Zhang et al. [2018] J. Zhang, Z. Wang, J. Zhao, Preconditioned symmetric block triangular splitting iteration method for a class of complex symmetric linear systems, Appl. Math. Lett. (2018) 95–102.