
Zhong-bao Wang (corresponding author): 1. Department of Mathematics, Southwest Jiaotong University, Chengdu, Sichuan 611756, China; 2. National Engineering Laboratory of Integrated Transportation Big Data Application Technology, Chengdu, Sichuan 611756, China; 3. School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China. [email protected]

Zhen-yin Lei: 1. Department of Mathematics, Southwest Jiaotong University, Chengdu, Sichuan 611756, China; 2. National Engineering Laboratory of Integrated Transportation Big Data Application Technology, Chengdu, Sichuan 611756, China. [email protected]

Xin Long: 1. Department of Mathematics, Southwest Jiaotong University, Chengdu, Sichuan 611756, China; 2. National Engineering Laboratory of Integrated Transportation Big Data Application Technology, Chengdu, Sichuan 611756, China. [email protected]

Zhang-you Chen: 1. Department of Mathematics, Southwest Jiaotong University, Chengdu, Sichuan 611756, China; 2. National Engineering Laboratory of Integrated Transportation Big Data Application Technology, Chengdu, Sichuan 611756, China. [email protected]

Tseng Splitting Method with Double Inertial Steps for Solving Monotone Inclusion Problems

Zhong-bao Wang · Zhen-yin Lei · Xin Long · Zhang-you Chen
(Received: date / Accepted: date)

Abstract

In this paper, based on the double inertial extrapolation step strategy and relaxation techniques, we introduce a new Tseng splitting method with double inertial extrapolation steps and self-adaptive step sizes for solving monotone inclusion problems in real Hilbert spaces. Under mild and standard assumptions, we establish successively the weak convergence, nonasymptotic $O(\frac{1}{\sqrt{n}})$ convergence rate, strong convergence and linear convergence rate of the proposed algorithm. Finally, several numerical experiments are provided to illustrate the performance and theoretical outcomes of our algorithm.

Keywords Monotone inclusion problem; Tseng splitting method; Double inertial extrapolation steps; Strong and weak convergence; Linear convergence rate

1 Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. The monotone inclusion problem (MIP) is as follows:

$$\text{find}~x^{*}\in H~\text{such that}~0\in(A+B)x^{*}, \qquad (1.1)$$

where $A:H\to H$ is a single-valued mapping and $B:H\to 2^{H}$ is a multivalued mapping. The solution set of (1.1) is denoted by $\Omega:=(A+B)^{-1}(0)$.

The monotone inclusion problem has drawn much attention because it provides a broad unifying framework for variational inequalities, convex minimization problems, split feasibility problems and equilibrium problems, and has been applied to several real-world problems from machine learning, signal processing and image restoration, see AB ; BH ; CD ; CW ; DS ; RF ; TYCR .

One of the most famous methods for solving MIP (1.1) is the forward-backward splitting method, which was introduced by Passty PG and Lions et al. LP . This method generates an iterative sequence $\{x_n\}$ in the following way:

$$x_{n+1}=(I+\lambda_{n}B)^{-1}(I-\lambda_{n}A)x_{n}, \qquad (1.2)$$

where the mapping $A$ is $\frac{1}{L}$-cocoercive, $B$ is maximal monotone, $I$ is the identity mapping on $H$ and $\lambda_{n}>0$. The operator $(I-\lambda_{n}A)$ is called a forward operator and $(I+\lambda_{n}B)^{-1}$ is called a backward operator.
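To make the scheme concrete, here is a minimal numerical sketch of iteration (1.2), under illustrative assumptions not taken from this paper: $A=\nabla f$ for the smooth term $f(x)=\frac{1}{2}\|Mx-b\|^{2}$ (which is $\frac{1}{L}$-cocoercive with $L=\|M\|^{2}$), and $B=\partial(\tau\|\cdot\|_{1})$, whose resolvent is the well-known soft-thresholding operator.

```python
import numpy as np

# Minimal sketch of the forward-backward iteration (1.2) on a toy
# Lasso-type problem (illustrative assumptions, not from the paper):
# A = grad of 0.5*||Mx - b||^2, B = subdifferential of tau*||x||_1,
# whose resolvent (I + lam*B)^{-1} is soft-thresholding.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
tau = 0.1

A = lambda x: M.T @ (M @ x - b)                 # single-valued, cocoercive
L = np.linalg.norm(M, 2) ** 2                   # Lipschitz constant of A
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(10)
lam = 1.0 / L                                   # constant step size
for n in range(500):
    x = soft(x - lam * A(x), lam * tau)         # backward(forward(x))
print("fixed-point residual:",
      np.linalg.norm(x - soft(x - lam * A(x), lam * tau)))
```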

Tseng TP proposed a modified forward-backward splitting method (also known as Tseng splitting algorithm), whose iterative formula is as follows

$$\left\{\begin{array}{l} y_{n}=(I+\lambda_{n}B)^{-1}(I-\lambda_{n}A)x_{n},\\ x_{n+1}=y_{n}-\lambda_{n}(Ay_{n}-Ax_{n}), \end{array}\right. \qquad (1.3)$$

where $A$ is $L$-Lipschitz continuous and $\{\lambda_{n}\}\subset(0,1/L)$. However, the Lipschitz constant of an operator is often unknown or difficult to estimate in nonlinear problems. To overcome this drawback, Cholamjiak et al. CV introduced a relaxed forward-backward splitting method for solving MIP (1.1), which uses a simple step-size rule without prior knowledge of the Lipschitz constant of the operator, and proved the linear convergence rate of the proposed algorithm.

In recent years, the inertial method, introduced in AF , has been regarded as a procedure for speeding up the convergence of algorithms. Many researchers utilize inertial methods to design algorithms for solving monotone inclusion problems and variational inequalities, see, for example, AB ; CD ; CV ; CH ; CHG ; DQ ; LN ; PTK ; SY ; TD ; YI . To enhance numerical efficiency, Çopur et al. CHG first introduced double inertial extrapolation steps for solving quasi-variational inequalities in real Hilbert spaces. Combining relaxation techniques with inertial methods, Cholamjiak et al. CV modified the Tseng splitting method to solve MIP (1.1) in real Hilbert spaces. Very recently, incorporating double inertial extrapolation steps and relaxation techniques, Yao et al. YI presented a novel subgradient extragradient method to solve variational inequalities, and proved its strong convergence, weak convergence and linear convergence, respectively. However, the linear convergence in YI is obtained under a single inertial step rather than double inertial steps.

This paper is devoted to further modifying the Tseng splitting method for solving MIP (1.1) in real Hilbert spaces. We obtain successively the weak convergence, nonasymptotic $O(\frac{1}{\sqrt{n}})$ convergence rate, strong convergence and linear convergence rate of the proposed algorithm. Our results improve the corresponding results in AB ; CV ; CH ; VA ; YI as follows:

\bullet Combining the double inertial extrapolation step strategy and relaxation techniques, we propose a new Tseng splitting method, which includes the corresponding methods considered in AB ; CV ; YI as special cases. The two inertial factors in our algorithm are variable sequences, different from the constant inertial factors in AB ; CV ; CH ; YI . Especially, when our algorithm is applied to solving variational inequalities, some of its parameters can be chosen from larger intervals than those of YI . In addition, one of our inertial factors can be equal to 1, which is not allowed in the single inertial methods of CV ; CH , where the inertial factor must be strictly less than 1.

\bullet We prove the strong convergence, nonasymptotic $O(\frac{1}{\sqrt{n}})$ convergence rate and linear convergence rate of the proposed algorithm. Note that the strong convergence does not require the modulus of strong monotonicity and the Lipschitz constant to be known in advance. As far as we know, there are no convergence rate results in the literature for methods with double inertial extrapolation steps for solving MIP (1.1) in infinite-dimensional Hilbert spaces.

\bullet Our algorithm uses double inertial extrapolation steps to accelerate its convergence. The step sizes of our algorithm are updated by a simple calculation without knowing the Lipschitz constant of the underlying operator. Some numerical experiments show that our algorithm is more efficient than the corresponding algorithms in AB ; CV ; CH ; VA ; YI .

The structure of this article is as follows. In Section 2, we recall some essential definitions and results related to this paper. In Section 3, we present our algorithm and analyze its weak convergence. In Section 4, we establish the strong convergence and the linear convergence rate of our method. In Section 5, we present some numerical experiments to demonstrate the performance of our algorithm. We give some concluding remarks in Section 6.

2 Preliminaries

In this section, we first give some definitions and results that will be used in this paper. The weak convergence and strong convergence of sequences are denoted by $\rightharpoonup$ and $\to$, respectively.

Definition 1

The mapping $A:H\rightarrow H$ is called

• (i) pseudomonotone on $H$ if $\langle Ax,\ y-x\rangle\geq 0$ implies that $\langle Ay,\ y-x\rangle\geq 0$, $\forall x,y\in H$;

• (ii) monotone on $H$ if $\langle Ax-Ay,\ x-y\rangle\geq 0$, $\forall x,y\in H$;

• (iii) $\mu$-strongly monotone on $H$ if there exists a constant $\mu>0$ such that $\langle Ax-Ay,\ x-y\rangle\geq\mu\|x-y\|^{2}$, $\forall x,y\in H$;

• (iv) $L$-Lipschitz continuous on $H$ if there exists a constant $L>0$ satisfying $\|Ax-Ay\|\leq L\|x-y\|$, $\forall x,y\in H$;

• (v) $r$-strongly pseudomonotone on $H$ if there exists a constant $r>0$ such that $\langle Ay,\ x-y\rangle\geq 0$ implies that $\langle Ax,\ x-y\rangle\geq r\|x-y\|^{2}$, $\forall x,y\in H$.

Definition 2

The graph of $A$ is the set in $H\times H$ defined by

$$\mathrm{Graph}(A):=\{(x,u):x\in H,\ u\in Ax\}.$$

Let $C\subset H$ be a nonempty, closed and convex set. The normal cone $N_{C}(x)$ of $C$ at $x$ is represented by

$$N_{C}(x):=\begin{cases}\{z\in H:\langle z,\ y-x\rangle\leq 0,\ \forall y\in C\}, & \text{if } x\in C,\\ \emptyset, & \text{otherwise}.\end{cases}$$

The projection of $x\in H$ onto $C$, denoted by $P_{C}(x)$, is defined as

$$P_{C}(x):=\arg\min_{y\in C}\|x-y\|,$$

and it satisfies $\langle x-P_{C}(x),\ y-P_{C}(x)\rangle\leq 0$, $\forall y\in C$.
The sequence $\{u_n\}$ converges $Q$-linearly to $u$ if there is $q\in(0,1)$ such that $\|u_{n+1}-u\|\leq q\|u_{n}-u\|$ for all $n$ large enough.
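As a quick illustration of the projection characterization (under an assumption made only for this example: $C$ is the closed unit ball, for which $P_{C}(x)=x/\max\{1,\|x\|\}$), the inequality can be verified numerically:

```python
import numpy as np

# Sketch: C is the closed unit ball (illustrative assumption), so
# P_C(x) = x / max(1, ||x||).  We check the variational characterization
# <x - P_C(x), y - P_C(x)> <= 0 on random points y in C.
rng = np.random.default_rng(1)
proj = lambda x: x / max(1.0, np.linalg.norm(x))

x = 3.0 * rng.standard_normal(5)                     # a point outside C
px = proj(x)
for _ in range(1000):
    y = rng.standard_normal(5)
    y *= rng.uniform(0.0, 1.0) / np.linalg.norm(y)   # random point of C
    assert np.dot(x - px, y - px) <= 1e-12
print("characterization verified at P_C(x) =", px)
```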

Definition 3

The set-valued mapping $A:H\to 2^{H}$ is called

• (i) monotone on $H$ if for all $x,y\in H$, $u\in Ax$ and $v\in Ay$ imply that $\langle u-v,\ x-y\rangle\geq 0$;

• (ii) maximal monotone on $H$ if it is monotone and, for any $(x,u)\in H\times H$, $\langle u-v,\ x-y\rangle\geq 0$ for every $(y,v)\in\mathrm{Graph}(A)$ implies that $u\in Ax$;

• (iii) $\mu$-strongly monotone on $H$ if for all $x,y\in H$, $u\in Ax$ and $v\in Ay$ imply that $\langle u-v,\ x-y\rangle\geq\mu\|x-y\|^{2}$.
Lemma 1

(MP) Let $\{\varphi_n\}$, $\{\delta_n\}$ and $\{\alpha_n\}$ be sequences in $[0,+\infty)$ such that

$$\varphi_{n+1}\leq\varphi_{n}+\alpha_{n}(\varphi_{n}-\varphi_{n-1})+\delta_{n},\ \forall n\geq 1,\qquad \sum_{n=1}^{\infty}\delta_{n}<+\infty,$$

and there exists a real number $\alpha$ with $0\leq\alpha_{n}\leq\alpha<1$ for all $n\in\mathbb{N}$. Then the following hold:

• (i) $\sum_{n=1}^{+\infty}[\varphi_{n}-\varphi_{n-1}]_{+}<+\infty$, where $[t]_{+}:=\max\{t,0\}$;

• (ii) there exists $\varphi^{*}\in[0,+\infty)$ such that $\lim_{n\to\infty}\varphi_{n}=\varphi^{*}$.

Lemma 2

(OZ) Let $C$ be a nonempty subset of $H$ and $\{x_n\}$ be a sequence in $H$ such that the following two conditions hold:

• (i) for every $x\in C$, $\lim_{n\to\infty}\|x_{n}-x\|$ exists;

• (ii) every sequential weak cluster point of $\{x_n\}$ is in $C$.

Then $\{x_n\}$ converges weakly to a point in $C$.

Lemma 3

(TX) Let $A:H\to H$ be a Lipschitz continuous and monotone mapping and $B:H\to 2^{H}$ be a maximal monotone mapping. Then the mapping $A+B$ is a maximal monotone mapping.

Lemma 4

(LQ) Let $\{a_n\}$ and $\{b_n\}$ be sequences of nonnegative real numbers for which there exists $0\leq q<1$ such that

$$a_{n+1}\leq q a_{n}+b_{n}\quad\text{for every } n\in\mathbb{N}.$$

If $\lim_{n\to\infty}b_{n}=0$, then $\lim_{n\to\infty}a_{n}=0$.

Lemma 5

(XH) Let $\{\alpha_n\}$, $\{a_n\}$, $\{b_n\}$ and $\{c_n\}$ be sequences of nonnegative real numbers and suppose there exists $n_{0}\in\mathbb{N}$ such that

$$a_{n+1}\leq(1-\alpha_{n})a_{n}+\alpha_{n}b_{n}+c_{n},\quad\forall n\geq n_{0},$$

where $\{\alpha_n\}$, $\{b_n\}$ and $\{c_n\}$ satisfy the following conditions:

• (i) $\{\alpha_n\}\subset(0,1)$ and $\sum_{n=0}^{\infty}\alpha_{n}=\infty$;

• (ii) $\limsup_{n\to\infty}b_{n}\leq 0$;

• (iii) $c_{n}\geq 0$, $\forall n\geq 0$, and $\sum_{n=1}^{\infty}c_{n}<\infty$.

Then $\lim_{n\to\infty}a_{n}=0$.

Lemma 6

(GT) Let $B:H\to 2^{H}$ be a set-valued maximal monotone mapping and $A:H\to H$ be a mapping. Define $T_{\lambda}:=(I+\lambda B)^{-1}(I-\lambda A)$, $\lambda>0$. Then $\mathrm{Fix}(T_{\lambda})=(A+B)^{-1}(0)$.

Lemma 7

(BH, Corollary 2.14) For all $x,y\in H$ and $\alpha\in\mathbb{R}$, the following equality holds:

$$\|\alpha x+(1-\alpha)y\|^{2}=\alpha\|x\|^{2}+(1-\alpha)\|y\|^{2}-\alpha(1-\alpha)\|x-y\|^{2}.$$
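This identity is used repeatedly below; the following throwaway check (an illustration only, not part of the paper) confirms it numerically:

```python
import numpy as np

# Quick numerical check of the identity in Lemma 7 (illustration only).
rng = np.random.default_rng(3)
x, y, a = rng.standard_normal(4), rng.standard_normal(4), 0.3
lhs = np.linalg.norm(a * x + (1 - a) * y) ** 2
rhs = (a * np.linalg.norm(x) ** 2 + (1 - a) * np.linalg.norm(y) ** 2
       - a * (1 - a) * np.linalg.norm(x - y) ** 2)
assert abs(lhs - rhs) < 1e-10
print("Lemma 7 identity verified")
```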

3 Weak convergence

In this section, we introduce the Tseng splitting method with double inertial steps to solve MIP (1.1) and discuss the convergence and convergence rate of the new algorithm. We first state the following conditions.

• ($C_{1}$) The solution set of the inclusion problem (1.1) is nonempty, that is, $\Omega\neq\emptyset$.

• ($C_{2}$) The mapping $A:H\to H$ is $L$-Lipschitz continuous and monotone, and the set-valued mapping $B:H\to 2^{H}$ is maximal monotone.

• ($C_{3}$) The real sequences $\{\alpha_n\}$, $\{\beta_n\}$, $\{\theta_n\}$, $\{a_n\}$, $\{p_n\}$ and $\{\mu_n\}$ satisfy the following conditions:

  • (i) $0\leq\alpha_{n}\leq 1$;

  • (ii) $0\leq\beta_{n}\leq\beta_{n+1}\leq\beta<\frac{3+2\varepsilon-\sqrt{8\varepsilon+17}}{2\varepsilon}$, $\varepsilon\in(1,+\infty)$;

  • (iii) $0<\theta<\theta_{n}\leq\theta_{n+1}\leq\frac{1}{1+\varepsilon}$, $\varepsilon\in(1,+\infty)$;

  • (iv) $a_{n}=(1-\theta_{n})\beta_{n}+\theta_{n}\alpha_{n}$ is a non-decreasing sequence;

  • (v) $\sum_{n=1}^{\infty}p_{n}<\infty$ and $\lim_{n\to\infty}\mu_{n}=0$.

Remark 1

Note that if $\{\alpha_n\}$ is a non-decreasing sequence, $\tilde{\delta}\geq 0$ and $\beta_{n}=\tilde{\delta}\leq\alpha_{1}$, then the condition ($C_{3}$)(iv) holds naturally. In addition, if we choose $\alpha_{n}=\frac{1}{5}+\frac{1}{6+n}$, $\beta_{n}=\frac{1}{6}-\frac{1}{6+n}$ and $\theta_{n}=\frac{1}{4}-\frac{1}{6+n}$, then the condition ($C_{3}$)(iv) is also true; see the check below.
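For the second choice this can be confirmed directly: a short script (an illustration only) evaluates $a_{n}=(1-\theta_{n})\beta_{n}+\theta_{n}\alpha_{n}$ and checks monotonicity over a long range of indices.

```python
# Numerical check of condition (C3)(iv) for the sequences in Remark 1
# (illustration only): a_n = (1 - theta_n)*beta_n + theta_n*alpha_n
# should be non-decreasing.
alpha = lambda n: 1/5 + 1/(6 + n)
beta  = lambda n: 1/6 - 1/(6 + n)
theta = lambda n: 1/4 - 1/(6 + n)
a     = lambda n: (1 - theta(n)) * beta(n) + theta(n) * alpha(n)

assert all(a(n + 1) >= a(n) for n in range(1, 100000)), "(C3)(iv) fails"
print([round(a(n), 6) for n in range(1, 6)])   # first few values of a_n
```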

Algorithm 3.1

Choose $x_{0}, x_{1}\in H$, $\mu\in(0,1)$ and $\lambda_{1}>0$.

• Step 1. Compute

$$w_{n}=x_{n}+\alpha_{n}(x_{n}-x_{n-1}),\qquad z_{n}=x_{n}+\beta_{n}(x_{n}-x_{n-1}),\qquad y_{n}=(I+\lambda_{n}B)^{-1}(I-\lambda_{n}A)w_{n},$$

where

$$\lambda_{n+1}=\begin{cases}\min\Big\{\dfrac{(\mu_{n}+\mu)\|w_{n}-y_{n}\|}{\|Aw_{n}-Ay_{n}\|},\ \lambda_{n}+p_{n}\Big\}, & \text{if } Aw_{n}\neq Ay_{n},\\ \lambda_{n}+p_{n}, & \text{otherwise}.\end{cases}$$

If $w_{n}=y_{n}$, stop: $y_{n}$ is a solution of the problem (1.1). Otherwise, go to Step 2.

• Step 2. Compute

$$x_{n+1}=(1-\theta_{n})z_{n}+\theta_{n}(y_{n}-\lambda_{n}(Ay_{n}-Aw_{n})).$$

Let $n=n+1$ and return to Step 1.
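The following is a minimal runnable sketch of Algorithm 3.1, under illustrative assumptions not taken from the paper: $H=\mathbb{R}^{d}$, $A(x)=Gx+g$ with $G$ skew-symmetric (hence monotone and Lipschitz), and $B=N_{C}$ the normal cone of the closed unit ball, so that the resolvent $(I+\lambda_{n}B)^{-1}$ reduces to the projection $P_{C}$; the parameter sequences are those of Remark 1.

```python
import numpy as np

# Sketch of Algorithm 3.1 (illustrative assumptions): A(x) = Gx + g with
# G skew-symmetric, so A is monotone and Lipschitz; B = N_C for C the
# closed unit ball, so (I + lam*B)^{-1} is the projection P_C.
rng = np.random.default_rng(2)
d = 5
Q = rng.standard_normal((d, d))
G = Q - Q.T                                    # skew-symmetric => monotone
g = rng.standard_normal(d)
A = lambda x: G @ x + g
resolvent = lambda v: v / max(1.0, np.linalg.norm(v))   # P_C for unit ball

# parameter sequences from Remark 1; p_n summable, mu_n -> 0 as in (C3)(v)
alpha = lambda n: 1/5 + 1/(6 + n)
beta  = lambda n: 1/6 - 1/(6 + n)
theta = lambda n: 1/4 - 1/(6 + n)
p     = lambda n: 1.0 / n**2
mu_n  = lambda n: 1.0 / n
mu, lam = 0.5, 1.0                             # mu in (0,1), lambda_1 > 0

x_prev, x = np.zeros(d), rng.standard_normal(d)
for n in range(1, 2001):
    w = x + alpha(n) * (x - x_prev)            # first inertial extrapolation
    z = x + beta(n) * (x - x_prev)             # second inertial extrapolation
    y = resolvent(w - lam * A(w))              # backward(forward(w))
    if np.allclose(w, y):
        break                                   # w_n = y_n: y_n solves (1.1)
    # self-adaptive step-size update
    lam_next = lam + p(n)
    denom = np.linalg.norm(A(w) - A(y))
    if denom > 0:
        lam_next = min((mu_n(n) + mu) * np.linalg.norm(w - y) / denom, lam_next)
    # relaxed Tseng correction step
    x_prev, x = x, (1 - theta(n)) * z + theta(n) * (y - lam * (A(y) - A(w)))
    lam = lam_next
print("||w_n - y_n|| at exit:", np.linalg.norm(w - y))
```

The quantity $\|w_{n}-y_{n}\|$ printed at the end is a natural residual for this sketch (cf. Remark 3 below).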

Remark 2
• (i) In our Algorithm 3.1, we can take the inertial factor $\alpha_{n}=1$. This is not allowed in the corresponding algorithms of CV ; CH , where only a single inertial extrapolation step is considered and the inertial factor is bounded away from $1$.

• (ii) In order to obtain larger step sizes, similarly to the sequence $\{\theta_n\}$ in Algorithm 3.1 of W , the sequence $\{\mu_n\}$ is used to relax the parameter $\mu$. The sequence $\{\theta_n\}$ can be called a relaxation parameter sequence, which can often improve the numerical efficiency of algorithms, see CV . If $\mu_{n}=0$, then the step size $\lambda_{n}$ is the same as that of Algorithm 4.1 in CH . If $\mu_{n}=0$ and $p_{n}=0$, then the step size $\lambda_{n}$ is the same as that of Algorithm 3.1 of VA .

• (iii) Note that if $\beta_{n}=0$, then the condition ($C_{3}$)(iii) can be relaxed to $0<\theta<\theta_{n}\leq\theta_{n+1}\leq\frac{1}{1+\varepsilon}$, $\varepsilon\in[0,+\infty)$, which indicates that $\theta_{n}=\hat{\theta}$ can equal $1$. Setting $\alpha_{n}=\alpha$, $\theta_{n}=\hat{\theta}$, $\mu_{n}=0$ and $\beta_{n}=p_{n}=0$, our algorithm reduces to Algorithm 2 of CV . In addition, if $\alpha_{n}=\theta_{n}=\alpha$, $\mu_{n}=0$ and $\beta_{n}=p_{n}=0$, Algorithm 3.1 reduces to Algorithm 1 of AB .

Lemma 8

The sequence $\{\lambda_n\}$ generated by Algorithm 3.1 is bounded, with $\lambda_{n}\in[\min\{\frac{\mu}{L},\lambda_{1}\},\lambda_{1}+P]$. Furthermore, there exists $\lambda\in[\min\{\frac{\mu}{L},\lambda_{1}\},\lambda_{1}+P]$ such that $\lim_{n\to\infty}\lambda_{n}=\lambda$, where $P=\sum_{n=1}^{\infty}p_{n}$.

Proof

By the definition of $\lambda_{n}$, if $Aw_{n}\neq Ay_{n}$, we get

$$\lambda_{n}\geq\frac{(\mu+\mu_{n})\|w_{n}-y_{n}\|}{\|Aw_{n}-Ay_{n}\|}\geq\frac{\mu+\mu_{n}}{L}\geq\frac{\mu}{L}. \qquad (3.1)$$

Since $P=\sum_{n=1}^{\infty}p_{n}$, we have

$$\lambda_{n+1}\leq\lambda_{n}+p_{n}\leq\lambda_{1}+\sum_{n=1}^{\infty}p_{n}=\lambda_{1}+P. \qquad (3.2)$$

This implies that $\min\{\frac{\mu}{L},\lambda_{1}\}\leq\lambda_{n}\leq\lambda_{1}+P$. We have

$$\lambda_{n+1}-\lambda_{n}=[\lambda_{n+1}-\lambda_{n}]_{+}-[\lambda_{n+1}-\lambda_{n}]_{-}. \qquad (3.3)$$

Thus

$$\lambda_{n+1}-\lambda_{1}=\sum_{i=1}^{n}[\lambda_{i+1}-\lambda_{i}]_{+}-\sum_{i=1}^{n}[\lambda_{i+1}-\lambda_{i}]_{-}. \qquad (3.4)$$

Since $\{\lambda_n\}$ is bounded and $\sum_{n=1}^{\infty}[\lambda_{n+1}-\lambda_{n}]_{+}\leq\sum_{n=1}^{\infty}p_{n}<\infty$, the series $\sum_{n=1}^{\infty}[\lambda_{n+1}-\lambda_{n}]_{-}$ is convergent. Therefore, there exists $\lambda\in[\min\{\frac{\mu}{L},\lambda_{1}\},\lambda_{1}+P]$ such that $\lim_{n\to\infty}\lambda_{n}=\lambda$. The proof is completed.

Remark 3

If $w_{n}=y_{n}$, then $y_{n}=(I+\lambda_{n}B)^{-1}(I-\lambda_{n}A)y_{n}$. By Lemma 6, we know $y_{n}\in\Omega$.

Lemma 9

Suppose that the sequence $\{y_n\}$ is generated by Algorithm 3.1. Then the following assertions hold:

• (i) if the conditions ($C_{1}$) and ($C_{2}$) hold, then

$$\|y_{n}-\lambda_{n}(Ay_{n}-Aw_{n})-p\|^{2}\leq\|w_{n}-p\|^{2}-\Big(1-\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\Big)\|w_{n}-y_{n}\|^{2},\quad\forall p\in\Omega;$$

• (ii) if the conditions ($C_{1}$) and ($C_{2}$) hold and $A$ or $B$ is $r$-strongly monotone, then

$$\|y_{n}-\lambda_{n}(Ay_{n}-Aw_{n})-p\|^{2}\leq\|w_{n}-p\|^{2}-\Big(1-\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\Big)\|w_{n}-y_{n}\|^{2}-2r\lambda_{n}\|y_{n}-p\|^{2},\quad\forall p\in\Omega.$$
Proof

(i) According to the definition of $\lambda_{n}$, we have

$$\begin{split}\|y_{n}-\lambda_{n}(Ay_{n}-Aw_{n})-p\|^{2}&=\|y_{n}-p\|^{2}+\lambda_{n}^{2}\|Ay_{n}-Aw_{n}\|^{2}-2\lambda_{n}\langle Ay_{n}-Aw_{n},\ y_{n}-p\rangle\\&\leq\|y_{n}-w_{n}\|^{2}+\|w_{n}-p\|^{2}+2\langle y_{n}-w_{n},\ w_{n}-p\rangle\\&\quad+\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\|y_{n}-w_{n}\|^{2}-2\lambda_{n}\langle Ay_{n}-Aw_{n},\ y_{n}-p\rangle\\&=\Big(1+\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\Big)\|y_{n}-w_{n}\|^{2}+\|w_{n}-p\|^{2}+2\langle y_{n}-w_{n},\ w_{n}-y_{n}\rangle\\&\quad+2\langle y_{n}-w_{n},\ y_{n}-p\rangle-2\lambda_{n}\langle Ay_{n}-Aw_{n},\ y_{n}-p\rangle\\&=\Big(-1+\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\Big)\|y_{n}-w_{n}\|^{2}+\|w_{n}-p\|^{2}\\&\quad-2\langle w_{n}-y_{n}-\lambda_{n}(Aw_{n}-Ay_{n}),\ y_{n}-p\rangle.\end{split} \qquad (3.5)$$

Since $y_{n}=(I+\lambda_{n}B)^{-1}(I-\lambda_{n}A)w_{n}$ and $B$ is maximal monotone, we obtain

$$\frac{w_{n}-y_{n}-\lambda_{n}Aw_{n}}{\lambda_{n}}\in By_{n}. \qquad (3.6)$$

Thus

$$\frac{w_{n}-y_{n}-\lambda_{n}Aw_{n}}{\lambda_{n}}+Ay_{n}\in(A+B)y_{n}.$$

Since $A:H\to H$ is monotone and $B:H\to 2^{H}$ is a maximal monotone operator, Lemma 3 implies that $A+B$ is maximal monotone. Since $p\in\Omega$, we have $0\in(A+B)p$, and so

$$\Big\langle\frac{w_{n}-y_{n}-\lambda_{n}Aw_{n}}{\lambda_{n}}+Ay_{n}-0,\ y_{n}-p\Big\rangle\geq 0.$$

Hence

$$\langle w_{n}-y_{n}-\lambda_{n}(Aw_{n}-Ay_{n}),\ y_{n}-p\rangle\geq 0. \qquad (3.7)$$

Combining (3.5) with (3.7), we get

$$\|y_{n}-\lambda_{n}(Ay_{n}-Aw_{n})-p\|^{2}\leq\|w_{n}-p\|^{2}-\Big(1-\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\Big)\|w_{n}-y_{n}\|^{2}. \qquad (3.8)$$

The proof of part (i) is completed.

(ii) Case I: $B$ is $r$-strongly monotone.

Since $p\in\Omega$, we have $0\in(A+B)p$ and thus $-Ap\in Bp$. Since $B$ is $r$-strongly monotone, by (3.6), we have

$$\Big\langle\frac{w_{n}-y_{n}-\lambda_{n}Aw_{n}}{\lambda_{n}}+Ap,\ y_{n}-p\Big\rangle\geq r\|y_{n}-p\|^{2}. \qquad (3.9)$$

The monotonicity of $A$ implies that

$$\langle Ay_{n}-Ap,\ y_{n}-p\rangle\geq 0. \qquad (3.10)$$

Adding together (3.9) and (3.10), we have

$$\Big\langle\frac{w_{n}-y_{n}-\lambda_{n}Aw_{n}}{\lambda_{n}}+Ay_{n},\ y_{n}-p\Big\rangle\geq r\|y_{n}-p\|^{2},$$

which implies

$$\langle w_{n}-y_{n}-\lambda_{n}Aw_{n}+\lambda_{n}Ay_{n},\ y_{n}-p\rangle\geq r\lambda_{n}\|y_{n}-p\|^{2}. \qquad (3.11)$$

Utilizing (3.5) and (3.11), we get

$$\|y_{n}-\lambda_{n}(Ay_{n}-Aw_{n})-p\|^{2}\leq\|w_{n}-p\|^{2}-\Big(1-\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\Big)\|w_{n}-y_{n}\|^{2}-2r\lambda_{n}\|y_{n}-p\|^{2}. \qquad (3.12)$$

Case II: $A$ is $r$-strongly monotone.

The strong monotonicity of $A$ implies that

$$\langle Ay_{n}-Ap,\ y_{n}-p\rangle\geq r\|y_{n}-p\|^{2}. \qquad (3.13)$$

Since $p\in\Omega$, we have $0\in(A+B)p$ and thus $-Ap\in Bp$. Since $B$ is monotone, by (3.6), we have

$$\Big\langle\frac{w_{n}-y_{n}-\lambda_{n}Aw_{n}}{\lambda_{n}}+Ap,\ y_{n}-p\Big\rangle\geq 0. \qquad (3.14)$$

The rest of the proof is the same as in Case I. This completes the proof of Lemma 9.

Lemma 10

Assume that the conditions ($C_{1}$) and ($C_{2}$) hold, and $\{w_n\}$ and $\{y_n\}$ are sequences generated by Algorithm 3.1. If $\lim_{n\to\infty}\|w_{n}-y_{n}\|=0$ and $\{w_n\}$ converges weakly to some $z\in H$, then $z\in\Omega$.

Proof

Let $(u,v)\in\mathrm{Graph}(A+B)$; then $v-Au\in Bu$. Since $B$ is maximal monotone, by (3.6), we have

$$\Big\langle v-Au-\frac{w_{n}-y_{n}-\lambda_{n}Aw_{n}}{\lambda_{n}},\ u-y_{n}\Big\rangle\geq 0. \qquad (3.15)$$

This implies that

$$\begin{split}\langle v,\ u-y_{n}\rangle&\geq\frac{1}{\lambda_{n}}\langle\lambda_{n}Au+w_{n}-y_{n}-\lambda_{n}Aw_{n},\ u-y_{n}\rangle\\&=\frac{1}{\lambda_{n}}\langle u-y_{n},\ w_{n}-y_{n}\rangle+\langle u-y_{n},\ Au-Aw_{n}\rangle\\&=\frac{1}{\lambda_{n}}\langle u-y_{n},\ w_{n}-y_{n}\rangle+\langle u-y_{n},\ Au-Ay_{n}\rangle+\langle u-y_{n},\ Ay_{n}-Aw_{n}\rangle.\end{split} \qquad (3.16)$$

From the Lipschitz continuity and monotonicity of $A$, it follows that

$$\begin{split}\langle u-y_{n},\ v\rangle&\geq\frac{1}{\lambda_{n}}\langle u-y_{n},\ w_{n}-y_{n}\rangle+\langle u-y_{n},\ Ay_{n}-Aw_{n}\rangle\\&\geq\frac{1}{\lambda_{n}}\langle u-y_{n},\ w_{n}-y_{n}\rangle-\|u-y_{n}\|\|Ay_{n}-Aw_{n}\|\\&\geq\frac{1}{\lambda_{n}}\langle u-y_{n},\ w_{n}-y_{n}\rangle-L\|u-y_{n}\|\|y_{n}-w_{n}\|.\end{split} \qquad (3.17)$$

Since $\lim_{n\to\infty}\|w_{n}-y_{n}\|=0$, by Lemma 8 and (3.17), we have

$$\langle u-z,\ v\rangle=\lim_{n\to\infty}\langle u-w_{n},\ v\rangle=\lim_{n\to\infty}\langle u-y_{n},\ v\rangle\geq 0.$$

Since $(u,v)\in\mathrm{Graph}(A+B)$ is arbitrary and $A+B$ is maximal monotone, we know $0\in(A+B)z$, that is, $z\in\Omega$. The proof is completed.

Theorem 3.1

Assume that the conditions ($C_{1}$)-($C_{3}$) hold. Then the sequence $\{x_n\}$ generated by Algorithm 3.1 converges weakly to some element $p\in\Omega$.

Proof

By the definition of $x_{n+1}$ and Lemma 7, we have

$$\begin{split}\|x_{n+1}-p\|^{2}&=\|(1-\theta_{n})z_{n}+\theta_{n}(y_{n}-\lambda_{n}(Ay_{n}-Aw_{n}))-p\|^{2}\\&=(1-\theta_{n})\|z_{n}-p\|^{2}+\theta_{n}\|y_{n}-\lambda_{n}(Ay_{n}-Aw_{n})-p\|^{2}\\&\quad-(1-\theta_{n})\theta_{n}\|z_{n}-y_{n}+\lambda_{n}(Ay_{n}-Aw_{n})\|^{2}.\end{split} \qquad (3.18)$$

The definition of $x_{n+1}$ implies that $y_{n}-\lambda_{n}(Ay_{n}-Aw_{n})=\frac{x_{n+1}-(1-\theta_{n})z_{n}}{\theta_{n}}$. Thus

$$\|z_{n}-y_{n}+\lambda_{n}(Ay_{n}-Aw_{n})\|^{2}=\Big\|z_{n}-\frac{x_{n+1}-(1-\theta_{n})z_{n}}{\theta_{n}}\Big\|^{2}=\frac{1}{\theta_{n}^{2}}\|x_{n+1}-z_{n}\|^{2}. \qquad (3.19)$$

By Lemma 9, we have

$$\|y_{n}-\lambda_{n}(Ay_{n}-Aw_{n})-p\|^{2}\leq\|w_{n}-p\|^{2}-\Big(1-\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\Big)\|w_{n}-y_{n}\|^{2},\quad\forall p\in\Omega. \qquad (3.20)$$

Substituting (3.19) and (3.20) into (3.18), we get

$$\begin{split}\|x_{n+1}-p\|^{2}&\leq(1-\theta_{n})\|z_{n}-p\|^{2}+\theta_{n}\|w_{n}-p\|^{2}-\theta_{n}\Big(1-\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\Big)\|w_{n}-y_{n}\|^{2}\\&\quad-\frac{1-\theta_{n}}{\theta_{n}}\|x_{n+1}-z_{n}\|^{2}.\end{split} \qquad (3.21)$$

Lemma 8 and the fact $\lim_{n\to\infty}\mu_{n}=0$ imply that $\lim_{n\to\infty}\big(1-\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\big)=1-\mu^{2}>0$. Thus there exists a positive integer $N\geq 1$ such that

$$\theta_{n}\Big(1-\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\Big)\|w_{n}-y_{n}\|^{2}\geq 0,\quad\forall n\geq N.$$

Thanks to (3.21), we have

$$\|x_{n+1}-p\|^{2}\leq(1-\theta_{n})\|z_{n}-p\|^{2}+\theta_{n}\|w_{n}-p\|^{2}-\frac{1-\theta_{n}}{\theta_{n}}\|x_{n+1}-z_{n}\|^{2},\quad\forall n\geq N. \qquad (3.22)$$

By the definition of $z_{n}$ and Lemma 7, we obtain

$$\begin{split}\|z_{n}-p\|^{2}&=\|x_{n}+\beta_{n}(x_{n}-x_{n-1})-p\|^{2}\\&=\|(1+\beta_{n})(x_{n}-p)-\beta_{n}(x_{n-1}-p)\|^{2}\\&=(1+\beta_{n})\|x_{n}-p\|^{2}-\beta_{n}\|x_{n-1}-p\|^{2}+\beta_{n}(1+\beta_{n})\|x_{n}-x_{n-1}\|^{2}.\end{split} \qquad (3.23)$$

From the definition of $w_{n}$ and Lemma 7, it follows that

$$\|w_{n}-p\|^{2}=(1+\alpha_{n})\|x_{n}-p\|^{2}-\alpha_{n}\|x_{n-1}-p\|^{2}+\alpha_{n}(1+\alpha_{n})\|x_{n}-x_{n-1}\|^{2}. \qquad (3.24)$$

The definition of $z_{n}$ means that

$$\begin{split}\|x_{n+1}-z_{n}\|^{2}&=\|x_{n+1}-x_{n}-\beta_{n}(x_{n}-x_{n-1})\|^{2}\\&\geq\|x_{n+1}-x_{n}\|^{2}+\beta_{n}^{2}\|x_{n}-x_{n-1}\|^{2}-2\beta_{n}\|x_{n+1}-x_{n}\|\|x_{n}-x_{n-1}\|\\&\geq(1-\beta_{n})\|x_{n+1}-x_{n}\|^{2}+(\beta_{n}^{2}-\beta_{n})\|x_{n}-x_{n-1}\|^{2}.\end{split} \qquad (3.25)$$

Owing to (3.23), (3.24), (3.25) and (3.22), we know

$$\begin{split}\|x_{n+1}-p\|^{2}&\leq(1-\theta_{n})[(1+\beta_{n})\|x_{n}-p\|^{2}-\beta_{n}\|x_{n-1}-p\|^{2}+\beta_{n}(1+\beta_{n})\|x_{n}-x_{n-1}\|^{2}]\\&\quad+\theta_{n}[(1+\alpha_{n})\|x_{n}-p\|^{2}-\alpha_{n}\|x_{n-1}-p\|^{2}+\alpha_{n}(1+\alpha_{n})\|x_{n}-x_{n-1}\|^{2}]\\&\quad-\frac{1-\theta_{n}}{\theta_{n}}[(1-\beta_{n})\|x_{n+1}-x_{n}\|^{2}+(\beta_{n}^{2}-\beta_{n})\|x_{n}-x_{n-1}\|^{2}]\\&\leq[1+(1-\theta_{n})\beta_{n}+\theta_{n}\alpha_{n}]\|x_{n}-p\|^{2}-[(1-\theta_{n})\beta_{n}+\theta_{n}\alpha_{n}]\|x_{n-1}-p\|^{2}\\&\quad+\Big[(1-\theta_{n})(1+\beta_{n})\beta_{n}+\theta_{n}(1+\alpha_{n})\alpha_{n}-\frac{1-\theta_{n}}{\theta_{n}}(\beta_{n}^{2}-\beta_{n})\Big]\|x_{n}-x_{n-1}\|^{2}\\&\quad-\frac{1-\theta_{n}}{\theta_{n}}(1-\beta_{n})\|x_{n+1}-x_{n}\|^{2}\\&\leq(1+a_{n})\|x_{n}-p\|^{2}-a_{n}\|x_{n-1}-p\|^{2}+b_{n}\|x_{n}-x_{n-1}\|^{2}-c_{n}\|x_{n+1}-x_{n}\|^{2},\end{split} \qquad (3.26)$$

where $a_{n}=(1-\theta_{n})\beta_{n}+\theta_{n}\alpha_{n}$, $b_{n}:=(1-\theta_{n})(1+\beta_{n})\beta_{n}+\theta_{n}(1+\alpha_{n})\alpha_{n}-\frac{1-\theta_{n}}{\theta_{n}}(\beta_{n}^{2}-\beta_{n})$ and $c_{n}:=\frac{1-\theta_{n}}{\theta_{n}}(1-\beta_{n})$.
Define

$$\Gamma_{n}:=\|x_{n}-p\|^{2}-a_{n}\|x_{n-1}-p\|^{2}+b_{n}\|x_{n}-x_{n-1}\|^{2}.$$

Since $\{a_n\}$ is non-decreasing, by (3.26), we deduce that

$$\begin{split}\Gamma_{n+1}-\Gamma_{n}&=\|x_{n+1}-p\|^{2}-(1+a_{n+1})\|x_{n}-p\|^{2}+a_{n}\|x_{n-1}-p\|^{2}\\&\quad-b_{n}\|x_{n}-x_{n-1}\|^{2}+b_{n+1}\|x_{n+1}-x_{n}\|^{2}\\&\leq\|x_{n+1}-p\|^{2}-(1+a_{n})\|x_{n}-p\|^{2}+a_{n}\|x_{n-1}-p\|^{2}\\&\quad-b_{n}\|x_{n}-x_{n-1}\|^{2}+b_{n+1}\|x_{n+1}-x_{n}\|^{2}\\&\leq-(c_{n}-b_{n+1})\|x_{n+1}-x_{n}\|^{2}.\end{split} \qquad (3.27)$$

The condition ($C_{3}$) means that

$$\begin{split}c_{n}-b_{n+1}&=\frac{1-\theta_{n}}{\theta_{n}}(1-\beta_{n})-(1-\theta_{n+1})(1+\beta_{n+1})\beta_{n+1}-\theta_{n+1}(1+\alpha_{n+1})\alpha_{n+1}\\&\quad+\frac{1-\theta_{n+1}}{\theta_{n+1}}(\beta_{n+1}^{2}-\beta_{n+1})\\&\geq\frac{1-\theta_{n+1}}{\theta_{n+1}}(1-2\beta_{n+1}+\beta_{n+1}^{2})-(1-\theta_{n+1})(\beta_{n+1}+\beta_{n+1}^{2})-2\theta_{n+1}\\&\geq\varepsilon(1-2\beta+\beta^{2})-\frac{\varepsilon}{1+\varepsilon}(\beta+\beta^{2})-\frac{2}{1+\varepsilon}\\&=\frac{1}{1+\varepsilon}[\varepsilon^{2}\beta^{2}-(3\varepsilon+2\varepsilon^{2})\beta+(\varepsilon^{2}+\varepsilon-2)].\end{split} \qquad (3.28)$$

Combining (3.27) and (3.28), we infer that

$$\Gamma_{n+1}-\Gamma_{n}\leq-\delta\|x_{n+1}-x_{n}\|^{2}, \qquad (3.29)$$

where $\delta:=\frac{\varepsilon^{2}\beta^{2}-(3\varepsilon+2\varepsilon^{2})\beta+(\varepsilon^{2}+\varepsilon-2)}{1+\varepsilon}$. Since $\beta<\frac{3+2\varepsilon-\sqrt{8\varepsilon+17}}{2\varepsilon}$, $\varepsilon\in(1,+\infty)$, we conclude that $\delta>0$. From (3.29), it follows that

$$\Gamma_{n+1}-\Gamma_{n}\leq 0.$$

Thus the sequence $\{\Gamma_n\}$ is nonincreasing. The condition ($C_{3}$) implies that $b_{n}>0$. Thus

$$\Gamma_{n}=\|x_{n}-p\|^{2}-a_{n}\|x_{n-1}-p\|^{2}+b_{n}\|x_{n}-x_{n-1}\|^{2}\geq\|x_{n}-p\|^{2}-a_{n}\|x_{n-1}-p\|^{2}. \qquad (3.30)$$

This implies that

$$\begin{split}\|x_{n}-p\|^{2}&\leq a_{n}\|x_{n-1}-p\|^{2}+\Gamma_{n}\\&\leq a\|x_{n-1}-p\|^{2}+\Gamma_{1}\\&\ \ \vdots\\&\leq a^{n}\|x_{0}-p\|^{2}+\Gamma_{1}(1+a+\ldots+a^{n-1})\\&\leq a^{n}\|x_{0}-p\|^{2}+\frac{\Gamma_{1}}{1-a},\end{split} \qquad (3.31)$$

where $a:=\frac{5+2\varepsilon-\sqrt{8\varepsilon+17}}{2+2\varepsilon}<1$. By the definition of $\{\Gamma_n\}$, we have

$$\Gamma_{n+1}=\|x_{n+1}-p\|^{2}-a_{n+1}\|x_{n}-p\|^{2}+b_{n+1}\|x_{n+1}-x_{n}\|^{2}\geq-a_{n+1}\|x_{n}-p\|^{2}. \qquad (3.32)$$

According to (3.31) and (3.32), we infer that

$$-\Gamma_{n+1}\leq a_{n+1}\|x_{n}-p\|^{2}\leq a\|x_{n}-p\|^{2}\leq a^{n+1}\|x_{0}-p\|^{2}+\frac{a\Gamma_{1}}{1-a}. \qquad (3.33)$$

By (3.33) and (3.29), we get

$$\delta\sum_{k=1}^{n}\|x_{k+1}-x_{k}\|^{2}\leq\Gamma_{1}-\Gamma_{n+1}\leq a^{n+1}\|x_{0}-p\|^{2}+\frac{\Gamma_{1}}{1-a}\leq\|x_{0}-p\|^{2}+\frac{\Gamma_{1}}{1-a}. \qquad (3.34)$$

This implies

$$\sum_{k=1}^{\infty}\|x_{k+1}-x_{k}\|^{2}<\infty. \qquad (3.35)$$

Thus

$$\|x_{n+1}-x_{n}\|\to 0,\quad n\to\infty. \qquad (3.36)$$

The definition of $w_{n}$ implies that

$$\|x_{n+1}-w_{n}\|^{2}=\|x_{n+1}-x_{n}\|^{2}+\alpha_{n}^{2}\|x_{n}-x_{n-1}\|^{2}-2\alpha_{n}\langle x_{n+1}-x_{n},\ x_{n}-x_{n-1}\rangle. \qquad (3.37)$$

Since $\{\alpha_n\}$ is bounded, by (3.37), we obtain

$$\|x_{n+1}-w_{n}\|^{2}\to 0,\quad n\to\infty. \qquad (3.38)$$

On the other hand,

$$\|x_{n}-w_{n}\|\leq\|x_{n}-x_{n+1}\|+\|x_{n+1}-w_{n}\|. \qquad (3.39)$$

From (3.36) and (3.38), it follows that

$$\|x_{n}-w_{n}\|^{2}\to 0,\quad n\to\infty. \qquad (3.40)$$

By (3.26), we have

$$\begin{split}\|x_{n+1}-p\|^{2}&\leq(1+a_{n})\|x_{n}-p\|^{2}-a_{n}\|x_{n-1}-p\|^{2}+b_{n}\|x_{n}-x_{n-1}\|^{2}-c_{n}\|x_{n+1}-x_{n}\|^{2}\\&\leq(1+a_{n})\|x_{n}-p\|^{2}-a_{n}\|x_{n-1}-p\|^{2}+b_{n}\|x_{n}-x_{n-1}\|^{2}\\&=\|x_{n}-p\|^{2}+a_{n}(\|x_{n}-p\|^{2}-\|x_{n-1}-p\|^{2})+b_{n}\|x_{n}-x_{n-1}\|^{2}.\end{split} \qquad (3.41)$$

Since $0\leq a_{n}<a<1$ and $\{b_n\}$ is bounded, by Lemma 1 and (3.35), we know there exists $l\in[0,+\infty)$ such that

$$\lim_{n\to\infty}\|x_{n}-p\|^{2}=l. \qquad (3.42)$$

Then from (3.24), we have

$$\begin{split}\|w_{n}-p\|^{2}&=(1+\alpha_{n})\|x_{n}-p\|^{2}-\alpha_{n}\|x_{n-1}-p\|^{2}+\alpha_{n}(1+\alpha_{n})\|x_{n}-x_{n-1}\|^{2}\\&=\|x_{n}-p\|^{2}+\alpha_{n}(\|x_{n}-p\|^{2}-\|x_{n-1}-p\|^{2})+\alpha_{n}(1+\alpha_{n})\|x_{n}-x_{n-1}\|^{2}.\end{split} \qquad (3.43)$$

Since $\{\alpha_n\}$ is bounded, by (3.36) and (3.42), we obtain

$$\lim_{n\to\infty}\|w_{n}-p\|^{2}=l. \qquad (3.44)$$

Utilizing a similar discussion as in obtaining (3.44), we can get

$$\lim_{n\to\infty}\|z_{n}-p\|^{2}=l. \qquad (3.45)$$

Owing to (3.21), we have

$$\begin{split}\theta_{n}\Big(1-\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\Big)\|w_{n}-y_{n}\|^{2}&\leq(1-\theta_{n})\|z_{n}-p\|^{2}+\theta_{n}\|w_{n}-p\|^{2}\\&\quad-\frac{1-\theta_{n}}{\theta_{n}}\|x_{n+1}-z_{n}\|^{2}-\|x_{n+1}-p\|^{2}\\&\leq(1-\theta_{n})\|z_{n}-p\|^{2}+\theta_{n}\|w_{n}-p\|^{2}-\|x_{n+1}-p\|^{2}.\end{split} \qquad (3.46)$$

Since $0<\theta<\theta_{n}\leq\frac{1}{1+\varepsilon}$, $\varepsilon\in(1,+\infty)$, and $\lim_{n\to\infty}\big(1-\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\big)=1-\mu^{2}>0$, by (3.42), (3.44) and (3.45), we get $\lim_{n\to\infty}\|w_{n}-y_{n}\|=0$.

Since $\{x_n\}$ is bounded, we may assume that there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k}\rightharpoonup z\in H$. The fact $\lim_{n\to\infty}\|w_{n}-x_{n}\|=0$ implies that $w_{n_k}\rightharpoonup z\in H$. By Lemma 10, we know $z\in\Omega$. The two assumptions of Lemma 2 are thus verified, and Lemma 2 ensures that the sequence $\{x_n\}$ converges weakly to a point in $\Omega$. The proof is completed.

Let $C$ be a nonempty, closed and convex subset of $H$. If $B=N_{C}$, then MIP (1.1) reduces to the following variational inequality, denoted by VI$(A,C)$: find a point $x^{*}\in C$ such that

$$\langle A(x^{*}),\ y-x^{*}\rangle\geq 0,\quad\forall y\in C.$$

Denote the solution set of VI$(A,C)$ by $S$.
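The identification behind this reduction is standard (a known fact, not specific to this paper): for any $\lambda>0$,

$$u=(I+\lambda N_{C})^{-1}(x)\iff x-u\in\lambda N_{C}(u)\iff\langle x-u,\ y-u\rangle\leq 0,\ \forall y\in C\iff u=P_{C}(x),$$

where the last equivalence is the projection characterization recalled in Section 2. Hence the resolvent step of Algorithm 3.1 becomes the projection step $y_{n}=P_{C}(w_{n}-\lambda_{n}Aw_{n})$, as used in the proof of Proposition 1 below.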

Assumption 3.1
• (i) The solution set of VI$(A,C)$ is nonempty, that is, $S\neq\emptyset$.

• (ii) The mapping $A:H\to H$ is pseudomonotone and Lipschitz continuous, and $A$ satisfies the following condition: for any $\{x_n\}\subset H$ with $x_{n}\rightharpoonup w^{*}$, one has $\|Aw^{*}\|\leq\liminf_{n\to\infty}\|Ax_{n}\|$.

Proposition 1

Suppose that $B=N_{C}$ in Algorithm 3.1. Then the following statements hold:

• (i) if Assumption 3.1 holds, then

$$\|y_{n}-\lambda_{n}(Ay_{n}-Aw_{n})-p\|^{2}\leq\|w_{n}-p\|^{2}-\Big(1-\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\Big)\|w_{n}-y_{n}\|^{2},\quad\forall p\in S;$$

• (ii) if Assumption 3.1 holds and $A$ is $\mu$-strongly pseudomonotone, then

$$\|y_{n}-\lambda_{n}(Ay_{n}-Aw_{n})-p\|^{2}\leq\|w_{n}-p\|^{2}-\Big(1-\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\Big)\|w_{n}-y_{n}\|^{2}-2\mu\lambda_{n}\|y_{n}-p\|^{2},\quad\forall p\in S.$$
Proof

Since $B=N_{C}$, we know $y_{n}=(I+\lambda_{n}B)^{-1}(I-\lambda_{n}A)w_{n}=P_{C}(w_{n}-\lambda_{n}Aw_{n})$. Thus we get

$$\langle y_{n}-w_{n}+\lambda_{n}Aw_{n},\ y_{n}-p\rangle\leq 0. \qquad (3.47)$$

For any given $p\in S$, $\langle Ap,\ y_{n}-p\rangle\geq 0$. Since $A$ is pseudomonotone, we have

$$\langle Ay_{n},\ y_{n}-p\rangle\geq 0. \qquad (3.48)$$

From (3.47) and (3.48), it follows that

$$\langle w_{n}-y_{n}-\lambda_{n}Aw_{n}+\lambda_{n}Ay_{n},\ y_{n}-p\rangle\geq 0, \qquad (3.49)$$

which is just (3.7). The rest of the proof follows the same arguments as in Lemma 9. The proof of part (i) is completed.

(ii) For any given $p\in S$, $\langle Ap,\ y_{n}-p\rangle\geq 0$. The strong pseudomonotonicity of $A$ implies that

$$\langle Ay_{n},\ y_{n}-p\rangle\geq\mu\|y_{n}-p\|^{2}. \qquad (3.50)$$

According to (3.47) and (3.50), we have

$$\langle w_{n}-y_{n}-\lambda_{n}(Aw_{n}-Ay_{n}),\ y_{n}-p\rangle\geq\mu\lambda_{n}\|y_{n}-p\|^{2}. \qquad (3.51)$$

The rest of the proof is the same as in Lemma 9. This completes the proof of Proposition 1.

Proposition 2

Assume that $B=N_{C}$, Assumption 3.1 holds, and $\{w_n\}$ and $\{y_n\}$ are sequences generated by Algorithm 3.1. If $\lim_{n\to\infty}\|w_{n}-y_{n}\|=0$ and $\{w_n\}$ converges weakly to some $z\in H$, then $z\in S$.

Proof

The proof is the same as that of Lemma 3.7 of TYCR , so we omit it.

Corollary 1

If $B=N_{C}$ and the condition ($C_{3}$) and Assumption 3.1 hold, then the sequence $\{x_n\}$ generated by Algorithm 3.1 converges weakly to a point $p\in S$.

Proof

Replacing Lemmas 9 and 10 by Propositions 1 and 2, respectively, and using the same proof as in Theorem 3.1, we obtain the desired conclusion.

Remark 4

Compared with Theorem 4.2 of YI , the advantages of Corollary 1 are: (i) the sequence $\{\alpha_n\}$ need not be non-decreasing; (ii) the sequence $\{\beta_n\}$ need not be a constant; (iii) we require $0<\theta<\theta_{n}\leq\theta_{n+1}<\frac{1}{1+\varepsilon}$ with $\varepsilon\in(1,+\infty)$ rather than $\varepsilon\in(2,+\infty)$, which enlarges the interval from which $\theta_{n}$ can be taken. From the numerical experiments in Section 5, it can be seen that the larger the value of $\theta_{n}$, the better the algorithm performs.

Motivated by Theorem 5.1 of SILD , which may be the first nonasymptotic convergence rate result for inertial projection-type algorithms for solving variational inequalities with monotone mappings, we give the nonasymptotic $O(\frac{1}{\sqrt{n}})$ convergence rate with ``$\min_{N\leq i\leq n}$'' for Algorithm 3.1.

Theorem 3.2

Assume that the conditions ($C_{1}$)-($C_{3}$) hold, let the sequence $\{x_n\}$ be generated by Algorithm 3.1 and let $[t]_{+}:=\max\{t,0\}$. Then for any $p\in\Omega$, there exists a constant $M$ such that the following estimate holds:

$$\min_{N\leq i\leq n}\|w_{i}-y_{i}\|\leq\Bigg(\frac{\Big(\|x_{N}-p\|^{2}+\frac{a}{1-a}\big[\|x_{N}-p\|^{2}-\|x_{N-1}-p\|^{2}\big]_{+}+\frac{(2+\frac{1-\theta}{4\theta})M}{1-a}\Big)\frac{1}{\theta(1-\mu)}}{n-N+1}\Bigg)^{\frac{1}{2}}.$$
Proof

Lemma 8 and the fact $\lim_{n\to\infty}\mu_{n}=0$ imply that $\lim_{n\to\infty}\big(1-\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\big)=1-\mu^{2}>0$. Thus there exists a positive integer $N\geq 1$ such that

$$\theta_{n}\Big(1-\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\Big)\|w_{n}-y_{n}\|^{2}\geq\theta_{n}(1-\mu)\|w_{n}-y_{n}\|^{2},\quad\forall n\geq N.$$

Thanks to (3.21), we have, for all $n\geq N$,

$$\begin{split}\|x_{n+1}-p\|^{2}&\leq(1-\theta_{n})\|z_{n}-p\|^{2}+\theta_{n}\|w_{n}-p\|^{2}-\theta_{n}(1-\mu)\|w_{n}-y_{n}\|^{2}-\frac{1-\theta_{n}}{\theta_{n}}\|x_{n+1}-z_{n}\|^{2}\\&\leq(1-\theta_{n})\|z_{n}-p\|^{2}+\theta_{n}\|w_{n}-p\|^{2}-\theta(1-\mu)\|w_{n}-y_{n}\|^{2}-\frac{1-\theta_{n}}{\theta_{n}}\|x_{n+1}-z_{n}\|^{2}.\end{split} \qquad (3.52)$$

Owing to (3.23), (3.24), (3.25) and (3.52), we know, for all $n\geq N$,

$$\begin{split}\|x_{n+1}-p\|^{2}&\leq(1-\theta_{n})[(1+\beta_{n})\|x_{n}-p\|^{2}-\beta_{n}\|x_{n-1}-p\|^{2}+\beta_{n}(1+\beta_{n})\|x_{n}-x_{n-1}\|^{2}]\\&\quad+\theta_{n}[(1+\alpha_{n})\|x_{n}-p\|^{2}-\alpha_{n}\|x_{n-1}-p\|^{2}+\alpha_{n}(1+\alpha_{n})\|x_{n}-x_{n-1}\|^{2}]\\&\quad-\frac{1-\theta_{n}}{\theta_{n}}[(1-\beta_{n})\|x_{n+1}-x_{n}\|^{2}+(\beta_{n}^{2}-\beta_{n})\|x_{n}-x_{n-1}\|^{2}]-\theta(1-\mu)\|w_{n}-y_{n}\|^{2}\\&\leq[1+(1-\theta_{n})\beta_{n}+\theta_{n}\alpha_{n}]\|x_{n}-p\|^{2}-[(1-\theta_{n})\beta_{n}+\theta_{n}\alpha_{n}]\|x_{n-1}-p\|^{2}\\&\quad+\Big[(1-\theta_{n})(1+\beta_{n})\beta_{n}+\theta_{n}(1+\alpha_{n})\alpha_{n}-\frac{1-\theta_{n}}{\theta_{n}}(\beta_{n}^{2}-\beta_{n})\Big]\|x_{n}-x_{n-1}\|^{2}\\&\quad-\frac{1-\theta_{n}}{\theta_{n}}(1-\beta_{n})\|x_{n+1}-x_{n}\|^{2}-\theta(1-\mu)\|w_{n}-y_{n}\|^{2}\\&\leq(1+a_{n})\|x_{n}-p\|^{2}-a_{n}\|x_{n-1}-p\|^{2}+b_{n}\|x_{n}-x_{n-1}\|^{2}-c_{n}\|x_{n+1}-x_{n}\|^{2}-\theta(1-\mu)\|w_{n}-y_{n}\|^{2}\\&\leq(1+a_{n})\|x_{n}-p\|^{2}-a_{n}\|x_{n-1}-p\|^{2}+b_{n}\|x_{n}-x_{n-1}\|^{2}-\theta(1-\mu)\|w_{n}-y_{n}\|^{2},\end{split} \qquad (3.53)$$

where $\{a_n\}$, $\{b_n\}$ and $\{c_n\}$ have the same definitions as in (3.26).

This implies that, for all $n\geq N$,

$$\theta(1-\mu)\|w_{n}-y_{n}\|^{2}\leq\|x_{n}-p\|^{2}-\|x_{n+1}-p\|^{2}+a_{n}(\|x_{n}-p\|^{2}-\|x_{n-1}-p\|^{2})+b_{n}\|x_{n}-x_{n-1}\|^{2}. \qquad (3.54)$$

Let $\sigma_{n}=\|x_{n}-p\|^{2}$, $K_{n}=\sigma_{n}-\sigma_{n-1}$ and $\tau_{n}=b_{n}\|x_{n}-x_{n-1}\|^{2}$. We get, for all $n\geq N$,

$$\begin{split}\theta(1-\mu)\|w_{n}-y_{n}\|^{2}&\leq\sigma_{n}-\sigma_{n+1}+a_{n}K_{n}+\tau_{n}\\&\leq\sigma_{n}-\sigma_{n+1}+a_{n}[K_{n}]_{+}+\tau_{n}\\&\leq\sigma_{n}-\sigma_{n+1}+a[K_{n}]_{+}+\tau_{n},\end{split} \qquad (3.55)$$

where $a$ has been defined in (3.31). In view of (3.35), we have $\sum_{k=1}^{\infty}\|x_{k+1}-x_{k}\|^{2}<\infty$. Thus, there exists a positive constant $M$ such that

$$\sum_{k=1}^{\infty}\|x_{k+1}-x_{k}\|^{2}\leq M.$$

Therefore,

$$\begin{split}\sum_{n=1}^{\infty}\tau_{n}&=\sum_{n=1}^{\infty}b_{n}\|x_{n}-x_{n-1}\|^{2}\\&=\sum_{n=1}^{\infty}\Big[(1-\theta_{n})(1+\beta_{n})\beta_{n}+\theta_{n}(1+\alpha_{n})\alpha_{n}+\frac{1-\theta_{n}}{\theta_{n}}(\beta_{n}-\beta_{n}^{2})\Big]\|x_{n}-x_{n-1}\|^{2}\\&\leq\sum_{n=1}^{\infty}\Big(2+\frac{1-\theta}{4\theta}\Big)\|x_{n}-x_{n-1}\|^{2}=\Big(2+\frac{1-\theta}{4\theta}\Big)\sum_{n=1}^{\infty}\|x_{n}-x_{n-1}\|^{2}\\&\leq\Big(2+\frac{1-\theta}{4\theta}\Big)M=:C_{1}.\end{split} \qquad (3.56)$$

From (3.53), it follows that, for all $n\geq N$,

$$K_{n+1}\leq a_{n}K_{n}+\tau_{n}\leq a_{n}[K_{n}]_{+}+\tau_{n}.$$

Thus,

$$\begin{split}[K_{n+1}]_{+}&\leq a[K_{n}]_{+}+\tau_{n}\\&\leq a^{n-N+1}[K_{N}]_{+}+\sum_{j=1}^{n-N+1}a^{j-1}\tau_{n+1-j}.\end{split} \qquad (3.57)$$

Combining (3.56) and (3.57), we obtain

$$\begin{split}\sum_{n=N}^{\infty}[K_{n+1}]_{+}&\leq\frac{a}{1-a}[K_{N}]_{+}+\frac{1}{1-a}\sum_{n=N}^{\infty}\tau_{n}\\&\leq\frac{a}{1-a}[K_{N}]_{+}+\frac{C_{1}}{1-a}.\end{split} \qquad (3.58)$$

From (3.55), it follows that

$$\begin{split}\theta(1-\mu)\sum_{i=N}^{n}\|w_{i}-y_{i}\|^{2}&\leq\sigma_{N}-\sigma_{n+1}+a\sum_{i=N}^{n}[K_{i}]_{+}+\sum_{i=N}^{n}\tau_{i}\\&\leq\sigma_{N}+a\Big([K_{N}]_{+}+\sum_{i=N}^{n}[K_{i+1}]_{+}\Big)+\sum_{i=N}^{n}\tau_{i}\\&\leq\sigma_{N}+a[K_{N}]_{+}+\frac{a^{2}}{1-a}[K_{N}]_{+}+\frac{aC_{1}}{1-a}+C_{1}\\&=\sigma_{N}+\frac{a}{1-a}[K_{N}]_{+}+\frac{C_{1}}{1-a}\\&=\sigma_{N}+\frac{a}{1-a}[K_{N}]_{+}+\frac{(2+\frac{1-\theta}{4\theta})M}{1-a}.\end{split} \qquad (3.59)$$

This implies that

$$\sum_{i=N}^{n}\|w_{i}-y_{i}\|^{2}\leq\Big(\|x_{N}-p\|^{2}+\frac{a}{1-a}\big[\|x_{N}-p\|^{2}-\|x_{N-1}-p\|^{2}\big]_{+}+\frac{(2+\frac{1-\theta}{4\theta})M}{1-a}\Big)\frac{1}{\theta(1-\mu)}, \qquad (3.60)$$

and thus

$$\min_{N\leq i\leq n}\|w_{i}-y_{i}\|^{2}\leq\frac{\Big(\|x_{N}-p\|^{2}+\frac{a}{1-a}\big[\|x_{N}-p\|^{2}-\|x_{N-1}-p\|^{2}\big]_{+}+\frac{(2+\frac{1-\theta}{4\theta})M}{1-a}\Big)\frac{1}{\theta(1-\mu)}}{n-N+1}. \qquad (3.61)$$

Since $\big[\min_{N\leq i\leq n}\|w_{i}-y_{i}\|^{2}\big]^{\frac{1}{2}}=\min_{N\leq i\leq n}\|w_{i}-y_{i}\|$, we get

$$\min_{N\leq i\leq n}\|w_{i}-y_{i}\|\leq\Bigg(\frac{\Big(\|x_{N}-p\|^{2}+\frac{a}{1-a}\big[\|x_{N}-p\|^{2}-\|x_{N-1}-p\|^{2}\big]_{+}+\frac{(2+\frac{1-\theta}{4\theta})M}{1-a}\Big)\frac{1}{\theta(1-\mu)}}{n-N+1}\Bigg)^{\frac{1}{2}}. \qquad (3.62)$$

This completes the proof.

Remark 5

By Lemma 6, we know that $y_{n}=w_{n}$ implies that $y_{n}$ is a solution of MIP (1.1). This means that the error estimate given in Theorem 3.2 can be regarded as a convergence rate for Algorithm 3.1.

4 Strong convergence

In this section, we analyse the strong convergence and linear convergence of Algorithm 3.1. First, we give the following assumption.

Assumption 4.1

Either the mapping $A:H\to H$ is $L$-Lipschitz continuous and $r$-strongly monotone and the set-valued mapping $B:H\to 2^{H}$ is maximal monotone, or the mapping $A:H\to H$ is $L$-Lipschitz continuous and monotone and the set-valued mapping $B:H\to 2^{H}$ is $r$-strongly monotone.

Theorem 4.1

Assume that the conditions ($C_{1}$), ($C_{3}$) and Assumption 4.1 hold. Let $\{x_n\}$ be the sequence generated by Algorithm 3.1. Then $\{x_n\}$ converges strongly to some solution $p\in\Omega$.

Proof

By (3.18), (3.19) and Lemma 9(ii), we have

$$\begin{split}\|x_{n+1}-p\|^{2}&\leq(1-\theta_{n})\|z_{n}-p\|^{2}+\theta_{n}\|w_{n}-p\|^{2}-\theta_{n}\Big(1-\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\Big)\|w_{n}-y_{n}\|^{2}\\&\quad-2\theta_{n}r\lambda_{n}\|y_{n}-p\|^{2}-\frac{1-\theta_{n}}{\theta_{n}}\|x_{n+1}-z_{n}\|^{2}\\&\leq(1-\theta_{n})\|z_{n}-p\|^{2}+\theta_{n}\|w_{n}-p\|^{2}-\theta_{n}\Big(1-\frac{(\mu+\mu_{n})^{2}\lambda_{n}^{2}}{\lambda_{n+1}^{2}}\Big)\|w_{n}-y_{n}\|^{2}\\&\quad-2\theta_{n}r\lambda_{n}\|y_{n}-p\|^{2}.\end{split} \qquad (4.1)$$

Lemma 8 and the fact that lim_{n→∞} μ_n = 0 imply that lim_{n→∞}(1 − (μ+μ_n)²λ_n²/λ_{n+1}²) = 1 − μ² > 0. Thus there exists a positive integer N ≥ 1 such that

θn(1(μ+μn)2λn2λn+12)wnyn20,nN.{\theta_{n}}(1-\frac{{{(\mu+{\mu_{n}})^{2}}{\lambda_{n}}^{2}}}{{{\lambda_{n+1}}^{2}}})\left\|{{w_{n}}-{y_{n}}}\right\|^{2}\geq 0,~{}\forall~{}n\geq N.

It follows from (4.1) that

xn+1p2(1θn)znp2+θnwnp22θnrλnynp2(1θn)znp2+θnwnp22θrλnynp2,nN.\begin{array}[]{c}\begin{split}{\left\|{{x_{n+1}}-p}\right\|^{2}}&\leq(1-{\theta_{n}}){\left\|{{z_{n}}-p}\right\|^{2}}+{\theta_{n}}{\left\|{{w_{n}}-p}\right\|^{2}}-2{\theta_{n}}r{\lambda_{n}}{\left\|{{y_{n}}-p}\right\|^{2}}\\ &\leq(1-{\theta_{n}}){\left\|{{z_{n}}-p}\right\|^{2}}+{\theta_{n}}{\left\|{{w_{n}}-p}\right\|^{2}}-2\theta r{\lambda_{n}}{\left\|{{y_{n}}-p}\right\|^{2}},~{}\forall n\geq N.\end{split}\end{array} (4.2)

From Lemma 8, it follows that

xn+1p2(1θn)znp2+θnwnp22θrλynp2,nN,\begin{array}[]{c}{\left\|{{x_{n+1}}-p}\right\|^{2}}\leq(1-{\theta_{n}}){\left\|{{z_{n}}-p}\right\|^{2}}+{\theta_{n}}{\left\|{{w_{n}}-p}\right\|^{2}}-2\theta r\lambda^{*}{\left\|{{y_{n}}-p}\right\|^{2}},~{}\forall n\geq N,\end{array} (4.3)

where λ=min{μL,λ1}\lambda^{*}=\min\{\frac{\mu}{L},{\lambda_{1}}\}.
The definition of wnw_{n} implies that

wnp2=xn+αn(xnxn1)p2xnp2+αn2xnxn12+2αnxnpxnxn1.\begin{array}[]{c}\begin{split}{\left\|{{w_{n}}-p}\right\|^{2}}&={\left\|{{x_{n}}+{\alpha_{n}}({x_{n}}-{x_{n-1}})-p}\right\|^{2}}\\ &\leq{\left\|{{x_{n}}-p}\right\|^{2}}+{\alpha_{n}}^{2}{\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}}+2{\alpha_{n}}\left\|{{x_{n}}-p}\right\|\left\|{{x_{n}}-{x_{n-1}}}\right\|.\end{split}\end{array} (4.4)

By utilizing the definition of znz_{n}, we have

znp2xnp2+βn2xnxn12+2βnxnpxnxn1.\begin{array}[]{c}{\left\|{{z_{n}}-p}\right\|^{2}}\leq{\left\|{{x_{n}}-p}\right\|^{2}}+{\beta_{n}}^{2}{\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}}+2{\beta_{n}}\left\|{{x_{n}}-p}\right\|\left\|{{x_{n}}-{x_{n-1}}}\right\|.\end{array} (4.5)

Combining (4.3), (4.4) and (4.5), we obtain

xn+1p2(1θn)(xnp2+βn2xnxn12+2βnxnpxnxn1)+θn(xnp2+αn2xnxn12+2αnxnpxnxn1)2θλrynp2=xnp2+((1θn)βn2+θnαn2)xnxn12+2((1θn)βn+θnαn)xnpxnxn12θλrynp2,nN.\begin{array}[]{c}\begin{split}{\left\|{{x_{n+1}}-p}\right\|^{2}}&\leq(1-{\theta_{n}})({\left\|{{x_{n}}-p}\right\|^{2}}+{\beta_{n}}^{2}{\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}}+2{\beta_{n}}\left\|{{x_{n}}-p}\right\|\left\|{{x_{n}}-{x_{n-1}}}\right\|)\\ &\quad+{\theta_{n}}({\left\|{{x_{n}}-p}\right\|^{2}}+{\alpha_{n}}^{2}{\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}}+2{\alpha_{n}}\left\|{{x_{n}}-p}\right\|\left\|{{x_{n}}-{x_{n-1}}}\right\|)\\ &\quad-2{\theta}\lambda^{*}r{\left\|{{y_{n}}-p}\right\|^{2}}\\ &={\left\|{{x_{n}}-p}\right\|^{2}}+((1-{\theta_{n}}){\beta_{n}}^{2}+{\theta_{n}}{\alpha_{n}}^{2}){\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}}\\ &\quad+2((1-{\theta_{n}}){\beta_{n}}+{\theta_{n}}{\alpha_{n}})\left\|{{x_{n}}-p}\right\|\left\|{{x_{n}}-{x_{n-1}}}\right\|-2{\theta}\lambda^{*}r{\left\|{{y_{n}}-p}\right\|^{2}},~{}\forall n\geq N.\end{split}\end{array} (4.6)

In addition,

xnp22(xnyn2+ynp2)4(xnwn2+ynwn2)+2ynp2,\begin{array}[]{c}\begin{split}{\left\|{{x_{n}}-p}\right\|^{2}}&\leq 2({\left\|{{x_{n}}-{y_{n}}}\right\|^{2}}+{\left\|{{y_{n}}-p}\right\|^{2}})\\ &\leq 4({\left\|{{x_{n}}-{w_{n}}}\right\|^{2}}+{\left\|{{y_{n}}-{w_{n}}}\right\|^{2}})+2{\left\|{{y_{n}}-p}\right\|^{2}},\end{split}\end{array}

which implies that

ynp212xnp22ynwn22wnxn2.\begin{array}[]{c}\begin{split}{\left\|{{y_{n}}-p}\right\|^{2}}\geq\frac{1}{2}{\left\|{{x_{n}}-p}\right\|^{2}}-2{\left\|{{y_{n}}-{w_{n}}}\right\|^{2}}-2{\left\|{{w_{n}}-{x_{n}}}\right\|^{2}}.\end{split}\end{array} (4.7)

In view of (4.6) and (4.7), we have

xn+1p2xnp2+((1θn)βn2+θnαn2)xnxn12+2((1θn)βn+θnαn)xnpxnxn1θrλxnp2+4θrλynwn2+4θrλwnxn2=(1θrλ)xnp2+((1θn)βn2+θnαn2)xnxn12+2((1θn)βn+θnαn)xnpxnxn1+4θrλynwn2+4θrλwnxn2,nN.\begin{array}[]{c}\begin{split}{\left\|{{x_{n+1}}-p}\right\|^{2}}&\leq{\left\|{{x_{n}}-p}\right\|^{2}}+((1-{\theta_{n}}){\beta_{n}}^{2}+{\theta_{n}}{\alpha_{n}}^{2}){\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}}\\ &\quad+2((1-{\theta_{n}}){\beta_{n}}+{\theta_{n}}{\alpha_{n}})\left\|{{x_{n}}-p}\right\|\left\|{{x_{n}}-{x_{n-1}}}\right\|\\ &\quad-\theta r\lambda^{*}{\left\|{{x_{n}}-p}\right\|^{2}}+4\theta r\lambda^{*}{\left\|{{y_{n}}-{w_{n}}}\right\|^{2}}+4\theta r\lambda^{*}{\left\|{{w_{n}}-{x_{n}}}\right\|^{2}}\\ &=(1-\theta r\lambda^{*}){\left\|{{x_{n}}-p}\right\|^{2}}+((1-{\theta_{n}}){\beta_{n}}^{2}+{\theta_{n}}{\alpha_{n}}^{2}){\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}}\\ &\quad+2((1-{\theta_{n}}){\beta_{n}}+{\theta_{n}}{\alpha_{n}})\left\|{{x_{n}}-p}\right\|\left\|{{x_{n}}-{x_{n-1}}}\right\|\\ &\quad+4\theta r\lambda^{*}{\left\|{{y_{n}}-{w_{n}}}\right\|^{2}}+4\theta r\lambda^{*}{\left\|{{w_{n}}-{x_{n}}}\right\|^{2}},~{}\forall n\geq N.\end{split}\end{array} (4.8)

Since λ* ≤ μ/L and r ≤ L, we have rλ* ≤ μ < 1, and hence 1 − θrλ* ∈ (0,1).

Since {x_n} is bounded and lim_{n→∞}‖x_{n+1} − x_n‖ = lim_{n→∞}‖x_n − w_n‖ = lim_{n→∞}‖w_n − y_n‖ = 0, owing to Lemma 4, it follows from (4.8) that

limnxnp2=0.\mathop{\lim}\limits_{n\to\infty}{\left\|{{x_{n}}-p}\right\|^{2}}=0. (4.9)

Hence

limnxnp=0.\mathop{\lim}\limits_{n\to\infty}{\left\|{{x_{n}}-p}\right\|}=0. (4.10)

The proof is completed.

Remark 6

To the best of our knowledge, Theorem 4.1 is one of the few available strong convergence results for algorithms with double inertial extrapolation steps for solving MIP (1.1). In addition, we emphasize that Theorem 4.1 does not require the modulus of strong monotonicity or the Lipschitz constant to be known in advance.

Owing to Proposition 1 and Theorem 4.1, it is easy to obtain the following result.

Corollary 2

Let B = N_C and let A be μ-strongly pseudomonotone and L-Lipschitz continuous. If condition (C_3) holds, then the sequence {x_n} generated by Algorithm 3.1 converges strongly to a point p ∈ S.

Remark 7

Corollary 2 improves Theorem 5.1 of YI in the following aspects: (i) the sequence {α_n} may not be non-decreasing; (ii) the sequence {β_n} may not be constant; (iii) we require 0 < θ < θ_n ≤ θ_{n+1} < 1/(1+ε) with ε ∈ (1,+∞) rather than ε ∈ (2,+∞), which enlarges the interval from which θ_n can be chosen.

In order to discuss the linear convergence rate of our algorithm, we need the following assumption.

Assumption 4.2
  • (i)

    The solution set of the inclusion problem (1.1) is nonempty, that is, Ω\Omega\neq\emptyset.

  • (ii)

Either the mapping A: H → H is L-Lipschitz continuous and r-strongly monotone and the set-valued mapping B: H → 2^H is maximal monotone, or the mapping A: H → H is L-Lipschitz continuous and monotone and the set-valued mapping B: H → 2^H is r-strongly monotone.

  • (iii)

    Let λ^:=min{μL,λ1}\hat{\lambda}:=\min\{\frac{\mu}{L},{\lambda_{1}}\}, τ:=112min{1μ,2λ^r}(12,1)\tau:=1-\frac{1}{2}\min\{1-\mu,2\hat{\lambda}r\}\in(\frac{1}{2},1) and the following conditions hold:

  • (c1c_{1})

    0βnβ<12(1τ1)0\leq{\beta_{n}}\leq\beta<\frac{1}{2}(\frac{1}{\tau}-1)

  • (c2c_{2})

    0αnα<1ττ0\leq{\alpha_{n}}\leq\alpha<\frac{{1-\tau}}{\tau}

  • (c3c_{3})

    max{1β1+αβ,β1+βτ(1+α)}<θθn1θn1β+(1+β)24(1τ12β)(β1)2(1τ12β)\max\{\frac{1-\beta}{1+\alpha-\beta},~{}\frac{{\beta}}{{1+{\beta}-\tau(1+{\alpha})}}\}<\theta\leq\theta_{n-1}\leq\theta_{n}\leq\frac{{-1-\beta+\sqrt{{{(1+\beta)}^{2}}-4(\frac{1}{\tau}-1-2\beta)(\beta-1)}}}{{2(\frac{1}{\tau}-1-2\beta)}}

Remark 8

The set of parameters satisfying Assumption 4.2 is nonempty. For example, we can choose L = 1.5, r = 1, μ = 0.45, β = β_n = 0.1, α = α_n = 0.37 and θ_n = 0.72.

Next, we establish the linear convergence rate of our algorithm under Assumption 4.2.

Theorem 4.2

Suppose that Assumption 4.2 holds. Then {x_n} generated by Algorithm 3.1 converges linearly to some point in Ω.

Proof

By Lemma 8, we know that lim_{n→∞} λ_n = λ ≥ λ̂ = min{μ/L, λ_1}. Since lim_{n→∞} μ_n = 0, we have lim_{n→∞}(1 − (μ+μ_n)²λ_n²/λ_{n+1}²) = 1 − μ² > 1 − μ > 0. Thus it follows from (4.1) that there exists a positive integer N ≥ 1 such that

xn+1p2(1θn)znp2+θnwnp2(1θn)θnxn+1zn2θn(1μ)wnyn22θnrλ^ynp2(1θn)znp2+θnwnp2(1θn)θnxn+1zn2θnmin{1μ,2λ^r}(wnyn2+ynp2)(1θn)znp2+θnτwnp2(1θn)θnxn+1zn2,nN.\begin{array}[]{c}\begin{split}{\left\|{{x_{n+1}}-p}\right\|^{2}}&\leq(1-{\theta_{n}}){\left\|{{z_{n}}-p}\right\|^{2}}+{\theta_{n}}{\left\|{{w_{n}}-p}\right\|^{2}}-\frac{{(1-{\theta_{n}})}}{{{\theta_{n}}}}{\left\|{{x_{n+1}}-{z_{n}}}\right\|^{2}}\\ &\quad-{\theta_{n}}(1-{\mu}){\left\|{{w_{n}}-{y_{n}}}\right\|^{2}}-2{\theta_{n}}r\hat{\lambda}{\left\|{{y_{n}}-p}\right\|^{2}}\\ &\leq(1-{\theta_{n}}){\left\|{{z_{n}}-p}\right\|^{2}}+{\theta_{n}}{\left\|{{w_{n}}-p}\right\|^{2}}-\frac{{(1-{\theta_{n}})}}{{{\theta_{n}}}}{\left\|{{x_{n+1}}-{z_{n}}}\right\|^{2}}\\ &\quad-\theta_{n}\min\{1-\mu,2\hat{\lambda}r\}({\left\|{{w_{n}}-{y_{n}}}\right\|^{2}}+{\left\|{{y_{n}}-p}\right\|^{2}})\\ &\leq(1-\theta_{n}){\left\|{{z_{n}}-p}\right\|^{2}}+{\theta_{n}}\tau{\left\|{{w_{n}}-p}\right\|^{2}}-\frac{{(1-{\theta_{n}})}}{{{\theta_{n}}}}{\left\|{{x_{n+1}}-{z_{n}}}\right\|^{2}},~{}\forall n\geq N.\end{split}\end{array} (4.11)

Here the last inequality in (4.11) uses ‖w_n − y_n‖² + ‖y_n − p‖² ≥ ½‖w_n − p‖² and the definition of τ. Substituting (3.23), (3.24) and (3.25) into (4.11), we have

xn+1p2(1θn)[(1+βn)xnp2βnxn1p2+(1+βn)βnxnxn12]+θnτ[(1+αn)xnp2αnxn1p2+(1+αn)αnxnxn12](1θn)θn[(1βn)xn+1xn2+(βn2βn)xnxn12][(1θn)(1+βn)+θnτ(1+αn)]xnp2[(1θn)βn+θnταn]xn1p2+[(1θn)(1+βn)βn+θnτ(1+αn)αn+(1θn)θn(βnβn2)]xnxn12(1θn)θn(1βn)xn+1xn2[(1θn)(1+βn)+θnτ(1+αn)]xnp2+[(1θn)(1+βn)βn+θnτ(1+αn)αn+(1θn)θn(βnβn2)]xnxn12(1θn)θn(1βn)xn+1xn2[(1θn)(1+β)+θnτ(1+α)]xnp2+[(1θn)(1+β)β+θnτ(1+α)α+(1θn)θn(ββ2)]xnxn12(1θn)θn(1β)xn+1xn2,nN.\begin{array}[]{c}\begin{split}{\left\|{{x_{n+1}}-p}\right\|^{2}}&\leq(1-\theta_{n})[(1+{\beta_{n}}){\left\|{{x_{n}}-p}\right\|^{2}}-{\beta_{n}}{\left\|{{x_{n-1}}-p}\right\|^{2}}+(1+{\beta_{n}})\beta_{n}{\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}}]\\ &\quad+\theta_{n}\tau[(1+{\alpha_{n}}){\left\|{{x_{n}}-p}\right\|^{2}}-{\alpha_{n}}{\left\|{{x_{n-1}}-p}\right\|^{2}}+(1+{\alpha_{n}}){\alpha_{n}}{\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}}]\\ &\quad-\frac{{(1-\theta_{n})}}{\theta_{n}}[(1-{\beta_{n}}){\left\|{{x_{n+1}}-{x_{n}}}\right\|^{2}}+({{\beta_{n}}^{2}}-{\beta_{n}}){\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}}]\\ &\leq[(1-\theta_{n})(1+{\beta_{n}})+\theta_{n}\tau(1+{\alpha_{n}})]{\left\|{{x_{n}}-p}\right\|^{2}}-[(1-\theta_{n}){\beta_{n}}+\theta_{n}\tau{\alpha_{n}}]{\left\|{{x_{n-1}}-p}\right\|^{2}}\\ &\quad+[(1-\theta_{n})(1+{\beta_{n}}){\beta_{n}}+\theta_{n}\tau(1+{\alpha_{n}}){\alpha_{n}}+\frac{{(1-\theta_{n})}}{\theta_{n}}({\beta_{n}}-{{\beta_{n}}^{2}})]{\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}}\\ &\quad-\frac{{(1-\theta_{n})}}{\theta_{n}}(1-{\beta_{n}}){\left\|{{x_{n+1}}-{x_{n}}}\right\|^{2}}\\ &\leq[(1-\theta_{n})(1+{\beta_{n}})+\theta_{n}\tau(1+{\alpha_{n}})]{\left\|{{x_{n}}-p}\right\|^{2}}\\ &\quad+[(1-\theta_{n})(1+{\beta_{n}}){\beta_{n}}+\theta_{n}\tau(1+{\alpha_{n}}){\alpha_{n}}+\frac{{(1-\theta_{n})}}{\theta_{n}}({\beta_{n}}-{{\beta_{n}}^{2}})]{\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}}\\ &\quad-\frac{{(1-\theta_{n})}}{\theta_{n}}(1-{\beta_{n}}){\left\|{{x_{n+1}}-{x_{n}}}\right\|^{2}}\\ &\leq[(1-\theta_{n})(1+{\beta})+\theta_{n}\tau(1+{\alpha})]{\left\|{{x_{n}}-p}\right\|^{2}}\\ &\quad+[(1-\theta_{n})(1+{\beta}){\beta}+\theta_{n}\tau(1+{\alpha}){\alpha}+\frac{{(1-\theta_{n})}}{\theta_{n}}({\beta}-{{\beta}^{2}})]{\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}}\\ &\quad-\frac{{(1-\theta_{n})}}{\theta_{n}}(1-{\beta}){\left\|{{x_{n+1}}-{x_{n}}}\right\|^{2}},~{}\forall n\geq N.\end{split}\end{array} (4.12)

This implies that

\begin{array}[]{c}{\left\|{{x_{n+1}}-p}\right\|^{2}}+{\sigma_{n}}{\left\|{{x_{n+1}}-{x_{n}}}\right\|^{2}}\leq[(1-{\theta_{n}})(1+\beta)+{\theta_{n}}\tau(1+\alpha)]({\left\|{{x_{n}}-p}\right\|^{2}}+{\delta_{n}}{\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}}),\end{array} (4.13)

where δn=(1θn)(1+β)β+θnτ(1+α)α+(1θn)θn(ββ2)(1θn)(1+β)+θnτ(1+α)\delta_{n}=\frac{{(1-\theta_{n})(1+{\beta}){\beta}+\theta_{n}\tau(1+{\alpha}){\alpha}+\frac{{(1-\theta_{n})}}{\theta_{n}}({\beta}-{{\beta}^{2}})}}{{(1-\theta_{n})(1+{\beta})+\theta_{n}\tau(1+{\alpha})}} and σn=(1θn)θn(1β)\sigma_{n}=\frac{{(1-\theta_{n})}}{\theta_{n}}(1-{\beta}).
Now we show that δ_n ≤ σ_n:

δnσn=(1θn)(1+β)β+θnτ(1+α)α+1θnθn(ββ2)(1θn)(1+β)+θnτ(1+α)1θnθn(1β)=(1θn)(1+β)β+θnτ(1+α)α+1θnθn(ββ2)(1θn)(1+β)+θnτ(1+α)[(1θn)(1+β)+θnτ(1+α)]1θnθn(1β)(1θn)(1+β)+θnτ(1+α)=1θnθn[θn(1+β)β+ββ2(1θn)(1β2)](1θn)(1+β)+θnτ(1+α)+τ(1+α)[θnα(1θn)(1β)](1θn)(1+β)+θnτ(1+α)=1θnθn[θnβ+β1+θn](1θn)(1+β)+θnτ(1+α)+τ(1+α)[θnα1+θn+βθnβ](1θn)(1+β)+θnτ(1+α)=[(1+β)θn2+2θn+β1]+θnτ(1+α)[θn(1+α)1+βθnβ]θn((1θn)(1+β)+θnτ(1+α))[(1+β)θn2+2θn+β1]+[1τθn2θn+βθnβθn2]θn((1θn)(1+β)+θnτ(1+α))=(1τ12β)θn2+(1+β)θn+β1θn((1θn)(1+β)+θnτ(1+α))\begin{array}[]{c}\begin{split}\delta_{n}-\sigma_{n}&=\frac{{(1-\theta_{n})(1+{\beta}){\beta}+\theta_{n}\tau(1+{\alpha}){\alpha}+\frac{{1-\theta_{n}}}{\theta_{n}}({\beta}-{{\beta}^{2}})}}{{(1-\theta_{n})(1+{\beta})+\theta_{n}\tau(1+{\alpha})}}-\frac{{1-\theta_{n}}}{\theta_{n}}(1-{\beta})\\ &=\frac{{(1-{\theta_{n}})(1+{\beta}){\beta}+{\theta_{n}}\tau(1+{\alpha}){\alpha}+\frac{{1-{\theta_{n}}}}{{{\theta_{n}}}}({\beta}-{\beta}^{2})}}{{(1-{\theta_{n}})(1+{\beta})+{\theta_{n}}\tau(1+{\alpha})}}\\ &\quad-\frac{{[(1-{\theta_{n}})(1+{\beta})+{\theta_{n}}\tau(1+{\alpha})]\frac{{1-{\theta_{n}}}}{{{\theta_{n}}}}(1-{\beta})}}{{(1-{\theta_{n}})(1+{\beta})+{\theta_{n}}\tau(1+{\alpha})}}\\ &=\frac{{\frac{{1-{\theta_{n}}}}{{{\theta_{n}}}}[{\theta_{n}}(1+{\beta}){\beta}+{\beta}-{\beta}^{2}-(1-{\theta_{n}})(1-{\beta}^{2})]}}{{(1-{\theta_{n}})(1+{\beta})+{\theta_{n}}\tau(1+{\alpha})}}\\ &\quad+\frac{{\tau(1+{\alpha})[{\theta_{n}}{\alpha}-(1-{\theta_{n}})(1-{\beta})]}}{{(1-{\theta_{n}})(1+{\beta})+{\theta_{n}}\tau(1+{\alpha})}}\\ &=\frac{{\frac{{1-{\theta_{n}}}}{{{\theta_{n}}}}[{\theta_{n}}{\beta}+{\beta}-1+{\theta_{n}}]}}{{(1-{\theta_{n}})(1+{\beta})+{\theta_{n}}\tau(1+{\alpha})}}+\frac{{\tau(1+{\alpha})[{\theta_{n}}{\alpha}-1+{\theta_{n}}+{\beta}-{\theta_{n}}{\beta}]}}{{(1-{\theta_{n}})(1+{\beta})+{\theta_{n}}\tau(1+{\alpha})}}\\ &=\frac{{[-(1+\beta){\theta_{n}}^{2}+2{\theta_{n}}+\beta-1]+{\theta_{n}}\tau(1+\alpha)[{\theta_{n}}(1+\alpha)-1+\beta-{\theta_{n}}\beta]}}{{{\theta_{n}}((1-{\theta_{n}})(1+\beta)+{\theta_{n}}\tau(1+\alpha))}}\\ &\leq\frac{{[-(1+\beta){\theta_{n}}^{2}+2{\theta_{n}}+\beta-1]+[\frac{1}{\tau}{\theta_{n}}^{2}-{\theta_{n}}+\beta{\theta_{n}}-\beta{\theta_{n}}^{2}]}}{{{\theta_{n}}((1-{\theta_{n}})(1+\beta)+{\theta_{n}}\tau(1+\alpha))}}\\ &=\frac{{(\frac{1}{\tau}-1-2\beta){\theta_{n}}^{2}+(1+\beta){\theta_{n}}+\beta-1}}{{{\theta_{n}}((1-{\theta_{n}})(1+\beta)+{\theta_{n}}\tau(1+\alpha))}}\end{split}\end{array} (4.14)

Since θ_n ≤ (−1−β+√((1+β)² − 4(1/τ−1−2β)(β−1)))/(2(1/τ−1−2β)) and 1/τ − 1 − 2β > 0 by (c_1), the quadratic (1/τ−1−2β)θ_n² + (1+β)θ_n + (β−1) is nonpositive, and hence (4.14) yields δ_n ≤ σ_n. From θ_{n−1} ≤ θ_n, it follows that σ_n ≤ σ_{n−1}. By (4.13) and δ_n ≤ σ_n, we have

\begin{split}{\left\|{{x_{n+1}}-p}\right\|^{2}}+{\sigma_{n}}{\left\|{{x_{n+1}}-{x_{n}}}\right\|^{2}}&\leq[(1-{\theta_{n}})(1+\beta)+{\theta_{n}}\tau(1+\alpha)]({\left\|{{x_{n}}-p}\right\|^{2}}+{\sigma_{n}}{\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}})\\&=[(1+\beta)+{\theta_{n}}(\tau(1+\alpha)-(1+\beta))]({\left\|{{x_{n}}-p}\right\|^{2}}+{\sigma_{n}}{\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}})\\&\leq[(1+\beta)+\theta(\tau(1+\alpha)-(1+\beta))]({\left\|{{x_{n}}-p}\right\|^{2}}+{\sigma_{n-1}}{\left\|{{x_{n}}-{x_{n-1}}}\right\|^{2}}),\end{split} (4.15)

where the last inequality follows from σ_n ≤ σ_{n−1}, θ ≤ θ_n and the fact that τ(1+α) < 1+β, which is guaranteed by α < (1−τ)/τ − β/τ.
Since β/((1+β) − τ(1+α)) < θ, we have θ[(1+β) − τ(1+α)] > β, that is, (1+β) + θ[τ(1+α) − (1+β)] < 1; since this quantity equals (1−θ)(1+β) + θτ(1+α) > 0, we conclude that (1+β) + θ[τ(1+α) − (1+β)] ∈ (0,1). Therefore, iterating (4.15), we get

\begin{array}[]{c}{\left\|{{x_{n+1}}-p}\right\|^{2}}+{\sigma_{n}}{\left\|{{x_{n+1}}-{x_{n}}}\right\|^{2}}\leq{[(1+\beta)+\theta(\tau(1+\alpha)-(1+\beta))]^{n-1}}({\left\|{{x_{2}}-p}\right\|^{2}}+{\sigma_{1}}{\left\|{{x_{2}}-{x_{1}}}\right\|^{2}}).\end{array} (4.16)

This implies that

{\left\|{{x_{n+1}}-p}\right\|^{2}}\leq{[(1+\beta)+\theta(\tau(1+\alpha)-(1+\beta))]^{n-1}}({\left\|{{x_{2}}-p}\right\|^{2}}+{\sigma_{1}}{\left\|{{x_{2}}-{x_{1}}}\right\|^{2}}). (4.17)

Hence, the proof is completed.

Remark 9

Theorem 6.2 of YI gives a linear convergence rate result for the subgradient extragradient method with double inertial steps for solving variational inequalities. However, only the single inertial case is discussed there. As far as we know, no linear convergence results are available for algorithms with double inertial extrapolation steps for solving variational inequalities and monotone inclusions. Hence Theorem 4.2 is a new result.

5 Numerical experiments

In this section, we provide some numerical examples to show the performance of our Algorithm 3.1 (shortly Alg1) and compare it with other methods, namely Abubakar et al.'s Algorithm 3.1 (shortly AKHAlg1) AB, Cholamjiak et al.'s Algorithm 3.2 (CHCAlg2) CV, Cholamjiak et al.'s Algorithm 3.1 (CHMAlg1) CH, Hieu et al.'s Algorithm 3.1 (HAMAlg1) VA and Yao et al.'s Algorithm 1 (YISAlg1) YI.

All the programs were implemented in MATLAB R2021b on an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz computer with 8.00GB of RAM. We denote the number of iterations by "Iter." and the CPU time in seconds by "CPU(s)".

Example 1

We consider the signal recovery problem in compressed sensing. This problem can be modeled as

y=Ax+ε.y=Ax+\varepsilon. (5.1)

where y ∈ R^M is the observed or measured data, A: R^N → R^M is a bounded linear operator, x ∈ R^N is a vector with K (K ≪ N) nonzero components and ε is the noise. It is known that problem (5.1) can be viewed as the LASSO problem CV

minxRN12yAx22+λx1(λ>0).\mathop{\min}\limits_{x\in{R^{N}}}\frac{1}{2}\left\|{y-Ax}\right\|_{2}^{2}+\lambda{\left\|x\right\|_{1}}~{}(\lambda>0). (5.2)

The minimization problem (5.2) is equivalent to the following monotone inclusion problem

findxRNsuchthat0(B+C)x.{\rm find}~{}x\in{R^{N}}~{}{\rm such}~{}{\rm that}~{}0\in(B+C)x. (5.3)

where Bx = A^T(Ax − y) and C = ∂(λ‖·‖_1). In this experiment, the vector x ∈ R^N is generated from the uniform distribution on the interval [−1,1].

The matrix A ∈ R^{M×N} is generated from a normal distribution with zero mean and unit variance. The vector y is generated by adding Gaussian noise ε with variance 10^{−4}. The initial points x_0 and x_1 are both zero. We use E_n = ‖x_n − x_{n−1}‖ to measure the restoration accuracy, and the stopping criterion is E_n ≤ 10^{−5}.

In the first experiment we consider the influence of different αn\alpha_{n} and βn\beta_{n} on the performance of our algorithm. We take μ=0.9\mu=0.9, θn=0.45{\theta_{n}}=0.45, λ1=0.1{\lambda_{1}}=0.1, μn=0{\mu_{n}}=0 and pn=1n2p_{n}=\frac{1}{{{n^{2}}}}.
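Algorithm 3.1 itself is stated in Section 3; as a reading aid only, the following Python sketch (the experiments were run in MATLAB) mirrors the updates that appear in the convergence proofs above: the two inertial points w_n and z_n, the forward-backward point y_n, Tseng's correction step, the θ_n-relaxation, and the self-adaptive step size governed by μ, μ_n and p_n. All function names are ours, and the defaults mirror the Alg1 parameter choices listed below.

import numpy as np

def tseng_double_inertial(F, resolvent, x1, x0=None, mu=0.9, lam=0.1,
                          alpha_n=lambda n: 1.0 - 10.0**(-n),
                          beta_n=lambda n: 0.1 - 1.0 / (1000 + n),
                          theta_n=lambda n: 0.45 - 1.0 / (1000 + n),
                          mu_n=lambda n: 1.0 / n**2,
                          p_n=lambda n: 1.0 / n**2,
                          tol=1e-5, max_iter=10000):
    """Sketch of a relaxed Tseng iteration with double inertial steps for
    0 in (A + B)x, where F evaluates the single-valued operator A and
    resolvent(u, lam) evaluates (I + lam*B)^(-1) u."""
    x_prev = x1.copy() if x0 is None else x0.copy()
    x = x1.copy()
    for n in range(1, max_iter + 1):
        d = x - x_prev
        w = x + alpha_n(n) * d                 # first inertial point w_n
        z = x + beta_n(n) * d                  # second inertial point z_n
        Fw = F(w)
        y = resolvent(w - lam * Fw, lam)       # forward-backward step at w_n
        Fy = F(y)
        v = y + lam * (Fw - Fy)                # Tseng correction step
        x_prev, x = x, (1.0 - theta_n(n)) * z + theta_n(n) * v
        nrm = np.linalg.norm(Fw - Fy)          # self-adaptive step size:
        cand = (mu + mu_n(n)) * np.linalg.norm(w - y) / nrm if nrm > 0 else np.inf
        lam = min(cand, lam + p_n(n))          # no Lipschitz constant is needed
        if np.linalg.norm(x - x_prev) <= tol:  # E_n = ||x_n - x_{n-1}||
            break
    return x, n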

Table 1: The performances of Algorithm 3.1 for different values of αn\alpha_{n} and βn\beta_{n} in Example 5.1
βn\beta_{n} αn\alpha_{n} Iter. 0.2 0.4 0.6 0.8 0.9 1
0 966 872 777 681 632 584
0.02 954 859 764 668 620 572
0.04 942 848 752 656 608 559
0.06 930 836 740 644 596 547
0.08 918 823 728 632 583 535
0.1 906 811 716 620 571 522

It can be seen from Table 1 that Algorithm 3.1 with double inertial extrapolation steps, that is, with β_n ≠ 0, outperforms the one with a single inertial step; moreover, increasing α_n and β_n significantly improves the convergence speed of the algorithm. This implies that it is important to investigate double inertial methods from both theoretical and numerical viewpoints.

In the second experiment, we compare the performance of our algorithm with that of the other algorithms. The following two cases are considered:

Case 1: K=20,M=256,N=512K=20,~{}M=256,~{}N=512;

Case 2: K=40,M=512,N=1024K=40,~{}M=512,~{}N=1024.

The parameters for the algorithms are chosen as follows:

Alg1: μ=0.9\mu=0.9,αn=1110n{\alpha_{n}}=1-\frac{1}{{{{10}^{n}}}}, βn=0.111000+n{\beta_{n}}=0.1-\frac{1}{{1000+n}}, θn=0.4511000+n{\theta_{n}}=0.45-\frac{1}{{1000+n}}, λ1=0.1{\lambda_{1}}=0.1, μn=1n2{\mu_{n}}=\frac{1}{{{n^{2}}}} and pn=1n2;p_{n}=\frac{1}{{{n^{2}}}};

CHCAlg2: μ=0.9\mu=0.9, α=0.1\alpha=0.1, θ=1\theta=1 and λ1=1;{\lambda_{1}}=1;

CHMAlg1: μ=0.4\mu=0.4, α1=0.01{\alpha_{1}}=0.01, α2=0.02{\alpha_{2}}=0.02 and λ0=0.01{\lambda_{0}}=0.01;

HAMAlg1: μ=0.4\mu=0.4, λ1=λ0=0.1{\lambda_{-1}}={\lambda_{0}}=0.1;

AKHAlg1: μ=0.3\mu=0.3, ρ=0.1\rho=0.1, ϱ=0.9\varrho=0.9 and λ0=1{\lambda_{0}}=1.
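With the data generated as described above, a hypothetical driver for Case 1, reusing the solver sketch with the soft-thresholding resolvent of C = ∂(λ‖·‖_1), might read as follows; the LASSO weight λ is not reported above, so lam_reg is only a placeholder value.

rng = np.random.default_rng(0)
K, M, N = 20, 256, 512                                    # Case 1
A = rng.standard_normal((M, N))                           # zero mean, unit variance
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.uniform(-1.0, 1.0, K)
y = A @ x_true + np.sqrt(1e-4) * rng.standard_normal(M)   # Gaussian noise, variance 1e-4
lam_reg = 0.01                                            # placeholder regularization weight
soft = lambda u, kappa: np.sign(u) * np.maximum(np.abs(u) - kappa, 0.0)
F = lambda u: A.T @ (A @ u - y)                           # Bx = A^T(Ax - y)
resolvent = lambda u, lam: soft(u, lam * lam_reg)         # resolvent of lam*C
x_rec, iters = tseng_double_inertial(F, resolvent, np.zeros(N))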

Fig. 1 and Table 2 show the numerical results of our algorithm and the other algorithms in the two cases, respectively. The graphs of the original signal and the recovered signals are given in Fig. 2.

Table 2: Numerical results for Example 1

Algorithms    Case 1             Case 2
              Iter.    CPU(s)    Iter.    CPU(s)
Alg1          525      0.2447    809      1.2297
CHCAlg2       1347     0.5340    2595     3.0049
CHMAlg1       887      0.3496    1554     1.9651
HAMAlg1       971      0.3904    1577     1.5138
AKHAlg1       3427     1.3142    6723     6.4723
Figure 1: Numerical behavior of E_n for Example 1. (a) K=20, M=256, N=512; (b) K=40, M=512, N=1024.
Figure 2: From top to bottom: original signal (N=512, M=256, 20 spikes), measured values with variance 10^{-4}, and the signals recovered in Case 1 by (Alg1) (484 iterations), (CHCAlg2) (1347 iterations), (CHMAlg1) (718 iterations), (HAMAlg1) (971 iterations) and (AKHAlg1) (3427 iterations).
Remark 10

It can be seen from Fig. 1 and Table 2 that our algorithm requires fewer iterations and less CPU time than the other algorithms, indicating that it has better performance.

Example 2

We consider our algorithm for solving the variational inequality problem. The operator A: R^m → R^m is defined by A(x) := Mx + q with M = NN^T + S + D and q ∈ R^m, where N is an m×m matrix, S is an m×m skew-symmetric matrix and D is an m×m diagonal matrix. All entries of N and S are uniformly generated from (−5,5) and all diagonal entries of D are uniformly generated from (0,0.3). It is easy to see that M is positive definite. Define the feasible set C := R^m_+. We use E_n = ‖x_n‖ to measure the accuracy, and the stopping criterion is E_n ≤ 10^{−3}.
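A minimal Python sketch of this test problem, reusing the tseng_double_inertial routine sketched in Example 1, could be as follows. Since q is only assumed to lie in R^m, we take q = 0, which is consistent with the accuracy measure E_n = ‖x_n‖ (the zero vector then solves the problem); the starting point is our choice, and the routine stops on ‖x_n − x_{n−1}‖ rather than on ‖x_n‖.

rng = np.random.default_rng(0)
m = 50                                            # Case 1 below
Nmat = rng.uniform(-5.0, 5.0, (m, m))
S = np.triu(rng.uniform(-5.0, 5.0, (m, m)), 1)
S = S - S.T                                       # skew-symmetric, entries in (-5, 5)
D = np.diag(rng.uniform(0.0, 0.3, m))
Mmat = Nmat @ Nmat.T + S + D                      # positive definite by construction
q = np.zeros(m)                                   # assumption: q = 0, so x* = 0
F = lambda u: Mmat @ u + q                        # A(x) = Mx + q
resolvent = lambda u, lam: np.maximum(u, 0.0)     # resolvent of N_C is P_C for C = R^m_+
x_sol, iters = tseng_double_inertial(F, resolvent, np.ones(m),
                                     alpha_n=lambda n: 1.0,   # first-experiment choices
                                     beta_n=lambda n: 0.1,
                                     mu_n=lambda n: 0.0,
                                     tol=1e-3)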

In the first experiment, we consider the effect of the relaxation coefficient θ_n on the performance of Algorithm 3.1. We choose μ=0.9, α_n=1, β_n=0.1, λ_1=0.1, μ_n=0 and p_n=1/n².

Table 3: The performance of Algorithm 3.1 for different values of θ_n in Example 2

θ_n       0.05     0.1      0.15     0.2      0.25     0.3      0.35     0.4      0.45
Iter.     16988    8521     5562     4035     3095     2454     1987     1360     1346
CPU(s)    1.8752   0.9888   0.6785   0.4804   0.3879   0.2910   0.2436   0.2246   0.1674

Table 3 shows the performance of Algorithm 3.1 for different values of θ_n. It can be seen that the performance of the algorithm improves as the relaxation parameter θ_n increases. This indicates that enlarging the admissible range of the relaxation parameter is of great significance for improving the performance of the algorithm.

In the second experiment, we compare the performance of our algorithm with that of the other algorithms. The following cases are considered:

Case 1: m=50m=50;  Case 2: m=100m=100;  Case 3: m=150m=150;  Case 4: m=200.m=200.

The parameters for the algorithms are chosen as follows:

Alg1: μ=0.9\mu=0.9, αn=1110n{\alpha_{n}}=1-\frac{1}{{{{10}^{n}}}}, βn=0.111000+n{\beta_{n}}=0.1-\frac{1}{{1000+n}}, θn=0.4511000+n{\theta_{n}}=0.45-\frac{1}{{1000+n}}, μn=0{\mu_{n}}=0 and pn=1n2;{p_{n}}=\frac{1}{{{n^{2}}}};

HAMAlg1: μ=0.4\mu=0.4, λ1=λ0=0.3;{\lambda_{-1}}={\lambda_{0}}=0.3;

CHCAlg2: μ=0.9\mu=0.9, α=0.3\alpha=0.3, θ=0.4\theta=0.4 and λ1=1;{\lambda_{1}}=1;

YISAlg1: μ=0.9\mu=0.9, αn=0.2903{\alpha_{n}}=0.2903, δ=0.0241\delta=0.0241, θn=1{\theta_{n}}=1 and λ1=0.1;{\lambda_{1}}=0.1;

AKHAlg1: μ=0.3\mu=0.3, ρ=0.1\rho=0.1, ϱ=0.9\varrho=0.9 and λ0=1{\lambda_{0}}=1;

CHMAlg1: μ=0.4\mu=0.4, α1=0.01{\alpha_{1}}=0.01, α2=0.02{\alpha_{2}}=0.02 and λ0=1{\lambda_{0}}=1.

Fig. 3 and Table 4 show the performance comparison of our algorithm with the other algorithms in the four cases, respectively.

Table 4: Numerical results for Example 2

Algorithms    m=50              m=100             m=150             m=200
              Iter.   CPU(s)    Iter.   CPU(s)    Iter.   CPU(s)    Iter.   CPU(s)
Alg1          448     0.0191    642     0.0380    759     0.0968    1012    0.1681
CHCAlg2       723     0.0224    1048    0.0499    1234    0.1105    1644    0.2128
YISAlg1       600     0.0917    963     0.0635    1214    0.1374    1595    0.2125
HAMAlg1       779     0.0203    1142    0.0475    1355    0.1410    1751    0.1940
AKHAlg1       1049    0.0267    1425    0.0659    1691    0.1625    2371    0.2839
CHMAlg1       871     0.0331    1182    0.0637    1408    0.1366    1879    0.2158
Figure 3: Numerical behavior of E_n for Example 2. (a) Case 1, m=50; (b) Case 2, m=100; (c) Case 3, m=150; (d) Case 4, m=200.
Remark 11

It can be seen from Fig. 3 and Table 4 that our algorithm performs better than the other algorithms on problems such as Example 2.

Example 3

Let H := L_2([0,1]) with the norm

\left\|x\right\|:={\Big(\int_{0}^{1}{x(t)^{2}}\,dt\Big)^{\frac{1}{2}}}

and the inner product

x,y:=01x(t)y(t)𝑑t.\left\langle{x,y}\right\rangle:=\int_{0}^{1}{x(t)y(t)dt}.

Let C:={xL2([0,1]):01tx(t)𝑑t=2}C:=\{x\in{L^{2}}([0,1]):\int_{0}^{1}{tx(t)dt=2}\} and define A:L2([0,1])L2([0,1])A:{L^{2}}([0,1])\to{L^{2}}([0,1]) by

Ax(t):=max{x(t),0},xL2([0,1]),t[0,1].Ax(t):=\max\{x(t),0\},x\in{L^{2}}([0,1]),t\in[0,1].

It is clear that A is monotone and Lipschitz continuous with L = 1. The orthogonal projection onto C has the following explicit formula:

PC(x)(t):=x(t)01tx(t)𝑑t201t2𝑑tt.{P_{C}}(x)(t):=x(t)-\frac{{\int_{0}^{1}{tx(t)dt-2}}}{{\int_{0}^{1}{{t^{2}}dt}}}t.

This example is taken from YI. We use E_n = ‖x_{n+1} − x_n‖ to measure the accuracy, and the stopping criterion is E_n ≤ 10^{−4}.
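For a numerical run one has to discretize; a minimal sketch, assuming functions on [0,1] are represented by their values on a uniform grid and integrals are computed by the trapezoidal rule (the grid size is our choice), could be:

t = np.linspace(0.0, 1.0, 1001)                  # uniform grid on [0, 1]
F = lambda u: np.maximum(u, 0.0)                 # (Ax)(t) = max{x(t), 0}

def resolvent(u, lam):
    # Resolvent of N_C is P_C; note int_0^1 t^2 dt = 1/3.
    return u - (np.trapz(t * u, t) - 2.0) / (1.0 / 3.0) * t

x0 = (97 * t**2 + 4 * t) / 13                    # Case 1 starting points
x1 = (t**2 - np.exp(-7 * t)) / 250
x_sol, iters = tseng_double_inertial(F, resolvent, x1, x0, mu=0.4, tol=1e-4)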

The following cases are considered

Case 1: x0=97t2+4t13,x1=t2e7t250;{x_{0}}=\frac{{97{t^{2}}+4t}}{{13}},{x_{1}}=\frac{{{t^{2}}-{e^{-7t}}}}{{250}};

Case 2: x0=97t2+4t13,x1=sin(3t)+cos(10t)100;{x_{0}}=\frac{{97{t^{2}}+4t}}{{13}},{x_{1}}=\frac{{\sin(3t)+\cos(10t)}}{{100}};

Case 3: x0=t2e7t250,x1=sin(3t)+cos(10t)100;{x_{0}}=\frac{{{t^{2}}-{e^{-7t}}}}{{250}},{x_{1}}=\frac{{\sin(3t)+\cos(10t)}}{{100}};

Case 4: x0=sin(3t)+cos(10t)100,x1=97t2+4t13.{x_{0}}=\frac{{\sin(3t)+\cos(10t)}}{{100}},{x_{1}}=\frac{{97{t^{2}}+4t}}{{13}}.

The parameters for the algorithms are chosen as follows:

Alg1: μ=0.4\mu=0.4, αn=1110n{\alpha_{n}}=1-\frac{1}{{{{10}^{n}}}}, βn=0.111000+n{\beta_{n}}=0.1-\frac{1}{{1000+n}}, θn=0.4511000+n{\theta_{n}}=0.45-\frac{1}{{1000+n}} and pn=1n2;{p_{n}}=\frac{1}{{{n^{2}}}};

CHCAlg2: μ=0.4\mu=0.4, α=0.3\alpha=0.3, θ=0.4\theta=0.4 and λ1=1;{\lambda_{1}}=1;

HAMAlg1: μ=0.4\mu=0.4, λ1=λ0=0.1;{\lambda_{-1}}={\lambda_{0}}=0.1;

YISAlg1: μ=0.4\mu=0.4, αn=0.2250{\alpha_{n}}=0.2250, δ=0.4950\delta=0.4950, θn=1{\theta_{n}}=1 and λ1=1.1;{\lambda_{1}}=1.1;

AKHAlg1: μ=0.4\mu=0.4,ρ=0.45\rho=0.45, ϱ=0.3\varrho=0.3 and λ0=0.5;{\lambda_{0}}=0.5;

CHMAlg1: μ=0.4\mu=0.4, α1=0.01{\alpha_{1}}=0.01, α2=0.02{\alpha_{2}}=0.02 and λ0=1{\lambda_{0}}=1.

Table 5: Numerical results for Example 3

Algorithms    Case 1            Case 2            Case 3            Case 4
              Iter.   CPU(s)    Iter.   CPU(s)    Iter.   CPU(s)    Iter.   CPU(s)
Alg1          32      0.0554    32      0.0499    18      0.0296    36      0.0416
CHCAlg2       40      0.0601    40      0.0616    24      0.0361    52      0.0664
VAMAlg1       47      0.0616    46      0.0558    22      0.0311    70      0.0974
YISAlg1       45      0.0894    45      0.0629    26      0.0457    53      0.0912
AKHAlg1       38      0.0746    39      0.0506    20      0.0277    46      0.0697
HAMAlg1       34      0.0651    37      0.0498    24      0.0343    70      0.0802
Figure 4: Numerical behavior of E_n for Example 3. Panels (a)-(d) correspond to the starting points of Cases 1-4, respectively.
Remark 12

From Table 5 and Fig. 4, we observe that our Algorithm 3.1 performs better and converges faster than the other algorithms.

6 Conclusions

In this paper, we propose a new Tseng splitting method with double inertial extrapolation steps for solving monotone inclusion problems in real Hilbert spaces and establish the weak convergence, nonasymptotic O(1n)O(\frac{1}{\sqrt{n}}) convergence rate, strong convergence and linear convergence rate of the proposed algorithm, respectively. Our method has the following advantages:

(i) Our method uses adaptive step sizes, which can be updated by a simple calculation without knowing the Lipschitz constant of the underlying operator.

(ii) Our method employs double inertial extrapolation steps, in which the inertial factor α_n can equal 1. This is not allowed in the corresponding algorithms of CV; CH, where only a single inertial extrapolation step is considered and the inertial factor is bounded away from 1. From Table 1 in Section 5, it can be seen that Algorithm 3.1 with double inertial extrapolation steps outperforms the one with a single inertial step.

(iii) Our method includes the corresponding methods considered in AB; CV; YI as special cases. In particular, when our algorithm is used to solve variational inequalities, the relaxation parameter sequence {θ_n} has a larger choice interval than that of YI. Via Table 3 in Section 5, we observe that the performance of the algorithm becomes better as the relaxation parameter θ_n increases.

(iv) To the best of our knowledge, there are few available convergence rate results for algorithms with double inertial extrapolation steps for solving variational inequalities and monotone inclusions. From the numerical experiments in Section 5, we can see that our algorithm is more efficient than the corresponding algorithms in AB; CV; CH; VA; YI.

Acknowledgements.
This work was supported by the National Natural Science Foundation of China (11701479, 11701478), the Chinese Postdoctoral Science Foundation (2018M643434) and the Fundamental Research Funds for the Central Universities (2682021ZTPY040).

References

  • (1) Abubakar, J., Kumam, P., Hassan Ibrahim, A., Padcharoen, A.: Relaxed inertial Tseng's type method for solving the inclusion problem with application to image restoration. Mathematics 8, 818 (2020)
  • (2) Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3-11 (2001)
  • (3) Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)
  • (4) Cai, G., Dong, Q.L., Peng, Y.: Strong convergence theorems for inertial Tseng's extragradient method for solving variational inequality problems and fixed point problems. Optim. Lett. 15, 1457-1474 (2021)
  • (5) Cholamjiak, P., Hieu, D.V., Cho, Y.J.: Relaxed forward-backward splitting methods for solving variational inclusions and applications. J. Sci. Comput. 88, 85 (2021)
  • (6) Cholamjiak, P., Hieu, D.V., Muu, L.D.: Inertial splitting methods without prior constants for solving variational inclusions of two operators. Bull. Iran. Math. Soc. (2022). https://doi.org/10.1007/s41980-022-00682-3
  • (7) Combettes, P.L., Wajs, V.: Signal recovery by proximal forward-backward splitting. SIAM Multiscale Model. Simul. 4, 1168-1200 (2005)
  • (8) Çopur, A.K., Hacıoğlu, E., Gürsoy, F., et al.: An efficient inertial type iterative algorithm to approximate the solutions of quasi variational inequalities in real Hilbert spaces. J. Sci. Comput. 89, 50 (2022)
  • (9) Dong, Q., Jiang, D., Cholamjiak, P.: A strong convergence result involving an inertial forward-backward algorithm for monotone inclusions. J. Fixed Point Theory Appl. 19, 3097-3118 (2017)
  • (10) Duchi, J., Singer, Y.: Efficient online and batch learning using forward-backward splitting. J. Mach. Learn. Res. 10, 2899-2934 (2009)
  • (11) Gibali, A., Thong, D.V.: Tseng type methods for solving inclusion problems and its applications. Calcolo 55, 49 (2018)
  • (12) Linh, N.X., Thong, D.V., Cholamjiak, P.: Strong convergence of an inertial extragradient method with an adaptive nondecreasing step size for solving variational inequalities. Acta Math. Sci. 42, 795-812 (2022)
  • (13) Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964-979 (1979)
  • (14) Liu, Q.: A convergence theorem of the sequence of Ishikawa iterates for quasi-contractive mappings. J. Math. Anal. Appl. 146, 301-305 (1990)
  • (15) Maingé, P.E.: Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 219, 223-236 (2008)
  • (16) Opial, Z.: Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 73, 591-597 (1967)
  • (17) Padcharoen, A., Kitkuan, D., Kumam, W., Kumam, P.: Tseng methods with inertial for solving inclusion problems and application to image deblurring and image recovery problems. Comput. Math. Methods (2020). https://doi.org/10.1002/cmm4.1088
  • (18) Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72, 383-390 (1979)
  • (19) Raguet, H., Fadili, J., Peyré, G.: A generalized forward-backward splitting. SIAM J. Imaging Sci. 6, 1199-1226 (2013)
  • (20) Shehu, Y., Iyiola, O.S., Li, X.H., et al.: Convergence analysis of projection method for variational inequalities. Comput. Appl. Math. 38, 161 (2019)
  • (21) Shehu, Y., Iyiola, O.S., Reich, S.: A modified inertial subgradient extragradient method for solving variational inequalities. Optim. Eng. 23, 421-449 (2021)
  • (22) Tan, K.K., Xu, H.K.: Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 178, 301-308 (1993)
  • (23) Thong, D.V., Vinh, N.T., Cho, Y.J.: A strong convergence theorem for Tseng's extragradient method for solving variational inequality problems. Optim. Lett. 14, 1157-1175 (2020)
  • (24) Thong, D.V., Yang, J., Cho, Y.J., Rassias, T.M.: Explicit extragradient-like method with adaptive stepsizes for pseudomonotone variational inequalities. Optim. Lett. 15, 2181-2199 (2021)
  • (25) Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431-446 (2000)
  • (26) Van Hieu, D., Anh, P.K., Muu, L.D.: Modified forward-backward splitting method for variational inclusions. 4OR-Q J. Oper. Res. 19, 127-151 (2021)
  • (27) Wang, Z.B., Chen, X., Jiang, Y., et al.: Inertial projection and contraction algorithms with larger step sizes for solving quasimonotone variational inequalities. J. Glob. Optim. 82, 499-522 (2022)
  • (28) Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240-256 (2002)
  • (29) Yao, Y., Iyiola, O.S., Shehu, Y.: Subgradient extragradient method with double inertial steps for variational inequalities. J. Sci. Comput. 90, 71 (2022)