
Edge adaptive hybrid regularization model for image deblurring

Tingting Zhang1, Jie Chen1, Caiying Wu1, Zhifei He1, Tieyong Zeng2 and Qiyu Jin1
1 School of Mathematical Sciences, Inner Mongolia University, Hohhot, China
2 Department of Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong, China
Abstract

Parameter selection is crucial to regularization-based image restoration methods. Generally speaking, a spatially fixed regularization parameter over the whole image cannot perform well for both edge and smooth areas: a larger regularization parameter suppresses noise better in smooth areas but blurs edge regions, while a smaller one sharpens edges but leaves residual noise. In this paper, an automated spatially adaptive regularization model, which combines the harmonic and TV models, is proposed for the reconstruction of noisy and blurred images. The proposed method detects the edges and then spatially adjusts the parameters of the Tikhonov and TV regularization terms for each pixel according to the edge information. The edge information matrix is also updated dynamically during the iterations. Computationally, the newly established model is convex, so it can be solved by the semi-proximal alternating direction method of multipliers (sPADMM) with a linear convergence rate. Numerical simulation results demonstrate that the proposed model effectively preserves image edges while eliminating noise and blur. In comparison with state-of-the-art algorithms, it performs better in terms of PSNR, SSIM and visual quality.


Keywords: edge information matrix, image deblurring, semi-proximal alternating direction method of multipliers

1 Introduction

Images are frequently degraded by noise and blurring during the image acquisition and transmission procedures [23, 36]. As a result, image denoising and deblurring are two fundamental steps for various image processing tasks, such as image segmentation, edge detection, and pattern recognition. Let $\mathbf{u}\in\mathbb{R}^{M\times N}$ be the unknown clear image of size $M\times N$, and $\mathbf{f}\in\mathbb{R}^{M\times N}$ be the observed degraded image with Gaussian white noise. Then the mathematical model for image degradation [1, 3, 21, 25] is formulated as follows,

$$\mathbf{f}=\mathbf{A}\otimes\mathbf{u}+\bm{\varepsilon}, \qquad (1.1)$$

where $\mathbf{A}$ is the known blurring operator and $\mathbf{A}\otimes\mathbf{u}$ is the convolution of $\mathbf{A}$ with $\mathbf{u}$,

$$[\mathbf{A}\otimes\mathbf{u}](i,j)=\sum_{(s,t)\in\Omega}\mathbf{A}[(i,j)-(s,t)]\times\mathbf{u}(s,t).$$

Moreover, $\bm{\varepsilon}\in\mathbb{R}^{M\times N}$ represents Gaussian white noise with mean 0 and standard deviation $\sigma$. Let $\mathcal{F}$ denote the vector space of 2D images defined on $\Omega=\{1,2,\cdots,M\}\times\{1,2,\cdots,N\}$. For each 2D image $\mathbf{u}$, the total number of pixels is $M\times N$, and $\mathbf{u}(i,j)$ denotes the image value at pixel $(i,j)\in\Omega$. Restoring the unknown image $\mathbf{u}$ from the degraded image $\mathbf{f}$ is a typical ill-posed problem. Hence, effective image deblurring methods usually rely on delicately designed regularizations based on prior information on $\mathbf{u}$.
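For concreteness, the degradation model (1.1) can be simulated in a few lines. The following is a minimal NumPy sketch (ours, for illustration only, not the released code), assuming periodic boundary conditions so that the convolution can be realized by FFT; the function name `degrade` and the kernel-centering convention are assumptions of the sketch.

```python
import numpy as np

def degrade(u, A, sigma, seed=0):
    """Sketch of f = A ⊗ u + ε from (1.1), assuming periodic boundaries."""
    M, N = u.shape
    m, n = A.shape
    # Embed the kernel in an M×N array and shift its center to (0, 0), so
    # pointwise multiplication in the Fourier domain realizes the convolution.
    pad = np.zeros((M, N))
    pad[:m, :n] = A
    pad = np.roll(pad, (-(m // 2), -(n // 2)), axis=(0, 1))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(pad) * np.fft.fft2(u)))
    # Add Gaussian white noise with mean 0 and standard deviation sigma.
    rng = np.random.default_rng(seed)
    return blurred + sigma * rng.standard_normal((M, N))
```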

To preserve significant edges, Rudin, Osher, and Fatemi [38] proposed the celebrated total variation (TV) model for image restoration. By this approach, the recovery of the image $\mathbf{u}$ is based on solving the following minimization problem

$$\min_{\mathbf{u}}\ \frac{\mu}{2}\|\mathbf{A}\otimes\mathbf{u}-\mathbf{f}\|_{2}^{2}+\|\nabla\mathbf{u}\|_{1}, \qquad (1.2)$$

where $\nabla\mathbf{u}=[\nabla_{x}\mathbf{u},\nabla_{y}\mathbf{u}]\in\mathbb{R}^{2M\times N}$ is the gradient in the discrete setting with $\nabla\mathbf{u}(i,j)=(\nabla_{x}\mathbf{u}(i,j),\nabla_{y}\mathbf{u}(i,j))$, and $\nabla_{x}\mathbf{u}$ and $\nabla_{y}\mathbf{u}$ denote the horizontal and vertical first-order differences with Neumann boundary condition (the difference is set to zero at the last row and column), defined as follows

$$\nabla_{x}\mathbf{u}(i,j)=\begin{cases}\mathbf{u}(i+1,j)-\mathbf{u}(i,j), & i<M;\\ 0, & i=M,\end{cases}\qquad \nabla_{y}\mathbf{u}(i,j)=\begin{cases}\mathbf{u}(i,j+1)-\mathbf{u}(i,j), & j<N;\\ 0, & j=N,\end{cases} \qquad (1.3)$$

and $\|\nabla\mathbf{u}\|_{1}$ is the $l_{1}$-norm of $\nabla\mathbf{u}$,

$$\|\nabla\mathbf{u}\|_{1}=\sum_{(i,j)\in\Omega}|\nabla\mathbf{u}(i,j)|,\qquad |\nabla\mathbf{u}(i,j)|=\sqrt{(\nabla_{x}\mathbf{u})^{2}(i,j)+(\nabla_{y}\mathbf{u})^{2}(i,j)}. \qquad (1.4)$$
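The discrete gradient (1.3) and the isotropic TV semi-norm (1.4) translate directly into code. Below is a small NumPy sketch (illustrative only); the helper names `grad` and `tv_norm` are ours and are reused in later sketches.

```python
import numpy as np

def grad(u):
    """Forward differences (1.3); the last row/column difference is zero."""
    gx = np.zeros_like(u, dtype=float)
    gy = np.zeros_like(u, dtype=float)
    gx[:-1, :] = u[1:, :] - u[:-1, :]   # horizontal difference ∇_x u
    gy[:, :-1] = u[:, 1:] - u[:, :-1]   # vertical difference ∇_y u
    return gx, gy

def tv_norm(u):
    """Isotropic TV semi-norm (1.4)."""
    gx, gy = grad(u)
    return np.sum(np.sqrt(gx ** 2 + gy ** 2))
```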

As the total variation term is powerful for preserving sharp edges, the TV model has been widely used in image processing. Over the past years, much effort has been devoted to studying, solving and extending the TV model. In particular, a two-phase approach for deblurring images [6, 7] was proposed to handle impulsive noise. More related models and algorithms have been reported in [9, 10, 11, 12, 16, 20, 24, 33, 35, 37, 34, 40, 43, 44, 45, 46].

However, image deblurring with the TV model often leads to undesirable staircase effects, namely the transformation of smooth regions into piecewise constant ones. There are at least two possible ways to handle this issue. One is to use tight-frame approaches [9, 10]. The other is to combine the TV regularization term with more advanced models. For instance, the harmonic model [39] with Tikhonov regularization and the fourth-order PDE (LLT) filter [32] can effectively reduce noise. Therefore, hybrid regularization models combining the TV model with LLT models [26, 29, 32, 37] or harmonic models [19, 30] have been proposed to preserve edges while restraining the staircase effects at the same time. Furthermore, Liu et al. [28] combine an image sharpening operator with framelet regularization [8] for image deblurring; their model is expressed as

$$\min_{\mathbf{u}}\ \frac{\mu}{2}\|\mathbf{A}\otimes\mathbf{u}-\mathbf{f}\|_{2}^{2}+\|\mathbf{W}\mathbf{u}\|_{1}+\alpha\|\mathbf{u}-B(\mathbf{f})\|_{1}, \qquad (1.5)$$

where $\mathbf{W}$ is the framelet transformation, $B(\mathbf{f})\in\mathbb{R}^{M\times N}$ is a sharpened image, and $\mu,\alpha$ are positive parameters. In the optimization problem (1.2), the positive parameter $\mu$ controls the trade-off between a good fit to $\mathbf{f}$ and the smoothness imposed by the total variation regularization. In general, an image is composed of multiple objects at different scales, which suggests that different values of $\mu$ are desirable for image features of different scales in order to obtain better results. More specifically, for texture regions we can use a larger $\mu$, as it leads to less smoothed and more detailed restoration; for flat regions, a smaller $\mu$ is desired to obtain a smoother, noise-reduced result. For this reason, multi-scale total variation (MTV) models with spatially varying parameters based on (1.2) were proposed in [2, 4]. The corresponding multi-scale version of model (1.2) is represented as

$$\min_{\mathbf{u}}\ \frac{1}{2}\|\mathbf{A}\otimes\mathbf{u}-\mathbf{f}\|_{2,\bm{\mu}}^{2}+\|\nabla\mathbf{u}\|_{1}, \qquad (1.6)$$

where $\bm{\mu}\in\mathbb{R}^{M\times N}$ is a spatially varying non-negative parameter matrix and $\|\mathbf{A}\otimes\mathbf{u}-\mathbf{f}\|_{2,\bm{\mu}}^{2}=\langle\mathbf{A}\otimes\mathbf{u}-\mathbf{f},\,\bm{\mu}.*(\mathbf{A}\otimes\mathbf{u}-\mathbf{f})\rangle$. As in MATLAB, $\bm{\mu}.*(\mathbf{A}\otimes\mathbf{u}-\mathbf{f})$ denotes the point-wise product between the elements of $\bm{\mu}$ and $\mathbf{A}\otimes\mathbf{u}-\mathbf{f}$:

$$[\bm{\mu}.*(\mathbf{A}\otimes\mathbf{u}-\mathbf{f})](i,j)=\bm{\mu}(i,j)\times[\mathbf{A}\otimes\mathbf{u}-\mathbf{f}](i,j). \qquad (1.7)$$

Borrowing the idea of (1.6), the multi-scale technique can be applied to the TV term, which yields

$$\min_{\mathbf{u}}\ \frac{\mu}{2}\|\mathbf{A}\otimes\mathbf{u}-\mathbf{f}\|_{2}^{2}+\|\nabla\mathbf{u}\|_{1,\bm{\alpha}}, \qquad (1.8)$$

where $\bm{\alpha}\in\mathbb{R}^{M\times N}$ is a non-negative spatially varying parameter matrix and $\|\nabla\mathbf{u}\|_{1,\bm{\alpha}}$ is given by

$$\|\nabla\mathbf{u}\|_{1,\bm{\alpha}}=\sum_{(i,j)\in\Omega}\bm{\alpha}(i,j)\,|\nabla\mathbf{u}(i,j)|. \qquad (1.9)$$
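Computationally, the weighted semi-norm (1.9) differs from (1.4) only by the pixel-wise weight map. A one-function sketch, reusing the hypothetical `grad` helper from the earlier snippet:

```python
import numpy as np

def weighted_tv_norm(u, alpha):
    """Spatially weighted TV semi-norm (1.9); alpha is an (M, N) weight map."""
    gx, gy = grad(u)
    return np.sum(alpha * np.sqrt(gx ** 2 + gy ** 2))
```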

Introducing local parameters gives more flexibility to the model in exchange for robustness. For this reason, Dong et al. [17] proposed a method to decide whether to use a spatially varying parameter, using a confidence interval technique based on the expected maximal local variance estimate. In this paper, we study the properties of image edges and propose an edge adaptive hybrid regularization model by introducing an edge detection operation into a combination of the TV and harmonic models. The edge detection operation decides, in a robust way, where the edges are located in the noisy image $\mathbf{f}$ and generates an edge information matrix. This edge information matrix is later applied to decide the acceptance or rejection of a scaling parameter. It is worth mentioning that our edge information matrix is obtained and updated fully automatically in each iteration to preserve edges. Furthermore, the proposed model is convex once the local parameters are fixed, and hence it can be solved efficiently by the semi-proximal alternating direction method of multipliers (sPADMM) [18, 27, 22], which has a linear convergence rate under mild conditions. The contributions of this work are summarized as follows:

  1. We propose an algorithm which automatically detects image edges and adjusts the parameters of the Tikhonov and TV regularization terms for each pixel according to the edge information. This enables the proposed algorithm to effectively remove noise in smooth areas and sharpen the edges during deblurring;

  2. We build a convex optimization model which is easily solved by sPADMM with a linear convergence rate;

  3. We conduct extensive experiments to show that the proposed method outperforms state-of-the-art methods in restoring noisy and blurred images.

The remainder of the paper is organized as follows. In Section 2, we present our model which describes how to adjust the parameters of TV and Tikhonov regularization terms according to edge information. The simulation results are exhibited and analyzed in Section 3, followed by concluding remarks in Section 4.

2 Edge adaptive hybrid regularization model for image deblurring

In this section, we explain how the proposed model flexibly balances the relationship between edge and smooth areas and achieves the goals of retaining edges and eliminating the staircase effect simultaneously. The sPADMM is used to solve the proposed model efficiently.

2.1 Model Establishment

As mentioned above, the harmonic model is effective in suppressing noise in smooth areas of the image; however, it inevitably blurs the edges. On the other hand, total variation-based methods protect image edges efficiently but still produce staircase artifacts in smooth regions. In this paper, we combine these two models to make up for their respective shortcomings. Note that a simple combination is not discriminative enough for image restoration: a fixed parameter in the regularization term operates uniformly on the whole image, while edges and smooth areas have opposite demands on regularization. Specifically, a larger regularization parameter helps suppress noise in smooth areas but has the side effect of blurring edges, whereas a smaller one retains edge information but brings a staircase effect to smooth areas. Accordingly, we introduce two edge adaptive parameters for the TV and Tikhonov regularization terms, respectively, which treat edges and smooth areas flexibly and hence improve the denoising performance in a reasonable way. On this basis, a new edge adaptive hybrid regularization (EAHR) model is proposed. The algorithm first detects the edges of the image to build an edge information matrix; it then adjusts the parameters of the TV and Tikhonov terms according to the local information at each pixel. Overall, the EAHR model for image restoration we consider is given as follows

$$\min_{\mathbf{u}}\ \|\nabla\mathbf{u}\|_{1,\bm{\alpha}_{1}}+\frac{1}{2}\|\nabla\mathbf{u}\|_{2,\bm{\alpha}_{2}}^{2}+\frac{\mu}{2}\|\mathbf{A}\otimes\mathbf{u}-\mathbf{f}\|_{2}^{2}, \qquad (2.1)$$

where $\bm{\alpha}_{1},\bm{\alpha}_{2}\in\mathbb{R}^{M\times N}$ are spatially varying non-negative parameter matrices, $\|\nabla\mathbf{u}\|_{1,\bm{\alpha}_{1}}$ is given by (1.9) with $\bm{\alpha}_{1}$ in place of $\bm{\alpha}$, and $\|\nabla\mathbf{u}\|_{2,\bm{\alpha}_{2}}$ is given by

$$\|\nabla\mathbf{u}\|_{2,\bm{\alpha}_{2}}=\sqrt{\sum_{(i,j)\in\Omega}\bm{\alpha}_{2}(i,j)\,|\nabla\mathbf{u}(i,j)|^{2}}. \qquad (2.2)$$

In the proposed model, the edge adaptive parameters $\bm{\alpha}_{1}$ and $\bm{\alpha}_{2}$ for the TV and Tikhonov regularization terms play key roles in the final recovered images. As explained in [30], the edge detection operator is a function of $|\nabla\mathbf{u}|$ for detecting the image edges. Following it, we define the edge information matrix $\mathbf{E}$ as

$$\mathbf{E}(i,j)=\frac{1}{1+|[\mathbf{G}\otimes\nabla\mathbf{u}](i,j)|^{2}}, \qquad (2.3)$$

where $\mathbf{G}$ is a Gaussian kernel, and $\mathbf{G}\otimes\nabla\mathbf{u}$ is the convolution of $\mathbf{G}$ with $\nabla\mathbf{u}$ as follows

$$[\mathbf{G}\otimes\nabla\mathbf{u}](i,j)=\sum_{(s,t)\in\Omega}\mathbf{G}[(i,j)-(s,t)]\times\nabla\mathbf{u}(s,t).$$

In order to improve the robustness of the edge adaptive algorithm, we binarize the edge adaptive parameters according to the edge information matrix $\mathbf{E}$. The spatially varying non-negative parameter matrices $\bm{\alpha}_{1}$ and $\bm{\alpha}_{2}$ in the model (2.1) are defined by

$$\bm{\alpha}_{1}(i,j)=\begin{cases}\alpha_{1}, & \mathbf{E}(i,j)<\tau;\\ \theta_{1}\alpha_{1}, & \mathbf{E}(i,j)\geq\tau,\end{cases} \qquad (2.4)$$

and

$$\bm{\alpha}_{2}(i,j)=\begin{cases}\alpha_{2}, & \mathbf{E}(i,j)<\tau;\\ \theta_{2}\alpha_{2}, & \mathbf{E}(i,j)\geq\tau,\end{cases} \qquad (2.5)$$

where $\alpha_{1}$ and $\alpha_{2}$ are positive parameters, $\theta_{1},\theta_{2}\in(0,1)$ are scaling parameters, and $\tau>0$ is a threshold. With $\bm{\alpha}_{1},\bm{\alpha}_{2}\in\mathbb{R}^{M\times N}$ fixed, the proposed model is convex. Therefore, it can be solved efficiently by sPADMM.
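The edge detection and binarization steps (2.3)-(2.5) can be sketched as follows. This is an illustrative NumPy/SciPy snippet, not the authors' released code: the Gaussian smoothing is applied component-wise to the forward differences via `scipy.ndimage.gaussian_filter`, and the kernel width `g_sigma` is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_adaptive_weights(u, alpha1, alpha2, theta1, theta2, tau, g_sigma=1.0):
    """Edge information matrix (2.3) and binarized weight maps (2.4)-(2.5)."""
    gx, gy = grad(u)                      # forward differences, earlier sketch
    sx = gaussian_filter(gx, g_sigma)     # G ⊗ ∇_x u
    sy = gaussian_filter(gy, g_sigma)     # G ⊗ ∇_y u
    E = 1.0 / (1.0 + sx ** 2 + sy ** 2)   # (2.3): E ≈ 1 where the image is smooth
    smooth = E >= tau
    a1 = np.where(smooth, theta1 * alpha1, alpha1)   # (2.4)
    a2 = np.where(smooth, theta2 * alpha2, alpha2)   # (2.5)
    return E, a1, a2
```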

The proposed model (2.1) balances the relationship between edge and smooth areas well by adapting the two parameters based on the edge information. With the proposed weights, it achieves the goals of retaining edges and eliminating the staircase effect simultaneously. Moreover, while TV-based models sometimes erroneously treat noise in smooth areas as image edges, the proposed model can effectively eliminate these misjudged pixels through the edge detection process.

2.2 Algorithm

As mentioned, our proposed model can be solved efficiently by sPADMM. To apply this algorithm, we first introduce an auxiliary variable $\mathbf{k}$ to take the place of $\nabla\mathbf{u}$ in both terms $\|\nabla\mathbf{u}\|_{1,\bm{\alpha}_{1}}$ and $\|\nabla\mathbf{u}\|_{2,\bm{\alpha}_{2}}^{2}$; then model (2.1) can be reformulated as the following constrained optimization problem

$$\min_{\mathbf{u},\mathbf{k}}\ \|\mathbf{k}\|_{1,\bm{\alpha}_{1}}+\frac{1}{2}\|\mathbf{k}\|_{2,\bm{\alpha}_{2}}^{2}+\frac{\mu}{2}\|\mathbf{A}\otimes\mathbf{u}-\mathbf{f}\|_{2}^{2},\quad\mathrm{s.t.}\quad\mathbf{k}=\nabla\mathbf{u}. \qquad (2.6)$$

The augmented Lagrangian function of problem (2.6) is defined as

$$L(\mathbf{u},\mathbf{k};\bm{\lambda})=\|\mathbf{k}\|_{1,\bm{\alpha}_{1}}+\frac{1}{2}\|\mathbf{k}\|_{2,\bm{\alpha}_{2}}^{2}+\frac{\mu}{2}\|\mathbf{A}\otimes\mathbf{u}-\mathbf{f}\|_{2}^{2}+\langle\bm{\lambda},\mathbf{k}-\nabla\mathbf{u}\rangle+\frac{\beta}{2}\|\mathbf{k}-\nabla\mathbf{u}\|^{2}, \qquad (2.7)$$

where $\bm{\lambda}=[\bm{\lambda}_{1},\bm{\lambda}_{2}]\in\mathbb{R}^{2M\times N}$ is the Lagrange multiplier matrix with $\bm{\lambda}(i,j)=(\bm{\lambda}_{1}(i,j),\bm{\lambda}_{2}(i,j))$, and $\beta>0$ is the penalty parameter. In each iteration of sPADMM, we minimize $L$ with respect to $\mathbf{u}$ for fixed $\mathbf{k}$ and then minimize $L$ with respect to $\mathbf{k}$ for fixed $\mathbf{u}$. After these two steps, we update the multiplier $\bm{\lambda}$. Hence, the solution of problem (2.7) is approached by iterating the following three equations:

$$\mathbf{u}^{n}=\mathop{\arg\min}_{\mathbf{u}}\ L(\mathbf{u},\mathbf{k}^{n-1};\bm{\lambda}^{n-1})+\frac{1}{2}\|\mathbf{u}-\mathbf{u}^{n-1}\|_{\mathbf{S}_{1}}^{2}, \qquad (2.8)$$
$$\mathbf{k}^{n}=\mathop{\arg\min}_{\mathbf{k}}\ L(\mathbf{u}^{n},\mathbf{k};\bm{\lambda}^{n-1})+\frac{1}{2}\|\mathbf{k}-\mathbf{k}^{n-1}\|_{\mathbf{S}_{2}}^{2}, \qquad (2.9)$$
$$\bm{\lambda}^{n}=\bm{\lambda}^{n-1}+\eta\beta(\mathbf{k}^{n}-\nabla\mathbf{u}^{n}). \qquad (2.10)$$

Here, $\mathbf{S}$ denotes a self-adjoint positive semidefinite linear operator, $\mathbf{S}\mathbf{x}$ is its action on $\mathbf{x}\in\mathbb{R}^{M\times N}$, and $\|\mathbf{x}\|_{\mathbf{S}}=\sqrt{\langle\mathbf{x},\mathbf{S}\mathbf{x}\rangle}$ denotes the corresponding semi-norm. If we take $\mathbf{S}_{1}=0$ and $\mathbf{S}_{2}=0$, sPADMM reduces to the alternating direction method of multipliers (ADMM) [5]. In our algorithm, $\mathbf{S}_{1}$ and $\mathbf{S}_{2}$ are positive definite. The sPADMM for our EAHR model is given by Algorithm 1.

Input: noisy and blurred image $\mathbf{f}$, noise standard deviation $\sigma$, blurring kernel $\mathbf{A}$, Gaussian kernel $\mathbf{G}$
Initialize: 1) $\mathbf{S}_{1},\mathbf{S}_{2},\alpha_{1},\alpha_{2},\theta_{1},\theta_{2},\tau,\mu,\beta,tol$, $MaxIter$;
     2) BM3D [14, 13] preprocessing of the blurred image $\mathbf{f}$;
     3) $\mathbf{u}^{0}=0$; $\mathbf{k}^{0}=0$; $\bm{\lambda}^{0}=0$.
for $n=1:MaxIter$ do
       update $\mathbf{u}^{n}=\mathop{\arg\min}_{\mathbf{u}}L(\mathbf{u},\mathbf{k}^{n-1};\bm{\lambda}^{n-1})+\frac{1}{2}\|\mathbf{u}-\mathbf{u}^{n-1}\|_{\mathbf{S}_{1}}^{2}$;
       update $\mathbf{k}^{n}=\mathop{\arg\min}_{\mathbf{k}}L(\mathbf{u}^{n},\mathbf{k};\bm{\lambda}^{n-1})+\frac{1}{2}\|\mathbf{k}-\mathbf{k}^{n-1}\|_{\mathbf{S}_{2}}^{2}$;
       update $\bm{\lambda}^{n}=\bm{\lambda}^{n-1}+\eta\beta(\mathbf{k}^{n}-\nabla\mathbf{u}^{n})$;
       update $\bm{\alpha}_{1}$ and $\bm{\alpha}_{2}$ by $\bm{\alpha}_{m}(i,j)=\alpha_{m}$ if $\mathbf{E}(i,j)<\tau$ and $\bm{\alpha}_{m}(i,j)=\theta_{m}\alpha_{m}$ if $\mathbf{E}(i,j)\geq\tau$, for $m=1,2$;
      if $\min\left\{\frac{\|\mathbf{u}^{n}-\mathbf{u}^{n-1}\|_{2}}{\|\mathbf{f}\|_{2}},\frac{\|\mathbf{k}^{n}-\nabla\mathbf{u}^{n}\|_{2}}{\|\nabla\mathbf{f}\|_{2}}\right\}\leq tol$, then
             break
       end if
end for
Output: restored image $\mathbf{u}^{n}$
Algorithm 1: sPADMM for the EAHR model

In each iteration of sPADMM, the subproblems (2.8) and (2.9) need to be solved. The $\mathbf{u}$-subproblem is written as

$$\min_{\mathbf{u}}\ \bigg\{\frac{\mu}{2}\|\mathbf{A}\otimes\mathbf{u}-\mathbf{f}\|_{2}^{2}+\langle\bm{\lambda}^{n-1},\mathbf{k}^{n-1}-\nabla\mathbf{u}\rangle+\frac{\beta}{2}\|\mathbf{k}^{n-1}-\nabla\mathbf{u}\|^{2}+\frac{1}{2}\|\mathbf{u}-\mathbf{u}^{n-1}\|_{\mathbf{S}_{1}}^{2}\bigg\}. \qquad (2.11)$$

In our experiments, we choose the self-adjoint positive definite linear operator $\mathbf{S}_{1}=\rho_{1}\mathbf{I}$, where $\mathbf{I}$ is the identity operator and $\rho_{1}>0$. Note that the subproblem (2.11) is a quadratic optimization problem with optimality condition

$$\mu\mathbf{A}^{T}(\mathbf{A}\otimes\mathbf{u}-\mathbf{f})-\nabla^{T}\bm{\lambda}^{n-1}-\beta\nabla^{T}\mathbf{k}^{n-1}+\beta\Delta\mathbf{u}+\rho_{1}(\mathbf{u}-\mathbf{u}^{n-1})=0,$$

which is solved by Fourier transform  [40] as follows

$$\mathbf{u}=\mathcal{F}^{-1}\left\{\frac{\mu\,\mathcal{F}(\mathbf{A}^{T}).*\mathcal{F}(\mathbf{f})+\mathcal{F}\big(\nabla^{T}(\bm{\lambda}^{n-1}+\beta\mathbf{k}^{n-1})\big)+\mathcal{F}(\rho_{1}\mathbf{u}^{n-1})}{\mu\,\mathcal{F}(\mathbf{A}^{T}).*\mathcal{F}(\mathbf{A})+\mathcal{F}(\beta\Delta+\rho_{1}\mathbf{I})}\right\}. \qquad (2.12)$$
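A sketch of this FFT-based u-update is given below, assuming periodic boundary conditions so that all operators diagonalize in the Fourier domain. Here `A_hat`, `Dx_hat` and `Dy_hat` are assumed to be precomputed FFTs of the blur kernel and the two difference stencils, and the adjoints are realized by complex conjugation; all names are ours.

```python
import numpy as np

def u_step(f, k_prev, lam, u_prev, A_hat, Dx_hat, Dy_hat, mu, beta, rho1):
    """Illustrative solve of (2.12); k_prev and lam are (M, N, 2) arrays."""
    v = lam + beta * k_prev                            # λ^{n-1} + β k^{n-1}
    rhs = (mu * np.conj(A_hat) * np.fft.fft2(f)        # μ F(Aᵀ).*F(f)
           + np.conj(Dx_hat) * np.fft.fft2(v[..., 0])  # F(∇ᵀ(λ+βk)), x part
           + np.conj(Dy_hat) * np.fft.fft2(v[..., 1])  # ... and y part
           + rho1 * np.fft.fft2(u_prev))
    denom = (mu * np.conj(A_hat) * A_hat
             + beta * (np.conj(Dx_hat) * Dx_hat + np.conj(Dy_hat) * Dy_hat)
             + rho1)
    return np.real(np.fft.ifft2(rhs / denom))
```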

Next, we analyze the nonsmooth subproblem (2.9) in detail to find its global minimizer.

2.3 Solving subproblem (2.9)

In order to solve subproblem (2.9), we first introduce the following proposition.

Proposition 2.1. For any $\alpha,\beta>0$ and $z,t\in\mathbb{R}^{2}$, the minimizer of

$$\min_{z\in\mathbb{R}^{2}}\left\{f(z)=\alpha|z|+\frac{\beta}{2}|z-t|^{2}\right\} \qquad (2.13)$$

is given by

$$z^{*}=\left(1-\frac{\alpha}{\beta|t|}\right)_{+}t, \qquad (2.14)$$

where $(a)_{+}=\max(a,0)$, $\frac{\mathbf{0}}{0}=\mathbf{0}$ and $\mathbf{0}=(0,0)$.

Proof: If $t\neq\mathbf{0}$, we have

$$\frac{\partial f}{\partial z}=\alpha\frac{z}{|z|}+\beta(z-t).$$

Setting $\frac{\partial f}{\partial z}=0$, we get

$$t=z+\frac{\alpha}{\beta}\frac{z}{|z|}. \qquad (2.15)$$

The equation (2.15) implies that

$$|z|=\begin{cases}|t|-\frac{\alpha}{\beta}, & |t|-\frac{\alpha}{\beta}\geq 0;\\ 0, & |t|-\frac{\alpha}{\beta}<0.\end{cases} \qquad (2.16)$$

It follows from (2.16)  and  (2.15) that

$$z=\left(1-\frac{\alpha}{\beta|t|}\right)_{+}t.$$

Then equation (2.14) holds when $t\neq\mathbf{0}$. Equation (2.14) also holds when $t=\mathbf{0}$. Therefore, Proposition 2.1 is proved.
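Proposition 2.1 is the classical two-dimensional soft-thresholding (shrinkage) operator. A vectorized sketch, where the convention $\frac{\mathbf{0}}{0}=\mathbf{0}$ is handled by a small floor on $|t|$ (an implementation detail we assume, not taken from the paper):

```python
import numpy as np

def shrink2(t, alpha, beta):
    """Minimizer (2.14): z* = (1 - α/(β|t|))_+ t, applied pixel-wise.

    t carries the two vector components on its last axis; alpha and beta
    may be scalars or broadcastable maps.
    """
    norm = np.linalg.norm(t, axis=-1, keepdims=True)
    scale = np.maximum(1.0 - alpha / (beta * np.maximum(norm, 1e-12)), 0.0)
    return scale * t   # scale is 0 wherever |t| ≈ 0, matching 0/0 = 0
```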

The subproblem (2.9) is written as

$$\min_{\mathbf{k}}\ \bigg\{\|\mathbf{k}\|_{1,\bm{\alpha}_{1}}+\frac{1}{2}\|\mathbf{k}\|_{2,\bm{\alpha}_{2}}^{2}+\frac{\mu}{2}\|\mathbf{A}\otimes\mathbf{u}^{n}-\mathbf{f}\|_{2}^{2}+\langle\bm{\lambda}^{n-1},\mathbf{k}-\nabla\mathbf{u}^{n}\rangle+\frac{\beta}{2}\|\mathbf{k}-\nabla\mathbf{u}^{n}\|^{2}+\frac{1}{2}\|\mathbf{k}-\mathbf{k}^{n-1}\|_{\mathbf{S}_{2}}^{2}\bigg\}. \qquad (2.17)$$

The problem above is equivalently expressed as

$$\min_{\mathbf{k}(i,j)}\ \bigg\{\bm{\alpha}_{1}(i,j)|\mathbf{k}(i,j)|+\bm{\alpha}_{2}(i,j)|\mathbf{k}(i,j)|^{2}+\langle\bm{\lambda}^{n-1}(i,j),[\mathbf{k}-\nabla\mathbf{u}^{n}](i,j)\rangle+\frac{\beta}{2}\big|[\mathbf{k}-\nabla\mathbf{u}^{n}](i,j)\big|^{2}+\frac{1}{2}\|\mathbf{k}(i,j)-\mathbf{k}^{n-1}(i,j)\|_{\mathbf{S}_{2}}^{2}\bigg\}.$$

In our experiments, we use the self-adjoint positive definite linear operator $\mathbf{S}_{2}=\rho_{2}\mathbf{I}$, where $\mathbf{I}$ is the identity operator and $\rho_{2}>0$. Letting $z=\mathbf{k}(i,j)$, $q=\nabla\mathbf{u}^{n}(i,j)$, $p_{1}=\bm{\alpha}_{1}(i,j)$, $p_{2}=\bm{\alpha}_{2}(i,j)$ and $\widetilde{\lambda}=\bm{\lambda}^{n-1}(i,j)$, we rewrite the problem above as follows

$$\min_{z}\ \bigg\{p_{1}|z|+p_{2}|z|^{2}+\langle\widetilde{\lambda},z-q\rangle+\frac{\beta}{2}|z-q|^{2}+\frac{\rho_{2}}{2}|z-z^{n-1}|^{2}\bigg\}, \qquad (2.18)$$

where $|z|=\sqrt{z_{1}^{2}+z_{2}^{2}}$. Let

$$t=\frac{\beta}{\beta+2p_{2}+\rho_{2}}\left(q-\frac{\widetilde{\lambda}}{\beta}+\frac{\rho_{2}z^{n-1}}{\beta}\right); \qquad (2.19)$$

then problem (2.18) is equivalent to

$$\min_{z}\left\{p_{1}|z|+\frac{\beta+2p_{2}+\rho_{2}}{2}|z-t|^{2}\right\}. \qquad (2.20)$$

From Proposition  2.1, the optimal solution of this problem is

$$z^{*}=\left(1-\frac{p_{1}}{(\beta+2p_{2}+\rho_{2})|t|}\right)_{+}t. \qquad (2.21)$$

By the definitions of $z^{*}$ and $t$, we have

$$\mathbf{k}^{n}(i,j)=\left(1-\frac{\bm{\alpha}_{1}(i,j)}{(\beta+2\bm{\alpha}_{2}(i,j)+\rho_{2})\,|t(i,j)|}\right)_{+}t(i,j), \qquad (2.22)$$

where

$$t(i,j)=\frac{\beta}{\beta+2\bm{\alpha}_{2}(i,j)+\rho_{2}}\left(\nabla\mathbf{u}^{n}(i,j)-\frac{\bm{\lambda}^{n-1}(i,j)}{\beta}+\frac{\rho_{2}\mathbf{k}^{n-1}(i,j)}{\beta}\right). \qquad (2.23)$$

Then $\mathbf{k}^{n}$ with $\mathbf{k}^{n}(i,j)$ given by (2.22) is the solution of (2.9).
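Putting (2.22)-(2.23) together, the k-update is a pixel-wise shrinkage with spatially varying thresholds. A sketch reusing the hypothetical `shrink2` helper above:

```python
import numpy as np

def k_step(grad_u, k_prev, lam, a1, a2, beta, rho2):
    """Illustrative k-update (2.22)-(2.23); arrays are stacked as (M, N, 2)."""
    denom = (beta + 2.0 * a2 + rho2)[..., None]            # β + 2α₂(i,j) + ρ₂
    t = (beta / denom) * (grad_u - lam / beta + rho2 * k_prev / beta)  # (2.23)
    return shrink2(t, a1[..., None], denom)                # shrinkage (2.22)
```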

2.4 The linear-rate convergence

The pseudo-code of the proposed algorithm is given in Algorithm 1. The linear-rate convergence of the algorithm is characterized in Theorem 2.1.

Theorem 2.1. Let $\{(\mathbf{u}^{n},\mathbf{k}^{n},\bm{\lambda}^{n})\}$ be the sequence generated by the sPADMM algorithm. If $\eta\in(0,\frac{1+\sqrt{5}}{2})$, then the sequence $\{(\mathbf{u}^{n},\mathbf{k}^{n},\bm{\lambda}^{n})\}$ converges to a KKT point of (2.6).

The detailed proof of the theorem can be found in the Appendix.

3 Numerical Simulation

In this section, we present simulation results on twelve test images, including 7 gray images and 5 color images, as shown in Figure 1. We consider three types of blurring functions, Gaussian blur (GB), Motion blur (MB) and Average blur (AB), with different levels of additive Gaussian noise. For instance, the notation GB(9,5)/$\sigma=5$ represents a Gaussian kernel of size $9\times 9$ with free parameter 5, together with additive Gaussian noise of standard deviation $\sigma=5$.

To illustrate the effectiveness of the proposed model, we compare it with the classic TV [38], DCA with $L_{1}-0.5L_{2}$ [31], TRL2 [42], SOCF [28] and BM3D [15]. All experiments are performed under Windows 10 and MATLAB R2018a on a desktop (Intel(R) Core(TM) i5-8250 CPU @ 1.60 GHz). The termination criterion for all experiments is defined as follows

$$\min\left\{\frac{\|\mathbf{u}^{n}-\mathbf{u}^{n-1}\|_{2}}{\|\mathbf{f}\|_{2}},\ \frac{\|\mathbf{k}^{n}-\nabla\mathbf{u}^{n}\|_{2}}{\|\nabla\mathbf{f}\|_{2}}\right\}\leq tol,$$

where $n$ is the number of iterations and the tolerance $tol$ is set to $5\times 10^{-5}$. Quantitatively, we evaluate the quality of image restoration by the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). The SSIM is defined by Wang et al. [41], and the PSNR is calculated by

$$\mathrm{PSNR}=10\log_{10}\frac{255^{2}}{\mathrm{MSE}},\qquad \mathrm{MSE}=\frac{1}{M\times N}\sum_{(i,j)\in\Omega}\big(\mathbf{u}(i,j)-\hat{\mathbf{u}}(i,j)\big)^{2},$$

where $M\times N$ is the image size, $\mathbf{u}$ is the original image and $\hat{\mathbf{u}}$ is the restored image.
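For reference, a short sketch of the PSNR computation above for 8-bit images:

```python
import numpy as np

def psnr(u, u_hat):
    """PSNR between the original u and the restored û, per the formula above."""
    mse = np.mean((u.astype(float) - u_hat.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```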

3.1 Parameter setting

In this section, we introduce all the parameters used in our algorithm. From (2.4) and (2.5), $\theta_{1},\theta_{2}\in(0,1)$ are scaling parameters, and $\tau>0$ is a threshold used to adjust the values of $\bm{\alpha}_{1}$ and $\bm{\alpha}_{2}$. There are three parameters $(\mu,\beta,\tau)$ in Algorithm 1 that need to be tuned. We choose the self-adjoint positive definite linear operators $\mathbf{S}_{1}=\mathbf{S}_{2}=\rho\mathbf{I}$ with $\rho=0.1$, and the maximum number of iterations (denoted by $MaxIter$) is set to 500. Through a multitude of experiments, we choose suitable parameters to improve the quality of restoration of images degraded by blur and noise.

We have run extensive simulations to find the best values of the parameters $\mu$, $\tau$ and $\beta$ for each blurring kernel at different noise levels. As a result, we obtain $\mu$, $\tau$ and $\beta$ as functions of $\sigma\in(0,100]$ for each blurring kernel. Taking Gaussian blur as an example, the parameters $\mu$, $\tau$ and $\beta$ are given as:

$$\mu=1.8\sigma^{2}\times 10^{2}-2.7\sigma\times 10^{3}+1.1\times 10^{4},\qquad \tau=3.4\sigma\times 10^{-3}+0.9,\qquad \beta=-3.7\sigma\times 10^{-4}+1.2\times 10^{-1},$$

where $\sigma\in(0,100]$. When $\sigma=3$, the corresponding values of $\mu$, $\tau$ and $\beta$ are approximately 5000, 0.9 and 0.1, respectively. In the cases of Motion blur and Average blur, we have similar functions for the parameters $\mu$, $\tau$ and $\beta$, which are given in the code.
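Transcribed into code, the Gaussian-blur parameter schedule reads as below (a direct transcription of the three formulas above; the function name is ours):

```python
def gb_params(sigma):
    """μ, τ, β as functions of the noise level σ ∈ (0, 100] (Gaussian blur)."""
    mu = 1.8e2 * sigma ** 2 - 2.7e3 * sigma + 1.1e4
    tau = 3.4e-3 * sigma + 0.9
    beta = -3.7e-4 * sigma + 1.2e-1
    return mu, tau, beta
```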

The parameter settings for the other compared algorithms are given below. The TV model [38] has two parameters $\mu$ and $\lambda$, where $\mu$ balances the data fidelity and regularity terms and $\lambda$ plays an important role in the convergence analysis. In our tests, we let $\mu\in\{200,300,400,700,1500\}$ and $\lambda\in\{10,15,20,50,80,100,200,300\}$. For the DCA model [31], we fix $\lambda=1$ and then choose $\mu$ from $\{50,100,150,200,300,380,700,800,1500\}$. The parameters of the TRL2 model [42] are set as $\alpha\in\{5,10,15,20,30,50,100,150,200\}$, $\tau\in\{0.1,0.2,0.3,0.5\}$ and $\beta\in\{0.1,0.5,1,2,3,5,10\}$. In the SOCF model [28], we take fixed values $\mu_{1}=0.2$, $\mu_{2}=10^{-1}$ and step length $\eta=1.6$, and choose $\mu\in\{10^{-4},10^{-3}\}$ and $\lambda\in\{2,3,4,7,11,13\}$. The BM3D model [15] has two parameters: regularized inversion (RI), the collaborative hard-thresholding for BM3D filtering, and regularized Wiener inversion (RWI) for collaborative Wiener filtering; we set $RI\in\{4\times 10^{-4},4\times 10^{-3},10^{-3}\}$ and $RWI\in\{5\times 10^{-3},3\times 10^{-2}\}$.

Figure 1: Set of 12 test images: Shepp-Logan, Shape150, House, Boat, Pepper, Cameraman, Hill, Plate, Duck, Building, Hats, Car. House, Boat, Pepper, Cameraman and Hill are from the Tampere University of Technology (http://www.cs.tut.fi/~foi/GCF-BD3D/); Plate is from [22]; Duck, Building and Car are from the Berkeley segmentation database; Hats is from the Kodak Image Dataset (http://www.cs.albany.edu/~xypan/research/snr/Kodak.html).

3.2 Experimental results

In order to verify the performance of our proposed model, we have tested the recovery of gray and color images under different blur kernels and different Gaussian noise levels. Tables 1, 2 and 3 report the PSNR and SSIM values of the classic TV [38], DCA with $L_{1}-0.5L_{2}$ [31], TRL2 [42], SOCF [28], BM3D [15] and our method for Gaussian blur GB(9,5)/$\sigma=5$, Motion blur MB(20,60)/$\sigma=5$ and Average blur AB(9,9)/$\sigma=3$. Numbers in bold indicate the best results. Our algorithm gives the best results in most cases. We also report the average PSNR/SSIM values of the restored images in Table 4. Comprehensive evaluations on the image set with three kinds of kernels and four different noise levels demonstrate the superiority of the proposed method compared with other state-of-the-art methods.

Table 1: PSNR and SSIM values of the test images recovered by different models for GB(9,5)/σ=5.
PSNR/SSIM Degraded TV [38] DCA [31] TRL2 [42] SOCF [28] BM3D [15] Ours
Shepp-Logan 18.93/0.578 22.77/0.930 23.29/0.893 23.68/0.873 23.96/0.918 24.56/0.846 25.66/0.941
Shape150 18.46/0.541 25.05/0.859 25.66/0.866 25.62/0.869 25.96/0.910 26.44/0.873 27.74/0.916
House 24.12/0.560 27.90/0.784 28.18/0.800 28.05/0.676 29.10/0.797 29.90/0.783 30.66/0.826
Boat 23.34/0.489 25.03/0.633 25.43/0.655 26.12/0.640 26.45/0.690 26.76/0.700 26.92/0.704
Pepper 21.41/0.573 23.46/0.735 23.69/0.736 24.71/0.705 24.87/0.781 26.42/0.763 25.86/0.793
Cameraman 20.87/0.499 23.10/0.716 23.49/0.734 24.16/0.645 24.24/0.745 24.74/0.737 24.73/0.758
Hill 24.85/0.497 26.86/0.631 27.12/0.653 27.35/0.644 27.55/0.670 28.07/0.688 28.08/0.693
Plate 17.40/0.779 21.64/0.910 21.92/0.916 21.90/0.912 22.71/0.923 24.57/0.942 24.84/0.947
Duck 24.17/0.659 27.00/0.789 27.13/0.804 27.25/0.742 28.11/0.823 28.60/0.830 29.13/0.834
Building 19.54/0.636 20.17/0.701 20.40/0.718 20.80/0.711 20.87/0.739 21.33/0.760 21.32/0.762
Hats 27.04/0.858 28.95/0.937 29.23/0.939 29.10/0.916 29.75/0.941 30.10/0.944 30.22/0.946
Car 23.92/0.755 25.18/0.828 25.70/0.838 26.03/0.819 26.22/0.841 26.74/0.847 26.98/0.853
Average 22.01/0.619 24.76/0.788 25.10/0.796 25.39/0.738 25.82/0.815 26.52/0.809 26.89/0.830
Table 2: PSNR and SSIM values of the test images recovered by different models for MB(20,60)/σ=5.
PSNR/SSIM Degraded TV [38] DCA [31] TRL2 [42] SOCF [28] BM3D [15] Ours
Shepp-Logan 17.46/0.546 24.78/0.942 25.40/0.919 25.11/0.883 25.43/0.870 25.67/0.808 27.87/0.903
Shape150 16.33/0.469 25.78/0.901 30.52/0.926 24.24/0.602 25.16/0.854 26.62/0.865 27.94/0.869
House 21.45/0.503 25.21/0.730 27.93/0.738 25.85/0.653 28.73/0.772 29.24/0.786 29.55/0.793
Boat 22.01/0.456 25.54/0.658 25.96/0.660 25.73/0.652 26.28/0.688 26.63/0.698 26.89/0.703
Pepper 20.06/0.532 23.94/0.763 24.16/0.732 23.44/0.665 24.86/0.772 25.39/0.768 24.79/0.755
Cameraman 19.80/0.478 23.92/0.736 24.50/0.705 24.41/0.612 24.91/0.739 25.25/0.745 25.29/0.746
Hill 23.21/0.451 26.61/0.632 26.88/0.642 26.72/0.639 26.98/0.655 27.42/0.670 27.50/0.675
Plate 15.93/0.730 20.99/0.902 21.55/0.911 21.19/0.903 23.09/0.932 24.11/0.941 24.44/0.953
Duck 22.48/0.623 26.79/0.782 26.81/0.782 26.61/0.683 27.98/0.792 28.03/0.808 28.57/0.797
Building 18.93/0.628 20.87/0.741 20.97/0.749 21.72/0.740 22.30/0.796 22.86/0.816 22.92/0.806
Hats 25.44/0.840 28.74/0.934 28.78/0.935 28.31/0.885 29.29/0.927 29.22/0.930 29.62/0.931
Car 22.05/0.737 24.97/0.830 25.57/0.838 25.56/0.803 25.82/0.841 26.49/0.843 26.89/0.844
Average 20.43/0.583 24.85/0.796 25.75/0.795 24.91/0.710 25.90/0.803 26.41/0.807 26.84/0.815
Table 3: PSNR and SSIM values of the test images recovered by different models for AB(9,9)/σ=3.
PSNR/SSIM Degraded TV [38] DCA [31] TRL2 [42] SOCF [28] BM3D [15] Ours
Shepp-Logan 18.65/0.708 24.39/0.947 25.82/0.926 24.94/0.853 24.73/0.938 25.71/0.884 27.35/0.951
Shape150 18.16/0.612 26.25/0.900 28.48/0.933 27.97/0.946 27.46/0.941 28.01/0.928 28.97/0.918
House 23.96/0.618 29.09/0.807 29.44/0.823 29.61/0.768 30.45/0.825 31.47/0.836 31.60/0.837
Boat 23.23/0.521 25.82/0.662 26.59/0.703 27.12/0.708 27.23/0.721 28.06/0.751 28.14/0.751
Pepper 21.25/0.613 24.21/0.763 25.45/0.781 25.78/0.777 27.02/0.823 27.75/0.806 27.89/0.820
Cameraman 20.70/0.561 23.87/0.743 24.47/0.766 24.78/0.642 24.66/0.704 25.81/0.781 25.98/0.792
Hill 24.85/0.528 27.46/0.661 28.03/0.695 28.39/0.709 28.43/0.708 29.03/0.731 28.96/0.736
Plate 17.08/0.766 22.98/0.930 23.86/0.941 23.54/0.935 25.04/0.950 26.19/0.958 26.80/0.963
Duck 24.12/0.709 28.43/0.821 28.59/0.837 29.21/0.845 29.34/0.850 29.91/0.863 30.62/0.863
Building 19.47/0.650 20.70/0.734 20.87/0.750 21.02/0.749 21.48/0.778 22.06/0.794 22.00/0.794
Hats 27.38/0.897 29.83/0.945 29.86/0.945 30.02/0.935 31.63/0.952 31.34/0.954 31.53/0.955
Car 23.93/0.785 26.40/0.847 26.43/0.850 26.49/0.843 27.16/0.859 27.41/0.861 27.67/0.863
Average 21.90/0.664 25.79/0.813 26.49/0.829 26.57/0.768 27.05/0.837 27.73/0.846 28.13/0.854
Table 4: The average value of PSNR and SSIM of the test images recovered by different models for all experiments.
Kernel σ Degraded TV [38] DCA [31] TRL2 [42] SOCF [28] BM3D [15] Ours
GB(9,5) σ=3 22.22/0.678 27.30/0.838 26.91/0.821 26.26/0.785 27.40/0.845 27.46/0.836 27.82/0.857
        σ=5 22.00/0.619 24.76/0.789 25.10/0.796 25.40/0.762 25.82/0.812 26.52/0.809 26.81/0.831
        σ=8 22.37/0.527 25.37/0.791 24.96/0.770 24.03/0.727 25.57/0.789 25.65/0.786 25.97/0.811
        σ=10 21.15/0.472 25.17/0.771 24.63/0.765 24.18/0.721 25.16/0.775 25.24/0.776 25.74/0.796
MB(20,60) σ=3 20.58/0.638 27.75/0.819 28.03/0.829 26.93/0.812 27.41/0.834 27.70/0.834 28.09/0.846
        σ=5 20.43/0.583 24.84/0.793 25.75/0.795 24.91/0.727 25.90/0.799 26.41/0.807 26.86/0.815
        σ=8 20.10/0.496 25.26/0.768 25.10/0.757 23.51/0.737 24.94/0.760 25.24/0.780 25.75/0.783
        σ=10 19.82/0.445 24.58/0.757 24.63/0.763 23.56/0.696 24.46/0.755 24.69/0.768 25.25/0.776
AB(9,9) σ=3 21.90/0.664 25.79/0.813 26.49/0.829 26.57/0.793 27.05/0.837 27.73/0.846 28.13/0.854
        σ=5 21.70/0.605 26.41/0.812 26.07/0.811 24.89/0.730 26.65/0.816 26.76/0.819 27.16/0.831
        σ=8 21.26/0.515 25.61/0.792 25.12/0.776 24.37/0.737 25.71/0.783 25.87/0.793 26.24/0.809
        σ=10 20.87/0.462 25.12/0.767 24.72/0.756 24.12/0.726 25.63/0.793 25.44/0.781 25.84/0.798
Table 5: The computation time (in seconds) for all compared methods on a desktop (Intel(R) Core(TM) i5-8250 CPU @ 1.60 GHz).
Kernel Image size TV [38] DCA [31] TRL2 [42] SOCF [28] BM3D [15] Ours
GB(9,5) 512×512 gray 2.93 50.07 7.05 12.99 5.73 20.22
        768×512 RGB 12.96 162.96 31.44 60.56 27.94 79.28
MB(20,60) 512×512 gray 2.46 60.61 10.90 17.03 5.81 20.52
        768×512 RGB 12.89 137.92 30.74 93.01 28.69 100.66
AB(9,9) 512×512 gray 2.91 44.39 11.82 12.47 5.64 14.67
        768×512 RGB 12.76 152.73 36.67 62.05 28.21 85.09
Figure 2: Deblurring results for GB(9,5) with σ=8 on the gray image "Shepp-Logan", with zoomed areas and PSNR/SSIM values: (a) original image; (b) degraded image, 18.74/0.398; (c) TV [38], 24.04/0.804; (d) DCA [31], 23.11/0.823; (e) TRL2 [42], 23.44/0.773; (f) SOCF [28], 23.93/0.827; (g) BM3D [15], 23.76/0.833; (h) Ours, 24.83/0.929.

Figure 3: Deblurring results for GB(9,5) with σ=10 on the color image "Plate", with zoomed areas and PSNR/SSIM values: (a) original image; (b) degraded image, 17.12/0.765; (c) TV [38], 22.06/0.915; (d) DCA [31], 21.62/0.910; (e) TRL2 [42], 20.10/0.868; (f) SOCF [28], 22.23/0.914; (g) BM3D [15], 22.77/0.920; (h) Ours, 23.30/0.928.

Figure 4: Deblurring results for MB(20,60) with σ=3 on the gray image "Hill", with zoomed areas and PSNR/SSIM values: (a) original image; (b) degraded image, 23.43/0.499; (c) TV [38], 28.15/0.703; (d) DCA [31], 28.04/0.704; (e) TRL2 [42], 27.70/0.692; (f) SOCF [28], 28.36/0.712; (g) BM3D [15], 28.53/0.717; (h) Ours, 28.56/0.722.

Figure 5: Deblurring results for MB(20,60) with σ=5 on the color image "Duck", with zoomed areas and PSNR/SSIM values: (a) original image; (b) degraded image, 22.48/0.623; (c) TV [38], 26.79/0.782; (d) DCA [31], 27.36/0.793; (e) TRL2 [42], 26.61/0.683; (f) SOCF [28], 27.98/0.792; (g) BM3D [15], 28.03/0.808; (h) Ours, 28.57/0.797.

Figure 6: Deblurring results for AB(9,9) with σ=5 on the gray image "House", with zoomed areas and PSNR/SSIM values: (a) original image; (b) degraded image, 23.70/0.543; (c) TV [38], 29.60/0.798; (d) DCA [31], 28.76/0.802; (e) TRL2 [42], 26.80/0.592; (f) SOCF [28], 29.75/0.796; (g) BM3D [15], 30.33/0.808; (h) Ours, 30.49/0.817.

Figure 7: Deblurring results for AB(9,9) with σ=8 on the color image "Car", with zoomed areas and PSNR/SSIM values: (a) original image; (b) degraded image, 23.10/0.685; (c) TV [38], 26.14/0.834; (d) DCA [31], 25.56/0.825; (e) TRL2 [42], 25.18/0.799; (f) SOCF [28], 25.95/0.823; (g) BM3D [15], 26.24/0.839; (h) Ours, 26.54/0.842.

In order to further demonstrate the superiority of our algorithm on image restoration more intuitively, we compare the visual quality of the images restored by our proposed method and by other state-of-the-art methods, zooming in on key parts of the images for better illustration. Figures 2-7 show that our method yields the best image quality in terms of removing noise, preserving edges, and maintaining image sharpness. Looking carefully at the details in the figures, we can see that TV [38] over-smooths images, TRL2 [42] introduces staircase effects, DCA [31] sharpens edges but loses some detail information, SOCF [28] exhibits ringing artifacts, and the BM3D model [15] introduces some artifacts.

Table 5 lists the computation times on a desktop (Intel(R) Core(TM) i5-8250 CPU @ 1.60 GHz), which reveal that our method is faster than DCA [31] but slower than TV [38], BM3D [15], TRL2 [42] and SOCF [28]. Acceleration will be considered in future work.

In brief, our algorithm (EAHR) has competitive performance in sharpening edges and removing noise, and it outperforms other mainstream methods both in PSNR/SSIM values and in visual quality.

4 Conclusion

For an entire image, spatially fixed regularization parameters cannot perform well for both edges and smooth areas. Larger parameters are favorable for reducing noise in smooth areas but incur blur at edges. Smaller parameters enable regularization-based algorithms to sharpen edges, but the denoising may be insufficient.

In this paper, we have presented an automated spatially dependent regularization parameter selection framework for restoring an image from a noisy and blurred observation. An edge detector with high robustness to noise is used to detect the edges of the image and generate an edge information matrix. According to this matrix, the automated spatially dependent regularization parameters are given in binarized form. With the help of these parameters, the regularization algorithm performs well in both edge and smooth areas. Once the automated spatially dependent parameters are fixed, the proposed model is convex and can therefore be solved by sPADMM with a linear convergence rate. Extensive experiments with different types of blurring kernels and different levels of Gaussian noise show that our approach is robust and outperforms other state-of-the-art deblurring methods. In addition, the proposed model not only effectively overcomes false edges and staircase effects but also compensates for the harmonic model's diffusion in all directions, protecting edges while restoring the internal smooth areas. Due to limited space, we only display the experimental results in three cases: Gaussian blur GB(9,5)/$\sigma=5$, Motion blur MB(20,60)/$\sigma=5$ and Average blur AB(9,9)/$\sigma=3$. Future work will include:

  1. We will accelerate our algorithm to reduce the recovery time.

  2. To retain more image details, we will examine an edge detector with higher accuracy and robustness. Moreover, the automated spatially dependent parameters will be chosen according to the image texture.

  3. Our model is currently only suitable for non-blind deblurring; we will extend it to the blind deblurring task.

Acknowledgement

The work has been supported by the National Natural Science Foundation of China (Grant No. 12061052), the Natural Science Fund of Inner Mongolia Autonomous Region (Grant No. 2020MS01002), and the China Scholarship Council for a one-year visit at École Normale Supérieure Paris-Saclay (No. 201806810001). The authors would like to thank Professor Guoqing Chen for providing useful suggestions; this work is also partly supported by his project ("111 Project" of higher education talent training in Inner Mongolia Autonomous Region).

Appendix. Convergence proof

To make the paper self-contained, we establish the global convergence result of Algorithm 1.

Let $F(\mathbf{u})=\frac{\mu}{2}\|\mathbf{A}\otimes\mathbf{u}-\mathbf{f}\|_{2}^{2}$ and $G(\mathbf{k})=\|\mathbf{k}\|_{1,\bm{\alpha}_{1}}+\frac{1}{2}\|\mathbf{k}\|_{2,\bm{\alpha}_{2}}^{2}$. Then $(\bar{\mathbf{u}},\bar{\mathbf{k}})$ is an optimal solution of (2.6) if and only if there exists a Lagrange multiplier $\bar{\bm{\lambda}}$ such that

$$\begin{cases}0=\nabla F(\bar{\mathbf{u}})+\nabla^{\top}\bar{\bm{\lambda}},\\ 0\in\partial G(\bar{\mathbf{k}})-\bar{\bm{\lambda}},\\ 0=\nabla\bar{\mathbf{u}}-\bar{\mathbf{k}},\end{cases}$$

where $\nabla F=\mu\mathbf{A}^{\top}(\mathbf{A}\otimes\mathbf{u}-\mathbf{f})$ and $\partial G$ is the subdifferential of $G$. From the KKT conditions, we know that if $(\mathbf{u},\mathbf{k},\bm{\lambda})$ is a KKT point of (2.6), then

$$\begin{cases}0=\nabla F(\mathbf{u})+\nabla^{\top}\bm{\lambda},\\ 0\in\partial G(\mathbf{k})-\bm{\lambda},\\ 0=\nabla\mathbf{u}-\mathbf{k}.\end{cases} \qquad (4.1)$$

Since the subdifferential mapping of a closed convex function is maximal monotone, there exist self-adjoint positive semidefinite operators $\Sigma_{\mathbf{F}}$ and $\Sigma_{\mathbf{G}}$ such that, for all $\mathbf{u},\hat{\mathbf{u}}\in\mathrm{dom}(F)$, $\omega\in\nabla F(\mathbf{u})$ and $\hat{\omega}\in\nabla F(\hat{\mathbf{u}})$,

$$\langle\omega-\hat{\omega},\mathbf{u}-\hat{\mathbf{u}}\rangle\geq\|\mathbf{u}-\hat{\mathbf{u}}\|_{\Sigma_{\mathbf{F}}}^{2}, \qquad (4.2)$$

and for all $\mathbf{k},\hat{\mathbf{k}}\in\mathrm{dom}(G)$, $x\in\partial G(\mathbf{k})$ and $\hat{x}\in\partial G(\hat{\mathbf{k}})$,

$$\langle x-\hat{x},\mathbf{k}-\hat{\mathbf{k}}\rangle\geq\|\mathbf{k}-\hat{\mathbf{k}}\|_{\Sigma_{\mathbf{G}}}^{2}. \qquad (4.3)$$

From the iterations of our algorithm, we obtain

$$\begin{cases}0=\nabla F(\mathbf{u}^{n+1})+\nabla^{\top}(\beta(\nabla\mathbf{u}^{n+1}-\mathbf{k}^{n})+\bm{\lambda}^{n})+\mathbf{S}_{1}(\mathbf{u}^{n+1}-\mathbf{u}^{n}),\\ 0\in\partial G(\mathbf{k}^{n+1})-(\beta(\nabla\mathbf{u}^{n+1}-\mathbf{k}^{n+1})+\bm{\lambda}^{n})+\mathbf{S}_{2}(\mathbf{k}^{n+1}-\mathbf{k}^{n}),\\ 0=\nabla\mathbf{u}^{n+1}-\mathbf{k}^{n+1}-(\beta\eta)^{-1}(\bm{\lambda}^{n+1}-\bm{\lambda}^{n}).\end{cases} \qquad (4.4)$$

Let ϵ(𝐮,𝐤)=𝐮𝐤\epsilon(\mathbf{u},\mathbf{k})=\nabla\mathbf{u}-\mathbf{k}. Denote 𝐮en=𝐮n𝐮¯\mathbf{u}_{e}^{n}=\mathbf{u}^{n}-\bar{\mathbf{u}}, similarly 𝐤en,λen\mathbf{k}_{e}^{n},\mathbf{\lambda}_{e}^{n},

$$\begin{cases}w^{n+1}=\bm{\lambda}^{n+1}+(1-\eta)\beta\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})+\beta(\mathbf{k}^{n+1}-\mathbf{k}^{n}),\\ x^{n+1}=\bm{\lambda}^{n+1}+(1-\eta)\beta\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1}).\end{cases}$$

Therefore,

$$\epsilon(\mathbf{u}_{e}^{n+1},\mathbf{k}_{e}^{n+1})=\nabla\mathbf{u}^{n+1}-\mathbf{k}^{n+1}=\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})=(\beta\eta)^{-1}(\bm{\lambda}_{e}^{n+1}-\bm{\lambda}_{e}^{n})=(\beta\eta)^{-1}(\bm{\lambda}^{n+1}-\bm{\lambda}^{n}).$$

Noting (4.1), (4.2), (4.3) and (4.4), we find

$$\begin{aligned}\|\mathbf{u}_{e}^{n+1}\|_{\Sigma_{\mathbf{F}}}^{2}&\leq\langle w^{n+1}-\bar{w},\mathbf{u}^{n+1}-\bar{\mathbf{u}}\rangle\\ &=\langle\nabla^{\top}(\bar{\bm{\lambda}}-\bm{\lambda}^{n})+\beta(\nabla^{\top}\mathbf{k}^{n}-\Delta\mathbf{u}^{n+1})-\mathbf{S}_{1}(\mathbf{u}^{n+1}-\mathbf{u}^{n}),\,\mathbf{u}_{e}^{n+1}\rangle,\end{aligned} \qquad (4.5)$$
$$\begin{aligned}\|\mathbf{k}_{e}^{n+1}\|_{\Sigma_{\mathbf{G}}}^{2}&\leq\langle x^{n+1}-\bar{x},\mathbf{k}^{n+1}-\bar{\mathbf{k}}\rangle\\ &=\langle\bm{\lambda}^{n}-\bar{\bm{\lambda}}+\beta(\nabla\mathbf{u}^{n+1}-\mathbf{k}^{n+1})-\mathbf{S}_{2}(\mathbf{k}^{n+1}-\mathbf{k}^{n}),\,\mathbf{k}_{e}^{n+1}\rangle.\end{aligned} \qquad (4.6)$$

By calculation, we get

$$-\nabla^{\top}w^{n+1}=-\nabla^{\top}\big[\bm{\lambda}^{n+1}+(1-\eta)\beta\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})+\beta(\mathbf{k}^{n+1}-\mathbf{k}^{n})\big]=-\nabla^{\top}\bm{\lambda}^{n}-\beta\nabla^{\top}\nabla\mathbf{u}^{n+1}+\beta\nabla^{\top}\mathbf{k}^{n},$$
$$x^{n+1}=\bm{\lambda}^{n+1}+(1-\eta)\beta\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})=\bm{\lambda}^{n}+\beta(\nabla\mathbf{u}^{n+1}-\mathbf{k}^{n+1}).$$

Thus, (4.5) is converted to (4.7) and (4.6) becomes (4.8),

$$\|\mathbf{u}_{e}^{n+1}\|_{\Sigma_{\mathbf{F}}}^{2}\leq\langle-\nabla^{\top}w^{n+1}-\mathbf{S}_{1}(\mathbf{u}^{n+1}-\mathbf{u}^{n})+\nabla^{\top}\bar{\bm{\lambda}},\,\mathbf{u}_{e}^{n+1}\rangle, \qquad (4.7)$$
$$\|\mathbf{k}_{e}^{n+1}\|_{\Sigma_{\mathbf{G}}}^{2}\leq\langle x^{n+1}-\mathbf{S}_{2}(\mathbf{k}^{n+1}-\mathbf{k}^{n})-\bar{\bm{\lambda}},\,\mathbf{k}_{e}^{n+1}\rangle. \qquad (4.8)$$

Then through (4.7) and (4.8), we obtain

$$\begin{aligned}\|\mathbf{u}_{e}^{n+1}\|_{\Sigma_{\mathbf{F}}}^{2}+\|\mathbf{k}_{e}^{n+1}\|_{\Sigma_{\mathbf{G}}}^{2}&\leq(\beta\eta)^{-1}\langle\bm{\lambda}_{e}^{n+1},\bm{\lambda}_{e}^{n}-\bm{\lambda}_{e}^{n+1}\rangle-\beta\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\rangle\\ &\quad-\beta\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\mathbf{k}_{e}^{n+1}\rangle-\beta(1-\eta)\|\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\|_{2}^{2}\\ &\quad-\langle\mathbf{S}_{1}(\mathbf{u}^{n+1}-\mathbf{u}^{n}),\mathbf{u}_{e}^{n+1}\rangle-\langle\mathbf{S}_{2}(\mathbf{k}^{n+1}-\mathbf{k}^{n}),\mathbf{k}_{e}^{n+1}\rangle.\end{aligned} \qquad (4.9)$$

Next, we shall estimate the term $\beta\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\rangle$. It follows from equation (4.4) that

$$x^{n+1}-\mathbf{S}_{2}(\mathbf{k}^{n+1}-\mathbf{k}^{n})\in\partial G(\mathbf{k}^{n+1}),\qquad x^{n}-\mathbf{S}_{2}(\mathbf{k}^{n}-\mathbf{k}^{n-1})\in\partial G(\mathbf{k}^{n}).$$

In addition, by the maximal monotonicity of $\partial G(\cdot)$, we have

$$\begin{aligned}\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},x^{n+1}-x^{n}\rangle&=\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\,x^{n+1}-\mathbf{S}_{2}(\mathbf{k}^{n+1}-\mathbf{k}^{n})-(x^{n}-\mathbf{S}_{2}(\mathbf{k}^{n}-\mathbf{k}^{n-1}))\rangle\\ &\quad+\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\mathbf{S}_{2}(\mathbf{k}^{n+1}-\mathbf{k}^{n})\rangle-\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\mathbf{S}_{2}(\mathbf{k}^{n}-\mathbf{k}^{n-1})\rangle\\ &\geq\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|_{\Sigma_{\mathbf{G}}}^{2}+\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|_{\mathbf{S}_{2}}^{2}-\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\mathbf{S}_{2}(\mathbf{k}^{n}-\mathbf{k}^{n-1})\rangle\\ &\geq\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|_{\mathbf{S}_{2}}^{2}-\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\mathbf{S}_{2}(\mathbf{k}^{n}-\mathbf{k}^{n-1})\rangle.\end{aligned}$$

Let $\alpha_{n+1}=-(1-\eta)\beta\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\epsilon(\mathbf{u}^{n},\mathbf{k}^{n})\rangle$; then

$$\begin{aligned}&-\beta\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\rangle\\ &=-(1-\eta)\beta\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\rangle-\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\bm{\lambda}^{n+1}-\bm{\lambda}^{n}\rangle\\ &=-(1-\eta)\beta\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\rangle\\ &\quad-\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\,x^{n+1}-x^{n}-(1-\eta)\beta\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})+(1-\eta)\beta\epsilon(\mathbf{u}^{n},\mathbf{k}^{n})\rangle\\ &=\alpha_{n+1}+\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},x^{n}-x^{n+1}\rangle\\ &\leq\alpha_{n+1}-\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|_{\mathbf{S}_{2}}^{2}+\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\mathbf{S}_{2}(\mathbf{k}^{n}-\mathbf{k}^{n-1})\rangle\\ &\leq\alpha_{n+1}-\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|_{\mathbf{S}_{2}}^{2}+\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|_{\mathbf{S}_{2}}\|\mathbf{S}_{2}(\mathbf{k}^{n}-\mathbf{k}^{n-1})\|_{\mathbf{S}_{2}^{-1}}\\ &\leq\alpha_{n+1}-\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|_{\mathbf{S}_{2}}^{2}+\frac{1}{2}\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|_{\mathbf{S}_{2}}^{2}+\frac{1}{2}\|\mathbf{k}^{n}-\mathbf{k}^{n-1}\|_{\mathbf{S}_{2}}^{2}\\ &=\alpha_{n+1}-\frac{1}{2}\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|_{\mathbf{S}_{2}}^{2}+\frac{1}{2}\|\mathbf{k}^{n}-\mathbf{k}^{n-1}\|_{\mathbf{S}_{2}}^{2}.\end{aligned} \qquad (4.10)$$

Since $\bm{\lambda}^{n+1}=\bm{\lambda}^{n}+\beta\eta\,\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})$, it follows from (4.9) and (4.10) that

$$\begin{aligned}2\|\mathbf{u}_{e}^{n+1}\|_{\Sigma_{\mathbf{F}}}^{2}&+2\|\mathbf{k}_{e}^{n+1}\|_{\Sigma_{\mathbf{G}}}^{2}\\ &\leq 2(\beta\eta)^{-1}\langle\bm{\lambda}_{e}^{n+1},\bm{\lambda}_{e}^{n}-\bm{\lambda}_{e}^{n+1}\rangle-2\beta\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\rangle\\ &\quad-2\beta\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\mathbf{k}_{e}^{n+1}\rangle-2\beta(1-\eta)\|\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\|_{2}^{2}\\ &\quad-2\langle\mathbf{S}_{1}(\mathbf{u}^{n+1}-\mathbf{u}^{n}),\mathbf{u}_{e}^{n+1}\rangle-2\langle\mathbf{S}_{2}(\mathbf{k}^{n+1}-\mathbf{k}^{n}),\mathbf{k}_{e}^{n+1}\rangle\\ &\leq(\beta\eta)^{-1}(\|\bm{\lambda}_{e}^{n}\|^{2}-\|\bm{\lambda}_{e}^{n+1}\|^{2})-(2-\eta)\beta\|\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\|^{2}\\ &\quad+2\alpha_{n+1}-\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|_{\mathbf{S}_{2}}^{2}+\|\mathbf{k}^{n}-\mathbf{k}^{n-1}\|_{\mathbf{S}_{2}}^{2}\\ &\quad-\beta\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|^{2}-\beta\|\mathbf{k}_{e}^{n+1}\|^{2}+\beta\|\mathbf{k}_{e}^{n}\|^{2}\\ &\quad-\|\mathbf{u}^{n+1}-\mathbf{u}^{n}\|_{\mathbf{S}_{1}}^{2}-\|\mathbf{u}_{e}^{n+1}\|_{\mathbf{S}_{1}}^{2}+\|\mathbf{u}_{e}^{n}\|_{\mathbf{S}_{1}}^{2}\\ &\quad-\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|_{\mathbf{S}_{2}}^{2}-\|\mathbf{k}_{e}^{n+1}\|_{\mathbf{S}_{2}}^{2}+\|\mathbf{k}_{e}^{n}\|_{\mathbf{S}_{2}}^{2}.\end{aligned} \qquad (4.11)$$

Define

$$\begin{cases}\delta_{n+1}=\min\{\eta,1+\eta-\eta^{2}\}\,\beta\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|^{2}+\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|_{\mathbf{S}_{2}}^{2},\\ t_{n+1}=\delta_{n+1}+\|\mathbf{u}^{n+1}-\mathbf{u}^{n}\|_{\mathbf{S}_{1}}^{2}+2\|\mathbf{u}^{n+1}-\bar{\mathbf{u}}\|_{\Sigma_{\mathbf{F}}}^{2}+2\|\mathbf{k}^{n+1}-\bar{\mathbf{k}}\|_{\Sigma_{\mathbf{G}}}^{2},\\ \psi_{n+1}=\theta(\mathbf{u}^{n+1},\mathbf{k}^{n+1},\bm{\lambda}^{n+1})+\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|_{\mathbf{S}_{2}}^{2},\\ \theta(\mathbf{u},\mathbf{k},\bm{\lambda})=(\beta\eta)^{-1}\|\bm{\lambda}-\bar{\bm{\lambda}}\|^{2}+\|\mathbf{u}-\bar{\mathbf{u}}\|_{\mathbf{S}_{1}}^{2}+\|\mathbf{k}-\bar{\mathbf{k}}\|_{\mathbf{S}_{2}}^{2}+\beta\|\mathbf{k}-\bar{\mathbf{k}}\|^{2}.\end{cases} \qquad (4.12)$$

Next, we discuss two cases:
Case I: $\eta\in(0,1]$. By Young's inequality,

2\langle\mathbf{k}^{n+1}-\mathbf{k}^{n},\epsilon(\mathbf{u}^{n},\mathbf{k}^{n})\rangle\leq\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|^{2}+\|\epsilon(\mathbf{u}^{n},\mathbf{k}^{n})\|^{2}.

By the definition of $\alpha_{n+1}$ and (4.11), we see

\eqalign{&\psi_{n+1}+(1-\eta)\beta\|\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\|^{2}-[\psi_{n}+(1-\eta)\beta\|\epsilon(\mathbf{u}^{n},\mathbf{k}^{n})\|^{2}]\\
&+t_{n+1}+\beta\|\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\|^{2}\leq 0.} (4.13)

Case II: $\eta\in(1,\frac{1+\sqrt{5}}{2})$. Similarly, we have

\eqalign{&\psi_{n+1}+(1-\eta^{-1})\beta\|\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\|^{2}-[\psi_{n}+(1-\eta^{-1})\beta\|\epsilon(\mathbf{u}^{n},\mathbf{k}^{n})\|^{2}]\\
&+t_{n+1}+\eta^{-1}(1+\eta-\eta^{2})\beta\|\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\|^{2}\leq 0.} (4.14)

Denote

g_{n+1}=\cases{\psi_{n+1}+(1-\eta)\beta\|\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\|^{2},&$\eta\in(0,1];$\\
\psi_{n+1}+(1-\eta^{-1})\beta\|\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\|^{2},&$\eta\in(1,\frac{1+\sqrt{5}}{2}).$}

Thus, from (4.13) and (4.14), the sequence $\{g_{n}\}$ is monotonically nonincreasing. Moreover, since $\psi_{n}\geq 0$ and the coefficient of $\beta\|\epsilon(\mathbf{u}^{n},\mathbf{k}^{n})\|^{2}$ is nonnegative in both cases ($1-\eta\geq 0$ for $\eta\in(0,1]$ and $1-\eta^{-1}>0$ for $\eta>1$), $\{g_{n}\}$ is bounded below by zero, and hence convergent. In particular, the sequence $\{\psi_{n}\}$ is bounded.

Let $\gamma=1$ when $\eta\in(0,1]$ and $\gamma=\eta^{-1}(1+\eta-\eta^{2})$ when $\eta\in(1,\frac{1+\sqrt{5}}{2})$. Then, from (4.13) and (4.14),

0\leq t_{n+1}+\gamma\beta\|\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\|^{2}\leq g_{n}-g_{n+1}.
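Since both summands on the left are nonnegative, summing this inequality over $n$ telescopes to

\sum_{n=1}^{N}\left(t_{n+1}+\gamma\beta\|\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\|^{2}\right)\leq g_{1}-g_{N+1}\leq g_{1}<\infty,

so the series on the left converges and its terms must vanish.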

Thus,

t_{n+1}\rightarrow 0, (4.15)
\|\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\|\rightarrow 0. (4.16)

Considering the relationship $\mathbf{\lambda}^{n+1}-\mathbf{\lambda}^{n}=(\eta\beta)\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})$, we have $\|\mathbf{\lambda}^{n+1}-\mathbf{\lambda}^{n}\|\rightarrow 0$. By (4.15) and the boundedness of $\{\psi_{n}\}$, the sequences $\{\|\mathbf{\lambda}^{n+1}\|\}$, $\{\|\mathbf{u}^{n+1}_{e}\|^{2}_{\mathbf{S_{1}}}\}$, $\{\|\mathbf{u}^{n+1}_{e}\|^{2}_{\sum_{\mathbf{F}}}\}$, $\{\|\mathbf{k}^{n+1}_{e}\|^{2}_{\sum_{\mathbf{G}}}\}$, $\{\|\mathbf{k}^{n+1}_{e}\|^{2}_{\mathbf{S_{2}}}\}$ and $\{\|\mathbf{k}^{n+1}_{e}\|^{2}\}$ are all bounded. Using the inequality

\|\nabla\mathbf{u}^{n+1}_{e}\|\leq\|\nabla\mathbf{u}^{n+1}_{e}-\mathbf{k}^{n+1}_{e}\|+\|\mathbf{k}^{n+1}_{e}\|=\|\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\|+\|\mathbf{k}^{n+1}_{e}\|,

we deduce that $\{\|\nabla\mathbf{u}^{n+1}_{e}\|\}$ is bounded, and hence so is $\{\|\mathbf{u}^{n+1}_{e}\|_{\nabla^{\top}\nabla}\}$. Since $\|\mathbf{u}^{n+1}_{e}\|^{2}=\|\mathbf{u}^{n+1}_{e}\|^{2}_{\mathbf{S_{1}}+\sum_{\mathbf{F}}+\nabla^{\top}\nabla+I}-\|\mathbf{u}^{n+1}_{e}\|^{2}_{\mathbf{S_{1}}+\sum_{\mathbf{F}}+\nabla^{\top}\nabla}$ and $\mathbf{S_{1}}+\sum_{\mathbf{F}}+\nabla^{\top}\nabla+I$ is positive definite, the sequence $\{\|\mathbf{u}^{n+1}_{e}\|\}$ is bounded, and so is $\{\|\mathbf{u}^{n+1}\|\}$. Therefore, the sequence $\{(\mathbf{u}^{n},\mathbf{k}^{n},\mathbf{\lambda}^{n})\}$ is bounded, and thus admits a subsequence converging to a cluster point, denoted as $(\mathbf{u}^{n_{i}},\mathbf{k}^{n_{i}},\mathbf{\lambda}^{n_{i}})\rightarrow(\mathbf{u}^{*},\mathbf{k}^{*},\mathbf{\lambda}^{*})$.

It follows from (4.12) and (4.15) that

\eqalign{&\lim_{n\to\infty}\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|=0,\\
&\lim_{n\to\infty}\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|_{\mathbf{S_{2}}}=0,\\
&\lim_{n\to\infty}\|\mathbf{u}^{n+1}-\mathbf{u}^{n}\|_{\mathbf{S_{1}}}=0.} (4.17)

Thus, from

\|\nabla\mathbf{u}^{n+1}-\mathbf{k}^{n}\|\leq\|\nabla\mathbf{u}^{n+1}-\mathbf{k}^{n+1}\|+\|\mathbf{k}^{n+1}-\mathbf{k}^{n}\|,

we have $\lim_{n\to\infty}\|\nabla\mathbf{u}^{n+1}-\mathbf{k}^{n}\|=0$. Taking limits on both sides of equation (4.4) along the subsequence $(\mathbf{u}^{n_{i}},\mathbf{k}^{n_{i}},\mathbf{\lambda}^{n_{i}})$ and using the closedness of the subdifferential, we have

\cases{-\nabla^{\top}\mathbf{\lambda}^{*}=\nabla F(\mathbf{u}^{*}),\\
\mathbf{\lambda}^{*}\in\partial G(\mathbf{k}^{*}),\\
0=\nabla\mathbf{u}^{*}-\mathbf{k}^{*}.}

Thus, $(\mathbf{u}^{*},\mathbf{k}^{*})$ is an optimal solution of (2.6) and $\mathbf{\lambda}^{*}$ is the corresponding Lagrange multiplier.

Now, let us prove that $\lim_{n\rightarrow\infty}(\mathbf{u}^{n},\mathbf{k}^{n},\mathbf{\lambda}^{n})=(\mathbf{u}^{*},\mathbf{k}^{*},\mathbf{\lambda}^{*})$. Since $(\mathbf{u}^{*},\mathbf{k}^{*},\mathbf{\lambda}^{*})$ satisfies (4.4), we may replace $(\bar{\mathbf{u}},\bar{\mathbf{k}},\bar{\mathbf{\lambda}})$ with $(\mathbf{u}^{*},\mathbf{k}^{*},\mathbf{\lambda}^{*})$ in the analysis above. From (4.13), (4.14), (4.15) and (4.16), we find $g_{n_{i}}\rightarrow 0$ as $n_{i}\rightarrow\infty$. Since (4.13) and (4.14) imply that the sequence $\{g_{n}\}$ converges, it follows that $g_{n}\rightarrow 0$ as $n\rightarrow\infty$, and hence $\psi_{n}\rightarrow 0$. Further, we know by the definition of $\psi_{n}$ that

\eqalign{&\|\mathbf{\lambda}^{n}-\mathbf{\lambda}^{*}\|\rightarrow 0\Longrightarrow\mathbf{\lambda}^{n}\rightarrow\mathbf{\lambda}^{*},\\
&\|\mathbf{k}^{n}-\mathbf{k}^{*}\|\rightarrow 0\Longrightarrow\mathbf{k}^{n}\rightarrow\mathbf{k}^{*},\\
&\|\mathbf{u}^{n+1}_{e}\|^{2}_{\mathbf{S_{1}}}\rightarrow 0.} (4.18)

Considering (4.15) and (4.16), we get

\|\mathbf{u}^{n+1}_{e}\|^{2}_{\sum_{\mathbf{F}}}\rightarrow 0,\quad n\rightarrow\infty. (4.19)

Then, by (4.18) and (4.19),

\|\mathbf{u}^{n}_{e}\|_{\mathbf{S_{1}}+\sum_{\mathbf{F}}}\rightarrow 0,\quad n\rightarrow\infty. (4.20)

From the positive semidefinite property of $\sum_{\mathbf{F}}$ and the positive definite property of $\mathbf{S_{1}}$, it is clear that

\|\mathbf{u}^{n}_{e}\|^{2}=\langle\mathbf{u}^{n}_{e},\mathbf{u}^{n}_{e}\rangle\leq\|\mathbf{u}^{n}_{e}\|_{\mathbf{S_{1}}+\sum_{\mathbf{F}}}\|\mathbf{u}^{n}_{e}\|_{(\mathbf{S_{1}}+\sum_{\mathbf{F}})^{-1}},

which, combined with (4.20), gives $\lim_{n\rightarrow\infty}\|\mathbf{u}^{n}_{e}\|=0$, namely, $\lim_{n\rightarrow\infty}\mathbf{u}^{n}=\mathbf{u}^{*}$.
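The displayed bound is the generalized Cauchy–Schwarz inequality: writing $M=\mathbf{S_{1}}+\sum_{\mathbf{F}}$, which is positive definite and hence admits a square root, one has

\langle\mathbf{u}^{n}_{e},\mathbf{u}^{n}_{e}\rangle=\langle M^{1/2}\mathbf{u}^{n}_{e},M^{-1/2}\mathbf{u}^{n}_{e}\rangle\leq\|M^{1/2}\mathbf{u}^{n}_{e}\|\,\|M^{-1/2}\mathbf{u}^{n}_{e}\|=\|\mathbf{u}^{n}_{e}\|_{M}\,\|\mathbf{u}^{n}_{e}\|_{M^{-1}},

and the second factor remains bounded because $\{\|\mathbf{u}^{n}_{e}\|\}$ is bounded and $M^{-1}$ is a fixed matrix.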

In brief, when $\eta\in(0,\frac{1+\sqrt{5}}{2})$, we have

\lim_{n\rightarrow\infty}(\mathbf{u}^{n},\mathbf{k}^{n},\mathbf{\lambda}^{n})=(\mathbf{u}^{*},\mathbf{k}^{*},\mathbf{\lambda}^{*}).
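As a practical aside (not part of the proof), the guarantee $\|\epsilon(\mathbf{u}^{n+1},\mathbf{k}^{n+1})\|\rightarrow 0$ justifies terminating the sPADMM iteration once the splitting residual $\nabla\mathbf{u}^{n+1}-\mathbf{k}^{n+1}$ is small. The following minimal Python sketch shows the generic loop analyzed above; the callables prox_F and prox_G are hypothetical placeholders for the u- and k-subproblem solvers, whose concrete form depends on the edge-adaptive regularization terms and is not reproduced here.

    import numpy as np

    def spadmm(prox_F, prox_G, grad, u0, beta=1.0, eta=1.0,
               tol=1e-6, max_iter=500):
        """Generic sPADMM loop for min F(u) + G(k) s.t. grad(u) = k.

        prox_F(u, k, lam, beta) and prox_G(u, k, lam, beta) are abstract
        semi-proximal subproblem solvers (hypothetical placeholders).
        eta should lie in (0, (1 + sqrt(5)) / 2) for the convergence
        guarantee established above.
        """
        u = u0.copy()
        k = grad(u)                          # start feasible: k^0 = grad(u^0)
        lam = np.zeros_like(k)               # Lagrange multiplier lambda^0
        for _ in range(max_iter):
            u = prox_F(u, k, lam, beta)      # u-subproblem: u^{n+1}
            k = prox_G(u, k, lam, beta)      # k-subproblem: k^{n+1}
            eps = grad(u) - k                # epsilon(u^{n+1}, k^{n+1})
            lam = lam + eta * beta * eps     # lambda^{n+1} = lambda^n + eta*beta*eps
            if np.linalg.norm(eps) <= tol:   # stopping rule justified by (4.16)
                break
        return u, k, lam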

References

  • [1] R. Acar and C. Vogel. Analysis of bounded variation penalty methods for ill-posed problems. Inverse Problems, 10:1217–1229, 1994.
  • [2] A. Almansa, C. Ballester, V. Caselles, and G. Haro. A TV based restoration model with local constraints. Journal of Scientific Computing, 34(3):209–236, 2008.
  • [3] F. Benvenuto, A. La Camera, C. Theys, A. Ferrari, H. Lantéri, and M. Bertero. The study of an iterative method for the reconstruction of images corrupted by Poisson and Gaussian noise. Inverse Problems, 24(3):35016–35020, 2008.
  • [4] M. Bertalmio, V. Caselles, B. Rougé, and A. Solé. TV based image restoration with local constraints. Journal of Scientific Computing, 19:95–122, 2003.
  • [5] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2010.
  • [6] Jian-Feng Cai, Raymond H. Chan, and Mila Nikolova. Two-phase approach for deblurring images corrupted by impulse plus Gaussian noise. Inverse Problems and Imaging, 2(2):187–204, 2008.
  • [7] Jian-Feng Cai, Raymond H. Chan, and Mila Nikolova. Fast two-phase image deblurring under impulse noise. Journal of Mathematical Imaging and Vision, 36(1):46–53, 2010.
  • [8] Jian-Feng Cai, Bin Dong, Stanley Osher, and Zuowei Shen. Image restoration: Total variation, wavelet frames, and beyond. Journal of the American Mathematical Society, 25(2):1033–1089, 2012.
  • [9] Jian-Feng Cai, Stanley Osher, and Zuowei Shen. Linearized Bregman iterations for frame-based image deblurring. SIAM Journal on Imaging Sciences, 2(1):226–252, 2009.
  • [10] Jian-Feng Cai, Stanley Osher, and Zuowei Shen. Split Bregman methods and frame based image restoration. SIAM Journal on Multiscale Modeling and Simulation, 8(2):337–369, 2009.
  • [11] Antonin Chambolle. An algorithm for total variation minimization and applications. Journal of Mathematical Imaging and Vision, 20:89–97, 2004.
  • [12] Dali Chen, Shenshen Sun, Congrong Zhang, YangQuan Chen, and Dingyu Xue. Fractional-order TV-L2 models for image denoising. Central European Journal of Physics, 11(10):1414–1422, 2013.
  • [13] Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Color image denoising via sparse 3D collaborative filtering with grouping constraint in luminance-chrominance space. In 2007 IEEE International Conference on Image Processing, volume 1, pages I–313. IEEE, 2007.
  • [14] Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8):2080–2095, 2007.
  • [15] Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen O. Egiazarian. Image restoration by sparse 3D transform-domain collaborative filtering. 2008.
  • [16] Yiqiu Dong, Torsten Görner, and Stefan Kunis. An algorithm for total variation regularized photoacoustic imaging. Advances in Computational Mathematics, 41:423–438, 2015.
  • [17] Yiqiu Dong, Michael Hintermüller, and M. Monserrat Rincon-Camacho. Automated regularization parameter selection in multi-scale total variation models for image restoration. Journal of Mathematical Imaging and Vision, 40(1):82–104, 2011.
  • [18] Maryam Fazel, Ting Kei Pong, Defeng Sun, and Paul Tseng. Hankel matrix rank minimization with applications to system identification and realization. SIAM Journal on Matrix Analysis and Applications, 34(3):946–977, 2013.
  • [19] Ali Gholami and S. Mohammad Hosseini. A balanced combination of Tikhonov and total variation regularizations for reconstruction of piecewise-smooth signals. Signal Processing, 93(7):1945–1960, 2013.
  • [20] Tom Goldstein and Stanley Osher. The split Bregman method for L1 regularized problems. SIAM Journal on Imaging Sciences, 2(2):323–343, 2009.
  • [21] Zheng Gong, Zuowei Shen, and Kim-Chuan Toh. Image restoration with mixed or unknown noises. SIAM Journal on Multiscale Modeling and Simulation, 12(2):458–487, 2014.
  • [22] Deren Han, Defeng Sun, and Liwei Zhang. Linear rate convergence of the alternating direction method of multipliers for convex composite programming. Mathematics of Operations Research, 43(2):622–637, 2018.
  • [23] Jie Huang, Marco Donatelli, and Raymond Chan. Nonstationary iterated thresholding algorithms for image deblurring. Inverse Problems and Imaging, 7(3):717–736, 2013.
  • [24] Yumei Huang, Michael K. Ng, and You-Wei Wen. A fast total variation minimization method for image restoration. SIAM Journal on Multiscale Modeling and Simulation, 7(2):774–795, 2008.
  • [25] Tongtong Jia, Yuying Shi, Yonggui Zhu, and Lei Wang. An image restoration model combining mixed L1/L2 fidelity terms. Journal of Visual Communication and Image Representation, 38:461–473, 2016.
  • [26] Fang Li, Chaomin Shen, Jingsong Fan, and Chunli Shen. Image restoration combining a total variational filter and a fourth-order filter. Journal of Visual Communication and Image Representation, 18(4):322–330, 2007.
  • [27] Min Li, Defeng Sun, and Kim-Chuan Toh. A majorized ADMM with indefinite proximal terms for linearly constrained convex composite optimization. SIAM Journal on Optimization, 26(2):922–950, 2016.
  • [28] Jingjing Liu, Yifei Lou, Guoxi Ni, and Tieyong Zeng. An image sharpening operator combined with framelet for image deblurring. Inverse Problems, 36(4), 2020.
  • [29] Kui Liu, Jieqing Tan, and Liefu Ai. Hybrid regularizers-based adaptive anisotropic diffusion for image denoising. SpringerPlus, 5(1):1–24, 2016.
  • [30] Kui Liu, Jieqing Tan, and Benyue Su. An adaptive image denoising model based on Tikhonov and TV regularizations. Advances in Multimedia, 2014:1–10, 2014.
  • [31] Yifei Lou, Tieyong Zeng, Stanley Osher, and Jack Xin. A weighted difference of anisotropic and isotropic total variation model for image processing. SIAM Journal on Imaging Sciences, 8(3):1798–1823, 2015.
  • [32] Marius Lysaker, Arvid Lundervold, and Xue-Cheng Tai. Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time. IEEE Transactions on Image Processing, 12(12):1579–1590, 2003.
  • [33] Liyan Ma, Li Xu, and Tieyong Zeng. Low rank prior and total variation regularization for image deblurring. Journal of Scientific Computing, 70:1336–1357, 2017.
  • [34] Jitendra M. Malik and Pietro Perona. Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7):629–639, 1990.
  • [35] Jin-Jin Mei, Yiqiu Dong, and Ting-Zhu Huang. Simultaneous image fusion and denoising by using fractional-order gradient information. Journal of Computational and Applied Mathematics, 351:212–227, 2018.
  • [36] Mario Micheli, Yifei Lou, Stefano Soatto, and Andrea L. Bertozzi. A linear systems approach to imaging through turbulence. Journal of Mathematical Imaging and Vision, 48:185–201, 2014.
  • [37] Seungmi Oh, Hyenkyun Woo, Sangwoon Yun, and Myungjoo Kang. Non-convex hybrid total variation for image denoising. Journal of Visual Communication and Image Representation, 24(3):332–344, 2013.
  • [38] Leonid I. Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1-4):259–268, 1992.
  • [39] Andrej Nikolaevich Tikhonov and Vasiliy Yakovlevich Arsenin. Solutions of ill-posed problems. Mathematics of Computation, 32(144):266–267, 1977.
  • [40] Yilun Wang, Junfeng Yang, Wotao Yin, and Yin Zhang. A new alternating minimization algorithm for total variation image reconstruction. SIAM Journal on Imaging Sciences, 1(3):248–272, 2008.
  • [41] Zhou Wang, Alan Conrad Bovik, Hamid Rahim Sheikh, and Eero P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
  • [42] Chunlin Wu, Zhifang Liu, and Shuang Wen. A general truncated regularization framework for contrast-preserving variational signal and image restoration: Motivation and implementation. Science China Mathematics, 61(9):1711–1732, 2018.
  • [43] Chunlin Wu and Xue-Cheng Tai. Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models. SIAM Journal on Imaging Sciences, 3(3):300–339, 2010.
  • [44] Tingting Wu, Wei Li, Lihua Li, and Tieyong Zeng. A convex variational approach for image deblurring with multiplicative structured noise. IEEE Access, 8:37790–37807, 2020.
  • [45] Xiongjun Zhang, Minru Bai, and Michael K. Ng. Nonconvex-TV based image restoration with impulse noise removal. SIAM Journal on Imaging Sciences, 10(3):1627–1667, 2017.
  • [46] Shixiu Zheng, Zhenkuan Pan, Guodong Wang, and Xu Yan. A variational model of image restoration based on first and second order derivatives and its split Bregman algorithm. Pages 860–865, 2012.