A New Low-Rank Learning Robust Quaternion Tensor Completion Method for Color Video Inpainting Problem and Fast Algorithms
Abstract
The color video inpainting problem is one of the most challenging problems in modern imaging science. It aims to recover a color video from a small portion of pixels that may contain noise. However, there are few robust models that can simultaneously preserve the coupling of color channels and the evolution of color video frames. In this paper, we present a new robust quaternion tensor completion (RQTC) model to solve this challenging problem and derive the exact recovery theory. The main idea is to build a quaternion tensor optimization model that recovers a low-rank quaternion tensor representing the targeted color video and a sparse quaternion tensor representing noise. This new model is very efficient for recovering high-dimensional data that satisfies the prior low-rank assumption. For the case without the low-rank property, we introduce a new low-rank learning RQTC model, which rearranges similar patches classified by a quaternion learning method into smaller tensors satisfying the prior low-rank assumption. We also propose fast algorithms with global convergence guarantees. In numerical experiments, the proposed methods successfully recover color videos, eliminating color contamination and preserving the continuity of video scenery, and their solutions are of higher quality in terms of PSNR and SSIM values than those of state-of-the-art algorithms.
Index Terms:
Color video inpainting; robust quaternion tensor completion; 2DQPCA; low-rank; learning model

I Introduction
Many applications of multi-dimensional data (tensor data) are becoming popular. For instance, color videos or images can be seen as 3-mode or 2-mode quaternion data. With its capacity to capture the fundamental substructures and color information, quaternion tensor-based modeling is an obvious choice for color video processing problems. A modern and challenging problem is color video inpainting, which aims to recover a color video from a sampling of its pixels that may contain noise. In mathematical language, this is the robust quaternion tensor completion (RQTC) problem. There are currently few methods to solve this problem because it is difficult to preserve both the coupling of color channels and the evolution of color video frames. In this paper, we present new robust quaternion tensor completion (RQTC) models to solve this challenging problem.
For a single color image, the robust quaternion matrix completion (RQMC) method proposed in [10] theoretically solved the color image inpainting problem under incoherence conditions. Chen and Ng [2] proposed a cross-channel weight strategy and analysed the error bound of the RQMC problem. Xu et al. [23] proposed a new model combining a deep prior and a low-rank quaternion prior in color image processing. Many practical applications indicate that RQMC can completely recover color images in which low-frequency information dominates, but it fails to recover color images in which high-frequency information dominates. To inpaint color images in the latter case, a new nonlocal self-similarity (NSS) based RQMC was introduced in [8] to compute an optimal approximation to the color image. The main idea is to gather similar patches into several color images of small size that mainly contain low-frequency information. This NSS-based RQMC uses a distance function to find low-rank structures of color images. It has also been applied to color video inpainting problems and produces color videos of high quality. However, it overlooks the global information that reflects the potential relation of continuous frames. So we need to build a quaternion tensor-based model for color video inpainting.
Recall that several famous real tensor decompositions [12] serve as the foundation of modern robust tensor completion (RTC) approaches. For instance, Liu et al. [14] presented the sum-of-nuclear-norms (SNN) as the tensor rank in the RTC model. This representation depends on the Tucker decomposition [20], and the SNN model is proved to have an exact recovery guarantee in [6]. Gao and Zhang [4] proposed a novel nonconvex norm-based model to solve the RTC problem. Jiang et al. [11] presented a data-adaptive dictionary to determine relevant third-order tensor tubes and established a new tensor learning and coding model. Ng et al. [18] proposed a novel unitary transform method that is very efficient, using a similar-patches strategy to form a third-order sub-tensor. Wang et al. [21] recovered tensors by two new tensor norms. Zhao et al. [27] proposed equivalent nonconvex surrogates of the RTC problem and analysed the recovery error bound. These RTC methods have been successfully applied in color image and video processing. However, RTC models regard color images as 3-mode real tensors [13, 16] and color videos as 4-mode real tensors [7]; thus, they usually process the three color channels independently and ignore the mutual connection among channels.
Since a quaternion has three imaginary parts, a color pixel can be seen as a pure quaternion. Based on quaternion representation and calculation, the color information can be preserved during color image processing. A low-rank quaternion tensor completion method (LRQTC) was proposed in [17]. It cannot deal with noisy or corrupted data since it only contains a low-rank regularization term. By introducing a new sparsity regularization term into the objective function, we propose a new RQTC method for color video inpainting with missing and corrupted pixels. There are two new models. One is a robust quaternion tensor completion (RQTC) model, which recovers color videos from a global view. It is essentially a quaternion minimization problem with two regularization terms for rank and sparsity. The other is a low-rank learning RQTC (LRL-RQTC) model. We intend to learn similar information to form a low-rank structure, and we prove the numerical low-rank property in theory. From the viewpoint of numerical linear algebra, the principal components computed by two-dimensional principal component analysis (2DPCA) [24] span an optimal subspace on which projected samples are maximally scattered. Meanwhile, low-rank approximations of the original samples can be simultaneously reconstructed from such low-dimensional projections. Recently, 2DPCA was generalized to quaternions, named two-dimensional quaternion principal component analysis (2DQPCA), in [9], and 2DQPCA performs well on color image clustering. 2DQPCA and its variations extract features from training data and utilize these features to project training and testing samples into low-dimensional projections for efficient use of available computational resources. We find that 2DQPCA is a good learning method for extracting low-rank structure from quaternion tensors. So we apply the 2DQPCA method to learn the low-rank structure adaptively.
The highlights are as follows:
•
We present a novel RQTC method for the color video inpainting problem with missing and corrupted pixels and derive the exact recovery theorem. This method can simultaneously preserve the coupling of color channels and the evolution of color video frames.
•
We introduce, for the first time, the 2DQPCA technology into color video inpainting to learn the low-rank structures of quaternion tensors and present a new low-rank learning RQTC model. Moreover, the numerical low-rank property is proved in theory.
•
We design new RQTC and LRL-RQTC algorithms based on the alternating direction method of multipliers (ADMM) framework and apply them to solve color video inpainting problems with missing or noisy pixels. The color videos computed by the newly proposed algorithms are of higher quality in terms of PSNR and SSIM values than those by the state-of-the-art algorithms.
This paper is organized as follows. In Section II, we introduce preliminaries about quaternion matrices and the quaternion matrix completion method. In Section III, we present the new robust quaternion tensor completion and low-rank learning robust quaternion tensor completion models, including the solving procedure, sufficient conditions for exact recovery, convergence analysis, the 2DQPCA-based classification technology for learning low-rank information, and theoretical analysis of the numerical low-rank property. In Section IV, we present experimental results of color video inpainting, which indicate the advantages of the newly proposed methods in restoration quality. In Section V, we conclude the paper and present prospects.
II Preliminaries
Several necessary results about quaternion matrices are recalled in this section.
II-A Quaternion matrix
Let $\mathbb{Q}$ denote the set of quaternions. A quaternion $\mathbf{q}=q_0+q_1\mathtt{i}+q_2\mathtt{j}+q_3\mathtt{k}$ has one real part $q_0$ and three imaginary parts $q_1\mathtt{i}$, $q_2\mathtt{j}$, $q_3\mathtt{k}$, and its modulus is $|\mathbf{q}|=\sqrt{q_0^2+q_1^2+q_2^2+q_3^2}$ [5]. A boldface symbol is used to express a quaternion scalar, vector, matrix or tensor. A quaternion matrix is written as $\mathbf{A}=A_0+A_1\mathtt{i}+A_2\mathtt{j}+A_3\mathtt{k}$ with $A_0,A_1,A_2,A_3\in\mathbb{R}^{m\times n}$. If $A_0=0$ and $A_1,A_2,A_3$ are not all zero, $\mathbf{A}$ is called a purely imaginary quaternion matrix. The quaternion shrinkage function shrinkQ is defined in [8] by:
$$\operatorname{shrinkQ}(\mathbf{A},\tau)_{ij}=\frac{\mathbf{a}_{ij}}{|\mathbf{a}_{ij}|}\max\{|\mathbf{a}_{ij}|-\tau,\,0\},\qquad(1)$$
where $\mathbf{a}_{ij}$ is the $(i,j)$ entry of $\mathbf{A}$, $\tau>0$, and the quotient is taken as zero when $\mathbf{a}_{ij}=0$.
Suppose $\mathbf{A}=\mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{*}$ is the singular value decomposition and denote by $\{(\sigma_\ell,\mathbf{u}_\ell,\mathbf{v}_\ell)\}$ the singular triplets of a quaternion matrix $\mathbf{A}$. The quaternion singular value thresholding function approxQ is defined in [8] by:
$$\operatorname{approxQ}(\mathbf{A},\tau)=\sum_{\sigma_\ell>\tau}\sigma_\ell\,\mathbf{u}_\ell\mathbf{v}_\ell^{*},\qquad(2)$$
where $\tau>0$ and the singular values not exceeding $\tau$ are substituted by zeros. Quaternion matrix norms are defined by $\|\mathbf{A}\|_{*}=\sum_\ell\sigma_\ell$, $\|\mathbf{A}\|_{1}=\sum_{i,j}|\mathbf{a}_{ij}|$, $\|\mathbf{A}\|_{F}=\big(\sum_{i,j}|\mathbf{a}_{ij}|^{2}\big)^{1/2}$, and $\|\mathbf{A}\|_{\infty}=\max_{i,j}|\mathbf{a}_{ij}|$.
II-B Robust quaternion matrix completion method
A low-rank quaternion matrix $\mathbf{L}$ can be completely recovered from an observed quaternion matrix $\mathbf{X}$ by the RQMC method [10], where $\mathbf{S}$ is a noise matrix and $P_{\Omega}$ is a random sampling operator:
By [10, Theorem 2], if the sufficient conditions are satisfied, $\mathbf{L}$ can be exactly computed by solving the following minimization problem with a suitable parameter $\lambda$,
$$\min_{\mathbf{L},\mathbf{S}}\ \|\mathbf{L}\|_{*}+\lambda\|\mathbf{S}\|_{1}\quad\text{s.t.}\quad P_{\Omega}(\mathbf{L}+\mathbf{S})=P_{\Omega}(\mathbf{X}).\qquad(3)$$
A practical QMC algorithm is given in [8, Supplementary Material]. The augmented Lagrangian function of (3) is defined by
$$\mathcal{L}(\mathbf{L},\mathbf{S},\mathbf{Y})=\|\mathbf{L}\|_{*}+\lambda\|\mathbf{S}\|_{1}+\langle\mathbf{Y},P_{\Omega}(\mathbf{X}-\mathbf{L}-\mathbf{S})\rangle+\frac{\mu}{2}\|P_{\Omega}(\mathbf{X}-\mathbf{L}-\mathbf{S})\|_{F}^{2},$$
where $\mu>0$ is the penalty parameter. The solving procedure is
$$\begin{cases}\mathbf{L}^{k+1}=\operatorname{approxQ}\!\big(\mathbf{X}-\mathbf{S}^{k}+\mu^{-1}\mathbf{Y}^{k},\ \mu^{-1}\big),\\[2pt]\mathbf{S}^{k+1}=\operatorname{shrinkQ}\!\big(\mathbf{X}-\mathbf{L}^{k+1}+\mu^{-1}\mathbf{Y}^{k},\ \lambda\mu^{-1}\big),\\[2pt]\mathbf{Y}^{k+1}=\mathbf{Y}^{k}+\mu\,P_{\Omega}\big(\mathbf{X}-\mathbf{L}^{k+1}-\mathbf{S}^{k+1}\big).\end{cases}\qquad(4)$$
III Robust Quaternion Tensor Completion Models and Fast Algorithms
In this section, we propose two new RQTC models to solve the color video inpainting problem with partial and corrupted pixels, as well as their fast algorithms.
Boldface Euler script letters are used to denote quaternion tensors. Let $\boldsymbol{\mathcal{A}}\in\mathbb{Q}^{n_1\times n_2\times\cdots\times n_d}$ be a $d$-mode quaternion tensor. The elements of $\boldsymbol{\mathcal{A}}$ are denoted by $\mathbf{a}_{i_1 i_2\cdots i_d}$, where $1\le i_k\le n_k$. A $k$-mode fiber is an $n_k$-dimensional column vector constructed by entries with all the indexes fixed except the $k$th one. The number of $k$-mode fibers is $\prod_{j\neq k}n_j$. Concatenating all $k$-mode fibers as column vectors (in dictionary order) into a quaternion matrix gives the $k$-mode unfolding $\mathbf{A}_{(k)}$ of the quaternion tensor $\boldsymbol{\mathcal{A}}$. We define the 'unfold' function on a quaternion tensor by
$$\operatorname{unfold}_k(\boldsymbol{\mathcal{A}})=\mathbf{A}_{(k)}\in\mathbb{Q}^{\,n_k\times\prod_{j\neq k}n_j},\qquad(5)$$
and the 'fold' function by
$$\operatorname{fold}_k\big(\mathbf{A}_{(k)}\big)=\boldsymbol{\mathcal{A}}.\qquad(6)$$
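As a concrete real-valued illustration of (5) and (6), the following sketch unfolds a tensor along a given mode and folds it back; the exact "dictionary order" of the columns is whatever ordering the reshape uses, which is immaterial as long as fold inverts unfold:

```python
import numpy as np

def unfold(T, mode):
    """k-mode unfolding: the mode-k fibers become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold: rebuild the tensor of the given shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)
```

For example, a $2\times3\times4$ tensor has a 1-mode unfolding of size $3\times8$, and folding any unfolding back recovers the original tensor.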
A slice of $\boldsymbol{\mathcal{A}}$ is a quaternion matrix obtained by fixing all the indexes except two.
One important application of quaternion tensors is color video processing. A color video with $t$ frames of $m\times n$ pixels can be seen as a $3$-mode quaternion tensor $\boldsymbol{\mathcal{T}}\in\mathbb{Q}^{m\times n\times t}$ whose entries are pure quaternions $r\,\mathtt{i}+g\,\mathtt{j}+b\,\mathtt{k}$, where $r$, $g$ and $b$ represent the red, green and blue channels. Mathematically, the color video inpainting problem with noise is exactly the RQTC problem (with $d=3$), which will be characterized later in (7).
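Numerically, a pure quaternion tensor can be stored as a real array with a trailing axis of four components (real, i, j, k). A small sketch of this encoding (the function name is ours) is:

```python
import numpy as np

def video_to_quaternion(video_rgb):
    """Represent an m x n x t color video as a 3-mode pure quaternion
    tensor: each pixel (r, g, b) becomes r*i + g*j + b*k.  Quaternions
    are stored as a trailing axis of 4 real components (w, i, j, k)."""
    m, n, t, _ = video_rgb.shape            # input shape (m, n, t, 3)
    Q = np.zeros((m, n, t, 4))
    Q[..., 1:] = video_rgb                  # zero real part -> pure quaternion
    return Q
```

This keeps the three channels of each pixel coupled inside a single quaternion entry, which is the point of the quaternion representation.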
III-A RQTC model
Let $\boldsymbol{\mathcal{X}}\in\mathbb{Q}^{n_1\times\cdots\times n_d}$ denote the observed quaternion tensor with missing and/or corrupted entries. Then the RQTC problem is mathematically modeled by the following minimization problem,
$$\min_{\boldsymbol{\mathcal{L}},\,\boldsymbol{\mathcal{S}}}\ \|\boldsymbol{\mathcal{L}}\|_{*}+\lambda\|\boldsymbol{\mathcal{S}}\|_{1}\quad\text{s.t.}\quad P_{\Omega}(\boldsymbol{\mathcal{L}}+\boldsymbol{\mathcal{S}})=P_{\Omega}(\boldsymbol{\mathcal{X}}),\qquad(7)$$
where $\boldsymbol{\mathcal{L}}$ and $\boldsymbol{\mathcal{S}}$ denote the target low-rank tensor and the sparse data, respectively. The quaternion tensor random sampling operator is defined by $P_{\Omega}(\boldsymbol{\mathcal{X}})_{i_1\cdots i_d}=\mathbf{x}_{i_1\cdots i_d}$ if $(i_1,\ldots,i_d)\in\Omega$ and $0$ otherwise.
The quaternion tensor nuclear norm and the $\ell_1$ norm are defined by
$$\|\boldsymbol{\mathcal{L}}\|_{*}=\sum_{k=1}^{d}\alpha_k\|\mathbf{L}_{(k)}\|_{*},\qquad \|\boldsymbol{\mathcal{S}}\|_{1}=\sum_{i_1,\ldots,i_d}|\mathbf{s}_{i_1\cdots i_d}|,\qquad(8)$$
where the $\alpha_k$'s are constants satisfying $\alpha_k\ge 0$ and $\sum_{k=1}^{d}\alpha_k=1$. Here, the quaternion tensor nuclear norm is essentially a convex combination of quaternion matrix nuclear norms generated by unfolding the tensor along each mode. Notice that $\|\boldsymbol{\mathcal{S}}\|_{1}=\|\mathbf{S}_{(k)}\|_{1}$ for every $k$.
To derive the exact RQTC theorem, we need to build the incoherence conditions of quaternion tensors.
Definition III.1.
For a quaternion tensor $\boldsymbol{\mathcal{L}}\in\mathbb{Q}^{n_1\times\cdots\times n_d}$, suppose each unfolding $\mathbf{L}_{(k)}$ has the singular value decomposition
$$\mathbf{L}_{(k)}=\mathbf{U}_k\boldsymbol{\Sigma}_k\mathbf{V}_k^{*},\qquad \operatorname{rank}(\mathbf{L}_{(k)})=r_k.$$
Let $m_k=\prod_{j\neq k}n_j$.
Then the conditions of quaternion tensor incoherence with parameter $\mu_0>0$, for $k=1,\ldots,d$, are as follows:
(1) $k$-mode incoherence
$$\max_i\|\mathbf{U}_k^{*}\mathbf{e}_i\|_2\le\sqrt{\frac{\mu_0 r_k}{n_k}},\qquad \max_j\|\mathbf{V}_k^{*}\mathbf{e}_j\|_2\le\sqrt{\frac{\mu_0 r_k}{m_k}};\qquad(9)$$
(2) mutual incoherence
$$\|\mathbf{U}_k\mathbf{V}_k^{*}\|_{\infty}\le\sqrt{\frac{\mu_0 r_k}{n_k m_k}}.\qquad(10)$$
Condition (10) strengthens the quaternion matrix incoherence condition (9) and keeps a balance between the ranks of the unfoldings of the quaternion tensor. Clearly, a larger incoherence parameter means that even though the quaternion tensor is of low rank in a certain mode, it may still be of high rank in another mode.
Theorem III.1.
Suppose a quaternion tensor meets the incoherence conditions in Definition III.1, the support set $\Omega$ is uniformly distributed with fixed cardinality, and each observed entry is corrupted with a fixed probability independently of the other entries. Then the solution of (7) is exact with high probability, provided that the rank of each unfolding and the fraction of corrupted entries are below appropriate thresholds,
where the thresholds involve positive numerical constants.
Proof.
Under the definition of the quaternion tensor nuclear norm (8), model (7) reduces to the QMC model proposed in [10] when the low-rank and sparse tensors reduce to quaternion matrices. Equivalently, model (7) is equal to
(11) |
in which the quaternion matrices are the results of the 'unfold' function (5) acting on the low-rank and sparse tensors (we will use this notation in later models). Model (11) is a generalized QMC model obtained by extending the first regularization term to a combination of nuclear norms. In other words, model (11) can be written as
(12) |
With the parameter in (11) split accordingly, (12) is a convex combination of three QMC problems; thus the solution of (11) is exact as long as each of the three problems satisfies the exact recovery conditions respectively. According to Theorem 2 in [10], we obtain the conclusion. ∎
Introducing two auxiliary variables and , the augmented Lagrangian equation of problem (11) becomes
(13) |
where the two penalty parameters are positive, and the unfolded multipliers are the results of the 'unfold' function (5) acting on two Lagrange multipliers that are quaternion tensors. Now, we design an optimization algorithm to solve (13) based on the ADMM framework. Problem (13) can be converted into two block subproblems, each containing two unknown variables:
subproblem:
subproblem:
By these formulae, the minimization problems for quaternion tensors are equivalently converted into minimization problems for quaternion matrices. It seems that they can be feasibly solved by the QMC iteration (4). However, one obstacle is the subproblem that contains a convex combination of quaternion matrix nuclear norms. Fortunately, we find that this subproblem has a closed-form solution.
Theorem III.2.
The closed-form solution of subproblem is .
Proof.
subproblem is
We find that the objective function is a sum of non-negative functions that are independent of each other. Then the subproblem can be solved by finding the minimizers of the mode-wise subproblems, respectively. Suppose all but one of the unfoldings are known and the remaining one is the only unknown variable. The solution of the subproblem with respect to it is
Here, the quaternion singular value thresholding operator is employed in the computation. The derivation is entirely independent of the selection of the mode, so the above minimization can be carried out for any mode.
Each solution is the optimal approximation of the $k$th unfolding. So the closed-form solution of the subproblem is
∎
The other three subproblems can be solved similarly. For instance, the sparse subproblem can be solved by the quaternion shrinkage operator shrinkQ (1), and in fact, it has a closed-form solution:
To summarize the above analysis, we present a new RQTC algorithm in Algorithm 1 and prove its convergence in Theorem III.3.
Proof.
Since all of the matrices appearing in (13) are quaternion matrices, we reformulate them in real form. Taking one matrix as an example, we represent it with a real vector obtained by stacking the vectorizations of its four real component matrices, where vec(·) denotes the vector generated by stacking the columns of a matrix. Thus, the quaternion model (13) is mathematically equivalent to
(14) |
Problem (14) is a minimization problem in real variables. For clarity, we define the objective function accordingly.
From [10, Proposition 2], the nuclear norm can be expressed as the convex envelope of the rank function on the unit ball of the spectral norm.
Therefore, the nuclear norm function is convex and closed. On the other hand, the sparsity term is obviously convex and closed. Under these circumstances, the optimization problem (14) fits the framework of ADMM. We can iteratively update all variables as follows:
Here, we collect the variables into two blocks and denote them accordingly. Then the above equations are consistent with the following equations in [1]:
As a result, this falls essentially into the two-block ADMM framework and the convergence is theoretically guaranteed according to [1]. ∎
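To make the update order concrete, here is a minimal real-valued ADMM sketch for robust tensor completion in the spirit of Algorithm 1. This is not the authors' exact algorithm: quaternion arithmetic is replaced by real arithmetic, and the parameters `lam` and `mu` are illustrative.

```python
import numpy as np

def unfold(T, k):
    return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

def fold(M, k, shape):
    rest = [s for i, s in enumerate(shape) if i != k]
    return np.moveaxis(M.reshape([shape[k]] + rest), 0, k)

def svt(M, tau):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rtc_admm(X, mask, lam=0.1, mu=1.0, iters=100):
    """Minimal ADMM sketch for robust tensor completion:
    per-mode singular value thresholding for the low-rank part L,
    entrywise soft-thresholding for the sparse part S."""
    alphas = np.ones(X.ndim) / X.ndim
    L = np.zeros_like(X); S = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(iters):
        # L-update: weighted combination of the per-mode SVT solutions
        L = sum(a * fold(svt(unfold(X - S + Y / mu, k), a / mu), k, X.shape)
                for k, a in enumerate(alphas))
        # S-update: soft-threshold the residual on observed entries
        R = X - L + Y / mu
        S = mask * np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # multiplier update on observed entries
        Y = Y + mu * mask * (X - L - S)
    return L, S
```

On a clean low-rank tensor with all entries observed, the iteration drives the low-rank estimate toward the input.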
Remark III.1.
It is worth mentioning that this model performs better than real tensor completion models, because the intrinsic color structure is fully retained during the computation on the quaternion tensor, whereas splitting a color pixel into separately processed channels may completely destroy their coupling.
Remark III.2.
A color image can be seen as a color video with only one frame, and its representation is a quaternion matrix. That is, when the number of frames equals one, a 3-mode quaternion tensor reduces to a quaternion matrix. So the proposed RQTC method is surely a generalization of the QMC method [10]. The incoherence conditions and the assumptions of low rank and sparsity for the robust quaternion tensor recovery problem surely cover those in [10, Theorem 2] for robust quaternion matrix recovery problems.
Remark III.3.
In the above, we have presented a novel RQTC model with an exact recovery theorem and a new ADMM-based algorithm with a convergence proof. They are feasible and efficient for restoring quaternion tensors from partial and/or corrupted entries provided that the assumptions of Theorem III.1 are satisfied. However, the low-rank condition sometimes does not hold in practical applications. For instance, the quaternion tensor representing a color video is of high rank when the video contains dominant high-frequency information. So we need to improve our model and algorithm further.
III-B LRL-RQTC model
Now we present an improved RQTC model by introducing a low-rank learning method. For convenience of description, we concentrate on the 3-mode quaternion tensor that represents a color video and use engineering language instead of mathematical language.
Since a color video is often of large scale, we set a window for searching low-rank information and denote the part of the quaternion tensor inside this window by adding a subscript. That is, the subscripted symbol denotes a small sub-tensor of the whole quaternion tensor restricted to a fixed searching window.
Suppose we set several windows in total; in other words, we divide the large tensor into smaller ones. A 3-mode quaternion tensor is a stack of horizontal, lateral and frontal slices. We choose a series of overlapping patches of each searching window from the three kinds of slices, respectively, and classify them into classes. Then we vectorize each patch of a class and rearrange the vectors into a quaternion matrix; in other words, each such matrix is a low-rank quaternion matrix generated by one class of similar patches from one type of slice. Define the classification function of similar patches by
(15) |
The function is invertible and the inverse is defined by
(16) |
Now we introduce a learning strategy into the RQTC model and present a new low-rank learning robust quaternion tensor completion (LRL-RQTC) model:
(17) |
where the mapping is the classification transformation (15). Different from the prior NSS-QMC model, which uses a distance function to search for similar patches, here we introduce a new 2DQPCA-based classification function and propose a new LRL-RQTC model with adaptive low-rank learning. The model (17) is a minimization problem consisting of three mutually independent subproblems. For convenience of narrative, we only present the operation on the frontal slices in the following, i.e.
(18) |
The flowchart of the patch-based learning method is shown in Fig. 1. Firstly, we choose a window and extract overlapping patches covering each frontal slice. Then we gather them into a set
(19) |
where the subscripts denote the location of a patch. Secondly, we select non-overlapping exemplar patches. After choosing them, we calculate their eigen-subspace. Then we find a number of patches most similar to the exemplar patches by classification into groups. The entire process is shown in Algorithm 2. For each exemplar patch, the similar patches are stored in a subset of the patch set. With the 2DQPCA technology, we achieve better performance in matching similar patches by learning low-dimensional representations and by forming matrices of small scale and quantity to reduce CPU time. According to the result of the classification from 2DQPCA, the number of similar patches in each group is not fixed. Finally, we stack each selected patch into a quaternion column vector and put the vectors together lexicographically to construct a new quaternion matrix
(20) |
whose columns are the stacked similar patches. Thus, this 2DQPCA-based classification process learns small low-rank quaternion matrices stored in the corresponding set.
Then, we repeat the above learning process on each window of horizontal, lateral and frontal slices of tensor until the low-rank conditions are satisfied.
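A real-valued sketch of the 2DPCA step (standing in for 2DQPCA; the function names and the nearest-exemplar assignment rule are our simplification of Algorithm 2) may clarify how patches are grouped:

```python
import numpy as np

def two_dpca_basis(patches, k):
    """2DPCA: the top eigenvectors of the column covariance of the
    patches span the projection subspace (real stand-in for 2DQPCA)."""
    mean = patches[0] * 0 + sum(patches) / len(patches)
    G = sum((P - mean).T @ (P - mean) for P in patches) / len(patches)
    w, V = np.linalg.eigh(G)
    return V[:, np.argsort(w)[::-1][:k]]   # top-k principal directions

def group_similar(patches, exemplar_ids, V, per_group):
    """Assign each patch to the nearest exemplar in the projected space."""
    feats = [P @ V for P in patches]
    groups = {e: [] for e in exemplar_ids}
    for i, F in enumerate(feats):
        e = min(exemplar_ids, key=lambda j: np.linalg.norm(F - feats[j]))
        groups[e].append(i)
    return {e: ids[:per_group] for e, ids in groups.items()}
```

Comparing patches in the low-dimensional projected space, rather than pixel space, is what reduces the matching cost.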
Now we prove that the 2DQPCA-based classification function (Algorithm 2) generates a matrix of low $\varepsilon$-rank. We refer to the definition of $\varepsilon$-rank in [8].
Definition III.2.
[8] A quaternion matrix is said to be of $\varepsilon$-rank $r$ if it has exactly $r$ singular values bigger than $\varepsilon$.
We can see that the matrix is of low rank when the threshold $\varepsilon$ tends to zero. Based on the above definition, we give the following theorem to prove that the learned matrix is of low $\varepsilon$-rank.
Theorem III.4.
Suppose that each generated by Algorithm 2 satisfies and has the singular value decomposition: , where . Let be the least positive integer such that
(21) |
where the quantity is as above. Then the tail singular values fall below the threshold, and thus the $\varepsilon$-rank of the learned matrix is less than $r$.
Proof.
Referring to [8, Theorem 3.1], it is sufficient to demonstrate that the required closeness bound holds for any two columns of the learned matrix.
From (20), we choose two columns of : and . Then, . ∎
Remark III.4.
Theorem III.4 shows that the matrix generated by the 2DQPCA-based classification function has $\varepsilon$-rank less than $r$. Actually, according to the definition of an approximately low-rank matrix [2]:
where the threshold is a numerical value, the quantities involved are the singular values of the matrix, and the matrix reduces to a low-rank one when the threshold tends to zero.
Computing the singular values of the learned matrix, a simple calculation gives the stated bound.
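The $\varepsilon$-rank in Definition III.2 is straightforward to evaluate numerically; a small sketch (the helper name `eps_rank` is ours) on a real matrix:

```python
import numpy as np

def eps_rank(M, eps):
    """Number of singular values larger than eps (the epsilon-rank)."""
    return int(np.sum(np.linalg.svd(M, compute_uv=False) > eps))
```

For a matrix with singular values 5, 1, and 0.01, the $\varepsilon$-rank is 2 for $\varepsilon=0.5$ and 3 for $\varepsilon=0.001$.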
Next, we propose a new LRL-RQTC algorithm based on the learning scheme for solving (17). Under the ADMM framework, by introducing two auxiliary quaternion tensors, we minimize the following equivalent problem:
(22) |
Actually, solving problem (22) reduces to the QMC method. We then apply the QMC algorithm (4) to solve (22), which can be broken down into independent subprocesses that can be performed in parallel. Under the low-rank assumption, we obtain the low-rank reconstruction and the sparse component, and then a good approximation of the quaternion tensor.
Based on the above analysis, we summarize the proposed LRL-RQTC algorithm and present the pseudo-code in Algorithm 3.
Our LRL-RQTC method successfully recovers color videos with missing entries and/or noise and achieves better performance in both visual and numerical comparisons. The algorithm uses the 'approxQ' function (2), which requires calculating all singular triplets of a quaternion matrix and thus incurs a heavy computational cost at each iteration, so we need to further increase the computation speed. In practice, we can calculate only partial singular triplets and set a threshold value to reduce the time. To overcome these difficulties, we build superior SVD solvers for the RQTC problem: we design an implicitly restarted multi-symplectic block Lanczos bidiagonalization acceleration algorithm for quaternion SVD computation. We present the details of this process in the supplementary material.
IV Numerical Examples
We carry out various experiments in this section to demonstrate the usefulness of our low-rank algorithms for robust quaternion tensor completion (RQTC and LRL-RQTC). Below we conduct experiments on color images and videos from standard datasets, respectively. The noise level denotes the fraction of pixels of each object corrupted with uniformly distributed noise, and the level of missing entries denotes the fraction of pixels outside the randomly selected observation set. All computations were carried out in MATLAB R2020a on a computer with an Intel(R) Xeon(R) CPU E5-2630 @ 2.40GHz processor and 32 GB memory.
Example IV.1.
(The Effect of 2DQPCA Technology in Color Image Inpainting)
In this example, we compare the recovery performance on color images of the LRL-RQTC and NSS-QMC [8] methods. Peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) [22] are used to assess the quality. Several levels of missing entries and noise are considered. The tolerance, the patch size (the same for both methods), and the maximum number of iterations are fixed.
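PSNR, used throughout the experiments, has a simple closed form; a minimal sketch is given below (SSIM is more involved and omitted here):

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images/videos."""
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Identical inputs give infinite PSNR, and a maximal per-pixel error of 255 gives 0 dB; higher values mean better restoration.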
Numerical results are displayed in Table I and restored images are shown in Fig. 2. The best values are highlighted in bold in the table. We find that LRL-RQTC achieves higher values of both PSNR and SSIM in Table I, which means the inpainting effect of LRL-RQTC is better than that of NSS-QMC from a numerical point of view. Let us examine the details in Fig. 3. In the first row, the texture of the pepper is clearer in column (d) than in column (c); in other words, the image restored by LRL-RQTC is better than that by NSS-QMC. In the second row, it is obvious that the edge of the woman's face restored by LRL-RQTC is more stereoscopic and the facial features are clearer than in the images restored by NSS-QMC. Besides, the texture information is also well preserved, such as at the bottom left of the scarf.
Images | Methods | PSNR | SSIM
---|---|---|---
 | NSS-QMC | 33.0979 | 0.9139
 | LRL-RQTC | 33.2769 | 0.9174
 | NSS-QMC | 35.0680 | 0.9442
 | LRL-RQTC | 35.3002 | 0.9507
 | NSS-QMC | 29.3625 | 0.8941
 | LRL-RQTC | 29.6978 | 0.8991
 | NSS-QMC | 32.1652 | 0.9457
 | LRL-RQTC | 33.0635 | 0.9577
Example IV.2.
(Robust Color Video Recovery)
In this example, we compare the proposed RQTC and LRL-RQTC methods with other modern techniques in order to demonstrate the effectiveness and superiority of our methods in robust color video recovery. We choose the color video 'DO01013' in the 'videoSegmentationData' database [3] and use a 3-mode quaternion tensor to represent it. We also compare the performance of the models under different missing-entry and noise levels.
The stopping criterion and suitable parameters are chosen to obtain the best results for each model. We compare RQTC and LRL-RQTC with the following six robust tensor completion methods.
•
t-SVD (Fourier) [15]: Tensor SVD using Fourier transform.
•
TNN [26]: Recover 3-D arrays with a low tubal-rank tensor.
•
OITNN-O [21]: Recover tensors by new tensor norms: Overlapped Orientation Invariant Tubal Nuclear Norm.
•
OITNN-L [21]: Recover tensors by new tensor norms: Latent Orientation Invariant Tubal Nuclear Norm.
•
QMC [10]: Consider each frame of color video as a color image.
•
TNSS-QMC [8]: Use distance function to find similar patches.
For t-SVD (Fourier), TNN, OITNN-O, OITNN-L, QMC, and TNSS-QMC, we strictly follow the parameter settings in the corresponding articles. For LRL-RQTC, we fix the size of each window and only act on the frontal slices; the maximum number of iterations and the patch size are also fixed. For RQTC, a weighted vector is used.
The detailed comparisons on the 'DO01013' dataset are given in Table II. We display the comparative results on ten frames randomly chosen from the color video under two situations. From the last row of Table II, with respect to PSNR and SSIM, the proposed LRL-RQTC outperforms all other competitors in the average recovery results, exceeding the second-best method in average PSNR. In particular, from sub-table (a), the PSNR values of the restored frames can be increased by nearly 1 dB under the given noise and missing-entry levels. From sub-table (b), although on individual frames our LRL-RQTC is not as good as OITNN-L, the average result of our method is higher and the results of OITNN-L are not stable. Furthermore, different from patch-based methods such as TNSS-QMC and LRL-RQTC, our proposed RQTC recovers the color video as a whole. Despite not reaching the highest PSNR and SSIM values, RQTC outperforms t-SVD (Fourier), TNN, OITNN-O, and QMC, which also process the entire video without patching. It is worth noting that RQTC outperforms QMC in PSNR and is only slightly below the patch-based methods. RQTC also handles small details very well, and the global approach is much faster than the patch-based approach, so we believe there is much potential to improve RQTC further.
In Fig. 4 and Fig. 5, we present the visual results on one frame under two conditions. It can be seen that the recovered frames generated by the proposed RQTC and LRL-RQTC contain more details and more closely approximate the colors of the original frames. For instance, observe the detailed features of the restored grass. From the enlarged part (2nd row) of Fig. 4, the color shades and density of the grass in columns (c), (h) and (j) have a clearer structure and look more realistic, but the horse in column (c) is not restored as clearly. This means the video restored by our LRL-RQTC and RQTC is better. Although the PSNR and SSIM values of RQTC are not higher than those of TNSS-QMC, the recovery of certain details is superior. This shows that both RQTC and LRL-RQTC perform well in most cases. From the enlarged part (2nd row) of Fig. 5, the white spot on the horse's head is restored better by LRL-RQTC; it looks brighter and more complete. The proposed LRL-RQTC uses the idea of clustering and finds similar patches adaptively by the learning technology. Patches found by the learning technology form low-rank matrices and ensure a better recovery effect. In particular, it is not necessary to calculate all singular values during the process. As a result, with a proper number of patches, the proposed LRL-RQTC is substantially more efficient.
(1)
Number of | t-SVD (Fourier) | | TNN | | OITNN-O | | OITNN-L | | QMC | | RQTC | | TNSS-QMC | | LRL-RQTC |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
frames | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM |
1 | 32.76 | 0.9039 | 29.87 | 0.8938 | 30.33 | 0.8966 | 38.33 | 0.9764 | 33.51 | 0.9303 | 36.83 | 0.9525 | 38.79 | 0.9709 | 40.22 | 0.9785 |
2 | 31.71 | 0.8888 | 30.50 | 0.9105 | 33.95 | 0.9500 | 39.73 | 0.9799 | 33.63 | 0.9336 | 37.19 | 0.9544 | 39.56 | 0.9759 | 40.94 | 0.9814 |
3 | 32.08 | 0.8957 | 31.09 | 0.9245 | 35.59 | 0.9628 | 39.30 | 0.9780 | 33.84 | 0.9354 | 37.36 | 0.9553 | 40.09 | 0.9776 | 41.35 | 0.9823 |
4 | 32.57 | 0.8982 | 31.02 | 0.9254 | 35.99 | 0.9688 | 39.09 | 0.9793 | 33.83 | 0.9361 | 37.32 | 0.9560 | 39.75 | 0.9769 | 40.92 | 0.9810 |
5 | 32.59 | 0.9025 | 30.43 | 0.9167 | 35.64 | 0.9692 | 38.30 | 0.9778 | 33.86 | 0.9359 | 36.35 | 0.9524 | 39.32 | 0.9745 | 39.84 | 0.9771 |
6 | 32.64 | 0.9014 | 29.72 | 0.9045 | 35.22 | 0.9668 | 37.64 | 0.9763 | 33.76 | 0.9361 | 35.36 | 0.9495 | 37.59 | 0.9670 | 37.97 | 0.9716 |
7 | 31.60 | 0.8895 | 29.46 | 0.8899 | 34.58 | 0.9622 | 37.41 | 0.9765 | 33.85 | 0.9345 | 35.64 | 0.9527 | 38.32 | 0.9715 | 39.17 | 0.9749 |
8 | 32.32 | 0.8967 | 29.73 | 0.8907 | 33.72 | 0.9572 | 36.29 | 0.9732 | 34.16 | 0.9367 | 35.79 | 0.9539 | 39.15 | 0.9751 | 39.93 | 0.9778 |
9 | 32.54 | 0.8997 | 29.41 | 0.8870 | 32.75 | 0.9477 | 36.03 | 0.9725 | 34.09 | 0.9357 | 35.87 | 0.9535 | 39.40 | 0.9758 | 39.99 | 0.9783 |
10 | 32.58 | 0.9029 | 28.37 | 0.8679 | 28.56 | 0.8733 | 34.69 | 0.9683 | 34.18 | 0.9362 | 35.10 | 0.9506 | 38.61 | 0.9733 | 39.12 | 0.9757 |
average | 32.34 | 0.8979 | 29.96 | 0.9011 | 33.63 | 0.9455 | 37.68 | 0.9758 | 33.80 | 0.9351 | 36.28 | 0.9531 | 39.06 | 0.9739 | 39.95 | 0.9778
(2)
Number of | t-SVD (Fourier) | | TNN | | OITNN-O | | OITNN-L | | QMC | | RQTC | | TNSS-QMC | | LRL-RQTC |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
frames | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM |
1 | 31.96 | 0.8927 | 29.79 | 0.8920 | 30.32 | 0.8962 | 37.32 | 0.9662 | 32.94 | 0.9165 | 35.64 | 0.9369 | 37.77 | 0.9657 | 38.32 | 0.9698 |
2 | 31.13 | 0.8788 | 30.42 | 0.9091 | 33.97 | 0.9498 | 39.84 | 0.9801 | 32.99 | 0.9202 | 36.05 | 0.9400 | 38.75 | 0.9729 | 39.40 | 0.9761 |
3 | 31.41 | 0.8834 | 31.03 | 0.9237 | 35.63 | 0.9627 | 39.41 | 0.9881 | 33.10 | 0.9211 | 36.15 | 0.9406 | 39.05 | 0.9737 | 39.82 | 0.9776 |
4 | 31.98 | 0.8912 | 30.98 | 0.9249 | 36.00 | 0.9686 | 39.12 | 0.9794 | 32.95 | 0.9233 | 36.05 | 0.9415 | 38.31 | 0.9724 | 39.18 | 0.9759 |
5 | 31.99 | 0.8907 | 30.41 | 0.9166 | 35.65 | 0.9693 | 38.32 | 0.9780 | 32.83 | 0.9223 | 35.41 | 0.9388 | 37.09 | 0.9666 | 37.45 | 0.9692 |
6 | 31.97 | 0.8895 | 29.74 | 0.9052 | 35.25 | 0.9165 | 36.69 | 0.9666 | 33.00 | 0.9215 | 34.45 | 0.9393 | 37.12 | 0.9618 | 37.30 | 0.9655 |
7 | 31.04 | 0.8743 | 29.47 | 0.8901 | 34.57 | 0.9619 | 37.37 | 0.9663 | 32.99 | 0.9201 | 34.74 | 0.9425 | 37.71 | 0.9652 | 38.18 | 0.9681 |
8 | 31.53 | 0.8854 | 29.69 | 0.8901 | 33.68 | 0.9567 | 36.26 | 0.9630 | 33.08 | 0.9203 | 34.92 | 0.9431 | 38.24 | 0.9683 | 38.74 | 0.9707 |
9 | 31.82 | 0.8905 | 29.37 | 0.8861 | 32.72 | 0.9476 | 35.99 | 0.9622 | 32.82 | 0.9190 | 35.89 | 0.9429 | 38.50 | 0.9694 | 38.89 | 0.9715 |
10 | 32.15 | 0.8947 | 28.33 | 0.8672 | 28.54 | 0.8733 | 34.69 | 0.9586 | 32.86 | 0.9204 | 34.30 | 0.9401 | 37.79 | 0.9665 | 37.98 | 0.9680 |
average | 31.70 | 0.8871 | 29.92 | 0.9005 | 33.62 | 0.9402 | 37.50 | 0.9708 | 32.96 | 0.9204 | 35.36 | 0.9406 | 38.03 | 0.9683 | 38.53 | 0.9712 |


Example IV.3.
(Color Video Completion)
In this example, we deal with the color video completion problem, which involves filling in the missing pixel values of a partly observed video. We assume that the missing pixels are distributed randomly across the three RGB channels.
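To make the sampling setting concrete, the following minimal NumPy sketch (our own illustration, not code from the paper; the function name is ours) draws a Bernoulli observation mask of the same shape as the video, so that entries are dropped independently at random across the three RGB channels:

```python
import numpy as np

def random_missing_mask(shape, missing_ratio, seed=None):
    """Bernoulli observation mask: True marks an observed entry.

    `shape` is the full video shape, e.g. (height, width, 3, n_frames),
    so entries are dropped independently across the three RGB channels.
    """
    rng = np.random.default_rng(seed)
    return rng.random(shape) >= missing_ratio

# Illustrative use on a synthetic 32x32 RGB video with 4 frames,
# keeping roughly 20% of the entries (80% missing).
video = np.ones((32, 32, 3, 4))
mask = random_missing_mask(video.shape, missing_ratio=0.8, seed=0)
observed = np.where(mask, video, 0.0)  # unobserved entries zeroed out
```

The completion algorithms then see only `observed` together with `mask`.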
We compare the proposed RQTC and LRL-RQTC with representative models for color video completion: t-SVD (Fourier)[15], t-SVD (data)[19] (a transform tensor singular value decomposition based on a unitary transform matrix), OITNN-O[21], OITNN-L[21], LRQTC[17] (low-rank quaternion tensor completion), QMC[10], and TNSS-QMC[8].
The color video dataset is also chosen from the ‘videoSegmentationData’ database in [3]. We choose the color videos ‘BR130T’, ‘DO01030’, and ‘DO01013’ of size , denoted as ‘bird’, ‘flower’, and ‘horse’, respectively; each video can be expressed as a 3-mode quaternion tensor. The parameter values of all methods are empirically chosen to produce the best performance and are then fixed in all tests for a fair comparison.
Different missing-entry and noise levels are considered: . In the proposed LRL-RQTC, we fix each window of size and act only on the frontal slices, i.e., . The maximum number of iterations is , and each patch is of size . For RQTC, the weighted vector is .
For quantitative comparison, PSNR and SSIM values on the ‘bird’, ‘flower’, and ‘horse’ videos are reported in Fig. 6, Fig. 9, and Table III. The corresponding visual examples are shown in Figs. 7–8, Figs. 10–11, and Figs. 12–13 for qualitative evaluation. In the four histograms (Fig. 6 and Fig. 9), the PSNR and SSIM bars in earthy yellow on the right-hand side are higher than the others. Table III reports the PSNR and SSIM values of ten randomly chosen frames of the ‘horse’ video for additional detail; the best result is shown in bold and the second-best is underlined. It is clear that the proposed LRL-RQTC improves the PSNR and SSIM values considerably. The PSNR values obtained by LRL-RQTC on the three color videos improve by almost dB over the second-best method and are consistently the highest, demonstrating the proposed method’s superiority over the other approaches presently in use; this implies that LRL-RQTC is the best among the ten methods and is suitable for different types of video. In particular, the PSNR values achieved by LRL-RQTC on the ‘flower’ video increase by roughly dB compared with the second-best method. The visual results also favor the proposed LRL-RQTC: it properly captures the details of each frame, which indicates that using learning technology to find similar patches is effective. In Fig. 7, the frames restored by LRL-RQTC (th column) are clearer and show more details of the bird’s wings as well as the foliage, and the frames recovered by RQTC (th column) also restore details of the original video; both LRL-RQTC and RQTC are superior to the other methods in terms of recovering local features. In Figs. 10–13, the details of the flower petals, the grass, the horses’ tails, and so on, recovered by LRL-RQTC are closer to the original frames. We present a selection of frames and mark regions of interest with red boxes.
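For reference, the PSNR metric used throughout these comparisons follows the standard definition, and SSIM is computed as in [22]. A minimal NumPy sketch of the PSNR evaluation (the function name and test arrays are our own illustration) is:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((np.asarray(reference, dtype=float)
                   - np.asarray(estimate, dtype=float)) ** 2)
    if mse == 0:
        return np.inf
    return 10.0 * np.log10(peak ** 2 / mse)

# Sanity check: a uniform error of 0.1 on a [0, 1] image gives
# MSE = 0.01, i.e. a PSNR of about 20 dB.
clean = np.zeros((8, 8))
noisy = clean + 0.1
```

Higher PSNR and SSIM values indicate recovered frames closer to the ground truth.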
For computing time, we display the results in Table IV. Because TNSS-QMC and our LRL-RQTC are patch-based, the per-patch operations can be parallelized, so we only report the average time per window. From Table IV, we observe that our LRL-RQTC obtains better results without taking too long; its runtime is slightly less than that of OITNN-L, which also achieves a good recovery effect.
Recall that the LRL-RQTC model, designed from a learning perspective, adaptively searches for similar structures and forms a low-rank structure based on the patch idea. This is made feasible by the 2DQPCA-based classification function, which characterizes the low-rank information of the subspace structure after projection. The numerical results indicate that LRL-RQTC significantly improves the ability to find high-dimensional information in a low-dimensional space and to accurately identify the delicate correlations between the observed and unknown values.
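As a rough illustration of this patch-grouping step, the sketch below mimics the classification idea in plain real arithmetic: vectorized patches are projected onto their leading principal directions (a real-valued stand-in for the quaternion 2DQPCA projection) and grouped by nearest centroid, so each group can be stacked into a small tensor that is expected to be approximately low-rank. All names and parameter choices here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def group_similar_patches(patches, n_components=4, n_groups=2, n_iters=10):
    """Group vectorized patches by projecting them onto the leading
    principal directions and clustering the projections (a real-valued,
    k-means-style stand-in for the 2DQPCA-based classification)."""
    X = np.asarray(patches, dtype=float)          # (n_patches, patch_dim)
    Xc = X - X.mean(axis=0)                       # center the patches
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T                  # low-dimensional features
    # Deterministic initialization: spread centroids along the first
    # principal direction, then refine with a few k-means updates.
    order = np.argsort(Z[:, 0])
    init = order[np.linspace(0, len(Z) - 1, n_groups).astype(int)]
    centers = Z[init]
    for _ in range(n_iters):
        labels = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_groups):
            if np.any(labels == k):
                centers[k] = Z[labels == k].mean(axis=0)
    # Each returned group can be stacked into a (near) low-rank tensor.
    return [X[labels == k] for k in range(n_groups)]
```

In the actual method the grouping acts on quaternion-valued patches, and the completion model is then solved on each stacked group independently.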






(1)
Frames | t-SVD (Fourier) | t-SVD (data) | TNN | OITNN-O | OITNN-L | LRQTC | QMC | TNSS-QMC | RQTC | LRL-RQTC
 | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | 29.29 | 0.8817 | 33.80 | 0.9435 | 33.74 | 0.9521 | 31.43 | 0.9016 | 31.38 | 0.9007 | 32.11 | 0.8815 | 29.77 | 0.8253 | 35.21 | 0.9408 | 33.02 | 0.8941 | 37.06 | 0.9599 |
2 | 28.47 | 0.8571 | 33.19 | 0.9339 | 34.72 | 0.9622 | 33.92 | 0.9369 | 33.89 | 0.9364 | 32.10 | 0.8843 | 29.71 | 0.8301 | 35.67 | 0.9453 | 33.18 | 0.8973 | 37.46 | 0.9629 |
3 | 28.89 | 0.8707 | 33.66 | 0.9405 | 35.39 | 0.9661 | 34.59 | 0.9440 | 34.56 | 0.9436 | 32.31 | 0.8851 | 30.09 | 0.8353 | 36.30 | 0.9505 | 33.36 | 0.9000 | 37.68 | 0.9637 |
4 | 29.54 | 0.8800 | 33.95 | 0.9412 | 34.99 | 0.9645 | 34.79 | 0.9485 | 34.75 | 0.9481 | 32.43 | 0.8886 | 30.13 | 0.8380 | 36.35 | 0.9504 | 33.39 | 0.8998 | 37.63 | 0.9644 |
5 | 29.58 | 0.8828 | 33.99 | 0.9429 | 34.18 | 0.9598 | 34.74 | 0.9505 | 34.71 | 0.9501 | 32.29 | 0.8873 | 30.04 | 0.8385 | 35.99 | 0.9489 | 33.20 | 0.8978 | 37.60 | 0.9647 |
6 | 29.29 | 0.8786 | 33.76 | 0.9409 | 33.58 | 0.9539 | 34.60 | 0.9484 | 34.56 | 0.9480 | 35.15 | 0.8795 | 30.11 | 0.8368 | 35.38 | 0.9413 | 32.56 | 0.8812 | 37.27 | 0.9621 |
7 | 28.30 | 0.8563 | 33.03 | 0.9341 | 33.12 | 0.9460 | 34.16 | 0.9451 | 34.12 | 0.9447 | 32.08 | 0.8784 | 30.04 | 0.8337 | 35.16 | 0.9373 | 32.59 | 0.8800 | 37.09 | 0.9593 |
8 | 28.56 | 0.8673 | 33.47 | 0.9400 | 33.04 | 0.9445 | 33.92 | 0.9433 | 33.89 | 0.9429 | 32.26 | 0.8798 | 30.04 | 0.8388 | 35.26 | 0.9375 | 32.69 | 0.8804 | 37.17 | 0.9595 |
9 | 29.13 | 0.8748 | 33.54 | 0.9396 | 33.02 | 0.9438 | 33.60 | 0.9360 | 33.56 | 0.9354 | 32.28 | 0.8800 | 30.02 | 0.8341 | 35.24 | 0.9374 | 32.70 | 0.8816 | 37.22 | 0.9595 |
10 | 28.94 | 0.8752 | 33.54 | 0.9426 | 31.74 | 0.9289 | 30.75 | 0.8940 | 30.71 | 0.8932 | 32.31 | 0.8808 | 30.08 | 0.8356 | 35.29 | 0.9389 | 32.67 | 0.8818 | 37.33 | 0.9601 |
average | 29.04 | 0.8725 | 33.59 | 0.9299 | 33.75 | 0.9522 | 33.65 | 0.9348 | 33.61 | 0.9343 | 32.53 | 0.8808 | 30.00 | 0.8346 | 35.56 | 0.9428 | 32.94 | 0.8894 | 37.35 | 0.9616 |
(2)
Frames | t-SVD (Fourier) | t-SVD (data) | TNN | OITNN-O | OITNN-L | LRQTC | QMC | TNSS-QMC | RQTC | LRL-RQTC
 | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM | PSNR | SSIM
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | 25.99 | 0.7562 | 30.42 | 0.8748 | 32.65 | 0.9373 | 30.28 | 0.8729 | 30.23 | 0.8718 | 31.66 | 0.8642 | 30.17 | 0.8737 | 33.94 | 0.9243 | 31.72 | 0.8560 | 37.16 | 0.9623 |
2 | 25.41 | 0.7251 | 29.93 | 0.8576 | 33.46 | 0.9486 | 32.51 | 0.9127 | 32.46 | 0.9118 | 31.63 | 0.8638 | 29.96 | 0.8317 | 33.65 | 0.9189 | 31.74 | 0.8544 | 36.93 | 0.9595 |
3 | 25.62 | 0.7418 | 30.19 | 0.8669 | 33.98 | 0.9548 | 33.07 | 0.9236 | 33.04 | 0.9230 | 31.72 | 0.8673 | 30.05 | 0.8334 | 33.50 | 0.9197 | 31.93 | 0.8568 | 37.02 | 0.9619 |
4 | 25.95 | 0.7524 | 30.45 | 0.8717 | 33.68 | 0.9536 | 33.27 | 0.9287 | 33.23 | 0.9280 | 31.67 | 0.8671 | 30.00 | 0.8333 | 33.44 | 0.9187 | 31.83 | 0.8556 | 37.03 | 0.9621 |
5 | 25.99 | 0.7572 | 30.54 | 0.8769 | 33.18 | 0.9490 | 33.28 | 0.9309 | 33.24 | 0.9302 | 31.56 | 0.8671 | 30.02 | 0.8372 | 33.29 | 0.9203 | 31.76 | 0.8557 | 36.84 | 0.9626 |
6 | 26.03 | 0.7568 | 30.55 | 0.8751 | 32.59 | 0.8414 | 33.24 | 0.9305 | 33.20 | 0.9300 | 31.65 | 0.8658 | 30.16 | 0.8375 | 33.31 | 0.9175 | 31.82 | 0.8585 | 37.21 | 0.9618 |
7 | 25.31 | 0.7285 | 29.75 | 0.8584 | 32.18 | 0.9335 | 32.87 | 0.9257 | 32.84 | 0.9251 | 31.64 | 0.8626 | 31.69 | 0.8555 | 33.45 | 0.9164 | 31.69 | 0.8555 | 36.95 | 0.9583 |
8 | 25.60 | 0.7402 | 30.28 | 0.8679 | 31.97 | 0.9280 | 32.68 | 0.9222 | 32.64 | 0.9216 | 31.97 | 0.8697 | 30.67 | 0.8406 | 34.05 | 0.9233 | 32.00 | 0.8616 | 37.27 | 0.9610 |
9 | 25.95 | 0.7552 | 30.52 | 0.8741 | 31.85 | 0.9261 | 32.35 | 0.9139 | 32.30 | 0.9129 | 31.97 | 0.8686 | 30.57 | 0.8411 | 34.14 | 0.9240 | 32.01 | 0.8608 | 37.51 | 0.9610 |
10 | 25.96 | 0.7552 | 30.55 | 0.8750 | 30.71 | 0.9094 | 29.68 | 0.8684 | 29.63 | 0.8673 | 31.86 | 0.8662 | 30.14 | 0.8360 | 34.19 | 0.9242 | 31.99 | 0.8601 | 37.35 | 0.9606 |
average | 25.78 | 0.7468 | 30.32 | 0.8698 | 32.63 | 0.9383 | 32.32 | 0.9130 | 32.28 | 0.9122 | 31.73 | 0.8662 | 30.34 | 0.8420 | 33.70 | 0.9208 | 31.85 | 0.8575 | 37.13 | 0.9611 |


Methods | bird video | flower video | horse video
 | (50%,0) | (80%,0) | (80%,0) | (85%,0) | (80%,0) | (85%,0)
---|---|---|---|---|---|---
t-SVD (Fourier) | 8.22 | 9.64 | 6.61 | 6.87 | 6.67 | 6.83
t-SVD (data) | 15.97 | 18.58 | 12.92 | 12.59 | 10.97 | 11.01
TNN | 5.05 | 4.17 | 4.54 | 4.03 | 2.86 | 2.85
OITNN-O | 36.25 | 35.57 | 36.41 | 18.30 | 24.08 | 12.50
OITNN-L | 37.02 | 36.96 | 36.98 | 36.74 | 24.02 | 24.24
QMC | 96.12 | 84.06 | 88.79 | 96.27 | 80.11 | 87.88
LRQTC | 19.61 | 19.61 | 19.51 | 19.49 | 19.52 | 19.47
TNSS-QMC | 15.31 | 16.05 | 14.75 | 14.78 | 13.29 | 12.18
RQTC | 22.74 | 21.98 | 20.65 | 21.23 | 19.67 | 19.63
LRL-RQTC | 26.21 | 24.62 | 25.25 | 26.92 | 23.84 | 22.98
V Conclusion
In this article, we develop a new learning-technology-based robust quaternion tensor completion model, LRL-RQTC. First, we divide the observed large quaternion tensor into smaller quaternion sub-tensors and process these sub-tensors in parallel. Second, we use a 2DQPCA-based classification function to learn the low-rank information of each slice and assemble similar slices into a low-rank quaternion matrix; the RQTC problem is then solved on each quaternion sub-tensor. The proposed technique focuses on keeping as many low-rank correlations among the local characteristics of color videos as feasible, in order to preserve more related information within the framework. A new RQTC model is also proposed and solved within an ADMM-based framework, and we establish quaternion tensor incoherence conditions and an exact recovery theory. Numerical experiments on the established RQTC and LRL-RQTC models demonstrate that the proposed models inpaint a given quaternion tensor with missing and/or corrupted entries better than existing methods, maintaining a low-rank structure for processing natural color videos both effectively and efficiently. In future work, to make the most of the information shared between images and to obtain faster and more efficient algorithms, we will strive to combine learning technology with other advanced techniques for quaternion tensor decomposition.
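As a real-valued illustration of the ADMM solver structure mentioned above (the quaternion algebra, tensor unfoldings, and observation mask of the actual algorithm are omitted, and all parameter choices below are assumptions of ours), one iteration alternates singular value thresholding for the low-rank part with entrywise soft-thresholding for the sparse part:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca_admm(D, lam=None, iters=300, rho=1.05):
    """Split D into a low-rank part L and a sparse part S by an inexact
    augmented Lagrangian / ADMM iteration (real-valued matrix sketch)."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)   # heuristic start value
    L, S, Y = (np.zeros_like(D) for _ in range(3))
    for _ in range(iters):
        L = svt(D - S + Y / mu, 1.0 / mu)           # low-rank update
        S = soft(D - L + Y / mu, lam / mu)          # sparse update
        Y = Y + mu * (D - L - S)                    # dual ascent
        mu = min(mu * rho, 1e7)                     # tighten the penalty
    return L, S
```

With an increasing penalty parameter, this scheme drives the residual D − L − S toward zero while the shrinking thresholds separate the low-rank and sparse components; the quaternion tensor algorithm follows the same alternating pattern over quaternion unfoldings.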
Acknowledgment
This work is supported in part by the National Natural Science Foundation of China under grants 12171210, 12090011, and 11771188; the Major Projects of Universities in Jiangsu Province (No. 21KJA110001); and the Natural Science Foundation of Fujian Province of China under grant 2020J05034.
References
- [1] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers”, Foundations and Trends® in Machine Learning, 3 (2011), pp. 1–122.
- [2] J. Chen and M. Ng, “Color Image Inpainting via Robust Pure Quaternion Matrix Completion: Error Bound and Weighted Loss”, SIAM Journal on Imaging Sciences, vol. 15, no. 3 (2022), pp. 1469–1498.
- [3] K. Fukuchi, K. Miyazato, A. Kimura, S. Takagi, and J. Yamato, “Saliency-based video segmentation with graph cuts and sequentially updated priors”, IEEE International Conference on Multimedia and Expo (2009), pp. 638–641.
- [4] K. Gao and Z. Huang, “Tensor Robust Principal Component Analysis via Tensor Fibered Rank and Minimization”, SIAM Journal on Imaging Sciences, vol. 16, no. 1 (2023), pp. 423–460.
- [5] W. R. Hamilton, “Elements of Quaternions”, Longmans, Green and Co., London (1866).
- [6] B. Huang, C. Mu, D. Goldfarb, and J. Wright, “Provable low-rank tensor recovery”, Optim.-Online, vol. 4252, no. 2, (2014), Art. no. 4252.
- [7] H. Huang, Y. Liu, Z. Long and C. Zhu, “Robust Low-Rank Tensor Ring Completion”, IEEE Transactions on Big Data, (2023), pp. 1–14.
- [8] Z. Jia, Q. Jin, M. Ng, and X. Zhao, “Non-local robust quaternion matrix completion for large-scale color image and video inpainting”, IEEE Transactions on Image Processing, 31 (2022), pp. 3868–3883.
- [9] Z. Jia, S. Ling, and M. Zhao, “Color two-dimensional principal component analysis for face recognition based on quaternion model”, Lecture Notes in Computer Science, 10361 (2017), pp. 177–189.
- [10] Z. Jia, M. Ng, and G. Song, “Robust quaternion matrix completion with applications to image inpainting”, Numerical Linear Algebra with Applications, 26(4) (2019), pp. e2245.
- [11] T. Jiang, X. Zhao, H. Zhang and M. Ng, “Dictionary Learning With Low-Rank Coding Coefficients for Tensor Completion”, IEEE Transactions on Neural Networks and Learning Systems, vol. 34, no. 2 (2023), pp. 932-946.
- [12] T. Kolda and B. Bader, “Tensor decompositions and applications”, SIAM Review, vol. 51, no. 3 (2009), pp. 455–500.
- [13] Y. Li, D. Qiu, and X. Zhang, “Robust Low Transformed Multi-Rank Tensor Completion With Deep Prior Regularization for Multi-Dimensional Image Recovery”, IEEE Transactions on Big Data, (2023), pp. 1–14.
- [14] J. Liu, P. Musialski, P. Wonka, and J. Ye, “Tensor completion for estimating missing values in visual data”, ICCV (2009), pp. 2114–2121.
- [15] C. Lu, J. Feng, Y. Chen, W. Liu, Z. Lin, and S. Yan, “Tensor robust principal component analysis with a new tensor nuclear norm”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 4 (2020), pp. 925–938.
- [16] Y. Luo, X. Zhao, T. Jiang, Y. Chang, M. Ng and C. Li, “Self-Supervised Nonlinear Transform-Based Tensor Nuclear Norm for Multi-Dimensional Image Recovery”, IEEE Transactions on Image Processing, vol. 31 (2022), pp. 3793-3808.
- [17] J. Miao, K. I. Kou, and W. Liu, “Low-rank quaternion tensor completion for recovering color videos and images”, Pattern Recognition, vol. 107 (2020), Art. no. 107505.
- [18] M. Ng, X. Zhang, and X. Zhao, “Patched-tube unitary transform for robust tensor completion”, Pattern Recognition, vol. 100 (2020), Art. no. 107181.
- [19] G. Song, M. Ng, and X. Zhang, “Robust tensor completion using transformed tensor singular value decomposition”, Numerical Linear Algebra with Applications, 27 (3) (2020), e2299.
- [20] L. Tucker, “Some mathematical notes on three-mode factor analysis”, Psychometrika, vol. 31, no. 3 (1966), pp. 279–311.
- [21] A. Wang, Q. Zhao, Z. Jin, et al., “Robust tensor decomposition via orientation invariant tubal nuclear norms”, Science China Technological Sciences, 65 (2022), pp. 1300–1317.
- [22] Z. Wang, A. Bovik, H. Sheikh and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity”, IEEE Transactions on Image Processing vol. 13, no. 4 (2004), pp. 600–612.
- [23] T. Xu, X. Kong, Q. Shen, Y. Chen, and Y. Zhou, “Deep and Low-Rank Quaternion Priors for Color Image Processing”, IEEE Transactions on Circuits and Systems for Video Technology (2022), doi: 10.1109/TCSVT.2022.3233589.
- [24] J. Yang, D. Zhang, A. Frangi, and J. Yang, “Two-dimensional PCA: A new approach to appearance-based face representation and recognition”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1 (2004), pp. 131-137.
- [25] F. Zhang, “Quaternions and matrices of quaternions”, Linear Algebra and its Applications, 251 (1997), pp. 21–57.
- [26] Z. Zhang and S. Aeron, “Exact Tensor Completion Using t-SVD”, IEEE Transactions on Signal Processing, vol. 65, no. 6 (2017), pp. 1511-1526.
- [27] X. Zhao, M. Bai, D. Sun, and L. Zheng, “Robust Tensor Completion: Equivalent Surrogates, Error Bounds, and Algorithms”, SIAM Journal on Imaging Sciences, vol. 15, no. 2 (2022), pp. 625–669.