Deep-RLS: A Model-Inspired Deep Learning Approach to Nonlinear PCA
Abstract
In this work, we consider the application of model-based deep learning to nonlinear principal component analysis (PCA). Inspired by the deep unfolding methodology, we propose a task-based deep learning approach, referred to as Deep-RLS, that unfolds the iterations of the well-known recursive least squares (RLS) algorithm into the layers of a deep neural network in order to perform nonlinear PCA. In particular, we formulate nonlinear PCA for the blind source separation (BSS) problem and show through numerical analysis that Deep-RLS results in a significant improvement in the accuracy of recovering the source signals in BSS when compared to the traditional RLS algorithm.
Index Terms— Deep learning, deep unfolding, blind source separation, principal component analysis.
1 Introduction
Principal component analysis (PCA) is a linear orthogonal transformation that determines the eigenvectors or the basis vectors associated with the covariance matrix of the observed data. The output of the standard PCA is the projection of the data onto the obtained basis vectors, resulting in principal components which are mutually uncorrelated.
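For concreteness, the following minimal NumPy sketch (with arbitrary, illustrative dimensions) computes the basis vectors as eigenvectors of the sample covariance matrix and verifies that the resulting principal components are mutually uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic correlated data: rows are samples, columns are variables
X = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 5))

X -= X.mean(axis=0)               # zero-mean the data
C = X.T @ X / len(X)              # sample covariance matrix
eigvals, V = np.linalg.eigh(C)    # columns of V: orthonormal basis vectors
Z = X @ V                         # principal components (projections)
# the component covariance is diagonal, i.e., components are uncorrelated:
print(np.round(Z.T @ Z / len(Z), 6))
```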
As a related problem, blind source separation (BSS) [1] aims to recover statistically independent source signals from linear mixtures whose composition gains are unknown. In fact, the only knowledge about the source signals to be recovered in BSS is their independence. This is a stronger assumption than what one requires for PCA, i.e., uncorrelated signal sources. This suggests that performing PCA on such a mixture of signals may not successfully separate them, and that higher-order statistics are required to verify and ensure the independence of the source signals.
In light of the above, the notion of nonlinear correlation was introduced to verify the independence of signals more effectively [2]. It was also shown that an extension of PCA to the nonlinear domain (known as nonlinear PCA) can separate independent components of the mixture [2]. In [1], an objective function targeting the fourth-order moments of the distribution of the recovered signals is suggested. It is proved that minimizing this objective function recovers the independent source signals under the assumption that all sources have negative kurtosis, i.e., they are sub-Gaussian. Minimizing such an objective function introduces nonlinearities into the signal recovery and the learning process.
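As a quick numerical illustration of the sub-Gaussianity assumption (not part of the original development), the excess kurtosis of a uniform source is negative ($-1.2$), whereas a Gaussian source has zero excess kurtosis:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
uniform_src = rng.uniform(-1, 1, 100_000)
gaussian_src = rng.standard_normal(100_000)

# Fisher (excess) kurtosis: negative => sub-Gaussian, zero => Gaussian
print(kurtosis(uniform_src))    # ~ -1.2, sub-Gaussian
print(kurtosis(gaussian_src))   # ~ 0.0
```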
In this paper, we propose a deep learning approach to nonlinear PCA that relies on the unfolding of an iterative algorithm into the layers of a deep neural network in order to perform the nonlinear PCA task, i.e., to estimate the source signals for BSS. In particular, we use the classical recursive least squares (RLS) algorithm [3] to construct the network structure and to iteratively estimate the source signals. We experimentally verify the performance of our algorithm and compare it with the traditional RLS algorithm.
2 Problem Formulation
2.1 Nonlinear PCA for Blind Source Separation
We begin by considering the longstanding BSS problem in which $m$ statistically independent signals are linearly mixed to yield possibly noisy combinations

$$\mathbf{x}(t) = \mathbf{A}\,\mathbf{s}(t) + \mathbf{v}(t). \qquad (1)$$
Let $\mathbf{x}(t) \in \mathbb{R}^{n}$ denote the $n$-dimensional data vector made up of the mixture at time $t$ that is exposed to an additive noise $\mathbf{v}(t)$. Given no knowledge of the mixing matrix $\mathbf{A} \in \mathbb{R}^{n \times m}$, the goal is to recover the original source signal vector $\mathbf{s}(t) \in \mathbb{R}^{m}$ from the mixture.
A seminal work in this context is [1], which suggests tuning and updating a separating matrix $\mathbf{W}(t) \in \mathbb{R}^{n \times m}$ until the output

$$\mathbf{y}(t) = \mathbf{W}^{\top}(t)\,\mathbf{x}(t), \qquad (2)$$
where $\mathbf{y}(t) = [y_1(t), \ldots, y_m(t)]^{\top}$, is as close as possible to the source signal vector of interest $\mathbf{s}(t)$. Assuming there exists a larger number of sensors than source signals, i.e., $n \geq m$, we can draw an analogy between the BSS problem and the task of PCA. In a sense, we are aiming to represent the random vector $\mathbf{x}(t)$ in a lower-dimensional orthonormal subspace spanned by the columns of $\mathbf{W}$, which serve as the orthonormal basis vectors. By this analogy, both the BSS and PCA problems can be reduced to minimizing an objective function of the form:
$$J(\mathbf{W}) = \mathbb{E}\left\{ \left\| \mathbf{x}(t) - \mathbf{W}\mathbf{W}^{\top}\mathbf{x}(t) \right\|^{2} \right\}. \qquad (3)$$
Assuming that $\mathbf{x}(t)$ is a zero-mean vector, it can be shown that the solution to the above optimization problem is a matrix $\mathbf{W}$ whose columns are the $m$ dominant eigenvectors of the data covariance matrix [2]. Therefore, the principal components, or the recovered source signals, are mutually uncorrelated. As previously discussed in Section 1, having uncorrelated data is not a sufficient condition to achieve separation. In other words, the solutions to PCA and BSS do not coincide unless we address the higher-order statistics of the output signal $\mathbf{y}(t)$. By introducing nonlinearity into (3), we will implicitly target higher-order statistics of the signal [2]. This nonlinear PCA, which is an extension of the conventional PCA, is made possible by considering the signal recovery objective:
$$J(\mathbf{W}) = \mathbb{E}\left\{ \left\| \mathbf{x}(t) - \mathbf{W}\,g\!\left(\mathbf{W}^{\top}\mathbf{x}(t)\right) \right\|^{2} \right\}, \qquad (4)$$
where $g(\cdot)$ denotes an odd nonlinear function applied element-wise to its vector argument. We invite interested readers to find a proof of the connection between (4) and the higher-order statistics of the source signals in [4].
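As an illustration, the objective in (4) can be evaluated empirically by replacing the expectation with a sample average; the sketch below assumes data stored column-wise and uses $\tanh$ as an example choice of the odd nonlinearity $g(\cdot)$:

```python
import numpy as np

def nonlinear_pca_loss(W, X, g=np.tanh):
    """Empirical version of (4): average ||x - W g(W^T x)||^2 over columns of X.

    X: n x T matrix of observations, W: n x m candidate basis.
    """
    residual = X - W @ g(W.T @ X)
    return np.mean(np.sum(residual ** 2, axis=0))
```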
While PCA is a fairly standardized technique, nonlinear or robust PCA formulations based on (4) tend to be multi-modal with several local optima, so they can be run from various initial points, possibly leading to different "solutions" [5]. In [3], a recursive least squares algorithm for subspace estimation is proposed, which is extended to nonlinear PCA in [6] for solving the BSS problem. We consider the algorithm in [6] as a baseline for developing our deep unfolded framework for nonlinear PCA.
2.2 Recursive Least Squares for Nonlinear PCA
We consider a real-time and adaptive scenario in which, upon arrival of new data $\mathbf{x}(t)$, the signal subspace at time instant $t$ is recursively updated from the subspace at time $t-1$ and the new sample [3]. The separating matrix $\mathbf{W}$ introduced in Section 2.1 is therefore replaced by $\mathbf{W}(t)$ and updated at each time instant $t$. The adaptive algorithm chosen for this task is the well-known recursive least squares (RLS) [7].
In the linear case, by replacing the expectation in (3) with a weighted sum, we can attenuate the impact of older samples, which is reasonable, for instance, whenever one deals with a time-varying environment. In this way, one can make sure the distant past will be forgotten, and the resulting algorithm for minimizing (3) can effectively track the statistical variations of the observed data. By replacing $\mathbf{W}^{\top}(t)\,\mathbf{x}(i)$ with $\mathbf{y}(i) = \mathbf{W}^{\top}(i-1)\,\mathbf{x}(i)$ (the projection approximation of [3]) and using an exponential weighting (governed by a forgetting factor), the loss function in (3) boils down to:
$$J\big(\mathbf{W}(t)\big) = \sum_{i=1}^{t} \beta^{\,t-i} \left\| \mathbf{x}(i) - \mathbf{W}(t)\,\mathbf{y}(i) \right\|^{2}, \qquad (5)$$
with the forgetting factor $\beta$ satisfying $0 < \beta \leq 1$. Note that $\beta = 1$ yields the ordinary method of least squares, in which all samples are weighted equally, while choosing a relatively small $\beta$ makes our estimation rather instantaneous, thus neglecting the past. Therefore, $\beta$ is usually chosen to be less than one, but also rather close to one, for smooth tracking and filtering.
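To make the role of $\beta$ concrete: a sample that is $j$ steps old receives weight $\beta^{j}$, so the effective memory of the estimator is roughly $1/(1-\beta)$ samples. A two-line illustration:

```python
beta = 0.99
weights = [beta ** j for j in range(5)]  # weights on the 5 most recent samples
print(weights)                           # [1.0, 0.99, 0.9801, 0.9703, 0.9606]
print(1 / (1 - beta))                    # effective window: ~100 samples
```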
Note that one may write the gradient of the loss function in (5) in its compact form as
$$\nabla_{\mathbf{W}} J = -2\left[ \mathbf{C}_{xy}(t) - \mathbf{W}(t)\,\mathbf{C}_{yy}(t) \right], \qquad (6)$$
where $\mathbf{C}_{yy}(t)$ and $\mathbf{C}_{xy}(t)$ are the auto-correlation matrix of $\mathbf{y}(t)$,
$$\mathbf{C}_{yy}(t) = \sum_{i=1}^{t} \beta^{\,t-i}\, \mathbf{y}(i)\,\mathbf{y}^{\top}(i), \qquad (7)$$
and the cross-correlation matrix of $\mathbf{x}(t)$ and $\mathbf{y}(t)$,
$$\mathbf{C}_{xy}(t) = \sum_{i=1}^{t} \beta^{\,t-i}\, \mathbf{x}(i)\,\mathbf{y}^{\top}(i), \qquad (8)$$
at the time instance $t$, respectively. Setting the gradient (6) to zero results in the closed-form separating matrix
$$\mathbf{W}(t) = \mathbf{C}_{xy}(t)\,\mathbf{C}_{yy}^{-1}(t). \qquad (9)$$
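For illustration, (7)-(9) can be implemented directly in batch form; the following sketch (with illustrative variable names, not the paper's implementation) accumulates the exponentially weighted correlation matrices and solves (9):

```python
import numpy as np

def batch_separating_matrix(X, Y, beta):
    """Closed-form W(N) = C_xy(N) C_yy(N)^{-1} from (9).

    X: n x N matrix of observations x(t); Y: m x N matrix of outputs y(t).
    """
    n, N = X.shape
    m = Y.shape[0]
    C_yy = np.zeros((m, m))
    C_xy = np.zeros((n, m))
    for t in range(N):
        w = beta ** (N - 1 - t)  # exponential weighting: newest sample gets weight 1
        C_yy += w * np.outer(Y[:, t], Y[:, t])
        C_xy += w * np.outer(X[:, t], Y[:, t])
    return C_xy @ np.linalg.inv(C_yy)
```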
A recursive computation of $\mathbf{W}(t)$ can be achieved using the RLS algorithm [7]. In RLS, the matrix inversion lemma enables a recursive computation of $\mathbf{P}(t) \triangleq \mathbf{C}_{yy}^{-1}(t)$; see the derivations in the Appendix. At each iteration of the RLS algorithm, $\mathbf{W}(t)$ is recursively computed as
$$\mathbf{W}(t) = \mathbf{W}(t-1) + \left[ \mathbf{x}(t) - \mathbf{W}(t-1)\,\mathbf{y}(t) \right] \mathbf{y}^{\top}(t)\,\mathbf{P}(t). \qquad (10)$$
Consequently, the RLS algorithm provides the estimate of the source signals. The steps of the RLS algorithm are summarized in Algorithm 1.
Extending the application of RLS to the nonlinear PCA loss function in (4) is rather straightforward: only step 3 of Algorithm 1 needs to be modified to $\mathbf{y}(t) = g\!\left(\mathbf{W}^{\top}(t-1)\,\mathbf{x}(t)\right)$ in order to meet the nonlinear PCA criterion [3]. In the following, we unfold the iterations of the modified Algorithm 1 for nonlinear PCA onto the layers of a deep neural network, where each layer resembles one iteration of the RLS algorithm. Interestingly, one can fix the complexity budget of the inference framework by fixing the number of layers, and apply the proposed RLS-based method to yield an estimate of the source signals.
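Since Algorithm 1 is not reproduced here, the following sketch shows one plausible NumPy implementation of the modified (nonlinear) RLS iteration consistent with (10); the elementwise $g(\cdot)$ in the computation of $\mathbf{y}(t)$ is the only change relative to the linear case, and the initializations are assumptions:

```python
import numpy as np

def nonlinear_rls(X, m, beta=0.99, g=np.tanh, seed=0):
    """Nonlinear-PCA RLS; a sketch consistent with (10) and the text above.

    X: n x N observation matrix; m: number of sources. Returns W(N) and outputs y(t).
    """
    n, N = X.shape
    rng = np.random.default_rng(seed)
    W = np.linalg.qr(rng.standard_normal((n, m)))[0]  # assumed orthonormal initialization
    P = np.eye(m)                                     # P(0) = C_yy^{-1}(0), assumed identity
    Y = np.zeros((m, N))
    for t in range(N):
        x = X[:, t]
        y = g(W.T @ x)                   # modified step 3: elementwise nonlinearity
        h = P @ y
        k = h / (beta + y @ h)           # gain vector
        P = (P - np.outer(k, h)) / beta  # recursive update of C_yy^{-1}
        W = W + np.outer(x - W @ y, k)   # rank-one update, cf. (10)
        Y[:, t] = y
    return W, Y
```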
3 Deep Unfolded RLS for Nonlinear PCA
Deep neural networks (DNNs) are among the most studied approaches in machine learning owing to their significant potential. They are usually used as a black box without incorporating any knowledge of the system model. Moreover, they are not always practical to use, as they require a large amount of training data and considerable computational resources. Hershey et al. [8] introduced a technique referred to as deep unfolding or unrolling in order to address these issues with DNNs. The deep unfolding technique lays the ground for bridging the gap between well-established iterative signal processing algorithms that are model-based in nature and deep neural networks that are purely data-driven. Specifically, in deep unfolding, each layer of the DNN is designed to resemble one iteration of the original algorithm of interest. Passing the signals through such a deep network is in essence similar to executing the iterative algorithm a finite number of times, determined by the number of layers. In addition, the algorithm parameters (such as the model parameters and the forgetting parameter in the RLS algorithm) are reflected in the parameters and weights of the constructed DNN. The data-driven nature of the emerging deep network thus enables improvements over the original algorithm. Note that the constructed network may be trained using back-propagation, resulting in model parameters that are learned from real-world training datasets. In this way, the trained network can be naturally interpreted as a parameter-optimized algorithm, effectively overcoming the lack of interpretability in most conventional neural networks [9]. In comparison with a generic DNN, the unfolded network has far fewer parameters and therefore requires a more modest amount of training data and computational resources. The deep unfolding technique has been deployed for many signal processing problems and has dramatically improved the convergence rate of state-of-the-art model-based iterative algorithms; see, e.g., [10, 11, 12] and the references therein.
Due to the promise of deep unfolding in addressing the shortcomings of both generic deep learning methods and model-based signal processing algorithms, we are motivated to deploy this technique to improve the recursive least squares solution for nonlinear PCA. As shown in [6], when applied to a linear mixture of source signals (i.e., the BSS problem), the RLS algorithm usually approximates the true source signals well and successfully separates them. However, the number of iterations needed to converge may vary greatly depending on the initial values and the forgetting parameter $\beta$. Inspired by the deep unfolding technique, we introduce Deep-RLS, our deep learning-based framework designed based on the modified iterations of Algorithm 1. More precisely, the dynamics of the $k$-th layer of Deep-RLS are given as:
$$\begin{aligned}
\mathbf{y}_k &= g\!\left(\boldsymbol{\Omega}_k\,\mathbf{x}_k + \mathbf{b}_k\right), && (11a)\\
\mathbf{h}_k &= \mathbf{P}_{k-1}\,\mathbf{y}_k, && (11b)\\
\mathbf{m}_k &= \mathbf{h}_k \big/ \left(\beta_k + \mathbf{y}_k^{\top}\mathbf{h}_k\right), && (11c)\\
\mathbf{P}_k &= \beta_k^{-1}\left(\mathbf{P}_{k-1} - \mathbf{m}_k\,\mathbf{h}_k^{\top}\right), && (11d)\\
\mathbf{e}_k &= \mathbf{x}_k - \mathbf{W}_{k-1}\,\mathbf{y}_k, && (11e)\\
\mathbf{W}_k &= \mathbf{W}_{k-1} + \mathbf{e}_k\,\mathbf{m}_k^{\top}, && (11f)
\end{aligned}$$
where $\mathbf{x}_k$ is the data vector at the time instance $t = k$, $g(\cdot)$ is a nonlinear activation function which can be chosen by considering the distribution of the source signals, $\beta_k$ represents the trainable forgetting parameter, and $\boldsymbol{\Omega}_k$ and $\mathbf{b}_k$ denote the set of trainable weights and biases of the $k$-th layer, respectively. In [5], it was shown that for source signals with a sub-Gaussian distribution, $g(\cdot) = \tanh(\cdot)$ results in convergence of the nonlinear PCA to the true source signals.
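A minimal PyTorch sketch of a single Deep-RLS layer implementing (11a)-(11f) is given below; the parameterization $(\boldsymbol{\Omega}_k, \mathbf{b}_k, \beta_k)$ and the $\tanh$ activation follow the description above, while the initializations are assumptions:

```python
import torch
import torch.nn as nn

class DeepRLSLayer(nn.Module):
    """One unfolded RLS iteration, following (11a)-(11f); a sketch, not the paper's code."""

    def __init__(self, n, m):
        super().__init__()
        self.Omega = nn.Parameter(0.1 * torch.randn(m, n))  # trainable weights
        self.b = nn.Parameter(torch.zeros(m))                # trainable bias
        self.beta = nn.Parameter(torch.tensor(0.99))         # trainable forgetting parameter

    def forward(self, x, W, P):
        y = torch.tanh(self.Omega @ x + self.b)       # (11a), g = tanh (sub-Gaussian sources)
        h = P @ y                                     # (11b)
        m_vec = h / (self.beta + y @ h)               # (11c)
        P = (P - torch.outer(m_vec, h)) / self.beta   # (11d)
        e = x - W @ y                                 # (11e)
        W = W + torch.outer(e, m_vec)                 # (11f)
        return y, W, P
```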
Given $N$ samples of the data vector $\mathbf{x}(t)$, our goal is to optimize the parameters $\boldsymbol{\Theta}$ of the DNN, where
$$\boldsymbol{\Theta} = \left\{ \left(\boldsymbol{\Omega}_k, \mathbf{b}_k, \beta_k\right) \right\}_{k=1}^{N}. \qquad (12)$$
The output $\mathbf{y}_k$ of the $k$-th layer is an approximation of the source signals at the time instance $t = k$. For training the proposed Deep-RLS network, we consider the cumulative MSE loss over the layers. In designing the training procedure, one needs to consider the constraint that the forgetting parameter must satisfy $0 < \beta_k \leq 1$. Hence, in order to impose such a constraint, one can regularize the loss function, ensuring that the network chooses proper weights corresponding to a feasible forgetting parameter at each layer [13]. Accordingly, we define the loss function used for training the proposed architecture as follows:
$$\mathcal{L}(\boldsymbol{\Theta}) = \sum_{k=1}^{N} \left\| \mathbf{s}(k) - \mathbf{y}_k \right\|_{2}^{2} + \lambda \sum_{k=1}^{N} \left[ \mathrm{ReLU}(-\beta_k) + \mathrm{ReLU}(\beta_k - 1) \right], \qquad (13)$$
where $\mathrm{ReLU}(x) = \max(0, x)$ is the well-known rectified linear unit function extensively used in the deep learning literature, and $\lambda > 0$ is a regularization weight. The employed training process is presented in Algorithm 2.
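Since Algorithm 2 is not reproduced here, the following PyTorch sketch illustrates the regularized loss (13); the regularization weight `lam` and the variable names are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def deep_rls_loss(y_hats, sources, betas, lam=1.0):
    """Cumulative MSE over layers plus the feasibility penalty on each beta_k.

    y_hats, sources: lists of per-layer outputs y_k and targets s(k);
    betas: list of per-layer forgetting parameters beta_k.
    """
    mse = sum(F.mse_loss(y_k, s_k, reduction="sum")
              for y_k, s_k in zip(y_hats, sources))
    # ReLU(-beta) + ReLU(beta - 1) vanishes exactly when 0 <= beta <= 1
    penalty = sum(F.relu(-b) + F.relu(b - 1.0) for b in betas)
    return mse + lam * penalty
```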
4 Numerical Results
In this section, we demonstrate the performance of the proposed Deep-RLS for nonlinear PCA in the case of blind source separation. The proposed framework was implemented using the PyTorch library [14] and trained with the Adam stochastic optimizer [15].
[Fig. 1: Average MSE of the proposed Deep-RLS network versus the RLS algorithm for a varying number of layers/iterations.]

[Fig. 2: Average MSE of the proposed Deep-RLS network versus the RLS algorithm for a growing number of source signals.]
The training was performed on data generated via the following model. For the time interval $t \in \{1, \ldots, N\}$, each element of the source vector $\mathbf{s}(t)$ is generated from a sub-Gaussian distribution. For data generation purposes, we have assumed the source signals to be i.i.d. and uniformly distributed. The mixing matrix $\mathbf{A}$ is assumed to be fixed and is generated once according to a Normal distribution. We performed the training of the proposed Deep-RLS using a batch learning process with a batch size of 40, with separate training and test sets. We used the average mean-square error (MSE) between the recovered and the true source signals as the performance metric. In Fig. 1, the performance of RLS implemented with a fixed forgetting factor at different numbers of iterations is compared with that of the Deep-RLS network trained with the same number of layers. It can be observed that the average MSE obtained using the proposed architecture is much lower than that of RLS, even with a smaller number of layers/iterations. Fig. 2 demonstrates the performance of the proposed Deep-RLS network and the original RLS algorithm in terms of the average MSE for a growing number of source signals $m$. It can be observed from both Figs. 1 and 2 that the proposed method achieves far superior performance compared to the original RLS algorithm.
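For reference, the data-generation model described above can be sketched as follows (the uniform support, the dimensions, and the noiseless mixtures are placeholders, since the exact experimental values are not recoverable here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, N = 16, 4, 500                  # sensors, sources, time samples (placeholders)
S = rng.uniform(-1, 1, size=(m, N))   # i.i.d. uniform (sub-Gaussian) sources s(t)
A = rng.standard_normal((n, m))       # fixed mixing matrix, drawn once from a Normal
X = A @ S                             # mixtures x(t) = A s(t)
```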
5 Conclusion
We considered the application of model-based machine learning, and specifically the deep unfolding technique, for nonlinear PCA. A deep network based on the well-known RLS algorithm, which we call Deep-RLS, was proposed that outperforms its traditional model-based counterparts.
Appendix A Appendix: The RLS Recursive Formula
Let $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$, and $\mathbf{D}$ be matrices of conformable dimensions, with $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{D}$ positive definite, so that $\mathbf{A} = \mathbf{B}^{-1} + \mathbf{C}\,\mathbf{D}^{-1}\,\mathbf{C}^{\top}$. Using the matrix inversion lemma, the inverse of $\mathbf{A}$ can be expressed as
$$\mathbf{A}^{-1} = \mathbf{B} - \mathbf{B}\,\mathbf{C}\left( \mathbf{D} + \mathbf{C}^{\top}\mathbf{B}\,\mathbf{C} \right)^{-1} \mathbf{C}^{\top}\mathbf{B}. \qquad (14)$$
Now, assuming that the auto-correlation matrix $\mathbf{C}_{yy}(t)$ is positive definite (and thus nonsingular), by choosing $\mathbf{A} = \mathbf{C}_{yy}(t)$, $\mathbf{B}^{-1} = \beta\,\mathbf{C}_{yy}(t-1)$, $\mathbf{C} = \mathbf{y}(t)$, and $\mathbf{D} = 1$, one can compute $\mathbf{P}(t) = \mathbf{C}_{yy}^{-1}(t)$ recursively, as used in (10).
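As a quick sanity check of (14), the identity can be verified numerically for random positive definite matrices (a standalone sketch, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4
M = rng.standard_normal((k, k))
B = M @ M.T + np.eye(k)             # positive definite B
C = rng.standard_normal((k, 2))
D = np.eye(2)
A = np.linalg.inv(B) + C @ np.linalg.inv(D) @ C.T

lhs = np.linalg.inv(A)
rhs = B - B @ C @ np.linalg.inv(D + C.T @ B @ C) @ C.T @ B
print(np.allclose(lhs, rhs))        # True
```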
References
- [1] Jean-François Cardoso and Beate H. Laheld, “Equivariant adaptive source separation,” IEEE Transactions on Signal Processing, vol. 44, no. 12, pp. 3017–3030, 1996.
- [2] João Marcos Travassos Romano, Romis Attux, Charles Casimiro Cavalcante, and Ricardo Suyama, Unsupervised signal processing: channel equalization and source separation, CRC Press, 2018.
- [3] Bin Yang, “Projection approximation subspace tracking,” IEEE Transactions on Signal Processing, vol. 43, no. 1, pp. 95–107, 1995.
- [4] Petteri Pajunen and Juha Karhunen, “Least-squares methods for blind source separation based on nonlinear PCA,” International Journal of Neural Systems, vol. 8, no. 05n06, pp. 601–612, 1997.
- [5] Juha Karhunen, Erkki Oja, Liuyue Wang, Ricardo Vigario, and Jyrki Joutsensalo, “A class of neural networks for independent component analysis,” IEEE Transactions on Neural Networks, vol. 8, no. 3, pp. 486–504, 1997.
- [6] Juha Karhunen and Petteri Pajunen, “Blind source separation using least-squares type adaptive algorithms,” in 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing. IEEE, 1997, vol. 4, pp. 3361–3364.
- [7] Simon Haykin, Adaptive Filter Theory, Pearson, 2014.
- [8] John R Hershey, Jonathan Le Roux, and Felix Weninger, “Deep unfolding: Model-based inspiration of novel deep architectures,” arXiv preprint arXiv:1409.2574, 2014.
- [9] Vishal Monga, Yuelong Li, and Yonina C Eldar, “Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing,” arXiv preprint arXiv:1912.10557, 2019.
- [10] Shahin Khobahi, Naveed Naimipour, Mojtaba Soltanalian, and Yonina C Eldar, “Deep signal recovery with one-bit quantization,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 2987–2991.
- [11] Shahin Khobahi, Arindam Bose, and Mojtaba Soltanalian, “Deep radar waveform design for efficient automotive radar sensing,” in 2020 IEEE 11th Sensor Array and Multichannel Signal Processing Workshop (SAM). IEEE, 2020, pp. 1–5.
- [12] Oren Solomon, Regev Cohen, Yi Zhang, Yi Yang, Qiong He, Jianwen Luo, Ruud J. G. van Sloun, and Yonina C. Eldar, “Deep unfolded robust PCA with application to clutter suppression in ultrasound,” IEEE Transactions on Medical Imaging, vol. 39, no. 4, pp. 1051–1063, 2019.
- [13] Shahin Khobahi and Mojtaba Soltanalian, “Model-aware deep architectures for one-bit compressive variational autoencoding,” arXiv preprint arXiv:1911.12410, 2019.
- [14] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala, “Pytorch: An imperative style, high-performance deep learning library,” in Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, Eds., pp. 8024–8035. Curran Associates, Inc., 2019.
- [15] Diederik P Kingma and Jimmy Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.