Dynamic Compressive Sensing based on RLS for Underwater Acoustic Communications
Abstract
Sparse structures are widely recognized and exploited in channel estimation. Two typical mechanisms, namely the proportionate updating (PU) and zero-attracting (ZA) techniques, achieve better performance, but their computational complexity is higher than that of non-sparse counterparts. In this paper, we propose a dynamic compressive sensing (DCS) technique based on the recursive least squares (RLS) algorithm which simultaneously achieves improved performance and reduced computational complexity. Specifically, we develop the sparse adaptive subspace pursuit-improved RLS (SpAdSP-IRLS) algorithm by updating only the sparse structure in the IRLS to track significant coefficients. The complexity of the SpAdSP-IRLS algorithm is successfully reduced below that of the standard RLS, whose order is $\mathcal{O}(N^{2})$, by confining the coefficient update to the support set. Here, $N$ represents the length of the channel, and $K$ represents the size of the support set. Our experiments on both synthetic and real data show the superiority of the proposed SpAdSP-IRLS, even though only $K$ of the $N$ elements are updated in the channel estimation.
I Introduction
The sparse structure of channels, such as those in digital TV transmission and underwater acoustic (UWA) communications, has been widely recognized [1, 2, 3, 4, 5, 6, 7], motivating the design of sparse adaptive filtering algorithms [8, 9, 10, 11, 12, 13, 14, 15, 16]. The proportionate updating (PU) and zero-attracting (ZA) principles have mainly been utilized in existing algorithms. The ZA-type algorithms were developed by adopting a regularized cost function to attract inactive taps to zero in strictly sparse systems [8, 9, 10, 11], while, for systems which are not strictly sparse but possess a relatively sparse (nonuniform) structure, the PU-type algorithms achieve better performance [12, 13, 14, 15, 16]. With the advancement of sparse adaptive filtering algorithms, the investigation of channel estimation in UWA communications has resurged [17, 18, 19].
However, compared with their non-sparse counterparts, sparse adaptive algorithms based on the two mechanisms mentioned above achieve improved performance at the cost of high complexity. The reason is that these algorithms still need to update all elements rather than only a support set. Recently, there has been significant interest in dynamic compressive sensing (DCS) techniques that track only the positions of significant taps and update them by exploiting sparsity. Different from traditional compressive sensing (CS) methods, DCS operates in an adjustable batch mode and is hence suitable for time-varying environments.
The algorithms for DCS can be classified into three categories: dynamic convex optimization methods [20], dynamic approximate Bayesian approaches [21], and dynamic greedy algorithms [22, 23, 24, 25, 26]. Compared to the first two categories, dynamic greedy algorithms have lower computational complexity, making them more suitable for online operation. In [22], the inherently batch-mode compressive sampling matching pursuit (CoSaMP) method was successfully turned into a DCS algorithm, resulting in the sparse adaptive orthogonal matching pursuit (SpAdOMP) method. To be specific, the SpAdOMP method replaced the least squares (LS) solver in CoSaMP with the normalized least mean square (NLMS) adaptive filtering algorithm. However, colored input signals can significantly degrade its convergence speed. Recently, we proposed the SpAdOMP-affine projection algorithm (SpAdOMP-APA) [25] and the sparse adaptive subspace pursuit-improved proportionate APA (SpAdSP-IPAPA) [26]. Compared with the NLMS-type greedy algorithms, they achieve faster convergence and better performance.
Although first-order adaptive algorithms, such as the NLMS and the APA, have been extensively studied, their linear convergence is often slow in practice, necessitating a larger amount of training data. To further improve the convergence rate of dynamic greedy algorithms, we explore a typical second-order adaptive algorithm, the recursive least squares (RLS) algorithm, whose convergence rate is super-linear. In this paper, we propose the sparse adaptive subspace pursuit-improved recursive least squares (SpAdSP-IRLS) algorithm, whose complexity is lower than that of the standard RLS algorithm. When the length of the channel is $N$ and the size of the support set is $K$, the SpAdSP-IRLS is cheaper per update than the standard RLS, whose complexity is of order $\mathcal{O}(N^{2})$, since the coefficient update is confined to the support set. Moreover, the proposed SpAdSP-IRLS can achieve improved performance, since updating only the support set further increases the convergence rate and suppresses the noise in the sparse weights. Experimental results corroborate the superiority of the SpAdSP-IRLS in UWA channel estimation.
Notation: Throughout the paper, we use bold uppercase (lowercase) letters to denote matrices, $\mathbf{A}$, and vectors, $\mathbf{a}$, respectively. An italic letter, $a$, represents a scalar quantity. The superscripts $(\cdot)^{T}$, $(\cdot)^{*}$, and $(\cdot)^{H}$ denote the transpose, complex conjugate, and Hermitian transpose operators, respectively. $\mathrm{supp}_{K}(\mathbf{a})$ denotes the support set corresponding to the $K$ largest components in the vector $\mathbf{a}$. $\mathrm{Im}\{a\}$ takes the imaginary part of a complex value $a$. $\mathbf{I}_{N}$ is an identity matrix of size $N \times N$.
II Dynamic Compressive Sensing Based Channel Estimation
For UWA communication, the received signal at time $n$ is given as

$$y(n) = e^{j\theta(n)} \sum_{l=0}^{N-1} h_{l}(n)\, x(n-l) + v(n), \quad (1)$$

where $h_{l}(n)$ is the $l$-th channel tap, $\theta(n)$ is the phase rotation accounting for the Doppler effect of the UWA channel [27], $x(n)$ is the transmitted symbol, and $v(n)$ denotes an additive noise. Define the channel vector at time $n$ as $\mathbf{h}(n) = [h_{0}(n), h_{1}(n), \ldots, h_{N-1}(n)]^{T}$ and the input vector as $\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-N+1)]^{T}$.
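To make the model concrete, the following minimal Python sketch generates received samples according to (1) under simplifying assumptions: a time-invariant sparse channel, a constant phase rotation standing in for the Doppler effect, and a QPSK alphabet. All parameter values (channel length, active tap indices, SNR) are illustrative stand-ins, not the settings used in the experiments below.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64                         # channel length (illustrative)
active = [3, 17, 40]           # indices of the few significant taps (assumed)
h = np.zeros(N, dtype=complex)
h[active] = rng.standard_normal(3) + 1j * rng.standard_normal(3)

M = 1000                       # number of received samples
x = (rng.choice([-1.0, 1.0], M) + 1j * rng.choice([-1.0, 1.0], M)) / np.sqrt(2)
theta = 0.1                    # fixed phase rotation (stand-in for Doppler)

y = np.zeros(M, dtype=complex)
for n in range(N - 1, M):
    x_vec = x[n - N + 1:n + 1][::-1]         # [x(n), x(n-1), ..., x(n-N+1)]
    y[n] = np.exp(1j * theta) * (h @ x_vec)  # noiseless part of (1)

snr_db = 20
noise_var = np.mean(np.abs(y) ** 2) / 10 ** (snr_db / 10)
y += np.sqrt(noise_var / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
```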
II-A Dynamic Tracking of the Support Set
Next, we detail the dynamic tracking of the SpAdSP-IRLS algorithm. Define the proxy signal for the proposed algorithm as

$$\mathbf{p}(n) = \lambda\, \mathbf{p}(n-1) + \mathbf{x}(n)\, e^{*}(n), \quad (2)$$
where $\lambda$ is a forgetting factor. In time-invariant environments, $\lambda$ shall be set to 1. $\hat{\theta}(n)$ is an estimate of the phase rotation, produced by a second-order phase-locked loop (PLL) [28]. The symbol estimation error $e(n)$ at time $n$ is denoted as

$$e(n) = y(n)\, e^{-j\hat{\theta}(n)} - \hat{\mathbf{h}}^{H}(n-1)\, \mathbf{x}(n), \quad (3)$$
where $\hat{\mathbf{h}}(n)$ is the applied estimator vector, obtained from the intermediate estimator vector $\tilde{\mathbf{h}}(n)$ as

$$\hat{h}_{l}(n) = \begin{cases} \tilde{h}_{l}(n), & l \in T, \\ 0, & l \in T^{c}, \end{cases} \quad (4)$$

where $T$ of size $K$ is the support set at time $n$. Accordingly, its complementary set $T^{c}$ consists of $N-K$ elements. For brevity, time indices are dropped from the notations of all four sets $T$, $T^{c}$, $\tilde{T}$, and $\tilde{T}^{c}$.
Based on the proxy $\mathbf{p}(n)$, the support set for the intermediate estimator vector, $\tilde{T}$, is updated as

$$\tilde{T} = T \cup \mathrm{supp}_{K}(\mathbf{p}(n)), \quad (5)$$

where, according to the SP algorithm [29], $\mathrm{supp}_{K}(\mathbf{p}(n))$ represents the support set corresponding to the $K$ largest components in $\mathbf{p}(n)$. Again, the time index has been dropped in $\tilde{T}$ and $\tilde{T}^{c}$ for notational simplicity.
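As a hedged illustration of the two support operations, the Python sketch below implements the pruning in (4) and the merge in (5). The function names and the use of `numpy` are our own choices for exposition, not part of the original algorithm description.

```python
import numpy as np

def prune_to_support(h_tilde, K):
    """Pruning (4): keep the K largest-magnitude entries of the intermediate
    estimator, zero the rest; return the applied estimator and its support T."""
    T = np.argsort(np.abs(h_tilde))[-K:]       # supp_K(h_tilde)
    h_hat = np.zeros_like(h_tilde)
    h_hat[T] = h_tilde[T]
    return h_hat, T

def merge_support(T, proxy, K):
    """Merge (5): union of the current support with the indices of the K
    largest proxy components, as in the subspace pursuit algorithm [29]."""
    return np.union1d(T, np.argsort(np.abs(proxy))[-K:])   # at most 2K indices
```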
II-B Adaptive Updating of the Coefficients
In the SpAdSP-IRLS algorithm, the IRLS recursion is adopted to track the estimator coefficients. To distinguish it from the standard RLS algorithm, we note that a step size $\mu$ is introduced into the updating equation of the IRLS to improve performance in practical implementations.
Define the a priori error as

$$\xi(n) = y(n)\, e^{-j\hat{\theta}(n)} - \tilde{\mathbf{h}}_{\tilde{T}}^{H}(n-1)\, \mathbf{x}_{\tilde{T}}(n), \quad (6)$$

where $\mathbf{x}_{\tilde{T}}(n)$ and $\tilde{\mathbf{h}}_{\tilde{T}}(n-1)$ are, respectively, the subvectors of $\mathbf{x}(n)$ and $\tilde{\mathbf{h}}(n-1)$ restricted to the support set $\tilde{T}$. The update of the estimator vector over its support set is then

$$\tilde{\mathbf{h}}_{\tilde{T}}(n) = \tilde{\mathbf{h}}_{\tilde{T}}(n-1) + \mu\, \mathbf{k}_{\tilde{T}}(n)\, \xi^{*}(n), \quad (7)$$

where $\mu$ is the step size introduced into the updating equation of the traditional RLS algorithm to better balance convergence and steady-state error. In addition, the Kalman gain subvector $\mathbf{k}_{\tilde{T}}(n)$ collects the entries of $\mathbf{k}(n)$ corresponding to the support set $\tilde{T}$, with

$$\mathbf{k}(n) = \frac{\mathbf{P}(n-1)\, \mathbf{u}(n)}{\lambda + \mathbf{u}^{H}(n)\, \mathbf{P}(n-1)\, \mathbf{u}(n)}, \quad (8)$$

where $\lambda$ denotes a forgetting factor ranging in $(0, 1]$. The masked input is $\mathbf{u}(n) = \mathbf{G}\, \mathbf{x}(n)$, where the diagonal matrix $\mathbf{G}$ is designed so that the elements of $\mathbf{u}(n)$ in the support set $\tilde{T}$ are the same as those of $\mathbf{x}(n)$ and the rest of them are $0$. The matrix $\mathbf{P}(n)$ is the inverse of the input covariance matrix, which can be iteratively computed as

$$\mathbf{P}(n) = \lambda^{-1}\left[\mathbf{P}(n-1) - \mathbf{k}(n)\, \mathbf{u}^{H}(n)\, \mathbf{P}(n-1)\right], \quad (9)$$

initialized as

$$\mathbf{P}(0) = \delta^{-1}\, \mathbf{I}_{N}, \quad (10)$$

with $\delta$ a small positive regularization constant.
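A minimal sketch of one coefficient update following (6)-(10) is given below, assuming the masked-input reading of (8)-(9) above. The variable names, the default values of `lam` and `mu`, and the in-place update style are our own illustrative choices.

```python
import numpy as np

def irls_update(h_tilde, P, x_vec, y_rot, T_tilde, lam=0.999, mu=1.0):
    """One IRLS coefficient update over the merged support T_tilde.
    y_rot is the phase-compensated received sample y(n) e^{-j theta_hat(n)}."""
    u = np.zeros_like(x_vec)
    u[T_tilde] = x_vec[T_tilde]                  # u = G x with a 0/1 mask, cf. (8)
    Pu = P @ u
    k = Pu / (lam + np.conj(u) @ Pu)             # full Kalman gain (8)
    xi = y_rot - np.conj(h_tilde[T_tilde]) @ x_vec[T_tilde]   # a priori error (6)
    h_tilde[T_tilde] += mu * k[T_tilde] * np.conj(xi)         # support update (7)
    P[:] = (P - np.outer(k, np.conj(u) @ P)) / lam            # inverse covariance (9)
    return h_tilde, P
```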
In the end, the phase update of the second-order PLL is

$$\hat{\theta}(n+1) = \hat{\theta}(n) + K_{f_1}\, \Phi(n) + K_{f_2} \sum_{i=0}^{n} \Phi(i), \quad (11)$$

where $\Phi(n) = \mathrm{Im}\{ y(n)\, \hat{y}^{*}(n) \}$ is the phase-detector output, $\hat{y}(n) = \hat{\mathbf{h}}^{H}(n)\, \mathbf{x}(n)\, e^{j\hat{\theta}(n)}$ is the reconstructed received sample, and $K_{f_1}$ and $K_{f_2}$ are the proportional and integral tracking constants of the PLL [28].
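The following short sketch shows one way to realize the second-order PLL recursion in (11). The class name, the gain values, and the exact phase-detector normalization are illustrative assumptions in the spirit of [28], not the trial's actual settings.

```python
class SecondOrderPLL:
    """Second-order PLL: proportional + integral correction of the phase."""

    def __init__(self, kf1=1e-3, kf2=1e-4):
        self.kf1, self.kf2 = kf1, kf2   # tracking constants (assumed values)
        self.theta = 0.0                # current phase estimate theta_hat(n)
        self.phi_sum = 0.0              # running sum of the detector output

    def step(self, y_n, y_hat):
        """Advance theta_hat by one sample given y(n) and its reconstruction."""
        phi = (y_n * y_hat.conjugate()).imag   # Phi(n) = Im{y(n) y_hat*(n)}
        self.phi_sum += phi
        self.theta += self.kf1 * phi + self.kf2 * self.phi_sum
        return self.theta
```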
The implementation of the SpAdSP-IRLS is finally summarized in Algorithm 1. Last, a complexity comparison between the RLS and SpAdSP-IRLS algorithms in terms of complex multiplications (CMs) is shown in Table I. Clearly, the comparison between the two algorithms depends on the choice of $K$. Moreover, we note that the dominant complexity of the SpAdSP-IRLS stems from (9), since $\mathbf{k}(n)$ and $\mathbf{P}(n-1)$ are a full vector and a full matrix that do not exploit the support set. We leave the exploration of lower-complexity methods for future work.
Algorithms | Complexity (in terms of CMs)
---|---
RLS | on the order of $N^{2}$ per update
SpAdSP-IRLS | on the order of $N^{2}$ per update, dominated by (9); the remaining operations scale with the support size $K$
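Since the listing of Algorithm 1 is not reproduced here, the sketch below assembles the helper functions from the previous sections into one SpAdSP-IRLS iteration. It is a hedged reading of the update order (error, proxy, merge, IRLS update, prune), not the exact listing; `spadsp_irls_step` and its defaults are our own names.

```python
import numpy as np

def spadsp_irls_step(h_tilde, P, T, proxy, x_vec, y_n, theta_hat, K,
                     lam=0.999, mu=1.0):
    """One SpAdSP-IRLS iteration built from prune_to_support, merge_support,
    and irls_update defined above."""
    y_rot = y_n * np.exp(-1j * theta_hat)          # phase-compensated sample
    e = y_rot - np.conj(h_tilde[T]) @ x_vec[T]     # error (3); h_hat = h_tilde on T
    proxy = lam * proxy + x_vec * np.conj(e)       # proxy recursion (2)
    T_tilde = merge_support(T, proxy, K)           # support merge (5)
    h_tilde, P = irls_update(h_tilde, P, x_vec, y_rot, T_tilde, lam, mu)
    h_hat, T = prune_to_support(h_tilde, K)        # pruning (4)
    return h_tilde, h_hat, P, T, proxy
```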
III Experimental Results
To verify the performance gain of the proposed SpAdSP-IRLS algorithm, both simulated and at-sea system-identification experiments were conducted. Before presenting the results, we first define two special cases: (1) the SpAdSP-IRLS becomes the IRLS when $K = N$, i.e., when all taps are updated; (2) the SpAdSP-IRLS reduces to the SpAdSP-RLS when $\mu = 1$, i.e., without the introduced step size.
III-A Synthetic data
In the first experiment, we considered a sparse system in which only a few of the coefficients were nonzero. The system input, $x(n)$, was a zero-mean white Gaussian process. The additive noise was also zero-mean white Gaussian, with its variance set according to the signal-to-noise ratio (SNR). All considered algorithms, including the RLS and the proposed SpAdSP-IRLS, shared the same forgetting factor $\lambda$; the SpAdSP-IRLS additionally used the step size $\mu$ and the support size $K$. The results are shown in Fig. 1, where all curves were obtained by averaging over 1000 independent trials. Obviously, the proposed SpAdSP-IRLS algorithm outperforms the RLS even though only a size-$K$ support set is updated. In addition, we notice that the smaller $K$ is, the larger the steady-state gain of the SpAdSP-IRLS over the RLS.
[Fig. 1: learning curves of the RLS and the proposed SpAdSP-IRLS on the synthetic sparse system.]
[Fig. 2: performance of the RLS and the SpAdSP-IRLS under different SNRs.]
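For readers who wish to reproduce a comparable curve, the sketch below wires the per-iteration step above into a small Monte Carlo loop. The system size, support size, SNR, and initialization are illustrative stand-ins for the (unspecified) settings of Fig. 1; the received samples follow the Hermitian convention $y(n) = \mathbf{h}^{H}\mathbf{x}(n)$ of (3) and (6), so the estimator converges to $\mathbf{h}$.

```python
import numpy as np

def run_trial(N=64, K=4, M=2000, snr_db=20, lam=0.999, mu=1.0, seed=0):
    """One synthetic trial: returns the squared channel-estimation error
    per sample, suitable for averaging over independent trials."""
    rng = np.random.default_rng(seed)
    h = np.zeros(N, dtype=complex)
    idx = rng.choice(N, K, replace=False)            # sparse ground truth
    h[idx] = rng.standard_normal(K) + 1j * rng.standard_normal(K)

    x = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    noise_var = np.linalg.norm(h) ** 2 / 10 ** (snr_db / 10)

    h_tilde = np.zeros(N, dtype=complex)
    P = np.eye(N, dtype=complex) * 100.0             # P(0) = delta^{-1} I_N
    T = np.arange(K)                                 # arbitrary initial support
    proxy = np.zeros(N, dtype=complex)
    msd = np.zeros(M)
    for n in range(N - 1, M):
        x_vec = x[n - N + 1:n + 1][::-1]
        y_n = np.conj(h) @ x_vec + np.sqrt(noise_var / 2) * (
            rng.standard_normal() + 1j * rng.standard_normal())
        h_tilde, h_hat, P, T, proxy = spadsp_irls_step(
            h_tilde, P, T, proxy, x_vec, y_n, theta_hat=0.0, K=K, lam=lam, mu=mu)
        msd[n] = np.linalg.norm(h - h_hat) ** 2
    return msd
```

Averaging `run_trial` over many seeds approximates the 1000-trial curves of Fig. 1.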
In the second experiment, the performance of the RLS and the proposed SpAdSP-IRLS under different SNRs is shown in Fig. 2. The SpAdSP-IRLS achieved better performance than the RLS in terms of both convergence and steady-state error for all considered SNRs. It also shows that the parameter choice is not sensitive to the SNR.
III-B Real data
In this subsection, we evaluate the performance of the proposed SpAdSP-IRLS via real-world experimental data collected in a shallow-water acoustic communication trial. The experiment was performed off the coastline of Xiamen, China, in 2019. The carrier frequency was 16 kHz and the symbol period was 0.25 ms. The RLS, IRLS, SpAdSP-RLS, and SpAdSP-IRLS were employed for adaptive channel estimation. All algorithms shared the same forgetting factor $\lambda$; the SpAdSP-type algorithms shared the same support size $K$, and the IRLS and SpAdSP-IRLS shared the same step size $\mu$. The second-order PLL used fixed tracking constants $K_{f_1}$ and $K_{f_2}$.
[Fig. 3: snapshots of the channels estimated by the RLS, IRLS, SpAdSP-RLS, and SpAdSP-IRLS.]
[Fig. 4: time evolution of the position and magnitude of the SpAdSP-IRLS coefficients.]
[Fig. 5: channel-estimation performance comparison on the at-sea data.]
[Fig. 6: steady-state error of the SpAdSP-IRLS under different support sizes $K$.]
In Fig. 3, snapshots of the estimated channels are shown. We notice that the channel exhibits a sparse structure and that the SpAdSP-IRLS produced the sparsest estimate. Fig. 4 demonstrates how the position and magnitude of the SpAdSP-IRLS coefficients vary with time. In Fig. 5, the SpAdSP-IRLS obtained the best performance while only $K$ coefficients were updated, meaning that its complexity was clearly decreased. To further explore the characteristics of the support set, we measured the steady-state error of the proposed SpAdSP-IRLS under different $K$. As shown in Fig. 6, the step size $\mu$ introduced in the updating equation indeed improved the performance for every considered $K$. In addition, when $K$ was chosen small, the SpAdSP-IRLS obtained an obvious improvement owing to the exploitation of the sparse structure and the suppression of noise. However, as $K$ increases, the gain shrinks because the performance of the SpAdSP-IRLS gradually approaches that of its non-sparse counterpart.
IV Conclusion and Outlook
In this paper, a dynamic compressive sensing technique named the sparse adaptive subspace pursuit-improved recursive least squares (SpAdSP-IRLS) was proposed, which updates only the support set in the IRLS. Compared with the standard RLS algorithm, the SpAdSP-IRLS achieves lower complexity while obtaining better performance. The SpAdSP-IRLS was tested on synthetic and at-sea experimental data and shown to significantly outperform the conventional RLS at reduced complexity.
An important direction for future work is to develop a DCS-based second-order algorithm with even lower complexity. While the SpAdSP-IRLS has successfully reduced the complexity of the standard RLS for the length-$N$ channel with a size-$K$ support set, its complexity remains quadratic in $N$. To achieve a super-linear convergence rate and linear complexity simultaneously, one option is to combine fast RLS algorithms [30, 31] with the DCS technique. However, in practice, they involve many free parameters that require fine-tuning. Another approach is to use a diagonal quasi-Newton method [32, 33, 34] to achieve this objective. We will leave these investigations to future work.
References
- [1] S. F. Cotter and B. D. Rao, “Sparse channel estimation via matching pursuit with application to equalization,” IEEE Trans. Commun., vol. 50, pp. 374–377, Mar. 2002.
- [2] W. Li and J. C. Preisig, “Estimation of rapidly time-varying sparse channels,” IEEE J. Ocean. Eng., vol. 32, pp. 927–939, Oct. 2007.
- [3] C. R. Berger, S. Zhou, J. C. Preisig, and P. Willett, “Sparse channel estimation for multicarrier underwater acoustic communication: From subspace methods to compressed sensing,” IEEE Trans. Signal Process., vol. 58, pp. 1708–1721, Mar. 2010.
- [4] K. Pelekanakis and M. Chitre, “New sparse adaptive algorithms based on the natural gradient and the $\ell_0$-norm,” IEEE J. Ocean. Eng., vol. 38, pp. 323–332, Apr. 2013.
- [5] K. Pelekanakis and M. Chitre, “Adaptive sparse channel estimation under symmetric alpha-stable noise,” IEEE Trans. Wireless Commun., vol. 13, pp. 3183–3195, Jun. 2014.
- [6] W. Duan, J. Tao, and Y. R. Zheng, “Efficient adaptive turbo equalization for multiple-input-multiple-output underwater acoustic communications,” IEEE J. Ocean. Eng., vol. 43, pp. 792–804, Jul. 2018.
- [7] J. Tao, Y. Wu, X. Han, and K. Pelekanakis, “Sparse direct adaptive equalization for single-carrier MIMO underwater acoustic communications,” IEEE J. Ocean. Eng., vol. 45, pp. 1622–1631, Oct. 2020.
- [8] Y. Chen, Y. Gu, and A. O. Hero, “Sparse LMS for system identification,” in Proc. Int. Conf. Acoust., Speech, Signal Process. (ICASSP), pp. 3125–3128, 2009.
- [9] Y. Gu, J. Jin, and S. Mei, “$\ell_0$ norm constraint LMS algorithm for sparse system identification,” IEEE Signal Process. Lett., vol. 16, pp. 774–777, Sep. 2009.
- [10] E. M. Eksioglu and A. K. Tanc, “RLS algorithm with convex regularization,” IEEE Signal Process. Lett., vol. 18, pp. 470–473, Aug. 2011.
- [11] Z. Qin, J. Tao, X. Wang, X. Luo, and X. Han, “Direct adaptive equalization based on fast sparse recursive least squares algorithms for multiple-input multiple-output underwater acoustic communications,” J. Acoust. Soc. Amer., vol. 145, pp. 277–283, Apr. 2019.
- [12] D. L. Duttweiler, “Proportionate normalized least-mean-squares adaptation in echo cancelers,” IEEE Trans. Speech Audio Process., vol. 8, pp. 508–518, Sep. 2000.
- [13] J. Benesty and S. L. Gay, “An improved PNLMS algorithm,” in Proc. Int. Conf. Acoust., Speech, Signal Process. (ICASSP), (Orlando, FL, USA), pp. 1881–1884, 2002.
- [14] P. Loganathan, A. W. H. Khong, and P. A. Naylor, “A class of sparseness-controlled algorithms for echo cancellation,” IEEE Trans. Audio, Speech, Lang. Process., vol. 17, pp. 1591–1601, Nov. 2009.
- [15] Z. Qin, J. Tao, and Y. Xia, “A proportionate recursive least squares algorithm and its performance analysis,” IEEE Trans. Circuits Syst. II, Exp. Briefs., vol. 68, pp. 506–510, Jan. 2021.
- [16] Z. Qin, J. Tao, Y. Xia, and L. Yang, “Proportionate RLS with $\ell_1$ norm regularization: Performance analysis and fast implementation,” Digital Signal Process., vol. 122, p. 103366, Apr. 2022.
- [17] K. Pelekanakis and M. Chitre, “Comparison of sparse adaptive filters for underwater acoustic channel equalization/estimation,” in Proc. IEEE Int. Conf. Commun. Syst., pp. 395–399, Nov. 2010.
- [18] F. Wu, Y. Zhou, F. Tong, and R. Kastner, “Simplified p-norm-like constraint LMS algorithm for efficient estimation of underwater acoustic channels,” J. Marine Sci. Appl., vol. 12, pp. 228–234, Jun. 2013.
- [19] Y. Tian, X. Han, J. Yin, Q. Liu, and L. Li, “Group sparse underwater acoustic channel estimation with impulsive noise: Simulation results based on arctic ice cracking noise,” J. Acoust. Soc. Amer., vol. 146, pp. 2482–2491, Oct. 2019.
- [20] N. Vaswani, “Kalman filtered compressed sensing,” in Proc. 15th IEEE Intl. Conf. Image Proc. (ICIP), pp. 893–896, 2008.
- [21] J. Ziniel and P. Schniter, “Dynamic compressive sensing of time-varying signals via approximate message passing,” IEEE Trans. Signal Process., vol. 61, pp. 5270–5284, Nov. 2013.
- [22] G. Mileounis, B. Babadi, and N. Kalouptsidis, “An adaptive greedy algorithm with application to nonlinear communications,” IEEE Trans. Signal Process., vol. 58, pp. 2998–3007, Jun. 2010.
- [23] E. Vlachos, A. S. Lalos, and K. Berberidis, “Stochastic gradient pursuit for adaptive equalization of sparse multipath channels,” IEEE J. Emerg. Sel. Topics Circuits Syst., vol. 2, pp. 413–423, Sep. 2012.
- [24] D. Zachariah, S. Chatterjee, and M. Jansson, “Dynamic iterative pursuit,” IEEE Trans. Signal Process., vol. 60, pp. 4967–4972, Sep. 2012.
- [25] Z. Qin, J. Tao, and X. Han, “Dynamic compressive sensing based adaptive equalization for underwater acoustic communications,” in Proc. MTS/IEEE Global OCEANS Conf., pp. 1–5, Oct. 2020.
- [26] Z. Qin, J. Tao, F. Qu, and Y. Qiao, “Adaptive equalization based on dynamic compressive sensing for single-carrier multiple-input multiple-output underwater acoustic communications,” J. Acoust. Soc. Amer., vol. 151, pp. 2877–2884, May 2022.
- [27] J. Tao, Y. Wu, Q. Wu, and X. Han, “Kalman filter based equalization for underwater acoustic communications,” in Proc. MTS/IEEE Global OCEANS Conf., pp. 1–5, Jun. 2019.
- [28] M. Stojanovic, J. Catipovic, and J. Proakis, “Phase-coherent digital communications for underwater acoustic channels,” IEEE J. Ocean. Eng., vol. 19, pp. 100–111, Jan. 1994.
- [29] W. Dai and O. Milenkovic, “Subspace pursuit for compressive sensing signal reconstruction,” IEEE Trans. Inf. Theory, vol. 55, pp. 2230 – 2249, May 2009.
- [30] J. Cioffi and T. Kailath, “Fast, recursive-least-squares transversal filters for adaptive filtering,” IEEE Trans. Acoust., Speech, Signal Process., vol. 32, pp. 304–337, Apr. 1984.
- [31] D. T. M. Slock and T. Kailath, “Numerically stable fast transversal filters for recursive least squares adaptive filtering,” IEEE Trans. Signal Process., vol. 39, pp. 92–114, Jan. 1991.
- [32] A. Bordes, L. Bottou, and P. Gallinari, “SGD-QN: Careful quasi-newton stochastic gradient descent,” J. Mach. Learn. Res., vol. 10, pp. 1737–1754, 2009.
- [33] N. Andrei, “A diagonal quasi-newton updating method for unconstrained optimization,” Numerical Algorithms, vol. 81, no. 2, pp. 575–590, 2019.
- [34] X. Ma, “Apollo: An adaptive parameter-wise diagonal quasi-newton method for nonconvex stochastic optimization,” arXiv preprint arXiv:2009.13586, 2020.