Algorithm Unrolling for Massive Access via Deep Neural Network with Theoretical Guarantee
Abstract
Massive access is a critical design challenge of Internet of Things (IoT) networks. In this paper, we consider the grant-free uplink transmission of an IoT network with a multiple-antenna base station (BS) and a large number of single-antenna IoT devices. Taking into account the sporadic nature of IoT devices, we formulate the joint activity detection and channel estimation (JADCE) problem as a group-sparse matrix estimation problem. This problem can be solved by applying existing compressed sensing techniques, which, however, either suffer from high computational complexity or lack robustness. To this end, we propose a novel algorithm unrolling framework based on the deep neural network to simultaneously achieve low computational complexity and high robustness for solving the JADCE problem. Specifically, we map the original iterative shrinkage thresholding algorithm (ISTA) into an unrolled recurrent neural network (RNN), thereby improving the convergence rate and computational efficiency through end-to-end training. Moreover, the proposed algorithm unrolling approach inherits the structure and domain knowledge of the ISTA, thereby maintaining the algorithm robustness and enabling it to handle non-Gaussian preamble sequence matrices in massive access. With rigorous theoretical analysis, we further simplify the unrolled network structure by reducing the redundant training parameters. Furthermore, we prove that the simplified unrolled deep neural network structures enjoy a linear convergence rate. Extensive simulations based on various preamble signatures show that the proposed unrolled networks outperform the existing methods in terms of the convergence rate, robustness and estimation accuracy.
Index Terms:
Massive access, joint activity detection and channel estimation, group-sparse matrix estimation, algorithm unrolling, deep learning.

I Introduction
Massive machine-type communications (mMTC), as one of the three major use cases of the fifth-generation (5G) wireless networks [2], is expected to support ubiquitous connectivity for a large number of low-cost Internet of Things (IoT) devices. The typical applications of the mMTC use case include smart homes, wearables, environmental sensing, and healthcare [3]. Driven by the increasing popularity of IoT services and the decreasing cost of IoT devices, the number of IoT devices is expected to reach tens of billions by 2025 [4]. Providing efficient medium access for such a large number of IoT devices is a critical design challenge that needs to be addressed to support mMTC.
Grant-free random access, proposed by the 3rd generation partnership project (3GPP) for 5G new radio (NR), has been recognized as a promising multiple access scheme that is capable of accommodating massive IoT devices with low-latency requirements [5]. In particular, with grant-free random access, each IoT device can transmit a frame containing both a preamble sequence and data at any time instant without waiting for a grant from the base station (BS). Compared with grant-based random access, the signaling overhead and the random access latency can be significantly reduced. However, due to the lack of prior knowledge on the subset of active IoT devices that will transmit and the necessity of obtaining accurate channel state information (CSI) for data decoding, it is of vital importance to achieve joint activity detection and channel estimation (JADCE) for grant-free random access in IoT networks [6]. To reduce the collision probability of preamble sequences and in turn facilitate JADCE, the authors in [7, 8, 9] focused on the design of non-orthogonal preamble signature sequences. In particular, the authors in [7] proposed a novel preamble design that concatenates multiple Zadoff-Chu (ZC) sequences, which increases the success rate of preamble transmission and enhances the channel reuse efficiency for grant-free random access.
Taking into account the sporadic nature of IoT devices, the JADCE problem was formulated as a sparse signal recovery problem in [10]. This kind of problem can be addressed by adopting the compressed sensing (CS) technique, which is capable of exploiting the channel sparsity structures in both the time and frequency domains [11, 12]. Due to the large-scale nature of mMTC, computationally efficient approximate message passing (AMP) based algorithms were proposed to solve the CS problems for supporting massive access [13, 6, 14]. The authors in [6] showed that AMP with a vector denoiser and the multiple measurement vector (MMV) approach [15] achieve approximately the same performance for JADCE. An AMP algorithm with a minimum mean-squared error (MMSE) denoiser was proposed in [13], where prior knowledge of the underlying channel distribution was required. Furthermore, a generalized multiple measurement vector approximate message passing (GMMV-AMP) algorithm was proposed to detect the device activity by exploiting sparsity in both the spatial and angular domains [14]. However, AMP-based algorithms often fail to converge when the preamble signature matrix is mildly ill-conditioned or non-Gaussian [16]. To this end, the authors in [17, 18] introduced a mixed $\ell_{2,1}$-norm to formulate the JADCE problem as a form of group LASSO, which can be solved by using the interior-point method [17] or the alternating direction method of multipliers (ADMM) [18] without any prior knowledge of CSI. In particular, the popular iterative shrinkage thresholding algorithm (ISTA) [19, 20] is one of the best known robust algorithmic approaches to solve the group LASSO problem. However, ISTA converges only sublinearly in general cases [19] and suffers from an inherent trade-off between the estimation performance and the convergence rate [21], which prohibits its practical implementation for grant-free massive access in IoT networks.
Deep learning has recently been emerging as a disruptive technology to reduce the computational cost. For instance, the authors in [22, 23, 24, 25, 26] proposed the idea of "learning to optimize", where neural networks were developed to replace traditional optimization algorithms for solving the non-convex constrained optimization problems [22, 23, 24] and mixed-integer nonlinear programming (MINLP) problems [25, 26] that arise in resource allocation for wireless networks. These studies demonstrated the potential of deep learning for improving communication and computation efficiency. However, this deep learning framework usually regards the neural network as a black box and cannot provide theoretical performance guarantees. Therefore, the lack of interpretability and theoretical guarantees of the deep learning framework is a critical issue that needs to be addressed in wireless networks.
Fortunately, the unrolled deep neural network [27], as another powerful deep learning technique, has been proposed to provide a concrete connection between existing iterative algorithms and deep neural networks with theoretical performance guarantees [28, 29]. The promising applications of this method include resource management, MIMO detection and precoding design [30, 31]. The typical unrolled deep neural network, i.e., the learned iterative shrinkage thresholding algorithm (LISTA) [27, 32, 33], was adopted to approximate the sparse coding based on LASSO, achieving a better estimation performance. Moreover, the authors in [34, 32] established simplified structures for LISTA and proved that simplified LISTA achieves a linear convergence rate. The authors in [33] studied the selection of adapted step sizes to improve the convergence rate of ISTA and further proposed an unrolled network where only the step sizes of ISTA are learned. However, such an unrolling framework cannot be directly applied to solve the JADCE problem in IoT networks with grant-free random access, where the uplink channels of the active devices are usually represented by a group-row-sparse matrix. This is because the existing LISTA with a scalar shrinkage-thresholding operator (SSTO), which only focuses on the recovery of sparse vectors, cannot tackle the group-sparse matrix estimation problem.
I-A Contributions
In this paper, we consider grant-free massive access in an IoT network, where a large number of single-antenna IoT devices sporadically transmit non-orthogonal preamble sequences to a multi-antenna BS. In such a network setting, we formulate the JADCE problem as a group-sparse matrix estimation problem by introducing a mixed $\ell_{2,1}$-norm. Based on the above discussions, we shall develop a new unrolled deep neural network framework by adopting the multidimensional generalization of the shrinkage-thresholding operator, named the multidimensional shrinkage thresholding operator (MSTO) [35], to address this problem. Such an extension turns out to be non-trivial, as the MSTO for the group-row-sparse matrix breaks the independence between the sparse vectors, which brings a formidable challenge in revealing the weight coupling structure and analyzing the convergence rate of the proposed method. We address this challenge and provide a theoretical performance guarantee for the proposed framework, which is capable of solving the JADCE problem to support massive access in IoT networks.
The major contributions of this paper are summarized as follows:
• We develop a novel unrolled deep neural network framework with three structures for group-sparse matrix estimation to support grant-free massive access in IoT networks. The proposed methods enjoy high robustness by inheriting the structure of ISTA and keep the computational complexity low through end-to-end training.
• We conduct rigorous theoretical analysis to address the interpretability issue and facilitate efficient training in IoT networks. We reveal the weight coupling structure between the training parameters and identify the property that the learned weight matrix depends only on the preamble signature. The theoretical analysis allows us to further simplify the original unrolled network structure by reducing the redundant training parameters. Furthermore, we prove that the simplified unrolled neural networks enjoy a linear convergence rate.
• Extensive simulations are conducted to validate the theoretical analysis and show the superior performance of the proposed algorithms in three aspects critical for grant-free massive access. First, compared with robust algorithms such as ISTA, the proposed methods have a much lower computational cost. Second, the proposed methods are more robust than computationally efficient algorithms such as AMP-based algorithms, as demonstrated by simulating different preamble signatures, including the Gaussian matrix, the binary matrix, and ZC sequences. Third, the proposed algorithms return more accurate solutions than the classical CS-based algorithms.
I-B Organization and Notations
The rest of this paper is organized as follows. The system model and problem formulation are described in Section II. Section III presents the proposed three kinds of unrolled neural networks for solving the group-sparse matrix estimation problem. The convergence analysis of the proposed unrolled neural networks is presented in Section IV. The numerical results of the proposed algorithms are illustrated in Section V. Finally, Section VI concludes this paper.
Notations
Throughout this paper, we denote $[N] = \{1, 2, \ldots, N\}$. Let $\mathbb{R}$ (resp. $\mathbb{C}$) be the set of real (resp. complex) numbers and $\mathbb{Z}$ be the set of integers. For a set $S$, the number of elements of $S$ is denoted by $|S|$. For index sets $I$ and $J$, we denote by $\mathbf{A}_{I,J}$ the (sub)matrix of entries of $\mathbf{A}$ that lie in the rows indexed by $I$ and the columns indexed by $J$; when $I$ (resp. $J$) contains all rows (resp. columns), we simply write $\mathbf{A}_{:,J}$ (resp. $\mathbf{A}_{I,:}$). For a matrix $\mathbf{A}$, $\|\mathbf{A}\|_2$ (resp. $\|\mathbf{A}\|_F$) denotes the operator (resp. Frobenius) norm, and $\|\mathbf{A}\|_{\max}$ denotes the max norm. Also, $\|\mathbf{A}\|_{2,1} = \sum_{i}\|\mathbf{a}^{i}\|_2$ (resp. $\|\mathbf{A}\|_{2,\infty} = \max_{i}\|\mathbf{a}^{i}\|_2$) denotes the mixed $\ell_{2,1}$-norm (resp. mixed $\ell_{2,\infty}$-norm), where $\mathbf{a}^{i}$ is the $i$-th row of $\mathbf{A}$. For a vector $\mathbf{x}$, $\|\mathbf{x}\|_2$ denotes the Euclidean norm. We denote the support of the vector $\mathbf{x}$ as $\mathrm{supp}(\mathbf{x})$, i.e., $\mathrm{supp}(\mathbf{x}) = \{i : x_i \neq 0\}$.
II System Model and Problem Formulation
II-A System Model
Consider the grant-free uplink transmission of a single-cell IoT network consisting of one $M$-antenna BS and $N$ single-antenna IoT devices, as shown in Fig. 2. Full frequency reuse and quasi-static block fading channels are considered in this paper. We denote $\mathcal{N} = \{1, \ldots, N\}$ as the set of IoT devices, which sporadically communicate with the BS. In each transmission block, all the IoT devices independently decide whether or not to be active. Within a specific transmission block, we denote $a_n = 1$ if device $n$ is active and $a_n = 0$ otherwise. Because of the sporadic transmission, it is reasonable to assume that each IoT device is active with a probability $\epsilon$ that is much smaller than 1. In addition, we assume that the active IoT devices are synchronized at the symbol level [17]. The preamble signal received at the BS at the $\ell$-th symbol, denoted as $\mathbf{y}(\ell) \in \mathbb{C}^{M}$, can be expressed as
$$\mathbf{y}(\ell) = \sum_{n=1}^{N} a_n s_n(\ell)\,\mathbf{h}_n + \mathbf{w}(\ell), \quad \ell = 1, \ldots, L, \qquad (1)$$
where $\mathbf{h}_n \in \mathbb{C}^{M}$ denotes the channel vector between device $n$ and the BS, $s_n(\ell)$ denotes the $\ell$-th symbol of the preamble signature sequence transmitted by device $n$, $L$ denotes the length of the signature sequence, and $\mathbf{w}(\ell) \in \mathbb{C}^{M}$ denotes the additive white Gaussian noise (AWGN) vector at the BS.
As the length of the signature sequence is typically much smaller than the number of IoT devices, i.e., $L \ll N$, it is impossible to allocate mutually orthogonal signature sequences to all the IoT devices. Hence, each IoT device is assigned a unique signature sequence, which is generally non-orthogonal to the preamble sequences assigned to other IoT devices. Note that three different kinds of non-orthogonal preamble sequences will be considered in the simulations.
For notational ease, we denote $\mathbf{Y} = [\mathbf{y}(1), \ldots, \mathbf{y}(L)]^{\mathsf{T}} \in \mathbb{C}^{L \times M}$ as the received preamble signal matrix across the $M$ antennas over the transmission duration of $L$ symbols, $\mathbf{H} = [\mathbf{h}_1, \ldots, \mathbf{h}_N]^{\mathsf{T}} \in \mathbb{C}^{N \times M}$ as the channel matrix between the $N$ devices and the BS, $\mathbf{W} \in \mathbb{C}^{L \times M}$ as the noise matrix at the BS, and $\mathbf{S} = [\mathbf{s}_1, \ldots, \mathbf{s}_N] \in \mathbb{C}^{L \times N}$ as the known preamble signature sequence matrix with $\mathbf{s}_n = [s_n(1), \ldots, s_n(L)]^{\mathsf{T}}$. As a result, the preamble signal received at the BS can be rewritten as
$$\mathbf{Y} = \mathbf{S}\mathbf{A}\mathbf{H} + \mathbf{W}, \qquad (2)$$
where $\mathbf{A} = \mathrm{diag}(a_1, \ldots, a_N)$ denotes the diagonal activity matrix. We aim to simultaneously detect the activity matrix $\mathbf{A}$ and estimate the channel matrix $\mathbf{H}$, which is known as a JADCE problem [13]. By denoting $\mathbf{X} = \mathbf{A}\mathbf{H}$, matrix $\mathbf{X}$ thus has the group-sparse structure in rows [17], as shown in Fig. 2. We further rewrite (2) as
$$\mathbf{Y} = \mathbf{S}\mathbf{X} + \mathbf{W}. \qquad (3)$$
Note that the active devices and their corresponding channel vectors can be recovered from the estimate of matrix $\mathbf{X}$ [17].
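For concreteness, the following NumPy sketch generates one instance of the measurement model (3) under the sporadic-activity assumption; all dimensions and the activity probability are illustrative choices, not the values used in the simulations of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, M, eps = 20, 100, 8, 0.1   # illustrative: sequence length, devices, antennas, activity prob.

# Non-orthogonal complex Gaussian signatures with normalized columns.
S = rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))
S /= np.linalg.norm(S, axis=0, keepdims=True)

a = (rng.random(N) < eps).astype(float)  # Bernoulli device activity a_n
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)  # Rayleigh fading
X = a[:, None] * H                        # group-row-sparse X = A H
W = 0.01 * (rng.standard_normal((L, M)) + 1j * rng.standard_normal((L, M)))
Y = S @ X + W                             # received preamble signal, as in (3)
```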
II-B Problem Formulation
To estimate the group-row-sparse matrix $\mathbf{X}$, we adopt the mixed $\ell_{2,1}$-norm [17] to induce the group sparsity, i.e., $\|\mathbf{X}\|_{2,1} = \sum_{n=1}^{N}\|\mathbf{x}^{n}\|_{2}$, where $\mathbf{x}^{n}$ is the $n$-th row of matrix $\mathbf{X}$. The group-sparse matrix estimation problem can be formulated as the following unconstrained convex optimization problem (also known as the group LASSO) [36]
$$\underset{\mathbf{X} \in \mathbb{C}^{N \times M}}{\text{minimize}} \;\; \frac{1}{2}\big\|\mathbf{S}\mathbf{X} - \mathbf{Y}\big\|_{F}^{2} + \lambda\|\mathbf{X}\|_{2,1}, \qquad (4)$$
where $\lambda > 0$ denotes the regularization parameter. As matrix $\mathbf{X}$ is group sparse in rows, problem (4) is essentially a group-sparse matrix estimation problem.
To facilitate efficient algorithm design, we rewrite (3) as its real-valued counterpart
$$\tilde{\mathbf{Y}} = \tilde{\mathbf{S}}\tilde{\mathbf{X}} + \tilde{\mathbf{W}}, \;\; \text{with} \;\; \tilde{\mathbf{Y}} = \begin{bmatrix} \Re(\mathbf{Y}) \\ \Im(\mathbf{Y}) \end{bmatrix}, \;\; \tilde{\mathbf{S}} = \begin{bmatrix} \Re(\mathbf{S}) & -\Im(\mathbf{S}) \\ \Im(\mathbf{S}) & \Re(\mathbf{S}) \end{bmatrix}, \;\; \tilde{\mathbf{X}} = \begin{bmatrix} \Re(\mathbf{X}) \\ \Im(\mathbf{X}) \end{bmatrix}, \;\; \tilde{\mathbf{W}} = \begin{bmatrix} \Re(\mathbf{W}) \\ \Im(\mathbf{W}) \end{bmatrix}, \qquad (5)$$
where $\Re(\cdot)$ and $\Im(\cdot)$ represent the real part and the imaginary part of a complex matrix, respectively. As a result, problem (4) can be further rewritten as
$$\underset{\tilde{\mathbf{X}} \in \mathbb{R}^{2N \times M}}{\text{minimize}} \;\; \frac{1}{2}\big\|\tilde{\mathbf{S}}\tilde{\mathbf{X}} - \tilde{\mathbf{Y}}\big\|_{F}^{2} + \lambda\|\tilde{\mathbf{X}}\|_{2,1}. \qquad (6)$$
Therefore, our goal of JADCE becomes recovering the matrix $\tilde{\mathbf{X}}$ based on the known preamble matrix $\tilde{\mathbf{S}}$ and the noisy observation $\tilde{\mathbf{Y}}$.
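The lifting in (5) can be written as the following NumPy sketch; the function names are ours, not from the paper.

```python
import numpy as np

def to_real(S: np.ndarray) -> np.ndarray:
    """Map a complex L-by-N matrix to its 2L-by-2N real-valued counterpart in (5)."""
    return np.block([[S.real, -S.imag],
                     [S.imag,  S.real]])

def stack_real(X: np.ndarray) -> np.ndarray:
    """Stack real and imaginary parts of a complex N-by-M matrix into a 2N-by-M matrix."""
    return np.vstack([X.real, X.imag])

# With S, X from the sketch above: to_real(S) @ stack_real(X) equals
# stack_real(S @ X) up to floating-point error, which is exactly identity (5).
```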
II-C Prior Work
The ISTA [36] is a popular approach for solving the group LASSO problem. In particular, the resulting ISTA iteration for (6) can be written as
$$\mathbf{X}^{t+1} = \eta_{\lambda/\gamma}\!\left(\mathbf{X}^{t} + \frac{1}{\gamma}\,\tilde{\mathbf{S}}^{\mathsf{T}}\big(\tilde{\mathbf{Y}} - \tilde{\mathbf{S}}\mathbf{X}^{t}\big)\right), \quad t = 0, 1, 2, \ldots, \qquad (7)$$
where $\mathbf{X}^{t}$ is the estimate of the ground truth $\tilde{\mathbf{X}}$ at iteration $t$, $1/\gamma$ plays the role of a step size, $\lambda$ is the regularization parameter, and $\eta_{\theta}(\cdot)$ denotes the MSTO [35]. Specifically, $[\eta_{\theta}(\mathbf{X})]^{n}$ denotes the $n$-th row of the matrix after applying $\eta_{\theta}$, which is defined as [20]
$$\big[\eta_{\theta}(\mathbf{X})\big]^{n} = \max\!\left\{0,\; 1 - \frac{\theta}{\|\mathbf{x}^{n}\|_{2}}\right\}\mathbf{x}^{n}. \qquad (8)$$
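A minimal NumPy sketch of the MSTO in (8) reads as follows; the small constant guarding against division by zero is an implementation detail, not part of the definition.

```python
import numpy as np

def msto(X: np.ndarray, theta: float) -> np.ndarray:
    """Row-wise group soft-thresholding: scale row x^n by max(0, 1 - theta/||x^n||_2)."""
    row_norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - theta / np.maximum(row_norms, 1e-12))
    return scale * X
```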
However, ISTA suffers from an inherent trade-off between the estimation performance and the convergence rate based on the choice of $\lambda$, i.e., a larger $\lambda$ leads to faster convergence but poorer estimation performance [21]. Moreover, the choice of the step size $1/\gamma$ also influences the convergence rate [33]. In the next section, we shall propose a learned ISTA framework that learns the parameters, including $\lambda$ and the step size $1/\gamma$, simultaneously to improve the convergence rate and estimation performance of ISTA for the group-sparse matrix estimation problem (ISTA-GS).
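Building on the msto() sketch above, a few lines suffice to run ISTA-GS on the real-valued problem (6); the iteration budget and the choice of the regularization weight lam are illustrative assumptions to be tuned.

```python
import numpy as np

def ista_gs(S: np.ndarray, Y: np.ndarray, lam: float, n_iters: int = 100) -> np.ndarray:
    """ISTA iteration (7) with step size 1/gamma, gamma = ||S||_2^2 (Lipschitz constant)."""
    gamma = np.linalg.norm(S, 2) ** 2
    X = np.zeros((S.shape[1], Y.shape[1]))
    for _ in range(n_iters):
        X = msto(X + S.T @ (Y - S @ X) / gamma, lam / gamma)
    return X
```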
III Algorithm Unrolling via Deep Neural Network
In this section, we propose an unrolled deep learning framework to solve the JADCE problem by exploiting the group-sparse structure. Although the LISTA proposed in [27, 34, 32, 37] can recover individual sparse vector signals, these methods cannot recover a matrix with group row sparsity. To address this issue, we extend LISTA to group-sparse matrix recovery. Furthermore, we analytically prove that the weight coupling property holds for the group-sparse matrix estimation problem.
III-A Unrolled Neural Network Structures
In this subsection, we propose three neural network structures under the unrolled framework for the group-sparse matrix estimation problem.
III-A1 LISTA-GS
The ISTA-GS iteration (7) can be rewritten as $\mathbf{X}^{t+1} = \eta_{\theta}\big(\mathbf{W}_{1}\tilde{\mathbf{Y}} + \mathbf{W}_{2}\mathbf{X}^{t}\big)$ with $\mathbf{W}_{1} = \frac{1}{\gamma}\tilde{\mathbf{S}}^{\mathsf{T}}$, $\mathbf{W}_{2} = \mathbf{I} - \frac{1}{\gamma}\tilde{\mathbf{S}}^{\mathsf{T}}\tilde{\mathbf{S}}$, and $\theta = \lambda/\gamma$, which we refer to as (9). The key idea of the proposed unrolled method for the group-sparse matrix estimation problem is to view the matrices $\mathbf{W}_{1}$, $\mathbf{W}_{2}$, and the scalar $\theta$ in (9) as trainable parameters. As a result, (9) can be modeled as a one-layer recurrent neural network (RNN), i.e., one iteration of ISTA-GS is treated as one layer of the neural network. This structure can be concatenated into a $T$-layer RNN, which is capable of modeling the ISTA-GS with $T$ iterations. Mathematically, the unrolled RNN with $T$ iterations for the group-sparse matrix estimation problem is given by
$$\mathbf{X}^{t+1} = \eta_{\theta^{t}}\big(\mathbf{W}_{1}^{t}\tilde{\mathbf{Y}} + \mathbf{W}_{2}^{t}\mathbf{X}^{t}\big), \quad t = 0, 1, \ldots, T-1, \qquad (10)$$
where the parameters $\{\mathbf{W}_{1}^{t}, \mathbf{W}_{2}^{t}, \theta^{t}\}_{t=0}^{T-1}$ are trainable. This is the main difference from the iterative algorithm in (7).
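The following PyTorch sketch illustrates such a $T$-layer unrolled RNN; the ISTA-consistent initialization and the initial threshold value are our assumptions, and this is a minimal illustration rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class LISTAGS(nn.Module):
    """A T-layer unrolled RNN implementing (10); W1^t, W2^t, theta^t are trainable."""
    def __init__(self, S: torch.Tensor, T: int):
        super().__init__()
        gamma = torch.linalg.matrix_norm(S, 2) ** 2   # ISTA-consistent initialization
        N2 = S.shape[1]                               # 2N rows of the real-valued X
        self.W1 = nn.ParameterList([nn.Parameter(S.t() / gamma) for _ in range(T)])
        self.W2 = nn.ParameterList([nn.Parameter(torch.eye(N2) - S.t() @ S / gamma)
                                    for _ in range(T)])
        self.theta = nn.Parameter(0.1 * torch.ones(T))  # per-layer thresholds (assumed init)

    def forward(self, Y: torch.Tensor) -> torch.Tensor:
        X = Y.new_zeros(self.W1[0].shape[0], Y.shape[1])
        for W1, W2, th in zip(self.W1, self.W2, self.theta):
            Z = W1 @ Y + W2 @ X
            norms = Z.norm(dim=1, keepdim=True).clamp_min(1e-12)
            X = torch.clamp(1.0 - th / norms, min=0.0) * Z   # MSTO, as in (8)
        return X
```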
III-A2 LISTA-GSCP
We establish the following necessary condition for the convergence of LISTA-GS, which is inspired by the coupled structure of $\mathbf{W}_{1}^{t}$ and $\mathbf{W}_{2}^{t}$ in [34]. This necessary condition reveals the properties that the trainable parameters must satisfy if the proposed unrolled NN is to recover the group-sparse matrix, and it can be used to simplify the proposed network.
Theorem 1.
Let $\{\mathbf{W}_{1}^{t}, \mathbf{W}_{2}^{t}, \theta^{t}\}_{t=0}^{\infty}$ be the parameters of LISTA-GS in (10). If, for every group-row-sparse matrix $\tilde{\mathbf{X}}$ in the noiseless case, the sequence generated by (10) converges uniformly, i.e.,
$$\mathbf{X}^{t} \to \tilde{\mathbf{X}}, \quad \text{as } t \to \infty, \qquad (11)$$
then the trainable parameters must satisfy
$$\mathbf{W}_{2}^{t} - \big(\mathbf{I} - \mathbf{W}_{1}^{t}\tilde{\mathbf{S}}\big) \to \mathbf{0} \quad \text{and} \quad \theta^{t} \to 0, \quad \text{as } t \to \infty. \qquad (12)$$
Proof.
Please refer to Appendix A. ∎
Thus, we have developed the necessary condition for the recovery convergence of the group-sparse matrix estimation problem. It is worth noting that the MSTO [35] is a generalization of the SSTO, which brings a unique challenge in revealing the weight coupling structure for the group-row-sparse matrix estimation problem. The extension turns out to be non-trivial since the MSTO of a group-sparse matrix breaks the "independence" between sparse vectors.
Motivated by (12) in Theorem 1, we adopt the partial weight coupling structure of the trainable weights in LISTA-GS as $\mathbf{W}_{2}^{t} = \mathbf{I} - \mathbf{W}_{1}^{t}\tilde{\mathbf{S}}$. Hence, by letting $\mathbf{W}^{t} = (\mathbf{W}_{1}^{t})^{\mathsf{T}}$, we obtain the simplified $T$-layer unrolled neural network structure (namely LISTA-GSCP) as
$$\mathbf{X}^{t+1} = \eta_{\theta^{t}}\Big(\mathbf{X}^{t} + (\mathbf{W}^{t})^{\mathsf{T}}\big(\tilde{\mathbf{Y}} - \tilde{\mathbf{S}}\mathbf{X}^{t}\big)\Big), \quad t = 0, 1, \ldots, T-1, \qquad (13)$$
where the parameters $\{\mathbf{W}^{t}, \theta^{t}\}_{t=0}^{T-1}$ are trainable. Fig. 3(b) illustrates the unrolled network.
III-A3 ALISTA-GS
To further alleviate the need to learn a weight matrix with a large number of parameters while stabilizing the training process, we decompose the weight as the product of a scalar and a matrix, i.e., $\mathbf{W}^{t} = \gamma^{t}\mathbf{W}$. The definition of "good" parameters in (35) shows that the weight matrix $\mathbf{W}$ depends only on $\tilde{\mathbf{S}}$. Thus, we can solve the following convex optimization problem by using the projected gradient descent (PGD) algorithm to obtain the weight matrix prior to the training stage:
$$\mathbf{W} \in \underset{\mathbf{W} \in \mathbb{R}^{2L \times 2N}}{\arg\min} \;\; \big\|\mathbf{W}^{\mathsf{T}}\tilde{\mathbf{S}}\big\|_{F}^{2} \quad \text{subject to} \;\; (\mathbf{w}_{n})^{\mathsf{T}}\tilde{\mathbf{s}}_{n} = 1, \;\; \forall n, \qquad (14)$$
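A sketch of solving (14) by PGD under the stated form of the constraint set follows; the step size and the iteration budget are illustrative assumptions.

```python
import numpy as np

def analytic_weight(S: np.ndarray, n_iters: int = 500) -> np.ndarray:
    """PGD for (14): minimize ||W^T S||_F^2 s.t. w_n^T s_n = 1 for every column n."""
    lr = 1.0 / np.linalg.norm(S, 2) ** 2
    col_sq = np.sum(S * S, axis=0)            # ||s_n||_2^2 per column
    W = S / col_sq                            # feasible start: w_n = s_n / ||s_n||^2
    for _ in range(n_iters):
        W = W - lr * 2.0 * S @ (S.T @ W)      # gradient step on ||W^T S||_F^2
        W += (1.0 - np.sum(W * S, axis=0)) / col_sq * S  # project onto w_n^T s_n = 1
    return W
```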
Hence, the third proposed unrolled network structure (namely ALISTA-GS) is given by
$$\mathbf{X}^{t+1} = \eta_{\theta^{t}}\Big(\mathbf{X}^{t} + \gamma^{t}\,\mathbf{W}^{\mathsf{T}}\big(\tilde{\mathbf{Y}} - \tilde{\mathbf{S}}\mathbf{X}^{t}\big)\Big), \quad t = 0, 1, \ldots, T-1, \qquad (15)$$
where $\{\gamma^{t}, \theta^{t}\}_{t=0}^{T-1}$ are the parameters to be trained and the weight matrix $\mathbf{W}$ is pre-computed before the training process. Different from LISTA-GSCP, this network structure focuses on selecting better step sizes. A visual depiction is provided in Fig. 3(c).
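A sketch of one ALISTA-GS layer (15) in PyTorch; W would be pre-computed (e.g., by the PGD sketch above) and kept frozen, while gamma_t and theta_t are the learned scalars.

```python
import torch

def alista_gs_layer(X, Y, S, W, gamma_t, theta_t):
    """One layer of (15): gradient-like step with frozen W, then MSTO with theta_t."""
    Z = X + gamma_t * (W.t() @ (Y - S @ X))
    norms = Z.norm(dim=1, keepdim=True).clamp_min(1e-12)
    return torch.clamp(1.0 - theta_t / norms, min=0.0) * Z
```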
So far, we have established three LISTA variations for group-sparse matrix estimation problems. With fewer training parameters, the training process becomes easier and faster.
Remark 1.
Recently, the advancement of complex-valued deep networks has made it possible to apply the proposed LISTA variations in a complex-valued representation, i.e., without rewriting (3) as its real-valued counterpart (5). The complex-valued representation may further enhance the sparse recovery performance by exploiting extra information about the potential grouping of real and imaginary parts, e.g., the real and imaginary parts being either zero or nonzero simultaneously [38, 39]. However, the design of the complex-valued network training process and of the complex-valued shrinkage function poses two challenges. Besides, the convergence analysis of such complex-valued LISTA variations is an interesting problem, which will be studied in our future work.
III-B Deep Neural Networks Training and Testing
In the training stage, we consider the framework in which the neural networks learn to solve the group-sparse matrix estimation problem in a supervised manner. We denote the training data as $\{(\tilde{\mathbf{X}}_{j}, \tilde{\mathbf{Y}}_{j})\}_{j=1}^{J}$, where $\tilde{\mathbf{X}}_{j}$ and $\tilde{\mathbf{Y}}_{j}$ are viewed as the label and the input, respectively. The following optimization problem is adopted in the whole training stage for training a $T$-layer network, where $\Theta^{0:T-1}$ collects all the trainable parameters:
$$\underset{\Theta^{0:T-1}}{\text{minimize}} \;\; \sum_{j=1}^{J}\big\|\mathbf{X}^{T}\big(\tilde{\mathbf{Y}}_{j}, \mathbf{X}^{0}; \Theta^{0:T-1}\big) - \tilde{\mathbf{X}}_{j}\big\|_{F}^{2}. \qquad (16)$$
However, such a large-scale optimization problem easily converges to a bad local minimum [40]. The layer-wise training strategy is therefore used to separate (16) into the following two parts, i.e., (17) and (18), where (17) serves to find a good initialization for (18) so as to avoid bad local minima of (16). In particular, for training a $(t+1)$-layer network, we denote $\Theta^{0:t}$ as the trainable parameters from layer $0$ to layer $t$, and $\{\mathbf{W}^{t}, \theta^{t}\}$ as the trainable parameters in layer $t$. By fixing the parameters $\Theta^{0:t-1}$ of the former $t$-layer network that has already been trained, and initializing $\{\mathbf{W}^{t}, \theta^{t}\}$ as given in Table I, we first learn the parameters $\{\mathbf{W}^{t}, \theta^{t}\}$ with learning rate $\alpha_{1}$ by using the Adam algorithm [41] to solve the following optimization problem
$$\underset{\{\mathbf{W}^{t}, \theta^{t}\}}{\text{minimize}} \;\; \sum_{j=1}^{J}\big\|\mathbf{X}^{t+1}\big(\tilde{\mathbf{Y}}_{j}, \mathbf{X}^{0}; \Theta^{0:t}\big) - \tilde{\mathbf{X}}_{j}\big\|_{F}^{2}, \qquad (17)$$
where $\mathbf{X}^{t+1}(\tilde{\mathbf{Y}}_{j}, \mathbf{X}^{0}; \Theta^{0:t})$ denotes the output of the $(t+1)$-layer network with input $\tilde{\mathbf{Y}}_{j}$ and initial point $\mathbf{X}^{0}$. Then, we use the parameters $\{\mathbf{W}^{t}, \theta^{t}\}$ obtained from (17) together with the fixed parameters $\Theta^{0:t-1}$ as the initialization, and tune all the parameters $\Theta^{0:t}$ with a learning rate $\alpha_{2}$ smaller than $\alpha_{1}$ by using the Adam algorithm to solve
$$\underset{\Theta^{0:t}}{\text{minimize}} \;\; \sum_{j=1}^{J}\big\|\mathbf{X}^{t+1}\big(\tilde{\mathbf{Y}}_{j}, \mathbf{X}^{0}; \Theta^{0:t}\big) - \tilde{\mathbf{X}}_{j}\big\|_{F}^{2}. \qquad (18)$$
After applying this procedure successfully, we obtain the parameters for the whole $(t+1)$-layer network, so that we can next train a $(t+2)$-layer network.
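As an illustration, the following PyTorch sketch implements this layer-wise strategy; the helpers model.layer_params(t) (returning the parameters of layer t) and the n_layers argument (truncating the forward pass) are hypothetical, as are the learning rates and epoch counts.

```python
import torch

def train_layerwise(model, loader, T, lr1=1e-3, lr2=1e-4, epochs=10):
    """Layer-wise training following (17)-(18): new layer first, then fine-tune all."""
    for t in range(T):
        # Stage (17): learn only layer t's parameters, earlier layers fixed.
        opt = torch.optim.Adam(model.layer_params(t), lr=lr1)   # hypothetical helper
        for _ in range(epochs):
            for Y, X_true in loader:
                loss = ((model(Y, n_layers=t + 1) - X_true) ** 2).sum()
                opt.zero_grad(); loss.backward(); opt.step()
        # Stage (18): fine-tune all parameters of layers 0..t with a smaller rate.
        opt = torch.optim.Adam(model.parameters(), lr=lr2)
        for _ in range(epochs):
            for Y, X_true in loader:
                loss = ((model(Y, n_layers=t + 1) - X_true) ** 2).sum()
                opt.zero_grad(); loss.backward(); opt.step()
```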
In the testing stage, since the known preamble signature matrix remains unchanged, the proposed unrolled networks can recover the changing channels. Given a newly received signal $\tilde{\mathbf{Y}}$, the learned unrolled networks, i.e., LISTA-GS in (10), LISTA-GSCP in (13) and ALISTA-GS in (15), are applied for group-sparse matrix estimation. For instance, with LISTA-GS, we obtain the $(t+1)$-th layer recovered signal by $\mathbf{X}^{t+1} = \eta_{\theta^{t}}\big(\mathbf{W}_{1}^{t}\tilde{\mathbf{Y}} + \mathbf{W}_{2}^{t}\mathbf{X}^{t}\big)$, where $\mathbf{W}_{1}^{t}$, $\mathbf{W}_{2}^{t}$ and $\theta^{t}$ are the learned parameters of the $t$-th layer.
Remark 2.
Most of the existing deep learning based approaches, including the proposed LISTA variations, rely on the assumption that the channel coefficients follow the same distribution during the training and testing stages, and may not work well in a dynamic environment whose channel distribution changes over time. The continual learning and transfer learning technologies [42, 43] that have been developed for resource allocation and beamforming optimization in dynamic environments have the potential to address this issue. As these technologies are still at an early stage, we leave the application of the proposed LISTA variations to group-sparse matrix recovery in dynamic environments for future work.
III-C Time Complexity and Number of Trainable Parameters
The time complexity of the three neural network structures is mainly due to matrix multiplications. For LISTA-GS, the evaluation of the matrix products $\mathbf{W}_{1}^{t}\tilde{\mathbf{Y}}$ and $\mathbf{W}_{2}^{t}\mathbf{X}^{t}$ requires $\mathcal{O}(MNL + MN^{2})$ time at each iteration. As for LISTA-GSCP and ALISTA-GS, the evaluation of the matrix products $\tilde{\mathbf{S}}\mathbf{X}^{t}$ and $(\mathbf{W}^{t})^{\mathsf{T}}(\tilde{\mathbf{Y}} - \tilde{\mathbf{S}}\mathbf{X}^{t})$ requires $\mathcal{O}(MNL)$ time at each iteration. Since the proposed methods converge faster than ISTA-GS while keeping a comparable per-iteration complexity, they reduce the overall computational cost.
For a $T$-layer RNN, the ALISTA-GS contains only $2T$ trainable parameters $\{\gamma^{t}, \theta^{t}\}_{t=0}^{T-1}$, while LISTA-GS and LISTA-GSCP require $T(4N^{2} + 4NL + 1)$ variables $\{\mathbf{W}_{1}^{t}, \mathbf{W}_{2}^{t}, \theta^{t}\}_{t=0}^{T-1}$ and $T(4NL + 1)$ variables $\{\mathbf{W}^{t}, \theta^{t}\}_{t=0}^{T-1}$, respectively. We summarize the initialization and the required numbers of trainable parameters for the $T$-layer networks in Table I. Since $\theta^{t}$ and $\gamma^{t}$ should be initialized to proper constants, we adopt the ISTA-consistent initialization listed in Table I.
Network | Trainable params | Initialization | Number of params
---|---|---|---
LISTA-GS | $\{\mathbf{W}_{1}^{t}, \mathbf{W}_{2}^{t}, \theta^{t}\}_{t=0}^{T-1}$ | $\mathbf{W}_{1}^{t} = \tilde{\mathbf{S}}^{\mathsf{T}}/\gamma$, $\mathbf{W}_{2}^{t} = \mathbf{I} - \tilde{\mathbf{S}}^{\mathsf{T}}\tilde{\mathbf{S}}/\gamma$, $\theta^{t} = \lambda/\gamma$ | $T(4N^{2} + 4NL + 1)$
LISTA-GSCP | $\{\mathbf{W}^{t}, \theta^{t}\}_{t=0}^{T-1}$ | $\mathbf{W}^{t} = \tilde{\mathbf{S}}/\gamma$, $\theta^{t} = \lambda/\gamma$ | $T(4NL + 1)$
ALISTA-GS | $\{\gamma^{t}, \theta^{t}\}_{t=0}^{T-1}$ | $\gamma^{t} = 1$, $\theta^{t} = \lambda/\gamma$ | $2T$
IV Convergence Analysis
In this section, we provide the main theoretical results of this paper, i.e., the linear convergence rates of LISTA-GSCP (13) and ALISTA-GS (15), respectively. Since the proposed unrolled networks inherit the structure of ISTA-GS, they allow us to track the interpretability of this deep learning framework from the perspective of optimization. As a matter of fact, our proposed unrolled neural networks are extensions of [34, 32] for solving the group-sparse matrix estimation problem. Such dimension expansion and matrix structures bring unique and formidable challenges to establishing the theoretical analysis of the proposed unrolled neural networks.
We first establish the convergence analysis of the LISTA-GSCP framework. We use $\mathbf{X}^{*}$ in place of $\tilde{\mathbf{X}}$ for notational simplicity. The following theorem presents the convergence rate of LISTA-GSCP. For the theoretical analysis, we assume that the $\ell_{2}$-norm of every row of the signal $\mathbf{X}^{*}$ and the Frobenius norm of the AWGN noise $\tilde{\mathbf{W}}$ are bounded by $B$ and $\sigma$, respectively [34, 32]. Furthermore, since each entry of the activity sequence follows the Bernoulli distribution, we assume that the number of non-zero rows of the signal $\mathbf{X}^{*}$ is bounded by a small number $s$ [17]. For notational brevity, we assume the signal and noise belong to the set
$$\mathcal{X}(B, s, \sigma) = \Big\{(\mathbf{X}^{*}, \tilde{\mathbf{W}}) : \|\mathbf{x}^{*n}\|_{2} \le B \;\, \forall n, \;\; \big|\{n : \mathbf{x}^{*n} \neq \mathbf{0}\}\big| \le s, \;\; \|\tilde{\mathbf{W}}\|_{F} \le \sigma \Big\}.$$
Theorem 2 (Convergence rate of LISTA-GSCP).
Let $\{\mathbf{W}^{t}, \theta^{t}\}_{t=0}^{\infty}$ be the "good" parameters given in Definition 2 and let $\mathbf{X}^{0} = \mathbf{0}$. Then, for any $(\mathbf{X}^{*}, \tilde{\mathbf{W}}) \in \mathcal{X}(B, s, \sigma)$, the sequence $\{\mathbf{X}^{t}\}$ generated by LISTA-GSCP in (13) satisfies
$$\|\mathbf{X}^{t} - \mathbf{X}^{*}\|_{F} \le sB\exp(-ct) + C\sigma, \quad t = 1, 2, \ldots, \qquad (19)$$
where $c > 0$ and $C > 0$ are constants that depend only on $\tilde{\mathbf{S}}$ and $s$.
In particular, for the noiseless case ($\sigma = 0$), (19) reduces to $\|\mathbf{X}^{t} - \mathbf{X}^{*}\|_{F} \le sB\exp(-ct)$. Moreover, LISTA-GSCP converges at an $\mathcal{O}(\exp(-ct))$ rate, which is faster than the $\mathcal{O}(1/t)$ rate of the original ISTA and the $\mathcal{O}(1/t^{2})$ rate of Nesterov's method [44].
The steps of proving Theorem 2 are summarized as follows:
A. "Good" parameters for learning. We define the "good" parameters that guarantee the linear convergence rate.

B. Error bound for one sample. We establish an error bound for one sample $(\mathbf{X}^{*}, \tilde{\mathbf{W}})$ (to emphasize the signal and the noise, we write a sample as $(\mathbf{X}^{*}, \tilde{\mathbf{W}})$ rather than $(\tilde{\mathbf{X}}, \tilde{\mathbf{Y}})$).

C. Error bound for the whole data set. By taking the supremum over all $(\mathbf{X}^{*}, \tilde{\mathbf{W}}) \in \mathcal{X}(B, s, \sigma)$, we establish an error bound over the whole set of samples.
IV-A “Good” Parameters for Learning
In this subsection, we shall give the definition of “good” parameters for the LISTA-GSCP network structure that guarantees the linear convergence rate.
First, we need to introduce several fundamental definitions inspired by [34]. The first one is the mutual coherence [45] of $\tilde{\mathbf{S}}$, which characterizes the coherence between different columns of $\tilde{\mathbf{S}}$.
Definition 1.

(i) The mutual coherence of $\tilde{\mathbf{S}}$ with normalized columns is defined as
$$\mu(\tilde{\mathbf{S}}) = \max_{i \neq j} \big|\tilde{\mathbf{s}}_{i}^{\mathsf{T}}\tilde{\mathbf{s}}_{j}\big|. \qquad (20)$$

(ii) The generalized mutual coherence of $\tilde{\mathbf{S}}$ with normalized columns is defined as
$$\tilde{\mu}(\tilde{\mathbf{S}}) = \inf_{\mathbf{W}: \, (\mathbf{w}_{i})^{\mathsf{T}}\tilde{\mathbf{s}}_{i} = 1, \, \forall i} \;\; \max_{i \neq j} \big|(\mathbf{w}_{i})^{\mathsf{T}}\tilde{\mathbf{s}}_{j}\big|. \qquad (21)$$
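For illustration, the mutual coherence in (20) can be computed as follows; the generalized version in (21) additionally optimizes over the weight matrix, e.g., via the PGD sketch in Section III.

```python
import numpy as np

def mutual_coherence(S: np.ndarray) -> float:
    """Largest absolute inner product between two distinct normalized columns of S."""
    Sn = S / np.linalg.norm(S, axis=0, keepdims=True)
    G = np.abs(Sn.T @ Sn)
    np.fill_diagonal(G, 0.0)   # exclude the i = j terms
    return float(G.max())
```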
A lemma in [34] tells us that there exists a matrix $\mathbf{W}$ that attains the infimum given in (21), i.e., $\max_{i \neq j}\big|(\mathbf{w}_{i})^{\mathsf{T}}\tilde{\mathbf{s}}_{j}\big| = \tilde{\mu}$, where the set of "good" weight matrices is defined as
$$\mathcal{W}(\tilde{\mathbf{S}}) = \Big\{\mathbf{W} : (\mathbf{w}_{i})^{\mathsf{T}}\tilde{\mathbf{s}}_{i} = 1 \;\, \forall i, \;\; \max_{i \neq j}\big|(\mathbf{w}_{i})^{\mathsf{T}}\tilde{\mathbf{s}}_{j}\big| = \tilde{\mu} \Big\}. \qquad (22)$$
Then, we define the “good” parameters to be learned in LISTA-GSCP as follows.
Definition 2.
$\{\mathbf{W}^{t}, \theta^{t}\}_{t=0}^{\infty}$ are called "good" parameters in LISTA-GSCP if they satisfy
$$\mathbf{W}^{t} \in \mathcal{W}(\tilde{\mathbf{S}}), \quad \theta^{t} = \tilde{\mu}\sup_{(\mathbf{X}^{*}, \tilde{\mathbf{W}}) \in \mathcal{X}(B, s, \sigma)} \|\mathbf{X}^{t} - \mathbf{X}^{*}\|_{2,1} + C_{\mathbf{W}}\,\sigma, \qquad (23)$$
where $C_{\mathbf{W}} = \max_{i}\|\mathbf{w}_{i}\|_{2}$ and $\tilde{\mu}$ is the generalized mutual coherence in (21).
IV-B Error Bound for One Sample
In this subsection, we give an upper bound of the recovery error for one sample $(\mathbf{X}^{*}, \tilde{\mathbf{W}})$. We first introduce the extra notation $\mathbb{S} = \{n : \mathbf{x}^{*n} \neq \mathbf{0}\}$, i.e., the row support of $\mathbf{X}^{*}$, to provide information about the group sparsity. For each matrix $\mathbf{X}$, we define a function $\pi(\cdot)$ as
$$[\pi(\mathbf{X})]_{n} = \|\mathbf{x}^{n}\|_{2}, \quad \forall n. \qquad (24)$$
Specifically, for a vector $\mathbf{x}$, we have $\pi(\mathbf{x}) = (|x_{1}|, |x_{2}|, \ldots)^{\mathsf{T}}$. Note that $\pi(\mathbf{X})$ provides information about the group sparsity of a given matrix $\mathbf{X}$, since row $n$ of $\mathbf{X}$ is zero if and only if $[\pi(\mathbf{X})]_{n} = 0$. By simple calculations, one can obtain the following lemma.
Lemma 1.
With and , we have
(25) |
By taking one sample $(\mathbf{X}^{*}, \tilde{\mathbf{W}}) \in \mathcal{X}(B, s, \sigma)$ and letting $\mathbf{X}^{0} = \mathbf{0}$, we establish the error bound in two steps: (i) we show that there are no false positive rows in $\mathbf{X}^{t}$ for all $t$; (ii) since the no-false-positive property holds, we consider the components on $\mathbb{S}$.
In step (i), we prove the following lemma.
Lemma 2 (No-false-positive property).
With all the assumptions in Theorem 2 and "good" parameters $\{\mathbf{W}^{t}, \theta^{t}\}$, we have
$$\mathrm{supp}\big(\pi(\mathbf{X}^{t})\big) \subseteq \mathbb{S}, \quad \forall t. \qquad (26)$$
Proof.
Please refer to Appendix B. ∎
Lemma 2 shows that there are no false positive rows in $\mathbf{X}^{t}$. In other words, if the "good" parameters in (23) are adopted in LISTA-GSCP, then
$$\mathbf{x}^{t,n} = \mathbf{0}, \quad \forall n \notin \mathbb{S}, \; \forall t. \qquad (27)$$
This property implies that the recovery error of the components beyond $\mathbb{S}$ is zero.
In step (ii), we consider the components on $\mathbb{S}$. For all $n \in \mathbb{S}$, the LISTA-GSCP in (13) gives
(28) |
where the sub-gradient term is given in (51). It can be seen that (28) consists of three parts. Since the rows outside $\mathbb{S}$ vanish by (27), (28) can be expressed as
(29) |
Using the definition in (51) and taking the norm on both sides of (29), one can get that for all $n \in \mathbb{S}$,
(30) |
where (a) follows from the triangle inequality and (b) arises from the choice of "good" parameters in Definition 2. It is easy to check that (27) implies that the corresponding terms vanish for all $n \notin \mathbb{S}$. Therefore, based on (30), it follows that
(31) |
where the constant depends on $\tilde{\mu}$ and $s$. The inequality (31) provides the recursive relation between the consecutive errors at iterations $t$ and $t+1$. Hence, we establish the recovery error bound in the mixed $\ell_{2,1}$-norm for one sample $(\mathbf{X}^{*}, \tilde{\mathbf{W}})$.
IV-C Error Bound for Whole Data Set
In this subsection, we give an upper bound of the recovery error for the whole set of training samples. Note that (31) holds for all $(\mathbf{X}^{*}, \tilde{\mathbf{W}}) \in \mathcal{X}(B, s, \sigma)$. Taking the supremum over $(\mathbf{X}^{*}, \tilde{\mathbf{W}})$ on both sides of (31), we have
(32) |
Considering the "good" parameters in (23), it follows that the supremum contracts at each iteration, provided that the contraction factor is smaller than one; unrolling the recursion, the supremum thus decays exponentially in $t$. The fact that $\|\mathbf{X}\|_{F} \le \|\mathbf{X}\|_{2,1}$ for any matrix $\mathbf{X}$ then gives an upper bound of the error with respect to the Frobenius norm:
(33) |
As long as the "good" parameters in Definition 2 are adopted, the error bound holds uniformly for all $(\mathbf{X}^{*}, \tilde{\mathbf{W}}) \in \mathcal{X}(B, s, \sigma)$. Then, for the $T$-layer network, we have
$$\|\mathbf{X}^{T} - \mathbf{X}^{*}\|_{F} \le sB\exp(-cT) + C\sigma. \qquad (34)$$
Therefore, we complete the proof of Theorem 2.
IV-D Convergence Rate of ALISTA-GS
In this subsection, we first give the definition of the "good" parameters to be learned in the ALISTA-GS network. We then verify the convergence of ALISTA-GS for the noiseless case. For the noisy case, the analysis is similar to that of Theorem 2.
Definition 3.
$\{\gamma^{t}, \theta^{t}\}_{t=0}^{\infty}$ are called "good" parameters for all $(\mathbf{X}^{*}, \mathbf{0}) \in \mathcal{X}(B, s, 0)$ (noiseless case), with the weight matrix $\mathbf{W}$ pre-computed by (14) in ALISTA-GS, if they satisfy, for each $t$,
$$\theta^{t} = \gamma^{t}\,\tilde{\mu}\sup_{(\mathbf{X}^{*}, \mathbf{0}) \in \mathcal{X}(B, s, 0)} \|\mathbf{X}^{t} - \mathbf{X}^{*}\|_{2,1}. \qquad (35)$$
Theorem 3 (Convergence rate of ALISTA-GS).
With the "good" parameters given in Definition 3 and $\mathbf{X}^{0} = \mathbf{0}$, the sequence $\{\mathbf{X}^{t}\}$ generated by ALISTA-GS in (15) satisfies, for all $(\mathbf{X}^{*}, \mathbf{0}) \in \mathcal{X}(B, s, 0)$,
$$\|\mathbf{X}^{t} - \mathbf{X}^{*}\|_{F} \le sB\exp\Big(-{\textstyle\sum_{\tau=0}^{t-1}} c^{\tau}\Big), \quad t = 1, 2, \ldots, \qquad (36)$$
where $c^{\tau} > 0$ is a factor depending on $\gamma^{\tau}$, $s$ and $\tilde{\mu}$.
Proof.
Please refer to Appendix C. ∎
Remark 3.
Optimally, if the factor $c^{\tau}$ attains its maximum $c^{*}$ at every layer, i.e., $c^{\tau} = c^{*}$ for all $\tau$, then ALISTA-GS enjoys a linear convergence rate.
V Numerical Results
In this section, we present the simulation results for the proposed three unrolled neural network structures for grant-free massive access in IoT networks. We first introduce the settings of the simulations and performance metrics and then present the simulation results to confirm the main theorems including the weight coupling structure and the convergence rate. Finally, we compare the recovery performance of the proposed methods with other popular CS-based methods.
V-A Simulation Settings and Performance Metrics
In the simulations, the channels are assumed to suffer from independent Rayleigh fading, i.e., each entry of $\mathbf{H}$ follows $\mathcal{CN}(0, 1)$. The preamble signature matrix $\mathbf{S}$ is fixed with each of its columns normalized, and each entry of the noise matrix follows the Gaussian distribution with zero mean and variance $\sigma_{w}^{2}$. In addition, each entry of the activity sequence is a random variable that follows the Bernoulli distribution with mean $\epsilon$, i.e., $\Pr(a_{n} = 1) = \epsilon$ and $\Pr(a_{n} = 0) = 1 - \epsilon$, $\forall n \in \mathcal{N}$. The transmit signal-to-noise ratio (SNR) of the system is defined as
$$\mathrm{SNR} = \frac{\mathbb{E}\big[\|\mathbf{S}\mathbf{X}\|_{F}^{2}\big]}{\mathbb{E}\big[\|\mathbf{W}\|_{F}^{2}\big]}. \qquad (38)$$
By transforming all these complex-valued matrices into real-valued matrices according to (5), we obtain the training data set. In the training stage, we choose the same number of layers for all the unrolled models in the simulations unless otherwise stated. The training set contains independently generated samples, and the two learning rates used in the layer-wise training strategy of Section III-B satisfy $\alpha_{2} < \alpha_{1}$. In the validation and testing stages, additional samples are generated to test the trained models (drawn independently of the training set, but from the same distribution).
We compare our proposed unrolled networks, i.e., LISTA-GS, LISTA-GSCP and ALISTA-GS, with the following popular CS-based algorithms for the group-sparse matrix estimation problem: ISTA-GS, Nesterov's method [44], AMP-MMV [15], and AMP with an MMSE denoiser [13].
We adopt the normalized mean square error (NMSE) to evaluate the performance in recovering the real-valued matrix $\tilde{\mathbf{X}}$, defined as
$$\mathrm{NMSE} = 10\log_{10}\!\left(\frac{\mathbb{E}\big[\|\hat{\mathbf{X}} - \tilde{\mathbf{X}}\|_{F}^{2}\big]}{\mathbb{E}\big[\|\tilde{\mathbf{X}}\|_{F}^{2}\big]}\right), \qquad (39)$$
where $\tilde{\mathbf{X}}$ represents the ground truth and $\hat{\mathbf{X}}$ is the estimated value.
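A sketch of the metric in (39) for a single sample; in the simulations, the ratio would be averaged over the test set before taking the logarithm.

```python
import numpy as np

def nmse_db(X_hat: np.ndarray, X_true: np.ndarray) -> float:
    """Per-sample NMSE in dB, following (39)."""
    err = np.linalg.norm(X_hat - X_true) ** 2 / np.linalg.norm(X_true) ** 2
    return float(10.0 * np.log10(err))
```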
V-B Validation of Theorems
In this subsection, we conduct simulations to validate the developed theorems. We generate the preamble signature matrix according to the complex Gaussian distribution unless otherwise stated, i.e., each entry of $\mathbf{S}$ follows $\mathcal{CN}(0, 1/L)$. The length of the signature sequence $L$, the total number of devices $N$, and the number of antennas $M$ at the BS are fixed in this subsection.
Validation of Theorem 1. Theorem 1 endorses the empirical results shown in Figs. 4(a) and 4(b). In Fig. 4, the values of $\big\|\mathbf{W}_{2}^{t} - (\mathbf{I} - \mathbf{W}_{1}^{t}\tilde{\mathbf{S}})\big\|_{F}$ and $\theta^{t}$ in LISTA-GS are reported. We observe that as $t$ increases, both values approach $0$. The simulations clearly validate Theorem 1: $\mathbf{W}_{2}^{t} - (\mathbf{I} - \mathbf{W}_{1}^{t}\tilde{\mathbf{S}}) \to \mathbf{0}$ and $\theta^{t} \to 0$ as $t \to \infty$.
Validations of Theorems 2 and 3. We examine the convergence rates of the proposed unrolled networks. Fig. 8 shows the NMSE of the proposed unrolled networks and the other methods over the iterations in a noisy scenario. In Fig. 8, we show that the proposed networks (almost) converge linearly, which validates Theorems 2 and 3. Furthermore, LISTA-GS converges the fastest, but with the largest number of trainable parameters. Besides, we observe that the proposed unrolled networks outperform the other baseline algorithms in terms of NMSE.
Networks | LISTA-GS | LISTA-GSCP | ALISTA-GS |
---|---|---|---|
Training time (min) | 160.13 | 105.31 | 87.09 |
NMSE (dB) | -24.08 | -23.38 | -23.30 |
Method | LISTA-GS | LISTA-GSCP | ALISTA-GS | ISTA-GS | Nesterov's method | AMP-MMV | AMP MMSE-denoiser
---|---|---|---|---|---|---|---
Running time | 0.0078s | 0.0076s | 0.0076s | 0.0079s | 0.0087s | 0.2089s | 0.0731s
Training Time Comparison. We compare the training time of the proposed unrolled networks. We train all the networks with the same number of layers and training samples in a noisy scenario on a GeForce GTX 1080 GPU. Table II shows that ALISTA-GS is not only the fastest to train but also performs nearly as well as LISTA-GS and LISTA-GSCP. With fewer trainable parameters, the training process becomes faster.
Computation Time Comparison. We compare the running time of the test stage for different methods. Once the proposed unrolled networks are trained, we can use them to recover many new samples of different channels. We run all the methods on an Intel(R) Core(TM) i7-8650U CPU @ 2.11 GHz and average the running time over the test samples, as shown in Table III. The running time of the proposed unrolled networks is very close to that of ISTA-GS and Nesterov's method, which validates the time complexity analysis in Section III-C. In addition, the proposed unrolled networks run faster than the AMP-based algorithms. This benefits from the low computational complexity per iteration of ISTA-GS.
Convergence Performance with an Ill-Conditioned Preamble Signature. We consider simulations in which the preamble signature matrix is ill-conditioned, i.e., has a large condition number $\kappa = \sigma_{\max}(\mathbf{S})/\sigma_{\min}(\mathbf{S})$. We consider fixed signature matrices with two different condition numbers in this setting. To obtain a signature matrix with a prescribed condition number, we first sample a complex Gaussian matrix, decompose it by the singular value decomposition, and then replace its singular values with a new set that satisfies the prescribed condition number.
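A sketch of this construction for a real-valued matrix; the target condition number kappa and the linear spacing of the singular values are illustrative choices.

```python
import numpy as np

def with_condition_number(L, N, kappa, rng=np.random.default_rng(0)):
    """Sample a Gaussian matrix and rescale its singular values so that
    sigma_max / sigma_min equals kappa."""
    A = rng.standard_normal((L, N))
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_new = np.linspace(kappa, 1.0, num=s.size)   # largest/smallest ratio = kappa
    return U @ np.diag(s_new) @ Vt
```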
Fig. 8 shows the simulation results of the NMSE for the two condition numbers in a noisy scenario. The baselines AMP-MMV and AMP MMSE-denoiser fail in these simulations. Compared with the smaller condition number, the proposed unrolled networks remain stable under the larger one, whereas the outputs of ISTA-GS and Nesterov's method diverge. These results show that the proposed unrolled networks are robust to ill-conditioned preamble signature matrices.
V-C Performance of Proposed Unrolled Neural Networks
In this subsection, we show the performance in terms of the NMSE and compare the proposed methods with other classic CS-based methods. By varying the SNR, all the algorithms under consideration reach a stable solution in the simulations. A smaller NMSE value means a better recovery performance of JADCE. In these cases, we set $L$, $N$ and $M$ to values suitable for grant-free massive access.
Gaussian Matrix as Preamble Signature. We conduct simulations for the case in which the preamble signature matrix is a complex Gaussian matrix. Fig. 8 shows the impact of the SNR on the NMSE of the proposed unrolled networks and the baseline algorithms. First, the NMSE decreases as the SNR grows, which implies that the JADCE performance improves as the SNR increases. Second, the proposed network structures achieve a much lower NMSE than the other CS-based methods for different values of SNR. In addition, LISTA-GS outperforms LISTA-GSCP and ALISTA-GS. This indicates that the more trainable parameters, the better the NMSE performance.
Fig. 12 demonstrates the NMSE performance versus the number of devices for the proposed unrolled networks and the baseline algorithms. The NMSE increases as the number of devices increases for all the methods, while the proposed unrolled networks outperform the other baseline algorithms. Fig. 12 also shows that the proposed unrolled networks achieve a lower device activity detection error probability than the other baseline methods, which further verifies the superiority of the proposed unrolled networks.
Binary Matrix as Preamble Signature. We conduct simulations for the case in which the preamble sequence matrix is a binary matrix, with each entry of $\mathbf{S}$ selected uniformly at random from a binary alphabet, which is closely related to Code Division Multiple Access (CDMA) communication systems [41]. Fig. 12 shows the NMSE performance of the proposed unrolled networks and the baseline methods. The proposed networks remain stable, but AMP-MMV and AMP MMSE-denoiser fail to solve the JADCE problem when the preamble sequence matrix is binary. The proposed unrolled networks outperform the baseline algorithms by a large margin in NMSE for different values of SNR. Clearly, the proposed unrolled networks achieve significantly better NMSE performance compared with the CS-based methods. This result demonstrates the robustness of our proposed methods in the non-Gaussian matrix case.
Zadoff-Chu Sequences as Preamble Signature. In this case, we carry out simulations in which the preamble signatures are composed of Zadoff-Chu sequences [8], which are widely used in practical systems, e.g., 4G LTE. Fig. 12 shows the NMSE performance of the proposed unrolled networks and the other baseline CS-based methods. The proposed networks remain stable, but the AMP-based algorithms fail to solve the JADCE problem when $\mathbf{S}$ is a Zadoff-Chu sequence matrix. From this figure, we observe that the proposed unrolled networks achieve a much better NMSE performance compared with the baseline methods. Moreover, as the SNR increases, the proposed approaches perform increasingly better than the baseline methods. Among the proposed structures, LISTA-GS achieves the best performance, and LISTA-GSCP shares a similar performance to that of ALISTA-GS. This result also shows the robustness of our proposed methods in practical settings.
In summary, the simulations demonstrate the effectiveness of the proposed unrolled networks in the following aspects:
• The proposed unrolled networks converge faster than robust algorithms such as ISTA-GS and Nesterov's method, yielding a much lower computational cost.

• Simulations with different preambles show that the proposed unrolled networks are more robust than computationally efficient algorithms such as the AMP-based algorithms.

• The proposed unrolled networks achieve better JADCE performance compared with the baseline CS-based algorithms.
VI Conclusions
In this paper, we proposed a novel unrolled deep neural network framework enjoying a linear convergence rate, low computational complexity and high robustness for solving the JADCE problem in grant-free massive access for IoT networks. We introduced the first unrolled network structure by mapping the iterative ISTA-GS algorithm into an unrolled RNN, thereby improving the convergence rate through end-to-end training. To make the training procedure efficient and to tackle the interpretability issue of deep learning, we proposed two simplified unrolled network structures with fewer trainable parameters and proved the linear convergence rate of these methods. Extensive simulations were conducted to verify the effectiveness of the proposed unrolled networks in terms of the convergence rate, robustness and estimation accuracy for the JADCE problem.
Appendix A Proof of theorem 1
Proof.
We first prove that the threshold value $\theta^{t}$ converges to zero, and then show that the weights in LISTA-GS have the weight coupling property.
(1) We verify that $\theta^{t} \to 0$ as $t \to \infty$ under the uniform convergence in (11). We define a subset of the sample set for a given support as
Clearly, . Since is uniform for all we have , where . Then, there exists for all such that if ,
Then we have Recall that the recurrence relation and let . From the uniform convergence of , it follows that for any . We have
(40) |
where .
Also, the uniform convergence of implies that for any and there exists such that if , then . Denote for each . That is, for all .
We now show that is sufficiently small for . Denote extra notations
The value can be simply rewritten as an inner product. Then the Cauchy-Schwarz inequality implies
From the fact that for any matrix and the triangle inequality it follows that if , then
(42) |
Note that
(43) |
Then it follows that
(44) |
By Cauchy-Schwarz inequality, it holds that
(45) |
(46) |
where . Then, by (41) we have
Therefore, by (50) we get
(47) |
For any , it is true that holds. Thus, the above argument holds for all if Substituting with in the previous equation, we have
(48) |
From (47) and (48) one can get Then, taking the absolute value on the both sides, then by (46), if ,
(49) |
Moreover, as $\theta^{t}$ is the MSTO parameter, $\theta^{t} \ge 0$. Therefore, $\theta^{t} \to 0$ as $t \to \infty$.
(2) We prove that $\mathbf{W}_{2}^{t} - (\mathbf{I} - \mathbf{W}_{1}^{t}\tilde{\mathbf{S}}) \to \mathbf{0}$ as $t \to \infty$. The LISTA-GS model (9) gives
where is the sub-gradient of . It is a set defined for each row as follows:
(50) |
where
(51) |
From the definition of operator norm we have that
Thus, the bound holds uniformly for all sufficiently large $t$. Therefore, $\mathbf{W}_{2}^{t} - (\mathbf{I} - \mathbf{W}_{1}^{t}\tilde{\mathbf{S}}) \to \mathbf{0}$ as $t \to \infty$. ∎
Appendix B Proof of Lemma 2
We show by induction that the no-false-positive property (26) holds for the LISTA-GSCP.
Appendix C Proof of Theorem 3
Proof.
Similar to the proof of Theorem 2, we first show that there are no false positive rows in $\mathbf{X}^{t}$. Take an arbitrary $(\mathbf{X}^{*}, \mathbf{0}) \in \mathcal{X}(B, s, 0)$ and let $\mathbf{X}^{0} = \mathbf{0}$. We prove the no-false-positive property by induction.
(i) When $t = 0$, the property is satisfied since $\mathbf{X}^{0} = \mathbf{0}$.
(ii) Fix and suppose that for all . Then we have
for all . As the thresholds are taken as and , it holds that
which implies by the definition. Thus,
In the next step, we consider the components on $\mathbb{S}$. For all $n \in \mathbb{S}$, we have
(52) |
where is defined in (51). As we choose , so , then can be expressed as
Hence, (52) can be rewritten as
Then one can get that for all
By no-false-positive property, we have
Finally, taking the supremum over all samples, we have
Taking , then from the fact that in (36) we obtain
Hence, we get the following upper bound with respect to the Frobenius norm:
(53) |
Therefore, the error bound holds uniformly for all .
Lastly, we verify that for all . The assumption gives . If , then If , then Thus for all .
∎
References
- [1] Y. Shi, S. Xia, Y. Zhou, and Y. Shi, “Sparse signal processing for massive device connectivity via deep learning,” in Proc. IEEE Int. Conf. Commun. (ICC) Workshop, pp. 1–6, 2020.
- [2] S. K. Sharma and X. Wang, “Toward massive machine type communications in ultra-dense cellular IoT networks: Current issues and machine learning-assisted solutions,” IEEE Commun. Surveys Tuts., vol. 22, pp. 426–471, First Quart. 2020.
- [3] W. Peng, W. Gao, and J. Liu, “AI-enabled massive devices multiple access for smart city,” IEEE Internet Things J., vol. 6, pp. 7623–7634, Oct. 2019.
- [4] Y. Wu, X. Gao, S. Zhou, W. Yang, Y. Polyanskiy, and G. Caire, “Massive access for future wireless communication systems,” IEEE Wireless Commun., pp. 1–9, Aug. 2020.
- [5] L. Liu, E. G. Larsson, W. Yu, P. Popovski, C. Stefanovic, and E. De Carvalho, “Sparse signal processing for grant-free massive connectivity: A future paradigm for random access protocols in the internet of things,” IEEE Signal Process. Mag., vol. 35, pp. 88–99, Sept. 2018.
- [6] Z. Chen, F. Sohrabi, and W. Yu, “Sparse activity detection for massive connectivity,” IEEE Trans. Signal Process., vol. 66, pp. 1890–1904, Apr. 2018.
- [7] H. Jiang, D. Qu, J. Ding, and T. Jiang, “Multiple preambles for high success rate of grant-free random access with massive MIMO,” IEEE Trans. Wireless Commun., vol. 18, pp. 4779–4789, Oct. 2019.
- [8] J. Ding and J. Choi, “Comparison of preamble structures for grant-free random access in massive MIMO systems,” IEEE Wireless Commun. Lett., vol. 9, pp. 166–170, Feb. 2020.
- [9] X. Shao, X. Chen, C. Zhong, J. Zhao, and Z. Zhang, “A unified design of massive access for cellular internet of things,” IEEE Internet of Things J., vol. 6, pp. 3934–3947, Apr. 2019.
- [10] K. Senel and E. G. Larsson, “Grant-free massive MTC-enabled massive MIMO: A compressive sensing approach,” IEEE Trans. Commun., vol. 66, pp. 6164–6175, Dec. 2018.
- [11] Z. Qin, J. Fan, Y. Liu, Y. Gao, and G. Y. Li, “Sparse representation for wireless communications: A compressive sensing approach,” IEEE Signal Process. Mag., vol. 35, pp. 40–58, May 2018.
- [12] Z. Gao, L. Dai, S. Han, I. Chih-Lin, Z. Wang, and L. Hanzo, “Compressive sensing techniques for next-generation wireless communications,” IEEE Wireless Commun., vol. 25, pp. 144–153, Jun. 2018.
- [13] L. Liu and W. Yu, “Massive connectivity with massive MIMO-Part I: Device activity detection and channel estimation,” IEEE Trans. Signal Process., vol. 66, pp. 2933–2946, Jun. 2018.
- [14] M. Ke, Z. Gao, Y. Wu, X. Gao, and R. Schober, “Compressive sensing-based adaptive active user detection and channel estimation: Massive access meets massive MIMO,” IEEE Trans. Signal Process., vol. 68, pp. 764–779, 2020.
- [15] J. Ziniel and P. Schniter, “Efficient high-dimensional inference in the multiple measurement vector problem,” IEEE Trans. Signal Process., vol. 61, pp. 340–354, Jan. 2012.
- [16] A. K. Fletcher, P. Pandit, S. Rangan, S. Sarkar, and P. Schniter, “Plug-in estimation in high-dimensional linear inverse problems: A rigorous analysis,” in Proc. Neural Inf. Process. Syst. (NeurIPS), pp. 7440–7449, 2018.
- [17] T. Jiang, Y. Shi, J. Zhang, and K. B. Letaief, “Joint activity detection and channel estimation for IoT networks: Phase transition and computation-estimation tradeoff,” IEEE Internet of Things J., vol. 6, pp. 6212–6225, Aug. 2019.
- [18] Q. He, T. Q. Quek, Z. Chen, Q. Zhang, and S. Li, “Compressive channel estimation and multi-user detection in C-RAN with low-complexity methods,” IEEE Trans. Wireless Commun., vol. 17, pp. 3931–3944, Jun. 2018.
- [19] Z. Qin, K. Scheinberg, and D. Goldfarb, “Efficient block-coordinate descent algorithms for the group lasso,” Math. Program. Comput., vol. 5, no. 2, pp. 143–169, 2013.
- [20] A. Bonnefoy, V. Emiya, L. Ralaivola, and R. Gribonval, “Dynamic screening: Accelerating first-order algorithms for the lasso and group-lasso,” IEEE Trans. Signal Process., vol. 63, pp. 5121–5132, Oct. 2015.
- [21] R. Giryes, Y. C. Eldar, A. M. Bronstein, and G. Sapiro, “Tradeoffs between convergence speed and reconstruction accuracy in inverse problems,” IEEE Trans. Signal Process., vol. 66, pp. 1676–1690, Apr. 2018.
- [22] H. Sun, X. Chen, Q. Shi, M. Hong, X. Fu, and N. D. Sidiropoulos, “Learning to optimize: Training deep neural networks for interference management,” IEEE Trans. Signal Process., vol. 66, pp. 5438–5453, Oct. 2018.
- [23] M. Eisen, C. Zhang, L. F. O. Chamon, D. D. Lee, and A. Ribeiro, “Learning optimal resource allocations in wireless systems,” IEEE Trans. Signal Process., vol. 67, pp. 2775–2790, May 2019.
- [24] Y. Shen, Y. Shi, J. Zhang, and K. B. Letaief, “A graph neural network approach for scalable wireless power control,” in Proc. IEEE Global Commun. Conf. (Globecom) Workshop, pp. 1–6, Dec. 2019.
- [25] Y. Shen, Y. Shi, J. Zhang, and K. B. Letaief, “LORM: Learning to optimize for resource management in wireless networks with few training samples,” IEEE Trans. Wireless Commun., vol. 19, pp. 665–679, Jan. 2020.
- [26] L. Liang, H. Ye, G. Yu, and G. Y. Li, “Deep-learning-based wireless resource allocation with application to vehicular networks,” Proc. of the IEEE, vol. 108, pp. 341–356, Feb. 2020.
- [27] K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” in Proc. Int. Conf. Mach. Learn. (ICML), pp. 399–406, Omnipress, 2010.
- [28] V. Monga, Y. Li, and Y. C. Eldar, “Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing,” IEEE Signal Process. Mag., vol. 38, pp. 18–44, Mar. 2021.
- [29] N. Shlezinger, J. Whang, Y. C. Eldar, and A. G. Dimakis, “Model-based deep learning,” arXiv preprint arXiv:2012.08405, 2020.
- [30] S. Balatsoukas, Alexios, and C. Studer, “Deep unfolding for communications systems: A survey and some new directions,” in Proc. IEEE Int. Workshop Signal Process. Syst. (SiPS), pp. 266–271, 2019.
- [31] Q. Hu, Y. Cai, Q. Shi, K. Xu, G. Yu, and Z. Ding, “Iterative algorithm induced deep-unfolding neural networks: Precoding design for multiuser mimo systems,” IEEE Trans. Wireless Commun., vol. 20, pp. 1394–1410, Feb. 2021.
- [32] J. Liu, X. Chen, Z. Wang, and W. Yin, “ALISTA: Analytic weights are as good as learned weights in LISTA,” in Proc. Int. Conf. on Learn. Rep. (ICLR), 2019.
- [33] P. Ablin, T. Moreau, M. Massias, and A. Gramfort, “Learning step sizes for unfolded sparse coding,” in Proc. Neural Inf. Process. Syst. (NeurIPS), pp. 13100–13110, 2019.
- [34] X. Chen, J. Liu, Z. Wang, and W. Yin, “Theoretical linear convergence of unfolded ISTA and its practical weights and thresholds,” in Proc. Neural Inf. Process. Syst. (NeurIPS), pp. 9061–9071, 2018.
- [35] A. T. Puig, A. Wiesel, G. Fleury, and A. O. Hero, “Multidimensional shrinkage-thresholding operator and group LASSO penalties,” IEEE Signal Process. Lett., vol. 18, pp. 363–366, Jun. 2011.
- [36] M. Yuan and Y. Lin, “Model selection and estimation in regression with grouped variables,” J. R. Stat. Soc. B, vol. 68, no. 1, pp. 49–67, 2006.
- [37] D. Ito, S. Takabe, and T. Wadayama, “Trainable ISTA for sparse signal recovery,” IEEE Trans. Signal Process., vol. 67, pp. 3113–3125, Jun. 2019.
- [38] A. Maleki, L. Anitori, Z. Yang, and R. G. Baraniuk, “Asymptotic analysis of complex LASSO via complex approximate message passing (CAMP),” IEEE Trans. Inf. Theory, vol. 59, pp. 4290–4308, Jul. 2013.
- [39] S. Takabe, T. Wadayama, and Y. C. Eldar, “Complex trainable ista for linear and nonlinear inverse problems,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), pp. 5020–5024, 2020.
- [40] M. Borgerding, P. Schniter, and S. Rangan, “AMP-Inspired deep networks for sparse linear inverse problems,” IEEE Trans. Signal Process., vol. 65, pp. 4293–4308, Aug. 2017.
- [41] Y. Kabashima, “A CDMA multiuser detection algorithm on the basis of belief propagation,” J. Phys. A: Math. General, vol. 36, p. 11111, Oct. 2003.
- [42] H. Sun, W. Pu, M. Zhu, X. Fu, T.-H. Chang, and M. Hong, “Learning to continuously optimize wireless resource in episodically dynamic environment,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), pp. 4945–4949, 2021.
- [43] Y. Yuan, G. Zheng, K.-K. Wong, B. Ottersten, and Z.-Q. Luo, “Transfer learning and meta learning-based fast downlink beamforming adaptation,” IEEE Trans. Wireless Commun., vol. 20, pp. 1742–1755, Mar. 2021.
- [44] J. Liu, S. Ji, and J. Ye, “Multi-task feature learning via efficient -norm minimization,” in Proc. Conf. Uncertainty Artif. Intell. (UAI), pp. 339–348, AUAI Press, 2009.
- [45] R. Gribonval and M. Nielsen, “Sparse representations in unions of bases,” IEEE Trans. Inf. Theory, vol. 49, pp. 3320–3325, Dec. 2003.