A Neural Network-Prepended GLRT Framework for Signal Detection Under Nonlinear Distortions
Abstract
Many communications and sensing applications hinge on detecting a signal in a noisy, interference-heavy environment. Signal processing theory provides techniques such as the generalized likelihood ratio test (GLRT) to perform detection when the received samples follow a linear observation model. In many practical applications, however, the received signal has passed through a nonlinearity, causing significant performance degradation of the GLRT. In this work, we propose prepending the GLRT detector with a neural network classifier capable of identifying the excessively distorted time samples in a received signal. We show that pre-processing received nonlinear signals with our trained classifier to eliminate these samples (i) improves the detection performance of the GLRT on nonlinear signals and (ii) retains, on linear observation models, the theoretical guarantees for accurate signal detection provided by the GLRT.
Index Terms:
Generalized likelihood ratio test, dense neural network, nonlinear signal processing, wireless communications.

I Introduction
Signal detection is a classical problem with several applications in wireless communications [1]. The general detection problem entails identifying the presence of a particular signal of interest (SOI) in a received transmission containing different sources of background noise and interference. Traditionally, with adequate assumptions about the distributions of the SOI and interference, statistical hypothesis testing has been used to derive robust and highly accurate detectors [2, 3].
Despite their robustness under linear observation models, nonlinear transformations and clipping of a received signal significantly degrade the performance of linear test statistics derived from statistical hypothesis testing. Of particular interest to our study, high-powered signals that drive an amplifier outside its linear operating regime are clipped, introducing nonlinearities into the received passband signal before it is observed at complex baseband. Such high-powered signals arise from sources of interference such as loud signals in audio applications and high signal-to-noise ratio (SNR) interferers that dominate the SOI, as in the near-far problem. Achieving accurate detection despite the nonlinearity introduced by the system's hardware is vital. Therefore, robust detection models, with performance guarantees similar to those provided by likelihood methods, are needed for signal detection in the presence of hardware-induced nonlinearities.
In this work, we consider editing a received wireless signal that has undergone a nonlinear transformation prior to applying the generalized likelihood ratio test (GLRT) for detection. We observe that a nonlinear distortion alters each sample of the received signal to a different extent: certain samples are significantly distorted from their linear counterparts, while others remain close to them. Motivated by this, we train a lightweight (i.e., low-parameter) feedforward dense neural network (DNN) to classify samples that have undergone excessive nonlinearity. At test time, we pre-process the received nonlinear signal by eliminating the time samples that the DNN predicts to be highly distorted. The GLRT is then applied to the pre-processed signal, with the highly nonlinear time samples eliminated, to perform detection. Our results verify that this method allows a linear test statistic to operate in nonlinear environments while retaining the theoretical guarantees of the GLRT.
Related Work: Test statistics derived from the GLRT, as well as similar statistical hypothesis testing methods [4], have been thoroughly studied and shown to be robust signal detectors under linear observation models [5, 6] with Gaussian noise. The GLRT has also been shown to be effective when the background noise in the received signal is non-Gaussian [7, 8]. Yet, the GLRT suffers significant performance degradation when the received signal has undergone a nonlinear transformation, regardless of the noise distribution [9].
In contrast to the long-standing work on detection using likelihood methods, machine learning has recently been proposed for signal detection in wireless communications and cognitive radio environments [10, 11, 12]. These studies demonstrate the versatility that deep learning provides over linear methods in dynamic environments, where analytically characterizing the received signal may not be possible. However, unlike likelihood-based detection, deep learning does not provide theoretical guarantees, making the detection result difficult to interpret. In this work, we employ a neural network in conjunction with a likelihood-based detection test to leverage its strong classification performance while retaining the theoretical guarantees provided by the GLRT.
Summary of Contributions: The main contributions of this work are as follows:
• Nonlinear Sample Classifier: We propose prepending a GLRT detector with a neural network classifier to identify time samples that have undergone a nonlinear transformation (Sec. II).
• Linear GLRT Operation in Nonlinear Environments: We show the effectiveness of our nonlinear sample classifier, in conjunction with the GLRT-derived linear test statistic, in performing robust detection in nonlinear environments (Sec. III).
II Methodology
II-A Signal Modeling

We consider two time-series signals, $s_1(t)$ and $s_2(t)$, transmitted from two different directions and impinging on an $M$-element antenna array. Throughout this work, we denote the interference signal as $s_1(t)$ and the SOI, which we are interested in detecting, as $s_2(t)$. A snapshot of the output of the antenna array at time $t$ is given by
$$\mathbf{x}(t) = \mathbf{a}(\theta_1)\, s_1(t) + \mathbf{a}(\theta_2)\, s_2(t) + \mathbf{n}(t), \quad t = 1, \ldots, T, \tag{1}$$
where $\mathbf{x}(t) \in \mathbb{C}^{M}$ is a column vector that denotes the sensor outputs at time $t$, $\mathbf{a}(\theta_1)$ and $\mathbf{a}(\theta_2)$ are the array response vectors of signals $s_1$ and $s_2$ arriving from directions $\theta_1$ and $\theta_2$ (assuming that the direction parameters $\theta_1$ and $\theta_2$ are scalars), respectively, and $s_1(t)$ and $s_2(t)$ are the amplitudes of the interference and the SOI, respectively, at time $t$. Lastly, $\mathbf{n}(t)$ is complex additive white Gaussian noise (AWGN) with variance $\sigma^2$, and $T$ is the total number of time samples.
Let $\mathbf{X} = [\mathbf{x}(1), \ldots, \mathbf{x}(T)] \in \mathbb{C}^{M \times T}$ denote the complete observed signal at the antenna array. Given $\mathbf{X}$, we are interested in determining when the SOI turns on (i.e., when $s_2(t)$ becomes active). Here, we define the null hypothesis, $\mathcal{H}_0$, as the case where $s_2(t)$ is absent in $\mathbf{X}$, and the alternate hypothesis, $\mathcal{H}_1$, as the case where $s_2(t)$ is present in $\mathbf{X}$. Formally, we characterize the distribution of $\mathbf{X}$ under each hypothesis as
$$\mathcal{H}_0: \; \mathbf{X} \sim \mathcal{CN}\!\left(\mathbf{0},\, \mathbf{R}\right), \qquad \mathcal{H}_1: \; \mathbf{X} \sim \mathcal{CN}\!\left(\mathbf{a}(\theta_2)\,\mathbf{b}\,\mathbf{J},\, \mathbf{R}\right), \tag{2}$$
where $\mathbf{R}$ is the true, unknown covariance of $\mathbf{X}$. Under $\mathcal{H}_1$, we exclusively consider a system with one SOI in $\mathbf{X}$. Here, $\mathbf{a}(\theta_2)$ is the antenna array response of the SOI, $\mathbf{b} \in \mathbb{C}^{1 \times T}$ is the row vector of signal amplitudes, and $\mathbf{J}$ is the diagonal selection matrix ($T \times T$) corresponding to the time samples during which the SOI is active (i.e., when $s_2(t)$ is present in $\mathbf{X}$), thus yielding $\mathbf{J} = \mathrm{diag}(0, \ldots, 0, 1, \ldots, 1)$ with the leading zeros ending at sample $t_0 - 1$, where $t_0$ denotes the time sample at which $s_2(t)$ becomes active in $\mathbf{X}$. Our objective is to arrive at a metric that will allow us to determine the active hypothesis (i.e., whether and where the SOI became active in $\mathbf{X}$).
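To make the observation model concrete, the following minimal NumPy sketch simulates snapshots from (1) with the SOI turning on at sample $t_0$. All numeric values (array size, window length, arrival angles, signal powers, and noise variance) are illustrative assumptions rather than the paper's settings, and a half-wavelength uniform linear array is assumed for $\mathbf{a}(\theta)$.

```python
import numpy as np

rng = np.random.default_rng(0)

M, T, t0 = 4, 2000, 1000           # antennas, window length, SOI onset (illustrative)
theta1, theta2 = np.deg2rad(20.0), np.deg2rad(-35.0)  # interference / SOI directions
sigma2 = 1e-4                      # AWGN variance (assumed)

def steering(theta, M):
    """Array response a(theta) for a half-wavelength uniform linear array."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

# BPSK amplitude sequences (cf. Sec. III-A); the interference dominates the SOI
s1 = 10.0 * rng.choice([-1.0, 1.0], size=T)   # high-power interference s1(t)
s2 = 0.5 * rng.choice([-1.0, 1.0], size=T)    # weaker SOI s2(t)
s2[:t0] = 0.0                                 # SOI inactive before t0 (H0 region)

n = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, T))
                           + 1j * rng.standard_normal((M, T)))

# Eq. (1): x(t) = a(theta1) s1(t) + a(theta2) s2(t) + n(t), stacked into X
X = np.outer(steering(theta1, M), s1) + np.outer(steering(theta2, M), s2) + n
```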
II-B GLRT-based Linear Detection
Given $\mathbf{X}$, we determine the active hypothesis by utilizing the GLRT and optimizing over the empirical covariances. Denoting $f_0(\mathbf{X}; \mathbf{R})$ and $f_1(\mathbf{X}; \mathbf{R})$ as the complex Gaussian densities corresponding to the distributions under $\mathcal{H}_0$ and $\mathcal{H}_1$, the GLRT, by definition, is
$$\Lambda(\mathbf{X}) = \frac{\max_{\mathbf{R}} f_1(\mathbf{X}; \mathbf{R})}{\max_{\mathbf{R}} f_0(\mathbf{X}; \mathbf{R})} \;\overset{(a)}{=}\; \frac{f_1(\mathbf{X}; \hat{\mathbf{R}}_1)}{f_0(\mathbf{X}; \hat{\mathbf{R}}_0)} \;\underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}}\; \gamma, \tag{3}$$
where $\gamma$ is the threshold parameter, $(\cdot)^{\mathsf{H}}$ denotes the Hermitian transpose, $\mathrm{tr}(\cdot)$ denotes the trace of a matrix, and $|\cdot|$ denotes the determinant. In (a), the maximizations over the numerator and denominator covariances yield the corresponding empirical covariances $\hat{\mathbf{R}}_1$ and $\hat{\mathbf{R}}_0$. The quantity in (3) is equivalent to
$$\Lambda(\mathbf{X}) \;=\; \frac{\big|\hat{\mathbf{R}}\big|^{2N}}{\big|\hat{\mathbf{R}}_1\big|^{N}\,\big|\hat{\mathbf{R}}_2\big|^{N}} \;\underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}}\; \gamma, \tag{4}$$

where $\hat{\mathbf{R}}_1$ and $\hat{\mathbf{R}}_2$ are the empirical covariances of the partitions $\mathbf{X}_1$ and $\mathbf{X}_2$ (defined below), and $\hat{\mathbf{R}}$ is the empirical covariance over both partitions combined.
We refer the reader to [13] for additional details on this GLRT derivation. In (4), we have introduced $\mathbf{X}_1 \in \mathbb{C}^{M \times N}$, which corresponds to a partition of $N$ subsequent time samples in $\mathbf{X}$. This definition assumes that the GLRT will be applied to partitions of $\mathbf{X}$. Specifically, $\mathbf{X}_1$ will be used in conjunction with the next partition of $N$ time samples, $\mathbf{X}_2$, to determine if a new signal (the SOI) became active in $\mathbf{X}_2$ that was absent in $\mathbf{X}_1$ [6]. Formally, $\mathbf{X}_1$ and $\mathbf{X}_2$ are given by
$$\mathbf{X}_1 = [\mathbf{x}(t-N+1), \ldots, \mathbf{x}(t)], \qquad \mathbf{X}_2 = [\mathbf{x}(t+1), \ldots, \mathbf{x}(t+N)],$$
where $\mathbf{X}_1$ and $\mathbf{X}_2$ are shifted one time sample per iteration, and the GLRT is reevaluated, for all time samples.
In our case, where $\mathbf{R}$ is completely unknown, we can base detection on the equivalent, monotonically related suboptimal statistic of the GLRT [6] given by
$$\ell(t) = \mathrm{tr}\!\left(\hat{\mathbf{R}}_1^{-1}\, \hat{\mathbf{R}}_2\right), \tag{5}$$
where we reject $\mathcal{H}_0$ if $\ell(t) > \tau$ for some threshold $\tau$, which is empirically determined. Here, $\hat{\mathbf{R}}_1 = \frac{1}{N}\mathbf{X}_1\mathbf{X}_1^{\mathsf{H}}$ is the empirical covariance of $\mathbf{X}_1$, and $\hat{\mathbf{R}}_2 = \frac{1}{N}\mathbf{X}_2\mathbf{X}_2^{\mathsf{H}}$ is the empirical covariance of $\mathbf{X}_2$. Intuitively, the GLRT solution scans the spatial (covariance) domain on blocks of $N$ time samples and produces a contrast peak (further shown and discussed in Sec. III) when a signal (i.e., the SOI) impinging on the antenna array from a different direction than the interference turns on in $\mathbf{X}_2$ but was not present in $\mathbf{X}_1$.
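Continuing the sketch from Sec. II-A, the statistic in (5) can be evaluated with a sliding pair of adjacent $N$-sample windows. This assumes the trace form $\mathrm{tr}(\hat{\mathbf{R}}_1^{-1}\hat{\mathbf{R}}_2)$ reconstructed above; under $\mathcal{H}_1$, a contrast peak should appear near the SOI onset.

```python
def empirical_cov(Xw):
    """Empirical covariance (1/N) Xw Xw^H of an M x N block of snapshots."""
    return Xw @ Xw.conj().T / Xw.shape[1]

def glrt_statistic(X, N):
    """Slide adjacent N-sample windows X1, X2 over X and evaluate (5) at each t."""
    M, T = X.shape
    ell = np.full(T, np.nan)
    for t in range(N, T - N):
        R1 = empirical_cov(X[:, t - N:t])    # block before the candidate onset
        R2 = empirical_cov(X[:, t:t + N])    # block after the candidate onset
        # solve(R1, R2) = R1^{-1} R2; take the real part of its trace
        ell[t] = np.real(np.trace(np.linalg.solve(R1, R2)))
    return ell

ell = glrt_statistic(X, N=100)   # contrast peak expected near t0 under H1
```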
II-C GLRT Breakdown in Nonlinearity
Now, let us consider the behavior of the GLRT test statistic given in (5) when a nonlinearity is induced on the received signal. In this case, we assume that we receive $\tilde{\mathbf{X}}$ instead of $\mathbf{X}$, where the nonlinear signal is produced by some unknown nonlinear function, $g(\cdot)$, that is applied individually to each element of $\mathbf{X}$. Specifically, the nonlinearity effect is given by
$$\tilde{\mathbf{X}} = g(\mathbf{X}), \quad \text{with} \quad [\tilde{\mathbf{X}}]_{m,t} = g\!\left([\mathbf{X}]_{m,t}\right), \quad m = 1, \ldots, M, \;\; t = 1, \ldots, T. \tag{6}$$
The element-wise effect of $g(\cdot)$ is critical because it affects each sample differently. For example, certain received time samples in $\tilde{\mathbf{X}}$ remain in the linear operating region and thus are not significantly distorted by $g(\cdot)$, whereas other samples experience high degrees of clipping and distortion due to the nonlinearity. The clipping distortion of each antenna element can be observed in the eigenvalue spread of the covariance matrix, $\frac{1}{T}\tilde{\mathbf{X}}\tilde{\mathbf{X}}^{\mathsf{H}}$.
The induced nonlinearity directly changes the distribution of $\mathbf{X}$ and, as a result, the initial assumptions about the distributions of the interference and SOI in (2) no longer hold. Furthermore, since the nonlinear function, $g(\cdot)$, is unknown, we are unable to characterize the distribution of the signal after it has been transformed via $g(\cdot)$, thus preventing us from deriving a new GLRT-based test statistic to operate on $\tilde{\mathbf{X}}$. The inability to analytically characterize nonlinearly-processed received signals motivates a data-driven approach to signal detection (as described in Sec. II-D), where data samples can be augmented such that the GLRT is able to operate accurately.
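As a concrete instance of (6), the snippet below applies a saturating element-wise distortion to the simulated snapshots. Using $\tanh$ follows Sec. III-A; applying it separately to the real and imaginary parts at complex baseband is an assumption of this sketch, since $g(\cdot)$ is left unspecified here.

```python
def apply_nonlinearity(X, alpha=1.0):
    """Element-wise saturating distortion per Eq. (6), applied to I and Q."""
    return np.tanh(alpha * X.real) + 1j * np.tanh(alpha * X.imag)

X_nl = apply_nonlinearity(X, alpha=2.0)  # larger alpha drives more samples into saturation
```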
II-D DNN-Based Nonlinear Sample Classification
To combat the breakdown of the GLRT solution on nonlinear signals, we implement a dense (fully connected) neural network (DNN) classification model, which employs the time samples comprising $\tilde{\mathbf{X}}$ for training. Specifically, our objective is to train the DNN to classify excessively nonlinear time samples, i.e., $\tilde{\mathbf{x}}(t)$, using both a linear received signal, $\mathbf{X}$, and its corresponding nonlinear transformation, $\tilde{\mathbf{X}}$. At test time, when we only receive $\tilde{\mathbf{X}}$ and are unaware of $\mathbf{X}$, we use the DNN to identify and eliminate excessively nonlinear samples from $\tilde{\mathbf{X}}$ before applying the GLRT test statistic for detection. Our overall proposed DNN-prepended GLRT framework is shown in Fig. 1.
For compatibility with real-valued DNNs, we represent each time sample (column vector) as a real-valued vector consisting of the sample's corresponding real and imaginary components. Specifically, we denote $\tilde{\mathbf{x}}(t)$ as $\tilde{\mathbf{u}}(t) \in \mathbb{R}^{2M}$ with the form
$$\tilde{\mathbf{u}}(t) = \left[\Re\{\tilde{x}_1(t)\}, \ldots, \Re\{\tilde{x}_M(t)\}, \Im\{\tilde{x}_1(t)\}, \ldots, \Im\{\tilde{x}_M(t)\}\right]^{\mathsf{T}}, \tag{7}$$
where $\Re\{\cdot\}$ and $\Im\{\cdot\}$ denote the real and imaginary components of $\tilde{x}_m(t)$, and the subscript $m \in \{1, \ldots, M\}$ denotes the antenna element from which the sample was collected at time $t$.
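Continuing the sketch, the conversion in (7) amounts to stacking real over imaginary parts; the exact ordering written here is an assumption and is immaterial as long as it is fixed.

```python
def to_real_features(X):
    """Eq. (7): one real vector of length 2M per time sample
    (real parts first, then imaginary parts)."""
    return np.concatenate([X.real, X.imag], axis=0).T   # shape (T, 2M)

U = to_real_features(X)        # linear samples u(t)
U_nl = to_real_features(X_nl)  # nonlinear samples u~(t)
```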
Using a set of training data for which we have both the linear and nonlinear representations of the received signal, $\mathbf{X}$ and $\tilde{\mathbf{X}}$, we measure the Euclidean distance between each pair of time samples, i.e., between $\mathbf{u}(t)$ and $\tilde{\mathbf{u}}(t)$, and populate a one-hot label vector for each sample. Specifically, for each time sample, $t = 1, \ldots, T$, we compute
$$d(t) = \left\lVert \mathbf{u}(t) - \tilde{\mathbf{u}}(t) \right\rVert_2, \tag{8}$$
with the corresponding label determined as
$$y(t) = \begin{cases} 1, & d(t) > \delta, \\ 0, & \text{otherwise}, \end{cases} \tag{9}$$
where $\delta$ is a threshold determined empirically from the training data, and $y(t) = 1$ indicates that the corresponding time sample is excessively nonlinear.
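The labeling rule in (8)–(9) is a few lines of NumPy; the quantile-based choice of $\delta$ below is an assumed heuristic, since the paper only states that $\delta$ is set empirically.

```python
# Eqs. (8)-(9): per-sample distortion distances and thresholded labels.
d = np.linalg.norm(U - U_nl, axis=1)   # d(t) = ||u(t) - u~(t)||_2
delta = np.quantile(d, 0.7)            # assumed rule of thumb: flag the worst 30%
y = (d > delta).astype(int)            # y(t) = 1 marks an excessively nonlinear sample
Y = np.eye(2)[y]                       # one-hot labels for the two-way softmax output
```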
Next, we train a DNN, $h(\cdot)$, using $\tilde{\mathbf{u}}(t)$ and $y(t)$ to classify the time samples, $\tilde{\mathbf{x}}(t)$, that are highly distorted from their linear counterparts. Note that the input to the DNN is $\tilde{\mathbf{u}}(t)$ since, at test time, we will only receive the nonlinear signal and thus need to classify the time samples that have contributed to the nonlinearity to the greatest extent. We consider an $L$-layered DNN, consisting of $K$ units each, with a softmax output layer yielding the DNN prediction $\hat{\mathbf{y}}(t) = [\hat{y}_0(t), \hat{y}_1(t)]$, where the input is predicted to be nonlinear if $\hat{y}_1(t) > \hat{y}_0(t)$. In place of the two-dimensional softmax output, the DNN could output a one-dimensional sigmoidal value. However, we empirically found that the former setup yields a more accurate nonlinear sample classifier.
Using $\tilde{\mathbf{u}}(t)$ and the one-hot encoding of $y(t)$ as training samples, the DNN minimizes the mean squared error loss function. At test time, each column (i.e., time sample) of $\tilde{\mathbf{X}}$ is forward propagated through the DNN, and time samples classified as highly distorted due to the nonlinearity are deleted. The edited matrices of $\tilde{\mathbf{X}}_1$ and $\tilde{\mathbf{X}}_2$, denoted as $\bar{\mathbf{X}}_1$ and $\bar{\mathbf{X}}_2$, respectively, correspond to $\tilde{\mathbf{X}}_1$ and $\tilde{\mathbf{X}}_2$ after the removal of the time samples that were classified as nonlinear by the DNN. $\bar{\mathbf{X}}_1$ and $\bar{\mathbf{X}}_2$ are then used to compute the GLRT test statistic in (5).
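The full prune-and-detect step then reduces to a mask over the columns of $\tilde{\mathbf{X}}$, assuming a trained classifier `model` (a training sketch appears in Sec. III-B) and the 0.5 softmax decision rule, which is an assumption of this sketch.

```python
def dnn_glrt(X_nl, model, N):
    """Prune-and-detect: drop samples flagged as excessively nonlinear, then
    apply the GLRT statistic from Sec. II-B to the edited signal."""
    scores = model.predict(to_real_features(X_nl), verbose=0)  # softmax outputs
    keep = scores[:, 1] < 0.5          # retain samples predicted near-linear
    return glrt_statistic(X_nl[:, keep], N)

# Note: deleting columns shifts the time axis, so peak locations refer to the
# edited signal rather than to the original sample indices.
ell_pruned = dnn_glrt(X_nl, model, N=100)
```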
II-E Complexity Analysis
Here, we analyze the computational complexity of our proposed framework during inference. The GLRT statistic in (5) consists of matrix Hermitian operations and matrix multiplications to form the two empirical covariances, followed by an $M \times M$ matrix inverse and an $M \times M$ matrix multiplication. Thus, the computational complexity of the GLRT is $\mathcal{O}(M^2 N + M^3)$. Since the DNN prepended to the GLRT in our proposed framework consists of $L$ dense layers with $K$ units each, it adds an additional complexity of $\mathcal{O}(MK + LK^2)$ per time sample. This yields a total time complexity, for our method, of $\mathcal{O}(M^2 N + M^3 + N(MK + LK^2))$. During inference, our method requires on the order of milliseconds to compute the test statistic in (5) for the simulation environment considered in Sec. III. Thus, despite the overhead incurred by the DNN, our method remains computationally feasible for practical use.
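The millisecond-scale inference claim can be checked with a simple wall-clock measurement over the simulated data from the earlier sketches; absolute timings depend entirely on hardware and problem size.

```python
import time

# Time one pass of the statistic over a short segment of the nonlinear signal.
start = time.perf_counter()
_ = glrt_statistic(X_nl[:, :300], N=100)
print(f"elapsed: {(time.perf_counter() - start) * 1e3:.2f} ms")
```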
III Simulations
III-A Signal Generation
For our empirical evaluation, we generate an interference signal, $s_1(t)$, and a SOI, $s_2(t)$. Each signal contains a sequence of random bits, is modulated according to the BPSK constellation, and is transmitted with a carrier frequency of $f_c$ MHz and a bandwidth of $B$ MHz. At the receiver, we use $M$ antenna elements and consider a received signal with an observation window of $T$ time samples. In addition, we use $N$ time samples to construct each initial instance of $\mathbf{X}_1$ and $\mathbf{X}_2$. Here, we consider two setups by scaling the AWGN: (a) an interference-to-noise ratio (INR) of 40 dB and a signal-to-noise ratio (SNR) of 15 dB (corresponding to, e.g., an RF application); and (b) an INR of 57 dB and an SNR of 37 dB (corresponding to, e.g., an audio application). The large power difference between the SOI and interference introduces an additional challenge: detection is achievable in the linear case but much more difficult under the nonlinearity.
In our analysis, we use the hyperbolic tangent function as the induced nonlinearity, i.e., we define $g(x) = \tanh(x)$. This function is a representative model for hardware nonlinearities, since it consists of a linear and a saturating (nonlinear) region, thus reflecting the effect of amplifiers when only certain received samples experience nonlinear saturation [14].
III-B DNN Implementation
The DNN is trained on a generated received signal and its nonlinear counterpart to identify excessively nonlinear samples based on the method in Sec. II-D. The employed DNN contains $L$ layers with $K$ hyperbolic tangent units each. During training, we use a batch size of 64, a fixed learning rate, and 500 epochs with a patience of 25 epochs on the training loss as the stopping criterion.
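A minimal Keras sketch of this training setup is shown below. The MSE loss, batch size 64, 500-epoch budget, and patience of 25 follow the text; the depth (3 hidden layers), width (64 units), and the optimizer's default learning rate are placeholder assumptions, since the paper's exact values are not reproduced here.

```python
import tensorflow as tf

# Feedforward classifier over the 2M-dimensional real features from Eq. (7).
model = tf.keras.Sequential(
    [tf.keras.layers.Input(shape=(U_nl.shape[1],))]
    + [tf.keras.layers.Dense(64, activation="tanh") for _ in range(3)]
    + [tf.keras.layers.Dense(2, activation="softmax")]
)
model.compile(optimizer="adam", loss="mse")
stop = tf.keras.callbacks.EarlyStopping(monitor="loss", patience=25)
model.fit(U_nl, Y, batch_size=64, epochs=500, callbacks=[stop], verbose=0)
```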
III-C Detector Performance
We evaluate our methodology on $\tilde{\mathbf{X}}$ and, in addition, we generate the linear counterpart, $\mathbf{X}$, to compare the nonlinear and linear detection performance. During implementation, we scale the nonlinearity by a factor $\alpha$, i.e., $g(x) = \tanh(\alpha x)$, to achieve a desirable degree of nonlinearity in each considered setup. Fig. 2 shows the GLRT test statistic on (i) a theoretical (linear) signal, (ii) a nonlinear signal, and (iii) the same nonlinear signal after DNN pre-processing for setups (a) and (b). The SOI turns on at time $t_0$. As shown by the metric on the linear signal, a distinctive contrast peak is captured at the time at which the SOI turns on in the received transmission. When the signal undergoes a nonlinearity and is received without DNN pre-processing, several false alarm peaks are produced at incorrect time samples, and no distinctive contrast peak is captured at $t_0$ when the SOI turns on. When our proposed methodology is applied to the nonlinear signal, however, the GLRT test statistic produces a distinctive contrast peak correctly at $t_0$, albeit to a somewhat lesser degree than in the theoretical case. This behavior stems from the edited signals $\bar{\mathbf{X}}_1$ and $\bar{\mathbf{X}}_2$ still containing some mildly nonlinear samples. Using the curves in Fig. 2, we vary the detection threshold and measure the resulting false positive versus true positive detection rates to obtain the performance curves shown next in Fig. 3.
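The threshold sweep can be sketched as follows over the statistic trace from the earlier snippets. This is a simplified single-trial proxy: detection at the known onset $t_0$ stands in for the true positive rate, threshold crossings elsewhere approximate the false alarm rate, and the paper's curves would instead aggregate outcomes over many trials.

```python
def roc_points(ell, t0, n_thresholds=200):
    """Sweep the detection threshold tau over the statistic values."""
    valid = ~np.isnan(ell)
    taus = np.linspace(np.nanmin(ell), np.nanmax(ell), n_thresholds)
    tpr = np.array([float(ell[t0] > tau) for tau in taus])       # hit at true onset
    fpr = np.array([np.mean(ell[valid] > tau) for tau in taus])  # alarms elsewhere (approx.)
    return fpr, tpr

# For edited signals, t0 must first be remapped to the pruned time axis.
fpr, tpr = roc_points(ell, t0=1000)
```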
Fig. 3 shows the receiver operating characteristic (ROC) detection performance curves of the GLRT for each setup. In addition, we compare our proposed method to two baselines: (i) end-to-end deep learning (EtE DL) [15], where a deep learning classifier is trained on both linear and nonlinear signals to perform detection directly in place of the GLRT; and (ii) data-driven (DD) pre-processing [16], where the received nonlinear signal is first pre-processed in an attempt to recover the linear signal prior to applying the GLRT. In our adoption of [16], we train an autoencoder (AE) to map nonlinear signals (taken as input to the AE) to their linear counterparts (given as output by the AE). The GLRT test statistic in (5) is then applied to the AE output. Finally, for completeness, we also apply our proposed DNN pre-processing to a linear signal to show its effect when applied to a signal that does not suffer from nonlinear distortions.
From Fig. 3, we see that our method improves the area under the curve (AUC) of the ROC in both setups over using the nonlinear signal without DNN pre-processing and over the other considered baselines. The lower AUC, in comparison to the theoretical linear case, is attributed to the increased magnitudes of the false positive contrast peaks produced by the GLRT in Fig. 2. In particular, we note that our method significantly improves the true positive detection rate at low false alarm levels, which is often desirable in the context of wireless signal detection. The nearly random performance of the EtE DL classifier is attributed to the large power difference between the interference and the SOI, which leaves the classifier few salient features to perform detection on. Similarly, DD pre-processing faces challenges in the training step since the same nonlinear clipped time sample corresponds to several linear time samples (i.e., a one-to-many mapping), resulting in poorly trained AE models. Our proposed method avoids the sensitivity of mapping nonlinear time samples to approximations of their linear counterparts by instead classifying and removing the nonlinear samples that degrade detection performance and then applying the GLRT. Finally, we see that applying our proposed framework to a linear signal only slightly degrades performance in comparison to the theoretical linear performance. Moreover, since we can deduce from hardware constraints when a received signal has undergone nonlinear distortions, we can employ our proposed framework on nonlinear signals while resorting to the classical GLRT on linear observations.
IV Conclusion and Future Work
In this work, we proposed prepending a GLRT detector with a DNN in order to perform reliable detection in the presence of nonlinear distortions. We showed that our method is capable of performing accurate detection despite the nonlinearity, whereas nonlinear signals without any pre-processing could not be used for detection with the GLRT. In future work, we anticipate exploring additional nonlinear environments to which our framework could be applied, including the extended target case [17]. In addition, we expect to extend this work to dynamic environments, where the direction of the interference or SOI may not be stationary, requiring the utilization of the GLRT in an FDA-MIMO framework [18].
References
- [1] Y. Li, J. Winters, and N. Sollenberger, “MIMO-OFDM for wireless communications: signal detection with enhanced channel estimation,” IEEE Trans. Commun., vol. 50, no. 9, pp. 1471–1477, 2002.
- [2] A. Salehi, A. Zaimbashi, and M. Valkama, “Kernelized-likelihood ratio tests for binary phase-shift keying signal detection,” IEEE Trans. Cogn. Commun. Netw., vol. 7, no. 2, pp. 541–552, 2021.
- [3] H. Tang, L. Chai, and X. Wan, “An augmented generalized likelihood ratio test detector for signal detection in clutter and noise,” IEEE Access, vol. 7, pp. 163478–163486, 2019.
- [4] B. Wu, S. Cheng, and H. Wang, “Clipping effects on channel estimation and signal detection in OFDM,” in Proc. of IEEE PIMRC, vol. 1, 2003, pp. 531–534.
- [5] E. Kelly, “An adaptive detection algorithm,” IEEE Trans. Aerosp. Electron. Syst., vol. AES-22, no. 2, pp. 115–127, 1986.
- [6] K. Forsythe, “Utilizing waveform features for adaptive beamforming and direction finding with narrowband signals,” Lincoln Laboratory Journal, vol. 10, no. 2, pp. 99–126, 1997.
- [7] F. Chapeau-Blondeau, “Nonlinear test statistic to improve signal detection in non-Gaussian noise,” IEEE Signal Process. Lett., vol. 7, no. 7, pp. 205–207, 2000.
- [8] D. Rousseau, G. Anand, and F. Chapeau-Blondeau, “Noise-enhanced nonlinear detector to improve signal detection in non-Gaussian noise,” Signal Processing, vol. 86, no. 11, pp. 3456–3465, 2006.
- [9] J. Tellado, L. Hoo, and J. Cioffi, “Maximum-likelihood detection of nonlinearly distorted multicarrier symbols by iterative decoding,” IEEE Trans. Commun., vol. 51, no. 2, pp. 218–228, 2003.
- [10] H. Ye, G. Y. Li, and B.-H. Juang, “Power of deep learning for channel estimation and signal detection in OFDM systems,” IEEE Wireless Commun. Lett., vol. 7, no. 1, pp. 114–117, 2018.
- [11] Y. Xie, X. Liu, K. C. Teh, and Y. L. Guan, “Robust deep learning based end-to-end receiver for OFDM system with non-linear distortion,” IEEE Commun. Lett., pp. 1–1, 2021.
- [12] R. Sahay, C. G. Brinton, and D. J. Love, “A deep ensemble-based wireless receiver architecture for mitigating adversarial attacks in automatic modulation classification,” IEEE Trans. Cogn. Commun. Netw., vol. 8, no. 1, pp. 71–85, 2022.
- [13] E. J. Kelly and K. Forsythe, “Adaptive detection and parameter estimation for multidimensional signal models,” Tech. Rep., MIT Lincoln Laboratory, 1989.
- [14] S. R. Abdulridha and F. S. Hasan, “Palm clipping and nonlinear companding techniques based PAPR reduction in OFDM-DCSK system,” Journal of Engineering and Sustainable Development, vol. 25, no. 4, pp. 84–94, Feb. 2022.
- [15] W. Zhang, M. Feng, M. Krunz, and A. Hossein Yazdani Abyaneh, “Signal detection and classification in shared spectrum: A deep learning approach,” in Proc. of IEEE INFOCOM, 2021, pp. 1–10.
- [16] A. Sayeed, “Data-driven signal detection and classification,” in Proc. of IEEE ICASSP, vol. 5, 1997, pp. 3697–3700.
- [17] M. Beard, S. Reuter, K. Granström, B.-T. Vo, B.-N. Vo, and A. Scheel, “Multiple extended target tracking with labeled random finite sets,” IEEE Trans. Signal Process., vol. 64, no. 7, pp. 1638–1653, 2016.
- [18] L. Lan, A. Marino, A. Aubry, A. De Maio, G. Liao, J. Xu, and Y. Zhang, “GLRT-based adaptive target detection in FDA-MIMO radar,” IEEE Trans. Aerosp. Electron. Syst., vol. 57, no. 1, pp. 597–613, 2021.