Online Learning of Trellis Diagram Using Neural Network for Robust Detection and Decoding
Abstract
This paper studies machine learning-assisted maximum likelihood (ML) and maximum a posteriori (MAP) receivers for a communication system with memory, which can be modeled by a trellis diagram. The prerequisite of the ML/MAP receiver is to obtain the likelihood of the received samples under different state transitions of the trellis diagram, which relies on the channel state information (CSI) and the distribution of the channel noise. We propose to learn the trellis diagram in real time using an artificial neural network (ANN) trained by a pilot sequence. This approach, termed the online learning of trellis diagram (OLTD), requires neither the CSI nor the statistics of the noise, and can be incorporated into the classic Viterbi and BCJR algorithms. It is shown to significantly outperform model-based methods in non-Gaussian channels. It requires much less training overhead than the state-of-the-art methods, and hence is more feasible for real implementations. As an illustrative example, the OLTD-based BCJR is applied to a Bluetooth low energy (BLE) receiver trained only by a 256-sample pilot sequence. Moreover, the OLTD-based BCJR can accommodate turbo equalization, while the state-of-the-art BCJRNet/ViterbiNet cannot. As an interesting by-product, we propose an enhancement to the BLE standard by introducing a bit interleaver to its physical layer; the resultant improvement of the receiver sensitivity can make it a better fit for some Internet of Things (IoT) communications.
Index Terms:
Neural Network; BCJR algorithm; Viterbi algorithm; turbo equalization; Bluetooth
I Introduction
Reliable detection and decoding are essential for any communication system. For a single-carrier system in an inter-symbol interference (ISI) channel, which can be modeled by a finite-state trellis diagram, the classic design of a maximum likelihood (ML) receiver relies on the likelihood function of the state transitions in the trellis diagram [1]. In this paper, we propose to learn the likelihood function using an artificial neural network (ANN) based on a pilot sequence of moderate length, requiring neither the channel state information (CSI) nor the statistics of the noise.
As related works, machine learning-assisted wireless communications have attracted broad attention in recent years [2, 3, 4, 5, 6], such as machine learning-assisted channel decoding [7, 8, 9, 10, 11] and symbol detection in multi-input multi-output (MIMO) systems [12, 13, 14, 15]. Deep learning algorithms are found to be more effective in addressing the difficult problem of symbol detection with incomplete CSI [16, 17, 18, 19]. The aforementioned works, however, attempt to substitute a whole communication system by an ANN, which requires a large amount of training data (a lengthy pilot sequence of tens of thousands of samples or more), far too much to be practically feasible. The notion of the model-driven method, which combines machine learning techniques and model-based expert knowledge, is introduced in [20] to better incorporate machine learning techniques into a communication system. The efficacy of this type of model-driven method is demonstrated in [21, 22].
The recent works by Shlezinger et al. [23, 24] advocate using a (relatively simple) neural network to substitute only the channel-dependent part of the Viterbi [25] and BCJR [26] receivers. The resultant algorithms, the so-termed ViterbiNet and BCJRNet, train a neural network to learn the a posteriori probability (APP) of the state transitions given the received samples and use a finite mixture model (FMM) [27] to estimate the marginal probability density of the channel output, assuming that the channel noise is Gaussian. Therefore, the ViterbiNet and BCJRNet require only several thousand training samples, significantly fewer than in [16, 17], but possibly still too many to be practically competitive compared with the conventional model-based methods.
In this paper, we first consider the same problem as addressed in [23, 24]. We adopt the notion of integrating a simple neural network into a communication system as advocated by Shlezinger et al. and propose a new method, termed the online learning of trellis diagram (OLTD), that can also be integrated into the Viterbi algorithm [25] and the BCJR algorithm [26]. The resultant OLTD-based Viterbi and OLTD-based BCJR differ from the ViterbiNet and BCJRNet in that the ANN is used to learn the likelihoods of the received samples under different state transitions, rather than the APPs. Therefore, we need neither to assume the channel noise to be Gaussian nor to estimate the marginal distribution of the channel output; thus, our proposed method is simpler, more robust, and requires a substantially shorter pilot sequence.
To show the practical feasibility of the proposed OLTD method, we apply it to the physical layer (PHY) of the Bluetooth Low Energy (BLE) protocol [28]. A BLE system adopts the coded Gaussian frequency shift keying (GFSK) modulation, which belongs to the family of continuous phase modulation (CPM) [29]. By modeling the GFSK modulation process with a trellis diagram, we employ the OLTD method to learn the likelihoods of the state transitions associated with each received sample based on the 256-sample pilot sequence as regulated in the BLE protocol [30, Ch. 6, Part B], and then use the conventional Viterbi or BCJR algorithm to recover the information bits. This study shows that our proposed neural network-assisted receiver can work for a real wireless protocol.
We further introduce a bit interleaver to the coded GFSK system, for which we seamlessly combine the OLTD method with turbo equalization [31, 32, 33] to achieve significantly better receiver sensitivity than the conventional Bluetooth receiver. Unlike the previous neural network-based methods that unfold each iteration with one layer of the neural network [9, 11], the OLTD-based method obtains the likelihood of each connected branch in the trellis diagram only once for the whole iterative process. In contrast, the BCJRNet [24] assumes that all the coded bits have equal probability and is therefore not suitable for turbo equalization, as explained in Section IV-B.
The contributions of this paper are summarized as follows:
• We show that what a neural network needs to learn about the trellis diagram is the normalized likelihoods of the received sample, rather than the a posteriori probabilities of state transitions as adopted in the BCJRNet/ViterbiNet [23, 24]. Owing to this insight, our proposed OLTD method is computationally simpler, more robust, and more versatile than the state-of-the-art methods. We also show that an ANN with only one hidden layer is sufficient for the OLTD.
• In contrast to the BCJRNet and ViterbiNet [23, 24], our proposed OLTD does not need to compute the marginal distribution of the received samples, nor does it assume the statistics of the channel noise or the a priori probabilities of the transmitted symbols; thus, the OLTD is computationally much simpler and more robust.
• The OLTD-based BCJR method can be seamlessly incorporated into a turbo equalizer for significantly improved performance, whereas the BCJRNet method cannot, as simulated and analyzed in Section IV-B. In this sense, our proposed method is more versatile.
• Based on a pilot sequence of a practical length, our neural network-based method can outperform the model-based approaches in channels with non-Gaussian noise or interference, as illustrated by the numerical simulations.
• As an interesting by-product, this study indicates a possible enhancement to the BLE standard, i.e., to introduce a bit interleaver between the convolutional encoder and the GFSK modulator for much improved reliability, which can make BLE a better candidate for some Internet of Things (IoT) communications [34].
The remainder of this paper is organized as follows: Section II introduces the system model and briefly reviews the BCJR and Viterbi algorithms. Section III explains how to train the OLTD and how the OLTD-based BCJR/Viterbi algorithms work online. Section IV introduces the OLTD-based turbo equalization and its application to a bit-interleaved coded GFSK system. In Section V, simulation results are given to verify the effectiveness of the OLTD-based method and to show the superior performance of the proposed OLTD-based BCJR/Viterbi and OLTD-based turbo algorithms. The conclusion is given in Section VI.
II System Model and Preliminaries
II-A An ISI Channel Model
The received signal in an ISI channel can be expressed as

(1) $y_k = \sum_{l=0}^{L-1} h_l\, s_{k-l} + n_k,$

where $s_k$ denotes the transmitted symbol, $L$ is the channel length, $\{h_l\}_{l=0}^{L-1}$ are the channel coefficients, and $n_k$ denotes the i.i.d. additive noise, which is not necessarily Gaussian.
The ISI channel can be modeled as a tapped delay line as shown in Fig. 1. Owing to the shift-register structure, the channel can be modeled by a trellis diagram [1].
[Fig. 1: The tapped-delay-line model of the ISI channel.]
As the transmitted symbols are drawn from a finite alphabet of size $M$, the ISI channel modeled in (1) can be represented by a trellis diagram whose state set has cardinality $M^{L-1}$. The state at time $k$, denoted $S_k$, corresponds to the combination of the previous symbols $(s_{k-1}, \ldots, s_{k-L+1})$, and the state transition from $S_{k-1}$ to $S_k$ is associated with the noiseless output signal $x_k$. As an illustrative example, consider an ISI channel with three coefficients $h_0$, $h_1$, and $h_2$ and the binary phase shift keying (BPSK) signal as the input. The trellis consists of $2^2 = 4$ states, corresponding to the four combinations of $(s_{k-1}, s_{k-2})$, as shown in Fig. 2. The numbers on the branches represent the input symbol $s_k$ and the corresponding output $x_k$: from any state, the two outgoing branches correspond to the two possible input symbols $s_k = \pm 1$ and their respective branch outputs.
[Fig. 2: The trellis diagram of the example ISI channel with BPSK input.]
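For concreteness, a short sketch of how such a trellis can be enumerated is given below (in Python, the language of the paper's own toolkit); the three-tap coefficients are placeholders rather than the values of the Fig. 2 example.

```python
import itertools
import numpy as np

# Placeholder three-tap ISI channel; not the coefficients of the Fig. 2 example.
h = np.array([0.8, 0.5, 0.3])
L = len(h)                      # channel length
symbols = [-1.0, +1.0]          # BPSK alphabet

# A state is the tuple of the previous L-1 symbols (s_{k-1}, s_{k-2}).
states = list(itertools.product(symbols, repeat=L - 1))
print(f"{len(states)} states, {len(states) * len(symbols)} branches")

# Enumerate every state transition and its noiseless branch output
#   x_k = h_0 s_k + h_1 s_{k-1} + h_2 s_{k-2}.
for s_prev in states:
    for s_k in symbols:
        x_k = h[0] * s_k + sum(h[l] * s_prev[l - 1] for l in range(1, L))
        s_next = (s_k,) + s_prev[:-1]          # shift-register update
        print(f"{s_prev} --{s_k:+.0f}--> {s_next}:  x_k = {x_k:+.1f}")
```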
II-B The GFSK Modulation
The trellis diagram can also be used to model a modulation with memory. As an example, we review the GFSK, which is used in the PHY of Bluetooth [28]. The GFSK signal of bandwidth $B$, sampled at the symbol rate $1/T$, is

(2) $x_k = \exp(j\phi_k),$

where

(3) $\phi_k = 2\pi h \sum_{i \le k} b_i\, q\big((k-i)T\big).$

Here $b_i \in \{-1, +1\}$ denotes the information bits, $h$ is the modulation index, and the pulse shaping function is

(4) $q(t) = \int_{-\infty}^{t} g(\tau)\, \mathrm{d}\tau,$

with $g(t)$ being the Gaussian frequency pulse of bandwidth $B$ [29].

If $h = 1/2$ (as specified in [28]), (3) becomes

(5) $\phi_k = \pi \sum_{i \le k} b_i\, q\big((k-i)T\big).$

Here we set the frequency-time product $BT = 0.5$ (also as specified in [28]). The resulting pulse shaping function is plotted in Fig. 3.
[Fig. 3: The GFSK pulse shaping function.]
[Fig. 4: The phase-state trellis diagram of the GFSK modulation.]
We can model the continuous phase modulation as a finite-state process based on the phase transitions, as shown in Fig. 4. Denote by $(\theta_{k-1} \to \theta_k)$ the state transition from phase state $\theta_{k-1}$ at time $k-1$ to $\theta_k$ at time $k$, driven by the input bit $b_k$, where the phase states form the state set of the trellis. The dashed and solid branches correspond to the two possible values of $b_k$, and the associated signal is

(6) $x_k = \exp(j\theta_k),$

where we have used the fact that $e^{j2\pi n} = 1$ for any integer $n$, so that the phase state is only defined modulo $2\pi$.
If the input to the ISI channel (1) is a GFSK signal, we can combine the GFSK modulation and the multipath channel into one (larger) trellis diagram.
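To illustrate the kind of trellis in Fig. 4, the following sketch enumerates the transitions of a simplified, full-response phase trellis with modulation index $h = 1/2$, where each bit advances the phase by $\pm\pi/2$; the actual GFSK pulse is partial-response, so this is only an approximation.

```python
import numpy as np

# Simplified full-response phase trellis for h = 1/2: each bit b in {-1, +1}
# advances the phase by b*pi/2, giving 4 phase states and 8 transitions,
# matching the structure of Fig. 4 (the true BLE pulse is partial-response).
phase_states = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]

branches = []
for theta in phase_states:
    for b in (-1, +1):
        theta_next = (theta + b * np.pi / 2) % (2 * np.pi)
        x_k = np.exp(1j * theta_next)          # assumed sample on this branch
        branches.append((theta, b, theta_next, x_k))

print(f"{len(phase_states)} phase states, {len(branches)} state transitions")
```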
II-C A Primer on the Viterbi and BCJR Algorithms
Based on the trellis diagram, the receiver can use the Viterbi algorithm or the BCJR algorithm for optimal symbol detection. We briefly review them to make this paper self-contained.
The Viterbi algorithm is for maximum likelihood (ML) detection. By exploiting the Markovian structure of the finite-memory channel, it computes the likelihood of each branch, i.e., the conditional probability density function (PDF) of the channel output given the inputs:

(7) $p(\mathbf{y} \mid \mathbf{s}) = \prod_{k=1}^{N} p\big(y_k \mid S_{k-1}, S_k\big),$

where $\mathbf{y} = [y_1, \ldots, y_N]$, $\mathbf{s} = [s_1, \ldots, s_N]$, and $(S_{k-1}, S_k)$ is the state transition determined by the symbols $(s_{k-L+1}, \ldots, s_k)$. For a given channel output $\mathbf{y}$, the ML detection is

(8) $\hat{\mathbf{s}} = \arg\max_{\mathbf{s}}\; p(\mathbf{y} \mid \mathbf{s})$

(9) $\phantom{\hat{\mathbf{s}}} = \arg\min_{\mathbf{s}}\; \sum_{k=1}^{N} -\log p\big(y_k \mid S_{k-1}, S_k\big),$

where $p(y_k \mid S_{k-1} = s', S_k = s)$ is the likelihood of $y_k$ associated with the state transition from $s'$ to $s$. The Viterbi algorithm can solve (9) efficiently by searching for the shortest path across the trellis diagram.
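The following is a minimal sketch of the Viterbi search in (9); it assumes the per-branch log-likelihoods have already been computed (whether from a channel model or, later, from the OLTD), and that every state has an incoming branch at every step.

```python
def viterbi(log_lik, states, branch_symbol):
    """ML detection via shortest-path search over the trellis, cf. (9).

    log_lik      : list over time of dicts {(s_prev, s): log p(y_k | s_prev, s)}
    states       : list of trellis states
    branch_symbol: dict {(s_prev, s): input symbol driving that transition}
    """
    metric = {s: 0.0 for s in states}              # path metrics at time 0
    backptr = []
    for ll_k in log_lik:
        new_metric, bp = {}, {}
        for (s_prev, s), ll in ll_k.items():
            cand = metric[s_prev] + ll             # accumulate log-likelihood
            if s not in new_metric or cand > new_metric[s]:
                new_metric[s], bp[s] = cand, s_prev
        metric = new_metric
        backptr.append(bp)
    # Trace back from the best terminal state.
    s = max(metric, key=metric.get)
    path = [s]
    for bp in reversed(backptr):
        s = bp[s]
        path.append(s)
    path.reverse()
    # Map the surviving branches back to the detected input symbols.
    return [branch_symbol[(path[k], path[k + 1])] for k in range(len(log_lik))]
```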
The BCJR algorithm [26] is for maximum a posteriori probability (MAP) detection. It computes the a posteriori log-likelihood ratio (LLR) of the bits,

(10) $L(b_k) = \log \dfrac{\sum_{(s',s)\in\mathcal{S}^{+}} p(S_{k-1}=s',\, S_k=s,\, \mathbf{y})}{\sum_{(s',s)\in\mathcal{S}^{-}} p(S_{k-1}=s',\, S_k=s,\, \mathbf{y})},$

where $\mathcal{S}^{+}$ and $\mathcal{S}^{-}$ are the sets of ordered pairs $(s', s)$ corresponding to all state transitions driven by $b_k = 1$ and $b_k = 0$, respectively. The received sequence can be partitioned as $\mathbf{y} = (\mathbf{y}_{1:k-1},\, y_k,\, \mathbf{y}_{k+1:N})$. Applying the chain rule for joint probabilities, we can decompose $p(S_{k-1}=s', S_k=s, \mathbf{y})$ into

(11) $p(s', s, \mathbf{y}) = \alpha_{k-1}(s')\,\gamma_k(s', s)\,\beta_k(s),$

where $\alpha_{k-1}(s') = p(s', \mathbf{y}_{1:k-1})$, $\gamma_k(s', s) = p(s, y_k \mid s')$, and $\beta_k(s) = p(\mathbf{y}_{k+1:N} \mid s)$. They can be recursively computed as

(12a) $\alpha_k(s) = \sum_{s'} \alpha_{k-1}(s')\,\gamma_k(s', s),$
(12b) $\beta_{k-1}(s') = \sum_{s} \beta_k(s)\,\gamma_k(s', s),$
(12c) $\gamma_k(s', s) = p(y_k \mid s', s)\,\Pr(S_k = s \mid S_{k-1} = s'),$

with the initializations of $\alpha_0(s)$ and $\beta_N(s)$ set according to the known (or assumed uniform) initial and final states.

Driven by $b_k$, the probability of the state transition from $s'$ to $s$ is

(13) $\Pr(S_k = s \mid S_{k-1} = s') = \Pr\big(b_k = b(s', s)\big),$

where $b(s', s)$ denotes the input bit that drives the transition.
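A minimal sketch of the recursions (11)-(12) is given below; `lik[k]` is assumed to hold the branch likelihoods p(y_k | s', s) and `prior[k]` the transition probabilities in (13), with uniform initialization of the forward/backward messages and per-step scaling for numerical stability.

```python
import numpy as np

def bcjr_llr(lik, prior, states, plus, minus):
    """A posteriori LLRs of the input bits via the recursions (11)-(12).

    lik, prior  : lists over time of dicts keyed by branch (s_prev, s)
    plus, minus : sets of branches driven by b_k = 1 and b_k = 0, respectively
    """
    N = len(lik)
    # (12c): branch metric = likelihood times transition probability.
    gamma = [{br: lik[k][br] * prior[k][br] for br in lik[k]} for k in range(N)]

    alpha = [{s: 1.0 / len(states) for s in states}]          # uniform start
    for k in range(N):                                        # forward (12a)
        a = {s: sum(alpha[k][sp] * g for (sp, s2), g in gamma[k].items() if s2 == s)
             for s in states}
        z = sum(a.values())
        alpha.append({s: v / z for s, v in a.items()})        # scale for stability

    beta = [None] * (N + 1)
    beta[N] = {s: 1.0 / len(states) for s in states}
    for k in range(N, 0, -1):                                 # backward (12b)
        b = {sp: sum(beta[k][s2] * g for (sp2, s2), g in gamma[k - 1].items() if sp2 == sp)
             for sp in states}
        z = sum(b.values())
        beta[k - 1] = {s: v / z for s, v in b.items()}

    llr = []
    for k in range(N):                                        # combine via (10)-(11)
        num = sum(alpha[k][sp] * gamma[k][(sp, s)] * beta[k + 1][s] for (sp, s) in plus)
        den = sum(alpha[k][sp] * gamma[k][(sp, s)] * beta[k + 1][s] for (sp, s) in minus)
        llr.append(np.log(num / den))
    return llr
```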
II-D About the Likelihood Function
Examining (9) and (12), we see that the likelihood $p(y_k \mid s', s)$ plays a key role in both the Viterbi and BCJR algorithms. Indeed, the Viterbi algorithm depends solely on the likelihood, while the BCJR also exploits the a priori information on the state transition probability $\Pr(S_k = s \mid S_{k-1} = s')$, which is independent of the CSI [cf. (12c) and (13)]. Hence, the likelihood $p(y_k \mid s', s)$ is the only CSI-dependent component for both algorithms.
The conventional method calculates $p(y_k \mid s', s)$ assuming known CSI and that the channel noise is zero-mean (circularly symmetric complex) Gaussian with variance $\sigma^2$. Hence the likelihood can be computed as

(14) $p(y_k \mid s', s) = \dfrac{1}{\pi\sigma^2}\,\exp\!\left(-\dfrac{|y_k - x(s', s)|^2}{\sigma^2}\right),$

where $x(s', s)$ is the noiseless channel output associated with the state transition from $s'$ to $s$ (cf. Fig. 2).
But when the CSI is unknown, or the distribution of the noise is unknown owing to non-Gaussian co-channel interference, the model-based likelihood (14) will be erroneous, causing severe degradation to the performance of the Viterbi or BCJR algorithm. To address this issue, we propose to use an ANN to learn the likelihood function of the trellis diagram online based on a pilot sequence, assuming neither CSI nor noise statistics.
III The OLTD-based Viterbi and BCJR
This section explains how the OLTD-based Viterbi and BCJR algorithms work. Both algorithms consist of three stages as explained in the following.
III-A Stage One: Train ANN to Learn The Trellis Diagram
As shown in Fig. 5, we construct a fully connected ANN with one hidden layer [35], whose inputs are the real and imaginary parts of the received sample and whose output nodes correspond to all the state transitions in the trellis diagram. For example, we can use a network with 8 output nodes to learn the trellis diagram in Fig. 4, which has 8 state transitions. The hidden-layer nodes adopt the Sigmoid activation function, while the output layer employs the Softmax activation function. The reason for using the softmax function is explained in Section III-D.
[Fig. 5: The structure of the ANN used by the OLTD.]
Given the pilot sequence, we know the true state transition corresponding to each received pilot sample; we thus label the normalized likelihood of the true transition as 1 and all the others as 0, which is the so-called one-hot encoding. The ANN is optimized according to the minimum cross-entropy criterion. Note that the OLTD method requires neither the CSI nor the noise statistics, which is its main advantage over the model-based method.
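As a rough illustration of Stage One, the following Keras sketch builds and trains the network of Fig. 5 with the settings later listed in Table I; `pilot_rx` and `pilot_transitions` are hypothetical arrays holding the received pilot samples and the index of the true state transition for each sample.

```python
import numpy as np
import tensorflow as tf

NUM_TRANSITIONS = 8   # e.g., the GFSK trellis of Fig. 4 has 8 state transitions

# Hypothetical pilot data: received complex samples and, for each, the index
# of the true state transition (known at the receiver during the pilot phase).
pilot_rx = np.random.randn(256) + 1j * np.random.randn(256)       # placeholder
pilot_transitions = np.random.randint(0, NUM_TRANSITIONS, 256)    # placeholder labels

x_train = np.stack([pilot_rx.real, pilot_rx.imag], axis=1)        # (256, 2) features
y_train = tf.keras.utils.to_categorical(pilot_transitions, NUM_TRANSITIONS)  # one-hot

model = tf.keras.Sequential([
    tf.keras.layers.Dense(100, activation="sigmoid", input_shape=(2,)),
    tf.keras.layers.Dense(NUM_TRANSITIONS, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adamax(learning_rate=0.01),
              loss="categorical_crossentropy")
model.fit(x_train, y_train, batch_size=16, epochs=200, verbose=0)
```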
III-B Stage Two: Feed the Payload into the ANN
After the ANN is trained, the payload signals are fed into the network; for each received sample, the ANN yields the normalized likelihoods of all the state transitions. The numerical examples in Section V indicate that the likelihoods yielded by the ANN are sufficiently accurate when the network is trained with a pilot of no more than a few hundred samples.
III-C Stage Three: Feed the Likelihoods into the Viterbi or BCJR Algorithm
Given the likelihoods of each received sample of the payload, the Viterbi (or BCJR) algorithm can then be directly applied to obtain the ML (or MAP) detection.
Taking the BCJR algorithm for example: given the likelihoods associated with each sample, the BCJR algorithm calculates $\gamma_k(s', s)$ according to (12c), and further computes the forward recursion (12a) and the backward recursion (12b) to obtain the LLR by (11) and (10). The combination of the OLTD method and the BCJR algorithm is termed the OLTD-based BCJR. The OLTD-based Viterbi is even simpler: it just needs to search for the most likely trellis path according to (9).
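Continuing the sketch above, Stages Two and Three then amount to a forward pass of the trained network followed by an ordinary BCJR run; `branch_index`, `states`, `plus`, and `minus` are hypothetical bookkeeping structures that map the output nodes to trellis branches, and `bcjr_llr` is the sketch from Section II-C.

```python
# Stage Two: normalized likelihoods for every payload sample and every branch.
payload_rx = np.random.randn(1000) + 1j * np.random.randn(1000)   # placeholder payload
x_payload = np.stack([payload_rx.real, payload_rx.imag], axis=1)
probs = model.predict(x_payload)            # shape (1000, NUM_TRANSITIONS)

# Stage Three: plug the ANN outputs into the standard BCJR machinery.
# branch_index[j] = (s_prev, s) for output node j (hypothetical mapping).
lik = [{branch_index[j]: probs[k, j] for j in range(NUM_TRANSITIONS)}
       for k in range(probs.shape[0])]
prior = [{br: 0.5 for br in lik[0]} for _ in lik]   # equiprobable input bits
llrs = bcjr_llr(lik, prior, states, plus, minus)    # MAP detection via (10)-(12)
```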
III-D Discussions
Observe from (9) that multiplying the likelihoods $p(y_k \mid s', s)$ by a common positive factor does not affect the output of the Viterbi algorithm, and neither does it affect the BCJR, as can be seen from (10) and (12). Hence, instead of learning the original likelihoods, which can be anywhere in $[0, \infty)$, the ANN can simply learn the normalized likelihoods

$\tilde{p}(y_k \mid s', s) = \dfrac{p(y_k \mid s', s)}{\sum_{(\tilde{s}', \tilde{s})} p(y_k \mid \tilde{s}', \tilde{s})},$

which is why we can adopt the softmax as the activation function of the output layer even though the actual likelihood value may be outside the range $[0, 1]$. The above insight is the underlying reason why our method is significantly simpler than the state-of-the-art method [24], which has to estimate the marginal distribution of the channel output in addition to using a deep neural network to learn the conditional probability [24, Fig. 3].
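The scaling invariance can be checked numerically: multiplying every branch likelihood by a common positive constant leaves the BCJR LLRs (and the Viterbi path) unchanged, which is what permits the softmax-normalized targets. Continuing the earlier sketch:

```python
# Scaling every branch likelihood by a common c > 0 shifts each path metric
# in (9) by a constant and cancels in the ratio (10), so nothing changes.
c = 7.3
lik_scaled = [{br: c * p for br, p in lik_k.items()} for lik_k in lik]
llrs_scaled = bcjr_llr(lik_scaled, prior, states, plus, minus)
assert np.allclose(llrs, llrs_scaled)
```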
We can gain more insight into the OLTD method by drawing an analogy to the classic least squares (LS) fitting method. An LS method optimizes the parameters of a signal model to fit the true signal; the OLTD method trains the ANN to approximate the ground truth of the normalized likelihoods, which is a one-hot vector in the absence of channel noise. The LS method uses the LS criterion; the OLTD adopts the minimum cross-entropy criterion. The LS method does not assume knowledge of the noise statistics, and neither does the OLTD, which is why the OLTD is robust against non-Gaussian noise, as shown in Section V.
IV The OLTD-Based Turbo Equalization for A Coded GFSK System
In this section, we apply the OLTD method to turbo equalization, for which a bit interleaver is inserted between the forward error correction (FEC) encoder and the modulator, as shown in Fig. 6. This section serves two purposes: i) the OLTD method is shown to integrate seamlessly into a turbo equalizer for excellent performance; ii) as an interesting byproduct of this study, we advocate a potential enhancement to the current BLE standard for significantly better receiver sensitivity.
Consider that the FEC encoder in Fig. 6 is a rate-1/2 convolutional encoder whose generator polynomials are

(15)

which is actually the convolutional code adopted in the BLE standard [30], and that the modulation is the GFSK (as explained in Section II-B). The information bits are denoted by $u_k$, the coded bits by $c_k$, and the interleaved bits by $c'_k$, which are then modulated into the GFSK symbols.
[Fig. 6: The transmitter with a bit interleaver between the FEC encoder and the GFSK modulator.]
IV-A OLTD-Based Turbo Equalization
The whole procedure of turbo equalization is as shown in Fig. 7, where BCJR equalization and BCJR decoding are conducted iteratively. (Readers unfamiliar with turbo equalization may refer to [36] for an excellent tutorial.)
On the equalization side, the BCJR algorithm takes the a priori probabilities of the interleaved coded bits $c'_k$ to calculate the state transition probabilities as

(16) $\Pr(S_k = s \mid S_{k-1} = s') = \Pr\big(c'_k = c(s', s)\big),$

where $c(s', s)$ is the coded bit that drives the transition from $s'$ to $s$. It then takes the normalized likelihoods from the OLTD algorithm, computes (12) and (11), and finally calculates the a posteriori LLR $L(c'_k)$ in (10).
Note that the a posteriori LLR of $c'_k$ can also be computed as

(17) $L(c'_k) = \log\dfrac{\Pr(c'_k = 1 \mid \mathbf{y})}{\Pr(c'_k = 0 \mid \mathbf{y})} = \log\dfrac{p(\mathbf{y} \mid c'_k = 1)}{p(\mathbf{y} \mid c'_k = 0)} + \log\dfrac{\Pr(c'_k = 1)}{\Pr(c'_k = 0)}.$

Since the bit interleaver decorrelates the neighboring coded bits, $L(c'_k)$ can be decomposed into

(18) $L(c'_k) = L_e(c'_k) + L_a(c'_k),$

where

(19) $L_e(c'_k) = \log\dfrac{p(\mathbf{y} \mid c'_k = 1)}{p(\mathbf{y} \mid c'_k = 0)}$

is the extrinsic information about $c'_k$ contained in $\mathbf{y}$, and

(20) $L_a(c'_k) = \log\dfrac{\Pr(c'_k = 1)}{\Pr(c'_k = 0)}$

is called the intrinsic information. Hence, it follows from (18) that

(21) $L_e(c'_k) = L(c'_k) - L_a(c'_k).$
On the decoding side, after deinterleaving $L_e(c'_k)$ into $L_e(c_k)$, the BCJR decoder takes the extrinsic information as the "received signal" to update the a posteriori LLRs of the coded bits based on the trellis diagram associated with the FEC (15). As another input to the BCJR decoder, the uncoded information bits are assumed to be equiprobable, i.e., their a priori LLRs are zero.
Note from (18) that we can update the intrinsic LLRs as

(22) $L_a(c_k) = L^{\mathrm{D}}(c_k) - L_e(c_k),$

where $L^{\mathrm{D}}(c_k)$ denotes the a posteriori LLR of $c_k$ produced by the BCJR decoder; these can be fed into the BCJR equalizer after being interleaved into $L_a(c'_k)$. Using the relationship [cf. (20)]

(23) $\Pr(c'_k = 1) = \dfrac{e^{L_a(c'_k)}}{1 + e^{L_a(c'_k)}}, \qquad \Pr(c'_k = 0) = \dfrac{1}{1 + e^{L_a(c'_k)}},$

and (16), we can obtain the state-transition probabilities $\Pr(S_k = s \mid S_{k-1} = s')$, which are needed for the next round of BCJR equalization.
[Fig. 7: The procedure of the (OLTD-based) turbo equalization.]
After a prescribed number of iterations, we calculate, as in [36, eq. (22)],

(24)

for each coded bit; the decoder then calculates the LLRs $L(u_k)$ of the information bits and makes the final decision

(25) $\hat{u}_k = \begin{cases} 1, & L(u_k) \ge 0, \\ 0, & \text{otherwise}. \end{cases}$
Hence, the only difference between the OLTD-based turbo equalization and a standard model-based one is how the likelihoods are obtained, while the whole procedure in the dotted-line box is the same for both methods.
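One possible organization of the iteration loop of Fig. 7 is sketched below; `bcjr_equalize`, `bcjr_decode`, `interleave`, and `deinterleave` are hypothetical helpers (the equalizer and decoder return a posteriori LLRs given a priori LLRs, with the equalizer additionally taking the OLTD branch likelihoods, which are computed only once).

```python
import numpy as np

def turbo_receive(branch_likelihoods, num_iters, interleave, deinterleave,
                  bcjr_equalize, bcjr_decode):
    """One possible organization of the OLTD-based turbo equalizer (Fig. 7).

    branch_likelihoods come from the trained ANN (Stage Two) and are computed
    only once; the iterations exchange extrinsic LLRs between the BCJR
    equalizer and the BCJR decoder, cf. (16)-(23).
    """
    n = branch_likelihoods.shape[0]
    La_eq = np.zeros(n)                      # a priori LLRs of interleaved coded bits
    for _ in range(num_iters + 1):
        # Equalization: a posteriori LLRs of the interleaved coded bits.
        L_eq = bcjr_equalize(branch_likelihoods, La_eq)
        Le_eq = L_eq - La_eq                 # extrinsic information, cf. (21)
        # Decoding: the deinterleaved extrinsic LLRs act as the "received signal".
        La_dec = deinterleave(Le_eq)
        L_dec, L_info = bcjr_decode(La_dec)  # LLRs of coded and information bits
        Le_dec = L_dec - La_dec              # intrinsic update, cf. (22)
        La_eq = interleave(Le_dec)           # a priori LLRs for the next round
    info_bits = (L_info > 0).astype(int)     # final decision, cf. (25)
    return info_bits
```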
IV-B Apply BCJRNet to Turbo Equalization?
One may attempt to apply the BCJRNet to turbo equalization, but will come across a major issue, as explained next.
In the BCJRNet, a neural network is trained to obtain the a posteriori probability $\Pr(S_{k-1}=s', S_k=s \mid y_k)$. Then, by Bayes' rule,

(26) $p(y_k \mid s', s) = \dfrac{\Pr(S_{k-1}=s', S_k=s \mid y_k)\; p(y_k)}{\Pr(S_{k-1}=s', S_k=s)}.$

(To obtain $p(y_k \mid s', s)$, the BCJRNet algorithm actually assumes that $\Pr(S_{k-1}=s', S_k=s)$ is a constant [23, eq. (13)].)
Given the a posteriori probability learned by the BCJRNet, one can update the forward message through the recursion

(30)

where the state transition probabilities can be obtained according to (16) and (23). Due to this recursion, the BCJRNet-based turbo equalization is cumbersome.
More importantly, this method actually does not work, as shown by the simulation example in Section V-B. Indeed, the a posteriori probability is not the suitable metric to learn, because

(31) $\Pr(S_{k-1}=s', S_k=s \mid y_k) = \dfrac{p(y_k \mid s', s)\,\Pr(S_{k-1}=s', S_k=s)}{p(y_k)}$

relies on the a priori information $\Pr(S_{k-1}=s', S_k=s)$; thus, it is impossible to infer the likelihood from $y_k$ itself unless $\Pr(S_{k-1}=s', S_k=s)$ is a constant, which is usually untrue. In contrast, the likelihood $p(y_k \mid s', s)$ relies solely on the distribution of the channel noise and the CSI, and is independent of the channel coding; thus, it can be learned from $y_k$ itself.
V Simulation Results
In this section, we present simulation examples to validate the feasibility and superior performance of the OLTD method applied to the Viterbi algorithm, the BCJR algorithm, and the turbo equalization.
We adopt a fully connected neural network with a single hidden layer, as shown in Fig. 5, for the OLTD. The hidden layer has 100 neurons and employs the Sigmoid activation function. The number of neurons in the output layer is the same as the number of state transitions in the trellis diagram. The likelihood of each state transition is normalized by the Softmax function to approximate the ground truth, i.e., the one-hot vector. The network is trained using the Adamax optimizer [37] to minimize the cross-entropy based on a pilot sequence. The optimizer divides the pilot into mini-batches of 16 samples, and the initial learning rate is set to 0.01. The settings of our training conditions are summarized in Table I.
Item | Setting
---|---
NN toolkit | Keras with TensorFlow backend
Training processor | Intel(R) i7-6700 CPU
Training batch size | 16
Training epochs | 200
Hidden layers | 1 (100 neurons)
Optimizer | Adamax
Activation functions | Sigmoid (hidden), Softmax (output)
Loss function | Cross entropy
Pilot length | 500 (QPSK and OOK), 256 (GFSK)
Three types of signals are simulated: i) uncoded quadrature phase shift keying (QPSK) transmitted over an ISI channel, ii) bit-interleaved on-off keying (OOK) in a Poisson channel, and iii) coded GFSK as adopted in the PHY of the BLE – all can be represented by a trellis diagram. (The code used for generating the simulation results can be found at https://github.com/JayYang-Fdu/OLTD-code.)
V-A Uncoded QPSK in an ISI Channel with Additive Noise
We first simulate an ISI channel as modeled in (1), where the input signal is uncoded QPSK and the noise is complex-valued Gaussian. We set the channel memory such that the trellis diagram is fully connected with 4 states and 16 branches (state transitions). Fig. 8 compares the bit error rate (BER) performance of the model-based and the OLTD-based BCJR/Viterbi algorithms. The model-based method is simulated with perfect CSI, while the OLTD is trained on a 500-sample pilot. The simulation results are obtained by averaging over Monte Carlo runs in which the channel coefficients are drawn at random. Fig. 8 shows that the OLTD-based method achieves performance very close to the model-based benchmark. The Viterbi algorithm performs the same as the BCJR algorithm, since here the bits 0 and 1 are generated equiprobably.
We then simulate the channel noise as complex Cauchy with the PDF

(32)

which is expressed in terms of the real and imaginary parts of the noise. The Cauchy PDF has much heavier tails than the Gaussian's.
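The exact bivariate form used in (32) is not reproduced here; one plausible way to generate such heavy-tailed noise in simulation, assuming independent Cauchy-distributed real and imaginary parts with a common scale parameter, is sketched below.

```python
import numpy as np

def complex_cauchy(size, scale, rng=None):
    # Independent Cauchy-distributed real and imaginary parts (an assumed
    # construction; the exact bivariate PDF in (32) may differ).
    rng = rng or np.random.default_rng()
    return scale * (rng.standard_cauchy(size) + 1j * rng.standard_cauchy(size))

noise = complex_cauchy(10_000, scale=0.1)
```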
[Fig. 8: BER of the model-based and OLTD-based BCJR/Viterbi algorithms in the ISI channel with Gaussian noise.]
[Fig. 9: BER of the model-based and OLTD-based BCJR in the ISI channel with Cauchy noise.]
In the Cauchy noise case, the SNR is undefined since the variance of the Cauchy distribution is unbounded. We instead simulate the BER performance by varying the scale parameter of the Cauchy distribution. Fig. 9 shows that the OLTD-based BCJR outperforms the model-based BCJR algorithm in this non-Gaussian case, which is not surprising since the OLTD requires no a priori knowledge of the statistics of the noise.
[Fig. 10: BER in the ISI channel with 4-PAM co-channel interference (SIR = 0 dB).]
We also simulate the ISI channel scenario where the QPSK signal is interfered with by a random 4-ary pulse amplitude modulation (4-PAM) source. The received power of the interference is the same as that of the QPSK signal, i.e., the signal-to-interference ratio (SIR) is 0 dB. As shown in Fig. 10, the model-based Viterbi method, which assumes Gaussian noise, fails. The striking advantage of the OLTD-based approach indicates that the neural network somehow learned the "structure" of the non-Gaussian interference and hence suppressed it effectively.
[Fig. 11: BER of the OLTD-based method versus training pilot length at SNR = 10 dB.]
To demonstrate the influence of the training pilot length on reception performance, we simulate the BER of the OLTD-based approach when trained using pilots of different lengths, as shown in Fig. 11. We set SNR = 10 dB for the Gaussian channel; as the training pilot length increases, the performance of the OLTD-based method gradually approaches that of the model-based method with perfect CSI. A pilot of a few hundred samples is sufficient for all three cases. Fig. 11 illustrates from a different perspective that the OLTD-based method can outperform the model-based method in the two cases of non-Gaussian noise.
V-B Bit-interleaved Coded OOK in a Poisson Channel
In addition to the additive noise channel, we also simulate the Poisson channel previously considered in [23]. But here we consider a bit-interleaved system as shown in Fig. 6, where the information bits are FEC encoded, bit-interleaved, and then modulated by OOK before being transmitted over a Poisson channel. The channel output follows a Poisson distribution, i.e.,

(33)

where the Poisson rate depends on the transmitted OOK symbols through channel coefficients drawn at random.
The Poisson channel can then be represented by a fully connected two-state trellis diagram with 4 branches; hence, the OLTD uses a neural network with 4 output nodes. Fig. 12 shows that, with no turbo iteration, the OLTD-based method and the BCJRNet have identical performance. Applying the BCJRNet to turbo equalization, however, leads to failed decoding, as explained in Section IV-B. In contrast, the OLTD-based turbo equalization achieves significantly improved performance.
[Fig. 12: BER of the OLTD-based turbo equalization and the BCJRNet in the Poisson channel.]
V-C A BLE System and Its Enhancement
[Fig. 13: BER of the BLE receivers in the AWGN channel, with and without bit interleaving and turbo equalization.]
We simulate, in an AWGN channel, a BLE system with a bit rate of 500 kbps and a pilot of length 256, as specified in the protocol [30]. The model-based receiver consists of a BCJR demodulator and a Viterbi decoder and assumes perfect CSI. The OLTD-based receiver differs from the model-based one only in that the likelihoods [cf. (12c)] used in the BCJR algorithm are produced by a neural network trained on the 256-sample pilot sequence. According to Fig. 4, the GFSK modulation can be represented by a trellis diagram with 8 state transitions; thus, the neural network for the OLTD has 8 neurons in the output layer. The BER is averaged over Monte Carlo trials in which the channel coefficients are drawn at random. In Fig. 13, the two dash-dotted curves correspond to the model-based method and the OLTD-based one, respectively. They essentially overlap, which suggests that the neural network-assisted receiver can be practically feasible, at least performance-wise.
We also present the Shannon limit of the BER performance of the GFSK signal, obtained using the numerical method in [38, 39]. The large gap between the Shannon limit and the achieved performance of the BLE system motivated us to introduce a bit interleaver at the transmitter between the encoder and the GFSK modulator (cf. Fig. 6). Given the bit interleaver, the receiver can apply turbo equalization. Fig. 13 also illustrates the BER performance of the turbo receiver with no iteration, and with 1 and 2 iterations. The three dashed lines show the model-based turbo equalization under perfect CSI, and the three solid lines with markers correspond to the OLTD-based turbo algorithm with a pilot length of 256. It can be seen that introducing the bit interleaver and using turbo equalization in the receiver yields a significant SNR gain compared with the conventional receiver. Hence, introducing bit interleaving may be an interesting enhancement to the existing BLE protocol for significantly enhanced performance, which can make it more competitive for IoT communications.
In the last example, we consider the BLE system in an ISI channel with memory, where the channel coefficients are normalized. The combination of the GFSK modulation and the ISI channel can be modeled by a trellis diagram with 16 states. We can then also apply turbo equalization based on the OLTD method, except that here the output layer of the neural network has one neuron for each state transition of this larger trellis. The BER performance of the model-based turbo receiver with perfect CSI and the OLTD-based turbo receiver in the ISI channel is compared in Fig. 14. It can be seen that the OLTD-based turbo equalization algorithm, trained on the same 256-sample pilot sequence, has about 0.5 dB loss compared with the model-based method with perfect CSI.
[Fig. 14: BER of the model-based and OLTD-based turbo receivers for BLE in the ISI channel.]
VI Conclusions
This paper introduced a method named online learning of trellis diagram (OLTD), which uses a single-hidden-layer artificial neural network (ANN) to learn the likelihoods of the received samples under different state transitions. It can be applied to replace only the channel-dependent part of the Viterbi and BCJR algorithms. We applied the OLTD-based Viterbi/BCJR algorithms to a coded QPSK/GFSK system, and the simulation results show that, using a pilot sequence of only a few hundred samples, the OLTD-based methods can perform similarly to their model-based counterparts given perfect channel state information (CSI) and Gaussian noise. In contrast to the model-based approaches, the OLTD-based approach assumes neither CSI nor statistics of the noise, which makes it robust against non-Gaussian interference. In contrast to the state-of-the-art machine learning-assisted methods, such as the BCJRNet and ViterbiNet, the proposed method does not assume the a priori probabilities of the coded bits, which makes it readily applicable to turbo equalization. The OLTD-based algorithms can be applied to the standard Bluetooth system and to the enhanced one with bit interleaving, because they require only a pilot of moderate length. As an interesting by-product of this study, introducing bit interleaving can be a beneficial enhancement to the BLE standard.
References
- [1] J. G. Proakis and M. Salehi, Digital Communications. McGraw-Hill, 2008.
- [2] T. O’Shea and J. Hoydis, “An Introduction to Deep Learning for the Physical Layer,” IEEE Transactions on Cognitive Communications and Networking, vol. 3, no. 4, pp. 563–575, 2017.
- [3] Q. Mao, F. Hu, and Q. Hao, “Deep Learning for Intelligent Wireless Networks: A Comprehensive Survey,” IEEE Communications Surveys and Tutorials, vol. 20, no. 4, pp. 2595–2621, 2018.
- [4] D. Gunduz, P. de Kerret, N. D. Sidiropoulos, D. Gesbert, C. R. Murthy, and M. van der Schaar, “Machine learning in the air,” IEEE Journal on Selected Areas in Communications, vol. 37, no. 10, pp. 2184–2199, 2019.
- [5] N. H. Tran, W. Bao, A. Zomaya, N. H. N. Minh, and C. S. Hong, “Federated Learning over Wireless Networks: Optimization Model Design and Analysis,” in IEEE INFOCOM 2019 - IEEE Conference on Computer Communications, pp. 1387–1395, 2019.
- [6] S. Park, O. Simeone, and J. Kang, “Meta-Learning to Communicate: Fast End-to-End Training for Fading Channels,” in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5075–5079, 2020.
- [7] E. Nachmani, Y. Be’ery, and D. Burshtein, “Learning to decode linear codes using Deep Learning,” in 2016 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 341–346, 2016.
- [8] A. Bennatan, Y. Choukroun, and P. Kisilev, “Deep Learning for Decoding of Linear Codes - A Syndrome-Based Approach,” in 2018 IEEE International Symposium on Information Theory (ISIT), pp. 1595–1599, 2018.
- [9] F. Liang, C. Shen, and F. Wu, “An Iterative BP-CNN Architecture for Channel Decoding,” IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 1, pp. 144–159, 2018.
- [10] E. Nachmani, E. Marciano, L. Lugosch, W. J. Gross, D. Burshtein, and Y. Be’ery, “Deep Learning Methods for Improved Decoding of Linear Codes,” IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 1, pp. 119–131, 2018.
- [11] T. Gruber, S. Cammerer, J. Hoydis, and S. ten Brink, “On deep learning-based channel decoding,” in 2017 51st Annual Conference on Information Sciences and Systems (CISS), pp. 1–6, 2017.
- [12] N. Samuel, T. Diskin, and A. Wiesel, “Deep MIMO detection,” in 2017 IEEE 18th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), pp. 1–5, 2017.
- [13] N. Samuel, T. Diskin, and A. Wiesel, “Learning to Detect,” IEEE Transactions on Signal Processing, vol. 67, no. 10, pp. 2554–2564, 2019.
- [14] J. Li, Q. Zhang, X. Xin, Y. Tao, Q. Tian, F. Tian, D. Chen, Y. Shen, G. Cao, Z. Gao, and J. Qian, “Deep learning-based massive MIMO CSI feedback,” in 2019 18th International Conference on Optical Communications and Networks (ICOCN), pp. 1–3, 2019.
- [15] T. J. O’Shea, T. Erpek, and T. C. Clancy, “Deep Learning Based MIMO Communications.,” arXiv preprint arXiv:1707.07980, 2017.
- [16] Y. Liao, N. Farsad, N. Shlezinger, Y. C. Eldar, and A. J. Goldsmith, “Deep Neural Network Symbol Detection for Millimeter Wave Communications,” in 2019 IEEE Global Communications Conference (GLOBECOM), pp. 1–6, 2019.
- [17] N. Farsad and A. Goldsmith, “Neural Network Detection of Data Sequences in Communication Systems,” IEEE Transactions on Signal Processing, vol. 66, no. 21, pp. 5663–5678, 2018.
- [18] F. A. Aoudia and J. Hoydis, “End-to-End Learning of Communications Systems Without a Channel Model,” in 2018 52nd Asilomar Conference on Signals, Systems, and Computers, pp. 298–303, 2018.
- [19] H. Ye, L. Liang, G. Y. Li, and B.-H. Juang, “Deep Learning-Based End-to-End Wireless Communication Systems With Conditional GANs as Unknown Channels,” IEEE Transactions on Wireless Communications, vol. 19, no. 5, pp. 3133–3143, 2020.
- [20] H. He, S. Jin, C.-K. Wen, F. Gao, G. Y. Li, and Z. Xu, “Model-Driven Deep Learning for Physical Layer Communications,” IEEE Wireless Communications, vol. 26, no. 5, pp. 77–83, 2019.
- [21] X. Gao, S. Jin, C.-K. Wen, and G. Y. Li, “ComNet: Combination of deep learning and expert knowledge in OFDM receivers,” IEEE Communications Letters, vol. 22, no. 12, pp. 2627–2630, 2018.
- [22] J. Liao, J. Zhao, F. Gao, and G. Y. Li, “A Model-Driven Deep Learning Method for Massive MIMO Detection,” IEEE Communications Letters, vol. 24, no. 8, pp. 1724–1728, 2020.
- [23] N. Shlezinger, N. Farsad, Y. C. Eldar, and A. J. Goldsmith, “Data-Driven Factor Graphs for Deep Symbol Detection,” in 2020 IEEE International Symposium on Information Theory (ISIT), pp. 2682–2687, 2020.
- [24] N. Shlezinger, N. Farsad, Y. C. Eldar, and A. J. Goldsmith, “ViterbiNet: A Deep Learning Based Viterbi Algorithm for Symbol Detection,” IEEE Transactions on Wireless Communications, vol. 19, no. 5, pp. 3319–3331, 2020.
- [25] G. D. Forney, "The Viterbi algorithm," Proceedings of the IEEE, vol. 61, no. 3, pp. 268–278, 1973.
- [26] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Transactions on Information Theory, vol. 20, pp. 284–287, 1974.
- [27] G. McLachlan and D. Peel, Finite Mixture Models. Wiley, 2000.
- [28] E. Au, “Bluetooth 5.0 and Beyond [Standards],” IEEE Vehicular Technology Magazine, vol. 14, no. 2, pp. 119–120, 2019.
- [29] C.-E. Sundberg, “Continuous phase modulation,” IEEE Communications Magazine, vol. 24, no. 4, pp. 25–38, 1986.
- [30] “Bluetooth Core Specification, version 5.0.” Available: https://www.bluetooth.com/zh-cn/specifications/specs/core-specification-5/.
- [31] T. Okada and Y. Iwanami, “Turbo Equalization of GMSK Signals Using Noncoherent Frequency Detection,” IEICE Transactions on Electronics, vol. 85, no. 3, pp. 473–479, 2002.
- [32] X. Wang and Z. Yang, “Turbo equalization for GMSK signaling over multipath channels,” in 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.01CH37221), vol. 4, pp. 2641–2644, 2001.
- [33] T. Okada and Y. Iwanami, "Turbo equalization of GMSK signals using limiter-discriminator," Proc. ISSSE, 2001.
- [34] L. Atzori, A. Iera, and G. Morabito, “The Internet of Things: A survey,” Computer Networks, vol. 54, no. 15, pp. 2787–2805, 2010.
- [35] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
- [36] R. Koetter, A. C. Singer, and M. Tüchler, "Turbo equalization," IEEE Signal Processing Magazine, vol. 21, no. 1, pp. 67–80, 2004.
- [37] D. P. Kingma and J. L. Ba, “Adam: A Method for Stochastic Optimization,” in ICLR 2015 : International Conference on Learning Representations 2015, 2015.
- [38] L. I. Bing, F. Wei, B. M. Bai, and M. A. Xiao, “Fundamental performance limits of CPM coded modulation system,” Journal on Communications, 2014.
- [39] D. M. Arnold, H.-A. Loeliger, P. O. Vontobel, A. Kavčić, and W. Zeng, "Simulation-based computation of information rates for channels with memory," IEEE Transactions on Information Theory, vol. 52, no. 8, pp. 3498–3508, 2006.