New Upper Bounds on the Error Probability under ML Decoding for Spinal Codes and the Joint Transmission-Decoding System Design
Abstract
Spinal codes are a family of capacity-achieving rateless codes that have been proven to approach the Shannon capacity over both the additive white Gaussian noise (AWGN) channel and the binary symmetric channel (BSC). In this paper, we analyze bounds on the error probability of Spinal codes and design a joint transmission-decoding system. First, in the finite block-length regime, we derive new upper bounds on the maximum likelihood (ML) decoding error probability for Spinal codes over both the AWGN channel and the BSC. Then, based on the derived bounds, we formulate a rate maximization problem. As the solution exhibits an incremental-tail-transmission pattern, we propose an improved transmission scheme, referred to as the thresholded incremental tail transmission (TITT) scheme. Moreover, we develop a dynamic TITT-matching decoding algorithm, called the bubble decoding with memory (BD-M) algorithm, to reduce the decoding time complexity. The TITT scheme at the transmitter and the BD-M algorithm at the receiver jointly constitute a dynamic transmission-decoding system for Spinal codes, improving their rate performance and decoding throughput. Theoretical analysis and simulation results verify the superiority of the derived bounds and of the proposed joint transmission-decoding system design.
Index Terms:
Spinal codes, performance bounds, decoding error probability, ML decoding, rate optimization, transmission-decoding system design, low-complexity decoding, upper bounds, rateless codes.
I Introduction
I-A Background
First proposed by Perry et al. in [1], Spinal codes are a new type of near-Shannon-capacity channel coding technique that has been theoretically proven to achieve the Shannon capacity over both the additive white Gaussian noise (AWGN) channel and the binary symmetric channel (BSC) [2]. The core idea of this code is to apply a sequence of hash functions combined with random number generators (RNGs) to generate pseudo-random coded symbols: the hash functions endow the coded symbols with pairwise independence, which ensures good resilience to noise and errors; the RNGs inherit the idea of Shannon random coding and can generate as many pseudo-random coded symbols as needed to cope with time-varying channels by nature. Spinal codes draw their strength from excellent code rate performance. It has been shown in [3] that rateless Spinal codes exhibit superior rate performance compared to fixed-rate LDPC codes [4], rateless Raptor codes [5], and the layered rateless coding approach [6] of Strider codes [7], across various channel conditions and message sizes.
In addition to the capacity-approaching code rate performance, an even more attractive feature of Spinal codes is that they are also a family of rateless codes. Different from conventional fixed-rate codes, which must pre-define the "bit rate", i.e., a modulation mode, channel code, and code rate, Spinal codes transmit coded symbols in a rateless mode. With rateless transmission, Spinal codes do not need to frequently estimate the time-varying channel state to dynamically adjust the code rate and the modulation mode. Instead, they keep generating and transmitting pseudo-random coded symbols until the transmitted symbols are sufficient for the decoder to decode successfully and an acknowledgment (ACK) is fed back to the transmitter to interrupt the transmission. In this sense, Spinal codes are themselves an advanced adaptive coded modulation technique, enabling adaptive parameter selection to ensure reliable and efficient end-to-end communication.
I-B Literature Review
Owing to the near-Shannon-capacity characteristic and the rateless feature, Spinal codes have aroused extensive research interest in both performance optimization and theoretical analysis. One of the most popular open challenges in bringing Spinal codes from theory to practice is to design a practical low-complexity decoding algorithm. This challenge arises from the high-frequency use of hash functions and the multiple rounds of tentative decoding. To circumvent this problem, tree pruning strategies have traditionally been explored to design low-complexity decoding algorithms [1, 9, 10, 11]. In [3], the bubble decoding algorithm was proposed as the first practical decoding algorithm customized for Spinal codes. Then, inspired by the tree-structure feature of Spinal codes, Yang et al. introduced the forward stack decoding (FSD) algorithm, first proposed in [8], to implement the decoding process of Spinal codes [9]. With this implementation, the FSD algorithm achieves much lower decoding time complexity than the bubble decoding algorithm without sacrificing rate performance. In [10], the proposed sliding window decoding (SFD) algorithm further reduces the decoding time complexity compared to the FSD algorithm; a trade-off between decoding efficiency and decoding reliability is also investigated. In [11], Hu et al. introduce several dynamic parameters into the decoding process of Spinal codes. The proposed algorithm, referred to as the block dynamic decoding (BBD) algorithm, also demonstrates superior performance in both decoding complexity and decoding error probability. Nevertheless, we notice that the above works mainly focus on designing pruning strategies to eliminate the calculation required in a single decoding round. For rateless Spinal codes, the transmission and decoding processes are both consecutive and repeated.
In this regard, we investigate the dynamic changes of the decoding tree between adjacent decoding rounds of rateless Spinal codes.
Puncturing the coded symbols is a commonly adopted transmission design to improve the rate performance of channel coding techniques. In [12], coding schemes that can be punctured to adapt to time-varying channels are known as rate-compatible codes. As the redundant symbols are punctured, the code rate increases by nature. As such, the puncturing technique has drawn extensive interest in the literature, such as punctured LDPC codes [15, 13, 14], punctured Turbo codes [16], and punctured Polar codes [17, 18]. For rateless Spinal codes, uniform puncturing is one of the most commonly used puncturing schemes, where the coarse-grained transmission unit, the pass, is refined into finer-grained sub-passes, and the transmitter transmits only a single symbol per sub-pass [3]. In [19], uniform puncturing is also adopted in the proposed S-Spinal codes to achieve finer granularity for compression purposes. However, uniform puncturing is only a general-purpose puncturing method that has been applied extensively in rateless coded systems [20]. A puncturing scheme tailored to the unique properties of Spinal codes has yet to be designed.
Performance evaluation of channel codes is another important task in understanding the essence of a coding technique. In addition to Monte Carlo simulations, which require large amounts of repeated calculations, the bounding technique is a more straightforward method to evaluate the performance of a coding technique (in most cases, deriving an exact closed-form expression for the performance of a near-Shannon-limit coding technique is intractable, so deriving a bound is the alternative). The recent focus on performance bounds has been on near-Shannon-limit codes, ranging from performance bounds on LDPC codes [21, 22] to advanced tight bounds on Turbo codes [23, 24, 25] and Polar codes [26, 27]. However, for Spinal codes, which are a new family of near-Shannon-capacity codes, the theoretical performance analysis is still in its infancy. One of the first works devoted to the performance of Spinal codes is that of Yu et al. [28], wherein the tree structure of Spinal codes is considered, and some general bounding techniques for pairwise independent random codes, i.e., the Random Coding Union (RCU) bound and a variant of Gallager's result, are introduced as cores to obtain closed-form upper bounds on the error probability of Spinal codes. Nevertheless, we notice that the derived upper bounds are not customized for Spinal codes, but are applications of the RCU bound and Gallager's result. As a result, these bounds on the error probability are loose (see Section VI). In our previous work [29], we analyzed the upper bound on the error probability of Spinal codes over the BSC. However, the bound derived in [29] still involves several static parameters obtained through simulation, so the bounds in [29] are not completely explicit. To this end, we aim to derive explicit upper bounds on the error probability of Spinal codes.
Instead of taking the general upper bounds for pairwise independent random codes as cores, we consider the coding, decoding, and transmission processes of Spinal codes in detail to derive upper bounds on their error probability in the FBL regime.
I-C Contributions
Motivated by the above, the main contributions of this work are summarized as follows. We begin by deriving upper bounds on the error probability of ML-decoded Spinal codes over both the BSC and the AWGN channel in the finite block-length (FBL) regime. The derivation leads to the closed-form upper bounds shown in Section III, which demonstrate tighter performance at high SNRs, as shown in Section VI. Then, based on the derived closed-form upper bounds, we formulate a rate maximization problem and design a customized iterative algorithm to solve it. Noticing that the optimal solution to this problem forms an incremental-tail-transmission pattern, we heuristically propose the thresholded incremental tail transmission (TITT) scheme for Spinal codes. Furthermore, to address the high decoding complexity at the receiver, we investigate the dynamic changes of the decoding tree between adjacent decoding attempts and show that only parts of the decoding tree change during the consecutive decoding process, so the tree reconstruction of the unchanged parts can be skipped. To this end, we find that most of the available decoding algorithms for Spinal codes in the literature can be improved by simply introducing extra memory to store the unchanged parts of the decoding tree. For experimental purposes, we take the most commonly used bubble decoding algorithm as an example and design a bubble-decoding-based dynamic decoding algorithm, referred to as bubble decoding with memory (BD-M). By combining the proposed TITT scheme at the transmitter with the BD-M algorithm at the receiver, we further observe that the BD-M algorithm matches perfectly with the incremental-tail-transmission property of the TITT scheme, enjoying both higher rate performance and lower decoding complexity.
I-D Organization
The rest of this paper is organized as follows. Section II introduces the preliminaries of Spinal codes. In Section III, the new upper bounds on the error probability of ML-decoded Spinal codes are derived. The details of the transmission scheme design are presented in Section IV. In Section V, the details of the BD-M algorithm and its complexity analysis are presented. In Section VI, simulation results are provided, followed by conclusions in Section VII.
II Preliminaries
II-A Encoding Process of Spinal Codes

As shown in Fig. 1, the encoding process of Spinal codes can be accomplished in four steps:
1. An $n$-bit message $M$ is divided into $k$-bit segments, denoted by $m_1, m_2, \ldots, m_{n/k}$, where $m_i \in \{0,1\}^{k}$.
2. An iterative process is invoked to generate the series of $v$-bit spine values $s_1, s_2, \ldots, s_{n/k}$:
(1) $s_i = h(s_{i-1}, m_i), \quad s_0 = \mathbf{0}^{v}, \quad i = 1, 2, \ldots, n/k.$
3. The $v$-bit spine value $s_i$ serves as a seed of an RNG to generate a pseudo-random uniformly distributed sequence:
(2) $\mathrm{RNG}: s_i \to \{x_{i,1}, x_{i,2}, \ldots\},$ where $x_{i,j} \in \{0,1\}^{c}$, $j = 1, 2, \ldots$.
4. The constellation mapper maps each $c$-bit symbol to a channel input set $\mathcal{X}$:
(3) $f: \{0,1\}^{c} \to \mathcal{X},$ where $f(\cdot)$ is a constellation mapping function and $\mathcal{X}$ denotes the channel input set. In this paper, we consider the uniform constellation mapper, where $f(\cdot)$ converts each $c$-bit symbol to a decimal symbol.
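The four encoding steps above can be sketched in Python. This is a minimal illustrative surrogate, assuming SHA-256 stands in for the hash function $h$ and Python's `random` module for the RNG; the parameters `k`, `v`, and `c` follow the notation above, and all function names are ours, not from the original:

```python
import hashlib
import random

def spinal_encode(message_bits, k=4, v=32, c=6, symbols_per_spine=1):
    """Illustrative Spinal encoder: segment the message, chain spine
    values through a hash, seed an RNG per spine value, and emit
    c-bit symbols under a uniform constellation mapping."""
    # Step 1: divide the n-bit message into k-bit segments m_1, ..., m_{n/k}
    segments = [message_bits[i:i + k] for i in range(0, len(message_bits), k)]
    spine = 0  # initial spine value s_0 = 0, shared with the decoder
    symbols = []
    for seg in segments:
        # Step 2: iterative spine value s_i = h(s_{i-1}, m_i), kept to v bits
        digest = hashlib.sha256(f"{spine}:{seg}".encode()).digest()
        spine = int.from_bytes(digest, "big") % (1 << v)
        # Step 3: the spine value seeds an RNG producing c-bit outputs
        rng = random.Random(spine)
        for _ in range(symbols_per_spine):
            # Step 4: uniform mapping of each c-bit output to a decimal symbol
            symbols.append(rng.getrandbits(c))
    return symbols
```

Note that the encoder and decoder share only $s_0$, the hash function, and the RNG, so the decoder can deterministically replay this process for any candidate message.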
Remark 1.
As Perry et al. indicate, the hash function employed by Spinal codes should have the pairwise independence property [3]:
(4) $\Pr\{h(x) = y,\ h(x') = y'\} = 2^{-2v},$
where $x$ and $x'$ are two different inputs of the hash function and $y, y' \in \{0,1\}^{v}$.
II-B Rateless Transmission
The original transmission scheme for Spinal codes is a pass-by-pass transmission scheme. Denote by $P_l$ the $l$-th pass of coded symbols, where each pass contains one symbol generated from every spine value, i.e., $n/k$ symbols. The conventional pass-by-pass transmission scheme forms a transmission process $P_1 \to P_2 \to P_3 \to \cdots$. This process ends only when the number of received symbols is sufficient for successful decoding and an ACK is sent to interrupt the transmission. Since correct decoding may be completed in any round of transmission, the cumulative number of transmitted symbols before correct decoding must be a multiple of $n/k$, belonging to the set $\{n/k, 2n/k, 3n/k, \ldots\}$.
In [3], the authors propose a uniform-puncturing-based transmission scheme for Spinal codes. Instead of transmitting Spinal codes pass by pass, the uniform-puncturing-based transmission scheme transmits them in a symbol-by-symbol pattern. By this means, the cumulative number of transmitted symbols before correct decoding belongs to the finer set $\{1, 2, 3, \ldots\}$. Take a typical Spinal code as an example, and let $x_{i,j}$ denote the $j$-th symbol generated from the spine value $s_i$. The uniform-puncturing-based transmission scheme forms a transmission process such as $x_{1,1} \to x_{2,1} \to \cdots \to x_{n/k,1} \to x_{1,2} \to \cdots$, transmitting a single symbol per sub-pass. The transmitter continues this process until an ACK is received.
For the uniform-puncturing-based transmission scheme, the punctured redundant symbols come from the last pass of transmission. For example, assume that only $L$ symbols need to be sent for successful decoding under the uniform-puncturing-based transmission scheme. Under the same condition, the traditional pass-by-pass transmission scheme must send $\lceil L/(n/k) \rceil \cdot n/k$ symbols to decode correctly, since its transmission unit is the pass. Compared with the traditional pass-by-pass transmission scheme, the uniform-puncturing-based scheme punctures out the remaining symbols of the final pass, thus improving the rate performance of Spinal codes. To conclude, the uniform-puncturing-based transmission scheme reduces the transmission of redundant symbols in the last pass and improves the code rate of Spinal codes by refining the unit of each transmission from the coarse-grained pass to the finer-grained symbol.
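The rate advantage can be quantified with a small back-of-the-envelope computation (an illustrative sketch; `l_needed` and `pass_len` are our hypothetical names for the number of symbols the decoder requires and the pass length $n/k$):

```python
import math

def symbols_sent(l_needed, pass_len):
    """Cumulative symbols sent before an ACK can stop transmission
    under the two schemes."""
    # Pass-by-pass: only whole passes can be sent, so round up.
    pass_by_pass = math.ceil(l_needed / pass_len) * pass_len
    # Uniform puncturing: transmission stops at the exact symbol.
    symbol_by_symbol = l_needed
    return pass_by_pass, symbol_by_symbol
```

For example, with a pass length of 8 and a decoder needing 19 symbols, pass-by-pass transmission sends 24 symbols while uniform puncturing stops at 19, saving 5 redundant symbols.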
Although the uniform-puncturing-based transmission scheme indeed improves the rate performance of Spinal codes, there are still some shortcomings that can be improved. First, the uniform-puncturing-based transmission scheme needs to completely transmit a whole pass of symbols before continuing to transmit the subsequent pass, which limits its puncturing coverage to the last transmitted pass only. A more flexible transmission scheme without this limitation has yet to be proposed. Second, the symbol-by-symbol transmission pattern increases the decoding attempts at the receiver and thus reduces the decoding throughput of Spinal codes. To resolve this issue, we design a dynamic decoding framework for Spinal codes. The proposed decoder can dynamically store the decoding information in each decoding process and utilize it to assist subsequent decoding.
II-C Decoding Process of Spinal Codes
For Spinal codes over the AWGN channel, the optimal decoding algorithm is maximum likelihood (ML) decoding. In ML decoding, the decoder adopts the shared knowledge of the same hash function, the same initial spine value and the same RNG to replay the coding process over the set of received symbols. By adopting the shared knowledge, the decoder aims to find the best matching sequence whose coded vector is closest to the received vector y in Euclidean distance.
For the conventional pass-by-pass transmission scheme, let $L$ denote the number of transmitted passes for Spinal codes; the ML decoding rule can be expressed as:
(5) $\hat{m} = \operatorname*{arg\,min}_{\bar{m}' \in \{0,1\}^{n}} \ \sum_{i=1}^{n/k} \sum_{j=1}^{L} \bigl(y_{i,j} - x_{i,j}(\bar{m}')\bigr)^{2},$
where $\hat{m}$ is the decoding result and $\bar{m}'$ represents a candidate sequence.
For the punctured symbol-by-symbol transmission scheme, let $l_i$ denote the number of transmitted symbols generated from the spine value $s_i$; the ML decoding rule should be slightly modified as:
(6) $\hat{m} = \operatorname*{arg\,min}_{\bar{m}' \in \{0,1\}^{n}} \ \sum_{i=1}^{n/k} \sum_{j=1}^{l_i} \bigl(y_{i,j} - x_{i,j}(\bar{m}')\bigr)^{2}.$
For the BSC, the only difference is that the Euclidean distance should be replaced with Hamming distance.
However, traversing all candidate sequences results in exponential complexity. The serial coding structure of Spinal codes determines that Spinal codes are a type of tree code. As a result, the bubble decoding algorithm proposed in [3] can be applied to decrease the decoding complexity. Instead of searching the entire decoding tree, a bubble decoder calculates the cost of the nodes at each layer and sorts them to retain only the $B$ best candidate nodes with the lowest cost. The retained nodes are then expanded into child nodes. This repeats until the bubble decoder ends up with a list of messages at the last layer, from which the best one is selected as the decoding output. The cost at each layer can be calculated as:
(7) $\mathrm{cost}_{t}(\bar{m}') = \sum_{i=1}^{t} \sum_{j=1}^{l_i} \bigl(y_{i,j} - x_{i,j}(\bar{m}')\bigr)^{2},$
where $t$ denotes the currently expanded number of layers.
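The pruning described above is a beam search over the decoding tree. The following sketch assumes a caller-supplied `replay_symbol(prefix, layer)` that re-runs the shared encoder for a candidate prefix; the function and parameter names are illustrative, not from the original:

```python
import heapq

def bubble_decode(received, k, beam_width, replay_symbol):
    """Beam-search sketch of bubble decoding: expand each retained node
    into its 2^k children, score by accumulated squared distance to the
    received symbols, and keep only the beam_width lowest-cost nodes."""
    beam = [((), 0.0)]  # (candidate segment prefix, accumulated cost)
    for layer in range(len(received)):
        children = []
        for prefix, cost in beam:
            for m in range(1 << k):  # all 2^k possible k-bit segments
                cand = prefix + (m,)
                x = replay_symbol(cand, layer)
                children.append((cand, cost + (received[layer] - x) ** 2))
        # retain only the beam_width best nodes at this layer
        beam = heapq.nsmallest(beam_width, children, key=lambda t: t[1])
    return min(beam, key=lambda t: t[1])[0]  # best full-length candidate
```

Per layer this evaluates at most $B \cdot 2^{k}$ nodes, replacing the exhaustive search over all $2^{n}$ candidates.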
III Upper bounds on the error Probability
III-A Upper Bounds Based on General Bounding Techniques
Here we briefly review the upper bounds obtained in [28], where the authors derived upper bounds on the ML decoding error probability for Spinal codes over the AWGN channel and the BSC, respectively.
III-A1 The Bound Based on Gallager’s Result
Theorem 1.
(The upper bound on the error probability over the AWGN channel [28]) Consider Spinal codes with message length $n$ and segmentation parameter $k$ transmitted over an AWGN channel with noise variance $\sigma^{2}$. Let $L$ be the number of transmitted passes. The average error probability under ML decoding can be upper bounded by
(8) |
with
(8a) |
where , , denotes the probability distribution of the channel input and
(8b) |
III-A2 The Bound Based on RCU Bound
Theorem 2.
(The upper bound on the error probability over the BSC [28]) Consider Spinal codes with message length $n$ and segmentation parameter $k$ transmitted over a BSC with crossover probability $p$. Let $L$ be the number of passes the receiver has received. The average error probability under ML decoding can be upper bounded by
(9) |
with
(9a) |
where , .
Theorem 2 utilizes the RCU bound to evaluate the performance of Spinal codes over the BSC. First proposed in [31], the RCU bound is a significant general bounding technique that requires pairwise independence. As Spinal codes are also a type of pairwise independent random codes, the RCU bound is introduced to obtain Eq. (9a), thereby leading to the upper bound on the error probability of Spinal codes.
III-B A New Upper Bound over the AWGN Channel
Different from the analysis in [28], which investigates the performance of Spinal codes by introducing general bounding techniques for random codes, we focus on Spinal codes themselves to obtain new, code-specific upper bounds. Specifically, we systematically consider the coding structure, the decoding rule, and the pairwise independence property of Spinal codes to conduct the performance analysis.
Theorem 3.
(The new upper bound on the error probability over the AWGN channel) Consider Spinal codes with message length $n$, segmentation parameter $k$, and modulation parameter $c$ transmitted over an AWGN channel with noise variance $\sigma^{2}$. The average error probability under ML decoding can be upper bounded by
(10) |
with
(10a) |
(10b) |
(10c) |
where $\Gamma(\cdot)$ denotes the Gamma function, and $l_i$ is the number of transmitted symbols generated from the spine value $s_i$.
Note that the right-hand side (RHS) of (10a) can be immediately calculated by solving the integrals $I_1$ and $I_2$:
Lemma 1.
.
Let , the integral is calculated by:
(11) |
Proof.
Please see Appendix B. ∎
Lemma 2.
.
Let , the integral is calculated by:
(12) |
where is the function defined as .
Proof.
Please see Appendix C. ∎
The proof of Theorem 3 is given as follows:
Proof.
Suppose message $\bar{m}$ is encoded into Spinal code symbols $x_{i,j}$ to be transmitted through an AWGN channel. Let $y_{i,j}$ denote the corresponding received symbol at the receiver, and let $n_{i,j}$ denote the AWGN with variance $\sigma^{2}$. The received symbol is:
(13) $y_{i,j} = x_{i,j} + n_{i,j}.$
The ML decoding process selects the candidate with the lowest decoding cost from the candidate sequence space $\{0,1\}^{n}$. In this regard, we divide the candidate sequence space into two categories: (i) the correct decoding sequence, i.e., $\bar{m}$; (ii) the wrong decoding sequences $\bar{m}' \neq \bar{m}$, which are not unique and form a set. The core idea of the ML decoding process is to distinguish these two categories by their decoding costs. Thus, we begin by analyzing their decoding costs.
The cost of the correct decoding sequence is defined as , calculated by
(14) |
Similarly, the cost of a wrong decoding sequence is defined as :
(15) |
Denote by $E_i$ the event that there exists an error in the $i$-th segment of Spinal codes and by $\bar{E}_i$ the opposing event of $E_i$. The error probability of Spinal codes is:
(16) | ||||
Define the set consisting of the wrong candidate sequences under the condition that event $E_i$ is true.
With these notations, the probability is given as:
(17) |
Note that the RHS of (17) can be upper bounded by the union bound of probability, which yields the inequality:
(18) |
Next, we turn our attention back to the iterative process $s_i = h(s_{i-1}, m_i)$ given in (1) to further analyze the RHS of (18). Denote the spine values of $\bar{m}$ and $\bar{m}'$ as $s_j$ and $s'_j$, respectively. Since $m_j = m'_j$ for all $j < i$, we obtain that $s_j = s'_j$ for any $j < i$. This leads to the further conclusion that $x_{j,t} = x'_{j,t}$ for any $j < i$. Denote by $\mathbf{x}$ the vector consisting of the $x_{j,t}$ with $j \ge i$. Similarly defining $\mathbf{x}'$ as the vector consisting of the $x'_{j,t}$ and $\mathbf{n}$ as the vector consisting of the $n_{j,t}$ with $j \ge i$, we have
(19) | ||||
where is the AWGN with distribution that
(20) |
Also, as $\bar{m} \neq \bar{m}'$ and because of the pairwise independence property of hash functions, the spine value $s'_i$ of the wrong decoding sequence is independent of the spine value $s_i$ of the correct decoding sequence. Note that the spine values of Spinal codes are obtained iteratively via $s_j = h(s_{j-1}, m_j)$. This leads to the pairwise independence of $s_j$ and $s'_j$ for all $j \ge i$. Thereby, the symbols generated by the RNG seeded with $s'_j$ are also independent of the symbols generated by the RNG seeded with $s_j$ for $j \ge i$, which yields that $\mathbf{x}'$ is independent of $\mathbf{x}$ and follows the uniform distribution. As such, the probability in (19) is upper bounded by:
(21) | ||||
where $V_{N}(r)$ represents the volume of the $N$-ball with radius $r$ [32], and the denominator denotes the volume of the $N$-cube spanned by the channel input set.
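For reference, the $N$-ball volume invoked above admits the standard closed form (a textbook identity, consistent with [32]; this is where the Gamma function in Theorem 3 arises):

```latex
V_{N}(r) \;=\; \frac{\pi^{N/2}}{\Gamma\!\left(\tfrac{N}{2}+1\right)}\, r^{N}.
```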
Note that the volume of the $N$-ball in (21) might be larger than the volume of the $N$-cube, while the probability is at most $1$; the $\min\{\cdot, 1\}$ function can thus be applied to yield a further tightened inequality as follows:
(22) |
Substituting (20) and (22) into (19) results in the bound:
(23) | ||||
Note that the RHS of (22) is a piecewise function. Let
we have
(24) |
Denote the RHS of (24) as , and we obtain a piecewise form as follows:
(25) |
Plug (25) in (23), we obtain the overall bound:
(26) | ||||
We solve each of the terms in the above separately by introducing the hyperspherical coordinates:
(27) | ||||
taking
(28) | ||||
Denote the first term in the RHS of (26) as $I_1$ and the second term as $I_2$; applying (27) and (28) in (26) yields the following equations:
Note that the integrals above can be decomposed as
(29) | ||||
wherein the multiple integrals on the RHS are given by [32]:
(30) |
Applying (30) in (29) and substituting (29) into (26) yields the bound in Theorem 3, which completes the proof. ∎
III-C A New Upper Bound over the BSC
Theorem 4.
(The new upper bound on the error probability over the BSC) Consider Spinal codes with message length $n$ and segmentation parameter $k$ transmitted over a BSC with crossover probability $p$. Let $l_i$ be the number of symbols generated from the spine value $s_i$ of the $n$-bit message sequence. The FER of Spinal codes under ML decoding can be upper bounded by
(33) |
with
(33a) |
Note that
(33b) |
where
(33c) |
Proof.
Please refer to Appendix D. ∎
IV Transmission Scheme Design
This section aims to optimize the transmission scheme of Spinal codes. We begin by formulating and solving a rate maximization problem under ML decoding, wherein the derived upper bounds in Section III are adopted as a constraint. The results show that the maximum rate can be achieved by the incremental tail transmission scheme under ML decoding. Then, heuristically, we focus on the more practical near-ML bubble decoding algorithm, for which we design an improved transmission scheme named the TITT scheme. The simulation results given in Section VI show that the proposed TITT scheme enables further rate improvement for Spinal codes.
IV-A Problem Formulation and Solution
We establish a rate maximization model under a sufficiently small error probability constraint. Two related fundamental parameters are emphasized here:
•
Code rate: Consider Spinal codes with message length $n$ and segmentation parameter $k$ transmitted over a channel. Let $l_i$ be the number of symbols generated from the spine value $s_i$ of the $n$-bit message sequence. The code rate can be calculated as $R = n \big/ \sum_{i=1}^{n/k} l_i$.
•
Error probability: The error probability can be upper bounded by the results given in Theorem 3 and Theorem 4 over the AWGN channel and the BSC, respectively.
The rate maximization problem can be expressed as follows:
1) Objective function: to maximize the code rate $R$.
2) Constraint: to ensure that the transmitted symbols meet a prescribed requirement on the decoding error probability, $P_e \le \varepsilon$, we impose the stronger condition that
$\bar{P}_e \le \varepsilon,$
where $\bar{P}_e$ denotes the upper bound on the error probability for ML-decoded Spinal codes; the value of $\varepsilon$ can be preset according to specific application requirements.
3) Decision variables: the number of symbols generated from each spine value, $l_1, l_2, \ldots, l_{n/k}$.
To sum up, the overall optimization problem can be formulated as follows.
Problem 1.
For Spinal codes transmitted over the AWGN channel:
$\max_{\{l_i\}} \ R = n \Big/ \textstyle\sum_{i=1}^{n/k} l_i$
s.t. $\bar{P}_e \le \varepsilon, \quad l_i \in \mathbb{Z}^{+}, \ i = 1, \ldots, n/k,$
where $\bar{P}_e$ is given in Theorem 3 and $\mathbb{Z}^{+}$ stands for the positive integer set.
Problem 2.
For Spinal codes transmitted over the BSC:
$\max_{\{l_i\}} \ R = n \Big/ \textstyle\sum_{i=1}^{n/k} l_i$
s.t. $\bar{P}_e \le \varepsilon, \quad l_i \in \mathbb{Z}^{+}, \ i = 1, \ldots, n/k,$
where the bound and its components are provided in Theorem 4.
The above problems can be solved using nonlinear integer programming techniques. The branch-and-bound algorithm is a general method for this type of problem. However, its iterative mode suffers from high complexity and is likely to converge to a locally optimal solution. Motivated by this, we design an algorithm customized for the problems above, which solves them globally.
The general idea is to consider the dual problems rather than solve them directly. The dual problem of Problem 1 (or Problem 2) is as follows.
In essence, the dual problem above determines the decision variables under the constraint of a fixed code rate, so as to minimize the error probability of the transmission scheme. Although the difference between the dual problem and the original problem is just an exchange of the constraint and the objective function, the constraint on the decision variables in the dual problem is linear, which is much simpler than in the original problem.
We devise an on-the-fly algorithm to solve Problem 1 or Problem 2 by leveraging their dual problems (see Algorithm 1). By 'on-the-fly', we mean that the minimum error probability is found by repeatedly solving Problem 3 with an increasing number of transmitted symbols. The on-the-fly process terminates and outputs $\{l_i\}$ once the upper bound on the error probability meets the condition $\bar{P}_e \le \varepsilon$. As the upper bound on the error probability of Spinal codes is monotonically decreasing in the number of transmitted symbols, the output also maps to the minimum number of cumulative transmitted symbols such that $\bar{P}_e \le \varepsilon$. Since the minimum number of transmitted symbols corresponds to the maximum code rate, the rate maximization problem under a prescribed small error probability is thereby solved.
We initially used Algorithm 1 to obtain the optimal solution to Problem 1. With these results in hand, we then designed a greedy baseline algorithm that significantly reduces the calculations caused by repeatedly solving Problem 3. Our experiments showed no difference in the solutions produced by the two algorithms. Specifically, the greedy baseline algorithm always transmits the currently most error-prone symbol, that is, the symbol that minimizes the current FER. The details of the greedy baseline algorithm are given in Algorithm 2.
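The greedy baseline can be sketched as follows; `upper_bound` is a placeholder for the bound of Theorem 3 or Theorem 4 evaluated at an allocation $(l_1, \ldots, l_{n/k})$, and all names are illustrative rather than taken from the original:

```python
def greedy_symbol_allocation(upper_bound, num_spines, eps, max_symbols=10_000):
    """Greedy sketch: repeatedly grant one more symbol to the spine
    position whose increment most reduces the error-probability upper
    bound, until the bound drops below eps."""
    l = [1] * num_spines  # start with one symbol per spine value
    while upper_bound(l) > eps and sum(l) < max_symbols:
        # pick the position whose extra symbol minimizes the resulting bound
        best_pos = min(
            range(num_spines),
            key=lambda i: upper_bound(l[:i] + [l[i] + 1] + l[i + 1:]),
        )
        l[best_pos] += 1  # transmit the currently most error-prone symbol
    return l
```

Each iteration evaluates the bound $n/k$ times instead of re-solving the full dual problem, which is the source of the computational savings.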
Remark 2.
The value of $\varepsilon$ can be preset based on specific application requirements. For example, typical ultra-reliable and low-latency communication (URLLC) systems commonly require an error probability lower than $10^{-5}$. In our simulations, we preset $\varepsilon$ accordingly.
IV-B Improved Transmission Scheme under ML Decoding
Consider Spinal codes with given message length $n$ and segmentation parameter $k$; initially sending several whole passes is generally required for the decoder to construct an integral decoding tree. In this paper, we take typical parameter settings as case studies. Using both Algorithm 1 and Algorithm 2, we obtain exactly the same solutions, as shown in Table I.
TABLE I: Solutions to the Rate Maximization Problem. (a) Transmission scheme over the BSC, for crossover probabilities 0.05, 0.01, 0.005, and 0.001. (b) Transmission scheme over the AWGN channel, for SNRs of 7, 8, 9, and 10 dB.
It can be observed from Table I that, with the ML decoding algorithm, the transmitter tends to continuously transmit the tail symbols, i.e., the symbols generated from the last spine value $s_{n/k}$. This incremental-tail-transmission pattern can be explained and generalized by the serial coding structure of Spinal codes. Due to the serial coding structure $s_i = h(s_{i-1}, m_i)$, the symbols generated from the spine value $s_i$ are related to the block message $m_i$ and the previous spine value $s_{i-1}$. Since $s_{i-1}$ is in turn related to $s_{i-2}$ and the block message $m_{i-1}$, we can conclude that the symbols generated from the spine value $s_i$ are related to all $m_j$ with $j \le i$. This means that, for Spinal codes, tail symbols generated from $s_{n/k}$ are relevant to the whole message sequence, i.e., all $m_j$ with $j \le n/k$, and hence carry more information.
Through empirical observations and qualitative analysis, we conclude that transmitting tail symbols can further improve the code rate of ML-decoded Spinal codes.
Remark 3.
The ML decoding algorithm is difficult to apply in a practical communication system due to its high complexity. Though not practical, it provides an ideal benchmark for the performance of Spinal codes. Furthermore, the related analysis and transmission scheme design can serve as an inspirational basis for transmission scheme design with near-ML decoding algorithms.
IV-C Improved Transmission Scheme under Bubble Decoding
In this subsection, inspired by the incremental tail transmission scheme for ML decoding, we propose an improved transmission scheme named the thresholded incremental tail transmission (TITT) scheme. The TITT scheme combines uniform puncturing with the incremental tail transmission scheme for practical Spinal codes using the bubble decoding algorithm.
As introduced in Section II-C, bubble decoding is a near-ML decoding algorithm. It prunes branches at each layer and retains only the $B$ nodes with the lowest cost. Obviously, decoding succeeds only if the decoder prunes correctly at every layer of the decoding tree.
Recall that the cost is calculated as in (7). Since the cost at layer $t$ is only related to the preceding received symbols, i.e., $y_{i,j}$ with $i \le t$, inadequate transmission of the preceding symbols may result in incorrect pruning and thus directly lead to a decoding error. Therefore, for the bubble decoding algorithm, merely transmitting the tail symbols without considering the sufficiency of the preceding symbols is not rational. In addition, in the pruning process of the bubble decoding algorithm, each intermediate layer retains $B$ candidate sequences, while the last layer retains only one, which means that the pruning at the last layer is more prone to errors.
Considering the analysis above, we propose an improved transmission scheme for bubble decoding in Algorithm 3. It combines uniform puncturing with incremental tail transmission, forming a two-stage scheme. The first stage adopts the uniform-puncturing-based transmission, aiming to refine the code rate granularity and, more importantly, to ensure that the pruning at the earlier levels of the decoding tree is correct. After sufficient symbols have been sent, the transmitter switches to the second stage, which adopts the incremental tail transmission scheme, aiming to improve the probability that the correct candidate sequence is selected from the candidate list at the last layer. A switch-over threshold on the number of transmitted symbols determines when the transmitter switches stages. The detailed procedures of the improved transmission scheme for bubble decoding are described in Algorithm 3.
The value of the switch-over threshold in Algorithm 3 can be set according to the following inferences: as the number of transmitted symbols increases, the instantaneous code rate of the rateless code decreases; when the instantaneous code rate approaches the channel capacity, it is highly probable that the previous pruning is correct. Therefore, the threshold can be set such that the instantaneous code rate at the switch-over point approaches the channel capacity. In this work, we provide the following heuristic settings, by which we have obtained satisfactory simulation performance, as will be shown in Section VI.
•
For the AWGN channel,
•
For the BSC,
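As a rough illustration of the two-stage behavior, the following sketch emits the spine-value index for each transmitted symbol; `n_spines`, `threshold`, and the round-robin order (standing in for the uniform puncturing pattern) are all assumed, illustrative choices rather than the scheme of Algorithm 3:

```python
# Hypothetical two-stage TITT symbol schedule: stage 1 cycles over all
# spine values (round-robin stands in for the uniform puncturing order),
# stage 2 transmits tail symbols only. All names/parameters are illustrative.
def titt_schedule(n_spines, total_symbols, threshold):
    """Return the spine-value index from which each symbol is generated."""
    order = []
    while len(order) < total_symbols:
        if len(order) < threshold:
            order.append(len(order) % n_spines)  # stage 1: cover all layers
        else:
            order.append(n_spines - 1)           # stage 2: tail symbols only
    return order

sched = titt_schedule(n_spines=4, total_symbols=10, threshold=8)
# First 8 symbols cycle over spine values 0..3; the last 2 are tail symbols.
```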
V Bubble Decoding with Memory (BD-M)
In essence, both uniform puncturing and the proposed TITT scheme improve the rate performance by refining the code rate granularity of Spinal codes. However, one trade-off is that this refinement increases the number of decoding attempts at the receiver, resulting in extensive calculations. To address this issue, we investigate the dynamic changes of the decoding tree between adjacent decoding rounds of rateless Spinal codes and find that only part of the decoding tree changes during the dynamic transmission-decoding process (see Theorem 5). Therefore, the conventional full-tree updating process can be replaced by a partial-tree updating one.
Theorem 5.
If a symbol generated from the i-th spine value is newly received at the decoder, then only the layers of the decoding tree from the i-th layer onward will change in the decoding process.
Proof.
Please refer to Appendix E. ∎
Based on Theorem 5, if the receiver introduces extra memory to dynamically store and partially update the decoding tree of Spinal codes, the calculations for the unchanged parts of the decoding tree can be eliminated, thereby reducing the decoding complexity. Inspired by this idea, we take the most commonly used bubble decoding algorithm as an example and propose the bubble decoding with memory (BD-M) algorithm. Notably, the proposed BD-M algorithm matches the proposed TITT scheme well. The combination of the TITT scheme at the transmitter and the BD-M algorithm at the receiver constitutes an improved transmission-decoding system, which not only exhibits good rate performance but also improves the decoding efficiency of Spinal codes.
V-A Decoding Process of the BD-M Algorithm
Conventional bubble decoding is a full-tree-updating algorithm, i.e., it repeatedly reconstructs the entire decoding tree upon receiving any new sub-passes or symbols. However, by introducing a decoding tree memory at the receiver, the decoder can store the constructed decoding tree and update only part of it. Based on this observation, we design the BD-M algorithm, a partial-tree-updating algorithm.
The primary decoding process of the BD-M algorithm can be summarized in four steps:
1.
Construct a pruned decoding tree according to the received integral pass, following the pruning strategy of the original bubble decoding algorithm [3].
2.
Store the pruned decoding tree.
3.
If the decoder receives a new symbol generated from the i-th spine value, retain the top i-1 layers of the decoding tree and update the layers from the i-th onward according to the pruning strategy of the bubble decoding algorithm.
4.
Repeat steps 2) and 3) until the decoding process is finished.
It can be observed that the proposed BD-M algorithm does not change the bubble-decoding-based pruning decision at each layer, since the cost of each node calculated by (7) remains unchanged. The core idea of the BD-M algorithm is to store the unchanged tree information and thereby avoid recalculating the costs of the unchanged part.
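The partial-tree-updating idea can be sketched as follows; the 1-bit toy encoder and mismatch metric are stand-ins for the Spinal hash chain and the cost (7), so this is a minimal illustration of the update logic rather than the actual BD-M implementation:

```python
# Minimal partial-tree-updating sketch in the spirit of BD-M. The 1-bit
# toy "encoder" and mismatch metric stand in for the Spinal hash chain
# and the cost (7); only the update-from-a-stored-layer logic matters.
def expand_and_prune(beam, layer_symbols, beam_width):
    # expand every surviving node by one bit and keep the lowest-cost nodes
    children = []
    for bits, cost in beam:
        for b in (0, 1):
            metric = sum(abs(s - b) for s in layer_symbols)
            children.append((bits + (b,), cost + metric))
    children.sort(key=lambda node: node[1])
    return children[:beam_width]

def bdm_decode(received, beam_width, memory=None, start_layer=0):
    # memory holds beams of layers computed earlier; only layers with
    # index >= start_layer are re-expanded and re-pruned (Theorem 5).
    beams = list(memory[:start_layer]) if memory else []
    beam = beams[-1] if beams else [((), 0.0)]
    for layer in range(start_layer, len(received)):
        beam = expand_and_prune(beam, received[layer], beam_width)
        beams.append(beam)
    return beams

beams = bdm_decode([[1], [0], [1]], beam_width=2)
# A new tail symbol only triggers recomputation of the last layer,
# reusing the stored beams of all earlier layers:
beams2 = bdm_decode([[1], [0], [1, 1]], beam_width=2, memory=beams, start_layer=2)
```

Because the stored beams of the untouched layers are reused verbatim, only the affected layers are re-expanded and re-pruned, which is where the complexity saving analyzed in Section V-B comes from.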
Remark 4.
The BD-M algorithm can be applied to both uniform puncturing and the proposed TITT scheme to reduce decoding complexity, but it matches the proposed TITT scheme better. As described in Algorithm 3, the TITT scheme tends to transmit more tail symbols. Correspondingly, when receiving tail symbols, the BD-M algorithm at the decoder only needs to update the last layer of the decoding tree. By this means, the algorithm dramatically reduces the repeated calculation of decoding costs, as the decoder merely updates the last layer of the decoding tree.
V-B Qualitative Complexity Comparison
During the decoding process, the computation at each layer of the decoding tree mainly consists of the cost calculation and the bubble sorting for all the nodes at that layer, so the computation amount at a layer scales with the number of nodes it contains.
First, we analyze the number of nodes at each layer of the decoding tree. According to the decoding process, branch pruning starts once the number of candidate nodes at a layer exceeds the number of nodes retained by the bubble decoding algorithm; the number of nodes at each layer follows accordingly.
The amount of calculation required to update the decoding tree from a given layer onward can then be obtained by summing the per-layer computation over the updated layers. Clearly, updating from a deeper layer requires less computation than reconstructing the full decoding tree, which corresponds to updating from the first layer.
Next, we analyze the total computation amount required by different transmission-decoding pairs and compare them.
1) Uniform Puncturing with Bubble Decoding vs. Uniform Puncturing with BD-M:
For uniform puncturing (UP) with the original bubble decoding (BD) algorithm, the decoder reconstructs the full decoding tree upon receiving any new sub-passes or symbols, so the total computation amount can be expressed as
(34)
where the per-spine-value symbol counts are the numbers of transmitted symbols generated from each spine value that are required to successfully recover the message when using the uniform-puncturing-based transmission scheme.
For uniform puncturing with the BD-M algorithm, the decoder does not reconstruct the full decoding tree but retains the existing decoding tree in memory and updates only a part of it. The corresponding total computation amount can be expressed as
(35)
Comparing (34) with (35) shows that the total computation of UP with BD-M is strictly smaller, i.e., the proposed BD-M algorithm requires less computation than the original bubble decoding.
2) Uniform Puncturing with BD-M vs. TITT with BD-M:

Let the numbers of transmitted symbols generated from each spine value that are required to successfully recover the message under the proposed improved transmission scheme be defined analogously. The total computation amount of TITT with BD-M can be expressed as
(36)
Comparing (35) with (36), we can prove (see Appendix F) that
(37)
which demonstrates the superiority of the proposed transmission scheme. As a result, the improved transmission scheme at the transmitter and the matching BD-M algorithm at the decoder constitute an optimized transmission-decoding system for Spinal codes.
VI Simulation Results
In this section, we conduct Monte Carlo simulations to verify the upper bounds derived in Section III and to demonstrate the rate performance gain obtained by the TITT scheme. We also show that the BD-M algorithm proposed in Section V matches well with the proposed TITT scheme.

VI-A Bounding Performance
Fig. 2 compares, over the AWGN channel, the upper bound proposed in [28], the upper bound given in (10), and the Monte Carlo simulation results. Considering that the decoding complexity of ML decoding increases exponentially with the message length, we select a short message length for the simulation setup and use a large Monte Carlo sample size to obtain the simulated error probability shown in Fig. 2. From Fig. 2, we observe that, compared to the available bound in [28], the proposed upper bound draws its strength from its tightness at high SNRs. At low SNRs, however, the proposed bound is less tight. This phenomenon is mainly due to (21), where the bound on the probability is given as the volume of a ball divided by the volume of a cube. Note that the radius of the hypersphere is exactly the norm of the noise vector, while the volume of the cube is fixed. Consequently, at low SNR, i.e., when the noise variance is high, the RHS of (21) may exceed 1 and the bound degenerates into a trivial one, since the LHS of (21), being a probability, is itself upper bounded by 1. In the high-SNR regime, by contrast, the RHS of (21) remains far below 1, yielding the tightened bounding performance shown in Fig. 2.
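The degeneration at low SNR can be illustrated numerically (with assumed, not the paper's, parameters): the volume of an n-ball of radius r is pi^(n/2) r^n / Gamma(n/2 + 1), and once the noise radius grows, the ball-over-cube ratio exceeds 1 and must be clipped to the trivial bound:

```python
# Illustration (assumed parameters) of why the RHS of (21) degenerates at
# low SNR: the n-ball volume pi**(n/2) * r**n / Gamma(n/2 + 1) grows with
# the noise radius r, while the n-cube volume is fixed, so the ratio can
# exceed 1 and must then be clipped to the trivial probability bound 1.
import math

def ball_over_cube(n, radius, side):
    ball = math.pi ** (n / 2) * radius ** n / math.gamma(n / 2 + 1)
    return ball / side ** n

n, side = 8, 2.0
low_noise = min(1.0, ball_over_cube(n, radius=0.5, side=side))   # informative
high_noise = min(1.0, ball_over_cube(n, radius=4.0, side=side))  # degenerates
```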


Fig. 3 provides, over the BSC, the numerical comparison among the upper bound proposed in [28], the upper bound given in (33), and the Monte Carlo simulation results. For the simulation setup, we again select a short message length and a large Monte Carlo sample size to obtain the simulated error probability. From Fig. 3, we can see the superior bounding performance of the proposed upper bound over a wide range of SNRs. That is, the proposed upper bound maintains a smaller gap to the Monte Carlo simulation results than the bound in [28]. As the proposed bound further narrows the gap to the simulated results, it provides a more exact performance approximation for Spinal codes.
VI-B Rate Performance
Fig. 4 shows the rate performance comparison among the proposed TITT scheme, the uniform-puncturing-based transmission scheme (referred to as the UP scheme), and the pass-by-pass transmission scheme. The simulation results in Fig. 4(a) are obtained over the AWGN channel, while those in Fig. 4(b) are obtained over the BSC. Here we adopt the bubble decoding algorithm for the simulation setup. It can be seen that the proposed TITT scheme outperforms both the UP scheme and the pass-by-pass transmission scheme over the BSC as well as the AWGN channel.
VI-C Decoding Complexity

Fig. 5 gives the average normalized decoding time of different 'transmission scheme - decoding algorithm' pairs. For comparison purposes, we count the elapsed time required to successfully decode a fixed number of packets and obtain the relative decoding time shown in Fig. 5. From Fig. 5, we observe that the proposed TITT scheme exhibits lower complexity than the uniform-puncturing-based transmission scheme. The reason is that the TITT scheme requires fewer coded symbols for successful decoding, thereby reducing the number of decoding attempts at the decoder.
Furthermore, we can also observe that the proposed BD-M algorithm reduces the decoding time for both the UP scheme and the proposed TITT scheme, demonstrating its superior decoding efficiency compared to the original bubble decoding algorithm. Moreover, the reduction in decoding time for the TITT scheme is more evident than that for the UP scheme, verifying the compatibility between the proposed BD-M algorithm and the TITT scheme. Combining the simulation results in Fig. 4 and Fig. 5, we conclude that the designed TITT scheme at the transmitter and the proposed BD-M algorithm at the receiver jointly constitute a dynamic transmission-decoding system for Spinal codes. With this combined implementation, both the rate performance and the decoding complexity of Spinal codes are improved.
VII Conclusions
This paper analyzed the error probability of Spinal codes over both the BSC and the AWGN channel and derived new upper bounds on the ML decoding error probability. Based on the derived upper bounds, we designed an improved transmission scheme to further increase the code rate of Spinal codes. Furthermore, to address the issue of decoding complexity, we proved that the decoding tree only changes partially during the dynamic transmission-decoding process. Inspired by this result, we designed the partial-tree-updating BD-M algorithm, which matches the TITT scheme well.
The work in this paper may also stimulate further research interest and effort in related topics. The derived upper bounds can provide theoretical support and guidance for designing other high-efficiency coding-associated techniques, such as unequal error protection and concatenation with outer codes. In addition, the idea of partial-tree-updating decoding provides a framework for further improving the decoding efficiency of other state-of-the-art decoding algorithms in the literature; for example, FSD-with-memory and SFD-with-memory algorithms could be developed along the same lines.
Appendix A An approach to calculate
For our considered Spinal-codes-based system, each coded symbol is drawn uniformly from the channel input set. Applying this in (8b) results in the equality:
By evaluating the resulting expression numerically, one can obtain the desired quantity.
Appendix B Proof of Lemma 1
We notice that, after a suitable substitution, computing the differential gives
(38)
We begin by considering the indefinite form of the RHS of (38). Noting that the indefinite integral is composed of two parts, we can evaluate it by iteratively applying integration by parts as:
Appendix C Proof of Lemma 2
After a change of variables, we obtain
(42)
If the exponent is even, the solution to the indefinite integral can be obtained from (39):
(43)
With a further substitution, the definite integral can be calculated by
(44)
If the exponent is odd, we can iteratively apply integration by parts and obtain the recurrence relation:
By solving the above recursive formula, we can obtain the explicit expression:
(45)
As such, we have:
(46)
Note that
(47)
where Q(·) denotes the Gaussian Q-function, i.e., the tail probability of the standard normal distribution. The definite integral in the RHS of (42) is then given as:
(48)
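For completeness, the Q-function appearing in (47)-(48) can be evaluated through the complementary error function, since Q(x) = 0.5 erfc(x/√2):

```python
# The Gaussian Q-function, Q(x) = P(Z > x) for standard normal Z, evaluated
# via the complementary error function: Q(x) = 0.5 * erfc(x / sqrt(2)).
import math

def q_func(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

# sanity checks against well-known values
assert abs(q_func(0.0) - 0.5) < 1e-12
assert abs(q_func(1.0) - 0.158655) < 1e-6
```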
Appendix D Proof of Theorem 4
For ML decoding over the BSC, the Euclidean distance is replaced by the Hamming distance:
Over a BSC, each received bit is flipped independently with the channel's crossover probability, so the noise bits obey a Bernoulli distribution with that probability as parameter.
Similarly, we can classify the candidate sequence set into two categories: the correct sequence and the wrong sequences. The cost of the correct candidate sequence can be calculated as
It can then be shown that this cost obeys a binomial distribution with
(49)
By similarly adopting the union bound of probability, we have
(50)
The cost of a wrong candidate sequence can be calculated as
Since the hash-generated symbols of a wrong candidate are pseudo-random, we can verify that all its coded symbols follow a Bernoulli distribution with parameter 1/2. Thus, we have
Note that the distribution of this cost is related neither to the noise nor to the correct encoded symbols. Then, it holds that
(51)
The remaining steps proceed similarly, and it holds that
(53)
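A quick simulation (with illustrative parameters, seeded for reproducibility) confirms the two distributions above: the Hamming cost of the correct sequence concentrates around the message length times the crossover probability, while wrong candidates concentrate around half the message length:

```python
# Monte Carlo check (illustrative parameters, seeded for reproducibility):
# the correct sequence's Hamming cost behaves like Binomial(n, p_cross),
# a wrong candidate's like Binomial(n, 1/2).
import random

random.seed(1)
n, p_cross, trials = 64, 0.05, 2000

def hamming_cost(p_disagree):
    # number of positions where received and candidate bits disagree
    return sum(1 for _ in range(n) if random.random() < p_disagree)

correct_mean = sum(hamming_cost(p_cross) for _ in range(trials)) / trials
wrong_mean = sum(hamming_cost(0.5) for _ in range(trials)) / trials
# theoretical means: n * p_cross = 3.2 and n / 2 = 32
```

The wide separation between the two cost distributions is what allows the union-bound analysis in (50)-(53) to yield a tight error probability bound.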
Appendix E Proof of Theorem 5
Note that the cost calculation at each decoding tree layer is implemented by
(54)
Provided that a new symbol generated from the i-th spine value is received, and denoting the costs of the updated decoding tree accordingly, the costs at the layers before the i-th remain the same as in (54), with
(55)
However, from the i-th layer onward, the cost turns to
(56)
Appendix F Proof of (37)
For the improved transmission scheme, which combines uniform puncturing with incremental tail transmission, it is easy to find that
(57)
As the rate corresponding to the improved transmission scheme is higher than the rate when using uniform puncturing, it turns out that
(58)
Therefore, we obtain that
(60)
References
- [1] J. Perry, H. Balakrishnan, and D. Shah, “Rateless Spinal Codes,” Proceedings of the 10th ACM Workshop on Hot Topics in Networks, Art. no. 6, 2011.
- [2] H. Balakrishnan, P. Iannucci, J. Perry, and D. Shah, “De-randomizing Shannon: The Design and Analysis of a Capacity-Achieving Rateless Code,” arXiv preprint arXiv:1206.0418, 2012. Available: http://arxiv.org/pdf/1206.0418v1.pdf.
- [3] J. Perry, P. A. Iannucci, K. Fleming, and H. Balakrishnan, “Spinal Codes,” Proceedings of the ACM SIGCOMM 2012 Conference, pp. 49-60, 2012.
- [4] R. Gallager, “Low-Density Parity-Check codes,” IRE Transactions on Information Theory, 8(1):21–28, 1962.
- [5] A. Shokrollahi, “Raptor Codes,” IEEE Transactions on Information Theory, vol. 52, no. 6, pp. 2551–2567, 2006.
- [6] U. Erez, M. D. Trott, and G. W. Wornell, “Rateless Coding for Gaussian Channels,” IEEE Transactions on Information Theory, vol. 58, no. 2, pp. 530–547, 2012.
- [7] A. Gudipati and S. Katti, “Strider: Automatic Rate Adaptation and Collision Handling,” Proceedings of the ACM SIGCOMM 2011 conference, 2011.
- [8] F. Jelinek, “Fast Sequential Decoding Algorithm Using a Stack,” IBM Journal of Research and Development, vol. 13, no. 6, pp. 675-685, 1969.
- [9] W. Yang, Y. Li, X. Yu and J. Li, “A Low Complexity Sequential Decoding Algorithm for Rateless Spinal Codes,” IEEE Communications Letters, vol. 19, no. 7, pp. 1105-1108, July 2015.
- [10] S. Xu, S. Wu, J. Luo, J. Jiao and Q. Zhang, “Low Complexity Decoding for Spinal Codes: Sliding Feedback Decoding,” 2017 IEEE 86th Vehicular Technology Conference (VTC-Fall), Toronto, ON, pp. 1-5, 2017.
- [11] Y. Hu, R. Liu, H. Bian and D. Lyu, “Design and Analysis of a Low-Complexity Decoding Algorithm for Spinal Codes,” IEEE Transactions on Vehicular Technology, vol. 68, no. 5, pp. 4667-4679, 2019.
- [12] J. Hagenauer, “Rate-Compatible Punctured Convolutional Codes (RCPC codes) and Their Applications,” IEEE Transactions on Communications, vol. 36, no. 4, pp. 389–400, Apr. 1988.
- [13] J. Ha, J. Kim, and S. W. McLaughlin, “Rate-Compatible Puncturing of Low-Density Parity-Check codes,” IEEE Transactions on Information Theory, vol. 50, no. 11, pp. 2824–2836, 2004.
- [14] S. Sesia, G. Caire, and G. Vivier, “Incremental Redundancy Hybrid ARQ Schemes Based on Low-Density Parity-Check Codes,” IEEE Transactions on Communications, vol. 52, no. 8, pp. 1311–1321, 2004.
- [15] D. G. M. Mitchell, M. Lentmaier, A. E. Pusane and D. J. Costello, “Randomly Punctured LDPC Codes,” IEEE Journal on Selected Areas in Communications, vol. 34, no. 2, pp. 408-421, Feb. 2016.
- [16] R. Garzón-Bohórquez, C. Abdel Nour and C. Douillard, “Protograph-Based Interleavers for Punctured Turbo Codes,” IEEE Transactions on Communications, vol. 66, no. 5, pp. 1833-1844, May 2018.
- [17] K. Niu, K. Chen and J. Lin, “Beyond turbo codes: Rate-compatible punctured polar codes,” 2013 IEEE International Conference on Communications (ICC), 2013, pp. 3423-3427.
- [18] R. Wang and R. Liu, “A Novel Puncturing Scheme for Polar Codes,” IEEE Communications Letters, vol. 18, no. 12, pp. 2081-2084, Dec. 2014.
- [19] Y. Li, J. Wu, B. Tan, M. Wang and W. Zhang, “Compressive Spinal Codes,” IEEE Transactions on Vehicular Technology, vol. 68, no. 12, pp. 11944-11954, 2019.
- [20] J. Xu, S. Wu, J. Jiao and Q. Zhang, “Optimized Puncturing for the Spinal Codes,” 2019 IEEE International Conference on Communications (ICC), Shanghai, China, pp. 1-5, 2019.
- [21] I. Sason and S. Shamai (Shitz), “Improved Upper Bounds on the Ensemble Performance of ML Decoded Low Density Parity Check Codes,” IEEE Communications Letters, vol. 4, pp. 89–91, Mar. 2000.
- [22] I. Goldenberg and D. Burshtein, “Upper Bound on Error Exponent of Regular LDPC Codes Transmitted Over the BEC,” IEEE Transactions on Information Theory, vol. 55, no. 6, pp. 2674-2681, June 2009.
- [23] I. Sason and S. Shamai (Shitz), “Improved Upper Bounds on the Decoding Error Probability of Parallel and Serial Concatenated Turbo Codes via Their Ensemble Distance Spectrum,” IEEE Transactions on Information Theory, vol. 46, no. 1, pp. 1–23, Jan. 2000.
- [24] T. M. Duman and M. Salehi, “New Performance Bounds for Turbo Codes,” IEEE Transactions on Communications, vol. 46, no. 6, pp. 717–723, June 1998.
- [25] I. Sason and S. Shamai (Shitz), “On Improved Bounds on the Decoding Error Probability of Block Codes over Interleaved Fading Channels, with Applications to Turbo-like Codes,” IEEE Transactions on Information Theory, vol. 47, no. 6, pp. 2275–2299, Nov. 2001.
- [26] D. Goldin and D. Burshtein, “Performance Bounds of Concatenated Polar Coding Schemes,” IEEE Transactions on Information Theory, vol. 65, no. 11, pp. 7131-7148, Nov. 2019.
- [27] B. Shuval and I. Tal, “A Lower Bound on the Probability of Error of Polar Codes over BMS Channels,” IEEE Transactions on Information Theory, vol. 65, no. 4, pp. 2021-2045, April 2019.
- [28] X. Yu, Y. Li, W. Yang, and Y. Sun, “Design and Analysis of Unequal Error Protection Rateless Spinal Codes,” IEEE Transactions on Communications, vol. 64, no. 11, pp. 4461–4473, 2016.
- [29] A. Li, S. Wu, Y. Wang, J. Jiao and Q. Zhang, “Spinal Codes over BSC: Error Probability Analysis and the Puncturing Design,” 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), Antwerp, Belgium, pp. 1-5, 2020.
- [30] R. G. Gallager, Information theory and reliable communication, New York: Wiley, 1968.
- [31] Y. Polyanskiy, H. V. Poor, and S. Verdu, “Channel Coding Rate in the Finite Blocklength Regime,” IEEE Transactions on Information Theory, vol. 56, no. 5, pp. 2307–2359, 2010.
- [32] J. Scott, “82.18 The Volume of the n-ball – I,” The Mathematical Gazette, vol. 82, no. 493, pp. 104-105, 1998.