Tight Upper Bounds on the Error Probability of Spinal Codes over Fading Channels
Abstract
Spinal codes, a family of rateless codes introduced in 2011, have been proved to achieve Shannon capacity over both the additive white Gaussian noise (AWGN) channel and the binary symmetric channel (BSC). In this paper, we derive explicit tight upper bounds on the error probability of Spinal codes under maximum-likelihood (ML) decoding and perfect channel state information (CSI) over three typical fading channels, including the Rayleigh channel, the Nakagami-m channel and the Rician channel. Simulation results verify the tightness of the derived upper bounds.
Index Terms:
Spinal codes, decoding error probability, fading channels, ML decoding, upper bounds.
I Introduction
First proposed in 2011 [1], Spinal codes are a family of rateless codes that have been proved to achieve Shannon capacity over both the AWGN channel and the BSC [2]. Owing to this rateless, capacity-achieving nature, Spinal codes are well suited to establishing a reliable and efficient information delivery pipeline between transceivers under highly dynamic channels. Specifically, the rateless nature allows Spinal codes to adapt automatically to time-varying channel conditions. Unlike fixed-rate codes, which require a specific code rate to be chosen in advance, rateless codes adapt to the channel naturally: the sender transmits a potentially limitless stream of encoded bits, and the receiver collects bits until decoding succeeds. In [3], Spinal codes have shown advantages in rate performance over other state-of-the-art rateless codes under different channel conditions and message sizes. Also, [3] notes that the similarity between Spinal codes and Trellis Coded Modulation (TCM) [4, 5] is superficial, since the two differ in nature and in encoding purpose.
In coding theory, performance analysis is an intriguing topic. A closed-form expression for the error probability would, in general, not only enable more efficient performance evaluation but also shed light on coding optimization design. In most cases, however, such closed-form expressions are intractable, and bounding techniques are used instead to approximate performance [6]. Along this avenue, many bounds have been established for specific channel codes, such as the advanced tight bounds on Polar codes [7, 8], the upper and lower bounds on Raptor codes [9], and the performance bounds on LT codes [10, 11]. In [12], upper and lower bounds on the error probability of linear codes under ML decoding are surveyed. For Spinal codes, however, tight explicit bounds over fading channels are still lacking.
To date, a few works have preliminarily explored the performance analysis of Spinal codes. In [2], Balakrishnan et al. analyze the asymptotic rate performance of Spinal codes and theoretically prove that Spinal codes are capacity-achieving over both the AWGN channel and the BSC. In [13], two state-of-the-art results in the finite block length (FBL) regime, i.e., the Random Coding Union (RCU) bound [14] and a variant of Gallager's result [15], are applied to analyze the error probability of Spinal codes over the AWGN channel and the BSC, respectively. In [16], the authors further tighten the bounds by characterizing the error probability as the volume of a hypersphere divided by the volume of a hypercube, although the analysis is still restricted to the AWGN channel. Until now, little has been done on the error probability analysis of Spinal codes over fading channels. An exception is [17], which derives a probability-dependent convergent upper bound over Rayleigh fading channels by adopting the Chernoff bound [18]. However, i) the bound over the Rayleigh fading channel derived in [17] is not strictly explicit, since its convergence is probability-dependent; ii) the Nakagami-m and Rician channels, both common in practical wireless communication scenarios, have not been considered in the available error probability analyses; and iii) there is not yet an upper bound that remains uniformly tight over a wide range of signal-to-noise ratio (SNR), either over fading channels or over the AWGN channel.
Motivated by the above, this paper derives tight upper bounds on the error probability of Spinal codes over three typical fading channels: the Rayleigh, Nakagami-m and Rician fading channels. The derived bounds are strictly explicit and remain uniformly tight over a wide range of SNR.
The paper is organized as follows. In Section II, the encoding process of Spinal codes is given as a priori knowledge. Section III derives upper bounds over the Rayleigh, Nakagami-m and Rician fading channels. Section IV presents simulation results, and Section V concludes the paper.

II Encoding Process of Spinal Codes
This section briefly introduces the primary building blocks of Spinal codes, in which the combination of a hash function and random number generator (RNG) functions plays a key role. Fig. 1 shows the encoding process of Spinal codes, which comprises the following five steps; a minimal code sketch of the pipeline is given after the list.
1. Segmentation: Divide the n-bit message M into k-bit segments m_1, m_2, …, m_{n/k}, where each m_i consists of k bits.
2. Sequential Hashing (a discussion of the properties of the hash function can be found in Appendix A): The hash function h sequentially generates v-bit spine values s_i, with
(1) s_i = h(s_{i-1}, m_i), where s_0 = 0^v.
3. RNG: Each spine value s_i is used to seed an RNG that generates a binary pseudo-random, uniformly distributed sequence of c-bit symbols x_{i,1}, x_{i,2}, …:
(2)
where each x_{i,j} consists of c bits.
4. Constellation Mapping: The constellation mapper maps each c-bit symbol to an element of the channel input set:
(3)
In this paper, the mapper is a uniform constellation mapping function, i.e., it converts each c-bit symbol to its decimal value.
5. Rateless Transmission: The encoder continues to generate and transmit symbols pass by pass until the receiver successfully decodes the message.
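To make these steps concrete, the following is a minimal Python sketch of the encoding pipeline. The truncated BLAKE2 digest standing in for the pairwise-independent hash family, the parameter values, and the unnormalized decimal mapping are illustrative assumptions rather than the exact construction used by Spinal codes.

import hashlib
import random

def hash_v(prev_spine, segment, v):
    # Stand-in for h(s_{i-1}, m_i): a truncated BLAKE2 digest reduced to v bits.
    # Spinal codes instead draw h from a pairwise-independent hash family (Appendix A).
    data = prev_spine.to_bytes(8, "big") + segment.to_bytes(8, "big")
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big") % (1 << v)

def spinal_encode(message_bits, k, v, c, symbols_per_segment):
    # Segmentation: split the n-bit message into k-bit segments.
    segments = [int(message_bits[i:i + k], 2) for i in range(0, len(message_bits), k)]
    spine, coded = 0, []
    for seg in segments:
        # Sequential hashing: each spine value chains the previous spine and a segment.
        spine = hash_v(spine, seg, v)
        # RNG: the spine value seeds a generator of pseudo-random c-bit symbols.
        rng = random.Random(spine)
        symbols = [rng.getrandbits(c) for _ in range(symbols_per_segment)]
        # Uniform constellation mapping: each c-bit symbol is sent as its decimal value.
        coded.extend(float(s) for s in symbols)
    return coded  # one pass of channel inputs; later passes emit further symbols per spine

# Example: an 8-bit message with k = 2, v = 32, c = 4, one symbol per segment per pass.
print(spinal_encode("10110001", k=2, v=32, c=4, symbols_per_segment=1))

Rateless operation corresponds to drawing additional symbols from each spine value, pass after pass, until the receiver decodes successfully.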
III Upper Bounds on the Error Probability
Error probability analysis is of primary importance in coding theory. Closed-form expressions for error probabilities, however, are intractable in most cases, so bounding techniques are commonly adopted to approximate them. In line with this idea, this section derives tight upper bounds on the error probability of Spinal codes over the Rayleigh, Nakagami-m and Rician fading channels, respectively.
III-A Upper Bounds on the Rayleigh Fading Channel
Theorem 1.
Consider Spinal codes with message length n, segmentation parameter k, modulation parameter c, and a sufficiently large hash parameter v, transmitted over a flat slow Rayleigh fading channel with a given mean-square fading gain and AWGN variance. The average error probability under ML decoding with perfect channel state information (CSI) can be upper bounded by
(4)
with
(5)
(6)
(7)
where , , is the number of transmitted passes, is arbitrarily chosen with , and , and N represents the number of values used in the approximation, which enables its accuracy to be adjusted.
Proof.
Suppose a message is encoded and the resulting symbols are transmitted over a flat slow Rayleigh fading channel with AWGN. The received signal can be expressed as:
(8)
where the received signal equals the transmitted symbol scaled by the channel fading parameter, which follows the Rayleigh distribution with a given mean square, plus the AWGN with a given variance.
ML decoding selects, from the candidate sequence space, the sequence with the lowest decoding cost. Given the received symbols and perfect CSI, the ML rule for the Rayleigh fading channel with AWGN is
(9)
where the minimization is taken over the candidate sequences and the minimizer is the decoding result.
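To illustrate this rule, the following is a brute-force Python sketch under the assumption that the decoding cost is the squared Euclidean distance between the received symbols and the faded re-encoded candidate symbols, which is the ML metric for Gaussian noise when the fading gains are known; the function names and the exhaustive search over all candidates are illustrative only.

import itertools

def ml_decode(received, fading, encode_fn, n_bits):
    # Exhaustive ML decoding with perfect CSI: re-encode every candidate message and
    # pick the one minimizing sum_j (y_j - alpha_j * x_j)^2 over the received symbols.
    best_msg, best_cost = None, float("inf")
    for bits in itertools.product("01", repeat=n_bits):  # candidate space of size 2^n
        candidate = "".join(bits)
        symbols = encode_fn(candidate)
        cost = sum((y - a * x) ** 2 for y, a, x in zip(received, fading, symbols))
        if cost < best_cost:
            best_msg, best_cost = candidate, cost
    return best_msg

The search is exponential in the message length, which is why the simulations in Section IV use a short message.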
From this perspective, we classify the set of candidate sequences into two subsets: i) the correct decoding sequence; ii) the wrong decoding sequences. The cost of the correct decoding sequence is
(10)
Similarly, the cost of a wrong decoding sequence is given by:
(11)
In the sequel, we attempt to express the error probability of Spinal codes explicitly. Consider the event that an error occurs in a given segment, which implies that:
1. The segment decoded at that position differs from the transmitted one.
2. The cost of the wrong decoding sequence is less than that of the correct one. In this case, the ML decoder will incorrectly choose that wrong sequence as the decoding output.
Denote the complement of this event, i.e., the event that no error occurs in the corresponding segment. The error probability of Spinal codes can then be expressed as:
(12)
Thus, to obtain the error probability, the remaining issue is to calculate the probability that a given segment is decoded incorrectly while all previous segments are correct. With this definition, the conditional probability is transformed as:
(13)
Applying the union bound of probability [19] yields that
(14)
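For reference, the union bound (Boole's inequality) [19] invoked in this step states that, for any countable collection of events,

\Pr\Big(\bigcup_{i} A_i\Big) \;\le\; \sum_{i}\Pr(A_i),

here applied to the events that individual wrong candidate sequences attain a lower cost than the correct one.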
Given the costs calculated in (10) and (11), the pairwise error probability is calculated as follows:
(15)
where (a) holds by the property proved in Appendix A, which leverages the properties of the hash function, and (b) is obtained by applying (8).
With the corresponding definitions, the expression above can be expanded as
(16)
Collecting the two groups of random variables into random row vectors, (16) can be rewritten in vector form as
(17)
which can then be expanded as
(18)
With (18) in hand, the next problem is to evaluate the two probabilities explicitly. We first attempt to simplify the first one. By introducing two rotation matrices for the corresponding hyperspace, we obtain Lemma 1 as follows:
Lemma 1.
Given that the noise is i.i.d. AWGN, the probability in (18) can be simplified as
(19)
where Q(·) represents the Gaussian Q-function and ‖·‖ represents the Euclidean norm.
Proof.
Please refer to Appendix B. ∎
Note that [20] has shown a transformation of the Q-function into an exponential form, given as
(20)
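The explicit form of (20) is not legible in this copy. One standard exponential-form representation of the Gaussian Q-function that fits this description, and that is consistent with the later discretization of an auxiliary variable over N values, is Craig's formula, reproduced here only as a plausible reading:

Q(x) \;=\; \frac{1}{\sqrt{2\pi}}\int_{x}^{\infty} e^{-t^{2}/2}\,dt
\;=\; \frac{1}{\pi}\int_{0}^{\pi/2}\exp\!\Big(-\frac{x^{2}}{2\sin^{2}\theta}\Big)\,d\theta,
\qquad x \ge 0.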
Substituting (20) into (19) yields
(21)
Applying the above in (18) and interchanging the order of integration, we have
(22)
For the Rayleigh fading channel, we denote the multiple integral with respect to the fading coefficients by a single compact term. By the i.i.d. property of the fading coefficients, given as
(23)
this multiple integral can be decomposed as
(24)
where (a) holds by adopting
(25)
and (b) holds by the i.i.d. property of the random variables. Recalling that the transmitted symbol and the fading coefficient are mutually independent, the integral in (24) with respect to the fading coefficient can then be transformed to
(26)
where the two factors are the probability mass function (PMF) of the transmitted symbol and the probability density function (PDF) of the fading coefficient, respectively. At this point, the right-hand side (RHS) of (26) can be explicitly calculated by the following lemma.
Lemma 2.
For the Rayleigh distribution, the RHS of (26) can be calculated by
(27)
where is the channel input set.
Proof.
Please refer to Appendix C for the detailed derivation. ∎
As such, substituting (27) back into (24) yields
(28)
With (28) in hand, the RHS of (12) is transformed as
(29)
III-B Bounds on the Nakagami-m Fading Channel
Theorem 2.
Consider Spinal codes with message length n, segmentation parameter k, modulation parameter c and a sufficiently large hash parameter v, transmitted over a flat slow Nakagami-m fading channel with a given mean-square fading gain, shape parameter m and AWGN variance. The average error probability given perfect CSI under ML decoding can be upper bounded by
(33)
with
(34)
(35)
(36)
where , , is the number of transmitted passes, is arbitrarily chosen with , and , and N represents the number of values used in the approximation, which enables its accuracy to be adjusted.
Proof.
Please refer to Appendix E. ∎
III-C Bounds on the Rician Fading Channel
Theorem 3.
Consider Spinal codes with message length n, segmentation parameter k, modulation parameter c and a sufficiently large hash parameter v, transmitted over a flat slow Rician fading channel with a given mean-square fading gain, shape parameter K and AWGN variance. The average error probability given perfect CSI under ML decoding can be upper bounded by
(37)
with
(38)
(39)
(40)
where , , is the number of transmitted passes, is arbitrarily chosen with , and , and N represents the number of values used in the approximation, which enables its accuracy to be adjusted.
Proof.
Please refer to Appendix F. ∎
IV Simulation Results
In this section, we conduct Monte Carlo simulations to illustrate the accuracy of the upper bounds derived in Section III. Owing to the exponential complexity of ML decoding, the message length is chosen to be small and the number of passes is kept low so that the ML-decoding simulations remain tractable. The hash parameter v is set large enough that, by Property 2 in Appendix A, a hash collision occurs on average only once in a large number of hash-function invocations. Furthermore, to improve the accuracy of the upper-bound approximations, we choose a large N and a large Monte Carlo sample size.
We normalize the mean-square value of all fading channels to one. For the Nakagami-m fading channel, we additionally fix the shape parameter m. For the Rician fading channel, the shape parameter K, defined as the ratio of the power contributed by the line-of-sight path to that of the remaining multipath components, is set to two different values for comparison.
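A skeleton of the Monte Carlo procedure behind such FER estimates is sketched below in Python. The unit-energy SNR convention, the per-symbol Rayleigh fading model, and the helper functions encode_fn and decode_fn (which may wrap the encoder and ML-decoder sketches given earlier) are assumptions made for illustration.

import math
import random

def simulate_fer(n_bits, snr_db, num_trials, encode_fn, decode_fn):
    # Estimate the frame error rate over a flat slow Rayleigh fading channel with
    # perfect CSI at the receiver, assuming unit-energy signalling so that the
    # noise variance is 10^(-SNR/10).
    noise_var = 10 ** (-snr_db / 10)
    errors = 0
    for _ in range(num_trials):
        msg = "".join(random.choice("01") for _ in range(n_bits))
        x = encode_fn(msg)
        # Rayleigh fading with E[alpha^2] = 1: alpha = |g| for g complex Gaussian.
        alpha = [math.hypot(random.gauss(0, math.sqrt(0.5)),
                            random.gauss(0, math.sqrt(0.5))) for _ in x]
        y = [a * s + random.gauss(0, math.sqrt(noise_var)) for a, s in zip(alpha, x)]
        if decode_fn(y, alpha) != msg:  # the decoder is given the fading gains (perfect CSI)
            errors += 1
    return errors / num_trials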

Fig. 2 shows examples of the error probability for the Rayleigh, Nakagami-m and Rician fading channels, respectively. All approximations are close to the simulated values. For the Rician fading channel, the FER curve for the larger K lies below that for the smaller K; this is because K is the ratio of the energy in the specular path to the energy in the scattered paths, so the larger K is, the more deterministic the channel.
V Conclusion
This paper analyzes the error probability of Spinal codes and derives upper bounds on the error probability under ML decoding over several fading channels, including the Rayleigh, Nakagami-m and Rician fading channels. Additionally, we conduct simulations for different fading channels and parameters. Our experimental examples show that the derived upper bounds estimate the average error probability well.
The work in this paper may also inspire further research in related topics. The derived upper bounds can provide theoretical support and guidance for designing other high-efficiency coding-related techniques, such as unequal error protection and concatenation with outer codes.
References
- [1] J. Perry, H. Balakrishnan, and D. Shah, “Rateless Spinal Codes,” Proceedings of the 10th ACM Workshop on Hot Topics in Networks, 2011.
- [2] H. Balakrishnan, P. Iannucci, J. Perry, and D. Shah, “De-randomizing Shannon: The design and analysis of a capacity-achieving rateless code,” arXiv preprint arXiv:1206.0418, 2012.
- [3] J. Perry, P. A. Iannucci, K. E. Fleming, H. Balakrishnan, and D. Shah, “Spinal Codes,” Proceedings of the ACM SIGCOMM 2012 Conference, pp. 49–60, 2012.
- [4] G. Ungerboeck, “Channel coding with multilevel/phase signals,” IEEE Transactions on Information Theory, vol. 28, no. 1, pp. 55–67, 1982.
- [5] G. Ungerboeck and I. Csajka, “On Improving data-link performance by increasing the channel alphabet and introducing sequence coding,” IEEE International Symposium on Information Theory, 1976.
- [6] S. Yousefi and A. Khandani, “A new upper bound on the ML decoding error probability of linear binary block codes in AWGN interference,” IEEE Transactions on Information Theory, vol. 50, no. 12, pp. 3026–3036, 2004.
- [7] D. Goldin and D. Burshtein, “Performance Bounds of Concatenated Polar Coding Schemes,” IEEE Transactions on Information Theory, vol. 65, no. 11, pp. 7131–7148, 2019.
- [8] B. Shuval and I. Tal, “A Lower Bound on the Probability of Error of Polar Codes over BMS Channels,” IEEE Transactions on Information Theory, vol. 65, no. 4, pp. 2021–2045, 2019.
- [9] F. Lázaro, G. Liva, G. Bauch, and E. Paolini, “Bounds on the Error Probability of Raptor Codes Under Maximum Likelihood Decoding,” IEEE Transactions on Information Theory, vol. 67, no. 3, pp. 1537–1558, 2021.
- [10] B. Schotsch, G. Garrammone, and P. Vary, “Analysis of LT Codes over Finite Fields under Optimal Erasure Decoding,” IEEE Communications Letters, vol. 17, no. 9, pp. 1826–1829, 2013.
- [11] B. Schotsch, Rateless coding in the finite length regime. Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2014.
- [12] I. Sason and S. Shamai, “Performance Analysis of Linear Codes under Maximum-Likelihood Decoding: A Tutorial,” Foundations and Trends in Communications and Information Theory, vol. 3, no. 1–2, pp. 1–222, 2006.
- [13] X. Yu, Y. Li, W. Yang, and Y. Sun, “Design and Analysis of Unequal Error Protection Rateless Spinal Codes,” IEEE Transactions on Communications, vol. 64, no. 11, pp. 4461–4473, 2016.
- [14] Y. Polyanskiy, H. V. Poor, and S. Verdú, “Channel coding rate in the finite blocklength regime,” IEEE Transactions on Information Theory, vol. 56, no. 5, pp. 2307–2359, 2010.
- [15] R. G. Gallager, Information theory and reliable communication. Wiley, 1968, vol. 588.
- [16] A. Li, S. Wu, J. Jiao, N. Zhang, and Q. Zhang, “New Upper Bounds on the Error Probability under ML Decoding for Spinal Codes and the Joint Transmission-Decoding System Design,” arXiv preprint arXiv:2101.07953, 2021.
- [17] A. Li, S. Wu, J. Jiao, N. Zhang, and Q. Zhang, “Spinal Codes Over Fading Channel: Error Probability Analysis and Encoding Structure Improvement,” IEEE Transactions on Wireless Communications, vol. 20, no. 12, pp. 8288–8300, 2021.
- [18] H. Chernoff, “A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the sum of Observations,” The Annals of Mathematical Statistics, pp. 493–507, 1952.
- [19] G. Boole, The mathematical analysis of logic. Philosophical Library, 1847.
- [20] U. Erez, M. D. Trott, and G. W. Wornell, “Rateless Coding for Gaussian Channels,” IEEE Transactions on Information Theory, vol. 58, no. 2, pp. 530–547, 2012.
- [21] M. Mitzenmacher and E. Upfal, Probability and computing: Randomized algorithms and probabilistic analysis. Cambridge University Press, 2005.
- [22] W. Kaplan, Advanced calculus. Pearson Education India, 1952.
Appendix A The Properties of the Hash Function and Related Inferences
The core idea of Spinal codes is to apply a hash function sequentially to generate a pseudo-random mapping from message bits to coded bits. The hash function is expressed as
(41) h : {0,1}^v × {0,1}^k → {0,1}^v
where v and k are both integers. In Spinal codes, h is chosen from a pairwise-independent family of hash functions [21] using a uniformly random seed.
Property 1.
As Perry et al. indicate [3], the hash function employed by Spinal codes should satisfy the pairwise-independence property:
(42)
where and are arbitrarily chosen, .
Property 2.
A sufficiently large v leads to a vanishing hash collision probability, with [3]
(43)
where and are arbitrarily chosen, .
Recall that is the correct decoding sequence and is a certain wrong decoding sequence. We denote the spine values of as and the spine values of as .
Conversely, if or with a sufficiently large , from Property 2 we can obtain that for .
When , , for and , we can iterate to obtain that for and for . Hence after RNGs and uniform constellation mapping, the coded symbols satisfy
(45)
where .
Consequently, the corresponding coded symbols are i.i.d. (independent and identically distributed) and uniformly distributed over the channel input set.
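Both properties can be spot-checked numerically. The sketch below estimates the collision probability of the classical family h_{a,b}(x) = ((a·x + b) mod p) mod 2^v with p prime, which is only a nearly pairwise-independent stand-in for the family assumed by Spinal codes; the prime, the input range and the trial count are illustrative.

import random

def make_hash(v, p=2_147_483_647):
    # Draw one member of the family h_{a,b}(x) = ((a*x + b) mod p) mod 2^v (p prime).
    a, b = random.randrange(1, p), random.randrange(p)
    return lambda x: ((a * x + b) % p) % (1 << v)

def estimate_collision_rate(v, trials=200_000):
    # Empirical Pr[h(x) = h(x')] over fresh hash draws and random distinct inputs;
    # for a (nearly) pairwise-independent family this is close to 2^(-v).
    hits = 0
    for _ in range(trials):
        h = make_hash(v)
        x, y = random.sample(range(1 << 20), 2)
        hits += (h(x) == h(y))
    return hits / trials

print(estimate_collision_rate(v=8), "vs", 2 ** -8)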
Appendix B Proof of Lemma 1
To simplify the probability in (18), we introduce a rotation matrix for the corresponding hyperspace, denoted as
(46)
which satisfies
(47)
As a rotation matrix, it is orthogonal and preserves Euclidean norms, which can be used to rewrite the probability as
(48)
Applying (46) and (47) in (48) results in
(49)
the RHS of which is given by the following equivalent form:
(50)
In order to further simplify the multi-fold integral above, we introduce another rotation matrix, denoted as
(51)
such that
(52)
and that
(53)
where is the transformed vector composed of with .
From (52) we can get that
(54)
Substituting (53) into (54), we have
(55)
where (c) is obtained from the orthogonality of the rotation matrix.
Together with the above and (56), (50) can be rewritten as [22]
(57)
in which the Jacobian matrix behaves as
(58)
Since the transformation is a rotation, the Jacobian matrix reduces to
(59)
Recalling that , we have
(60)
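The two facts used throughout this appendix, namely that a rotation matrix preserves Euclidean norms and that i.i.d. Gaussian noise remains i.i.d. Gaussian with the same variance after rotation, can be verified numerically. The sketch below uses a random orthogonal matrix obtained from a QR factorization as a stand-in for the specific rotation matrices constructed above; the dimension and variance are illustrative.

import numpy as np

rng = np.random.default_rng(0)
d, sigma2 = 8, 0.5  # illustrative dimension and noise variance

# A random orthogonal matrix (orthogonality is all that norm preservation needs).
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

x = rng.standard_normal(d)
print(np.allclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # True: ||Qx|| = ||x||

# Rotated i.i.d. Gaussian noise keeps the covariance sigma2 * I.
noise = rng.normal(0.0, np.sqrt(sigma2), size=(200_000, d))
rotated = noise @ Q.T
print(np.allclose(np.cov(rotated, rowvar=False), sigma2 * np.eye(d), atol=2e-2))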
Appendix C Proof of Lemma 2
The central problem is to solve the integral over the fading coefficient in the RHS of (26). Recall that the fading coefficient follows the Rayleigh distribution, whose PDF is given as
(62)
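The explicit density in (62) is not legible in this copy; for reference, the standard Rayleigh density with mean-square value Ω = E[α²] (the symbol names are illustrative) is

f_{\alpha}(\alpha) \;=\; \frac{2\alpha}{\Omega}\,\exp\!\Big(-\frac{\alpha^{2}}{\Omega}\Big),
\qquad \alpha \ge 0.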
Appendix D Proof of Monotonicity
(66)
The monotonicity of this expression is equivalent to that of
(67)
where .
With the corresponding substitutions, the derivative is given by
(68)
which indicates that the function is monotonically increasing.
As a consequence, the original function is monotonically increasing.
(69)
For the same reason, the monotonicity of (69) is equivalent to that of
(70)
where .
Similar to the previous case, the corresponding derivative takes the same form as (68), so this function is also monotonically increasing.
(71)
Because the exponential function is monotonically increasing, the monotonicity of (71) is equivalent to that of
(72)
where .
Similar to the previous cases, the result is the same as (68), so this function is also monotonically increasing.
Appendix E Proof of Theorem 2
For Spinal codes over the Nakagami-m fading channel, the only difference is to replace the Rayleigh distribution with the Nakagami-m distribution, whose PDF is given by
(73)
where the argument is the channel fading parameter and the Gamma function is defined as
(74)
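For reference, the standard Nakagami-m density with shape parameter m and mean-square value Ω (illustrative symbols), which reduces to the Rayleigh density at m = 1, is

f_{\alpha}(\alpha) \;=\; \frac{2\,m^{m}}{\Gamma(m)\,\Omega^{m}}\,
\alpha^{2m-1}\exp\!\Big(-\frac{m\,\alpha^{2}}{\Omega}\Big),
\qquad \alpha \ge 0.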
Substituting (73) into the integral in (26) and applying a suitable change of variables, the integral is transformed as
(75)
Substituting back as noted above, we obtain:
(76)
Appendix F Proof of Theorem 3
For Spinal codes over the Rician fading channel, the only difference is to replace the Rayleigh distribution with the Rician distribution, whose PDF is given by
(79)
where the argument is the channel fading parameter and I_0(·) is the modified Bessel function of the first kind of order zero, given by
(80)
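For reference, the standard Rician density, written with line-of-sight amplitude ν and scatter parameter s (illustrative symbols, distinct from the AWGN standard deviation; the shape parameter K of Section IV corresponds to K = ν²/(2s²)), together with the series form of I₀, is

f_{\alpha}(\alpha) \;=\; \frac{\alpha}{s^{2}}\,
\exp\!\Big(-\frac{\alpha^{2}+\nu^{2}}{2 s^{2}}\Big)\,
I_{0}\!\Big(\frac{\alpha\,\nu}{s^{2}}\Big), \qquad \alpha \ge 0,
\qquad
I_{0}(z) \;=\; \sum_{q=0}^{\infty}\frac{(z/2)^{2q}}{(q!)^{2}}.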
With a suitable change of variables, we have
(82)
With (82) in hand, the earlier expression can be rewritten as
(83)
Consequently, (83) becomes
(84)