
Soft Demapping of Spherical Codes from
Cartesian Powers of PAM Constellations

Reza Rafie Borujeny, Susanna E. Rumsey, Stark C. Draper, and Frank R. Kschischang. The authors are with the Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S 3G4, Canada (e-mail: [email protected], [email protected], [email protected], [email protected]).
Abstract

For applications in concatenated coding for optical communications systems, we examine soft demapping of short spherical codes constructed as constant-energy shells of the Cartesian power of pulse amplitude modulation constellations. These are unions of permutation codes having the same average power. We construct a list decoder for permutation codes by adapting Murty’s algorithm, which is then used to determine mutual information curves for these permutation codes. In the process, we discover a straightforward expression for determining the likelihood of large subcodes of permutation codes. We refer to these subcodes, obtained by all possible sign flips of a given permutation codeword, as orbits. We introduce a simple process, which we call orbit demapping with frozen symbols, that allows us to extract soft information from noisy permutation codewords. In a sample communication system with probabilistic amplitude shaping protected by a standard low-density parity-check code that employs short permutation codes, we demonstrate that orbit demapping with frozen symbols provides a gain of about 0.3 dB in signal-to-noise ratio compared to the traditional symbol-by-symbol demapping. By using spherical codes composed of unions of permutation codes, we can increase the input entropy compared to using permutation codes alone. In one scheme, we consider a union of a small number of permutation codes. In this case, orbit demapping with frozen symbols provides about 0.2 dB gain compared to the traditional method. In another scheme, we use all possible permutations to form a spherical code that exhibits a computationally feasible trellis representation. The soft information obtained using the BCJR algorithm outperforms the traditional symbol-by-symbol method by 0.1 dB. Overall, using the spherical codes containing all possible permutation codes of the same average power and the BCJR algorithm, a gain of 0.5 dB is observed compared with the case of using one permutation code with the symbol-by-symbol demapping. Comparison of the achievable information rates of bit-metric decoding verifies the observed gains.

Index Terms:
Probabilistic amplitude shaping, spherical codes, permutation codes, trellis, soft-out demapping.

I Introduction

Recent advancements in optical fiber communication have highlighted the potential of employing short spherical codes for modulation, leading to promising nonlinear gains [1]. These spherical codes, constructed from Cartesian powers of pulse amplitude modulation (PAM) constellations, are essentially unions of permutation codes [2]. Permutation codes have already found their application in probabilistic amplitude shaping (PAS) within the realm of fiber optic communication [3]. This paper aims to delve deeper into the study of such unions of permutation codes, exploring their potential benefits and implications in the context of optical fiber communications.

Permutation codes manifest three intriguing properties. Firstly, all codewords possess the same energy. This characteristic has been identified as the cause for their reduced cross-phase modulation (XPM) and enhanced received signal-to-noise ratio (SNR) when deployed over a nonlinear optical fiber [1], serving as the primary motivation for this work.

Secondly, all codewords, when considering the amplitude of their elements, exhibit the same type. Consequently, they can be utilized to approximate a probability mass function on their constituent constellation (i.e., the alphabet from which the elements of the initial vector are selected). This capability enables their use in distribution matching [4], a key property implicitly fundamental to the development of PAS [3].

Thirdly, minimum Euclidean distance decoding of permutation codes is notably straightforward [2], with a decoding complexity not worse than that of sorting.

The performance of permutation codes has been extensively studied in terms of their distance properties and error probability under maximum likelihood detection when transmitted over an additive white Gaussian noise (AWGN) channel (see [5] and references therein). The performance of permutation codes in terms of mutual information in an AWGN channel has remained largely elusive until recent advancements. In particular, the work of [3] suggests that, given reasonable protection with a forward error-correcting code, subsets of permutation codes with large blocklength can operate remarkably close to the Shannon limit.

To circumvent the inherent rate loss at short blocklengths [6], permutation codes of sufficiently large blocklength should be employed for PAS [7]. Conversely, to leverage their enhanced SNR in nonlinear fiber, a smaller blocklength is preferred [1]. This presents a trade-off between the blocklength and the linear/nonlinear gains, which is an area of focus in this work. Our primary interest lies in the application of permutation codes, and generalizations thereof, as inner multi-dimensional modulation formats in a concatenated scheme with an outer error-correcting code (in the reverse concatenation scheme used in PAS, the meanings of “inner” and “outer” should be interpreted from the point of view of the receiver). We consider the PAS [3] setup to demonstrate our findings, as detailed in Section II. The novelty of our work lies in our choice of signal set for the inner modulation and in the soft demapping of the inner modulation.

We will focus on permutation codes with a PAM constituent constellation. This maintains standard transmitter designs, making our codes compatible with existing systems. As mentioned, the performance of permutation codes in terms of mutual information has not been extensively studied in the literature. This is chiefly due to the very large size of these codes, which makes conventional computation of mutual information practically impossible. In Sections III and IV, different types of achievable rates of permutation codes are discussed. In particular, using a novel list decoder for permutation codes constructed as a special case of Murty’s algorithm [8], we develop a methodology to obtain mutual information curves of permutation codes for the AWGN channel—a task that was otherwise infeasible using conventional methods. These curves, which are counterparts of the mutual information curves of [9] for PAM constellations, characterize the mutual information as a function of SNR for permutation codes when, at the transmitter, codewords are chosen uniformly at random. Using these curves, we study the shaping performance of permutation codes as a function of their blocklength and find the range of blocklengths for which a reasonable shaping gain can be expected.

The soft-demapping of permutation codes (or the more general spherical codes we will study) has received little attention in the literature. One common approach to obtaining soft information is to assume that the elements of each codeword are independent [3]. This significantly reduces the computational complexity of calculating the soft information, as it can be computed in a symbol-by-symbol fashion. The type of the permutation code contributes to the calculation of the soft information through the prior distribution. While the soft information obtained using this assumption becomes more accurate as the blocklength goes to infinity, for the small blocklengths of interest in nonlinear fiber, the symbol-by-symbol soft-demapping approach of [3] is far from accurate [10, 11]. One key result of this work is a simple and computationally efficient method, which is described in Section IV, to obtain soft information from received permutation codewords.

Finding the largest permutation code for a given average energy and a fixed constituent constellation is a finite-dimensional counterpart of finding the maximizer of input entropy with an average power constraint [12]. By removing the constraint on having a permutation code (i.e., having only one type), the problem of finding the largest spherical code for a given average energy and a fixed constituent constellation is more relevant at finite blocklengths. This will allow us to further increase the input entropy at shorter blocklengths of interest in communication over nonlinear optical fiber while preserving the constant-energy property. Most importantly, in Section V, we show that such codes exhibit a trellis structure with reasonable trellis complexity, allowing us to obtain soft information from the received spherical codewords. Moreover, the achievable rates of permutation codes under bit-metric decoding are numerically evaluated using various methods for estimating soft information. The soft-in soft-out structure of our scheme makes it an appealing choice for fiber-optic communications systems, as it allows for short constant-energy spherical codes with reasonable shaping gain and tolerance to nonlinear impairments caused by XPM.

Throughout this paper, we adopt the following notational conventions to ensure clarity and consistency. The set of real numbers is denoted as $\mathbb{R}$. A normal lowercase italicized font is used for variables, samples of a random variable and functions with one-letter names, as in $x$, $y$, $f(x)$ or $\alpha(x)$. A bold lowercase italicized font is used for vectors, which are always treated as column vectors, as in $\bm{x}=(x_{1},x_{2},\dots,x_{n})$. A normal capital italicized font is used for random variables and graphs, as in $X$, $Y$, $Z$ and $G$. The only exception to this rule is $O$, which is used as Landau’s big-O symbol. A bold capital italicized font is used for random vectors, as in $\bm{X}$. A sans-serif capital font is used for matrices, as in $\mathsf{P}$. A capital calligraphy font is used for sets and groups, as in $\mathcal{A}$, $\mathcal{S}$ and $\mathcal{C}$. A monospace font is used for the energy (squared Euclidean norm) of vectors, as in $\mathtt{E}$ or $\mathtt{E}_{\bm{y}}$.

II Background

This section contains an overview of some of the most important concepts used in this paper including a brief description of permutation codes and a general overview of PAS. We also review the assignment problem, a fundamental combinatorial optimization problem which will play a central role in the development of list decoders for permutation codes.

II-A Permutation Codes

We denote the set of the first $n$ positive integers by $[n]$. A permutation of length $n$ is a bijection from $[n]$ to itself. The set of all permutations of length $n$ forms a group under function composition. This group is called the symmetric group on $[n]$ and is denoted by $\mathcal{S}_{n}$.

There are different ways to represent a permutation $\alpha\in\mathcal{S}_{n}$. In the two-line form, the permutation $\alpha$ is given as

\[
\begin{bmatrix}1&2&3&\ldots&n\\ \alpha(1)&\alpha(2)&\alpha(3)&\ldots&\alpha(n)\end{bmatrix}.
\]

The matrix form of $\alpha$ is given as

\[
\mathsf{P}_{\alpha}=\begin{bmatrix}\bm{e}_{\alpha(1)}&\bm{e}_{\alpha(2)}&\bm{e}_{\alpha(3)}&\ldots&\bm{e}_{\alpha(n)}\end{bmatrix}^{\top}
\]

where $\bm{e}_{j}$ denotes a column vector of length $n$ with $1$ in the $j^{\text{th}}$ position and $0$ in every other position. The set of permutation matrices $\{\mathsf{P}_{\alpha}\mid\alpha\in\mathcal{S}_{n}\}$ forms a group under matrix multiplication and is usually denoted as $\mathcal{A}_{n-1}$. This group is, of course, isomorphic to $\mathcal{S}_{n}$. A signed permutation matrix is a generalized permutation matrix whose nonzero entries are $\pm 1$. The set of all $n\times n$ signed permutation matrices forms a group under matrix multiplication and is usually denoted as $\mathcal{B}_{n}$.

A Variant I permutation code $\mathcal{C}$ of length $n$ [2] is characterized as the set of points in real Euclidean $n$-space $\mathbb{R}^{n}$ that are generated by the group $\mathcal{A}_{n-1}$ acting on a vector $\bm{x}$:

\[
\mathcal{C}=\{\mathsf{P}\bm{x}\mid\mathsf{P}\in\mathcal{A}_{n-1}\}.
\]

This means that the code $\mathcal{C}$ is the set of all distinct vectors that can be formed by permuting the order of the $n$ elements of a vector $\bm{x}$. Each vector in $\mathcal{C}$ is called a codeword. Likewise, a Variant II permutation code $\mathcal{C}$ of length $n$ is characterized as the set of points in $\mathbb{R}^{n}$ that are generated by the group $\mathcal{B}_{n}$ acting on a vector $\bm{x}$:

\[
\mathcal{C}=\{\mathsf{P}\bm{x}\mid\mathsf{P}\in\mathcal{B}_{n}\}.
\]

Here, the code $\mathcal{C}$ is the set of all distinct vectors that can be formed by permuting the order or changing the signs of the $n$ elements of a vector $\bm{x}$.

The initial vector of a Variant I or II permutation code is the codeword

\[
\bm{x}=(x_{1},x_{2},x_{3},\dots,x_{n})
\]

that satisfies $0<x_{1}\leq x_{2}\leq x_{3}\leq\dots\leq x_{n}$. The leftmost inequality is strict to be consistent with the use of a PAM constituent constellation.
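As a concrete illustration (ours, not from the paper), the following snippet enumerates the Variant I and Variant II codes generated by a tiny initial vector; the code sizes follow the multinomial counts discussed in Section IV.

```python
# Enumerate the Variant I and Variant II permutation codes with initial
# vector (1, 1, 3); a brute-force sketch that is only feasible for tiny n.
from itertools import permutations, product

x = (1, 1, 3)
variant1 = set(permutations(x))                    # all distinct orderings
variant2 = {tuple(s * v for s, v in zip(signs, c))
            for c in variant1
            for signs in product((1, -1), repeat=len(x))}

print(sorted(variant1))    # 3 codewords: 3!/2! = 3
print(len(variant2))       # 24 codewords: 3 * 2^3
```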

Figure 1: Block diagram of the communication system that we consider.

II-B Probabilistic Amplitude Shaping

We consider the communication system of PAS as described in [3]. As shown in Fig. 1, at the first stage in the transmitter, a sequence of binary information inputs is mapped to a Variant I permutation code whose initial vector has an empirical distribution that resembles a desired input distribution. This first step is called permutation encoding and is described further in Section IV. Each symbol in the constituent constellation is labeled according to some Gray labeling of the used PAM. Therefore, corresponding to each Variant I codeword, there is a binary label obtained by concatenating the Gray label of each of the elements in the codeword. If the constituent constellation is $\mathcal{M}$ (we assume that the constituent constellation has a power-of-two size), the length of the label of each Variant I codeword of blocklength $n$ is exactly $n\log_{2}\lvert\mathcal{M}\rvert$. The binary label of a codeword should not be confused with the information bits at the input of the permutation encoder that produced the codeword. Each Variant I permutation codeword is the output of the permutation encoder applied to a sequence of typically fewer than $n\log_{2}\lvert\mathcal{M}\rvert$ input information bits.

To protect the labels, a soft-decodable channel code is used. We emphasize that instead of encoding the input information bits that were used by the permutation encoder, the codeword labels are encoded by the channel code. This is fundamental in the development of PAS that uses bit-interleaved coded modulation, as it allows the permutation codewords to have a semi-Gray labeling: if codewords are close in Euclidean distance, their labels are also close in Hamming distance. If the input information bits are used to label the codewords, a semi-Gray labeling, even for small blocklengths, appears to be challenging to find. The channel encoder produces a sequence of parity bits which is used to assign signs to the elements of the Variant I permutation codeword (sometimes, some binary inputs are directly used as sign bits of some of the elements of the permutation codeword; this is done when the ratio of the blocklength of the permutation code to the length of its binary label, $1/\log_{2}\lvert\mathcal{M}\rvert$, is larger than the overhead of the channel code; see [3] for details). This will produce a Variant II permutation codeword which will be transmitted over an AWGN channel with noise variance $\sigma^{2}$.

A noisy version of each transmitted Variant II permutation codeword is received at the receiver. By demapping, we mean the detection of the binary label and sign bits of a permutation codeword from its received noisy version. The first block at the receiver estimates the log-likelihood ratio (LLR) for each bit in the label of the received word—a process we refer to as soft demapping. The estimated LLRs are then passed to the decoder of the channel code to approximate the label of the transmitted Variant I permutation codeword at the receiver. If an acceptable Variant I codeword is obtained, the inverse of the permutation encoder produces the corresponding binary input sequence. Conversely, if the reproduced symbols do not form a valid Variant I permutation codeword, a block error is declared.

II-C The Assignment Problem

A bipartite graph $G=(\mathcal{U},\mathcal{V},\mathcal{E})$ is a triple consisting of two disjoint sets of vertices $\mathcal{U}$ and $\mathcal{V}$ and a set of edges $\mathcal{E}\subseteq\mathcal{U}\times\mathcal{V}$. Each edge in $\mathcal{E}$ connects a vertex in $\mathcal{U}$ to a vertex in $\mathcal{V}$. The graph $G$ is called balanced if $\lvert\mathcal{U}\rvert=\lvert\mathcal{V}\rvert=n$. The bipartite graph $G$ is complete if $\mathcal{E}=\mathcal{U}\times\mathcal{V}$. All bipartite graphs that we consider are assumed complete and balanced. The graph $G$ is weighted if there exists a weight function $w:\mathcal{E}\to\mathbb{R}\cup\{-\infty\}$. A matching is a set of edges without common vertices. The weight of a matching is the sum of the weights of the edges in that matching. A perfect matching is a matching that matches all vertices of the graph. The maximum weight perfect matching problem is the problem of finding a perfect matching with maximum weight. This problem is usually called the assignment problem and is stated as follows.

Problem 1 (Assignment Problem).

Find a bijection $f:\mathcal{U}\to\mathcal{V}$ such that the reward function

\[
\sum_{a\in\mathcal{U}}w(a,f(a))
\]

is maximized.

The assignment problem is typically solved using the Hungarian method [13, 14]. The time complexity of the Hungarian algorithm is $O(n^{3})$. However, the weight function may have additional structure that can be exploited to reduce the complexity of solving the assignment problem.
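For readers who want to experiment, a general-purpose solver suffices at small scale; the sketch below (an illustration of ours, not part of the paper) uses SciPy, whose linear_sum_assignment routine implements a Hungarian-type $O(n^{3})$ method.

```python
# Solve a random instance of Problem 1 with an off-the-shelf solver.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 5))          # W[i, j] = w(a_i, b_j)
rows, cols = linear_sum_assignment(W, maximize=True)

# The bijection is f(a_i) = b_{cols[i]}; its reward is the sum below.
print(list(zip(rows, cols)), W[rows, cols].sum())
```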

A weight function $w$ is said to be multiplicative if there exist two functions

\begin{align*}
w_{\mathcal{U}}&:\mathcal{U}\to\mathbb{R}\cup\{-\infty\},\\
w_{\mathcal{V}}&:\mathcal{V}\to\mathbb{R}\cup\{-\infty\},
\end{align*}

such that

\[
\forall(a,b)\in\mathcal{U}\times\mathcal{V}:\quad w(a,b)=w_{\mathcal{U}}(a)\,w_{\mathcal{V}}(b).
\]

If the weight function is multiplicative, the assignment problem can be solved using sorting as follows. Assume

\[
\mathcal{U}=\{a_{i}\mid i\in[n]\},\quad\mathcal{V}=\{b_{i}\mid i\in[n]\},
\]

and that there are two permutations $\sigma_{\mathcal{U}}$ and $\sigma_{\mathcal{V}}$ on $[n]$ such that

\begin{align*}
w_{\mathcal{U}}(a_{\sigma_{\mathcal{U}}(1)})&\leq w_{\mathcal{U}}(a_{\sigma_{\mathcal{U}}(2)})\leq\dots\leq w_{\mathcal{U}}(a_{\sigma_{\mathcal{U}}(n)}),\\
w_{\mathcal{V}}(b_{\sigma_{\mathcal{V}}(1)})&\leq w_{\mathcal{V}}(b_{\sigma_{\mathcal{V}}(2)})\leq\dots\leq w_{\mathcal{V}}(b_{\sigma_{\mathcal{V}}(n)}).
\end{align*}

In this case, the assignment problem can be immediately solved using the upper bound in the rearrangement inequality [15]. The bijection $f$ that solves the assignment problem is defined by

\[
f(a_{i})=b_{\sigma_{\mathcal{V}}\sigma_{\mathcal{U}}^{-1}(i)}.
\]

The solution can be found by sorting in $O(n\log n)$ time.
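The sorting construction is short enough to state in code; the following sketch (ours, under the multiplicative-weight assumption above) builds the optimal bijection with two argsorts and evaluates the reward.

```python
# Sorting solution for a multiplicative weight w(a_i, b_j) = wU(a_i) * wV(b_j):
# match the k-th smallest wU-value with the k-th smallest wV-value.
import numpy as np

wU = np.array([0.3, -1.2, 2.0, 0.0])   # wU(a_1), ..., wU(a_n)
wV = np.array([1.5, -0.4, 0.9, 2.2])   # wV(b_1), ..., wV(b_n)
sigma_U, sigma_V = np.argsort(wU), np.argsort(wV)

f = np.empty(len(wU), dtype=int)       # f[i] = index of b assigned to a_i
f[sigma_U] = sigma_V                   # i.e., f(a_i) = b_{sigma_V(sigma_U^{-1}(i))}
print(f, (wU * wV[f]).sum())           # optimal reward
```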

III Permutation Codes as Multidimensional Constellations

In this section, we discuss how permutation codes can be used as multidimensional constellations. List decoding of permutation codes is formulated as an assignment problem and proper weight functions for permutation codes are defined. In particular, a novel weight function for Variant II permutation codes is introduced that allows for efficient orbit list decoding of Variant II permutation codes. Orbit decoding is used in the rest of the paper in order to obtain soft information when demapping Variant II permutation codes. In the Appendix, using the definitions of this section, we present a methodology to obtain mutual information vs. SNR curves for permutation codes. Essential to the development of our method is a specialized version of Murty’s algorithm which is described in the Appendix. This section concludes with example mutual information curves which show that considerable shaping gain is obtained by using permutation codes with blocklengths as small as $n=50$.

III-A List Decoding of Variant I Permutation Codes

We consider a vector Gaussian noise channel

\[
\bm{Y}=\bm{X}+\sigma\bm{Z}
\]

where the input $\bm{X}$ is a random vector uniformly distributed over a permutation code $\mathcal{C}$, the noise term $\bm{Z}$ is a standard normal random vector and $\sigma$ is the standard deviation of the additive white Gaussian noise. The output $\bm{Y}$ is a noisy version of $\bm{X}$. The joint distribution of the input and the output is denoted by $p(\bm{x},\bm{y})$ (instead of distinguishing between discrete, continuous or mixed random vectors, we always refer to $p$ as a distribution for convenience). Similarly, $p(\bm{y}\mid\bm{x})$ is the conditional distribution of the output conditional on the input and $p(\bm{y})$ is the output distribution. List decoding of a Variant I permutation code amounts to finding a set $\mathcal{L}=\{\bm{c}_{0},\bm{c}_{1},\dots,\bm{c}_{L-1}\}$ of codewords with the highest likelihoods so that

\[
p(\bm{y}\mid\bm{c}_{0})\geq p(\bm{y}\mid\bm{c}_{1})\geq\dots\geq p(\bm{y}\mid\bm{c}_{L-1}),
\]

where $L=\lvert\mathcal{L}\rvert$ is the list size.

To be able to find such a list of candidate codewords, we need to define a weight function for the maximum likelihood decoding of permutation codes. Because the noise is Gaussian, a codeword $\bm{x}$ has a higher likelihood than a codeword $\bm{x}^{\prime}$ if $\bm{x}$ is closer to the received word $\bm{y}$ than $\bm{x}^{\prime}$. That is,

\[
p(\bm{y}\mid\bm{x})\geq p(\bm{y}\mid\bm{x}^{\prime})\iff\sum_{i=1}^{n}(y_{i}-x_{i})^{2}\leq\sum_{i=1}^{n}(y_{i}-x_{i}^{\prime})^{2}.
\]

Using the constant-energy property of permutation codes, we can simplify further to get

\[
p(\bm{y}\mid\bm{x})\geq p(\bm{y}\mid\bm{x}^{\prime})\iff\sum_{i=1}^{n}y_{i}x_{i}\geq\sum_{i=1}^{n}y_{i}x_{i}^{\prime}.
\]

Therefore, list decoding with maximum likelihood reward is equivalent to list decoding with maximum correlation reward. For Variant I permutation codes, we can define the following assignment problem:

Problem 2 (Maximum Correlation Decoding).

Find a bijection $f:[n]\to[n]$ such that the reward function $\sum_{i=1}^{n}x_{i}y_{f(i)}$ is maximized.

This is an assignment problem with

\[
\mathcal{U}=\{x_{i}\mid i\in[n]\},\quad\mathcal{V}=\{y_{i}\mid i\in[n]\},
\]

and weight function

\[
w(x_{i},y_{j})=x_{i}y_{j}.
\]

The weight function is clearly multiplicative with $w_{\mathcal{U}}(x_{i})=x_{i}$ and $w_{\mathcal{V}}(y_{j})=y_{j}$. The list decoding problem, then, can be solved by a direct application of Murty’s algorithm as described in the Appendix.

The assignment problem defined for Variant I permutation codes is not suitable for Variant II permutation codes as in a Variant II code, each symbol can have a plus or minus sign. It is not possible to accommodate such arbitrary sign flips in the assignment problem defined for Variant I permutation codes. In what follows, we introduce maximum orbit-likelihood decoding of Variant II permutation codes that allows us to reformulate the decoding problem in the framework of an assignment problem.

III-B Orbit Decoding of Variant II Permutation Codes

A trivial subcode of a Variant II permutation code $\mathcal{C}$ corresponding to a codeword $\bm{x}\in\mathcal{C}$ is the subcode containing $\bm{x}$ and all codewords obtained by changing the sign of elements in $\bm{x}$ in all possible ways. We denote the group of $n$-dimensional diagonal unitary matrices (i.e., $n\times n$ diagonal integer matrices with $\pm 1$ diagonal entries) as $\mathcal{D}_{n}$. A trivial subcode corresponding to $\bm{x}$ is the orbit of $\bm{x}$ under the group action $\phi$ defined by

\begin{align*}
\phi:\mathcal{D}_{n}\times\mathcal{C}&\rightarrow\mathcal{C},\\
(\mathsf{A},\bm{x})&\mapsto\mathsf{A}\bm{x}.
\end{align*}

Similar to finding the codeword in a Variant I permutation code with the maximum likelihood, we can think about finding the orbit in a Variant II permutation code with the maximum orbit likelihood—a process that we call maximum orbit-likelihood decoding. Also, similar to a list of $L$ most likely codewords for a Variant I permutation code, we can think about finding a list of $L$ most likely orbits for a Variant II permutation code—a process that we refer to as orbit list-decoding. In order to use Murty’s algorithm, a weight function for the orbits needs to be defined. We denote the likelihood of the orbit of $\bm{x}\in\mathcal{C}$ by

\[
f_{\mathrm{o}}(\bm{y}\mid\bm{x})\coloneqq\frac{\sum_{\mathsf{A}\in\mathcal{D}_{n}}p(\bm{y}\mid\mathsf{A}\bm{x})\,p(\mathsf{A}\bm{x})}{\sum_{\mathsf{A}\in\mathcal{D}_{n}}p(\mathsf{A}\bm{x})}=\sum_{\mathsf{A}\in\mathcal{D}_{n}}\frac{p(\bm{y}\mid\mathsf{A}\bm{x})}{2^{n}}.
\]

The key result of this section is the following theorem.

Figure 2: Mutual information curves for five codes with $n=12$.
Theorem 1.

The likelihood of the orbit of $\bm{x}\in\mathcal{C}$ under the action of $\phi$ is given by

\[
f_{\mathrm{o}}(\bm{y}\mid\bm{x})=a(\mathtt{E}_{\bm{y}},\mathtt{E},\sigma,n)\prod_{i=1}^{n}\cosh\frac{y_{i}x_{i}}{\sigma^{2}},\tag{1}
\]

where $a(\mathtt{E}_{\bm{y}},\mathtt{E},\sigma,n)$ is a constant that depends on the energy $\mathtt{E}_{\bm{y}}$ of the received word, the energy $\mathtt{E}$ of each permutation codeword, the noise standard deviation $\sigma$ and the blocklength $n$.

Proof.
\begin{align*}
f_{\mathrm{o}}(\bm{y}\mid\bm{x})&=\sum_{\mathsf{A}\in\mathcal{D}_{n}}\frac{p(\bm{y}\mid\mathsf{A}\bm{x})}{2^{n}}\\
&=\frac{1}{\left(2\sqrt{2\pi}\sigma\right)^{n}}\sum_{s_{1}=\pm 1}\sum_{s_{2}=\pm 1}\dots\sum_{s_{n}=\pm 1}\prod_{i=1}^{n}e^{-\frac{(y_{i}-s_{i}x_{i})^{2}}{2\sigma^{2}}}\\
&=\frac{1}{\left(2\sqrt{2\pi}\sigma\right)^{n}}\prod_{i=1}^{n}\sum_{s_{i}=\pm 1}e^{-\frac{(y_{i}-s_{i}x_{i})^{2}}{2\sigma^{2}}}\\
&=\frac{1}{\left(2\sqrt{2\pi}\sigma\right)^{n}}\prod_{i=1}^{n}\left(e^{-\frac{(y_{i}-x_{i})^{2}}{2\sigma^{2}}}+e^{-\frac{(y_{i}+x_{i})^{2}}{2\sigma^{2}}}\right)\\
&=\frac{1}{\left(2\sqrt{2\pi}\sigma\right)^{n}}\prod_{i=1}^{n}e^{-\frac{y_{i}^{2}+x_{i}^{2}}{2\sigma^{2}}}\left(e^{\frac{y_{i}x_{i}}{\sigma^{2}}}+e^{-\frac{y_{i}x_{i}}{\sigma^{2}}}\right)\\
&=\frac{1}{\left(2\sqrt{2\pi}\sigma\right)^{n}}\prod_{i=1}^{n}2e^{-\frac{y_{i}^{2}+x_{i}^{2}}{2\sigma^{2}}}\cosh\frac{y_{i}x_{i}}{\sigma^{2}}\\
&=\frac{e^{-\frac{\mathtt{E}_{\bm{y}}+\mathtt{E}}{2\sigma^{2}}}}{\left(\sqrt{2\pi}\sigma\right)^{n}}\prod_{i=1}^{n}\cosh\frac{y_{i}x_{i}}{\sigma^{2}}.\qquad\blacksquare
\end{align*}

Theorem 1 suggests the weight function

\[
w(x_{i},y_{j})=\log\cosh\frac{y_{j}x_{i}}{\sigma^{2}}
\]

for Murty’s algorithm in orbit list-decoding of Variant II permutation codes. This weight function, however, is not multiplicative. Nevertheless, using a generalized version of the rearrangement inequality [16], one can show that this weight function has all the properties of a multiplicative weight function that we used to simplify Murty’s algorithm in the Appendix.

Mutual information of Variant II permutation codes can be computed using the orbit likelihoods by the method discussed in the Appendix. Note that the contribution of the likelihood of one orbit in $p(\bm{y})$ in (7) is equivalent to the contribution of all $2^{n}$ codewords in the orbit.
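The closed form in Theorem 1 is easy to verify numerically; the sketch below (ours, for illustration) compares the brute-force average over all $2^{n}$ sign flips with the $\cosh$ product for a tiny blocklength.

```python
# Check Theorem 1: brute-force orbit likelihood vs. the closed form
# a(E_y, E, sigma, n) * prod_i cosh(y_i x_i / sigma^2).
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
x = np.array([1.0, 1.0, 3.0, 5.0])      # a codeword; n = 4
sigma = 0.8
y = x + sigma * rng.standard_normal(x.size)
n = x.size

def p(y, c):  # AWGN likelihood p(y | c)
    return np.exp(-np.sum((y - c) ** 2) / (2 * sigma**2)) \
           / (np.sqrt(2 * np.pi) * sigma) ** n

brute = sum(p(y, np.array(s) * x)
            for s in product((1, -1), repeat=n)) / 2**n
a = np.exp(-(y @ y + x @ x) / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma) ** n
closed = a * np.prod(np.cosh(y * x / sigma**2))
print(np.isclose(brute, closed))        # True
```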

Example 1.

As an example, we find the mutual information vs. SNR for five codes with blocklength $n=12$ with the following initial vectors:

  • for Code 1, $(1,1,1,1,1,3,3,3,3,3,5,7)$,

  • for Code 2, $(1,1,1,1,1,3,3,3,5,5,5,7)$,

  • for Code 3, $(1,1,1,1,3,3,3,3,5,5,7,7)$,

  • for Code 4, $(1,1,1,1,3,3,5,5,5,5,7,7)$,

  • for Code 5, $(1,1,1,3,3,3,5,5,5,7,7,7)$.

The average symbol power of each of the codes is $10.33$, $13$, $15.66$, $18.33$ and $21$, respectively, and the initial vectors have been chosen to maximize the code size given these power constraints. Note that the constituent constellation is an $8$-PAM. The average power of this constituent constellation with uniform input distribution is $21$. In Fig. 2, we illustrate the mutual information curves for these five codes as well as the constituent constellation under uniform input distribution. Also shown is the mutual information of the constituent constellation with Maxwell–Boltzmann (MB) distributions, which are known to be entropy maximizers [12] and close to optimal, in terms of mutual information, for the AWGN channel [17]. The channel capacity is also depicted. It is obvious that none of the five codes shows any shaping gain, as all mutual information curves are below that of the $8$-PAM with uniform distribution.

Figure 3: Mutual information curves for five codes with $n=50$.
Example 2.

As another example, we find the mutual information vs. SNR for five codes with blocklength $n=50$. The initial vectors are chosen to give the largest Variant II codes with average powers of $7.08$, $10.6$, $14.12$, $17.64$ and $21.16$. The list size for Murty’s algorithm was set to $5^{6}$. The number of Monte Carlo samples for each SNR is $100{,}000$. However, no numerically significant difference was observed even with $10{,}000$ samples. In Fig. 3, we illustrate the mutual information curves for these five codes as well as the constituent constellation under uniform and MB input distributions. It is evident that Code 1, Code 2 and Code 3 show considerable shaping gains, as their curves cross that of the $8$-PAM with uniform distribution. For lower SNR values, one needs larger list sizes to get an accurate estimate of $p(\bm{y})$. As a result, the computation of curves for lower SNRs is slower. We did not generate the curves for lower SNRs as we were mostly interested in the crossing of the permutation codes’ mutual information curves and the mutual information curve of the $8$-PAM with uniform distribution.

Remark 1.

As shown in Fig. 3, a short blocklength of $n=50$ is enough to have considerable shaping gain over the AWGN channel. In practice, we are usually interested in using binary channel codes and, therefore, the so-called bit-metric decoding (BMD) of permutation codes, in which the mutual information induced via the labels of permutation codewords is considered. In the next section, we present BMD rates for Code 2 of Example 2 and its generalizations for various methods of soft demapping.

IV Soft Demapping of Permutation Codes

IV-A Variant I Encoder Function and its Inverse

Input: codeword integer index $q$, vector of repetitions $\bm{m}$, cardinality of the code $\mathit{size}$, code length $n$
Output: index vector $\bm{c}$
$\bm{c}\leftarrow$ initialize to a vector of length $n$
for $\mathit{index}\leftarrow 1$ to $n$ do
       $\mathit{letter}\leftarrow 0$
       $\mathit{size}_{\mathrm{temp}}\leftarrow 0$
       while $q\geq 0$ do
             $\mathit{letter}\leftarrow\mathit{letter}+1$
             $\mathit{size}_{\mathrm{temp}}\leftarrow\lfloor(\mathit{size}\cdot\bm{m}[\mathit{letter}])/(n-\mathit{index}+1)\rfloor$
             $q\leftarrow q-\mathit{size}_{\mathrm{temp}}$
       end while
       $\mathit{size}\leftarrow\mathit{size}_{\mathrm{temp}}$
       $q\leftarrow q+\mathit{size}_{\mathrm{temp}}$
       $\bm{c}[\mathit{index}]\leftarrow\mathit{letter}$
       $\bm{m}[\mathit{letter}]\leftarrow\bm{m}[\mathit{letter}]-1$
end for
return $\bm{c}$
Algorithm 1: An encoding algorithm based on [18]. The output of the algorithm is a vector of indices $\bm{c}$ of length $n$. Each index is an integer between $1$ and $u$ and represents a symbol in the vector $\bm{\mu}$. That is, the corresponding codeword is $\bm{\mu}[\bm{c}]$.

A Variant I permutation code $\mathcal{C}$ is uniquely identified by its initial vector $\bm{x}$. Note that when the length of the code is $n$, the number of distinct elements in $\bm{x}$ is at most $n$. That is,

\[
\lvert\{x_{j}\mid j\in[n]\}\rvert\leq n.
\]

We denote the number of unique elements of $\bm{x}$ by $u$. We define $\bm{\mu}$ as the vector whose elements are the distinct elements of $\bm{x}$ in ascending order. That is, if

\[
\bm{\mu}=(\mu_{1},\mu_{2},\mu_{3},\dots,\mu_{u}),
\]

then $\mu_{1}<\mu_{2}<\mu_{3}<\dots<\mu_{u}$. The number of times each $\mu_{i}$ occurs in $\bm{x}$ is denoted as $m_{i}$. This means that

\[
m_{1}+m_{2}+m_{3}+\dots+m_{u}=n.
\]

The size of this code is

\[
\lvert\mathcal{C}\rvert=\frac{n!}{m_{1}!\,m_{2}!\,m_{3}!\cdots m_{u}!}
\]

and its rate is $r=\frac{\log_{2}\lvert\mathcal{C}\rvert}{n}$.

The problem of encoding permutation codes has been considered by many [19, 4, 20, 18]. Here, we adapt the encoder provided in [18]. A pseudocode for this algorithm is provided in Algorithm 1. A pseudocode for the inverse encoder is given in Algorithm 2. The computational complexity of both algorithms is $O(nu)$ arithmetic operations.
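For concreteness, a direct Python transcription of Algorithms 1 and 2 follows (our sketch; letters are 1-indexed as in the pseudocode, and the round-trip assertion checks that the two maps are inverses on a small code).

```python
from math import factorial

def enc_I(q, m, size, n):
    """Algorithm 1: map index q in [0, size) to the index vector c of the
    q-th codeword in lexicographic order (letters 1..u)."""
    m, c = list(m), [0] * n
    for index in range(1, n + 1):
        letter, size_temp = 0, 0
        while q >= 0:
            letter += 1
            size_temp = size * m[letter - 1] // (n - index + 1)
            q -= size_temp
        size = size_temp
        q += size_temp
        c[index - 1] = letter
        m[letter - 1] -= 1
    return c

def enc_I_inv(c, m, size, n):
    """Algorithm 2: recover the lexicographic index q of index vector c."""
    m, q = list(m), 0
    for index in range(1, n + 1):
        for k in range(1, c[index - 1]):
            q += size * m[k - 1] // (n - index + 1)
        size = size * m[c[index - 1] - 1] // (n - index + 1)
        m[c[index - 1] - 1] -= 1
    return q

# Round trip on the code with initial vector (1,1,1,3,3): size = 5!/(3!2!) = 10.
m0, n0 = (3, 2), 5
size0 = factorial(n0) // (factorial(3) * factorial(2))
assert all(enc_I_inv(enc_I(q, m0, size0, n0), m0, size0, n0) == q
           for q in range(size0))
```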

IV-B Variant II Encoder Function and its Inverse

Input: index vector $\bm{c}$, vector of repetitions $\bm{m}$, cardinality of the code $\mathit{size}$, code length $n$
Output: codeword integer index $q$
$q\leftarrow 0$
for $\mathit{index}\leftarrow 1$ to $n$ do
       for $k\leftarrow 1$ to $\bm{c}[\mathit{index}]-1$ do
             $\mathit{size}_{i}\leftarrow\lfloor\mathit{size}\cdot\bm{m}[k]/(n-\mathit{index}+1)\rfloor$
             $q\leftarrow q+\mathit{size}_{i}$
       end for
       $\mathit{size}\leftarrow\lfloor\mathit{size}\cdot\bm{m}[\bm{c}[\mathit{index}]]/(n-\mathit{index}+1)\rfloor$
       $\bm{m}[\bm{c}[\mathit{index}]]\leftarrow\bm{m}[\bm{c}[\mathit{index}]]-1$
end for
return $q$
Algorithm 2: An inverse for the encoder of Algorithm 1 that produces the index in the range $[0,\lvert\mathcal{C}\rvert-1]$ of the input permutation in lexicographic order.

A Variant II permutation code $\mathcal{C}$, similar to a Variant I permutation code, is uniquely identified by its initial vector $\bm{x}$. Note that all elements of the initial vector $\bm{x}$ are positive. We use the same notation $u$ to denote the number of distinct elements of $\bm{x}$. Similarly, the vector $\bm{\mu}$ is defined as the vector whose elements are the distinct elements of $\bm{x}$ in ascending order. The number of times each $\mu_{i}$ occurs in $\bm{x}$ is denoted as $m_{i}$. The rate of this code is

\[
r=1+\frac{1}{n}\log_{2}\frac{n!}{m_{1}!\,m_{2}!\,m_{3}!\cdots m_{u}!}.
\]

Let $k=\lfloor nr\rfloor$. Encoding an integer in the range $0$ to $2^{k}-1$ can be done using an encoder for its constituent Variant I permutation code. The $k-n$ least significant bits are mapped to a codeword $\bm{c}$ in the constituent Variant I permutation code using an encoder as in Section IV-A. The signs of the symbols in the resulting codeword $\bm{c}$ are chosen using the $n$ most significant bits. If the encoder for the constituent Variant I permutation code is $\operatorname{enc}_{\mathrm{I}}$, the encoder for the Variant II permutation code will be the function

\[
\operatorname{enc}_{\mathrm{II}}:\{0,1\}^{k}\to\mathcal{C}
\]

such that

\[
\operatorname{enc}_{\mathrm{II}}(b_{1},b_{2},b_{3},\dots,b_{k})=\operatorname{diag}(1-2b_{1},1-2b_{2},\dots,1-2b_{n})\operatorname{enc}_{\mathrm{I}}(q),
\]

where the base-$2$ representation of $q$ is $b_{n+1}b_{n+2}\dots b_{k}$. Similarly, an inverse encoder can be defined using the inverse encoder for the constituent Variant I permutation code.
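Reusing the enc_I sketch given after Section IV-A, the Variant II encoder can be phrased in a few lines (ours, with hypothetical helper names; mu holds the distinct amplitudes in ascending order):

```python
# Sketch of enc_II: the n most significant bits set the signs, the remaining
# k - n bits index the constituent Variant I codeword.
from math import floor, log2

def enc_II(bits, mu, m0, size0, n):
    assert len(bits) == n + floor(log2(size0))           # k = floor(nr)
    q = int("".join(map(str, bits[n:])), 2)              # k - n LSBs
    amps = [mu[letter - 1] for letter in enc_I(q, m0, size0, n)]
    return [(1 - 2 * b) * a for b, a in zip(bits[:n], amps)]

# Initial vector (1,1,1,3,3): k = 5 + floor(log2 10) = 8 bits per codeword.
print(enc_II([0, 1, 0, 0, 1, 1, 0, 1], (1, 3), (3, 2), 10, 5))
# -> [1, -3, 3, 1, -1]
```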

IV-C Permutation Encoding for PAS

We are now ready to explicitly describe the first block in the PAS system of Fig. 1. Assume we want to use a Variant I permutation code with initial vector $\bm{x}$ and blocklength $n$ whose rate is $r$. The first step for us is to expurgate the corresponding Variant I code so that its size is a power of $2$. Let $k_{\mathrm{a}}=\lfloor nr\rfloor$. To identify a sequence of amplitudes of length $n$, i.e., a Variant I codeword of blocklength $n$, one can use the encoder defined in Section IV-A. The input is an integer $i$ in the range $[0,2^{k_{\mathrm{a}}})$. Therefore, each codeword in this Variant I permutation code can be indexed by a binary vector of length $k_{\mathrm{a}}$. This way, if $nr$ is not an integer, the Variant I permutation code is expurgated so that only the first $2^{k_{\mathrm{a}}}$ codewords, based on the lexicographical ordering, are utilized. This can potentially introduce a considerable correlation between the elements of codewords and breaks the geometric uniformity of the permutation code [21].

In order to properly randomize, so that each codeword is surrounded by an approximately equal number of codewords at any given distance, we first randomly spread the set of integers in the range $[0,2^{k_{\mathrm{a}}})$ over the integers in the range $[0,2^{nr})$. One way to achieve this is to take the sampled integer $i$ from $[0,2^{k_{\mathrm{a}}})$, multiply it by a randomly chosen integer $e$ coprime with $2^{nr}$, and add another randomly chosen integer $d$ in $[0,2^{k_{\mathrm{a}}})$ modulo $2^{nr}$. The coprime factor $e$ and the dither $d$ can be generated based on some shared randomness so that the receiver can also reproduce them synchronously. Consequently, the first block in Fig. 1 takes in an integer $i$ from $[0,2^{k_{\mathrm{a}}})$ at the input and produces a Variant I codeword $\bm{c}$ by

\[
\bm{c}=\operatorname{enc}_{\mathrm{I}}(ei+d\pmod{2^{nr}}).
\]

The codeword $\bm{c}$ serves as the set of amplitudes in PAS and is labeled based on the Gray labeling of the constituent PAM constellation. The labels are then encoded using a systematic soft-decodable channel code. The resulting parity bits are used to set the signs of each of the amplitudes and, as a result, a Variant II permutation codeword is produced.

Example 3.

To better understand the significance of randomization, consider the case of using a permutation code $\mathcal{C}$ and choosing the first $2^{\lfloor\log_{2}\lvert\mathcal{C}\rvert\rfloor}$ codewords according to the lexicographical ordering of the codewords. For example, assume that the initial vector is

\[
\bm{v}=(1,1,1,1,1,3,3,3,5,5,5,7).
\]

With this choice, $\lvert\mathcal{C}\rvert\approx 2^{16.76}$. If we only consider the first $2^{16}$ codewords according to the lexicographical ordering, the last codeword that we use is

\[
\bm{c}=(3,5,1,3,1,1,1,3,5,1,5,7).
\]

Note, for instance, that in none of the codewords is the first symbol $5$ or $7$. Also note that the second symbol is never $7$. As is evident, the correlation between symbols is very strong. With the proposed randomization, such strong correlations are avoided to a great extent.

IV-D Soft-in Soft-out Demapping

The input to the first block at the receiver in Fig. 1 contains noisy versions of Variant II permutation codewords. Assume that the received word is $\bm{y}$ and that we want to estimate the LLR for the $i^{\mathrm{th}}$ bit, $\mathrm{LLR}_{i}$, which is a part of the label of the $j^{\mathrm{th}}$ symbol, $c_{j}$, of the Variant II codeword $\bm{c}$. Four different estimators are discussed in what follows.

Exact Method: To get the exact LLR, one should use

\[
\mathrm{LLR}_{i}=\log\frac{\sum_{\bm{c},b_{i}=0}p(\bm{y}\mid\bm{c})}{\sum_{\bm{c},b_{i}=1}p(\bm{y}\mid\bm{c})},\tag{2}
\]

where, in the numerator, $\bm{c}$ runs over all Variant II codewords whose $i^{\mathrm{th}}$ bit $b_{i}$ is $0$ and that satisfy

\[
e^{-1}(\operatorname{enc}_{\mathrm{I}}^{-1}(\lvert c_{1}\rvert,\lvert c_{2}\rvert,\dots,\lvert c_{n}\rvert)-d)\pmod{2^{nr}}\in[0,2^{k_{\mathrm{a}}}).
\]

The condition for codewords $\bm{c}$ in the denominator is similar, except that the $i^{\mathrm{th}}$ bit $b_{i}$ is $1$. The total number of terms in both summations is $2^{n+k_{\mathrm{a}}}$, which is at least half of the size of the Variant II code $\mathcal{C}$ and is typically very large. Note that there are $n\log_{2}\lvert\mathcal{M}\rvert$ label bits in each codeword and, as a result, the computational complexity of this method is $O(n\lvert\mathcal{C}\rvert\log\lvert\mathcal{M}\rvert)$ operations per permutation codeword. The exact method, therefore, is only useful for very short permutation codes.

Symbol-by-Symbol: In the symbol-by-symbol method of [3], the $i^{\mathrm{th}}$ LLR is estimated using

\[
\mathrm{LLR}_{i}=\log\frac{\sum_{s,b_{i}=0}m_{s}\,p(y_{j}\mid s)}{\sum_{s,b_{i}=1}m_{s}\,p(y_{j}\mid s)},\tag{3}
\]

where $s$ runs over all symbols in the constituent constellation and, with slight abuse of notation, $m_{s}$ is the number of occurrences of $\lvert s\rvert$ in the initial vector. From the point of view of the receiver, any word from $\mathcal{M}^{n}$ is a possible codeword and in the computation of the LLRs all words in $\mathcal{M}^{n}$ can contribute. This allows for a computationally efficient method of estimating LLRs. For each LLR, computation of (3) requires $O(\lvert\mathcal{M}\rvert)$ operations. Therefore, the computational complexity of the symbol-by-symbol method is $O(n\lvert\mathcal{M}\rvert\log\lvert\mathcal{M}\rvert)$ operations per permutation codeword. However, this method is inaccurate at short blocklengths [10, 11].
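A minimal sketch of (3) for a single received sample is given below (ours; the 1-bit amplitude labeling over $\{\pm 1,\pm 3\}$ is a hypothetical choice for illustration):

```python
import numpy as np

def llr_sbs(y_j, symbols, m_s, bit_of, sigma):
    """Symbol-by-symbol LLR (3) of one label bit from one noisy symbol y_j.
    m_s[|s|] counts occurrences of |s| in the initial vector (the prior);
    bit_of[s] is the value of this bit in the label of symbol s."""
    lik = {s: m_s[abs(s)] * np.exp(-(y_j - s) ** 2 / (2 * sigma**2))
           for s in symbols}
    num = sum(l for s, l in lik.items() if bit_of[s] == 0)
    den = sum(l for s, l in lik.items() if bit_of[s] == 1)
    return np.log(num / den)

symbols = (-3, -1, 1, 3)
m_s = {1: 7, 3: 1}                                       # initial vector (1,...,1,3)
bit_of = {s: 0 if abs(s) == 1 else 1 for s in symbols}   # amplitude bit
print(llr_sbs(0.8, symbols, m_s, bit_of, sigma=1.0))
```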

Orbit Decoding with Frozen Symbols: One key contribution of this paper is this new and simple method of soft demapping using orbit decoding of permutation codes. If the $i^{\mathrm{th}}$ bit is an amplitude bit (i.e., it is not a sign bit), to estimate $\mathrm{LLR}_{i}$ we use

\[
\mathrm{LLR}_{i}=\log\frac{\sum_{\bm{c},b_{i}=0}f_{\mathrm{o}}(\bm{y}\mid\bm{c})}{\sum_{\bm{c},b_{i}=1}f_{\mathrm{o}}(\bm{y}\mid\bm{c})},\tag{4}
\]

where $\bm{c}$ runs over all Variant I codewords in the following list: for any amplitude $s\in\{\mu_{1},\mu_{2},\dots,\mu_{u}\}$, we only include on the list the codeword $\bm{c}$ with the most likely orbit such that $c_{j}=s$. That is, the most likely orbits for every element of the codeword frozen to any possible amplitude contribute to the calculation of $\mathrm{LLR}_{i}$.

If the $i^{\mathrm{th}}$ bit is a sign bit, we only take into account the contribution of the half of the codewords in each orbit that have a positive sign in the numerator, and the other half in the denominator. Using a technique similar to the proof of Theorem 1, one can show that for a sign bit

\[
\mathrm{LLR}_{i}=\log\frac{\sum_{\bm{c}}e^{\frac{c_{j}y_{j}}{\sigma^{2}}}\operatorname{sech}\frac{c_{j}y_{j}}{\sigma^{2}}f_{\mathrm{o}}(\bm{y}\mid\bm{c})}{\sum_{\bm{c}}e^{-\frac{c_{j}y_{j}}{\sigma^{2}}}\operatorname{sech}\frac{c_{j}y_{j}}{\sigma^{2}}f_{\mathrm{o}}(\bm{y}\mid\bm{c})}.\tag{5}
\]

To find the corresponding orbit of each codeword on the list, the assignment problem needs to be solved with the frozen symbol as an inclusion constraint (see the Appendix). Once the received word is sorted by the magnitude of its elements, the orbit likelihood of each item on the list can be found using $O(n)$ operations (see Theorem 1). There are a total of $O(n\lvert\mathcal{M}\rvert)$ orbits to consider. Therefore, the orbit likelihoods can be computed with $O(n^{2}\lvert\mathcal{M}\rvert)$ complexity. Once the orbit likelihoods are computed, each of the $n\log_{2}\lvert\mathcal{M}\rvert$ LLRs can be found with $O(\lvert\mathcal{M}\rvert)$ complexity. Therefore, the computational complexity of this method is $O(n^{2}\lvert\mathcal{M}\rvert+n\lvert\mathcal{M}\rvert\log\lvert\mathcal{M}\rvert)$ operations per permutation codeword.

Example 4.

Consider a permutation code with the initial vector

\[
(1,1,1,1,1,1,1,3).
\]

The corresponding Variant I code has size $2^{3}$, and since $nr=3$ is an integer, there is no need for random spreading. Assume that we send the codeword

\[
\bm{x}=(1,1,1,1,1,1,1,3)
\]

and the noise vector, sampled from a standard normal distribution and rounded to one decimal place, is

\[
\bm{z}=(2.1,0.2,0.1,1.5,0.7,1.6,-1.9,0.2).
\]

Assume that the constituent constellation is labeled such that the amplitude of $1$ is labeled with bit $0$ and the amplitude of $3$ is labeled with bit $1$. The LLR for the first amplitude bit using the exact method is $0.69$. The estimate for this LLR using the symbol-by-symbol method is $-0.25$. The estimate for the LLR using orbit decoding is $0.20$. Note that the sign of the LLR using the symbol-by-symbol method is more in favor of an amplitude of $3$, while the other two values are in favor of an amplitude of $1$ in the first position. This shows that in this example the quality of the LLR estimated using orbit decoding with frozen symbols is superior to the estimate obtained using the symbol-by-symbol method.

Soft Decoding Using the BCJR Algorithm: For typical permutation code parameters of interest such as Code 2 of Example 2, the natural trellis representation of permutation codes has a prohibitively large number of states and edges [22, 10]. However, the trellis representation of the more general spherical codes that we study in Section V, which include permutation codes as subcodes, is much more tractable. Similar to the symbol-by-symbol method, in which the receiver presumes a codebook encompassing the used permutation code, we can consider the larger, more general spherical codes and estimate the likelihood of each symbol in the codeword by running the Bahl–Cocke–Jelinek–Raviv (BCJR) algorithm [23].

IV-E Numerical Evaluation

Consider the abstract channel for which the channel input is the label of the transmitted permutation codeword $\bm{B}=(B_{1},B_{2},\dots,B_{k_{\mathrm{a}}+n})$ and its output is the noisy permutation codeword $\bm{Y}=(Y_{1},Y_{2},\dots,Y_{n})$. The bit-metric decoding (BMD) rate [7] (here, $[x]^{+}$ is the same as $\max\{x,0\}$)

\[
\left[H(\bm{B})-\sum_{i}H(B_{i}\mid Y)\right]^{+}
\]

is an achievable rate [24] and can be used to compare different methods of soft demapping. This is particularly useful as it provides an achievable rate when a binary soft-decodable channel code is intended to be used, without specifically restricting the system to a particular implementation of the channel code. The BMD rates of permutation codes, as well as of the generalizations discussed in the next section, for different methods of estimating LLRs, are shown in Fig. 4.

Figure 4: BMD rates for various spherical codes with $n=50$ and $\mathtt{E}=530$ with an $8$-PAM constituent constellation are shown.

To better compare the symbol-by-symbol method with our proposed method using orbit decoding as well as decoding using the BCJR algorithm, we use a permutation code in a PAS scheme. We compare the permutation block error rate (BLER) for Code 2 from Example 2 for these three estimation methods. By each block, we mean a permutation codeword (or, in general, a spherical codeword) and BLER is the ratio of blocks received in error after the channel decoder to the total number of blocks sent. For the channel code, we use the $(10\,860,8\,448)$ low-density parity-check (LDPC) code from the 5G NR standard [25]. Each LDPC code frame contains $67$ permutation codewords. Among the sign bits, $14$ are used for systematic information bits [3]. In Fig. 5, we show the BLER for these three LLR estimation methods. Note that we think of the case of using a permutation code with symbol-by-symbol decoding [3] as the baseline for comparison with other spherical codes and decoding methods that we study. For instance, it is clear that at BLER $=10^{-3}$, about $0.3$ dB improvement is achieved by using orbit decoding with frozen symbols compared with the symbol-by-symbol method. Also, about $0.2$ dB improvement is achieved compared with the BCJR method. In Fig. 5, similar results are shown for more general spherical codes as well. The structure of these codes will be explained in Section V.

V Structure of Constant-Energy Shells of PAM Constellations

In this section, we explore a generalization of permutation codes that consists of codewords from a union of permutation codes, each corresponding to a distinct initial vector or “type”. If each of these initial vectors has the same energy, the resulting code is termed a “spherical code”. In comparison to permutation codes, this construction can deliver an increase in code rate. At the same time, the codes we now describe are also spherical codes and so, as mentioned in the introduction, can reduce the XPM in the optical fiber channel. We are particularly interested in constructions of spherical codes that abide by the constraints enforced by standard (PAM) modulation formats. Given an underlying constellation and a desired codeword energy (squared sphere radius), the largest code that can be designed is the intersection of a sphere of the given radius with the $n$-fold Cartesian product of the underlying constellation.

Larger spherical codes, while often superior in rate, can have high complexity for storage, encoding, and decoding. For this reason, for a given PAM constellation, we also investigate the structure of these codes so as to describe intermediate subcodes that can increase code performance without drastically increasing complexity.

Figure 5: Block error rates for various spherical codes with $n=50$ and $\mathtt{E}=530$ with an $8$-PAM constituent constellation. The $(10\,860,8\,448)$ LDPC code from the 5G NR standard is used in the PAS architecture.

V-A Complete Shell Codes

We design codes with blocklength $n$ that are shells of the product of $n$ $2p$-PAM constellations, constrained to have total energy $\mathtt{E}$ in each block. This is formalized as follows.

Definition 1.

Given a symbol set $\mathcal{M}$, a blocklength-$n$ code consisting of all codewords of the form $(x_{1},x_{2},\dots,x_{n})$, where the $x_{i}$ satisfy $x_{i}\in\mathcal{M}$ and $\sum_{i=1}^{n}x_{i}^{2}=\mathtt{E}$, is referred to as a complete shell code over $\mathcal{M}$.

We refer to the complete shell code of length $n$ and squared radius $\mathtt{E}$ over the $2p$-PAM constellation as the $(n,\mathtt{E},p)$ code. Complete shell codes automatically deliver a shaping gain, as the symmetry of the construction induces an approximate Boltzmann distribution over the symbols. That is, if codewords are selected uniformly, the probability distribution over the symbols $x_{i}$ is $p(x_{i})\propto\exp[\lambda x_{i}^{2}]$, where $\lambda$ is a function of the code parameters $n$ and $\mathtt{E}$. Note that this is a discretized version of a Gaussian distribution.

Complete shell codes can be decomposed into a union of permutation subcodes as $\mathcal{C}=\cup_{i=1}^{t(\mathcal{C})}\mathcal{C}^{i}$, where each $\mathcal{C}^{i}$ is a permutation code, and the number of distinct permutation subcodes $t(\mathcal{C})$ is a function of the code parameters. We will also write $t(\mathcal{C})=t(n,\mathtt{E},p)$ when $\mathcal{C}$ is the $(n,\mathtt{E},p)$ code. We refer to each subcode as a type class, following the terminology of, e.g., [26, Ch. 11.1]. Describing the code in this union form is useful when describing and analyzing the code performance.

We call the initial vector in a type class the type class representative. For instance, the $(8,32,4)$ code has two type classes, with representatives $(1,1,1,1,1,3,3,3)$ and $(1,1,1,1,1,1,1,5)$. The first class contains $14\,336\approx 2^{13.8}$ codewords, and the second $2048=2^{11}$. In general, the number of type classes $t(n,\mathtt{E},p)$ of a given energy $\mathtt{E}$ is a complicated function, which we usually compute numerically. A loose bound for the maximum value of $t(n,\mathtt{E},p)$ when varying $n$ and $p$ is $O(n^{p})$. This becomes intractable quickly for larger values of $p$.
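The type classes of a small code are easy to enumerate directly; the sketch below (ours) recovers the two classes of the $(8,32,4)$ code named above, with Variant II sizes $n!/\prod_{i}m_{i}!$ times $2^{n}$ sign patterns.

```python
# Enumerate type classes of the (n, E, p) code: multisets of n amplitudes
# from the 2p-PAM amplitude set {1, 3, ..., 2p-1} with squared norm E.
from itertools import combinations_with_replacement
from math import factorial, prod
from collections import Counter

n, E, p = 8, 32, 4
amps = [2 * i + 1 for i in range(p)]                  # 1, 3, 5, 7
for rep in combinations_with_replacement(amps, n):
    if sum(a * a for a in rep) == E:
        m = Counter(rep)
        size = factorial(n) // prod(factorial(v) for v in m.values()) * 2**n
        print(rep, size)
# -> (1, 1, 1, 1, 1, 3, 3, 3) 14336 and (1, 1, 1, 1, 1, 1, 1, 5) 2048
```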

V-B Encoding and Decoding Complete Codes

V-B1 Encoding

It is natural to extend the encoding and decoding methods described in Section IV to the complete shell codes. For encoding, if the $t$ type-class subcodes are $\mathcal{C}^{1},\mathcal{C}^{2},\dots,\mathcal{C}^{t}$, we can assign the first $\lvert\mathcal{C}^{1}\rvert$ codeword indices to the first type class, the next $\lvert\mathcal{C}^{2}\rvert$ to the second, and so forth. This is formalized in Algorithm 3. The index of the corresponding permutation subcode can be found using binary search on the vector of cumulative sums $\mathit{CS}=(\lvert\mathcal{C}^{1}\rvert,\lvert\mathcal{C}^{1}\rvert+\lvert\mathcal{C}^{2}\rvert,\dots,\sum_{i=1}^{t}\lvert\mathcal{C}^{i}\rvert)$. Since the complexity of Algorithm 1 is $O(np)$ (as $u=p$ for the PAM case), our new algorithm has complexity $O(np+t(n,\mathtt{E},p))\approx O(n^{p})$. The randomization method of Section IV-C can be used here as well. The index input to this algorithm is obtained similarly to Section IV-C, with the exception that the code rate $r$ is the rate of the complete shell code.

Input: codeword integer index $q$, vectors of repetitions $\bm{m}^{1},\bm{m}^{2},\dots,\bm{m}^{t}$, cumulative cardinalities of the subcodes $\mathit{CS}$, code length $n$
Output: index vector $\bm{c}$, type class index $k$
Find the index $k$ such that $\mathit{CS}[k-1]<q\leq\mathit{CS}[k]$.
$\mathit{size}\leftarrow\mathit{CS}[k]-\mathit{CS}[k-1]$
$\bm{c}\leftarrow\operatorname{enc}_{\mathrm{I}}(q-\mathit{CS}[k-1];\bm{m}^{k},\mathit{size},n)$
return $\bm{c},k$
Algorithm 3: A generalization of Algorithm 1 to the case of the shell code. The output is a vector of indices $\bm{c}$ of length $n$ and a type class index $k$ between $1$ and $t$. That is, the corresponding codeword is $\bm{\mu}^{k}[\bm{c}]$.

V-B2 Orbit decoding

For decoding, we again consider our code as a union of permutation codes (type classes). When forming the list of most likely orbits for each frozen symbol, we find the most likely orbit for each type class and pick the one with the highest likelihood to put on our list. Once the received word is sorted by the magnitude of its elements, the orbit likelihood of each item on the list can be found using $O(n)$ operations. There are a total of $O(np\,t(n,\mathtt{E},p))$ orbits to consider. Once the list of orbits with the highest likelihoods is formed, each of the $O(n\log_{2}p)$ LLRs can be computed with $O(p)$ complexity. Therefore, the total computational complexity per spherical codeword is $O(n^{2}p\,t(n,\mathtt{E},p)+np\log p)\approx O(pn^{p})$.

V-B3 BCJR Decoding

A natural alternative decoding procedure to consider in this context is BCJR decoding, as described in [23]. The complete shell code has a simple trellis structure, similar to that discussed in [27]. An example trellis for the (8,32,3) code is shown in Fig. 6. Here, each path from left to right through the trellis represents an unsigned codeword, with states labeled by the energy accumulated up to that point in the codeword. It can be shown that at depths between 3 and n-3, the trellis has at most (\mathtt{E}-n)/8 states. Since each state has at most p outgoing edges, the number of edges in the trellis is O(n(\mathtt{E}-n)p), which is the complexity of the algorithm.

Refer to caption
Figure 6: The trellis for the (8,32,3) code.
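A hypothetical construction of this trellis, included only to make the state space concrete, is sketched below: a state at depth d is the energy accumulated by the first d amplitudes, and a backward pass prunes states from which the target energy \mathtt{E} cannot be reached.

def build_trellis(n, E, p):
    # fwd[d]: energies reachable by d amplitudes from {1, 3, ..., 2p-1};
    # keep[d]: the subset from which total energy E is still reachable.
    amps = [(2 * i - 1) ** 2 for i in range(1, p + 1)]
    fwd = [set() for _ in range(n + 1)]
    fwd[0].add(0)
    for d in range(n):
        for s in fwd[d]:
            for a in amps:
                if s + a <= E:
                    fwd[d + 1].add(s + a)
    keep = [set() for _ in range(n + 1)]
    keep[n] = fwd[n] & {E}
    for d in range(n - 1, -1, -1):
        keep[d] = {s for s in fwd[d]
                   if any(s + a in keep[d + 1] for a in amps)}
    return keep

For the (8,32,3) code, build_trellis(8, 32, 3) should yield, up to drawing conventions, the state sets of the trellis in Fig. 6, with unsigned codewords as the left-to-right paths.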

V-B4 Results

If we accept our O(n^{p-2}) heuristic for t(n,\mathtt{E},p), we see that orbit decoding has a much worse complexity than BCJR decoding, so BCJR decoding is preferable. Fig. 5 shows the results of decoding the complete (50,530,4) shell code, using both symbol-by-symbol decoding and BCJR decoding. Results are presented in terms of the rate-normalized SNR, which is defined as follows. The channel coding theorem for the real AWGN channel indicates that a coding scheme with rate \rho can achieve an arbitrarily small probability of error if and only if

\rho\leq\frac{1}{2}\log_{2}(1+\mathrm{SNR}).

Equivalently, a coding scheme with rate \rho can achieve an arbitrarily small probability of error if and only if

\frac{\mathrm{SNR}}{2^{2\rho}-1}\geq 1.

The quantity \frac{\mathrm{SNR}}{2^{2\rho}-1} is defined as the rate-normalized SNR, \mathrm{SNR}_{\mathrm{norm}} [28].
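In decibel form this normalization is a one-line computation; a capacity-achieving scheme operates at \mathrm{SNR}_{\mathrm{norm}}=1 (0 dB), so the gap to 0 dB measures the distance from capacity. A small helper, written here only for concreteness:

import math

def snr_norm_db(snr_db, rho):
    # Rate-normalized SNR in dB for rate rho (bits per real dimension).
    return snr_db - 10.0 * math.log10(2.0 ** (2.0 * rho) - 1.0)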

As Fig. 5 shows, symbol-by-symbol decoding is suboptimal, and the complete code outperforms the permutation code.

V-C Partial Shell Spherical Codes

For large n, orbit decoding is impractical, and even BCJR decoding becomes unwieldy as \mathtt{E} grows. Here we define a new family of codes with competitive rate and shaping properties and vastly improved encoding and decoding complexities. In particular, keeping only a small number of the largest type classes in a complete shell code is often sufficient to create a high-performing code. We formalize this as follows.

Definition 2.

Take a complete shell code \mathcal{C}=\cup_{i=1}^{t(\mathcal{C})}\mathcal{C}^{i}, where each \mathcal{C}^{i} is a distinct type class with symbols drawn from a modulation format \mathcal{M}, and \lvert\mathcal{C}^{1}\rvert\geq\lvert\mathcal{C}^{2}\rvert\geq\dots\geq\lvert\mathcal{C}^{t}\rvert. The subcode of \mathcal{C} defined by \mathcal{C}^{1}\cup\mathcal{C}^{2}\cup\dots\cup\mathcal{C}^{k} is referred to as the maximal k-class partial shell code over \mathcal{M}.

The maximal k-class partial shell code consists of the k largest type classes. In many cases, relatively small values of k result in codes whose rates are only marginally smaller than that of the complete shell code, with the benefit that the encoding and decoding algorithms now involve a constant number of type classes; i.e., we have replaced the function t(\mathcal{C}) with the constant k. Note that k=1 corresponds to a permutation code.
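Building on the earlier type-class sketch, selecting a maximal k-class partial shell code amounts to sorting the type classes by size; the lines below are again illustrative only and assume the type_classes and class_size helpers defined earlier.

def partial_shell(n, E, p, k):
    # The k largest type classes of the (n, E, p) complete shell code.
    classes = type_classes(n, E, p)
    classes.sort(key=class_size, reverse=True)
    return classes[:k]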

Example 5.

Consider the (50,530,4), (25,305,4) and (100,996,4) codes. While these codes have different blocklengths, \mathtt{E} was chosen to give similar numbers of bits per dimension. In each case, we consider the maximal k-class subcodes for different values of k, comparing the rate of the partial shell code to the rate of the complete code. As Fig. 7 shows, the fractional rate loss is very small even when k is small. These codes have 34, 113, and 369 type classes, respectively, but values of k below 10 already achieve 99% of the rate.

Refer to caption
Figure 7: A comparison of sizes of subcodes constructed from the largest type classes in the indicated codes.

The acceptable reduction in rate will depend on the use case for the code, but we already see the complexity benefits of using partial shell codes instead of complete codes. Conversely, if we want to build a partial shell code from a union of permutation codes, the rate increase obtained by adding each subsequent code to the union decreases quickly.

As previously noted, one of the appealing features of complete codes over PAM constellations is that they induce an approximately Boltzmann marginal distribution. Partial shell codes also have this property. This is a result of the fact that they are constructed from the largest type classes available, whose symbol multiplicities m_{i} are approximately proportional to \exp[\lambda(2i-1)^{2}] (with \sum_{i}m_{i}=n), a Boltzmann distribution, where the parameter \lambda depends on \mathtt{E}. This result (more specifically, its dual) is shown in [29].

The exact Boltzmann distribution does not in general give an integer solution, but the non-integer solution is often close to the best integer solutions. In fact, since we are often interested not only in the largest type class but in the k largest type classes, an alternative method for constructing k-class partial shell codes is to start with the non-integer Boltzmann solution and search for several nearby integer solutions, e.g., by iterating over all combinations of integer values within some neighborhood of each non-integer entry and keeping the k best resulting vectors.
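A sketch of this construction follows: bisection on \lambda (the mean energy is increasing in \lambda) yields the real-valued Boltzmann solution, and a brute-force scan of integer vectors near it recovers the largest type classes. The \pm 2 search window is a heuristic choice made here for illustration; class_size is the helper from the earlier sketch.

import numpy as np
from itertools import product

def boltzmann_multiplicities(n, E, p):
    # Find lam so that m_i = n * exp(lam * a_i) / Z, with a_i = (2i-1)^2,
    # meets the energy constraint sum_i m_i * a_i = E.
    a = np.array([(2 * i - 1) ** 2 for i in range(1, p + 1)], dtype=float)
    def mean_energy(lam):
        w = np.exp(lam * a)
        return n * float(w @ a) / float(w.sum())
    lo, hi = -1.0, 1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mean_energy(mid) < E else (lo, mid)
    w = np.exp(lo * a)
    return n * w / w.sum(), lo

def neighbor_classes(m_real, n, E, p, k):
    # Scan integer vectors within +/-2 of the rounded real solution and
    # keep the k largest valid type classes.
    a = [(2 * i - 1) ** 2 for i in range(1, p + 1)]
    base = [int(round(m)) for m in m_real]
    cands = set()
    for delta in product(range(-2, 3), repeat=p):
        m = [b + d for b, d in zip(base, delta)]
        if (min(m) >= 0 and sum(m) == n
                and sum(mi * ai for mi, ai in zip(m, a)) == E):
            cands.add(tuple(m))
    return sorted(cands, key=class_size, reverse=True)[:k]

For the (50,530,4) code, boltzmann_multiplicities returns approximately the solution (6) below, and neighbor_classes with k=3 recovers the three multiplicity vectors of Table I.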

Example 6.

As an example, for the (n,\mathtt{E},p)=(50,530,4) code, whose largest type class is Code 2 in Example 2, we have the following exact Boltzmann distribution:

(m_{1},m_{2},m_{3},m_{4})=(22.38,16.12,8.37,3.13) (6)

with \lambda=-0.04. The largest type classes in the integer-constrained case are given in Table I. The distributions are similar to the non-integer solution and can be found by searching for integer solutions in its neighborhood. Each type class in this table contains approximately 2^{79} codewords, not counting sign flips (i.e., in the Variant I version). The balance of the 113 type classes in the complete shell code are mostly much smaller and contribute little, as seen in Example 5.

(m_{1},m_{2},m_{3},m_{4})    Bits
(24, 15, 7, 4)    79.87
(21, 18, 8, 3)    79.40
(23, 15, 9, 3)    78.45
TABLE I: The largest type classes for the (50,530,4) code.

V-D Encoding and Decoding Partial Codes

The primary benefit of partial codes over complete codes is their improved decoding complexity. Encoding can be done using Algorithm 3 with k type classes.

V-D1 Encoding

The encoder is the same as in Algorithm 3, with the one change that t(n,\mathtt{E},p) is replaced with the constant k. This now has complexity O(np), which is much more practical. The randomization method of Section IV-C can be used here as well.

V-D2 Type-class decoding

The orbit decoding algorithm is also the same as in the complete case. Again, t(n,\mathtt{E},p) is replaced with k, which gives a complexity of O(np(n+\log p)).

V-D3 BCJR decoding

Finally, we can still use BCJR decoding in the partial shell case. The trellis for the partial code is complicated, but we can decode more efficiently by using the trellis for the complete code and reporting an error if the output is not a codeword. This still has complexity O(np(\mathtt{E}-n)).

Comparing complexities (dividing both by the common factor np), we see that orbit decoding is now preferable when n+\log p\in o(\mathtt{E}-n), which happens as \mathtt{E} becomes large for fixed n and p.

V-E Results

Fig. 5 also shows the SNR values for the maximal 4-class partial shell code; they lie between the results for the permutation and complete codes. It is important to note that the complete shell code with symbol-by-symbol decoding performs very well relative to orbit and BCJR decoding, losing less than 0.15 dB to the BCJR algorithm. Since symbol-by-symbol decoding is much cheaper computationally than the BCJR algorithm, the complete shell code with symbol-by-symbol decoding remains practically competitive.

VI Conclusions

In this paper, we have studied the use of more general spherical codes in place of permutation codes for probabilistic amplitude shaping at short blocklengths, which are of particular relevance to optical communications. A key result is the development of a novel soft-demapping technique for these short spherical codes. When used in a PAS framework with a standard LDPC channel code, our findings suggest that a gain of about 0.5 dB can be achieved compared to the conventional case of using one permutation code with symbol-by-symbol decoding. In terms of BMD rate, the total gain is more than 0.6 dB for a particular spherical code at blocklength n=50.

However, while our results are promising, it remains to be seen how well our proposed methods work for nonlinear optical fiber. Future work will focus on further exploring this aspect, while studying possible improvements and general performance versus complexity trade-offs for our proposed methods.

Refer to caption
Figure 8: The matching corresponding to the solution with the largest reward is shown on the left. The weight function for the assignment problem of our running example is also shown on the right.

Appendix: Computation of Mutual Information

One can calculate the mutual information between the input and output of the channel by Monte Carlo evaluation of the mutual information integral given by

\int\sum_{\bm{x}\in\mathcal{C}}p(\bm{x},\bm{y})\log\frac{p(\bm{y}\mid\bm{x})}{p(\bm{y})}\,\mathrm{d}\bm{y}.

To find p(\bm{y}) for each trial, one may use the law of total probability

p(\bm{y})=\sum_{\bm{x}\in\mathcal{C}}p(\bm{y}\mid\bm{x})p(\bm{x}).

The code \mathcal{C} is typically very large, meaning that this summation cannot be computed exactly. Instead, we use a list-decoding approximation as follows:

p(\bm{y})\approx\sum_{\bm{x}\in\mathcal{L}}p(\bm{y}\mid\bm{x})p(\bm{x}) (7)

where \mathcal{L}=\{\bm{c}_{0},\bm{c}_{1},\dots,\bm{c}_{L-1}\} is a list of the L codewords with the highest likelihoods, so that

p(\bm{y}\mid\bm{c}_{0})\geq p(\bm{y}\mid\bm{c}_{1})\geq\dots\geq p(\bm{y}\mid\bm{c}_{L-1}).

If L is large enough, p(\bm{y}) can be approximated accurately. Because of the exponential decay of the Gaussian distribution, a computationally feasible list size is usually enough to obtain an accurate estimate of p(\bm{y}); higher SNRs require smaller list sizes.
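For concreteness, a minimal Monte Carlo sketch of this estimator follows, assuming a uniform prior p(\bm{x})=1/M over the M codewords so that the Gaussian normalization constants cancel in the ratio p(\bm{y}\mid\bm{x})/p(\bm{y}). The routines sample_codeword and list_decode (e.g., the specialized Murty’s algorithm described next) are placeholders assumed to be given.

import numpy as np

def mi_estimate(trials, sigma2, M, sample_codeword, list_decode, L=64):
    # Monte Carlo estimate (in bits) of the mutual information integral,
    # with p(y) replaced by the list approximation (7).
    total = 0.0
    for _ in range(trials):
        x = np.asarray(sample_codeword(), dtype=float)
        y = x + np.sqrt(sigma2) * np.random.randn(len(x))
        def lik(c):
            return np.exp(-np.sum((y - np.asarray(c)) ** 2) / (2 * sigma2))
        top = list_decode(y, L)             # the L likeliest codewords
        p_y = sum(lik(c) for c in top) / M  # list approximation of p(y)
        total += np.log2(lik(x) / p_y)
    return total / trials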

Murty’s algorithm finds the L solutions to the assignment problem with the largest rewards [8]. We describe a specialized version of Murty’s algorithm, pertinent to the list-decoding problem of this paper, using a running example. The first step is to find the maximizing solution \bm{c}_{0}.

Example 7.

We consider the assignment problem with

\mathcal{U}=\{a_{1},a_{2},a_{3},a_{4}\},\qquad\mathcal{V}=\{b_{1},b_{2},b_{3},b_{4}\},

and the weight function w(a_{i},b_{j})=ij as our running example, as shown in Fig. 8. We want to find

\{\bm{c}_{0},\bm{c}_{1},\bm{c}_{2},\bm{c}_{3},\bm{c}_{4}\},

the 5 solutions to the assignment problem with the largest rewards. The matching representing the solution \bm{c}_{0} is also shown in Fig. 8.

Refer to caption
Figure 9: Two different enumerations of the edges in \bm{c}_{0}.

The next step in Murty’s algorithm is to choose an arbitrary enumeration of the edges in the matching \bm{c}_{0}. (An enumeration of a non-empty finite set A is a bijection f_{A}:[\lvert A\rvert]\to A; in other words, an ordered indexing of all the items in the set A with the numbers 1,2,\dots,\lvert A\rvert.) Then we partition the set of remaining matchings into n-1 subsets, which we refer to as cells. Each cell is specified by a set of inclusion and exclusion constraints. The first cell contains all matchings that exclude edge 1 of \bm{c}_{0}. The second cell contains all matchings that include edge 1 but exclude edge 2. The next cell contains all matchings that include both edges 1 and 2 but exclude edge 3. The algorithm goes on to partition the set of matchings similarly according to the enumeration of the edges in \bm{c}_{0}. A weight function is defined for each cell by modifying the original weight function based on the inclusion and exclusion constraints. For an excluded edge, the corresponding weight is set to -\infty. For an included edge, the two vertices incident with that edge are removed from the vertex sets \mathcal{U} and \mathcal{V}, and the corresponding row and column of the weight function are removed. The assignment problem is then solved for each cell with respect to the modified weight function.
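A schematic rendering of this partitioning step, with matchings represented as lists of edges, might look as follows (illustrative only):

def partition_cells(best_matching):
    # Given the edges e_1, ..., e_n of the current best matching in the
    # chosen enumeration, cell j (1 <= j <= n-1) includes e_1, ..., e_{j-1}
    # and excludes e_j; the excluded edge's weight is later set to -inf.
    cells = []
    for j in range(len(best_matching) - 1):
        include = list(best_matching[:j])
        exclude = best_matching[j]
        cells.append((include, exclude))
    return cells

In the running example, applying this to the edges of \bm{c}_{0} produces the three cells of Fig. 10 or Fig. 11, depending on the enumeration chosen.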

Example 8.

We consider two different enumerations of the edges in \bm{c}_{0}, shown in Fig. 9. The cells obtained by partitioning according to these enumerations are shown in Fig. 10 and Fig. 11, respectively. For each cell, the inclusion constraints are shown with blue dotted lines and the exclusion constraints with red dashed lines.

Refer to caption
Figure 10: The cells obtained after partitioning with respect to Enumeration 1 for \bm{c}_{0}.

The algorithm then proceeds by solving the assignment problem for each cell. Due to the introduction of -\infty into the range of the weight function, the weight functions of the resulting assignment problems are not necessarily multiplicative. Therefore, solving the assignment problem for each cell may not be as easy as sorting. However, if the enumeration of the edges in \bm{c}_{0} is chosen carefully, solving the resulting assignment problems may still be easy. In particular, if we can start off by matching the vertices in \mathcal{U} that correspond to columns in the weight function with -\infty entries, we can reduce the problem again to an assignment problem with a multiplicative weight function. To formalize this intuition, the following definition will be useful.

Definition 3.

Let \mathcal{H} be a perfect matching of the balanced bipartite graph G=(\mathcal{U},\mathcal{V},\mathcal{E}). Let w_{\mathcal{U}}:\mathcal{U}\to\mathbb{R}\cup\{-\infty\} be a weight function for the vertices in \mathcal{U}. An enumeration f_{\mathcal{H}} of \mathcal{H} is said to be ordered with respect to w_{\mathcal{U}} if

f_{\mathcal{H}}(a_{1},b_{1})>f_{\mathcal{H}}(a_{2},b_{2})\iff w_{\mathcal{U}}(a_{1})\leq w_{\mathcal{U}}(a_{2}).

In other words, an enumeration is ordered with respect to the weight function of \mathcal{U} if it sorts the edges in descending order of the weights of their incident vertices in \mathcal{U}.

Definition 4.

A weight function w is almost multiplicative if there exist two functions

w_{\mathcal{U}}:\mathcal{U}\to\mathbb{R}\cup\{-\infty\},\qquad w_{\mathcal{V}}:\mathcal{V}\to\mathbb{R}\cup\{-\infty\},

such that w_{\mathcal{U}} assumes its maximum at a^{*}\in\mathcal{U},

\forall b\in\mathcal{V}:w(a^{*},b)\in\{-\infty,\,w_{\mathcal{U}}(a^{*})w_{\mathcal{V}}(b)\},

and

\forall(a,b)\in\mathcal{U}\times\mathcal{V}\setminus\{(a^{*},b)\mid b\in\mathcal{V}\}:w(a,b)=w_{\mathcal{U}}(a)w_{\mathcal{V}}(b).
Refer to caption
Figure 11: The cells obtained after partitioning with respect to Enumeration 2 for \bm{c}_{0}.

If the weight function of an assignment problem is multiplicative and an enumeration ordered with respect to the weight function of \mathcal{U} is used, then the weight function of each cell after the first partitioning step in Murty’s algorithm is almost multiplicative. Solving the assignment problem with an almost multiplicative weight function is straightforward: one can show that in a maximum-weight matching, the edge incident on a^{*} with the largest weight is included in the matching. After removing the corresponding row and column from the weight function, the problem reduces to an assignment problem with a multiplicative weight function and can be solved by sorting.
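A compact sketch of these two reductions, assuming nonnegative vertex weights (as with likelihoods) so that the sorting argument applies:

import numpy as np

def solve_multiplicative(wu, wv):
    # Max-weight matching for w(a, b) = wu[a] * wv[b]: pair vertices of
    # U and V in descending order of weight (rearrangement inequality).
    order_u = np.argsort(-np.asarray(wu, dtype=float))
    order_v = np.argsort(-np.asarray(wv, dtype=float))
    return list(zip(order_u.tolist(), order_v.tolist()))

def solve_almost_multiplicative(w, wu, wv):
    # w agrees with wu[a] * wv[b] except possibly in the row of the
    # maximizing vertex a*, where entries may be -inf. Match a* to its
    # best remaining column first, then pair the rest by sorting.
    w = np.asarray(w, dtype=float)
    astar = int(np.argmax(wu))
    bstar = int(np.argmax(w[astar]))
    rows = sorted((i for i in range(len(wu)) if i != astar),
                  key=lambda i: -wu[i])
    cols = sorted((j for j in range(len(wv)) if j != bstar),
                  key=lambda j: -wv[j])
    return [(astar, bstar)] + list(zip(rows, cols))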

Refer to caption
Figure 12: The solution of the assignment problem for each cell, obtained after partitioning with respect to Enumeration 2.
Example 9.

Enumeration 2 in our running example is ordered with respect to w_{\mathcal{U}}. In what follows, we use only this ordered enumeration. In each of the cells, the edge connected to the vertex in \mathcal{U} with the largest vertex weight is selected first. This is shown by the thick black edge in Fig. 12. The weight function is correspondingly adjusted by removing one row and one column. After choosing the first edge, the rest of the edges in the maximum-weight matching are found by sorting and are shown using thin gray lines.

A list of potential candidates \bm{p}, sorted by reward, is formed, and all the solutions obtained for the cells are placed on this list. Recall that the solution with the largest reward is denoted \bm{c}_{0}. The solution with the second largest reward, \bm{c}_{1}, is the first item in \bm{p}, i.e., the potential candidate in \bm{p} with the largest reward. The algorithm removes the solution \bm{c}_{1} from \bm{p} and partitions the set of remaining matchings in its cell according to an enumeration of \bm{c}_{1}. Again, by choosing an enumeration ordered with respect to w_{\mathcal{U}}, the reduced problems can be solved by sorting.

Refer to caption
Figure 13: The cells obtained after partitioning Cell 1 with respect to \bm{c}_{1}.
Example 10.

We insert the solution for each cell into the sorted list \bm{p}. The rewards of the solutions of Cell 1, Cell 2, and Cell 3 in Fig. 12 are all equal to 29. Note that in calculating the reward of each solution, the edges coming from the inclusion constraints (the blue dotted edges) are also counted. These three matchings are

\mathcal{P}_{1}=\{(a_{1},b_{1}),(a_{2},b_{2}),(a_{3},b_{4}),(a_{4},b_{3})\},
\mathcal{P}_{2}=\{(a_{1},b_{1}),(a_{2},b_{3}),(a_{3},b_{2}),(a_{4},b_{4})\},
\mathcal{P}_{3}=\{(a_{1},b_{2}),(a_{2},b_{1}),(a_{3},b_{3}),(a_{4},b_{4})\}.

The list \bm{p} is

\bm{p}=(\mathcal{P}_{1},\mathcal{P}_{2},\mathcal{P}_{3}).

We remove the first matching from this list; the solution with the second largest reward is thus the codeword \bm{c}_{1} corresponding to the matching \mathcal{P}_{1}.

We then partition the set of remaining matchings in Cell 1 by considering the ordered enumeration of the edges in \bm{c}_{1}. The resulting cells, together with their inclusion and exclusion constraints as well as their weight functions, are shown in Fig. 13. The inclusion and exclusion constraints of Cell 1 are carried forward in forming Cell 1.1, Cell 1.2, and Cell 1.3.

Refer to caption
Figure 14: The solution of the assignment problem for each cell, obtained after partitioning with respect to the ordered enumeration of \bm{c}_{1}.

The algorithm then solves the assignment problem in the newly partitioned cells. The solution of each cell is added to the list of candidates \bm{p}. The next matching with the largest reward, \bm{c}_{2}, is then the first item in the sorted list \bm{p}. The algorithm proceeds by partitioning the corresponding cell with respect to the ordered enumeration of \bm{c}_{2}.

Example 11.

The solutions of the assignment problem in Cell 1.1, Cell 1.2, and Cell 1.3 are shown in Fig. 14. Again, the edge connected to the vertex in \mathcal{U} with the largest vertex weight is selected first, and the weight function is correspondingly adjusted by removing one row and one column. After choosing the first edge, the rest of the edges in the maximum-weight matching are found by sorting (shown using thin gray lines). The solution in Cell 1.1 is referred to as \mathcal{P}_{1.1} and has a reward of 27. The solution in Cell 1.2, \mathcal{P}_{1.2}, also has a reward of 27. The solution in Cell 1.3, \mathcal{P}_{1.3}, has a reward of 28. The sorted list of candidate solutions is updated to

\bm{p}=(\mathcal{P}_{2},\mathcal{P}_{3},\mathcal{P}_{1.3},\mathcal{P}_{1.1},\mathcal{P}_{1.2}).

The next solution in Murty’s algorithm is then the codeword \bm{c}_{2} corresponding to the matching \mathcal{P}_{2}. The next step is to partition Cell 2 with respect to the ordered enumeration of \bm{c}_{2}. Continuing in this way, one sees that \bm{c}_{3} is the codeword corresponding to the matching \mathcal{P}_{3} and \bm{c}_{4} is the codeword corresponding to the matching \mathcal{P}_{1.3}.

This specialized Murty’s algorithm continues until the L solutions with the largest rewards are found.

Remark 2.

In practice, we may have extra constraints that disallow some perfect matchings; that is, it is common to have an allowed set of perfect matchings \mathcal{A} that is a proper subset of all possible perfect matchings. If one wishes to find the L solutions to the assignment problem with the largest rewards, subject to the extra constraint that the corresponding matchings lie in \mathcal{A}, then once a candidate solution is taken from the top of the list \bm{p}, it should be tested for membership in \mathcal{A} and only then counted as a member of the final list of L solutions. If a solution is disallowed because it is not in \mathcal{A}, it must still be kept on the sorted list \bm{p}, as its descendant cells may lead to a viable solution in \mathcal{A}. We often expurgate permutation codes to a subset with power-of-two size, meaning that not all permutation codewords are allowed; this translates into a restriction on the set of allowed perfect matchings that may appear on the list of most likely codewords.
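A schematic of this filtering, with next_candidate standing in for the pop-and-partition step of the specialized Murty’s algorithm and is_allowed for the membership test in \mathcal{A} (both placeholders assumed to be given):

def constrained_list(next_candidate, is_allowed, L):
    # Pop candidates in descending reward order; only allowed solutions
    # enter the final list, but disallowed ones are still processed so
    # that their descendant cells remain reachable.
    out = []
    while len(out) < L:
        cand = next_candidate()
        if is_allowed(cand):
            out.append(cand)
    return out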

Acknowledgment

The authors wish to thank the reviewers of this paper for their insightful comments. Their constructive suggestions have significantly contributed to the improvement of this work. This work was supported in part by Huawei Technologies, Canada.

References

  • [1] R. Rafie Borujeny and F. R. Kschischang, “Why constant-composition codes reduce nonlinear interference noise,” J. Lightw. Technol., vol. 41, pp. 4691–4698, July 2023.
  • [2] D. Slepian, “Permutation modulation,” Proceedings of the IEEE, vol. 53, pp. 228–236, Mar. 1965.
  • [3] G. Böcherer, F. Steiner, and P. Schulte, “Bandwidth efficient and rate-matched low-density parity-check coded modulation,” IEEE Trans. Commun., vol. 63, pp. 4651–4665, Dec. 2015.
  • [4] P. Schulte and G. Böcherer, “Constant composition distribution matching,” IEEE Trans. Inf. Theory, vol. 62, pp. 430–434, Jan. 2016.
  • [5] I. Ingemarsson, Group Codes for the Gaussian Channel, pp. 73–108. Berlin, Heidelberg: Springer Berlin Heidelberg, 1989.
  • [6] Y. C. Gültekin, T. Fehenberger, A. Alvarado, and F. M. Willems, “Probabilistic shaping for finite blocklengths: Distribution matching and sphere shaping,” Entropy, vol. 22, p. 581, May 2020.
  • [7] G. Böcherer, “Probabilistic amplitude shaping,” Foundations and Trends in Communications and Information Theory, vol. 20, pp. 390–511, 2023.
  • [8] K. G. Murty, “An algorithm for ranking all the assignments in order of increasing cost,” Operations Research, vol. 16, pp. 682–687, June 1968.
  • [9] G. Ungerboeck, “Channel coding with multilevel/phase signals,” IEEE Trans. Inf. Theory, vol. 28, pp. 55–67, Jan. 1982.
  • [10] P. Schulte, W. Labidi, and G. Kramer, “Joint decoding of distribution matching and error control codes,” in International Zurich Seminar on Information and Communication, pp. 53–57, ETH Zurich, 2020.
  • [11] Y. Luo and Q. Huang, “Joint message passing decoding for LDPC codes with CCDMs,” Optics Letters, vol. 48, pp. 4933–4936, Oct. 2023.
  • [12] F. R. Kschischang and S. Pasupathy, “Optimal nonuniform signaling for Gaussian channels,” IEEE Trans. Inf. Theory, vol. 39, pp. 913–929, May 1993.
  • [13] R. Burkard, M. Dell’Amico, and S. Martello, Assignment Problems. SIAM, 2012.
  • [14] H. W. Kuhn, “The Hungarian method for the assignment problem,” Naval Research Logistics Quarterly, vol. 2, pp. 83–97, Mar. 1955.
  • [15] G. H. Hardy, J. E. Littlewood, and G. Pólya, Inequalities. Cambridge, England: Cambridge University Press, Feb. 1988.
  • [16] J. Holstermann, “A generalization of the rearrangement inequality,” Mathematical Reflections, vol. 5, no. 4, 2017.
  • [17] S. Delsad, “Probabilistic shaping for the AWGN channel,” 2023.
  • [18] A. Nordio and E. Viterbo, “Permutation modulation for fading channels,” in 10th International Conference on Telecommunications (ICT), vol. 2, pp. 1177–1183, 2003.
  • [19] T. Berger, F. Jelinek, and J. Wolf, “Permutation codes for sources,” IEEE Trans. Inf. Theory, vol. 18, pp. 160–169, Jan. 1972.
  • [20] W. Myrvold and F. Ruskey, “Ranking and unranking permutations in linear time,” Information Processing Letters, vol. 79, pp. 281–284, Sept. 2001.
  • [21] G. D. Forney, “Geometrically uniform codes,” IEEE Trans. Inf. Theory, vol. 37, pp. 1241–1260, Sept. 1991.
  • [22] F. R. Kschischang, “The trellis structure of maximal fixed-cost codes,” IEEE Trans. Inf. Theory, vol. 42, pp. 1828–1838, Nov. 1996.
  • [23] L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, “Optimal decoding of linear codes for minimizing symbol error rate,” IEEE Trans. Inf. Theory, vol. 20, pp. 284–287, Mar. 1974.
  • [24] G. Böcherer, “Achievable rates for shaped bit-metric decoding,” CoRR, vol. abs/1410.8075, 2014.
  • [25] 3GPP, “5G NR multiplexing and channel coding,” Tech. Rep. TS 138.212 V15.2.0 (2018-07), 3GPP, July 2018.
  • [26] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley & Sons, 2005.
  • [27] Y. C. Gültekin, W. J. van Houtum, A. G. C. Koppelaar, and F. M. J. Willems, “Enumerative sphere shaping for wireless communications with short packets,” IEEE Trans. Wireless Commun., vol. 19, pp. 1098–1112, Feb. 2020.
  • [28] G. D. Forney and G. Ungerboeck, “Modulation and coding for linear Gaussian channels,” IEEE Trans. Inf. Theory, vol. 44, pp. 2384–2415, Oct. 1998.
  • [29] I. Ingemarsson, “Optimized permutation modulation,” IEEE Trans. Inf. Theory, vol. 36, no. 5, pp. 1098–1100, 1990.
Reza Rafie Borujeny (Member, IEEE) received the B.Sc. degree from the University of Tehran in 2012, the M.Sc. degree from the University of Alberta in 2014 and the Ph.D. degree from the University of Toronto in 2022 all in electrical and computer engineering. His research interests include applications of information theory and coding theory.
Susanna E. Rumsey (Graduate Student Member, IEEE) was born in Toronto, ON, Canada in 1993. She received the B.A.Sc. degree with honours in engineering science (major in engineering physics), and the M.Eng and M.A.Sc. degrees in electrical and computer engineering from the University of Toronto, Toronto, ON, Canada, in 2015, 2016, and 2019 respectively. Since 2019, she has been a Ph.D. student in electrical and computer engineering at the University of Toronto, Toronto, ON, Canada.
Stark C. Draper (Senior Member, IEEE) received the B.S. degree in Electrical Engineering and the B.A. degree in History from Stanford University, and the M.S. and Ph.D. degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology (MIT). He completed postdocs at the University of Toronto (UofT) and at the University of California, Berkeley. He is a Professor in the Department of Electrical and Computer Engineering at the University of Toronto and was an Associate Professor at the University of Wisconsin, Madison. As a Research Scientist he has worked at the Mitsubishi Electric Research Labs (MERL), Disney’s Boston Research Lab, Arraycomm Inc., the C. S. Draper Laboratory, and Ktaadn Inc. His research interests include information theory, optimization, error-correction coding, security, and the application of tools and perspectives from these fields in communications, computing, learning, and astronomy. He has been the recipient of the NSERC Discovery Award, the NSF CAREER Award, the 2010 MERL President’s Award, and teaching awards from UofT, the University of Wisconsin, and MIT. He received an Intel Graduate Fellowship, Stanford’s Frederick E. Terman Engineering Scholastic Award, and a U.S. State Department Fulbright Fellowship. He spent the 2019–2020 academic year on sabbatical visiting the Chinese University of Hong Kong, Shenzhen, and the Canada-France-Hawaii Telescope (CFHT), Hawaii, USA. Among his service roles, he was the founding chair of the Machine Intelligence major at UofT, was the Faculty of Applied Science and Engineering (FASE) representative on the UofT Governing Council, is the FASE Vice-Dean of Research, and is the President of the IEEE Information Theory Society for 2024.
Frank R. Kschischang (Fellow, IEEE) received the B.A.Sc. degree (with honors) from the University of British Columbia, Vancouver, BC, Canada, in 1985 and the M.A.Sc. and Ph.D. degrees from the University of Toronto, Toronto, ON, Canada, in 1988 and 1991, respectively, all in electrical engineering. Since 1991 he has been a faculty member in Electrical and Computer Engineering at the University of Toronto, where he presently holds the title of Distinguished Professor of Digital Communication. His research interests are focused primarily on the area of channel coding techniques, applied to wireline, wireless and optical communication systems and networks. He has received several awards for teaching and research, including the 2010 Communications Society and Information Theory Society Joint Paper Award, and the 2018 IEEE Information Theory Society Paper Award. He is a Fellow of IEEE, of the Engineering Institute of Canada, of the Canadian Academy of Engineering, and of the Royal Society of Canada. During 1997–2000, he served as an Associate Editor for Coding Theory for the IEEE Transactions on Information Theory, and from 2014–16, he served as this journal’s Editor-in-Chief. He served as general co-chair for the 2008 IEEE International Symposium on Information Theory, and he served as the 2010 President of the IEEE Information Theory Society. He received the Society’s Aaron D. Wyner Distinguished Service Award in 2016. He was awarded the IEEE Richard W. Hamming Medal for contributions to the theory and practice of error correcting codes and optical communications in 2023.