
Gradient Flow Decoding for LDPC Codes

Tadashi Wadayama, Kensho Nakajima, and Ayano Nakai-Kasai, Nagoya Institute of Technology, Gokiso, Nagoya, Aichi 466-8555, Japan,
[email protected]
Abstract

The power consumption of integrated circuits is becoming a significant burden, particularly for large-scale signal processing tasks requiring high throughput. The decoding of LDPC codes is one such heavy signal processing task, demanding both power efficiency and high decoding throughput. A promising approach to reducing both the power and latency of a decoding process is to use an analog circuit instead of a digital circuit. This paper investigates a continuous-time gradient flow-based approach for decoding LDPC codes, which employs a potential energy function similar to the objective function used in the gradient descent bit flipping (GDBF) algorithm. We experimentally demonstrate that the decoding performance of gradient flow decoding is comparable to that of the multi-bit mode GDBF algorithm. Since an analog circuit for gradient flow decoding requires only analog arithmetic operations and an integrator, future advancements in programmable analog integrated circuits may make practical implementation feasible.

I Introduction

Recently, Moore's law, which states that the number of transistors in an integrated circuit doubles every 24 months, has been facing challenges due to the power consumption of integrated circuits. Power consumption has become a significant obstacle to handling large-scale signal processing tasks that require extremely high throughput, such as multiple gigabits per second.

The decoding of low-density parity-check (LDPC) codes, first introduced by Gallager [1], is a heavy signal processing task requiring high throughput. In various fields, such as storage systems including solid-state drives and next-generation wireless systems beyond 5G or 6G, there is a demand for high-throughput decoding algorithms. As a result, power efficiency and higher decoding throughput are becoming crucial topics for further research.

One promising approach to reducing power consumption is to reduce circuit size. Bit flipping (BF) decoding algorithms meet this demand, as they require fewer flip-flops in a circuit compared to conventional belief propagation (BP) decoding or min-sum decoding. Specifically, variations of the gradient descent bit flipping (GDBF) algorithm [2] have been extensively studied for implementation.

An alternative approach to reducing both power and latency of a decoding process is to use an analog circuit instead of a digital circuit. An analog computer [3][4] capable of simulating nonlinear ordinary differential equations (ODE) is composed of analog adders, multipliers, integrators, and other nonlinear devices. In the field of optical computation, optical adders and multipliers have been realized and used for solving ODEs [5]. A recent example is a neural network implemented in the optical domain [6]. Optical analog integrated circuits are currently under intensive study [7], with reports showing that optical signal processing achieves both fast processing and extremely low power consumption. Although it is too early to evaluate their competitiveness against digital circuits, we believe that there is plenty of room for further studies on the fully-analog implementation of decoding algorithms for LDPC codes. Furthermore, from a theoretical point of view, the development of continuous-time error-correcting dynamical systems appears to be an interesting topic to pursue [8].

This paper investigates a gradient flow-based approach for decoding LDPC codes. The gradient flow dynamics is a continuous-time system that evolves the state to reduce a predefined potential energy. We will employ a potential energy function that is similar to the objective function used in the GDBF algorithm.

II Preliminaries

II-A Notation

The code length is denoted by $n$. A binary sparse matrix over $\mathbb{F}_{2}$, $\bm{H}\in\mathbb{F}_{2}^{m\times n}$, is a parity check matrix for an LDPC code. The binary linear code $\tilde{C}(\bm{H})$ is defined by

$$\tilde{C}(\bm{H})\equiv\{\bm{b}\in\mathbb{F}_{2}^{n}\mid\bm{H}\bm{b}=\bm{0}\}. \qquad (1)$$

A binary-to-bipolar transform $\beta:\mathbb{F}_{2}\rightarrow\{1,-1\}$, defined by $\beta(0)\equiv 1$ and $\beta(1)\equiv -1$, transforms $\tilde{C}(\bm{H})$ into the bipolar code defined by

$$C(\bm{H})\equiv\{\beta(\bm{b})\in\{1,-1\}^{n}\mid\bm{b}\in\tilde{C}(\bm{H})\}. \qquad (2)$$

The index sets $A(i)$ and $B(j)$ are defined as

$$A(i)\equiv\{j\mid j\in[n],\ H_{i,j}=1\},\quad i\in[m], \qquad (3)$$
$$B(j)\equiv\{i\mid i\in[m],\ H_{i,j}=1\},\quad j\in[n], \qquad (4)$$

respectively, where $H_{i,j}$ denotes the $(i,j)$-element of $\bm{H}$. The notation $[n]$ denotes the set of consecutive integers $\{1,2,\ldots,n\}$. The multivariate Gaussian distribution with mean vector $\bm{m}$ and covariance $\bm{\Sigma}$ is denoted by $\mathcal{N}(\bm{m},\bm{\Sigma})$.
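As a concrete illustration, the index sets $A(i)$ and $B(j)$ can be read off from a parity check matrix stored as a 0/1 array. The following minimal Python/NumPy sketch is not part of the original paper; the small example matrix is hypothetical, and the code uses 0-based indices whereas the text uses 1-based indices.

```python
import numpy as np

# Index sets A(i) and B(j) of Eqs. (3)-(4), extracted from a 0/1 parity check
# matrix H (a small hypothetical example).
H = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])
A = [np.flatnonzero(H[i, :]) for i in range(H.shape[0])]  # A(i): bit positions in check i
B = [np.flatnonzero(H[:, j]) for j in range(H.shape[1])]  # B(j): checks involving bit j
print(A)  # [array([0, 1, 2]), array([1, 2, 3])]
print(B)  # [array([0]), array([0, 1]), array([0, 1]), array([1])]
```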

II-B Related works

The focus of current research lies in optimization-based approaches. Several works have been developed in this category, with the celebrated work by Feldman [9] on linear programming decoding providing a clear demonstration that the decoding problem can be viewed as an optimization problem.

Applications of interior point methods for convex optimization to decoding problems have been studied [10] [11]. A gradient descent formulation of a non-convex objective function leads to the GDBF algorithm [2].

Some variants of the GDBF algorithm, such as the noisy GDBF algorithm [12], have shown considerable improvement in decoding performance. Additionally, recent works have presented ADMM-based decoding algorithms for LDPC codes, including those by Zhang and Siegel [13], Liu and Draper [14][15], and Wang and Bai [16], which have shown promising results. Although the advancement of these optimization-based decoding algorithms is significant, all of them are discrete-time algorithms.

Research on continuous-time dynamical systems for signal processing is a relatively new field and still in its early stages. However, it has shown promise in solving optimization problems and signal recovery tasks. For example, a MIMO MMSE detector based on continuous-time dynamics was proposed in [17], and a sparse signal recovery dynamics was studied in [18]. Further studies in this field, including the proposed work on LDPC decoding using gradient flow dynamics, can help to clarify the benefits and limitations of signal processing based on continuous dynamics.

III Potential energy function

III-A Code potential energy function

To define the gradient flow, we need an appropriate potential energy function. The code potential energy function for $C(\bm{H})$ is a multivariate polynomial defined as

$$h_{\alpha,\beta}(\bm{x})\equiv\alpha\sum_{j=1}^{n}(x_{j}^{2}-1)^{2}+\beta\sum_{i=1}^{m}\left(\left(\prod_{j\in A(i)}x_{j}\right)-1\right)^{2}, \qquad (5)$$

where $\bm{x}=(x_{1},\ldots,x_{n})^{T}\in\mathbb{R}^{n}$. The parameters $\alpha\in\mathbb{R}_{+}$ and $\beta\in\mathbb{R}_{+}$ control the strength of the constraints. On the right-hand side of this equation, the first term represents the bipolar constraint $\bm{x}\in\{+1,-1\}^{n}$, and the second term corresponds to the parity constraint induced by $\bm{H}$; i.e., if $\bm{x}\in C(\bm{H})$, we have $\left(\prod_{j\in A(i)}x_{j}\right)-1=0$ for any $i\in[m]$. The code potential energy function was first introduced in the work on the proximal decoding algorithm [19].

Since the polynomial $h_{\alpha,\beta}(\bm{x})$ has a sum-of-squares form, it can be regarded as a penalty function that yields a positive penalty value for non-codeword vectors in $\mathbb{R}^{n}$. The code potential energy $h_{\alpha,\beta}(\bm{x})$ is inspired by the non-convex parity constraint function used in the GDBF objective function [2]. The sum-of-squares form directly implies the most important property of $h_{\alpha,\beta}(\bm{x})$: the inequality $h_{\alpha,\beta}(\bm{x})\geq 0$ holds for any $\bm{x}\in\mathbb{R}^{n}$, and equality holds if and only if $\bm{x}\in C(\bm{H})$.
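The code potential energy (5) is straightforward to evaluate numerically. Below is a minimal Python/NumPy sketch, not taken from the paper; the function name and the dense 0/1 representation of $\bm{H}$ are our own choices.

```python
import numpy as np

def code_potential_energy(x, H, alpha=1.0, beta=1.0):
    """Code potential energy h_{alpha,beta}(x) of Eq. (5).

    H is an m-by-n 0/1 parity check matrix (NumPy array) and x is a
    real-valued state vector of length n.
    """
    bipolar_term = alpha * np.sum((x**2 - 1.0)**2)
    # z_i = prod_{j in A(i)} x_j for each check i
    parity_products = np.array([np.prod(x[row == 1]) for row in H])
    parity_term = beta * np.sum((parity_products - 1.0)**2)
    return bipolar_term + parity_term
```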

III-B Gradient

In the following discussion, we need the gradient of $h_{\alpha,\beta}(\bm{x})$. The first-order derivative of $h_{\alpha,\beta}(\bm{x})$ with respect to $x_{k}$ $(k\in[n])$ is given by

$$\frac{\partial}{\partial x_{k}}h_{\alpha,\beta}(\bm{x})=4\alpha(x_{k}^{2}-1)x_{k}+2\beta\sum_{i\in B(k)}\left(\left(\prod_{j\in A(i)}x_{j}\right)-1\right)\left(\prod_{j\in A(i)\backslash\{k\}}x_{j}\right). \qquad (6)$$

The gradient $\nabla h_{\alpha,\beta}(\bm{x})$ is thus given by

$$\nabla h_{\alpha,\beta}(\bm{x})=\left(\frac{\partial}{\partial x_{1}}h_{\alpha,\beta}(\bm{x}),\ldots,\frac{\partial}{\partial x_{n}}h_{\alpha,\beta}(\bm{x})\right)^{T}. \qquad (7)$$

A point $\bm{z}\in\mathbb{R}^{n}$ satisfying $\nabla h_{\alpha,\beta}(\bm{z})=\bm{0}$ is a stationary point of $h_{\alpha,\beta}$. For any codeword $\bm{x}\in C(\bm{H})$, $x_{k}^{2}=1$ holds for any $k\in[n]$ and $\prod_{j\in A(i)}x_{j}=1$ holds for any $i\in[m]$. This means $\nabla h_{\alpha,\beta}(\bm{x})=\bm{0}$; i.e., a codeword vector is a stationary point of $h_{\alpha,\beta}$.
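For reference, the partial derivatives (6) can be computed directly as in the following sketch; the helper name `code_potential_gradient` is ours, and this is an illustration rather than the authors' implementation.

```python
def code_potential_gradient(x, H, alpha=1.0, beta=1.0):
    """Gradient of h_{alpha,beta}(x), following Eqs. (6)-(7)."""
    grad = 4.0 * alpha * (x**2 - 1.0) * x              # derivative of the bipolar term
    for row in H:
        idx = np.flatnonzero(row)                      # A(i)
        z_i = np.prod(x[idx])                          # product over A(i)
        for k in idx:
            # product over A(i)\{k}, recomputed directly (no division by x_k)
            others = np.prod(x[idx[idx != k]])
            grad[k] += 2.0 * beta * (z_i - 1.0) * others
    return grad
```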

IV Gradient Flow Decoding

IV-A Gradient flow dynamics

The gradient flow dynamics, also known as steepest descent dynamics, is a continuous-time dynamics defined by an ODE:

$$\frac{d\bm{x}(t)}{dt}=-\nabla F(\bm{x}(t)), \qquad (8)$$

where $F:\mathbb{R}^{n}\rightarrow\mathbb{R}$ is a potential energy function. The system's state $\bm{x}:\mathbb{R}\rightarrow\mathbb{R}^{n}$ evolves to reduce the potential energy $F$ as the time variable $t$ increases. This continuous dynamical system can be viewed as the continuous counterpart of the gradient descent method. If the potential function $F$ is strictly convex, the equilibrium point of the dynamics coincides with the minimum point of $F$. Therefore, the gradient flow dynamics can be seen as a continuous-time minimization process of the potential energy. If $F$ is a non-convex function, then the gradient flow dynamics finds a stationary point of $F$ as $t\rightarrow\infty$.
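As a sanity check of this principle, the following sketch integrates the gradient flow (8) for a simple strictly convex potential $F(\bm{x})=\frac{1}{2}\|\bm{x}-\bm{c}\|^{2}$ using `scipy.integrate.solve_ivp`; the state converges to the unique minimizer $\bm{c}$. The example and its parameter values are ours, not from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Gradient flow dx/dt = -grad F(x) for the strictly convex potential
# F(x) = 0.5 * ||x - c||^2, whose unique minimizer (and equilibrium point) is c.
c = np.array([1.0, -2.0])
rhs = lambda t, x: -(x - c)                    # right-hand side -grad F(x)

sol = solve_ivp(rhs, t_span=(0.0, 10.0), y0=np.zeros(2), rtol=1e-8)
print(sol.y[:, -1])                            # close to c = [1, -2]
```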

IV-B Gradient flow decoding

We assume an AWGN channel in this paper. A transmitter sends a codeword $\bm{s}\in C(\bm{H})$, and a receiver obtains a received word

$$\bm{y}=\bm{s}+\bm{n}, \qquad (9)$$

where $\bm{n}\sim\mathcal{N}(\bm{0},\sigma^{2}\bm{I})$.

Let $f$ be a potential energy function defined by

$$f(\bm{x})\equiv\frac{1}{2}\|\bm{x}-\bm{y}\|^{2}+h_{\alpha,\beta}(\bm{x}). \qquad (10)$$

The first term of the potential energy function can be regarded as the negative log-likelihood function for an AWGN channel. The second term is a penalty term attracting $\bm{x}$ toward a codeword of $C(\bm{H})$.

The gradient flow decoding is simply the gradient flow dynamics based on the potential energy function $f$, given by $d\bm{x}(t)/dt=-\nabla f(\bm{x}(t))$. This ODE can be rewritten as follows.

Definition 1 (Gradient flow decoding)

The gradient flow (GF) decoding is defined by the ODE:

$$\frac{d\bm{x}}{dt}=-(\bm{x}-\bm{y}+\nabla h_{\alpha,\beta}(\bm{x})), \qquad (11)$$
$$\bm{x}(0)=\bm{x}_{0}, \qquad (12)$$

where $h_{\alpha,\beta}(\bm{x})$ is the code potential energy and $\bm{x}(0)=\bm{x}_{0}\in\mathbb{R}^{n}$ is an initial point.

In the above ODE, the notation $\bm{x}(t)$ is abbreviated to $\bm{x}$ for simplicity. If we have a suitably powerful analog circuit or analog computer, we can simulate the dynamics of this ODE. This simulation process is equivalent to a decoding process, where the equilibrium point represents the decoding result. A possible choice for the initial point is to set $\bm{x}_{0}=\bm{0}$. Alternatively, for certain cases, a small-norm vector or a random vector could be used as the initial point.

IV-C Euler method

To evaluate the decoding performance of the GF decoding (11), a numerical method is required to solve the ODE. The simplest numerical method for solving simultaneous nonlinear differential equations is the Euler method [20]. Although the convergence order of the Euler method is lower than that of higher-order methods, it is straightforward to use and can provide sufficiently accurate solutions if the time interval is discretized finely enough. Therefore, in the present study, we use the Euler method to solve the ODE (11).

Suppose that we need to simulate the dynamics defined by the ODE (11) over the time interval $0\leq t\leq T$. The interval is divided into $N$ bins, where $N$ denotes the number of discretized intervals. The discrete time ticks $t_{k}=k\eta$ $(k=0,1,\ldots,N)$ define the boundaries of the bins, where the bin width is given by $\eta\equiv T/N$. It should be noted that the choice of the bin width $\eta$ is crucial for ensuring the stability and accuracy of the Euler method. By using the Euler method, the ODE (11) can be approximated by

$$\bm{x}^{(k+1)}=\bm{x}^{(k)}-\eta\nabla f(\bm{x}^{(k)}) \qquad (13)$$

for $k=0,1,2,\ldots,N-1$. The initial condition of this recursion is $\bm{x}^{(0)}=\bm{x}_{0}$.
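Putting (11) and (13) together, a GF decoder can be simulated in a few lines of Python. The sketch below reuses the hypothetical `code_potential_gradient` helper defined earlier and applies a final sign operation as a hard decision; the default parameter values are illustrative only.

```python
def gf_decode_euler(y, H, alpha=1.0, beta=1.0, T=10.0, N=1000, x0=None):
    """Euler-method simulation of the GF decoding ODE (11), using Eq. (13)."""
    eta = T / N                                   # bin width
    x = np.zeros_like(y) if x0 is None else np.array(x0, dtype=float)
    for _ in range(N):
        grad_f = (x - y) + code_potential_gradient(x, H, alpha, beta)
        x = x - eta * grad_f                      # Euler update (13)
    return np.sign(x)                             # hard decision on the final state
```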

IV-D Example of Decoding Process

Here, we present a small example to illustrate a decoding process. Suppose that we have a repetition code of length 2, i.e., $C=\{(+1,+1),(-1,-1)\}$. The code potential energy function for $C$ is given by

$$h_{1,1}(\bm{x})=(x_{1}^{2}-1)^{2}+(x_{2}^{2}-1)^{2}+(x_{1}x_{2}-1)^{2}, \qquad (14)$$

where the parameters $\alpha$ and $\beta$ are set to 1.

Figure 1: Example of a solution curve. The repetition code of length 2 is assumed.

Assume that the transmitted word is $\bm{x}=(1,1)$ and that the corresponding received word is $\bm{y}=(0.6027,0.8244)$. In this case, the ODE (11) for the GF decoding becomes

$$\begin{pmatrix}\frac{dx_{1}}{dt}\\ \frac{dx_{2}}{dt}\end{pmatrix}=-\begin{pmatrix}x_{1}-0.6027+4x_{1}(x_{1}^{2}-1)+2(x_{1}x_{2}-1)x_{2}\\ x_{2}-0.8244+4x_{2}(x_{2}^{2}-1)+2(x_{1}x_{2}-1)x_{1}\end{pmatrix}.$$

The initial condition for the ODE is set to $\bm{x}(0)=(0,0)$. Figure 1 shows the solution curve of the ODE, with the small white circle representing the initial point and the small black circle indicating the equilibrium point $(0.9642,0.9901)$. The solution curve is obtained numerically using the Euler method. In Fig. 1, the contour curves of the potential energy function $f(\bm{x})$ are also plotted. We can observe that the solution curve is orthogonal to the contour curves because the solution path follows the negative gradient vector field of the potential energy. This means that the curve is a steepest descent curve for $f$. By rounding the equilibrium point, we obtain $\hat{\bm{x}}=(1,1)$, which is the correct estimated word.
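The two-dimensional example above can be reproduced numerically with the Euler recursion (13). In the following sketch the step size $\eta$ and the number of steps are our assumed values, chosen small enough for a stable integration.

```python
# Euler simulation of the length-2 repetition code example (alpha = beta = 1).
y = np.array([0.6027, 0.8244])                 # received word from the example
x = np.array([0.0, 0.0])                       # initial point x(0) = (0, 0)
eta, N = 0.001, 2000                           # assumed step size and step count
for _ in range(N):
    g1 = x[0] - y[0] + 4*x[0]*(x[0]**2 - 1) + 2*(x[0]*x[1] - 1)*x[1]
    g2 = x[1] - y[1] + 4*x[1]*(x[1]**2 - 1) + 2*(x[0]*x[1] - 1)*x[0]
    x = x - eta * np.array([g1, g2])
print(x)             # should approach the equilibrium point (0.9642, 0.9901)
print(np.sign(x))    # rounded estimate (1, 1)
```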

V Gradient Flow Dynamics

This section focuses on observing the behavior of the gradient flow dynamics defined by (11), which provides insights into the decoding process of the GF decoding. The experiments in this section use a (3,6)-regular LDPC code with $n=204$ and $m=102$. The signal-to-noise ratio (SNR) is set to ${\sf SNR}=4$ dB, and the standard deviation of the Gaussian noise $\sigma$ is given by

$$\sigma=\sqrt{(1/2)10^{-{\sf SNR}/10}/R}, \qquad (15)$$

where $R$ is the design code rate. For example, $R$ equals $1/2$ for a (3,6)-regular LDPC code. For the numerical results presented in this section, we used the Euler method with $N=1000$ or $2000$. The parameters $\alpha$ and $\beta$ were both set to 1.
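For completeness, the noise standard deviation (15) for the setting used here (rate $R=1/2$ at ${\sf SNR}=4$ dB) can be computed as follows.

```python
# Noise standard deviation of Eq. (15) for a rate R = 1/2 code at SNR = 4 dB.
SNR_dB, R = 4.0, 0.5
sigma = np.sqrt(0.5 * 10.0**(-SNR_dB / 10.0) / R)
print(sigma)   # approximately 0.631
```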

V-A Observation on potential energy

If the state follows a gradient flow dynamics, the potential energy should decrease as the state evolves. To confirm the evolution of the potential energy, we conducted an experiment as follows. For each trial, a random bipolar codeword $\bm{s}$ of a (3,6)-regular LDPC code was generated. The received word $\bm{y}$ was also randomly sampled as $\bm{y}=\bm{s}+\bm{n}$, where $\bm{n}\sim\mathcal{N}(\bm{0},\sigma^{2}\bm{I})$. The received vector $\bm{y}$ was then provided to the gradient flow decoder. We conducted 50 trials in total.

Figure 2 plots the potential energy $f(\bm{x}(t))$ as a function of time $t$. As expected, we observed that the value of the potential energy decreased rapidly. In the regime $t\geq 1$, the decrease in the potential energy was barely visible, likely because the state vector was already sufficiently close to the equilibrium point in that regime.

Figure 2: Potential energy as a function of time $t$. A (3,6)-regular LDPC code of length 204 is used.

V-B Decoding process

It may be informative to observe a decoding process of the GF decoding. Figure 3 presents a decoding process for the (3,6)-regular LDPC code. The dashed line represents the transmitted word, where the horizontal axis is the index of the vector, ranging from $1$ to $n=204$. The solid line in Fig. 3 indicates the value of $x_{i}$ in $\bm{x}$. The upper panel corresponds to time $t=0.1$ and the lower panel to time $t=1$. At the beginning of the decoding process (upper panel), the magnitudes of the $x_{i}$ in the state vector $\bm{x}$ are fairly small; they are not sufficient to yield the final estimate. At $t=1$ (lower panel), the values of $x_{i}$ are very close to the original vector, and the state vector is presumably close to the equilibrium point. This decoding result corresponds to successful decoding.

Figure 3: Example of a decoding process. The dashed line represents the original codeword. The solid line indicates the decoder output. ${\sf SNR}=4$ dB.

V-C Solution curves

Figure 4 displays the solution curves of each element in the state vector $\bm{x}$, where $x_{i}(t)$ (for $i\in[204]$) represents the solution of the ODE (11) plotted as a function of time $t$. Notably, all solution curves are smooth and continuous. The left panel illustrates the solution curves for ${\sf SNR}=4$ dB, where the equilibrium point is in proximity to a bipolar vector. Conversely, in the right panel (${\sf SNR}=0$ dB), the solution curves are more widely dispersed.

Figure 4: Solution curves. A (3,6)-regular LDPC code of length 204 is used.

VI Decoder Architecture

Our proposed scheme for decoding LDPC codes does not rely on any specific analog hardware technology for its implementation. We aim for a simple analog circuit that simulates the ODE (11), in order to facilitate implementation. In this section, we show a possible architecture of a GF decoder suitable for hardware implementation. We assume that analog domain integrators exist and that basic real arithmetic operations, such as addition, subtraction, multiplication, and division, can be performed in the analog domain.

If $x_{k}\neq 0$ for any $k\in[n]$, then the partial derivative of $h_{\alpha,\beta}(\bm{x})$ can be rewritten as

$$\frac{\partial}{\partial x_{k}}h_{\alpha,\beta}(\bm{x})=4\alpha(x_{k}^{2}-1)x_{k}+\frac{2\beta}{x_{k}}\sum_{i\in B(k)}\left(\left(\prod_{j\in A(i)}x_{j}\right)-1\right)\left(\prod_{j\in A(i)}x_{j}\right). \qquad (16)$$

Let $z_{i}$ $(i\in[m])$ be intermediate variables defined by $z_{i}\equiv\prod_{j\in A(i)}x_{j}$. We introduce further intermediate variables $w_{k}$ $(k\in[n])$ defined by

$$w_{k}\equiv\sum_{i\in B(k)}\left(z_{i}-1\right)z_{i}=\sum_{i\in B(k)}U(z_{i}), \qquad (17)$$

where $U:\mathbb{R}\rightarrow\mathbb{R}$ is given by

$$U(z)\equiv(z-1)z. \qquad (18)$$

The function $V:\mathbb{R}^{3}\rightarrow\mathbb{R}$ defined by

$$V(w,x,y)\equiv-x+y-4\alpha(x^{2}-1)x-\frac{2\beta w}{x} \qquad (19)$$

is used to generate $\bm{v}=(v_{1},v_{2},\ldots,v_{n})$, where

$$v_{k}\equiv V(w_{k},x_{k},y_{k}). \qquad (20)$$

By using the vector $\bm{v}$ obtained in this way, we can rewrite the ODE (11) as $d\bm{x}/dt=\bm{v}$.
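The intermediate variables $z_{i}$, $w_{k}$ and the functions $U$, $V$ map directly onto vectorized operations. The following sketch (our own, with hypothetical function names) computes $\bm{v}$ as in (16)-(20); like (16), it is valid only when every $x_{k}$ is nonzero.

```python
def gf_rhs_via_intermediates(x, y, H, alpha=1.0, beta=1.0):
    """v = -(x - y + grad h_{alpha,beta}(x)) computed through the intermediate
    variables z and w of Eqs. (16)-(20); valid only if every x_k is nonzero."""
    z = np.array([np.prod(x[row == 1]) for row in H])   # z_i = prod_{j in A(i)} x_j
    u = (z - 1.0) * z                                    # U(z_i) of Eq. (18)
    w = H.T.astype(float) @ u                            # w_k = sum_{i in B(k)} U(z_i)
    # V(w_k, x_k, y_k) of Eq. (19), applied elementwise
    return -x + y - 4.0 * alpha * (x**2 - 1.0) * x - 2.0 * beta * w / x
```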

Figure 5 presents a configuration of a circuit realizing the gradient flow dynamics of (11). The parity check matrix

$$\bm{H}=\begin{pmatrix}1&1&1&0&0&0\\ 0&0&1&1&0&0\\ 0&0&0&1&1&1\end{pmatrix} \qquad (21)$$

is assumed in this example. The vector $\bm{x}=(x_{1},x_{2},\ldots,x_{n})^{T}$ enters the circuit from the left. According to the Tanner graph of the parity check matrix $\bm{H}$, the intermediate variables $\bm{z}$ and $\bm{w}$ are evaluated. Applying the function $V$ to $\bm{w}$, we finally obtain $\bm{v}=-(\bm{x}-\bm{y}+\nabla h_{\alpha,\beta}(\bm{x}))$. By integrating the vector $\bm{v}$ with respect to time $t$, we recover $\bm{x}$, which is fed back to the input as in Fig. 5. An analog domain integrator converts $\dot{\bm{x}}$ to $\bm{x}$.

Figure 5: Circuit diagram of gradient flow decoding.

There are several remarks on the analog circuit depicted in Fig. 5. Notably, the circuit lacks a component for initializing the value of $\bm{x}$ at the beginning. Given that the function $V(w,x,y)$ involves division by $x$, an initial point of $\bm{0}$ is not feasible. An alternative is to use an initial vector $\bm{x}_{0}=\delta\bm{y}$, where $\delta$ is a sufficiently small real number.

VII BER Performance

We evaluated the bit error rate (BER) of GF decoding through computer simulations on rate-1/2 (3,6)-regular LDPC codes (96.33.964, 204.33.484, PEGReg252x504, PEGReg504x1008) [21]. We used BP decoding as the baseline, with the maximum number of BP iterations set to 100. The parameter setting of GF decoding is summarized in Table I.

Figure 6 displays the BER performance of the proposed GF decoding. Compared to BP, GF decoding has a BER performance that is approximately 2 dB worse. Notably, for PEGReg504x1008, the GF decoding performance is almost on par with the multi-bit mode GDBF algorithm using 100 iterations ([2], Fig. 3). Overall, the BER performance of GF decoding is comparable to that of bit flipping-type decoding algorithms.

TABLE I: Parameter setting of GF decoding.
Potential energy parameters: $\alpha=1$, $\beta=2$
Parameters of Euler method: $T=10$, $N=1000$
Sampling time: $t=10$
Encoding: uniformly random codeword
Figure 6: Bit error rate of the GF decoding.
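A basic Monte Carlo estimate of the BER of GF decoding can be organized as in the sketch below. It reuses the hypothetical `gf_decode_euler` helper from Section IV-C and, for simplicity, transmits the all-zero codeword (the all-ones vector in bipolar form) rather than uniformly random codewords as in Table I; this is an illustration, not the authors' simulation code.

```python
def ber_gf_decoding(H, snr_db, num_trials=1000, alpha=1.0, beta=2.0, T=10.0, N=1000):
    """Monte Carlo BER estimate for GF decoding (illustrative sketch)."""
    m, n = H.shape
    R = 1.0 - m / n                                      # design code rate
    sigma = np.sqrt(0.5 * 10.0**(-snr_db / 10.0) / R)    # Eq. (15)
    bit_errors = 0
    for _ in range(num_trials):
        s = np.ones(n)                                   # all-zero codeword in bipolar form
        y = s + sigma * np.random.randn(n)               # AWGN channel, Eq. (9)
        x_hat = gf_decode_euler(y, H, alpha, beta, T, N)
        bit_errors += np.count_nonzero(x_hat != s)
    return bit_errors / (num_trials * n)
```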

VIII Concluding summary

We presented continuous-time dynamical systems for decoding LDPC codes. Specifically, the GF decoding is based on the gradient flow dynamics of the potential energy function (10). We demonstrated that the decoding performance of the GF decoding is comparable to that of the multi-bit mode GDBF algorithm [2]. Advancements in programmable analog integrated circuits may make practical implementation feasible in the near future.

Acknowledgment

This work was supported by JSPS KAKENHI Grant Number JP22H00514.

References

  • [1] R. G. Gallager, Low density parity check codes.   MIT Press, 1963.
  • [2] T. Wadayama, K. Nakamura, M. Yagita, Y. Funahashi, S. Usami, and I. Takumi, “Gradient descent bit flipping algorithms for decoding LDPC codes,” IEEE Transactions on Communications, vol. 58, no. 6, pp. 1610–1614, 2010.
  • [3] B. Ulmann, Analog and hybrid computer programming.   Walter de Gruyter, 2020.
  • [4] S. Köppel, B. Ulmann, L. Heimann, and D. Killat, “Using analog computers in today’s largest computational challenges,” Advances in Radio Science, vol. 19, pp. 105–116, 2021.
  • [5] L. Lu, J. Wu, T. Wang, and Y. Su, “Compact all-optical differential-equation solver based on silicon microring resonator,” Frontiers of Optoelectronics, vol. 5, pp. 99–106, 2012.
  • [6] H. Zhang, M. Gu, X. D. Jiang, and et al., “An optical neural chip for implementing complex-valued neural network,” Nature Commun., vol. 12, 2021.
  • [7] J. Capmany and D. Pérez, Programmable Integrated Photonics.   Oxford University Press, 2020.
  • [8] T. Wadayama, “Fully analog noise-resilient dynamical systems storing binary sequence,” IEEE International Symposium on Information Theory (ISIT), 2022.
  • [9] J. Feldman, “Decoding error-correcting codes via linear programming,” Ph.D. thesis, Massachusetts Institute of Technology, 2003.
  • [10] P. O. Vontobel, “Interior-point algorithms for linear-programming decoding,” in IEEE Information Theory and Applications Workshop, 2008.
  • [11] T. Wadayama, “Interior point decoding for linear vector channels based on convex optimization,” IEEE Transactions on Information Theory.
  • [12] G. Sundararajan, C. Winstead, and E. Boutillon, “Noisy gradient descent bit-flip decoding for LDPC codes,” IEEE Transactions on Communications, vol. 62, no. 10, pp. 3385–3400, 2014.
  • [13] X. Zhang and P. H. Siegel, “Efficient iterative LP decoding of LDPC codes with alternating direction method of multipliers,” in IEEE International Symposium on Information Theory (ISIT), 2013.
  • [14] X. Liu and S. C. Draper, “The ADMM penalized decoder for LDPC codes,” IEEE Transactions on Information Theory, vol. 62, no. 6, pp. 2966–2984, 2016.
  • [15] ——, “ADMM LP decoding of non-binary LDPC codes in $\mathbb{F}_{2^{m}}$,” IEEE Transactions on Information Theory, vol. 62, no. 6, pp. 2985–3010, 2016.
  • [16] Y. Wang and J. Bai, “Proximal-ADMM decoder for nonbinary LDPC codes,” arXiv preprint arXiv:2010.09534, 2020.
  • [17] A. Nakai-Kasai and T. Wadayama, “MMSE signal detection for MIMO systems based on ordinary differential equation,” IEEE Global Communications Conference (GLOBECOM), 2022.
  • [18] T. Wadayama and A. Nakai-Kasai, “Ordinary differential equation-based sparse signal recovery,” International Symposium on Information Theory and Its Applications (ISITA), 2022.
  • [19] T. Wadayama and S. Takabe, “Proximal decoding for LDPC-coded massive MIMO channels,” IEEE International Symposium on Information Theory (ISIT), 2021.
  • [20] D. F. Griffiths and D. J. Higham, Numerical Methods for Ordinary Differential Equations.   Springer, 2010.
  • [21] D. J. C. MacKay, “Encyclopedia of sparse graph codes [online],” Available: http://www.inference.phy.cam.ac.uk/mackay/codes/data.html.