Informed Dynamic Scheduling for QLDPC Codes
Abstract
Recent research has shown that syndrome-based belief propagation using layered scheduling (sLBP) can not only accelerate the convergence rate but also improve the error rate performance by breaking the quantum trapping sets for quantum low-density parity-check (QLDPC) codes, showcasing a result distinct from classical error correction codes.
In this paper, we consider edge-wise informed dynamic scheduling (IDS) for QLDPC codes based on syndrome-based residual belief propagation (sRBP). However, the construction of QLDPC codes and the identical prior intrinsic information assignment will result in an equal residual in many edges, causing a performance limitation for sRBP. Two strategies, including edge pool design and error pre-correction, are introduced to tackle this obstacle and quantum trapping sets. Then, a novel sRBP equipped with a predict-and-reduce-error mechanism (PRE-sRBP) is proposed, which can provide a performance gain on considered QLDPC codes of over one order of magnitude compared to sLBP.
1 Introduction
In the field of quantum storage, quantum error correction codes are applied to protect the original information from noise by encoding logical quantum bits (qubits) into a larger number of redundant physical qubits. Recently, quantum low-density parity-check (QLDPC) codes, a type of quantum error correction code, have become a promising candidate for near-term quantum computers owing to their higher coding rate and minimum distance compared to surface codes [1, 2, 3]. For QLDPC codes, the binary [4] or the 4-ary [5] isomorphism to elements in Pauli groups allows the application of binary and non-binary syndrome-based belief propagation (sBP) decoders [6, 7]. The binary sBP is considered in this paper owing to its lower complexity and potential for low-latency hardware implementation.
In order to simultaneously detect both bit-flip and phase-flip errors in quantum error correction, QLDPC codes need to be designed to satisfy orthogonal measurement requirements. However, such construction limitations of quantum codes can result in quantum trapping sets, which limit decoding performance [8]. For iterative decoders such as BP, a quantum trapping set refers to a part of the error on which the decoder either fails to converge or converges to an error whose syndrome differs from the given syndrome input, causing the decoder to suffer from an error floor.
In [9, 10], it was shown that by appending the ordered statistics decoder (OSD) to sBP, the error floor can be reduced, which indicates the mitigation of the impact from trapping sets. However, the significant improvement in error rate using sBP+OSD comes at the cost of high complexity due to the Gaussian elimination used to solve linear equations, making it challenging to design low-latency hardware for the decoder [11]. The study in [8] demonstrates that using sequential scheduling methods, such as layered sBP (sLBP), can not only accelerate convergence but also reduce the impact of the trapping set, which inspires us to explore different scheduling strategies for sBP based on QLDPC codes.
In classical error correction, unlike fixed sequential scheduling, where the update order is predetermined before the decoder begins, informed dynamic scheduling (IDS) [12] is an edge-wise decoder, which can mitigate the impact of classical trapping sets by dynamically changing the update order of edges, further accelerating the convergence.
A common IDS instance is residual belief propagation (RBP) [13]. At each step, RBP updates the edge of the check-to-variable (C2V) message corresponding to the maximum difference between the pre-computed C2V message and the current message, known as the residual. Although the convergence rate can be accelerated using RBP, the error-rate performance after a sufficiently large number of iterations is even worse than that of LBP due to the greedy-group phenomenon [14]. The RBP decoder tends to greedily update a small group of edges for certain errors, neglecting the contribution of intrinsic information from other edges, resulting in convergence failure and an error-rate loss. Several RBP-based variants have been proposed that alleviate the greedy update of RBP and can further improve the error-rate performance [15, 16].
Although IDS and its variations provide a faster convergence rate for classical LDPC codes, their effect on quantum codes has not yet been well studied. Unlike classical error correction, the transmitted information cannot be measured directly; otherwise, the qubit state will collapse [4]. Thus, a constant value that depends on the given fixed channel probability is identically assigned to each variable node as prior intrinsic information. In addition, due to the orthogonal measurement requirements, quantum codes contain more symmetric structures than classical codes. Under these effects, many edges will exist with equal residuals, reducing the chance of sRBP converging to the true syndrome.
To tackle the obstacle for QLDPC codes, we propose two strategies: 1) Design an edge pool providing diverse edges to enhance the chance of updating edges outside the greedy group or trapping set. 2) Predict an error position by using the given syndrome information and reduce the impact of this associated error. Based on these strategies, we propose a new sRBP equipped with a predict-and-reduce-error mechanism (PRE-sRBP). The edge pool design in PRE-sRBP can provide abundant different edges each time, and the assistance of the estimated support sequence can further mitigate the impact of the trapping set. Simulation results show that for bicycle codes [6], the elaborately designed edge pool for an sRBP-based decoder can not only yield better error-rate performance but also provide a faster convergence rate than sLBP. For codes with many quantum trapping sets, such as the hypergraph-product (HP) code [17], simulation results show that an intense mechanism, such as the predict-and-reduce-error mechanism, is needed in order to improve the performance. As a result, the proposed PRE-sRBP can achieve superior error-rate performance and comparable complexity to other sRBP-based decoders on different QLDPC codes. For example, the proposed PRE-sRBP can provide an improvement of over one order of magnitude compared to other sRBP-based decoders on an HP code.
The remainder of this paper is organized as follows. In Section 2, we introduce quantum error correction codes and the sBP. The impacts of cycles and quantum trapping sets are also introduced. In Section 3, we present several syndrome-based IDS algorithms based on sRBP using the concept of the edge pool and then describe the challenges and the design strategies for sRBP on QLDPC codes. In Section 4, based on these design strategies, the proposed PRE-sRBP is introduced. Section 5 shows the simulation results and a complexity analysis. Finally, a conclusion is provided in Section 6.
2 Background
2.1 Quantum error codes
In a single-qubit system, Pauli operators describe the operations acting on a single qubit, consisting of $I$, $X$, $Y$, and $Z$, where $Y = iXZ$ and each pair of distinct non-identity elements mutually anti-commutes [4]. For example, $X$ and $Z$ denote a bit-flip and a phase-flip on a single-qubit state, respectively. The Pauli group $\mathcal{P}_1$ is formed by the Pauli operators together with the multiplicative factors $\pm 1$ and $\pm i$. For an $n$-qubit system, the general Pauli group $\mathcal{P}_n$ is considered as the $n$-fold tensor product of $\mathcal{P}_1$.
Quantum stabilizer codes are well studied due to their similar form to classical linear codes [18, 19]. An $[[n,k,d]]$ stabilizer code $\mathcal{C}$ encodes $k$ qubits into an $n$-qubit codeword with a minimum distance $d$. As its name suggests, $\mathcal{C}$ is defined by its stabilizer group $\mathcal{S} \subset \mathcal{P}_n$, where $-I \notin \mathcal{S}$. All the elements in $\mathcal{S}$ mutually commute to ensure that bit-flips and phase-flips can be detected simultaneously. Furthermore, $\mathcal{S}$ can be characterized by $n-k$ generators $g_1, \dots, g_{n-k}$, i.e., $\mathcal{S} = \langle g_1, \dots, g_{n-k} \rangle$. An $n$-qubit codeword $|\psi\rangle$ is then a common $+1$ eigenstate of all the generators, i.e., $g_i |\psi\rangle = |\psi\rangle$ for all $i$. Since a Pauli operator can be expressed as a binary string of length $2n$, the set of generators can then be represented as a binary matrix $H = [H_X \mid H_Z]$, where $H_X, H_Z \in \mathbb{F}_2^{(n-k) \times n}$. By using the binary representation, the commutativity of the generators is then expressed as the orthogonality of the rows under the symplectic product, equivalently,
$H_X H_Z^{\top} \oplus H_Z H_X^{\top} = \mathbf{0}, \quad (1)$
where $\oplus$ denotes addition under mod 2.
It can be seen that identifying two matrices $H_X$ and $H_Z$ satisfying the constraint (1) is not trivial. Thus, a class of stabilizer codes, called Calderbank-Shor-Steane (CSS) codes, is widely used [20, 21]. The parity-check matrix for a CSS code has the form
$H = \begin{bmatrix} H_X & \mathbf{0} \\ \mathbf{0} & H_Z \end{bmatrix}. \quad (2)$
Thus, the equation (1) can be reduced to $H_X H_Z^{\top} = \mathbf{0}$. Suppose two classical linear block codes $C_1$ and $C_2$ satisfy the relationship $C_2^{\perp} \subseteq C_1$; then the parity-check matrix $H_1$ of the code $C_1$ and $H_2$ of $C_2$ can form a CSS code with $H_X = H_1$ and $H_Z = H_2$, since $H_1 H_2^{\top} = \mathbf{0}$, where $H_2$ is also the generator matrix of $C_2^{\perp}$. If $C_1$ and $C_2$ are LDPC codes, i.e., the non-zero elements in their parity-check matrices are sparse, then the corresponding quantum code is called a QLDPC code. The QLDPC codes considered in this paper, including the bicycle code [6], the generalized bicycle code [9], and the HP code [17], are all CSS codes.
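As a quick illustration of the CSS condition, the following Python snippet (our own toy example, not one of the codes studied in this paper) checks $H_X H_Z^{\top} = \mathbf{0} \pmod 2$ for the classical [7,4] Hamming code, which contains its dual and therefore yields a valid CSS pair (the Steane code):

```python
import numpy as np

# Toy check of the CSS condition Hx Hz^T = 0 (mod 2). The [7,4] Hamming
# code contains its dual, so taking C1 = C2 yields a valid CSS pair.
H_hamming = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
Hx = Hz = H_hamming
print((Hx @ Hz.T) % 2)   # prints the all-zero 3x3 matrix
```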
2.2 Quantum channel and syndrome-based BP (sBP)
For an $n$-qubit system, any arbitrary error pattern can be digitalized and then represented as an element of the $n$-fold Pauli group, which consists of tensor products of $I$, $X$, $Y$, and $Z$ [22]. Thus, the noise model can be viewed as two independent channels: a bit-flip ($X$) channel with probability $p_x$ and a phase-flip ($Z$) channel with probability $p_z$. Since all the codes considered in this paper are CSS codes, neglecting the correlation between $X$ and $Z$ errors, it is sufficient to decode using just one type of parity-check matrix and error channel. For instance, the bit-flip channel with crossover probability $\epsilon$ and a single parity-check matrix $H$ are considered in this paper.
To apply binary BP to a given error correction code, the parity-check matrix of the code is first transformed into a factor graph (or Tanner graph [23]), which consists of check nodes $c_i$ and variable nodes $v_j$, corresponding to the $i$th row and the $j$th column of the parity-check matrix, respectively. Let $d_{c_i}$ be the degree of check node $c_i$. Then each check node $c_i$ has $d_{c_i}$ edges linked to its neighboring variable nodes $v_j$, where $j \in \mathcal{N}(c_i)$ and $|\mathcal{N}(c_i)| = d_{c_i}$. Likewise, each variable node $v_j$ has $d_{v_j}$ edges linked to its neighboring check nodes $c_i$, where $d_{v_j}$ is the degree of variable node $v_j$, $i \in \mathcal{N}(v_j)$, and $|\mathcal{N}(v_j)| = d_{v_j}$.
To avoid confusion in this paper, we refer to an error vector $\mathbf{e} \in \mathbb{F}_2^n$ as an error pattern. The index of each coordinate of $\mathbf{e}$ is referred to as an error position or a variable node in error (in the Tanner graph). The support of $\mathbf{e}$, denoted as $\mathrm{supp}(\mathbf{e})$, is the set of indices corresponding to the non-zero error positions in $\mathbf{e}$, i.e., $\mathrm{supp}(\mathbf{e}) = \{j : e_j \neq 0\}$. Then, the weight of $\mathbf{e}$ is the cardinality of $\mathrm{supp}(\mathbf{e})$, denoted as $\mathrm{wt}(\mathbf{e}) = |\mathrm{supp}(\mathbf{e})|$. The syndrome corresponding to an error pattern $\mathbf{e}$ can be computed as $\mathbf{s} = H\mathbf{e}^{\top} \bmod 2$. Likewise, a syndrome position (or check node), the support of a syndrome $\mathrm{supp}(\mathbf{s})$, and the weight $\mathrm{wt}(\mathbf{s})$ are used under similar definitions as for an error pattern.
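This notation can be made concrete with a small example. The following snippet, using a hypothetical 3x5 parity-check matrix of our own choosing, computes the support, weight, and syndrome of an error pattern:

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0],    # hypothetical 3x5 parity-check matrix
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 0]])
e = np.array([0, 1, 0, 0, 1])     # an error pattern
supp_e = np.flatnonzero(e)        # supp(e) = {1, 4}
wt_e = supp_e.size                # wt(e) = 2
s = H @ e % 2                     # syndrome s = H e^T mod 2
print(supp_e, wt_e, s)            # [1 4] 2 [1 0 0]
```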
A difference in using BP on quantum codes is that the intrinsic information from the channel cannot be measured directly due to the collapse of the quantum state. Instead, the syndrome measured on the auxiliary qubits is the only information that can be obtained [4]. Hence, the so-called syndrome-based BP (sBP) is used [24]. Considering the sBP decoder in the log domain [25], where the log-likelihood ratio (LLR) is used for message passing, we denote the message from $c_i$ to its neighboring $v_j$ at the $t$th iteration as $m^{(t)}_{c_i \to v_j}$, and the message from $v_j$ to $c_i$ as $m^{(t)}_{v_j \to c_i}$. Then, due to the lack of distinct prior intrinsic information for each variable node $v_j$, at the $0$th iteration, the prior LLR $\Lambda_j$ and the initial messages $m^{(0)}_{v_j \to c_i}$ are all assigned the identical constant value $\ln\frac{1-\epsilon}{\epsilon}$ for all $j$, given $\epsilon$. Then the message from $c_i$ to $v_j$ is computed according to
$m^{(t)}_{c_i \to v_j} = (-1)^{s_i} \cdot 2\tanh^{-1}\!\left( \prod_{j' \in \mathcal{N}(c_i)\setminus\{j\}} \tanh\!\left( \frac{m^{(t-1)}_{v_{j'} \to c_i}}{2} \right) \right), \quad (3)$
where $s_i \in \{0, 1\}$ denotes the $i$th syndrome position.
In the next iteration, $m^{(t)}_{v_j \to c_i}$ is computed according to
$m^{(t)}_{v_j \to c_i} = \Lambda_j + \sum_{i' \in \mathcal{N}(v_j)\setminus\{i\}} m^{(t)}_{c_{i'} \to v_j}. \quad (4)$
The element-wise LLR at the $t$th iteration for each $v_j$ can be computed as
$\Lambda^{(t)}_j = \Lambda_j + \sum_{i \in \mathcal{N}(v_j)} m^{(t)}_{c_i \to v_j}, \quad (5)$
and the estimated error $\hat{e}^{(t)}_j = 1$ if $\Lambda^{(t)}_j < 0$; otherwise, $\hat{e}^{(t)}_j = 0$. Then the estimated syndrome at the $t$th iteration is $\hat{\mathbf{s}}^{(t)} = H (\hat{\mathbf{e}}^{(t)})^{\top}$. If $\hat{\mathbf{s}}^{(t)}$ is consistent with the given syndrome $\mathbf{s}$, the decoder is declared to have converged. Otherwise, the sBP decoder repeats (3) and (4) until either the decoder converges or the maximum number of iterations is reached. The sBP introduced above is sometimes referred to as flooding scheduling since, in each iteration, all the C2V messages are updated, followed by an update of all the variable-to-check (V2C) messages.
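The update rules (3)-(5) translate directly into code. Below is a minimal, self-contained Python sketch of flooding sBP under the bit-flip channel; the function name and structure are ours, and the clipping constant is a numerical safeguard rather than part of the algorithm:

```python
import numpy as np

def sbp_decode(H, s, eps, max_iter=32):
    """Minimal flooding sBP in the log domain, following eqs. (3)-(5).
    H: (m, n) binary numpy array; s: length-m 0/1 numpy array."""
    m, n = H.shape
    prior = np.log((1.0 - eps) / eps)                 # identical prior LLR for all v_j
    Nc = [np.flatnonzero(H[i]) for i in range(m)]     # N(c_i)
    Nv = [np.flatnonzero(H[:, j]) for j in range(n)]  # N(v_j)
    v2c = {(i, j): prior for i in range(m) for j in Nc[i]}
    c2v = {(i, j): 0.0 for i in range(m) for j in Nc[i]}
    e_hat = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        for i in range(m):                            # (3): C2V update
            for j in Nc[i]:
                prod = np.prod([np.tanh(v2c[i, jp] / 2.0)
                                for jp in Nc[i] if jp != j])
                prod = np.clip(prod, -1 + 1e-12, 1 - 1e-12)  # numerical safeguard
                c2v[i, j] = (-1) ** s[i] * 2.0 * np.arctanh(prod)
        for j in range(n):                            # (4): V2C update
            for i in Nv[j]:
                v2c[i, j] = prior + sum(c2v[ip, j] for ip in Nv[j] if ip != i)
        llr = np.array([prior + sum(c2v[i, j] for i in Nv[j])
                        for j in range(n)])           # (5): posterior LLR
        e_hat = (llr < 0).astype(int)
        if np.array_equal(H @ e_hat % 2, s):          # converged to the given syndrome
            return e_hat, True
    return e_hat, False
```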
2.3 Quantum trapping sets
In the Tanner graph, a cycle is formed by a set of check and variable nodes that create a closed loop, and the cycle length is determined by the number of edges it contains. As mentioned in Section 2.1, a CSS code is built from two classical codes that satisfy the symplectic-product relationship, leading to more cycles, even length-4 cycles, compared to classical codes, which induces multiple quantum trapping sets. Quantum trapping sets consist of classical-type trapping sets and symmetric stabilizer trapping sets. The classical type includes a set of variable nodes that cannot converge correctly even after a sufficiently large number of iterations [8, Definition 1]. The subgraph induced by this kind of trapping set contains odd-degree check nodes, and the most problematic trapping sets contain degree-1 and degree-2 check nodes in their induced subgraphs [26].
In contrast, symmetric stabilizer trapping sets refer to sets of variable nodes that converge incorrectly, where the corresponding syndrome is inconsistent with the given syndrome [8, Definition 4]. The symmetric stabilizer trapping set, which is unique to quantum codes, is related to code construction. Certain quantum codes have numerous sets of variable and check nodes whose induced subgraphs have no odd-degree check nodes and can be partitioned into symmetric disjoint subsets [8, Definition 5]. Under these conditions, it is possible that, given a syndrome, there are two equal-weight error patterns in the subgraph. As a result, the iterative decoder cannot determine which one should be converged upon and often converges to the sum of these two error patterns, leading to a zero syndrome. According to [8], HP codes suffer from an error floor due to the abundance of quantum trapping sets.
3 Informed dynamic scheduling (IDS) for QLDPC codes
If the rows (or columns) of the parity-check matrix are divided into different layers, and the messages are updated layer by layer, then this kind of fixed sequential scheduling, called layered BP (LBP) [27, 28], can achieve twice the convergence rate while maintaining the error-rate performance for classical error correction codes. It was demonstrated in [8] that syndrome-based LBP (sLBP) can improve not only the convergence speed but also the error-floor performance for QLDPC codes by solving quantum trapping sets, mainly the symmetric-stabilizer type. In classical error correction, dynamic scheduling can further alleviate the impact of trapping sets, which motivates us to explore the potential of syndrome-based IDS for QLDPC codes. A minimal sketch of a layered schedule is given below.
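The following Python sketch illustrates one way to realize syndrome-based layered BP, treating each check node as its own layer so that updated C2V messages are consumed immediately. It is a sketch under our own assumptions, not the paper's implementation:

```python
import numpy as np

def slbp_decode(H, s, eps, max_iter=32):
    """Layered sBP sketch: one check node per layer; freshly updated C2V
    messages are consumed immediately within the same iteration."""
    m, n = H.shape
    prior = np.log((1.0 - eps) / eps)
    Nc = [np.flatnonzero(H[i]) for i in range(m)]
    Nv = [np.flatnonzero(H[:, j]) for j in range(n)]
    c2v = np.zeros((m, n))
    e_hat = np.zeros(n, dtype=int)
    for _ in range(max_iter):
        for i in range(m):                    # process layers sequentially
            for j in Nc[i]:
                # V2C messages built on the fly from the *current* C2V state
                t = [np.tanh((prior + sum(c2v[ip, jp] for ip in Nv[jp] if ip != i)) / 2.0)
                     for jp in Nc[i] if jp != j]
                prod = np.clip(np.prod(t), -1 + 1e-12, 1 - 1e-12)
                c2v[i, j] = (-1) ** s[i] * 2.0 * np.arctanh(prod)
        llr = np.array([prior + sum(c2v[i, j] for i in Nv[j]) for j in range(n)])
        e_hat = (llr < 0).astype(int)
        if np.array_equal(H @ e_hat % 2, s):
            return e_hat, True
    return e_hat, False
```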
3.1 Syndrome-based IDS
Unlike fixed scheduling, where the update order of the check or variable node is predetermined sequentially before the iteration begins, IDS, an edge-wise decoder, dynamically changes the update order of the edges according to the residual
$r(m) = \left\| m^{\text{new}} - m^{\text{cur}} \right\|, \quad (6)$
where the residual $r(m)$ is the norm of the difference between the pre-computed message $m^{\text{new}}$ and the current message $m^{\text{cur}}$. For instance, the specific criterion of RBP [13] is to determine the update order according to the residual of the C2V message, which is defined as
$r_{c_i \to v_j} = \left| m^{\text{new}}_{c_i \to v_j} - m^{\text{cur}}_{c_i \to v_j} \right|. \quad (7)$
The criterion is based on the idea that a larger residual indicates lower message reliability, necessitating more message updates to achieve convergence. Therefore, prioritizing updates to these edges can accelerate the convergence rate. Additionally, following the edge-wise criterion, the update order can break the constraints of the topological structure imposed by fixed scheduling, which may lead to an improvement in error-rate performance.
3.2 The edge pool for sRBP
Several algorithms based on sRBP for quantum codes are presented in this section. The concept of the edge pool is introduced to aid in distinguishing the similarities and differences between the algorithms. Let $c^{\ast}$ and $v^{\ast}$ respectively denote the chosen check node and variable node corresponding to the selected edge $(c^{\ast}, v^{\ast})$, where $(c_i, v_j)$ represents an edge connecting check node $c_i$ and variable node $v_j$, and an edge pool $\mathcal{E}$ is a set of edges. Then, after computing the residuals of all edges by using (7), we can arrange sRBP and its variant algorithms into three main steps:
- S.1 Select the $c^{\ast}$ and $v^{\ast}$ corresponding to the maximum residual among all edges in the edge pool $\mathcal{E}$ and update the edge from $c^{\ast}$ to $v^{\ast}$ by (3). The residual of the updated edge is then set to zero.
- S.2 Update each V2C message from $v^{\ast}$ to its neighboring check nodes other than $c^{\ast}$ by (4), and pre-compute the new C2V messages and residuals affected by these V2C updates.
- S.3 Select the next $c^{\ast}$ and $v^{\ast}$ as in S.1, but from a new edge pool $\mathcal{E}'$, which depends on the specific decoder.
The decoder repeats the three steps above within a single iteration, where an iteration is conventionally defined as the point at which the number of C2V updates reaches the total number of undirected edges in the parity-check matrix, owing to the unbalanced updates of an edge-wise decoder. It can be observed that S.2 depends only on the choice of $c^{\ast}$ and $v^{\ast}$ in S.1. As a result, different IDSs concentrate on designing $\mathcal{E}$ and $\mathcal{E}'$. The design for sRBP is shown in the Edge pool for sRBP, and a self-contained sketch of the resulting edge-wise loop is given below.
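Putting S.1-S.3 together, a minimal sketch of sRBP with the full edge pool (i.e., $\mathcal{E}' = \mathcal{E}$) might look as follows. Checking convergence after every single edge update is done here for clarity only; a practical implementation would amortize that check:

```python
import numpy as np

def srbp_decode(H, s, eps, max_iter=8):
    """sRBP sketch following steps S.1-S.3 with the full edge pool."""
    m, n = H.shape
    prior = np.log((1.0 - eps) / eps)
    Nc = [np.flatnonzero(H[i]) for i in range(m)]
    Nv = [np.flatnonzero(H[:, j]) for j in range(n)]
    v2c = {(i, j): prior for i in range(m) for j in Nc[i]}
    c2v = {e: 0.0 for e in v2c}

    def new_c2v(i, j):                              # pre-compute a C2V message by (3)
        prod = np.prod([np.tanh(v2c[i, jp] / 2.0) for jp in Nc[i] if jp != j])
        prod = np.clip(prod, -1 + 1e-12, 1 - 1e-12)
        return (-1) ** s[i] * 2.0 * np.arctanh(prod)

    res = {e: abs(new_c2v(*e) - c2v[e]) for e in c2v}   # residuals, eq. (7)
    E = len(res)
    e_hat = np.zeros(n, dtype=int)
    for _ in range(max_iter * E):                   # one iteration = E C2V updates
        i_s, j_s = max(res, key=res.get)            # S.1: largest-residual edge
        c2v[i_s, j_s] = new_c2v(i_s, j_s)
        res[i_s, j_s] = 0.0
        for i in Nv[j_s]:                           # S.2: V2C from v* (skip c*)
            if i == i_s:
                continue
            v2c[i, j_s] = prior + sum(c2v[ip, j_s] for ip in Nv[j_s] if ip != i)
            for j in Nc[i]:                         # refresh affected residuals
                if j != j_s:
                    res[i, j] = abs(new_c2v(i, j) - c2v[i, j])
        llr = [prior + sum(c2v[i, j] for i in Nv[j]) for j in range(n)]
        e_hat = (np.array(llr) < 0).astype(int)
        if np.array_equal(H @ e_hat % 2, s):        # costly per-update check, for clarity
            return e_hat, True
    return e_hat, False
```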
In classical error correction, the convergence rate of RBP is faster than that of both BP and LBP, i.e., the error-rate performance is better in the first few iterations. However, over many iterations, the performance may worsen due to the greedy-group phenomenon [13]. When RBP encounters an error that it cannot solve, it may try to correct the wrong variable node, leading to another unsatisfied check node. As a result, the residuals of the edges related to this incorrect variable node will always be significant. Updates will then always occur within this greedy group of edges, consuming a large number of updates without ever succeeding.
Node-wise RBP (NW-RBP) [15] has been proposed to address this issue. This approach simultaneously updates all edges from the same check node, allowing edges outside the greedy group to be updated and ultimately improving the error rate. In [15], the authors showed how NW-RBP can tackle trapping sets. Suppose a variable node in a trapping set has been solved. In that case, at least one degree-2 check node becomes degree-1, which is likely to be chosen due to its large residual, possibly resulting in the correction of another variable node in the trapping set. However, the performance improvement using NW-RBP comes at a partial cost in convergence rate. For syndrome-based NW-RBP (NW-sRBP), the design is shown in the Edge pool for NW-sRBP.
After updating and propagating the related edges from $v^{\ast}$ to its neighboring check nodes, it is reasonable to expect that not only the messages on the edges from $v^{\ast}$ but also the messages emitted from its neighborhood will be more reliable. Thus, the authors of [16] propose a strategy in which the next update is related to the latest-updated variable node, which they call latest-message-driven RBP (LMD-RBP). It was shown in [16] that LMD-RBP performs better during the first few decoding iterations for the 5G New Radio LDPC codes [30]. The design for the syndrome-based LMD-RBP (LMD-sRBP) is shown in the Edge pool for LMD-sRBP.
3.3 Error weight analysis
We show the solvable error ratio versus the error weight using sBP, sLBP, and other sRBP variants on the bicycle code in Fig. 1.
[Figure 1: Solvable error ratio versus error weight for sBP, sLBP, and sRBP-based decoders on the bicycle code. Source file: ErrWRBP256.pdf]
The results show that sBP can effectively solve error patterns with a weight of 9 or less. However, the solvable ratio then decreases, and sBP cannot solve weights larger than 13, while sLBP solves error patterns with a weight of 10 or less at a high ratio. sRBP can solve almost all error patterns with a weight of 10 or less; however, for weights larger than 11, the ratio decreases rapidly. In contrast, using LMD-sRBP, almost all error patterns with a weight of 10 or less can be solved, and compared to sLBP, the average solvable ratio for weights larger than 11 is also higher.
3.4 Issue for sRBP on QLDPC codes
The results shown in Fig. 1 indicate that there is room for improving the performance of sRBP-based decoders on QLDPC codes. In classical error correction, the residual information of a variable node, which is related to its distinct prior LLR, can be used to assist the decoder in identifying the reliability of edges, resulting in a change of update priority [31, 32, 33].
However, this kind of assistance is limited when considering sRBP on QLDPC codes, since an identical prior LLR is assigned to every variable node, as described in Section 2.2. Additionally, QLDPC codes are often regular, defined in [34] as the parity-check matrix having a constant row weight and a constant column weight, which are not necessarily equal; here, weight refers to the number of non-zero positions in a row or column. This regularity of QLDPC codes, due to the construction limitations introduced in Section 2.1, coupled with the identical prior LLR assignment, results in equal residuals on many edges. Thus, the decoder may struggle to rank the reliability of the C2V residuals and may select an ineffective edge that has no effect on the error positions in the trapping set, leading to a performance limitation.
To address the negative impact of the greedy group and the trapping set, the first strategy is to design an edge pool that can offer various edges whenever a selection needs to be made. In this way, messages have a greater chance of being exchanged both within and outside the greedy group or trapping set, possibly leading to decoding success. The second strategy is to predict an error position possibly belonging to the support of an error pattern and perform a pre-correction. If the error positions involved in a trapping set can be solved first, the effect of the trapping set may be mitigated. However, a candidate error position should be chosen elaborately, since the syndrome is used as the input: otherwise, the syndrome corresponding to the new error pattern after pre-correction would likely differ significantly from the original, limiting decoder performance.
4 The proposed sRBP for QLDPC codes
4.1 A proposed edge pool for QLDPC codes
To provide diversity in the edge pool, we propose an edge pool $\mathcal{E}'$ induced by a variable node $v_t$ and the list of its neighboring check nodes that have not yet been updated, where $v_t$ is adjusted each time a C2V edge is updated.
Given the currently considered variable node $v_t$, each time a C2V edge is to be determined for updating, only the edges neighboring $v_t$ are compared in S.1, and $v_t$ is set to a different variable node for the next C2V update; in this way, we ensure that there must be different edges between consecutive edge pools. The order of the considered $v_t$ can be carefully designed by exploiting the correlation between variable nodes, but in practice, sequentially setting $v_t$ from $v_1$ to $v_n$ is sufficient.
An additional $d_{v_j}$-tuple flag list $F_j$ is further introduced for the C2V comparison of each $v_j$. Each element of $F_j$, denoted as $F_{j,i}$, corresponds to the neighboring check node $c_i$ and is initialized to 0. Each time a C2V edge is to be found from the edge pool induced by $v_j$, only edges from check nodes $c_i$ with $F_{j,i} = 0$ are considered. When a C2V edge $(c_i, v_j)$ is updated in S.1, we set $F_{j,i} = 1$. In this way, we further provide different edges between consecutive edge pools induced by $v_j$. To ensure that at least one edge can always be updated from the pool induced by $v_j$, once the number of neighboring check nodes with $F_{j,i} = 1$ reaches $d_{v_j}$, all $F_{j,i}$ are reset to 0.
In summary, the proposed pool design delivers a diverse edge update order by creating abundant distinct edges for every new edge pool, providing a chance for message exchange within and outside the trapping set. Moreover, the complexity of the proposed design can be lower than that of the other sRBP-based decoders presented in Section 3, since only the C2V edges neighboring $v_t$ with zero flags are involved in the residual comparison. The design of $\mathcal{E}$ and $\mathcal{E}'$ is shown in the Proposed Edge pool design, and a sketch of the selection rule is given below.
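The following sketch captures our reading of the selection rule, assuming the reset fires once every neighboring check node of $v_t$ has been flagged; the data-structure names are illustrative:

```python
def select_edge(v_t, nbr_c, flags, residual):
    """Pick the C2V edge for the pool induced by v_t: only edges whose flag
    F_{v_t, c} is 0 compete; the winner is flagged, and the flags reset once
    every neighboring check node of v_t has been used."""
    pool = [c for c in nbr_c[v_t] if flags[v_t][c] == 0]
    c_star = max(pool, key=lambda c: residual[(c, v_t)])
    flags[v_t][c_star] = 1
    if all(flags[v_t][c] == 1 for c in nbr_c[v_t]):   # all neighbors used
        for c in nbr_c[v_t]:
            flags[v_t][c] = 0                         # reset so the pool is never empty
    return c_star

# Example: v_t = 0 with neighboring checks {0, 2}; all flags initially 0.
nbr_c = {0: [0, 2]}
flags = {0: {0: 0, 2: 0}}
residual = {(0, 0): 0.7, (2, 0): 1.3}
print(select_edge(0, nbr_c, flags, residual))   # -> 2, and F_{0,2} is set
```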
4.2 Error prediction from the syndrome
To predict a candidate error position that likely belongs to $\mathrm{supp}(\mathbf{e})$ under the given syndrome $\mathbf{s}$, Proposition 1 is given to assist us in identifying such candidates more accurately.
Proposition 1.
Consider a QLDPC code in which the check-node degree is a constant $d_c$, used over the bit-flip channel with crossover probability $\epsilon$. A priority for an estimated error position $j$ belonging to $\mathrm{supp}(\mathbf{e})$ given the syndrome $\mathbf{s}$ can be determined by the value $r_j = d_{v_j} - 2|\mathcal{M}_j|$, where $d_{v_j}$ is the degree of $v_j$, and $|\mathcal{M}_j|$ denotes the cardinality of the set of check nodes neighboring $v_j$ and belonging to the support of the given syndrome, i.e., $\mathcal{M}_j = \{i \in \mathcal{N}(v_j) : s_i = 1\}$.
Proof.
The proof is detailed in Appendix A. ∎
A lower value of $r_j$ means a higher ratio of the check nodes neighboring $v_j$ being involved in the given syndrome, which implies that the $j$th error position likely belongs to $\mathrm{supp}(\mathbf{e})$. Thus, we can obtain all candidate error positions by computing $r_j$ for every variable node and sorting in ascending order. A sketch of this computation is given below, and a case is presented in Example 1.
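A compact sketch of this ranking, assuming the value from Proposition 1 reduces to $r_j = d_{v_j} - 2|\mathcal{M}_j|$ as derived in Appendix A:

```python
import numpy as np

def estimated_support_sequence(H, s):
    """Rank variable nodes by r_j = d_{v_j} - 2|M_j| (Proposition 1), where
    M_j is the set of neighboring check nodes inside supp(s); the most
    likely error positions come first."""
    d_v = H.sum(axis=0)                          # variable-node degrees d_{v_j}
    M = H[np.asarray(s) == 1, :].sum(axis=0)     # |M_j| for every v_j
    r = d_v - 2 * M
    return np.argsort(r, kind="stable")          # ascending r_j
```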
Example 1.
Consider a bicycle code with a given syndrome $\mathbf{s}$. By computing $r_j$ for each $v_j$ and sorting in ascending order, we obtain a sequence of variable-node indices together with the corresponding values $r_j$. For this syndrome, the support of the corresponding error pattern consists of exactly the first $\mathrm{wt}(\mathbf{e})$ elements in this sequence. ∎
Based on Proposition 1, we can build a sequence that reflects the likelihood of each position belonging to $\mathrm{supp}(\mathbf{e})$ for a given syndrome, where the most likely error positions occur at the beginning of the sequence, hereafter called the estimated support sequence. Since not every element of the support of an error pattern will appear in the first few positions of the estimated support sequence, we must carefully reduce the effect of one error position in the sequence at a time. Suppose an error position belonging to the support is selected. In that case, we can obtain a new syndrome by reducing its corresponding syndrome contribution from the given syndrome, which might lead to the success of the sRBP. If a selected position is not in the support, the following position in the sequence is considered for a new decoding process.
4.3 The proposed sRBP with a predict-and-reduce-error mechanism (PRE-sRBP)
Now, we introduce our proposed algorithm. Denote $\lambda$ as the maximum number of error positions that we select from the estimated support sequence. Each time we select an error position from the sequence for the decoder, it is referred to as a trial. The algorithm begins with the first position in the sequence, i.e., $t = 1$. For each trial $t$, we record the $n$-tuple standard error pattern $\mathbf{e}_t$, in which only the $j_t$th position is 1, where $j_t$ is the $t$th element in the estimated support sequence. To reduce the impact of $\mathbf{e}_t$, we compute its corresponding syndrome $\mathbf{s}_t = H\mathbf{e}_t^{\top}$ and then compute the input syndrome for the decoder presented in Section 4.1 as $\tilde{\mathbf{s}} = \mathbf{s} \oplus \mathbf{s}_t$. The decoder for each trial operates $T$ iterations; thus, the total maximum number of iterations becomes $\lambda T$.
If the decoder converges to an estimated error pattern $\hat{\mathbf{e}}$, it terminates and outputs the final estimated error pattern $\hat{\mathbf{e}} \oplus \mathbf{e}_t$. If the decoder does not converge when the number of iterations reaches $T$, it moves to the next position in the sequence until $t = \lambda$; otherwise, it declares that the decoder has failed to converge. We refer to this sRBP equipped with a predict-and-reduce-error mechanism as PRE-sRBP; the details are provided in Algorithm 1, and a compact sketch of the trial loop is given below.
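A compact sketch of the trial loop, assuming `srbp_decode` and `estimated_support_sequence` behave like the earlier sketches (this is our reading of Algorithm 1, not its verbatim listing):

```python
import numpy as np

def pre_srbp(H, s, eps, lam, T, srbp_decode, estimated_support_sequence):
    """Predict-and-reduce-error wrapper: one trial per candidate position."""
    seq = estimated_support_sequence(H, s)
    e_hat = np.zeros(H.shape[1], dtype=int)
    for t in range(lam):
        e_t = np.zeros(H.shape[1], dtype=int)
        e_t[seq[t]] = 1                      # standard error pattern for trial t
        s_t = H @ e_t % 2                    # its syndrome contribution
        s_tilde = (s + s_t) % 2              # reduced syndrome fed to the decoder
        e_hat, ok = srbp_decode(H, s_tilde, eps, max_iter=T)
        if ok:
            return (e_hat + e_t) % 2, True   # add back the reduced position
    return e_hat, False
```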
4.4 Error weight analysis
Fig. 2 shows the solvable ratio using PRE-sRBP. In comparison to the decoders shown in Fig. 1, the algorithm can solve error patterns of larger weight, although the solvable ratio decreases for the largest collected weights.
The impact of trapping sets may be mitigated if a variable node involved in the trapping set is selected from the estimated support sequence. A scenario demonstrating the effect of this selection on the decoder is shown in Example 2.
[Figure 2: Solvable error ratio versus error weight using PRE-sRBP on the bicycle code. Source file: ErrWCan256.pdf]
Example 2.
Consider the bicycle code with a given syndrome $\mathbf{s}$ whose corresponding error pattern $\mathbf{e}$ has a small support, and obtain the estimated support sequence from Proposition 1. With this given syndrome, the iterative decoder induces the trapping set shown in Fig. 3. In the figure, circles and squares represent the variable and check nodes, respectively, and the black and red margins indicate whether or not a node belongs to the support. Two of the variable nodes in the support each form a length-4 cycle with one of two further variable nodes outside the support, respectively.
Comparing $\mathrm{supp}(\mathbf{e})$ with the estimated support sequence, we can see that one of the first positions selected from the estimated support sequence indeed belongs to $\mathrm{supp}(\mathbf{e})$. However, even though this position actually belongs to $\mathrm{supp}(\mathbf{e})$, and the effect of its corresponding syndrome contribution is reduced, the decoder still fails; the same situation occurs for another selected position in the support. The reason is that neither of them affects the considered trapping set. For both trials, a particular error position outside the support will always be determined to be in the support of the estimated error pattern during the decoding process. Once it is determined, so is its partner variable node, since in the Tanner graph all of its neighboring check nodes belong to the induced subgraph. However, neither of them is in $\mathrm{supp}(\mathbf{e})$, so the syndrome corresponding to the estimated error pattern cannot be consistent with the given syndrome, and the decoder fails. Thus, selecting an error position that is not only in $\mathrm{supp}(\mathbf{e})$ but also involved in the trapping set is crucial in this case. When such a position is selected and reduced, PRE-sRBP is able to break this trapping set and succeeds. ∎
[Figure 3: The trapping set induced in Example 2; circles and squares denote variable and check nodes. Source file: FigureRBP.pdf]
[Figure 4: Solvable error ratio versus error weight on the HP code. Source file: ErrWCan400.pdf]
The HP code is more severely impacted by trapping sets. As shown in Fig. 4, when using LMD-sRBP, the ratio for some high-weight error patterns improves, while for certain weights it remains almost the same as with sBP. Using PRE-sRBP, the solvable ratios for all collected error weights are substantially improved.
5 Performance evaluation
5.1 Parameter set-up for simulations
From Example 2, we can see that the selection of candidate error positions is important for the performance of PRE-sRBP. Apparently, a higher value of $\lambda$ leads to better performance until an upper bound is reached, as shown in Fig. 5.
[Figure 5: Performance of PRE-sRBP versus the number of selected positions $\lambda$. Source file: lamdaDecay.pdf]
However, error patterns on which the decoder fails can waste a lot of decoding time. Considering the latency issue in practical applications, setting a large value of $\lambda$ is impractical. Thus, we set an upper bound on the total number of iterations $\lambda T$ for the simulations. This configuration provides a trade-off between $\lambda$ and $T$ from which a proper $\lambda$ can be determined.
[Figure 6: FER versus the selection of $\lambda$ for the bicycle code. Source file: lamda.pdf]
For instance, considering the bicycle code at a fixed crossover probability, the frame-error rate (FER) versus the selection of $\lambda$ is shown in Fig. 6, which shows that once $\lambda$ is sufficiently large, the FER begins to drop rapidly, consistent with the earlier observation that the solvable ratio of sRBP-based decoders begins to decline for error weights larger than 10. However, $\lambda$ should not exceed 45, so as to ensure a sufficient number of iterations $T$ per trial.
5.2 Simulation results
In each simulation, a sufficient number of logical errors is collected, and the same maximum number of iterations is used for sBP and the syndrome-based IDS decoders unless otherwise specified. In the simulations, sBP uses (3) for C2V updates, and sLBP processes each check node sequentially.
Fig. 7 shows the FER performance versus the bit-flip crossover probability $\epsilon$ using different IDS decoders on the bicycle code. The edge pool design significantly impacts performance. For example, NW-sRBP performs worse than sRBP and sLBP, contrary to the results for classical LDPC codes. With a carefully designed edge pool, LMD-sRBP improves the FER by almost an order of magnitude compared to sLBP. By updating diverse edges and proactively reducing the predicted error position, PRE-sRBP achieves the best performance among the considered IDS decoders.
[Figure 7: FER versus bit-flip crossover probability for the bicycle code. Source file: BYC256FER_new.pdf]
We plot the FER versus the maximum number of iterations in Fig. 8, which shows that sRBP provides a faster convergence rate than sLBP. The figure also indicates that an elaborately designed edge pool can lead to massive improvements in the convergence rate. For example, since the edges neighboring the most recently updated variable nodes are updated first, LMD-sRBP provides the lowest error rate within the first three iterations. For PRE-sRBP, the parameter $\lambda$ is adjusted for each maximum iteration number. The figure shows that PRE-sRBP not only provides a convergence rate on par with sRBP in the first few iterations but also achieves the lowest frame-error rate once sufficiently many iterations are allowed.
[Figure 8: FER versus the maximum number of iterations for the bicycle code. Source file: BYC256ITER_new.pdf]
Fig. 9 shows the performance of the generalized bicycle code using different sRBP-based decoders. Edge-wise decoders provide a better FER than sBP and sLBP. However, the limited benefit of designing different edge pools is also reflected in the subtle performance gaps between sRBP, NW-sRBP, and LMD-sRBP. It is worth noting that the gap between our proposed decoder and LMD-sRBP is larger for this code than for the bicycle code, which indicates the significant impact of the additional predict-and-reduce-error mechanism.
[Figure 9: FER for the generalized bicycle code. Source file: GBY126FER_new.pdf]
The structure of the hypergraph product leads to multiple quantum trapping sets, which can result in a performance limitation [8]. Fig. 10 shows that the performance of the HP code using sRBP, NW-sRBP, and LMD-sRBP is nearly identical. This implies that the influence of the edge pool design is negligible and that a more intense mechanism is needed for improvement. As a result, using the predict-and-reduce-error mechanism, PRE-sRBP yields a significant performance gain compared to the other decoders. In Fig. 10, all decoders except the proposed PRE-sRBP have converged under the considered maximum number of iterations.
[Figure 10: FER for the HP code. Source file: HGP400FER90.pdf]
Under a sufficiently large number of iterations, the error rate using PRE-sRBP can be further improved. In Fig. 11, we compare the performance of the HP code using PRE-sRBP to that of other sBP-based decoders, including the posterior-LLR adjustment method (Post-Adjust sBP) [35], sBP+OSD [10], and sBP-guided decimation (sBPGD) [36]. The maximum number of iterations is shown in brackets, and $\lambda$ is shown alongside it for PRE-sRBP. Post-Adjust sBP improves the FER by flipping the posterior LLRs of selected variable nodes at each iteration. sBP+OSD uses the soft output obtained from BP as a reliability auxiliary to search for possible error patterns, where the performance can be improved by increasing the search depth (order). Fig. 11 shows that sBP+OSD with zero order (sBP+OSD-0) provides merely a slight performance gain over sBP. A relatively significant improvement can be achieved using sBP+OSD with 60 orders via the combination-sweep method (sBP+OSD-CS), at the cost of additional complexity. sBPGD provides approximately an order of magnitude gain over sBP at the cost of multiple decimation rounds, each consisting of 400 iterations. Notably, our proposed PRE-sRBP can outperform all of these considered decoders, and increasing the maximum number of iterations and $\lambda$ improves its performance further, providing about two orders of magnitude improvement compared to sBP.
[Figure 11: FER for the HP code under large maximum iterations, compared with sBP-based decoders. Source file: HGP400FER400.pdf]
5.3 Complexity Comparisons
Consider a parity-check matrix whose Tanner graph consists of $m$ check nodes, $n$ variable nodes, and $E = m\bar{d}_c = n\bar{d}_v$ edges in total, where $\bar{d}_c$ and $\bar{d}_v$ denote the average degrees of the check nodes and the variable nodes, respectively. We evaluate the complexity in terms of the numbers of C2V messages, V2C messages, pre-computed C2V messages, and residual comparisons per iteration in Table 1. An iteration of sBP consists of updates to all C2V and V2C edges, i.e., an iteration includes $E$ C2V and $E$ V2C updates. For sRBP, an iteration is defined as the point at which the number of C2V updates reaches $E$, due to the unbalanced update distribution specified in Section 3.2.
Table 1 shows that both sBP and sLBP have the same numbers of updates without any pre-computation or comparison. For all sRBP-based decoders, the numbers of V2C updates and pre-computations are the same, since these operations occur in S.2. When a C2V edge $(c_i, v_j)$ is updated, each V2C message from $v_j$ (or from each neighboring variable node of $c_i$ for NW-sRBP) is then updated, except for the message to $c_i$. Thus, there are $\bar{d}_v - 1$ V2C updates per C2V update. Additionally, a new C2V message is pre-computed from each check node neighboring $v_j$, except $c_i$, to each of its neighboring variable nodes except $v_j$, leading to a total of $(\bar{d}_v - 1)(\bar{d}_c - 1)$ pre-computations per C2V update.
The difference lies in the residual comparisons caused by the different edge pools in S.1 and S.3. For sRBP, the edge pool contains all edges; thus, $E$ comparisons are required to determine one C2V update. For NW-sRBP, after the comparisons, all $\bar{d}_c$ edges of the chosen check node are updated, so the comparisons are shared among $\bar{d}_c$ C2V updates. In LMD-sRBP, the number of comparisons can be computed in two steps following the construction of $\mathcal{E}$ and $\mathcal{E}'$: first the comparisons to determine $v^{\ast}$, and then those to determine the neighboring $c^{\ast}$. PRE-sRBP contains a sequence-computation process and the sRBP-based process. In the sRBP-based process, for one C2V update, PRE-sRBP compares only the messages from the check nodes neighboring the current variable node $v_t$ that have not yet been updated, resulting in fewer than $\bar{d}_v$ comparisons per update.
Table 1: Complexity comparison per iteration (one iteration = $E$ C2V updates).

Schedules | C2V updates | V2C updates | Pre-computations | Residual comparisons
---|---|---|---|---
sBP | $E$ | $E$ | 0 | 0
sLBP | $E$ | $E$ | 0 | 0
sRBP | $E$ | $E(\bar{d}_v - 1)$ | $E(\bar{d}_v - 1)(\bar{d}_c - 1)$ | $E^2$
NW-sRBP | $E$ | $E(\bar{d}_v - 1)$ | $E(\bar{d}_v - 1)(\bar{d}_c - 1)$ | $E^2/\bar{d}_c$
LMD-sRBP | $E$ | $E(\bar{d}_v - 1)$ | $E(\bar{d}_v - 1)(\bar{d}_c - 1)$ | two-step (see text)
PRE-sRBP | $E$ | $E(\bar{d}_v - 1)$ | $E(\bar{d}_v - 1)(\bar{d}_c - 1)$ | $< E\bar{d}_v$
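To make the growth rates concrete, the small helper below (our own illustration) tabulates the per-iteration counts shared by the sRBP-based decoders, following the S.2 accounting above:

```python
def srbp_counts_per_iteration(E, dv_avg, dc_avg):
    """Per-iteration operation counts common to the sRBP-based decoders,
    with one iteration defined as E C2V updates (Section 3.2)."""
    c2v = E                                     # one C2V message per update
    v2c = E * (dv_avg - 1)                      # S.2: V2C to all neighbors but c*
    precomp = E * (dv_avg - 1) * (dc_avg - 1)   # S.2: freshly pre-computed C2V
    return c2v, v2c, precomp

# Example: E = 2048 edges, average degrees (dv, dc) = (8, 16)
print(srbp_counts_per_iteration(2048, 8, 16))   # (2048, 14336, 215040)
```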
6 Conclusions
In this paper, we have presented several syndrome-based IDS algorithms based on sRBP. Due to the construction constraints and the identical prior intrinsic information assigned to each variable node, many equal residuals arise when using sRBP on QLDPC codes, resulting in a performance limitation. To tackle this issue, we have introduced two strategies: an edge pool design with diverse edges, together with error prediction and reduction. Simulation results show that sRBP-based decoders can provide a faster convergence rate than sLBP and that, by elaborately designing the edge pool, the FER performance can also be improved for the considered bicycle code. A more intense strategy, such as our proposed predict-and-reduce-error mechanism using the estimated support sequence, is needed to further improve the performance of the generalized bicycle code and the HP code. Based on these two strategies, we have proposed the novel PRE-sRBP, which not only provides a fast convergence rate but also achieves the best error-rate performance among all considered decoders for QLDPC codes.
References
- [1] Nikolas P. Breuckmann and Jens Niklas Eberhardt. “Quantum low-density parity-check codes”. PRX Quantum 2, 040101 (2021).
- [2] Maxime A. Tremblay, Nicolas Delfosse, and Michael E. Beverland. “Constant-overhead quantum error correction with thin planar connectivity”. Phys. Rev. Lett. 129, 050504 (2022).
- [3] Sergey Bravyi, Andrew W. Cross, Jay M. Gambetta, Dmitri Maslov, Patrick Rall, and Theodore J. Yoder. “High-threshold and low-overhead fault-tolerant quantum memory”. Nature 627, 778 (2024).
- [4] Michael A. Nielsen and Isaac L. Chuang. “Quantum computation and quantum information”. Cambridge University Press. (2009). 10th Anniversary edition. url: https://doi.org/10.1017/cbo9780511976667.
- [5] A.R. Calderbank, E.M. Rains, P.W. Shor, and N.J.A. Sloane. “Quantum error correction via codes over GF(4)”. IEEE Transactions on Information Theory 44, 1369 (1998).
- [6] D.J.C. MacKay, G. Mitchison, and P.L. McFadden. “Sparse-graph codes for quantum error correction”. IEEE Transactions on Information Theory 50, 2315–2330 (2004).
- [7] David Poulin and Yeojin Chung. “On the iterative decoding of sparse quantum codes”. Quant. Inf. Comput. 8, 0987–1000 (2008).
- [8] Nithin Raveendran and Bane Vasić. “Trapping sets of quantum LDPC codes”. Quantum 5 (2021).
- [9] Pavel Panteleev and Gleb Kalachev. “Degenerate quantum LDPC codes with good finite length performance”. Quantum 5, 585 (2021).
- [10] Joschka Roffe, David R. White, Simon Burton, and Earl Campbell. “Decoding across the quantum low-density parity-check code landscape”. Phys. Rev. Res. 2, 043423 (2020).
- [11] Javier Valls, Francisco Garcia-Herrero, Nithin Raveendran, and Bane Vasić. “Syndrome-based min-sum vs OSD-0 decoders: FPGA implementation and analysis for quantum ldpc codes”. IEEE Access 9, 138734–138743 (2021).
- [12] Andres I. Vila Casado, Miguel Griot, and Richard Wesel. “Overcoming LDPC trapping sets with informed scheduling”. In Information Theory and Applications Workshop. UCSD, San Diego, CA (2007). url: https://www.seas.ucla.edu/csl/files/publications/Andres_ITA2007Pres.pdf.
- [13] Gal Elidan, Ian McGraw, and Daphne Koller. “Residual belief propagation: Informed scheduling for asynchronous message passing” (2006). url: https://api.semanticscholar.org/CorpusID:9008993.
- [14] A. I. V. Casado, M. Griot, and R. D. Wesel. “Informed dynamic scheduling for belief-propagation decoding of LDPC codes”. In 2007 IEEE International Conference on Communications. Pages 932–937. (2007).
- [15] Andres I. Vila Casado, Miguel Griot, and Richard D. Wesel. “LDPC decoders with informed dynamic scheduling”. IEEE Transactions on Communications 58, 3470–3479 (2010).
- [16] Tofar C.-Y. Chang, Pin-Han Wang, Jian-Jia Weng, I-Hsiang Lee, and Yu T. Su. “Belief-propagation decoding of LDPC codes with variable node–centric dynamic schedules”. IEEE Transactions on Communications 69, 5014–5027 (2021).
- [17] Jean-Pierre Tillich and Gilles Zémor. “Quantum ldpc codes with positive rate and minimum distance proportional to the square root of the blocklength”. IEEE Transactions on Information Theory 60, 1193–1202 (2014).
- [18] Daniel Eric Gottesman. “Stabilizer codes and quantum error correction”. PhD thesis. California Institute of Technology. (1997).
- [19] Richard Cleve. “Quantum stabilizer codes and classical linear codes”. Phys. Rev. A 55, 4054–4059 (1997).
- [20] A. R. Calderbank and Peter W. Shor. “Good quantum error-correcting codes exist”. Phys. Rev. A 54, 1098–1105 (1996).
- [21] A. M. Steane. “Error correcting codes in quantum theory”. Phys. Rev. Lett. 77, 793–797 (1996).
- [22] Emanuel Knill and Raymond Laflamme. “Theory of quantum error-correcting codes”. Phys. Rev. A 55, 900–911 (1997).
- [23] F.R. Kschischang, B.J. Frey, and H.-A. Loeliger. “Factor graphs and the sum-product algorithm”. IEEE Transactions on Information Theory 47, 498–519 (2001).
- [24] D. J. C. MacKay. “Information theory, inference and learning algorithms”. Cambridge University Press. (2003).
- [25] Ching-Yi Lai and Kao-Yueh Kuo. “Log-domain decoding of quantum LDPC codes over binary finite fields”. IEEE Transactions on Quantum Engineering 2, 1–15 (2021).
- [26] Tom Richardson. “Error floors of LDPC codes”. Proc. annual Allerton conference on commun. control and computing (2003).
- [27] D.E. Hocevar. “A reduced complexity decoder architecture via layered decoding of LDPC codes”. In IEEE Workshop on Signal Processing Systems, 2004. SIPS 2004. Pages 107–112. (2004).
- [28] Juntan Zhang and M.P.C. Fossorier. “Shuffled iterative decoding”. IEEE Transactions on Communications 53, 209–213 (2005).
- [29] J. Chen and M.P.C. Fossorier. “Density evolution for two improved BP-based decoding algorithms of LDPC codes”. IEEE Communications Letters 6, 208–210 (2002).
- [30] 3GPP. “5G; NR; multiplexing and channel coding (Release 15)”. Technical Specification (TS) 38.212 (2018).
- [31] Xingcheng Liu, Zhenzhu Zhou, Ru Cui, and Erwu Liu. “Informed decoding algorithms of LDPC codes based on dynamic selection strategy”. IEEE Transactions on Communications 64, 1357–1366 (2016).
- [32] Xingcheng Liu, Chunlei Fan, and Xuechen Chen. “Dynamic scheduling decoding of LDPC codes based on Tabu search”. IEEE Transactions on Communications 65, 4612–4621 (2017).
- [33] Xingcheng Liu, Li’e Zi, Dong Yang, and Zhongfeng Wang. “Improved decoding algorithms of LDPC codes based on reliability metrics of variable nodes”. IEEE Access (2019).
- [34] R. Gallager. “Low-density parity-check codes”. IRE Transactions on Information Theory 8, 21–28 (1962).
- [35] Tzu-Hsuan Huang and Yeong-Luh Ueng. “A binary BP decoding using posterior adjustment for quantum LDPC codes”. In ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Pages 9001–9005. (2024).
- [36] H. Yao, W. A. Laban, C. Hager, A. G. i Amat, and H. D. Pfister. “Belief propagation decoding of quantum LDPC codes with guided decimation” (2023). arXiv:2312.10950.
- [37] Kao-Yueh Kuo and Ching-Yi Lai. “Refined belief propagation decoding of sparse-graph quantum codes”. IEEE Journal on Selected Areas in Information Theory 1, 487–498 (2020).
Appendix A Proof of Proposition 1
To prove the proposition clearly, the sBP in the probability domain, as outlined in [37], is used. Denote the message from $c_i$ to $v_j$ as $(r^{0}_{ij}, r^{1}_{ij})$ and the message from $v_j$ to $c_i$ as $(q^{0}_{ij}, q^{1}_{ij})$. Let $q^{0}_{j}$ and $q^{1}_{j}$ denote the marginal probabilities of $e_j$ being 0 and 1, respectively. The proof is described as follows.
For the sBP in the probability domain, the V2C message is initialized as $(q^{0}_{ij}, q^{1}_{ij}) = (1-\epsilon, \epsilon)$. The first iteration then includes three steps:
- 1. C2V update:
$r^{0}_{ij} = \frac{1 + (-1)^{s_i}\delta_i}{2}, \quad r^{1}_{ij} = \frac{1 - (-1)^{s_i}\delta_i}{2},$
where $\delta_i = \prod_{j' \in \mathcal{N}(c_i)\setminus\{j\}} \left(q^{0}_{ij'} - q^{1}_{ij'}\right) = (1-2\epsilon)^{d_c - 1}$.
- 2. V2C update: introduce the intermediary terms $\alpha^{0}_{ij} = (1-\epsilon)\prod_{i' \in \mathcal{N}(v_j)\setminus\{i\}} r^{0}_{i'j}$ and $\alpha^{1}_{ij} = \epsilon\prod_{i' \in \mathcal{N}(v_j)\setminus\{i\}} r^{1}_{i'j}$, then
$q^{0}_{ij} = \frac{\alpha^{0}_{ij}}{\alpha^{0}_{ij} + \alpha^{1}_{ij}}, \quad q^{1}_{ij} = \frac{\alpha^{1}_{ij}}{\alpha^{0}_{ij} + \alpha^{1}_{ij}},$
where $\alpha^{0}_{ij} + \alpha^{1}_{ij}$ is the normalization factor.
- 3. Compute the marginal probability of $e_j$:
$q^{0}_{j} \propto (1-\epsilon)\prod_{i \in \mathcal{N}(v_j)} r^{0}_{ij}, \quad q^{1}_{j} \propto \epsilon\prod_{i \in \mathcal{N}(v_j)} r^{1}_{ij}.$
Recall that $(r^{0}_{ij}, r^{1}_{ij})$ depends on the syndrome position $s_i$; we can therefore divide the factors of $q^{0}_{j}$ and $q^{1}_{j}$ into four terms, $\frac{1+\delta}{2}$ and $\frac{1-\delta}{2}$, counted separately over the neighbors with $s_i = 0$ and $s_i = 1$, where $\delta = (1-2\epsilon)^{d_c - 1}$. Denote $|\mathcal{M}_j|$ as the cardinality of the set of check nodes neighboring $v_j$ and belonging to the support of the given syndrome, i.e., $\mathcal{M}_j = \{i \in \mathcal{N}(v_j) : s_i = 1\}$; then $q^{0}_{j}$ and $q^{1}_{j}$ can be rewritten as
$q^{0}_{j} \propto (1-\epsilon)\left(\frac{1+\delta}{2}\right)^{d_{v_j}-|\mathcal{M}_j|}\left(\frac{1-\delta}{2}\right)^{|\mathcal{M}_j|}, \quad q^{1}_{j} \propto \epsilon\left(\frac{1-\delta}{2}\right)^{d_{v_j}-|\mathcal{M}_j|}\left(\frac{1+\delta}{2}\right)^{|\mathcal{M}_j|}.$
By expanding these terms, we can determine
$\ln\frac{q^{0}_{j}}{q^{1}_{j}} = \Lambda + \left(d_{v_j} - 2|\mathcal{M}_j|\right)\mu.$
Use $\Lambda = \ln\frac{1-\epsilon}{\epsilon}$ and $\mu = \ln\frac{1+\delta}{1-\delta}$; then, by computing $\ln(q^{0}_{j}/q^{1}_{j})$ to decide the hard decision of $e_j$, we obtain $\hat{e}_j = 1$ if and only if $\Lambda + (d_{v_j} - 2|\mathcal{M}_j|)\mu < 0$. Since $\mu$ is also a function of $\epsilon$ given the fixed $d_c$, the hard decision for $e_j$ depends only on $d_{v_j}$ and $|\mathcal{M}_j|$. Then, given the fixed $\Lambda$ and $\mu$, the priority of an estimated error position belonging to $\mathrm{supp}(\mathbf{e})$ is related to $d_{v_j} - 2|\mathcal{M}_j|$.∎