• 2-D LDPC Codes and Joint Detection and Decoding for Two-Dimensional Magnetic Recording

      Matcha, Chaitanya Kumar; Roy, Shounak; Bahrami, Mohsen; Vasic, Bane; Srinivasa, Shayan Garani; Univ Arizona, Dept Elect & Comp Engn (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2018-02)
      Two-dimensional magnetic recording (TDMR) is a promising technology for boosting areal densities (ADs) using sophisticated signal processing algorithms within a systems framework. The read/write channel architectures have to effectively tackle 2-D inter-symbol interference (ISI), 2-D synchronization errors, media and electronic noise sources, as well as thermal asperities resulting in burst erasures. One-dimensional low-density parity-check (LDPC) codes are well studied for correcting large 1-D burst errors/erasures. However, such 1-D LDPC codes are not suitable for correcting 2-D burst errors/erasures due to the 2-D span of errors. In this paper, we propose the construction of a native 2-D LDPC code to effectively correct 2-D burst erasures. We also propose a joint detection and decoding engine based on the generalized belief propagation algorithm to simultaneously handle 2-D ISI, as well as correct bit/burst errors for TDMR channels. This paper is novel in two aspects: 1) we propose the construction of native 2-D LDPC codes to correct large 2-D burst erasures and 2) we develop a 2-D joint signal detection-decoder engine that incorporates 2-D ISI constraints and modulation code constraints along with LDPC decoding. The native 2-D LDPC code can correct >20% more burst erasures compared with the 1-D LDPC code over a 128 x 256 2-D page of detected bits. Also, the proposed algorithm is observed to achieve a signal-to-noise ratio gain of >0.5 dB in bit error rate performance (translating to a 10% increase in ADs around the 1.8 Tb/in² regime with grain sizes of 9 nm) as compared with a decoupled detector-decoder system configuration over a small 2-D LDPC code of size 16 x 16. The efficacy of our proposed algorithm and system architecture is evaluated by assessing AD gains via simulations for a TDMR configuration comprising a 2-D generalized partial response over the Voronoi media model assuming perfect 2-D synchronization.
    • Analysis and Implementation of Resource Efficient Probabilistic Gallager B LDPC Decoder

      Unal, Burak; Ghaffari, Fakhreddine; Akoglu, Ali; Declercq, David; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2017-08)
      Low-Density Parity-Check (LDPC) codes have gained popularity in communication systems and standards due to their capacity-approaching error-correction performance. In this paper, we first expose the tradeoff between decoding performance and hardware performance across three LDPC hard-decision decoding algorithms: Gallager B (GaB), Gradient Descent Bit Flipping (GDBF), and Probabilistic Gradient Descent Bit Flipping (PGDBF). We show that the GaB architecture delivers the best throughput while using the fewest Field Programmable Gate Array (FPGA) resources, but performs the worst in terms of decoding performance. We then modify the GaB architecture, introduce a new probabilistic stimulation function (PGaB), and achieve dramatic decoding performance improvements over GaB, exceeding the performance of GDBF, without sacrificing its superior maximum operating frequency.
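As a rough illustration of the Gallager B rule that the architectures above implement (a software sketch only, not the FPGA design, using a toy parity-check matrix rather than a code from the paper):

```python
import numpy as np

def gallager_b(H, y, max_iter=30):
    """Gallager B hard-decision decoder (illustrative sketch, with the
    majority threshold fixed at dv - 1). H: (m, n) 0/1 parity-check
    matrix, y: received hard bits from the BSC."""
    m, n = H.shape
    rows, cols = np.nonzero(H)              # edges of the Tanner graph
    deg = H.sum(axis=0)                     # variable-node degrees
    v2c = y[cols].copy()                    # variable-to-check messages
    x = y.copy()
    for _ in range(max_iter):
        if not (H @ x % 2).any():           # all parity checks satisfied
            break
        # check-to-variable: parity of the other messages entering each check
        chk = np.zeros(m, dtype=int)
        np.add.at(chk, rows, v2c)
        c2v = (chk[rows] - v2c) % 2
        # per-variable count of check messages disagreeing with the channel bit
        dis_edge = (c2v != y[cols]).astype(int)
        dis = np.zeros(n, dtype=int)
        np.add.at(dis, cols, dis_edge)
        # variable-to-check: send the flipped channel bit when all other
        # incoming check messages disagree with it
        v2c = np.where(dis[cols] - dis_edge >= deg[cols] - 1,
                       1 - y[cols], y[cols])
        # tentative decision: strict majority vote of check messages
        x = np.where(2 * dis > deg, 1 - y, y)
    return x
```

With a length-3 repetition-style toy matrix, a single channel error is corrected in one iteration; the FPGA architectures in the paper parallelize exactly this kind of message schedule.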
    • Asymptotic Error Probability of the Gallager B Decoder Under Timing Errors

      Dupraz, Elsa; Declercq, David; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2017-01-04)
      In a circuit, timing errors occur when a logic gate output does not switch before the clock rising edge. In this letter, we consider Gallager B decoders under timing errors, following the error model derived by Amaricai et al. from SPICE measurements. For this model, we provide a theoretical analysis of the performance of LDPC decoders, based on the computation trees of the decoder free of logic-gate errors and of the decoder with timing errors. As a main result, we show that as the number of iterations goes to infinity, the error probability of the decoder with timing errors converges to that of the logic-gate error-free decoder. Monte Carlo simulations confirm this result even for moderate code lengths, in accordance with experimental observations.
    • Check-hybrid GLDPC codes: Systematic elimination of trapping sets and guaranteed error correction capability

      Ravanmehr, Vida; Khatami, Mehrdad; Declercq, David; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (WILEY-BLACKWELL, 2016-10-12)
      In this paper, we propose a new approach to construct a class of check-hybrid generalized low-density parity-check (CH-GLDPC) codes, which are free of small trapping sets. The approach is based on converting some selected check nodes involving a trapping set into super checks corresponding to a 2-error-correcting component code. Specifically, we pursue two main goals in constructing the check-hybrid codes: first, on the basis of the knowledge of trapping sets of an LDPC code, single parity checks are replaced by super checks to disable the trapping sets. We show that by converting specified single check nodes, denoted as critical checks, to super checks in a trapping set, the parallel bit flipping decoder corrects the errors on a trapping set. The second goal is to minimize the rate loss by finding the minimum number of such critical checks. We also present an algorithm to find critical checks in a trapping set of a column-weight-3 LDPC code of girth 8 and then provide upper bounds on the minimum number of such critical checks such that the decoder corrects all error patterns on elementary trapping sets. Guaranteed error correction capability of the CH-GLDPC codes is also studied. We show that a CH-GLDPC code in which each variable node is connected to 2 super checks corresponding to a 2-error-correcting component code corrects up to 5 errors. The results are also extended to column-weight-4 LDPC codes of girth 6. Finally, we investigate the elimination of trapping sets of a column-weight-3 LDPC code of girth 8 under the Gallager B decoding algorithm.
    • Combinatorial Constructions of Low-Density Parity-Check Codes for Iterative Decoding

      Vasic, Bane; Milenkovic, O.; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2004-06-01)
      This paper introduces several new combinatorial constructions of low-density parity-check (LDPC) codes, in contrast to the prevalent practice of using long, random-like codes. The proposed codes are well structured and, unlike random codes, can lend themselves to a very low-complexity implementation. Constructions of regular Gallager codes based on cyclic difference families, cycle-invariant difference sets, and affine 1-configurations are introduced. Several constructions of difference families used for code design are presented, as well as bounds on the minimum distance of the codes based on the concept of a generalized Pasch configuration.
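The combinatorial idea can be sketched as follows: the cyclic translates of the base blocks of a difference family over Z_v become the column supports of a structured parity-check matrix. This is a generic sketch of the principle only; the (7, 3, 1) difference set used below is a standard textbook example, not one of the specific families constructed in the paper.

```python
import numpy as np

def cyclic_ldpc_H(v, base_blocks):
    """Build a parity-check matrix whose columns are the cyclic translates
    of the base blocks of a difference family over Z_v."""
    cols = []
    for block in base_blocks:
        for t in range(v):                       # all cyclic shifts
            col = np.zeros(v, dtype=int)
            col[[(b + t) % v for b in block]] = 1
            cols.append(col)
    return np.stack(cols, axis=1)

# The (7, 3, 1) planar difference set {0, 1, 3} over Z_7 yields the 7 x 7
# incidence matrix of the Fano plane: a regular, column-weight-3 structure.
H = cyclic_ldpc_H(7, [[0, 1, 3]])
```

The resulting matrix is fully determined by a handful of base blocks, which is what makes such codes attractive for low-complexity implementation.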
    • A Deliberate Bit Flipping Coding Scheme for Data-Dependent Two-Dimensional Channels

      Bahrami, Mohsen; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (Institute of Electrical and Electronics Engineers (IEEE), 2020-02)
      In this paper, we present a deliberate bit flipping (DBF) coding scheme for binary two-dimensional (2-D) channels, where specific patterns in channel inputs are the significant cause of errors. The idea is to eliminate a constrained encoder and, instead, embed a constraint into an error correction codeword that is arranged into a 2-D array by deliberately flipping the bits that violate the constraint. The DBF method relies on the error correction capability of the code being used, which should be able to correct both deliberate errors and channel errors. Therefore, it is crucial to flip the minimum number of bits in order not to overburden the error correction decoder. We devise a constrained combinatorial formulation for minimizing the number of flipped bits for a given set of harmful patterns. The generalized belief propagation algorithm is used to find an approximate solution for the problem. We evaluate the performance gain of our proposed approach on a data-dependent 2-D channel, where 2-D isolated-bits patterns are the harmful patterns for the channel. Furthermore, the performance of the DBF method is compared with classical 2-D constrained coding schemes for the 2-D no isolated-bits constraint on a memoryless binary symmetric channel.
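As a toy version of the idea (a single greedy pass, not the paper's constrained combinatorial minimization or its generalized-belief-propagation solver), deliberate flipping for the 2-D no-isolated-bits constraint might look like:

```python
import numpy as np

def flip_isolated(page):
    """Flip every bit that differs from all of its in-page neighbors,
    i.e. every isolated bit. A greedy stand-in for the minimum-flips
    optimization in the paper; comparisons use the original page so
    flips do not cascade."""
    out = page.copy()
    m, n = page.shape
    for i in range(m):
        for j in range(n):
            nbrs = [page[r, c]
                    for r, c in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= r < m and 0 <= c < n]
            if nbrs and all(b != page[i, j] for b in nbrs):
                out[i, j] = 1 - out[i, j]
    return out
```

The deliberate flips then count as additional errors that the error-correcting code must absorb, which is why minimizing their number matters.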
    • Designing Finite Alphabet Iterative Decoders of LDPC Codes Via Recurrent Quantized Neural Networks

      Xiao, Xin; Vasic, Bane; Tandon, Ravi; Lin, Shu; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2020-04-06)
      In this paper, we propose a new approach to designing finite alphabet iterative decoders (FAIDs) for Low-Density Parity-Check (LDPC) codes over the binary symmetric channel (BSC) via recurrent quantized neural networks (RQNNs). We focus on the linear FAID class and use RQNNs to optimize the message update look-up tables by jointly training their message levels and RQNN parameters. Existing neural networks for channel coding work well over the Additive White Gaussian Noise Channel (AWGNC) but are inefficient over the BSC, because the BSC feeds only a finite set of channel values into the network. We propose the bit error rate (BER) as the loss function to train the RQNNs over the BSC. The low-precision activations in the RQNN and the quantization in the BER cause a critical issue: their gradients vanish almost everywhere, making it difficult to use classical backward propagation. We leverage straight-through estimators as surrogate gradients to tackle this issue and provide a joint training scheme. We show that the framework is flexible for various code lengths and column weights. Specifically, in the high-column-weight case, it automatically designs low-precision linear FAIDs with superior performance, lower complexity, and faster convergence than floating-point belief propagation algorithms in the waterfall region.
    • Diagnosis of Weaknesses in Modern Error Correction Codes: A Physics Approach

      Stepanov, M. G.; Chernyak, V.; Chertkov, M.; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (AMER PHYSICAL SOC, 2005-11-22)
      One of the main obstacles to wider use of modern error-correction codes is that, due to the complex behavior of their decoding algorithms, no systematic method is known that would allow characterization of the bit-error rate (BER). This is especially true in the weak-noise regime where many systems operate and where coding performance is difficult to estimate because of the vanishingly small number of errors. We show how the instanton method of physics allows one to solve the problem of BER analysis in the weak-noise range by recasting it as a computationally tractable minimization problem.
    • An Efficient Instanton Search Algorithm for LP Decoding of LDPC Codes Over the BSC

      Chilappagari, Shashi Kiran; Chertkov, Michael; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2011-06-20)
      We consider linear programming (LP) decoding of a fixed low-density parity-check (LDPC) code over the binary symmetric channel (BSC). The LP decoder fails when it outputs a pseudo-codeword which is not equal to the transmitted codeword. We design an efficient algorithm termed the Instanton Search Algorithm (ISA) which generates an error vector called the BSC-instanton. We prove that: (a) the LP decoder fails for any error pattern with support that is a superset of the support of an instanton; (b) for any input, the ISA outputs an instanton in a number of steps upper-bounded by twice the number of errors in the input error vector. We then find the number of unique instantons of different sizes for a given LDPC code by running the ISA a sufficient number of times.
    • Eliminating trapping sets in low-density parity-check codes by using Tanner graph covers

      Ivkovic, Milos; Chilappagari, Shashi Kiran; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2008-08-01)
      We discuss error floor asymptotics and present a method for improving the performance of low-density parity-check (LDPC) codes in the high-SNR (error floor) region. The method is based on Tanner graph covers that do not have trapping sets from the original code. The advantages of the method are that it is universal, as it can be applied to any LDPC code/channel/decoding algorithm, and it improves performance at the expense of increasing the code length, without losing the code regularity, without changing the decoding algorithm, and, under certain conditions, without lowering the code rate. The proposed method can also be modified to construct convolutional LDPC codes. The method is illustrated by modifying the Tanner, MacKay, and Margulis codes to improve performance on the binary symmetric channel (BSC) under the Gallager B decoding algorithm. Decoding results on the AWGN channel are also presented to illustrate that optimizing codes for one channel/decoding algorithm can lead to performance improvements on other channels.
    • Error Correction Capability of Column-Weight-Three LDPC Codes Under the Gallager A Algorithm—Part II

      Chilappagari, Shashi Kiran; Nguyen, Dung Viet; Vasic, Bane; Marcellin, Michael W.; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2010-05-18)
      The relation between the girth and the error correction capability of column-weight-three LDPC codes under the Gallager A algorithm is investigated. It is shown that a column-weight-three LDPC code with Tanner graph of girth g ≥ 10 can correct all error patterns with up to (g/2 - 1) errors in at most g/2 iterations of the Gallager A algorithm. For codes with Tanner graphs of girth g ≤ 8, it is shown that girth alone cannot guarantee correction of all error patterns with up to (g/2 - 1) errors under the Gallager A algorithm. Sufficient conditions to correct (g/2 - 1) errors are then established by studying trapping sets.
    • Error Correction on a Tree: An Instanton Approach

      Chernyak, V.; Chertkov, M.; Stepanov, M. G.; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (AMER PHYSICAL SOC, 2004-11-05)
      We introduce a method that allows analytical or semi-analytical estimation of the post-error-correction bit error rate (BER) when forward error correction is utilized for transmitting information through a noisy channel. The generic method, which applies to a variety of error-correction schemes in regimes where the BER is low, is illustrated using the example of a finite-size code approximated by a treelike structure. Exploring the statistical physics formulation of the problem, we find that the BER decreases with the signal-to-noise ratio nonuniformly, i.e., crossing over through a sequence of phases. The higher the signal-to-noise ratio, the lower the symmetry of the phase dominating the BER.
    • Error Errore Eicitur: A Stochastic Resonance Paradigm for Reliable Storage of Information on Unreliable Media

      Ivanis, Predrag; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2016-09)
      We give an architecture of a storage system consisting of a storage medium made of unreliable memory elements and an error correction circuit made of a combination of noisy and noiseless logic gates, which is capable of retaining the stored information with a lower probability of error than a storage system whose correction circuit is made completely of noiseless logic gates. Our correction circuit is based on the iterative decoding of low-density parity-check codes, and uses the positive effect of errors in logic gates to correct errors in memory elements. In the spirit of Marcus Tullius Cicero's Clavus clavo eicitur (one nail drives out another), the proposed storage system operates on the principle: error errore eicitur, one error drives out another. The randomness that is present in the logic gates makes this class of decoders superior to their noiseless counterparts. Moreover, random perturbations do not require any additional computational resources as they are inherent to the unreliable hardware itself. To utilize the benefits of logic gate failures, our correction circuit relies on two key novelties: a mixture of reliable and unreliable gates, and decoder rewinding. We present a method based on absorbing Markov chains for the probability-of-error analysis, and explain how the randomness in the variable and check node update functions helps a decoder escape local minima associated with trapping sets.
    • Error-Correction Capability of Column-Weight-Three LDPC Codes

      Chilappagari, Shashi Kiran; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2009-04-21)
      In this paper, the error-correction capability of column-weight-three low-density parity-check (LDPC) codes when decoded using the Gallager A algorithm is investigated. It is proved that a necessary condition for a code to correct all error patterns with up to k ≥ 5 errors is to avoid cycles of length up to 2k in its Tanner graph. As a consequence of this result, it is shown that, given any α > 0, there exists an N such that for all n > N, no code in the ensemble of column-weight-three codes can correct all αn or fewer errors. The results are extended to the bit-flipping algorithms.
    • Fault-Tolerant Probabilistic Gradient-Descent Bit Flipping Decoder

      Rasheed, Omran Al; Ivanis, Predrag; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2014-07-30)
      We propose a gradient-descent-type bit flipping algorithm for decoding low-density parity-check codes on the binary symmetric channel. Randomness introduced in the bit flipping rule makes this class of decoders not only superior to other decoding algorithms of this type, but also robust to logic-gate failures. We report the surprising discovery that, for a broad range of gate failure probabilities, our decoders actually benefit from faults in logic gates, which serve as an inherent source of randomness and help the decoding algorithm escape from local minima associated with trapping sets.
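A minimal software sketch of a probabilistic gradient-descent bit flipping rule of this type (with a seeded pseudo-random source standing in for actual gate faults; the inversion function and flip probability p are illustrative choices, not the paper's exact parameters):

```python
import numpy as np

def pgdbf(H, y, p=0.9, max_iter=100, seed=0):
    """Probabilistic gradient-descent bit flipping sketch for the BSC.
    Each bit's inversion function counts its unsatisfied checks plus its
    disagreement with the channel output; bits attaining the maximum are
    flipped independently with probability p, the randomness that lets
    the decoder escape trapping-set local minima."""
    rng = np.random.default_rng(seed)
    x = y.copy()
    for _ in range(max_iter):
        syn = H @ x % 2                      # syndrome of current estimate
        if not syn.any():
            break                            # valid codeword reached
        energy = H.T @ syn + (x != y)        # inversion function per bit
        cand = energy == energy.max()        # steepest-descent candidates
        flip = cand & (rng.random(len(x)) < p)
        x = np.where(flip, 1 - x, x)
    return x
```

Setting p = 1 recovers a deterministic GDBF-style decoder, which can oscillate on small trapping sets; the random thinning of the flip set is what breaks such cycles.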
    • Finite alphabet iterative decoders for LDPC codes surpassing floating-point iterative decoders

      Planjery, S.K.; Declercq, D.; Danjean, L.; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (Institution of Engineering and Technology (IET), 2011)
      Introduced is a new type of message-passing (MP) decoder for low-density parity-check (LDPC) codes over the binary symmetric channel. Unlike traditional belief propagation (BP) based MP algorithms which propagate probabilities or log-likelihoods, the new MP decoders propagate messages requiring only a finite number of bits for their representation, in such a way that good performance in the error floor region is ensured. Additionally, these messages are not quantised probabilities or log-likelihoods. As examples, MP decoders are provided that require only three bits for message representation, but surpass the floating-point BP (which requires a large number of bits for representation) in the error-floor region.
    • Finite Alphabet Iterative Decoders for LDPC Codes: Optimization, Architecture and Analysis

      Cai, Fang; Zhang, Xinmiao; Declercq, David; Planjery, Shiva Kumar; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2014-03-20)
      Low-density parity-check (LDPC) codes are adopted in many applications due to their Shannon-limit approaching error-correcting performance. Nevertheless, belief-propagation (BP) based decoding of these codes suffers from the error-floor problem, i.e., an abrupt change in the slope of the error-rate curve that occurs at very low error rates. Recently, a new type of decoders termed finite alphabet iterative decoders (FAIDs) was introduced. The FAIDs use simple Boolean maps for variable node processing, and can surpass the BP-based decoders in the error floor region with very short word length. We restrict the scope of this paper to regular column-weight-three (dv = 3) LDPC codes on the binary symmetric channel (BSC). This paper develops a low-complexity implementation architecture for the FAIDs by making use of their properties. Particularly, an innovative bit-serial check node unit is designed for the FAIDs, and a small-area variable node unit is proposed by exploiting the symmetry in the Boolean maps. Moreover, an optimized data scheduling scheme is proposed to increase the hardware utilization efficiency. From synthesis results, the proposed FAID implementation needs only 52% area to reach the same throughput as one of the most efficient standard Min-Sum decoders for an example (7807, 7177) LDPC code, while achieving better error-correcting performance in the error-floor region. Compared to an offset Min-Sum decoder with longer word length, the proposed design can achieve higher throughput with 45% area, and still leads to possible performance improvement in the error-floor region.
    • Finite Alphabet Iterative Decoders—Part I: Decoding Beyond Belief Propagation on the Binary Symmetric Channel

      Planjery, Shiva Kumar; Declercq, David; Danjean, Ludovic; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2013-09-16)
      We introduce a new paradigm for finite precision iterative decoding on low-density parity-check codes over the binary symmetric channel. The messages take values from a finite alphabet, and unlike traditional quantized decoders which are quantized versions of the belief propagation (BP) decoder, the proposed finite alphabet iterative decoders (FAIDs) do not propagate quantized probabilities or log-likelihoods and the variable node update functions do not mimic the BP decoder. Rather, the update functions are maps designed using the knowledge of potentially harmful subgraphs that could be present in a given code, thereby rendering these decoders capable of outperforming the BP in the error floor region. On certain column-weight-three codes of practical interest, we show that there exist FAIDs that surpass the floating-point BP decoder in the error floor region while requiring only three bits of precision for the representation of the messages. Hence, FAIDs are able to achieve a superior performance at much lower complexity. We also provide a methodology for the selection of FAIDs that is not code-specific, but gives a set of candidate FAIDs containing potentially good decoders in the error floor region for any column-weight-three code. We validate the code generality of our methodology by providing particularly good three-bit precision FAIDs for a variety of codes with different rates and lengths.
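To make the setting concrete: a 3-bit FAID operates on a 7-level message alphabet with simple node maps. The linear variable-node map sketched below is only an illustrative stand-in for the optimized, trapping-set-aware look-up tables designed in the paper; the weight and clipping range are assumptions.

```python
import numpy as np

# 3-bit (7-level) message alphabet: {-3, -2, -1, 0, 1, 2, 3}

def variable_update(channel, m1, m2, weight=2):
    """Variable-node map: clipped weighted sum of the two extrinsic check
    messages and the +/-1 channel value (a simple linear map, not the
    paper's optimized tables)."""
    return int(np.clip(m1 + m2 + weight * channel, -3, 3))

def check_update(m1, m2):
    """Check-node map: product of signs times minimum magnitude."""
    sign = np.sign(m1) * np.sign(m2)
    return int(sign * min(abs(m1), abs(m2)))
```

Because the alphabet is tiny, the whole variable-node map can be stored as a look-up table, which is exactly the object the paper designs to avoid harmful subgraphs.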
    • Finite Alphabet Iterative Decoders—Part II: Towards Guaranteed Error Correction of LDPC Codes via Iterative Decoder Diversity

      Declercq, David; Vasic, Bane; Planjery, Shiva Kumar; Li, Erbao; Univ Arizona, Dept Elect & Comp Engn (Institute of Electrical and Electronics Engineers (IEEE), 2013-10)
      Recently, we introduced a new class of finite alphabet iterative decoders (FAIDs) for low-density parity-check (LDPC) codes. These decoders are capable of surpassing belief propagation (BP) in the error floor region on the binary symmetric channel (BSC) with much lower complexity. In this paper, we introduce a novel scheme with the objective of guaranteeing the correction of a given and potentially large number of errors on column-weight-three LDPC codes. The proposed scheme uses a plurality of FAIDs which collectively correct more error patterns than a single FAID on a given code. The collection of FAIDs utilized by the scheme is judiciously chosen to ensure that individual decoders have different decoding dynamics and correct different error patterns. Consequently, they can collectively correct a diverse set of error patterns, which is referred to as decoder diversity. We provide a systematic method to generate the set of FAIDs for decoder diversity on a given code based on the knowledge of the most harmful trapping sets present in the code. Using the well-known column-weight-three (155,64) Tanner code with minimum distance dmin = 20 as an example, we describe the method in detail and show, by means of exhaustive simulation, that the guaranteed error correction capability on short length LDPC codes can be significantly increased with decoder diversity.
    • Finite Alphabet Iterative Decoding of LDPC Codes with Coarsely Quantized Neural Networks

      Xiao, Xin; Vasic, Bane; Tandon, Ravi; Lin, Shu; Univ Arizona (IEEE, 2019-12)
      In this paper, we introduce a method of using quantized neural networks (QNN) to design finite alphabet message passing decoders (FAID) for Low-Density Parity-Check (LDPC) codes. Specifically, we construct a neural network with low-precision activations to optimize a FAID over the Additive White Gaussian Noise Channel (AWGNC). The low-precision activations cause a critical issue: their gradients vanish almost everywhere, making it difficult to use classical backward propagation. We introduce straight-through estimators (STE) [1] to avoid this problem, by replacing the zero derivatives of quantized activations with surrogate gradients in the chain rule. We present a systematic approach to train such networks while minimizing the bit error rate, which is a widely used and accurate metric to measure the performance of iterative decoders. Examples and simulations show that by training a QNN, a FAID with 3-bit messages and 4-bit channel outputs can be obtained, which performs better than the more complex floating-point min-sum decoding algorithm. This methodology is promising in that it facilitates designing low-precision FAIDs for LDPC codes while maintaining good error performance in a flexible and efficient manner.
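The straight-through estimator trick recurring in the two QNN abstracts above can be sketched in a few lines (a hand-rolled numpy illustration, not the papers' training code; the alphabet and clipping range are assumptions):

```python
import numpy as np

LEVELS = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])  # assumed alphabet

def quantize(x, levels=LEVELS):
    """Forward pass: snap each activation to the nearest alphabet level.
    The true derivative of this map is zero almost everywhere."""
    return levels[np.argmin(np.abs(x[..., None] - levels), axis=-1)]

def ste_grad(upstream, x, clip=3.0):
    """Backward pass with a straight-through estimator: treat the quantizer
    as the identity inside the clipping range, so upstream gradients pass
    through unchanged (and are zeroed outside the range)."""
    return upstream * (np.abs(x) <= clip)
```

Training then alternates quantized forward passes with these surrogate backward passes, which is what lets gradient descent optimize the discrete message levels.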