• Finite Alphabet Iterative Decoders—Part II: Towards Guaranteed Error Correction of LDPC Codes via Iterative Decoder Diversity

      Declercq, David; Vasic, Bane; Planjery, Shiva Kumar; Li, Erbao; Univ Arizona, Dept Elect & Comp Engn (Institute of Electrical and Electronics Engineers (IEEE), 2013-10)
      Recently, we introduced a new class of finite alphabet iterative decoders (FAIDs) for low-density parity-check (LDPC) codes. These decoders are capable of surpassing belief propagation (BP) in the error floor region on the binary symmetric channel (BSC) with much lower complexity. In this paper, we introduce a novel scheme with the objective of guaranteeing the correction of a given and potentially large number of errors on column-weight-three LDPC codes. The proposed scheme uses a plurality of FAIDs which collectively correct more error patterns than a single FAID on a given code. The collection of FAIDs utilized by the scheme is judiciously chosen to ensure that individual decoders have different decoding dynamics and correct different error patterns. Consequently, they can collectively correct a diverse set of error patterns, which is referred to as decoder diversity. We provide a systematic method to generate the set of FAIDs for decoder diversity on a given code based on the knowledge of the most harmful trapping sets present in the code. Using the well-known column-weight-three (155,64) Tanner code with d_min = 20 as an example, we describe the method in detail and show, by means of exhaustive simulation, that the guaranteed error correction capability on short length LDPC codes can be significantly increased with decoder diversity.
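      As a toy illustration of the finite-alphabet idea, the sketch below implements a hypothetical 7-level variable-node update: messages are confined to {-3, ..., 3} and the update is a clipped, weighted sum. The weighting and quantization are invented for illustration and are not one of the paper's optimized FAID rules.

```python
# Toy 7-level FAID variable-node update (illustrative; the weighting
# below is hypothetical, not an optimized rule from the paper).
# Messages live in the finite alphabet {-3, ..., 3}; the update maps the
# channel value and two incoming check-node messages to an output level.

def vn_update(channel_value, m1, m2):
    """Map (channel, incoming messages) to a level in {-3,...,3} by
    clipping a weighted sum -- one simple way to realize a FAID map."""
    s = m1 + m2 + 0.5 * channel_value  # hypothetical channel weighting
    return max(-3, min(3, round(s)))   # quantize back onto the alphabet

# Symmetry check: negating all inputs negates the output.
assert vn_update(-1, 2, 1) == -vn_update(1, -2, -1)
# Saturation: strong agreement clips to the maximum level.
assert vn_update(3, 3, 3) == 3
```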
    • Finite Alphabet Iterative Decoding of LDPC Codes with Coarsely Quantized Neural Networks

      Xiao, Xin; Vasic, Bane; Tandon, Ravi; Lin, Shu; Univ Arizona (IEEE, 2019-12)
      In this paper, we introduce a method of using quantized neural networks (QNN) to design finite alphabet message passing decoders (FAID) for Low-Density Parity Check (LDPC) codes. Specifically, we construct a neural network with low precision activations to optimize a FAID over the Additive White Gaussian Noise Channel (AWGNC). The low precision activations cause a critical issue: their gradients vanish almost everywhere, making it difficult to use classical backward propagation. We introduce straight-through estimators (STE) [1] to avoid this problem, by replacing the zero derivatives of quantized activations with surrogate gradients in the chain rule. We present a systematic approach to train such networks while minimizing the bit error rate, which is a widely used and accurate metric to measure the performance of iterative decoders. Examples and simulations show that by training a QNN, a FAID with 3-bit messages and 4-bit channel outputs can be obtained, which performs better than the more complex floating-point min-sum decoding algorithm. This methodology is promising in the sense that it facilitates designing low-precision FAIDs for LDPC codes while maintaining good error performance in a flexible and efficient manner.
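      The straight-through estimator itself is easy to demonstrate outside any neural-network framework: the forward pass goes through the quantizer, while the backward pass substitutes a surrogate (clipped identity) gradient. The 1-D fitting problem below is a minimal illustrative sketch, not the paper's decoder network.

```python
# Minimal straight-through estimator (STE) demo on a 1-D toy problem.

def quantize(x, levels=3):
    """Uniform quantizer with saturation; its true gradient is zero
    almost everywhere, which stalls ordinary backpropagation."""
    return max(-levels, min(levels, round(x)))

def ste_grad(x, clip=3.0):
    """Surrogate derivative: pass the gradient through unchanged inside
    the clipping range, zero it outside (the standard STE choice)."""
    return 1.0 if abs(x) <= clip else 0.0

# Fit a weight w so that quantize(w) matches a target of 2, using
# squared error and the STE in the chain rule.
w, target, lr = 0.0, 2.0, 0.1
for _ in range(100):
    y = quantize(w)
    grad = 2.0 * (y - target) * ste_grad(w)  # d/dw of (quantize(w)-target)^2 via STE
    w -= lr * grad

assert quantize(w) == 2  # training converged despite the flat quantizer
```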
    • Generalized belief propagation based TDMR detector and decoder

      Matcha, Chaitanya Kumar; Bahrami, Mohsen; Roy, Shounak; Srinivasa, Shayan Garani; Vasic, Bane; Department of Electrical and Computer Engineering, University of Arizona (IEEE, 2016-07)
      Two dimensional magnetic recording (TDMR) achieves high areal densities by reducing the size of a bit to be comparable to the size of the magnetic grains, resulting in two dimensional (2D) intersymbol interference (ISI) and very high media noise. Therefore, it is critical to handle the media noise along with the 2D ISI detection. In this paper, we tune the generalized belief propagation (GBP) algorithm to handle the media noise seen in TDMR. We also provide an intuition into the nature of hard decisions provided by the GBP algorithm. The performance of the GBP algorithm is evaluated over a Voronoi based TDMR channel model where the soft outputs from the GBP algorithm are used by a belief propagation (BP) algorithm to decode low-density parity check (LDPC) codes.
    • Girth-Eight Reed-Solomon Based QC-LDPC Codes

      Xiao, Xin; Vasic, Bane; Lin, Shu; Abdel-Ghaffar, Khaled; Ryan, William E.; Univ Arizona, Sch Elect & Comp Engn (IEEE, 2018)
      This paper presents a class of regular quasi-cyclic (QC) LDPC codes whose Tanner graphs have girth at least eight. These codes are constructed based on the conventional parity-check matrices of Reed-Solomon (RS) codes with minimum distance 5. Masking their parity-check matrices significantly reduces the numbers of short cycles in their Tanner graphs and results in codes which perform well over the AWGN channel in both waterfall and low error-rate regions.
    • Hard-Decision Decoding of LDPC Codes Under Timing Errors: Overview and New Results

      Brkic, Srdan; Ivanis, Predrag; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2017)
      This paper contains a survey on iterative decoders of low-density parity-check (LDPC) codes built from unreliable logic gates. We assume that hardware unreliability comes from supply voltage reduction, which causes probabilistic gate failures, called timing errors. We demonstrate the robustness of the simple Gallager B decoder to timing errors when applied to codes free of small trapping sets, as well as the positive effects that timing errors have on the decoding of codes which contain small trapping sets. Furthermore, we show that the concept of guaranteed error correction can be applied to decoders made partially from unreliable components. In contrast to decoding under uncorrelated gate failures, we prove that bit-flipping decoding under timing errors can achieve arbitrarily low error probability. Consequently, we formulate a sufficient condition under which a memory architecture employing a bit-flipping decoder preserves all stored information.
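      A minimal sketch of the setting, assuming a toy (7,4) Hamming code and a simplistic failure model in which each check-node XOR output flips with probability p_fail (a stand-in for the paper's timing-error model, not its exact analysis):

```python
import random

# Parallel bit-flipping decoding with (possibly) unreliable check gates.
H = [  # parity-check matrix of a toy (7,4) Hamming code
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]

def bit_flip_decode(y, p_fail=0.0, max_iter=20, rng=random.Random(1)):
    x = list(y)
    for _ in range(max_iter):
        # Evaluate checks with (possibly) failing XOR gates.
        syndrome = []
        for row in H:
            s = sum(b for b, h in zip(x, row) if h) % 2
            if rng.random() < p_fail:
                s ^= 1  # transient gate failure flips the check output
            syndrome.append(s)
        if not any(syndrome):
            return x
        # Flip every bit whose unsatisfied checks form a strict majority.
        for j in range(len(x)):
            unsat = sum(s for s, row in zip(syndrome, H) if row[j])
            deg = sum(row[j] for row in H)
            if 2 * unsat > deg:
                x[j] ^= 1
    return x

# Reliable gates (p_fail = 0): a single error at bit 4 is corrected.
assert bit_flip_decode([0, 0, 0, 0, 1, 0, 0]) == [0] * 7
```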
    • High-Rate Girth-Eight Low-Density Parity-Check Codes on Rectangular Integer Lattices

      Vasic, Bane; Pedagani, K.; Ivkovic, M.; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2004-08-30)
      This letter introduces a combinatorial construction of girth-eight high-rate low-density parity-check codes based on integer lattices. The parity-check matrix of a code is defined as a point-line incidence matrix of a 1-configuration based on a rectangular integer lattice, and the girth-eight property is achieved by a judicious selection of sets of parallel lines included in a configuration. A class of codes with a wide range of lengths and column weights is obtained. The resulting matrix of parity checks is an array of circulant matrices.
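      The "array of circulant matrices" structure is easy to reproduce; the sketch below assembles a parity-check matrix from circulant blocks (the shift values are arbitrary placeholders, not the lattice-derived ones from the letter):

```python
# Build a parity-check matrix as an array of circulants.

def circulant(shift, size):
    """size x size identity matrix cyclically shifted right by `shift`."""
    return [[1 if (j - i) % size == shift else 0 for j in range(size)]
            for i in range(size)]

def block_matrix(shifts, size):
    """Stack a grid of circulants given a grid of shift values."""
    rows = []
    for shift_row in shifts:
        blocks = [circulant(s, size) for s in shift_row]
        for i in range(size):
            rows.append([b[i][j] for b in blocks for j in range(size)])
    return rows

# A 2x3 array of 5x5 circulants: 10x15 matrix, row weight 3, column weight 2.
H = block_matrix([[0, 1, 2], [0, 2, 4]], 5)
assert len(H) == 10 and len(H[0]) == 15
assert all(sum(row) == 3 for row in H)
```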
    • Information Theoretic Modeling and Analysis for Global Interconnects With Process Variations

      Denic, Stojan Z.; Vasic, Bane; Charalambous, Charalambos D.; Chen, Jifeng; Wang, Janet M.; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2011-12-11)
      As the CMOS semiconductor technology enters the nanometer regime, interconnect processes must be compatible with device roadmaps and meet manufacturing targets at the specified wafer size. The resulting ubiquitous process variations cause errors in data delivered through interconnects. This paper proposes an information-theoretic design method to accommodate process variations. Different from the traditional delay-based design metric, the current approach uses achievable rate to relate interconnect designs directly to communication applications. More specifically, the data communication over a typical interconnect, a bus, subject to process variations (an “uncertain” bus), is defined as a communication problem under uncertainty. A data rate, called the achievable rate, is computed for such a bus, which represents the lower bound on the maximal data rate attainable over the bus. When a data rate applied over the bus is smaller than the achievable data rate, reliable communication can be guaranteed regardless of process variations, i.e., a bit error rate arbitrarily close to zero is achievable. A single communication strategy to combat the process variations is proposed whose code rate is equal to the computed achievable rate. The simulations show that the variations in the interconnect resistivity could have the most harmful effect regarding the achievable rate reduction. Also, the simulations illustrate the importance of taking into account bus parasitic parameter correlations when measuring the influence of the process variations on the achievable rates.
    • An Information Theoretical Framework for Analysis and Design of Nanoscale Fault-Tolerant Memories Based on Low-Density Parity-Check Codes

      Vasic, Bane; Chilappagari, S.K.; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2007-11-12)
      In this paper, we develop a theoretical framework for the analysis and design of fault-tolerant memory architectures. Our approach is a modification of the method developed by Taylor and refined by Kuznetsov. Taylor and Kuznetsov (TK) showed that memory systems have nonzero computational (storage) capacity, i.e., the redundancy necessary to ensure reliability grows asymptotically linearly with the memory size. The restoration phase in the TK method is based on low-density parity-check codes which can be decoded using low complexity decoders. The equivalence of the restoration phase in the TK method and faulty Gallager B algorithm enabled us to establish a theoretical framework for solving problems in reliable storage on unreliable media using the large body of knowledge in codes on graphs and iterative decoding gained in the past decade.
    • Instanton-based techniques for analysis and reduction of error floors of LDPC codes

      Chilappagari, Shashi; Chertkov, Michael; Stepanov, Mikhail; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2009-07-28)
      We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and a variety of suboptimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instantons of the same code under different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.
    • Iterative Decoding of Linear Block Codes: A Parity-Check Orthogonalization Approach

      Sankaranarayanan, S.; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2005-08-22)
      It is widely accepted that short cycles in Tanner graphs deteriorate the performance of message-passing algorithms. This discourages the use of these algorithms on Tanner graphs (TGs) of well-known algebraic codes such as Hamming codes, Bose-Chaudhuri-Hocquenghem codes, and Reed-Solomon codes. Yedidia et al. presented a method to generate code representations suitable for message-passing algorithms. This method does not guarantee a representation free of four-cycles. In this correspondence, we present an algorithm to convert an arbitrary linear block code into a code with orthogonal parity-check equations. A combinatorial argument is used to prove that the algorithm guarantees a four-cycle free representation for any linear code. The effects of removing four-cycles on the performance of a belief propagation decoder for the binary erasure channel are studied in detail by analyzing the structures in different representations. Finally, we present bit-error rate (BER) and block-error rate (BLER) performance curves of linear block codes under belief propagation algorithms for the binary erasure channel and the additive white Gaussian noise (AWGN) channel in order to demonstrate the improvement in performance achieved with the help of the proposed algorithm.
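      The four-cycle condition itself is simple to test: two parity checks form a four-cycle exactly when their rows of H share two or more variable nodes. A small sketch with illustrative matrices:

```python
# Detect four-cycles in the Tanner graph of a parity-check matrix H.

def has_four_cycle(H):
    """True if any pair of rows of H overlaps in two or more columns."""
    m = len(H)
    for a in range(m):
        for b in range(a + 1, m):
            overlap = sum(x & y for x, y in zip(H[a], H[b]))
            if overlap >= 2:
                return True
    return False

H_bad = [[1, 1, 0, 0],
         [1, 1, 1, 0]]   # rows share columns 0 and 1 -> four-cycle
H_good = [[1, 1, 0, 0],
          [1, 0, 1, 0]]  # any two rows share at most one column

assert has_four_cycle(H_bad)
assert not has_four_cycle(H_good)
```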
    • A Log-Likelihood Ratio based Generalized Belief Propagation

      Amaricai, Alexandru; Bahrami, Mohsen; Vasic, Bane; Univ Arizona (IEEE, 2019-07)
      In this paper, we propose a reduced complexity Generalized Belief Propagation (GBP) that propagates messages in the Log-Likelihood Ratio (LLR) domain. The key novelties of the proposed LLR-GBP are: (i) reduced fixed-point precision for messages instead of the computationally complex floating-point format, (ii) operations performed in the logarithm domain, thus eliminating the need for multiplications and divisions, (iii) usage of message ratios that leads to simple hard-decision mechanisms. We demonstrate the validity of LLR-GBP on the reconstruction of images passed through binary-input two-dimensional Gaussian channels with memory, affected by additive white Gaussian noise.
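      As one concrete instance of working in the LLR domain, the standard min-sum check-node rule replaces products of probabilities with a sign product and a minimum over magnitudes. This is shown for ordinary BP as an illustration; the paper applies the same log-domain idea to GBP region messages.

```python
# Min-sum check-node update in the LLR domain: no probability
# multiplications or divisions, only sign flips and comparisons.

def minsum_check_update(incoming):
    """Outgoing LLR toward each edge, computed from the other edges:
    sign = product of the other signs, magnitude = their minimum."""
    out = []
    for i in range(len(incoming)):
        others = incoming[:i] + incoming[i + 1:]
        sign = 1
        for m in others:
            sign *= 1 if m >= 0 else -1
        out.append(sign * min(abs(m) for m in others))
    return out

assert minsum_check_update([2.0, -1.5, 4.0]) == [-1.5, 2.0, -1.5]
```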
    • Majority Logic Decoding Under Data-Dependent Logic Gate Failures

      Brkic, Srdan; Ivanis, Predrag; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2017-10)
      A majority logic decoder made of unreliable logic gates, whose failures are transient and data-dependent, is analyzed. Based on a combinatorial representation of fault configurations, a closed-form expression for the average bit error rate of a one-step majority logic decoder is derived for a regular low-density parity-check (LDPC) code ensemble and the proposed failure model. The presented analysis framework is then used to establish bounds on the one-step majority logic decoder performance under the simplified probabilistic gate-output switching model. Based on the expander property of Tanner graphs of LDPC codes, it is proven that a version of the faulty parallel bit-flipping decoder can correct a fixed fraction of channel errors in the presence of data-dependent gate failures. The results are illustrated with numerical examples of finite geometry codes.
    • On the Construction of Structured LDPC Codes Free of Small Trapping Sets

      Nguyen, Dung Viet; Chilappagari, Shashi Kiran; Marcellin, Michael W.; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2012-03-15)
      We present a method to construct low-density parity-check (LDPC) codes with low error floors on the binary symmetric channel. Codes are constructed so that their Tanner graphs are free of certain small trapping sets. These trapping sets are selected from the trapping set ontology for the Gallager A/B decoder. They are selected based on their relative harmfulness for a given decoding algorithm. We evaluate the relative harmfulness of different trapping sets for the sum-product algorithm by using the topological relations among them and by analyzing the decoding failures on one trapping set in the presence or absence of other trapping sets. We apply this method to construct structured LDPC codes. To facilitate the discussion, we give a new description of structured LDPC codes whose parity-check matrices are arrays of permutation matrices. This description uses Latin squares to define a set of permutation matrices that have disjoint support and to derive a simple necessary and sufficient condition for the Tanner graph of a code to be free of four cycles.
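      The disjoint-support property can be seen directly in the cyclic case: the powers of a single circulant permutation matrix partition the m x m grid, which is exactly the Latin-square picture used in the description. A small sketch:

```python
# Powers of a circulant permutation sigma have pairwise disjoint support
# and together tile the whole m x m grid -- a Latin square of positions.

def perm_support(shift, m):
    """Set of (row, col) positions of the 1s in sigma^shift."""
    return {(i, (i + shift) % m) for i in range(m)}

m = 5
supports = [perm_support(k, m) for k in range(m)]

# Distinct powers never place a 1 in the same position...
assert all(supports[a].isdisjoint(supports[b])
           for a in range(m) for b in range(a + 1, m))
# ...and together they cover every position exactly once.
assert set().union(*supports) == {(i, j) for i in range(m) for j in range(m)}
```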
    • On Trapping Sets and Guaranteed Error Correction Capability of LDPC Codes and GLDPC Codes

      Chilappagari, Shashi Kiran; Nguyen, Dung Viet; Vasic, Bane; Marcellin, Michael W.; Department of Electrical and Computer Engineering, The University of Arizona (Institute of Electrical and Electronics Engineers (IEEE), 2010-04)
      The relation between the girth and the guaranteed error correction capability of γ-left-regular low-density parity-check (LDPC) codes when decoded using the bit flipping (serial and parallel) algorithms is investigated. A lower bound on the size of variable node sets which expand by a factor of at least 3γ/4 is found based on the Moore bound. This bound, combined with the well-known expander based arguments, leads to a lower bound on the guaranteed error correction capability. The decoding failures of the bit flipping algorithms are characterized using the notions of trapping sets and fixed sets. The relation between fixed sets and a class of graphs known as cage graphs is studied. Upper bounds on the guaranteed error correction capability are then established based on the order of cage graphs. The results are extended to left-regular and right-uniform generalized LDPC codes. It is shown that this class of generalized LDPC codes can correct a linear number of worst case errors (in the code length) under the parallel bit flipping algorithm when the underlying Tanner graph is a good expander. A lower bound on the size of variable node sets which have the required expansion is established.
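      For reference, the Moore bound invoked here states that a d-regular graph of odd girth g = 2r + 1 has at least n >= 1 + d * sum_{i=0}^{r-1} (d-1)^i vertices; a direct computation:

```python
# Moore bound for d-regular graphs of odd girth g = 2r + 1:
# count the root, then the disjoint distance-i neighborhoods.

def moore_bound_odd_girth(d, g):
    assert g % 2 == 1, "this form of the bound is for odd girth"
    r = (g - 1) // 2
    return 1 + d * sum((d - 1) ** i for i in range(r))

# The Petersen graph (3-regular, girth 5, 10 vertices) meets the bound.
assert moore_bound_odd_girth(3, 5) == 10
```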
    • Quasi-Cyclic LDPC Codes for Correcting Multiple Phased Bursts of Erasures

      Xiao, Xin; Vasic, Bane; Lin, Shu; Abdel-Ghaffar, Khaled; Ryan, William E.; Univ Arizona (IEEE, 2019-07)
      This paper presents designs and constructions of two classes of binary quasi-cyclic LDPC codes for correcting multiple random phased-bursts of erasures over the binary erasure channel. The erasure correction of codes in both classes is characterized by the cycle and adjacency structure of their Tanner graphs. Erasure correction of these codes is a very simple process which requires only modulo-2 additions. The codes in the second class are capable of correcting locally and globally distributed phased-bursts of erasures with a two-phase iterative erasure-correction process.
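      Erasure correction using only modulo-2 additions is the familiar peeling process: any check with exactly one erased variable recovers it as the XOR of the known bits. A sketch on a toy code (not one of the paper's constructions):

```python
# Iterative peeling decoder for the binary erasure channel.

def peel(H, y):
    """y holds 0/1 values or None for erasures; fill erasures by parity."""
    x = list(y)
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = [j for j, h in enumerate(row) if h and x[j] is None]
            if len(erased) == 1:
                known = sum(x[j] for j, h in enumerate(row)
                            if h and x[j] is not None) % 2
                x[erased[0]] = known  # the parity check forces this bit
                progress = True
    return x

H = [[1, 1, 1, 0],
     [0, 1, 1, 1]]
# Codeword 1 1 0 1 with bits 0 and 3 erased is fully recovered:
assert peel(H, [None, 1, 0, None]) == [1, 1, 0, 1]
```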
    • Reed-Solomon Based Quasi-Cyclic LDPC Codes: Designs, Girth, Cycle Structure, and Reduction of Short Cycles

      Xiao, Xin; Vasic, Bane; Lin, Shu; Abdel-Ghaffar, Khaled; Ryan, William E.; Univ Arizona, Dept Elect & Comp Engn (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2019-08)
      Designs and constructions of quasi-cyclic (QC) LDPC codes for the AWGN channel are presented. The codes are constructed based on the conventional parity-check matrices of Reed-Solomon (RS) codes and are referred to as RS-QC-LDPC codes. Several classes of RS-QC-LDPC codes are given. Cycle structural properties of the Tanner graphs of codes in these classes are analyzed and specific methods for constructing codes with girth at least eight and reducing their short cycles are presented. The designed codes perform well in both waterfall and low error-rate regions.
    • Reliability of Memories Built From Unreliable Components Under Data-Dependent Gate Failures

      Brkic, Srdan; Ivanis, Predrag; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (Institute of Electrical and Electronics Engineers (IEEE), 2015-12)
      In this letter, we investigate the fault-tolerance of memories built from unreliable cells. In order to increase the memory reliability, information is encoded by a low-density parity-check (LDPC) code, and then stored. The memory content is updated periodically by the bit-flipping decoder, itself built from unreliable logic gates whose failures are transient and data-dependent. Based on the expander property of Tanner graphs of LDPC codes, we prove that the proposed memory architecture can tolerate a fixed fraction of component failures and consequently preserve all the stored information, as the code length tends to infinity.
    • Serial Concatenation of Reed Muller and LDPC Codes with Low Error Floor

      Xiao, Xin; Nasseri, Mona; Vasic, Bane; Lin, Shu; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2017)
      In this paper, we propose a concatenated coding scheme involving an outer Reed-Muller (RM) code and an inner Finite Field low-density parity-check (LDPC) code of medium length and high rate. It lowers the error floor of the inner Finite Field LDPC code. This concatenation scheme offers flexibility in design and is easy to implement. In addition, the decoding works in a serial turbo manner and has no harmful trapping sets of size smaller than the minimum distance of the outer code. The simulation results indicate that the proposed serial concatenation can eliminate the dominant trapping sets of the inner Finite Field LDPC code.
    • Signal Processing and Coding Techniques for 2-D Magnetic Recording: An Overview

      Garani, Shayan Srinivasa; Dolecek, Lara; Barry, John; Sala, Frederic; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC, 2018-02)
      Two-dimensional magnetic recording (TDMR) is an emerging storage technology that aims to achieve areal densities on the order of 10 Tb/in², mainly driven by innovative channels engineering with minimal changes to existing head/media designs within a systems framework. Significant additive areal density gains can be achieved by using TDMR over bit patterned media (BPM) and energy-assisted magnetic recording (EAMR). In TDMR, the sectors are inherently 2-D with reduced track pitch and bit widths, leading to severe 2-D intersymbol interference (ISI). This necessitates the development of powerful 2-D signal processing and coding algorithms for mitigating 2-D ISI, timing artifacts, jitter, and electronics noise resulting from irregular media grain positions and read-head electronics. The algorithms have to be eventually realized within a read/write channel architecture as a part of a system-on-chip (SoC) within the disk controller system. In this work, we provide a wide overview of TDMR technology, channel models and capacity, signal processing algorithms (detection and timing recovery), and error-correcting codes attuned to 2-D channels. The innovations and advances described not only make TDMR a promising future technology, but may serve a broader engineering audience as well.
    • Simplification Resilient LDPC-Coded Sparse-QIM Watermarking for 3D-Meshes

      Vasic, Bata; Vasic, Bane; Univ Arizona, Dept Elect & Comp Engn (IEEE, 2013-06-03)
      We propose a blind watermarking scheme for 3D meshes that combines sparse quantization index modulation (QIM) with deletion correction codes. The QIM operates on the vertices in rough concave regions of the surface, thus ensuring imperceptibility, while the deletion correction code recovers the data hidden in the vertices that is removed by mesh optimization and/or simplification. The proposed scheme offers two orders of magnitude better performance in terms of recovered watermark bit error rate compared to the existing schemes of similar payloads and fidelity constraints.
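      Scalar QIM is easy to sketch in one dimension: the embedded bit selects one of two interleaved quantizers, and extraction picks the nearer quantizer lattice. This is illustrative only; the paper applies QIM sparsely to selected mesh vertex coordinates.

```python
# 1-D quantization index modulation (QIM) embed/extract sketch.

def qim_embed(value, bit, delta=1.0):
    """Quantize `value` onto the lattice selected by `bit`:
    bit 0 -> multiples of delta, bit 1 -> shifted by delta/2."""
    offset = (delta / 2.0) * bit
    return round((value - offset) / delta) * delta + offset

def qim_extract(value, delta=1.0):
    """Decide which lattice the received value is closest to."""
    d0 = abs(value - qim_embed(value, 0, delta))
    d1 = abs(value - qim_embed(value, 1, delta))
    return 0 if d0 <= d1 else 1

marked = qim_embed(3.14, 1)
assert qim_extract(marked) == 1
assert qim_extract(qim_embed(3.14, 0)) == 0
# Robust to perturbations smaller than delta/4:
assert qim_extract(marked + 0.2) == 1
```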