Designing Finite Alphabet Iterative Decoders of LDPC Codes Via Recurrent Quantized Neural Networks
Affiliation: Univ Arizona, Dept Elect & Comp Engn
Citation: X. Xiao, B. Vasić, R. Tandon and S. Lin, "Designing Finite Alphabet Iterative Decoders of LDPC Codes Via Recurrent Quantized Neural Networks," in IEEE Transactions on Communications, vol. 68, no. 7, pp. 3963-3974, July 2020, doi: 10.1109/TCOMM.2020.2985678.
Rights: Copyright © 2020 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
Collection Information: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at firstname.lastname@example.org.
Abstract: In this paper, we propose a new approach to designing finite alphabet iterative decoders (FAIDs) for Low-Density Parity-Check (LDPC) codes over the binary symmetric channel (BSC) via recurrent quantized neural networks (RQNNs). We focus on the class of linear FAIDs and use RQNNs to optimize the message update look-up tables by jointly training the message levels and the RQNN parameters. Existing neural networks for channel coding work well over the Additive White Gaussian Noise Channel (AWGNC) but are inefficient over the BSC, because the BSC supplies only a finite set of channel values to the network. We therefore propose the bit error rate (BER) as the loss function for training RQNNs over the BSC. The low-precision activations in the RQNN and the quantization in the BER loss cause their gradients to vanish almost everywhere, which makes classical backpropagation inapplicable. We tackle this issue by using straight-through estimators as surrogate gradients and provide a joint training scheme. We show that the framework is flexible with respect to code length and column weight. In particular, in the high column weight case, it automatically designs low-precision linear FAIDs with better performance, lower complexity, and faster convergence than floating-point belief propagation algorithms in the waterfall region.
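The training obstacle the abstract describes, gradients that vanish almost everywhere through hard quantizers, and the straight-through estimator (STE) used to circumvent it, can be illustrated compactly. The sketch below is an assumption-laden toy written in JAX, not the paper's decoder: the quantizer step and clip values, the LLR sign convention, and the `ber_loss` helper are hypothetical choices made only to show how a hard quantizer and a hard bit decision can still pass a surrogate gradient.

```python
# A minimal sketch of the straight-through estimator (STE) idea, in JAX.
# NOT the authors' implementation: the quantizer step/clip values, the
# ste_sign decision rule, and the BER-style loss are illustrative assumptions.
import jax
import jax.numpy as jnp


def ste_quantize(x, step=0.5, limit=1.5):
    """Finite-alphabet quantizer whose gradient is the identity (plain STE).

    Forward pass: hard uniform quantization onto levels in [-limit, limit].
    Backward pass: gradients flow as if the function were the identity,
    because the non-differentiable part is wrapped in stop_gradient.
    """
    q = jnp.clip(jnp.round(x / step) * step, -limit, limit)
    return x + jax.lax.stop_gradient(q - x)


def ste_sign(x):
    """Hard decision sign(x) with an identity surrogate gradient."""
    return x + jax.lax.stop_gradient(jnp.sign(x) - x)


def ber_loss(llr_out, bits):
    """BER-style loss: fraction of hard decisions that disagree with `bits`.

    bits are in {0, 1}; a positive output LLR is decoded as bit 0. The hard
    decision goes through ste_sign so the loss remains differentiable.
    """
    bit_hat = (1.0 - ste_sign(llr_out)) / 2.0
    return jnp.mean((bit_hat - bits) ** 2)


# Usage: the forward value is hard-quantized, yet the gradient is the
# all-ones identity surrogate rather than zero almost everywhere.
x = jnp.array([-2.0, -0.3, 0.2, 1.1])
print(ste_quantize(x))                                   # [-1.5 -0.5  0.   1. ]
print(jax.grad(lambda v: jnp.sum(ste_quantize(v)))(x))   # [1. 1. 1. 1.]
```

In the paper's setting the quantizer would sit inside each RQNN message-update layer and the message levels themselves would be jointly trained; the point of this toy is only that `jax.grad` returns a usable surrogate gradient where the true gradient of a quantized function is zero almost everywhere.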
Version: Final accepted manuscript