Designing Finite Alphabet Iterative Decoders of LDPC Codes Via Recurrent Quantized Neural Networks
Affiliation
Univ Arizona, Dept Elect & Comp Engn
Issue Date
2020-04-06
Publisher
IEEE
Citation
X. Xiao, B. Vasić, R. Tandon and S. Lin, "Designing Finite Alphabet Iterative Decoders of LDPC Codes Via Recurrent Quantized Neural Networks," in IEEE Transactions on Communications, vol. 68, no. 7, pp. 3963-3974, July 2020, doi: 10.1109/TCOMM.2020.2985678.
Rights
Copyright © 2020 IEEE.
Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract
In this paper, we propose a new approach to designing finite alphabet iterative decoders (FAIDs) for low-density parity-check (LDPC) codes over the binary symmetric channel (BSC) via recurrent quantized neural networks (RQNNs). We focus on the class of linear FAIDs and use RQNNs to optimize the message update look-up tables by jointly training the message levels and the RQNN parameters. Existing neural networks for channel coding perform well over the additive white Gaussian noise channel (AWGNC) but are inefficient over the BSC, because the BSC supplies only a finite set of channel values to feed into the network. We therefore propose the bit error rate (BER) as the loss function for training RQNNs over the BSC. The low-precision activations in the RQNN and the quantization inside the BER loss raise a critical issue: their gradients vanish almost everywhere, which makes classical backpropagation inapplicable. We leverage straight-through estimators as surrogate gradients to tackle this issue and provide a joint training scheme. We show that the framework is flexible across code lengths and column weights. In particular, in the high column weight case it automatically designs low-precision linear FAIDs with superior performance, lower complexity, and faster convergence compared to floating-point belief propagation algorithms in the waterfall region.
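To make the surrogate-gradient idea in the abstract concrete, the following is a minimal sketch of a straight-through estimator in PyTorch. It is an illustration under assumptions, not the authors' implementation: the function name quantize_ste and the level set are hypothetical, and the paper's joint training of the message levels themselves requires machinery beyond this sketch.

import torch

def quantize_ste(x, levels):
    # Forward pass: snap each message to the nearest finite-alphabet level.
    idx = torch.argmin((x.unsqueeze(-1) - levels).abs(), dim=-1)
    q = levels[idx]
    # Backward pass: the detach makes the quantizer behave as the identity,
    # so gradients flow through instead of vanishing almost everywhere.
    return x + (q - x).detach()

# Usage (illustrative level values only, not the paper's trained levels):
levels = torch.tensor([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])
x = torch.randn(5, requires_grad=True)
y = quantize_ste(x, levels)
y.sum().backward()  # x.grad is all ones rather than zero

The design choice here is the standard straight-through trick: the forward output equals the quantized value, while the gradient of the (q - x) term is cut by detach, leaving the identity gradient for training.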
ISSN
0090-6778
Version
Final accepted manuscript
DOI
10.1109/tcomm.2020.2985678