
dc.contributor.author: Xiao, Xin
dc.contributor.author: Raveendran, Nithin
dc.contributor.author: Vasic, Bane
dc.contributor.author: Lin, Shu
dc.contributor.author: Tandon, Ravi
dc.date.accessioned: 2022-02-22T20:03:41Z
dc.date.available: 2022-02-22T20:03:41Z
dc.date.issued: 2021-08-30
dc.identifier.citation: Xiao, X., Raveendran, N., Vasic, B., Lin, S., & Tandon, R. (2021). FAID Diversity via Neural Networks. 2021 11th International Symposium on Topics in Coding, ISTC 2021.
dc.identifier.doi: 10.1109/istc49272.2021.9594253
dc.identifier.uri: http://hdl.handle.net/10150/663398
dc.description.abstract: Decoder diversity is a powerful error correction framework in which a collection of decoders collaboratively corrects a set of error patterns otherwise uncorrectable by any individual decoder. In this paper, we propose a new approach to designing the decoder diversity of finite alphabet iterative decoders (FAIDs) for low-density parity-check (LDPC) codes over the binary symmetric channel (BSC), with the goal of lowering the error floor while guaranteeing the waterfall performance. The proposed decoder diversity is achieved by training a recurrent quantized neural network (RQNN) to learn/design FAIDs. We demonstrate, for the first time, that a machine-learned decoder can outperform a man-made decoder of the same complexity. Because RQNNs can model a broad class of FAIDs, they are capable of learning an arbitrary FAID. To give the RQNN sufficient knowledge of the error floor, the training sets are constructed by sampling from the most problematic error patterns: trapping sets. In contrast to existing methods that use the cross-entropy loss function, we introduce a frame-error-rate (FER) based loss function that trains the RQNN to correct specific error patterns rather than to reduce the bit error rate (BER). Examples and simulation results show that the RQNN-aided decoder diversity increases the error correction capability of LDPC codes and lowers the error floor.
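The abstract's distinction between FER- and BER-oriented objectives can be illustrated with a small sketch. This is an illustrative example only, not the authors' implementation: the function names and the soft-OR surrogate for the frame-error indicator are my own assumptions about how such a loss could be formed.

```python
import numpy as np

def ber(est, truth):
    """Bit error rate: fraction of individual bits decoded incorrectly."""
    return np.mean(est != truth)

def fer(est, truth):
    """Frame error rate: fraction of frames with at least one bit error."""
    return np.mean(np.any(est != truth, axis=1))

def soft_fer_loss(p_err):
    """Differentiable FER surrogate (hypothetical, for illustration).

    p_err[i, j] = estimated probability that bit j of frame i is wrong.
    A frame is correct only if every bit is correct, so under independence
    P(frame error) = 1 - prod_j (1 - p_err[i, j]); averaging over frames
    gives a smooth stand-in for the FER that a network can be trained on.
    """
    return np.mean(1.0 - np.prod(1.0 - p_err, axis=1))
```

For example, with two 4-bit frames where one frame has a single wrong bit, `ber` reports 1/8 while `fer` reports 1/2: a single residual bit error already counts as a lost frame. This is why training against an FER-style objective pushes the decoder to fully correct specific error patterns (such as those induced by trapping sets) rather than merely reducing the average number of bit errors.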
dc.language.iso: en
dc.publisher: IEEE
dc.rights: © 2021 IEEE.
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.source: 2021 11th International Symposium on Topics in Coding (ISTC)
dc.subject: Decoder diversity
dc.subject: Error floor
dc.subject: LDPC codes
dc.subject: Quantized neural network
dc.title: FAID Diversity via Neural Networks
dc.type: Article
dc.contributor.department: University of Arizona
dc.identifier.journal: 2021 11th International Symposium on Topics in Coding, ISTC 2021
dc.description.note: Immediate access
dc.description.collectioninformation: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
dc.eprint.version: Final accepted manuscript
refterms.dateFOA: 2022-02-22T20:03:42Z


Files in this item

Name: 2021208884.pdf
Size: 470.5 KB
Format: PDF
Description: Final Accepted Manuscript
