Optimizing Deep Learning Decoders for FPGA Implementation

Bibliographic Details
Published in: International Conference on Field-Programmable Logic and Applications (FPL), pp. 271-272
Main Authors: Kavvousanos, E., Paliouras, V.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.08.2021
ISSN: 1946-1488
DOI: 10.1109/FPL53798.2021.00053

Summary: Recently, Deep Learning (DL) methods have been proposed for decoding linear block codes. While novel DL decoders show promising error-correction performance, they suffer from high computational complexity, which prevents their use with large block codes and makes their implementation in digital hardware inefficient. The subject of the presented doctoral research is the design of DL decoding methods with low computational complexity and resource requirements, achieved by applying compression and approximation techniques to the employed Neural Networks. Efficient hardware architectures are expected to be designed for these optimized DL decoders on FPGA devices, overcoming the current performance limitations.
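The abstract names compression and approximation of the neural network as the route to FPGA efficiency. One common such technique is post-training fixed-point quantization of the network weights, which shrinks storage and replaces floating-point multipliers with narrow integer arithmetic on the FPGA. The sketch below is purely illustrative, assuming a generic uniform fixed-point scheme with hypothetical bit-widths; the abstract does not specify which compression method the authors use.

```python
import numpy as np

def quantize_weights(w, total_bits=8, frac_bits=6):
    """Uniform signed fixed-point quantization Q(total_bits - frac_bits).frac_bits.

    Illustrative only: bit-widths are assumed, not taken from the paper.
    Returns the de-quantized values a fixed-point datapath would compute with.
    """
    scale = 2 ** frac_bits
    qmax = 2 ** (total_bits - 1) - 1   # largest representable integer code
    qmin = -2 ** (total_bits - 1)      # smallest representable integer code
    q = np.clip(np.round(w * scale), qmin, qmax)
    return q / scale

# Example: quantize a small (made-up) weight matrix and measure the
# worst-case error introduced by an 8-bit Q2.6 representation.
w = np.array([[0.731, -0.252],
              [0.014, -1.999]])
wq = quantize_weights(w)
err = np.max(np.abs(w - wq))  # bounded by half an LSB (2**-7) in range
```

For in-range weights the error stays within half a least-significant bit, so the decoder's message updates can run on small integer multipliers and block RAM rather than floating-point units, which is the kind of resource saving the research targets.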