TINA: Acceleration of Non-NN Signal Processing Algorithms Using NN Accelerators

Bibliographic Details
Published in: 2024 IEEE 34th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-6
Main Authors: Boerkamp, Christiaan; Van der Vlugt, Steven; Al-Ars, Zaid
Format: Conference Proceeding
Language: English
Published: IEEE, 22.09.2024
ISSN: 2161-0371
DOI: 10.1109/MLSP58920.2024.10734727

Summary: This paper introduces TINA, a novel framework for implementing non-Neural Network (NN) signal processing algorithms on NN accelerators such as GPUs, TPUs, or FPGAs. The key to this approach is the concept of mapping mathematical and logic functions as a series of convolutional and fully connected layers. By mapping functions into such a small sub-stack of NN layers, it becomes possible to execute non-NN algorithms on NN hardware (HW) accelerators efficiently, as well as to ensure the portability of TINA implementations to any platform that supports such NN accelerators. Results show that TINA is highly competitive versus alternative frameworks, specifically for complex functions with iterations. For a polyphase filter bank use case, TINA shows GPU speedups of up to 80x versus a CPU baseline with NumPy, compared to the 8x speedup achieved by alternative frameworks. The framework is open source and publicly available at https://github.com/ChristiaanBoe/TINA.
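The core idea in the summary, expressing a non-NN signal-processing operation as a convolutional layer so it can run on NN hardware, can be illustrated with a minimal sketch. This is not TINA's actual code; it only shows, using NumPy on the CPU, how a classical FIR filter (the building block of a polyphase filter bank) is exactly a 1D convolution, the same primitive an NN accelerator executes as a conv layer with fixed weights, no bias, and no activation.

```python
import numpy as np

def fir_as_conv(signal, taps):
    """Apply a FIR filter by expressing it as a 1D convolution.

    On an NN accelerator, the same computation would be a Conv1D layer
    whose (non-trainable) weights are the filter taps, with no bias or
    activation. Illustrative sketch only, not TINA's implementation.
    """
    kernel = np.asarray(taps, dtype=np.float64)
    # "valid" mode: output only where the kernel fully overlaps the signal
    return np.convolve(np.asarray(signal, dtype=np.float64), kernel, mode="valid")

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
taps = [0.5, 0.5]          # 2-tap moving-average filter
y = fir_as_conv(x, taps)
print(y)                   # [1.5 2.5 3.5 4.5]
```

Note one subtlety: NN "convolution" layers actually compute cross-correlation (no kernel flip), while `np.convolve` flips the kernel; for a symmetric filter such as the moving average above the two coincide, and in general the layer weights are just the reversed taps.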