A Survey on Hardware Accelerator Design of Deep Learning for Edge Devices
| Published in | Wireless Personal Communications Vol. 137; no. 3; pp. 1715 - 1760 |
|---|---|
| Main Authors | , , |
| Format | Journal Article |
| Language | English |
| Published | New York: Springer US, 01.08.2024 (Springer Nature B.V.) |
| ISSN | 0929-6212, 1572-834X |
| DOI | 10.1007/s11277-024-11443-2 |
| Summary: | Machine learning (ML) plays a large role in a wide variety of artificial intelligence applications. This article provides a comprehensive survey summarizing recent trends and advances in hardware accelerator design for machine learning on various hardware platforms such as ASICs, FPGAs, and GPUs. We examine architectures that enable neural network (NN) execution with respect to computational units, network topologies, dataflow optimization, and accelerators based on emerging technologies. The key features of the various strategies for enhancing acceleration performance are highlighted, and current difficulties, such as fair comparison between designs, as well as open topics and obstacles in this field, are examined. This study aims to give readers a quick overview of neural network compression and acceleration, a clear evaluation of the different methods, and the confidence to get started in the right direction. |
|---|---|