Neuromorphic Accelerator for Spiking Neural Network Using SOT-MRAM Crossbar Array

Bibliographic Details
Published in: IEEE Transactions on Electron Devices, Vol. 70, No. 11, pp. 6012-6020
Main Authors: Verma, Gaurav; Nisar, Arshid; Dhull, Seema; Kaushik, Brajesh Kumar
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.11.2023
ISSN: 0018-9383, 1557-9646
DOI: 10.1109/TED.2023.3317357

More Information
Summary: Spiking neural networks (SNNs) have gained significant interest in recent years due to their biological system-like processing. However, the hardware implementation of spiking neurons, synapses, and related algorithms in CMOS technology is limited by area and power constraints. In this work, an approach for a spin-orbit-torque magnetic random access memory (SOT-MRAM)-based hardware accelerator for SNNs is presented. The neuromorphic core of the accelerator consists of crossbar arrays of SOT-MRAM devices interfaced with spiking neurons and peripheral circuits. The proposed design is compared with designs based on various other nonvolatile memory devices, including phase-change memory (PCM), resistive random access memory (RRAM), and spin-transfer torque MRAM (STT-MRAM). SOT-MRAM provides subnanosecond switching with low energy consumption and high throughput. The benefits of the proposed design for a large-scale neuromorphic accelerator are explored using a complete device-circuit-algorithm framework for standard MNIST image classification. The results show that the SOT-MRAM-based neuromorphic core achieves 6.4x, 70.32x, 20.25x, and 4.83x higher throughput per unit watt compared to SRAM-, PCM-, RRAM-, and STT-MRAM-based designs, respectively.
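
The crossbar idea summarized above maps synaptic weights to device conductances so that each column current realizes the multiply-accumulate feeding a spiking neuron. The following is a minimal behavioral sketch of that idea in NumPy, not the authors' device-circuit-algorithm framework; the layer sizes, conductance range, threshold, and leak factor are illustrative assumptions rather than device parameters from the paper.

# Minimal sketch: one SNN layer mapped onto a memristive crossbar, with binary
# input spikes and a simple leaky integrate-and-fire (LIF) neuron at each column.
# All numeric values below are illustrative placeholders, not SOT-MRAM parameters.
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_neurons = 784, 10                       # e.g., MNIST pixels -> output neurons
G = rng.uniform(0.0, 1.0, (n_inputs, n_neurons))    # crossbar conductances (synaptic weights)

v_mem = np.zeros(n_neurons)                         # membrane potentials
v_th, leak = 20.0, 0.9                              # hypothetical threshold and leak factor

def step(spikes_in, v_mem):
    """One timestep: the crossbar sums input spikes weighted by conductance
    (column currents), which charge the LIF neurons; fired neurons reset."""
    i_col = spikes_in @ G                           # weighted sum done in the analog crossbar in hardware
    v_mem = leak * v_mem + i_col                    # leaky integration
    spikes_out = (v_mem >= v_th).astype(float)      # threshold comparison
    v_mem = np.where(spikes_out > 0, 0.0, v_mem)    # reset fired neurons
    return spikes_out, v_mem

# Usage: drive the layer with random binary input spikes for a few timesteps
for t in range(5):
    spikes_in = (rng.random(n_inputs) < 0.1).astype(float)
    spikes_out, v_mem = step(spikes_in, v_mem)

In hardware, the matrix-vector product in step() is performed in place by the crossbar (Ohm's law and Kirchhoff's current law), which is the source of the throughput-per-watt advantage the summary reports.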