Surrogate gradients for analog neuromorphic computing

Bibliographic Details
Published in: Proceedings of the National Academy of Sciences (PNAS), Vol. 119, No. 4, pp. 1-9
Main Authors: Cramer, Benjamin; Billaudelle, Sebastian; Kanya, Simeon; Leibfried, Aron; Grübl, Andreas; Karasenko, Vitali; Pehle, Christian; Schreiber, Korbinian; Stradmann, Yannik; Weis, Johannes; Schemmel, Johannes; Zenke, Friedemann
Format: Journal Article
Language: English
Published: United States: National Academy of Sciences, 25.01.2022
ISSN: 0027-8424, 1091-6490
DOI: 10.1073/pnas.2109194119

More Information
Summary: To rapidly process temporal information at a low metabolic cost, biological neurons integrate inputs as an analog sum, but communicate with spikes, binary events in time. Analog neuromorphic hardware uses the same principles to emulate spiking neural networks with exceptional energy efficiency. However, instantiating high-performing spiking networks on such hardware remains a significant challenge due to device mismatch and the lack of efficient training algorithms. Surrogate gradient learning has emerged as a promising training strategy for spiking networks, but its applicability for analog neuromorphic systems has not been demonstrated. Here, we demonstrate surrogate gradient learning on the BrainScaleS-2 analog neuromorphic system using an in-the-loop approach. We show that learning self-corrects for device mismatch, resulting in competitive spiking network performance on both vision and speech benchmarks. Our networks display sparse spiking activity with, on average, less than one spike per hidden neuron and input, perform inference at rates of up to 85,000 frames per second, and consume less than 200 mW. In summary, our work sets several benchmarks for low-energy spiking network processing on analog neuromorphic hardware and paves the way for future on-chip learning algorithms.
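The core idea behind surrogate gradient learning, as summarized above, is to keep the non-differentiable spike (a threshold step) in the forward pass while substituting a smooth surrogate for its derivative in the backward pass. The following is a minimal illustrative sketch in NumPy, not the authors' implementation: the fast-sigmoid surrogate, the single-neuron setup, and all parameter values (`beta`, the threshold, the learning rate) are assumptions chosen for clarity. In an actual in-the-loop setting, the forward membrane trace would come from the (mismatched) analog hardware rather than from this software model.

```python
import numpy as np

def heaviside(v):
    """Forward pass: emit a binary spike when the membrane potential crosses threshold 0."""
    return (v > 0.0).astype(float)

def surrogate_grad(v, beta=10.0):
    """Backward pass: fast-sigmoid surrogate derivative, used in place of the
    Heaviside derivative (which is zero almost everywhere and blocks learning)."""
    return 1.0 / (beta * np.abs(v) + 1.0) ** 2

# Toy in-the-loop-style update for a single thresholded unit.
rng = np.random.default_rng(0)
w = rng.normal(size=5)       # weights to be trained
x = rng.normal(size=5)       # fixed input pattern
target = 1.0                 # desired spike output
lr = 0.1                     # learning rate (assumed value)

for _ in range(50):
    v = w @ x - 1.0                        # membrane potential minus threshold
    s = heaviside(v)                       # non-differentiable spike output
    err = s - target                       # simple output error
    grad_w = err * surrogate_grad(v) * x   # chain rule with the surrogate in place of dH/dv
    w -= lr * grad_w                       # gradient-descent weight update
```

Because the surrogate is largest near the threshold crossing and decays with distance from it, updates concentrate on weights whose membrane potential is close to firing, which is what lets gradient descent self-correct for per-neuron parameter mismatch when the forward pass runs on hardware.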
Bibliography:
Edited by Terrence Sejnowski, Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA; received May 19, 2021; accepted November 25, 2021
Author contributions: B.C., S.B., and F.Z. designed research; B.C., S.B., S.K., and F.Z. performed research; A.L., A.G., V.K., C.P., K.S., Y.S., J.W., J.S., and F.Z. contributed new reagents/analytic tools; B.C. and S.B. analyzed data; B.C., S.B., and F.Z. wrote the paper; A.L. contributed software; A.G., V.K., C.P., K.S., and Y.S. contributed core-components to the hardware; and J.S. designed the BrainScaleS-2 neuromorphic system.
1B.C. and S.B. contributed equally to this work.