Brian2CUDA: Flexible and Efficient Simulation of Spiking Neural Network Models on GPUs

Bibliographic Details
Published in: Frontiers in Neuroinformatics, Vol. 16, p. 883700
Main Authors: Alevi, Denis; Stimberg, Marcel; Sprekeler, Henning; Obermayer, Klaus; Augustin, Moritz
Format: Journal Article
Language: English
Published: Frontiers Media S.A., Switzerland, 31.10.2022
ISSN: 1662-5196
DOI: 10.3389/fninf.2022.883700

Summary:Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing in the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks, but lacking the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, an open-source software that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian's CPU backend. Currently, Brian2CUDA is the only package that supports Brian's full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, while being typically slower for small and faster for large networks. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort and thereby makes the advancements of GPU computing available to a larger audience of neuroscientists.
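
For illustration, the following is a minimal sketch of how a Brian 2 model is switched to the GPU backend described in the summary. It assumes the brian2cuda package and a CUDA-capable GPU are available; the leaky integrate-and-fire network is an arbitrary example chosen here, not a model from the article.

    from brian2 import *
    import brian2cuda                      # registers the "cuda_standalone" device

    set_device("cuda_standalone")          # generate and run CUDA code instead of C++ code

    # Arbitrary example network: leaky integrate-and-fire neurons with sparse synapses
    tau = 10*ms
    eqs = "dv/dt = -v / tau : 1 (unless refractory)"
    group = NeuronGroup(10000, eqs, threshold="v > 0.8", reset="v = 0",
                        refractory=5*ms, method="exact")
    group.v = "rand()"

    syn = Synapses(group, group, on_pre="v_post += 0.1")
    syn.connect(p=0.02)

    spikes = SpikeMonitor(group)
    run(1*second)
    print(spikes.num_spikes)

Apart from importing brian2cuda and calling set_device("cuda_standalone"), the model definition is identical to a standard CPU-based Brian 2 script, which illustrates the "minimal effort" point made in the summary.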
Edited by: James Courtney Knight, University of Sussex, United Kingdom
Reviewed by: Jochen Martin Eppler, Helmholtz Association of German Research Centres (HZ), Germany; Felix Benjamin Kern, International Research Center for Neurointelligence (IRCN), Japan