2021
DOI: 10.3389/fninf.2021.659005

PyGeNN: A Python Library for GPU-Enhanced Neural Networks

Abstract: More than half of the Top 10 supercomputing sites worldwide use GPU accelerators, and they are becoming ubiquitous in workstations and edge computing devices. GeNN is a C++ library for generating efficient spiking neural network simulation code for GPUs. However, until now, the full flexibility of GeNN could only be harnessed by writing model descriptions and simulation code in C++. Here we present PyGeNN, a Python package which exposes all of GeNN's functionality to Python with minimal overhead. This provides …
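As a rough illustration of the workflow the abstract describes, the sketch below defines and runs a single population of leaky integrate-and-fire neurons through PyGeNN. It is a minimal sketch based on the PyGeNN 4.x-era API (GeNNModel, add_neuron_population, build/load/step_time); the population size, parameter values and simulation length are illustrative assumptions, not values from the paper.

```python
# Minimal PyGeNN workflow sketch (4.x-style API).
# Population size, parameters and duration are illustrative assumptions.
from pygenn.genn_model import GeNNModel

# Create a model using single-precision state variables
model = GeNNModel("float", "pygenn_example")
model.dT = 1.0  # simulation timestep in ms

# Parameters and initial state for GeNN's built-in "LIF" neuron model
lif_params = {"C": 1.0, "TauM": 20.0, "Vrest": -65.0, "Vreset": -70.0,
              "Vthresh": -50.0, "Ioffset": 0.5, "TauRefrac": 5.0}
lif_init = {"V": -65.0, "RefracTime": 0.0}

pop = model.add_neuron_population("neurons", 1000, "LIF", lif_params, lif_init)

# Generate, compile and load the GPU simulation code
model.build()
model.load()

# Run for 1000 ms of biological time
while model.t < 1000.0:
    model.step_time()
```

The heavy lifting happens in build() and load(), where GeNN generates and compiles model-specific GPU code; the Python layer only describes the model and drives the simulation loop.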

Cited by 34 publications (44 citation statements). References 31 publications (51 reference statements).
“…In our previous work [38] we introduced 'procedural connectivity', a technique where neurons' outgoing sparse random connectivity is regenerated on the fly when they spike, rather than being stored in memory. Using procedural connectivity we demonstrated how a model, so large it could previously only be simulated on a supercomputer, could be simulated on a single GPU.…”
Section: Procedural Connectivity (mentioning)
confidence: 99%
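The excerpt above describes the idea at a high level; the toy NumPy sketch below is only a conceptual illustration of it, not GeNN's generated CUDA code. Each presynaptic neuron's outgoing targets are redrawn from a deterministically seeded generator whenever they are needed, so no connectivity matrix is ever stored. The function name, seeding scheme and connection probability are assumptions made for the example.

```python
# Conceptual illustration of procedural connectivity (not GeNN's implementation):
# connectivity is recomputed from a per-neuron seed instead of being stored.
import numpy as np

def procedural_targets(pre_idx, num_post, prob, base_seed=1234):
    """Return the postsynaptic targets of presynaptic neuron `pre_idx`.

    Seeding the generator with (base_seed + pre_idx) makes the draw
    deterministic, so the same targets are reproduced every time this
    neuron spikes, without a stored connectivity matrix.
    """
    rng = np.random.default_rng(base_seed + pre_idx)
    return np.flatnonzero(rng.random(num_post) < prob)

# When presynaptic neuron 42 spikes, regenerate its row of the
# (never materialised) connectivity matrix on the fly:
targets = procedural_targets(42, num_post=100_000, prob=0.1)
```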
“…Because this algorithm is instantiated using GeNN's code generator, constant terms will be hard-coded into the kernel, allowing the CUDA compiler to transform divisions by constants into more efficient instructions, unroll loops and optimise for special cases such as O_chan = 1. Furthermore, unlike the probabilistic connectivity investigated in our previous work [38], combining procedural convolutions with learning is entirely possible, as the algorithm to back-propagate through the convolutional connectivity is very similar and equally efficient.…”
Section: Procedural Connectivity (mentioning)
confidence: 99%
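The point about hard-coded constants can be illustrated with a small, hypothetical code-generation step in the spirit of GeNN's approach; this is not GeNN's actual generator, which is written in C++. Convolution geometry known at model-definition time is substituted directly into the kernel source, so the device compiler sees literal constants it can fold, strength-reduce and unroll.

```python
# Toy sketch of baking constants into generated kernel source
# (illustrative only; GeNN's real code generator works differently).
CONV_KERNEL_TEMPLATE = """
extern "C" __global__ void convSynapseKernel(const float *in, float *out)
{{
    // Convolution geometry fixed at code-generation time, so the CUDA
    // compiler can fold divisions, unroll loops and specialise the
    // single-output-channel case (O_chan == 1)
    const int kernH = {kern_h};
    const int kernW = {kern_w};
    const int outChannels = {out_channels};
    // ... synaptic update using these compile-time constants ...
}}
"""

def generate_conv_kernel(kern_h, kern_w, out_channels):
    """Return CUDA source with the convolution constants hard-coded."""
    return CONV_KERNEL_TEMPLATE.format(kern_h=kern_h, kern_w=kern_w,
                                       out_channels=out_channels)

print(generate_conv_kernel(kern_h=3, kern_w=3, out_channels=1))
```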
“…Fast and energy-efficient simulation is a promise of neuromorphic computing [5]; it is desirable for large-scale neuroscientific models [6] and imperative in artificial intelligence and machine learning applications [7]. The first milestone is real-time performance, which was accomplished for the microcircuit model in 2019 on a neuromorphic system [8], followed this year by GPU systems [9,10], one of which has already broken into the sub-real-time regime [10]. However, these results have to be evaluated in the light of continuously advancing commodity hardware as a reference technology providing more flexibility at potentially lower costs.…”
Section: Introduction (mentioning)
confidence: 99%