2018
DOI: 10.3389/fnins.2018.00941

GPUs Outperform Current HPC and Neuromorphic Solutions in Terms of Speed and Energy When Simulating a Highly-Connected Cortical Model

Abstract: While neuromorphic systems may be the ultimate platform for deploying spiking neural networks (SNNs), their distributed nature and optimization for specific types of models make them unwieldy tools for developing them. Instead, SNN models tend to be developed and simulated on computers or clusters of computers with standard von Neumann CPU architectures. Over the last decade, as well as becoming a common fixture in many workstations, NVIDIA GPU accelerators have entered the High Performance Computing field an…

Cited by 65 publications (101 citation statements)
References 73 publications
“…One important aspect of large-scale brain simulations not addressed in this work is synaptic plasticity and its role in learning. As discussed in our previous work (12), GeNN supports a wide variety of synaptic plasticity rules. In order to modify synaptic weights, they need to be stored in memory rather than generated procedurally.…”
Section: Discussion (citation type: mentioning)
confidence: 72%
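The distinction drawn in the statement above, between weights regenerated procedurally and weights that a plasticity rule must be able to modify, can be illustrated with a short CUDA sketch. This is a hypothetical illustration, not code generated by GeNN: the kernel names, the hash-based weight function, and the toy additive update standing in for a plasticity rule are all invented for this example.

#include <cuda_runtime.h>

// Procedural weight: recomputed from the synapse index on every use, so no
// per-synapse storage is needed -- but there is nothing for plasticity to modify.
__device__ float proceduralWeight(unsigned int synapseIdx)
{
    // Toy hash standing in for the counter-based RNG a real simulator would use.
    unsigned int h = synapseIdx * 2654435761u;
    h ^= h >> 16;
    return (h & 0xFFFFFFu) / 16777216.0f;
}

// Static synapses: accumulate input from weights generated on the fly.
__global__ void accumulateProcedural(float *totalInput, unsigned int numSynapses)
{
    const unsigned int i = (blockIdx.x * blockDim.x) + threadIdx.x;
    if (i < numSynapses) {
        atomicAdd(totalInput, proceduralWeight(i));
    }
}

// Plastic synapses: weights must live in global memory so an update rule
// (a toy additive potentiation here, standing in for STDP) can change them in place.
__global__ void potentiateStored(float *weights, unsigned int numSynapses, float dw)
{
    const unsigned int i = (blockIdx.x * blockDim.x) + threadIdx.x;
    if (i < numSynapses) {
        weights[i] += dw;
    }
}

int main()
{
    const unsigned int numSynapses = 1 << 20;
    const unsigned int blocks = (numSynapses + 255) / 256;

    float *d_totalInput = nullptr;
    float *d_weights = nullptr;
    cudaMalloc(&d_totalInput, sizeof(float));
    cudaMalloc(&d_weights, numSynapses * sizeof(float));
    cudaMemset(d_totalInput, 0, sizeof(float));
    cudaMemset(d_weights, 0, numSynapses * sizeof(float));

    accumulateProcedural<<<blocks, 256>>>(d_totalInput, numSynapses);
    potentiateStored<<<blocks, 256>>>(d_weights, numSynapses, 0.01f);
    cudaDeviceSynchronize();

    cudaFree(d_totalInput);
    cudaFree(d_weights);
    return 0;
}

The procedural path trades memory for recomputation, which is why the quoted statement notes that plastic weights must instead be stored in memory.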
“…The results are stored in CPU memory, uploaded to GPU memory and then used during the simulation. We have recently extended GeNN to use code generation from code snippets to also generate efficient, parallel code for model initialisation (12). Offloading initialisation to the GPU in this way made it around 20× faster on a desktop PC (12), demonstrating that initialisation algorithms are well-suited for GPU acceleration.…”
Section: Results (citation type: mentioning)
confidence: 99%
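The statement above contrasts initialising synaptic state on the host and uploading it to GPU memory with generating parallel initialisation code that runs directly on the device. The CUDA sketch below illustrates the two approaches; it is a hypothetical illustration rather than GeNN's generated code, and the kernel name, hash-based initialiser, and problem size are invented for the example.

#include <vector>
#include <cuda_runtime.h>

// Device-side initialisation: one thread writes one synapse's weight, using a
// simple hash as a placeholder for a proper counter-based RNG such as Philox.
__global__ void initWeightsOnDevice(float *weights, unsigned int numSynapses, float gScale)
{
    const unsigned int i = (blockIdx.x * blockDim.x) + threadIdx.x;
    if (i < numSynapses) {
        unsigned int h = i * 2654435761u;
        h ^= h >> 16;
        weights[i] = gScale * ((h & 0xFFFFFFu) / 16777216.0f);
    }
}

int main()
{
    const unsigned int numSynapses = 1 << 24;
    float *d_weights = nullptr;
    cudaMalloc(&d_weights, numSynapses * sizeof(float));

    // (a) Host initialisation followed by an upload: both the serial loop and
    // the host-to-device transfer scale with the number of synapses.
    std::vector<float> h_weights(numSynapses, 0.1f);
    cudaMemcpy(d_weights, h_weights.data(), numSynapses * sizeof(float),
               cudaMemcpyHostToDevice);

    // (b) Device initialisation: the same work done by one thread per synapse,
    // with no transfer of the weight array at all.
    const unsigned int blocks = (numSynapses + 255) / 256;
    initWeightsOnDevice<<<blocks, 256>>>(d_weights, numSynapses, 0.1f);
    cudaDeviceSynchronize();

    cudaFree(d_weights);
    return 0;
}

Moving initialisation onto the device removes the serial host loop and the bulk transfer, which is consistent with the roughly 20× speed-up on a desktop PC reported in the quoted passage.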