2012
DOI: 10.3109/0954898x.2012.733842

Streaming parallel GPU acceleration of large-scale filter-based spiking neural networks

Abstract: The arrival of graphics processing unit (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously available only at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms that fully exploit the particulars of GPUs. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given inputs…

Cited by 7 publications (2 citation statements)
References 20 publications
“…This analysis ignores the fact that spikes in the ASNN (and SNN) are heavily localized to a subset of neurons: many neurons are silent while a few are active. Sparse and localized communication potentially offers a benefit to deep neural networks, as densely connected neural networks tend to be limited by the bandwidth required to read and write the appropriate weights from memory [21]. By this reasoning, for an ASNN that incurs a 100 ms delay to compete with an ANN in terms of bandwidth used, it can sustain an average firing rate of at most 10 Hz per neuron, since an ANN sampled at 10 Hz would achieve the same worst-case delay.…”
Section: ANN SNN ASNN (mentioning)
confidence: 99%
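The bandwidth argument in the excerpt above can be checked with a toy calculation. This is a minimal sketch with illustrative numbers (the neuron count and rates are assumptions, not taken from the cited paper): an ANN sampled at rate f_sample transmits one value per neuron per sample, while a spiking network transmits one event per spike, so the two use equal bandwidth exactly when the average firing rate equals the sampling rate.

```python
# Toy check of the quoted bandwidth argument (illustrative numbers only).
n_neurons = 1_000_000
f_sample = 10.0                      # Hz; ANN worst-case delay = 1/f_sample
worst_case_delay = 1.0 / f_sample    # seconds -> 0.1 s = 100 ms

# ANN traffic: every neuron sends one value each sampling step.
ann_events_per_s = n_neurons * f_sample

# SNN traffic: events scale with the average firing rate per neuron.
def snn_events_per_s(avg_rate_hz):
    return n_neurons * avg_rate_hz

# The SNN matches ANN bandwidth exactly when its average rate equals f_sample,
# which is why 10 Hz is the break-even rate for a 100 ms worst-case delay.
assert snn_events_per_s(10.0) == ann_events_per_s

# At a sparser 1 Hz average rate, the SNN uses 10x less bandwidth.
print(worst_case_delay, ann_events_per_s / snn_events_per_s(1.0))
```

Below the break-even rate, sparsity translates directly into a bandwidth advantage, which is the point the excerpt makes against weight-bandwidth-limited dense networks.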
“…Different parallelization strategies have been designed for GPUs (reviewed in [8]) with the aim of vectorizing these calculations: across neurons [9], or across spikes and synapses [3]. However, these strategies all have in common that they run many obsolete operations in each time step, as they typically also include the silent (non-spiking) neurons in the network.…”
Section: Introduction (mentioning)
confidence: 99%