2018
DOI: 10.1162/neco_a_01113

Computing with Spikes: The Advantage of Fine-Grained Timing

Abstract: Neural-inspired spike-based computing machines often claim to achieve considerable advantages in terms of energy and time efficiency by using spikes for computation and communication. However, fundamental questions about spike-based computation remain unanswered. For instance, how much advantage do spike-based approaches have over conventional methods, and under what circumstances does spike-based computing provide a comparative advantage? Simply implementing existing algorithms using spikes as the medium of c…

Cited by 18 publications (10 citation statements). References 66 publications (80 reference statements).
“…When implemented on neuromorphic architectures, these algorithms promise speed and efficiency gains by exploiting fine-grain parallelism and event-based computation. Examples include computational primitives, such as sorting, max, min, and median operations [70], a wide range of graph algorithms [71]- [74], NP-complete/hard problems, such as constraint satisfaction [75], boolean satisfiability [76], dynamic programming [77], and quadratic unconstrained binary optimization [78], [79], and novel Turing-complete computational frameworks, such as Stick [80] and SN P [81].…”
Section: Computing With Time
confidence: 99%
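The computational primitives cited above (sorting, max, min, and median with spikes) can be illustrated with latency coding: each input neuron fires once, at a time proportional to its value, so the read-out recovers the minimum from the first spike and the full sorted order from spike arrival order. The sketch below is a minimal software simulation of that idea, not the specific method of reference [70]; the function names and the linear time-encoding are assumptions for illustration.

```python
import heapq

def latency_sort(values, scale=1.0):
    """Sort values by simulating latency (time-to-first-spike) coding:
    each input neuron fires once at a time proportional to its value,
    and the read-out records the spike arrival order."""
    # Schedule one spike per value; the spike time encodes the magnitude.
    events = [(v * scale, i) for i, v in enumerate(values)]
    heapq.heapify(events)  # event queue ordered by spike time
    arrival_order = []
    while events:
        t, i = heapq.heappop(events)  # earliest spike arrives first
        arrival_order.append(values[i])
    return arrival_order

def latency_min(values):
    # The first spike to arrive encodes the minimum.
    return latency_sort(values)[0]
```

Event-driven hardware performs this "computation by race": the answer emerges from which spike arrives first, rather than from pairwise comparisons.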
“…In contrast to other work, the STPU has been developed to be a general neuromorphic architecture. Other neuroscience work and algorithms are being developed against the STPU, such as spike sorting and using spikes for median filtering [13]. Currently, we have an STPU simulator implemented in MATLAB as well as an implementation on an FPGA chip.…”
Section: Mapping the LSM Onto the STPU
confidence: 99%
“…While we examine the STPU in the context of LSMs, the STPU is a general neuromorphic architecture. Other spike-based algorithms have been implemented on the STPU [12], [13].…”
Section: Introduction
confidence: 99%
“…Spiking Neural Networks display promising characteristics for this paradigm change [23], [25], [30], [34], such as unsupervised training with STDP rules, which reduces the need for large annotated datasets. SNNs show higher efficiency than classical neural networks, from both computation [18] and energy [8] points of view.…”
Section: Introduction
confidence: 99%
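The unsupervised STDP training mentioned above can be summarized by the standard pair-based update: a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike and weakened when it follows. The sketch below uses illustrative constants chosen for this example, not parameters from the cited works.

```python
import math

# Pair-based STDP update. Learning rates and time constants are
# assumed illustrative values, not taken from the cited references.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants in ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post fires before pre: depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0
```

Because the update depends only on local spike timing, no labeled data is needed, which is the efficiency argument made in the quoted passage.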