2018
DOI: 10.3389/fncom.2018.00074
Optimal Localist and Distributed Coding of Spatiotemporal Spike Patterns Through STDP and Coincidence Detection

Abstract: Repeating spatiotemporal spike patterns exist and carry information. Here we investigated how a single spiking neuron can optimally respond to one given pattern (localist coding), or to either one of several patterns (distributed coding, i.e., the neuron's response is ambiguous but the identity of the pattern could be inferred from the response of multiple neurons), but not to random inputs. To do so, we extended a theory developed in a previous paper (Masquelier, 2017), which was limited to localist coding. M…
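
As a rough illustration of the setup the abstract describes, the sketch below simulates a leaky integrate-and-fire coincidence detector whose synaptic weights are already concentrated (as STDP would leave them after convergence) on the afferents carrying a repeating spatiotemporal pattern; the neuron then fires during pattern presentations but stays silent for Poisson background input. This is not the authors' code, and every name and parameter value below is an illustrative assumption.

```python
# Minimal sketch (not the authors' code) of the setup analysed in the paper:
# an LIF coincidence detector receiving N afferent spike trains, a subset of
# which repeat a fixed spatiotemporal pattern amid Poisson background noise.
# All parameter values are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

N = 200             # number of afferent spike trains (assumed)
rate = 5.0          # background Poisson rate, Hz (assumed)
dt = 1e-3           # simulation step, s
T = 2.0             # total duration, s
pattern_len = 0.05  # duration of the repeating pattern, s (assumed)

# Background Poisson activity: boolean spike matrix (time steps x afferents)
spikes = rng.random((int(T / dt), N)) < rate * dt

# Embed one fixed 50 ms pattern at t = 0.5 s and t = 1.5 s on the first 100 afferents
pattern = rng.random((int(pattern_len / dt), N // 2)) < 50.0 * dt
for t_start in (0.5, 1.5):
    i = int(t_start / dt)
    spikes[i:i + pattern.shape[0], :N // 2] = pattern

# LIF coincidence detector with weights concentrated on the pattern afferents,
# idealising the selectivity that STDP would produce after convergence.
w = np.zeros(N)
w[:N // 2] = 1.0
tau_m = 0.01        # membrane time constant, s (assumed)
theta = 25.0        # firing threshold, arbitrary units (assumed)

V = 0.0
for step in range(spikes.shape[0]):
    V += dt * (-V / tau_m) + w @ spikes[step]
    if V > theta:
        print(f"output spike at t = {step * dt:.3f} s")
        V = 0.0     # reset after firing
```

Running the sketch prints output spikes only around the two pattern presentations, which is the selective, low-latency response the paper's theory optimizes.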

Cited by 19 publications (19 citation statements)
References 60 publications (107 reference statements)

“…In fact, the STDP rule focuses on the first spikes of the input pattern which contain most of the information needed for pattern recognition. It has been shown that repeating spatio-temporal patterns can be detected and learned by a single neuron based on STDP [79], [80]. STDP can also solve difficult computational problems in localizing a repeating spatio-temporal spike pattern and enabling some forms of temporal coding, even if an explicit time reference is missing [79], [81].…”
Section: B. Learning Rules in SNNs
confidence: 99%
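
For concreteness, here is a hedged sketch of the standard pair-based STDP window that the quoted passage refers to: presynaptic spikes shortly preceding a postsynaptic spike are potentiated, which is why the rule ends up selecting the earliest, most reliable spikes of a repeating pattern. The function name, time constants, and learning rates are illustrative assumptions, not values from the cited works.

```python
# Hedged sketch of a pair-based (additive) STDP update; constants are assumed.
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=0.017, tau_minus=0.034, w_min=0.0, w_max=1.0):
    """Update one synaptic weight from a single pre/post spike-time pair (s).

    Pre-before-post (t_pre <= t_post) potentiates; post-before-pre depresses.
    The exponential windows mean only near-coincident spikes matter.
    """
    dt = t_post - t_pre
    if dt >= 0:
        w += a_plus * np.exp(-dt / tau_plus)    # LTP branch
    else:
        w -= a_minus * np.exp(dt / tau_minus)   # LTD branch
    return float(np.clip(w, w_min, w_max))

# Example: a presynaptic spike 5 ms before the postsynaptic spike is potentiated
print(stdp_update(0.5, t_pre=0.100, t_post=0.105))
```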
“…[83] Unsupervised learning helps to fully exploit the power advantage of SNNs through localized training and STDP-based learning rules, [84,85] though it falls short in accuracy, especially in multilayer networks. The merits of STDP, namely its localized spatio-temporal weight updates and its sensitivity to repeating spike patterns embedded in stochastic spike trains, make it well suited to energy-efficient unsupervised learning, even without an explicit time reference [86,87] and with noisy training input. It has been shown that a fully unsupervised SNN can achieve competitive accuracy.…”
Section: SNN Training Topologies
confidence: 99%
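
To make the "localized training" point concrete, the sketch below shows an unsupervised learning loop in which each synapse is modified only from locally available pre- and postsynaptic spike times, with no labels or global error signal. The rule is a simplified STDP variant chosen purely for illustration; all names and constants are assumptions, and the Poisson input stands in for real event-driven data.

```python
# Illustrative unsupervised, local learning loop (assumed names and constants):
# every weight update depends only on the last presynaptic spike time of that
# synapse and the current postsynaptic spike, with no labels or global error.
import numpy as np

rng = np.random.default_rng(1)
N, steps, dt = 100, 5000, 1e-3
w = rng.uniform(0.3, 0.7, N)           # random initial weights
last_pre = np.full(N, -np.inf)         # last presynaptic spike time per afferent
tau_m, theta = 0.01, 4.0               # membrane time constant, threshold (assumed)
a_plus, a_minus, tau_plus = 0.01, 0.0105, 0.020
V = 0.0

for step in range(steps):
    t = step * dt
    pre = rng.random(N) < 5.0 * dt     # unlabeled Poisson input (stand-in for data)
    last_pre[pre] = t
    V += dt * (-V / tau_m) + w @ pre
    if V > theta:                      # postsynaptic spike: apply local STDP
        # potentiate recently active afferents, depress the long-silent ones
        recent = np.exp(-(t - last_pre) / tau_plus)
        w = np.clip(w + a_plus * recent - a_minus * (recent < 1e-3), 0.0, 1.0)
        V = 0.0
```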
“…The full-digitalized RRAM crossbar exhibits great robustness when encountering computing variance caused by the stochastic switching property [87]. However, the intermediate states are…”
Section: CNN Circuits
confidence: 99%
“…However, this method is still burdened with the computational overhead of training and does little to exploit the efficiency of event-driven computation. The SNN learning rules of spike-timing-dependent plasticity (STDP) and spike-based back-propagation have been demonstrated to capture hierarchical features in SpikeCNNs (Masquelier and Thorpe, 2007; O'Connor et al., 2013; Panda et al., 2017; Kheradpisheh et al., 2018; Masquelier and Kheradpisheh, 2018; Falez et al., 2019). Both of these methods better equip the network to deal with event-driven sensors, where significant gains over CNNs could be realized.…”
Section: Introduction
confidence: 99%