2018
DOI: 10.1016/j.neunet.2017.12.005
STDP-based spiking deep convolutional neural networks for object recognition

Abstract: Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated - using rate-based neural networks trained with back-propagation - that having many layers increases the recognition robustness, an approach known as deep learning. We t…
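As context for the abstract, the classic pair-based STDP rule potentiates a synapse when the presynaptic spike precedes the postsynaptic one and depresses it otherwise, with an exponentially decaying dependence on the timing difference. A minimal sketch of that exponential form follows; note the paper itself uses a simplified, timing-order-based variant, and all parameter values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate if pre fires before post (LTP),
    depress if post fires before pre (LTD). Times are in ms."""
    dt = t_post - t_pre  # spike-timing difference
    if dt >= 0:
        dw = a_plus * np.exp(-dt / tau_plus)    # LTP branch
    else:
        dw = -a_minus * np.exp(dt / tau_minus)  # LTD branch
    return float(np.clip(w + dw, w_min, w_max))
```

For example, a pre-before-post pairing (`t_pre=0, t_post=5`) nudges the weight up, while the reverse ordering nudges it down, with both changes shrinking as the spikes move further apart in time.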

Cited by 601 publications
(608 citation statements)
References 56 publications
“…In fact, the biologically-inspired model of spike-timing-dependent plasticity (STDP) can be expressed in terms of covariances between spike trains [22,16], which was an inspiration of the present study. STDP-like learning rules were used for object recognition [23] and related to the expectation-maximization algorithm [30]. Although genuine STDP relates to unsupervised learning, extensions were developed to implement supervised learning for spike patterns [20,35,19,14,41].…”
Section: Learning and (De)coding in Biological Spiking Neuronal Network (mentioning)
confidence: 99%
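The covariance formulation this excerpt mentions can be made concrete: under a standard mean-field approximation, the expected weight drift is the STDP kernel integrated against the pre/post spike-train cross-covariance over the spike-timing lag. A hedged numerical sketch follows; the function name, kernel shape, and parameters are illustrative assumptions, not the formulation of the cited works:

```python
import numpy as np

def expected_stdp_drift(lags, cross_cov, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Approximate E[dw] as the integral of the STDP kernel W(s) times
    the pre/post cross-covariance C(s) over lag s (ms)."""
    kernel = np.where(lags >= 0,
                      a_plus * np.exp(-np.abs(lags) / tau),    # LTP side
                      -a_minus * np.exp(-np.abs(lags) / tau))  # LTD side
    ds = lags[1] - lags[0]
    return float(np.sum(kernel * cross_cov) * ds)  # Riemann-sum integral

# A causal correlation (post tends to follow pre) lives at positive lags,
# overlapping the LTP side of the kernel, so the expected drift is positive.
lags = np.linspace(-100.0, 100.0, 2001)
causal_cov = np.where(lags > 0, np.exp(-lags / 10.0), 0.0)
```

Flipping the covariance to negative lags (pre tends to follow post) overlaps the LTD side instead and yields a negative expected drift.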
“…We understand this intuition, but it is worth emphasizing again that there are existing models of visual object recognition that are designed to be biologically plausible that have grandmother cells. [31] Indeed, there are biologically plausible models of visual object identification that learn grandmother cells. [70] Even artificial neural network models that are claimed to learn distributed codes in fact learn grandmother representations when trained to coactivate multiple words (or objects or faces) at the same time in short-term memory. [50,71] These findings suggest that there are computational advantages of grandmother cells in the context of short-term memory, much like there are computational advantages of learning highly selective representations in the hippocampus for the sake of episodic memory.…”
Section: Discussion (mentioning)
confidence: 99%
“…For example, in order for neural networks to act as accurate models of neural information processing, it may be imperative to use spike-based rather than rate-based formulations (Brette, 2015). Efforts are underway to effectively train spiking neural networks (Gerstner et al., 2014; Gerstner and Kistler, 2002; O'Connor and Welling, 2016; Huh and Sejnowski, 2017) and endow them with the same cognitive capabilities as their rate-based cousins (Zambrano and Bohte, 2016; Kheradpisheh et al., 2016; Lee et al., 2016; Thalmeier et al., 2015). In the same vein, researchers are exploring how probabilistic computations can be performed in neural networks (Pouget et al., 2013; Nessler et al., 2013; Orhan and Ma, 2016; Heeger, 2017) and deriving new biologically plausible synaptic plasticity rules (Schiess et al., 2016; Brea and Gerstner, 2016).…”
Section: Next-generation Artificial Neural Network (mentioning)
confidence: 99%