2017 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2017.7966099

Multi-layer unsupervised learning in a spiking convolutional neural network

Cited by 49 publications (36 citation statements)
References 28 publications
“…However, they still lead to bimodal distributions. Contrarily, [26], [27] claim that, by incorporating the weight dependency in an inversely proportional manner, stable unimodal distributions are obtained. Nevertheless, their stability results from a complex temporal LTP-LTD balance, and it is not theoretically guaranteed.…”
Section: Synaptic Plasticity (mentioning)
confidence: 99%
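The weight-dependence contrast drawn in this statement can be made concrete with a short sketch. The Python snippet below is a minimal, illustrative example rather than the rule from any of the cited papers, and its parameter values are assumptions: an additive STDP update is weight-independent and, with hard bounds, tends to push weights toward a bimodal distribution, whereas scaling LTP by (w_max - w) and LTD by w, i.e. the inversely proportional weight dependency mentioned above, settles the weights into a stable unimodal distribution.

```python
import numpy as np

# Minimal sketch (not the cited papers' exact rule) contrasting additive STDP,
# which drifts toward a bimodal weight distribution at the bounds, with a
# weight-dependent rule where LTP scales with (w_max - w) and LTD with w.
# All parameter values below are illustrative assumptions.

rng = np.random.default_rng(0)
w_max, a_plus, a_minus = 1.0, 0.01, 0.0105
w = rng.uniform(0.0, w_max, size=1000)

def additive_step(w, pre_post):
    # Pre-before-post events (+1) potentiate, post-before-pre (-1) depress,
    # independently of the current weight; hard clipping creates bimodality.
    dw = np.where(pre_post > 0, a_plus, -a_minus)
    return np.clip(w + dw, 0.0, w_max)

def weight_dependent_step(w, pre_post):
    # LTP shrinks as w approaches w_max, LTD shrinks as w approaches 0,
    # so weights settle into an intermediate, unimodal distribution.
    dw = np.where(pre_post > 0, a_plus * (w_max - w), -a_minus * w)
    return np.clip(w + dw, 0.0, w_max)

for _ in range(5000):
    events = rng.choice([-1, 1], size=w.shape)  # random pre/post spike order
    w = weight_dependent_step(w, events)

print(f"mean={w.mean():.3f}  std={w.std():.3f}")  # weights cluster around one mode
```

Swapping in additive_step instead shows the weights piling up near 0 and w_max, which is the bimodal outcome the quoted passage refers to.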
“…This leads to a better classification accuracy with a smaller number of neurons. Another unsupervised spiking network was presented by Tavanaei and Maida [23]. In contrast to our network, their network consists of four layers.…”
Section: Discussion (mentioning)
confidence: 99%
“…This is less biologically plausible than the learning rules used in our approach. Nonetheless, the complex structure of the Tavanaei and Maida (2017) [23] network and of the Kheradpisheh et al. (2017) [22] network is evidence for the feasibility of unsupervised STDP learning rules in a multi-layer network.…”
Section: Discussion (mentioning)
confidence: 99%
“…For this purpose, a new model was proposed using the MNIST dataset. This approach reaches an accuracy of 98% with a loss in the range of 0.1% to 8.5% [20]. In Germany, a CNN-based traffic sign recognition model was proposed.…”
Section: Literature Review (mentioning)
confidence: 99%