2018
DOI: 10.3389/fncom.2018.00046

Event-Based, Timescale Invariant Unsupervised Online Deep Learning With STDP

Abstract: Learning of hierarchical features with spiking neurons has mostly been investigated in the database framework of standard deep learning systems. However, the properties of neuromorphic systems could be particularly interesting for learning from continuous sensor data in real-world settings. In this work, we introduce a deep spiking convolutional neural network of integrate-and-fire (IF) neurons which performs unsupervised online deep learning with spike-timing dependent plasticity (STDP) from a stream of async…
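The abstract's core mechanism pairs integrate-and-fire dynamics with STDP updates driven by individual events. As a rough illustration only (the paper's actual learning rule, layer sizes, and parameters differ), the sketch below shows one IF layer processing an event stream with a simplified additive STDP rule, in which inputs active since the last output spike are potentiated and all others depressed; every name and constant here is an illustrative assumption:

import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 4, 2                        # toy sizes, not the paper's
w = rng.uniform(0.0, 1.0, (n_out, n_in))  # synaptic weights in [0, 1]
v = np.zeros(n_out)                       # IF membrane potentials
active = np.zeros(n_in, dtype=bool)       # inputs seen since the last output spike
threshold = 1.0
a_plus, a_minus = 0.05, 0.03              # illustrative STDP step sizes

def process_event(i):
    """Integrate one input event on line i; fire and learn on threshold crossing."""
    active[i] = True
    v[:] += w[:, i]                       # event-driven integration, no global clock
    fired = np.flatnonzero(v >= threshold)
    for j in fired:
        # Simplified STDP: potentiate synapses from recently active inputs,
        # depress the rest, and keep weights bounded in [0, 1].
        w[j, active] += a_plus
        w[j, ~active] -= a_minus
        np.clip(w[j], 0.0, 1.0, out=w[j])
        v[j] = 0.0                        # reset the membrane after the spike
    if fired.size:
        active[:] = False                 # open a fresh coincidence window

# Unsupervised online learning: just stream events through the layer.
for event in rng.integers(0, n_in, 200):
    process_event(event)

Feeding a stream of input indices into process_event drives inference and learning at once, which is what makes such a scheme online and unsupervised.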

Cited by 59 publications (50 citation statements)
References 27 publications
“…DVS Trained Network - During initial testing, the network trained on real DVS event data struggled to converge to useful features within the second layer, due to the sparse feature maps that were learned in the first layer. A set of pre-trained weights representing Gabor features, shown in Figure 6, indicative of those seen in the first layer of other SNNs [10,18], allowed all the networks to have better building blocks for creating more complex features in the second and third layers. Throughout all further testing, this method was used, with the four features presented in Figure 6 as the first layer of each network.…”
Section: Testing Results
confidence: 99%
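The snippet above fixes the first layer to four Gabor features rather than learning it from sparse DVS data. A minimal sketch of how such a fixed bank of four oriented Gabor kernels could be built (kernel size, sigma, and wavelength here are assumptions, not the values behind the cited paper's Figure 6):

import numpy as np

def gabor_kernel(size=7, theta=0.0, sigma=2.0, wavelength=4.0):
    """Real part of a Gabor filter at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength)
    return envelope * carrier

# Four orientations as a fixed, pre-trained first convolutional layer.
first_layer = np.stack([gabor_kernel(theta=t)
                        for t in np.deg2rad([0, 45, 90, 135])])
print(first_layer.shape)  # (4, 7, 7)

Freezing these kernels gives the upper layers dense, orientation-selective feature maps to build on, which is the "better building blocks" role described in the quote.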
“…This allows it to act as a detection layer. In this situation, it can forgo the use of a fully connected layer [18] or support vector machine [10], as classification isn't required. The network's evaluation will be based on the number of successful detections and its robustness to a range of noisy, high-spike-rate inputs, replicating low-light conditions.…”
Section: Proposed UAV Detection System
confidence: 99%
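Since the cited system only needs detection, not classification, its readout can be as simple as thresholding spike counts in the final layer. A hypothetical sketch of that idea (the threshold, window handling, and function name are assumptions, not the cited paper's code):

import numpy as np

def detect(spike_counts, k=5):
    """Flag a detection when any final-layer neuron exceeds k spikes
    in the current time window; no FC layer or SVM is involved."""
    winners = np.flatnonzero(spike_counts >= k)
    return winners.size > 0, winners

# Example: counts accumulated over one window of the detection layer.
detected, which = detect(np.array([0, 2, 7, 1]))
print(detected, which)  # True [2]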
“…Indeed, our network is first trained in the formal domain, and its weights are then exported to an SNN with the same topology, which is then directly ready for inference. Note that there exist learning methods that operate directly in the spike domain, such as SpikeProp or STDP [18] [39] [19] [40]. Additional information concerning spiking learning methods is available in [1], which presents a complete survey of spiking neural network training techniques.…”
Section: Contributions
confidence: 99%
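The conversion workflow in this quote separates training from inference: train a conventional (formal-domain) network, then reuse its weights in a spiking network with identical topology. A minimal sketch under assumed details (rate-coded inputs, IF neurons with soft reset, and random stand-in weights in place of actual backprop-trained ones):

import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for formally trained weights (in practice: exported from the ANN).
W1 = rng.normal(scale=0.3, size=(16, 8))
W2 = rng.normal(scale=0.3, size=(4, 16))

def snn_inference(rates, T=200, threshold=1.0):
    """Run the same two-layer topology as an IF spiking network.

    Inputs arrive as Bernoulli spike trains with per-line rates;
    the output neuron with the most spikes wins."""
    v1, v2 = np.zeros(16), np.zeros(4)
    counts = np.zeros(4)
    for _ in range(T):
        s_in = rng.random(rates.shape) < rates  # input spikes this step
        v1 += W1 @ s_in
        s1 = v1 >= threshold
        v1[s1] -= threshold                     # soft reset
        v2 += W2 @ s1
        s2 = v2 >= threshold
        v2[s2] -= threshold
        counts += s2
    return int(counts.argmax())

print(snn_inference(np.full(8, 0.2)))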
“…Deep SNNs (DSNNs) are brain-inspired information processing systems that have shown interesting capabilities, such as fast inference and event-driven information processing, which make them excellent candidates for deep neural network architectures [27]. Event-driven means that SNNs generate spikes in response to stimulation from other neurons and show very little firing activity when they receive sparse inputs; such a strategy results in power-efficient computing [28]. DSNNs have been developed for supervised learning [29], unsupervised learning [30,28] and reinforcement learning paradigms [31].…”
Section: Introduction
confidence: 99%
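The power-efficiency claim in this quote follows from an event-driven network performing work only where spikes occur. A back-of-the-envelope comparison (sizes and sparsity are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 1024, 128
w = rng.normal(size=(n_out, n_in))

# Sparse input: roughly 1% of input lines carry an event this timestep.
events = np.flatnonzero(rng.random(n_in) < 0.01)

dense_ops = n_in * n_out           # frame-based pass touches every synapse
event_ops = events.size * n_out    # event-driven pass touches active columns only
update = w[:, events].sum(axis=1)  # membrane contribution from events alone

print(dense_ops, event_ops)        # 131072 vs. ~1280 synaptic operations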