The 2011 International Joint Conference on Neural Networks (IJCNN 2011)
DOI: 10.1109/ijcnn.2011.6033344

Reference time in SpikeProp

Abstract: Although some studies have been done on SpikeProp, the learning algorithm for spiking neural networks, little has been said about the input bias neuron it requires to set the reference start time. This paper examines the importance of the reference time in neural networks based on temporal encoding. The findings refute previous assumptions about the reference start time.
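To make the role of the reference time concrete, here is a minimal sketch of temporal (latency) encoding measured against a reference spike, the kind of encoding SpikeProp operates on. The function name, t_ref, and the t_max window are illustrative assumptions, not the paper's code:

```python
# Minimal sketch: temporal (latency) encoding relative to a reference spike.
# encode_latency, t_ref, and t_max are hypothetical, for illustration only.

def encode_latency(values, t_ref=0.0, t_max=10.0):
    """Map values in [0, 1] to spike times: larger values fire earlier.

    t_ref is the reference start time provided by a bias neuron; every
    encoded spike time is measured from it.
    """
    return [round(t_ref + (1.0 - v) * t_max, 3) for v in values]

inputs = [0.2, 0.9, 0.5]
print(encode_latency(inputs))             # reference at t = 0: [8.0, 1.0, 5.0]
print(encode_latency(inputs, t_ref=3.0))  # shifted reference: [11.0, 4.0, 8.0]
```

Shifting t_ref translates every spike time by the same amount, which is why the choice of reference start time matters for any learning rule that reads meaning from absolute spike times.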

Cited by 7 publications (3 citation statements)
References 10 publications
“…Without a start cue, the firing delay of each occurring spike cannot be accurately determined. The significance of the start cue has been discussed at length in the literature [11,28]. The start cue time is set when the first presynaptic spike arrives at the CD neuron.…”
Section: Output of Coincidence Detection Network
confidence: 99%
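The convention quoted above can be sketched in a few lines (illustrative data and names, not the cited paper's implementation): the start cue is taken as the arrival time of the earliest presynaptic spike at the coincidence-detection (CD) neuron, and every firing delay is measured from it.

```python
# Sketch: firing delays measured against a start cue, where the cue is the
# arrival time of the first presynaptic spike at the CD neuron.
# Spike times are made-up example values.

presynaptic_spikes = [4.2, 1.7, 3.0, 5.1]  # arrival times at the CD neuron

start_cue = min(presynaptic_spikes)        # first arrival defines the cue
delays = [round(t - start_cue, 3) for t in presynaptic_spikes]

print(f"start cue at t = {start_cue}")     # 1.7
print(f"firing delays: {delays}")          # [2.5, 0.0, 1.3, 3.4]
```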
“…In comparable settings, so far only classification tasks or simple mapping tasks have been considered [9,11,14,21], either with only a single neuron or in much larger Liquid State Machines (LSMs) [15], but no computational tasks. Computational tasks such as the Exclusive-Or problem have been considered in layered networks, but only with single-spike latency-encoded outputs [3,4,29,30]. In contrast, our aim is to demonstrate that layered networks can learn to perform simple but non-trivial computations in a supervised framework and make use of multiple timed spikes for input and output patterns.…”
Section: Introduction
confidence: 99%
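The single-spike latency encoding this statement refers to can be sketched for the Exclusive-Or task. The spike times below follow a common SpikeProp-style convention (early spike encodes logical 1, late spike logical 0) but are illustrative values, not taken from any of the cited papers:

```python
# Sketch: single-spike latency encoding of XOR inputs and targets.
# All times are illustrative; early spikes encode 1, late spikes encode 0.

T_EARLY, T_LATE = 0.0, 6.0    # input spike times (ms)
T_TRUE, T_FALSE = 10.0, 16.0  # target output spike times (ms)

def encode_bit(b):
    return T_EARLY if b else T_LATE

for a in (0, 1):
    for b in (0, 1):
        in_times = (encode_bit(a), encode_bit(b))
        out_time = T_TRUE if (a ^ b) else T_FALSE
        print(f"inputs {a},{b} -> spike times {in_times}, target spike at {out_time} ms")
```

Each pattern uses a single spike per input neuron and a single target spike at the output, which is exactly the restriction the quoted passage contrasts with multi-spike inputs and outputs.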