The 2006 IEEE International Joint Conference on Neural Network Proceedings
DOI: 10.1109/ijcnn.2006.1716646
Fast Modifications of the SpikeProp Algorithm

Cited by 34 publications (50 citation statements)
References: 0 publications
“…Although the gradient descent learning method has been tested with various benchmark problems [5], all other studies on SpikeProp ([7], [8], [9], [10], [11], [12]) used the XOR problem or the Iris dataset with identical network structures as in the original paper on SpikeProp [5]. This gives little insight into the capabilities or limitations of a spiking neural network trained with SpikeProp.…”
Section: Reference Start Time
confidence: 99%
“…However, there were several issues this algorithm needed to address, such as slow convergence, especially for large datasets, and the problem of non-firing (silent) neurons. Subsequently, several methods have been developed to improve SpikeProp [11][12][13][14][15]. These gradient-based algorithms are computationally powerful but are often regarded as not biologically plausible because they require a non-local spread of error signals from one synapse to another.…”
Section: Introduction
confidence: 99%
“…This is referred to as the 'silent neuron' problem and has been discussed in detail in [8]. Several extensions to SpikeProp have been proposed which improve its convergence characteristics by adding a momentum term [9,10] and by using adaptive learning rates [11]. Other variations of SpikeProp have also been proposed which are capable of learning with multiple spikes [12][13][14].…”
Section: Introduction
confidence: 99%
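The momentum and adaptive-learning-rate extensions mentioned in the excerpt above amount to changes in the weight-update rule. The sketch below is a generic illustration only, not the exact schemes from the cited papers: it assumes the SpikeProp gradient of the error with respect to each weight has already been computed from the output spike times, and all function names, parameters, and values are illustrative.

```python
import numpy as np

# Minimal sketch of two SpikeProp speed-ups: a momentum term and an
# RProp-style per-weight adaptive step size. The gradient is assumed to be
# supplied by the usual SpikeProp backpropagation through spike times.

def momentum_step(w, grad, prev_dw, eta=0.01, alpha=0.9):
    """One update with momentum: dw(t) = -eta * grad + alpha * dw(t-1)."""
    dw = -eta * grad + alpha * prev_dw
    return w + dw, dw

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=1.0):
    """One RProp-style update: grow each weight's step size while the
    gradient keeps its sign, shrink it when the sign flips."""
    same_sign = grad * prev_grad
    step = np.where(same_sign > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(same_sign < 0, np.maximum(step * eta_minus, step_min), step)
    w = w - np.sign(grad) * step
    return w, step

# Example: one update of each kind on a small weight vector.
w = np.array([0.5, -0.3, 0.8])
grad = np.array([0.1, -0.2, 0.05])
w_m, dw = momentum_step(w, grad, prev_dw=np.zeros_like(w))
w_r, step = rprop_step(w, grad, prev_grad=np.zeros_like(w),
                       step=np.full_like(w, 0.1))
```

Both rules leave the underlying SpikeProp gradient computation untouched; they only reshape how each gradient is turned into a weight change, which is why they are typically described as convergence improvements rather than new learning algorithms.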