2009 International Joint Conference on Neural Networks
DOI: 10.1109/ijcnn.2009.5178751

Hebbian learning with winner take all for spiking neural networks

Abstract: Learning methods for spiking neural networks are not as well developed as those for traditional rate-based networks, which widely use the back-propagation learning algorithm. We propose and implement an efficient Hebbian learning method with homeostasis for a network of spiking neurons. As in STDP, the timing between spikes is used for synaptic modification. Homeostasis ensures that the synaptic weights are bounded and that learning is stable. A winner-take-all mechanism is also implemented to promote competitive …
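
The abstract names the ingredients (spike-timing-based Hebbian updates, homeostatic bounding of the weights, and winner-take-all competition) without spelling out the exact rule. The following is a minimal sketch of how those pieces can fit together, assuming an exponential timing window, multiplicative homeostatic rescaling of each output neuron's incoming weights, and a hard winner-take-all in which only the earliest-firing output neuron updates its weights; all names and constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative constants; the time window, learning rate, and homeostatic
# target are assumptions, not values reported in the paper.
TAU = 20.0       # ms, width of the exponential timing window
ETA = 0.01       # learning rate
W_TARGET = 1.0   # homeostatic target for each output neuron's total input weight

def hebbian_wta_update(weights, pre_spike_times, post_spike_times):
    """One learning step on a fully connected layer.

    weights: array of shape (n_out, n_in)
    pre_spike_times / post_spike_times: last spike time of each neuron in the
    current window, np.nan if the neuron did not fire.
    """
    fired = ~np.isnan(post_spike_times)
    if not fired.any():
        return weights

    # Winner-take-all: only the earliest-firing output neuron learns.
    winner = np.nanargmin(post_spike_times)
    dt = post_spike_times[winner] - pre_spike_times   # > 0 if pre fired before post

    # Timing-based Hebbian term: potentiate causal inputs, depress acausal ones.
    dw = np.where(np.isnan(dt), 0.0,
                  np.where(dt >= 0,
                           ETA * np.exp(-dt / TAU),
                           -ETA * np.exp(dt / TAU)))
    weights[winner] += dw

    # Homeostasis: keep the winner's weights non-negative and rescale them to a
    # fixed total, so the weights stay bounded and learning remains stable.
    weights[winner] = np.clip(weights[winner], 0.0, None)
    total = weights[winner].sum()
    if total > 0:
        weights[winner] *= W_TARGET / total
    return weights
```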

Cited by 46 publications (30 citation statements)
References 36 publications
“…This learning modifies the weights of the presynaptic neurons with the winning output [18]. This observation is in agreement with the fact that, in biological neural networks, different axonal connections will have different signal transmission delays [19].…”
Section: Introduction (supporting)
confidence: 80%
“…One would assume then that the mechanisms of subsymbolic generalization used by humans must somehow be different from the traditional generalizations in subsymbolic systems. However, perhaps the newer subsymbolic algorithms [8][9][10][34] will lead to more robust subsymbolic generalizations.…”
Section: Discussion (mentioning)
confidence: 99%
“…This would argue for a combination of approaches, with a subsumptive architecture being able to generalize over simple reactive tasks coupled with a symbolic system for generalization across more complex tasks. Some researchers have called for the combinations of Bayesian/probabilistic and connectionist approaches [34]. This would facilitate further understanding of the applicability of subsymbolic knowledge structures.…”
Section: Discussion (mentioning)
confidence: 99%
“…In the second phase, the same mechanism is applied for reinforcement of the excitatory response groups but based on the number of spikes within 20 ms. In addition, winner-take-all strategy is implemented in both phases through biased random excitatory signals to the winner of response groups for each phase (adapted from [13]). The training performance is computed based on the percentage of number of correct response over number of trials averaged by 10 simulations (i.e.…”
Section: Learning Protocols (mentioning)
confidence: 99%
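
The quoted protocol combines spike-count-based selection of a winning response group with a biased random excitatory drive to that group. A rough sketch of that biasing step is given below, assuming the winner is simply the group with the most spikes in the last 20 ms window; the group layout, current amplitudes, and random-number details are illustrative assumptions, not taken from the citing paper or from [13].

```python
import numpy as np

rng = np.random.default_rng(0)

def wta_bias_currents(spike_counts_20ms, n_groups, group_size,
                      base_amp=0.5, bonus_amp=1.5):
    """Excitatory input currents for the next interval (illustrative only).

    spike_counts_20ms: spikes fired by each response group in the last 20 ms.
    Every neuron gets a small random excitatory current; the winning group
    (most spikes) additionally receives a larger random bias.
    """
    winner = int(np.argmax(spike_counts_20ms))
    currents = base_amp * rng.random(n_groups * group_size)
    start = winner * group_size
    currents[start:start + group_size] += bonus_amp * rng.random(group_size)
    return currents

# Example: three response groups of 10 neurons each; group 1 fired most,
# so its neurons receive the extra excitatory bias.
I_ext = wta_bias_currents(np.array([3, 9, 5]), n_groups=3, group_size=10)
```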