2008
DOI: 10.1016/j.neucom.2007.11.014
Reinforcement learning of recurrent neural network for temporal coding

Abstract: We study reinforcement learning for temporal coding with a neural network consisting of stochastic spiking neurons. In neural networks, information can be coded by characteristics of the timing of each neuronal firing, including the order of firing or the relative phase differences of firing. We derive the learning rule for this network and show that a network consisting of Hodgkin-Huxley neurons with dynamical synaptic kinetics can learn the appropriate timing of each neuronal firing. We also investigat…

Cited by 3 publications (2 citation statements)
References 29 publications
“…Transfer time would be directly proportional to the size of the oscillation vector or the number of pulses in the accumulator and the speed of transfer (i.e., baud rate). Pulse accumulation essentially functions as an up counter and memory transfer functions as a down counter, with the additional assumption that neural networks can function as look-up or conversion tables (Dali & Zemin 1993, Kimura & Hayakawa 2008).…”
Section: Memory Translation Constant
confidence: 99%
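The counter analogy in the excerpt above can be sketched in a few lines. This is an illustrative toy model, not code from either cited paper; all names and the specific rate value are assumptions. It shows why transfer time is proportional to the accumulated pulse count and inversely proportional to the transfer (baud) rate.

```python
def accumulate(pulses):
    """Up counter: one increment per incoming pulse."""
    count = 0
    for _ in range(pulses):
        count += 1
    return count

def transfer(count, baud_rate):
    """Down counter: decrement once per transferred unit.
    Returns (steps_taken, transfer_time)."""
    steps = 0
    while count > 0:
        count -= 1
        steps += 1
    return steps, steps / baud_rate

n = accumulate(40)                       # accumulator holds 40 pulses
steps, t = transfer(n, baud_rate=8.0)    # read the count back down
print(n, steps, t)                       # 40 steps take 5.0 time units
```

Doubling the pulse count doubles the transfer time; doubling the baud rate halves it, which is the linear relationship the excerpt describes.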
“…Eligibility denotes synapses that have contributed to either a correct or false output spike. These eligible synapses can be determined either analytically as in [15, 17, 40] or phenomenologically as in [16, 22, 25]. In order to keep complexity at a minimum, the latter approach is the one adopted in the presented study.…”
Section: Reinforcement Learning Framework
confidence: 99%
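A minimal sketch of the phenomenological approach the excerpt refers to: synapses active just before an output spike accumulate an eligibility trace that decays over time, and a global reward signal later converts accumulated eligibility into weight change. This is a generic reward-modulated-learning toy, not the cited study's actual rule; all constants and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_syn = 5
w = rng.uniform(0.2, 0.8, n_syn)   # synaptic weights
e = np.zeros(n_syn)                # eligibility traces, one per synapse
tau_e = 20.0                       # trace decay constant (ms, assumed)
eta = 0.05                         # learning rate (assumed)
dt = 1.0                           # time step (ms)

for t in range(100):
    e *= np.exp(-dt / tau_e)            # traces decay exponentially
    pre_spikes = rng.random(n_syn) < 0.1  # random presynaptic activity
    post_spike = rng.random() < 0.2       # random postsynaptic spike
    if post_spike:
        # synapses active at the output spike are marked eligible
        e[pre_spikes] += 1.0
    reward = 1.0 if t % 25 == 0 else 0.0  # sparse global reward signal
    w += eta * reward * e                 # reward gates eligibility

print(np.round(w, 3))
```

The trace lets a delayed reward credit synapses that fired earlier, which is why this route avoids the per-synapse analytical gradient computation the excerpt contrasts it with.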