2012
DOI: 10.1155/2012/713581
A Radial Basis Function Spike Model for Indirect Learning via Integrate-and-Fire Sampling and Reconstruction Techniques

Abstract: This paper presents a deterministic and adaptive spike model derived from radial basis functions and a leaky integrate-and-fire sampler developed for training spiking neural networks without direct weight manipulation. Several algorithms have been proposed for training spiking neural networks through biologically-plausible learning mechanisms, such as spike-timing-dependent synaptic plasticity and Hebbian plasticity. These algorithms typically rely on the ability to update the synaptic strengths, or weights, d…
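The abstract's central mechanism, a leaky integrate-and-fire (LIF) sampler, encodes an analog signal into spike times. A minimal sketch of a standard LIF sampler follows; the paper's exact sampler dynamics and parameters are not given in the truncated abstract, so the time constant, threshold, and reset value below are illustrative assumptions only.

```python
import numpy as np

def lif_sample(signal, dt=1e-3, tau=0.02, threshold=1.0, v_reset=0.0):
    """Encode an analog signal into spike times with a leaky
    integrate-and-fire sampler: the membrane potential leakily
    integrates the input, and a spike is emitted (and the potential
    reset) each time the threshold is crossed.

    All parameter values here are illustrative, not the paper's.
    """
    v = v_reset
    spike_times = []
    for i, x in enumerate(signal):
        # Euler step of the leaky integrator: dv/dt = (-v + x) / tau
        v += dt * (-v + x) / tau
        if v >= threshold:
            spike_times.append(i * dt)
            v = v_reset
    return spike_times

# A constant supra-threshold input yields a regular spike train whose
# rate depends on the input amplitude, which is what lets the spike
# times carry information about the sampled signal.
spikes = lif_sample(np.full(1000, 2.0))
```

Reconstruction techniques of the kind the title refers to then recover an approximation of the original signal from these spike times.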


Cited by 9 publications (3 citation statements) · References 20 publications
“…This was then used to train a simulated flying insect robot to follow a flight trajectory in Clawson et al (2016). Similar ideas were presented by Zhang et al (2012, 2013), Hu et al (2014), and Mazumder et al (2016) who trained a simple, virtual insect in a target reaching and obstacle avoidance task. However, this method is not suited for training an SNN on multi-dimensional inputs since the reward is dependent on the sign of the difference between the desired and actual SNN output.…”
Section: Related Work (supporting)
confidence: 54%
“…By minimizing the error between control output and optimal control law offline, it was able to learn adaptive control of an aircraft. Similar ideas were presented by Zhang et al (2012), Zhang et al (2013), Hu et al (2014), and Mazumder et al (2016) who trained a simple, virtual insect in a target reaching and obstacle avoidance task.…”
Section: Learning and Robotics Applications (mentioning)
confidence: 52%
“…[29]–[33], enhanced vision feedback technologies [34]–[38], network technologies [39]–[42], big data [43]–[46] and some optimization approaches [47]–[51]…”
(mentioning)
confidence: 99%