Comparing Metaheuristic Algorithms on the Training Process of Spiking Neural Networks (2014)
DOI: 10.1007/978-3-319-05170-3_27

Cited by 5 publications (5 citation statements)
References 9 publications
“…Traditionally, when SNN models are trained to perform a behavior using biologically inspired learning mechanisms, the algorithms used are variations on either spike-timing dependent plasticity (STDP) (Tavanaei et al., 2019) or evolutionary strategies (EVOL) (Espinal et al., 2014). For learning behaviors from the reinforcement learning domain, STDP can be extended to use reward-modulated plasticity (Chadderdon et al., 2012; Neymotin et al., 2013; Hazan et al., 2018; Patel et al., 2019; Anwar et al., 2022a), an algorithm denoted spike-timing dependent reinforcement learning (STDP-RL).…”
Section: Introduction
confidence: 99%
“…Traditionally, when SNN models are trained to perform a behavior using biologically inspired learning mechanisms, algorithms used are variations on either Spike Timing Dependent Plasticity (STDP) (Tavanaei et al. 2019) or Evolutionary Strategies (Espinal et al. 2014). For learning behaviors from the reinforcement learning domain, STDP can be extended to use reward-modulated plasticity (Anwar et al. 2021; Patel et al. 2019; Hazan et al. 2018; Chadderdon et al. 2012; Neymotin et al. 2013), an algorithm denoted Spike-timing dependent reinforcement learning (STDP-RL).…”
Section: Introduction
confidence: 99%
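The STDP-RL mechanism quoted above combines a standard STDP update with a global reward signal that gates an eligibility trace. A minimal sketch of this idea in Python; the class name, parameter values, and trace formulation are purely illustrative, not taken from the cited implementations:

```python
import math

# Illustrative parameters (not from the cited papers)
A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU_TRACE = 20.0                # ms, decay constant of the spike traces
TAU_ELIG = 1000.0               # ms, decay constant of the eligibility trace
LR = 0.5                        # reward-modulated learning rate

class STDPRLSynapse:
    """One synapse with pair-based STDP feeding a reward-gated eligibility trace."""

    def __init__(self, w=0.5):
        self.w = w
        self.pre_trace = 0.0    # recent presynaptic activity
        self.post_trace = 0.0   # recent postsynaptic activity
        self.elig = 0.0         # STDP credit awaiting a reward signal

    def step(self, dt, pre_spike, post_spike, reward=0.0):
        # Exponential decay of all traces over the elapsed time dt (ms)
        self.pre_trace *= math.exp(-dt / TAU_TRACE)
        self.post_trace *= math.exp(-dt / TAU_TRACE)
        self.elig *= math.exp(-dt / TAU_ELIG)
        if pre_spike:
            # pre fired after post -> depression credit
            self.elig -= A_MINUS * self.post_trace
            self.pre_trace += 1.0
        if post_spike:
            # post fired after pre -> potentiation credit
            self.elig += A_PLUS * self.pre_trace
            self.post_trace += 1.0
        # Plain STDP would apply the credit directly; STDP-RL gates it
        # by the (possibly delayed) global reward.
        self.w += LR * reward * self.elig
        self.w = min(1.0, max(0.0, self.w))
```

With reward fixed at zero the weight never moves, which is the key difference from unmodulated STDP: causal pre-then-post pairings only strengthen the synapse once a reward arrives while the eligibility trace is still non-zero.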
“…In other works [43, 54–57], three-layered feedforward SNNs were implemented with synaptic connections, each formed by a weight and a delay, to solve supervised classification problems using time-to-first-spike as the classification criterion; in these works, training has been carried out by means of evolutionary strategy (ES) [58, 59] and PSO algorithms. An extension of these works is made in [60, 61], where the number of hidden layers and their computing units are defined by grammatical evolution (GE) [62] in addition to the metaheuristic learning. More complex SNN frameworks have been developed and trained with metaheuristics (such as ES) to perform tasks such as visual pattern recognition, audio-visual pattern recognition, taste recognition, ecological modelling, sign language recognition, object movement recognition, and EEG spatio/spectrotemporal pattern recognition (see [63] for a review of these frameworks).…”
Section: Introduction
confidence: 99%
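The evolutionary-strategy training this statement refers to treats the network's free parameters (e.g. concatenated weights and delays) as a flat vector optimised against a black-box fitness. A minimal (1+λ) ES sketch under that assumption; the function signature and parameter values are hypothetical, not from the cited works:

```python
import random

def evolve(fitness, dim, lam=20, sigma=0.1, generations=50, seed=0):
    """Minimal (1+lambda) evolutionary strategy over a flat parameter
    vector. `fitness` is a black-box objective to maximise, e.g. the
    classification accuracy of an SNN decoded from the vector."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(dim)]
    best = fitness(parent)
    for _ in range(generations):
        for _ in range(lam):
            # Mutate every gene with isotropic Gaussian noise
            child = [x + rng.gauss(0, sigma) for x in parent]
            f = fitness(child)
            if f >= best:          # elitist replacement: keep the best so far
                parent, best = child, f
    return parent, best
```

No gradient information is used, which is why this family of methods applies directly to spiking networks, whose spike-based loss surfaces are non-differentiable.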
“…More complex SNN frameworks have been developed and trained with metaheuristics (such as ES) to perform tasks such as visual pattern recognition, audio-visual pattern recognition, taste recognition, ecological modelling, sign language recognition, object movement recognition, and EEG spatio/spectrotemporal pattern recognition (see [63] for a review of these frameworks). Robotic locomotion is solved through SNNs designed by metaheuristics in [60, 64, 65]; in these works, both the connectivity pattern and the synaptic weights of each Beslon–Mazet–Soula (BMS) [66] neuron model within SNNs called spiking central pattern generators (SCPGs) are defined through GE or Christiansen grammar evolution (CGE) [67] algorithms; all individual designs are integrated to define the SCPGs that allow the locomotion of legged robots.…”
Section: Introduction
confidence: 99%
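Grammatical evolution, used in the works above to define network topology and SCPG designs, maps an integer genome to a phenotype by repeatedly choosing grammar productions with the classic codon-mod-rule-count rule. A toy sketch of that genotype-to-phenotype mapping; the grammar and names are illustrative, not taken from the cited papers:

```python
# Toy grammar: each nonterminal maps to a list of productions.
GRAMMAR = {
    "<expr>": [["<expr>", "+", "<expr>"], ["<var>"]],
    "<var>": [["x"], ["y"]],
}

def ge_map(genome, start="<expr>", max_wraps=2):
    """Map an integer genome to a phenotype string. Each time a
    nonterminal is expanded, the next codon modulo the number of
    available productions selects the rule; the genome is reread
    ("wrapped") up to max_wraps times before giving up."""
    symbols = [start]          # work list of symbols left to expand
    out = []                   # terminals emitted so far
    i = 0                      # codons consumed
    budget = len(genome) * (max_wraps + 1)
    while symbols and i < budget:
        sym = symbols.pop(0)
        if sym in GRAMMAR:
            rules = GRAMMAR[sym]
            codon = genome[i % len(genome)]   # wrap around the genome
            i += 1
            symbols = list(rules[codon % len(rules)]) + symbols
        else:
            out.append(sym)
    # None signals an incomplete mapping (nonterminals left over)
    return "".join(out) if not symbols else None
```

In the cited robotics works the grammar's terminals would describe neurons and connections of an SCPG rather than arithmetic tokens, but the mapping mechanism is the same.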