2020
DOI: 10.1016/j.neucom.2020.04.079

Surrogate-Assisted Evolutionary Search of Spiking Neural Architectures in Liquid State Machines

Cited by 30 publications (17 citation statements). References 32 publications.
“…The parameters of the rest of the recurrent neurons (the reservoir) are randomly initialized subject to some stability constraints, and kept fixed while the readout layer is trained [227]. Some works have been reported in the last couple of years dealing with the optimization of Reservoir Computing models, such as the composition of the reservoir, connectivity and hierarchical structure of Echo State Networks via Genetic Algorithms [228], or the structural hyper-parameter optimization of Liquid State Machines [229,230] and Echo State Networks [231] using an adapted version of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) solver. The relatively recent advent of Deep versions of Reservoir Computing models [232] unfolds an interesting research playground over which to propose new bio-inspired solvers for topology and hyperparameter optimization.…”
Section: Optimization of New Deep Learning Architectures (mentioning)
confidence: 99%
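To make the fixed-reservoir/trained-readout split concrete, here is a minimal Echo State Network sketch in Python: the reservoir weights are drawn at random and rescaled to satisfy the usual spectral-radius stability constraint, then left untouched while only the linear readout is fit. The function names, the leaky-integrator update, and the ridge-regression readout are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def make_reservoir(n, spectral_radius=0.9, density=0.1, seed=0):
    """Random fixed reservoir, rescaled so its spectral radius
    satisfies the usual echo-state stability constraint (< 1)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, n)) * (rng.random((n, n)) < density)
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W

def run_esn(u, W, W_in, leak=0.3):
    """Drive the fixed reservoir with input sequence u; collect states."""
    x = np.zeros(W.shape[0])
    states = []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u_t)
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Only this linear readout is trained (ridge regression);
    the reservoir weights above stay fixed throughout."""
    S = states
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ targets)

# Toy usage: one-step-ahead prediction of a sine wave.
W = make_reservoir(100)
W_in = 0.5 * np.random.default_rng(1).standard_normal((100, 1))
u = np.sin(np.linspace(0, 8 * np.pi, 200))[:, None]
S = run_esn(u, W, W_in)
W_out = train_readout(S[:-1], u[1:])
```

The spectral-radius rescaling in `make_reservoir` is the "stability constraint" the passage refers to; everything the evolutionary or CMA-ES searches in the cited works would tune (density, spectral radius, leak rate) appears here as a fixed hyperparameter.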
“…To cope with state-of-the-art results in real-world applications, many efforts to enhance LSM performance have been based on the exploration of different LSM topologies [17], [18], new training algorithms, or cost-intensive parameter searches [19], [20], which increase accuracy at the cost of performance or resource overhead. The input format is another issue that has a significant impact on the performance of the LSM.…”
Section: Introduction (mentioning)
confidence: 99%
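As a concrete illustration of why the input format matters for a spiking reservoir, below is a minimal sketch of one common encoding scheme, Poisson rate coding, which converts real-valued features into spike trains. The scheme, function name, and parameters are illustrative assumptions, not the encoding used in the cited papers.

```python
import numpy as np

def poisson_encode(x, n_steps=100, max_rate=0.5, seed=0):
    """Rate-code a real-valued feature vector x (scaled to [0, 1])
    into a binary spike train of shape (n_steps, len(x)).
    Each channel fires independently per step with probability
    proportional to its feature value."""
    rng = np.random.default_rng(seed)
    rates = np.clip(x, 0.0, 1.0) * max_rate
    return (rng.random((n_steps, len(x))) < rates).astype(np.uint8)

spikes = poisson_encode(np.array([0.1, 0.8, 0.5]))
print(spikes.mean(axis=0))  # empirical firing rates per input channel
```

Alternative formats (temporal coding, population coding) trade off temporal resolution against spike count, which is one reason the choice of encoding can shift LSM accuracy as much as the reservoir topology itself.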
“…It is well recognized, however, that most MOEAs require a large number of function evaluations before a set of diverse and well-converged non-dominated solutions can be found, which makes it hard for them to be directly applied to a class of data-driven optimization problems [24] whose objectives can only be evaluated by conducting time-consuming computer simulations or costly physical experiments. Examples of real-world data-driven multi-objective optimization problems include blast furnace optimization [7], air intake ventilation system optimization [10], airfoil design optimization [29], bridge design optimization [34], design optimization of a stiffened cylindrical shell with variable ribs [57], optimization of the operation of crude oil distillation units [19], resource allocation in trauma systems [47], and also neural architecture search [42,58], among many others.…”
Section: Introduction (mentioning)
confidence: 99%
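The point about expensive function evaluations is exactly what motivates surrogate assistance: a cheap model of the fitness landscape pre-screens offspring so that only promising candidates receive the costly evaluation. The following toy sketch uses a k-nearest-neighbour surrogate and a sphere objective as stand-ins for a real simulator; both are illustrative assumptions, not the method of the cited paper.

```python
import numpy as np

def expensive_eval(x):
    # Stand-in for a costly simulation (e.g., training a full LSM).
    return float(np.sum(x ** 2))

def surrogate_predict(X_seen, y_seen, X_new, k=3):
    """k-nearest-neighbour surrogate: predict the fitness of new points
    as the mean fitness of their k closest already-evaluated points."""
    d = np.linalg.norm(X_new[:, None, :] - X_seen[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return y_seen[idx].mean(axis=1)

rng = np.random.default_rng(0)
dim, pop, lam, screen = 5, 10, 40, 10
X = rng.standard_normal((pop, dim))
y = np.array([expensive_eval(x) for x in X])

for gen in range(20):
    # Generate many offspring, but pre-screen with the surrogate so
    # only the most promising ones get an expensive evaluation.
    parents = X[np.argsort(y)[:pop]]
    offspring = parents[rng.integers(pop, size=lam)] \
        + 0.3 * rng.standard_normal((lam, dim))
    pred = surrogate_predict(X, y, offspring)
    chosen = offspring[np.argsort(pred)[:screen]]
    X = np.vstack([X, chosen])
    y = np.concatenate([y, [expensive_eval(x) for x in chosen]])

print("best fitness:", y.min())
```

Here each generation proposes 40 offspring but pays for only 10 true evaluations, a 4x reduction in expensive calls; the surveyed surrogate-assisted approaches apply the same pre-screening idea with stronger surrogates such as Gaussian processes or neural networks.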