2019
DOI: 10.1007/978-3-030-22796-8_41

Evolutionary Optimization of Liquid State Machines for Robust Learning

Cited by 5 publications (4 citation statements)
References 17 publications
“…Particularly, [99] proposed an evolutionary algorithm to optimize the number of neurons and the percentage connectivity of a single liquid. Meanwhile, [100] used a covariance matrix adaptation evolution strategy to optimize three parameters of one liquid, i.e., percentage connectivity, weight distribution and membrane time constant. However, [101] pointed out that the above algorithms "only perform parameter optimization in a single liquid and do not optimize the architectures of LSM," and proposed a Neural Architecture Search (NAS)-based framework to optimize both the architecture and the parameters of the LSM model.…”
Section: Recent Trends of LSM-based RC
confidence: 99%
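
The CMA-ES approach attributed to [100] above can be made concrete with a short sketch. The following is illustrative only: evaluate_lsm is a hypothetical placeholder for building a liquid with the given percentage connectivity, weight-distribution scale and membrane time constant, training its readout and returning a validation error; it is not the actual procedure of [100]. It uses the open-source cma package.

```python
# Minimal sketch of CMA-ES over three liquid hyperparameters (in the spirit of
# the approach attributed to [100]); requires the open-source `cma` package.
import numpy as np
import cma  # pip install cma


def evaluate_lsm(connectivity, weight_scale, tau_mem):
    # Hypothetical placeholder: a real run would build the liquid, train the
    # readout and return a validation error to be minimized.
    return (connectivity - 0.1) ** 2 + (weight_scale - 1.0) ** 2 + (tau_mem - 0.03) ** 2


def fitness(x):
    # Clip each gene into a plausible range before evaluation.
    connectivity = float(np.clip(x[0], 0.01, 0.5))  # percentage connectivity
    weight_scale = float(np.clip(x[1], 0.1, 5.0))   # weight-distribution scale
    tau_mem = float(np.clip(x[2], 0.005, 0.1))      # membrane time constant (s)
    return evaluate_lsm(connectivity, weight_scale, tau_mem)


es = cma.CMAEvolutionStrategy([0.1, 1.0, 0.03], 0.05, {'maxiter': 50})
while not es.stop():
    candidates = es.ask()                                  # sample a population
    es.tell(candidates, [fitness(c) for c in candidates])  # update the search distribution
print("best hyperparameters found:", es.result.xbest)
```
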
“…The parameters of the remaining recurrent neurons (the reservoir) are randomly initialized subject to some stability constraints, and kept fixed while the readout layer is trained [227]. Some works reported in the last couple of years deal with the optimization of Reservoir Computing models, such as the composition of the reservoir, the connectivity and the hierarchical structure of Echo State Networks via Genetic Algorithms [228], or the structural hyper-parameter optimization of Liquid State Machines [229,230] and Echo State Networks [231] using an adapted version of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) solver. The relatively recent advent of Deep versions of Reservoir Computing models [232] unfolds an interesting research playground over which to propose new bio-inspired solvers for topology and hyperparameter optimization.…”
Section: Optimization of New Deep Learning Architectures
confidence: 99%
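
The fixed-reservoir / trained-readout split described in this excerpt can be illustrated with a minimal echo-state-style sketch in NumPy. The toy task, network sizes and ridge readout are illustrative assumptions, not the setup of any of the cited works; the point is only that the recurrent weights are drawn at random, rescaled to meet a spectral-radius stability constraint and then frozen, while only the linear readout is fitted.

```python
# Minimal sketch: random fixed reservoir, trained linear readout (assumed toy setup).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200

# Random, fixed reservoir (echo-state-style stand-in for a liquid).
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1: stability constraint

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ u_t)      # reservoir update; weights stay fixed
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t)[:-1, None], np.sin(t)[1:, None]
X = run_reservoir(u)

# Only the readout is trained, here by ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", float(np.mean((X @ W_out - y) ** 2)))
```
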
“…Most existing evolutionary LSMs predominantly concentrate on parameter optimization, such as liquid density 34,35 and liquid size, 36 often resulting in inefficiency. An evolutionary framework with a three-step search is introduced, 35 including architectural parameters such as a multiple-liquid architecture, liquid density, excitatory neuron ratio, and so forth.…”
Section: Introduction
confidence: 99%
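
A hypothetical genome such as the one below illustrates what an architecture-level search over multiple liquids might encode (number of liquids, liquid size, liquid density, excitatory neuron ratio), as opposed to tuning the parameters of one fixed liquid. The field names, ranges and mutation rates are assumptions for illustration, not the encoding used in the cited three-step search.

```python
# Hypothetical genome for an architecture-level LSM search: the number of
# liquids and each liquid's size, density and excitatory ratio are evolved.
import random
from dataclasses import dataclass, field


@dataclass
class LiquidGene:
    n_neurons: int = 135           # liquid size
    density: float = 0.1           # connection probability inside the liquid
    excitatory_ratio: float = 0.8  # fraction of excitatory neurons


@dataclass
class LSMGenome:
    liquids: list = field(default_factory=lambda: [LiquidGene()])

    def mutate(self):
        # Structural mutation: occasionally add or drop a liquid.
        if random.random() < 0.1:
            self.liquids.append(LiquidGene())
        elif len(self.liquids) > 1 and random.random() < 0.1:
            self.liquids.pop(random.randrange(len(self.liquids)))
        # Parametric mutation: jitter each liquid's hyperparameters.
        for g in self.liquids:
            g.n_neurons = max(20, g.n_neurons + random.randint(-20, 20))
            g.density = min(0.5, max(0.01, g.density + random.gauss(0, 0.02)))
            g.excitatory_ratio = min(0.95, max(0.5, g.excitatory_ratio + random.gauss(0, 0.05)))
```
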
“… 37 Some studies respectively apply the covariance matrix adaptation evolution strategy (CMA-ES) and the differential evolution (DE) algorithm to optimize the topology and parameters of the reservoir. 34,38 Other NAS-based SNN models aim to maximize classification accuracy with limited computing resources. For instance, energy-efficient SNN architectures 39 are evolved for both classification accuracy and the number of spikes.…”
Section: Introduction
confidence: 99%
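
The accuracy-versus-spikes objective mentioned above for energy-efficient SNN architectures can be sketched as a scalarized fitness. The function below and its spike_budget and penalty_weight parameters are illustrative assumptions, not the objective used in the cited work, which evolves architectures for both criteria.

```python
# Sketch of an accuracy-versus-spikes trade-off as a single scalar fitness.
# `accuracy`, `spike_count` and `spike_budget` are hypothetical quantities that
# evaluating a candidate SNN would produce.
def fitness(accuracy: float, spike_count: int, spike_budget: int = 100_000,
            penalty_weight: float = 0.2) -> float:
    # Penalize only the spikes that exceed the budget, so accuracy dominates
    # as long as the network stays within its energy envelope.
    excess = max(0, spike_count - spike_budget) / spike_budget
    return accuracy - penalty_weight * excess


print(fitness(accuracy=0.91, spike_count=140_000))  # 0.91 - 0.2 * 0.4 = 0.83
```
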