2013 IEEE 18th Conference on Emerging Technologies & Factory Automation (ETFA)
DOI: 10.1109/etfa.2013.6647975

Genetically optimized extreme learning machine

Abstract: This paper proposes a learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) called the genetically optimized extreme learning machine (GO-ELM). In the GO-ELM, the structure and the parameters of the SLFN are optimized by a genetic algorithm (GA). The output weights, as in the batch ELM, are obtained by a least-squares algorithm, but using Tikhonov regularization in order to improve the SLFN performance in the presence of noisy data. The GA is used to tune the set of input variables, the …
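The regularized least-squares step described in the abstract amounts to ridge regression on the hidden-layer output matrix: the hidden-node parameters are drawn at random and only the output weights are solved for. Below is a minimal NumPy sketch of that step; the sigmoid activation, the uniform initialization range, and all names (elm_fit, elm_predict, lam) are illustrative assumptions rather than details from the paper, and the GA-driven structure tuning is omitted.

    import numpy as np

    def elm_fit(X, T, n_hidden, lam=1e-3, seed=None):
        # Batch ELM with Tikhonov (ridge) regularization -- a sketch,
        # not the paper's implementation. GO-ELM would additionally let
        # a GA pick the input variables and the number of hidden neurons.
        rng = np.random.default_rng(seed)
        n_features = X.shape[1]
        # Hidden-node parameters are created randomly, as in the batch ELM.
        W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
        b = rng.uniform(-1.0, 1.0, size=n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden-layer output matrix
        # Tikhonov-regularized least squares:
        # beta = (H^T H + lam * I)^(-1) H^T T
        beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
        return W, b, beta

    def elm_predict(X, W, b, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return H @ beta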

Cited by 8 publications (2 citation statements, 2017–2024)
References: 23 publications
“…In the early stages of neural network development, evolutionary algorithms were commonly employed to find optimal architectures and weights [44]. Matias et al. [45] utilized the genetically optimized extreme learning machine (GO-ELM) to optimize the network architecture. Some researchers [46,47] have employed an adaptive strategy that progressively expands the network structure layer by layer from a small network, guided by specific principles.…”
Section: Optimized Network Structure Design (mentioning)
confidence: 99%
“…In the ELM, several hidden-node parameters, such as the input weights, biases, and impact factors, are created randomly. In Figure 6, x_j represents the input parameter, and L is the number of parameter vectors in the extreme learning machine feature space acquired by parameter mapping [63] and linear variable solving [64]. ELM has three properties [65] compared with a conventional ANN: the linking weights and thresholds are artificially set, which can be adjusted after setting.…”
Section: Machine Learning Technique (mentioning)
confidence: 99%
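The statement above describes the same two-stage recipe sketched earlier: random hidden-node parameters followed by a linear solve for the output weights. A short usage example of that sketch on synthetic data (an assumed setup, for illustration only):

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                            # 200 samples, 5 inputs
    T = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(200, 1))   # noisy target

    W, b, beta = elm_fit(X, T, n_hidden=50, lam=1e-2, seed=0)
    mse = np.mean((elm_predict(X, W, b, beta) - T) ** 2)
    print(f"training MSE: {mse:.4f}")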