Recursive least mean p-power Extreme Learning Machine (2017)
DOI: 10.1016/j.neunet.2017.04.001

Cited by 12 publications (2 citation statements)
References 51 publications
“…To handle this issue, the OS-ELM with forgetting mechanism (FOS-ELM) was proposed by Zhao et al [11], where the forgetting mechanism discards obsolete samples and enhances the accuracy of the predictive model. Zou et al [12] proposed the memory-degradation-based OS-ELM (MDOS-ELM), which adjusts the weights of old and new samples with a self-adaptive memory factor and discards invalid samples. Generally, FOS-ELM models built on samples that are noise-free or corrupted only by Gaussian noise can achieve satisfactory precision.…”
Section: Introduction (mentioning)
confidence: 99%
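The forgetting mechanism discussed above is, at its core, a recursive least-squares (RLS) update on the ELM output weights with an exponential forgetting factor. Below is a minimal sketch of such a FOS-ELM-style online update, assuming a sigmoid hidden layer; the class and parameter names (FOSELM, lam, partial_fit) are illustrative and not taken from the cited papers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class FOSELM:
    """Minimal sketch of an OS-ELM with a forgetting factor.

    Hypothetical implementation: hidden-layer weights are random and
    fixed (as in ELM); only the output weights beta are updated,
    recursively, with lam < 1 exponentially down-weighting old samples.
    """

    def __init__(self, n_inputs, n_hidden, n_outputs, lam=0.98, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_inputs, n_hidden))  # fixed input weights
        self.b = rng.standard_normal(n_hidden)              # fixed hidden biases
        self.beta = np.zeros((n_hidden, n_outputs))         # output weights
        self.P = 1e3 * np.eye(n_hidden)                     # inverse correlation matrix
        self.lam = lam                                      # forgetting factor

    def _hidden(self, X):
        return sigmoid(X @ self.W + self.b)

    def partial_fit(self, X, T):
        """RLS-style update on a new chunk (X, T); lam < 1 forgets old data."""
        H = self._hidden(X)
        n = len(X)
        # Gain: K = P H^T (lam I + H P H^T)^{-1}
        S = self.lam * np.eye(n) + H @ self.P @ H.T
        K = self.P @ H.T @ np.linalg.inv(S)
        self.beta += K @ (T - H @ self.beta)
        self.P = (self.P - K @ H @ self.P) / self.lam

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

With lam = 1 this reduces to plain OS-ELM; values slightly below 1 exponentially discount older chunks, which is the "forgetting" that lets the model discard obsolete samples and track drifting data.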
“…From the perspective of the optimization method, stochastic gradient descent (SGD)-based algorithms cannot reach the minimum by following the negative gradient for some loss functions [20][21][22]. Recursion-based algorithms [23] address this issue, at the cost of higher computational complexity. Compared with the SGD and recursive methods, the conjugate gradient (CG) method [24][25][26] and Newton's method, as developments of SGD, have become alternative optimization methods in KAFs.…”
Section: Introduction (mentioning)
confidence: 99%
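The least mean p-power criterion of the surveyed paper is a concrete case of the SGD limitation noted above: the gradient of |e|**p scales as |e|**(p-1), so for p close to 1 the step magnitude barely shrinks near the optimum and fixed-rate SGD tends to oscillate rather than converge. A minimal sketch of one SGD step under this loss, assuming a scalar target and a precomputed hidden-layer output vector h (hypothetical helper, for illustration only):

```python
import numpy as np

def lmp_sgd_step(beta, h, t, p=1.5, lr=0.01):
    """One SGD step on the p-power error |t - h @ beta|**p.

    Illustrative sketch: p = 2 recovers ordinary least-mean-squares;
    p < 2 shrinks the influence of large errors, which helps under
    impulsive (heavy-tailed) noise.  The gradient of |e|**p w.r.t.
    beta is -p * |e|**(p-1) * sign(e) * h.
    """
    e = t - h @ beta                                   # scalar prediction error
    grad = -p * np.abs(e) ** (p - 1) * np.sign(e) * h  # loss gradient
    return beta - lr * grad                            # descend the gradient
```

At p = 1 the update magnitude no longer scales with the error at all, which is exactly the situation where a recursive solution (as in the cited paper) or a CG/Newton-type solver becomes preferable to raw negative-gradient descent.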