2010
DOI: 10.1016/j.neucom.2010.07.012

Two-stage extreme learning machine for regression


Citation Types: 0 supporting, 37 mentioning, 0 contrasting

Year Published: 2013–2024


Cited by 92 publications (37 citation statements)
References 25 publications

“…In an alternative extreme case, as seen here, the residual reduction is lower bounded by C log n if all basis functions are orthogonal. It is natural that |⟨ξ_k, α_k⟩| is an increasing function of n with high probability when the number of linearly independent basis functions increases with n. Indeed, in a greedy version of the extreme learning machine, [16] first chooses candidate basis functions by FPE in the forward-selection stage while using PRESS for the backward elimination. The need for the backward elimination may constitute empirical evidence for the above discussion.…”
Section: Greedy Algorithm for a Special Case (mentioning)
confidence: 99%
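The statement above describes the two-stage construction of [16]: greedy forward selection of hidden-node basis functions by FPE, followed by backward elimination via PRESS (the exact leave-one-out error). The following is a minimal NumPy sketch of that kind of scheme. The function names, the `max_nodes` cap, and the specific stopping and elimination rules are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def fpe(H, y):
    """Akaike's final prediction error for the least-squares fit y ~ H b."""
    n, k = H.shape
    b, *_ = np.linalg.lstsq(H, y, rcond=None)
    rss = float(np.sum((y - H @ b) ** 2))
    return (rss / n) * (n + k) / (n - k)

def press(H, y):
    """Leave-one-out PRESS via the hat matrix (assumes full column rank)."""
    hat = H @ np.linalg.pinv(H.T @ H) @ H.T
    e = y - hat @ y
    return float(np.sum((e / (1.0 - np.diag(hat))) ** 2))

def two_stage_select(cands, y, max_nodes=20):
    """cands: list of length-n arrays, each a candidate hidden-node output."""
    chosen, remaining, best = [], list(range(len(cands))), np.inf
    # Stage 1: forward selection -- greedily add the candidate column
    # that minimizes FPE; stop when FPE no longer improves.
    while remaining and len(chosen) < max_nodes:
        score, j = min(
            (fpe(np.column_stack([cands[i] for i in chosen + [r]]), y), r)
            for r in remaining)
        if score >= best:
            break
        best, chosen = score, chosen + [j]
        remaining.remove(j)
    # Stage 2: backward elimination -- drop a node whenever removing it
    # does not increase the PRESS (leave-one-out) error.
    improved = True
    while improved and len(chosen) > 1:
        improved = False
        base = press(np.column_stack([cands[i] for i in chosen]), y)
        for j in list(chosen):
            trial = [i for i in chosen if i != j]
            if press(np.column_stack([cands[i] for i in trial]), y) <= base:
                chosen, improved = trial, True
                break
    return chosen
```

Note that PRESS is computed here in closed form from the hat matrix of the current least-squares fit, which is what makes leave-one-out validation affordable inside the elimination loop.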
“…On the other hand, the sparseness of TCR is notable compared to LOOCV, OSER, FPE and LASSO, as found in Table 3. Since [16] employs LOOCV for the backward elimination, we can say that TCR provides a sparser representation than [16]. In Table 3, we can also see that LASSO is not stable in terms of the degree of sparseness, while it shows relatively stable generalization performance; e.g.…”
Section: Some Benchmark Examples (mentioning)
confidence: 99%
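The LOOCV criterion used for the backward elimination in [16] is available in closed form for a linear-in-parameters model such as the ELM output layer. This is the standard PRESS identity, stated here for clarity rather than taken from either paper:

```latex
% PRESS: exact leave-one-out residual sum of squares for y \approx H\beta,
% with hat matrix P = H (H^\top H)^{-1} H^\top and residuals e = y - P y.
\mathrm{PRESS} = \sum_{i=1}^{n} \left( \frac{e_i}{1 - P_{ii}} \right)^{2}
```

Because the residuals e and the hat-matrix diagonal P_ii come from a single least-squares fit, the exact leave-one-out error requires no additional model refits per candidate removal.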