2018
DOI: 10.1007/s13042-017-0777-2
Per-sample prediction intervals for extreme learning machines

Abstract: Prediction intervals in supervised Machine Learning bound the region where the true outputs of new samples may fall. They are necessary for separating reliable predictions of a trained model from near-random guesses, for minimizing the rate of False Positives, and for other problem-specific tasks in applied Machine Learning. Many real problems have heteroscedastic stochastic outputs, which explains the need for input-dependent prediction intervals. This paper proposes to estimate the input-dependent prediction…
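The abstract is truncated above, and the method details are in the full text. As a rough, non-authoritative sketch of how input-dependent intervals can be built on an ELM-style model (random, fixed hidden layer plus least-squares output weights), the snippet below fits a second regression on log squared residuals to obtain a per-sample noise variance. The toy data, hidden-layer size, and log-variance model are assumptions made for illustration, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heteroscedastic data: noise standard deviation grows with |x|.
n = 2000
x = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(x[:, 0]) + rng.normal(scale=0.1 + 0.2 * np.abs(x[:, 0]))

# ELM-style random hidden layer: input weights stay fixed after random init.
L = 100
W = rng.normal(size=(x.shape[1], L))
b = rng.normal(size=L)

def hidden(X):
    return np.tanh(X @ W + b)

H = hidden(x)
beta = np.linalg.lstsq(H, y, rcond=None)[0]          # output weights
resid = y - H @ beta

# Second model (hypothetical choice): regress log squared residuals on the
# same hidden features to get an input-dependent noise variance estimate.
gamma = np.linalg.lstsq(H, np.log(resid**2 + 1e-12), rcond=None)[0]

def predict_with_interval(X_new, z=1.96):
    Hn = hidden(X_new)
    mean = Hn @ beta
    sigma = np.sqrt(np.exp(Hn @ gamma))              # per-sample noise std
    return mean, mean - z * sigma, mean + z * sigma

mean, lo, hi = predict_with_interval(x[:5])
```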

Cited by 6 publications (4 citation statements)
References 40 publications (48 reference statements)
“…The HC notation follows what can be found in [13], which provides useful insight on this kind of estimators. Note that for sufficiently large n, HC3 is close to H^α_m Ω_m H^{αT}_m, which corresponds to the estimate used in [9] to build prediction intervals for large amounts of data, assuming fixed input weights.…”
Section: Estimation Under Independence and Heteroskedastic Assumptions
confidence: 79%
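The quoted passage refers to the HC3 estimator; assuming it means the standard heteroskedasticity-consistent "sandwich" covariance with weights e_i²/(1 − h_ii)², a minimal NumPy sketch for a least-squares fit on fixed hidden-layer features H could look as follows. The function name and the per-sample mean-variance output are illustrative choices, not taken from [13] or [9].

```python
import numpy as np

def hc3_mean_variance(H, y):
    """HC3 heteroskedasticity-consistent variance of the fitted mean
    for a least-squares fit y ≈ H @ beta (fixed random features assumed)."""
    HtH_inv = np.linalg.pinv(H.T @ H)
    beta = HtH_inv @ H.T @ y
    resid = y - H @ beta
    # Leverages h_ii = diag(H (H'H)^{-1} H').
    lev = np.einsum('ij,jk,ik->i', H, HtH_inv, H)
    omega = resid**2 / (1.0 - lev)**2                # HC3 weights
    meat = (H * omega[:, None]).T @ H
    cov_beta = HtH_inv @ meat @ HtH_inv              # sandwich estimator
    # Per-sample variance of the estimated mean h(x_i)' beta.
    var_mean = np.einsum('ij,jk,ik->i', H, cov_beta, H)
    return beta, cov_beta, var_mean
```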
“…Practically, prediction variance estimation is straightforward by adding σ²_ε to the variance estimate in the homoskedastic case, while the noise variance could be estimated in the heteroskedastic case, e.g. with a second model [8,9]. Prediction intervals can also be constructed, assuming a convenient noise distribution.…”
Section: Discussion
confidence: 99%
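As a minimal sketch of the decomposition the quote describes, assuming Gaussian noise: the total predictive variance is the variance of the estimated mean plus a noise term, which is a single σ²_ε in the homoskedastic case or a per-sample estimate (e.g. from a second model) in the heteroskedastic case.

```python
import numpy as np

def prediction_variance(var_mean, noise_var):
    """Total predictive variance = variance of the estimated mean
    plus noise variance: a scalar sigma2_eps (homoskedastic) or a
    per-sample array from a second model (heteroskedastic)."""
    return var_mean + noise_var

def gaussian_interval(mean, pred_var, z=1.96):
    """Two-sided interval under an assumed Gaussian noise distribution."""
    half = z * np.sqrt(pred_var)
    return mean - half, mean + half
```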
“…Analysis of uncertainty is performed to understand [16,17] when the model predictions are underconfident and when they are overconfident. This analysis is performed by quantifying prediction intervals [18,19] and applying these predictions in decision making.…”
Section: Introduction
confidence: 99%
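One generic way to quantify the over- or underconfidence mentioned in this quote (a common diagnostic, not something specific to the cited works) is to compare the empirical coverage of the intervals on held-out data with the nominal level:

```python
import numpy as np

def empirical_coverage(y_true, lower, upper):
    """Fraction of held-out targets inside their intervals; compare with
    the nominal level (e.g. 0.95) to detect over- or underconfidence."""
    return float(np.mean((y_true >= lower) & (y_true <= upper)))
```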