1998
DOI: 10.1109/5326.704579

Bias and variance of validation methods for function approximation neural networks under conditions of sparse data


Cited by 105 publications (51 citation statements)
References 25 publications
“…The bootstrap error estimate consists of the training error for the application model plus the difference, averaged over all bootstrap models, between the error for the full data sample and the error for the bootstrap sample. Bootstrap appears to produce reliable model validation without an extensive computational effort, and is especially useful if the amount of data available for model construction and validation is limited [11] and when, as in the present study, some of the target cases are under-represented in the number of available examples.…”
Section: Model Validation (mentioning)
confidence: 93%
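As a rough, self-contained illustration of the estimator described in the quoted passage, the sketch below computes such a bootstrap error estimate for a generic regression model: the training error of the model fit on the full sample, plus the optimism averaged over bootstrap models. All names here (bootstrap_error_estimate, ols_fit, squared_error, n_bootstrap=200) are illustrative choices, not taken from the cited papers; a neural network would take the place of the least-squares fit.

```python
import numpy as np

def bootstrap_error_estimate(X, y, fit_model, mse, n_bootstrap=200, seed=0):
    """Training error of the model fit on the full sample, plus the mean
    difference (over bootstrap models) between the error on the full data
    sample and the error on the bootstrap sample."""
    rng = np.random.default_rng(seed)
    n = len(y)

    full_model = fit_model(X, y)             # the "application" model
    train_error = mse(y, full_model(X))      # its training error

    optimism = 0.0
    for _ in range(n_bootstrap):
        idx = rng.integers(0, n, size=n)     # resample with replacement
        boot_model = fit_model(X[idx], y[idx])
        err_full = mse(y, boot_model(X))            # error on the full data sample
        err_boot = mse(y[idx], boot_model(X[idx]))  # error on the bootstrap sample
        optimism += err_full - err_boot

    return train_error + optimism / n_bootstrap


# Illustrative use with an ordinary least-squares "model".
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=30)

def ols_fit(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda X_new: X_new @ coef

def squared_error(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

print(bootstrap_error_estimate(X, y, ols_fit, squared_error))
```

Because each bootstrap model is refit on a resample of the same limited data, the estimate reuses every available case for both fitting and validation, which is why the quoted study favours it when data are sparse.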
“…To add further confidence in the validity of the results from this analysis, a re-sampling procedure is undertaken and the models recalculated (see for example [12]). Due to page limitation this validation exercise is undertaken on the incomplete data set only.…”
Section: Validation Analysis of Ncarbs Results (Using Re-sampling) (mentioning)
confidence: 99%
“…The output of ANN models was represented in terms of bootstrap resamples and corresponding optimized weights as f_NN(x_i, w_s), where x_i was the input data pattern and w_s was the optimized weights of the ANN model for a particular bootstrap resample s. The performance of both the models was then evaluated using a set A_s. Then the generalization error, denoted Ê_0, was estimated (e.g., for the ANN model) as [TWOMEY, SMITH 1998]: […]…”
Section: Bootstrap Technique (mentioning)
confidence: 99%
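The expression for Ê_0 is not reproduced in the excerpt above. A common form of this bootstrap generalization-error estimator, consistent with the notation in the quote and assuming A_s denotes the cases held out of bootstrap resample s, B the number of resamples, y_i the target for input pattern x_i, and a squared-error loss, would be:

$$
\hat{E}_0 \;=\; \frac{1}{B} \sum_{s=1}^{B} \frac{1}{\lvert A_s \rvert} \sum_{i \in A_s} \bigl( y_i - f_{NN}(x_i, w_s) \bigr)^2
$$

This is a sketch under those stated assumptions rather than the exact expression used in the citing study.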