33rd Applied Imagery Pattern Recognition Workshop (AIPR'04)
DOI: 10.1109/aipr.2004.18
Comparison of Non-Parametric Methods for Assessing Classifier Performance in Terms of ROC Parameters

Cited by 26 publications (20 citation statements)
References 6 publications
“…To manually select an α value, we trained ANNs with α values ranging between 0.01 and 4.0, repeated the experiment independently 50 times, and chose the α value that maximized the .632+ bootstrapping AUC value (calculated on the training dataset only) averaged across the replicated experiments. 29,30 This α value was then used as a fixed value in all ANN training experiments. We used the method of .632+ bootstrapping because we did not want to bias the results by involving the validation dataset during training.…”
Section: II.E. BANNs and Weight Decay (mentioning)
confidence: 99%
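
To make the .632+ bootstrap AUC estimate described in this excerpt concrete, here is a minimal Python sketch, assuming a scikit-learn-style classifier; the logistic-regression stand-in, the replicate count, and the helper name auc_632_plus are illustrative assumptions, not the cited paper's ANN setup.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def auc_632_plus(X, y, make_clf, n_boot=200, seed=0):
    """Estimate AUC on the training sample alone via the .632+ bootstrap."""
    rng = np.random.default_rng(seed)
    n = len(y)

    # Apparent (resubstitution) AUC: train and score on the full sample.
    clf = make_clf().fit(X, y)
    auc_app = roc_auc_score(y, clf.predict_proba(X)[:, 1])

    # Out-of-bag AUC: average over bootstrap replicates, scoring only the
    # cases left out of each replicate.
    oob_aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        out = np.setdiff1d(np.arange(n), idx)
        if len(np.unique(y[idx])) < 2 or len(np.unique(y[out])) < 2:
            continue  # AUC undefined for a single-class sample; skip this replicate
        clf = make_clf().fit(X[idx], y[idx])
        oob_aucs.append(roc_auc_score(y[out], clf.predict_proba(X[out])[:, 1]))
    auc_oob = max(np.mean(oob_aucs), 0.5)  # clip at the no-information AUC of 0.5

    # .632+ weighting: the more the apparent AUC overshoots the out-of-bag
    # AUC (relative overfitting rate R), the more weight the latter receives.
    R = (auc_app - auc_oob) / (auc_app - 0.5) if auc_app > 0.5 else 0.0
    w = 0.632 / (1.0 - 0.368 * R)
    return (1.0 - w) * auc_app + w * auc_oob

# Hypothetical use for the weight-decay search described above:
# best_alpha = max(alphas, key=lambda a: auc_632_plus(X, y, lambda: LogisticRegression(C=1.0/a)))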
“…However, withholding training cases for testing is not an efficient use of the data for small training datasets. We used the method of .632+ bootstrapping, 7,29,30 which allows all cases to be used for training. Figure 2(a) shows how the AUC values of ANNs varied with training iterations for three testing methods that could be used to determine when to stop training for the XOR experiment.…”
Section: II.F. Early Stopping (mentioning)
confidence: 99%
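
As a rough, runnable sketch of the early-stopping idea in this excerpt, the loop below monitors an out-of-bag bootstrap AUC (a simplification of the .632+ estimate) after each training pass and stops once it plateaus; the SGDClassifier, the synthetic data, and the patience value are assumptions standing in for the cited paper's ANN setup.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
n, n_boot, patience = len(y), 20, 5

# Fixed bootstrap resamples; one incrementally trained model per resample.
boots = [rng.integers(0, n, n) for _ in range(n_boot)]
oobs = [np.setdiff1d(np.arange(n), b) for b in boots]
models = [SGDClassifier(random_state=i) for i in range(n_boot)]

best_auc, best_epoch = -np.inf, 0
for epoch in range(200):
    aucs = []
    for m, b, o in zip(models, boots, oobs):
        m.partial_fit(X[b], y[b], classes=[0, 1])          # one more pass over this resample
        aucs.append(roc_auc_score(y[o], m.decision_function(X[o])))
    auc = float(np.mean(aucs))                              # out-of-bag bootstrap AUC at this epoch
    if auc > best_auc:
        best_auc, best_epoch = auc, epoch
    elif epoch - best_epoch >= patience:
        break                                               # AUC has plateaued: stop training
print(f"stopped after epoch {epoch}; best OOB AUC {best_auc:.3f} at epoch {best_epoch}")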
“…The (PF, PD) pairs generated by adjusting the algorithm's threshold form an ROC curve. ROC analysis is a more general way to measure a classifier's performance than numerical indices (Yousef et al. 2004). An ROC curve offers a visual summary of the tradeoff between the classifier's ability to correctly detect fault-prone modules (PD) and the number of incorrectly classified fault-free modules (PF).…”
Section: ROC Curve (mentioning)
confidence: 99%
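
A small sketch of the ROC construction described here: sweep the decision threshold over the classifier's scores and record the (PF, PD) pair at each setting. The function name roc_points and the score/label inputs are illustrative assumptions.

import numpy as np

def roc_points(scores, labels):
    """Sweep the decision threshold and return (PF, PD) pairs."""
    labels = np.asarray(labels).astype(bool)
    scores = np.asarray(scores, dtype=float)
    pf, pd = [0.0], [0.0]                             # (0, 0): flag nothing
    for t in np.unique(scores)[::-1]:                 # thresholds from strict to lenient
        pred_pos = scores >= t
        pd.append(np.mean(pred_pos[labels]))          # PD: fault-prone modules correctly flagged
        pf.append(np.mean(pred_pos[~labels]))         # PF: fault-free modules flagged by mistake
    return np.array(pf), np.array(pd)

# The area under the swept curve summarizes the trade-off in a single number:
# pf, pd = roc_points(scores, labels)
# auc = np.trapz(pd, pf)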
“…Hold-out, cross-validation, and bootstrapping methods were used with the available sample to predict the AUC and other performance measures, which were then compared to the performance of the LR classifier designed on the available sample and applied to the independent test set. Yousef et al. 10 investigated the effectiveness of different bootstrap techniques in a Monte Carlo simulation study. Neither of the last two studies systematically investigated the effect of feature space dimensionality, class separability, or the performance of LOO and Fukunaga-Hayes (F-H) resampling methods.…”
Section: Introduction (mentioning)
confidence: 99%