2007
DOI: 10.1016/j.acra.2007.04.015

Reliable Evaluation of Performance Level for Computer-Aided Diagnostic Scheme

Abstract: Computer-aided diagnostic (CAD) schemes have been developed for assisting radiologists in the detection of various lesions in medical images. The reliable evaluation of CAD schemes is an important task in the field of CAD research. In the past, many evaluation approaches, such as the resubstitution, leave-one-out, cross-validation, hold-out, and bootstrap methods, have been proposed for evaluating the performance of various CAD schemes. However, some important issues in the evaluation of CAD schemes have not been …
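As a rough illustration of the evaluation approaches the abstract lists, the sketch below (not taken from the paper; the classifier, synthetic dataset, and split sizes are illustrative assumptions) computes resubstitution, leave-one-out, k-fold cross-validation, hold-out, and out-of-bag bootstrap estimates for the same classification task with scikit-learn.

```python
# Illustrative comparison of the evaluation methods named in the abstract.
# The classifier, dataset, and sizes are assumptions for demonstration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score, train_test_split

X, y = make_classification(n_samples=100, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=1000)

# Resubstitution: train and test on the same cases (optimistically biased).
resub = clf.fit(X, y).score(X, y)

# Leave-one-out: each case is held out once while the remaining cases train the model.
loo = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

# k-fold cross-validation (here k = 5).
kfold = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0)).mean()

# Hold-out: a single random split into training and test sets.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
holdout = clf.fit(X_tr, y_tr).score(X_te, y_te)

# Bootstrap: train on cases resampled with replacement, test on the out-of-bag cases.
rng = np.random.RandomState(0)
boot_scores = []
for _ in range(100):
    idx = rng.randint(0, len(X), len(X))
    oob = np.setdiff1d(np.arange(len(X)), idx)
    if len(oob):
        boot_scores.append(clf.fit(X[idx], y[idx]).score(X[oob], y[oob]))

print(f"resubstitution {resub:.3f}, LOO {loo:.3f}, 5-fold {kfold:.3f}, "
      f"hold-out {holdout:.3f}, bootstrap {np.mean(boot_scores):.3f}")
```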

Cited by 5 publications (6 citation statements) | References 23 publications
“…We evaluated the general classification performance of the best-fit algorithm using the leave-one-out (LOO) method (34). We repeated the procedures described above, including extraction of candidate attributes and formulation of four separate algorithms to be merged into a single best-fit diagnostic algorithm, without the inclusion of one specific individual (LOO algorithm).…”
Section: Methods (mentioning)
confidence: 99%
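The design the quote describes repeats the whole pipeline, including candidate-attribute extraction, inside every leave-one-out fold, so the withheld individual never influences attribute selection. Below is a minimal sketch of that structure, assuming a generic univariate feature selector and logistic-regression classifier rather than the cited study's actual algorithms.

```python
# Leave-one-out evaluation in which attribute selection is redone inside every
# fold, mirroring the design described in the quote. The feature selector and
# classifier are assumed stand-ins, not the cited study's methods.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=60, n_features=30, n_informative=5, random_state=1)

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    # Attribute selection and model fitting see only the n-1 training subjects.
    model = make_pipeline(SelectKBest(f_classif, k=5), LogisticRegression(max_iter=1000))
    model.fit(X[train_idx], y[train_idx])
    correct += int(model.predict(X[test_idx])[0] == y[test_idx][0])

print("LOO accuracy:", correct / len(X))
```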
“…When we discuss performance evaluation, the accuracy of the evaluation is most important; thus, researchers have been seeking an unbiased estimator. [48][49][50][51][52][53][54] As stated earlier, the performance estimate by a leave-one-out cross-validation test provides a pessimistically biased estimate [48][49][50] with good generalization. 48,51 Therefore, we expect that the performance estimates reported in this paper would be comparable to (or potentially better than) the performance obtained when applied to a larger data set.…”
Section: Discussion (mentioning)
confidence: 99%
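The bias point made in this quote can be illustrated with a small simulation: the leave-one-out estimate on a limited dataset is compared with the accuracy the same training procedure reaches on a large independent sample. The dataset, classifier, and sample sizes below are illustrative assumptions, not values from the cited work.

```python
# Small simulation of the quoted claim that a leave-one-out estimate tends to be
# a (mildly) pessimistic stand-in for performance on a larger independent set.
# All data and model choices here are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

X_all, y_all = make_classification(n_samples=5000, n_features=10, random_state=2)
X_small, y_small = X_all[:80], y_all[:80]   # the limited study dataset
X_big, y_big = X_all[80:], y_all[80:]       # stand-in for a larger independent data set

clf = LogisticRegression(max_iter=1000)
loo_estimate = cross_val_score(clf, X_small, y_small, cv=LeaveOneOut()).mean()
independent = clf.fit(X_small, y_small).score(X_big, y_big)

print(f"LOO estimate on 80 cases:        {loo_estimate:.3f}")
print(f"Same model on independent cases: {independent:.3f}")
```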
“…To maximize the training dataset size and minimize the potential testing bias, we used a 3-fold cross-validation method in this study. We recognized the advantages and limitations of this cross-validation method in evaluating CAD performance [28]. Therefore, as the database size increases, the reproducibility and generalization of the results have to be validated in future studies.…”
Section: Discussion (mentioning)
confidence: 99%
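A minimal 3-fold cross-validation sketch follows, assuming a generic classifier and synthetic data rather than the cited study's CAD scheme. Each case is tested exactly once, and two thirds of the data train the model in every fold, which is the training-size versus testing-bias trade-off the quote refers to.

```python
# 3-fold cross-validation with a generic classifier on synthetic data; the CAD
# features and classifier of the cited study are replaced by assumed stand-ins.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=300, n_features=15, random_state=3)
clf = LogisticRegression(max_iter=1000)

cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=3)
scores = cross_val_score(clf, X, y, cv=cv)
print("Per-fold accuracy:", scores, "mean:", scores.mean())
```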