2003
DOI: 10.1016/s1076-6332(03)00538-5

Statistical validation based on parametric receiver operating characteristic analysis of continuous classification data

Abstract: Rationale and Objectives: The accuracy of diagnostic tests and imaging segmentation is important in clinical practice because it has a direct impact on therapeutic planning. Statistical validation of classification accuracy was conducted based on parametric receiver operating characteristic analysis, illustrated with three radiologic examples. Materials and Methods: Two parametric models were developed for diagnostic or imaging data. Example 1: A semiautomated fractional segmentation algorithm was applied to magnet…

Cited by 32 publications (23 citation statements)
References 34 publications
“…The area under the receiver operating characteristic (ROC) curve (AUC) was developed for the model population by plotting sensitivity against 1 minus specificity over a range of total anemia risk scores and the AUC was calculated; the criteria used for acceptability of logistic regression models are: 0.90–1 = outstanding; 0.80–0.90 = excellent; 0.70–0.80 = acceptable; 0.60–0.70 = poor, and 0.50–0.60 = failure. Thus, AUC ≥0.7 is considered a valid robust model [19]. The same analytic methods were applied to the validation populations to validate the final risk model.…”
Section: Methods (mentioning; confidence: 99%)
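The quoted methods describe the standard ROC/AUC workflow: plot sensitivity against 1 minus specificity over a range of risk scores, compute the area under the curve, and grade the model on the quoted acceptability scale. A minimal self-contained sketch of that workflow, using illustrative scores and labels (not data from the cited study):

```python
def roc_curve(scores, labels):
    """Return (1 - specificity, sensitivity) points for every threshold.

    Assumes binary labels (1 = diseased/positive, 0 = healthy/negative)
    and distinct continuous scores."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))  # (FPR, TPR)
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2
    return area

def grade(a):
    """Acceptability scale quoted in the citing study [19]."""
    if a >= 0.90: return "outstanding"
    if a >= 0.80: return "excellent"
    if a >= 0.70: return "acceptable"
    if a >= 0.60: return "poor"
    return "failure"

# Illustrative risk scores and disease labels:
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
points = roc_curve(scores, labels)
a = auc(points)
print(round(a, 3), grade(a))  # prints: 0.938 outstanding
```

Per the quoted criterion, any model with AUC ≥ 0.7 ("acceptable" or better) would be considered valid and robust.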
“…Computer algorithms in medical imaging studies often yield continuous measurements [23]. These algorithms can also be referred to as continuous markers.…”
Section: Discussion (mentioning; confidence: 99%)
“…Later, Metz et al [12] extended the methods to account for continuous ratings which are yielded by applying computer algorithms on medical images. Additional examples of continuous ratings in radiology can be found in magnetic resonance imaging of brain tumors and spiral computed tomography of ureteral stone sizes [23, 24]. …”
Section: Methods (mentioning; confidence: 99%)
“…The target outputs of the neural network were determined by setting one neuron of the output layer to 1 and all the other neurons to 0, so that each identification task was associated with one active output neuron. The training algorithm was run for different numbers of iterations (100, 250, 350, 500, and 1000) using an “early stopping” procedure to achieve the greatest sensitivity for a given value of specificity or misclassification [29]. …”
Section: Methods (mentioning; confidence: 99%)
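The quote above describes two ingredients: one-hot target encoding (one active output neuron per identification task) and selecting among fixed iteration budgets via an early-stopping criterion. A minimal sketch of both, where the validation sensitivities are hypothetical placeholders, not results from the cited study:

```python
def one_hot(index, n_outputs):
    """Target vector with one active neuron, as in the quoted setup."""
    return [1 if i == index else 0 for i in range(n_outputs)]

# Iteration budgets tried in the quoted study:
budgets = [100, 250, 350, 500, 1000]

# Hypothetical validation sensitivities at a fixed specificity,
# standing in for the metric the early-stopping procedure maximizes:
val_sensitivity = {100: 0.81, 250: 0.88, 350: 0.91, 500: 0.89, 1000: 0.85}

# "Early stopping" here: keep the budget with the best validation metric
# rather than the longest-trained model.
best = max(budgets, key=val_sensitivity.get)

print(one_hot(2, 5))  # [0, 0, 1, 0, 0]
print(best)           # 350
```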
“…Misclassification measures the percentage of nonidentification tasks mistaken for identification tasks. The receiver operating characteristics (ROC) curves of the ANN and ANFIS were evaluated by plotting sensitivity versus specificity and sensitivity versus misclassification for all possible thresholds [29]. The equations for computing these outcomes are listed below…”
Section: Methods (mentioning; confidence: 99%)
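The equations referenced in the quote (truncated in this excerpt) are the usual confusion-matrix definitions: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), and misclassification, as defined in the quote, is the fraction of nonidentification tasks mistaken for identification tasks, i.e. FP/(FP+TN) = 1 - specificity. A sketch evaluating them at a single threshold, with hypothetical network outputs:

```python
def confusion(outputs, targets, threshold):
    """Count TP, FP, TN, FN for continuous outputs at one threshold."""
    tp = fp = tn = fn = 0
    for out, tgt in zip(outputs, targets):
        pred = out >= threshold
        if pred and tgt:
            tp += 1
        elif pred and not tgt:
            fp += 1
        elif not pred and not tgt:
            tn += 1
        else:
            fn += 1
    return tp, fp, tn, fn

def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

def misclassification(fp, tn):
    # Nonidentification tasks mistaken for identification tasks,
    # per the quoted definition; equals 1 - specificity.
    return fp / (fp + tn)

# Hypothetical network outputs and identification-task targets:
outputs = [0.95, 0.8, 0.3, 0.6, 0.1, 0.7]
targets = [1, 1, 0, 1, 0, 0]
tp, fp, tn, fn = confusion(outputs, targets, 0.5)
print(sensitivity(tp, fn), specificity(tn, fp))
```

Sweeping the threshold over all possible values and collecting these pairs yields the sensitivity-versus-specificity and sensitivity-versus-misclassification ROC curves described in the quote.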