2013
DOI: 10.1016/j.compbiomed.2013.05.003

Using experts feedback in clinical case resolution and arbitration as accuracy diagnosis methodology

Cited by 8 publications (4 citation statements, published 2015–2023) · References 29 publications

“…As described by Rodríguez-González [41], a more rigorous evaluation design would use a panel of assessors who are assigned a random subset of questions and a panel of referees who arbitrate among results returned by the assessors and by the system. Our evaluations were formative in nature, designed to demonstrate the feasibility and plausibility of the methods we developed.…”
Section: Discussion (mentioning) · confidence: 99%
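The design described in this statement — a panel of assessors each answering a random subset of questions, with referees arbitrating when the assessors and the system disagree — can be illustrated in a few lines. The sketch below is ours, not from Rodríguez-González [41] or the citing paper; the function names and the majority-vote arbitration rule are illustrative assumptions only.

```python
import random

def assign_questions(questions, assessors, per_assessor):
    """Hypothetical sketch: give each assessor a random subset of the question pool."""
    return {a: random.sample(questions, per_assessor) for a in assessors}

def arbitrate(question, system_answer, assessor_answer, referees):
    """Hypothetical sketch: referees (callables) vote only when system and assessor disagree."""
    if system_answer == assessor_answer:
        return system_answer
    votes = [referee(question, system_answer, assessor_answer) for referee in referees]
    return max(set(votes), key=votes.count)  # simple majority among referees
```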
“…The performance of the final trained CNN model needed to be evaluated by corresponding metrics [47]. Common evaluation metrics for classification tasks are true positive rate (TPR) and false positive rate (FPR) [48, 49], which have the following equations: TPR = TP / (TP + FN) and FPR = FP / (FP + TN), where TP indicates that a positive sample is correctly identified as a positive sample, TN indicates that a negative sample is correctly identified as a negative sample, FP indicates a false positive sample (which means that a negative sample is incorrectly identified as a positive sample), and FN indicates a false negative sample (which means that a positive sample is incorrectly identified as a negative sample).…”
Section: Methods (mentioning) · confidence: 99%
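As a minimal sketch of these two formulas (the function and variable names are ours, not the cited paper's), TPR and FPR follow directly from the four confusion counts:

```python
def tpr_fpr(tp, fn, fp, tn):
    """True positive rate (sensitivity) and false positive rate from confusion counts."""
    tpr = tp / (tp + fn)  # share of actual positives correctly identified
    fpr = fp / (fp + tn)  # share of actual negatives incorrectly flagged as positive
    return tpr, fpr

# Example: 80 TP, 20 FN, 10 FP, 90 TN -> TPR = 0.8, FPR = 0.1
print(tpr_fpr(80, 20, 10, 90))
```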
“…The performance of the final trained CNN model needed to be evaluated by corresponding metrics [36]. Common evaluation metrics for classification tasks are Precision, Recall, and F1-Measure [37, 38], which have the following equations: Precision = TP / (TP + FP), Recall = TP / (TP + FN), and F1 = 2 × Precision × Recall / (Precision + Recall). Among them, the role of the convolutional layer was to perform adaptive feature extraction on the Mel spectrogram, which was achieved by convolutional operations of the convolutional kernel matrix [34].…”
Section: CNN Model Evaluation Metrics (mentioning) · confidence: 99%
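Likewise, a minimal sketch of Precision, Recall, and F1-Measure, assuming the standard definitions (names are ours):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and the F1-measure (their harmonic mean) from confusion counts."""
    precision = tp / (tp + fp)  # share of predicted positives that are correct
    recall = tp / (tp + fn)     # share of actual positives that are found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example: 80 TP, 10 FP, 20 FN -> precision ≈ 0.889, recall = 0.8, F1 ≈ 0.842
print(precision_recall_f1(80, 10, 20))
```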
“…The performance of the final trained CNN model needed to be evaluated by corresponding metrics [36]. Common evaluation metrics for classification tasks are Precision, Recall, and F1-Measure [37, 38], which have the following equations:…”
Section: CNN Model Evaluation Metrics (mentioning) · confidence: 99%