2016
DOI: 10.1007/978-3-319-41546-8_1
Agreement Between Radiologists’ Interpretations of Screening Mammograms

Cited by 4 publications (3 citation statements) · References 1 publication
“…Indeed, analyzing mammograms is a challenging task. Previous works have shown the agreement between radiologists to be slight to moderate at best [2][3][4]. A second reading of mammograms by an additional radiologist has been proven to increase sensitivity and specificity [5,6].…”
Section: Radiologists' Performance in Screening Digital Mammography (mentioning, confidence: 99%)
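The "slight to moderate" agreement mentioned above is typically reported as a chance-corrected statistic such as Cohen's kappa, read against the Landis–Koch bands. A minimal sketch of how agreement between two readers' recall decisions could be computed (the decisions below and the use of scikit-learn are illustrative assumptions, not data from the cited works):

```python
# Minimal sketch: Cohen's kappa between two readers' recall decisions.
# The reader labels are made-up illustrative data, not from the cited studies.
from sklearn.metrics import cohen_kappa_score

# Hypothetical recall decisions (1 = recall, 0 = no recall) for 12 screening exams
reader_a = [0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1]
reader_b = [0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1]

kappa = cohen_kappa_score(reader_a, reader_b)

# Landis & Koch bands commonly used to describe kappa:
#   <=0.20 slight, 0.21-0.40 fair, 0.41-0.60 moderate,
#   0.61-0.80 substantial, >0.80 almost perfect
print(f"Cohen's kappa: {kappa:.2f}")
```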
“…Another challenge of health risk level estimation stems from label quality. For instance, the agreement rate among radiologists for malignancy is usually less than 80%, resulting in a noisy labeled dataset [38,47]. Despite the often-unclear distinction between adjacent labels, it is more likely that a well-trained annotator will mislabel a Severe DR (3) sample as Moderate DR (2) rather than No DR (0).…”
Section: Introduction (mentioning, confidence: 99%)
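The adjacent-label confusion described in that passage can be made concrete with a simple ordinal noise model. A minimal sketch (the five DR grades, the exponential distance decay, and all numbers are assumptions for illustration, not taken from the cited papers):

```python
# Minimal sketch of ordinal label noise: an annotator is more likely to confuse
# adjacent grades than distant ones. Grades and decay rate are assumed values.
import numpy as np

n_grades = 5   # e.g. No DR (0) ... Proliferative DR (4)
decay = 0.3    # assumed fraction of mass retained per step of label distance

def noise_matrix(n: int, decay: float) -> np.ndarray:
    """Row i gives P(observed label = j | true label = i), decaying with |i - j|."""
    dist = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    unnorm = decay ** dist                     # 1 on the diagonal, smaller off-diagonal
    return unnorm / unnorm.sum(axis=1, keepdims=True)

T = noise_matrix(n_grades, decay)
# For a true Severe DR (3) case, the adjacent grades 2 and 4 receive most of the
# mislabeling mass, matching the intuition in the quoted passage.
print(np.round(T[3], 3))
```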
“…• The distinction between adjacent labels is often unclear and annotator-dependent, leading to a situation where the same input may be marked with different (although probably adjacent) labels by different practitioners (or even by the same one). Radiologists usually agree on malignancy in less than 80% of cases [8], whereas the agreement rate may be even lower when predicting individual BI-RADS labels [10].…”
Section: Introduction (mentioning, confidence: 99%)