2020
DOI: 10.1001/jamanetworkopen.2020.22779

Comparison of Chest Radiograph Interpretations by Artificial Intelligence Algorithm vs Radiology Residents

Abstract: IMPORTANCE Chest radiography is the most common diagnostic imaging examination performed in emergency departments (EDs). Augmenting clinicians with automated preliminary read assistants could help expedite their workflows, improve accuracy, and reduce the cost of care. OBJECTIVE To assess the performance of artificial intelligence (AI) algorithms in realistic radiology workflows by performing an objective comparative evaluation of the preliminary reads of anteroposterior (AP) frontal chest radiographs performe…

Cited by 95 publications (80 citation statements)
References 25 publications
“…The diagnostic accuracy of the model also compared favourably with that of previously published models (eg, mean ChestNet AUC 0·78). 7,8,10,22,23,44 The accuracy of the model can be at least partly attributed to the large number of cases labelled by radiologists for model training. The evaluated chest x-ray model was trained on more than 800 000 images, each labelled by radiologists using a prospectively defined ontology tree of chest x-ray findings.…”
Section: Discussion
confidence: 99%
“…The evaluated chest x-ray model was trained on more than 800 000 images, each labelled by radiologists using a prospectively defined ontology tree of chest x-ray findings. Many other large-scale attempts to train deep-learning models on chest x-ray data have relied on text mining from the original radiology reports, 8,45 a process that has been criticised for inconsistency and inaccuracy. 46 Furthermore, the model uses all common chest x-ray projections (anterior-posterior, posterior-anterior, and lateral), which represents the standard of care in real-world settings.…”
Section: Discussion
confidence: 99%
“…Roa et al studied retrospective peer review systems more closely to minimize false negatives in particular, whereby this AI tool could function as a real-time, prospective peer review for radiologists [19]. Earlier research was evaluated on rather small data sets with a high percentage of positive cases, or even exclusively positive cases, which does not represent real-time clinical workflow and might influence diagnostic accuracy [17, 27-29].…”
Section: Discussion
confidence: 99%
“…AI and deep learning are currently being tested for image processing in several anatomical regions and various clinical scenarios, including disorders of the chest (7,8). In this context, we read with great interest the recently published paper by Wu et al (7), investigating the performance of an AI model and that of human third-year radiology residents in interpreting chest radiographs.…”
confidence: 99%