2021
DOI: 10.1001/jamanetworkopen.2021.6096
Development and Validation of a Deep Learning Model Using Convolutional Neural Networks to Identify Scaphoid Fractures in Radiographs

Abstract (Key Points): Question: Can deep convolutional neural networks (DCNNs) detect occult scaphoid fractures not visible to human observers? Findings: In this diagnostic study of 11 838 scaphoid radiographs, the DCNN trained to distinguish scaphoid fractures from scaphoids without fracture achieved an overall sensitivity and specificity of 87.1% and 92.1%, respectively, with an area under the receiver operating characteristic curve (AUROC) of 0.955; a second DCNN, which examined ne…
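
The sensitivity, specificity, and AUROC figures quoted in the abstract are standard binary-classification metrics. The sketch below shows how such metrics are typically computed with scikit-learn; the labels, scores, and the 0.5 decision threshold are illustrative assumptions, not the study's data or code.

```python
# Illustrative only: computing sensitivity, specificity, and AUROC for a
# binary fracture classifier. The labels, scores, and the 0.5 threshold
# are assumptions, not the study's data.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])         # 1 = fracture, 0 = no fracture
y_score = np.array([0.91, 0.20, 0.75, 0.40, 0.05,   # model probabilities
                    0.35, 0.88, 0.10])

y_pred = (y_score >= 0.5).astype(int)                # threshold the probabilities
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)                         # true-positive rate
specificity = tn / (tn + fp)                         # true-negative rate
auroc = roc_auc_score(y_true, y_score)               # threshold-independent

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} auroc={auroc:.3f}")
```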

Cited by 47 publications (46 citation statements). References 35 publications.
“…The model-free approach works in the real environment for learning. Instead, the model-based algorithm reduces the interaction with the real environment during the learning phase [17][18][19]. The goal is to build models based on these interactions with the environment and then use that model to simulate other events.…”
Section: Related Work
confidence: 99%
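
The contrast drawn in this excerpt, learning directly from real interactions versus learning a model and then simulating with it, can be made concrete with a small Dyna-Q-style sketch: each real transition updates the value function directly (model-free), and is also stored in a learned model that generates extra simulated updates (model-based planning). The toy chain environment, hyperparameters, and purely random exploration are assumptions for illustration, not the cited work's algorithm.

```python
# Minimal Dyna-Q-style sketch: model-free Q-learning updates from real
# transitions, plus model-based planning updates replayed from a learned
# transition model. Environment and hyperparameters are illustrative.
import random

N_STATES, ACTIONS = 5, [0, 1]                # small chain MDP: 0 = left, 1 = right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
model = {}                                   # learned model: (s, a) -> (reward, next state)
alpha, gamma, n_planning = 0.1, 0.95, 10

def step(s, a):
    """Toy environment: reward 1.0 for reaching the rightmost state."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return (1.0 if s2 == N_STATES - 1 else 0.0), s2

def q_update(s, a, r, s2):
    best_next = max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

for episode in range(50):
    s = 0
    while s != N_STATES - 1:
        a = random.choice(ACTIONS)           # purely random exploration for simplicity
        r, s2 = step(s, a)
        q_update(s, a, r, s2)                # model-free: learn from the real transition
        model[(s, a)] = (r, s2)              # model-based: remember the transition...
        for _ in range(n_planning):          # ...and replay simulated transitions from it
            ps, pa = random.choice(list(model))
            pr, ps2 = model[(ps, pa)]
            q_update(ps, pa, pr, ps2)
        s = s2

print("Learned Q-values:", {k: round(v, 2) for k, v in Q.items()})
```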
“…However, Yoon et al later established two models, namely the Apparent Fracture Model and the Occult Fracture Model, which excelled in identifying obvious and occult scaphoid fractures, respectively. Their research used larger datasets and more advanced deep learning algorithms, and both models showed higher sensitivity and specificity than those of Langerhuizen et al (29).…”
Section: Discussion
confidence: 93%
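
The two-model design described here, an apparent-fracture model screening all radiographs and an occult-fracture model examining the remainder, amounts to a cascade of binary classifiers. A hypothetical sketch of such a cascade follows; the predict_* callables and thresholds are placeholders for illustration, not the published models.

```python
# Hypothetical two-stage cascade: an "apparent fracture" model screens every
# radiograph, and cases it calls negative go to an "occult fracture" model.
# Both callables and thresholds are illustrative stand-ins, not the study's DCNNs.
from typing import Callable

def cascade_diagnosis(image,
                      predict_apparent: Callable[[object], float],
                      predict_occult: Callable[[object], float],
                      thr_apparent: float = 0.5,
                      thr_occult: float = 0.5) -> str:
    """Return a coarse label from two stacked binary classifiers."""
    if predict_apparent(image) >= thr_apparent:
        return "apparent fracture"
    if predict_occult(image) >= thr_occult:
        return "suspected occult fracture"
    return "no fracture detected"

# Usage with stand-in score functions (real models would be trained DCNNs):
print(cascade_diagnosis("radiograph.png",
                        predict_apparent=lambda img: 0.2,
                        predict_occult=lambda img: 0.8))
```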
“…format using MRICroGL and pruned to 512×512 pixels, with an average thickness of 3 mm. We selected 2364 MRI water images (1020 from patients and 1344 from controls) with recognizable pancreas imaging and manually labeled them using LabelMe [38] for building up the DCNN [39]. We used 315 participants’ images for training and 79 data sets for testing (approximately an 80:20 ratio).…”
Section: Methods
confidence: 99%
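
The preprocessing described in this excerpt, resizing slices to 512×512 pixels and holding out roughly 20% of participants for testing, can be sketched as follows. The file paths, participant counts, and the use of Pillow and scikit-learn are assumptions for illustration, not the cited study's actual pipeline.

```python
# Illustrative sketch: resize exported MRI slices to 512x512 and split the
# data ~80:20 at the participant level so no subject appears in both sets.
# Paths and counts are placeholders; load_slice is shown but not called here.
import numpy as np
from PIL import Image
from sklearn.model_selection import GroupShuffleSplit

def load_slice(path: str) -> np.ndarray:
    """Load one exported MRI slice and resize it to 512x512 pixels."""
    img = Image.open(path).convert("L")          # grayscale
    return np.asarray(img.resize((512, 512)))

# Toy metadata: one row per slice, tagged with the participant it came from.
slice_paths = [f"slices/subj{p:03d}_{i}.png" for p in range(394) for i in range(6)]
participant_ids = [p for p in range(394) for _ in range(6)]

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(slice_paths, groups=participant_ids))
print(len({participant_ids[i] for i in train_idx}), "participants for training,",
      len({participant_ids[i] for i in test_idx}), "for testing")
```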