2020
DOI: 10.1117/1.jmi.7.1.016501

Deep convolutional neural networks in the classification of dual-energy thoracic radiographic views for efficient workflow: analysis on over 6500 clinical radiographs

Abstract: DICOM header information is frequently used to classify medical image types; however, if a header is missing fields or contains incorrect data, the utility is limited. To expedite image classification, we trained convolutional neural networks (CNNs) in two classification tasks for thoracic radiographic views obtained from dual-energy studies: (a) distinguishing between frontal, lateral, soft tissue, and bone images and (b) distinguishing between posteroanterior (PA) or anteroposterior (AP) chest radiographs. C…
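The abstract's motivation — DICOM headers are often missing fields or contain incorrect data — can be illustrated with a minimal sketch. Here headers are modeled as plain dictionaries; the field name follows the DICOM ViewPosition convention, but the fallback logic is an assumption for illustration, not the paper's pipeline:

```python
# Minimal sketch: read a radiograph's view from a DICOM-style header,
# falling back to None (i.e., deferring to image-based CNN classification)
# when the field is absent or holds an unrecognized value.

VALID_VIEWS = {"PA", "AP", "LL", "RL"}

def view_from_header(header):
    """Return the view label from the header, or None if unusable."""
    view = header.get("ViewPosition")
    if view in VALID_VIEWS:
        return view
    return None  # missing/garbled header -> hand off to the CNN classifier

# One complete header, one missing the field, one with corrupted data.
print(view_from_header({"ViewPosition": "PA"}))   # PA
print(view_from_header({}))                       # None
print(view_from_header({"ViewPosition": "??"}))   # None
```

This is exactly the failure mode the paper's CNN approach is meant to cover: the header route returns nothing usable, so classification must come from the pixel data itself.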

Cited by 3 publications (5 citation statements)
References 7 publications
“…The number of training images per class for a given view angle was less than 500 (see Table 1), which is relatively small compared with reference datasets used to train models from scratch. 5,13 Therefore, transfer learning was utilized in this work. CNN hyperparameters, such as the optimizer, number of epochs, etc., were arbitrarily selected and kept fixed in this study.…”
Section: Discussion
confidence: 99%
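The quoted passage notes that with fewer than 500 training images per class, the models were fine-tuned from pretrained weights rather than trained from scratch. A toy sketch of that idea — freeze the pretrained feature extractor and fit only a small linear head — in plain Python; the feature function, dataset, and learning rate here are invented for illustration, not taken from the paper:

```python
import math

# "Pretrained backbone": a fixed, frozen mapping from inputs to feature
# vectors. In the paper this role is played by a CNN pretrained on a
# large reference dataset; here it is a stand-in function.
def frozen_features(x):
    return [x[0] + x[1], x[0] - x[1]]

# Tiny synthetic dataset: two classes separable in the feature space.
data = [((1.0, 0.0), 1), ((0.9, 0.1), 1), ((0.0, 1.0), 0), ((0.1, 0.9), 0)]

# Trainable "head": logistic regression on the frozen features only.
w, b = [0.0, 0.0], 0.0
lr = 0.5
for _ in range(200):
    for x, y in data:
        f = frozen_features(x)          # backbone weights never update
        z = w[0] * f[0] + w[1] * f[1] + b
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - y                        # gradient of log-loss w.r.t. z
        w = [w[i] - lr * g * f[i] for i in range(2)]
        b -= lr * g

def predict(x):
    f = frozen_features(x)
    z = w[0] * f[0] + w[1] * f[1] + b
    return 1 if z > 0 else 0

print([predict(x) for x, _ in data])  # [1, 1, 0, 0]
```

Because only the head's handful of parameters are trained, far fewer labeled examples are needed than when learning the feature extractor itself — the rationale the citing authors give for transfer learning with small per-class counts.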
“…While these features do not have an obvious interpretation, they could be due to subtle correlations in the training data, which may not be indicative of the class globally. 13,28,29 For example, it could be possible that image edges contain fingerprints of the hardware where the patient was scanned, which may correlate with the anatomic class since patients of the same treatment site are often grouped on the same treatment machine. Retrospective inspection of the projection headers ruled out the presence of imaging blades at the border of the projection images.…”
Section: Discussion
confidence: 99%
“…The quality of the dataset labeling method is likely to be a cornerstone of safe deep learning model development for systems intended for clinical use. Open source datasets may be vulnerable to adversarial perturbation, which can induce model failure or falsely high performance in image classification tasks [103]. Image perturbations are often difficult to detect.…”
Section: Risk and Safety
confidence: 99%
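The adversarial-perturbation risk quoted above can be made concrete with a toy sketch: a tiny, hard-to-see change to the input flips a classifier's decision. The fixed linear model, inputs, and step size below are all invented for illustration (an FGSM-style sign step, not the attack from the cited work):

```python
import math

# Hypothetical fixed linear classifier: decision is sign(w . x + b).
w, b = [2.0, -1.0], 0.0

def prob(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

x = [0.30, 0.50]   # z = 0.10 > 0, so classified as the positive class
eps = 0.06         # small per-pixel perturbation budget

# FGSM-style step: nudge each input component by eps in the direction
# that lowers the score (opposite the sign of the gradient of z w.r.t. x).
x_adv = [x[i] - eps * math.copysign(1.0, w[i]) for i in range(2)]

print(prob(x) > 0.5, prob(x_adv) > 0.5)  # True False
```

A perturbation of 0.06 per component is visually negligible for image intensities, yet the prediction flips — which is why the citing authors flag curated, trusted dataset labeling as a safety cornerstone for clinical models.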