2023
DOI: 10.3390/biomedicines11030760
Machine Learning: Using Xception, a Deep Convolutional Neural Network Architecture, to Implement Pectus Excavatum Diagnostic Tool from Frontal-View Chest X-rays

Abstract: Pectus excavatum (PE), a chest-wall deformity that can compromise cardiopulmonary function, cannot be detected by a radiologist through frontal chest radiography without a lateral view or chest computed tomography. This study aims to train a convolutional neural network (CNN), a deep learning architecture with powerful image processing ability, for PE screening through frontal chest radiography, which is the most common imaging test in current hospital practice. Posteroanterior-view chest images of PE and norm…
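The full training setup is not reproduced on this page, but the approach described in the abstract, fine-tuning an ImageNet-pretrained Xception backbone for binary PE-versus-normal classification of posteroanterior chest radiographs, can be sketched roughly as follows. This is a minimal sketch using tf.keras; the input size, dropout rate, optimizer, and learning rate are illustrative assumptions, not the authors' reported settings.

import tensorflow as tf

def build_pe_classifier(input_shape=(299, 299, 3)):
    # ImageNet-pretrained Xception backbone without its classification head
    base = tf.keras.applications.Xception(
        include_top=False, weights="imagenet",
        input_shape=input_shape, pooling="avg")
    base.trainable = False  # freeze the backbone for an initial training stage

    inputs = tf.keras.Input(shape=input_shape)
    # scale pixel values from [0, 255] to [-1, 1], the range Xception expects
    x = tf.keras.layers.Rescaling(scale=1.0 / 127.5, offset=-1.0)(inputs)
    x = base(x, training=False)
    x = tf.keras.layers.Dropout(0.3)(x)
    # single sigmoid unit: estimated probability of pectus excavatum vs. normal
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

model = build_pe_classifier()
model.summary()

In a typical two-stage transfer-learning setup, the frozen backbone is trained first and selected deeper layers are then unfrozen for fine-tuning at a lower learning rate; whether the authors used such a schedule is not stated in the excerpt above.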

Cited by 4 publications (2 citation statements)
References 34 publications
“…In addition, the classification of angiographic images (i.e., 170 images) was achieved with five pretrained CNN models as shown in Tables 5-9. Several studies have already used pretrained models such as DenseNet, EfficientNet and Xception to classify medical images efficiently which is in line with the result achieved [48,72,73].…”
Section: Discussion (supporting)
confidence: 55%
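The citing study's comparison (five pretrained CNNs applied to 170 angiographic images) is not detailed here, but the general pattern of swapping ImageNet-pretrained backbones such as DenseNet, EfficientNet, and Xception behind a shared binary classification head can be sketched as below. This is an illustrative sketch only: the backbone list, input size, and head design are assumptions rather than that study's setup, and the per-model preprocess_input step is omitted for brevity but should be applied in practice.

import tensorflow as tf

# candidate ImageNet-pretrained backbones to compare
BACKBONES = {
    "DenseNet121": tf.keras.applications.DenseNet121,
    "EfficientNetB0": tf.keras.applications.EfficientNetB0,
    "Xception": tf.keras.applications.Xception,
}

def build_model(name, input_shape=(224, 224, 3)):
    base = BACKBONES[name](include_top=False, weights="imagenet",
                           input_shape=input_shape, pooling="avg")
    base.trainable = False  # feature extraction only, for a quick comparison
    inputs = tf.keras.Input(shape=input_shape)
    x = base(inputs, training=False)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs, name=name)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# build one model per backbone; each would then be trained and evaluated
# on the same split so the comparison is like-for-like
models = {name: build_model(name) for name in BACKBONES}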
“…Consequently, the correlation of each channel is captured via a regular 3×3 or 5×5 convolution. This idea goes to the extreme of doing 1×1 to each channel, then doing 3×3 to each output [26]. This is identical to replacing the inception module with depthwise separable convolutions.…”
Section: Introduction (mentioning)
confidence: 99%
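The quoted passage describes the "extreme inception" factorization behind Xception: spatial correlations and cross-channel correlations are handled by separate operations. A minimal sketch of a depthwise separable block in tf.keras is shown below; note that the built-in layers apply the per-channel 3×3 (depthwise) step before the 1×1 (pointwise) step, the reverse of the ordering in the quote, which the Xception paper treats as effectively equivalent when blocks are stacked. The filter counts and use of batch normalization here are illustrative assumptions.

import tensorflow as tf

def separable_block(x, filters):
    # per-channel spatial filtering: one 3x3 kernel applied to each input channel
    x = tf.keras.layers.DepthwiseConv2D(kernel_size=3, padding="same",
                                        use_bias=False)(x)
    # cross-channel mixing via 1x1 (pointwise) convolutions
    x = tf.keras.layers.Conv2D(filters, kernel_size=1, use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.ReLU()(x)

# the same factorization is available as a single built-in layer:
# tf.keras.layers.SeparableConv2D(filters, 3, padding="same")
inputs = tf.keras.Input(shape=(64, 64, 32))
outputs = separable_block(inputs, filters=64)
model = tf.keras.Model(inputs, outputs)
model.summary()

Compared with a regular 3×3 convolution over all channels, this factorization uses far fewer parameters and multiply-adds for the same output width, which is the efficiency argument behind Xception-style architectures.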