THORACIC IMAGING

Chest radiography is the most common radiologic examination, despite its inferiority to low-dose CT for lung cancer screening (1). Some authors have shown that up to 90% of "missed" lung cancer nodules can be found when the baseline chest radiograph is re-reviewed with the benefit of a follow-up examination showing that the mass has grown (2). Misdiagnoses of lung cancer can occur for many reasons: a lack of perception of the nodule, the decision to dismiss a subtle density, or satisfaction of search when another abnormality is identified (3-5). Lesion characteristics, including size, density, and location, make the detection of lung nodules more challenging on chest radiographs (6-8).

To improve the efficacy of chest radiography for nodule detection, computer-aided detection (CAD) software has been developed and evaluated. In 2004, Kakeda et al (9) tested their CAD system and reported that it was beneficial in analyzing radiographs with nodules but had an average false-positive rate of 3.15 per image. de Hoop et al (10) showed

Purpose: To compare the performance of radiologists in detecting malignant pulmonary nodules on chest radiographs when assisted by deep learning-based deep convolutional neural network (DCNN) software with that of radiologists or DCNN software alone in a multicenter setting.

Materials and Methods: Investigators at four medical centers retrospectively identified 600 lung cancer-containing chest radiographs and 200 normal chest radiographs. Each radiograph with a lung cancer had at least one malignant nodule confirmed by CT and pathologic examination. Twelve radiologists from the four centers independently analyzed the chest radiographs and marked regions of interest. Commercially available deep learning-based computer-aided detection software, separately trained, tested, and validated with 19 330 radiographs, was used to find suspicious nodules. The radiologists then reviewed the images with the assistance of the DCNN software.
The sensitivity and the number of false-positive findings per image for DCNN software alone, radiologists alone, and radiologists assisted by DCNN software were analyzed using logistic regression and Poisson regression.
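The two endpoints in this study can be illustrated on toy reader data. The sketch below is a minimal illustration, not the study's code, and all counts are invented; it computes per-reader sensitivity and false-positive findings per image, the quantities that the logistic and Poisson regression models operate on.

```python
# Toy illustration of the two study endpoints: nodule-level sensitivity
# and false-positive findings per image. All counts are hypothetical.

def sensitivity(true_positives: int, total_nodules: int) -> float:
    """Fraction of confirmed malignant nodules that were marked."""
    return true_positives / total_nodules

def fps_per_image(false_positives: int, total_images: int) -> float:
    """Average number of false-positive marks per radiograph."""
    return false_positives / total_images

# Hypothetical results for one reader, with and without DCNN assistance
# (600 cancer radiographs with >= 1 nodule each; 800 images in total).
unaided = {"tp": 390, "fp": 160, "nodules": 600, "images": 800}
aided   = {"tp": 450, "fp": 120, "nodules": 600, "images": 800}

for label, r in [("unaided", unaided), ("DCNN-aided", aided)]:
    print(f"{label}: sensitivity={sensitivity(r['tp'], r['nodules']):.3f}, "
          f"FPs/image={fps_per_image(r['fp'], r['images']):.3f}")
```

Logistic regression (nodule detected: yes/no) and Poisson regression (count of false-positive marks per image) are natural models for these two outcomes in a multireader, multicase design.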
The purpose of this study was to evaluate the predictive performance of ultrasonography (US)-based radiomics for axillary lymph node metastasis and to compare it with that of a clinicopathologic model. Methods: A total of 496 patients (mean age, 52.5±10.9 years) who underwent breast cancer surgery between January 2014 and December 2014 were included in this study. Among them, 306 patients who underwent surgery between January 2014 and August 2014 were enrolled as a training cohort, and 190 patients who underwent surgery between September 2014 and December 2014 were enrolled as a validation cohort. To predict axillary lymph node metastasis in breast cancer, we developed a preoperative clinicopathologic model using multivariable logistic regression and constructed a radiomics model using 23 radiomic features selected via least absolute shrinkage and selection operator (LASSO) regression. Results: In the training cohort, the areas under the receiver operating characteristic curve (AUCs) were 0.760, 0.812, and 0.858 for the clinicopathologic, radiomics, and combined models, respectively. In the validation cohort, the AUCs were 0.708, 0.831, and 0.810, respectively. The combined model showed significantly better diagnostic performance than the clinicopathologic model. Conclusion: A radiomics model based on the US features of primary breast cancers showed additional value when combined with a clinicopathologic model to predict axillary lymph node metastasis.
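The AUC used to compare the clinicopathologic, radiomics, and combined models equals the probability that a randomly chosen node-positive patient scores higher than a randomly chosen node-negative one (the Mann-Whitney interpretation). The sketch below computes it from scratch on invented scores; it is a minimal illustration of the metric, not the study's pipeline, and the LASSO feature-selection step is not reproduced here.

```python
# Minimal AUC (area under the ROC curve) via the Mann-Whitney statistic:
# AUC = P(score_pos > score_neg), counting ties as 1/2.
# Scores below are invented for illustration only.

def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model outputs for node-positive (pos) / node-negative (neg) patients.
clin_pos, clin_neg = [0.6, 0.7, 0.4, 0.8], [0.5, 0.3, 0.6, 0.2]
comb_pos, comb_neg = [0.8, 0.9, 0.6, 0.9], [0.5, 0.2, 0.4, 0.1]

print("clinicopathologic AUC:", auc(clin_pos, clin_neg))
print("combined AUC:", auc(comb_pos, comb_neg))
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is why the rise from 0.760 (clinicopathologic) to 0.858 (combined) in the training cohort indicates added value from the radiomics features.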
Objective: To investigate the feasibility of using a deep learning-based analysis of auscultation data to predict significant stenosis of arteriovenous fistulas (AVF) in patients undergoing hemodialysis requiring percutaneous transluminal angioplasty (PTA). Materials and Methods: Forty patients (24 male and 16 female; median age, 62.5 years) with dysfunctional native AVF were prospectively recruited. Digital sounds from the AVF shunt were recorded using a wireless electronic stethoscope before (pre-PTA) and after PTA (post-PTA), and the audio files were subsequently converted to mel spectrograms, which were used to construct various deep convolutional neural network (DCNN) models (DenseNet201, EfficientNetB5, and ResNet50). The performance of these models for diagnosing ≥ 50% AVF stenosis was assessed and compared. The ground truth for the presence of ≥ 50% AVF stenosis was obtained using digital subtraction angiography. Gradient-weighted class activation mapping (Grad-CAM) was used to produce visual explanations for DCNN model decisions. Results: Eighty audio files were obtained from the 40 recruited patients and pooled for the study. Mel spectrograms of "pre-PTA" shunt sounds showed patterns corresponding to abnormal high-pitched bruits with systolic accentuation observed in patients with stenotic AVF. The ResNet50 and EfficientNetB5 models yielded an area under the receiver operating characteristic curve of 0.99 and 0.98, respectively, at optimized epochs for predicting ≥ 50% AVF stenosis. However, Grad-CAM heatmaps revealed that only ResNet50 highlighted areas relevant to AVF stenosis in the mel spectrogram. Conclusion: Mel spectrogram-based DCNN models, particularly ResNet50, successfully predicted the presence of significant AVF stenosis requiring PTA in this feasibility study and may potentially be used in AVF surveillance.
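The key preprocessing step above, converting shunt-sound audio into a mel spectrogram image that a DCNN can classify, is usually done with an audio library such as librosa; the sketch below reimplements it from scratch in NumPy to make the transformation explicit. It is an illustrative assumption-laden sketch (the frame size, hop length, and filter count are invented defaults), not the study's actual preprocessing, and the DCNN and Grad-CAM stages are not shown.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    """Triangular filters mapping a power spectrum onto the mel scale."""
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mel_spectrogram(signal, sr, n_fft=512, hop=128, n_mels=40):
    """Frame the signal, take windowed power FFTs, apply mel filters, log-compress."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        spec = np.abs(np.fft.rfft(signal[start:start + n_fft] * window)) ** 2
        frames.append(spec)
    power = np.array(frames).T                       # (n_fft//2 + 1, n_frames)
    mel = mel_filterbank(sr, n_fft, n_mels) @ power  # (n_mels, n_frames)
    return 10.0 * np.log10(mel + 1e-10)              # dB scale

# Synthetic stand-in for a shunt sound: a low hum plus a high-pitched component
# (a stenotic AVF would show high-frequency bruit energy in the spectrogram).
sr = 8000
t = np.arange(sr) / sr  # one second of audio
audio = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
S = mel_spectrogram(audio, sr)
print(S.shape)  # (n_mels, n_frames)
```

The resulting 2-D log-mel array is what gets treated as an image and fed to models such as ResNet50 or EfficientNetB5.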
Background Accurate interpretation of chest radiographs requires years of medical training, and many countries face a shortage of medical professionals to meet such requirements. Recent advancements in artificial intelligence (AI) have aided diagnoses; however, their performance is often limited due to data imbalance. The aim of this study was to augment imbalanced medical data using generative adversarial networks (GANs) and evaluate the clinical quality of the generated images via a multi-center visual Turing test. Methods Using six chest radiograph datasets (MIMIC, CheXpert, CXR8, JSRT, VBD, and OpenI), StarGAN v2 generated chest radiographs with specific pathologies. Five board-certified radiologists from three university hospitals, each with at least five years of clinical experience, evaluated the image quality through a visual Turing test. Further evaluations were performed to investigate whether GAN augmentation enhanced the convolutional neural network (CNN) classifier performances. Results In terms of identifying GAN images as artificial, there was no significant difference in sensitivity between radiologists and random guessing (radiologists: 147/275 (53.5%) vs random guessing: 137.5/275 (50%); p = .284). GAN augmentation enhanced CNN classifier performance by 11.7%. Conclusion Radiologists effectively classified chest pathologies with synthesized radiographs, suggesting that the images contained adequate clinical information. Furthermore, GAN augmentation enhanced CNN performance, providing a way to overcome data imbalance in medical AI training. CNN-based methods rely on the amount and quality of training data; the present study showed that GAN augmentation could effectively augment training data for medical AI.
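The visual Turing test result (147 of 275 GAN images flagged as artificial versus a chance rate of 50%) can be checked with an exact binomial test. The sketch below is a stdlib-only illustration; the abstract reports p = .284, which may come from a different test statistic, but the exact binomial p-value lands in the same nonsignificant range.

```python
from math import comb

def binom_two_sided_p(k, n, p0=0.5):
    """Exact two-sided binomial test: sum the probabilities of all
    outcomes no more likely than the observed one under H0: rate = p0."""
    pmf = [comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] * (1 + 1e-12))

# Radiologists flagged 147 of 275 GAN images as artificial; chance = 50%.
p = binom_two_sided_p(147, 275)
print(f"two-sided p = {p:.3f}")
```

A p-value well above 0.05 supports the paper's conclusion: the radiologists could not distinguish GAN-generated radiographs from real ones better than chance.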