To develop a convolutional neural network (CNN) algorithm that can predict the molecular subtype of a breast cancer based on MRI features. An IRB-approved study was performed in 216 patients with available pre-treatment MRIs and immunohistochemical staining pathology data. The first post-contrast MRI images were used for 3D segmentation using 3D Slicer. A CNN architecture was designed with 14 layers. Residual connections were used in the earlier layers to stabilize gradients during backpropagation. Inception-style layers were utilized deeper in the network to allow learned segregation of more complex feature mappings. Extensive regularization was utilized, including dropout, L2, feature map dropout, and transition layers. The class imbalance was addressed by doubling the input of underrepresented classes and utilizing a class-sensitive cost function. Parameters were tuned based on a 20% validation group. A class-balanced holdout set of 40 patients was utilized as the testing set. Software code was written in Python using the TensorFlow module on a Linux workstation with one NVIDIA Titan X GPU. Seventy-four luminal A, 106 luminal B, 13 HER2+, and 23 basal breast tumors were evaluated. Testing set accuracy was measured at 70%. The class-normalized macro area under the receiver operating characteristic (ROC) curve was measured at 0.853. The non-normalized micro-aggregated AUC was measured at 0.871, representing improved discriminatory power for the highly represented luminal A and luminal B subtypes. Aggregate sensitivity and specificity were measured at 0.603 and 0.958, respectively. MRI analysis of breast cancers utilizing a novel CNN can predict the molecular subtype of breast cancers. Larger data sets will likely improve our model.
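The abstract does not spell out the class-sensitive cost function; a minimal numpy sketch of one common choice, weighting each class inversely to its frequency, is shown below. The subtype counts come from the abstract (74 luminal A, 106 luminal B, 13 HER2+, 23 basal); the weighting scheme itself is an assumption, not the authors' confirmed implementation.

```python
import numpy as np

def class_weights(counts):
    """Weight each class by total / (n_classes * count), so rare classes
    contribute proportionally more to the loss."""
    counts = np.asarray(counts, dtype=float)
    return counts.sum() / (len(counts) * counts)

def weighted_cross_entropy(probs, labels, weights):
    """Mean cross-entropy with each sample scaled by its class weight."""
    probs = np.clip(probs, 1e-12, 1.0)              # avoid log(0)
    sample_losses = -np.log(probs[np.arange(len(labels)), labels])
    return float(np.mean(weights[labels] * sample_losses))

# Subtype counts from the abstract: luminal A, luminal B, HER2+, basal.
w = class_weights([74, 106, 13, 23])
```

With these counts, the rare HER2+ class receives the largest weight, so misclassifying it is penalized most heavily.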
To use deep learning with advanced data augmentation to accurately diagnose and classify femoral neck fractures. A retrospective study of patients with femoral neck fractures was performed. One thousand sixty-three AP hip radiographs were obtained from 550 patients. Ground truth labels of Garden fracture classification were applied as follows: (1) 127 Garden I and II fracture radiographs, (2) 610 Garden III and IV fracture radiographs, and (3) 326 normal hip radiographs. After localization by an initial network, a second CNN classified the images as Garden I/II fracture, Garden III/IV fracture, or no fracture. Advanced data augmentation techniques expanded the training set: (1) a generative adversarial network (GAN); (2) digitally reconstructed radiographs (DRRs) from preoperative hip CT scans. In all, 9063 images, real and generated, were available for training and testing. A deep neural network was designed and tuned based on a 20% validation group. A holdout test dataset consisted of 105 real images, 35 in each class. Two-class prediction of fracture versus no fracture (AUC 0.92): accuracy 92.3%, sensitivity 0.91, specificity 0.93, PPV 0.96, NPV 0.86. Three-class prediction of Garden I/II, Garden III/IV, or normal (AUC 0.96): accuracy 86.0%, sensitivity 0.79, specificity 0.90, PPV 0.80, NPV 0.90. Without any advanced augmentation, the AUC for two-class prediction was 0.80. With DRR as the only advanced augmentation, the AUC was 0.91, and with GAN only, the AUC was 0.87. GANs and DRRs can be used to improve the accuracy of a tool to diagnose and classify femoral neck fractures.
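The per-class metrics reported above follow directly from a 2×2 confusion matrix. A minimal sketch with illustrative counts (not the study's actual confusion matrix), assuming the stated 105-image holdout with 70 fracture and 35 normal cases:

```python
def binary_metrics(tp, fn, tn, fp):
    """Standard binary classification metrics from confusion-matrix counts.
    Convention here: 'positive' = fracture, 'negative' = no fracture."""
    return {
        "sensitivity": tp / (tp + fn),           # recall on fractures
        "specificity": tn / (tn + fp),           # recall on normal hips
        "ppv": tp / (tp + fp),                   # positive predictive value
        "npv": tn / (tn + fn),                   # negative predictive value
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }

# Illustrative counts only: 70 fracture and 35 normal images in the holdout.
m = binary_metrics(tp=64, fn=6, tn=33, fp=2)
```

These hypothetical counts are chosen only to show the arithmetic; the study reports the resulting metrics, not the underlying matrix.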
Background and Purpose— Hematoma volume measurements influence prognosis and treatment decisions in patients with spontaneous intracerebral hemorrhage (ICH). The aims of this study are to derive and validate a fully automated segmentation algorithm for ICH volumetric analysis using deep learning methods. Methods— In-patient computed tomography scans of 300 consecutive adults (age ≥18 years) with spontaneous, supratentorial ICH who were enrolled in the ICHOP (Intracerebral Hemorrhage Outcomes Project; 2009–2018) were separated into training (n=260) and test (n=40) datasets. A fully automated segmentation algorithm was derived using convolutional neural networks and trained on manual segmentations from the training dataset. The algorithm's performance was assessed against manual and semiautomated segmentation methods in the test dataset. Results— The mean volumetric Dice similarity coefficients for the fully automated segmentation algorithm when tested against manual and semiautomated segmentation methods were 0.894±0.264 and 0.905±0.254, respectively. ICH volumes derived from fully automated versus manual (R²=0.981; P<0.0001), fully automated versus semiautomated (R²=0.978; P<0.0001), and semiautomated versus manual (R²=0.990; P<0.0001) segmentation methods had strong between-group correlations. The fully automated segmentation algorithm (mean 12.0±2.7 s/scan) was significantly faster than both the manual (mean 201.5±92.2 s/scan; P<0.001) and semiautomated (mean 288.58±160.3 s/scan; P<0.001) segmentation methods. Conclusions— The fully automated segmentation algorithm quantified hematoma volumes from computed tomography scans of supratentorial ICH patients with similar accuracy and substantially greater efficiency compared with manual and semiautomated segmentation methods. External validation of the fully automated segmentation algorithm is warranted.
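The volumetric Dice similarity coefficient used to compare segmentations can be sketched in a few lines of numpy; the toy 3D masks below are for illustration only.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two 8-voxel "hematoma" masks that half-overlap.
a = np.zeros((4, 4, 4), dtype=bool); a[1:3, 1:3, 1:3] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3, 1:3, 2:4] = True
```

Here the masks share 4 of their 8 voxels each, giving Dice = 2·4 / (8+8) = 0.5; identical masks score 1.0.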
The aim of this study is to evaluate the role of a convolutional neural network (CNN) in predicting axillary lymph node metastasis using a breast MRI dataset. An institutional review board (IRB)-approved retrospective review of our database from 1/2013 to 6/2016 identified 275 axillary lymph nodes for this study. One hundred thirty-three biopsy-proven metastatic axillary lymph nodes and 142 negative control lymph nodes were identified, the latter based on benign biopsies (100) and healthy MRI screening patients (42) with at least 3 years of negative follow-up. For each breast MRI, the axillary lymph node was identified on the first T1 post-contrast dynamic images and underwent 3D segmentation using the open-source software platform 3D Slicer. A 32 × 32 patch was then extracted from the center slice of the segmented tumor data. A CNN was designed for lymph node prediction based on each of these cropped images. The CNN consisted of seven convolutional layers and max-pooling layers, with 50% dropout applied in the linear layer. In addition, data augmentation and L2 regularization were performed to limit overfitting. Training was implemented using the Adam optimizer, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments. Code for this study was written in Python using the TensorFlow module (1.0.0). Experiments and CNN training were done on a Linux workstation with an NVIDIA GTX 1070 Pascal GPU. Two-class axillary lymph node metastasis prediction models were evaluated. For each lymph node, a final softmax score threshold of 0.5 was used for classification. On this basis, the CNN achieved a mean five-fold cross-validation accuracy of 84.3%. It is feasible for current deep CNN architectures to be trained to predict the likelihood of axillary lymph node metastasis. A larger dataset will likely improve our prediction model, which could potentially offer a non-invasive alternative to core needle biopsy and even sentinel lymph node evaluation.
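The Adam update described above (adaptive estimates of lower-order moments with bias correction) can be sketched in numpy. The learning rate and toy quadratic objective below are illustrative, not the study's actual training configuration.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and squared gradient (v), bias-corrected, then a scaled step."""
    m = b1 * m + (1 - b1) * grad        # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad**2     # second moment (uncentered variance)
    m_hat = m / (1 - b1**t)             # bias correction for zero-init m
    v_hat = v / (1 - b2**t)             # bias correction for zero-init v
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy objective f(theta) = theta^2, gradient 2*theta, starting at 1.0.
theta = np.array([1.0])
m, v = np.zeros(1), np.zeros(1)
for t in range(1, 101):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.1)
```

After a hundred steps the parameter sits near the minimum at zero; the bias correction is what keeps the very first steps from being damped by the zero-initialized moment estimates.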
The aim of this study is to develop a fully automated convolutional neural network (CNN) method for quantification of breast MRI fibroglandular tissue (FGT) and background parenchymal enhancement (BPE). An institutional review board-approved retrospective study evaluated 1114 breast volumes in 137 patients using T1 precontrast, T1 postcontrast, and T1 subtraction images. First, using our previously published method of quantification, we manually segmented and calculated the amount of FGT and BPE to establish ground truth parameters. Then, a novel 3D CNN modified from the standard 2D U-Net architecture was developed and implemented for voxel-wise prediction of whole breast and FGT margins. In the collapsing arm of the network, a series of 3D convolutional filters of size 3 × 3 × 3 is applied for standard CNN hierarchical feature extraction. To reduce feature map dimensionality, a 3 × 3 × 3 convolutional filter with stride 2 in all directions is applied; a total of 4 such operations are used. In the expanding arm of the network, a series of convolutional transpose filters of size 3 × 3 × 3 is used to up-sample each intermediate layer. To synthesize features at multiple resolutions, connections are introduced between the collapsing and expanding arms of the network. L2 regularization was implemented to prevent over-fitting. Cases were separated into training (80%) and test (20%) sets. Fivefold cross-validation was performed. Software code was written in Python using the TensorFlow module on a Linux workstation with an NVIDIA GTX Titan X GPU. In the test set, the fully automated CNN method for quantifying the amount of FGT yielded an accuracy of 0.813 (cross-validation Dice similarity coefficient) and a Pearson correlation of 0.975. For quantifying the amount of BPE, the CNN method yielded an accuracy of 0.829 and a Pearson correlation of 0.955. Our CNN was able to quantify FGT and BPE within an average of 0.42 s per MRI case.
A fully automated CNN method can be utilized to quantify MRI FGT and BPE. A larger dataset will likely improve our model.
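As a sanity check on the collapsing arm described above, assuming 'same' padding so that each stride-2 convolution halves each spatial dimension (rounding up), four such operations reduce resolution 16-fold. The input volume shape below is illustrative, not the study's actual MRI dimensions.

```python
import math

def downsample(shape, n_ops=4, stride=2):
    """Feature-map spatial shape after n_ops stride-2 'same'-padded
    convolutions: each op computes ceil(dim / stride) per dimension."""
    for _ in range(n_ops):
        shape = tuple(math.ceil(d / stride) for d in shape)
    return shape

# e.g. a hypothetical 128 x 128 x 64 input volume after the collapsing arm:
bottleneck = downsample((128, 128, 64))
```

The expanding arm's four transpose convolutions then reverse this reduction, with the skip connections re-injecting the higher-resolution features lost at each downsampling step.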