Prompt diagnosis of benign and malignant breast masses is essential for early breast cancer screening. Convolutional neural networks (CNNs) can assist in classifying benign and malignant breast masses. A persistent problem in current CNN-based mammography mass classification is the lack of locally invariant features, so the models cannot respond effectively to geometric image transformations or to changes caused by imaging angles. In this study, a novel model that trains both a texton representation and a deep CNN representation for mass classification is proposed. Rotation-invariant features provided by the maximum response filter bank are incorporated into the CNN-based classification. Fusion applied after a feature-reduction step is used to address the deficiencies of the CNN in extracting mass features. The model is tested on the public CBIS-DDSM dataset and on a combined dataset comprising mini-MIAS and INbreast. On the CBIS-DDSM dataset, fusion after the reduction step outperforms the other models in terms of area under the receiver operating characteristic curve (0.97), accuracy (94.30%), and specificity (97.19%). Therefore, the proposed method can be integrated into computer-aided diagnosis systems to achieve precise screening of breast masses.
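The key idea behind a maximum-response filter bank is that taking the per-pixel maximum over a set of oriented filter responses discards orientation, yielding rotation-invariant texture features. A minimal sketch of this principle, using an illustrative 3×3 oriented edge filter rather than the full MR8 bank described in the abstract:

```python
# Sketch: rotation-invariant responses via a maximum-response filter bank.
# The single 3x3 edge kernel and 4 orientations are illustrative; the real
# MR8 bank uses larger Gaussian-derivative filters at 6 orientations.

def conv2d(img, k):
    """'Valid' 2-D correlation of img with a 3x3 kernel k (lists of lists)."""
    h, w = len(img), len(img[0])
    return [[sum(img[i + di][j + dj] * k[di][dj]
                 for di in range(3) for dj in range(3))
             for j in range(w - 2)]
            for i in range(h - 2)]

def rot90(m):
    """Rotate a square matrix 90 degrees counter-clockwise."""
    return [list(r) for r in zip(*m)][::-1]

def max_response(img):
    """Per-pixel maximum absolute response over 4 orientations of an edge
    filter; the max over orientations makes the feature rotation-invariant."""
    k = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]  # oriented edge filter
    responses = []
    for _ in range(4):
        responses.append(conv2d(img, k))
        k = rot90(k)  # the bank is closed under 90-degree rotation
    h, w = len(responses[0]), len(responses[0][0])
    return [[max(abs(r[i][j]) for r in responses) for j in range(w)]
            for i in range(h)]
```

Because the bank is closed under rotation, an image and its 90°-rotated copy produce the same multiset of max-response values, which is exactly the invariance the abstract exploits.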
Objectives
We aimed to develop and validate radiomics nomograms for preoperative differentiation between benign and malignant parotid gland tumors (BPGT and MPGT, respectively), as well as between pleomorphic adenomas (PAs) and Warthin tumors (WTs).
Materials and Methods
This retrospective study enrolled 183 parotid gland tumors (68 PAs, 62 WTs, and 53 MPGTs) and divided them into training (n = 128) and testing (n = 55) cohorts. In total, 2553 radiomics features were extracted from fat-saturated T2-weighted images, apparent diffusion coefficient maps, and contrast-enhanced T1-weighted images to construct single-, double-, and multi-sequence combined radiomics models, respectively. The radiomics score (Rad-score) was calculated from the best radiomics model and combined with clinical features to develop the radiomics nomogram. The receiver operating characteristic curve and area under the curve (AUC) were used to assess these models, and their performances were compared using DeLong’s test. Calibration curves and decision curve analysis were used to assess the clinical usefulness of these models.
Results
The multi-sequence combined radiomics model exhibited better differentiation performance (BPGT vs. MPGT, AUC = 0.863; PA vs. MPGT, AUC = 0.929; WT vs. MPGT, AUC = 0.825; PA vs. WT, AUC = 0.927) than the single- and double-sequence radiomics models. The nomogram based on the multi-sequence combined radiomics model and clinical features attained an improved classification performance (BPGT vs. MPGT, AUC = 0.907; PA vs. MPGT, AUC = 0.961; WT vs. MPGT, AUC = 0.879; PA vs. WT, AUC = 0.967).
Conclusions
The radiomics nomogram yielded excellent diagnostic performance in differentiating BPGT from MPGT, PA from MPGT, and PA from WT.
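The pipeline above reduces to three reusable pieces: a Rad-score (a weighted sum of selected radiomics features), a logistic model that fuses the Rad-score with clinical variables (the nomogram), and AUC as the evaluation metric. A minimal sketch under those assumptions; all weights, feature values, and clinical variables below are illustrative, not the study's fitted coefficients:

```python
# Sketch of the Rad-score + nomogram + AUC workflow. Coefficients and data
# are made up for illustration; a real model would be fitted on the
# training cohort (e.g., LASSO for feature selection, logistic regression
# for the nomogram).

import math

def rad_score(features, weights, intercept=0.0):
    """Rad-score: linear combination of selected radiomics features."""
    return intercept + sum(w * f for w, f in zip(weights, features))

def nomogram_prob(rad, clinical, beta_rad, beta_clin, b0):
    """Logistic fusion of Rad-score and clinical features (the nomogram)."""
    z = b0 + beta_rad * rad + sum(b * c for b, c in zip(beta_clin, clinical))
    return 1.0 / (1.0 + math.exp(-z))

def auc(scores, labels):
    """Wilcoxon/Mann-Whitney AUC: probability that a random positive case
    receives a higher score than a random negative case (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The rank-based `auc` here is equivalent to the area under the empirical ROC curve, which is what DeLong's test compares between models.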
Purpose
Breast mass segmentation is a prerequisite step in the use of computer‐aided tools designed for breast cancer diagnosis and treatment planning. However, mass segmentation remains challenging due to the low contrast, irregular shapes, and fuzzy boundaries of masses. In this work, we propose a mammography mass segmentation model for improving segmentation performance.
Methods
We propose a mammography mass segmentation model called SAP‐cGAN, which is based on an improved conditional generative adversarial network (cGAN). We introduce a superpixel average pooling layer into the cGAN decoder, which uses superpixels as the pooling layout to improve boundary segmentation. In addition, we adopt a multiscale input strategy to enable the network to learn scale‐invariant features with increased robustness. The performance of the model is evaluated on two public datasets: CBIS‐DDSM and INbreast. Moreover, ablation analysis is conducted to further evaluate the individual contribution of each block to network performance.
Results
Dice and Jaccard scores of 93.37% and 87.57%, respectively, are obtained on the CBIS‐DDSM dataset. The Dice and Jaccard scores for the INbreast dataset are 91.54% and 84.40%, respectively. These results indicate that our proposed model outperforms current state‐of‐the‐art breast mass segmentation methods. The superpixel average pooling layer and multiscale input strategy have improved the Dice and Jaccard scores of the original cGAN by 7.8% and 12.79%, respectively.
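The two overlap metrics reported here have closed forms: Dice = 2|A∩B| / (|A| + |B|) and Jaccard = |A∩B| / |A∪B|, related by J = D / (2 − D). A minimal sketch for binary masks given as flat 0/1 sequences (the flat-list representation is an illustrative simplification of 2-D masks):

```python
# Sketch: Dice and Jaccard overlap scores for binary segmentation masks.
# Masks are flat sequences of 0/1 integers of equal length.

def dice_jaccard(pred, truth):
    """Return (Dice, Jaccard) for two binary masks.
    Dice = 2|A∩B|/(|A|+|B|); Jaccard = |A∩B|/|A∪B|.
    Both default to 1.0 when both masks are empty."""
    inter = sum(p & t for p, t in zip(pred, truth))
    a, b = sum(pred), sum(truth)
    union = a + b - inter
    dice = 2 * inter / (a + b) if (a + b) else 1.0
    jaccard = inter / union if union else 1.0
    return dice, jaccard
```

Because J = D / (2 − D), the two scores always move together; papers report both mainly for comparability with prior work.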
Conclusions
Adversarial learning with the addition of a superpixel average pooling layer and a multiscale input strategy encourages the Generator network to produce more realistic masks, improving breast mass segmentation performance through the minimax game between the Generator and Discriminator networks.
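The core of superpixel average pooling is replacing each location's feature with the mean over its superpixel, so pooled features follow object boundaries instead of a fixed grid. A minimal sketch of that operation on flattened feature and superpixel-label arrays (the flat representation and single feature channel are simplifying assumptions; the paper applies this inside a cGAN decoder over 2-D feature maps):

```python
# Sketch: superpixel average pooling. Each position's feature value is
# replaced by the mean of all values sharing its superpixel label, so the
# pooling layout adapts to image structure (e.g., mass boundaries).

def superpixel_average_pool(features, labels):
    """features: flat list of floats; labels: flat list of superpixel ids
    (same length). Returns features averaged within each superpixel."""
    sums, counts = {}, {}
    for f, lab in zip(features, labels):
        sums[lab] = sums.get(lab, 0.0) + f
        counts[lab] = counts.get(lab, 0) + 1
    # broadcast each superpixel's mean back to its member positions
    return [sums[lab] / counts[lab] for lab in labels]
```

Averaging within superpixels smooths responses inside a region while keeping a sharp change across region borders, which is why it helps with the fuzzy mass boundaries mentioned in the Purpose section.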
Multicenter sharing is an effective way to increase the data size for glioma research, but data inconsistency among institutions hinders its efficiency. This paper proposes histogram specification with automatic selection of reference frames for magnetic resonance images (HSASR) to alleviate this problem. The reference frame is selected automatically by an optimized grid search strategy with coarse and fine stages: the search range is first narrowed by a coarse search over intraglioma samples, and a suitable histogram reference frame is then selected by a fine search within the sample chosen by the coarse search. Validation experiments on two glioma-grading datasets, GliomaHPPH2018 and BraTS2017, demonstrate the high performance of the proposed method. On the mixed dataset, the average AUC, accuracy, sensitivity, and specificity are 0.9786, 94.13%, 94.64%, and 93.00%, respectively, about 15% higher on all indicators than without HSASR, and slightly better than the result obtained with a reference frame selected manually by radiologists. These results show that our method can effectively alleviate multicenter data inconsistency and improve the performance of the prediction model.
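Histogram specification itself is a quantile-matching operation: each source intensity is mapped to the reference intensity at the same rank position, so the transformed image's histogram follows the reference frame's. A minimal rank-based sketch on flat intensity lists (the flat representation is a simplification; the reference-frame selection by coarse/fine grid search described above is omitted here):

```python
# Sketch: rank-based histogram specification. Each source pixel is mapped
# to the reference value at the same quantile, so the output intensity
# distribution matches the reference frame's.

def histogram_specify(src, ref):
    """src, ref: flat lists of intensities. Returns src remapped so its
    value distribution follows ref (quantile matching)."""
    n, m = len(src), len(ref)
    ref_sorted = sorted(ref)
    # positions of source pixels ordered by intensity (rank order)
    order = sorted(range(n), key=lambda i: src[i])
    out = [0.0] * n
    for rank, i in enumerate(order):
        q = rank / (n - 1) if n > 1 else 0.0   # quantile of this pixel
        out[i] = ref_sorted[round(q * (m - 1))]  # reference value at q
    return out
```

Because the mapping is monotone in rank, relative intensity ordering within the image is preserved; only the marginal distribution is changed, which is what makes multicenter scans comparable before feature extraction.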