“…The classification accuracy of the benign class is high compared to the others because the malignant and normal classes have fewer images (210 and 133, respectively). In addition, augmentation techniques can be applied to increase the model accuracy [35]. This study applies the rotation operation to balance the malignant and normal class images.…”
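The rotation-based balancing described above can be sketched in plain Python. The pixel-grid representation and function names below are illustrative, not taken from the cited study, which may rotate by arbitrary angles rather than multiples of 90 degrees:

```python
def rotate90(image):
    """Rotate a 2D pixel grid 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

def augment_minority(images, target_count):
    """Pad a minority class with 90/180/270-degree rotations of its
    images until it reaches target_count (illustrative scheme)."""
    augmented = list(images)
    rotations = list(images)
    while len(augmented) < target_count and rotations:
        rotations = [rotate90(img) for img in rotations]
        for img in rotations:
            if len(augmented) >= target_count:
                break
            augmented.append(img)
    return augmented
```

In this sketch, the 210 malignant and 133 normal images would each be grown toward the benign class count before training.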
This paper proposes a fused methodology based on convolutional neural networks and a shallow classifier to diagnose breast cancer and differentiate malignant from benign lesions. First, various pre-trained convolutional neural networks are used to compute features from breast ultrasonography (BU) images. Then, the computed features are used to train different shallow classifiers: decision tree, naïve Bayes, support vector machine (SVM), k-nearest neighbors, ensemble, and neural network. After extensive training and testing, the DenseNet-201-, MobileNet-v2-, and ResNet-101-trained SVMs show high accuracy. Furthermore, the best BU features are merged to increase the classification accuracy, at the cost of higher computational time. Finally, the ReliefF feature dimension reduction algorithm is applied to address the computational complexity issue. An online publicly available dataset of 780 BU images is used to validate the proposed approach. The dataset was split into an 80:20 ratio for training and testing the models. After extensive testing and comprehensive analysis, the DenseNet-201- and MobileNet-v2-trained SVMs achieve accuracies of 90.39% and 94.57% on the original and augmented BU image datasets, respectively. This study concludes that the proposed framework is efficient and can readily be deployed to help reduce the workload of radiologists and doctors in diagnosing breast cancer in female patients.
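As a rough illustration of how ReliefF-style feature weighting supports dimension reduction, here is a minimal, deterministic single-neighbor Relief sketch for two classes. The full ReliefF used in the paper averages over k neighbors and handles multi-class data; the function names and toy data here are hypothetical:

```python
import math

def relief_weights(X, y):
    """Simplified Relief: for each sample, find its nearest hit
    (same class) and nearest miss (other class), then reward features
    that differ across classes and agree within a class."""
    n, d = len(X), len(X[0])
    w = [0.0] * d

    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    for i in range(n):
        hits = [j for j in range(n) if j != i and y[j] == y[i]]
        misses = [j for j in range(n) if y[j] != y[i]]
        h = min(hits, key=lambda j: dist(X[i], X[j]))   # nearest hit
        m = min(misses, key=lambda j: dist(X[i], X[j])) # nearest miss
        for f in range(d):
            w[f] += abs(X[i][f] - X[m][f]) - abs(X[i][f] - X[h][f])
    return [wf / n for wf in w]
```

Ranking the CNN-derived features by weight and keeping only the top-ranked ones is what shrinks the merged feature vector and reduces classification time.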
“…Then, to retrieve texture features from the US image data, the gray-level co-occurrence matrix (GLCM) was utilized [122,123,126]. The produced features were further condensed using Fisher's discriminant ratio [123] as well as class-separation criteria [127].…”
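A minimal sketch of a GLCM for a quantized grayscale grid follows; the offset, normalization, and the choice of contrast as the example texture feature are assumptions for illustration, not details from the cited work:

```python
def glcm(image, levels, dr=0, dc=1):
    """Count co-occurrences of gray-level pairs at offset (dr, dc):
    m[a][b] is how often level b appears at the offset from level a."""
    rows, cols = len(image), len(image[0])
    m = [[0] * levels for _ in range(levels)]
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

def contrast(m):
    """One classic GLCM texture feature: normalized intensity contrast."""
    total = sum(sum(row) for row in m)
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total
```

Texture descriptors such as contrast, energy, and homogeneity are all weighted sums over this same co-occurrence matrix.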
Section: Classification of Defects
“…Additionally, a 'bag of words' (BoW) codebook was developed, along with the scale-invariant feature transform (SIFT) and the frequency distribution of optical flow descriptors [128]. Finally, classification models such as the BPNN [122], the adaptive neuro-fuzzy inference system (ANFIS) [123], the SVM [127,128], and the Gaussian process [124] were applied to distinguish between healthy and diseased fetal hearts. The DGACNN was created by combining the DANomaly and GACNN (WGAN-GP plus CNN) architectures [125].…”
Congenital heart defects (CHD) are one of the serious problems that arise during pregnancy. Early CHD detection reduces death rates and morbidity but is hampered by the relatively low detection rates (i.e., 60%) of current screening technology. The detection rate could be increased by supplementing ultrasound imaging with fetal ultrasound image evaluation (FUSI) using deep learning techniques. As a result, non-invasive fetal ultrasound imaging has clear potential in the diagnosis of CHD and should be considered in addition to fetal echocardiography. This review paper highlights cutting-edge technologies for detecting CHD using ultrasound images, covering pre-processing, localization, segmentation, and classification. Existing pre-processing techniques include spatial domain filters, non-linear mean filters, transform domain filters, and denoising methods based on Convolutional Neural Networks (CNN); segmentation techniques include thresholding-based techniques, region-growing-based techniques, edge detection techniques, Artificial Neural Network (ANN)-based segmentation methods, and both non-deep-learning and deep learning approaches. The paper also suggests future research directions for improving current methodologies.
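Of the segmentation families listed, thresholding is the simplest; a sketch with a global-mean automatic threshold follows (the threshold rule and nested-list grid format are illustrative choices, not methods from the review):

```python
def mean_threshold(image):
    """Pick the global mean intensity as a simple automatic threshold."""
    flat = [px for row in image for px in row]
    return sum(flat) / len(flat)

def threshold_segment(image, t):
    """Label pixels above threshold t as foreground (1), else background (0)."""
    return [[1 if px > t else 0 for px in row] for row in image]
```

More elaborate schemes in the same family, such as Otsu's method, replace the mean with a threshold that maximizes between-class variance, but the segmentation step itself is the same per-pixel comparison.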
“…Subsequently, a median filter of size 5 × 5 is applied to improve the quality of the ROI and eliminate noise. Finally, the obtained images are resized to 256 × 256 to preserve generality, which offers superior results when using handcrafted feature techniques [44,45].…”
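The 5 × 5 median filtering step can be sketched as follows; handling image borders by clipping the window is an assumption here, as the cited work may pad differently:

```python
import statistics

def median_filter(image, size=5):
    """Replace each pixel with the median of its size x size
    neighborhood, clipping the window at the image borders."""
    k = size // 2
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            window = [image[rr][cc]
                      for rr in range(max(0, r - k), min(rows, r + k + 1))
                      for cc in range(max(0, c - k), min(cols, c + k + 1))]
            out[r][c] = statistics.median(window)
    return out
```

Because the median ignores outliers in the window, isolated speckle spikes are removed while edges are preserved better than with a mean filter.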