Breast cancer is the most widespread type of cancer among women, and diagnosing it in its early stages remains a significant problem worldwide. Accurate classification and localization of breast masses aid early detection, so over the last few years a variety of CAD systems have been developed to improve breast cancer classification and localization accuracy; most of them, however, rely entirely on handcrafted feature extraction techniques, which limits their efficiency. Deep learning approaches, by contrast, automatically learn a set of high-level features and consequently achieve remarkable results in object classification and detection tasks. In this paper, the pre-trained ResNet-50 architecture and the Class Activation Map (CAM) technique are employed for breast cancer classification and localization, respectively. The CAM technique exploits Convolutional Neural Network (CNN) classifiers with a Global Average Pooling (GAP) layer to localize objects without any supervised information about their location. According to the experimental results, the proposed approach achieved 96% Area Under the Receiver Operating Characteristic (ROC) curve in classification, with 99.8% sensitivity and 82.1% specificity. Furthermore, it localizes 93.67% of the masses at an average of 0.122 false positives per image on the Digital Database for Screening Mammography (DDSM) dataset. It is worth noting that the pre-trained CNN automatically learns the most discriminative features in the mammogram and thereby achieves superior results in breast cancer classification (normal or mass). Additionally, CAM exhibits the concrete relation between the mass located in the mammogram and the discriminative features learned by the CNN.
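As a concrete illustration of the CAM computation the abstract refers to, the sketch below shows the standard weighted-sum formulation: the activation map for a class is the sum of the last convolutional layer's feature maps, weighted by the fully connected weights that follow the GAP layer. This is a minimal NumPy sketch under assumed array layouts, not the authors' implementation.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Compute a Class Activation Map (CAM) for one class.

    feature_maps: (C, H, W) activations from the last conv layer
                  (assumed layout, extracted from the trained CNN).
    fc_weights:   (num_classes, C) weights of the FC layer that
                  follows Global Average Pooling.
    class_idx:    index of the target class (e.g. "mass").
    """
    w = fc_weights[class_idx]                     # (C,)
    # Weighted sum over channels: contract C axis -> (H, W) map.
    cam = np.tensordot(w, feature_maps, axes=1)
    # Normalize to [0, 1] for visualization / thresholding.
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

In practice the resulting low-resolution map is upsampled to the mammogram's size and thresholded to obtain the mass localization.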
most outstanding abilities of human vision. Building an automated system that accomplishes this objective is very challenging. The challenges mainly stem from large variations in the visual stimulus due to illumination conditions, blurring, and long-distance acquisition. As part of an ongoing project tackling the detection and handling of these three problems, this paper presents a review and a comparative analysis of state-of-the-art approaches for enhancing the contrast and equalizing the illumination of facial images. The comparative performance evaluation, based on appropriate metrics, is carried out among the available methods using two publicly available facial datasets comprising a total of about 500 images.
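One of the classical baselines that reviews of contrast enhancement typically include is global histogram equalization, which remaps intensities through the image's cumulative distribution function. The following is a minimal NumPy sketch of that baseline, offered only as an illustration of the family of methods compared; it is not tied to any specific approach evaluated in the paper.

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image.

    img: 2-D uint8 array. Returns an equalized uint8 array of the
    same shape.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # CDF value at the darkest intensity actually present.
    cdf_min = cdf[np.nonzero(hist)[0][0]]
    total = cdf[-1]
    if total == cdf_min:        # constant image: nothing to equalize
        return img.copy()
    # Build a lookup table that stretches the CDF over [0, 255].
    lut = np.round((cdf - cdf_min) / (total - cdf_min) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]
```

Adaptive variants (e.g. CLAHE) apply the same idea within local tiles, which generally handles uneven facial illumination better than this global form.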