Deep Learning-Based Breast Cancer Diagnosis at Ultrasound: Initial Application of a Weakly-Supervised Algorithm Without Image Annotation (Original Research)
Abstract: Conventional deep learning (DL) algorithms require full supervision in the form of an annotated region of interest (ROI), which is laborious and often biased. We aimed to develop a weakly-supervised DL algorithm that diagnoses breast cancer at ultrasound (US) without image annotation. Weakly-supervised DL algorithms were implemented with three networks (VGG16, ResNet34, and GoogLeNet) and trained using 1000 unannotated US images (500 benign and 500 malignant masses). Two sets of 200 images (100 benign and 100 malignant masses)…
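As a rough illustration of what "weakly supervised" means here, the sketch below trains a classifier from image-level benign/malignant labels only, with no ROI masks. The dataset path, backbone choice (ResNet34, one of the three networks named above), and hyperparameters are assumptions for illustration, not the paper's reported setup.

```python
# Minimal sketch of weakly-supervised training with image-level labels only.
# "us_train/benign" and "us_train/malignant" folders are a hypothetical layout.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # US images are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder needs only a per-image class label (benign/malignant), no ROI masks.
train_set = datasets.ImageFolder("us_train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # benign vs. malignant head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed hyperparameters
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```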
“…They applied Grad-CAM to locate the lesions and found that the main attention of their models focused on the lesion regions. In [12], a weakly-supervised deep learning algorithm was developed to diagnose breast cancer without requiring image annotation. The weakly-supervised algorithm was applied to VGG16, ResNet34, and GoogLeNet.…”
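Grad-CAM, mentioned in the snippet above, weights the feature maps of a convolutional layer by their average class-score gradients to produce a coarse localization heatmap. Below is a minimal sketch assuming a trained torchvision-style model such as the ResNet34 above; the cited works may use different layers and backbones.

```python
# Hedged sketch of Grad-CAM; `conv_layer` would typically be the last
# convolutional block, e.g. model.layer4 for a ResNet34 (an assumption here).
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class, conv_layer):
    store = {}
    def fwd_hook(module, inputs, output):
        store["act"] = output                                  # feature maps [1, C, h, w]
        output.register_hook(lambda g: store.update(grad=g))   # gradients of the maps
    handle = conv_layer.register_forward_hook(fwd_hook)
    model.eval()
    score = model(image.unsqueeze(0))[0, target_class]
    model.zero_grad()
    score.backward()
    handle.remove()
    # Weight each feature map by its spatially averaged gradient, ReLU the sum.
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze().detach()

# Usage: heatmap = grad_cam(model, img_tensor, target_class=1, conv_layer=model.layer4)
```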
Section: Lesion Classification From US Images
Citation classification: mentioning (confidence: 99%)
“…Several attempts have been made to explain how CNN models classify objects in natural images in general [8,9], and a few studies have investigated CNN decision explainability in breast US images in particular [10–14]. Although these efforts made serious attempts to examine the link between DCNN model decisions and regions of US images with the assistance of subject specialists, no effective visualization method has been fully investigated that links the image texture features extracted by a CNN to domain-known cancer characteristics; identifying such links is highly desirable for building trust in a model's decisions.…”
In recent years, deep convolutional neural networks (DCNNs) have shown promising performance in medical image analysis, including breast lesion classification in 2D ultrasound (US) images. Despite the outstanding performance of DCNN solutions, explaining their decisions remains an open problem, yet the explainability of DCNN models has become essential for healthcare systems to accept and trust them. This paper presents a novel framework for explaining DCNN classification decisions of lesions in ultrasound images using saliency maps that link the DCNN decisions to known cancer characteristics in the medical domain. The proposed framework consists of three main phases. First, DCNN models for classification in ultrasound images are built. Next, selected visualization methods are applied to obtain saliency maps on the input images of the DCNN models. In the final phase, the visualization outputs are mapped to domain-known cancer characteristics. The paper then demonstrates the use of the framework for breast lesion classification from ultrasound images. We first follow the transfer learning approach and build two DCNN models. We then analyze the visualization outputs of the trained DCNN models using the EGrad-CAM and Ablation-CAM methods. We map the DCNN model decisions of benign and malignant lesions, through the visualization outputs, to characteristics such as echogenicity, calcification, shape, and margin. A retrospective dataset of 1298 US images collected from different hospitals is used to evaluate the effectiveness of the framework. The test results show that these characteristics contribute differently to decisions for benign and malignant lesions. Our study provides a foundation for other researchers to explain DCNN classification decisions for other cancer types.
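To make the visualization phase concrete, below is a hedged sketch of Ablation-CAM, one of the two methods the abstract names: each feature map is weighted by the drop in class score when that map is zeroed out, rather than by gradients as in Grad-CAM. The `model` and `conv_layer` arguments are assumptions carried over from the sketches above; the paper's exact models and its EGrad-CAM variant are not reproduced here.

```python
# Hedged sketch of Ablation-CAM: gradient-free, one forward pass per channel,
# so it is slower than Grad-CAM but avoids gradient saturation artifacts.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ablation_cam(model, image, target_class, conv_layer):
    feats = []
    hook = conv_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    model.eval()
    base_score = model(image.unsqueeze(0))[0, target_class]
    hook.remove()
    fmap = feats[0]                          # feature maps [1, C, h, w]
    weights = torch.zeros(fmap.shape[1])
    for k in range(fmap.shape[1]):           # ablate one channel at a time
        def ablate(m, i, o, k=k):
            o = o.clone()
            o[:, k] = 0                      # zero out channel k, keep the rest
            return o                         # returned tensor replaces the output
        h = conv_layer.register_forward_hook(ablate)
        score = model(image.unsqueeze(0))[0, target_class]
        h.remove()
        weights[k] = (base_score - score) / (base_score.abs() + 1e-8)
    cam = F.relu((weights.view(1, -1, 1, 1) * fmap).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze()
```

In the framework's final phase, heatmaps like this one would be compared against regions exhibiting echogenicity, calcification, shape, and margin cues to see which characteristics drive benign versus malignant decisions.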