Abstract: The survival rate of breast cancer patients is closely related to the pathological stage of the cancer: the earlier the stage, the higher the survival rate. Breast ultrasound is a commonly used breast cancer screening and diagnosis method that is simple to operate, free of ionizing radiation, and capable of real-time imaging. However, ultrasound also suffers from high noise, strong artifacts, and low contrast between tissue structures, which hinder effective breast cancer screening. Therefore, we propose a…
“…Lesion detection aims to locate the bounding box that encloses the ROI containing a lesion. Recent approaches have employed generic CNN‐based object detectors (e.g., Fast R‐CNN, YOLO, and SSD) 63 and specific‐purpose architectures 64 . On the other hand, lesion segmentation outlines the lesion shape, where generic semantic segmentation models (e.g., SegNet, UNet, and DeepLab) 33 have been used, and specific‐purpose approaches have been developed 65 …”
Section: Discussion
confidence: 99%
“…Recent approaches have employed generic CNN-based object detectors (e.g., Fast R-CNN, YOLO, and SSD) 63 and specific-purpose architectures. 64 On the other hand, lesion segmentation outlines the lesion shape, where generic semantic segmentation models (e.g., SegNet, UNet, and DeepLab) 33 have been used, and specific-purpose approaches have been developed. 65 Lesion classification methods usually distinguish between pathology classes, which in practice translates into conducting a follow-up imaging study if the lesion is classified as benign or prescribing a biopsy if it is classified as malignant.…”
Purpose: Computer-aided diagnosis (CAD) systems for breast ultrasound (BUS) aim to increase the efficiency and effectiveness of breast screening, helping specialists detect and classify breast lesions. CAD system development requires a set of annotated images, including lesion segmentations, biopsy results to distinguish benign from malignant cases, and BI-RADS categories to indicate the likelihood of malignancy. In addition, standardized partitions of training, validation, and test sets promote reproducibility and fair comparisons between approaches. We therefore present a publicly available BUS dataset whose novelty is a substantial increase in the number of cases with the above annotations and the inclusion of standardized partitions to objectively assess and compare CAD systems.
Acquisition and Validation Methods: The BUS dataset comprises 1875 anonymized images from 1064 female patients, acquired with four ultrasound scanners during systematic studies at the National Institute of Cancer (Rio de Janeiro, Brazil). The dataset includes biopsy-proven tumors divided into 722 benign and 342 malignant cases. In addition, a senior ultrasonographer performed a BI-RADS assessment in categories 2 to 5 and manually outlined the breast lesions to obtain ground-truth segmentations. Furthermore, 5- and 10-fold cross-validation partitions are provided to standardize the training and test sets used to evaluate and reproduce CAD systems. Finally, to validate the utility of the dataset, an evaluation framework is implemented to assess the performance of deep neural networks for segmenting and classifying breast lesions.
Data Format and Usage Notes: The dataset is publicly available for academic and research purposes through an open-access repository under the name BUS-BRA: A Breast Ultrasound Dataset for Assessing CAD Systems. BUS images and reference segmentations are saved as Portable Network Graphics (PNG) files, and the dataset information is stored in separate comma-separated values (CSV) files.
Potential Applications: The BUS-BRA dataset can be used to develop and assess artificial intelligence-based lesion detection and segmentation methods, as well as the classification of BUS images into pathological classes and BI-RADS categories. Other potential applications include developing image processing methods such as despeckle filtering and contrast enhancement to improve image quality, and feature engineering for image description.
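Metadata stored in CSV files together with predefined cross-validation folds lends itself to a simple loading routine. Below is a minimal Python sketch of parsing such a metadata file and building one train/test split; the column names (`ID`, `Pathology`, `BIRADS`, `Fold5`) are illustrative assumptions, not the dataset's documented schema.

```python
import csv
import io

# Minimal sketch of reading BUS-BRA-style metadata and building a
# cross-validation split. Column names are illustrative assumptions,
# not the dataset's documented schema.
SAMPLE_CSV = """ID,Pathology,BIRADS,Fold5
bus_0001,benign,2,1
bus_0002,malignant,5,2
bus_0003,benign,3,1
bus_0004,malignant,4,3
"""

def load_metadata(text):
    """Parse CSV text into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def split_fold(rows, test_fold, fold_key="Fold5"):
    """Return (train, test) row lists for one cross-validation fold."""
    train = [r for r in rows if int(r[fold_key]) != test_fold]
    test = [r for r in rows if int(r[fold_key]) == test_fold]
    return train, test

rows = load_metadata(SAMPLE_CSV)
train, test = split_fold(rows, test_fold=1)
```

In a real pipeline the `ID` column would be used to locate the corresponding PNG image and segmentation mask on disk.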
“…In Zhang et al [15], a 3D ResNet was added after YOLOv5 tumor detection for false-positive reduction, trained in a two-stage manner. Unlike anchor-based detectors, Wang et al [16] employed the anchor-free network FCOS for tumor detection. In addition, an image enhancement method was designed to improve image contrast.…”
Section: ABUS Tumor Detection
confidence: 99%
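The detect-then-verify idea in these snippets can be sketched generically: a first-stage detector proposes scored candidate boxes, and a second stage rescores each candidate to suppress false positives. The Python sketch below uses toy stand-in functions in place of the papers' actual networks (YOLOv5/FCOS detector, 3D ResNet verifier); the thresholds and scoring rule are illustrative only.

```python
# Illustrative two-stage detect-then-verify pipeline: a first-stage
# detector proposes candidate boxes, and a second-stage classifier
# rejects false positives. Both stages are toy stand-ins, not the
# papers' networks.

def first_stage_detect(image):
    """Stand-in detector: return (box, score) candidate pairs."""
    return [((10, 10, 40, 40), 0.90),   # large, confident candidate
            ((50, 60, 70, 75), 0.55),   # small candidate (likely FP)
            ((5, 70, 20, 85), 0.30)]    # low-confidence candidate

def second_stage_score(image, box):
    """Stand-in verifier: rescore the crop inside `box`."""
    x0, y0, x1, y1 = box
    area = (x1 - x0) * (y1 - y0)
    return 1.0 if area >= 600 else 0.2  # toy rule for illustration

def detect(image, det_thr=0.5, ver_thr=0.5):
    kept = []
    for box, score in first_stage_detect(image):
        if score < det_thr:
            continue                    # drop low-confidence proposals
        if second_stage_score(image, box) >= ver_thr:
            kept.append(box)            # verifier confirms the candidate
    return kept

boxes = detect(image=None)
```

The design point is that the second stage only ever sees candidates the detector already trusts, so it can specialize in separating true tumors from hard negatives.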
The automated breast ultrasound (ABUS) imaging system is a practical technique for automatically scanning the whole breast, and automatic tumor detection plays a significant role in the clinic. However, training deep convolutional neural networks (CNNs) for tumor detection requires a large quantity of labeled data, and manually annotating tumor positions in ABUS images is time-consuming and expensive. In this paper, a novel semi-supervised learning EfficientDet (SSL-E) model is proposed for ABUS tumor detection. The SSL-E model addresses the tumor detection problem under high similarity and severe imbalance between tumors and backgrounds. Considering the image contrast variation and tumor scale variation in ABUS images, color and geometric transformations are employed for data augmentation. Consistency between an image and its augmented version is then enforced, improving the robustness of the detector. To address the severe imbalance between tumors and backgrounds, a novel copy-paste synthesis strategy is designed, which generates more tumor samples and enhances tumor diversity. The method is tested on 68 tumor volumes and 68 normal volumes, comprising 43,248 slices (1683 tumor slices and 41,565 normal slices), and obtains a promising sensitivity of 90.2% at 0.15 false positives per image (FPs/I).
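The copy-paste synthesis idea can be illustrated with a small NumPy sketch: the masked tumor pixels of one slice are pasted into a normal slice to create an extra positive sample. The hard paste and fixed placement below are simplifications for illustration, not the paper's exact strategy.

```python
import numpy as np

# Minimal sketch of copy-paste synthesis: copy the masked tumor
# region of one slice into a normal slice to synthesize an extra
# positive training sample. Blending and placement are simplified.

def copy_paste(tumor_img, tumor_mask, normal_img, top, left):
    """Paste the masked tumor pixels into normal_img at (top, left)."""
    out = normal_img.copy()
    h, w = tumor_mask.shape
    region = out[top:top + h, left:left + w]
    # Keep background pixels where mask == 0, tumor pixels where mask == 1.
    region[tumor_mask > 0] = tumor_img[tumor_mask > 0]
    return out

tumor = np.full((4, 4), 200, dtype=np.uint8)   # toy tumor patch
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1                             # 2x2 tumor pixels
normal = np.zeros((16, 16), dtype=np.uint8)    # toy normal slice
synth = copy_paste(tumor, mask, normal, top=6, left=6)
```

In practice the paste location would be randomized and the new bounding box or mask recorded so the synthesized sample can supervise the detector.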
“…Frequency-domain despeckling is based on the wavelet transform, which decomposes the signal into different frequency components (wavelets), essentially converting the speckle into additive noise so that despeckling can be performed in the frequency domain. Yu Wang et al [3] presented an image enhancement algorithm to improve visual quality, using contrast-limited adaptive histogram equalization (CLAHE) to enhance breast ultrasound images (BUSI), with anisotropic diffusion applied to any regions left unenhanced. Classification is then performed with multiple algorithms, such as U-Net with an accuracy of 71.2% and the recurrent residual convolutional neural network based on U-Net (R2U-Net) with an accuracy of 71.1%.…”
In this paper, we investigate computer-aided diagnosis (CAD) of breast cancer on breast ultrasound images (BUSI), predicting the stage of malignant tumors through effective speckle filtering, contrast enhancement, feature extraction, and feature fusion. The model is trained on the BUSI dataset using a convolutional neural network (CNN) with the NPR tool.
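The CLAHE-style contrast enhancement mentioned above can be approximated with a short NumPy sketch: per-tile histogram clipping followed by equalization. Real CLAHE (e.g., OpenCV's `createCLAHE`) additionally interpolates bilinearly between neighboring tile mappings to avoid block artifacts; that refinement is omitted here for brevity.

```python
import numpy as np

# Simplified tile-wise histogram equalization with clipping, in the
# spirit of CLAHE. The bilinear interpolation between tile mappings
# used by real CLAHE implementations is omitted.

def clipped_equalize(tile, clip_limit=40):
    """Equalize one tile after clipping its histogram."""
    hist, _ = np.histogram(tile, bins=256, range=(0, 256))
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess // 256  # redistribute excess
    cdf = hist.cumsum()
    cdf = cdf * 255 // max(cdf[-1], 1)                   # map CDF to [0, 255]
    return cdf[tile].astype(np.uint8)

def tile_equalize(img, tile=8, clip_limit=40):
    """Apply clipped equalization independently to each tile."""
    out = np.empty_like(img)
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            out[y:y + tile, x:x + tile] = clipped_equalize(
                img[y:y + tile, x:x + tile], clip_limit)
    return out

rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 120, size=(32, 32), dtype=np.uint8)
enhanced = tile_equalize(low_contrast)
```

The clip limit bounds how strongly any single gray level can dominate a tile's mapping, which is what keeps CLAHE from amplifying speckle noise as aggressively as plain histogram equalization would.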