Purpose: Breast cancer is the most common cancer among women worldwide. Medical ultrasound is one of the most widely used imaging methods for breast tumors. Automatic breast ultrasound (BUS) image segmentation can measure tumor size objectively, but various ultrasound artifacts hinder segmentation. We proposed an attention-supervised full-resolution residual network (ASFRRN) to segment tumors in BUS images.

Methods: In the proposed method, Global Attention Upsample (GAU) and deep supervision were introduced into a full-resolution residual network (FRRN), where GAU learns to merge features at different levels with attention for deep supervision. Two datasets were employed for evaluation. Dataset A consisted of 163 BUS images with tumors (53 malignant and 110 benign) from the UDIAT Diagnostic Centre, and Dataset B included 980 BUS images with tumors (595 malignant and 385 benign) from the Sun Yat-sen University Cancer Center. The tumors in both datasets were segmented manually by medical doctors. The Dice coefficient (Dice), Jaccard similarity coefficient (JSC), and F1 score were calculated for evaluation.

Results: For Dataset A, the proposed method achieved higher Dice (84.3 ± 10.0%), JSC (75.2 ± 10.7%), and F1 score (84.3 ± 10.0%) than the previous best method, FRRN. For Dataset B, it achieved higher Dice (90.7 ± 13.0%), JSC (83.7 ± 14.8%), and F1 score (90.7 ± 13.0%) than the previous best methods, DeepLabv3 and the dual attention network (DANet). For Dataset A + B, it achieved higher Dice (90.5 ± 13.1%), JSC (83.3 ± 14.8%), and F1 score (90.5 ± 13.1%) than the previous best method, DeepLabv3. Additionally, ASFRRN has only 10.6 M parameters, fewer than DANet (71.4 M) and DeepLabv3 (41.3 M).

Conclusions: We proposed ASFRRN, which combines an FRRN with an attention mechanism and deep supervision to segment tumors in BUS images. It achieved high segmentation accuracy with fewer parameters.
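The fusion step described in Methods follows the general GAU pattern, in which globally pooled high-level features produce a channel-attention vector that reweights low-level features before the two streams are merged. The PyTorch sketch below illustrates one common formulation of such a block; the channel sizes and exact wiring are illustrative assumptions, not the configuration reported for ASFRRN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttentionUpsample(nn.Module):
    """Sketch of a GAU-style fusion block (assumed wiring, not the paper's exact design)."""
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        # Refine low-level (high-resolution) features with a 3x3 conv
        self.low_conv = nn.Sequential(
            nn.Conv2d(low_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Channel attention derived from globally pooled high-level features
        self.att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(high_ch, out_ch, kernel_size=1, bias=False),
            nn.Sigmoid(),
        )
        self.high_conv = nn.Conv2d(high_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, low, high):
        low = self.low_conv(low)
        weight = self.att(high)                      # B x out_ch x 1 x 1 gate
        high = F.interpolate(self.high_conv(high), size=low.shape[2:],
                             mode="bilinear", align_corners=False)
        return low * weight + high                   # attended feature fusion
```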
Purpose: Breast cancer is the most commonly occurring cancer worldwide. Ultrasound reflectivity imaging produces breast ultrasound (BUS) images that can be used to classify tumors as benign or malignant. However, this classification is subjective and depends on the experience and skill of operators and doctors. Automatic classification can assist doctors and improve objectivity, but current convolutional neural networks (CNNs) are not good at learning global features, while vision transformers (ViTs) are not good at extracting local features. In this study, we proposed a visual geometry group attention ViT (VGGA-ViT) network to overcome both disadvantages.

Methods: In the proposed method, a CNN module extracts local features, and a ViT module learns the global relationships among different regions and enhances the relevant local features. The CNN module, named the VGGA module, is composed of a VGG backbone, a feature-extraction fully connected layer, and a squeeze-and-excitation block. Both the VGG backbone and the ViT module were pretrained on the ImageNet dataset and retrained on BUS samples in this study. Two BUS datasets were employed for validation.

Results: For Dataset A, cross-validation showed that the proposed VGGA-ViT network achieved high accuracy (88.71 ± 1.55%), recall (90.73 ± 1.57%), specificity (85.58 ± 3.35%), precision (90.77 ± 1.98%), F1 score (90.73 ± 1.24%), and Matthews correlation coefficient (MCC) (76.34 ± 3.29%), all better than those of the previous networks compared in this study. Dataset B was used as a separate test set, on which VGGA-ViT achieved the highest accuracy (81.72 ± 2.99%), recall (64.45 ± 2.96%), specificity (90.28 ± 3.51%), precision (77.08 ± 7.21%), F1 score (70.11 ± 4.25%), and MCC (57.64 ± 6.88%).

Conclusions: In this study, we proposed VGGA-ViT for BUS classification; it learns both local and global features and achieved higher accuracy than the compared previous methods.
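To make the hybrid design concrete, the sketch below pairs a pretrained VGG feature extractor and a squeeze-and-excitation gate with a transformer encoder that attends over the resulting spatial tokens. This is only one plausible reading of the described architecture: the token dimension, encoder depth, and pooling are hypothetical choices, and the paper's feature-extraction fully connected layer is omitted for brevity.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global pooling -> bottleneck MLP -> channel gate."""
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))          # squeeze over spatial dims
        return x * w[:, :, None, None]           # excite: reweight channels

class HybridCNNViT(nn.Module):
    """Hypothetical CNN + ViT hybrid in the spirit of VGGA-ViT (assumed wiring)."""
    def __init__(self, num_classes=2, dim=512, depth=4, heads=8):
        super().__init__()
        self.backbone = vgg16(weights="IMAGENET1K_V1").features  # local features
        self.se = SEBlock(dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        f = self.se(self.backbone(x))            # B x 512 x 7 x 7 for 224x224 input
        tokens = f.flatten(2).transpose(1, 2)    # B x 49 x 512 spatial tokens
        tokens = self.encoder(tokens)            # global attention across regions
        return self.head(tokens.mean(dim=1))     # pooled benign/malignant logits
```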
Cell segmentation and counting play an important role in medicine: the diagnosis of many diseases relies heavily on the kinds and numbers of cells in the blood. Convolutional neural networks achieve encouraging results on image segmentation, but this data-driven approach requires a large number of annotations, and manual annotation is time-consuming, expensive, and prone to human error. In this paper, we present a novel framework that segments and counts cells without many manually annotated cell images. Before training, we generated labels for single-kind cell images using traditional algorithms; these images and their generated labels formed the training sets. Different training sets, each composed of a different kind of cell image, were presented to the segmentation model to update its parameters. Finally, the pretrained U-Net model was transferred to segment mixed cell images using a small dataset of manually labeled mixed cell images. To evaluate the effectiveness of the proposed method, we designed and trained a new automatic cell segmentation and counting framework. The test results and analyses show that the segmentation and counting performance of the framework trained with the proposed method equals that of a model trained on large amounts of annotated mixed cell images.
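The abstract does not name the traditional algorithms used to generate the labels, so the sketch below stands in with a common classical pipeline (Otsu thresholding plus small-object removal) for pseudo-labeling single-kind cell images, and counts cells as connected components. The function names and the size threshold are hypothetical choices, not the paper's exact procedure.

```python
import numpy as np
from skimage import filters, measure, morphology

def pseudo_label(gray):
    """Generate a binary pseudo-label for a single-kind cell image using
    classical operations (assumed stand-in for the unspecified algorithm)."""
    mask = gray > filters.threshold_otsu(gray)          # global Otsu threshold
    mask = morphology.remove_small_objects(mask, min_size=50)  # drop noise specks
    return mask.astype(np.uint8)

def count_cells(mask):
    """Count cells as connected components of the binary mask."""
    return int(measure.label(mask).max())

# Sketch of the training schedule (model and data loaders are assumed helpers):
# 1. Pretrain a U-Net on (image, pseudo_label(image)) pairs, one cell kind at a time.
# 2. Fine-tune the pretrained U-Net on the small manually labeled mixed-cell set.
```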