Retinal vessel segmentation is crucial for diagnosing certain ophthalmic and cardiovascular diseases. Although U-shaped networks are widely used for retinal vessel segmentation, most improved variants have limited feature extraction capability and fuse different network layers by element-wise addition or channel concatenation, producing redundant information that leads to inaccurate vessel localization and blurred vessel edges. The asymmetry of small vessels in fundus images further increases the difficulty of segmentation. To overcome these challenges, we propose a multi-scale subtraction network with residual coordinate attention (MS-CANet) to segment vessels in retinal fundus images. Our approach incorporates a residual coordinate attention module in the encoding phase, which captures long-range spatial dependencies while preserving precise position information. To obtain rich multi-scale information, we also place multi-scale subtraction units at different receptive field levels. Moreover, we introduce a parallel channel attention module that enhances the contrast between vessels and background, improving the recovery of marginal vessels during the decoding phase. We validate the proposed model on three benchmark datasets, DRIVE, CHASE, and STARE. The results demonstrate that our method outperforms most advanced methods under different evaluation metrics.
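The subtraction units mentioned in the abstract compute explicit differences between feature maps from different scales rather than adding or concatenating them, so that complementary information is emphasized and redundant information is suppressed. The abstract does not spell out the exact operation; a common formulation in multi-scale subtraction networks, assumed here, is the element-wise absolute difference between two feature maps (in a full network this would typically be followed by a convolution and normalization). A minimal pure-Python sketch on 2-D feature maps:

```python
def subtraction_unit(fa, fb):
    """Element-wise absolute difference between two same-sized 2-D feature maps.

    Simplified sketch: a full subtraction unit would usually follow this
    difference with a convolution; the core idea is that regions where the
    two maps agree (redundant information) are driven toward zero, while
    regions where they disagree (complementary information) respond strongly.
    """
    assert len(fa) == len(fb) and len(fa[0]) == len(fb[0]), "maps must match in size"
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(fa, fb)]

# Toy 2x2 feature maps from two adjacent encoder levels.
fa = [[1.0, 3.0], [5.0, 2.0]]
fb = [[2.0, 1.0], [4.0, 6.0]]
print(subtraction_unit(fa, fb))  # [[1.0, 2.0], [1.0, 4.0]]
```

This contrasts with element-wise addition, which would amplify shared responses and carry the redundancy the abstract criticizes into the decoder.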
Accurate segmentation of the skin lesion region is crucial for diagnosing and screening skin diseases. However, skin lesion segmentation is challenging due to the indistinguishable boundaries of the lesion region, irregular shapes, and hair interference. To address these issues, we propose a Multi-scale Depthwise Separable Convolutional Neural Network for skin lesion segmentation named MDSC-Net. Specifically, a novel Multi-scale Depthwise Separable Residual Convolution Module is employed in the skip connections, conveying more detailed features to the decoder. To compensate for the loss of spatial location information during down-sampling, we propose a novel Spatial Adaption Module. Furthermore, we propose a Multi-scale Decoding Fusion Module in the decoder to capture contextual information. We have performed extensive experiments to verify the effectiveness and robustness of the proposed network on three public benchmark skin lesion segmentation datasets and one public benchmark polyp segmentation dataset: ISIC-2017, ISIC-2018, PH2, and Kvasir-SEG. Experimental results consistently demonstrate that the proposed MDSC-Net achieves superior segmentation performance across five commonly used evaluation criteria. The proposed network achieves high-performance skin lesion segmentation and can provide important clues to help doctors diagnose and treat skin cancer early.
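Depthwise separable convolution, the building block named in MDSC-Net, factors a standard convolution into a per-channel spatial (depthwise) convolution followed by a 1x1 (pointwise) convolution that mixes channels. Its appeal is a large reduction in parameters and compute; the counting below is a general property of the factorization, not a claim about MDSC-Net's exact layer sizes, and the example channel counts are illustrative assumptions:

```python
def standard_conv_params(c_in, c_out, k):
    # A standard conv learns one k x k kernel per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise step: one k x k kernel per input channel.
    # Pointwise step: a 1 x 1 convolution mapping c_in channels to c_out.
    return c_in * k * k + c_in * c_out

# Illustrative layer: 64 input channels, 128 output channels, 3x3 kernels.
c_in, c_out, k = 64, 128, 3
print(standard_conv_params(c_in, c_out, k))        # 73728
print(depthwise_separable_params(c_in, c_out, k))  # 8768
```

For this layer the factorized form uses roughly 8x fewer parameters (73728 vs 8768), which is why depthwise separable blocks are a common choice when stacking multi-scale branches without inflating model size.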