2019
DOI: 10.1007/978-3-030-32239-7_80

CS-Net: Channel and Spatial Attention Network for Curvilinear Structure Segmentation

Abstract: The detection of curvilinear structures in medical images, e.g., blood vessels or nerve fibers, is important in aiding management of many diseases. In this work, we propose a general unifying curvilinear structure segmentation network that works on different medical imaging modalities: optical coherence tomography angiography (OCT-A), color fundus image, and corneal confocal microscopy (CCM). Instead of the U-Net based convolutional neural network, we propose a novel network (CS-Net) which includes a self-atte…
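
The abstract is truncated mid-sentence at the description of the self-attention design. For intuition, the sketch below shows, in PyTorch, how channel and spatial self-attention blocks in the spirit of the CS-Net/DANet family can be applied to an encoder feature map. This is an illustrative reconstruction under our own assumptions, not the authors' released implementation; all class and variable names are ours.

    # Illustrative PyTorch sketch of dual (spatial + channel) self-attention
    # in the style of DANet/CS-Net. Not the authors' code; names are ours.
    import torch
    import torch.nn as nn


    class SpatialAttention(nn.Module):
        """Position attention: every pixel attends to every other pixel."""
        def __init__(self, channels):
            super().__init__()
            self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
            self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
            self.value = nn.Conv2d(channels, channels, kernel_size=1)
            self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

        def forward(self, x):
            b, c, h, w = x.shape
            q = self.query(x).flatten(2).transpose(1, 2)   # B x HW x C'
            k = self.key(x).flatten(2)                     # B x C' x HW
            attn = torch.softmax(q @ k, dim=-1)            # B x HW x HW
            v = self.value(x).flatten(2)                   # B x C x HW
            out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
            return self.gamma * out + x                    # residual connection


    class ChannelAttention(nn.Module):
        """Channel attention: channel-to-channel affinity, no projections."""
        def __init__(self):
            super().__init__()
            self.gamma = nn.Parameter(torch.zeros(1))

        def forward(self, x):
            b, c, h, w = x.shape
            q = x.flatten(2)                                      # B x C x HW
            attn = torch.softmax(q @ q.transpose(1, 2), dim=-1)   # B x C x C
            out = (attn @ q).view(b, c, h, w)
            return self.gamma * out + x

In a CS-Net-style encoder-decoder, both modules would typically act on the deepest encoder feature map, with their outputs fused (for example, summed) before decoding.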

Cited by 162 publications (137 citation statements). References 18 publications.
“…As we can see from Table 1, the Self-attention model [8,10] drops two percentage points relative to the multi-kernel context encoder network [13]. Very recently, both DANet [17] and CS-Net [18] have exploited two types of self-attention models acting on top of a feature encoder path from different pretrained network architectures, with better performance than Attention UNet [9] and CE-Net [13]: for example, 79.97% vs. 78.90% in Dice, 30.94 pixels vs. 32.90 pixels in HD, 74.50% vs. 73.03% in Precision, and 90.38% vs. 90.03% in Recall. Moreover, these good segmentation results also illustrate that diverse self-attention strategies can further boost the feature representation capability of a model for accurate tumor localization and segmentation.…”
Section: Quantitative Analysis
confidence: 97%
“…(2) Context-Based Model: R2U-Net [11] utilizes recurrent and residual networks; CE-Net [13] embeds a multi-kernel context encoding mechanism similar to the Inception architecture; Self-attention [8,10] exploits spatial context information. (3) Attention-Based Model: SENet [15] uses a channel attention mechanism; both DANet [17] and CS-Net [18] place self-attention schemes on top of the encoder stage, but with different network architectures. (4) Fused Model: Attention UNet [9] and Self-attention [8,10].…”
Section: Data and Implementation Details
confidence: 99%
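
The taxonomy above describes CE-Net as embedding a multi-kernel context encoding mechanism similar to the Inception architecture. As a rough illustration of that idea, the sketch below fuses parallel convolution branches with different receptive fields into one context-aware feature map; this is not CE-Net's exact dense atrous convolution block, and the kernel and dilation choices are assumptions.

    # Rough sketch of a multi-kernel context block in the Inception spirit
    # that the excerpt compares CE-Net's context encoder to. The dilation
    # rates here are illustrative, not CE-Net's exact configuration.
    import torch
    import torch.nn as nn


    class MultiKernelContextBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            # Parallel branches with growing receptive fields.
            self.branches = nn.ModuleList([
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
                for d in (1, 3, 5)
            ])
            self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

        def forward(self, x):
            feats = [torch.relu(b(x)) for b in self.branches]
            return x + self.fuse(torch.cat(feats, dim=1))  # residual fusion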
“…Deep learning methods have demonstrated superior performance and better prospects for many medical image segmentation problems. In this work, we employed one of the latest curvilinear structure segmentation networks, CS-Net [55], for fully automatic segmentation of corneal nerves, with and without application of image enhancement methods. We trained CS-Net on a randomly sampled 80% of the images from CCM-A, leaving out the remaining 20% of this dataset as a testing set.…”
Section: Image Enhancement-Guided Fiber Segmentation
confidence: 99%
“…Guo et al. [20] used the residual block in the channel attention mechanism and proposed a channel attention residual block that improves the recognition ability of the network. Mou et al. [21] used a self-attention mechanism in the encoder and decoder to combine local features with their global correlations. However, these attention mechanisms do not take into account the impact of multi-scale image features on the attention gate, and the channel dependence between different scales is ignored.…”
Section: Introduction
confidence: 99%
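
The excerpt above paraphrases Guo et al.'s channel attention residual block. A minimal sketch follows, assuming a squeeze-and-excitation-style channel gate with a reduction ratio of 16 inside a residual branch; the layer ordering and hyperparameters are our assumptions, and the original paper's configuration may differ.

    # Hedged sketch of a channel attention residual block of the kind the
    # excerpt describes: an SE-style squeeze-and-excitation gate applied to
    # a residual branch. Reduction ratio and layer order are assumptions.
    import torch.nn as nn


    class ChannelAttentionResidualBlock(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            # Squeeze: global pooling; excite: per-channel gate in (0, 1).
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            y = self.body(x)
            return x + y * self.gate(y)  # gated residual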
“…Inspired by the successful application of the channel attention mechanism in the field of medical image segmentation [19, 20, 21], we introduced an aggregation channel attention network to improve the performance of optic disc segmentation in fundus images. First, to alleviate vanishing gradients and reduce the number of parameters [22], we use DenseNet blocks to extract high-level features.…”
Section: Introduction
confidence: 99%
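
The last excerpt credits DenseNet blocks with alleviating vanishing gradients and reducing the parameter count through feature reuse. A minimal dense block sketch, with an assumed growth rate and depth, shows the concatenation pattern that creates those short gradient paths.

    # Minimal dense block sketch illustrating the feature reuse the last
    # excerpt appeals to: each layer sees the concatenation of all earlier
    # feature maps, shortening gradient paths. Growth rate is assumed.
    import torch
    import torch.nn as nn


    class DenseBlock(nn.Module):
        def __init__(self, in_channels, growth_rate=12, num_layers=4):
            super().__init__()
            self.layers = nn.ModuleList()
            for i in range(num_layers):
                self.layers.append(nn.Sequential(
                    nn.BatchNorm2d(in_channels + i * growth_rate),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                              kernel_size=3, padding=1),
                ))

        def forward(self, x):
            features = [x]
            for layer in self.layers:
                features.append(layer(torch.cat(features, dim=1)))
            return torch.cat(features, dim=1)  # in_channels + L * growth_rate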