2018
DOI: 10.1007/978-3-030-00928-1_48
Concurrent Spatial and Channel ‘Squeeze & Excitation’ in Fully Convolutional Networks

Cited by 674 publications (391 citation statements)
References 8 publications
“…The SE module first squeezes the feature map by global average pooling and then passes the squeezed vector through a gating module to obtain a representation of channel-wise dependencies, which is used to re-calibrate the feature map and emphasize useful channels. The work in [10] refers to the SE module of [9] as Spatial Squeeze and Channel Excitation (cSE) and proposes a complementary variant called Channel Squeeze and Spatial Excitation (sSE). The sSE module squeezes the feature map along the channel axis, preserving more spatial information, and is therefore better suited to the image segmentation task.…”
Section: Squeeze and Excitation
confidence: 99%
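The cSE and sSE modules quoted above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' reference implementation; the reduction ratio and layer sizes are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn


class ChannelSE(nn.Module):
    """Spatial squeeze and channel excitation (cSE): global average
    pooling squeezes each channel to a scalar, and a two-layer gating
    MLP with a sigmoid produces per-channel re-calibration weights."""

    def __init__(self, channels, reduction=2):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):
        b, c, _, _ = x.shape
        z = x.mean(dim=(2, 3))                            # squeeze: (B, C)
        s = torch.sigmoid(self.fc2(torch.relu(self.fc1(z))))
        return x * s.view(b, c, 1, 1)                     # excite channels


class SpatialSE(nn.Module):
    """Channel squeeze and spatial excitation (sSE): a 1x1 convolution
    collapses the channel axis into a single spatial attention map,
    which re-weights every pixel of the feature map."""

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        return x * torch.sigmoid(self.conv(x))            # excite pixels
```

The cSE path discards spatial layout in its squeeze, while sSE keeps the full H×W grid, which is why the quoted excerpt argues sSE suits dense segmentation better.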
“…Attention is widely used to re-weight features with high-level information in deep networks. Roy et al. applied a concurrent attention module to semantic segmentation [14], where features are squeezed along the spatial and channel axes and then applied to the original feature maps to provide spatial and contextual information. EncNet [15] introduces a context encoding module at the end of the network to encode global contextual information and re-weight the extracted features into discriminative representations.…”
Section: Related Work
confidence: 99%
“…The spatial attention block takes the fused features and the output of the spatial branch as input; it helps refine pixel localizations and object boundaries. Similar to the attention module in [14], the features from the spatial branch pass through a 3×3 convolution with batch normalization and a sigmoid non-linearity, and are then multiplied by the fused features. Fig.…”
Section: Feature Cross Attention Module
confidence: 99%
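The spatial attention step described above (3×3 convolution, batch normalization, sigmoid, then element-wise multiplication with the fused features) can be sketched as follows. Channel counts and the module name are illustrative assumptions, not taken from the cited work.

```python
import torch
import torch.nn as nn


class SpatialAttentionBlock(nn.Module):
    """Sketch of the described block: the spatial-branch features pass
    through a 3x3 convolution with batch normalization and a sigmoid,
    and the resulting attention map re-weights the fused features."""

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, fused, spatial):
        attn = torch.sigmoid(self.bn(self.conv(spatial)))  # values in (0, 1)
        return fused * attn                                # gate fused features
```

Because the sigmoid bounds the attention map in (0, 1), the block can only attenuate the fused features, sharpening boundaries by suppressing poorly localized responses.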
“…For comparison, the U-Net [4], U-Net++ [11], and SE U-Net [12] were trained and fine-tuned using the same loss function and the proposed weakly supervised training strategy. Table 2 lists the segmentation accuracy, measured by the mean Dice score, for all testing groups.…”
Section: Glomerular Segmentation
confidence: 99%