Recalibrating Fully Convolutional Networks With Spatial and Channel “Squeeze and Excitation” Blocks
2019 · DOI: 10.1109/tmi.2018.2867261

Cited by 374 publications (277 citation statements)
References 18 publications
“…This can be alternatively interpreted as a spatially adapted attention mechanism for each feature map; unlike [15], where a single attention map is generated. Related to our work is the block proposed by Roy et al [17]. But, channel and spatial recalibration are considered separately, while we learn them jointly.…”
Section: A. Motivation and Contributions (mentioning)
confidence: 99%
“…However, this approach scales the same regions across all feature maps. Roy et al [17] learn attention at the channel and spatial levels, but a single spatial attention map is inferred for all feature maps.…”
Section: Introduction (mentioning)
confidence: 99%
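The mechanism these two excerpts discuss — recalibrating a feature map along the channel axis (one gate per channel from a global squeeze) and along the spatial axis (one gate per location from a 1×1 projection) — can be sketched in NumPy. This is an illustrative sketch only: the function and weight names are invented here, and element-wise max is just one of the aggregation strategies the cited paper considers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_se(feat, w1, w2):
    """Channel squeeze-and-excitation: gate each channel of feat (C, H, W)."""
    z = feat.mean(axis=(1, 2))                 # squeeze: global average pool -> (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))  # excitation: bottleneck MLP + sigmoid -> (C,)
    return feat * s[:, None, None]             # rescale channels

def spatial_se(feat, w):
    """Spatial squeeze-and-excitation: gate each location with a 1x1 projection.

    w: (C,) weights of a 1x1 conv producing a single attention map.
    """
    q = sigmoid(np.tensordot(w, feat, axes=([0], [0])))  # (H, W) gate
    return feat * q[None, :, :]                          # rescale locations

def sc_se(feat, w1, w2, w):
    """Concurrent spatial and channel SE; max aggregation (one illustrative variant)."""
    return np.maximum(channel_se(feat, w1, w2), spatial_se(feat, w))
```

Because each sigmoid gate lies in (0, 1), both branches can only attenuate activations; the combined block lets either the channel view or the spatial view keep a given activation alive.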
“…To our knowledge, attention modules in deep learning either compute the entire self-attention matrix on a low dimensional input or use a local attention mechanism that can be seen as a strong approximation of the non-local self-attention formulation. Specifically in the medical imaging context, previous works [12,15,11] implicitly used a simplification of (2) with a diagonal self-attention matrix. This solution can be applied to large images since it scales linearly with the number of voxels but does not help to capture contextual information.…”
Section: Methods (mentioning)
confidence: 99%
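The contrast this excerpt draws — a full self-attention matrix that scales quadratically with the number of voxels versus a diagonal simplification (a per-voxel gate) that scales linearly but captures no context — can be sketched as follows. Equation (2) referenced above belongs to the citing paper and is not reproduced here; the function names and weight shapes below are illustrative assumptions.

```python
import numpy as np

def full_self_attention(x, wq, wk, wv):
    """Non-local self-attention over x: (N, d) flattened voxels.

    Builds the full (N, N) attention matrix -> O(N^2) cost.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    logits = q @ k.T / np.sqrt(k.shape[1])
    a = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    a /= a.sum(axis=1, keepdims=True)
    return a @ v

def diagonal_self_attention(x, w):
    """Diagonal simplification: each voxel attends only to itself.

    A per-voxel sigmoid gate -> O(N) cost, but no contextual mixing.
    """
    g = 1.0 / (1.0 + np.exp(-(x @ w)))  # (N, 1) gate in (0, 1)
    return x * g
```

The diagonal form is what makes such attention applicable to large 3-D volumes: the gate never mixes information across voxels, which is exactly the limitation the excerpt points out.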
“…The U-Net has, since its creation, been subject to modifications and improvements, with the addition of more complex convolutional blocks [74] or squeeze-and-excite methods. [75] While the SSD network structure was well suited for object identification tasks, such as the automated identification of cells, the architecture of the U-Net is particularly optimized for the partitioning of images based on image features such as shapes and edges. This makes the U-Net a natural choice for image segmentation tasks.…”
Section: Deep Learning Finds the Contours of Cells with Segmentation (mentioning)
confidence: 99%