2022
DOI: 10.1117/1.jrs.16.026516
Convolutional block attention module U-Net: a method to improve attention mechanism and U-Net for remote sensing images

Cited by 10 publications (10 citation statements)
References 20 publications
“…As stated earlier, the segmentation models are UNet models with VGG-16 as the backbone of the encoder. Along with vanilla UNet, this study considers three variants of UNet: UNet-CBAM [37], UNet-SE [37], and UNet-(CBAM+SE) [37], obtained with the three attention mechanisms CBAM [38], SE [39], and CBAM+SE [37], respectively. The experimental setup of the segmentation and classification models for the two tiers is provided below:…”
Section: Segmentation and Classification Models
confidence: 99%
“…The input feature map is first handled separately by MaxPool3D and AvgPool3D along the height, width, and depth dimensions, then passed into a multi-layer perceptron (MLP) [15]. The features output by the MLP undergo elementwise addition and a sigmoid activation to produce the channel attention feature maps.…”
Section: Learning the Temporal-spatial Feature Through 3d-ts Attention
confidence: 99%
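The pooling–MLP–sigmoid pipeline in the excerpt above can be sketched in a few lines of NumPy. This is a minimal 2-D spatial illustration (the excerpt's 3-D variant simply pools over a depth axis as well); the weight names `w1`, `b1`, `w2`, `b2` and the bottleneck size are illustrative, not from the cited paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, b1, w2, b2):
    """CBAM-style channel attention as described in the excerpt:
    global max- and average-pooling over the spatial axes, a shared
    two-layer MLP, elementwise addition, then a sigmoid gate."""
    avg = feat.mean(axis=(1, 2))           # (C,) average-pooled descriptor
    mx = feat.max(axis=(1, 2))             # (C,) max-pooled descriptor
    def mlp(v):                            # shared MLP with a bottleneck
        return w2 @ np.maximum(w1 @ v + b1, 0.0) + b2
    gate = sigmoid(mlp(avg) + mlp(mx))     # (C,) channel attention weights
    return feat * gate[:, None, None]      # reweight each channel

# toy usage: 4 channels, an 8x8 spatial map, bottleneck width 2
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
w1, b1 = rng.standard_normal((2, 4)), np.zeros(2)
w2, b2 = rng.standard_normal((4, 2)), np.zeros(4)
out = channel_attention(feat, w1, b1, w2, b2)
```

Because the sigmoid gate lies in (0, 1), each output channel is a damped copy of its input, which is what makes the map act as attention rather than an arbitrary transform.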
“…where W denotes the weight matrix, b denotes the bias term, and σ represents the sigmoid activation function [15].…”
Section: Learning Temporal Features Of TC's Evolution Process Through...
confidence: 99%
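The excerpt describes the gate only in words; a plausible reconstruction of the equation it refers to, using the symbols the excerpt defines, is:

```latex
% Plausible reconstruction of the gating equation the excerpt describes:
% a feature vector x is linearly transformed and squashed into (0, 1).
a = \sigma(Wx + b), \qquad \sigma(t) = \frac{1}{1 + e^{-t}}
```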
“…Kong and Zhang [17][18] proposed the parallel attention mechanism module, also known as ProCBAM, for remote sensing scene classification. Fig.…”
Section: ProCBAM Attention Mechanism Structure
confidence: 99%
“…Fig. 5 depicts the structure and basic configuration of the ProCBAM attention technique suggested in this work [17][18]. The attention mechanism and its modules, including the CAM (i.e.…”
Section: ProCBAM Attention Mechanism Structure
confidence: 99%
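The excerpts name ProCBAM a "parallel" attention module but do not spell out its fusion rule. The sketch below only illustrates the general idea of running a channel gate and a spatial gate in parallel on the same input and combining them multiplicatively; the fusion rule and the parameter-free gates are assumptions for illustration, not the published ProCBAM design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def parallel_attention(feat):
    """Hypothetical parallel channel + spatial attention. Unlike
    sequential CBAM (channel gate, then spatial gate on the refined
    map), both gates here are computed from the same input and
    applied together; the fusion rule is an assumption."""
    # channel gate from pooled spatial statistics
    ch = sigmoid(feat.mean(axis=(1, 2)) + feat.max(axis=(1, 2)))   # (C,)
    # spatial gate from pooled channel statistics
    sp = sigmoid(feat.mean(axis=0) + feat.max(axis=0))             # (H, W)
    return feat * ch[:, None, None] * sp[None, :, :]

feat = np.random.default_rng(1).standard_normal((4, 8, 8))
out = parallel_attention(feat)
```

The design question the parallel variant answers is ordering sensitivity: in sequential CBAM the spatial gate sees an already channel-weighted map, whereas here neither branch depends on the other's output.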