2021
DOI: 10.1109/tpami.2020.2999099

Attention-Based Dropout Layer for Weakly Supervised Single Object Localization and Semantic Segmentation

Cited by 83 publications (55 citation statements)
References 52 publications
“…Furthermore, we have combined PSE block and SE block in the model for better performance. In the future, we will study how to use PSE blocks for more complex tasks [46][47][48], and use PSigmoid in other networks [49][50][51][52].…”
Section: Results (mentioning)
confidence: 99%
“…Recently, various studies [36][37][38][39][40] have used the self-attention mechanism to improve the classification accuracy. Hu et al [36] proposed the squeeze-and-excitation (SE) block that increased the accuracy of the classification model based on the use of a one-dimensional (1D) channel self-attention map.…”
Section: Attention Mechanism (mentioning)
confidence: 99%
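To make the channel-attention idea in this statement concrete, the following is a minimal PyTorch sketch of an SE-style block; the class name SEBlock and the reduction ratio of 16 are illustrative assumptions rather than details taken from the cited work.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    # Minimal squeeze-and-excitation sketch: a 1D channel attention map
    # rescales each channel of the input feature map.
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global average pooling
        self.fc = nn.Sequential(                 # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # channel-wise reweighting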
“…Wang et al [37] formulated self-attention as a non-local operation, covering the entire image region in one operation to model spatial-temporal dependencies in video sequences. Park et al [38] and Choe et al [40] proposed the bottleneck attention module (BAM) and attention-based dropout layer (ADL) that respectively produced spatial self-attention and importance maps with auxiliary convolutional layers. The produced self-attention map is applied to the input feature map to emphasize the object region.…”
Section: Attention Mechanism (mentioning)
confidence: 99%
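The spatial-attention and importance-map idea attributed to ADL above can be sketched roughly as follows in PyTorch; the drop rate, the threshold gamma, and the random training-time choice between a drop mask and an importance map are assumptions based on the general description here, not the authors' reference implementation.

import torch
import torch.nn as nn

class ADLSketch(nn.Module):
    # Illustrative attention-based dropout layer: a spatial map derived from the
    # feature activations either emphasizes the object region (importance map)
    # or hides its most discriminative part (drop mask). Hyper-parameters assumed.
    def __init__(self, drop_rate: float = 0.75, gamma: float = 0.9):
        super().__init__()
        self.drop_rate = drop_rate   # probability of applying the drop mask
        self.gamma = gamma           # threshold relative to the map's maximum

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x                                        # identity at test time
        attn = x.mean(dim=1, keepdim=True)                  # spatial self-attention map (B,1,H,W)
        importance = torch.sigmoid(attn)                    # soft emphasis of the object region
        thr = self.gamma * attn.amax(dim=(2, 3), keepdim=True)
        drop_mask = (attn < thr).float()                    # zero out the most discriminative region
        use_drop = torch.rand(1).item() < self.drop_rate
        return x * (drop_mask if use_drop else importance)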
“…Since both ReLU and Dropout zero out or discard certain features, we refer to their function here as selective feature transformation. The widely used attention mechanism applies weights to amplify or weaken the values of certain input variables or blocks of the feature map [27], and the proposed feature selection method, LSTM-RDN, which contains ReLU and Dropout layers, performs a similar selection function to the attention mechanism.…”
Section: LSTM-Based Selective Feature Transformation Network (mentioning)
confidence: 99%
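The contrast drawn here between ReLU/Dropout-style zeroing and attention-style reweighting can be illustrated with a short PyTorch snippet; the sigmoid of random noise below is a purely hypothetical stand-in for learned attention weights.

import torch
import torch.nn.functional as F

x = torch.randn(1, 8)                              # toy feature vector

relu_out = F.relu(x)                               # zeroes negative features (deterministic selection)
drop_out = F.dropout(x, p=0.5, training=True)      # zeroes a random subset of features
weights = torch.sigmoid(torch.randn(1, 8))         # hypothetical stand-in for learned attention weights
attn_out = x * weights                             # amplifies or weakens features instead of zeroing them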