2022
DOI: 10.1109/tpami.2022.3211006

Beyond Self-Attention: External Attention Using Two Linear Layers for Visual Tasks

Cited by 233 publications (92 citation statements)
References 78 publications
“…In the course of continuous learning, we have learned that deep learning is regarded as a black box, which may not be explainable in biological terms [44]. In future work, we will take biological interpretation into consideration and apply more effective attention modules, including the convolutional block attention module (CBAM) [45] and external attention (EA) [46], which should yield more meaningful gains in the following experiments.…”
Section: Discussion (mentioning)
confidence: 99%
“…This hybrid approach can be further enhanced by attention algorithms during the feature extraction process. A fused attention mechanism consisting of self-attention (SA) [113] and external attention (EA) [114] is used in [115]. The SA was used to weight the features based on their importance, while the EA was used to discover the correlations between different features.…”
Section: Based Hybrid Models (mentioning)
confidence: 99%
“…In this subsection, we introduce our proposed EACNN network. Unlike the traditional CNN2, whose convolution layers learn features only within a fixed receptive field, the key of EACNN is to employ the External Attention [8] module to learn features with a learnable receptive field and capture long-term dependencies, which can detect correlations across the whole signal.…”
Section: EACNN Network (mentioning)
confidence: 99%
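The EACNN statement above applies the external attention (EA) module of the cited paper, in which two small linear layers act as learnable external key/value memories shared across all samples and the attention map is double-normalized. Below is a minimal PyTorch-style sketch of such a block; the class name, the memory size S, and the tensor shapes are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    """Sketch of an external-attention block: two linear layers serve as
    learnable external key/value memories shared across the dataset.
    d_model and S are assumed hyperparameters for illustration."""

    def __init__(self, d_model: int, S: int = 64):
        super().__init__()
        self.mk = nn.Linear(d_model, S, bias=False)  # external key memory M_k
        self.mv = nn.Linear(S, d_model, bias=False)  # external value memory M_v

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, N, d_model) -- flattened spatial positions or signal samples
        attn = torch.softmax(self.mk(x), dim=1)      # (batch, N, S), normalized over the N positions
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)  # double normalization over the S memory units
        return self.mv(attn)                         # (batch, N, d_model)

# Usage sketch: attend over features from a 1D convolutional front end (hypothetical shapes).
features = torch.randn(8, 500, 128)   # (batch, signal length, channels)
out = ExternalAttention(d_model=128)(features)
print(out.shape)                      # torch.Size([8, 500, 128])
```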
“…In [7], the attention mechanism is utilized to perform the recalibration. In [8], the authors proposed an external attention framework for image classification and object detection, which has better performance and lower computational cost than the self-attention mechanism. In addition, compared with CNNs, the attention mechanism can better capture global correlations.…”
(mentioning)
confidence: 99%
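The "lower computational cost" noted in the last statement follows from the attention-map sizes: self-attention builds an N × N map over the N input tokens, whereas external attention builds an N × S map against a fixed external memory of S units with S ≪ N. A rough back-of-the-envelope comparison, with hypothetical sizes:

```python
# Rough multiply-add comparison (illustrative sizes, not a benchmark).
N, d, S = 4096, 256, 64               # e.g. a 64x64 feature map flattened to N tokens; S memory units (assumed)
self_attention_macs = N * N * d       # Q K^T and A V each cost ~N*N*d multiply-adds
external_attention_macs = N * S * d   # F M_k^T and A M_v each cost ~N*S*d multiply-adds
print(self_attention_macs / external_attention_macs)  # -> 64.0, i.e. the ratio N / S
```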