2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01104
Dynamic Convolution: Attention Over Convolution Kernels

Cited by 615 publications (336 citation statements)
References 19 publications
“…To enable the network to fully learn effective information from the encoded features and the upsampled features, we introduce dynamic convolution [38] modules in the decoding process to exploit these features adaptively. From the perceptron perspective, the traditional or static perceptron used in standard convolutional layers can be expressed as y = g(W^T x + b), where the parameters W and b are the weight matrix and bias, respectively.…”
Section: Dynamic Convolution Modules
confidence: 99%
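The static-versus-dynamic perceptron contrast in the quote above can be sketched as follows. This is a minimal NumPy illustration, not code from the cited paper: all names, shapes, and the softmax attention branch (`Wa`, `ba`) are illustrative assumptions. K candidate weight matrices are mixed by input-dependent attention weights before the usual y = g(W^T x + b) is applied.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the K attention logits
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_perceptron(x, Ws, bs, Wa, ba):
    """Dynamic perceptron sketch: aggregate K linear experts (W_k, b_k)
    with input-dependent attention pi_k(x), then apply g (here ReLU)."""
    pi = softmax(Wa @ x + ba)                   # attention over the K kernels
    W = sum(p * Wk for p, Wk in zip(pi, Ws))    # aggregated weight matrix
    b = sum(p * bk for p, bk in zip(pi, bs))    # aggregated bias
    return np.maximum(W.T @ x + b, 0.0)         # y = g(W^T x + b)

# illustrative dimensions: K experts, d_in inputs, d_out outputs
rng = np.random.default_rng(0)
K, d_in, d_out = 4, 8, 3
Ws = [rng.standard_normal((d_in, d_out)) for _ in range(K)]
bs = [rng.standard_normal(d_out) for _ in range(K)]
Wa = rng.standard_normal((K, d_in))
ba = np.zeros(K)
x = rng.standard_normal(d_in)
y = dynamic_perceptron(x, Ws, bs, Wa, ba)
print(y.shape)  # (3,)
```

A static perceptron is the special case where `pi` is a constant one-hot vector, i.e. the same W and b regardless of the input.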
“…Inspired by Squeeze-and-Excitation (SE) networks [51] and dynamic convolution [52], we proposed a novel attention method for our embedding. Unlike SE networks, whose attention is calculated from channel information, and dynamic convolution, whose attention is computed from the average-pooled input, we calculate attention directly from the input data.…”
Section: Dynamic Spatial Attention
confidence: 99%
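The distinction this quote draws, attention from pooled statistics versus attention from the raw input, can be sketched as below. This is an illustrative NumPy sketch under assumed shapes and sigmoid gating; the function and weight names are hypothetical, not the cited paper's API.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_attention(x, W1, W2):
    """SE-style gating: squeeze the spatial dims by global average
    pooling, then compute per-channel excitation weights."""
    s = x.mean(axis=(1, 2))                    # (C,) pooled statistics
    a = sigmoid(W2 @ np.maximum(W1 @ s, 0.0))  # (C,) weights in (0, 1)
    return x * a[:, None, None]                # reweight channels

def direct_attention(x, Wd):
    """The quoted variant: attention computed directly from the raw
    input values, with no pooling step in between."""
    a = sigmoid(Wd @ x.reshape(-1))            # (C,) weights from full input
    return x * a[:, None, None]

rng = np.random.default_rng(2)
C, H, W = 4, 5, 5
x = rng.standard_normal((C, H, W))
W1 = rng.standard_normal((2, C))       # bottleneck to C // 2 units
W2 = rng.standard_normal((C, 2))
Wd = rng.standard_normal((C, C * H * W))
y_se = se_attention(x, W1, W2)
y_direct = direct_attention(x, Wd)
print(y_se.shape, y_direct.shape)
```

The trade-off: pooling makes the attention branch cheap and input-size independent, while direct attention preserves spatial detail at the cost of many more parameters in `Wd`.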
“…Lightweight CNNs suffer performance degradation under computation constraints. Dynamic convolution [17] was presented to overcome this issue, improving model performance without increasing the network depth. The general architecture is shown in Fig.…”
Section: B. Dynamic Convolution
confidence: 99%
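The general architecture this quote refers to — attention computed from an average-pooled input and used to mix K convolution kernels — can be sketched as follows. This is a minimal single-channel NumPy sketch with a naive convolution loop; all names and dimensions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def conv2d_valid(x, w):
    """Naive 'valid' 2D cross-correlation of single-channel map x with kernel w."""
    H, W = x.shape
    k = w.shape[0]
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + k, j:j + k] * w).sum()
    return out

def dynamic_conv2d(x, kernels, Wa):
    """Attention over K convolution kernels: globally average-pool the
    input, map the pooled value to K attention logits, softmax-normalize,
    and convolve with the attention-weighted sum of the kernels."""
    pooled = np.array([x.mean()])           # global average pooling
    pi = softmax(Wa @ pooled)               # K attention weights, sum to 1
    w = np.tensordot(pi, kernels, axes=1)   # aggregated kernel, same size
    return conv2d_valid(x, w)

rng = np.random.default_rng(1)
K, ksz = 4, 3
kernels = rng.standard_normal((K, ksz, ksz))
Wa = rng.standard_normal((K, 1))
x = rng.standard_normal((8, 8))
y = dynamic_conv2d(x, kernels, Wa)
print(y.shape)  # (6, 6)
```

Because the K kernels are aggregated *before* the convolution, the per-input cost is one convolution plus a tiny attention branch, which is why this improves capacity without adding depth.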
“…ResNet and other networks have achieved many successes in the image field. Nevertheless, to improve the ability to capture features and to reduce the complexity of the convolutional network, our model uses Dynamic Convolution [17] in place of the conventional convolution layer. It dynamically yields the convolution kernel depending on the input by utilizing attention [18].…”
Section: Introduction
confidence: 99%