2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.200

Active Convolution: Learning the Shape of Convolution for Image Classification

Abstract: In recent years, deep learning has achieved great success in many computer vision applications. Convolutional neural networks (CNNs) have lately emerged as a major approach to image classification. Most research on CNNs thus far has focused on developing architectures such as the Inception and residual networks. The convolution layer is the core of the CNN, but few studies have addressed the convolution unit itself. In this paper, we introduce a convolution unit called the active convolution unit (ACU). A new …

Citation Types: 3 supporting, 113 mentioning, 0 contrasting
Cited by 164 publications (119 citation statements). References 25 publications.

“…Dynamic Filter [20] can only adaptively modify the parameters of filters, without adjusting the kernel size. Active Convolution [19] augments the sampling locations in the convolution with offsets. These offsets are learned end-to-end but become static after training, whereas in SKNet the RF sizes of neurons can adaptively change during inference.…”
Section: Related Work (mentioning)
confidence: 99%
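
To make the mechanism in this statement concrete, here is a minimal, hypothetical PyTorch sketch of an active-convolution-style layer: each kernel position ("synapse") carries one learnable 2D offset shared across channels and spatial positions, fractional positions are read with bilinear interpolation via F.grid_sample, and the offsets are trained by back-propagation but stay fixed at inference. The class name ActiveConv2d and all implementation details are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActiveConv2d(nn.Module):
    """Illustrative active-convolution-style layer (not the authors' code):
    a k x k convolution whose sampling positions carry learnable 2D
    offsets, shared across channels and spatial positions."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.k = k
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k * k) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        # One learnable (dx, dy) per kernel position ("synapse");
        # initialized to zero, i.e. a regular k x k grid.
        self.offset = nn.Parameter(torch.zeros(k * k, 2))

    def forward(self, x):
        n, _, h, w = x.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, dtype=x.dtype, device=x.device),
            torch.arange(w, dtype=x.dtype, device=x.device),
            indexing="ij")
        r = self.k // 2
        taps = [(i, j) for i in range(-r, r + 1) for j in range(-r, r + 1)]
        out = self.bias.view(1, -1, 1, 1)
        for s, (di, dj) in enumerate(taps):
            # Fractional sampling location of this synapse for every pixel.
            sx = xs + dj + self.offset[s, 0]
            sy = ys + di + self.offset[s, 1]
            # Normalize to [-1, 1] for grid_sample (align_corners=True).
            grid = torch.stack((2 * sx / (w - 1) - 1,
                                2 * sy / (h - 1) - 1), dim=-1)
            grid = grid.unsqueeze(0).expand(n, -1, -1, -1)
            sampled = F.grid_sample(x, grid, mode="bilinear",
                                    padding_mode="zeros", align_corners=True)
            # Mix channels with this synapse's weights (a 1x1 convolution).
            out = out + F.conv2d(sampled, self.weight[:, :, s, None, None])
        return out

# Hypothetical usage: offsets receive gradients like any other parameter.
layer = ActiveConv2d(16, 32)
y = layer(torch.randn(2, 16, 8, 8))   # -> (2, 32, 8, 8)
y.mean().backward()
print(layer.offset.grad.shape)        # torch.Size([9, 2])
```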
“…Spatial Transform Network [18] warped the feature map via a global parametric transformation. The works [19,9] augmented the sampling locations in the convolution with offsets and learned those offsets end-to-end via back-propagation.…”
Section: Related Work (mentioning)
confidence: 99%
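
The contrast drawn here (one global warp versus per-tap offsets) is visible in how little code a global transform needs. Below is a hedged sketch using PyTorch's affine_grid/grid_sample; the hard-coded theta is a stand-in for what an STN's localization network (not shown) would predict.

```python
import torch
import torch.nn.functional as F

# Hypothetical global affine warp in the spirit of a Spatial Transformer.
x = torch.randn(2, 16, 32, 32)                # (N, C, H, W) feature map
theta = torch.tensor([[1.0, 0.0, 0.1],        # 2x3 affine matrix:
                      [0.0, 1.0, 0.0]])       # small horizontal shift
theta = theta.unsqueeze(0).expand(2, -1, -1)  # one matrix per batch item
grid = F.affine_grid(theta, x.shape, align_corners=False)
warped = F.grid_sample(x, grid, align_corners=False)  # (2, 16, 32, 32)
```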
“…The most common parameterization method is to directly learn the aggregation weights ω [18]. Some methods instead learn a meta network {θ} on the input features to generate adaptive aggregation weights [15] or an adaptive aggregation scope across spatial positions [6], or learn a fixed prior on the spatial aggregation scope (Ω) [14].…”
Section: A General Formulation (mentioning)
confidence: 99%
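
For reference, the general formulation this quote builds on can be sketched as follows. This is a hedged reconstruction from the quote's own symbols (ω for aggregation weights, Ω for the aggregation scope, θ for meta-network parameters); the exact notation in the cited papers may differ.

```latex
% Hedged sketch: output feature y_p aggregates input features x_{p'}
% over a scope Omega(p) with weights omega.
\[
    y_{p} \;=\; \sum_{p' \in \Omega(p)} \omega(p, p')\, x_{p'}
\]
% Directly learned weights:  \omega(p, p') are free parameters [18].
% Meta network:              \omega(p, p') = f_{\theta}(x_{p}, x_{p'}) [15].
% Fixed geometric prior:     the scope \Omega is fixed a priori [14].
```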
“…The "aggregation weight" column covers three aspects: how aggregation weights are computed from parameterized weights ("computation" sub-column); inclusion of geometric priors ("geo." sub-column); type of computation ("type" sub-column regular ω all/one/no local ω top-down group [17,28] ω group/one/no local ω top-down depthwise [5,12] ω one/one/no local ω top-down dilated [4,29] ω all/one/no atrous ω top-down active [14] ω, Ω all/one/no Ω ω top-down local connected [24] ω all/one/no local ω top-down dynamic filters [15] θ all/one/no local f θ (x p ) top-down deformable [6,32] ω, θ all/one/no…”
Section: Local Relation Layermentioning
confidence: 99%