2018
DOI: 10.1007/978-3-030-01261-8_25
SkipNet: Learning Dynamic Routing in Convolutional Networks

Abstract: While deeper convolutional networks are needed to achieve maximum accuracy in visual perception tasks, for many inputs shallower networks are sufficient. We exploit this observation by learning to skip convolutional layers on a per-input basis. We introduce SkipNet, a modified residual network, that uses a gating network to selectively skip convolutional blocks based on the activations of the previous layer. We formulate the dynamic skipping problem in the context of sequential decision making and propose a hy…
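As a rough illustration of the gating mechanism the abstract describes, below is a minimal PyTorch sketch of a residual block whose execution is decided by a small gate driven by the incoming activations. The SkipGate design (average pooling plus a linear layer) and the hard 0.5 threshold are illustrative assumptions, not the paper's exact gate architecture (SkipNet also proposes recurrent gate variants).

```python
import torch
import torch.nn as nn

class SkipGate(nn.Module):
    """Tiny gating network: maps incoming activations to a skip/execute decision.
    (Illustrative design; not SkipNet's exact gate.)"""
    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # summarize the feature map per channel
        self.fc = nn.Linear(channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.fc(self.pool(x).flatten(1))
        # Hard 0/1 decision; non-differentiable, so training needs a soft
        # relaxation or reinforcement learning (as in the paper's hybrid scheme).
        return (torch.sigmoid(logits) > 0.5).float()

class GatedResidualBlock(nn.Module):
    """Residual block that contributes only when its gate fires."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = SkipGate(channels)
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.gate(x).view(-1, 1, 1, 1)  # per-input gate in {0, 1}
        # When g == 0 the block reduces to the identity shortcut. To actually
        # save compute at inference, one would test g before running self.body.
        return self.relu(x + g * self.body(x))
```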

Cited by 452 publications (375 citation statements)
References 18 publications
“…Dynamic Layer Skipping. Many dynamic inference methods [10], [34], [35] propose to selectively execute subsets of layers in the network conditioned on each input, framed as sequential decision making. Most of them use gating networks to skip within chain-like, ResNet-style models [36].…”
Section: Related Work
confidence: 99%
“…Most of them used gating networks to skip within chain-like, ResNet-style models [36]. SkipNet [10] introduced a hybrid learning algorithm which combines supervised learning with reinforcement learning to learn the layer-wise skipping policy based on the input, enabling greater computational savings and supporting deep architectures. BlockDrop [34] trained one global policy network to skip residual blocks.…”
Section: Related Work
confidence: 99%
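For a sense of what the hybrid scheme's reinforcement-learning stage optimizes, here is a loose PyTorch sketch in which each gate decision is treated as an action and the reward trades prediction quality against the fraction of blocks executed. The reward shape, the alpha trade-off weight, and the function names are assumptions for illustration, not SkipNet's exact formulation.

```python
import torch

def skipping_reward(loss: torch.Tensor, gates: torch.Tensor,
                    alpha: float = 0.5) -> torch.Tensor:
    """Per-sample reward for a layer-skipping policy (illustrative form).

    loss:  (N,) per-sample task loss (e.g. cross-entropy); lower is better.
    gates: (N, B) binary execute/skip decisions over B gated blocks.
    alpha: assumed trade-off weight between accuracy and computation.
    """
    compute_saved = 1.0 - gates.float().mean(dim=1)  # fraction of blocks skipped
    return -loss + alpha * compute_saved

def reinforce_loss(log_probs: torch.Tensor, reward: torch.Tensor) -> torch.Tensor:
    """REINFORCE surrogate: raise the log-probability of high-reward decisions.

    log_probs: (N, B) log-probability of each block's chosen gate action.
    The reward is detached so gradients flow only through the policy terms.
    """
    return -(log_probs.sum(dim=1) * reward.detach()).mean()
```

In the hybrid setup the quote describes, a supervised stage with relaxed (soft) gates would initialize the policy, which policy-gradient fine-tuning along these lines then refines.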
“…Here, we evaluate the partial reconfiguration approach. Also, we compare three selection methods: 1) our proposed feedback procedure; 2) the SkipNet method [38]; and 3) an entropy-based method [4].…”
Section: B. Overall Evaluation
confidence: 99%