2020 17th Conference on Computer and Robot Vision (CRV)
DOI: 10.1109/crv50864.2020.00037
Differentiable Mask for Pruning Convolutional and Recurrent Networks

Cited by 6 publications (3 citation statements, all mentioning) · References 12 publications
“…However, the unstructured nature of the removed parameters makes it difficult to operate the network. Recently, several works have been proposed to prune in a structured way, where a whole kernel, a filter, or even a layer is pruned according to a specific criterion [19][20][21][22].…”
Section: State of the Art on Embedded Execution of Quantized Neural Networks (mentioning)
confidence: 99%
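As an illustration of the criterion-driven structured pruning this statement describes, the sketch below drops whole convolution filters by L1 norm, one common criterion in this line of work; the function name and the keep_ratio parameter are hypothetical choices, not taken from any of the cited papers:

```python
import torch
import torch.nn as nn

def prune_filters_by_l1(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Structured pruning sketch: keep the filters of a Conv2d layer with
    the largest L1 norms and drop the rest, returning a smaller layer."""
    weight = conv.weight.data                      # (out_channels, in_channels, kH, kW)
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    scores = weight.abs().sum(dim=(1, 2, 3))       # per-filter L1 importance
    keep = torch.topk(scores, n_keep).indices
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    pruned.weight.data = weight[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned
```

Note that removing output filters also shrinks the input expected by the next layer, which a complete implementation would have to propagate through the network.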
“…In other words, if the scaling factor is 0, it is guaranteed that the corresponding unit has no contribution to recognition. We can include mask-based pruning methods [33,57] under this category as well, since the basic principle is the same. Although quite useful in practice, this approach does not tell us anything about the nature of a "good unit" for recognition.…”
Section: Related Work (mentioning)
confidence: 99%
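A common concrete instantiation of this scaling-factor idea, in the style of network-slimming approaches, is to L1-penalize the per-channel BatchNorm scales during training so that unimportant channels are driven toward zero; the function name and coefficient below are illustrative assumptions:

```python
import torch
import torch.nn as nn

def slimming_sparsity_loss(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    """L1 penalty on BatchNorm scales (the per-channel scaling factors).
    A channel whose factor reaches exactly 0 contributes nothing to the
    output and can therefore be removed without changing predictions."""
    penalty = torch.zeros(())
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()
    return lam * penalty

# During training: loss = task_loss + slimming_sparsity_loss(model)
```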
“…Yamamoto et al. [9] use a channel-pruning technique based on an attention mechanism, where attention blocks are introduced into each layer and updated during training to evaluate the importance of each channel. In [10], the authors propose a learnable differentiable mask that aims at identifying, during training, the less important neurons, channels, or even layers, and pruning them. In [11], the authors propose to give the DNN the ability to decide during training which criterion should be considered for each layer when pruning.…”
Section: DNN Compression (mentioning)
confidence: 99%
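The differentiable-mask idea attributed to [10] can be sketched as a learnable per-channel gate trained jointly with the weights by backpropagation. The sketch below is an illustrative assumption, not the paper's exact formulation: the class name, the sigmoid gating, and the 0.5 threshold are all hypothetical choices.

```python
import torch
import torch.nn as nn

class DifferentiableChannelMask(nn.Module):
    """Learnable channel mask sketch: one real-valued logit per channel,
    squashed with a sigmoid and multiplied into the feature map, so the
    mask receives gradients and is learned jointly with the weights."""

    def __init__(self, num_channels: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_channels))  # sigmoid(0) = 0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast the (C,) mask over an (N, C, H, W) feature map.
        return x * torch.sigmoid(self.logits).view(1, -1, 1, 1)

    def surviving_channels(self, threshold: float = 0.5) -> torch.Tensor:
        # After training, channels whose gate stays below the threshold
        # are candidates for removal.
        return (torch.sigmoid(self.logits) >= threshold).nonzero().flatten()
```

In practice such a gate is usually paired with a sparsity penalty on the mask values so that unimportant channels are actively pushed toward zero rather than merely attenuated.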