2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00289
Variational Convolutional Neural Network Pruning

Cited by 256 publications (144 citation statements)
References 17 publications
“…"Smaller-norm-less-important" is also an early approach based on a distance-based criterion [20], [21]. Recent studies have proposed new criterion-based methods using variational information [34] and the geometric median [22] to overcome the limitations of the distance-based criterion. Lastly, recent studies have proposed clustering-based approaches to analyze the similarity among filters in depth [19], [23].…”
Section: Related Work (mentioning)
confidence: 99%
“…So, when a general pruning technique is applied to VGG16, the performance degradation is generally negligible. Fine-tuning may even yield a pruned network that outperforms the given pretrained network; in fact, we can observe such a phenomenon in GAL [38] and GBN [31].…”
Section: Pruning Performance Evaluation (mentioning)
confidence: 99%
“…The authors show that many redundant filters (and feature maps) can be dropped, especially in the case of transfer learning. As already mentioned, [48] relies on a variational approach to sparsify convolutional filters through batch normalization layers: the batch normalization is redefined so that the affine transformation depends mostly on a single scalar parameter (γ) per filter.…”
Section: Pruning and Compression (mentioning)
confidence: 99%
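The γ-based filter selection described in the statement above can be sketched in a few lines: once the batch-norm scales have been driven toward zero by sparsity training, filters with small |γ| contribute little to the layer output and can be dropped. This is a minimal illustration, not the paper's implementation; the function name and the fixed keep ratio are hypothetical.

```python
def prune_filters_by_bn_scale(gammas, keep_ratio=0.5):
    """Return the sorted indices of the filters to KEEP, chosen as the
    keep_ratio fraction of filters with the largest batch-norm scale |gamma|.
    Filters with near-zero gamma are pruned."""
    n_keep = max(1, round(keep_ratio * len(gammas)))
    # Rank filter indices by decreasing |gamma|, then keep the top n_keep.
    ranked = sorted(range(len(gammas)), key=lambda i: -abs(gammas[i]))
    return sorted(ranked[:n_keep])

# Example: 8 filters, half with near-zero scales after sparsity training.
gammas = [0.9, 0.01, 1.2, 0.002, 0.7, 0.0, 0.4, 0.003]
print(prune_filters_by_bn_scale(gammas, keep_ratio=0.5))  # -> [0, 2, 4, 6]
```

In a real network, the pruned filter indices would then be used to slice the convolution weights of that layer and the input channels of the following layer.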
“…Reducing the number of parameters in neural networks can be naturally achieved by low-rank compression [21], [8], [16] or by pruning strategies [23], [19], [13]. More recently, [48] developed a variational approach to sparsify convolutional filters through batch normalization layers. In [35], the authors learn separable weights for each filter and bias in convolutional layers and further refine them in a meta-learning procedure.…”
Section: Introduction (mentioning)
confidence: 99%
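The low-rank compression mentioned in the statement above can be illustrated with a truncated SVD: a weight matrix is replaced by the product of two thin factors, cutting storage from m·n to rank·(m+n) values. This is a minimal NumPy sketch under the assumption of a matrix-shaped weight (a fully connected layer, or a reshaped convolution kernel); the function name is hypothetical.

```python
import numpy as np

def low_rank_compress(W, rank):
    """Approximate W (m x n) as A @ B with thin factors A (m x rank)
    and B (rank x n), obtained from a truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb singular values into the left factor
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
# Build a 64x64 matrix of rank at most 8, so a rank-8 truncation is exact.
W = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))
A, B = low_rank_compress(W, rank=8)
print(np.allclose(A @ B, W))  # True: exact reconstruction for a rank-8 matrix
```

For a real layer, the rank is chosen below the matrix's full rank, trading a small approximation error for the parameter reduction.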
“…Deep CNNs are also known to successfully solve computer vision problems such as object recognition, semantic segmentation, object detection, and video analysis [15]. They are widely applied to tasks such as heartbeat classification [16], road crack detection [17], and the segmentation of blood vessels in retinal images, skin cancer, and lung lesions [18].…”
Section: Introduction (mentioning)
confidence: 99%