2019
DOI: 10.1007/s11045-019-00686-z

PAC-Bayesian framework based drop-path method for 2D discriminative convolutional network pruning

Cited by 78 publications (26 citation statements)
References 32 publications
“…Regularization is critical, especially in ill-posed problems with insufficient data. Regularization can be interpreted as a technique for achieving algebraic stability during the reconstruction process but is much more than a simple stabilization technique [57, 58]. Data augmentation is a prevalent method for regularizing data.…”
Section: Discussion (mentioning)
Confidence: 99%
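
The augmentation idea in the excerpt above can be illustrated with standard torchvision transforms. This is a generic sketch, not the cited work's pipeline; the specific transforms, image size, and parameter values are assumptions.

```python
# Minimal sketch: random label-preserving perturbations enlarge the
# effective training set and so act as an implicit regularizer.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                # mirror half the images
    transforms.RandomCrop(32, padding=4),                  # jitter object position
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # photometric noise
    transforms.ToTensor(),
])
# Applied on the fly to each PIL training image, so the network rarely
# sees exactly the same input twice.
```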
“…Antoniou et al. [36] proposed training a conditional GAN (DAGAN) to perform data augmentation. It is also worth mentioning that regularized deep learning [37-41] is an efficient and vital way to improve generalization ability: full-stage data augmentation [37] plays the role of an implicit model ensemble without introducing additional training cost; PReLU [38] is an activation function that improves classification performance with a fast convergence rate; LLb-SGD [40] is a simple, computationally efficient gradient-based optimization method; the two-stage training method [39] regularizes the feature boundaries of deep networks from the point of view of data punishment, improving their generalization ability; and drop-path [41] reduces the model parameters of deep networks and accelerates inference. On the other hand, MetaGAN [42] is a simple and versatile framework for improving few-shot learning models, based on the idea that the generator produces fake samples that help the classifier learn more explicit decision boundaries between categories from only a few samples.…”
Section: Related Work (mentioning)
Confidence: 99%
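
Since the drop-path reference [41] in the excerpt above points to the indexed paper, a minimal PyTorch sketch of the generic per-sample drop-path mechanism (stochastic-depth style) may help. It does not reproduce the paper's PAC-Bayesian formulation, and `drop_prob` is an assumed hyperparameter.

```python
import torch
import torch.nn as nn

class DropPath(nn.Module):
    """Zero out an entire residual path per sample during training."""
    def __init__(self, drop_prob: float = 0.1):
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or self.drop_prob == 0.0:
            return x
        keep = 1.0 - self.drop_prob
        # One Bernoulli draw per sample, broadcast over channel/spatial dims.
        shape = (x.shape[0],) + (1,) * (x.dim() - 1)
        mask = (torch.rand(shape, device=x.device) < keep).to(x.dtype)
        return x * mask / keep  # rescale to keep the expected output unchanged
```

In evaluation mode the module is the identity, so inference pays no extra cost; during training, dropped paths shrink the effective network, which is the effect the excerpt describes as reducing parameters and accelerating inference.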
“…This helps implement DNN models simply and with low computational complexity; furthermore, it enables gradient-based optimization of DNN objective functions. Moreover, aiming to accelerate the learning process, a pruning scheme that reduces the model parameters of a 2D deep CNN is introduced in Reference [38]. Zheng et al. [39] improve the performance of CNN models using a full-stage information augmentation strategy.…”
Section: Related Work (mentioning)
Confidence: 99%
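
The excerpt above does not detail the pruning scheme (its Reference [38] is the indexed paper). As a generic point of comparison, magnitude pruning of convolutional layers can be sketched with PyTorch's built-in pruning utilities; the toy model and the 50% sparsity level are illustrative assumptions, not the paper's method.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical small 2D CNN standing in for the model being pruned.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

# Zero the 50% smallest-magnitude weights in every conv layer.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the mask into the weights
```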