Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps
2022. DOI: 10.1007/978-3-031-19803-8_17

Cited by 10 publications (3 citation statements); references 34 publications.
“…A recently proposed method selects useful deep features from a discriminative dimension-reduction perspective, using Fisher Linear Discriminant Analysis, so that it captures both the final separation of classes and their holistic inter-layer dependence [25]. Amortized Explanation Models (AEMs), which use information from both inputs and outputs, have also been proposed to predict smooth saliency masks in real time and to leverage the model's interpretations to steer the pruning process [26]. It has also been proposed to evaluate gradient-based salience to measure the importance of each channel and to prune the lowest-scoring channels and their corresponding filters according to the pruning rate [27].…”
Section: Pruning Methods and Pruning Evaluation
confidence: 99%
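The gradient-based channel salience idea described in [27] can be sketched with a first-order Taylor-style score. The function names and the exact saliency formula below are illustrative assumptions for a minimal sketch, not the published method:

```python
import numpy as np

def channel_saliency(weights, grads):
    """Illustrative first-order Taylor salience: |w * dL/dw|,
    summed over each output channel's filter weights.
    weights, grads: shape (out_channels, in_channels, kH, kW)."""
    return np.abs(weights * grads).sum(axis=(1, 2, 3))

def prune_mask(saliency, pruning_rate):
    """Boolean keep-mask that drops the lowest-scoring channels
    according to the pruning rate."""
    k = int(len(saliency) * pruning_rate)
    drop = np.argsort(saliency)[:k]           # indices of weakest channels
    mask = np.ones(len(saliency), dtype=bool)
    mask[drop] = False
    return mask
```

A pruned layer would then keep only the filters where the mask is True, along with the matching input channels of the following layer.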
“…Network Pruning: Model compression (Ghimire, Kil, and Kim 2022) is a well-studied topic, and proposed methods have primarily focused on compressing CNN classifiers. They use various techniques such as weight pruning (Han et al 2015), lightweight architecture design (Tan and Le 2019; Howard et al 2017), weight quantization (Rastegari et al 2016), structural pruning (Li et al 2017; Ye et al 2020; Ganjdanesh, Gao, and Huang 2022; He et al 2018), knowledge distillation (Ba and Caruana 2014), and NAS (Wu et al 2019; Ganjdanesh, Gao, and Huang 2023). We focus on pruning GANs, which is a more challenging task due to the instability of their training (Wang et al 2020).…”
Section: Related Work
confidence: 99%
“…The regularization in Eq. 4 has also been utilized for network pruning (Ganjdanesh, Gao, and Huang 2022; Gao et al 2020). As the round(·) function is not differentiable, we calculate the gradients of L_param w.r.t. θ_sp using the Straight-Through Estimator (Bengio, Léonard, and Courville 2013).…”
Section: Size Predictor
confidence: 99%
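The Straight-Through Estimator cited above treats the non-differentiable round(·) as the identity in the backward pass. A minimal framework-free sketch; the toy optimization target and learning rate are illustrative assumptions, not from the paper:

```python
import numpy as np

def ste_round(x):
    """Forward pass: hard rounding (non-differentiable)."""
    return np.round(x)

def ste_round_grad(upstream_grad):
    """Backward pass (STE): pretend round(x) is the identity,
    so the upstream gradient passes through unchanged."""
    return upstream_grad

# Toy use: drive a continuous parameter so its rounded value hits a
# target integer, e.g. a desired layer size of 3 (hypothetical values).
theta, target, lr = 0.0, 3.0, 0.1
for _ in range(50):
    loss_grad = 2.0 * (ste_round(theta) - target)  # d/dy of (y - target)^2
    theta -= lr * ste_round_grad(loss_grad)        # gradient step via STE
```

Without the STE, the true gradient of round(·) is zero almost everywhere, so theta would never move.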