2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 2021
DOI: 10.1109/cvprw53098.2021.00016

Adversarial Robust Model Compression using In-Train Pruning

Cited by 12 publications (5 citation statements). References 11 publications.
“…Prior studies indicate that sparse models tend to underperform on Compression Identified Examples (CIEs), suggesting that the pruning process exacerbates the inherent algorithmic biases hidden within the datasets (Hooker et al., 2020). In Computer Vision (CV), simultaneous optimization of model pruning and adversarial training has been advocated as an effective solution to this issue (Gui et al., 2019; Ye et al., 2019; Sehwag et al., 2020; Vemparala et al., 2021). In NLP, Du et al. (2023) propose to prevent model overfitting on easy samples by leveraging sample difficulty in the context of pruning.…”
Section: Robust Model Pruning
confidence: 99%
“…To date, several prior works have explored the efficient integration of model compression into the adversarial training process. To be specific, (Sehwag et al. 2019; Ye et al. 2019; Sehwag et al. 2020; Vemparala et al. 2021; Xie et al. 2020a; Guo et al. 2018; Madaan, Shin, and Hwang 2020; Rakin et al. 2019; Özdenizci and Legenstein 2021) investigate robustness-aware pruning to achieve high model sparsity and adversarial robustness. Also, (Fu et al. 2021; Lin, Gan, and Han 2019) propose several approaches to develop low-bit-precision robust models.…”
Section: Related Work
confidence: 99%
“…Motivated by this idea, several prior works have explored how to examine and further improve the robustness of compressed DNNs. In particular, because of the popularity of pruning in many model compression tasks, most of the existing compactness/robustness co-exploration efforts (Sehwag et al. 2019, 2020; Ye et al. 2019; Vemparala et al. 2021) focus on efficient approaches to generating robust pruned DNN models. To be specific, (Ye et al. 2019) demonstrates that model robustness and compactness can co-exist in neural networks via concurrent adversarial training and weight pruning.…”
Section: Introduction
confidence: 99%
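The "concurrent adversarial training and weight pruning" idea that these citation statements describe can be sketched in a few lines. The following is a minimal, illustrative NumPy example on a logistic-regression model, not any of the cited papers' actual methods: the FGSM-style attack, the gradual magnitude-pruning schedule, and all function names are assumptions made for illustration.

```python
import numpy as np

def fgsm_perturb(x, grad_x, eps):
    """Illustrative FGSM step: move each input in the sign of its loss gradient."""
    return x + eps * np.sign(grad_x)

def adv_train_with_pruning(X, y, sparsity=0.5, eps=0.1, lr=0.1, epochs=200, seed=0):
    """Sketch of in-train pruning under adversarial training (hypothetical helper).

    Each epoch: craft adversarial inputs, take a gradient step on them with a
    pruning mask applied, then prune the smallest-magnitude weights, ramping
    the sparsity target over the first half of training.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    mask = np.ones_like(w)
    for t in range(epochs):
        # Gradient of the logistic loss w.r.t. the inputs (for the attack).
        p = 1.0 / (1.0 + np.exp(-(X @ (w * mask))))
        grad_x = np.outer(p - y, w * mask)
        X_adv = fgsm_perturb(X, grad_x, eps)
        # Gradient step on the adversarial examples; pruned weights stay frozen.
        p_adv = 1.0 / (1.0 + np.exp(-(X_adv @ (w * mask))))
        grad_w = X_adv.T @ (p_adv - y) / len(y)
        w -= lr * grad_w * mask
        # In-train pruning: raise the sparsity target, drop smallest magnitudes.
        frac = sparsity * min(1.0, t / (epochs // 2))
        k = int(frac * w.size)
        if k > 0:
            thresh = np.sort(np.abs(w))[k - 1]
            mask = (np.abs(w) > thresh).astype(float)
    return w * mask, mask
```

The key design point the quoted works share is that pruning decisions are made *inside* the adversarial training loop, so the surviving weights are selected for their usefulness on perturbed inputs rather than pruned post hoc from a clean-trained model.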
“…HAQ (Hardware-Aware Quantization) used reinforcement learning to select bit-widths for weights and activations to quantize a model during training while considering hardware constraints [46]. Vemparala et al. [10] identified the optimal per-layer precision during the training process. They added hardware awareness to the training optimizer by using a differentiable Gaussian process regressor that provides accurate hardware predictions.…”
Section: Group A: Single-Objective Hardware-Aware Compression
confidence: 99%
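The per-layer bit-width selection described above (HAQ's RL search, Vemparala et al.'s differentiable predictor) searches over the accuracy cost of quantizing each layer at different precisions. A minimal sketch of the quantity being traded off is symmetric uniform quantization and its reconstruction error as a function of bit-width; the function names here are illustrative, not from either paper.

```python
import numpy as np

def quantize_uniform(w, bits):
    """Symmetric uniform quantization of a weight tensor to `bits` bits.

    Scales weights so the largest magnitude maps to the top integer level,
    rounds to the integer grid, then rescales back (fake quantization).
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

def layer_quant_error(w, bits):
    """Mean-squared reconstruction error of a layer at a given bit-width."""
    return float(np.mean((w - quantize_uniform(w, bits)) ** 2))
```

A mixed-precision search, whether RL-driven as in HAQ or guided by a differentiable hardware-cost regressor, effectively picks a `bits` value per layer that balances this kind of quantization error against the predicted latency or energy on the target hardware.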