2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/iccvw54120.2021.00015
Enhancing Adversarial Robustness via Test-time Transformation Ensembling


Cited by 11 publications (3 citation statements) | References 22 publications
“…While several approaches have been proposed, such as regularization (Cisse et al. 2017) and distillation (Papernot et al. 2016b), Adversarial Training (AT) (Madry et al. 2018) remains among the most effective. Moreover, recent works showed that AT can be enhanced by combining it with pretraining (Hendrycks, Lee, and Mazeika 2019), exploiting unlabeled data (Carmon et al. 2019), or, concurrently, conducting transformations at test time (Pérez et al. 2021). Further improvements were obtained by introducing regularizers, such as TRADES and MART (Wang et al. 2019), or by combining AT with network pruning, as in HYDRA (Sehwag et al. 2020), or with weight perturbations (Wu, Xia, and Wang 2020).…”
Section: Related Work
confidence: 99%
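The adversarial training (AT) the snippet refers to trains on worst-case inputs found by an inner maximization, typically approximated with projected gradient descent (PGD) inside an L-infinity ball. A minimal sketch of the attack step, assuming a two-class logistic linear model; all names and values here are illustrative, not taken from the paper:

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """L-infinity PGD against a logistic model (illustrative sketch).

    Model: p = sigmoid(w . x + b), loss = -log p(y).
    Each step ascends the loss via the gradient sign, then projects
    the perturbed input back into the eps-ball around x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))              # P(class 1)
        grad = (p - y) * w                        # d loss / d x for log-loss
        x_adv = x_adv + alpha * np.sign(grad)     # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to L-inf ball
    return x_adv
```

In full AT, the model parameters are then updated on these adversarial examples rather than on the clean inputs.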
“…They indicate that their scheme outperforms current attacks, lowering the performance of various CV tasks by a huge margin with only the latest perturbations. Pérez et al. presented a comprehensive experimental study of test-time transformation ensembling (TTE), in which they used image transforms to improve adversarial robustness. They show that TTE consistently improves model robustness against a range of powerful attacks without retraining.…”
Section: AI Techniques for Security and Privacy Preservation
confidence: 99%
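The TTE idea the snippet describes can be sketched in a few lines: class probabilities are averaged over a fixed set of input transformations at inference time, with no retraining. A minimal illustration, assuming `model` is any callable returning a probability vector (the names and transform set are hypothetical, not the paper's exact configuration):

```python
import numpy as np

def tte_predict(model, x, transforms):
    """Test-time transformation ensembling (TTE), sketched.

    Applies each transform to the input, queries the model on each
    variant, and averages the resulting probability vectors.
    """
    probs = [model(t(x)) for t in transforms]
    return np.mean(probs, axis=0)

# Example transform set for an image of shape (H, W):
# identity plus a horizontal flip.
transforms = [lambda img: img, lambda img: img[:, ::-1]]
```

For differentiable transforms, the same averaged output can also be attacked end to end, which is how such ensembles are typically evaluated against adaptive adversaries.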
“…Such transformations could be used as a pre-processing of all the training data or as an addition to the existing training set. More recently, data transformations have also been used at test time [38,40,21,15,42] to improve learning models, e.g., their adversarial robustness [38]. However, it remains unclear whether and how data transformation can improve FL, particularly under different kinds of client heterogeneity.…”
Section: Introduction
confidence: 99%