2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00763

Towards Understanding the Generative Capability of Adversarially Robust Classifiers

Cited by 11 publications (9 citation statements)
References 3 publications
“…We borrow the idea from the score-matching generative models [25,26,27] which propose to estimate the gradient of the ground truth data distribution and then move the initial image from its original distribution p_D(x|y_0) to the target distribution p_D(x|y) iteratively through the Stochastic Gradient Langevin Dynamics (SGLD) [28,56,57]:…”
Section: B. Motivation
confidence: 99%
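The SGLD update quoted above is easy to sketch. Below is a minimal PyTorch illustration, not the cited papers' implementation: the `score_fn` argument (standing in for an estimate of ∇_x log p_D(x|y)), the step size, and the iteration count are all assumptions.

```python
import torch

def sgld_sample(score_fn, x0, n_steps=100, step_size=1e-2):
    """Minimal SGLD sketch: drift along an estimated score plus Gaussian noise.

    score_fn(x) is assumed to approximate grad_x log p_D(x | y),
    the score of the target conditional distribution.
    """
    x = x0.clone()
    for _ in range(n_steps):
        noise = torch.randn_like(x)
        # Langevin step: half-step along the score, plus sqrt(step)-scaled noise.
        x = x + 0.5 * step_size * score_fn(x) + (step_size ** 0.5) * noise
        x = x.clamp(0.0, 1.0)  # keep the iterate in a valid image range
    return x
```

In the quoted setting, x0 would be drawn from the original distribution p_D(x|y_0), and the iterates drift toward the target distribution p_D(x|y).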
“…We borrow the idea from the score-matching generative models [25,26,27,28], which propose to estimate the gradient of the ground truth data distribution ∇_x log p_D(x|y) and generate an image from a given distribution iteratively using this estimated gradient through Langevin dynamics [25,27]. Previous attacks iteratively minimize (maximize) the conditional density of the model p_θ(y|x) along the gradient of the conditional density of the model ∇_x log p_θ(y|x) to perform the untargeted (targeted) attacks.…”
Section: Introduction
confidence: 99%
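To make the attack formulation concrete, here is a hedged sketch of a targeted attack that ascends ∇_x log p_θ(y|x). The PGD-style sign step, the ε budget, and the `model` handle are illustrative choices, not the exact procedure of the cited work.

```python
import torch
import torch.nn.functional as F

def targeted_attack(model, x, y_target, n_steps=40, step_size=1e-2, eps=8 / 255):
    """Targeted attack sketch: gradient ascent on log p_theta(y_target | x).

    model maps images to class logits; log p_theta(y|x) is the log-softmax
    of those logits at class y. The untargeted variant would instead
    descend on the true label's log-probability.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        log_probs = F.log_softmax(model(x_adv), dim=1)
        loss = log_probs[torch.arange(x.size(0)), y_target].sum()
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()   # ascend the target log-prob
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to the L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv.requires_grad_(True)
    return x_adv.detach()
```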
“…Santurkar et al (2019) demonstrated that adversarially robust models can be used for solving various generative tasks, including basic synthesis, inpainting, and superresolution. Zhu et al (2021) drew the connection between adversarial training and energy-based models and proposed a joint energy-adversarial training method for improving the generative capabilities of robust classifiers. Furthermore, Ganz and Elad (2021) proposed using a robust classifier for sample refinement as a post-processing step.…”
Section: Related Work
confidence: 99%
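A rough sketch of the kind of synthesis Santurkar et al. (2019) describe: start from noise and ascend the target-class logit of an adversarially robust classifier, whose input gradients tend to be perceptually aligned. The `robust_model` handle, the normalized-gradient step, and all hyperparameters here are assumptions for illustration, not the paper's exact settings.

```python
import torch

def synthesize(robust_model, y_target, shape=(1, 3, 32, 32),
               n_steps=60, step_size=0.5):
    """Class-conditional synthesis sketch using a robust classifier.

    Repeatedly steps the image along grad_x of the target-class logit;
    with a robust model the iterates pick up class-relevant structure.
    """
    x = torch.rand(shape, requires_grad=True)
    for _ in range(n_steps):
        loss = robust_model(x)[:, y_target].sum()
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            # Normalize the gradient so step_size controls the update scale.
            x = x + step_size * grad / (grad.norm() + 1e-12)
            x = x.clamp(0.0, 1.0)
        x.requires_grad_(True)
    return x.detach()
```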
“…Future research could do more work toward creating models with adversarially robust representations [40]. Researchers could enhance data for adversarial robustness by simulating more data [208], augmenting data [149], repurposing existing real data [30,80], and extracting more information from available data [82]. Others could create architectures that are more adversarially robust [203].…”
Section: Adversarial Robustness
confidence: 99%