2022
DOI: 10.48550/arxiv.2207.08089
Preprint

Threat Model-Agnostic Adversarial Defense using Diffusion Models

Abstract: Deep Neural Networks (DNNs) are highly sensitive to imperceptible malicious perturbations, known as adversarial attacks. Following the discovery of this vulnerability in real-world imaging and vision applications, the associated safety concerns have attracted vast research attention, and many defense techniques have been developed. Most of these defense methods rely on adversarial training (AT): training the classification network on images perturbed according to a specific threat model, which defines the magn…
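The abstract describes attacks constrained by a specific threat model. As a self-contained illustration (not code from the paper), the sketch below shows an FGSM-style perturbation under an ℓ∞ threat model against a hand-rolled logistic classifier; all names (`w`, `b`, `epsilon`) are hypothetical.

```python
import numpy as np

# Illustrative FGSM-style attack under an l_inf threat model.
# The "classifier" is a hand-rolled logistic model so the sketch stays
# self-contained; w, b, epsilon are hypothetical, not from the paper.

rng = np.random.default_rng(0)
w = rng.normal(size=8)   # fixed classifier weights
b = 0.1                  # fixed bias
x = rng.normal(size=8)   # a clean input with true label y = 1

def loss(x):
    # -log sigmoid(w @ x + b), the cross-entropy for label y = 1,
    # computed stably as softplus(-(w @ x + b))
    return np.logaddexp(0.0, -(w @ x + b))

# Analytic gradient of the loss w.r.t. the input: (sigmoid(z) - 1) * w
p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
grad_x = (p - 1.0) * w

# l_inf threat model: each coordinate may move by at most epsilon.
epsilon = 0.1
x_adv = x + epsilon * np.sign(grad_x)

assert np.max(np.abs(x_adv - x)) <= epsilon + 1e-12  # inside the threat model
assert loss(x_adv) > loss(x)  # the perturbation strictly increases the loss
```

Because the logit is linear in the input, a single signed-gradient step provably increases this loss; deeper networks require iterative variants (e.g. PGD), which AT typically uses.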

Cited by 4 publications (5 citation statements)
References 27 publications (66 reference statements)
“…Our future work may focus on several promising directions: (i) generalizing this technique for obtaining better gradients from multi-modal networks such as CLIP (Radford et al. 2021), which help guide text-to-image diffusion models (Ramesh et al. 2022); (ii) implementing robust classifier guidance beyond diffusion models, e.g. for use in classifier-guided GAN training (Sauer, Schwarz, and Geiger 2022); (iii) extending our proposed technique to unlabeled datasets; and (iv) seeking better sources of perceptually aligned gradients (Ganz, Kawar, and Elad 2022), so as to better guide the generative diffusion process.…”
Section: Discussion
confidence: 99%
“…These methods have demonstrated unprecedented realism and mode coverage in synthesized images, achieving state-of-the-art results (Dhariwal and Nichol 2021; Song et al. 2021; Vahdat, Kreis, and Kautz 2021) in well-known metrics such as the Fréchet Inception Distance (FID) (Heusel et al. 2017). In addition to image generation, these techniques have also been successful in a multitude of downstream applications such as image restoration (Kawar, Vaksman, and Elad 2021a; Kawar et al. 2022), unpaired image-to-image translation (Sasaki, Willcocks, and Breckon 2021), image segmentation (Amit et al. 2021), image editing (Liu et al. 2021; Avrahami, Lischinski, and Fried 2022), text-to-image generation (Ramesh et al. 2022; Saharia et al. 2022), and more applications in image processing (Theis et al. 2022; Gao et al. 2022; Nie et al. 2022; Blau et al. 2022; Han, Zheng, and Zhou 2022) and beyond (Jeong et al. 2021; Chen et al. 2022; Ho et al. 2022b; Zhou, Du, and Wu 2021).…”
Section: Diffusion Models
confidence: 99%
“…Diffusion models [18, 51, 53, 58] are a family of generative models that has recently gained traction, as they advanced the state-of-the-art in image generation [12, 26, 54, 57], and have been deployed in various downstream applications such as image restoration [25, 45], adversarial purification [10, 34], image compression [55], image classification [61], and others [14, 27, 37, 48, 59].…”
Section: Preliminaries
confidence: 99%
“…Nevertheless, these iterative algorithms are still considerably slower than GANs, so substantial work has been invested in improving their speed without compromising significantly on generation quality [258, 135, 247], often achieving impressive speedup levels. Diffusion models have since become ubiquitous in many applications [142, 209, 21, 116, 6, 253, 254, 144], prompting researchers to prepare surveys of their impact on the image processing field and beyond [315, 60, 36]. Figure 8.1: Temporal steps along 3 independent synthesis paths of the Annealed Langevin Dynamics [260] algorithm, using a denoiser [261] trained on LSUN bedroom [319] images.…”
Section: Regularization By Denoising (RED)
confidence: 99%
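The statement above cites the Annealed Langevin Dynamics algorithm. As a hedged, self-contained sketch (not the cited implementation), the toy below runs annealed Langevin sampling using the analytically known score of a standard Gaussian, score(x) = -x, in place of a learned denoiser; the noise schedule, step sizes, and step counts are illustrative assumptions.

```python
import numpy as np

# Toy Annealed Langevin Dynamics sampler. Instead of a learned score network,
# it uses the analytic score of a standard Gaussian so the sketch is
# self-contained; sigmas, eps, and the step counts are illustrative.

rng = np.random.default_rng(0)
sigmas = np.geomspace(1.0, 0.1, num=10)  # annealed noise levels, high -> low
eps = 1e-3                               # base step size
x = rng.normal(size=1000) * 5.0          # 1000 chains, started far off-target

def score(x):
    return -x  # gradient of log-density of N(0, 1)

for sigma in sigmas:
    alpha = eps * (sigma / sigmas[-1]) ** 2  # per-level step size
    for _ in range(100):
        z = rng.normal(size=x.shape)
        x = x + 0.5 * alpha * score(x) + np.sqrt(alpha) * z

# After annealing, the chains should approximate the target N(0, 1).
assert abs(x.mean()) < 0.15
assert abs(x.std() - 1.0) < 0.2
```

The per-level step size shrinks with the noise level, which is what makes the procedure "annealed": large early steps cover the space quickly, small late steps refine the samples. This iterative structure is also why such samplers are slower than a single GAN forward pass, as the cited passage notes.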