2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00123

ColorFool: Semantic Adversarial Colorization

Cited by 76 publications (74 citation statements) · References 21 publications

“…Abandoning norm-based constraints completely, Patch Attack [37] replaces a certain area of the image with an adversarial patch. Likewise, ColorFool [38] disregards norms and instead recolors the image to make it adversarial. While the non-traditional norm category is not strictly defined, it gives us a concise grouping that highlights the advances being made outside of the ℓ2- and ℓ∞-based black-box attacks.…”
Section: B. Black-box Attack Categorization (mentioning, confidence: 99%)
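
To make the recoloring idea concrete, the sketch below (not the ColorFool method itself, which uses semantic segmentation to confine the color changes to regions where humans tolerate them) randomly shifts the chroma channels of an image in Lab space and keeps the first recoloring that flips a black-box classifier's prediction. The model callable, the trial budget, and the shift range are illustrative assumptions.

# Minimal sketch of a recoloring-style black-box probe (not the authors' ColorFool code).
# Assumes `model` returns class scores for an HxWx3 float RGB image in [0, 1].
import numpy as np
from skimage import color

def recolor_attack(image, model, trials=100, max_shift=0.2, seed=0):
    """Randomly shift the a/b chroma channels until the predicted label changes."""
    rng = np.random.default_rng(seed)
    original_label = int(np.argmax(model(image)))
    lab = color.rgb2lab(image)                        # L in [0, 100], a/b roughly in [-128, 127]
    for _ in range(trials):
        candidate_lab = lab.copy()
        candidate_lab[..., 1] += rng.uniform(-max_shift, max_shift) * 127.0   # shift a (green-red)
        candidate_lab[..., 2] += rng.uniform(-max_shift, max_shift) * 127.0   # shift b (blue-yellow)
        candidate = np.clip(color.lab2rgb(candidate_lab), 0.0, 1.0)
        if int(np.argmax(model(candidate))) != original_label:
            return candidate                          # recoloring that flips the prediction
    return None                                       # no label flip within the trial budget

Because only chroma is perturbed, the result stays a plausible photograph even though the pixel-space change is large, which is exactly why such attacks fall outside the norm-bounded grouping.
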
“…The Patch Attack is based on completely replacing a small part of the original image with an adversarially generated square (patch). The last attack we cover in this section is ColorFool [38]. This attack is based on manipulating the colors within the image as opposed to directly adding adversarial noise.…”
Section: Non-traditional Norm Attacks (mentioning, confidence: 99%)
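
A patch-replacement attack of the kind described above can be sketched as follows; this is not the cited Patch Attack implementation (which also optimizes the patch content and placement), only a hypothetical brute-force placement check. The model callable, the pre-computed patch, and the candidate positions are assumptions.

# Minimal sketch of patch replacement (not the cited Patch Attack code).
# Assumes `model` returns class scores for an HxWx3 float RGB image in [0, 1]
# and `patch` is a pre-computed PxPx3 array.
import numpy as np

def apply_patch(image, patch, top, left):
    """Paste the patch into a copy of the image at (top, left)."""
    patched = image.copy()
    p = patch.shape[0]
    patched[top:top + p, left:left + p, :] = patch
    return patched

def patch_attack(image, patch, model, positions):
    """Try each placement and return the first patched image that flips the label."""
    original_label = int(np.argmax(model(image)))
    for top, left in positions:
        candidate = apply_patch(image, patch, top, left)
        if int(np.argmax(model(candidate))) != original_label:
            return candidate, (top, left)             # adversarial placement found
    return None, None
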
“…The diffusion and the wide use of deep learning methods for artificial intelligence systems thus pose significant security and privacy issues. From the security point of view, Adversarial Attacks (AA) showed that deep learning models can be easily fooled [13,14,15,16,17,18,19,20,21,22,23,24,25], while, from a privacy point of view, it has been shown that information can be easily extracted from datasets and learned models [26,27,28]. It has also been shown that attacking methods based on adversarial samples can be used for privacy-preserving purposes [29,30,31,32,33]: in this case, data are intentionally modified to avoid unauthorized information extraction by fooling the unauthorized software.…”
Section: Introduction (mentioning, confidence: 99%)

“…On the other hand, unrestricted attacks use large, unconstrained (not norm-bounded) perturbations that manipulate the image to create photorealistic adversarial instances. In this case, the intent is not to restrict the transformations on pixels but to limit the human perception that a transformation has been applied [17,42,43].…”
Section: Introduction (mentioning, confidence: 99%)
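
As a tiny numeric illustration of that point (not taken from the cited papers), the snippet below applies a global color cast to a stand-in image and prints its pixel-norm magnitude; the image and shift values are arbitrary assumptions. The perturbation dwarfs typical noise budgets such as 8/255, yet a uniform color cast can still look like a natural photograph.

# Illustration only: a global color cast is "large" by pixel-norm standards.
import numpy as np

image = np.random.default_rng(0).random((224, 224, 3))                # stand-in image in [0, 1]
recolored = np.clip(image + np.array([0.15, 0.0, -0.15]), 0.0, 1.0)   # global warm color cast

delta = recolored - image
print("L_inf distance:", np.abs(delta).max())     # ~0.15, far above typical 8/255 noise budgets
print("L_2 distance:  ", np.linalg.norm(delta))   # large, because every pixel changes
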
“…Such methods require either access to the targeted network architecture [2] or additional resources like pretrained networks to perform image segmentation [3], colorization and style transfer [4]. In some cases, it is necessary to train neural networks from scratch for each image in order to find effective adversarial perturbations [5].…”
Section: Introduction and Related Work (mentioning, confidence: 99%)