2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01488
Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity

Cited by 53 publications (21 citation statements)
References 28 publications
“…Moreover, its experimental results show that AEs produced by AdvCam are well camouflaged and highly concealed in both digital- and physical-world scenarios while still being effective in deceiving state-of-the-art DNN image detectors. SSAH (Luo et al., 2022) crafts adversarial examples and disguises the adversarial noise under a low-frequency constraint. This method limits the adversarial perturbations to the high-frequency components of the image so that they remain barely perceptible to humans.…”
Section: Related Work
confidence: 99%
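The low-frequency constraint attributed to SSAH in this statement can be illustrated with a short sketch. The code below is an assumption about the general idea, not the authors' implementation; the function name low_frequency_penalty and the use of PyWavelets' Haar DWT are illustrative choices. It penalizes changes in the low-frequency (approximation) sub-band, which nudges an attack's optimizer to concentrate the perturbation in the high-frequency detail sub-bands where humans are less sensitive.

# Minimal sketch (assumed, not the authors' code): penalize the low-frequency
# part of a perturbation so it is pushed toward high-frequency components.
import numpy as np
import pywt

def low_frequency_penalty(clean: np.ndarray, adversarial: np.ndarray) -> float:
    """L1 distance between the low-frequency (approximation) sub-bands of the
    clean and adversarial images, computed per channel (HxWxC arrays in [0, 1])."""
    penalty = 0.0
    for c in range(clean.shape[-1]):
        cA_clean, _ = pywt.dwt2(clean[..., c], "haar")   # approximation + detail sub-bands
        cA_adv, _ = pywt.dwt2(adversarial[..., c], "haar")
        penalty += float(np.abs(cA_clean - cA_adv).sum())
    return penalty

In an attack loop, this penalty would be added to the adversarial loss, so gradient steps that alter the approximation sub-band are discouraged and the perturbation ends up in the detail sub-bands.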
“…Adversaries often need to trade off attack strength against the imperceptibility of perturbations, which has inspired many works [39], [7], [37], [23] to seek a reasonable constraint for evaluating imperceptibility. However, current attacks for ODs [63], [30], [33], [9], [11] clip perturbations at the image level (i.e., they only control the maximum magnitude of the perturbation), which implies potential uncontrollability (i.e., the position and distribution of the learned perturbations are random [37], [46]). To circumvent this problem, [15] factorizes perturbations into magnitude and position vectors, and [37] limits perturbations in frequency space from a global, image-based viewpoint against image classifiers. They implicitly leverage the models' attention to guide the perturbations, which imposes a heavy learning burden on the networks and produces suboptimal results.…”
Section: Related Work
confidence: 99%
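The "image-level" clipping criticized in this statement is, in most attacks, an L-infinity projection. The sketch below is a generic illustration under that assumption (it is not code from any cited work): it bounds only the maximum per-pixel magnitude of the perturbation, leaving its spatial position and distribution unconstrained.

# Minimal sketch of image-level (L-infinity) clipping as commonly used in
# attacks; the bound controls magnitude only, not where perturbations appear.
import torch

def linf_project(adv: torch.Tensor, clean: torch.Tensor, eps: float) -> torch.Tensor:
    """Project the adversarial image back into the eps-ball around the clean
    image and keep pixel values in the valid [0, 1] range."""
    delta = torch.clamp(adv - clean, min=-eps, max=eps)
    return torch.clamp(clean + delta, min=0.0, max=1.0)

Because only the maximum magnitude is constrained, nothing in this projection controls where in the image the perturbation accumulates, which is the uncontrollability the citing work points out; frequency-space or magnitude/position-factorized constraints are proposed as alternatives.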
“…Currently, most adversarial attacks are deployed in the visible-light field [16][17][18][19]. Szegedy et al. [4] first discovered that well-trained DNNs are susceptible to slight perturbations, which led to a multitude of related studies [20][21][22].…”
Section: Adversarial Attacks in the Visible Light Field
confidence: 99%