2021
DOI: 10.1007/s13042-020-01240-1

Generating transferable adversarial examples based on perceptually-aligned perturbation

Cited by 7 publications (3 citation statements); references 24 publications.

“…Deep learning has achieved significant success on Euclidean data, [1][2][3][4][5] such as image and video data. [6][7][8] In recent years, there has been an increasing number of applications on non-Euclidean data, which are represented as graphs with complex relationships among individuals, for example, graph-based learning systems in e-commerce, biological interaction networks, citation networks, and so on.…”
Section: Introduction (mentioning)
confidence: 99%
“…Most of the existing methods [26][27][28][29][30][31][32] only judge whether an image has been tampered with. Furthermore, some methods that provide localization capabilities often rely on heavy, time-consuming pre-/post-processing, for example, patch extraction, [33][34][35][36][37] expectation-maximization, [38][39][40][41][42] feature clustering, [43][44][45][46] and so forth. However, the time complexity of these methods is too high, and the insufficient utilization of contextual feature information in the image leads to poor detection performance.…”
Section: Introduction (mentioning)
confidence: 99%
“…An attack with black-box constraints is often modeled around querying the model on inputs and observing the labels or confidence scores. Therefore, studies on black-box attacks may be more practical than white-box ones in real-world cases, and researchers have shown increased interest in black-box attacks. 26,27 In brief, numerous attempts have been made to realize black-box attacks, such as gradient-estimation-based, 28 query-based, 29,30 or transferability-of-AEs-based 31,32 attacks.…”
Section: Introduction (mentioning)
confidence: 99%
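
To make the query-based black-box setting described above concrete, the sketch below shows a greedy, score-based attack: the attacker only submits inputs and reads back confidence scores, keeping any single-coordinate perturbation that lowers the score of the true class. This is a minimal illustration in the spirit of query-based attacks, not the method of the cited paper; `query_model` and the toy linear victim model are hypothetical stand-ins introduced here for demonstration.

```python
# Minimal sketch of a score-based (query-based) black-box attack.
# Assumption: the attacker can only query the model and observe confidence scores.
import numpy as np

rng = np.random.default_rng(0)
_W = rng.normal(size=(10, 28 * 28))  # toy linear "victim" model, stand-in for a real API

def query_model(x):
    """Hypothetical black-box query: returns softmax confidence scores for an input."""
    logits = _W @ x.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

def blackbox_attack(x, true_label, epsilon=0.1, max_queries=2000):
    """Greedy coordinate-wise search: keep any single-pixel change that lowers
    the confidence assigned to the true label, stopping once the label flips."""
    x_adv = x.copy()
    best = query_model(x_adv)[true_label]
    for idx in rng.permutation(x_adv.size)[:max_queries]:
        for step in (epsilon, -epsilon):
            cand = x_adv.copy()
            cand.flat[idx] = np.clip(cand.flat[idx] + step, 0.0, 1.0)
            score = query_model(cand)[true_label]
            if score < best:          # the change hurt the true class: keep it
                x_adv, best = cand, score
                break
        if query_model(x_adv).argmax() != true_label:
            break                     # victim no longer predicts the true label
    return x_adv

# Usage: attack a random input whose initial prediction is treated as the true label.
x0 = rng.random(28 * 28)
y0 = int(query_model(x0).argmax())
x_adv = blackbox_attack(x0, y0)
print("prediction changed:", int(query_model(x_adv).argmax()) != y0)
```

Gradient-estimation-based and transferability-based attacks mentioned in the same excerpt replace this random search with, respectively, finite-difference gradient estimates from queries or adversarial examples crafted on a surrogate model and transferred to the victim.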