2021
DOI: 10.7717/peerj-cs.693

Derivative-free optimization adversarial attacks for graph convolutional networks

Abstract: In recent years, graph convolutional networks (GCNs) have emerged rapidly due to their excellent performance in graph data processing. However, recent research shows that GCNs are vulnerable to adversarial attacks. An attacker can maliciously modify edges or nodes of the graph to mislead the model's classification of the target nodes, or even degrade the model's overall classification performance. In this paper, we first propose a black-box adversarial attack framework based on derivative-free …

Cited by 6 publications (15 citation statements)
References 10 publications
“…In black-box attack scenarios, however, the gradients are unknown to attackers. There are several approaches [52], [101], [89] designed for solving the optimisation problem without using gradients, such as reinforcement learning [89] and genetic algorithms [52]. Specifically, attackers utilising reinforcement learning algorithms can define graph perturbations as executing actions, then design rewards based on their attack goals [52].…”
Section: Robustness of GNNs
confidence: 99%
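The gradient-free attack idea described in the statement above can be illustrated with a minimal random-search sketch over edge flips. Everything here is a hypothetical stand-in: `black_box_loss` plays the role of query-only access to a victim GCN's loss on a target node, and the toy graph and parameters are invented for illustration, not taken from the paper.

```python
import numpy as np

# Toy stand-in for query-only access to a victim model's loss on a target
# node (hypothetical placeholder, not a real GCN).
def black_box_loss(adj):
    # Higher when the (0, 3) edge exists -- purely illustrative.
    return float(adj[0, 3]) + 0.1 * adj.sum()

def random_search_attack(adj, budget, trials, rng):
    """Derivative-free random search: sample random symmetric edge flips
    within a perturbation budget and keep the candidate adjacency matrix
    that maximizes the black-box loss."""
    n = adj.shape[0]
    best_adj, best_loss = adj, black_box_loss(adj)
    for _ in range(trials):
        cand = adj.copy()
        for _ in range(budget):
            i, j = rng.integers(0, n, size=2)
            if i != j:
                cand[i, j] = cand[j, i] = 1 - cand[i, j]  # flip edge (i, j)
        loss = black_box_loss(cand)
        if loss > best_loss:
            best_adj, best_loss = cand, loss
    return best_adj, best_loss

rng = np.random.default_rng(0)
adj = np.zeros((5, 5), dtype=int)       # toy 5-node graph with no edges
perturbed, loss = random_search_attack(adj, budget=2, trials=50, rng=rng)
```

Random search is the crudest member of the derivative-free family; the papers cited above swap in more sample-efficient choices (reinforcement learning, genetic algorithms) while keeping the same query-only interface.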
“…These approaches may require additional effort to determine the gradients (e.g., the time cost for the Topology attack using a surrogate model can be six to ten times that of a simple gradient-based approach [104]). Meanwhile, non-gradient methods are also widely employed in many studies [52], [101]. Most of these methods do not require knowledge of gradients, which makes them more practical when dealing with situations in which an attacker has limited knowledge.…”
Section: )
confidence: 99%
“…its inputs (see § D). For nondifferentiable models, one can use derivative-free optimization (Yang & Long, 2021).…”
Section: Adversarial Robustness
confidence: 99%
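As a concrete illustration of the derivative-free optimization the statement above points to, the sketch below minimizes a nondifferentiable objective with Nelder-Mead, which needs only function evaluations. The objective is an invented toy standing in for a non-differentiable model's loss with respect to its inputs; it is not the method of the cited paper.

```python
import numpy as np
from scipy.optimize import minimize

# Nondifferentiable toy objective (kinks at the optimum), a stand-in for
# the loss of a non-differentiable model w.r.t. its inputs.
def objective(x):
    return abs(x[0] - 1.0) + abs(x[1] + 2.0)

# Nelder-Mead queries only function values -- no gradients required.
result = minimize(objective, x0=np.zeros(2), method="Nelder-Mead",
                  options={"xatol": 1e-6, "fatol": 1e-6})
```

Because the solver never asks for a gradient, the same call works whether the objective comes from a differentiable network or a black-box combinatorial solver; only query cost changes.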
“…Fortunately, most neural combinatorial solvers rely on GNNs and therefore this does not impose an issue. However, even if assessing a non-differentiable model one could revert to derivative-free optimization (Yang & Long, 2021).…”
Section: Projected Gradient Descent (PGD)
confidence: 99%