2022 IEEE 38th International Conference on Data Engineering (ICDE)
DOI: 10.1109/icde53745.2022.00081
Black-box Adversarial Attack and Defense on Graph Neural Networks

Abstract: Graph neural networks (GNNs) have achieved great success on various graph tasks. However, recent studies have revealed that GNNs are vulnerable to adversarial attacks, including topology modifications and feature perturbations. Despite this fruitful progress, existing attackers either require node labels and GNN parameters to optimize a bi-level problem, or cannot cover both topology modifications and feature perturbations, and are therefore not practical, efficient, or effective. In this paper, we propose a black-box a…

Cited by 19 publications (9 citation statements)
References 65 publications
“…First, heuristic attackers intuitively increase the co-occurrence of some selected items and the target item i in D_f via some heuristic rules, enhancing the popularity of item i [31]. However, such methods cannot directly optimize the attack objectives, leading to poor attack performance [19,30,32]. Besides, to maximize the attack objectives, gradient-based methods directly optimize the interactions of fake users [19,30,32] while neural attackers optimize neural networks to generate fake user interactions [26,34,35,51].…”
Section: Related Work (mentioning, confidence: 99%)
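To make the heuristic strategy in the quote above concrete, here is a minimal sketch of a bandwagon-style co-occurrence attack on a binary user-item interaction matrix. The function and its parameters are hypothetical illustrations, not the method of any cited work.

```python
import numpy as np

def heuristic_fake_users(interactions, target_item, n_fake=50, n_filler=20, rng=None):
    """Bandwagon-style heuristic: each fake user interacts with the target item
    plus a random subset of the most popular items, raising the target's
    co-occurrence with popular items and hence its apparent popularity."""
    rng = rng or np.random.default_rng(0)
    popularity = interactions.sum(axis=0)
    popular = np.argsort(popularity)[::-1][:n_filler]  # most popular filler items
    fake = np.zeros((n_fake, interactions.shape[1]), dtype=interactions.dtype)
    fake[:, target_item] = 1                           # every fake user hits the target
    for row in fake:
        row[rng.choice(popular, size=n_filler // 2, replace=False)] = 1
    return np.vstack([interactions, fake])             # poisoned interaction matrix
```

Because the fake rows follow fixed rules rather than an explicit objective, the attack is cheap to run but, as the quote notes, cannot directly optimize attack performance.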
“…However, such methods cannot directly optimize the attack objectives, leading to poor attack performance [19,30,32]. Besides, to maximize the attack objectives, gradient-based methods directly optimize the interactions of fake users [19,30,32] while neural attackers optimize neural networks to generate fake user interactions [26,34,35,51]. Their optimization process typically utilizes the recommendations of the victim model or a surrogate model for gradient descent [44,63].…”
Section: Related Work (mentioning, confidence: 99%)
“…Bandit [86] extended this gradient estimation by embedding both the spatial prior (neighboring pixels have similar gradients) and the temporal prior (the gradients between consecutive iterations are similar) to obtain more consistent gradients. N Attack [112] extended the NES attack by restricting the vectors sampled from the Gaussian distribution to the feasible space (i.e., the allowed search space of adversarial perturbations). The AdvFlow method [134] further extended N Attack by replacing the Gaussian distribution with a more complex distribution captured by a normalizing flow model [175] pre-trained on benign data, so that the generated adversarial sample is closer to the benign sample.…”
Section: Score-based Attack (mentioning, confidence: 99%)
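The NES gradient estimation that the quoted passage builds on fits in a few lines. Here loss_fn stands in for a query to the black-box model and is an assumed interface; this is a sketch, not the exact procedure of [86], [112], or [134].

```python
import numpy as np

def nes_gradient(loss_fn, x, sigma=1e-3, n_samples=50, rng=None):
    """Estimate the gradient of a black-box loss at x via antithetic Gaussian
    sampling: E[u * f(x + sigma*u)] approximates sigma * grad f(x).
    N Attack additionally restricts x + sigma*u to the feasible perturbation
    space (e.g., by clipping or projection) before querying the model."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        grad += u * (loss_fn(x + sigma * u) - loss_fn(x - sigma * u))
    return grad / (2 * sigma * n_samples)
```

Bandit's priors can be layered on top of this estimator by smoothing the estimate spatially and averaging it with the estimate from the previous iteration.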
“…2) Preprocessing Filter: When the attack budget is constrained and fixed, it is reasonable to focus on the nodes that the graph model can classify correctly, so as to maximize the damage to the model's overall performance [21]. Therefore, it is desirable to first identify the nodes that are already classified correctly.…”
Section: Statistics-Robustness Relationship Analysis (mentioning, confidence: 99%)
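A minimal sketch of such a preprocessing filter, assuming a standard PyTorch GNN whose forward model(features, adj) returns per-node class logits (the signature is an assumption):

```python
import torch

def correctly_classified_nodes(model, features, adj, labels):
    """Return indices of nodes the model already classifies correctly, so a
    fixed attack budget is spent only where a prediction can still be flipped;
    already-misclassified nodes cannot contribute further accuracy damage."""
    model.eval()
    with torch.no_grad():
        preds = model(features, adj).argmax(dim=1)
    return torch.nonzero(preds == labels).squeeze(1)
```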