Proceedings of the Web Conference 2020
DOI: 10.1145/3366423.3380149
Adversarial Attacks on Graph Neural Networks via Node Injections: A Hierarchical Reinforcement Learning Approach

Abstract: Graph Neural Networks (GNNs) have achieved immense success in node classification through their power to exploit the topological structure of graph data across many domains, including social media, e-commerce, and FinTech. However, recent studies show that GNNs are vulnerable to attacks aimed at adversely impacting their performance, e.g., on the node classification task. Existing studies of adversarial attacks on GNNs focus primarily on manipulating the connectivity between existing nodes, a task that requires greater effo…

Cited by 100 publications (92 citation statements)
References 36 publications
“…Graph data are ubiquitous in the real world. Recently, graph convolutional neural networks (GCNNs) have achieved state-of-the-art performance on many graph mining tasks [10,15,34], and much effort has been devoted to them [13,25,26,33,35,36]. In general, these GCNNs can be divided into two categories: spectral-based GCNNs and spatial-based GCNNs.…”
Section: Graph Convolutional Neural Network
confidence: 99%
“…In [39], targeting GCNs, Wang et al. used a greedy algorithm to add fake nodes to a graph, conducting fake-node attacks. Another recent work [30] proposed node injection attacks that poison the training graph used by GNNs in order to reduce their classification accuracy on the unlabeled nodes. To achieve this, they used a deep hierarchical reinforcement learning based method to launch these attacks.…”
Section: Adversarial Machine Learning On Graphs
confidence: 99%
“…In [39], the authors considered an attack on graph convolutional networks (GCNs) by adding fake nodes to the graph. Targeting GNNs, [30] formulated the fake node injection problem as a Markov decision process and utilized a Q-learning algorithm to address this problem. However, they did not explore such an attack (adding new nodes) on collective classification methods.…”
Section: Introduction
confidence: 99%
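The Markov-decision-process framing described in this citation can be illustrated with a toy sketch. Nothing below comes from [30] itself: the majority-vote neighbor classifier standing in for a GNN surrogate, the single-edge injection action space, and the one-step bandit-style Q update are all simplifying assumptions for illustration — the paper's actual method uses a deep hierarchical Q-network over sequences of injections.

```python
import random

def neighbor_vote(adj, labels, node):
    # Toy surrogate classifier: predict a node's label by majority
    # vote over its neighbors (stand-in for a trained GNN).
    votes = {}
    for nb in adj[node]:
        votes[labels[nb]] = votes.get(labels[nb], 0) + 1
    return max(votes, key=votes.get) if votes else labels[node]

def accuracy(adj, labels, targets):
    correct = sum(neighbor_vote(adj, labels, t) == labels[t] for t in targets)
    return correct / len(targets)

def q_learn_injection(adj, labels, targets, fake_label,
                      episodes=200, alpha=0.5, eps=0.2, seed=0):
    # Action = which existing victim node the injected fake node links to;
    # reward = drop in the surrogate's accuracy on the target nodes.
    rng = random.Random(seed)
    q = {a: 0.0 for a in targets}
    base = accuracy(adj, labels, targets)
    for _ in range(episodes):
        # epsilon-greedy action selection
        a = rng.choice(targets) if rng.random() < eps else max(q, key=q.get)
        # simulate injecting one fake node wired to victim `a`
        fake = max(adj) + 1
        adj2 = {n: list(nbs) for n, nbs in adj.items()}
        adj2[fake] = [a]
        adj2[a] = adj2[a] + [fake]
        labels2 = dict(labels)
        labels2[fake] = fake_label
        r = base - accuracy(adj2, labels2, targets)
        q[a] += alpha * (r - q[a])  # one-step Q update toward the reward
    return max(q, key=q.get), q
```

On a small hand-built graph, the learned Q-values rank injection targets by how much a single fake-node edge degrades the surrogate's accuracy; the real attack extends this to sequential, multi-node injections with a learned state representation.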
“…In view of this gap, very recent efforts [17,19], including the KDD-CUP 2020 competition, have been devoted to adversarial attacks on GNNs under the setting of graph injection attack (GIA). Specifically, the GIA task in KDD-CUP 2020 is formulated as follows:…”
Section: Introduction
confidence: 99%
“…Table 1 summarizes the differences. First, NIPA [17] and AFGSM [19] are developed under the poisoning setting, which requires re-training the defense models for each attack. In contrast, TDGIA follows KDD-CUP 2020 in using the evasion attack setting, where different attacks are evaluated against the same set of models and weights.…”
Section: Introduction
confidence: hi