2018
DOI: 10.48550/arxiv.1812.10528
Preprint
Adversarial Attack and Defense on Graph Data: A Survey

Abstract: Deep neural networks (DNNs) have been widely applied in applications involving image, text, audio, and graph data. However, recent studies have shown that DNNs are vulnerable to adversarial attacks. Although several works study adversarial attack and defense in domains such as image and text processing, it is difficult to transfer that knowledge directly to graph data because of its representation challenges. Given the importance of graph analysis, an increasing number of works have started to analy…

Cited by 52 publications (72 citation statements)
References 25 publications
“…There has been increasing research interest in adversarial attacks on GNNs recently. Detailed expositions of existing literature are made available in a couple of survey papers [12,23]. Given the heterogeneous nature of diverse graph structured data, there are numerous adversarial attack setups for GNN models.…”
Section: Related Work
confidence: 99%
“…How to cope with misbehaved nodes during neighbor aggregation in GNNs. The input node features X, often extracted based on heuristic methods such as TF-IDF, Bag-of-Words, Doc2Vec, etc., are susceptible to such misbehavior as adversarial attacks, camouflages [12,78], or simply imprecise feature selection. Consequently, the numerical embedding of a central node tends to be assimilated by misbehaved neighboring nodes.…”
Section: Problem Scope and Challenges
confidence: 99%
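The assimilation effect described in this excerpt can be sketched with a toy example. The features and the mean-aggregation step below are hypothetical illustrations, not taken from the surveyed papers: a single misbehaved neighbor with out-of-distribution features pulls the central node's aggregated embedding away from its clean value.

```python
import numpy as np

# Hypothetical toy features for a central node and its neighbors
# (e.g., rows of a TF-IDF or Bag-of-Words feature matrix X).
center = np.array([1.0, 0.0, 0.0])
clean_neighbors = np.array([[0.9, 0.1, 0.0],
                            [0.8, 0.2, 0.0]])

# One GNN-style mean aggregation step over the clean neighborhood.
clean_embed = np.mean(np.vstack([center[None, :], clean_neighbors]), axis=0)

# A misbehaved (e.g., adversarially perturbed or camouflaged) neighbor
# whose features lie far from the neighborhood distribution.
attacker = np.array([0.0, 0.0, 10.0])
attacked_embed = np.mean(
    np.vstack([center[None, :], clean_neighbors, attacker[None, :]]), axis=0)

# The central node's embedding is pulled toward the attacker's features.
shift = np.linalg.norm(attacked_embed - clean_embed)
print(shift)
```

With these toy numbers, a single injected neighbor moves the aggregated embedding by more than the entire spread of the clean neighborhood, which is the vulnerability the quoted passage points at.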
“…Adversarial learning for networks includes three types of studies: attack, defense, and certifiable robustness [37,21,5]. Adversarial attacks aim to maximally degrade the model performance through perturbing the input data, which includes the modification of node attributes or changes of the network topology.…”
Section: Related Workmentioning
confidence: 99%
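The two perturbation types named in this excerpt (node-attribute modification and topology changes) can be illustrated for the topology case. The graph and the edge-flip helper below are a minimal hypothetical sketch, assuming an undirected graph stored as a symmetric adjacency matrix and a budget of one modified edge:

```python
import numpy as np

# Toy undirected graph on three nodes: a path 0 - 1 - 2,
# stored as a symmetric adjacency matrix (hypothetical example).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

def flip_edge(A, u, v):
    """Structural perturbation: add the edge (u, v) if it is absent,
    remove it if present, keeping the matrix symmetric."""
    A = A.copy()
    A[u, v] = A[v, u] = 1.0 - A[u, v]
    return A

# One-edge perturbation within a budget of a single modified edge;
# an attacker would pick (u, v) to maximally degrade model performance.
A_perturbed = flip_edge(A, 0, 2)
print(A_perturbed)
```

In the attack setups the excerpt summarizes, the attacker searches over such budgeted flips (and/or attribute edits) for the perturbation that most degrades the target model; the sketch shows only the perturbation primitive, not the search.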