Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence 2019
DOI: 10.24963/ijcai.2019/669

Adversarial Examples for Graph Data: Deep Insights into Attack and Defense

Abstract: Graph deep learning models, such as graph convolutional networks (GCN), achieve remarkable performance on tasks over graph data. Similar to other types of deep models, graph deep learning models often suffer from adversarial attacks. However, compared with non-graph data, the discrete features, graph connections, and different definitions of imperceptible perturbations bring unique challenges and opportunities for adversarial attacks and defenses on graph data. In this paper, we propose both attack and defense techniques.

Cited by 267 publications (286 citation statements: 2 supporting, 284 mentioning, 0 contrasting) · References 5 publications

“…On the other hand, proposing a perturbation filtering mechanism to reduce the size of the multi-node candidate perturbation set is also an effective way. In addition, our method does not consider the constraints of attributed graphs [33], such as the attribute-based node similarity constraint [34] and the attribute co-occurrence constraint [17]. Parallel multi-node adversarial attacks on attributed graphs and Heterogeneous Information Networks (HINs) [35] still need further exploration.…”
Section: Adversarial Attack on Graphs
confidence: 99%
“…In a follow-up study, Zügner et al. [29] study the discreteness of graph data and solve the bilevel problem of poisoning attacks using meta gradients. Wu et al. [22] introduce integrated gradients, which guide the attack to perturb certain features or edges while still benefiting from parallel computation. Additionally, several heuristic methods have been proposed to poison GNNs [1,8,16], revealing the vulnerability of GNNs in different graph analysis tasks.…”
Section: Related Work
confidence: 99%
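
To make the integrated-gradients attack concrete, the sketch below approximates such scores for the entries of an adjacency matrix. It is a minimal illustration, not the authors' reference implementation: `model.grad_loss_wrt_adj` is a hypothetical helper standing in for whatever routine returns the gradient of the target node's loss with respect to the adjacency matrix.

```python
import numpy as np

def integrated_gradients_scores(model, A, X, target_node, steps=20):
    """Approximate integrated-gradients scores for the entries of the
    adjacency matrix A with respect to the model's loss at target_node.

    Entries are scored against an all-zeros baseline, so the score of an
    existing edge estimates the effect of removing it; scoring edge
    insertions would use an all-ones baseline instead.
    """
    baseline = np.zeros_like(A)
    grad_sum = np.zeros_like(A)
    for k in range(1, steps + 1):
        # Interpolate along the straight-line path from baseline to A.
        A_interp = baseline + (k / steps) * (A - baseline)
        # Hypothetical helper: d loss(target_node) / d A at A_interp.
        grad_sum += model.grad_loss_wrt_adj(A_interp, X, target_node)
    # Riemann-sum approximation of the integrated-gradients path integral.
    return (A - baseline) * grad_sum / steps
```

An attacker would then flip the entries with the largest scores, subject to a perturbation budget; because the scores for all entries come out of the same interpolated forward/backward passes, the computation parallelizes naturally, which is the advantage the excerpt above highlights.
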
“…In addition, preprocessing the input data is an intuitive way to reduce the effect of adversarial examples. Wu et al. [22] inspect the input graph and recover potential adversarial examples using Jaccard similarity. Entezari et al. [9] demonstrate that attackers only affect the high-rank singular components of the graph, and further propose a low-rank approximation method to reduce the adversarial effects.…”
Section: Related Work
confidence: 99%
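
As an illustration of this Jaccard-similarity defense, here is a minimal sketch assuming binary node features (as in common citation-network benchmarks): edges whose endpoints share too few features are treated as likely adversarial and pruned before training. The function names and the default threshold of 0 are assumptions for illustration, not the authors' released code.

```python
import numpy as np
import scipy.sparse as sp

def jaccard_similarity(a, b):
    """Jaccard similarity between two binary feature vectors."""
    intersection = np.count_nonzero(a * b)
    union = np.count_nonzero(a) + np.count_nonzero(b) - intersection
    return intersection / union if union > 0 else 0.0

def prune_dissimilar_edges(adj, features, threshold=0.0):
    """Drop edges whose endpoint features have Jaccard similarity <=
    threshold. adj is a symmetric scipy.sparse adjacency matrix and
    features a dense 0/1 node-feature matrix; returns a cleaned copy."""
    adj = sp.lil_matrix(adj)
    rows, cols = adj.nonzero()
    for u, v in zip(rows, cols):
        if u < v and jaccard_similarity(features[u], features[v]) <= threshold:
            adj[u, v] = 0
            adj[v, u] = 0
    return sp.csr_matrix(adj)
```

The rationale is that clean graphs are largely homophilous, so an edge between two nodes with no feature overlap is statistically suspicious; pruning such edges removes many adversarial perturbations while touching few legitimate ones.
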
“…Due to limited space, we refer readers to a recent study [2] for a comprehensive literature survey of adversarial attacks. Some notable papers [4,26,46,58,59] have appeared since that survey was published in 2018, but we did not notice any adversarial attack paper on action recognition applications. One reason is that adversarial attacks on video-level classifiers are more complicated due to the availability of multiple modes of information.…”
Section: Adversarial Attack
confidence: 99%