2022
DOI: 10.1609/aaai.v36i4.20357
Robust Heterogeneous Graph Neural Networks against Adversarial Attacks

Abstract: Heterogeneous Graph Neural Networks (HGNNs) have drawn increasing attention in recent years and achieved outstanding performance in many tasks. However, despite their wide use, there is currently no understanding of their robustness to adversarial attacks. In this work, we first systematically study the robustness of HGNNs and show that they can easily be fooled by adding an adversarial edge between the target node and a large-degree node (i.e., a hub). Furthermore, we show two key reasons for such vulnerability …
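The vulnerability the abstract describes can be illustrated with a toy mean-aggregation step (the kind used inside many GNN layers). This is a hedged sketch of the general mechanism, not the paper's experimental setup: connecting the target to a hub whose features differ sharply from its benign neighborhood drags the target's aggregated representation toward the hub.

```python
import numpy as np

# Toy illustration: mean aggregation over neighbors
# (row-normalized A @ X), as in many message-passing GNN layers.
def aggregate(adj, feats):
    """Mean of each node's neighbor features."""
    deg = adj.sum(axis=1, keepdims=True)
    return (adj @ feats) / np.maximum(deg, 1)

# 4-node graph: node 0 is the target, node 3 is a high-degree "hub"
# whose features differ sharply from the target's neighborhood.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 1],
                [0, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)
feats = np.array([[1.0, 0.0],
                  [1.0, 0.0],
                  [1.0, 0.0],
                  [0.0, 5.0]])  # hub features dominate in magnitude

clean = aggregate(adj, feats)[0]

# Adversarial perturbation: add a single edge (0, 3) to the hub.
adv = adj.copy()
adv[0, 3] = adv[3, 0] = 1.0
attacked = aggregate(adv, feats)[0]

print(clean)     # [1. 0.]  -- target aggregates only benign neighbors
print(attacked)  # [0.5 2.5] -- hub features now pull the representation
```

A single inserted edge halves the benign signal and injects the hub's out-of-distribution features, which is why hub-directed perturbations are disproportionately damaging.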

Cited by 27 publications (8 citation statements). References 18 publications.
“…Recently, some studies have been conducted on developing robust GNN models [27][28][29][30][31][32]. This section discusses the use of robustness during embedding graphs, including adversarial attacks and defenses.…”
Section: Related Work
confidence: 99%
“…In this approach, the representation of the central node is iteratively learned and enhanced by amalgamating and transforming the representations of its adjacent nodes. These networks play a significant role in network analysis and learning [15][16][17]. Among various GNN models, the Graph Convolutional Network (GCN), introduced by Kipf et al. [18], is particularly noteworthy.…”
Section: B. Network Analysis and Learning Based on GNN
confidence: 99%
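The iterative neighbor-aggregation scheme this citing work describes can be sketched with the standard GCN propagation rule of Kipf and Welling, H' = σ(D^{-1/2}(A + I)D^{-1/2} H W). The weights below are random placeholders chosen only to show the shapes involved; this is a minimal NumPy sketch, not a trained model.

```python
import numpy as np

# One GCN layer: symmetric-normalized aggregation with self-loops,
# followed by a linear transform and ReLU.
def gcn_layer(adj, feats, weight):
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(a_hat.sum(axis=1) ** -0.5)
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt       # D^{-1/2} A_hat D^{-1/2}
    return np.maximum(norm @ feats @ weight, 0)  # ReLU activation

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]], dtype=float)
feats = rng.normal(size=(3, 4))                  # 3 nodes, 4 features
out = gcn_layer(adj, feats, rng.normal(size=(4, 2)))
print(out.shape)  # (3, 2)
```

Stacking such layers is what makes each node's representation depend on progressively larger neighborhoods, which is also why adversarial edges propagate their influence so effectively.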
“…In particular, the threats are (1) these representations reveal sensitive attributes, whether or not they explicitly exist in the input text, and (2) the representations can be partially recovered via generative models. In our recent paper (Zhan et al. 2023), we propose a GRA to recover a graph's adjacency matrix from three types of representation outputs, i.e., representation outputs from graph convolutional networks, graph attention networks, and SNNs. We find that SNN outputs obtain the highest precision and AUC on five real-world networks.…”
Section: Research Plan
confidence: 99%
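The general intuition behind recovering an adjacency matrix from representation outputs can be sketched simply: since neighbor aggregation makes connected nodes' outputs similar, thresholding a pairwise-similarity matrix partially recovers edges. This is only an illustration of the idea under that assumption, not the actual GRA of Zhan et al.; the threshold and cosine similarity are hypothetical choices.

```python
import numpy as np

# Hedged sketch: threshold cosine similarity of node representations
# to guess which node pairs are connected.
def reconstruct_adjacency(reps, threshold=0.9):
    unit = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sim = unit @ unit.T                # pairwise cosine similarity
    adj = (sim > threshold).astype(int)
    np.fill_diagonal(adj, 0)           # no self-loops
    return adj

# Representations where nodes 0-1 and 2-3 ended up similar,
# as aggregation over shared neighborhoods tends to produce.
reps = np.array([[1.0, 0.0],
                 [0.99, 0.1],
                 [0.0, 1.0],
                 [0.1, 0.99]])
adj_hat = reconstruct_adjacency(reps)
print(adj_hat)  # guesses edges (0,1) and (2,3)
```

Real attacks are more sophisticated (and, per the excerpt, generative), but even this naive similarity baseline shows why representation outputs leak structural information.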