2024
DOI: 10.21203/rs.3.rs-3887563/v1
Preprint

Robust graph representation learning via out-of-distribution detection approach

Esmaeil Bastami, Hadi Soltanizadeh, Mohammad Rahmanimanesh, et al.

Abstract: Graph neural networks (GNNs) are powerful models capable of learning from graph-structured data and performing a variety of tasks. GNNs are, however, susceptible to poisoning attacks, in which a sophisticated attacker injects malicious nodes or edges into the graph topology to degrade model performance. Existing defense mechanisms, such as adversarial training, are ineffective at improving the robustness of GNN models, and fake nodes can be crafted to deceive traditional GNN functions. In this paper, we prop…
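The abstract's premise — treating injected (poisoned) nodes as out-of-distribution samples — can be illustrated with a minimal sketch. This is not the authors' method (the abstract is truncated above); it is a generic illustration, assuming node embeddings of benign nodes roughly follow a Gaussian, that flags nodes whose embeddings lie far from the clean distribution via a Mahalanobis-distance score. All function names, the synthetic data, and the 99th-percentile threshold are illustrative assumptions.

```python
import numpy as np

def fit_ood_detector(embeddings):
    """Fit a Gaussian to clean node embeddings: mean + regularized inverse covariance."""
    mu = embeddings.mean(axis=0)
    cov = np.cov(embeddings, rowvar=False) + 1e-6 * np.eye(embeddings.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_scores(embeddings, mu, cov_inv):
    """Squared Mahalanobis distance of each embedding from the clean distribution."""
    d = embeddings - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)

# Synthetic stand-ins for GNN node embeddings (hypothetical data).
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(200, 8))    # embeddings of benign nodes
poisoned = rng.normal(6.0, 1.0, size=(5, 8))   # injected malicious nodes, far OOD

mu, cov_inv = fit_ood_detector(clean)
scores = mahalanobis_scores(np.vstack([clean, poisoned]), mu, cov_inv)
threshold = np.quantile(scores[:200], 0.99)    # calibrate threshold on clean nodes only
flagged = np.where(scores > threshold)[0]      # indices of suspected poisoned nodes
```

In practice the embeddings would come from an intermediate GNN layer rather than raw features, and a learned OOD score could replace the Gaussian assumption; the calibration-on-clean-data step is the part any such detector shares.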

Cited by 0 publications · References 54 publications (81 reference statements)