2021
DOI: 10.48550/arxiv.2106.13427
Preprint

Reliable Graph Neural Network Explanations Through Adversarial Training

Donald Loveland,
Shusen Liu,
Bhavya Kailkhura
et al.

Abstract: Graph neural network (GNN) explanations have largely been facilitated through post-hoc introspection. While this has been deemed successful, many post-hoc explanation methods have been shown to fail in capturing a model's learned representation. Due to this problem, it is worthwhile to consider how one might train a model so that it is more amenable to post-hoc analysis. Given the success of adversarial training in the computer vision domain to train models with more reliable representations, we propose a simi…
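The abstract invokes adversarial training as the core training-time ingredient. As a hedged illustration only (the paper's GNN setting is not reproduced here), the following sketch shows an FGSM-style adversarial training loop on a toy logistic-regression model with synthetic 2-D data: each input is perturbed along the sign of the loss gradient, and the weight update is taken on the perturbed example. All names, sizes, and hyperparameters are illustrative assumptions, not the authors' method.

```python
# FGSM-style adversarial training sketch on a toy logistic-regression
# model with synthetic 2-D inputs. Illustrative only; the paper applies
# the analogous idea to graph neural networks.
import math
import random

random.seed(0)

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x):
    return sigmoid(w[0] * x[0] + w[1] * x[1])

def sign(v):
    return (v > 0) - (v < 0)

# Synthetic data: label 1 when x0 + x1 > 0, else 0.
data = []
for _ in range(200):
    x = (random.gauss(0, 1), random.gauss(0, 1))
    data.append((x, 1.0 if x[0] + x[1] > 0 else 0.0))

w = [0.0, 0.0]
eps, lr = 0.1, 0.5  # perturbation budget and learning rate (assumed values)

for _ in range(50):  # epochs
    for x, y in data:
        p = predict(w, x)
        # Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w;
        # FGSM steps the input along its sign, scaled by eps.
        x_adv = (x[0] + eps * sign((p - y) * w[0]),
                 x[1] + eps * sign((p - y) * w[1]))
        # The adversarial-training step: update weights on the
        # perturbed example instead of the clean one.
        p_adv = predict(w, x_adv)
        w[0] -= lr * (p_adv - y) * x_adv[0]
        w[1] -= lr * (p_adv - y) * x_adv[1]

acc = sum((predict(w, x) > 0.5) == (y == 1.0) for x, y in data) / len(data)
print(round(acc, 2))
```

Despite training only on perturbed inputs, the model still fits the clean data well; the adversarial perturbation acts as a worst-case smoothing of the decision boundary, which is the intuition behind more reliable learned representations.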


Cited by 1 publication (1 citation statement)
References 10 publications (12 reference statements)
“…However, the same complexity also makes these models difficult to interpret, giving rise to the term black-box models. To alleviate this drawback, there is extensive research going on in the field of deep learning to explain and/or interpret such models, and also on the design of models that inherently incorporate such characteristics.…”
Section: Feature Representation and Model Selection
confidence: 99%