2022
DOI: 10.48550/arxiv.2201.08557
Preprint
Robust Unsupervised Graph Representation Learning via Mutual Information Maximization

Abstract: Recent studies have shown that GNNs are vulnerable to adversarial attacks, and many approaches have been proposed to improve the robustness of GNNs against them. Nevertheless, most of these methods measure model robustness using label information and thus become infeasible when labels are unavailable. This paper therefore focuses on robust unsupervised graph representation learning. In particular, to quantify the robustness of GNNs without label information, we propose a robustne…

Cited by 2 publications (3 citation statements)
References 24 publications
“…In contrastive learning, maximizing the representation consistency between the original graphs and augmented views produced by edge perturbation [36,218] can also result in a more robust model. Adversarial graph contrastive learning methods and their variants [56,67,187,210] have been developed to further improve robustness by introducing an adversarial view of the graphs.…”
Section: Other Types Of Defense Methods Against Graph Adversarial Att…
confidence: 99%
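The consistency objective this citation describes can be sketched as follows. This is a minimal illustrative example, not code from the cited papers: the one-layer GCN-style encoder, the edge-drop augmentation, and all function names (`encode`, `perturb_edges`, `consistency_loss`) are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    """Symmetrically normalize an adjacency matrix with self-loops."""
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def encode(A, X, W):
    """One-layer GCN-style encoder: relu(A_norm @ X @ W)."""
    return np.maximum(normalize_adj(A) @ X @ W, 0.0)

def perturb_edges(A, p=0.1):
    """Augmented view: randomly drop existing edges with probability p."""
    keep = rng.random(A.shape) > p
    keep = np.triu(keep, 1)
    keep = keep + keep.T          # keep the graph undirected
    return A * keep

def consistency_loss(Z1, Z2, eps=1e-8):
    """1 minus mean cosine similarity between paired node embeddings."""
    Z1n = Z1 / (np.linalg.norm(Z1, axis=1, keepdims=True) + eps)
    Z2n =2 * 0 + Z2 / (np.linalg.norm(Z2, axis=1, keepdims=True) + eps)
    return 1.0 - float((Z1n * Z2n).sum(axis=1).mean())

# Toy graph: 6 nodes on a ring, random features and weights.
n, f, h = 6, 8, 4
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
X = rng.standard_normal((n, f))
W = rng.standard_normal((f, h))

Z_orig = encode(A, X, W)
Z_view = encode(perturb_edges(A), X, W)
loss = consistency_loss(Z_orig, Z_view)   # minimized during training
```

Minimizing `loss` pushes the encoder to give each node the same embedding under edge perturbation, which is the intuition behind the robustness claim in the quoted passage.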
“…Generally, adversarial training simultaneously generates adversarial samples that can fool a classifier and forces the classifier to give similar predictions for a clean sample and its perturbed version, so as to improve the robustness of the classifier. Adversarial training [38,41,55,187,211] has also been investigated as a defense against graph adversarial attacks, and can generally be formulated as the following min-max game:…”
Section: Adversarial Training
confidence: 99%
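The min-max formulation itself is truncated in the excerpt. In its commonly used general form it can be sketched as follows (a hedged reconstruction, not the exact equation from the cited survey; the symbols are assumptions: $f_\theta$ is the GNN classifier, $X$ the node features, $y$ the labels, $\Phi(A)$ the set of admissible perturbations of the adjacency matrix $A$, and $\mathcal{L}$ the training loss):

$$\min_{\theta} \; \max_{\hat{A} \in \Phi(A)} \; \mathcal{L}\big(f_\theta(\hat{A}, X),\, y\big)$$

The inner maximization finds the worst-case perturbed graph for the current parameters, and the outer minimization trains the classifier to withstand it.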
“…Zhao et al. [29] maximized the mutual information of different pose representations under varying views to learn an integrative representation. Wang et al. [30] utilized a subgraph-level summary to build an effective mutual information estimator, which was optimized to strengthen the robustness of graph representations. Mao et al. [31] explored the information shared across modalities by maximizing the mutual information between them.…”
Section: B Representation Learning Based On Mutual Information
confidence: 99%
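The subgraph-summary mutual information estimator mentioned in this citation follows the general pattern of DGI-style estimators: a discriminator scores (node embedding, summary) pairs, with corrupted embeddings as negatives. A minimal sketch, assuming a mean-pooled summary, a bilinear discriminator, and row-shuffling as the corruption function; none of these names or choices are taken from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mi_objective(H, H_corrupt, W_d, eps=1e-8):
    """Jensen-Shannon-style MI lower bound between node embeddings H and
    a summary vector (here: the mean embedding), scored by a bilinear
    discriminator sigmoid(h @ W_d @ s). Higher is better."""
    s = H.mean(axis=0)                     # graph/subgraph summary vector
    pos = sigmoid(H @ W_d @ s)             # scores for true pairs
    neg = sigmoid(H_corrupt @ W_d @ s)     # scores for corrupted pairs
    return float(np.log(pos + eps).mean() + np.log(1.0 - neg + eps).mean())

# Toy node embeddings; corruption = shuffling which node gets which row.
n, h = 5, 4
H = rng.standard_normal((n, h))
H_corrupt = H[rng.permutation(n)]
W_d = rng.standard_normal((h, h))

score = mi_objective(H, H_corrupt, W_d)    # maximized during training
```

Maximizing `score` (over the encoder producing `H` and over `W_d`) tightens the mutual information bound between local node embeddings and the summary, which is the mechanism the quoted passage credits with strengthening robustness.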