2022
DOI: 10.1609/aaai.v36i7.20767

Graph Transplant: Node Saliency-Guided Graph Mixup with Local Structure Preservation

Abstract: Graph-structured datasets usually have irregular graph sizes and connectivities, rendering the use of recent data augmentation techniques, such as Mixup, difficult. To tackle this challenge, we present the first Mixup-like graph augmentation method called Graph Transplant, which mixes irregular graphs in data space. To be well defined on various scales of the graph, our method identifies the sub-structure as a mix unit that can preserve the local information. Since the mixup-based methods without special consi…
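To make the idea in the abstract concrete, here is a minimal, illustrative Python sketch of saliency-guided subgraph transplantation between two graphs. It is not the paper's implementation: node saliency is approximated by node degree (the paper derives saliency from classifier gradients), the donor subgraph is a k-hop neighborhood, reconnection uses random bridge edges, and the label-mixing weight is simply the transplanted node fraction. All function names and parameters below are invented for illustration.

import random
import networkx as nx

def salient_subgraph(g, ratio=0.3, hops=1):
    # Stand-in saliency: take the highest-degree node as the anchor
    # (the paper uses gradient-based node saliency from the classifier).
    anchor = max(g.nodes, key=g.degree)
    neigh = list(nx.single_source_shortest_path_length(g, anchor, cutoff=hops))
    target = max(1, int(ratio * g.number_of_nodes()))
    keep = set(neigh[:target]) | {anchor}
    return g.subgraph(keep).copy()

def graph_transplant(dst, src, ratio=0.3, hops=1, seed=0):
    # Remove a random region of `dst`, insert a salient subgraph of `src`,
    # reconnect it with random bridge edges, and return a label-mixing
    # weight equal to the transplanted node fraction.
    rng = random.Random(seed)
    dst = dst.copy()
    donor = salient_subgraph(src, ratio, hops)
    drop = rng.sample(list(dst.nodes),
                      min(donor.number_of_nodes(), dst.number_of_nodes() - 1))
    dst.remove_nodes_from(drop)
    mixed = nx.disjoint_union(dst, donor)          # dst keeps ids 0..n_dst-1
    dst_ids = list(range(dst.number_of_nodes()))
    donor_ids = range(dst.number_of_nodes(), mixed.number_of_nodes())
    for v in donor_ids:                            # bridge donor nodes to the rest
        mixed.add_edge(v, rng.choice(dst_ids))
    lam = donor.number_of_nodes() / mixed.number_of_nodes()
    return mixed, lam

# Usage: mix two random graphs of different sizes.
g1 = nx.erdos_renyi_graph(30, 0.15, seed=1)
g2 = nx.erdos_renyi_graph(20, 0.20, seed=2)
mixed, lam = graph_transplant(g1, g2, ratio=0.3)
print(mixed.number_of_nodes(), round(lam, 2))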

Cited by 19 publications (7 citation statements) · References 36 publications (40 reference statements)

Citation statements (ordered by relevance):
“…Table 1 shows the statistics of these datasets. Furthermore, we compare our Mix-Key method with a vanilla neural network, three topology-based augmentation methods (PermE [37], MaskN [38] and NodeSam [29]), two mixup-based augmentation methods (MixupGraph [27] and Graph Transplant [30]) and four graph contrastive learning methods (AutoGCL [39], GraphMVP [40], MolCLR [9] and KANO [41]).…”
Section: Methods
confidence: 99%
“…However, applying Mixup to graph data is challenging due to the irregular nature of graphs with varying node numbers and difficulties in graph alignment. To address this issue, some existing Mixup methods for graphs have been proposed, such as MixupGraph [27], $\mathcal{G}$-Mixup [28], SubMix [29] and Graph Transplant [30]. While these methods have made some strides in graph data augmentation, none of them are tailored to the specific requirements and structures of the molecular property prediction domain, nor do they design specific mixing ratios for each graph.…”
Section: Introduction
confidence: 99%
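For contrast with the methods listed in that statement, the following sketch shows the simplest kind of whole-graph mixup they build on: zero-pad two adjacency matrices to a common size and interpolate them, together with their labels, using a Beta-sampled weight. It is an illustrative baseline under assumed defaults, not the exact procedure of MixupGraph, $\mathcal{G}$-Mixup, SubMix or Graph Transplant.

import numpy as np

def pad_square(a, n):
    # Zero-pad a square adjacency matrix to n x n.
    out = np.zeros((n, n))
    out[:a.shape[0], :a.shape[1]] = a
    return out

def naive_graph_mixup(adj1, y1, adj2, y2, alpha=1.0, seed=0):
    # Pad both adjacencies to the larger size, then interpolate adjacencies
    # and (one-hot) labels with a single Beta-sampled weight lambda.
    rng = np.random.default_rng(seed)
    n = max(adj1.shape[0], adj2.shape[0])
    lam = rng.beta(alpha, alpha)
    adj = lam * pad_square(adj1, n) + (1.0 - lam) * pad_square(adj2, n)
    y = lam * y1 + (1.0 - lam) * y2
    return adj, y

# Usage: mix a triangle with a 4-cycle and their one-hot labels.
a1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
a2 = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
adj, y = naive_graph_mixup(a1, np.array([1.0, 0.0]), a2, np.array([0.0, 1.0]))
print(adj.shape, y)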
“…Graph Transplant [70] blends graphs in data space similarly to ifMixup. Graph Transplant preserves the local structure by using substructures as mixing units, unlike ifMixup, which randomly matches nodes during mixing.…”
Section: Addressing the Challenge Of Limited Datasets Through Innovat...
confidence: 99%
“…To tackle these problems, in this paper we are, to the best of our knowledge, the first to study mixup-based graph invariant learning for graph OOD generalization. Although Mixup (Zhang et al. 2018) and its variations (Verma et al. 2019; Chou et al. 2020; Kim et al. 2020; Yun et al. 2019), interpolation-based data augmentation methods that blend two training instances and their labels to generate new instances, have been proposed in the literature, the existing graph mixup methods (Han et al. 2022; Wang et al. 2021; Park, Shim, and Yang 2022; Guo and Mao 2021) only mix up entire graphs. Because they do not explicitly distinguish invariant and environment subgraphs when mixing, they can introduce spurious correlations and degrade the model's generalization performance on OOD graph data. Incorporating mixup with invariant learning for graph out-of-distribution generalization is promising but poses the following challenges, which have not been explored:…”
Section: Introduction
confidence: 99%
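For reference, the vanilla Mixup interpolation (Zhang et al. 2018) that the graph variants above extend generates a synthetic example from two training pairs $(x_i, y_i)$ and $(x_j, y_j)$ as:

\[
\tilde{x} = \lambda x_i + (1 - \lambda)\, x_j, \qquad
\tilde{y} = \lambda y_i + (1 - \lambda)\, y_j, \qquad
\lambda \sim \mathrm{Beta}(\alpha, \alpha).
\]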