2019
DOI: 10.48550/arxiv.1908.01000
Preprint

InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization

Abstract: This paper studies learning the representations of whole graphs in both unsupervised and semi-supervised scenarios. Graph-level representations are critical in a variety of real-world applications such as predicting the properties of molecules and community analysis in social networks. Traditional graph kernel based methods are simple yet effective for obtaining fixed-length representations for graphs, but they suffer from poor generalization due to hand-crafted designs. There are also some recent methods based…
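The truncated abstract describes maximizing mutual information between graph-level and patch-level (substructure) representations. As a rough illustration only, the sketch below shows what such an objective could look like in PyTorch, using a bilinear critic and a Jensen-Shannon-style bound; the tensor shapes, helper names, and loop structure are assumptions made for exposition, not the authors' reference implementation.

```python
# Illustrative sketch (assumed, not the official InfoGraph code):
# maximize mutual information between node ("patch") embeddings and
# graph-level embeddings with a Jensen-Shannon-style estimator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Bilinear critic T(h_node, h_graph) scoring node/graph pairs."""
    def __init__(self, dim):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)

    def forward(self, h_node, h_graph):
        return self.bilinear(h_node, h_graph).squeeze(-1)

def infograph_loss(node_emb, graph_emb, batch, disc):
    """Jensen-Shannon MI bound between nodes and the graphs they belong to.

    node_emb:  [num_nodes, dim]  node-level embeddings from a GNN encoder
    graph_emb: [num_graphs, dim] pooled graph-level embeddings
    batch:     [num_nodes]       index of the graph each node belongs to
    """
    pos_loss, neg_loss, n_neg = 0.0, 0.0, 0
    for g in range(graph_emb.size(0)):
        mask = batch == g
        h_pos = node_emb[mask]    # nodes of graph g (positive pairs)
        h_neg = node_emb[~mask]   # nodes of all other graphs (negative pairs)
        s_pos = disc(h_pos, graph_emb[g].expand_as(h_pos))
        s_neg = disc(h_neg, graph_emb[g].expand_as(h_neg))
        pos_loss = pos_loss + F.softplus(-s_pos).sum()
        neg_loss = neg_loss + F.softplus(s_neg).sum()
        n_neg += h_neg.size(0)
    # Minimizing this sum maximizes the JSD lower bound on mutual information.
    return pos_loss / node_emb.size(0) + neg_loss / max(n_neg, 1)
```

In this reading, a positive pair is a node and the summary of its own graph, while negative pairs reuse graph summaries against nodes from the other graphs in the batch, matching the unsupervised contrastive setup the abstract alludes to.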

Cited by 91 publications (145 citation statements)
References 28 publications
“…Results: We report the performance accuracy of the different methods in Table 4 with boldface numbers indicating the best performance. We observe that our method is consistently better than the baseline (Sun et al., 2019) in all datasets, improving the accuracy by 1.4 and 1.3 percentage points on the MUTAG and PTC datasets, respectively, and is competitive with the state-of-the-art method in .…”
Section: Graph Dataset (mentioning)
confidence: 77%
“…We also apply our method to learn graph representations on five graph datasets: MUTAG, ENZYMES, PTC, IMDB-BINARY, IMDB-MULTI by (Morris et al., 2020). We employ InfoGraph (Sun et al., 2019). Training Procedure: Similar to the experiments with image datasets, we only replace the negative distribution in code implementation from with ours. The dimension of the node embedding is set to 96.…”
Section: Graph Dataset (mentioning)
confidence: 99%
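The citation above applies InfoGraph to the TU graph benchmarks with 96-dimensional node embeddings. As a hypothetical setup sketch (not the cited authors' code), loading one of these datasets and pooling node embeddings into graph-level representations with PyTorch Geometric could look like the following; the specific encoder (a single GIN layer) and batch size are assumptions.

```python
# Hypothetical data/encoder setup for the TU benchmarks mentioned above.
import torch
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GINConv, global_add_pool

dataset = TUDataset(root='data/TUDataset', name='MUTAG')
loader = DataLoader(dataset, batch_size=32, shuffle=True)

dim = 96  # node embedding dimension, as reported in the citing paper
mlp = torch.nn.Sequential(
    torch.nn.Linear(dataset.num_features, dim),
    torch.nn.ReLU(),
    torch.nn.Linear(dim, dim),
)
conv = GINConv(mlp)

for data in loader:
    node_emb = conv(data.x, data.edge_index)           # [num_nodes, 96]
    graph_emb = global_add_pool(node_emb, data.batch)  # [num_graphs, 96]
    # These two views are what an InfoGraph-style MI objective
    # (e.g., the sketch after the abstract) would contrast.
    break
```

The graph-level embeddings produced this way would typically be the input to the downstream graph classifiers whose accuracies are quoted above.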
“…For visual information, the contrastive framework is applied to tasks such as image classification [29,30], object detection [31][32][33], image segmentation [34][35][36], etc. Other applications beyond images include adversarial training [37][38][39], graphs [40][41][42][43], and sequence modeling [44][45][46]. Specific positive sampling strategies have been proposed to improve the performance of contrastive learning, e.g.…”
Section: Contrastive Framework and Sampling Techniques (mentioning)
confidence: 99%
“…In natural language processing (NLP), many works propose pretext tasks based on language models, including context-word prediction [35], the Cloze task, next-sentence prediction [8], [9], and so on [36]. For graph data, the pretext tasks are usually designed to predict the central nodes given node context [37], [38] or subgraph context [39], or to maximize mutual information between local and global graph representations [40], [41]. Recently, many works [42]-[45] in different domains have integrated self-supervised learning with multi-task learning, i.e., jointly training multiple self-supervised tasks on the underlying models, which can introduce useful information from different facets and improve generalization performance.…”
Section: B. Self-supervised Learning (mentioning)
confidence: 99%