Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence 2018
DOI: 10.24963/ijcai.2018/296
Active Discriminative Network Representation Learning

Abstract: Most current network representation models are learned in an unsupervised fashion and usually lack discriminative power when applied to network analysis tasks such as node classification. It is worth noting that label information is valuable for learning discriminative network representations. However, labels for all training nodes are often difficult or expensive to obtain, and manually labeling every node is impractical. Different sets of labeled nodes for model learning lead…

Cited by 58 publications (55 citation statements) · References 18 publications
“…Labels are chosen from informative nodes with the help of an oracle for training GNNs. Cai et al. [18], Gao et al. [49], and Hu et al. [64] show that labeling efficiency can be substantially increased by choosing high-degree nodes and uncertain nodes (informative nodes).…”
Section: Theoretical Aspect
confidence: 99%
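The heuristic described above — preferring high-degree and uncertain nodes when querying the oracle — can be sketched as a simple scoring rule. The function below is a minimal illustration, not the method of any cited paper: it assumes we already have softmax class probabilities `probs` for each unlabeled node and its graph degree, and combines normalized prediction entropy with normalized degree via a hypothetical mixing weight `alpha`.

```python
import numpy as np

def select_query_nodes(probs, degrees, k, alpha=0.5):
    """Pick k nodes to label, scoring each node by a convex combination
    of prediction entropy (uncertainty) and normalized degree (centrality).

    probs   : (n_nodes, n_classes) softmax outputs of the current model
    degrees : (n_nodes,) node degrees from the graph
    k       : number of nodes to query from the oracle
    alpha   : weight on uncertainty vs. degree (illustrative choice)
    """
    # Entropy of the predicted class distribution; higher = more uncertain.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    entropy = entropy / np.log(probs.shape[1])   # normalize to [0, 1]
    # Normalize degree so both terms share a [0, 1] scale.
    deg = degrees / (degrees.max() + 1e-12)
    score = alpha * entropy + (1 - alpha) * deg
    # Return indices of the k highest-scoring nodes.
    return np.argsort(-score)[:k]
```

In practice such a score is recomputed after each labeling round, since retraining the GNN on the newly labeled nodes changes the uncertainty estimates.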
“…While the above methods concentrate on supervised learning, AGE [3] and ANRMAB [15] are proposed for active learning in semi-supervised scenarios (e.g., graphs) by using both the node features and the graph structure. Recently, clustering-based active learning methods have been proposed for GNNs, such as Featprop [61] and LSCALE [38].…”
Section: Data Selection Problems
confidence: 99%
“…Baselines. We compare Grain (ball-D) and Grain (NN-D) with the following baselines: Random, Degree, AGE [3], ANRMAB [15], K-Center-Greedy (KCG) [46]. A detailed introduction of these baseline methods can be found in Appendix A.5.…”
Section: Experiments 4.1 Experimental Settings
confidence: 99%
“…Therefore, selecting query nodes uniformly at random, coupled with a recent GNN model, can easily outperform such AL models. AL models that utilize recent GNN architectures [41, 42] are limited. Moreover, a comprehensive comparison of AL algorithms proposed for other data domains has not yet been done.…”
Section: Active Learning for Graph Classification Problems
confidence: 99%