2019
DOI: 10.1007/978-3-030-16142-2_11

Robust Semi-supervised Representation Learning for Graph-Structured Data

Cited by 5 publications (4 citation statements)
References 7 publications
“…As illustrated in [19], semi-supervised learning uses labeled as well as unlabeled data to perform learning tasks such as classification. Some methods [20], [21], which can be classed as self-training, use base classifiers to obtain predictions for unlabeled data, and these pseudo-labeled data are then used for supervised training. In [20], each unlabeled image is assigned the label corresponding to its most confident prediction.…”
Section: Semi-supervised Learning (mentioning, confidence: 99%)
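The self-training loop this excerpt describes can be sketched as follows. This is a minimal illustration, not the cited papers' exact method: the base classifier, the toy data, and the `per_round` budget are all assumptions, with a generic scikit-learn model standing in for the base classifier.

```python
# Self-training sketch: a base classifier predicts on the unlabeled pool,
# and its most confident predictions are folded back in as pseudo-labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, rounds=5, per_round=20):
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        clf.fit(X, y)
        proba = clf.predict_proba(pool)
        conf = proba.max(axis=1)                    # best-class confidence
        top = np.argsort(conf)[-per_round:]         # most confident samples
        pseudo = clf.classes_[proba[top].argmax(axis=1)]
        X = np.vstack([X, pool[top]])               # grow the training set
        y = np.concatenate([y, pseudo])
        pool = np.delete(pool, top, axis=0)         # shrink the pool
    return clf.fit(X, y)

X, y = make_classification(n_samples=500, random_state=0)
model = self_train(X[:50], y[:50], X[50:])          # 50 labeled, 450 unlabeled
```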
“…The unlabeled data are weighted during training, since pseudo labels are not reliable enough at the beginning. [21] balances the influence of unlabeled data by assigning a pseudo label only when the maximum prediction confidence exceeds a threshold. Co-training [22] and Tri-training [23] are extensions of self-training that learn from multiple supervised classifiers.…”
Section: Semi-supervised Learning (mentioning, confidence: 99%)
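The confidence-threshold rule attributed to [21] above can be written in a few lines. The sketch below is a hedged illustration: the threshold value 0.95 and the function name `pseudo_label_mask` are assumptions, not details from the cited paper.

```python
# Confidence-threshold pseudo-labeling: keep an unlabeled example only
# when the model's top predicted probability exceeds the threshold tau.
import numpy as np

def pseudo_label_mask(proba, tau=0.95):
    """proba: (n, k) class probabilities for n unlabeled examples."""
    conf = proba.max(axis=1)            # maximum prediction confidence
    labels = proba.argmax(axis=1)       # candidate pseudo labels
    mask = conf >= tau                  # accept only confident predictions
    return labels[mask], mask

proba = np.array([[0.97, 0.03], [0.60, 0.40], [0.02, 0.98]])
labels, mask = pseudo_label_mask(proba)
print(labels)   # [0 1]  -> only the two confident rows get pseudo labels
print(mask)     # [ True False  True]
```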
“…Consequently, deep SSL, which brings the strengths of deep models to classic SSL frameworks, has attracted much attention, and a considerable number of deep SSL methods have been proposed. Deep SSL methods can be classified into five categories: consistency regularization methods [24], [25], [26], [27], [28], [29], [30], [31], [32], [33]; pseudo-labeling methods [34], [35], [36], [37], [38], [39], [40], [41], [42], [43], [44]; holistic methods that combine consistency regularization and pseudo-labeling [45], [46], [47], [48], [49]; deep generative SSL methods [50], [51], [52], [53], [54], [55]; and deep graph-based SSL methods [56], [57], [58], [59], [60]. Deep SSL methods have been successfully applied to various tasks such as image classification [49], object detection [61], semantic segmentation [62], text classification [63], question answering [64], etc.…”
Section: Introduction (mentioning, confidence: 99%)
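Of the five categories listed, consistency regularization is perhaps the easiest to summarize in code. The sketch below is a generic form of the idea, not any single cited method: the squared-difference penalty and the weight `lam` are illustrative choices.

```python
# Generic consistency-regularization objective: supervised loss on labeled
# data plus a penalty for prediction disagreement under perturbation.
import numpy as np

def consistency_loss(p_clean, p_noisy):
    """Mean squared difference between two prediction distributions."""
    return np.mean((p_clean - p_noisy) ** 2)

def ssl_objective(supervised_loss, p_clean, p_noisy, lam=1.0):
    # the consistency term needs no labels, so it can use all the data
    return supervised_loss + lam * consistency_loss(p_clean, p_noisy)

p_clean = np.array([[0.9, 0.1], [0.3, 0.7]])   # predictions on clean inputs
p_noisy = np.array([[0.8, 0.2], [0.4, 0.6]])   # predictions on perturbed inputs
print(ssl_objective(0.42, p_clean, p_noisy))   # 0.42 + 1.0 * 0.01 = 0.43
```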
“…In the second stage, it usually adopts supervised fine-tuning [25] or traditional semi-supervised classifiers [14,17,29,30,16] to further enhance the discriminative capability of the learned features and to learn the final classifier. Since network pre-training is task-agnostic, the representations it produces are likely to be suboptimal for the ultimate classification task [31,19,32,33]. Nevertheless, pre-training-based methods are less sensitive to confirmation bias thanks to their decoupled learning scheme, which optimizes each stage separately without depending on the quality of pseudo labels.…”
Section: Introduction (mentioning, confidence: 99%)
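The decoupled two-stage scheme this excerpt describes can be illustrated with a toy analogue. Here PCA stands in for task-agnostic pre-training and logistic regression for the supervised second stage; every component and size below is an assumption for illustration, not the cited pipeline.

```python
# Two-stage analogue: stage 1 learns a representation from ALL data
# without labels; stage 2 fits a classifier on the labeled subset only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
labeled = np.arange(40)                      # pretend only 40 labels exist

encoder = PCA(n_components=10).fit(X)        # stage 1: task-agnostic encoding
Z = encoder.transform(X)

clf = LogisticRegression(max_iter=1000).fit(Z[labeled], y[labeled])
print(clf.score(Z, y))                       # stage 2: final classifier
```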