Self-training semi-supervised classification based on density peaks of data
2018 · DOI: 10.1016/j.neucom.2017.05.072

Cited by 93 publications (40 citation statements) · References 30 publications
“…Moreover, self-training, a type of semi-supervised learning that learns by gradually including high- to low-confidence samples as pseudo-labelled samples, has been proposed [39]. Self-training has been successfully applied to computer vision [40], data density peaks [41], computed tomography (CT) colonography [42], and other fields. In this paper, self-training with deep forest as the base learner is used to learn from both labelled and unlabelled instances; in particular, the experiments show that an ensemble learner provides additional improvement over the performance of adapted learners [43].…”
Section: Literature Review
confidence: 99%
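The excerpt above describes confidence-ranked pseudo-labelling with an ensemble base learner. A minimal sketch of that loop follows, using scikit-learn's RandomForestClassifier as a stand-in for the deep-forest ensemble; the 0.9 confidence threshold and the round limit are illustrative assumptions, not values from the cited work.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, max_rounds=10):
    """Iteratively pseudo-label high-confidence unlabeled samples and retrain.

    RandomForestClassifier stands in for the deep-forest ensemble; the
    threshold and max_rounds values are illustrative, not from the paper.
    """
    for _ in range(max_rounds):
        clf = RandomForestClassifier(n_estimators=100).fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        conf = proba.max(axis=1)
        picked = conf >= threshold          # keep only confident predictions
        if not picked.any():
            break                           # nothing passes: stop early
        pseudo = clf.classes_[proba[picked].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[picked]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~picked]          # shrink the unlabeled pool
    return clf
```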
“…Many semi-supervised learning methods have been proposed in the literature [159], e.g., self-training [160], [161], co-training [162], [163], expectation-maximization (EM) [164], [165], and graph-based methods [166], [167]. Among these methods, co-training was theoretically proven to be very appropriate and successful in combining the labeled and unlabeled data under three strong assumptions in [168].…”
Section: Semi-Supervised Learning Based MMCNNs for RGB-D Object Recognition
confidence: 99%
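Since the excerpt singles out co-training, a brief sketch of the two-view idea may help: two classifiers trained on disjoint feature views each pseudo-label their most confident unlabeled samples for the other. The GaussianNB base learners, the view split, and the per-round quota here are illustrative assumptions, not details from the cited works.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X_lab, y_lab, X_unlab, view1, view2, rounds=5, per_round=10):
    """Illustrative co-training: view1/view2 are disjoint column indices."""
    pool = X_unlab.copy()
    for _ in range(rounds):
        c1 = GaussianNB().fit(X_lab[:, view1], y_lab)
        c2 = GaussianNB().fit(X_lab[:, view2], y_lab)
        if len(pool) == 0:
            break
        taken = np.zeros(len(pool), dtype=bool)
        # each view's classifier teaches the other by nominating its
        # most confident unlabeled samples
        for clf, view in ((c1, view1), (c2, view2)):
            proba = clf.predict_proba(pool[:, view])
            conf = np.where(taken, -1.0, proba.max(axis=1))  # skip taken rows
            best = np.argsort(conf)[-per_round:]
            best = best[conf[best] > 0]
            X_lab = np.vstack([X_lab, pool[best]])
            y_lab = np.concatenate(
                [y_lab, clf.classes_[np.argmax(proba[best], axis=1)]])
            taken[best] = True
        pool = pool[~taken]
    return c1, c2
```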
“…The most reliable unlabeled samples are then selected and added incrementally to the labeled training set along with their predicted labels. The procedure is repeated until convergence (Wu et al., 2018). The advantage of self-training is that it does not require a specific assumption; as a result, it can be used in almost any situation.…”
Section: Semi-Supervised Self-Labelled Techniques
confidence: 99%
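The cited paper orders this selection using density peaks. A sketch of the two density-peaks quantities from Rodriguez and Laio's clustering method, local density rho and distance delta to the nearest higher-density point, is given below; the cutoff d_c and the descending-density ordering are illustrative, not the paper's exact selection rule.

```python
import numpy as np
from scipy.spatial.distance import cdist

def density_peaks(X, d_c):
    """Compute density-peaks quantities used to rank samples for labeling.

    rho[i]: number of points within cutoff distance d_c of point i.
    delta[i]: distance from i to its nearest higher-density point
              (for the global peak, the maximum distance from i).
    """
    D = cdist(X, X)                      # pairwise Euclidean distances
    rho = (D < d_c).sum(axis=1) - 1      # subtract 1 to exclude the point itself
    delta = np.zeros(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i, higher].min() if len(higher) else D[i].max()
    order = np.argsort(-rho)             # illustrative: label dense points first
    return rho, delta, order
```

In a self-training loop, such an ordering would replace a raw classifier-confidence ranking, so that pseudo-labels propagate outward from dense regions of the data.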