2016
DOI: 10.1016/j.neucom.2015.09.031

On measuring confidence levels using multiple views of feature set for useful unlabeled data selection

Cited by 3 publications (14 citation statements: 0 supporting, 14 mentioning, 0 contrasting; published 2017–2024)
References 54 publications
“…In this section, the SemiBoost [23] algorithm and the related criteria, which are closely related to the present paper, are briefly overviewed in order to make it complete. The detailed description can also be found in the literature [19,20,23].…”
Section: Related Work (mentioning)
confidence: 99%
“…Then, a few examples with higher confidence levels are selected to retrain the ensemble classifier together with L. However, it is not guaranteed that adding the selected data to the training data will lead to a situation in which the classification performance can be improved [35]. Therefore, various approaches have been proposed in the literature for selecting a small amount of useful unlabeled data (U s ) from U : these include the self-training [25,30,40] and co-training [3,12,18] approaches, confidence-based approaches [19,20,23], density/distance-based approaches [8,27,28], and other approaches used in active learning (AL) algorithms [7,11,33].…”
Section: Introduction (mentioning)
confidence: 99%
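A minimal sketch of the selection loop this statement describes, assuming a generic confidence-based self-training setup (illustrative code, not the paper's method; the function name, the RandomForestClassifier standing in for the ensemble, and the parameters rounds and k are all hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def confidence_based_self_training(X_L, y_L, X_U, rounds=5, k=10):
    """Greedy self-training: repeatedly add the k most confident
    pseudo-labeled examples from U to L and retrain."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    for _ in range(rounds):
        clf.fit(X_L, y_L)
        if len(X_U) == 0:
            break
        proba = clf.predict_proba(X_U)        # class posteriors on U
        conf = proba.max(axis=1)              # confidence level of each example
        top = np.argsort(conf)[-k:]           # U_s: the k most confident examples
        y_new = clf.classes_[proba[top].argmax(axis=1)]
        X_L = np.vstack([X_L, X_U[top]])      # add the selected data to L
        y_L = np.concatenate([y_L, y_new])
        X_U = np.delete(X_U, top, axis=0)     # remove U_s from U
    return clf
```

As the statement itself cautions [35], this greedy loop does not guarantee improved classification performance; the cited families of approaches differ precisely in how they score which unlabeled examples are actually useful.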
“…Different from a scalar feature, feature types, which can be scalars, vectors, or matrices, are highly diverse in dimension and expression. However, existing methods simply ensemble the selection of each feature type [13] or concatenate all feature types into a single vector [14]. These methods ignore the relation between different feature types.…”
Section: Introduction (mentioning)
confidence: 99%
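The two baselines this statement contrasts can be sketched as follows, assuming generic scikit-learn classifiers (illustrative code only; the function names, the LogisticRegression choice, and the view shapes are assumptions, not either cited method):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def concatenate_views(views):
    # Baseline [14]: flatten every feature type (scalar, vector, or
    # matrix) per sample and stack them into one long vector.
    return np.hstack([np.asarray(v).reshape(len(v), -1) for v in views])

def per_view_ensemble_proba(train_views, y, test_views):
    # Baseline [13]: fit one classifier per feature type and
    # average their class posteriors (a simple ensemble).
    probas = []
    for X_tr, X_te in zip(train_views, test_views):
        X_tr = np.asarray(X_tr).reshape(len(X_tr), -1)
        X_te = np.asarray(X_te).reshape(len(X_te), -1)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y)
        probas.append(clf.predict_proba(X_te))
    return np.mean(probas, axis=0)
```

Neither sketch models interactions between feature types, which is exactly the limitation the quoted statement points out.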