2015
DOI: 10.1007/s10115-015-0841-8

Soft-constrained Laplacian score for semi-supervised multi-label feature selection

Abstract: Feature selection, semi-supervised learning and multi-label classification are different challenges for the machine learning and data mining communities. While other works have addressed each of these problems separately, in this paper we show how they can be addressed together. We propose a unified framework for semi-supervised multi-label feature selection, based on the Laplacian score. In particular, we show how to constrain the function of this score when data are partially labeled and each instance is associated…
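For context, the score this framework builds on is the classic unsupervised Laplacian score (He et al.). The sketch below is a generic NumPy implementation of that baseline only, not the paper's soft-constrained semi-supervised variant; every name in it (laplacian_score, n_neighbors, t) is illustrative.

import numpy as np

def laplacian_score(X, n_neighbors=5, t=1.0):
    # One score per feature; smaller is better, i.e. the feature varies
    # smoothly over the k-nearest-neighbor graph of the data.
    n, d = X.shape
    # Pairwise squared Euclidean distances.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # kNN graph with heat-kernel weights (self excluded).
    S = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(sq[i])[1:n_neighbors + 1]
        S[i, nbrs] = np.exp(-sq[i, nbrs] / t)
    S = np.maximum(S, S.T)            # symmetrize the graph
    deg = S.sum(axis=1)
    D = np.diag(deg)                  # degree matrix
    L = D - S                         # unnormalized graph Laplacian
    scores = np.empty(d)
    for r in range(d):
        f = X[:, r]
        # Remove the degree-weighted mean so constant shifts don't matter.
        f = f - (f @ deg) / deg.sum()
        scores[r] = (f @ L @ f) / (f @ D @ f + 1e-12)
    return scores

Features are ranked in ascending order of this score, e.g. selected = np.argsort(laplacian_score(X))[:k]; per the abstract, the paper's contribution is to constrain this score function when the data are only partially labeled and multi-labeled.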

Citations: cited by 36 publications (9 citation statements).
References: 39 publications.
“…The key challenge lies in how to use the labeled samples to efficiently process the unlabeled samples. At present, unsupervised feature selection methods mainly focus on clustering-based models, for example, Laplacian score [38], trace ratio [39], and sparsity regularization-based models [40]. For instance, a co-regularized unsupervised feature selection algorithm was proposed in a study by Zhu et al. [41], which was intended to ensure that the selected features could preserve both data distribution and reconstruction.…”
Section: Introduction (mentioning)
Confidence: 99%
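For orientation on the trace-ratio family cited in the excerpt above, the criterion is usually stated as a ratio of traces over a feature-selection matrix. The following is a generic sketch only; the symbols W, A, B, m and d are ours, not taken from reference [39].

% Sketch of a generic trace-ratio criterion; A and B (e.g. between- and
% within-class scatter, or graph-affinity and degree matrices) are assumptions.
\[
  W^{\star} \;=\; \operatorname*{arg\,max}_{W \in \{0,1\}^{d \times m},\; W^{\top} W = I_m}
  \frac{\operatorname{Tr}\!\left(W^{\top} A\, W\right)}
       {\operatorname{Tr}\!\left(W^{\top} B\, W\right)}
\]

Here W is a binary column-selection matrix picking m of the d features, so the ratio scores whole feature subsets rather than one feature at a time.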
“…The multi-label multi-class problem involves multiple labels, where each label can take one of several categories. The original single-label learning methods cannot be directly used in multi-label learning [53], because every sample of multi-label data is annotated with one or more labels simultaneously. Moreover, the labels themselves may be correlated.…”
Section: Multi-label Learning (mentioning)
Confidence: 99%
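To make the excerpt's point concrete: a multi-label target is an n-by-q binary indicator matrix rather than a single class vector, which is why single-label selectors do not transfer directly. A minimal illustration, with all values invented:

import numpy as np

# 4 samples, 3 labels: row i marks every label attached to sample i.
Y = np.array([
    [1, 0, 1],   # sample 0 carries labels 0 and 2 at the same time
    [0, 1, 0],   # sample 1 carries label 1 only
    [1, 1, 0],   # sample 2 carries labels 0 and 1
    [0, 0, 1],   # sample 3 carries label 2 only
])
print(Y.sum(axis=1))   # labels per sample: [2 1 2 1] -- not always 1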
“…The first family is typically implemented over complete label information. Some multi-label feature selection approaches [27], [28] divide the multi-label learning problem into multiple subproblems, which fails to take label interdependency into account. A majority of approaches try to incorporate label correlations into the process of model construction to help select discriminative features [2], [3], [29], [30].…”
Section: B. Multi-label Feature Selection (mentioning)
Confidence: 99%
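A hedged sketch of the "divide into subproblems" strategy this excerpt criticizes: score features against each label independently (binary-relevance style) and then aggregate, so any interdependency between labels never enters the score. The function names and the correlation-based scorer are our assumptions, not taken from [27] or [28].

import numpy as np

def per_label_scores(X, Y):
    # |Pearson correlation| of every feature with every label, computed
    # one label at a time: q independent subproblems, no label coupling.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    num = Xc.T @ Yc                                   # (d, q) cross terms
    den = np.outer(np.linalg.norm(Xc, axis=0),
                   np.linalg.norm(Yc, axis=0)) + 1e-12
    return np.abs(num / den)                          # (d, q) score matrix

def select_top_k(X, Y, k):
    # Aggregate per-label scores (mean over labels) and keep the top k
    # features; any correlation *between* labels is invisible here.
    return np.argsort(per_label_scores(X, Y).mean(axis=1))[::-1][:k]

The second family in the excerpt replaces exactly this blind aggregation step with a model term that couples the labels during selection.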