Locality sensitive semi-supervised feature selection (2008)
DOI: 10.1016/j.neucom.2007.06.014

Cited by 174 publications (86 citation statements)
References 20 publications
“…In this paper, we use Laplacian regularized least squares (LapRLS) [12], which is a state-of-the-art semi-supervised learning algorithm. LapRLS is formulated on the graph assumption, which has become popular in recent literature of manifold learning [13]-[17]. Specifically, it assumes that when two data points are close, their corresponding labels should also be similar.…”
Section: Machine Learning-based Methods For (mentioning)
confidence: 99%
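The graph assumption in this excerpt (close points should receive similar labels) can be made concrete with a minimal sketch of LapRLS in its closed form, following the representer-theorem solution of Belkin et al. Everything below is an illustrative assumption on our part: the RBF kernel and its width, the regularization weights gamma_A and gamma_I, and the names rbf_kernel / laprls_fit / laprls_predict do not come from the cited papers.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Pairwise RBF (Gaussian) kernel between the rows of X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def laprls_fit(X, y_labeled, n_labeled, gamma_A=1e-2, gamma_I=1e-2, gamma_rbf=1.0):
    """LapRLS sketch: X holds the l labeled rows first, then the u unlabeled rows."""
    n = X.shape[0]
    l = n_labeled
    K = rbf_kernel(X, X, gamma=gamma_rbf)

    # Graph Laplacian encoding the smoothness assumption: pairs with large
    # affinity W_ij are pushed toward similar predicted labels.
    W = rbf_kernel(X, X, gamma=gamma_rbf)
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W

    # J restricts the squared loss to the labeled block; Y pads labels with zeros.
    J = np.zeros((n, n))
    J[:l, :l] = np.eye(l)
    Y = np.zeros(n)
    Y[:l] = y_labeled

    # Closed-form expansion coefficients:
    # alpha = (J K + gamma_A * l * I + gamma_I * l / (l+u)^2 * L K)^{-1} J Y
    A = J @ K + gamma_A * l * np.eye(n) + gamma_I * (l / n ** 2) * (L @ K)
    return np.linalg.solve(A, J @ Y)

def laprls_predict(alpha, X_train, X_new, gamma_rbf=1.0):
    # f(x) = sum_i alpha_i k(x_i, x)
    return rbf_kernel(X_new, X_train, gamma=gamma_rbf) @ alpha
```

For binary classification the labels would typically be encoded as +1/-1 and the sign of the prediction taken.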
“…denote label cardinality, label density and the number of distinct label combinations, respectively. [7], [16], [23], over popular multi-label datasets. MIFS is chosen because it can capture label correlations as READER does.…”
Section: Experiments Settings (mentioning)
confidence: 99%
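As a quick reference for the dataset statistics named in this excerpt, the sketch below computes label cardinality (average labels per instance), label density (cardinality divided by the number of labels), and the number of distinct label combinations from a binary label matrix, using the standard definitions. The toy matrix is our own and is unrelated to the datasets in the cited experiments.

```python
import numpy as np

def multilabel_stats(Y):
    """Dataset-level statistics for a binary label matrix Y of shape (n_instances, n_labels)."""
    n, q = Y.shape
    cardinality = Y.sum() / n                               # average labels per instance
    density = cardinality / q                               # cardinality normalized by label count
    distinct = len({tuple(row) for row in Y.astype(int)})   # unique label combinations observed
    return cardinality, density, distinct

# Toy example: 4 instances, 3 labels.
Y = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 1, 0],
              [1, 0, 1]])
print(multilabel_stats(Y))  # (1.75, 0.583..., 3)
```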
“…Like in supervised and unsupervised FS, these methods can be divided into three categories, depending on how they interact with the learning algorithm: filter, wrapper and embedded approaches. Filter methods discover the relevant and redundant features through analyzing the correlation and dependence among features without involving any learning algorithms [10], [11]. The most common filter strategies are based on feature ranking.…”
Section: A. Semi-supervised Feature Selection (mentioning)
confidence: 99%
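To illustrate the filter category described here, that is, ranking features by a criterion computed directly from the data without involving any learning algorithm, the sketch below computes the Laplacian Score over a kNN affinity graph, a common locality-based filter criterion. This is only an example of a filter method in general; it is not the selection criterion proposed in the cited paper, and the function name, weighting scheme, and neighborhood size are arbitrary choices.

```python
import numpy as np

def laplacian_score(X, n_neighbors=5):
    """Rank features by the Laplacian Score: lower scores mean the feature
    varies smoothly over the kNN graph, i.e. it preserves local structure."""
    n, d = X.shape

    # Binary kNN affinity graph (self excluded), symmetrized.
    dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    nn_idx = np.argsort(dist, axis=1)[:, 1:n_neighbors + 1]
    W = np.zeros((n, n))
    W[np.repeat(np.arange(n), n_neighbors), nn_idx.ravel()] = 1.0
    W = np.maximum(W, W.T)

    D = W.sum(axis=1)              # degree vector
    L = np.diag(D) - W             # unnormalized graph Laplacian

    scores = np.empty(d)
    for r in range(d):
        f = X[:, r]
        f = f - (f @ D) / D.sum()  # remove the degree-weighted mean
        scores[r] = (f @ L @ f) / max((f * f) @ D, 1e-12)
    return scores                  # keep the features with the smallest scores
```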
“…Clearly, the combination of both paradigms (supervised and unsupervised) allows the merging of sophisticated semi-supervised approaches that can handle both labeled and unlabeled data. The problem of semi-supervised feature selection has attracted a great deal of interest recently and its effectiveness has already been demonstrated in many applications [8], [9], [10], [11].…”
Section: Introduction (mentioning)
confidence: 99%