2013
DOI: 10.1016/j.patcog.2013.02.012

Large Margin Subspace Learning for feature selection

Cited by 26 publications (8 citation statements)
References 25 publications
“…A similar idea of large-margin feature selection is shared by LMSL [14], which optimizes criterion (17) to find an appropriate projection matrix W. This indicates that LMSL may be seen as a special case of our proposed trace ratio framework: the basic idea of our algorithm is to solve the reduced nonlinear eigenvalue problem (26), which has the same trace difference function as LMSL.…”
Section: Discussion
confidence: 77%
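The relation drawn in the statement above can be made concrete with the standard correspondence between trace ratio and trace difference criteria. The sketch below is illustrative only: $S_b$ and $S_w$ (between- and within-class scatter-like matrices) and the multiplier $\lambda^{*}$ are generic placeholders, not the quantities actually defined in criterion (17) or problem (26) of the cited papers.

```latex
% Illustrative correspondence only; S_b, S_w and \lambda^{*} are assumed
% placeholders, not the matrices defined in (17) or (26) of the cited works.
\[
  \max_{W^{\top} W = I}\;
  \frac{\operatorname{Tr}\!\left(W^{\top} S_b W\right)}
       {\operatorname{Tr}\!\left(W^{\top} S_w W\right)} \;=\; \lambda^{*}
  \qquad\Longleftrightarrow\qquad
  \max_{W^{\top} W = I}\;
  \operatorname{Tr}\!\left(W^{\top}\bigl(S_b - \lambda^{*} S_w\bigr) W\right) \;=\; 0 .
\]
```

For a fixed multiplier, the trace difference subproblem is an ordinary symmetric eigenvalue problem; iterating on the multiplier drives it toward the optimal ratio $\lambda^{*}$, which is the sense in which a trace-difference method such as LMSL can be read as one step of a more general trace ratio solver.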
“…On the one hand, the above objective function is nonconvex, since $-\sum_{i=1}^{n}(A_i - B_i)$ is an indefinite matrix, and it is therefore difficult to solve directly. Although several algorithms have been proposed to deal with this problem [14], [18], they only achieve a suboptimal solution. On the other hand, since the sum of distances between $x_i$ and $NM(x_i)$ is usually much larger than that between $x_i$ and $NH(x_i)$, it may be better to assign a weight factor $\eta$ to $\operatorname{Tr}(W^{\top} X A_i X^{\top} W)$ for balance, and intuitively $\eta$ should be smaller than 1.…”
Section: Large Margin Feature Selection Based on Subspace Learning
confidence: 99%
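To make the balancing idea in the statement above concrete, here is a minimal numpy sketch. It assumes (my reading, not necessarily the cited papers' exact construction) that $A_i$ and $B_i$ are the rank-one matrices built from the near-miss and near-hit pair of sample $x_i$, and that the projection $W$ is the orthonormal maximizer of the weighted trace difference $\operatorname{Tr}(W^{\top} X (\sum_i \eta A_i - B_i) X^{\top} W)$; the function names and the default value of eta are hypothetical.

```python
import numpy as np

def near_hit_miss(X, y):
    """For each sample, return the index of its nearest same-class neighbour
    (near-hit) and its nearest different-class neighbour (near-miss).
    X has samples as rows (n x d)."""
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                            # exclude self-matches
    same = y[:, None] == y[None, :]
    hit = np.where(same, d2, np.inf).argmin(axis=1)         # nearest same-class sample
    miss = np.where(~same, d2, np.inf).argmin(axis=1)       # nearest other-class sample
    return hit, miss

def margin_subspace(X, y, dim, eta=0.5):
    """Hypothetical sketch: maximise Tr(W^T X^T M X W) with W^T W = I, where
    M = sum_i (eta * A_i - B_i) and A_i, B_i are the rank-one matrices of the
    near-miss / near-hit pair of sample i.  The closed-form maximiser is the
    set of top eigenvectors of the (indefinite) matrix X^T M X."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n = X.shape[0]
    hit, miss = near_hit_miss(X, y)
    M = np.zeros((n, n))
    for i in range(n):
        for j, w in ((miss[i], eta), (hit[i], -1.0)):
            # add w * (e_i - e_j)(e_i - e_j)^T
            M[i, i] += w
            M[j, j] += w
            M[i, j] -= w
            M[j, i] -= w
    S = X.T @ M @ X                                  # d x d, symmetric but indefinite
    vals, vecs = np.linalg.eigh(S)
    return vecs[:, np.argsort(vals)[::-1][:dim]]     # columns = projection directions
```

For feature selection, one would then rank the original features by, e.g., the row norms of the returned projection matrix and keep the largest ones, which is how subspace-learning-based selectors are commonly used.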
“…In [63], feature selection is realized by differential evolution to reduce the search space. Some other feature selection algorithms are based on a maximum margin criterion [64], [65], [66], [67], [68], [69]. These methods are sample-based, where the "margin" is defined as the difference between the distance to the nearest sample of the same class (near-hit) and the distance to the nearest sample from the opposite classes (near-miss).…”
Section: Feature Selection
confidence: 99%
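As one concrete instance of the sample-based margin described above, the classic Relief-style update scores each feature by how much it contributes to the near-miss distance relative to the near-hit distance (with the usual sign convention: margin = distance to near-miss minus distance to near-hit). This is an illustrative sketch only, not the specific algorithms of [64]-[69].

```python
import numpy as np

def relief_scores(X, y):
    """Relief-style per-feature margin scores: reward features that separate a
    sample from its near-miss and penalise features that separate it from its
    near-hit.  Illustrative sketch, not the cited algorithms."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n, d = X.shape
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                            # exclude self-matches
    same = y[:, None] == y[None, :]
    scores = np.zeros(d)
    for i in range(n):
        hit = np.where(same[i], d2[i], np.inf).argmin()     # nearest same-class sample
        miss = np.where(~same[i], d2[i], np.inf).argmin()   # nearest other-class sample
        scores += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return scores / n

# Features are then ranked by their score and the top k retained, e.g.:
# selected = np.argsort(relief_scores(X, y))[::-1][:k]
```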
“…Thus it is better to take the class labels into account when conducting sparse coding and codebook learning, so that the generated sparse codes and codebook also benefit the classification problem. For example, in [17], the codebook is learned by using the class labels to maximize the margins of the samples [55]-[59].…”
Section: Introduction
confidence: 99%