2017
DOI: 10.1007/978-3-319-58961-9_25

Adaptive Feature Selection Based on the Most Informative Graph-Based Features

Abstract: In this paper, we propose a novel method to adaptively select the most informative and least redundant feature subset, which has strong discriminating power with respect to the target label. Unlike most traditional methods using vectorial features, our proposed approach is based on graph-based features and thus incorporates the relationships between feature samples into the feature selection process. To efficiently encapsulate the main characteristics of the graph-based features, we probe each graph's …

Cited by 10 publications (9 citation statements) | References 16 publications
“…This method can avoid discarding some valuable features arising in individual feature combinations. g) GF-RW [27]: a graph-based feature selection method which incorporates the pairwise relationships between samples of each feature dimension.…”
Section: Experiments On Standard Machine Learning Datasets
confidence: 99%
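The pairwise-relationship idea attributed to GF-RW above can be illustrated with a minimal sketch: each feature dimension induces a graph whose nodes are the samples and whose edge weights compare sample values. The Gaussian kernel below is an assumption for illustration, not necessarily the construction used in the cited paper.

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.normal(size=20)    # one feature dimension observed over 20 samples

# Graph-based feature: nodes are samples, edge weights encode the pairwise
# relationship between sample values via a Gaussian kernel (illustrative).
diff = f[:, None] - f[None, :]
W = np.exp(-diff ** 2)     # 20 x 20 weighted adjacency matrix
np.fill_diagonal(W, 0.0)   # no self-loops
```

Each feature dimension then contributes one such graph, and feature selection can compare these graphs rather than raw vectors.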
“…The most relevant vectorial features are located by selecting the graph-based features that are most similar to the graph-based target feature, in terms of the Jensen-Shannon divergence measure between the graphs. To adaptively determine the most relevant feature subset, Cui et al [27] have further developed a new information theoretic feature selection method which a) encapsulates the relationship between sample pairs for each feature dimension and b) automatically identifies the subset containing the most informative and least redundant features by solving a quadratic programming problem.…”
Section: Introduction
confidence: 99%
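The quadratic-programming formulation described in the statement above (maximize relevance to the target while penalizing redundancy among selected features) can be sketched generically. This is not the paper's method: correlation magnitudes stand in for its Jensen-Shannon divergences between graph representations, and the toy data, the trade-off weight `alpha`, and the solver are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))                     # 50 samples, 8 candidate features
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=50)

# Relevance r_i: |correlation| of feature i with the target
# (stand-in for a graph-based divergence measure).
r = np.abs([np.corrcoef(X[:, i], y)[0, 1] for i in range(X.shape[1])])

# Redundancy Q_ij: |correlation| between features i and j.
Q = np.abs(np.corrcoef(X, rowvar=False))

# Solve  max_w  r^T w - (alpha/2) w^T Q w   s.t.  w >= 0, sum(w) = 1.
alpha = 1.0
obj = lambda w: -(r @ w) + 0.5 * alpha * (w @ Q @ w)
w0 = np.full(len(r), 1.0 / len(r))
res = minimize(obj, w0,
               bounds=[(0.0, 1.0)] * len(r),
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
ranking = np.argsort(res.x)[::-1]                # features ordered by weight
```

Features with large QP weights are kept; here the informative features 0 and 1 should dominate the ranking.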
“…Neighbor Embedding (NE) approaches assume that features drawn from small low- and high-resolution patches lie on two locally geometrically similar manifolds (Wang et al., 2019; Bai et al., 2014, 2018). Based on this assumption, NE approaches reconstruct high-resolution features using coefficients that record the local geometric structure and are shared with the low-resolution space (Liu and Bai, 2012; Cui et al., 2017, 2019). A representative NE approach is the A+ method proposed by Timofte et al. (2014).…”
Section: Related Work
confidence: 99%
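The neighbor-embedding reconstruction summarized above can be sketched as follows, assuming LLE-style constrained least-squares weights and toy random dictionaries; `D_lo`, `D_hi`, `k`, and the regularizer are illustrative, not taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(1)

# Paired dictionaries of low- and high-resolution patch features
# (toy data; a real pipeline extracts these from training images).
D_lo = rng.normal(size=(200, 9))      # 200 low-res patch features (3x3)
D_hi = rng.normal(size=(200, 36))     # matching high-res features (6x6)

def ne_reconstruct(x_lo, D_lo, D_hi, k=5):
    """Find the k nearest low-res neighbors, compute least-squares
    combination weights on the low-res manifold, then reuse the same
    weights on the high-res side (the shared-coefficient assumption)."""
    d = np.linalg.norm(D_lo - x_lo, axis=1)
    idx = np.argsort(d)[:k]
    N = D_lo[idx]                     # k x 9 neighbor matrix
    # Constrained LS via the local Gram matrix, as in LLE.
    G = (N - x_lo) @ (N - x_lo).T
    w = np.linalg.solve(G + 1e-6 * np.eye(k), np.ones(k))
    w /= w.sum()                      # weights sum to one
    return w @ D_hi[idx]              # shared weights in high-res space

x_hi = ne_reconstruct(D_lo[0] + 0.01 * rng.normal(size=9), D_lo, D_hi)
print(x_hi.shape)                     # (36,)
```

The key design point is that the reconstruction weights are computed entirely in the low-resolution space and then transferred unchanged to the high-resolution dictionary.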
“…Most of these methods are based on transforming the original TS using a different representation. For instance, methods such as shapelets [22], symbolic dynamic methods [23], and pseudo-observations [24] have been implemented, while others extract graph-based features [25], pairwise mutual information [26] (no details are given for the computation of the MI between TS), or correlations [27]. Recently, in [28], a technique called Ordex is presented for the extraction and selection of relevant and non-redundant multivariate ordinal patterns for classification.…”
Section: Introduction
confidence: 99%