2017
DOI: 10.1007/s11280-017-0502-9

Supervised feature selection algorithm via discriminative ridge regression

Cited by 20 publications (9 citation statements)
References 42 publications

“…Ridge regression is defined as a biased estimate model to evaluate the error between feature variables and class label information [36]. Based on [37], suppose that X = [x_1, ..., x_n] ∈ R^{d×n} represents the original dataset that contains n d-dimensional instances and belongs to c clusters.…”
Section: Embedded Methods
confidence: 99%
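To make the quoted setup concrete, here is a minimal sketch of plain (non-discriminative) ridge regression in that notation, with X ∈ R^{d×n} and a one-hot label matrix Y ∈ R^{c×n}. The closed-form solve, the penalty value lam, and the use of row norms of W as feature scores are illustrative assumptions, not the discriminative ridge regression algorithm proposed in the paper.

```python
import numpy as np

# Minimal sketch of plain ridge regression in the excerpt's notation:
# X in R^{d x n} holds n d-dimensional instances, Y in R^{c x n} is a
# one-hot label matrix for c classes, lam is an assumed penalty weight.
def ridge_weights(X, Y, lam=1.0):
    d = X.shape[0]
    # Closed-form minimizer of ||W^T X - Y||_F^2 + lam * ||W||_F^2
    return np.linalg.solve(X @ X.T + lam * np.eye(d), X @ Y.T)  # shape (d, c)

# Toy usage: 5 features, 20 instances, 3 classes (all values are synthetic).
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))
labels = rng.integers(0, 3, size=20)
Y = np.eye(3)[labels].T                     # one-hot labels, shape (3, 20)
W = ridge_weights(X, Y, lam=0.1)
print(np.linalg.norm(W, axis=1))            # row norms as crude feature scores
```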
“…The most restrictive one is the missing completely at random (MCAR), such as Zhang (2008b), Zhang et al. (2016) and Zhang (2002a, 2002b). The less restrictive one is the MAR (missing at random), such as Zhang (2008a), Qin et al. (2007) and Zhang et al. (2017a, 2017b, 2018a, 2018b, 2018c, 2010, 2005). The unrestrictive one is the missing not at random (MNAR).…”
Section: Missing Data Mechanisms
confidence: 99%
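As a rough illustration of the distinction drawn in this excerpt, the NumPy sketch below simulates MCAR versus MAR missingness; the drop rates, the logistic dependence on the observed feature, and all variable names are assumptions made for illustration, not taken from the cited works (MNAR, where missingness depends on the unobserved value itself, is omitted).

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((100, 2))            # two features: x0 (observed), x1

# MCAR: every x1 entry is dropped with the same probability,
# independent of any observed or unobserved value.
mcar = data.copy()
mcar[rng.random(100) < 0.2, 1] = np.nan

# MAR: the chance that x1 is missing depends only on the observed x0
# (here a logistic function of x0), never on the missing value itself.
mar = data.copy()
p_miss = 1.0 / (1.0 + np.exp(-data[:, 0]))
mar[rng.random(100) < 0.4 * p_miss, 1] = np.nan

print(np.isnan(mcar[:, 1]).mean(), np.isnan(mar[:, 1]).mean())
```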
“…Wrappers usually run much slower than filter methods due to their consideration of inter-feature relationships [29]. Embedded methods [30][31][32][33] use a classification learning algorithm to evaluate the validity of features, which retains the high precision of wrapper methods while keeping the high efficiency of filter methods. However, their time complexity is relatively high when processing high-dimensional data, and redundant features cannot be completely removed [34].…”
Section: Introduction
confidence: 99%
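To illustrate what “embedded” means in this passage, here is a small scikit-learn sketch in which an L1-penalized logistic regression does the selecting as part of its own training. The synthetic dataset, the liblinear solver, and the penalty strength C=0.5 are arbitrary choices for illustration; this is a generic embedded method, not the supervised feature selection algorithm proposed in the cited paper.

```python
# Generic illustration of an embedded method (not the paper's algorithm):
# an L1-penalized logistic regression is trained once, and features whose
# learned coefficients are non-zero are kept, so selection is embedded in
# the classifier's own training rather than run as a separate wrapper loop.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=30, n_informative=5,
                           random_state=0)
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
).fit(X, y)
print(selector.get_support().sum(), "features kept out of", X.shape[1])
```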