k-best feature selection and ranking via stochastic approximation
2023 | DOI: 10.1016/j.eswa.2022.118864

Cited by 15 publications (5 citation statements)
References 45 publications
“…First, a variance threshold was applied to remove features with variance < 0.8. Second, the k-best method (21) was employed to remove features with a P-value > 0.05. To establish a more refined model, the least absolute shrinkage and selection operator (LASSO, version 1.0.2) algorithm was used to formulate a penalty function that compresses and selects regression coefficients (22).…”
Section: Methods (mentioning)
confidence: 99%
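The three-stage filter described in the quoted Methods text (variance threshold, a k-best P-value screen, then LASSO) can be sketched with scikit-learn as shown below. Only the 0.8 variance cutoff, the 0.05 P-value cutoff, and the use of LASSO come from the quote; the function name, the ANOVA F-test scoring, and the alpha value are illustrative assumptions, not the cited authors' code.

import numpy as np
from sklearn.feature_selection import VarianceThreshold, f_classif
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

def three_stage_selection(X, y, var_cutoff=0.8, p_cutoff=0.05, alpha=0.01):
    # Stage 1: variance threshold removes features with variance < 0.8.
    X1 = VarianceThreshold(threshold=var_cutoff).fit_transform(X)

    # Stage 2: k-best style univariate screen; keep features whose ANOVA
    # F-test P-value is <= 0.05 (the scoring function is an assumption).
    _, pvals = f_classif(X1, y)
    X2 = X1[:, pvals <= p_cutoff]

    # Stage 3: the LASSO penalty shrinks coefficients toward zero; features
    # with non-zero coefficients form the refined subset.
    lasso = Lasso(alpha=alpha).fit(StandardScaler().fit_transform(X2), y)
    return X2[:, np.flatnonzero(lasso.coef_ != 0)]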
“…Due to the lack of universal knowledge regarding L_C, the classifier C is trained on the dataset and the value of y_C(X′, Y) is measured, where y_C is L_C plus measurement noise. The non-empty feature set X′ is specified as the solution to the wrapper-based FS problem [29].…”
Section: K-best Feature Selection Methods (mentioning)
confidence: 99%
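Written out, the quoted definition corresponds to the wrapper-based objective below. This is a hedged LaTeX reconstruction: the expectation and the additive noise term \varepsilon (standing in for the truncated "plus") are assumptions consistent with the stochastic-approximation setting, not text quoted from either paper.

% Wrapper-based FS: find the non-empty subset X' minimizing the expected,
% noisily measured classifier loss. The noise term is an assumption.
\min_{\emptyset \neq X' \subseteq X} \; \mathbb{E}\!\left[ y_C(X', Y) \right],
\qquad y_C(X', Y) = L_C(X', Y) + \varepsilon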
“…Feature selection aims to obtain a minimal, informative subset, excluding irrelevant or highly correlated features [28]. The process involves selecting the k best features, ranging from 5 to 15 with an increment of 5, i.e., k ∈ {5, 10, 15} [29]. Finally, data normalization using the z-score transformation ensures consistent ranges between data points [27].…”
Section: Preprocessing Data (mentioning)
confidence: 99%
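A minimal sketch of the quoted preprocessing steps, assuming scikit-learn and an ANOVA F-test score for the k-best step: only the k values {5, 10, 15} and the z-score normalization come from the quote; the function name and scoring choice are illustrative assumptions.

from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler

def preprocess_variants(X, y, ks=(5, 10, 15)):
    # Return one z-scored feature matrix per candidate k in {5, 10, 15}.
    variants = {}
    for k in ks:
        X_k = SelectKBest(score_func=f_classif, k=k).fit_transform(X, y)
        # z-score transformation: zero mean, unit variance per feature.
        variants[k] = StandardScaler().fit_transform(X_k)
    return variants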