2020
DOI: 10.26599/bdma.2020.9020005

Novel and efficient randomized algorithms for feature selection

Abstract: Feature selection is a crucial problem in efficient machine learning, and it also contributes greatly to the explainability of machine-driven decisions. Methods such as decision trees and the Least Absolute Shrinkage and Selection Operator (LASSO) can select features during training. However, these embedded approaches can only be applied to a small subset of machine learning models. Wrapper-based methods can select features independently of machine learning models, but they often suffer from a high computational c…
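As a concrete illustration of the embedded approach the abstract mentions, the sketch below implements LASSO by coordinate descent in plain Python: the soft-thresholding update drives some coefficients exactly to zero, so feature selection happens during training rather than in a separate search. The toy dataset and the regularization value `lam` are illustrative assumptions, not taken from the paper.

```python
# Minimal LASSO coordinate descent: an "embedded" selector, since
# coefficients shrunk exactly to zero drop their features during
# training. The toy data below is illustrative only.

def soft_threshold(z, gamma):
    """Soft-thresholding operator used by the LASSO coordinate update."""
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for min_b (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_sweeps):
        for j in range(p):
            # Residual with feature j held out of the current fit.
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            denom = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / denom
    return beta

# y = 2*x0 + 1*x1 exactly; feature 2 is pure noise.
X = [[1.0, 0.0, 0.3], [0.0, 1.0, -0.2], [1.0, 1.0, 0.1],
     [-1.0, 0.0, 0.4], [0.0, -1.0, -0.3], [-1.0, -1.0, 0.2]]
y = [2.0, 1.0, 3.0, -2.0, -1.0, -3.0]
beta = lasso_cd(X, y, lam=0.1)
selected = [j for j, b in enumerate(beta) if abs(b) > 1e-8]
print(selected)  # the noise feature is dropped
```

Because the L1 penalty zeroes the noise coefficient exactly, the "selection" falls out of the fit itself — which is also why this trick only works for models whose training admits such a penalty, as the abstract notes.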

Cited by 28 publications (14 citation statements) · References 44 publications
“…The filter-based feature selection method has the advantages of strong independence, fast running speed, and low computational complexity in machine learning algorithms, but it struggles to remove redundant features completely when many redundant features are highly relevant to the target [41]. Wrapper methods can be independent of machine learning models but typically have high computational costs [42]. The embedded method embeds feature selection into other algorithms and selects features during the training process, which can effectively improve the efficiency of model learning [43].…”
Section: Related Work
confidence: 99%
“…Filter-based methods select features based on scores computed by different statistical approaches. Wrapper-based FS methods select features by iteratively generating feature subsets and evaluating their predictive performance [12]. Hybrid or embedded feature selection combines the merits of filter and wrapper methods.…”
Section: Related Work
confidence: 99%
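The contrast the excerpt draws can be sketched in a few lines of plain Python: a filter method scores each feature with a statistic (here Pearson correlation with the target), while a wrapper repeatedly re-evaluates a model (here a leave-one-out 1-nearest-neighbour classifier) on candidate subsets. All names and data are illustrative assumptions, not taken from the cited works.

```python
# Filter vs. wrapper feature selection on a toy classification task.

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (z - mb) for x, z in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((z - mb) ** 2 for z in b) ** 0.5
    return cov / (sa * sb)

def filter_rank(X, y):
    """Filter method: rank features by |correlation with the target|.
    No learning model is involved, so it is fast and model-independent."""
    cols = list(zip(*X))
    return sorted(range(len(cols)), key=lambda j: -abs(pearson(cols[j], y)))

def loo_accuracy(X, y, feats):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier
    restricted to the feature subset `feats`."""
    correct = 0
    for i in range(len(X)):
        best_d, pred = float("inf"), None
        for k in range(len(X)):
            if k == i:
                continue
            d = sum((X[i][j] - X[k][j]) ** 2 for j in feats)
            if d < best_d:
                best_d, pred = d, y[k]
        correct += pred == y[i]
    return correct / len(X)

def forward_select(X, y, max_feats):
    """Wrapper method: greedy forward selection, adding at each step the
    feature whose inclusion maximizes the model's LOO accuracy. Every
    candidate triggers a re-evaluation, hence the higher cost."""
    chosen, remaining = [], set(range(len(X[0])))
    while remaining and len(chosen) < max_feats:
        j = max(remaining, key=lambda j: loo_accuracy(X, y, chosen + [j]))
        chosen.append(j)
        remaining.discard(j)
    return chosen

# Feature 0 separates the classes; feature 1 is noise.
X = [[0.1, 0.9], [0.2, 0.1], [0.3, 0.8],
     [0.9, 0.2], [1.0, 0.7], [0.8, 0.1]]
y = [0, 0, 0, 1, 1, 1]
print(filter_rank(X, y))        # feature 0 ranks first
print(forward_select(X, y, 1))  # the wrapper also picks feature 0
```

On this toy task both families agree, but the wrapper evaluates the model once per candidate subset, which is exactly the computational cost the excerpt (and the abstract above) attributes to wrapper-based selection.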
“…Recently, with the rapid growth of big data in all science and engineering domains, especially in the physical, biological, and biomedical sciences, networking, data collection, and data storage capacity have developed quickly [1-11]. Data mining refers to the activity of searching for relevant or pertinent information by processing big datasets. This has greatly influenced the decision-making process.…”
Section: Introduction
confidence: 99%