2018
DOI: 10.1007/s11042-018-6083-5

Feature selection for text classification: A review

Cited by 259 publications (127 citation statements)
References 82 publications
“…K-means clustering has been shown to work well for large-scale data, and its accuracy is also high compared to other clustering algorithms [68]. The K-means clustering algorithm groups the extracted terms into K clusters according to their feature values, where K is any positive integer that determines the number of clusters.…”
Section: K-means Clustering (mentioning)
Confidence: 99%
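
As a concrete illustration of the clustering step this excerpt describes, the sketch below groups TF-IDF-weighted terms into K clusters with scikit-learn's KMeans. The toy corpus, the choice K = 3, and all variable names are assumptions made for the example, not details from the cited review.

# Minimal sketch: group extracted terms into K clusters by their feature values.
# Assumes scikit-learn; the toy corpus and K = 3 are illustrative choices only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "feature selection improves text classification",
    "k-means clustering groups similar terms",
    "text mining extracts terms from documents",
]

# Each term becomes a data point whose features are its TF-IDF weights per document.
vectorizer = TfidfVectorizer()
doc_term = vectorizer.fit_transform(docs)   # documents x terms
term_vectors = doc_term.T.toarray()         # terms x documents

K = 3  # any positive integer; fixes the number of clusters in advance
kmeans = KMeans(n_clusters=K, n_init=10, random_state=0)
labels = kmeans.fit_predict(term_vectors)

for term, label in zip(vectorizer.get_feature_names_out(), labels):
    print(f"{term}: cluster {label}")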
“…Three feature selection algorithms, namely, the filter model [29], the wrapper model with the recursive feature elimination (RFE) algorithm [30,31], and the RF model [32,33], are applied to select the features most relevant to the gearbox oil temperature as the input variables of the model. These three algorithms are typical feature selection methods, distinguished by how they generate feature subsets [34]. Each ranks the most relevant features according to different rules, which avoids the weaknesses of relying on any single method.…”
Section: Feature Selection (mentioning)
Confidence: 99%
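
The three families of selectors named in this excerpt can be sketched side by side with scikit-learn. The synthetic data, the base estimators, and the "keep 5 features" choice below are assumptions for illustration, not the setup of the cited gearbox study.

# Minimal sketch of the three feature-selection families named above:
# a filter (univariate scoring), a wrapper (RFE), and a random-forest ranking.
# Synthetic data and the "keep 5 features" choice are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression, RFE
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=20, n_informative=5, random_state=0)

# Filter model: score each feature independently of any learner.
filter_sel = SelectKBest(score_func=f_regression, k=5).fit(X, y)
print("filter:", np.flatnonzero(filter_sel.get_support()))

# Wrapper model: recursive feature elimination around a base estimator.
rfe_sel = RFE(estimator=LinearRegression(), n_features_to_select=5).fit(X, y)
print("RFE:   ", np.flatnonzero(rfe_sel.get_support()))

# RF model: rank features by impurity-based importance.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print("RF:    ", np.argsort(rf.feature_importances_)[::-1][:5])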
“…Feature selection methods mostly iterate two steps until a stopping criterion is met (Deng et al., 2008): selecting a candidate feature subset, then evaluating the performance of the chosen subset (Deng et al., 2008).…”
Section: Application Background (mentioning)
Confidence: 99%
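
The two-step loop this excerpt attributes to Deng et al. can be made concrete with a greedy forward-selection sketch, which is one common instance of "select a subset, evaluate it, repeat until a stopping criterion is met". The dataset, classifier, and "stop when accuracy no longer improves" rule below are assumptions for the example, not the cited paper's algorithm.

# Minimal sketch of the two-step loop: (1) propose a feature subset,
# (2) evaluate it, and repeat until a stopping criterion is met.
# Greedy forward selection with cross-validated accuracy; the data,
# classifier, and no-improvement stopping rule are assumptions here.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
selected, best_score = [], 0.0

while True:
    # Step 1: subset selection -- try adding each remaining feature.
    candidates = [f for f in range(X.shape[1]) if f not in selected]
    if not candidates:
        break
    # Step 2: evaluate each candidate subset with the chosen learner.
    scores = {
        f: cross_val_score(LogisticRegression(max_iter=1000),
                           X[:, selected + [f]], y, cv=5).mean()
        for f in candidates
    }
    f_best = max(scores, key=scores.get)
    if scores[f_best] <= best_score:   # stopping criterion: no improvement
        break
    selected.append(f_best)
    best_score = scores[f_best]

print("selected features:", selected, "cv accuracy:", round(best_score, 3))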