2018
DOI: 10.1007/978-3-319-98776-7_133
Mixed Feature Selection Method Based on SVM

Cited by 2 publications (3 citation statements) | References 7 publications
“…Finally, based on the feature relevance ranking results, the features in each class with high feature-category relevance are retained, the remaining features in the same class are treated as redundant and removed, and the top N_2 features are kept to obtain the optimal feature subset S_2. Studies have shown [25] that the proportion of features retained in the final feature selection should preferably be between 20% and 40%. Therefore, the number of features is set to N_2 = total number of features × 40%.…”
Section: Algorithm Description
confidence: 99%
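The retention step quoted above can be summarised in a short sketch. The following is a minimal illustration, not the cited authors' code: it assumes mutual information as the feature-category relevance measure, and the helper name select_top_features is hypothetical; only the 40% keep ratio comes from the quoted passage.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_top_features(X, y, keep_ratio=0.4):
    """Rank features by relevance to the class label and keep the top fraction."""
    relevance = mutual_info_classif(X, y)           # feature-category relevance score (assumed measure)
    n_keep = max(1, int(X.shape[1] * keep_ratio))   # N_2 = total number of features * 40%
    ranked = np.argsort(relevance)[::-1]            # most relevant features first
    return ranked[:n_keep]                          # indices of the retained subset S_2

# Example usage on synthetic data
X = np.random.rand(100, 20)
y = np.random.randint(0, 2, size=100)
selected = select_top_features(X, y)
```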
“…The experiment was set up as follows: the labelling rate R of the experimental dataset was set to 0.3, that is, 30% of the dataset was randomly selected as the labelled training set, and the remaining 70% of unlabelled samples were used as the auxiliary dataset and test set. With reference to the literature [25], the number of features selected was set to 40% of the original number of features. To eliminate chance from the experimental results, the experiment was repeated 20 times at a given labelling rate R for both the method in this paper and the chosen comparison methods, and the average of the 20 runs was taken as the final result.…”
Section: Experiments and Analysis
confidence: 99%
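A hedged sketch of this protocol follows. It assumes an SVM classifier (consistent with the cited paper's title) and again uses mutual information as a stand-in for the unspecified relevance measure; the function run_experiment and its parameter names are illustrative, with only R = 0.3, the 40% feature ratio, and the 20 repetitions taken from the quote.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def run_experiment(X, y, label_rate=0.3, keep_ratio=0.4, n_repeats=20, seed=0):
    """Average test accuracy over n_repeats random labelled/unlabelled splits."""
    rng = np.random.RandomState(seed)
    scores = []
    for _ in range(n_repeats):
        # Labelling rate R = 0.3: 30% labelled training data, 70% held out
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=label_rate, stratify=y,
            random_state=rng.randint(1_000_000))
        # Keep 40% of the original features, ranked by relevance to the label
        n_keep = max(1, int(X.shape[1] * keep_ratio))
        idx = np.argsort(mutual_info_classif(X_tr, y_tr))[::-1][:n_keep]
        clf = SVC().fit(X_tr[:, idx], y_tr)
        scores.append(clf.score(X_te[:, idx], y_te))
    return float(np.mean(scores))  # final result: mean of the 20 repetitions
```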
“…The investigations have revealed that including trivial features while training the models not only increases the computational complexity of the algorithm but also adversely impacts the prediction accuracy of the model [28, 29, 30]. It is worth noting that machine learning tools usually perform efficiently in circumstances where the decision boundaries are well-defined.…”
Section: Introduction
confidence: 99%