2018 IEEE Symposium Series on Computational Intelligence (SSCI) 2018
DOI: 10.1109/ssci.2018.8628830
Stacked Generalization with Wrapper-Based Feature Selection for Human Activity Recognition

Cited by 9 publications (8 citation statements)
References 14 publications
“…These methods are the combination of the best qualities of filter and wrapper methods in which the variable selection process and classification have been implemented simultaneously using a learning algorithm [77]. Assessment of the importance of variables using embedded methods can be referred to in [79]. In this study, six feature selection methods (i.e., two filter methods including Pearson's R (PR) and mutual information (MI), two wrapper methods including Boruta and Stepwise Feature Selection (SFS), and two embedded methods including Random Forest (RF) and Recursive Feature Elimination (RFE)) were used to assess the contribution of variables to ML models.…”
Section: Feature Selection Methods
confidence: 99%
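The quote above groups feature selection into filter, wrapper, and embedded families. As a minimal sketch (not the cited study's code), the three families can be illustrated with scikit-learn on synthetic data; the study's own categorization differs slightly (it lists RFE as embedded, while scikit-learn documentation usually treats it as wrapper-style):

```python
# Sketch of the three feature-selection families named in the quote,
# on synthetic data. Not the cited study's implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, mutual_info_classif
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=5, random_state=0)

# Filter: rank features by mutual information with the label,
# independently of any classifier.
mi = mutual_info_classif(X, y, random_state=0)
filter_top5 = np.argsort(mi)[-5:]

# Wrapper: recursive feature elimination driven by a learner's fit.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5).fit(X, y)
wrapper_top5 = np.where(rfe.support_)[0]

# Embedded: importances that fall out of fitting a random forest.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
embedded_top5 = np.argsort(rf.feature_importances_)[-5:]
```

Boruta, the wrapper method the surveyed paper relies on, is available separately (e.g. the `boruta` Python package) and wraps a random forest in a similar fashion.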
“…This is due to the fact that raw sensor data are always noise-corrupted, which makes it hard to measure and reflect the true motion change of smartphones accurately. After preprocessing the raw data, traditional methods extract a large amount of features and select some principal features [21] representing the essential difference between different activities. Features extracted from the time domain, frequency domain, wavelet energy and interquartile range are extensively used.…”
Section: Traditional Methods for HAR
confidence: 99%
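The quote above describes extracting hand-crafted time- and frequency-domain features (including interquartile range) from preprocessed sensor windows. A minimal sketch of such a feature extractor, assuming a 1-D accelerometer window sampled at 50 Hz (the function name and feature choice are illustrative, not from the cited work):

```python
# Sketch: hand-crafted HAR features from one sensor window.
# Feature set and sampling rate are illustrative assumptions.
import numpy as np

def window_features(sig, fs=50.0):
    """Time- and frequency-domain features from a 1-D window."""
    mean, std = sig.mean(), sig.std()
    # Interquartile range, one of the features the quote mentions.
    iqr = np.percentile(sig, 75) - np.percentile(sig, 25)
    # Frequency-domain features from the magnitude spectrum.
    spectrum = np.abs(np.fft.rfft(sig - mean))
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    dominant_freq = freqs[spectrum.argmax()]
    energy = np.sum(spectrum ** 2) / sig.size
    return np.array([mean, std, iqr, dominant_freq, energy])

# Toy "walking" window: a 2 Hz oscillation plus sensor noise.
rng = np.random.default_rng(0)
t = np.arange(128) / 50.0
walk = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(128)
feats = window_features(walk)
```

For the synthetic window, the dominant-frequency feature recovers a value near the 2 Hz oscillation, which is the kind of discriminative signal these pipelines feed into a classifier.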
“…The stacked ensemble model can achieve an accuracy of 0.96 for various HAR tasks. HAR task prediction accuracy is further improved in [21], where Boruta, a wrapper-based all-relevant feature selection method, is used for feature extraction before model training. The stacking approach is composed of various machine learning algorithms such as random forest, multi-layer perceptron, logistic regression, and SVM with a linear kernel, and with Boruta it shows good performance, reaching an accuracy of 0.97.…”
Section: Related Work
confidence: 99%
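The quote above lists the base learners in the stacked ensemble: random forest, multi-layer perceptron, and a linear-kernel SVM, combined by a meta-learner. As a hedged sketch of that architecture with scikit-learn's `StackingClassifier` (hyperparameters and the logistic-regression meta-learner are assumptions, not the paper's exact configuration):

```python
# Sketch of a stacked ensemble with the base learners the quote names.
# Hyperparameters are illustrative, not the cited paper's settings.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for extracted HAR features (3 activity classes).
X, y = make_classification(n_samples=300, n_features=20, n_classes=3,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                              random_state=0)),
        ("svm", SVC(kernel="linear", random_state=0)),
    ],
    # Meta-learner that combines the base learners' predictions.
    final_estimator=LogisticRegression(max_iter=1000),
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

`StackingClassifier` trains the meta-learner on cross-validated predictions of the base learners, which is the "stacked generalization" idea in the surveyed paper's title.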
“…Table 11 contains the results and the models of the above-mentioned studies:

Ref | Model | Accuracy
[44] | Deep CNN | 0.946
[45] | Deep CNN | 0.951
[46] | Deep CNN | 0.947
[47] | Deep CNN | 0.900
[48] | Deep ConvLSTM | 0.958
[16] | Residual Bi LSTM | 0.905
[17] | Multiview stacking | 0.925
[18] | Stacked LSTM | 0.930
[19] | SDAE+GBM | 0.959
[20] | Stacked ensemble | 0.960
[21] | Stacked ensemble | 0.968
DS-MLP | Deep Stacked Ensemble | 0.973

Performance analysis suggests that the approaches based on deep CNN have an accuracy between 0.90 and 0.95. Ensemble approaches, on the other hand, tend to have higher accuracy than simple deep learning approaches.…”
Section: Performance Comparison With State-of-the-Art Studies
confidence: 99%