Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing
DOI: 10.1145/2971648.2971708
Wearable sensor based multimodal human activity recognition exploiting the diversity of classifier ensemble

Cited by 67 publications
(35 citation statements)
References 35 publications
“…The authors of [16] refer to feature-level fusion as early fusion, which merges the feature data extracted from each modality before the classification procedure. Using a single feature vector to train the model is straightforward, but feature-compatibility issues arising from heterogeneous sampling frequencies and configuration parameters can degrade classifier performance [35]. In contrast, classifier-level fusion combines the classification results produced by base-learners trained on different sensor modalities.…”
Section: Related Work
confidence: 99%
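The distinction above can be illustrated with a minimal sketch of early (feature-level) fusion: per-modality feature matrices are concatenated into one vector per window and a single classifier is trained. The feature arrays and their dimensions here are hypothetical stand-ins, not the data used in the cited work.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical per-modality feature matrices (e.g. accelerometer and
# gyroscope), already aligned to the same set of 100 windows — the
# alignment step is exactly where heterogeneous sampling frequencies
# cause the compatibility issues noted in [35].
acc_features = rng.normal(size=(100, 6))
gyro_features = rng.normal(size=(100, 4))
labels = rng.integers(0, 2, size=100)

# Early fusion: one concatenated vector per window, one classifier.
fused = np.concatenate([acc_features, gyro_features], axis=1)
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print(fused.shape)  # (100, 10)
```

Classifier-level (late) fusion would instead train one model per modality and merge their outputs, as sketched further below.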
“…In contrast, classifier-level fusion combines the classification results produced by base-learners trained on different sensor modalities. Because class-probability predictions are uniform in their expression, classifier-level ensemble learning techniques are widely adopted to integrate multiple machine learning models [34,35,36]. In this paper, we apply two-level stacking and voting ensembles, both of which fall under the classifier-level sensor-fusion technique.…”
Section: Related Work
confidence: 99%
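A minimal sketch of the two classifier-level fusion schemes named above, using scikit-learn's `VotingClassifier` and `StackingClassifier`. The base-learners and synthetic data are hypothetical stand-ins for per-modality classifiers; this is not the cited paper's exact pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Hypothetical base-learners, each standing in for a classifier
# trained on one sensor modality.
base = [("dt", DecisionTreeClassifier(random_state=0)),
        ("knn", KNeighborsClassifier())]

# Soft voting averages class-probability predictions — which are
# uniform in expression regardless of the underlying feature spaces.
voting = VotingClassifier(base, voting="soft").fit(X, y)

# Stacking trains a second-level meta-learner on the base-learners'
# predictions (the "two-level" structure mentioned above).
stacking = StackingClassifier(base,
                              final_estimator=LogisticRegression()).fit(X, y)

print(voting.score(X, y), stacking.score(X, y))
```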
“…• Lara et al. [18] apply both statistical and structural feature detectors to discriminate among activities. • Guo et al. [12] exploit the diversity of base classifiers to construct a good ensemble for multimodal activity recognition, where the diversity measure is obtained from both labelled and unlabelled data. Note: if a compared method cannot handle unsupervised samples, it is trained only on the supervised samples.…”
Section: Baselines
confidence: 99%
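One reason a diversity measure can use unlabelled data, as in the Guo et al. baseline above, is that pairwise disagreement between classifiers needs only their predictions, not ground truth. The function below is an illustrative sketch of such a measure, not the cited paper's exact formulation.

```python
import numpy as np

def disagreement(preds_a, preds_b):
    """Pairwise disagreement: the fraction of samples on which two
    classifiers predict different labels. Since no ground-truth labels
    are required, it can be computed on unlabelled data as well."""
    preds_a = np.asarray(preds_a)
    preds_b = np.asarray(preds_b)
    return float(np.mean(preds_a != preds_b))

# Two classifiers disagreeing on 2 of 4 samples:
print(disagreement([0, 1, 1, 0], [0, 1, 0, 1]))  # 0.5
```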
“…Similar to [2], the experiments conducted on the two public datasets perform a background activity recognition task [8]. The activities are categorized into six classes: lying, sitting/standing, walking, running, cycling, and other activities.…”
Section: A. Datasets and Experimental Settings
confidence: 99%
“…To evaluate the performance of the proposed approach, we conduct extensive experiments comparing our model with state-of-the-art methods on PAMAP2 and MHEALTH. We select four other state-of-the-art, multimodal feature-based approaches (MARCEL [2], FEM [11], CEM [22] and MKL [23]) and five baseline methods (Support Vector Machine (SVM), Random Forest (RF), K-Nearest Neighbors (KNN), Decision Tree (DT) and a single neural network) to show the competitive power of the proposed method. To ensure a fair comparison, the best parameters for our model are used on both datasets; the best trade-off parameter (λ = 0.7) is deployed for MARCEL; time-domain features including mean, variance, standard deviation, and median, together with frequency-domain features including entropy and spectral entropy, are used for FEM; an independent kernel is defined for each modality feature group for MKL; and for the other baseline methods, all modality features are used.…”
Section: B. Accuracy Comparison and Performance Analysis
confidence: 99%
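The FEM feature set named in the excerpt above can be sketched as a per-window extractor: time-domain mean, variance, standard deviation, and median, plus a spectral-entropy term. The function name and window length are hypothetical; the exact entropy definition used by FEM is not specified here, so Shannon entropy of the normalised power spectrum is assumed.

```python
import numpy as np

def window_features(signal):
    """Illustrative per-window feature extractor: time-domain
    mean/variance/std/median plus spectral entropy (assumed to be
    the Shannon entropy of the normalised power spectrum)."""
    signal = np.asarray(signal, dtype=float)
    feats = [signal.mean(), signal.var(), signal.std(), np.median(signal)]
    # Power spectrum via the real FFT, normalised to a distribution.
    psd = np.abs(np.fft.rfft(signal)) ** 2
    psd = psd / psd.sum()
    psd = psd[psd > 0]          # drop zero bins before taking the log
    feats.append(float(-(psd * np.log2(psd)).sum()))
    return np.array(feats)

# Hypothetical 128-sample window of a sinusoidal signal:
print(window_features(np.sin(np.linspace(0, 4 * np.pi, 128))).shape)  # (5,)
```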