This paper proposes posterior-adapted, class-based weighted decision fusion to combine data from multiple accelerometers for improved physical activity recognition. The method is benchmarked against model-based weighted fusion and class-based weighted fusion without posterior adaptation on two publicly available datasets, PAMAP2 and MHEALTH. Experimental results show that: 1) posterior-adapted class-based weighted fusion outperformed both model-based and class-based weighted fusion; 2) decision fusion with two accelerometers yielded a statistically significant improvement in average performance over a single accelerometer; 3) fusing three accelerometers generally did not improve on the best two-accelerometer combination; and 4) the combination of ankle- and wrist-worn accelerometers achieved the best overall performance among all two- and three-accelerometer combinations.
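The general fusion rule behind class-based weighted decision fusion can be sketched as follows. The posteriors, per-class weights, and the choice of weighting each model's class posterior by its per-class validation accuracy are all illustrative assumptions, not the paper's actual scheme or values:

```python
def class_weighted_fusion(posteriors, class_weights):
    """Fuse per-model class posteriors using per-class weights.

    posteriors: list of per-model class-probability lists.
    class_weights: matching list of per-class weights, e.g. each
        model's per-class validation accuracy (hypothetical choice).
    """
    n_classes = len(posteriors[0])
    fused = [
        sum(p[c] * w[c] for p, w in zip(posteriors, class_weights))
        for c in range(n_classes)
    ]
    total = sum(fused)
    return [f / total for f in fused]  # renormalize to a distribution

# Two hypothetical accelerometer-based models voting over three classes
p = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]
w = [[0.9, 0.5, 0.7], [0.6, 0.8, 0.8]]
fused = class_weighted_fusion(p, w)
predicted = max(range(len(fused)), key=fused.__getitem__)
```

Giving each class its own weight lets a model that is reliable only for some activities still contribute where it is strong, which is the motivation for class-based over model-based weighting.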
Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.
Machine learning classification models for accelerometer data are potentially more accurate methods for measuring physical activity in young children than traditional cut-point methods. However, existing algorithms have been trained on laboratory-based activity trials, and their performance has not been investigated under free-living conditions.

Purpose: This study aimed to evaluate the accuracy of laboratory-trained hip and wrist random forest and support vector machine classifiers for the automatic recognition of five activity classes: sedentary (SED), light-intensity activities and games (LIGHT_AG), walking (WALK), running (RUN), and moderate-to-vigorous activities and games (MV_AG) in preschool-age children under free-living conditions.

Methods: Thirty-one children (4.0 ± 0.9 yr) were video recorded during a 20-min free-living play session while wearing an ActiGraph GT3X+ on their right hip and nondominant wrist. Direct observation was used to continuously code ground-truth activity class and the specific activity types occurring within each class using a bespoke two-stage coding scheme. Performance was assessed by calculating overall classification accuracy and extended confusion matrices summarizing class-level accuracy and the frequency of specific activities observed within each class.

Results: Accuracy values for the hip and wrist random forest algorithms were 69.4% and 59.1%, respectively. Accuracy values for the hip and wrist support vector machine algorithms were 66.4% and 59.3%, respectively. Compared with the laboratory cross-validation, accuracy decreased by 11%–15% for the hip classifiers and 19%–21% for the wrist classifiers. Classification accuracy values were 72%–78% for SED, 58%–79% for LIGHT_AG, 71%–84% for MV_AG, 9%–15% for WALK, and 66%–75% for RUN.

Conclusion: The accuracy of laboratory-based activity classifiers for preschool-age children was attenuated when tested on new data collected under free-living conditions.
Future studies should train and test machine learning activity recognition algorithms using accelerometer data collected under free-living conditions.
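The evaluation above reports overall accuracy plus class-level accuracy, the view an extended confusion matrix summarizes. A minimal sketch of those two metrics, with hypothetical window labels rather than the study's data:

```python
def class_accuracies(y_true, y_pred, classes):
    """Overall and per-class accuracy from observed vs predicted labels
    (the class-level view an extended confusion matrix summarizes)."""
    overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    per_class = {}
    for c in classes:
        hits = [p == c for t, p in zip(y_true, y_pred) if t == c]
        per_class[c] = sum(hits) / len(hits) if hits else None
    return overall, per_class

# Hypothetical windows: direct-observation labels vs classifier output
truth = ["SED", "SED", "WALK", "RUN", "LIGHT_AG", "WALK"]
pred  = ["SED", "SED", "LIGHT_AG", "RUN", "LIGHT_AG", "WALK"]
overall, per_class = class_accuracies(truth, pred,
                                      ["SED", "LIGHT_AG", "WALK", "RUN"])
```

Reporting per-class accuracy alongside the overall figure is what exposes patterns like the low WALK accuracy noted in the results, which a single overall number would hide.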
This study examined the feasibility of a non-laboratory approach that uses machine learning on multimodal sensor data to predict relative physical activity (PA) intensity. A total of 22 participants completed up to 7 PA sessions, each comprising 5 trials (sitting and standing, comfortable walk, brisk walk, jogging, running). Participants wore a wrist-strapped sensor that recorded heart rate (HR), electrodermal activity (EDA), and skin temperature (Temp). After each trial, participants provided ratings of perceived exertion (RPE). Three classifiers, random forest (RF), neural network (NN), and support vector machine (SVM), were applied independently to each feature set to predict relative PA intensity as low (RPE ≤ 11), moderate (RPE 12–14), or high (RPE ≥ 15). Then, both feature fusion and decision fusion of all combinations of sensor modalities were carried out to identify the best combination. Among the single-modality feature sets, HR provided the best performance. Combining modalities via feature fusion provided a small improvement in performance, while decision fusion did not improve performance over HR features alone. A machine learning approach using HR features provided acceptable predictions of relative PA intensity; adding features from other sensing modalities did not significantly improve performance.
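The two fusion strategies compared above differ in where the combination happens: feature fusion joins modality features before a single classifier, while decision fusion combines the outputs of per-modality classifiers. A minimal sketch with made-up feature values and posteriors, averaging posteriors as one simple decision-fusion rule:

```python
# Hypothetical per-window feature vectors from two modalities
hr_feats  = [[0.8, 0.1], [0.2, 0.9]]   # e.g. mean HR, HR variability
eda_feats = [[0.5], [0.4]]             # e.g. mean EDA level

# Feature fusion: concatenate modality features per window,
# then train a single classifier on the joined vectors.
fused_features = [h + e for h, e in zip(hr_feats, eda_feats)]

# Decision fusion: train one classifier per modality, then combine
# their class posteriors, here by simple averaging.
hr_post  = [0.7, 0.2, 0.1]   # P(low, moderate, high) from the HR model
eda_post = [0.4, 0.4, 0.2]   # same, from the EDA model (illustrative)
fused_post = [(a + b) / 2 for a, b in zip(hr_post, eda_post)]
```

Feature fusion lets the classifier learn cross-modality interactions, whereas decision fusion keeps the per-modality models independent, which is why the two strategies can rank differently, as they did in this study.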