Abstract. Activity-Based Computing [1] aims to capture the state of the user and their environment by exploiting heterogeneous sensors in order to provide adaptation to exogenous computing resources. When these sensors are attached to the subject's body, they permit continuous monitoring of numerous physiological signals. This has appealing applications in healthcare, e.g. the exploitation of Ambient Intelligence (AmI) in daily activity monitoring for elderly people. In this paper, we present a system for human physical Activity Recognition (AR) using smartphone inertial sensors. As these mobile phones are limited in terms of energy and computing power, we propose a novel hardware-friendly approach for multiclass classification. This method adapts the standard Support Vector Machine (SVM) and exploits fixed-point arithmetic to reduce computational cost. A comparison with the traditional SVM shows a significant improvement in computational cost while maintaining similar accuracy, which can contribute to developing more sustainable systems for AmI.
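To illustrate the idea behind the hardware-friendly approach, the sketch below evaluates a linear SVM decision function using only integer (fixed-point) arithmetic. This is a minimal illustration of fixed-point evaluation in general, not the paper's exact formulation; the Q-format scaling (`frac_bits`), the weight values, and the function names are all assumptions for the example.

```python
import numpy as np

def quantize(x, frac_bits=10):
    # Convert floats to fixed-point integers scaled by 2**frac_bits (Q-format).
    return np.round(np.asarray(x) * (1 << frac_bits)).astype(np.int64)

def fixed_point_svm_decision(x_fx, w_fx, b_fx, frac_bits=10):
    # Linear SVM score w.x + b using only integer multiply/add/shift.
    # The product of two Q(frac_bits) numbers carries 2*frac_bits fractional
    # bits, so the bias is shifted up to match before the accumulation.
    acc = np.int64(0)
    for xi, wi in zip(x_fx, w_fx):
        acc += np.int64(xi) * np.int64(wi)
    acc += np.int64(b_fx) << frac_bits
    return acc >> frac_bits  # result back in Q(frac_bits)

# Toy weights and feature vector (hypothetical values for illustration).
w, b = np.array([0.5, -1.25, 2.0]), 0.75
x = np.array([1.0, 0.5, -0.25])

float_score = float(w @ x + b)
fx_score = fixed_point_svm_decision(quantize(x), quantize(w), quantize(b))
approx_score = fx_score / (1 << 10)  # dequantize for comparison
```

On a device without a floating-point unit, replacing the float dot product with this integer pipeline is where the computational saving comes from; the quantization step size bounds the accuracy loss.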
This work presents the Transition-Aware Human Activity Recognition (TAHAR) system architecture for the recognition of physical activities using smartphones. It targets real-time classification with a collection of inertial sensors while addressing issues regarding the occurrence of transitions between activities and activities unknown to the learning algorithm. We propose two implementations of the architecture which differ in their prediction technique, as they deal with transitions either by directly learning them or by considering them as unknown activities. This is accomplished by combining the probabilistic output of consecutive activity predictions of a Support Vector Machine (SVM) with a heuristic filtering approach. The architecture is validated over three case studies that involve data from people performing a broad spectrum of activities (up to 33) while carrying smartphones or wearable sensors. Results show that TAHAR outperforms state-of-the-art baseline works and reveal the main advantages of the architecture.
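The combination of consecutive SVM probability outputs with a heuristic filter can be sketched as follows. This is an illustrative stand-in, not the exact TAHAR filter: it averages the probability vectors of the last few windows and labels the prediction "unknown" when the top class stays below a confidence threshold. The window size, threshold, and function name are assumptions for the example.

```python
import numpy as np

def temporal_filter(prob_seq, window=3, threshold=0.6):
    # prob_seq: one SVM probability vector per classification window.
    # For each time step, average the last `window` probability vectors;
    # emit the most probable class, or -1 ("unknown") if its averaged
    # probability falls below `threshold`.
    labels = []
    for t in range(len(prob_seq)):
        lo = max(0, t - window + 1)
        avg = np.mean(prob_seq[lo:t + 1], axis=0)
        best = int(np.argmax(avg))
        labels.append(best if avg[best] >= threshold else -1)
    return labels

# Two-class toy sequence drifting from class 0 to class 1;
# the ambiguous transition in the middle is flagged as unknown.
probs = np.array([
    [0.9, 0.1], [0.8, 0.2], [0.5, 0.5], [0.2, 0.8], [0.1, 0.9],
])
filtered = temporal_filter(probs)
```

Smoothing over consecutive windows suppresses isolated misclassifications, while the threshold gives the system an explicit way to report transitions or unseen activities instead of forcing a known label.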
We propose robust multi-dimensional motion features for human activity recognition from first-person videos. The proposed features encode information about motion magnitude, direction and variation, and combine them with virtual inertial data generated from the video itself. The use of grid flow representation, per-frame normalization and temporal feature accumulation enhances the robustness of our new representation. Results on multiple datasets demonstrate that the proposed feature representation outperforms existing motion features, and importantly it does so independently of the classifier. Moreover, the proposed multi-dimensional motion features are general enough to make them suitable for vision tasks beyond those related to wearable cameras. (C) 2015 The Authors. Published by Elsevier Inc.
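A grid flow representation with per-frame normalization can be sketched as below. This is a generic illustration of the idea, not the paper's exact descriptor: it divides a dense optical-flow field into cells and records, per cell, the mean motion magnitude plus a magnitude-weighted direction histogram, then L1-normalizes the frame's feature vector. The grid size, bin count, and function name are assumptions.

```python
import numpy as np

def grid_flow_features(flow, grid=4, bins=8):
    # flow: dense optical-flow field of shape (H, W, 2) with (dx, dy) per pixel.
    h, w, _ = flow.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = flow[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            dx, dy = cell[..., 0].ravel(), cell[..., 1].ravel()
            mag = np.hypot(dx, dy)            # motion magnitude per pixel
            ang = np.arctan2(dy, dx)          # motion direction in [-pi, pi]
            hist, _ = np.histogram(ang, bins=bins,
                                   range=(-np.pi, np.pi), weights=mag)
            feats.append(np.concatenate(([mag.mean()], hist)))
    v = np.concatenate(feats)
    s = v.sum()
    return v / s if s > 0 else v  # per-frame L1 normalization

# Synthetic flow field standing in for one video frame's optical flow.
flow = np.random.default_rng(0).normal(size=(64, 64, 2)).astype(np.float32)
frame_feature = grid_flow_features(flow)
```

Per-frame normalization makes the descriptor invariant to the overall speed of motion in a frame, and accumulating such vectors over time yields the temporal features the abstract refers to.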