Smartphone platforms, equipped with a rich set of sensors, enable mobile sensing applications that support both personal sensing and large-scale community sensing. In such applications, the position/placement of the phone relative to the user's body provides valuable context information. For example, in physical activity recognition using motion sensors, the position of the phone matters: the sensors generate different signals when the phone is carried in different positions, which makes it difficult to identify activities from sensor data collected at different positions. In this paper, we investigate whether phone positions can be identified using only accelerometer data, the sensor most commonly used in physical activity recognition studies, rather than relying on additional sensors. Additionally, we explore how much this position information improves activity recognition accuracy compared with position-independent activity recognition. For this purpose, we collected data from 15 participants who carried three phones in different positions while walking, running, sitting, standing, climbing up/down stairs, riding a bus, making a phone call, interacting with an application on the smartphone, and sending an SMS. The collected data were processed with the Random Forest classifier. Position recognition using only the basic accelerometer features that are also used for activity recognition achieves an accuracy of 77.34%; this rises to 85% when the basic features are combined with angular features calculated from the orientation of the phone. In the activity recognition experiments, results are on average similar for position-specific and position-independent recognition; only for the pocket position was a 2% increase observed.
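A minimal sketch of the kind of pipeline the abstract describes, assuming scikit-learn's `RandomForestClassifier` and illustrative feature choices: the paper does not specify its "basic" or "angular" features, so the per-window means, standard deviations, magnitude, and gravity-tilt angles below are assumptions, and the data is synthetic.

```python
# Sketch of phone-position recognition from accelerometer windows.
# The feature set and window length are assumptions; the abstract only
# names "basic features" and "angular features from phone orientation".
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(window):
    """window: (n_samples, 3) array of x/y/z acceleration in m/s^2."""
    basic = np.concatenate([window.mean(axis=0),
                            window.std(axis=0),
                            [np.linalg.norm(window, axis=1).mean()]])
    # Angular features: angle of the mean gravity vector to each axis.
    g = window.mean(axis=0)
    g = g / (np.linalg.norm(g) + 1e-12)
    angular = np.arccos(np.clip(g, -1.0, 1.0))
    return np.concatenate([basic, angular])  # 10-dim feature vector

# Toy data: two phone positions with different dominant gravity axes.
rng = np.random.default_rng(0)
X, y = [], []
for label, bias in [(0, (0.0, 0.0, 9.8)), (1, (9.8, 0.0, 0.0))]:
    for _ in range(50):
        w = rng.normal(bias, 1.0, size=(128, 3))
        X.append(extract_features(w))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```

The orientation-derived tilt angles are what let the classifier separate positions with similar motion statistics but different phone attitudes, which is consistent with the reported jump from 77.34% to 85%.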
Activity Recognition (AR) from smartphone sensors has become a hot topic in the mobile computing domain, since it can provide services directly to the user (health monitoring, fitness, context awareness) as well as to third-party applications and social networks (performance sharing, profiling). Most research effort has focused on direct recognition from accelerometer sensors, and few studies have integrated the audio channel into their models, despite the fact that it is a sensor that is always available on all kinds of smartphones. In this study, we show that audio features bring an important performance improvement over an accelerometer-based approach. Moreover, the study demonstrates the interest of considering the smartphone's location for on-line context-aware AR and the predictive power of audio features for this task. Finally, another contribution of the study is the collected corpus, which is made available to the community for AR from audio and accelerometer sensors.
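The audio/accelerometer fusion that the abstract motivates can be sketched as an early-fusion pipeline. The concrete audio features below (log-energies in FFT frequency bands, a simple stand-in for the paper's unspecified audio features, where MFCCs are a more common choice), the concatenation-based fusion, and the synthetic data are all assumptions for illustration.

```python
# Sketch of early fusion of audio and accelerometer features for AR.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def audio_features(signal, n_bands=8):
    """Log-energy in n_bands equal-width frequency bands of the signal."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log(np.array([b.sum() for b in bands]) + 1e-12)

def accel_features(window):
    """Per-axis mean and standard deviation of an (n, 3) accel window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def fused_features(audio, accel):
    # Early fusion: concatenate the two feature vectors.
    return np.concatenate([audio_features(audio), accel_features(accel)])

# Toy example: two activities that differ in both channels.
rng = np.random.default_rng(1)
X, y = [], []
for label, (tone, bias) in enumerate([(5.0, 0.2), (40.0, 2.0)]):
    for _ in range(40):
        t = np.arange(1024) / 1024.0
        audio = np.sin(2 * np.pi * tone * t) + rng.normal(0, 0.1, 1024)
        accel = rng.normal(bias, 0.5, size=(128, 3))
        X.append(fused_features(audio, accel))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data
```

Early fusion keeps the classifier simple; the study's point is that the audio channel carries complementary information (e.g. ambient context) that the accelerometer alone cannot provide.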