Background: A dramatic rise in health-tracking apps for mobile phones has occurred recently. Rich user interfaces make manual logging of users’ behaviors easier and more pleasant, and sensors make tracking effortless. To date, however, feedback technologies have been limited to providing overall statistics, attractive visualizations of tracked data, or simple tailoring based on age, gender, and overall calorie or activity information. There is a lack of systems that can automatically translate behavioral data into specific, actionable suggestions that promote a healthier lifestyle without any human involvement.

Objective: MyBehavior, a mobile phone app, was designed to process tracked physical activity and eating behavior data in order to provide personalized, actionable, low-effort suggestions that are contextualized to the user’s environment and previous behavior. This study investigated the technical feasibility of implementing an automated feedback system, the impact of the suggestions on user physical activity and eating behavior, and user perceptions of the automatically generated suggestions.

Methods: MyBehavior was designed to (1) use a combination of automatic and manual logging to track physical activity (eg, walking, running, gym), user location, and food; (2) automatically analyze activity and food logs to identify frequent and nonfrequent behaviors; and (3) use a standard machine-learning decision-making algorithm, the multi-armed bandit (MAB), to generate personalized suggestions that ask users to either continue, avoid, or make small changes to existing behaviors in order to reach behavioral goals. We enrolled 17 participants, all motivated to self-monitor and improve their fitness, in a pilot study of MyBehavior. In a two-group trial, participants were randomly assigned to receive either MyBehavior’s personalized suggestions (n=9) or nonpersonalized suggestions created by professionals (n=8) from a mobile phone app over 3 weeks. Daily activity level and dietary intake were monitored from logged data. At the end of the study, an in-person survey asked users to subjectively rate their intention to follow MyBehavior suggestions.

Results: In qualitative daily diary, interview, and survey data, users reported MyBehavior suggestions to be highly actionable and stated that they intended to follow them. MyBehavior users walked significantly more than the control group over the 3 weeks of the study (P=.05). Although some MyBehavior users chose lower-calorie foods, the between-group difference was not significant (P=.15). In a poststudy survey, users rated MyBehavior’s personalized suggestions more positively than the nonpersonalized, generic suggestions created by professionals (P<.001).

Conclusions: MyBehavior is a simple-to-use mobile phone app with preliminary evidence of efficacy. To the best of our knowledge, MyBehavior represents the first attempt to automatically create personalized, contextualized, actionable suggestions from self-tracked information...
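The abstract names a multi-armed bandit as the suggestion-selection algorithm but gives no implementation detail. The sketch below shows one minimal way such a selector could work, using an epsilon-greedy strategy; the class name, reward model, and example suggestions are illustrative assumptions, not the authors' code.

import random
from collections import defaultdict

# Hypothetical sketch of a multi-armed bandit suggestion selector.
# Each "arm" is a candidate suggestion (e.g., "walk near the office at noon");
# the reward is whether the user followed it. All names are illustrative only.
class SuggestionBandit:
    def __init__(self, suggestions, epsilon=0.1):
        self.suggestions = list(suggestions)
        self.epsilon = epsilon                      # exploration rate
        self.counts = defaultdict(int)              # times each suggestion was shown
        self.rewards = defaultdict(float)           # cumulative reward per suggestion

    def select(self):
        """Epsilon-greedy: usually exploit the best-performing suggestion,
        occasionally explore a random one."""
        if random.random() < self.epsilon:
            return random.choice(self.suggestions)
        return max(self.suggestions,
                   key=lambda s: self.rewards[s] / self.counts[s] if self.counts[s] else 0.0)

    def update(self, suggestion, followed):
        """Record whether the user acted on the suggestion (reward 1.0 or 0.0)."""
        self.counts[suggestion] += 1
        self.rewards[suggestion] += 1.0 if followed else 0.0

# Example: pick today's suggestion, then record whether the user followed it.
bandit = SuggestionBandit(["walk near campus at noon", "swap soda for water at lunch"])
todays = bandit.select()
bandit.update(todays, followed=True)

Over time a selector like this keeps surfacing suggestions the user actually acts on, which matches the abstract's goal of low-effort, behavior-grounded feedback.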
We are developing a personal activity recognition system that is practical, reliable, and can be incorporated into a variety of health care-related applications, ranging from personal fitness to elder care. To make our system appealing and useful, we require it to have the following properties: (i) it needs data from only a single body location, and that location does not have to be the same for every user; (ii) it should work out of the box across individuals, with personalization only enhancing its recognition abilities; and (iii) it should remain effective even with a cost-sensitive subset of the sensors and data features. In this paper, we present an approach to building a system that exhibits these properties and provide evidence based on data for 8 different activities collected from 12 different subjects. Our results indicate that the system achieves an accuracy of approximately 90% while meeting our requirements. We are now developing a fully embedded version of our system based on a cell-phone platform augmented with a Bluetooth-connected sensor board.
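The abstract does not specify the features or classifier used. As a hedged illustration of the general pipeline it describes (single body-worn sensor, windowed features, person-independent classifier), the sketch below uses simple accelerometer statistics and an off-the-shelf classifier; the window size, feature set, and placeholder data are assumptions, not the paper's method.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(accel, window=128, step=64):
    """Compute mean, standard deviation, and energy per axis over sliding windows."""
    feats = []
    for start in range(0, len(accel) - window + 1, step):
        w = accel[start:start + window]                 # shape: (window, 3) for x, y, z
        feats.append(np.concatenate([w.mean(axis=0),
                                     w.std(axis=0),
                                     (w ** 2).mean(axis=0)]))
    return np.array(feats)

# Train on labeled windows, then predict activity labels for new data.
# Random arrays stand in for a real accelerometer stream and its labels.
X_train = window_features(np.random.randn(10_000, 3))
y_train = np.random.choice(["walking", "sitting", "stairs"], size=len(X_train))
clf = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)
print(clf.predict(window_features(np.random.randn(512, 3))))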
Abstract-A key challenge for mobile health is to develop new technology that can assist individuals in maintaining a healthy lifestyle by keeping track of their everyday behaviors. Smartphones embedded with a wide variety of sensors are enabling a new generation of personal health applications that can actively monitor, model and promote wellbeing. Automated wellbeing tracking systems available so far have focused on physical fitness and sleep and often require external non-phone based sensors. In this work, we take a step towards a more comprehensive smartphone based system that can track activities that impact physical, social, and mental wellbeing namely, sleep, physical activity, and social interactions and provides intelligent feedback to promote better health. We present the design, implementation and evaluation of BeWell, an automated wellbeing app for the Android smartphones and demonstrate its feasibility in monitoring multi-dimensional wellbeing. By providing a more complete picture of health, BeWell has the potential to empower individuals to improve their overall wellbeing and identify any early signs of decline.
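The abstract describes combining sleep, physical activity, and social interaction into a picture of overall wellbeing without giving the scoring scheme. A minimal sketch of one way to combine the three dimensions is shown below; the targets, field names, and unweighted average are assumptions for illustration, not BeWell's actual model.

from dataclasses import dataclass

@dataclass
class DailyLog:
    sleep_hours: float
    active_minutes: float
    conversation_minutes: float

# Illustrative daily targets; real guidelines or personalized baselines could be used instead.
TARGETS = {"sleep_hours": 8.0, "active_minutes": 60.0, "conversation_minutes": 120.0}

def dimension_score(value, target):
    """Score a dimension from 0 to 100, capped at the target."""
    return min(value / target, 1.0) * 100.0

def wellbeing_scores(log):
    scores = {
        "sleep": dimension_score(log.sleep_hours, TARGETS["sleep_hours"]),
        "activity": dimension_score(log.active_minutes, TARGETS["active_minutes"]),
        "social": dimension_score(log.conversation_minutes, TARGETS["conversation_minutes"]),
    }
    scores["overall"] = sum(scores.values()) / 3.0      # simple unweighted average
    return scores

print(wellbeing_scores(DailyLog(sleep_hours=7.0, active_minutes=30.0, conversation_minutes=90.0)))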
Automatic smartphone sensing is a feasible approach for inferring rhythmicity, a key marker of wellbeing for individuals with bipolar disorder (BD).
We propose an approach to activity recognition based on detecting and analyzing the sequence of objects that are being manipulated by the user. In domains such as cooking, where many activities involve similar actions, object-use information can be a valuable cue. In order for this approach to scale to many activities and objects, however, it is necessary to minimize the amount of human-labeled data that is required for modeling. We describe a method for automatically acquiring object models from video without any explicit human supervision. Our approach leverages sparse and noisy readings from RFID-tagged objects, along with common-sense knowledge about which objects are likely to be used during a given activity, to bootstrap the learning process. We present a dynamic Bayesian network model that combines RFID and video data to jointly infer the most likely activity and object labels. We demonstrate that our approach can achieve activity recognition rates of more than 80% on a real-world dataset consisting of 16 household activities involving 33 objects with significant background clutter. We show that the combination of visual object recognition with RFID data is significantly more effective than the RFID sensor alone. Our work demonstrates that it is possible to automatically learn object models from video of household activities and employ these models for activity recognition, without requiring any explicit human labeling.
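To make the object-use idea concrete, the sketch below shows a heavily simplified stand-in for the paper's dynamic Bayesian network: noisy object detections (as would come from RFID and video) are scored against common-sense object-activity likelihoods to pick the most probable activity. The activities, objects, and probabilities are made-up examples, not values from the paper.

import math

# P(object is used | activity); common-sense priors with illustrative values only.
OBJECT_GIVEN_ACTIVITY = {
    "making_tea":   {"kettle": 0.9, "mug": 0.8, "pan": 0.05},
    "making_pasta": {"kettle": 0.1, "mug": 0.1, "pan": 0.9},
}

def infer_activity(observed_objects):
    """Return the activity that best explains the set of observed objects."""
    best, best_logp = None, -math.inf
    for activity, likelihoods in OBJECT_GIVEN_ACTIVITY.items():
        # Unseen objects get a small smoothing probability.
        logp = sum(math.log(likelihoods.get(obj, 0.01)) for obj in observed_objects)
        if logp > best_logp:
            best, best_logp = activity, logp
    return best

# Fused detections from RFID tags and the video object recognizer for one time window.
print(infer_activity({"kettle", "mug"}))    # -> "making_tea"

The full model in the paper additionally reasons over time and infers object labels jointly with the activity, which this per-window naive-Bayes sketch omits.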