Abstract—A key challenge for mobile health is to develop new technology that can assist individuals in maintaining a healthy lifestyle by keeping track of their everyday behaviors. Smartphones embedded with a wide variety of sensors are enabling a new generation of personal health applications that can actively monitor, model, and promote wellbeing. Automated wellbeing tracking systems available so far have focused on physical fitness and sleep and often require external, non-phone-based sensors. In this work, we take a step towards a more comprehensive smartphone-based system that can track activities impacting physical, social, and mental wellbeing, namely sleep, physical activity, and social interactions, and that provides intelligent feedback to promote better health. We present the design, implementation, and evaluation of BeWell, an automated wellbeing app for Android smartphones, and demonstrate its feasibility in monitoring multi-dimensional wellbeing. By providing a more complete picture of health, BeWell has the potential to empower individuals to improve their overall wellbeing and identify early signs of decline.
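As a rough illustration of the kind of multi-dimensional feedback such a system could provide, the sketch below combines the three monitored dimensions into a single summary score. The daily targets and the equal weighting are assumptions for illustration only, not BeWell's actual scoring scheme.

```python
# Illustrative sketch (assumed, not BeWell's scoring scheme): combining the
# three monitored dimensions -- sleep, physical activity, and social
# interaction -- into a single wellbeing summary used for feedback.
def wellbeing_score(sleep_hours: float, active_minutes: float,
                    social_minutes: float) -> float:
    """Normalise each dimension against an assumed daily target and
    average the three, scaled to 0-100."""
    sleep = min(sleep_hours / 8.0, 1.0)          # assumed 8 h sleep target
    activity = min(active_minutes / 60.0, 1.0)   # assumed 60 min activity target
    social = min(social_minutes / 90.0, 1.0)     # assumed 90 min interaction target
    return 100.0 * (sleep + activity + social) / 3.0

print(f"Wellbeing score: {wellbeing_score(7.0, 45, 60):.0f}/100")
```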
How we feel is greatly influenced by how well we sleep. Emerging quantified-self apps and wearable devices allow people to measure and keep track of sleep duration, patterns, and quality. However, these approaches are intrusive, placing a burden on users to modify their daily sleep-related habits in order to gain sleep data; for example, users have to wear cumbersome devices (e.g., a headband) or inform the app when they go to sleep and wake up. In this paper, we present a radically different approach to measuring sleep duration based on a novel best effort sleep (BES) model. BES infers sleep using smartphones in a completely unobtrusive way; that is, the user is completely removed from the monitoring process and does not interact with the phone beyond normal behavior. A sensor-based inference algorithm predicts sleep duration by exploiting a collection of soft hints that tie sleep duration to various smartphone usage patterns (e.g., the time and length of smartphone usage or recharge events) and environmental observations (e.g., prolonged silence and darkness). We perform quantitative and qualitative comparisons between two smartphone-only approaches that we developed (i.e., the BES model and a sleep-with-the-phone approach) and two popular commercial wearable systems (i.e., the Zeo headband and the Jawbone wristband). Results from our one-week, 8-person study look very promising and show that the BES model can accurately infer sleep duration (±42 minutes) using a completely "hands-off" approach that can cope with the natural variation in users' sleep routines and environments.
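To make the "soft hints" idea concrete, the following sketch shows one way nightly phone-usage and environmental features could be folded into a sleep-duration estimate. The feature set and weights are hypothetical placeholders, not the BES model's actual parameters; in the paper's setting such weights would be learned from labeled sleep data.

```python
# Illustrative sketch (not the authors' implementation): estimating nightly
# sleep duration from "soft hints" such as phone-usage gaps, charging events,
# and prolonged silence/darkness, combined by a simple weighted model.
from dataclasses import dataclass

@dataclass
class NightFeatures:
    screen_off_hours: float     # longest nightly gap with the screen off
    charging_hours: float       # hours spent on the charger overnight
    dark_hours: float           # hours the light sensor reported darkness
    silent_hours: float         # hours the microphone reported near-silence
    stationary_hours: float     # hours the accelerometer reported no movement

# Hypothetical weights; in practice these would be learned from labeled data.
WEIGHTS = {
    "screen_off_hours": 0.35,
    "charging_hours": 0.10,
    "dark_hours": 0.20,
    "silent_hours": 0.20,
    "stationary_hours": 0.15,
}

def estimate_sleep_hours(f: NightFeatures) -> float:
    """Weighted combination of the soft hints into a sleep-duration estimate."""
    return (WEIGHTS["screen_off_hours"] * f.screen_off_hours
            + WEIGHTS["charging_hours"] * f.charging_hours
            + WEIGHTS["dark_hours"] * f.dark_hours
            + WEIGHTS["silent_hours"] * f.silent_hours
            + WEIGHTS["stationary_hours"] * f.stationary_hours)

if __name__ == "__main__":
    night = NightFeatures(screen_off_hours=8.0, charging_hours=7.5,
                          dark_hours=7.0, silent_hours=6.5, stationary_hours=7.0)
    print(f"Estimated sleep: {estimate_sleep_hours(night):.1f} h")
```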
Based on the Unity3D engine, this article uses deep reinforcement learning to train a robotic arm through a reward function, realizing machine learning and intelligent control of the arm. After training, the robotic arm can quickly and accurately reach movement points in its environment and shows high environmental adaptability. Applying deep reinforcement learning and virtual reality technology to engineering teaching improves the effectiveness and efficiency of instruction on mechanical structure cognition, curriculum design, and other teaching activities.
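The article centers on training through a reward function; a minimal sketch of one common choice for a reaching task is shown below. The distance-based shaping, step cost, and success bonus are assumptions for illustration, not the article's actual reward design.

```python
# Illustrative sketch (assumed, not the article's code): a distance-based
# reward for training a reaching robotic arm with deep reinforcement learning.
import numpy as np

def reach_reward(end_effector: np.ndarray, target: np.ndarray,
                 reach_threshold: float = 0.05) -> tuple[float, bool]:
    """Dense reward that grows as the end effector approaches the target point.

    Returns (reward, done): a small step cost plus a distance penalty,
    with a bonus when the arm comes within reach_threshold of the target.
    """
    distance = float(np.linalg.norm(end_effector - target))
    if distance < reach_threshold:
        return 10.0, True           # success bonus, episode ends
    return -0.01 - distance, False  # step cost + distance shaping

# Example: one step of reward computation
r, done = reach_reward(np.array([0.2, 0.1, 0.3]), np.array([0.25, 0.1, 0.3]))
print(r, done)
```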
Reliable smartphone app prediction can strongly benefit both users and phone system performance. However, real-world smartphone app usage behavior is a complex phenomenon driven by a number of competing factors. In this paper, we develop an app usage prediction model that leverages three key everyday factors that affect app usage decisions: (1) intrinsic user app preferences and user historical patterns; (2) user activities and the environment as observed through sensor-based contextual signals; and (3) the shared aggregate patterns of app behavior that appear in various user communities. While rapid progress has been made recently in smartphone app prediction, existing prediction models tend to focus on only one of these factors. We evaluate a multi-faceted approach to prediction using (1) a 3-week, 35-user field trial, along with (2) analysis of app usage logs of 4,606 smartphone users worldwide. We find our app usage model not only produces more robust app predictions than conventional techniques but also enables significant smartphone system optimizations.
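One simple way to picture a multi-faceted predictor of this kind is to score candidate apps under each factor separately and mix the scores. The sketch below does exactly that with hypothetical mixing weights; it is an illustration of the idea, not the paper's model.

```python
# Illustrative sketch (assumed, not the paper's model): ranking candidate apps
# by mixing the three signals described in the abstract -- personal usage
# history, current sensed context, and aggregate community usage patterns.
from typing import Dict, List, Tuple

def rank_apps(history: Dict[str, float],
              context: Dict[str, float],
              community: Dict[str, float],
              weights: Tuple[float, float, float] = (0.5, 0.3, 0.2)
              ) -> List[Tuple[str, float]]:
    """Each input maps app name -> probability-like score in [0, 1].
    The weights are hypothetical mixing coefficients for the three factors."""
    w_hist, w_ctx, w_comm = weights
    apps = set(history) | set(context) | set(community)
    scored = {
        app: (w_hist * history.get(app, 0.0)
              + w_ctx * context.get(app, 0.0)
              + w_comm * community.get(app, 0.0))
        for app in apps
    }
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Example usage with made-up scores
print(rank_apps({"maps": 0.2, "mail": 0.6},
                {"maps": 0.8, "music": 0.4},
                {"mail": 0.5, "maps": 0.3}))
```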
Location is the most important information in the field of context-aware computing. An absolute physical coordinate is usually less understandable than a semantically meaningful place such as "home" or "office". This paper proposes a novel calibration-free algorithm called InferLoc that infers a user's daily significant locations using Wi-Fi signals obtained from a mobile phone. InferLoc consists of three main steps: 1) stop point detection based on trajectory segmentation, using similarity calculation between neighboring sampling windows; 2) location discovery through density-based clustering; and 3) semantically significant location inference by matching clustered locations against places recorded in a personal diary. Furthermore, we implement and validate the InferLoc algorithm on realistic data collected from a real-world wireless environment. Experimental results show that InferLoc can recognize visited locations at fine temporal and spatial granularity with short response delay.
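The first step, stop point detection via similarity between neighboring sampling windows, can be sketched as follows using Jaccard similarity over the access points seen in consecutive Wi-Fi scans. The similarity measure, threshold, and minimum segment length are illustrative assumptions, not InferLoc's actual parameters.

```python
# Illustrative sketch (assumed, not the InferLoc implementation): detecting
# "stop points" by comparing neighboring Wi-Fi scan windows. When consecutive
# windows observe largely the same access points, the user is likely stationary.
from typing import List, Set, Tuple

def jaccard(a: Set[str], b: Set[str]) -> float:
    """Similarity between two sets of observed access-point identifiers."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def detect_stops(windows: List[Set[str]], sim_threshold: float = 0.6,
                 min_windows: int = 3) -> List[Tuple[int, int]]:
    """Return (start, end) window indices of segments where consecutive
    scan windows stay above the similarity threshold (candidate stop points)."""
    stops, start = [], 0
    for i in range(1, len(windows)):
        if jaccard(windows[i - 1], windows[i]) < sim_threshold:
            if i - start >= min_windows:
                stops.append((start, i - 1))
            start = i
    if len(windows) - start >= min_windows:
        stops.append((start, len(windows) - 1))
    return stops

# Example: three stationary windows followed by movement to a new area
scans = [{"ap1", "ap2"}, {"ap1", "ap2", "ap3"}, {"ap1", "ap2"}, {"ap9"}]
print(detect_stops(scans))  # -> [(0, 2)]
```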
Abstract. Smartphones represent powerful mobile computing devices enabling a wide variety of new applications and opportunities for human interaction, sensing and communications. Because smartphones come with front-facing cameras, it is now possible for users to interact with and drive applications based on their facial responses, enabling participatory and opportunistic face-aware applications. This paper presents the design, implementation and evaluation of a robust, real-time face interpretation engine for smartphones, called Visage, that enables a new class of face-aware applications for smartphones. Visage fuses data streams from the phone's front-facing camera and built-in motion sensors to infer, in an energy-efficient manner, the user's 3D head pose (i.e., the pitch, roll and yaw of the user's head with respect to the phone) and facial expressions (e.g., happy, sad, angry, etc.). Visage supports a set of novel sensing, tracking, and machine learning algorithms on the phone, which are specifically designed to deal with challenges presented by user mobility, varying phone contexts, and resource limitations. Results demonstrate that Visage is effective in different real-world scenarios. Furthermore, we developed two distinct proof-of-concept applications driven by Visage: Streetview+ and Mood Profiler.
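A minimal sketch of the fusion idea, assuming the phone's own orientation (from motion sensors) is combined with the face's pose relative to the camera, is shown below. The naive additive composition of Euler angles is an illustrative simplification, not Visage's algorithm.

```python
# Illustrative sketch (assumed, not Visage's algorithm): combining the phone's
# orientation from motion sensors with the face's pose relative to the camera
# to express head pose in a device-independent frame.
from dataclasses import dataclass

@dataclass
class Pose:
    pitch: float  # degrees
    roll: float
    yaw: float

def head_pose_world(face_rel_phone: Pose, phone_rel_world: Pose) -> Pose:
    """Naive additive composition of Euler angles; adequate for small angles.
    A full implementation would compose rotation matrices or quaternions."""
    return Pose(face_rel_phone.pitch + phone_rel_world.pitch,
                face_rel_phone.roll + phone_rel_world.roll,
                face_rel_phone.yaw + phone_rel_world.yaw)

# Example: the user tilts the phone 10 degrees while keeping the head level
print(head_pose_world(Pose(-10.0, 0.0, 5.0), Pose(10.0, 0.0, 0.0)))
```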