ABSTRACT We report on our extended research on GymSkill, a smartphone system for comprehensive physical exercise support, covering sensor data logging, activity recognition, and on-top skill assessment using the phone's built-in sensors. In two iterations, we used principal component breakdown analysis (PCBA) and criteria-based scores for individualized, personalized, automated feedback on the phone, with the goal of tracking training quality and success, giving feedback to the user, and engaging and motivating regular exercising. Qualitative feedback on the system was collected in a user study, and the system performed well when evaluated against manual expert assessments of video-recorded training sessions.
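The abstract does not spell out how the PCBA-based scores are computed. A minimal sketch of the underlying idea, assuming (as a simplification, not the paper's exact method) that PCA is applied to windowed accelerometer data and the variance concentrated in the first principal component serves as a proxy for movement consistency:

```python
import numpy as np

def movement_consistency(window: np.ndarray) -> float:
    """Fraction of total variance captured by the first principal
    component of a (samples x 3) accelerometer window. Values near 1.0
    suggest a smooth, repeatable, essentially one-dimensional movement;
    lower values suggest erratic motion."""
    centered = window - window.mean(axis=0)
    # Singular values of the centered data give per-component variance
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2
    return float(var[0] / var.sum())

# Illustration: a clean oscillation along one axis scores higher
# than isotropic noise.
np.random.seed(0)
t = np.linspace(0, 4 * np.pi, 200)
clean = np.column_stack([np.sin(t),
                         0.05 * np.random.randn(200),
                         0.05 * np.random.randn(200)])
noisy = np.random.randn(200, 3)
```

A criteria-based exercise score could then combine such per-window values with exercise-specific thresholds; the function name and scoring criterion here are illustrative assumptions.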
Vision-based approaches for mobile indoor localization do not rely on infrastructure and are therefore scalable and cheap. The particular requirements for a navigation user interface of a vision-based system, however, have not been investigated so far. Such interfaces should adapt to localization accuracy, which strongly depends on distinctive reference images, and to other factors such as the phone's pose. If necessary, the system should motivate the user to point the smartphone at distinctive regions to improve localization quality. We present a combined interface of Virtual Reality (VR) and Augmented Reality (AR) elements with indicators that communicate and ensure localization accuracy. In an evaluation with 81 participants, we found that AR was preferred when localization was reliable, but with VR, navigation instructions were perceived as more accurate in the presence of localization and orientation errors. The additional indicators showed potential for making users choose distinctive reference images for reliable localization.
Self-reporting techniques, such as data logging or a diary, are frequently used in long-term studies, but are prone to subjects' forgetfulness and other sources of inaccuracy. We conducted a six-week self-reporting study on smartphone usage in order to investigate the accuracy of self-reported information, using logged data as ground truth to compare the subjects' reports against. Subjects never recorded more than 70% of their actual app usage and, depending on the requested reporting interval, sometimes less than 40%. They significantly overestimated how long they used apps. While subjects forgot self-reports when no automatic reminders were sent, a high reporting frequency was perceived as uncomfortable and burdensome. Most significantly, self-reporting even changed the subjects' actual app usage and can hence lead to misleading measurements if a study relies on no other data sources. With this contribution, we provide empirical quantitative long-term data on the reliability of self-reported data collected with mobile devices. We aim to make researchers aware of the caveats of self-reporting and give recommendations for maximizing the reliability of results when conducting large-scale, long-term app usage studies.
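The comparison of self-reports against logged ground truth can be sketched with two simple per-app metrics, session recall and a duration over/underestimation factor. The metric names and data layout below are illustrative assumptions, not the study's exact analysis:

```python
def self_report_accuracy(logged: dict, reported: dict) -> dict:
    """Compare self-reported app-usage sessions against logged ground
    truth. Both arguments map app name -> (session_count, total_minutes).
    Returns, per logged app, the fraction of logged sessions that were
    reported (capped at 1.0) and the factor by which reported duration
    over- or underestimates the logged duration."""
    results = {}
    for app, (log_n, log_min) in logged.items():
        rep_n, rep_min = reported.get(app, (0, 0.0))
        recall = min(rep_n, log_n) / log_n if log_n else 0.0
        duration_factor = rep_min / log_min if log_min else float("nan")
        results[app] = {"session_recall": recall,
                        "duration_factor": duration_factor}
    return results

# Example mirroring the abstract's findings: fewer sessions recalled
# than logged, and durations overestimated.
logged = {"Messenger": (50, 60.0)}
reported = {"Messenger": (30, 90.0)}
report = self_report_accuracy(logged, reported)
```

Here a `session_recall` of 0.6 and `duration_factor` of 1.5 would correspond to the pattern the study observed: underreported session counts combined with overestimated usage time.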
In this paper, we review the use of gameful design in the automotive domain. Outside of vehicles, the automotive industry mainly uses gameful design for marketing and brand building. For in-vehicle applications and applications directly connected to real vehicles, the main usage scenarios of gameful design are navigation, eco-driving, and driving safety. The objective of this review is to answer the following questions: (1) What elements of gameful design are currently used in the automotive industry? (2) What other automotive applications could be realized or enhanced by applying gameful design? (3) What are the challenges and limitations of gameful design in this domain, especially for in-vehicle applications? The review concludes that the use of gameful design for in-vehicle applications seems promising. However, gamified applications related to the serious task of driving require well-thought-out rules and extensive testing in order to achieve the desired goals.
Figure 1. We present and evaluate a novel user interface for indoor navigation, incorporating two modes. In augmented reality (AR) mode, navigation instructions are shown as an overlay over the live camera image and the phone is held as depicted in Picture a). In virtual reality (VR) mode, a correctly oriented 360° panorama image is shown when the phone is held as in Picture b). The interface particularly addresses the vision-based localization method by including special UI elements that support the acquisition of "good" query images. Screenshot c) shows a prototype incorporating the presented VR user interface. ABSTRACT Mobile location recognition by capturing images of the environment (visual localization) is a promising technique for indoor navigation in arbitrary surroundings. However, how the user interface (UI) can cope with the challenges of the vision-based localization technique, such as varying quality of the query images, has barely been investigated so far. We implemented a novel UI for visual localization, consisting of Virtual Reality (VR) and Augmented Reality (AR) views that actively communicate and ensure localization accuracy. If necessary, the system encourages the user to point the smartphone at distinctive regions to improve localization quality. We evaluated the UI in an experimental navigation task with a prototype, informed by initial evaluation results using design mockups. We found that VR can contribute to efficient and effective indoor navigation even under unreliable location and orientation accuracy. We discuss the identified challenges and share lessons learned as recommendations for future work.
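The UI element that prompts the user toward "good" query images needs some measure of how distinctive the current camera frame is. As a crude, hypothetical stand-in for whatever measure the system actually uses, mean gradient magnitude distinguishes texture-rich views from blank walls:

```python
import numpy as np

def distinctiveness_score(gray: np.ndarray) -> float:
    """Crude proxy for how useful a grayscale camera frame is as a
    visual-localization query: mean gradient magnitude. Texture-rich,
    distinctive views score high; uniform surfaces score near zero.
    (A hypothetical stand-in, not the paper's actual measure.)"""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.hypot(gx, gy).mean())

def should_prompt_user(gray: np.ndarray, threshold: float = 5.0) -> bool:
    """UI hook: if the current frame is not distinctive enough, the
    interface would prompt the user to point at a more textured region.
    The threshold value here is an arbitrary illustration."""
    return distinctiveness_score(gray) < threshold

# A featureless wall triggers the prompt; a textured scene does not.
np.random.seed(0)
blank_wall = np.full((120, 160), 128, dtype=np.uint8)
textured = (np.random.rand(120, 160) * 255).astype(np.uint8)
```

In a real system this check would run on downsampled preview frames, and the prompt would be one of the accuracy indicators described in the abstract.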
Figure 1: We conducted a user study to evaluate the usability of handheld AR applications for the elderly (a,b). Since the results show that elderly users have difficulties holding up a tablet computer over a long period of time, we tested whether head-mounted displays are an alternative for them (c). We also propose improved AR user interfaces for tablet PCs that do not require continuously holding up the device (d). ABSTRACT Mobility and independence are key aspects of self-determined living in today's world, and demographic change presents the challenge of retaining these aspects for the aging population. Augmented Reality (AR) user interfaces might support the elderly, for example, when navigating as pedestrians or by explaining how devices and mobility aids work and how they are maintained. This poster reports on the results of practical field tests with elderly subjects testing handheld AR applications. The main finding is that common handheld AR user interfaces are not suited for the elderly because they require the user to hold up the device so the back-facing camera captures the object or environment about which digital information is to be presented. Tablet computers are too heavy and do not provide sufficient grip to be held up over a long period of time. One possible alternative is head-mounted displays (HMDs). We present the promising results of a user test evaluating whether elderly people can deal with AR interfaces on a lightweight HMD. We conclude with an outlook on improved handheld AR user interfaces that do not require continuously holding up the device, which we hope are better suited for the elderly.