While mobile phones affect our behavior and tend to separate us from our physical environment, our environment could instead become a responsive part of the information domain. For map navigation while cycling in an urban environment, we studied two alternative solutions: a smartphone display and projection onto the road. This paper first demonstrates, as a proof of concept, GPS-based map navigation using a bike-mounted projector. Second, it implements a prototype with both a projector and a smartphone mounted on a bike, and compares them as navigation systems for nighttime cycling. Third, it examines how visuo-spatial factors influence navigation. Our findings will be useful for designing navigation systems for bikes and even cars, helping cyclists and drivers stay attentive to their environment while receiving useful information on the move.
Figure 1. (Left) Comparison between head-up display and projected display. (Center) Comparison between gesture input and Signal Pod, a commercial turn-signalling system. (Right) Evaluation using videos recorded from the perspective of participants in traffic.
In this paper we describe experiments in which we acquire range images of underwater surfaces with four types of depth sensors and attempt to reconstruct those surfaces. Two conditions are tested: acquiring range images with the sensors submersed, and with the sensors held above the water line recording through the water. We found that only the Kinect sensor is able to acquire depth images of submersed surfaces when held above the water. We compare the reconstructed underwater geometry with meshes obtained when the surfaces were not submersed. These findings show that 3D underwater reconstruction using depth sensors is possible, despite the high water absorption in the near-infrared spectrum in which these sensors operate.
The traditional method for acquiring a motor skill is to focus on one's limbs while performing the movement. A theory of motor learning validated over the last ten years contradicts this method: it states that it is more beneficial to focus on external markers outside the human body, and predicts that the motor skill is acquired better and faster this way. Using a mixed reality environment, we tested whether the new motor learning approach also holds with a virtual trainer and virtual markers.
Advances in display technologies could soon make wearable mid-air displays (devices that present dynamic images floating in mid-air relative to a mobile user) available. Such devices may enable new input and output modalities compared to current mobile devices, and seamlessly offer information on the go. This paper presents a functional prototype built to understand these modalities in more detail, including suitable applications and device placement. An online survey we conducted first identified map navigation as one of the most desirable applications and suggested placement preferences. Based on these rankings, we built a wearable mid-air display mockup consisting of a mobile phone, a pico projector, and a holder frame, mountable in two alternative ways: on the wrist and on the chest. We then designed an experiment in which participants navigated different urban routes using map navigation displayed in mid-air. For map navigation, participants ranked the wrist mount as safer than the chest mount. The experiment results validate the use of a wearable mid-air display for map navigation. Based on our online survey and experiment, we offer insights and recommendations for the design of wearable mid-air displays.
The use of technology while being mobile now takes place in many areas of people's lives and in a wide range of scenarios: for example, users cycle, climb, run, and even swim while interacting with devices. Conflict between locomotion and system use can reduce both interaction performance and the ability to move safely. We discuss the risks of such "interaction in motion", which we argue make it desirable to design with locomotion in mind. To aid such design we present a taxonomy and framework based on two key dimensions: the relation of the interaction task to the locomotion task, and the degree to which a locomotion activity inhibits use of input and output interfaces. We accompany this with four strategies for interaction in motion. With this work, we ultimately aim to enhance our understanding of what being "mobile" actually means for interaction, and to help practitioners design truly mobile interactions.
People with Visual Impairments (PVI) experience greater difficulties with daily tasks such as supermarket shopping; identifying and purchasing an item proves particularly challenging. Using a user-centered design process, we studied the difficulties PVI encounter in their daily routines and, based on the findings, advanced the previous FingerReader model. FingerReader2.0 incorporates a highly integrated hardware design: it is standalone, wearable, and not tethered to a computer. On the software side, the prototype utilizes a deep learning system relying on a hybrid of an on-board and a cloud-based model. The advanced design significantly extends the range of mobile assistive technology, particularly for shopping. This paper presents findings from interviews, several iterative studies, and a field study in supermarkets, demonstrating FingerReader2.0's enhanced capabilities for people with varied levels of visual impairment.