Eye movements carry a rich set of information about a person's intentions. For physically impaired people, gaze may be the only communication channel available. People with severe disabilities are usually assisted by helpers during everyday activities, which over time can lead to the development of an effective visual communication protocol between the helper and the disabled person. This protocol allows them to communicate, to some extent, just by glancing at each other. Starting from this premise, we propose a new model of attentive user interface endowed with some of the visual comprehension abilities of a human helper. The purpose of this user interface is to identify the user's intentions and thus assist him/her in achieving simple interaction goals (i.e. object selection, task selection). The attentive interface is implemented through statistical analysis of the user's gaze data, based on a hidden Markov model.
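As a minimal sketch of the kind of analysis described above, a discrete hidden Markov model can score a sequence of quantized gaze events against an intention hypothesis using the forward algorithm. The states, event codes and all probabilities below are illustrative assumptions, not the paper's actual model.

```python
import math

def forward_log_likelihood(obs, start_p, trans_p, emit_p):
    """Log-likelihood of an observation sequence under a discrete HMM."""
    n_states = len(start_p)
    # Initialize with the first observation.
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in range(n_states)]
    # Standard forward recursion over the remaining observations.
    for o in obs[1:]:
        alpha = [
            emit_p[s][o] * sum(alpha[p] * trans_p[p][s] for p in range(n_states))
            for s in range(n_states)
        ]
    return math.log(sum(alpha))

# Hypothetical setup: two hidden states (0 = "intends selection",
# 1 = "just browsing") and three gaze event codes
# (0 = fixation on the object, 1 = saccade away, 2 = dwell elsewhere).
start_p = [0.5, 0.5]
trans_p = [[0.8, 0.2], [0.3, 0.7]]
emit_p = [[0.7, 0.2, 0.1],   # selecting: mostly fixations on the object
          [0.2, 0.3, 0.5]]   # browsing: mostly dwells elsewhere

fixation_heavy = [0, 0, 1, 0, 0, 0]
scattered = [2, 1, 2, 2, 1, 2]
# The fixation-heavy sequence scores higher under this intention model.
print(forward_log_likelihood(fixation_heavy, start_p, trans_p, emit_p) >
      forward_log_likelihood(scattered, start_p, trans_p, emit_p))
```

In practice such per-hypothesis likelihoods would be compared across candidate intentions (e.g. one model per selectable object), and the best-scoring hypothesis taken as the inferred intention.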
For natural interaction, people immersed in a virtual environment (such as a CAVE system) use multimodal input devices (i.e. pointing devices, haptic devices, 3D mice, infrared markers and so on). Physically impaired people who are limited in their ability to move their hands need other, special input devices in order to interact naturally. To infer their preferences or interests regarding the surrounding environment, the movements of their eyes or head can be taken into account. Based on the analysis of eye movements, an assistive high-level eye tracking interface can be designed to infer the intentions of the users. Natural interaction can also be performed, to some extent, using head movements. This work is a comparative study of the promptness of selection between two interaction interfaces, one based on head tracking and the other on eye tracking. Several experiments were conducted to obtain a selection speed ratio for the process of selecting virtual objects. This parameter is useful for evaluating the promptness or ergonomics of a given selection method, since the eyes focus almost instantly on the objects of interest, long before a selection is completed with any other kind of interaction device (i.e. mouse, pointing wand, infrared markers). For the tests, eye and head movements were tracked with a high-speed, highly accurate head-mounted eye tracker and a 6 DoF magnetic sensor attached to the head. The direction of gaze is considered with respect to the orientation of the head, so users are free to turn around or move freely during the experiments. The interaction interface based on eye tracking allows users to make selections just by gazing at objects, while the head tracking method requires users to turn their heads towards the objects they want to select.
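A selection speed ratio of the kind compared above can be sketched as the mean per-trial selection time of one interface over the other. The exact definition, the trial timings and the direction of the ratio here are illustrative assumptions, not the paper's measured protocol.

```python
# Hypothetical sketch: per-trial selection times (seconds) for two
# interfaces, and their ratio as a promptness comparison.

def selection_speed_ratio(eye_times, head_times):
    """Mean head-tracking selection time over mean eye-tracking time.

    A ratio > 1 means gaze-based selection completed faster on average.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(head_times) / mean(eye_times)

# Illustrative data, not experimental results from the study.
eye_trials = [0.6, 0.8, 0.7, 0.5]
head_trials = [1.4, 1.6, 1.3, 1.5]
print(round(selection_speed_ratio(eye_trials, head_trials), 2))  # 2.23
```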
In this paper the gaze point and gaze line are used to generate the path of a mobile robot in an immersive virtual environment. The gaze point is taken as the intersection of the lines of sight of the two eyes. The experiments were conducted in a computer-generated virtual reality environment using polarized glasses. The virtual scene was projected in front of the user on a 3D stereoscopic visualization screen. The user was immersed in the environment through polarized glasses, and a head-mounted eye tracking device was placed on the user's head. The floor of the environment was tilted to reproduce the real floor position relative to the user's head when seated in a wheelchair. The first part presents the evaluation of the accuracy of the estimated gaze point, followed by the results of the experiments on path generation in the virtual reality environment based on gaze analysis.
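Since two measured lines of sight rarely intersect exactly, a common way to compute such a gaze point is to take the midpoint of the shortest segment between the two 3D lines. The following is a minimal sketch of that standard closest-point construction; the eye positions and gaze directions are illustrative assumptions, not the paper's calibration data.

```python
def gaze_point(p1, d1, p2, d2):
    """Midpoint of the shortest segment between lines p1+t*d1 and p2+s*d2."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only for parallel lines of sight
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * u for p, u in zip(p1, d1)]   # closest point on line 1
    q2 = [p + s * u for p, u in zip(p2, d2)]   # closest point on line 2
    return [(u + v) / 2 for u, v in zip(q1, q2)]

# Illustrative: eyes 6 cm apart, both lines of sight aimed at a
# target 1 m straight ahead.
left_eye, right_eye = [-0.03, 0.0, 0.0], [0.03, 0.0, 0.0]
print(gaze_point(left_eye, [0.03, 0.0, 1.0], right_eye, [-0.03, 0.0, 1.0]))
# → approximately [0.0, 0.0, 1.0]
```

With noisy eye tracking data the two lines become skew, and the midpoint degrades gracefully instead of failing to find an exact intersection.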
This paper presents some of the main issues in designing and developing mobile robots in virtual environments that can be used for completing various tasks. Due to the high complexity of robotic systems, real experiments demand long timeframes and high financial resources. Other problems scientists encounter in real tests are the difficulty of concurrent use, space restrictions and hardware malfunctions. Several related studies address these issues by proposing virtual reality (VR) as a solution for reducing costs and decreasing development time. We focus our study on the design and development of a virtual mobile robot and its integration within various virtual environments. Furthermore, we describe the technologies that can be used to accomplish our goals, listing their strengths and weaknesses, integrate our work in an immersive VR environment, the Cave Automatic Virtual Environment (CAVE), and present the study results.