Over the last 20 years, research has examined how egocentric distances, i.e., the subjectively reported distances from a human observer to an object, are perceived in virtual environments. This review surveys the empirical user studies on this topic. In summary, egocentric distances in virtual environments are estimated at a mean of about 74% of the modeled distances. Many factors that may influence distance estimates have been reported in the literature. We arranged these factors into four groups: measurement methods, technical factors, compositional factors, and human factors. The research on these factors is summarized, conclusions are drawn, and promising areas for future research are outlined.
Social neuroscience has shed light on the underpinnings of understanding other minds. The current study investigated the effect of self-involvement during social interaction on attention, arousal, and facial expression. Specifically, we sought to disentangle the effect of being personally addressed from the effect of decoding the meaning of another person's facial expression. To this end, eye movements, pupil size, and facial electromyographic (EMG) activity were recorded while participants observed virtual characters gazing at them or looking at someone else. In dynamic animations, the virtual characters then displayed either socially relevant facial expressions (similar to those used in everyday life situations to establish interpersonal contact) or arbitrary facial movements. The results show that attention allocation, as assessed by eye-tracking measurements, was specifically related to self-involvement regardless of the social meaning being conveyed. Arousal, as measured by pupil size, was primarily related to perceiving the virtual character's gender. In contrast, facial EMG activity was determined by the perception of socially relevant facial expressions irrespective of whom these were directed towards.
Is there any relationship between visual fixation durations and saccade amplitudes in the free exploration of pictures and scenes? In four experiments with naturalistic stimuli, we compared eye movements during early and late phases of scene perception. Influences of repeated presentation of similar stimuli (Experiment 1), object density (Experiment 2), emotional stimuli (Experiment 3), and mood induction (Experiment 4) were examined. The results demonstrate a systematic increase in fixation durations and a decrease in saccade amplitudes over the time course of scene perception. This relationship was very stable across the variety of studied conditions. It can be interpreted in terms of a shifting balance between two modes of visual information processing.
Saccadic peak velocity is affected by variations in mental workload during ecologically valid tasks. We conclude that saccadic peak velocity could be a useful diagnostic index for the assessment of operators' mental workload and attentional state in hazardous environments.
The data demonstrate a positive transfer of motor skills between laparoscopic appendectomy and cholecystectomy on the virtual reality simulator; however, the transfer of cognitive skills is limited. Separate training curricula seem to be necessary for each procedure for trainees to practise task-specific cognitive skills effectively. Mentoring could help trainees to get a deeper understanding of the procedures, thereby increasing the chance for the transfer of acquired skills.
Our data reveal that phone call distractions impair laparoscopic performance under certain circumstances. To ensure patient safety, phone calls should be avoided as far as possible in operating rooms.
Establishing common ground in remote cooperation is challenging because nonverbal means of ambiguity resolution are limited. In such settings, information about a partner's gaze can support cooperative performance, but it is not yet clear whether, and to what extent, the abundance of information reflected in gaze comes at a cost. Specifically, in tasks that mainly rely on spatial referencing, gaze transfer might be distracting and leave the partner uncertain about the meaning of the gaze cursor. To examine this question, we had pairs of participants perform a joint puzzle task. One partner knew the solution and instructed the other partner's actions by (1) gaze, (2) speech, (3) gaze and speech, or (4) mouse and speech. Based on these instructions, the acting partner moved the pieces under conditions of high or low autonomy. Performance was better with either gaze or mouse transfer than with speech alone. However, in contrast to the mouse, gaze transfer induced uncertainty, evidenced by delayed responses to the cursor. Also, participants tried to resolve ambiguities through greater verbal effort, formulating more explicit object descriptions and fewer deictic references. Thus, gaze transfer seems to increase uncertainty and ambiguity, thereby complicating grounding in this spatial referencing task. The results highlight the importance of closely examining task characteristics when considering gaze transfer as a means of support.