In non-orthostereoscopic video see-through (VST) head-mounted displays (HMDs), the perception of three-dimensional space is negatively altered by geometrical aberrations, which may lead to perceptual errors, hand-eye coordination problems, and user discomfort. Parallax-free VST HMDs have been proposed, yet practical embodiments are generally difficult to realize. The present study investigates guidelines for the development of non-orthostereoscopic VST HMDs capable of providing perceptually coherent augmentations for close-up views, and hence specifically suited to guiding high-precision manual tasks. Our underlying rationale is that, under VST viewing, a perspective-preserving conversion of the camera frames is sufficient to restore the natural perception of relative depths around a pre-defined working distance in non-orthostereoscopic VST HMDs. This perspective conversion must account for the geometry of the visor and the working distance. A simulation platform was designed to compare the on-image displacements between the direct view of the world and the perspective-corrected VST view for three different geometrical arrangements of cameras and displays. A user study with a custom-made VST HMD was then conducted to evaluate, quantitatively and qualitatively, which of the three configurations was most effective in mitigating the impact of the geometrical aberrations around the reference distance. The results of both the simulations and the user study showed that, in non-orthostereoscopic VST HMDs, display convergence can be avoided, as the perspective conversion of the camera frames is sufficient to restore correct stereoscopic perception in the peripersonal space.
INDEX TERMS: Head-mounted display, stereoscopic displays, augmented reality, video see-through displays, orthoscopic view, optical aberrations.
Abstract: In non-orthoscopic video see-through (VST) head-mounted displays (HMDs), depth perception through stereopsis is adversely affected by several sources of spatial perception error. Parallax-free and orthoscopic VST HMDs have been proposed to ensure proper space perception, but at the expense of increased bulk and weight. In this work, we present a hybrid video-optical see-through HMD whose geometry explicitly violates the rigorous conditions of orthostereoscopy. To properly recover natural stereo fusion of the scene within the personal space, in a region around a predefined distance from the observer, we partially resolve the eye-camera parallax by warping the camera images through a perspective-preserving homography that accounts for the geometry of the VST HMD and refers to that distance. To validate our solution, we conducted objective and subjective tests aimed at assessing its efficacy in recovering natural depth perception in the space around the reference distance. The results show that the quasi-orthoscopic setting of the HMD, together with the perspective-preserving image warping, allows the recovery of a correct perception of relative depths. The perceived distortion of space around the reference plane proved not as severe as predicted by the mathematical models.
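The perspective-preserving warp described in this abstract can be sketched with the standard plane-induced homography from multi-view geometry. The snippet below is an illustrative sketch, not the authors' implementation: the intrinsics matrix, the 3 cm eye-camera offset, and the 40 cm reference distance are hypothetical values chosen only to make the example concrete.

```python
import numpy as np

def plane_induced_homography(K_cam, K_eye, R, t, n, d):
    """Homography mapping camera pixels to eye-viewpoint pixels for
    scene points lying on the reference plane n^T X = d, where d is
    the predefined working distance (camera frame, metres)."""
    H = K_eye @ (R - np.outer(t, n) / d) @ np.linalg.inv(K_cam)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

# Hypothetical setup: identical eye/camera intrinsics, camera displaced
# 3 cm along the optical axis, fronto-parallel plane at 40 cm.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
H = plane_induced_homography(K, K, np.eye(3),
                             np.array([0.0, 0.0, 0.03]),
                             np.array([0.0, 0.0, 1.0]), 0.4)
# Each camera frame would then be warped with this H before display,
# e.g. cv2.warpPerspective(frame, H, (width, height)).
```

Points exactly on the reference plane are re-projected to where the eye would see them; the residual error for points off the plane grows with their distance from it, which matches the abstract's focus on a region around the reference distance.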
In recent years, the market entry of self-contained optical see-through headsets with integrated multi-sensor capabilities has paved the way for innovative, technology-driven augmented reality applications and has encouraged the adoption of these devices across highly challenging medical and industrial settings. Despite this, the display calibration process of consumer-level systems is still sub-optimal, particularly for applications that require high accuracy in the spatial alignment between computer-generated elements and the real-world scene. State-of-the-art manual and automated calibration procedures designed to estimate all the projection parameters are too complex for real application cases outside laboratory environments. This paper describes a fast off-line calibration procedure that only requires a camera to observe a planar pattern displayed on the see-through display. The camera, which replaces the user's eye, must be placed within the eye-motion-box of the see-through display. The method exploits standard camera calibration and computer vision techniques to estimate the projection parameters of the display model for a generic position of the camera. At execution time, the projection parameters can then be refined through a planar homography that encapsulates the shift and scaling effects associated with the estimated relative translation from the old camera position to the current position of the user's eye. Compared to classical SPAAM techniques, which still rely on the human element, and to other camera-based calibration procedures, the proposed technique is flexible and easy to replicate both in laboratory environments and in real-world settings.
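The run-time refinement step described above, a planar homography capturing the shift and scale induced by an eye translation, can be sketched as follows. This is a simplified toy model under stated assumptions, not the paper's actual formulation: the virtual image is assumed to lie on a fronto-parallel plane at a known distance `z_img`, lateral translation shifts the principal point, and axial translation rescales the image; all numeric values are hypothetical.

```python
import numpy as np

def refine_display_intrinsics(K, delta, z_img):
    """Toy refinement of calibrated display intrinsics K for an eye
    translated by delta = (dx, dy, dz) metres from the calibration
    camera position, assuming the virtual image plane sits at
    distance z_img (metres). Returns the refined 3x3 matrix H @ K."""
    dx, dy, dz = delta
    s = z_img / (z_img - dz)  # axial motion -> uniform image scaling
    H = np.array([[s, 0.0, K[0, 0] * dx / z_img],   # lateral shift (px)
                  [0.0, s, K[1, 1] * dy / z_img],
                  [0.0, 0.0, 1.0]])
    return H @ K

# Hypothetical display intrinsics and a 5 mm lateral eye offset.
K_disp = np.array([[1000.0, 0.0, 640.0],
                   [0.0, 1000.0, 360.0],
                   [0.0,    0.0,   1.0]])
K_refined = refine_display_intrinsics(K_disp, (0.005, 0.0, 0.0), 2.0)
```

With zero translation the homography is the identity and the calibrated intrinsics are returned unchanged, which is the sanity check one would expect of any such refinement.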
Advances in technology offer new opportunities for a better understanding of how different disorders affect motor function. In this paper, we explore the potential of an augmented reality (AR) game, implemented using free-hand and body tracking, for developing a uniform, cost-effective, and objective method for evaluating upper-extremity motor dysfunction in different patient groups. We conducted a study with 20 patients (10 with Parkinson's disease and 10 with stroke) who performed hand/arm movement tasks in four different conditions in AR and one condition in the real world. Despite usability issues, mainly due to non-robust hand tracking, the patients were moderately engaged while playing the AR game. Our findings show that moving virtual objects was less precisely targeted, took more time, and was associated with larger trunk displacement and lower variability of the elbow and upper-arm angles than moving real objects. No significant correlations were observed between movement characteristics in AR and in the real world. Still, our findings suggest that the AR game may be suitable for assessing the hand and arm function of mildly affected patients if usability can be further improved.