Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review 10 years of the most influential AR user studies, from 2005 to 2014. A total of 291 papers with 369 individual user studies have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We summarize the high-level contributions from each category of papers, and present examples of the most influential user studies. We also identify areas where there have been few user studies, and opportunities for future research. Among other things, we find that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing. This research will be useful for AR researchers who want to follow best practices in designing their own AR user studies.
As the use of virtual and augmented reality applications becomes more common, the need to fully understand how observers perceive spatial relationships grows more critical. One of the key requirements in engineering a practical virtual or augmented reality system is accurately conveying depth and layout. This requirement has frequently been assessed by measuring judgments of egocentric depth. These assessments have shown that observers in virtual reality (VR) perceive virtual space as compressed relative to the real world, resulting in systematic underestimations of egocentric depth. Previous work has indicated that similar effects may be present in augmented reality (AR) as well. This paper reports an experiment that directly measured egocentric depth perception in both VR and AR conditions; it is believed to be the first experiment to directly compare these conditions in the same experimental framework. In addition to VR and AR, two control conditions were studied: viewing real-world objects, and viewing real-world objects through a head-mounted display. Finally, the presence and absence of motion parallax was crossed with all conditions. Like many previous studies, this one found that depth perception was underestimated in VR, although the magnitude of the effect was surprisingly low. The most interesting finding was that no underestimation was observed in AR.
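To make the compression effect concrete, the following minimal sketch computes the standard metric used in such studies: the ratio of judged to actual egocentric distance, where values below 1.0 indicate compression. The distances and judgments here are invented for illustration, not data from the experiment.

```python
# Minimal sketch of the judged/actual distance ratio used to quantify
# depth compression; values below 1.0 indicate underestimation.
# All numbers are invented for illustration, not data from this study.

actual_m = [3.0, 5.0, 7.0]   # actual target distances (meters)
judged_m = [2.6, 4.1, 5.5]   # hypothetical depth judgments (meters)

for a, j in zip(actual_m, judged_m):
    ratio = j / a
    print(f"target {a:.1f} m -> judged {j:.1f} m, "
          f"ratio {ratio:.2f} ({(1 - ratio) * 100:.0f}% underestimation)")
```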
A fundamental problem in optical, see-through augmented reality (AR) is characterizing how it affects the perception of spatial layout and depth. This problem is important because AR system developers need both to place graphics in arbitrary spatial relationships with real-world objects and to know that users will perceive them in those relationships. Furthermore, AR makes possible enhanced perceptual techniques that have no real-world equivalent, such as x-ray vision, where AR users are supposed to perceive graphics as being located behind opaque surfaces. This paper reviews and discusses protocols for measuring egocentric depth judgments in both virtual and augmented environments, and discusses the well-known problem of depth underestimation in virtual environments. It then describes two experiments that measured egocentric depth judgments in AR. Experiment I used a perceptual matching protocol to measure AR depth judgments at medium- and far-field distances of 5 to 45 meters. The experiment studied the effects of upper versus lower visual field location, the x-ray vision condition, and practice on the task. The experimental findings include evidence for a switch in bias, from underestimating to overestimating the distance of AR-presented graphics, at approximately 23 meters, as well as a quantification of how much more difficult the x-ray vision condition makes the task. Experiment II used blind walking and verbal report protocols to measure AR depth judgments at distances of 3 to 7 meters. The experiment examined real-world objects, real-world objects seen through the AR display, virtual objects, and combined real and virtual objects. The results give evidence that the egocentric depth of AR objects is underestimated at these distances, but to a lesser degree than has previously been found for most virtual reality environments. The results are consistent with previous studies that have implicated a restricted field of view, combined with an inability of observers to scan the ground plane in a near-to-far direction, as explanations for the observed depth underestimation.
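The reported switch from underestimation to overestimation at roughly 23 meters can be pictured as the zero crossing of signed depth error as a function of distance. The sketch below, with invented error values, fits a line to such data and solves for the crossover distance; it illustrates the kind of analysis involved, not the paper's actual data or fitting method.

```python
# Hypothetical sketch: locate a bias crossover (under- to overestimation)
# as the zero crossing of a linear fit of signed depth error vs. distance.
# The error values below are invented, not data from Experiment I.
import numpy as np

distance_m = np.array([5.0, 15.0, 25.0, 35.0, 45.0])
signed_err_m = np.array([-1.8, -0.7, 0.2, 1.1, 2.3])  # judged - actual

slope, intercept = np.polyfit(distance_m, signed_err_m, 1)
crossover_m = -intercept / slope  # distance where the fitted error is zero
print(f"fitted bias crossover at {crossover_m:.1f} m")
```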
Many compelling augmented reality (AR) applications require users to correctly perceive the location of virtual objects, some with accuracies as tight as 1 mm. However, measuring the perceived depth of AR objects at these accuracies has not yet been demonstrated. In this paper, we address this challenge by employing two different depth judgment methods, perceptual matching and blind reaching, in a series of three experiments, where observers judged the depth of real and AR target objects presented at reaching distances. Our experiments found that observers can accurately match the distance of a real target, but when viewing an AR target through collimating optics, their matches systematically overestimate the distance by 0.5 to 4.0 cm. However, these results can be explained by a model where the collimation causes the eyes' vergence angle to rotate outward by a constant angular amount. These findings give error bounds for using collimating AR displays at reaching distances, and suggest that for these applications, AR displays need to provide an adjustable focus. Our experiments further found that observers initially reach ∼4 cm too short, but reaching accuracy improves with both consistent proprioception and corrective visual feedback, and eventually becomes nearly as accurate as matching.
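The constant vergence-offset model described above can be sketched geometrically: for interpupillary distance IPD, a target at distance d subtends a vergence angle of 2*arctan(IPD/(2d)); if collimation rotates the eyes outward by a fixed angle, the reduced vergence implies a farther apparent distance. The IPD and offset values below are illustrative assumptions, not the paper's fitted parameters.

```python
# Sketch of a constant vergence-offset model for collimated AR optics,
# under the assumption that collimation rotates each eye outward by a
# fixed angle, reducing the vergence angle and pushing the perceived
# target farther away. IPD and offset values are illustrative only.
import math

IPD = 0.063        # interpupillary distance (m), illustrative
OFFSET_DEG = 0.1   # assumed constant outward rotation, illustrative

def perceived_distance(actual_m: float) -> float:
    vergence = 2 * math.atan(IPD / (2 * actual_m))  # geometric vergence
    reduced = vergence - math.radians(OFFSET_DEG)   # outward rotation
    return IPD / (2 * math.tan(reduced / 2))        # apparent distance

for d in (0.3, 0.4, 0.5):  # reaching distances (m)
    p = perceived_distance(d)
    print(f"actual {d:.2f} m -> perceived {p:.3f} m (+{(p - d) * 100:.1f} cm)")
```

Consistent with the abstract, this toy model predicts overestimation on the order of centimeters at reaching distances, growing with target distance.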
How do users of virtual environments perceive virtual space? Many experiments have explored this question, but most of these have used head-mounted immersive displays. This paper reports an experiment that studied large-screen immersive displays at medium-field distances of 2 to 15 meters. The experiment measured egocentric depth judgments in a CAVE, a tiled display wall, and a real-world outdoor field as a control condition. We carefully modeled the outdoor field to make the three environments as similar as possible. Measuring egocentric depth judgments in large-screen immersive displays requires adapting new measurement protocols; the experiment used timed imagined walking, verbal estimation, and triangulated blind walking. We found that depth judgments from timed imagined walking and verbal estimation were very similar in all three environments. However, triangulated blind walking was accurate only in the outdoor field; in the large-screen immersive displays it showed underestimation effects that were likely caused by insufficient physical space to perform the technique. These results suggest using timed imagined walking as a primary protocol for assessing depth perception in large-screen immersive displays. We also found that depth judgments in the CAVE were more accurate than in the tiled display wall, which suggests that the peripheral scenery offered by the CAVE is helpful when perceiving virtual space.
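As a concrete picture of the triangulated blind walking protocol mentioned above, the sketch below uses simplified 2D geometry: the observer views a target along one line, walks blindfolded to a recorded endpoint, then turns to face the remembered target; intersecting that facing ray with the original viewing line yields the implied judged distance. The coordinates and heading are invented for illustration.

```python
# Sketch of how triangulated blind walking recovers a judged distance,
# under simplified 2D geometry: the observer views a target along the
# +y axis (the line x = 0), walks blindfolded to a known endpoint, then
# turns to face the remembered target. Values are illustrative only.
import math

walk_end = (2.0, 1.0)   # (x, y) of the observer after the oblique walk (m)
facing_deg = 148.0      # final heading, measured CCW from the +x axis

# Parametrize the facing ray and solve for where it crosses the line x = 0
dx = math.cos(math.radians(facing_deg))
dy = math.sin(math.radians(facing_deg))
t = -walk_end[0] / dx                   # ray parameter at the crossing
judged_distance = walk_end[1] + t * dy  # y-coordinate at the crossing

print(f"implied target distance: {judged_distance:.2f} m")
```

This also makes clear why the technique needs room: the walked leg must be long enough to give a usable triangulation angle, which is what the constrained display spaces above lacked.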
A frequently observed problem in medium-field virtual environments is the underestimation of egocentric depth. This problem has been described numerous times and with widely varying degrees of severity, and although there has been considerable progress made in modifying observer behavior to compensate for these misperceptions, the question of why these errors exist is still an open issue. This paper presents the findings of a series of experiments, comprising 103 participants, that attempts to identify and quantify the source of a pattern of adaptation and improved depth judgment accuracy over time scales of less than one hour. Taken together, these experiments suggest that peripheral visual information is an important source of information for the calibration of movement within medium-field virtual environments.