The recent rise of consumer virtual reality (VR) hardware raises important questions in the field of online marketing: what makes 3D VR more informative and playful than conventional 2D media such as still images and video, and how do these qualities affect the online purchase decision-making process? In this study, we focus on three interface features—interactivity, visual–spatial cues, and graphics quality. We explore how each of these features enhances the playfulness and informativeness of a shopping interface and, in turn, influences subsequent product evaluation and purchase intention. The results of the study provide two meaningful insights. First, interactivity and visual–spatial cues significantly enhance perceived informativeness and playfulness; however, the role of graphics quality was found to be more critical for 2D displays than for a 3D VR environment. Second, informativeness and playfulness influence the purchase decision-making process in distinct ways. More specifically, a playful interface may enhance consumers’ preference for hedonic product benefits (e.g., a stylish and attractive design), whereas informativeness is a more important explanatory variable for subsequent purchase intentions. We discuss the theoretical contributions and managerial insights the research provides for online retailers and designers.
The CAVE, a walk-in virtual reality environment typically consisting of four to six 3 m-by-3 m rear-projected screen walls, was first conceived and built in 1991. In the nearly two decades since its conception, the supporting technology has improved so that current CAVEs are much brighter, run at much higher resolution, and offer dramatically improved graphics performance. However, rear-projection-based CAVEs typically must be housed in a 10 m-by-10 m-by-10 m room (allowing space behind the screen walls for the projectors), which limits their deployment to large spaces. The CAVE of the future will be made of tessellated panel displays, eliminating the projection distance, but the implementation of such displays is challenging. Early multi-tile, panel-based virtual-reality displays have been designed, prototyped, and built for the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia by researchers at the University of California, San Diego, and the University of Illinois at Chicago. New means of image generation and control are considered key contributions to the future viability of the CAVE as a virtual-reality device.
This study examined how individuals with and without neck pain performed exercises under the influence of altered visual feedback in virtual reality. Chronic neck pain (n=9) and asymptomatic (n=10) individuals were recruited for this cross-sectional study. Participants performed head rotations while receiving programmatically manipulated visual feedback from a head-mounted virtual reality display. The main outcome measure was the control-display gain (ratio between actual head rotation angle and visual rotation angle displayed) recorded at the just-noticeable difference. Actual head rotation angles were measured for different gains. Detection of the manipulated visual feedback was affected by gain. The just-noticeable gain for asymptomatic individuals, below and above unity gain, was 0.903 and 1.159, respectively. Head rotation angle decreased or increased 5.45° for every 0.1 increase or decrease in gain, respectively. The just-noticeable gain for chronic pain individuals, below unity gain, was 0.950. The head rotation angle increased 4.29° for every 0.1 decrease in gain. On average, chronic pain individuals reported that neck rotation was feasible for 84% of the unity gain trials, 66% of the individual just-noticeable difference trials, and 50% of the "nudged" just-noticeable difference trials. This research demonstrated that virtual reality may be useful for promoting the desired outcome of increased range of motion in neck rehabilitation exercises by altering visual feedback.
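The gain relationship described above—the ratio between the actual head rotation angle and the visual rotation angle displayed—can be sketched in a few lines. This is an illustrative sketch only; the function name and the simple yaw-scaling pipeline are assumptions for exposition, not the study's actual rendering software.

```python
def displayed_rotation(actual_head_yaw_deg: float, gain: float) -> float:
    """Return the visual rotation shown in the head-mounted display.

    Per the study's definition, gain = actual rotation / displayed rotation,
    so the displayed angle is the actual head angle divided by the gain.
    """
    if gain <= 0:
        raise ValueError("gain must be positive")
    return actual_head_yaw_deg / gain

# Unity gain: the virtual scene rotates exactly as far as the head does.
assert displayed_rotation(30.0, 1.0) == 30.0

# A gain below 1 (e.g., the asymptomatic just-noticeable value of 0.903)
# makes the scene rotate farther than the head actually turned.
print(round(displayed_rotation(30.0, 0.903), 2))  # → 33.22
```

Gains just inside the just-noticeable band (0.903–1.159 for asymptomatic participants) can thus alter the visual consequence of a head turn without the participant detecting the manipulation.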
The perception of objects, depth, and distance has been repeatedly shown to be divergent between virtual and physical environments. We hypothesize that many of these discrepancies stem from incorrect geometric viewing parameters, specifically that physical measurements of eye position are insufficiently precise to provide proper viewing parameters. In this paper, we introduce a perceptual calibration procedure derived from geometric models. While most research has used geometric models to predict perceptual errors, we instead use these models inversely to determine perceptually correct viewing parameters. We study the advantages of these new psychophysically determined viewing parameters compared to the commonly used measured viewing parameters in an experiment with 20 subjects. The perceptually calibrated viewing parameters for the subjects generally produced new virtual eye positions that were wider and deeper than standard practices would estimate. Our study shows that perceptually calibrated viewing parameters can significantly improve depth acuity, distance estimation, and the perception of shape.
Direct replays of the experience of a user in a virtual environment are difficult for others to watch due to unnatural camera motions. We present methods for replaying and summarizing these egocentric experiences that effectively communicate the user's observations while reducing unwanted camera movements. Our approach summarizes the viewpoint path as a concise sequence of viewpoints that cover the same parts of the scene. The core of our approach is a novel content-dependent metric that can be used to identify similarities between viewpoints. This enables viewpoints to be grouped by similar contextual view information and provides a means to generate novel viewpoints that can encapsulate a series of views. These resulting encapsulated viewpoints are used to synthesize new camera paths that convey the content of the original viewer's experience. Projecting the initial movement of the user back onto the scene can be used to convey the details of their observations, and the extracted viewpoints can serve as bookmarks for control or analysis. Finally, we present a performance analysis along with two forms of validation to test whether the extracted viewpoints are representative of the viewer's original observations and to test the overall effectiveness of the presented replay methods.
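The idea of grouping viewpoints by shared scene content can be illustrated with a toy sketch. The paper's actual metric is content-dependent and more sophisticated; here we stand in a simple Jaccard overlap over the set of scene elements visible from each viewpoint, with a greedy merge over consecutive viewpoints—both are illustrative assumptions, not the authors' method.

```python
def jaccard(a: set, b: set) -> float:
    """Similarity of two sets of visible scene elements (1.0 if both empty)."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def group_viewpoints(visible_sets, threshold=0.3):
    """Greedily merge consecutive viewpoints whose visible content overlaps.

    Each group accumulates the union of its members' visible elements and
    records which viewpoint indices it covers.
    """
    groups = []
    for i, vis in enumerate(visible_sets):
        if groups and jaccard(groups[-1]["content"], vis) >= threshold:
            groups[-1]["indices"].append(i)
            groups[-1]["content"] |= vis
        else:
            groups.append({"indices": [i], "content": set(vis)})
    return groups

# Four viewpoints along a path; the first two look at the same corner,
# the last two at another, so the path collapses into two groups.
path = [{"door", "table"}, {"table", "chair"}, {"window"}, {"window", "plant"}]
print([g["indices"] for g in group_viewpoints(path)])  # → [[0, 1], [2, 3]]
```

A representative camera position for each group could then be synthesized from its member viewpoints, yielding the shorter, steadier replay path the abstract describes.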