In virtual environments, perceived distances are frequently reported to be shorter than intended. One important parameter for spatial perception in a stereoscopic virtual environment is the stereo base, that is, the distance between the two viewing cameras. We systematically varied the stereo base relative to the interpupillary distance (IPD) and examined its influence on distance and size perception. Furthermore, we tested whether individually adjusting the stereo base through an alignment task would reduce errors in distance estimation. Participants performed reaching movements toward a virtual tennis ball either with closed eyes (blind reaches) or open eyes (sighted reaches). Using each participant's individual IPD, the stereo base was set to (a) the IPD, (b) proportionally smaller, (c) proportionally larger, or (d) a value adjusted according to individual performance in an alignment task conducted beforehand. Overall, consistent with previous research, distances were underestimated. As expected, with a smaller stereo base the virtual object was perceived as farther away and bigger, whereas with a larger stereo base it was perceived as nearer and smaller. However, the manipulation of the stereo base influenced blind reaching estimates to a smaller extent than expected, which might be due to a combination of binocular disparity and pictorial depth cues. In sighted reaching, when visual feedback was available, the use of disparity matching presumably led to a larger effect of the stereo base. An individually adjusted stereo base diminished the average underestimation but did not reduce interindividual variance. Interindividual differences were task specific and could not be explained by differences in stereo acuity or fixation disparity.
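The direction of the reported stereo-base effect follows from basic stereo viewing geometry. The sketch below is a small-angle approximation with illustrative numbers; the function and variable names are our own, not from the study. For a point in front of the screen (crossed disparity), a stereo base smaller than the observer's IPD produces less crossed parallax, so the point is triangulated farther away (and, at a fixed rendered angular size, looks bigger); a larger base predicts the opposite.

```python
def perceived_distance(d, screen, ipd, stereo_base):
    """Geometric prediction of perceived egocentric distance (metres).

    A point rendered at distance d on a screen at distance `screen`,
    using camera separation `stereo_base`, produces the signed on-screen
    parallax p = stereo_base * (d - screen) / d.  An observer with eye
    separation `ipd` triangulates the point at ipd * screen / (ipd - p).
    Illustrative sketch only, not the geometry used in the study.
    """
    p = stereo_base * (d - screen) / d  # on-screen parallax (signed)
    return ipd * screen / (ipd - p)

# Object 0.5 m away, screen at 2 m, observer IPD 63 mm:
print(perceived_distance(0.5, 2.0, 0.063, 0.063))  # matched base: veridical
print(perceived_distance(0.5, 2.0, 0.063, 0.050))  # smaller base: farther
print(perceived_distance(0.5, 2.0, 0.063, 0.080))  # larger base: nearer
```

With the stereo base equal to the IPD the prediction is veridical, mirroring condition (a) of the experiment; shrinking or enlarging the base shifts the predicted percept in the directions the abstract reports.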
A wide variety of peripherals for human-computer interaction is available (e.g., mouse, touch, and camera), and these peripherals communicate with the computer and its applications in different ways. The VRPN library is a common, generalized interface between peripherals and VR applications that reduces development effort. Its main advantages are a system-independent client-server architecture with real-time capability and the easy integration of new peripheral devices. The paper describes the adaptation and extension of the VRPN concept to address engineering challenges such as modeling, evaluation, simulation, and modification. Innovative interaction devices can enhance engineering applications with comparatively small effort but great benefit. As an example, a VRPN client is implemented in the CAD application SolidWorks. This enables the use of any interaction device supported by VRPN; for example, the designer can control the model view through body movement via a tracking device such as the Microsoft Kinect or the Geomagic Touch. The data transfer can be established either synchronously or asynchronously. For synchronous transfer, the client-server architecture was implemented in different applications (e.g., CAD, VR). To realize time-shifted, asynchronous transfer, a recorder-player middleware was developed.
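The time-shifted asynchronous transfer can be pictured with a minimal recorder-player sketch. The class below is illustrative only (all names are assumptions; the paper's middleware is built on VRPN, whose actual interfaces are not given in the abstract): device samples are timestamped as they arrive from the server and later replayed to a client callback with their original relative timing.

```python
import time

class RecorderPlayer:
    """Minimal sketch of a recorder-player middleware: timestamped
    samples are stored during a live (synchronous) session and replayed
    later with their original timing (asynchronous transfer)."""

    def __init__(self):
        self.tape = []   # list of (elapsed_seconds, sample)
        self._t0 = None  # wall-clock time of the first recorded sample

    def record(self, sample):
        """Store one device sample with its elapsed-time stamp."""
        now = time.monotonic()
        if self._t0 is None:
            self._t0 = now
        self.tape.append((now - self._t0, sample))

    def play(self, sink, speed=1.0):
        """Replay all samples into `sink` (e.g. a CAD-side client
        callback), preserving the recorded timing scaled by `speed`."""
        start = time.monotonic()
        for t, sample in self.tape:
            delay = t / speed - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)
            sink(sample)
```

A recording made against one application (e.g. the VR system) could thus be played back into another (e.g. the SolidWorks client) at a later time, which is the time-shifted scenario the abstract describes.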
Virtual testing is a significant part of the product development process. Many problems can be solved entirely through the interaction of geometric models, simulation tools, and human models with the help of appropriate software. If testing must take subjective human perception into account, a full-size immersive projection system (VR system) can be used profitably. Such a system makes particular sense for evaluating manufacturing, operation, application, or maintenance. In the virtual environment, the human being interacts with a product whose physical shape does not yet exist. In this case, the movements of product components are generally controlled indirectly using a flystick, a wand, or a similar input device. In reality, many maintenance operations are determined by the position and posture of the maintenance personnel as well as by the mass, center of gravity, and dimensions of the object to be manipulated. In a mixed reality environment, a meaningful subjective ergonomic evaluation of such operations is possible. The paper elucidates a strategy for integrating real product components into a virtual environment. The user handles the real components or tools within the immersive full-size projection of the VR system, which tracks the real object. In this way, an invisible object model in the VR system can be moved in sync with the movements of the real object, and the VR system's built-in collision detection signals contact between the real object and the virtual environment. The demonstrated solution is under consideration for the planning and ergonomic evaluation of service activities; industry's need for a safely controllable process is of particular concern. The solution given here is aimed at maintenance to be performed on the brake system of a light-duty truck.
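The per-frame loop described above (track the real object, move its invisible virtual proxy, signal contact) can be sketched as follows. The bounding-sphere test and every name here are illustrative assumptions; an actual VR system would apply the tracked pose to a full geometric model and use its own collision-detection tool, as the paper notes.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Tracked position of an object (orientation omitted for brevity)."""
    x: float
    y: float
    z: float

def spheres_overlap(a, b, r_a, r_b):
    """Coarse collision test between two bounding spheres (illustrative
    stand-in for the VR system's real collision detection)."""
    d2 = (a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2
    return d2 <= (r_a + r_b) ** 2

def frame_update(tracked_pose, proxy_radius, scene):
    """One frame: the invisible proxy model takes the tracked pose of
    the real object; return the names of scene parts it contacts, so
    the system can signal the collision to the user."""
    return [name for name, pose, radius in scene
            if spheres_overlap(tracked_pose, pose, proxy_radius, radius)]
```

For example, with a hypothetical scene containing a brake caliper at the origin, a tracked tool moved close to it would be reported as a contact, while a tool held away from all parts would produce no signal.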
Augmented Reality (AR) is a technology used to support the maintenance of existing products. Service instructions are displayed directly in the operator's field of view, augmenting the view of the product during maintenance. To provide high-quality maintenance support, the manuals have to be created and tested during the development stage of the product. This paper describes a possible solution for testing AR instructions on virtual prototypes by combining AR and Virtual Reality (VR) technology. Technical aspects of combining VR and AR displays are outlined, and a combined system is presented. Furthermore, the paper shows the development of a specific test scenario for evaluating the created system and contains design patterns for depicting AR instructions in virtual environments. Finally, the usability of the solution is evaluated in a user study.