Immersive virtual reality (IVR) typically generates in participants the illusion that they are in the displayed virtual scene, where they can experience and interact with events as if those events were really happening. Teleoperator (TO) systems place people at a remote physical destination, embodied as a robotic device, where participants typically have the sensation of being at the destination with the ability to interact with entities there. In this paper, we show how to combine IVR and TO to allow a new class of application. The participant in the IVR is represented at the destination by a physical robot (TO), and simultaneously the remote place and the entities within it are represented to the participant in the IVR. Hence, the IVR participant has a normal virtual reality experience, but one in which his or her actions and behaviour control the remote robot and can therefore have physical consequences. Here, we show how such a system can be deployed to allow a human and a rat to operate together, with the human interacting with the rat at human scale and the rat interacting with the human at rat scale. The human is represented in a rat arena by a small robot that is slaved to the human's movements, whereas the tracked rat is represented to the human in the virtual reality by a humanoid avatar. We describe the system and a study designed to test whether humans can successfully play a game with the rat. The results show that the system functioned well and that the humans were able to interact with the rat to fulfil the tasks of the game. This system opens up the possibility of new applications in the life sciences involving participant observation of, and interaction with, animals at human scale.
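As a rough illustration of the shared-space mapping described in this abstract, the sketch below scales the tracked human position down into rat-arena coordinates to drive the robot, and scales the tracked rat position up to place the humanoid avatar at human scale. All names (the tracker, robot, and avatar interfaces, and the SCALE ratio) are hypothetical stand-ins for illustration, not the authors' actual system.

```python
# Minimal sketch of the bidirectional scale mapping between the human
# tracking space and the rat arena. All interfaces here are hypothetical.

SCALE = 0.05  # assumed ratio of arena size to human tracking space

def human_to_arena(pos):
    """Map a tracked human position (x, y in metres) into arena coordinates."""
    return (pos[0] * SCALE, pos[1] * SCALE)

def arena_to_virtual(pos):
    """Map a tracked rat position (x, y in metres) up to human scale."""
    return (pos[0] / SCALE, pos[1] / SCALE)

def update(human_tracker, rat_tracker, robot, avatar):
    # Slave the small robot in the arena to the human's movements ...
    robot.move_to(human_to_arena(human_tracker.position()))
    # ... and place the humanoid avatar in the IVR at the rat's position.
    avatar.set_position(arena_to_virtual(rat_tracker.position()))
```

Calling update once per tracking frame keeps both representations in step: each participant acts at its own scale while the mapping mediates between the two spaces.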
Abstract. We introduce a project-based concept for teaching Augmented Reality (AR) applications in a lab course. The key element of the course is that the students develop a stand-alone application based on their own idea. The complexity of Augmented Reality applications requires software engineering skills and the integration of AR-specific solutions to problems as they occur. The students work self-responsibly in a team with state-of-the-art methods and systems. Moreover, they gain presentation and documentation skills. They define and work on their individual goals and challenges, which are assembled into a final application. The identification with the goal of the group creates passion and motivation for building the AR application. Besides the teaching concept, we present some of the students' applications in this paper. Furthermore, we discuss the supervision effort, our experiences from the supervisors' view, and the students' feedback.
The task of vision-based people tracking is a major research problem in the context of surveillance applications and human behaviour estimation, but it has had only minimal impact on (Ubiquitous) Augmented Reality applications thus far. Deploying stationary infrastructural hardware within indoor environments for the purpose of Augmented Reality could provide users' devices with functionality that a small device and mobile sensors cannot provide on their own. Since surveillance cameras are already widely deployed, people tracking can be expected to become a ubiquitously available infrastructural element in buildings; its use for scenarios indoors or close to buildings is obvious. We present and discuss several ways in which real-time people tracking could influence the field of Augmented Reality and further vision-based applications.

MOTIVATION

A new generation of small and powerful hardware, such as Ultra Mobile PCs (UMPCs) and netbooks, offers the possibility of transferring AR applications to wearable platforms. Recent advances in the hardware-specific optimization of tracking algorithms for mobile phones [8] allow for a completely new quality of Ubiquitous Augmented Reality (UAR) applications on mobile devices. Yet, although these platforms show increasing potential for presenting virtual information, there is still a need to combine the sensor information from the mobile clients with information retrieved from stationary systems.

In robotics and surveillance applications, various stationary tracking environments based on GPS, RFID, infrared, or UWB tags have been merged with mobile sensor data. Schulz et al. presented a fusion system that combines an anonymous laser rangefinder, providing highly accurate position data, with infrared and ultrasound badges, providing coarser position data [6]. The system assigns the correct personal ids to the gathered trajectories: a Rao-Blackwellized particle filter determines both the position and the id of an object, and this approach is used to localize persons on a map of a building. Graumann et al. presented a multi-level framework that combines different sensor data and encapsulates certain properties within certain layers [2]. They demonstrated a location-based application that switches sensors between outdoor and indoor scenarios. As a fusion system they use a mobile laptop that integrates the data estimated by the different devices. The mobile device is equipped with GPS, WiFi, and UC Berkeley sensor motes that gather user-specific position data and fuse them on a local basis; the fused data is used for a navigation application. Use of these systems requires the mobile units to carry special targets, such as RFID, GPS, or WiFi units. Vision-based people tracking systems with stationary cameras, e.g. for surveillance applications ([3], [1]), offer the possibility of tracking persons without requiring them to carry such targets.
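To make the id-assignment idea concrete, the following is a highly simplified, hypothetical sketch of a particle filter that fuses accurate but anonymous laser positions with coarse identity-badge readings. It only illustrates the principle behind the approach of Schulz et al. [6]; it is not their Rao-Blackwellized implementation, and all names and parameters are assumptions.

```python
import math
import random

def badge_likelihood(track_pos, badge_pos, sigma=2.0):
    """Gaussian likelihood that a coarse badge reading matches a laser track."""
    d2 = (track_pos[0] - badge_pos[0]) ** 2 + (track_pos[1] - badge_pos[1]) ** 2
    return math.exp(-d2 / (2.0 * sigma ** 2))

def filter_step(particles, laser_tracks, badge_readings):
    """One update step. Each particle is a dict: laser track index -> person id.

    laser_tracks:   list of accurate (x, y) positions, anonymous
    badge_readings: dict of person id -> coarse (x, y) position
    """
    # Weight each id-assignment hypothesis by how well the coarse badge
    # positions agree with the accurate laser tracks under that assignment.
    weights = []
    for assignment in particles:
        w = 1.0
        for track_idx, person_id in assignment.items():
            w *= badge_likelihood(laser_tracks[track_idx],
                                  badge_readings[person_id])
        weights.append(w + 1e-12)  # avoid degenerate all-zero weights
    # Resample hypotheses in proportion to their weights.
    return random.choices(particles, weights=weights, k=len(particles))
```

Over successive steps, hypotheses whose id assignment keeps each badge near "its" laser track come to dominate, so the filter recovers both position (from the laser) and identity (from the badges).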
The calibration of optical see-through head-mounted displays (OSTHMDs) is an important foundation for correct object alignment in augmented reality. Any calibration process for OSTHMDs requires users to align 2D points in screen space with 3D points in the real world and to confirm each alignment. In this poster, we present the results of our empirical evaluation in which we compared four confirmation methods: Keyboard, Hand-held, Voice, and Waiting. The Waiting method, designed to reduce head motion during confirmation, showed significantly higher accuracy than all other methods. In addition, averaging the sampled user input over a time frame before the moment of confirmation improved the accuracy of all methods. We conducted a further expert study showing that the results achieved with a video see-through head-mounted display also hold for optical see-through head-mounted display calibration.
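The averaging result lends itself to a short sketch: rather than taking the single alignment measurement at the instant of confirmation, the samples collected during a preceding window are averaged, which suppresses the head jitter introduced by the confirmation action itself. The window length, sampling rate, and sample format below are assumptions for illustration, not the parameters used in the study.

```python
from collections import deque

WINDOW = 30  # assumed: roughly 0.5 s of alignment samples at 60 Hz

samples = deque(maxlen=WINDOW)  # ring buffer of recent (x, y, z) measurements

def on_sample(point):
    """Record every tracked alignment measurement as it arrives."""
    samples.append(point)

def on_confirm():
    """On confirmation, return the window average instead of the last sample."""
    n = len(samples)
    return tuple(sum(p[i] for p in samples) / n for i in range(3))
```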