In this paper, we present a VR-based first-person view paradigm applied to a tele-surveillance application. Using an Unmanned Air Vehicle (UAV), we have developed an intuitive tangible interface between the pilot and his airship (blimp). The idea is to make the manipulation of an embedded camera transparent by controlling it instinctively with head movements, so that the user remains available for other tasks such as piloting the blimp. In other words, the user becomes part of the interface. Using the same paradigm for sensing sensor data acquired in real time, a vibro-tactile belt worn by the user indicates the resistance offered by the wind and thus increases the feeling of telepresence. The results of our experiments show that our system is reliable and enhances the situational awareness of the pilot.

CR Categories: I.3.7 [Three-Dimensional Graphics and Realism]: Virtual Reality

Keywords: Telepresence, tele-operation, Head-Mounted Display, surveillance, haptic interface.

* e-mail: Xavier.Righetti@epfl.ch
† e-mail: Sylvain.Cardin@epfl.ch
‡ e-mail: Daniel.Thalmann@epfl.ch
§ e-mail: Frederic.Vexo@epfl.ch

INTRODUCTION

Aerial surveillance is a fast-growing topic in both the commercial and research fields. Indeed, many security-sensitive events such as the Olympic Games are now fully monitored by drones, helicopters, or blimps. Aerial surveillance offers the best point of view to ensure maximum visibility while giving an outline of the general situation. Other possible applications include search and rescue operations, environmental surveillance and modeling, and traffic monitoring. Furthermore, compared to airplanes, blimps are able to hover in a stationary position, which is an essential capability for many monitoring applications. They produce very little noise, turbulence, and internal vibration, generate low operational costs, and have long endurance. In addition, blimps are the safest UAVs: even if the blimp is perforated, it does not represent a potential danger for the surrounding people. Blimps thus seem to be the perfect platform for aerial observation.

In many cases, monitoring missions require the blimp to carry at least a pilot and a copilot, which has a substantial impact on fixed and operational costs. For these reasons, we have opted for an unmanned airship, entirely controlled by a single person on the ground. In this paper, blimps are considered a natural extension of ourselves: they let us wander and observe in a first-person view. Moreover, the increasing amount of input and output data passing between the blimp and the user urges us to elaborate new custom communication interfaces so that no information is neglected or misunderstood. This is especially true in our case, where we often deal with delicate operations such as piloting the blimp without direct visual contact.

The next section reviews previous work published in this field of research. Our system is presented in Section 3 by describing its interaction paradigms.

PREVIOUS WORKS

In [3], Alberto Elfes et al. categori...
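To make the head-slaved camera paradigm described in the abstract above more concrete, the following minimal Python sketch maps head yaw and pitch from an HMD tracker to pan/tilt setpoints for an onboard camera gimbal. The tracker and gimbal interfaces, function names, and angular ranges are assumptions made for illustration; they are not the authors' implementation.

```python
# Hypothetical sketch: slave a blimp's pan/tilt camera to the pilot's head.
# Tracker/gimbal APIs, names, and ranges are assumed for illustration only.

def clamp(value, low, high):
    """Keep a command within the mechanical limits of the gimbal."""
    return max(low, min(high, value))

def head_to_camera(yaw_deg, pitch_deg,
                   pan_range=(-170.0, 170.0),
                   tilt_range=(-45.0, 90.0)):
    """Map head orientation (degrees) to pan/tilt setpoints (degrees)."""
    pan = clamp(yaw_deg, *pan_range)
    tilt = clamp(pitch_deg, *tilt_range)
    return pan, tilt

# Example control loop with pseudo-devices (assumed APIs):
# while True:
#     yaw, pitch = hmd_tracker.read_orientation()
#     camera_gimbal.set_setpoints(*head_to_camera(yaw, pitch))
```

The direct one-to-one mapping keeps the camera aligned with the pilot's gaze, which is what makes the interface "transparent": no explicit camera command is ever issued by hand.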
In our everyday life, we often see objects or persons and are aware that there are related digital services, such as an online ticket service when seeing a poster advertising a concert. Currently, finding the related information is a rather time-consuming activity. Using our Contextual Bookmark system, a user can define a snapshot with her mobile phone consisting of a picture, time stamp, and location. Such a bookmark can then be stored on the mobile phone, exchanged with friends, and in particular used to access related videos, web pages, and other services. This helps the user bridge the gap between the virtual and the real world in order to use related services. By combining content and context analysis, objects are recognized without any visual markers or electronic tags. We would like to demonstrate our system based on a nomadic usage scenario in which a person defines a Contextual Bookmark of a movie trailer, buys the corresponding movie, plays the movie on a TV, and exchanges the bookmark with a friend.
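To illustrate the data a Contextual Bookmark carries (picture, time stamp, location), here is a minimal Python sketch of such a record; the field names, types, and the lookup-query helper are illustrative assumptions rather than the project's actual schema.

```python
# Hypothetical sketch of a Contextual Bookmark record; names/types are assumed.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ContextualBookmark:
    picture: bytes        # snapshot taken with the mobile phone (e.g. JPEG data)
    timestamp: datetime   # when the snapshot was taken
    latitude: float       # location at capture time
    longitude: float

    def to_query(self) -> dict:
        """Bundle content and context for a recognition/lookup service."""
        return {
            "image": self.picture,
            "time": self.timestamp.isoformat(),
            "location": (self.latitude, self.longitude),
        }
```

Because the bookmark is just content plus context, it can be stored locally, sent to a friend, or submitted to a server that combines image recognition with time and location to identify the object without markers or tags.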
In this paper, we introduce a European research project, Interactive Media with Personal Networked Devices (INTERMEDIA), in which we seek to progress beyond home- and device-centric convergence toward truly user-centric convergence of multimedia. Our vision is to make the user the multimedia center: the user as the point at which multimedia services and the means for interacting with them converge. This paper proposes the main research goals in providing users with a personalized interface and content independent of physical networked devices, space, and time. As a case study, we describe an indoor, mobile mixed-reality guide system: Chloe@University. With a see-through head-mounted display (HMD) connected to a small wearable computing device, Chloe@University provides an efficient way to guide someone inside a building: a 3D virtual character in front of the user guides him/her to the required destination.
In this paper, we present the technical details and the challenges we faced during the development and evaluation phases of our wearable indoor guiding system, which consists of a virtual personal assistant guiding the user to his/her desired destination. The main issues discussed can be classified into three categories: context detection, real-time 3D rendering, and user interaction.
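The three categories above suggest a simple per-frame structure for such a system. The following Python sketch shows one way the loop might be organized; all module names and method signatures are assumptions for illustration and are not taken from the Chloe@University implementation.

```python
# Hypothetical per-frame loop for a wearable indoor guiding system.
# Module names and method signatures are assumed for illustration only.

def run_guide(context_sensor, input_device, planner, renderer):
    while True:
        # Context detection: estimate the user's indoor position and heading.
        position, heading = context_sensor.locate_user()

        # User interaction: read the destination currently selected by the user.
        destination = input_device.current_destination()

        # Path planning: choose the next waypoint a few meters ahead of the user.
        waypoint = planner.next_waypoint(position, destination)

        # Real-time 3D rendering: draw the virtual guide at the waypoint on the HMD.
        renderer.draw_character(position=waypoint, user_heading=heading)
```

Separating the loop this way mirrors the three problem areas named above, so each can be developed and evaluated independently.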