Drones allow exploring dangerous or impassable areas safely from a distant point of view. However, flight control from an egocentric view in narrow or constrained environments can be challenging. Arguably, an exocentric view would afford a better overview and, thus, more intuitive flight control of the drone. Unfortunately, such an exocentric view is unavailable when exploring indoor environments. This paper investigates the potential of drone-augmented human vision, i.e., exploring the environment and controlling the drone indirectly from an exocentric viewpoint. If used with a see-through display, this approach can simulate X-ray vision, providing a natural view into an otherwise occluded environment. The user's view is synthesized from a three-dimensional reconstruction of the indoor environment using image-based rendering. This user interface is designed to reduce the cognitive load of the drone's flight control: the user can concentrate on exploring the inaccessible space, while flight control is largely delegated to the drone's autopilot system. We assess our system in a first experiment, showing how drone-augmented human vision supports spatial understanding and improves natural interaction with the drone.
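The abstract does not specify how the exocentric viewpoint is parameterized. As a rough illustration only, a virtual "chase" camera can be derived from the drone's tracked pose and handed to the image-based renderer; the following minimal Python sketch assumes this interpretation, and all names and offsets are hypothetical.

```python
# Hypothetical sketch: derive an exocentric (chase) camera pose from the
# drone's tracked pose. Offsets and names are illustrative, not from the paper.
import numpy as np

def exocentric_camera(drone_pos, drone_yaw, back=2.0, up=1.0):
    """Place a virtual camera behind and above the drone, looking at it.

    drone_pos: (3,) world position of the drone
    drone_yaw: heading in radians
    Returns (camera_pos, look_at) for the image-based renderer.
    """
    # Unit vector pointing along the drone's heading.
    forward = np.array([np.cos(drone_yaw), np.sin(drone_yaw), 0.0])
    # Offset the camera backwards along the heading and upwards.
    camera_pos = drone_pos - back * forward + np.array([0.0, 0.0, up])
    return camera_pos, drone_pos  # the camera looks at the drone itself

pos, target = exocentric_camera(np.array([1.0, 2.0, 1.5]), np.deg2rad(45))
```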
Exploration of challenging indoor environments is a demanding task. While automation with aerial robots seems a promising solution, fully autonomous systems still struggle with high-level cognitive tasks and intuitive decision making. To facilitate automation, we introduce a novel teleoperation system with an aerial telerobot that handles all demanding low-level tasks. Motivated by the typical structure of indoor environments, the system creates an interactive scene topology in real time that reduces scene detail and supports affordances. Thus, difficult high-level tasks can be effectively supervised by a human operator. To evaluate the effectiveness of our system during a real-world exploration mission, we conducted a user study. Despite being limited by real-world constraints, the results indicate that our system supports operators better during indoor exploration than a baseline system with traditional joystick control.
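The abstract leaves the scene-topology representation open. One plausible reading is a graph whose nodes are rooms and whose edges are traversable openings; the operator then supervises at the level of rooms while the telerobot handles waypoint flight. The sketch below assumes this reading and uses networkx purely for illustration; all node names and coordinates are made up.

```python
# Hypothetical sketch of an interactive scene topology: rooms become graph
# nodes, traversable openings become edges. The paper's actual representation
# is not specified in the abstract.
import networkx as nx

topology = nx.Graph()
topology.add_node("corridor", centroid=(0.0, 0.0, 1.2))
topology.add_node("room_a", centroid=(4.0, 1.0, 1.2))
topology.add_node("room_b", centroid=(4.0, -3.0, 1.2))
topology.add_edge("corridor", "room_a", doorway=(2.0, 0.8, 1.0))
topology.add_edge("corridor", "room_b", doorway=(2.0, -1.5, 1.0))

# The operator picks a target node; low-level flight through the doorway
# waypoints is delegated to the telerobot.
route = nx.shortest_path(topology, "corridor", "room_b")
waypoints = [topology.edges[a, b]["doorway"] for a, b in zip(route, route[1:])]
```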
Indoor navigation with micro aerial vehicles (MAVs) is of growing importance. State-of-the-art flight management controllers provide extensive interfaces for control and navigation, but most commonly target outdoor navigation scenarios. Indoor navigation with MAVs is challenging because of spatial constraints and the lack of drift-free positioning systems such as GPS. Instead, vision- and/or inertial-based methods are used to localize the MAV against the environment. For educational purposes, and to test and develop such algorithms, the so-called droneSpace has been in operation at the Institute of Computer Graphics and Vision at Graz University of Technology since 2015. It consists of a flight arena equipped with a highly accurate motion-tracking system and an extensive robotics framework for semi-autonomous MAV navigation. A core component of the droneSpace is a Scalable and Lightweight Indoor-navigation MAV design, which we call the SLIM (a detailed description of the SLIM and related projects can be found at our website: https://sites.google.com/view/w-a-isop/home/education/slim). It allows flexible vision-sensor setups and provides interfaces to inject accurate pose measurements from external tracking sources to achieve stable indoor hover flights. With this work, we present the capabilities and flexibility of the framework, especially with regard to research and education at the university level. We present use cases from research projects as well as courses at Graz University of Technology, and discuss results and potential future work on the platform.
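Injecting external tracker poses into an autopilot is a common pattern for GPS-denied flight. As a minimal sketch of the idea (not the SLIM's actual interface, which the abstract does not detail), a MAVLink-compatible autopilot can be fed pose updates via pymavlink's VISION_POSITION_ESTIMATE message; the connection string, update rate, and the get_tracker_pose() helper below are all hypothetical.

```python
# Minimal sketch: feed external motion-tracker poses to a MAVLink autopilot,
# substituting for GPS indoors. Connection details are placeholders.
import time
from pymavlink import mavutil

master = mavutil.mavlink_connection("udpout:192.168.1.10:14550")
master.wait_heartbeat()  # wait until the autopilot is reachable

def get_tracker_pose():
    # Placeholder for a query to the motion-tracking system.
    return 0.0, 0.0, -1.0, 0.0, 0.0, 0.0  # x, y, z (NED), roll, pitch, yaw

while True:
    x, y, z, roll, pitch, yaw = get_tracker_pose()
    # VISION_POSITION_ESTIMATE feeds the autopilot's state estimator.
    master.mav.vision_position_estimate_send(
        int(time.time() * 1e6),  # timestamp in microseconds
        x, y, z, roll, pitch, yaw)
    time.sleep(0.02)  # ~50 Hz update rate
```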
For a variety of applications, remote navigation of an unmanned aerial vehicle (UAV) along a flight trajectory is an essential task. For instance, during search and rescue missions in outdoor scenes, an important goal is to ensure safe navigation. As assessed by the remote operator, this can mean avoiding collisions with obstacles, but also avoiding hazardous flight areas. State-of-the-art approaches enable navigation along trajectories but do not allow for indirect manipulation during motion. In addition, they typically rely on egocentric views, which can limit understanding of the remote scene. With this work, we introduce a novel indirect manipulation method, based on gravitational law, to recover safe navigation in the presence of hazardous flight areas. The indirect character of our method supports manipulation at far distances, where common direct manipulation methods typically fail. We combine it with an immersive exocentric view to improve understanding of the scene. We designed three flavors of our method and compared them in a user study in a simulated scene. This method is a first step towards a more extensive navigation interface; as future work, we plan experiments in dynamic real-world scenes.
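The abstract does not give the exact formulation of the gravitational manipulation method. One natural interpretation is an inverse-square repulsive force, mirroring Newton's law with the sign flipped, that pushes trajectory waypoints away from hazard centers. The Python sketch below assumes this interpretation; the constants and names are illustrative.

```python
# Hedged sketch of a 'gravitational' indirect-manipulation force: hazardous
# flight areas repel nearby trajectory waypoints with an inverse-square law.
import numpy as np

def repel(waypoints, hazards, k=0.5, eps=1e-6):
    """Push each waypoint away from every hazard center.

    waypoints: (N, 3) array of trajectory points
    hazards:   (M, 3) array of hazard centers
    k:         repulsion strength (analogous to a gravitational constant)
    """
    adjusted = waypoints.copy()
    for c in hazards:
        diff = adjusted - c  # vectors from the hazard to each waypoint
        dist = np.linalg.norm(diff, axis=1, keepdims=True)
        # Inverse-square falloff; eps avoids division by zero at the center.
        adjusted += k * diff / (dist**3 + eps)
    return adjusted

path = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [2.0, 0.0, 1.0]])
hazard = np.array([[1.0, 0.2, 1.0]])
safe_path = repel(path, hazard)  # waypoints bend away from the hazard
```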