Achieving accurate, high-rate pose estimates from proprioceptive and/or exteroceptive measurements is the first step in the development of navigation algorithms for agile mobile robots such as Unmanned Aerial Vehicles (UAVs). In this paper, we propose a decoupled Graph-Optimization-based Multi-Sensor Fusion approach (GOMSF) that combines generic 6 Degree-of-Freedom (DoF) visual-inertial odometry poses and 3 DoF globally referenced positions to infer the global 6 DoF pose of the robot in real time. Our approach casts the fusion as a real-time alignment problem between the local base frame of the visual-inertial odometry and the global base frame. The alignment transformation that relates these coordinate systems is continuously updated by optimizing a sliding-window pose graph containing the robot's most recent states. We evaluate the presented pose estimation method on both simulated data and large outdoor experiments using a small UAV that is capable of running our system onboard. Results are compared against different state-of-the-art sensor fusion frameworks, revealing that the proposed approach is substantially more accurate than other decoupled fusion strategies. We also demonstrate results comparable to those of a finely tuned Extended Kalman Filter that fuses visual, inertial and GPS measurements in a coupled way, and show that our approach is generic enough to deal with different input sources in a straightforward manner. Video: https://youtu.be/GIZNSZ2soL8
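For illustration, the sketch below shows one simplified way to keep a local-to-global alignment updated from a sliding window of corresponding positions. It assumes a reduced 4-DoF alignment (yaw plus translation) solved in closed form with NumPy; the function name and this reduction are assumptions made for clarity, whereas GOMSF optimizes a full sliding-window pose graph.

    import numpy as np

    def align_local_to_global(p_local, p_global):
        """Estimate a 4-DoF transform (yaw + translation) mapping local VIO
        positions onto globally referenced positions via least squares.

        p_local, p_global: (N, 3) arrays of corresponding positions inside
        the current sliding window. Hypothetical helper, not the GOMSF API.
        """
        mu_l, mu_g = p_local.mean(axis=0), p_global.mean(axis=0)
        dl, dg = p_local - mu_l, p_global - mu_g
        # Closed-form yaw from the horizontal (x, y) components.
        num = np.sum(dl[:, 0] * dg[:, 1] - dl[:, 1] * dg[:, 0])
        den = np.sum(dl[:, 0] * dg[:, 0] + dl[:, 1] * dg[:, 1])
        yaw = np.arctan2(num, den)
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        t = mu_g - R @ mu_l
        return R, t

    # Usage: re-estimate the alignment whenever the window is updated,
    # then map the latest VIO pose into the global frame:
    #   R, t = align_local_to_global(window_vio_xyz, window_gps_xyz)
    #   p_global_now = R @ p_local_now + t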
On-site robotic construction not only has the potential to enable architectural assemblies that exceed the size and complexity practical with laboratory-based prefabrication methods, but also offers the opportunity to leverage context-specific, locally sourced materials that are inexpensive, abundant, and low in embodied energy. We introduce a process for constructing dry stone walls in situ, facilitated by a customized autonomous hydraulic excavator. Cabin-mounted LiDAR sensors provide terrain mapping as well as stone localization and digitization, and a planning algorithm determines the placement position of each stone. As the properties of the materials are unknown at the beginning of construction, and because error propagation can hinder the efficacy of pre-planned assemblies with non-uniform components, the structure is planned on the fly: the desired position of each stone is computed immediately before it is placed, and any settling or unexpected deviations are accounted for. We present the first result of this geometric- and motion-planning process: a 3-m-tall wall composed of 40 stones with an average weight of 760 kg.
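The on-the-fly planning idea can be summarized as a sense-plan-place loop, sketched below. All names (select_best_stone, plan_placement_pose, the excavator interface) are hypothetical placeholders, not the authors' implementation; the point is only that each placement is planned against the freshly scanned as-built state, so settling and deviations of previously placed stones are absorbed.

    # Illustrative sketch of an on-the-fly placement loop (not the authors' planner).
    def build_wall(stone_inventory, wall_target, excavator):
        while not wall_target.is_complete() and stone_inventory:
            wall_scan = excavator.scan_wall()        # LiDAR map of the current as-built state
            stone = select_best_stone(stone_inventory, wall_scan, wall_target)
            pose = plan_placement_pose(stone, wall_scan, wall_target)
            excavator.pick(stone)
            excavator.place(stone, pose)
            stone_inventory.remove(stone)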
Automating building processes through robotic systems has the potential to address the need for safer, more efficient, and sustainable construction operations. While ongoing research efforts often target the use of prefabricated materials in controlled environments, here we focus on utilizing objects found on-site, such as irregularly shaped rocks and rubble, as a way of enabling novel types of construction in remote and extreme environments, where standard building materials might not be easily accessible. In this article, we present a perception and grasp pose planning pipeline for autonomous manipulation of objects of interest with a robotic walking excavator. The system incrementally builds a LiDAR-based map of the robot's surroundings and provides the ability to register externally reconstructed point clouds of the scene, for example, from images captured by a drone-borne camera, which helps increase map coverage. In addition, object-like instances, such as stones, are segmented out of this map. Based on this information, collision-free grasping poses for the robotic manipulator are planned to enable picking and placing of these objects, while keeping track of them during the manipulation. The approach is validated in a real setting at an architecturally relevant scale by segmenting and manipulating boulders of several hundred kilograms, which is a first step towards the full automation of dry-stack wall building processes. Video: https://youtu.be/4bc5n2-zj3Q
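A conceptual sketch of such a perception-to-grasp pipeline is given below. Every class and function name is a placeholder invented for illustration; the abstract does not describe the real system's interfaces.

    # Conceptual sketch of the pipeline steps described above (placeholder names).
    tsdf_map = IncrementalMap()

    for lidar_scan, pose in lidar_stream():
        tsdf_map.integrate(lidar_scan, pose)        # incremental LiDAR mapping

    # Optionally fuse an externally reconstructed cloud (e.g. from drone imagery)
    # after registering it against the onboard map (e.g. with ICP).
    external_cloud = load_point_cloud("drone_reconstruction.ply")
    T_ext = register_icp(external_cloud, tsdf_map.as_point_cloud())
    tsdf_map.merge(external_cloud.transform(T_ext))

    # Segment object-like instances (stones) and plan a collision-free grasp.
    stones = segment_instances(tsdf_map)
    grasp = plan_collision_free_grasp(stones[0], tsdf_map, arm_model)
    execute_and_track(grasp, stones[0])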
Semantic segmentation is fundamental for enabling scene understanding in several robotics applications, such as aerial delivery and autonomous driving. While scenarios in autonomous driving mainly comprise roads and small viewpoint changes, imagery acquired from aerial platforms is usually characterized by extreme variations in viewpoint. In this paper, we focus on aerial delivery use cases, in which a drone visits the same places repeatedly from distinct viewpoints. Although such applications are already under investigation (e.g. transport of blood between hospitals), current approaches depend heavily on ground personnel assistance to ensure safe delivery. Aiming at enabling safer and more autonomous operation, in this work we propose a novel deep-learning-based semantic segmentation approach capable of running on small aerial vehicles, as well as a practical dataset-capturing method and a network-training strategy that enable greater viewpoint tolerance in such scenarios. Our experiments show that the proposed method greatly outperforms a state-of-the-art network for embedded computers while maintaining similar inference speed and memory consumption. In addition, it achieves slightly better accuracy compared to a much larger and slower state-of-the-art network, which is unsuitable for small aerial vehicles, as considered in this work.
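One generic way to encourage viewpoint tolerance during training, sketched below with torchvision, is to apply identical random geometric transforms (rotation, scale) to each image and its label mask. The parameter ranges and helper name are assumptions; this is not the dataset-capturing or training strategy proposed in the paper, only a common, related augmentation technique.

    import random
    import torchvision.transforms.functional as TF
    from torchvision.transforms import InterpolationMode

    def viewpoint_augment(image, mask):
        """Apply the same random rotation and scale to an image and its label mask,
        mimicking heading and altitude changes of a drone revisiting a site.
        Illustrative ranges, not taken from the paper."""
        angle = random.uniform(-180.0, 180.0)   # arbitrary heading
        scale = random.uniform(0.7, 1.3)        # altitude / distance variation
        image = TF.affine(image, angle=angle, translate=[0, 0], scale=scale, shear=0.0,
                          interpolation=InterpolationMode.BILINEAR)
        mask = TF.affine(mask, angle=angle, translate=[0, 0], scale=scale, shear=0.0,
                         interpolation=InterpolationMode.NEAREST)  # keep labels discrete
        return image, mask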