As light is transmitted from subject to observer, it is absorbed and scattered by the medium it passes through. In media with large suspended particles, such as fog or turbid water, scattering can drastically degrade image quality. In this paper we present an algorithm for removing the effects of light scattering, referred to as dehazing, in underwater images. Our key contribution is a simple yet effective prior that exploits the strong difference in attenuation among the three image color channels in water to estimate the depth of the scene. We then use this estimate to reduce the spatially varying effect of haze in the image. Our method works with a single image and requires neither specialized hardware nor prior knowledge of the scene. As a by-product of the dehazing process, an up-to-scale depth map of the scene is produced. We present results over multiple real underwater images and over a controlled test set where the target distance and true colors are known.
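A channel-difference prior of this kind lends itself to a compact implementation. The sketch below is illustrative, not the authors' code: the patch size, the background-light estimate, and the transmission floor are our assumptions, and the final step inverts the standard haze image-formation model I = J·t + A·(1 − t).

```python
import numpy as np
from scipy.ndimage import maximum_filter

def channel_difference_transmission(img, patch=15):
    """Estimate transmission from an RGB underwater image (floats in
    [0, 1]) by comparing the locally strongest red response against the
    locally strongest green/blue response. Red attenuates fastest in
    water, so a large difference suggests a nearby object and a small
    one suggests distant, hazy water."""
    r = maximum_filter(img[..., 0], size=patch)
    gb = maximum_filter(np.maximum(img[..., 1], img[..., 2]), size=patch)
    diff = r - gb
    # Shift so the largest observed difference maps to full transmission.
    return diff + (1.0 - diff.max())

def dehaze(img, patch=15, t_min=0.1):
    """Invert the haze model I = J*t + A*(1 - t). The background light A
    is taken from the haziest pixel -- a simplifying assumption."""
    t = channel_difference_transmission(img, patch)
    idx = np.unravel_index(np.argmin(t), t.shape)
    A = img[idx]
    t = np.clip(t, t_min, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

Note that the recovered transmission is only defined up to scale, which is why the abstract describes the resulting depth map as up-to-scale.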
This paper documents a large-scale, long-term autonomy dataset for robotics research collected on the University of Michigan's North Campus. The dataset consists of omnidirectional imagery, 3D lidar, planar lidar, GPS, and proprioceptive sensors for odometry, collected using a Segway robot. It was collected to facilitate research on long-term autonomous operation in changing environments, and is composed of 27 sessions spaced approximately biweekly over the course of 15 months. The sessions repeatedly explore the campus, both indoors and outdoors, on varying trajectories and at different times of day across all four seasons. The dataset therefore captures many challenging elements, including moving obstacles (e.g., pedestrians, bicyclists, and cars), changing lighting, varying viewpoint, seasonal and weather changes (e.g., falling leaves and snow), and long-term structural change caused by construction projects. To further facilitate research, we also provide ground-truth pose for all sessions in a single frame of reference.
This paper reports on a factor-based method for node marginalization in simultaneous localization and mapping (SLAM) pose-graphs. Node marginalization in a pose-graph induces fill-in and leads to computational challenges in performing inference. The proposed method produces a new set of constraints over the elimination clique that can represent either the true marginalization or a sparse approximation of the true marginalization using a Chow-Liu tree. The proposed algorithm improves upon existing methods in two key ways: first, it is not limited to strictly full-state relative-pose constraints and works equally well with other low-rank constraints, such as those produced by monocular vision; second, the new factors are produced in a way that accounts for measurement correlation, a problem ignored in other methods that rely upon measurement composition. We evaluate the proposed method over several real-world SLAM graphs and show that it outperforms other state-of-the-art methods in terms of Kullback-Leibler divergence.
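For jointly Gaussian variables, the Chow-Liu tree is the maximum spanning tree over pairwise mutual information, which can be computed from block determinants of the clique's marginal covariance. Below is a minimal sketch of that step; the block-index bookkeeping and the Kruskal-style construction are illustrative choices of ours, not the paper's implementation.

```python
import numpy as np
from itertools import combinations

def gaussian_mutual_info(Sigma, blocks, i, j):
    """Mutual information between two jointly Gaussian block variables:
    I = 0.5 * log( det(S_ii) * det(S_jj) / det(S_[ij,ij]) )."""
    bi, bj = blocks[i], blocks[j]
    Sii = Sigma[np.ix_(bi, bi)]
    Sjj = Sigma[np.ix_(bj, bj)]
    idx = np.concatenate([bi, bj])
    Sij = Sigma[np.ix_(idx, idx)]
    return 0.5 * (np.linalg.slogdet(Sii)[1]
                  + np.linalg.slogdet(Sjj)[1]
                  - np.linalg.slogdet(Sij)[1])

def chow_liu_tree(Sigma, blocks):
    """Edges of the maximum mutual-information spanning tree over the
    clique variables, built greedily (Kruskal) with union-find."""
    n = len(blocks)
    edges = sorted(((gaussian_mutual_info(Sigma, blocks, i, j), i, j)
                    for i, j in combinations(range(n), 2)), reverse=True)
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

Replacing the dense marginal with the tree's pairwise factors keeps the approximation's Kullback-Leibler divergence from the true marginal minimal among all tree-structured approximations, which is the property the evaluation measures.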
This paper reports on a generic factor-based method for node removal in factor-graph simultaneous localization and mapping (SLAM), which we call generic linear constraints (GLCs). The need for a generic node-removal tool is motivated by long-term SLAM applications in which nodes are removed in order to control the computational cost of graph optimization. GLC produces a new set of linearized factors over the elimination clique that can represent either the true marginalization (i.e., dense GLC) or a sparse approximation of the true marginalization using a Chow-Liu tree (i.e., sparse GLC). The proposed algorithm improves upon commonly used methods in two key ways: first, it is not limited to graphs with strictly full-state relative-pose factors and works equally well with other low-rank factors, such as those produced by monocular vision; second, the new factors are produced in a way that accounts for measurement correlation, a problem encountered in other methods that rely strictly upon pairwise measurement composition. We evaluate the proposed method over multiple real-world SLAM graphs and show that it outperforms other recently proposed methods in terms of Kullback-Leibler divergence. Additionally, we experimentally demonstrate that GLC provides a principled and flexible tool to control the computational complexity of long-term graph SLAM, with results shown for 34.9 h of real-world indoor-outdoor data covering 147.4 km, collected over 27 mapping sessions spanning a period of 15 months.
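In the linear-Gaussian view, the target information for dense GLC is the Schur complement of the removed node's block in the joint information matrix, and a new linear factor can then be recovered from its eigendecomposition. The following sketch shows that idea under simplifying assumptions (a precomputed joint information matrix, a fixed linearization point, and an arbitrary rank tolerance); it is a schematic, not the authors' implementation.

```python
import numpy as np

def marginalize_node(Lam, keep, drop):
    """Marginalize the 'drop' indices out of an information matrix via
    the Schur complement: Lam_kk - Lam_kd @ inv(Lam_dd) @ Lam_dk."""
    Lkk = Lam[np.ix_(keep, keep)]
    Lkd = Lam[np.ix_(keep, drop)]
    Ldd = Lam[np.ix_(drop, drop)]
    return Lkk - Lkd @ np.linalg.solve(Ldd, Lkd.T)

def glc_factor(Lam_t, tol=1e-9):
    """Build a linear-constraint matrix G with G.T @ G == Lam_t, keeping
    only the informative (nonzero-eigenvalue) directions so that
    rank-deficient targets, e.g., arising from monocular or other
    low-rank factors, are handled gracefully."""
    w, U = np.linalg.eigh(Lam_t)
    mask = w > tol
    # G = D^{1/2} U^T restricted to the retained eigen-directions.
    return np.sqrt(w[mask])[:, None] * U[:, mask].T
```

Because G retains only the nonzero eigen-directions, the resulting factor never asserts more information than the marginalization actually provides, which is what lets the approach work beyond full-state relative-pose factors.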
This paper reports on a system for an autonomous underwater vehicle to perform in-situ, multiple session, hull inspection using long-term simultaneous localization and mapping (SLAM). Our method assumes very little a-priori knowledge, and does not require the aid of acoustic beacons for navigation, which is a typical mode of navigation in this type of application. Our system combines recent techniques in underwater saliency-informed visual SLAM and a method for representing the ship hull surface as a collection of many locally planar surface features. This methodology produces accurate maps that can be constructed in real-time on consumer-grade computing hardware. A single-session SLAM result is initially used as a prior map for later sessions, where the robot automatically merges the multiple surveys into a common hull-relative reference frame. To perform the re-localization step, we use a particle filter that leverages the locally planar representation of the ship hull surface, and a fast visual descriptor matching algorithm. Finally, we apply the recentlydeveloped graph sparsification tool, generic linear constraints (GLC), as a way to manage the computational complexity of the SLAM system as the robot accumulates information across multiple sessions. We show results for 20 SLAM sessions for two large vessels over the course of days, months, and even up to three years, with a total path length of approximately 10.2 km.
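As a rough illustration of the re-localization machinery, the skeleton below shows one predict-update-resample step of such a filter. The motion and likelihood callables stand in for the dead-reckoned perturbation and the planar-patch/descriptor scoring described in the abstract, and the effective-sample-size threshold is our assumption.

```python
import numpy as np

def low_variance_resample(particles, weights, rng):
    """Systematic (low-variance) resampling: one uniform offset, then
    regular strides through the cumulative weight distribution."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx]

def relocalize_step(particles, weights, measurement, likelihood, motion, rng):
    """One predict-update-resample step. 'particles' is an array of
    candidate hull-relative poses; 'motion' perturbs each particle and
    'likelihood' scores a pose against the current measurement."""
    particles = motion(particles, rng)
    weights = weights * np.array([likelihood(p, measurement) for p in particles])
    weights /= weights.sum()
    # Resample only when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        particles = low_variance_resample(particles, weights, rng)
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```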