2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros40897.2019.8968049
Experimental Comparison of Open Source Visual-Inertial-Based State Estimation Algorithms in the Underwater Domain

Abstract: A plethora of state estimation techniques have appeared in the last decade using visual data, and more recently with added inertial data. Datasets typically used for evaluation include indoor and urban environments, where supporting videos have shown impressive performance. However, such techniques have not been fully evaluated in challenging conditions, such as the marine domain. In this paper, we compare ten recent open-source packages to provide insights on their performance and guidelines on addressing cur…

Cited by 61 publications (30 citation statements); References: 36 publications.
“…Results showed absolute translation errors of 24-52 cm, 24-67 cm, and 2-56 cm for the respective methods, which highlighted the potential of vision-based localization methods for underwater environments. With the same idea, Joshi et al. [85] collected their own datasets from an underwater sensor suite (equipped with a 100 Hz IMU and a 15 fps, 1600 × 1200 px stereo camera) operated by a diver, the same sensor suite mounted on a diver propulsion vehicle, and an AUV. Experiments were conducted on each dataset with the following combinations, depending on the modes supported by each Visual Odometry (VO) or Visual-Inertial Odometry (VIO) algorithm: monocular; monocular with IMU; stereo; and stereo with IMU.…”
Section: D (mentioning, confidence: 99%)
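The translation errors quoted above are typically reported as an absolute translation error (ATE) computed after rigidly aligning the estimated trajectory to ground truth. The sketch below illustrates that metric only; it is not code from the paper, and the Umeyama-style alignment, array shapes, and synthetic data are assumptions.

```python
# Minimal ATE sketch: rigidly align an estimated trajectory to ground truth
# (Umeyama-style least squares, no scale), then report the RMS translation error.
import numpy as np

def umeyama_alignment(est, gt):
    """Least-squares rigid alignment (rotation R, translation t) mapping
    est -> gt. Both arrays are N x 3 positions in matching order."""
    mu_est, mu_gt = est.mean(axis=0), gt.mean(axis=0)
    cov = (gt - mu_gt).T @ (est - mu_est) / est.shape[0]
    U, _, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1  # avoid a reflection
    R = U @ S @ Vt
    t = mu_gt - R @ mu_est
    return R, t

def absolute_translation_error(est, gt):
    """RMS translation error after alignment, in the units of gt (e.g. meters)."""
    R, t = umeyama_alignment(est, gt)
    est_aligned = est @ R.T + t
    err = np.linalg.norm(est_aligned - gt, axis=1)
    return np.sqrt((err ** 2).mean())

# Synthetic stand-in for two time-synchronized trajectories.
gt = np.cumsum(np.random.randn(500, 3) * 0.01, axis=0)
est = gt + np.random.randn(500, 3) * 0.02
print(f"ATE RMSE: {absolute_translation_error(est, gt):.3f} m")
```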
“…Thus, the raw point cloud could be processed to extract visual objectives with a high density of features; these visual objectives could then assist the odometry as landmarks. Such an approach is a necessity in the underwater domain, which is notoriously challenging for vision-based state estimation [2], [3], in part because the quality of the features is often low and their spatial distribution uneven, with most features concentrated in only a few places.…”
Section: A. Extracting Visual Objectives (mentioning, confidence: 99%)
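As a rough illustration of the idea quoted above (grouping a feature point cloud into a few dense regions that can serve as landmark "visual objectives"), the following sketch bins triangulated feature positions into coarse voxels and keeps the most populated ones. The voxel size, top-k selection, and synthetic cloud are illustrative assumptions, not details from the cited work.

```python
# Sketch: pick the densest regions of a 3-D feature point cloud via voxel binning.
from collections import defaultdict
import numpy as np

def dense_regions(points, voxel=0.5, top_k=3):
    """points: N x 3 triangulated feature positions (meters).
    Returns centroids and point counts of the top_k densest voxels."""
    keys = np.floor(points / voxel).astype(np.int64)
    bins = defaultdict(list)
    for i, k in enumerate(map(tuple, keys)):
        bins[k].append(i)                      # group point indices by voxel
    ranked = sorted(bins.values(), key=len, reverse=True)[:top_k]
    centroids = np.array([points[idx].mean(axis=0) for idx in ranked])
    sizes = np.array([len(idx) for idx in ranked])
    return centroids, sizes

# Synthetic cloud: two dense clusters plus scattered background features.
rng = np.random.default_rng(0)
cloud = np.vstack([
    rng.normal([2.0, 0.0, 1.0], 0.1, size=(200, 3)),
    rng.normal([-1.0, 3.0, 0.5], 0.1, size=(150, 3)),
    rng.uniform(-5, 5, size=(100, 3)),
])
centers, counts = dense_regions(cloud, voxel=0.5, top_k=2)
print(centers, counts)
```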
“…Moreover, good visual features are often concentrated on a few nearby objects, while much of the visible terrain has few features. As a result, state-of-the-art methods fail to provide robust state estimation for the robot [2], [3]. Previous work has addressed this problem with a very capable SLAM framework called SVIn [4], [5], under the assumption that an adequate number of high-quality features are visible throughout the path.…”
Section: Introduction (mentioning, confidence: 99%)
“…For underwater deployments, this becomes even more important, as vision is often occluded and negatively affected by the lack of features to track. Indeed, our comparative study of visual-inertial state estimation systems [37] found that, on underwater datasets, most state-of-the-art systems either fail to initialize or initialize incorrectly, leading to divergence. Hence, we propose a robust initialization method that uses sensory information from the stereo camera, IMU, and depth sensor for underwater state estimation.…”
Section: Initialization: Two-Step Scale Refinement (mentioning, confidence: 99%)
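To make the role of the depth sensor in such an initialization concrete, here is a minimal sketch of one way a pressure/depth measurement can anchor metric scale: fit a single scale factor so that the scaled vertical displacements of the visual estimate match the measured depth changes in a least-squares sense. This is an illustration only, not the two-step scale refinement of SVIn2; the function and variable names are assumptions.

```python
# Illustration: recover a metric scale factor for a visually estimated
# trajectory from time-synchronized depth (pressure) sensor readings.
import numpy as np

def refine_scale(est_z, depth):
    """est_z: vertical positions from the visual estimate (ambiguous scale).
    depth: depth-sensor readings in meters, same sign convention and timing.
    Returns the scalar s minimizing || s * dz_est - dz_depth ||^2."""
    dz_est = np.diff(np.asarray(est_z, dtype=float))
    dz_depth = np.diff(np.asarray(depth, dtype=float))
    denom = np.dot(dz_est, dz_est)
    if denom < 1e-12:
        raise ValueError("no vertical motion to estimate scale from")
    return np.dot(dz_est, dz_depth) / denom  # closed-form least squares

# Example: true scale is 2.5; the visual estimate reports z at 1/2.5 scale.
true_depth = np.cumsum(np.random.randn(300) * 0.05)
est_z = true_depth / 2.5 + np.random.randn(300) * 0.002
print(f"recovered scale: {refine_scale(est_z, true_depth):.2f}")
```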