Visual-inertial simultaneous localization and mapping (VI-SLAM) is a popular research topic in robotics. Because of its advantages in terms of robustness, VI-SLAM enjoys wide application in localization and mapping, including in mobile robotics, self-driving cars, unmanned aerial vehicles, and autonomous underwater vehicles. This study provides a comprehensive survey of VI-SLAM. Following a short introduction, this study is the first to review VI-SLAM techniques from both filtering-based and optimization-based perspectives. It summarizes state-of-the-art studies from the last 10 years according to back-end approach, camera type, and sensor fusion type. Key VI-SLAM technologies are also introduced, such as feature extraction and tracking, core theory, and loop closure. The performance of representative VI-SLAM methods and well-known VI-SLAM datasets are also surveyed. Finally, this study contributes an experimental comparison of filtering-based and optimization-based methods. A comparative study of VI-SLAM methods helps clarify the differences in their operating principles: optimization-based methods achieve better localization accuracy and lower memory utilization, while filtering-based methods have advantages in terms of computing resources. Furthermore, this study proposes future development trends and research directions for VI-SLAM. It provides a detailed survey of VI-SLAM techniques and can serve as a brief guide for newcomers to the field of SLAM as well as for experienced researchers looking for possible directions for future work.
Purpose: This study aims to present a visual-inertial simultaneous localization and mapping (SLAM) method for accurate positioning and navigation of mobile robots when global positioning system (GPS) signals are blocked by buildings, trees, and other obstacles.
Design/methodology/approach: In this framework, a feature extraction method distributes features evenly across the image in texture-less scenes. The constant-brightness assumption is relaxed, and features are tracked by optical flow to enhance the stability of the system. The camera data and inertial measurement unit data are tightly coupled, and the pose is estimated by nonlinear optimization.
Findings: The method runs successfully on a mobile robot, steadily extracting and tracking features in low-texture environments. The end-to-end error is 1.375 m over a total trajectory length of 762 m. The authors achieve better relative pose error, scale error, and CPU load than ORB-SLAM2 on the EuRoC datasets.
Originality/value: The main contribution of this study is the theoretical derivation and experimental application of a new visual-inertial SLAM method with excellent accuracy and stability in weakly textured scenes.
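The abstract does not describe how features are spread over texture-less images. A common approach, and a plausible reading of the method, is grid bucketing: divide the image into cells and keep only the strongest detections in each cell, so features do not cluster on the few textured regions. The sketch below illustrates this idea; the function name and parameters are assumptions, not the paper's actual implementation.

```python
import numpy as np

def distribute_features(points, scores, img_shape, grid=(8, 8), per_cell=2):
    """Keep at most `per_cell` strongest features in each grid cell,
    so detections spread across the image instead of clustering on
    the few textured regions.

    points    : (N, 2) array of (x, y) pixel coordinates
    scores    : (N,) detector response for each point
    img_shape : (height, width) of the image
    """
    h, w = img_shape
    gy, gx = grid
    # Map each point to a grid cell index.
    cx = np.clip((points[:, 0] * gx / w).astype(int), 0, gx - 1)
    cy = np.clip((points[:, 1] * gy / h).astype(int), 0, gy - 1)
    cell = cy * gx + cx

    keep = []
    for c in np.unique(cell):
        idx = np.where(cell == c)[0]
        # Sort this cell's features by response, strongest first.
        idx = idx[np.argsort(-scores[idx])]
        keep.extend(idx[:per_cell])
    return np.sort(np.array(keep))
```

For example, with three detections clustered in one corner and one elsewhere, only the two strongest clustered points survive along with the isolated one, yielding a more uniform spatial distribution for pose estimation.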
When mobile robots work in unknown indoor environments, the surrounding scenes are dominated by low or repetitive texture. As a result, image features are easily lost during tracking, and poses are difficult to estimate as the robot moves back and forth in a narrow area. To mitigate these tracking problems, we propose a one-circle feature-matching method that performs circle matching in a time-after-space sequence (STCM), and an STCM-based visual-inertial simultaneous localization and mapping (STCM-SLAM) technique. This strategy tightly couples the stereo camera and the inertial measurement unit (IMU) to better estimate the poses of a mobile robot working indoors. Forward-backward optical flow is used to track image features. The absolute accuracy and relative accuracy of STCM increase by 37.869% and 129.167%, respectively, compared with correlation flow. In addition, we compare our proposed method with other state-of-the-art methods. In terms of relative pose error, the accuracy of STCM-SLAM is an order of magnitude greater than that of ORB-SLAM2 and two orders of magnitude greater than that of OKVIS. Our experiments show that STCM-SLAM has clear advantages over OKVIS in terms of scale error, running frequency, and CPU load. STCM-SLAM also performs best under real-time conditions. In the indoor experiments, STCM-SLAM accurately estimates the trajectory of the mobile robot. In terms of root mean square error, mean error, and standard deviation, the accuracy of STCM-SLAM is superior to that of both ORB-SLAM2 and OKVIS.
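The forward-backward optical-flow idea mentioned above can be sketched simply: features are tracked from frame k to frame k+1 and then tracked back, and a track is kept only if the round trip returns close to its starting point, which rejects drifting matches in low-texture regions. The function below is a minimal illustration of that consistency check, not the paper's implementation; the names and threshold are assumptions.

```python
import numpy as np

def forward_backward_check(pts0, pts0_back, max_err=1.0):
    """Forward-backward consistency check for optical-flow tracking.

    pts0      : (N, 2) feature locations in frame k
    pts0_back : (N, 2) the same features after being tracked forward
                into frame k+1 and then backward into frame k again
    Returns a boolean mask marking tracks whose round-trip error is
    below `max_err` pixels; the rest are discarded as unreliable.
    """
    err = np.linalg.norm(pts0 - pts0_back, axis=1)
    return err < max_err
```

In practice the forward and backward tracking steps themselves would be done with a pyramidal Lucas-Kanade tracker (e.g. OpenCV's `calcOpticalFlowPyrLK`), with this mask applied afterward to prune bad tracks.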