This article presents a new dataset of ultra-wide field of view images with accurate ground truth, called PanoraMIS. The dataset covers a large spectrum of panoramic cameras (catadioptric, twin-fisheye), robotic platforms (wheeled, aerial, and industrial robots), and testing environments (indoors and outdoors), and it is well suited to rigorously validating novel image-based robot-motion estimation algorithms, including visual odometry, visual SLAM, and deep learning-based methods. PanoraMIS and the accompanying documentation are publicly available on the Internet for the entire research community.
Visual odometry is the process of estimating the motion of a mobile robot through a camera attached to it, by matching point features between pairs of consecutive image frames. For mobile robots, a reliable method for comparing images can be a key component of localization and motion-estimation tasks. In this paper, we study and compare the SIFT and SURF detectors/descriptors in terms of motion-estimation accuracy and runtime efficiency in the context of monocular visual odometry for mobile robots. We evaluate the performance of these detectors/descriptors with respect to repeatability, recall, precision, and computational cost. To estimate the relative pose of the camera from outlier-contaminated feature correspondences, the essential matrix and the inlier set are estimated using RANSAC. Experimental results demonstrate that SURF outperforms SIFT in both accuracy and speed.
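The abstract's pose-estimation step (essential matrix plus inlier set via RANSAC) can be sketched in a few dozen lines. The following is a minimal illustration, not the paper's implementation: it assumes noiseless synthetic correspondences in normalized camera coordinates, fits the essential matrix with the classical eight-point algorithm inside a RANSAC loop, and scores inliers with the algebraic epipolar residual; all function names and thresholds are invented for this sketch.

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate E from >= 8 correspondences (n, 3) in homogeneous
    normalized camera coordinates, so that x2^T E x1 = 0."""
    # Each correspondence gives one linear constraint on the 9 entries of E.
    A = np.einsum('ni,nj->nij', x2, x1).reshape(-1, 9)
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)          # null vector of A, row-major E
    # Project onto the essential-matrix manifold: singular values (s, s, 0).
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt

def ransac_essential(x1, x2, iters=200, thresh=1e-8, seed=None):
    """RANSAC over minimal 8-point samples; returns (E, inlier mask)."""
    rng = np.random.default_rng(seed)
    n = len(x1)
    best_E, best_inl = None, np.zeros(n, dtype=bool)
    for _ in range(iters):
        idx = rng.choice(n, 8, replace=False)
        E = eight_point(x1[idx], x2[idx])
        # Algebraic epipolar residual |x2^T E x1| per correspondence.
        r = np.abs(np.einsum('ni,ij,nj->n', x2, E, x1))
        inl = r < thresh
        if inl.sum() > best_inl.sum():
            best_E, best_inl = E, inl
    return best_E, best_inl
```

In practice one would use a geometric error (e.g. the Sampson distance) and a noise-scaled threshold instead of the tiny algebraic threshold used here, which only suits noiseless data; real pipelines typically call mature implementations such as OpenCV's `cv2.findEssentialMat` with the RANSAC flag.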