“…The authors extended it to utilize the IMU [14], although, currently, the extended system is not available as open source. e) REBiVO: Realtime Edge Based Inertial Visual Odometry [24] is specifically designed for Micro Aerial Vehicles (MAVs). In particular, it tracks the pose of a robot by fusing data from a monocular camera and an IMU.…”
Section: Related Work and Methods Evaluated (mentioning, confidence: 99%)
“…The overall performance of the tested packages is discussed next. LSD-SLAM [8], REBiVO [24], Dense Piecewise Planar Tracking and Mapping (DPPTAM) [34], and Monocular SVO were unable to produce any consistent results and, as such, were excluded from Table II. DSO [9] requires full photometric calibration, accounting for the exposure time, lens vignetting, and the non-linear gamma response function, for best performance. Even without photometric calibration, it worked well in areas with high intensity gradients and when subjected to large rotations.…”
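As a rough illustration of what such a photometric calibration undoes, the raw intensity can be modeled as I(x) = G(t · V(x) · B(x)), where G is the non-linear gamma response, t the exposure time, V(x) the vignette attenuation, and B(x) the scene irradiance. The following is a minimal sketch of inverting this model; all function and variable names are illustrative, not DSO's actual API:

```python
import numpy as np

def photometric_correction(raw, inv_gamma_lut, vignette, exposure_s):
    """Recover scene irradiance B from a raw image I under the model
    I(x) = G(t * V(x) * B(x))  =>  B(x) = G^{-1}(I(x)) / (t * V(x))."""
    linear = inv_gamma_lut[raw]              # undo the gamma response G via a LUT
    return linear / (exposure_s * vignette)  # undo exposure time and vignetting

# Toy example: identity gamma, uniform vignette of 0.5, 20 ms exposure.
raw = np.array([[10, 20], [30, 40]], dtype=np.uint8)
inv_gamma = np.arange(256, dtype=np.float64)  # G^{-1} as a 256-entry lookup table
vignette = np.full((2, 2), 0.5)
irradiance = photometric_correction(raw, inv_gamma, vignette, 0.02)
```

In practice the inverse response and vignette map are obtained from a one-off calibration sequence; without them, a direct method like DSO must treat intensity changes from exposure or lens fall-off as if they were scene changes.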
A plethora of state estimation techniques using visual data, and more recently added inertial data, have appeared in the last decade. Datasets typically used for evaluation include indoor and urban environments, where supporting videos have shown impressive performance. However, such techniques have not been fully evaluated in challenging conditions, such as the marine domain. In this paper, we compare ten recent open-source packages to provide insights on their performance and guidelines on addressing current challenges. Specifically, we selected direct methods and tightly-coupled optimization techniques that fuse camera and Inertial Measurement Unit (IMU) data. Experiments are conducted by testing all packages on datasets collected over the years with underwater robots in our laboratory. All the datasets are made available online.
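Comparisons such as the one described above rest on trajectory-error metrics. As a minimal sketch of one common metric, absolute trajectory error (ATE), the following computes an RMSE after a crude translation-only alignment; published evaluations typically also align rotation and, for monocular methods, scale (e.g. via Umeyama alignment). The names and toy data here are illustrative assumptions:

```python
import numpy as np

def ate_rmse(est, gt):
    """RMSE of position error between an estimated and a ground-truth
    trajectory, after removing the offset between the first poses.
    Simplified: full evaluations also align rotation (and scale)."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    aligned = est - est[0] + gt[0]        # crude translation-only alignment
    err = np.linalg.norm(aligned - gt, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

gt = [[0, 0, 0], [1, 0, 0], [2, 0, 0]]
est = [[5, 5, 0], [6, 5, 0.1], [7, 5, -0.1]]   # offset start, small drift
rmse = ate_rmse(est, gt)
```

A per-package table of such RMSE values over shared datasets is the usual basis for the kind of comparison reported in the paper.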
“…This factor can be computed if the depth of that point is known. Several approaches have been proposed in the literature to estimate this scale factor: in (Tarrio, 2017), the authors use a camera as the main sensor and an inertial measurement unit to determine the scale.…”
Objective: Estimate the location of a camera with respect to objects in the real world, using monocular vision. Methodology: In this paper we introduce a method to calculate the relative location of the camera with respect to a group of points located in three-dimensional space. The method requires only three fixed reference points, for which the real distance between each pair of points must be known. With this information it is possible to estimate the relative location of the camera as it moves, using successive images that contain the same points. Contribution: In recent years, the processing power of computers has grown considerably and, with it, the interest of the scientific community in visual odometry. For this purpose, in many cases, it is convenient to use a single camera (monocular system). Unfortunately, a monocular system allows the location of the camera with respect to an object in the real world to be estimated only up to a scale factor. The main contribution of this work is the estimation of the location of the camera in real-world coordinates with respect to a reference object.
“…This scale factor must be obtained by using some bootstrap method. In the literature, several approaches have been proposed to estimate this scale factor: in reference [18], the authors use a camera as the main sensor and an inertial measurement unit (IMU) to determine the scale. In [19], the depth is estimated by using a convolutional neural network; this estimate is refined, and the error reduced, by training the network with consecutive images.…”
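Given per-step distances from the monocular odometry and matching metric distances from a source such as an IMU, the scale can be recovered in closed form by least squares. A minimal sketch, with variable names and toy data that are assumptions rather than part of either cited method:

```python
import numpy as np

def estimate_scale(mono_dists, metric_dists):
    """Least-squares scale s minimizing ||s * d_mono - d_metric||^2,
    which has the closed form s = (d_mono . d_metric) / (d_mono . d_mono).
    In practice d_metric would come from, e.g., IMU preintegration."""
    d = np.asarray(mono_dists, dtype=float)
    m = np.asarray(metric_dists, dtype=float)
    return float(d @ m / (d @ d))

# Monocular odometry reports unit-less step lengths; a metric source
# (IMU, known baseline) reports the same steps in meters.
mono = [1.0, 2.0, 1.5]
metric = [0.5, 1.0, 0.75]   # exactly half the monocular scale here
s = estimate_scale(mono, metric)
```

With noisy real data the same formula gives the scale that best explains all steps jointly, rather than trusting any single step.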
Estimation of distance from objects in real-world scenes is an important topic in several applications such as navigation of autonomous robots, simultaneous localization and mapping (SLAM), and augmented reality (AR). Even though technologies exist for this purpose, in some cases they have disadvantages. For example, GPS systems are susceptible to interference, especially in places surrounded by buildings, under bridges, or indoors; alternatively, RGB-D sensors can be used, but they are expensive and their operational range is limited. Monocular vision is a low-cost, suitable alternative that can be used indoors and outdoors. However, monocular odometry is challenging because the object location can be known only up to a scale factor. Moreover, when objects are moving, it is necessary to estimate the location from consecutive images, accumulating error. This paper introduces a new method to compute the distance from a single image of the desired object, with known dimensions, captured with a calibrated monocular vision system. This method is less restrictive than other proposals in the state-of-the-art literature. For the detection of interest points, a Region-based Convolutional Neural Network combined with a corner detector was used. The proposed method was tested on a standard dataset and on images acquired by a low-cost, low-resolution webcam under non-controlled conditions. The system was tested and compared with a calibrated stereo vision system. Results showed similar performance for both systems, but the monocular system accomplished the task in less time.
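The core geometric idea behind recovering metric distance from a single image of an object with known dimensions follows from the pinhole camera model. A minimal sketch, assuming a roughly fronto-parallel object and a calibrated camera; the numbers are hypothetical, not from the paper:

```python
def distance_from_width(focal_px, real_width_m, pixel_width):
    """Pinhole model: an object of real width W at distance Z projects to
    w = f * W / Z pixels, so Z = f * W / w. Assumes the object plane is
    roughly fronto-parallel and the focal length f is known in pixels."""
    return focal_px * real_width_m / pixel_width

# Hypothetical numbers: 800 px focal length, 0.2 m wide object seen at 80 px.
z = distance_from_width(800.0, 0.2, 80.0)   # -> 2.0 meters
```

The paper's method generalizes beyond this fronto-parallel special case, but the same principle applies: known real-world dimensions resolve the scale ambiguity that monocular vision alone cannot.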