2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00495

VOLDOR: Visual Odometry From Log-Logistic Dense Optical Flow Residuals

Abstract: We propose a dense indirect visual odometry method taking as input externally estimated optical flow fields instead of hand-crafted feature correspondences. We define our problem as a probabilistic model and develop a generalized-EM formulation for the joint inference of camera motion, pixel depth, and motion-track confidence. Contrary to traditional methods assuming Gaussian-distributed observation errors, we supervise our inference framework under an (empirically validated) adaptive log-logistic distribution…
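The abstract names the residual model but not its parameterization. As a minimal sketch (notation assumed here, not taken from the paper), the log-logistic density over a nonnegative flow-residual magnitude r, with scale \alpha > 0 and shape \beta > 0, is

    f(r; \alpha, \beta) = \frac{(\beta/\alpha)\,(r/\alpha)^{\beta - 1}}{\bigl(1 + (r/\alpha)^{\beta}\bigr)^{2}}, \qquad r \ge 0.

Under this reading, "adaptive" would mean refitting the scale (and possibly the shape) to the observed residuals per frame or per pixel rather than fixing them globally; a heavier-tailed likelihood of this kind is what keeps outlier flow vectors from dominating the EM inference, in contrast to a Gaussian error model.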

Cited by 39 publications (17 citation statements)
References 100 publications (109 reference statements)
“…Deep learning has gained impressive progress in visual odometry [8], [9]. However, the 3D LiDAR odometry with deep learning is still a challenging problem.…”
Section: Deep LiDAR Odometry (mentioning)
confidence: 99%
“…It is found that learning-based methods can deal with sparse features and dynamic environments [1], [2], which are usually difficult for conventional methods. To our knowledge, most learning-based methods are on the 2D visual odometry [3], [4], [5], [6], [7], [8], [9] or utilize 2D convolution on the projected information of LiDAR [10], [11], [12], [13], [14]. 3D learning methods from the raw point cloud have been developed rapidly and have recently made remarkable progress on many problems [15], [16], [17], [18], [19], while the deep LiDAR odometry in 3D point clouds is underexplored.…”
Section: Introduction (mentioning)
confidence: 99%
“…As it can be seen, the best performing method used Colmap [33], a Structure-from-Motion (SfM) library, along with deep features [34], [35] for matching. The second best performing method was based on Voldor [36], a dense VSLAM method that computes residual flow for pose estimation, along with Colmap to scale the estimated translation. OV²SLAM ranks 3rd on both the monocular and stereo tracks of this challenge and is actually the 1st if we only consider online methods, i.e.…”
Section: TartanAir Dataset (mentioning)
confidence: 99%
“…Optical flow represents the 2D motion and correspondence relationship between two images at the pixel level, which is a fundamental problem in the field of computer vision. Optical flow has lots of applications in autonomous driving, such as visual odometry [1], target tracking [2], moving object detection, and mapping [3], [4]. In addition, the optical flow can be used to analyze the motion attributes of pedestrians and vehicles, so as to realize the dynamic understanding of scenes and decision-making.…”
Section: Introduction (mentioning)
confidence: 99%
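The last citation statement above describes dense optical flow, which is exactly the kind of externally estimated input the VOLDOR abstract refers to. The sketch below is not the estimator used in the paper; it relies on OpenCV's classical Farneback method purely as a stand-in to show what a dense per-pixel flow field looks like, and the frame file names are hypothetical placeholders.

# Minimal sketch: compute a dense per-pixel optical flow field from two frames.
# The Farneback estimator stands in for whatever external flow estimator feeds
# a method like VOLDOR; the file names are hypothetical placeholders.
import cv2
import numpy as np

prev_img = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
next_img = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# flow has shape (H, W, 2): per-pixel displacement (dx, dy) from prev to next.
flow = cv2.calcOpticalFlowFarneback(
    prev_img, next_img, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# Per-pixel flow magnitude; residuals of such a field (after warping by an
# estimated camera motion) are the quantities a log-logistic model describes.
magnitude = np.linalg.norm(flow, axis=2)
print(flow.shape, magnitude.mean())

In the paper's setting the flow comes from an external estimator rather than Farneback, but any dense (H, W, 2) field plays the same role as input to the downstream joint inference of camera motion, depth, and track confidence.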