2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2016.7759342
Real-time probabilistic fusion of sparse 3D LIDAR and dense stereo

Abstract: Real-time 3D perception is critical for localisation, mapping, path planning and obstacle avoidance for mobile robots and autonomous vehicles. For outdoor operation in real-world environments, 3D perception is often provided by sparse 3D LIDAR scanners, which provide accurate but low-density depth maps, and dense stereo approaches, which require significant computational resources for accurate results. Here, taking advantage of the complementary error characteristics of LIDAR range sensing and dense stereo, we …
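The complementary error characteristics mentioned in the abstract can be illustrated with a minimal per-pixel fusion sketch: LIDAR range error is roughly constant with distance, while stereo depth error grows quadratically with depth for a fixed disparity noise. Treating each measurement as a Gaussian and taking the inverse-variance-weighted mean is one simple way to combine them. This is an illustrative sketch, not the paper's actual formulation; the function name, variance models, and parameter values (`lidar_sigma`, `disp_sigma`, `focal`, `baseline`) are all hypothetical.

```python
import numpy as np

def fuse_depths(lidar_depth, stereo_depth,
                lidar_sigma=0.05, disp_sigma=0.5,
                focal=700.0, baseline=0.54):
    """Fuse a sparse LIDAR depth map with a dense stereo depth map.

    lidar_depth: array in metres, NaN where no LIDAR return exists.
    stereo_depth: dense array of stereo-derived depths in metres.
    Assumed noise models (hypothetical): constant LIDAR std, and stereo
    depth std ~ d^2 * disp_sigma / (focal * baseline), i.e. constant
    disparity noise propagated through triangulation.
    """
    stereo_var = (stereo_depth ** 2 * disp_sigma / (focal * baseline)) ** 2
    lidar_var = np.full_like(stereo_depth, lidar_sigma ** 2)

    have_lidar = ~np.isnan(lidar_depth)
    fused = stereo_depth.copy()

    # Inverse-variance-weighted mean (product of two Gaussians) where
    # both sensors have a measurement; stereo passes through elsewhere.
    w_l = 1.0 / lidar_var[have_lidar]
    w_s = 1.0 / stereo_var[have_lidar]
    fused[have_lidar] = (w_l * lidar_depth[have_lidar] +
                         w_s * stereo_depth[have_lidar]) / (w_l + w_s)
    return fused
```

Because the LIDAR variance is much smaller at typical ranges, the fused estimate stays close to the LIDAR value wherever a return exists, while the dense stereo map fills in everywhere else.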

Cited by 84 publications (68 citation statements)
References 21 publications
“…Our method successfully recovers accurate disparities on tiny and moving objects while the other methods are misled by drifted and noisy Lidar points. [Fu]sion studies [2,20,26] strongly depend on the availability of large-scale ground truth depth maps, and thus their performance is fundamentally limited by their generalization ability to real-world applications.…”
Section: Input Lidar
confidence: 99%
“…Our plane fitting loss then can be defined as […] [Figure 5, panels (e) S2D [19], (f) SINet [30], (g) Probabilistic fusion [20], (h) CNN fusion [26]: qualitative results of the methods from Tab.…]”
Section: Plane Fitting Loss
confidence: 99%
“…As an alternative to deep neural networks, graphical models have been successfully applied to point clouds for various tasks. Previous works by Maddern & Newman [26] and Schoenberg et al [27] use graphical models to fuse lidar scans and stereo camera depth maps to produce accurate dense depth maps suitable for use on autonomous vehicles. Wang et al [28] propose a semantic segmentation method for image-aligned 3D point clouds by retrieving referenced labeled images of similar appearances and then propagating their labels to the 3D points using a graphical model.…”
Section: B. Graphical Models and 2D-3D Fusion
confidence: 99%
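The citation above describes fusing LIDAR and stereo depth with graphical models. A minimal way to see the idea is a 1D Gaussian Markov random field over a scanline: unary (data) terms pull each pixel toward its measured depth, and pairwise terms encourage neighbouring pixels to agree, so sparse accurate measurements propagate into unmeasured regions. This is a generic sketch of the graphical-model idea, not the formulation of the cited works; the function, objective weights, and solver choice are illustrative assumptions.

```python
import numpy as np

def mrf_fuse_scanline(depth, weight, smooth=4.0, iters=200):
    """Minimise  sum_i weight[i]*(x_i - depth[i])^2
               + smooth * sum_i (x_i - x_{i+1})^2
    over a 1D scanline by Gauss-Seidel coordinate descent.

    depth:  measured depths (metres); ignored where weight is 0.
    weight: data-term confidence per pixel (0 where no measurement).
    """
    # Initialise unmeasured pixels with the mean of the input array.
    x = np.where(weight > 0, depth, depth.mean()).astype(float)
    n = len(x)
    for _ in range(iters):
        for i in range(n):
            # Each coordinate update is the exact minimiser of the
            # quadratic objective with the neighbours held fixed.
            num = weight[i] * depth[i]
            den = weight[i]
            if i > 0:
                num += smooth * x[i - 1]; den += smooth
            if i < n - 1:
                num += smooth * x[i + 1]; den += smooth
            x[i] = num / den
    return x
```

With measurements only at a few pixels, the smoothness terms interpolate the remaining pixels toward the measured values, which is the basic mechanism the quoted fusion works exploit (in 2D, with more sophisticated potentials).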
“…In recent years, different kinds of depth fusion methods have emerged in different sub-tasks, such as stereo-ToF fusion ( [6,7,8]), stereo-stereo fusion ( [9]), Lidar-stereo fusion ( [10,11]) and general depth fusion ( [12]). Additionally, deep-learning based methods perform much better than the rest.…”
Section: Introduction
confidence: 99%