2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00107
Guided Stereo Matching

Figure 1. Guided stereo matching. (a) Challenging reference image from KITTI 2015 [20], with disparity maps estimated by (b) iResNet [14] trained on synthetic data [19] (93.21% error rate) or (c) guided by sparse depth measurements at 5% density (4.36% error rate). Error rate (> 3) superimposed on each map.

Abstract: Stereo is a prominent technique to infer dense depth maps from images, and deep learning further pushed forward the state-of-the-art, making end-to-end architectures unrivaled when enough data is available for tra…
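The error rates superimposed in Figure 1 follow the standard KITTI "> 3" metric: the percentage of valid ground-truth pixels whose estimated disparity deviates from the ground truth by more than 3 pixels. A minimal sketch of that computation (the function name and the zero-as-invalid convention are assumptions, not code from the paper):

```python
import numpy as np

def bad3_error_rate(pred_disp: np.ndarray, gt_disp: np.ndarray,
                    invalid_value: float = 0.0) -> float:
    """Percentage of valid ground-truth pixels with |pred - gt| > 3 px."""
    valid = gt_disp != invalid_value   # KITTI stores missing ground truth as 0
    errors = np.abs(pred_disp[valid] - gt_disp[valid])
    return 100.0 * float(np.mean(errors > 3.0))
```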

Cited by 76 publications (77 citation statements)
References 41 publications
“…Batsos et al. [2] soften this effect by combining traditional matching functions and confidence measures [15,31] within a random forest framework, proving better generalization compared to a CNN-based method [44]. Finally, guiding end-to-end CNNs with external depth measurements (e.g., LiDAR) allows for reducing the domain-shift effect, as reported in [30]. Image reconstruction for unsupervised learning.…”
Section: Related Work
confidence: 99%
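The guidance cited as [30] is the paper reported on here (Poggi et al. 2019): sparse depth measurements are converted to disparities and used to peak the cost volume around the hinted value through a Gaussian modulation. A hedged sketch of that idea, assuming a [B, D, H, W] volume layout and hyperparameters k (amplitude) and c (width); the exact formulation in the paper may differ:

```python
import torch

def guide_cost_volume(cost: torch.Tensor, hints: torch.Tensor,
                      valid: torch.Tensor, k: float = 10.0,
                      c: float = 1.0) -> torch.Tensor:
    """
    cost:  [B, D, H, W] cost volume over D disparity hypotheses.
    hints: [B, H, W]    sparse disparity hints (meaningful only where valid).
    valid: [B, H, W]    boolean mask, True where a LiDAR hint exists.
    """
    d = torch.arange(cost.shape[1], device=cost.device,
                     dtype=cost.dtype).view(1, -1, 1, 1)
    v = valid.unsqueeze(1).to(cost.dtype)
    g = hints.unsqueeze(1).to(cost.dtype)
    # Guided pixels: Gaussian peaked on the hinted disparity, amplified by k.
    # Unguided pixels (v == 0): modulation reduces to 1, cost left untouched.
    modulation = (1.0 - v) + v * k * torch.exp(-(d - g) ** 2 / (2.0 * c ** 2))
    return cost * modulation
```

Applied before the (soft-)argmax over the disparity dimension, this pushes the network toward the measured disparity at guided pixels while leaving the rest of the image unchanged.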
“…In contrast to our work, (Poggi et al. 2019) uses disparities from sparse LiDAR points to filter the cost volume during both training and testing. Both AcfNet and the method of (Poggi et al. 2019) are trained on Scene Flow from scratch and directly evaluated on the training sets of KITTI 2012 and 2015, since (Poggi et al. 2019) requires sparse LiDAR points as input. Table 3 reports the comparison results, where AcfNet outperforms (Poggi et al. 2019) on all performance metrics by large margins, even without using LiDAR points as input.…”
Section: Cost Volume Filtering Comparisons
confidence: 96%
“…To further validate the superiority of the proposed cost volume filtering, experiments are designed to compare with the concurrent work (Poggi et al. 2019). In contrast to our work, (Poggi et al. 2019) uses disparities from sparse LiDAR points to filter the cost volume during both training and testing. Both AcfNet and the method of (Poggi et al. 2019) are trained on Scene Flow from scratch and directly evaluated on the training sets of KITTI 2012 and 2015, since (Poggi et al. 2019) requires sparse LiDAR points as input.…”
Section: Cost Volume Filtering Comparisons
confidence: 99%
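Because the guided method requires sparse points as input, the evaluation described above has to supply hints even on the KITTI training sets. A sketch of one plausible way to sample them at a fixed density (e.g., the 5% of Figure 1); the sampling scheme and names are assumptions, not code from either paper:

```python
import numpy as np

def sample_sparse_hints(gt_disp: np.ndarray, density: float = 0.05,
                        invalid_value: float = 0.0, seed: int = 0):
    """Randomly keep a `density` fraction of valid disparities as guidance."""
    rng = np.random.default_rng(seed)
    valid = gt_disp != invalid_value       # 0 marks pixels without ground truth
    keep = rng.random(gt_disp.shape) < density
    mask = valid & keep
    hints = np.where(mask, gt_disp, invalid_value)
    return hints, mask
```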
“…synthetic to real) has been addressed in either an offline [49] or online [50] fashion, or greatly reduced by guiding them with external depth measurements (e.g., LiDAR) [42]. Monocular depth estimation.…”
Section: Related Work
confidence: 99%