2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01466

Predicting Sharp and Accurate Occlusion Boundaries in Monocular Depth Estimation Using Displacement Fields

Abstract: Current methods for depth map prediction from monocular images tend to predict smooth, poorly localized contours for the occlusion boundaries in the input image. This is unfortunate as occlusion boundaries are important cues to recognize objects, and as we show, may lead to a way to discover new objects from scene reconstruction. To improve predicted depth maps, recent methods rely on various forms of filtering or predict an additive residual depth map to refine a first estimate. We instead learn to predict, g…
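The resampling idea behind the paper can be sketched in a few lines of PyTorch: a predicted 2-channel displacement field tells each pixel where to copy its depth value from, so sharp values from either side of an occlusion boundary replace the blurred values predicted on the boundary itself. This is a minimal illustration, not the authors' code; the function name and tensor layout are assumptions.

```python
import torch
import torch.nn.functional as F

def resample_depth(depth, disp):
    """Resample a depth map along a predicted displacement field.

    depth: (B, 1, H, W) initial depth estimate
    disp:  (B, 2, H, W) per-pixel displacement in pixels (dx, dy)
    Output pixel (x, y) takes the depth value found at (x + dx, y + dy).
    """
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=depth.dtype),
        torch.arange(w, dtype=depth.dtype),
        indexing="ij",
    )
    # Source coordinates = identity grid + displacement, normalized to [-1, 1]
    x_src = (xs + disp[:, 0]) / (w - 1) * 2 - 1
    y_src = (ys + disp[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack((x_src, y_src), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(depth, grid, mode="bilinear", align_corners=True)
```

With a zero displacement field this reduces to the identity, so the network only has to predict non-zero displacements near occlusion boundaries.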


Cited by 59 publications (52 citation statements)
References 57 publications
“…We keep the original training and testing data of BSDS. NYUv2-OR is tested on a subset of NYUv2 [33] with occlusion boundaries from [39].¹ As DOOBNet and OFNet are coded in Caffe, in order to have a unified platform for experimenting with them on new datasets, we carefully re-implemented them in PyTorch (following the Caffe code). We could not reproduce exactly the same quantitative values reported in the original papers (the ODS and OIS metrics are slightly lower while AP is slightly better), probably due to intrinsic differences between the Caffe and PyTorch frameworks; however, the difference is very small (less than 0.03, cf.…”
Section: Methods
confidence: 99%
“…Depth map refinement. To assess our refinement approach, we compare with [39], which is the current state-of-the-art for depth refinement on boundaries.…”
Section: Methods
confidence: 99%
“…Since accurate depth values are difficult to obtain for large-scale training, unsupervised frameworks were developed that use spatial (between left-right pairs) or temporal (forward-backward pairs) photometric warp error [114][115][116], or both [117]. The latest research [118] sought to obtain sharp boundaries in depth estimation by resampling pixels around occlusion boundaries. One obstacle in training is the patient-specific nature of tissue texture, first encountered when depth reconstruction was applied to colonoscopy [119].…”
Section: Three-dimensional Surface Reconstruction
confidence: 99%
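The photometric warp error mentioned in the excerpt above can be sketched for the rectified-stereo (left-right) case: warp the right image toward the left view using the predicted disparity, then penalize the photometric difference, so no ground-truth depth is needed. A minimal PyTorch sketch, assuming rectified images and a disparity map predicted for the left view (names are illustrative):

```python
import torch
import torch.nn.functional as F

def photometric_warp_error(left, right, disparity):
    """L1 photometric error between a rectified stereo pair.

    left, right: (B, C, H, W) images
    disparity:   (B, 1, H, W) disparity predicted for the left view
    Each left pixel (x, y) is compared against right pixel (x - d, y).
    """
    b, _, h, w = left.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=left.dtype),
        torch.arange(w, dtype=left.dtype),
        indexing="ij",
    )
    # Horizontal shift by -disparity; coordinates normalized to [-1, 1]
    x_src = (xs - disparity[:, 0]) / (w - 1) * 2 - 1
    y_src = (ys / (h - 1) * 2 - 1).expand(b, h, w)
    grid = torch.stack((x_src, y_src), dim=-1)  # (B, H, W, 2)
    warped = F.grid_sample(right, grid, mode="bilinear", align_corners=True)
    return (left - warped).abs().mean()
```

The temporal (forward-backward) variant works the same way, except the warp follows a predicted optical flow between consecutive frames instead of a horizontal disparity.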