Structure guided fusion for depth map inpainting (2013)
DOI: 10.1016/j.patrec.2012.06.003

Cited by 95 publications (69 citation statements). References 13 publications.
“…By contrast, we offer improved depth recovery from a single sensing unit (a consumer depth camera) without the need for individual per-scene optimization, making it highly suitable for mobile sensing applications and dynamic scenes. This extends both the work of [21], which requires additional sensors to achieve similar results, and the disparity in-painting approach of [17], which does not readily recover transparent and specular object disparity.…”
Section: Discussion (mentioning)
Confidence: 62%
“…Other work has considered depth recovery improvement in depth cameras as a classical image in-painting problem [17]. Qi et al. [17] successfully tackle the in-painting of depth shadows due to object occlusion [1] without considering the challenges of transparent and specular surfaces.…”
Section: Introduction (mentioning)
Confidence: 99%
“…We compared our method to various methods that focus on depth completion for RGB-D images, including the Joint bilateral filter (JBF) [14,21], Nonlocal means filter (NLM) [12], Structure-guided fusion (SGF) [20], Spatiotemporal hole filling (SHF) [5], and Guided inpainting and filtering (GIF) [17]. For JBF and SHF, we modified their methods to handle a single input image.…”
Section: Methods (mentioning)
Confidence: 99%
“…A variety of joint bilateral filter-based methods have been developed based on this observation to use color images for hole filling [5,20,21]. The median filter has also been extended for depth image completion guided by the color image [18].…”
Section: Related Work (mentioning)
Confidence: 99%
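As background for the joint bilateral (color-guided) hole filling referenced in the excerpts above: each missing depth pixel is replaced by a weighted average of nearby valid depth samples, where the weights combine spatial closeness with similarity in the registered color image, so filled depth tends to respect color edges rather than bleed across object boundaries. Below is a minimal sketch in Python/NumPy; the function name, parameters, and single-channel grayscale guidance are illustrative assumptions, not the implementation of the cited paper or of [5,20,21].

```python
import numpy as np

def joint_bilateral_fill(depth, color, hole_mask, radius=5,
                         sigma_spatial=3.0, sigma_color=10.0):
    """Fill holes in a depth map with a joint (cross) bilateral filter.

    depth:     HxW float array; values at hole pixels are ignored.
    color:     HxW float array, grayscale guidance image aligned with depth.
    hole_mask: HxW bool array, True where depth is missing.
    """
    h, w = depth.shape
    filled = depth.copy()
    ys, xs = np.nonzero(hole_mask)
    for y, x in zip(ys, xs):
        # Local window around the hole pixel, clipped to the image bounds.
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch_d = depth[y0:y1, x0:x1]
        patch_c = color[y0:y1, x0:x1]
        valid = ~hole_mask[y0:y1, x0:x1]  # only measured depth contributes

        # Spatial weight: closer pixels count more.
        yy, xx = np.mgrid[y0:y1, x0:x1]
        w_spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2)
                           / (2.0 * sigma_spatial ** 2))
        # Range weight from the color guidance: similar color counts more.
        w_color = np.exp(-((patch_c - color[y, x]) ** 2)
                         / (2.0 * sigma_color ** 2))

        weights = w_spatial * w_color * valid
        if weights.sum() > 1e-8:
            filled[y, x] = (weights * patch_d).sum() / weights.sum()
    return filled

# Toy usage: a flat depth plane with a square hole, guided by a step-edge image.
depth = np.full((40, 40), 2.0, dtype=np.float32)
color = np.zeros((40, 40), dtype=np.float32)
color[:, 20:] = 255.0
hole = np.zeros((40, 40), dtype=bool)
hole[15:25, 15:25] = True
depth[hole] = 0.0
restored = joint_bilateral_fill(depth, color, hole)
```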