2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw53098.2021.00248

DeepDNet: Deep Dense Network for Depth Completion Task

Cited by 7 publications (4 citation statements); references 12 publications.
“…Gradient information has been used in previous depth completion works, such as [45][46][47][48][49][50]. Commonly, there are two ways to introduce gradient information into deep networks: 1) incorporating gradients into the model to guide depth completion [45], and 2) introducing gradients into the loss for constraints [45][46][47][48][49][50]. Specifically, Hwang et al [45] designed a teacher network to learn gradient depth images, which were then used to train their geometrical edge CNN through a Knowledge-Distillation loss function.…”
Section: Gradient-related Methods
confidence: 99%
See 1 more Smart Citation
“…Our method takes an RGB image as input and predicts dense surface normals and occlusion boundaries to solve the problem of missing pixels in the original observation. Hegde et al. [23] utilized an exact sparse depth as input to the RGB image to generate a dense depth map. The method focuses on a quadtree decomposition modeling approach.…”
Section: Related Work
confidence: 99%
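The quadtree decomposition idea mentioned in the last statement can be illustrated on a sparse depth map's validity mask: recursively split the map into quadrants until each leaf is either dense enough or too small to split further. This sketch is only an assumption about the general technique; the function name, stopping rule, and thresholds are not taken from Hegde et al.

```python
import numpy as np

def quadtree_leaves(valid: np.ndarray, y=0, x=0, h=None, w=None,
                    min_size=2, min_fill=0.5):
    """Return leaf regions (y, x, h, w) of a quadtree over a validity mask.

    `valid` marks pixels that carry a sparse depth sample. A block becomes
    a leaf when its fraction of valid pixels reaches `min_fill`, or when it
    can no longer be split (side length <= `min_size`).
    """
    if h is None:
        h, w = valid.shape
    block = valid[y:y + h, x:x + w]
    fill = block.mean() if block.size else 0.0
    # Stop splitting when the block is dense enough or too small to split.
    if fill >= min_fill or h <= min_size or w <= min_size:
        return [(y, x, h, w)]
    hh, hw = h // 2, w // 2
    leaves = []
    for dy, dx, sh, sw in [(0, 0, hh, hw), (0, hw, hh, w - hw),
                           (hh, 0, h - hh, hw), (hh, hw, h - hh, w - hw)]:
        leaves += quadtree_leaves(valid, y + dy, x + dx, sh, sw,
                                  min_size, min_fill)
    return leaves

# A fully valid mask needs no splitting: one leaf covers the whole map.
mask = np.ones((8, 8), dtype=bool)
print(quadtree_leaves(mask))  # [(0, 0, 8, 8)]
```

The resulting leaves partition the map into regions of roughly uniform sample density, so a model can spend capacity where depth samples are sparse.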