2020
DOI: 10.48550/arxiv.2012.08270
Preprint

FCFR-Net: Feature Fusion based Coarse-to-Fine Residual Learning for Depth Completion

Abstract: Depth completion aims to recover a dense depth map from a sparse depth map, with the corresponding color image as input. Recent approaches mainly formulate depth completion as a one-stage end-to-end learning task, which outputs dense depth maps directly. However, the feature extraction and supervision in one-stage frameworks are insufficient, limiting the performance of these approaches. To address this problem, we propose a novel end-to-end residual learning framework, which formulates the depth completion…
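The truncated abstract describes a coarse-to-fine residual formulation: a first stage produces a coarse dense depth map from the sparse depth and the color image, and a second stage refines it by predicting a residual. The following is a minimal PyTorch-style sketch of that general idea under these assumptions; the module structure, layer sizes, and names (`CoarseToFineDepth`, `coarse`, `refine`) are hypothetical and do not reproduce FCFR-Net's actual architecture.

```python
import torch
import torch.nn as nn

class CoarseToFineDepth(nn.Module):
    """Minimal sketch of coarse-to-fine residual depth completion.

    Stage 1 predicts a coarse dense depth map from the RGB image and the
    sparse depth map; stage 2 predicts a residual correction that is added
    to the coarse estimate. Layer sizes are placeholders, not the paper's.
    """

    def __init__(self):
        super().__init__()
        # Coarse stage: consumes RGB (3 channels) + sparse depth (1 channel).
        self.coarse = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )
        # Refinement stage: consumes RGB + sparse depth + coarse depth,
        # and predicts a residual over the coarse prediction.
        self.refine = nn.Sequential(
            nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, rgb, sparse_depth):
        coarse = self.coarse(torch.cat([rgb, sparse_depth], dim=1))
        residual = self.refine(torch.cat([rgb, sparse_depth, coarse], dim=1))
        refined = coarse + residual
        return coarse, refined  # both outputs can be supervised in training


# Usage sketch: both stages can receive a loss against ground-truth depth.
model = CoarseToFineDepth()
rgb = torch.randn(1, 3, 64, 64)
sparse = torch.randn(1, 1, 64, 64)
coarse, refined = model(rgb, sparse)
```

Supervising both the coarse and the refined outputs is one common way to strengthen the supervision that the abstract argues is insufficient in one-stage frameworks.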



Cited by 6 publications (27 citation statements)
References 40 publications (25 reference statements)
“…DSPN [47] proposes a deformable SPN that adaptively generates different receptive fields and affinity matrices at each pixel for effective propagation. Another pattern is to use two independent branches to extract color image and depth features respectively and then fuse them at multi-scale stages [44,48,42,25,28,16]. For example, PENet [16] employs feature addition to guide depth learning at different stages.…”
Section: Related Work
confidence: 99%
“…For example, PENet [16] employs feature addition to guide depth learning at different stages. FCFRNet [28] proposes a channel-shuffle technique to enhance RGB-D feature fusion. ACMNet [54] adopts graph propagation to capture the observed spatial contexts in the encoder stage.…”
Section: Related Work
confidence: 99%
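The channel-shuffle fusion is only named in this excerpt; as a rough illustration, a generic ShuffleNet-style channel shuffle applied to concatenated RGB and depth features interleaves the two modalities' channels before subsequent convolutions. The snippet below is a sketch under that assumption, not FCFR-Net's exact fusion module.

```python
import torch

def channel_shuffle(x, groups):
    """Interleave channels across groups (ShuffleNet-style shuffle)."""
    b, c, h, w = x.shape
    assert c % groups == 0
    # (B, G, C/G, H, W) -> swap the group and per-group dims -> flatten back.
    x = x.view(b, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(b, c, h, w)

# Hypothetical RGB-D fusion: concatenate the modality features, then shuffle
# so that later convolutions see interleaved RGB and depth channels.
rgb_feat = torch.randn(1, 32, 64, 64)
depth_feat = torch.randn(1, 32, 64, 64)
fused = channel_shuffle(torch.cat([rgb_feat, depth_feat], dim=1), groups=2)
```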