2020
DOI: 10.1007/978-3-030-66823-5_9

Densely Connecting Depth Maps for Monocular Depth Estimation

Cited by 4 publications (3 citation statements) | References 33 publications
“…In Table 2, we present a comparison of results on the KITTI dataset. Compared to recent competing methods such as TransDepth, Zhang et al. [64], and BTS [32], our approach is superior by a significant margin. Especially notable are the results on the Sq Rel metric, which ours improves by approximately 15% over TransDepth in both the 0-80 m and 0-50 m capture ranges.…”
Section: Comparison to Previous State-of-the-Art Approaches
confidence: 89%
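For context, Sq Rel (squared relative error) is the standard KITTI depth-evaluation metric referenced in the quote above. A minimal sketch of how it is typically computed; the NumPy implementation and parameter names are illustrative, not taken from the cited paper:

```python
import numpy as np

def sq_rel(pred: np.ndarray, gt: np.ndarray,
           min_depth: float = 1e-3, max_depth: float = 80.0) -> float:
    """Squared relative error: mean of (pred - gt)^2 / gt over valid pixels.

    `max_depth` sets the capture range (80 m or 50 m in the quote above).
    """
    # Evaluate only pixels with valid ground truth inside the depth range.
    mask = (gt > min_depth) & (gt < max_depth)
    pred, gt = pred[mask], gt[mask]
    return float(np.mean((pred - gt) ** 2 / gt))
```

A roughly 15% reduction in this metric, as claimed in the quote, means the mean squared relative error drops to about 0.85 of TransDepth's value over the same evaluation range.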
“…Saxena et al. [48] introduced one of the first learning-based studies in this area. Thereafter, significant advances have been made, driven by the explosion of deep learning [12,14,30,32,35,37,57,64]. … BTS [32] and TransDepth [57], which are highlighted by white and green boxes.…”
Section: Introduction
confidence: 99%
“…And many works use atrous spatial pyramid pooling (ASPP) (Chen et al. 2018) or a similar module to model contextual information. Some works (Fu et al. 2018; Fang et al. 2020; Zhang et al. 2020; Lee et al. 2021) … Different from ASPP-like modules, we combine the dilated convolutions into an equivalent large-kernel convolution and apply it to replace the regular convolution layers in ResNet blocks. This operator enlarges the receptive field and produces a variety of areas of interest within the receptive field.…”
Section: Dilated Convolution
confidence: 99%
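The quoted passage describes replacing the regular convolutions in ResNet blocks with dilated convolutions that together act like one large-kernel convolution. A minimal PyTorch sketch of that general idea; the block structure, channel handling, and dilation value are assumptions for illustration, not the cited paper's exact design:

```python
import torch
import torch.nn as nn

class DilatedBasicBlock(nn.Module):
    """Illustrative ResNet basic block whose 3x3 convolutions are dilated.

    A 3x3 convolution with dilation d covers a (2d + 1) x (2d + 1) extent,
    so stacking dilated 3x3 convolutions approximates a large-kernel
    convolution at ordinary 3x3 cost.
    """

    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        # padding = dilation keeps the spatial size unchanged for 3x3 kernels,
        # so the residual addition below stays shape-compatible.
        self.conv1 = nn.Conv2d(channels, channels, 3,
                               padding=dilation, dilation=dilation, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3,
                               padding=dilation, dilation=dilation, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # residual connection
```

With dilation 2, each 3x3 convolution in the block spans a 5x5 extent, and the two stacked convolutions together cover a 9x9 region, enlarging the receptive field as the quote describes.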