2018
DOI: 10.1007/978-3-030-01231-1_28

OmniDepth: Dense Depth Estimation for Indoors Spherical Panoramas

Abstract: Recent work on depth estimation up to now has only focused on projective images, ignoring 360° content which is now increasingly and more easily produced. We show that monocular depth estimation models trained on traditional images produce sub-optimal results on omnidirectional images, showcasing the need for training directly on 360° datasets, which, however, are hard to acquire. In this work, we circumvent the challenges associated with acquiring high quality 360° datasets with ground truth depth annotation…
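
As context for the abstract above, here is a minimal sketch (not from the paper) of how dense depth predicted over an equirectangular panorama relates to 3D geometry: each pixel is mapped to a viewing ray on the sphere and scaled by its radial depth. The function name and the convention that depth stores radial distance are assumptions.

```python
# Hypothetical helper: back-project an equirectangular depth map to 3D points.
# Assumes depth holds per-pixel radial distances and the usual spherical layout
# (longitude across the image width, latitude across the height).
import numpy as np

def equirect_depth_to_points(depth: np.ndarray) -> np.ndarray:
    """depth: (H, W) radial distances -> (H, W, 3) points in camera coordinates."""
    h, w = depth.shape
    u = (np.arange(w) + 0.5) / w                    # horizontal pixel centers in [0, 1)
    v = (np.arange(h) + 0.5) / h                    # vertical pixel centers in [0, 1)
    lon = (u - 0.5) * 2.0 * np.pi                   # longitude in [-pi, pi)
    lat = (0.5 - v) * np.pi                         # latitude from +pi/2 (top) to -pi/2 (bottom)
    lon, lat = np.meshgrid(lon, lat)
    # Unit viewing ray per pixel, scaled by the radial depth.
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    return dirs * depth[..., None]
```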

Cited by 159 publications (247 citation statements)
References 62 publications

“…Dataset: Given the unavailability of stereo 360° datasets, we take a similar approach to [49] and render panoramas from displaced viewpoints in both vertical and horizontal placements as shown in Fig. 3.…”
Section: Results (mentioning)
confidence: 99%
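
The statement above describes rendering panoramas from vertically and horizontally displaced viewpoints. A minimal sketch of that idea, assuming the back-projected points of a reference panorama are already available: express them in a camera translated by the chosen baseline and splat them into a new equirectangular depth map. The function name, baseline value, and nearest-surface splatting policy are assumptions, not the cited pipeline.

```python
# Hypothetical sketch: forward-project reference-view 3D points into an
# equirectangular depth map at a displaced viewpoint (vertical or horizontal).
import numpy as np

def reproject_to_displaced_view(points: np.ndarray, baseline: np.ndarray,
                                out_hw: tuple) -> np.ndarray:
    """points: (H, W, 3) in the reference frame; baseline: (3,) target camera
    translation; returns a sparse (H, W) depth map for the displaced view."""
    h, w = out_hw
    p = points.reshape(-1, 3) - baseline                  # points in the target frame
    r = np.linalg.norm(p, axis=-1)
    lon = np.arctan2(p[:, 0], p[:, 2])
    lat = np.arcsin(np.clip(p[:, 1] / np.maximum(r, 1e-8), -1.0, 1.0))
    u = np.clip(((lon / (2.0 * np.pi) + 0.5) * w).astype(int), 0, w - 1)
    v = np.clip(((0.5 - lat / np.pi) * h).astype(int), 0, h - 1)
    depth = np.full((h, w), np.inf)
    np.minimum.at(depth, (v, u), r)                       # z-buffer: keep nearest surface
    depth[np.isinf(depth)] = 0.0                          # unseen pixels stay empty
    return depth

# e.g. a vertically displaced panorama with an assumed 0.26 m baseline:
# depth_up = reproject_to_displaced_view(points, np.array([0.0, 0.26, 0.0]), points.shape[:2])
```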
“…The most apparent issue is the unavailability of data, and thus, two concurrent works addressed 360° depth estimation by generating data via rendering existing 3D datasets. Two baseline models were presented in [49] after creating a large dataset of color and depth pairs using a mix of synthetic and real scenes. Further, [41] utilized the more recent advances in depth estimation from videos and rendered videos from a purely synthetic 3D dataset.…”
Section: Related Work (mentioning)
confidence: 99%
“…To the best of our knowledge, there is no other similar work on monocular 360° surface normal estimation. In an effort to show the importance of training directly on the omnidirectional domain, we provide comparisons of our model with learning-based methods trained on traditional perspective … Table 4: Quantitative results of our model trained on our dataset's train-split and evaluated on our test-split, compared to the two neural network architectures for omnidirectional monocular depth estimation presented in [64] and the method of [61] re-trained on our dataset's train-set. We present the mean, median and root mean square angular error across our dataset's test-set.…”
Section: Comparison Against Other Methods (mentioning)
confidence: 99%
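
The quote above evaluates surface normals with mean, median, and root mean square angular error. A minimal sketch of those metrics, assuming (H, W, 3) normal maps and no validity mask; the exact evaluation protocol is an assumption:

```python
# Hypothetical sketch of angular-error metrics between predicted and ground-truth normals.
import numpy as np

def normal_angular_errors(pred: np.ndarray, gt: np.ndarray) -> dict:
    """pred, gt: (H, W, 3) normal maps; returns mean/median/RMS error in degrees."""
    pred = pred / np.maximum(np.linalg.norm(pred, axis=-1, keepdims=True), 1e-8)
    gt = gt / np.maximum(np.linalg.norm(gt, axis=-1, keepdims=True), 1e-8)
    cos = np.clip(np.sum(pred * gt, axis=-1), -1.0, 1.0)    # per-pixel cosine similarity
    ang = np.degrees(np.arccos(cos)).ravel()                 # angular error in degrees
    return {"mean": float(ang.mean()),
            "median": float(np.median(ang)),
            "rmse": float(np.sqrt(np.mean(ang ** 2)))}
```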
“…They train their networks on cubemap projections of 360° video sequences rendered from the SunCG [51] dataset. Moreover, in [64], the authors use an end-to-end approach to learn 360° depth from equirectangular indoor scenes. They present a dataset generated via rendering existing 3D datasets and two neural network architectures, one more typical and the other constructed with rectangular filters and dilated convolutions [59] to account for the distortion in the spherical domain.…”
Section: Learning on 360° Images (mentioning)
confidence: 99%
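
The last statement mentions an architecture built from rectangular filters and dilated convolutions to account for equirectangular distortion. A minimal PyTorch sketch of that kind of block, with kernel sizes and dilation rates chosen for illustration rather than taken from [64]:

```python
# Hypothetical distortion-aware block: parallel branches with a square kernel and
# wide kernels of increasing horizontal dilation, so the receptive field stretches
# along the horizontally distorted direction of an equirectangular image.
import torch
import torch.nn as nn

class RectangularDilatedBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=(3, 3), padding=(1, 1)),
            nn.Conv2d(in_ch, out_ch, kernel_size=(3, 7), padding=(1, 3)),
            nn.Conv2d(in_ch, out_ch, kernel_size=(3, 7), padding=(1, 6), dilation=(1, 2)),
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)
        self.act = nn.ELU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [self.act(branch(x)) for branch in self.branches]   # all keep the spatial size
        return self.act(self.fuse(torch.cat(feats, dim=1)))

# e.g.: y = RectangularDilatedBlock(64, 64)(torch.randn(1, 64, 256, 512))
```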