2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)
DOI: 10.1109/aivr46125.2019.00018

Unsupervised Learning of Depth and Ego-Motion From Cylindrical Panoramic Video

Abstract: We introduce a convolutional neural network model for unsupervised learning of depth and ego-motion from cylindrical panoramic video. Panoramic depth estimation is an important technology for applications such as virtual reality, 3D modeling, and autonomous robotic navigation. In contrast to previous approaches for applying convolutional neural networks to panoramic imagery, we use the cylindrical panoramic projection, which allows for the use of traditional CNN layers such as convolutional filters and max …
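The abstract's central point is that the cylindrical projection keeps the panorama on a regular image grid, so ordinary CNN layers apply directly; the only special handling a 360° cylindrical panorama calls for is its continuity across the left/right image border. Below is a minimal PyTorch-style sketch of horizontal wrap-around (circular) padding for that case; the module name, defaults, and the specific padding scheme are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CylindricalConv2d(nn.Module):
    """2D convolution with horizontal wrap-around padding (sketch).

    A cylindrical panorama is continuous across its left and right
    edges, so padding circularly along the width axis lets ordinary
    convolutional filters be applied without a visible seam, while
    the height axis is zero-padded as usual.
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.pad = kernel_size // 2
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride=stride, padding=0)

    def forward(self, x):
        if self.pad > 0:
            # Wrap around horizontally (the azimuth axis of the panorama).
            x = torch.cat([x[..., -self.pad:], x, x[..., :self.pad]], dim=-1)
            # Zero-pad vertically, where the panorama is not continuous.
            x = F.pad(x, (0, 0, self.pad, self.pad))
        return self.conv(x)

# Example: a panorama batch of shape (N, C, H, W).
# y = CylindricalConv2d(3, 32)(torch.randn(1, 3, 128, 512))
```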

Cited by 17 publications (8 citation statements) · References 27 publications

Citation statements, ordered by relevance:
“…4) We demonstrate how to use the trained neural network for novel view synthesis and stereo panorama conversion from a single input panorama. This journal paper extends our previous conference paper [16] in several ways. We evaluate our method on a larger and more complete synthetic dataset rendered using the CARLA simulator [17] that allows us to more effectively evaluate our approach.…”
Section: Introduction (supporting)
confidence: 70%
“…For the vertical coordinate, it projects the surface of a sphere onto a cylinder using the tangent of the latitude φ, which can be envisioned as wrapping a flat piece of paper around the circumference of the sphere. Using the cylindrical projection, several authors estimate depth from wide-angle cameras [21,23]. The projection is expressed as follows:…”
Section: Cylindrical Projection (mentioning)
confidence: 99%
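The projection equation itself is truncated in this report. As a hedged sketch of a standard cylindrical projection consistent with the quoted description (azimuth mapped linearly, vertical coordinate given by the tangent of the latitude), it could take the following form; θ denotes longitude, φ latitude, and the unit-radius cylinder is our assumption, not taken from the cited text.

```latex
% Sketch of a standard cylindrical projection (not the paper's exact equation):
% a viewing direction with longitude \theta and latitude \varphi maps to
% panorama coordinates (u, v) on a unit-radius cylinder.
\begin{aligned}
u &= \theta, \\
v &= \tan\varphi .
\end{aligned}
```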
“…It has previously been observed that L_syn suffers from a gradient locality problem, where both the depth loss and L_syn stagnate, since it relies on the comparison of localized pixel intensities [32,42]. Multi-scale regularization, i.e., estimating disparities and computing L_syn at different spatial scales (L_ms), has been reported to mitigate the gradient locality problem [32,9,10]. Additionally, an edge-aware smoothness regularizing term L_smth on the disparity map has also been included [19,35] to encourage smoothness of the estimation,…”
Section: Additional Regularization and Auxiliaries (mentioning)
confidence: 99%
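The quoted statement is truncated before the loss definition. As a minimal PyTorch-style sketch of a common form of the edge-aware smoothness term L_smth (following monodepth-style formulations; the function name, the mean-normalization of the disparity, and the tensor shapes are illustrative assumptions rather than details from the cited papers):

```python
import torch

def edge_aware_smoothness(disp, img):
    """Edge-aware smoothness regularizer (illustrative sketch).

    Penalizes disparity gradients, down-weighted where the input image
    itself has strong gradients, so depth discontinuities are allowed
    at image edges. disp is (B, 1, H, W) and img is (B, 3, H, W).
    """
    # Normalize disparity so the penalty is insensitive to its overall scale.
    disp = disp / (disp.mean(dim=(2, 3), keepdim=True) + 1e-7)

    # Spatial gradients of the disparity map.
    d_dx = (disp[:, :, :, 1:] - disp[:, :, :, :-1]).abs()
    d_dy = (disp[:, :, 1:, :] - disp[:, :, :-1, :]).abs()

    # Image gradients, averaged over color channels.
    i_dx = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean(1, keepdim=True)
    i_dy = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean(1, keepdim=True)

    # Down-weight the smoothness penalty where the image has edges.
    return (d_dx * torch.exp(-i_dx)).mean() + (d_dy * torch.exp(-i_dy)).mean()
```

In practice a term of this kind is typically added, with a small weighting factor, to the view-synthesis loss L_syn and its multi-scale variants L_ms described in the quoted passage.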