2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)
DOI: 10.1109/vr.2019.8798016
Real-Time Panoramic Depth Maps from Omni-directional Stereo Images for 6 DoF Videos in Virtual Reality

Cited by 34 publications (24 citation statements)
References 24 publications
“…It is equipped with 16 high-definition cameras, together with a powerful processor and memory array. Every two cameras form a binocular stereo pair responsible for a 45° field of view in the circumferential direction, and together these pairs realize 360° panoramic stereo imaging [ 12 ]. Current panoramic stereo imaging systems still have many shortcomings: the panoramic image tends to look flat, the sense of immersion is weak, processing is not real-time and cannot reflect dynamic scene changes, stereo-matching accuracy is low, and observers are prone to dizziness [ 13 , 14 ].…”
Section: Related Work
confidence: 99%
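The ring layout in the passage above can be made concrete with a little arithmetic. This is a hedged sketch only: the camera count (16) and the 45° sector per stereo pair come from the citing text, while the evenly spaced ring, the disjoint-pair grouping, and the helper name `ring_stereo_pairs` are assumptions for illustration, not the rig's documented design.

```python
def ring_stereo_pairs(n_cameras=16, sector_deg=45.0):
    """Assumed geometry: n cameras evenly spaced on a circle, grouped into
    disjoint adjacent pairs; each pair covers one azimuth sector.
    Returns (cam_a, cam_b, sector_start_deg, sector_end_deg) per pair."""
    step = 360.0 / n_cameras            # angular spacing between cameras
    pairs = []
    for k in range(n_cameras // 2):     # 8 disjoint pairs for 16 cameras
        cam_a, cam_b = 2 * k, 2 * k + 1
        center = (cam_a + cam_b) / 2 * step   # pair's central viewing azimuth
        pairs.append((cam_a, cam_b,
                      (center - sector_deg / 2) % 360.0,
                      (center + sector_deg / 2) % 360.0))
    return pairs
```

With 16 cameras this yields 8 pairs, and 8 × 45° = 360°, matching the full circumferential coverage the text describes.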
“…approaches [10,11,18,30]. While closed meshes do not introduce holes, visible distortions often appear at depth discontinuities.…”
Section: Related Work
confidence: 99%
“…There are only a few works [21]–[23] that estimate depth from pairs of views. The studies [21], [22] (and their prior analyses) estimate the relative pose between the cameras using traditional A-KAZE features [24] and the eight-point algorithm (8-PA) [25]; derotate and estimate dense features from the views; and refine the pose using an iterative nonlinear approach.…”
Section: Related Work
confidence: 99%
“…The studies [21], [22] (and their prior analyses) estimate the relative pose between the cameras using traditional A-KAZE features [24] and the eight-point algorithm (8-PA) [25]; derotate and estimate dense features from the views; and refine the pose using an iterative nonlinear approach. Lai and colleagues [23] present an encoder-decoder model that deals with stereo-rectified image pairs with a small, fixed baseline. Their convolutional neural network, although not adapted to the spherical distortions, encourages the depth estimates from the left and right boundaries to connect.…”
Section: Related Work
confidence: 99%
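The pose stage these citations name rests on the classic eight-point algorithm (8-PA) [25]. The following is a minimal textbook sketch in plain NumPy, not the code of [21], [22]: given eight or more point correspondences in normalized camera coordinates, it solves the epipolar constraint x2ᵀ E x1 = 0 linearly and enforces the rank-2 constraint. The helper name `eight_point` is hypothetical.

```python
import numpy as np

def eight_point(x1, x2):
    """Classic eight-point algorithm (textbook sketch, noise-free case).
    x1, x2: (N, 3) homogeneous points, N >= 8, satisfying x2^T E x1 = 0.
    Returns the 3x3 essential/fundamental matrix up to scale."""
    # Each correspondence contributes one row of the linear system A f = 0,
    # where f is the row-major flattening of E: coeff of E[i, j] is x2[i]*x1[j].
    A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
    # The solution is the right-singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Enforce rank 2 by zeroing the smallest singular value of E.
    U, s, Vt = np.linalg.svd(E)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt
```

In practice (and in [21], [22]) this linear estimate only initializes the pose; the derotation and iterative nonlinear refinement steps mentioned above handle noise and the spherical geometry.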