2017 IEEE/SICE International Symposium on System Integration (SII)
DOI: 10.1109/sii.2017.8279335
Virtual reality with motion parallax by dense optical flow-based depth generation from two spherical images

Cited by 7 publications (7 citation statements). References 10 publications.
“…Classic mathematical approaches exploit camera motion, spline approximations and epipolar geometry to infer relative depth maps [19]-[21]. The work of Pathak et al. (2017) [22] takes advantage of the motion parallax phenomenon in a virtual reality context. Two spherical images are used to estimate a dense optical flow, which is then decomposed into a relative depth map.…”
Section: B. Estimating Depth from Motion
confidence: 99%
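The decomposition this excerpt describes can be illustrated with a minimal sketch: under pure camera translation, the magnitude of the dense flow between two equirectangular renderings of the spherical images falls off with depth, scaled by the sine of the angle between the viewing ray and the translation direction. The sketch below is not the cited paper's pipeline; it assumes a known unit translation direction `t_hat`, uses Farnebäck flow as a stand-in for the dense optical flow method, and treats pixel flow as a rough proxy for angular flow. Function names are illustrative.

```python
# Hedged sketch: relative (scale-free) depth from dense optical flow between
# two equirectangular (spherical) images, assuming pure translation along a
# known unit direction t_hat. Not the authors' exact method.
import cv2
import numpy as np

def pixel_to_ray(h, w):
    """Unit viewing rays for every pixel of an equirectangular image."""
    v, u = np.mgrid[0:h, 0:w]
    lon = (u / w) * 2.0 * np.pi - np.pi          # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / h) * np.pi          # latitude  in [-pi/2, pi/2]
    return np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)

def relative_depth(img0, img1, t_hat):
    """Depth up to scale: ~ sin(angle to translation direction) / flow magnitude."""
    g0 = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 4, 21, 5, 7, 1.5, 0)
    h, w = g0.shape
    rays = pixel_to_ray(h, w)
    # angle between each viewing ray and the translation direction (epipole)
    sin_theta = np.linalg.norm(np.cross(rays, t_hat), axis=-1)
    # pixel-space flow magnitude used as a rough proxy for angular flow
    flow_mag = np.linalg.norm(flow, axis=-1) + 1e-6
    return sin_theta / flow_mag                   # larger value = farther point
```

In the cited work the translation direction is not given but recovered from epipolar geometry; here it is supplied explicitly to keep the sketch short.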
“…One benefit is that the sense of "presence" seems to increase in virtual environments in head-mounted displays compared with 2D monitor displays (Kim, Rosenthal, Zielinski, & Brady, 2014). Another benefit is the possibility of implementing a clear sense of depth (Wann, Rushton, & Mon-Williams, 1995) through stereopsis and the use of depth cues such as motion parallax that cannot be extracted from a two-dimensional plane (Pathak, Moro, Fujii, Yamashita, & Asama, 2017). Finally, the affordances of locomoting for successful search completion change how search is guided in naturalistic settings (Draschkow & Võ, 2016; Hayhoe et al., 2003; Li et al., 2016; Võ, Boettcher, & Draschkow, 2019).…”
Section: Using Virtual Reality
confidence: 99%
“…There are only a few works [21]-[23] that estimate depth from pairs of views. The studies [21], [22] (and their prior analyses) estimate the relative pose between the cameras using traditional A-KAZE features [24] and the eight-point algorithm (8-PA) [25]; derotate and estimate dense features from the views; and refine the pose using an iterative nonlinear approach. Lai and colleagues [23] present an encoder-decoder model that deals with stereo-rectified image pairs with a small, fixed baseline.…”
Section: Related Work
confidence: 99%
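The first step of the pipeline summarized in this excerpt (A-KAZE matching followed by the eight-point algorithm on spherical views) can be sketched as below. This is a simplified illustration, not the cited implementation: there is no RANSAC, derotation, dense-feature, or nonlinear refinement stage, and the helper names (`to_bearing`, `eight_point`) are illustrative.

```python
# Hedged sketch: A-KAZE matches on two equirectangular images, converted to
# unit bearing vectors, then the classic eight-point algorithm for the
# essential matrix (q_i^T E p_i = 0). Needs at least 8 matches.
import cv2
import numpy as np

def to_bearing(kps, w, h):
    """Map equirectangular keypoints to unit bearing vectors on the sphere."""
    pts = np.array([kp.pt for kp in kps])
    lon = (pts[:, 0] / w) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (pts[:, 1] / h) * np.pi
    return np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)

def eight_point(p, q):
    """Essential matrix from bearing correspondences, rank-2 enforced via SVD."""
    A = np.einsum('ni,nj->nij', q, p).reshape(len(p), 9)   # one row per match
    _, _, vt = np.linalg.svd(A)
    E = vt[-1].reshape(3, 3)                               # null-space solution
    u, s, vt = np.linalg.svd(E)
    return u @ np.diag([1.0, 1.0, 0.0]) @ vt               # project to essential manifold

def essential_from_akaze(img0, img1):
    g0 = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    akaze = cv2.AKAZE_create()
    k0, d0 = akaze.detectAndCompute(g0, None)
    k1, d1 = akaze.detectAndCompute(g1, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d0, d1)
    h, w = g0.shape
    p = to_bearing([k0[m.queryIdx] for m in matches], w, h)
    q = to_bearing([k1[m.trainIdx] for m in matches], w, h)
    return eight_point(p, q)
```

Decomposing the resulting essential matrix then yields the relative rotation and the translation direction used for derotation and the subsequent depth estimation.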