2020
DOI: 10.2352/issn.2470-1173.2020.13.ervr-382

RaViS: Real-time accelerated View Synthesizer for immersive video 6DoF VR

Abstract: Fast track article for IS&T International Symposium on Electronic Imaging 2020: The Engineering Reality of Virtual Reality proceedings.

Cited by 17 publications (18 citation statements)
References 17 publications
“…This pipeline is shown in Figure 4. The warping and blending operations are performed alternatively for each input image using OpenGL [32] or on the CPU [15].…”
Section: RVS in Practice
confidence: 99%
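The excerpt above describes the per-view warp-then-blend loop of the RVS pipeline. Below is a minimal sketch of that structure, assuming hypothetical inputs (a list of views holding a color image and a depth map) and a caller-supplied warp_fn standing in for the OpenGL [32] or CPU [15] warping kernel; it illustrates the loop only and is not the authors' implementation.

    # Minimal sketch of the warp-then-blend loop described in the excerpt above
    # (not the authors' code). `warp_fn` is a hypothetical stand-in for the
    # OpenGL or CPU warping kernel used by RVS/RaViS.
    import numpy as np

    def synthesize_view(input_views, target_pose, warp_fn):
        """Alternate warping and blending over the input views to build one target image."""
        h, w = input_views[0]["color"].shape[:2]
        color_acc = np.zeros((h, w, 3), dtype=np.float32)   # weighted color sum
        weight_acc = np.zeros((h, w), dtype=np.float32)     # sum of blending weights

        for view in input_views:
            # Warp: reproject this view's pixels to the target pose using its depth map.
            warped_color, warped_depth, valid = warp_fn(view, target_pose)
            # Blend: weight valid pixels, e.g. by inverse depth so nearer content dominates.
            weight = valid / np.maximum(warped_depth, 1e-6)
            color_acc += warped_color * weight[..., None]
            weight_acc += weight

        return color_acc / np.maximum(weight_acc, 1e-6)[..., None]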
“…Though only two virtual views following the user's pose must be rendered at any time, the high frame rates used in VR (60-120 fps) impose stringent real-time constraints. By optimizing an in-house OpenGL implementation of RVS, called RaViS, we reach real-time performance, generating two synthesized views at 60 to 90 fps [2].…”
Section: View Synthesis in Stereoscopic HMD
confidence: 99%
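As a quick sanity check of the real-time budget implied above (our arithmetic, not part of the excerpt): at 90 fps a frame must be delivered every 1000 / 90 ≈ 11.1 ms, and at 60 fps every 1000 / 60 ≈ 16.7 ms; if the two eye views are synthesized sequentially, each view gets roughly 5.5 to 8.3 ms of that budget.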
“…This paper presents a demonstration centered on the topic of view synthesis that is complementary to the tutorial session "The MPEG Immersive Video coding standard" to be held at the Visual Communication and Image Processing (VCIP) conference 2021. Our approach [1], [2], which we will demonstrate at VCIP for Light-Field VR, is based on Depth Image Based Rendering (DIBR): the pixels from the input views are first projected to 3D space using their depth maps, and then reprojected to any virtual view one wishes to synthesize.…”
Section: Introduction
confidence: 99%
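The two projection steps in the DIBR description above can be written compactly. This is a generic textbook formulation, assuming a pinhole intrinsic matrix K and world-to-camera extrinsics (R, t) for each view; the notation is not taken from [1], [2].

    X_w = R_i^\top ( z \, K_i^{-1} \tilde{p} - t_i )    % unproject pixel \tilde{p} of input view i using its depth z
    \lambda \, \tilde{p}' = K_v ( R_v X_w + t_v )       % reproject the 3D point X_w into the virtual view v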
“…However, for certain content types (especially natural scenes with high levels of motion) it still produces noticeable artifacts (e.g. bad blending and disocclusions) that often degrade the quality of experience [5]. Efforts to improve the performance of TMIV have primarily focused on improving the depth estimation and on the synthesis at the decoder side [6,7].…”
Section: Introduction
confidence: 99%