2018 IEEE 4th International Conference on Computer and Communications (ICCC)
DOI: 10.1109/compcomm.2018.8780991
Enabling Multiview- and Light Field-Video for Veridical Visual Experiences

Cited by 18 publications (17 citation statements); references 8 publications.
“…While this puts the principal point in the centre of the views by simulating the images being taken by a single camera, it comes at the price of a small loss of resolution due to cropping. In addition, we use one image from the Stanford gantry dataset (lego knights) [Stanford, 2021], and four high-resolution images from the Technicolor (birthday, painter) [Sabater et al, 2017] and SAUCE datasets (cellist, fire_dancer) [Herfet et al, 2018, Trottnow et al, 2019]. These high-resolution images, with a wider baseline, pose some difficulty for traditional light field methods, which are not designed for such sets.…”
Section: Comparing Novel View Synthesis
confidence: 99%