2016
DOI: 10.1111/cgf.13037

Efficient Multi‐image Correspondences for On‐line Light Field Video Processing

Figure 1: Our real-time multi-view correspondence algorithm extracts multi-view depth maps from sparse, wide-baseline light field video (here 3×3 cameras) in order to produce high-quality novel views for applications such as virtual apertures or virtual camera positions.

Abstract: Light field videos express the entire visual information of an animated scene, but their sheer size typically makes capture, processing and display an off-line process, i.e., the time between initial capture and final display is far from…
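The paper's actual algorithm is not reproduced in this excerpt; as a rough illustration of the matching problem the abstract describes, below is a minimal plane-sweep sketch over a camera grid (Python/NumPy). All names and the rectified-shift assumption are ours, not the paper's; the paper's contribution is precisely an efficient on-line method rather than this brute-force sweep.

    # Minimal plane-sweep sketch of multi-view correspondence on a camera grid.
    # Assumes rectified views: a camera at grid offset (du, dv) sees the scene
    # shifted by (d*du, d*dv) pixels for a point at disparity d.
    import numpy as np

    def plane_sweep_depth(views, offsets, disparities):
        """views: {(du, dv): HxW float image}, (0, 0) is the reference view."""
        ref = views[(0, 0)]
        cost = np.zeros((len(disparities),) + ref.shape)
        for i, d in enumerate(disparities):
            for du, dv in offsets:
                if (du, dv) == (0, 0):
                    continue
                # Shift the neighbor by the disparity its offset implies, then
                # accumulate a photo-consistency cost (sum of absolute differences).
                shifted = np.roll(views[(du, dv)],
                                  (round(d * dv), round(d * du)), axis=(0, 1))
                cost[i] += np.abs(shifted - ref)
        # Winner-take-all: pick the disparity with the lowest cost per pixel.
        return np.asarray(disparities)[np.argmin(cost, axis=0)]

A depth map like this is what enables the applications named in Figure 1: a virtual aperture, for instance, warps every view onto a chosen focal plane and averages them.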

Cited by 26 publications (21 citation statements). References 41 publications (68 reference statements).
“…Very similar view positions [Kalantari et al 2016], as for a Lytro camera, can be considered dense, while 34 views on a sphere [Lombardi et al 2019] or 40 lights on a hemisphere [Malzbender et al 2001] is sparse. In this paper we focus on wider baselines, with typically N × M cameras spaced by 5-10 cm [Flynn et al 2019] and, respectively, a large disparity ranging up to 250 pixels [Dabała et al 2016; Mildenhall et al 2019], where N and M are single-digit numbers, e.g., 3×3, 5×5 or even 2×1. Depending on resources, a capture setup can be considered simple (cell phone, as we use) or more involved (light stage), as denoted in the "easy capture" column in Tbl.…”
Section: View Interpolation (Light Fields)
Citation type: mentioning
confidence: 99%
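To see why a 5-10 cm camera spacing already produces such large disparities, recall the stereo relation d = f·B/Z, with focal length f in pixels, baseline B and depth Z in meters. The focal length and depths below are assumed for illustration and do not come from the cited papers.

    # Illustrative numbers only: stereo disparity d = f * B / Z.
    f_px, baseline_m = 2000.0, 0.10   # assumed focal length (pixels), 10 cm spacing
    for depth_m in (0.8, 2.0, 10.0):
        d = f_px * baseline_m / depth_m
        print(f"Z = {depth_m} m  ->  d = {d:.0f} px")   # 250, 100, 20 pixels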
“…Warping and SuperSlowMo first estimate the correspondence in image pairs [Sun et al 2018b] or light field data [Dabała et al 2016] and later apply warping [Mark et al 1997] with ULR-style weights [Buehler et al 2001]. Note how ULR weighting accounts for occlusion.…”
Section: Comparison
Citation type: mentioning
confidence: 99%
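The "ULR-style weights" mentioned here follow the unstructured lumigraph of Buehler et al. (2001): each source view is blended according to how well its ray to the surface point agrees with the desired ray, and occluded views are zeroed out, which is how the weighting "accounts for occlusion". The sketch below is a simplified reading with an illustrative angular penalty, not the paper's exact formulation.

    # Simplified ULR-style blending weights (after Buehler et al. 2001).
    import numpy as np

    def ulr_weights(point, virtual_cam, source_cams, visible):
        """Weight each source camera by angular agreement with the virtual ray;
        `visible` flags (e.g., from a depth test) zero out occluded views."""
        d_ref = point - virtual_cam
        d_ref = d_ref / np.linalg.norm(d_ref)
        w = np.zeros(len(source_cams))
        for i, (cam, vis) in enumerate(zip(source_cams, visible)):
            if not vis:
                continue  # occluded view contributes nothing
            d = point - cam
            d = d / np.linalg.norm(d)
            ang = np.arccos(np.clip(d @ d_ref, -1.0, 1.0))
            w[i] = 1.0 / (ang + 1e-6)  # smaller angular error -> larger weight
        return w / w.sum() if w.sum() > 0 else w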
“…We chose an angular configuration that is similar to the one of real light fields captured by rigs of cameras, such as, e.g., in [48] and [49], which respectively provide 5 × 3 and 4 × 4 views. We also use the dataset of [48] to test our method on a real light field sequence: 'Bar'. Each frame is a 5 × 3 light field, in which each view has a spatial resolution of 1920 × 1080 pixels.…”
Section: A. Scene Flow Datasets
Citation type: mentioning
confidence: 99%
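For a sense of scale, one frame of the sequence described above, held naively in memory, is already on the order of 90 MiB, which is the "sheer size" problem the paper's abstract targets. The array layout below is our assumption, not the dataset's storage format.

    # One 5 x 3 light-field frame with 1920 x 1080 RGB views (layout assumed).
    import numpy as np
    frame = np.zeros((3, 5, 1080, 1920, 3), dtype=np.uint8)  # (rows, cols, H, W, RGB)
    center_view = frame[1, 2]                # middle camera of the rig
    print(frame.nbytes / 2**20, "MiB")       # ~89 MiB per frame, before video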