2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2018.00499
EPINET: A Fully-Convolutional Neural Network Using Epipolar Geometry for Depth from Light Field Images

Abstract: Light field cameras capture both the spatial and the angular properties of light rays in space. Thanks to this property, depth can be computed from light fields in uncontrolled lighting environments, which is a big advantage over active sensing devices. Depth computed from light fields can be used for many applications, including 3D modelling and refocusing. However, light field images from hand-held cameras have very narrow baselines and suffer from noise, making depth estimation difficult. Many approaches have been …

Cited by 255 publications
(327 citation statements)
References 35 publications
“…For the quantitative evaluation in Figure 7, we plot the 13 error measures from the HCI 4D Light Field Benchmark; for details, please refer to [10]. Our method outperforms EPINET-Cross [20] in 11 out of 13 metrics, with a close tie on the other two. Because our network is run several times (once for each EPI-Shift), the runtime increases linearly with the disparity range and is therefore slightly higher than that of [20].…”
Section: Results on the HCI 4D Light Field Benchmark
confidence: 99%
“…For another 30,000 iterations, we decreased the learning rate to 10⁻⁵ and fixed the learned batch-normalization parameters. We apply a large variety of data augmentations, comparable to [20], including random color-channel re-distribution, random brightness and contrast adjustments, random rotations by multiples of 90°, random scales between 0.5 and 1, and random crops to a patch size of 225 × 225. This patch size lets the U-Net exploit global information.…”
Section: Training
confidence: 99%
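The augmentation pipeline quoted above can be sketched in a few lines of NumPy. This is a hypothetical illustration, not code from either paper: the function name, the brightness/contrast ranges, and the nearest-neighbour downscaling method are assumptions; only the 90° rotations, the 0.5–1 scale range, and the 225 × 225 crop size come from the quoted text.

```python
import numpy as np

def augment_patch(lf, rng, patch=225):
    """Hypothetical sketch of the quoted augmentation pipeline.

    `lf` is assumed to be an (H, W, C) float array in [0, 1].
    Parameter ranges marked "assumed" are illustrative only.
    """
    # Random color-channel re-distribution: shuffle the channel order.
    lf = lf[..., rng.permutation(lf.shape[-1])]

    # Random brightness and contrast adjustment (ranges assumed).
    brightness = rng.uniform(-0.1, 0.1)
    contrast = rng.uniform(0.8, 1.2)
    lf = np.clip((lf - 0.5) * contrast + 0.5 + brightness, 0.0, 1.0)

    # Random rotation by a multiple of 90 degrees.
    lf = np.rot90(lf, k=int(rng.integers(0, 4)), axes=(0, 1))

    # Random scale in [0.5, 1], here via nearest-neighbour subsampling
    # (the actual resampling method is not specified in the quote).
    scale = rng.uniform(0.5, 1.0)
    h, w = lf.shape[:2]
    rows = np.linspace(0, h - 1, max(patch, int(h * scale))).astype(int)
    cols = np.linspace(0, w - 1, max(patch, int(w * scale))).astype(int)
    lf = lf[np.ix_(rows, cols)]

    # Random crop to a patch x patch window.
    y = int(rng.integers(0, lf.shape[0] - patch + 1))
    x = int(rng.integers(0, lf.shape[1] - patch + 1))
    return lf[y:y + patch, x:x + patch]
```

In practice each augmentation would be applied consistently across all angular views of the light field so that the epipolar geometry is preserved; the sketch shows the spatial transforms for a single view.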