2018
DOI: 10.1007/978-3-030-01234-2_26
Learning to Capture Light Fields Through a Coded Aperture Camera

Cited by 59 publications (118 citation statements)
References 37 publications
“…As we can see from the error maps shown in the figure and Table II, for the challenging case of a single coded image (N = 1), our approach outperforms Inagaki et al. [11] by 2 dB on the test set. The red and blue insets in the figure detail the reconstructed top-right view along with the EPIs for the N = 1 case.…”
Section: B. Coded Aperture Light Field Reconstruction (mentioning)
confidence: 66%
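The statements above concern coded aperture light field capture, where each acquired image is a code-weighted sum of the sub-aperture views of the light field. A minimal sketch of that forward model follows; the array sizes, variable names, and random codes are illustrative assumptions, not values from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical light field: U x V sub-aperture views, each an H x W image
# (sizes are illustrative, not taken from the cited papers).
U, V, H, W = 5, 5, 32, 32
light_field = rng.random((U, V, H, W))

def coded_aperture_capture(lf, codes):
    """Forward model: each acquired image is a code-weighted sum of the
    sub-aperture views, i.e. the aperture code modulates how much each
    view contributes to the sensor image."""
    # codes: (N, U, V) aperture transmittance per acquisition
    return np.einsum('nuv,uvhw->nhw', codes, lf)

# The challenging single-image case (N = 1) discussed in the text
N = 1
codes = rng.uniform(0.0, 1.0, size=(N, U, V))
acquired = coded_aperture_capture(light_field, codes)
print(acquired.shape)  # (1, 32, 32)
```

Reconstruction then amounts to inverting this many-to-one mapping, which is why learned priors (as in the compared methods) are needed when N is small.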
“…This ensures that the data has a variety of relative defocus w.r.t. the focal plane during training. We compared our reconstructions with both direct regression (Nabati et al. [20], Inagaki et al. [11]) and disparity-based rendering from the four corner views of the LF (Kalantari et al. [6]) and from a single image (Srinivasan et al. [18]). Our CLF and focus-defocus approaches perform as well as the four-corner-view method of Kalantari et al. and outperform the direct-regression and single-image-based approaches.…”
Section: Discussion (mentioning)
confidence: 99%
“…distribution can uniquely recover x in Equation (4.9) [171,172,173]. It is also possible to find the optimal Φ using learning-based methods, which is an ongoing research problem [174,175,176].…”
Section: Compressive Sensing (mentioning)
confidence: 99%
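This statement refers to the compressive sensing model y = Φx, where a sparse signal x is recovered from underdetermined measurements. A minimal sketch using iterative soft-thresholding (ISTA), a standard sparse-recovery baseline, is shown below; the dimensions, sparsity level, and Gaussian Φ are illustrative assumptions, and Equation (4.9) itself is not reproduced from the cited thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: signal length n, measurements m << n, sparsity k.
n, m, k = 128, 48, 4
phi = rng.standard_normal((m, n)) / np.sqrt(m)  # measurement matrix Φ

# Ground-truth k-sparse signal and its compressive measurements y = Φx
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
y = phi @ x_true

def ista(y, phi, lam=0.01, steps=500):
    """Iterative soft-thresholding: gradient descent on ||y - Φx||^2
    interleaved with shrinkage, which promotes a sparse solution."""
    x = np.zeros(phi.shape[1])
    t = 1.0 / np.linalg.norm(phi, 2) ** 2          # step from spectral norm
    for _ in range(steps):
        x = x + t * phi.T @ (y - phi @ x)          # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)  # shrinkage
    return x

x_hat = ista(y, phi)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Learning the measurement matrix Φ itself, as the statement notes, replaces the random Gaussian draw here with an optimized (e.g. network-trained) code.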
“…Three light field video data sets were used for this purpose: Boxer-Gladiator-Irish, Chess, and Chess-moving from [211]. We compared the reconstruction results of SM3 with a temporal window size of 3 (β = 3) against the methods of Marwah et al. [22], Miandji et al. [23], and the deep learning method of Inagaki et al. [176].…”
Section: Sensing Model 3 (SM3) (mentioning)
confidence: 99%