2020
DOI: 10.1109/access.2020.3004477

LightGAN: A Deep Generative Model for Light Field Reconstruction

Abstract: A light field image captured by a plenoptic camera can be considered a sampling of the light distribution within a given space. However, with the limited pixel count of the sensor, the acquisition of a high-resolution sample often comes at the expense of losing parallax information. In this work, we present a learning-based generative framework to overcome such a tradeoff by directly simulating the light field distribution. An important module of our model is the high-dimensional residual block, which fully exploits…
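The high-dimensional residual block mentioned in the abstract operates jointly on the spatial and angular dimensions of the 4D light field. Below is a minimal, hypothetical PyTorch sketch of such a block, written as alternating spatial and angular 2D convolutions with a residual connection; the class name, channel count, and (B, C, U, V, H, W) tensor layout are illustrative assumptions rather than the paper's exact architecture.

```python
# Illustrative sketch of a "high-dimensional" residual block for 4D light fields,
# approximated here with spatial-angular separable 2D convolutions.
# All names and shapes are assumptions, not the authors' exact design.
import torch
import torch.nn as nn


class SpatialAngularResBlock(nn.Module):
    """Residual block over a light field tensor of shape (B, C, U, V, H, W)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Convolution over the spatial dimensions (H, W) of each sub-aperture view.
        self.spatial_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Convolution over the angular dimensions (U, V) at each spatial location.
        self.angular_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, lf: torch.Tensor) -> torch.Tensor:
        b, c, u, v, h, w = lf.shape
        # Spatial pass: fold the angular dimensions into the batch dimension.
        x = lf.permute(0, 2, 3, 1, 4, 5).reshape(b * u * v, c, h, w)
        x = self.relu(self.spatial_conv(x))
        x = x.reshape(b, u, v, c, h, w)
        # Angular pass: fold the spatial dimensions into the batch dimension.
        x = x.permute(0, 4, 5, 3, 1, 2).reshape(b * h * w, c, u, v)
        x = self.angular_conv(x)
        x = x.reshape(b, h, w, c, u, v).permute(0, 3, 4, 5, 1, 2)
        # Residual connection over the full 4D light field.
        return self.relu(lf + x)


if __name__ == "__main__":
    block = SpatialAngularResBlock(channels=64)
    lf = torch.randn(1, 64, 5, 5, 32, 32)  # (B, C, U, V, H, W)
    print(block(lf).shape)  # torch.Size([1, 64, 5, 5, 32, 32])
```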

Cited by 19 publications (8 citation statements)
References 45 publications
“…Farrugia [75] embeds low-rank priors into a deep convolutional network to restore the consistency of the entire light field across all sub-aperture images. Meng [56] merged high-dimensional convolutional layers, tailored to the special structure of light field images, into a GAN [76] to capture the correlations between adjacent light field views.…”
Section: Inter-image-similarity-based LFSR (mentioning)
confidence: 99%
“…Wu et al. [27] and Yuan et al. [54] used EPIs for super-resolution processing of the light field. With the rise of deep learning, many novel network architectures have also been applied to light field super-resolution: Zhang [24] used residual networks, Zhu [55] combined a CNN with long short-term memory (LSTM), and Meng [56] used a generative adversarial network (GAN). Current research on light field super-resolution is no longer concerned only with reconstruction quality; it has also expanded to improving processing speed while preserving that quality, as in Wang [57] and Ma [58].…”
mentioning
confidence: 99%
“…Through the proposed adversarial loss and content loss, SRGAN can recover fine texture details in super-resolved images and infer photo-realistic natural images. Motivated by these benefits, Meng et al. [27] also incorporate high-dimensional convolution layers into the GAN framework to learn the strong correlations among neighboring light field views. By doing so, their model can generate novel-view images with good fidelity and clearer edges.…”
Section: Related Work (mentioning)
confidence: 99%
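The snippet above describes an SRGAN-style objective that pairs a content loss with an adversarial loss. As a rough illustration only, a generator loss of that form could be sketched as follows in PyTorch; the weighting factor, the pixel-wise content term, and the discriminator interface are assumptions, not the exact formulation used by SRGAN or LightGAN.

```python
# Minimal sketch of an SRGAN-style generator objective combining a content loss
# with an adversarial term. The weighting and interfaces are illustrative
# assumptions, not the published formulation.
import torch
import torch.nn.functional as F


def generator_loss(sr, hr, discriminator, adv_weight: float = 1e-3) -> torch.Tensor:
    """sr: super-resolved batch, hr: ground-truth batch."""
    # Content term: pixel-wise fidelity between reconstruction and target.
    content = F.mse_loss(sr, hr)
    # Adversarial term: push the discriminator to rate generated images as real.
    logits_fake = discriminator(sr)
    adversarial = F.binary_cross_entropy_with_logits(
        logits_fake, torch.ones_like(logits_fake)
    )
    return content + adv_weight * adversarial
```

In a typical training loop this term is minimized for the generator, while the discriminator is updated separately with the usual real/fake binary cross-entropy objective.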
“…To reconstruct the high-frequency spatial details [46], [47], we also add the content perceptual loss component ℓ_c given by…”
Section: F. Loss Function (mentioning)
confidence: 99%
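The content perceptual loss referred to in this snippet is commonly computed as a distance between deep feature maps of the reconstruction and the reference image. The sketch below shows one such hypothetical definition using a fixed VGG-19 feature extractor; the backbone, layer choice, and normalization assumption are illustrative and not taken from the cited papers.

```python
# Illustrative sketch of a content (perceptual) loss in VGG-19 feature space.
# Backbone and layer choice are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19


class ContentPerceptualLoss(nn.Module):
    def __init__(self, layer_index: int = 35):  # index 35 is relu5_4 in VGG-19
        super().__init__()
        # Pretrained ImageNet weights; the feature extractor stays frozen.
        features = vgg19(weights="IMAGENET1K_V1").features[: layer_index + 1]
        for p in features.parameters():
            p.requires_grad = False
        self.features = features.eval()

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        # Mean squared error between deep feature maps of the reconstruction
        # and the reference image.
        return F.mse_loss(self.features(sr), self.features(hr))
```

Inputs are assumed to be ImageNet-normalized RGB sub-aperture views; in practice this term is added to the pixel-wise and adversarial losses with a small weight.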