2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2010.5540166
Sensor saturation in Fourier multiplexed imaging

Abstract: Optically multiplexed image acquisition techniques have become increasingly popular for encoding different exposures, color channels, light fields, and other properties of light onto two-dimensional image sensors. Recently, Fourier-based multiplexing and reconstruction approaches have been introduced in order to achieve a superior light transmission of the employed modulators and better signal-to-noise characteristics of the reconstructed data. We show in this paper that Fourier-based reconstruction approaches s…

Cited by 23 publications (22 citation statements)
References 27 publications
“…In particular, spatial patterns are chosen such that the different slices of the plenoptic function are encoded into different frequency bands. In computer graphics, this optical heterodyne approach has so far been used for capturing light fields [32,31], occluder information [14], and high dynamic range color photographs [34]. Spatially encoded light fields were recently analyzed in Fourier space and it was demonstrated that Fourier reconstruction algorithms apply as well [10].…”
Section: Background and Related Work
confidence: 99%
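The optical heterodyne idea in the quoted passage can be illustrated in a few lines: modulating a signal by a carrier pattern shifts its spectrum into a separate frequency band, so several signals can share one sensor measurement and be separated again in Fourier space. The 1-D sketch below uses cosine carriers and band limits chosen purely for illustration; it is not the cited reconstruction method.

```python
# Minimal 1-D sketch of Fourier (heterodyne) multiplexing; carriers and band
# limits are chosen for illustration only, not taken from the cited papers.
import numpy as np

n = 1024
rng = np.random.default_rng(0)

def bandlimited(cutoff):
    """Create a real-valued signal whose spectrum occupies only low frequencies."""
    spec = np.zeros(n, dtype=complex)
    spec[:cutoff] = rng.standard_normal(cutoff) + 1j * rng.standard_normal(cutoff)
    return np.real(np.fft.ifft(spec))

def lowpass(signal, half_width):
    """Keep only the Fourier bins within half_width of DC (both spectrum sides)."""
    spec = np.fft.fft(signal)
    k = np.arange(n)
    spec[(k >= half_width) & (k <= n - half_width)] = 0
    return np.real(np.fft.ifft(spec))

# Two band-limited "slices" of the plenoptic function (e.g. two exposures or views).
a, b = bandlimited(16), bandlimited(16)

# A cosine carrier places slice b in a frequency band away from baseband.
x = np.arange(n)
carrier = np.cos(2 * np.pi * 100 * x / n)

# One multiplexed "sensor" measurement containing both slices.
sensor = a + b * carrier

# Fourier-domain demultiplexing: crop baseband for a; demodulate, then crop for b.
a_rec = lowpass(sensor, 16)
b_rec = lowpass(sensor * 2 * carrier, 16)

print(np.allclose(a, a_rec), np.allclose(b, b_rec))  # both slices recovered exactly here
```

In this noise-free toy setup the bands do not overlap, so recovery is exact; the cited work is concerned precisely with what happens when real sensor effects such as saturation break this clean band separation.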
“…Unlike this approach and later approaches (e.g. [12,18,8,23]), we do not aim to change the photography process to increase the amount of information captured about a scene, but instead aim to extract as much information as possible from a single, given photograph. LDR to HDR enhancement: Reconstructing an HDR image from a single exposure with clipped values is a challenging problem that yields only approximate solutions based on heuristics or manual user intervention [17,2,22,7].…”
Section: Related Work
confidence: 97%
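To see why the clipping problem mentioned in this statement is ill-posed, consider the toy example below (hypothetical radiance values, saturation at 1.0): every radiance above the clipping point maps to the same measurement, so any single-exposure reconstruction of those pixels is necessarily a heuristic or learned guess.

```python
# Toy illustration (hypothetical numbers) of information loss under saturation.
import numpy as np

radiance = np.array([0.2, 0.9, 1.7, 4.0, 12.5])  # hypothetical scene radiance
measured = np.clip(radiance, 0.0, 1.0)           # sensor saturates at 1.0
clipped = measured >= 1.0

print(measured)  # [0.2 0.9 1.  1.  1. ] -- the last three pixels are indistinguishable
print(clipped)   # [False False  True  True  True]; values above 1.0 cannot be
                 # recovered from this single exposure alone
```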
“…Mitsunaga et al [101] performed high dynamic range imaging by estimating the sensor response first and performing specific modifications, while Tumblin et al [45] designed a sensor recording image gradient, and Wetzstein et al [46] inserted plug-in filters (e.g. graduated neutral density filters) in front of the lens or sensor.…”
Section: Dynamic Range
confidence: 99%
“…1) Depth can be recovered from defocus analysis, because the depth of field is closely related to the distance. The typical approaches include introducing coded aperture patterns [46, 143-145] or multiple apertures [114], computing from the image pairs captured using different aperture sizes [146-148, 33]. Levin [149] compares the performances of different aperture codes in depth estimation and gives a mathematical analysis of the results using a geometrical optics model.…”
Section: Extracting Depth or Shape
confidence: 99%
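The two-aperture idea in the quoted passage can be sketched as follows: defocus blur grows with both aperture size and distance from the focal plane, so the extra blur needed to map the small-aperture capture onto the large-aperture capture carries depth information. The toy 1-D example below uses Gaussian blur as a stand-in for defocus and made-up blur-to-depth constants; it is not any cited method, only the general principle.

```python
# Toy sketch of depth from defocus with two aperture sizes (illustrative constants).
import numpy as np

def gaussian_blur(signal, sigma):
    """Blur a 1-D signal with a Gaussian kernel as a stand-in for defocus blur."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(signal, kernel, mode="same")

def estimate_relative_blur(small_ap, large_ap, candidates=np.linspace(0.2, 6.0, 60)):
    """Search for the extra Gaussian blur that best explains the large-aperture
    capture given the small-aperture one (Gaussian blurs compose in quadrature)."""
    errors = [np.sum((gaussian_blur(small_ap, s) - large_ap) ** 2) for s in candidates]
    return candidates[int(np.argmin(errors))]

# Toy scene: a step edge. Defocus blur is assumed proportional to aperture
# diameter times the distance from the focal plane (a hypothetical calibration).
edge = np.zeros(256)
edge[128:] = 1.0
for depth_offset in (1.0, 2.0, 3.0):
    small = gaussian_blur(edge, 0.8 * depth_offset)   # small-aperture capture
    large = gaussian_blur(edge, 2.0 * depth_offset)   # large-aperture capture
    print(depth_offset, estimate_relative_blur(small, large))
    # the estimated relative blur increases monotonically with depth_offset,
    # so it can be mapped back to depth after calibration
```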