2016
DOI: 10.1364/oe.24.006145

Computational imaging with a highly parallel image-plane-coded architecture: challenges and solutions

Abstract: This paper investigates a highly parallel extension of the single-pixel camera based on a focal plane array. It discusses the practical challenges that arise when implementing such an architecture and demonstrates that system-specific optical effects must be measured and integrated within the system model for accurate image reconstruction. Three different projection lenses were used to evaluate the ability of the system to accommodate varying degrees of optical imperfection. Reconstruction of binary and grayscale …
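The abstract's central claim, that measured optical effects must enter the system model for accurate reconstruction, can be illustrated with a toy simulation. Everything below (the sizes, the synthetic "blur", the least-squares solver) is a hypothetical sketch, not the authors' method:

```python
# Hypothetical sketch: reconstructing a scene from image-plane-coded
# measurements once the system matrix, including a measured optical
# imperfection, has been calibrated. Sizes and names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n = 64   # number of scene pixels (toy 8x8 scene, flattened)
m = 96   # number of coded measurements (over-determined here)

# Binary modulation codes, as a DMD-style system might display.
codes = rng.integers(0, 2, size=(m, n)).astype(float)
# A toy "optical distortion" standing in for lens imperfections.
blur = np.eye(n) + 0.05 * rng.standard_normal((n, n))
# Calibrated system matrix: codes as seen *through* the optics.
A = codes @ blur

x_true = rng.random(n)   # unknown scene
y = A @ x_true           # noiseless coded measurements

# Reconstruction with the calibrated model (optics included) ...
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
# ... versus the idealized model that ignores the optics.
x_naive, *_ = np.linalg.lstsq(codes, y, rcond=None)

print(np.linalg.norm(x_hat - x_true))    # near zero: model matches system
print(np.linalg.norm(x_naive - x_true))  # much larger: unmodeled optics
```

The calibrated solve recovers the scene essentially exactly, while the idealized model absorbs the unmodeled distortion into the estimate, which is the failure mode the paper's calibration step is meant to avoid.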

Cited by 33 publications (20 citation statements)
References 22 publications
“…In particular, (1) relates to computational sensing applications in which the unknowns d are associated with positive gains (see Figure 1), while p random matrix instances can be applied to a source x by a suitable (i.e., programmable) light-modulation embodiment. This setup matches compressive imaging configurations [4], [10], [12]-[14], with an important difference: the absence of a priori structure on (x, d) in (1) implies an over-Nyquist sampling regime with respect to (w.r.t.) n, i.e., exceeding the number of unknowns as mp ≥ n + m. When the effect of d is critical (i.e., assuming diag(d) ≈ I_m would lead to … [Table I caption: finite-sample and expected values of the objective function; its gradient components and Hessian matrix; the initialisation …]”
Section: Introduction
confidence: 81%
“…(2) Revise the Hadamard matrix to make it applicable to the DMD, turning every "−1" entry into "0". Using the complement vector of the second instead of the original one, all blocks have the same number of opened micro-mirrors, which may reduce the impact of pixel crosstalk [33]. (3) Divide the revised Hadamard matrix into separate row vectors.…”
Section: DMD Masks
confidence: 99%
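The quoted steps can be sketched in code. This is our illustrative reading of the cited procedure (function names are hypothetical, not from the paper), using the Sylvester construction to build the Hadamard matrix:

```python
# Sketch of the cited DMD-mask construction: binarize a Hadamard matrix
# for a digital micromirror device, pairing each row with its complement
# so every pattern/complement pair drives the same total set of mirrors.
import numpy as np

def sylvester_hadamard(order: int) -> np.ndarray:
    """Hadamard matrix of size `order` (a power of two), entries in {+1, -1}."""
    H = np.array([[1]])
    while H.shape[0] < order:
        H = np.block([[H, H], [H, -H]])
    return H

def dmd_masks(order: int) -> np.ndarray:
    H_pos = (sylvester_hadamard(order) + 1) // 2   # map -1 -> 0: {0, 1} masks
    H_neg = 1 - H_pos                              # complement masks
    # Interleave each mask with its complement; downstream, the Hadamard
    # measurement for row i is recovered as y[2*i] - y[2*i + 1].
    masks = np.empty((2 * order, order), dtype=int)
    masks[0::2] = H_pos
    masks[1::2] = H_neg
    return masks

masks = dmd_masks(8)
# Each mask/complement pair switches on every micromirror exactly once,
# so all pairs open the same number of mirrors in total.
print(np.all(masks[0::2] + masks[1::2] == 1))  # True
```

Splitting `masks` into its rows then yields the individual DMD patterns, matching step (3) of the quote.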
“…These bilinear sensing models are related to computational sensing applications in which the unknowns g are associated with positive gains in a sensor array, while p random matrix instances can be applied to a source x by means of a suitable, typically programmable medium. In particular, the setup in (1.1) matches compressive imaging configurations [15,16,17,6,18] with an important difference in that the absence of a priori structure on (x, g) in Def. 1.1 implies an over-Nyquist sampling regime with respect to n, i.e., exceeding the number of unknowns as mp ≥ n + m. When the effect of g is critical, i.e., assuming diag(g) ≈ I_m would lead to an inaccurate recovery of x, finding solutions to (1.1) in (x, g) justifies a possibly over-Nyquist sampling regime (that is, mp > n) as long as both quantities can be recovered accurately (e.g., as an on-line calibration modality).…”
Section: Sensing Models
confidence: 99%
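A minimal numerical instance of the bilinear model described in this quote, with illustrative sizes chosen to satisfy the mp ≥ n + m counting condition (all names are ours, not the paper's):

```python
# Toy instance of the bilinear sensing model: y_p = diag(g) A_p x, where
# x is the unknown source, g the unknown positive sensor gains, and the
# A_p are p random modulation matrices. Sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 32, 8, 6            # source size, sensor size, number of patterns
assert m * p >= n + m         # over-Nyquist condition from the quote: 48 >= 40

x = rng.random(n)             # unknown source
g = 1 + 0.1 * rng.random(m)   # unknown positive gains (close to 1)

A = rng.standard_normal((p, m, n))
y = np.stack([np.diag(g) @ A[k] @ x for k in range(p)])  # mp measurements

print(y.shape)  # (6, 8): mp = 48 scalar measurements for n + m = 40 unknowns
```

The point of the counting condition is visible here: the mp = 48 scalar measurements exceed the n + m = 40 unknowns in (x, g), which is what leaves room for joint (on-line) calibration and recovery.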
“…Notice that, using the linear random operator A and its adjoint A* introduced in (A.18), using the developments of gradients obtained in Sec. 4 and (4.6), and recalling (to maintain a lighter notation) that $\gamma_i = c_i^\top B \beta$ and $g_i = c_i^\top B b$, we observe that

$$\langle \nabla_\zeta f_s(\zeta, \beta), u \rangle = \frac{1}{mp} \sum_{i,l} \big( \gamma_i\, \zeta^\top Z^\top a_{i,l} - g_i\, z^\top Z^\top a_{i,l} \big)\, \gamma_i\, a_{i,l}^\top Z u = \frac{1}{mp} \sum_{i,l} \gamma_i^2\, (\zeta - z)^\top Z^\top a_{i,l} a_{i,l}^\top Z u + \frac{1}{mp} \sum_{i,l} \gamma_i (\gamma_i - g_i)\, z^\top Z^\top a_{i,l} a_{i,l}^\top Z u.$$

By rearranging its terms, the last expression can be further developed on $\mathcal{D}^s_{\kappa,\rho}$ as

$$\frac{1}{mp} \sum_{i,l} \gamma_i^2\, (\zeta - z)^\top Z^\top a_{i,l} a_{i,l}^\top Z u + \frac{1}{mp} \sum_{i,l} \gamma_i (\gamma_i - g_i)\, z^\top Z^\top a_{i,l} a_{i,l}^\top Z u \le (1 + \rho)^2 \|\zeta - z\| \Big\| \frac{1}{mp} \sum_{i,l} Z^\top a_{i,l} a_{i,l}^\top Z \Big\| + \Big( \frac{1}{mp} \sum_{i,l} (\gamma_i - g_i)^2 \Big)^{1/2} \Big( \frac{1}{mp} \sum_{i,l} z^\top Z^\top a_{i,l} a_{i,l}^\top Z z \Big)^{1/2},$$

where the second term has been bounded by the Cauchy-Schwarz inequality.…”
Section: Proofs on the Convergence Guarantees of the Descent Algorithm
confidence: 99%