2018
DOI: 10.1093/imaiai/iay004
Through the haze: a non-convex approach to blind gain calibration for linear random sensing models

Abstract: Computational sensing strategies often suffer from calibration errors in the physical implementation of their ideal sensing models. Such uncertainties are typically addressed by using multiple, accurately chosen training signals to recover the missing information on the sensing model, an approach that can be resource-consuming and cumbersome. Conversely, blind calibration does not employ any training signal, but corresponds to a bilinear inverse problem whose algorithmic solution is an open issue. We here addr…

Cited by 19 publications (34 citation statements)
References 43 publications (120 reference statements)
“…It is not obvious that the unconstrained gradient descent defined in iterations (8) and the corresponding notion of basin of attraction are suitable to perform the constrained minimization (7). In fact, we show in this paper (essentially through Lemma 3.1) that the global minimum of constrained minimization (7) has a basin of attraction.…”
Section: Basin of Attraction and Descent Algorithms (mentioning)
confidence: 76%
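The excerpt above contrasts unconstrained gradient iterations with a constrained minimization that has a basin of attraction around its global minimum. A minimal sketch of that idea, using projected gradient descent on a toy objective — the objective, constraint set (the unit sphere), and step size below are illustrative assumptions, not the cited paper's actual model:

```python
import numpy as np

def project_unit_sphere(x):
    # Projection onto the unit sphere {x : ||x|| = 1}.
    return x / np.linalg.norm(x)

def projected_gradient_descent(grad, x0, step=0.1, iters=200):
    # Constrained analogue of plain gradient iterations:
    # take a gradient step, then project back onto the constraint set.
    x = project_unit_sphere(x0)
    for _ in range(iters):
        x = project_unit_sphere(x - step * grad(x))
    return x

# Toy objective f(x) = ||x - t||^2 restricted to the sphere; its
# constrained global minimum is t / ||t|| = [0.6, 0.8].
t = np.array([3.0, 4.0])
grad = lambda x: 2.0 * (x - t)

# Starting inside the basin of attraction (same hemisphere as t),
# the iterates converge to the constrained minimizer.
x_star = projected_gradient_descent(grad, x0=np.array([1.0, 0.0]))
print(np.round(x_star, 3))  # close to [0.6, 0.8]
```

Initializations far outside the basin (e.g. near the antipodal point of t) need not converge to the global minimizer, which is why basin-of-attraction guarantees matter for this kind of descent.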
“…where E = ℝ^{k(d+1)} or E = Θ_{k,ε} and g(θ) = f(φ(θ)). Note that when E = Θ_{k,ε}, performing minimization (7) allows one to recover the minima of the ideal minimization (2), yielding stable recovery guarantees under a RIP assumption. Hence we are particularly interested in this case.…”
Section: Parametrization of the Model Set Σ (mentioning)
confidence: 99%
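The excerpt above invokes a restricted isometry property (RIP) assumption for its stable recovery guarantees. A hedged numerical sketch of what such a property looks like — a normalized Gaussian matrix approximately preserving the norm of sparse vectors; the matrix sizes and sparsity level are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 80, 200, 5          # measurements, ambient dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # column-normalized Gaussian matrix

ratios = []
for _ in range(500):
    # Draw a random k-sparse unit vector.
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    x /= np.linalg.norm(x)
    ratios.append(np.linalg.norm(A @ x) ** 2)

# If A satisfies a RIP on k-sparse vectors with constant delta,
# then ||Ax||^2 stays within [1 - delta, 1 + delta] for all such x.
delta = max(abs(min(ratios) - 1.0), abs(max(ratios) - 1.0))
print(f"empirical RIP-type deviation on sampled vectors: {delta:.2f}")
```

Sampling random sparse vectors only lower-bounds the true RIP constant (which is a supremum over all k-sparse vectors), but it illustrates the concentration behavior that recovery guarantees of this kind rely on.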