1999
DOI: 10.1109/18.771159

A new look at entropy for solving linear inverse problems

Abstract: Entropy-based methods are widely used for solving inverse problems, particularly when the solution is known to be positive. Here, we address linear ill-posed and noisy inverse problems of the form z = Ax + n with a general convex constraint x ∈ X, where X is a convex set. Although projective methods are well adapted to this context, we study alternative methods which rely highly on some "information-based" criteria. Our goal is to clarify the role played by entropy in this…

Cited by 43 publications (44 citation statements)
References 39 publications
“…However, large deviation results (Ellis, 1985; Le Besnerais et al., 1999) of pdfs strongly support this particular form on theoretical grounds. Indeed, the exponential of minus the relative entropy is known to be a measure of the deviation around the prior law, in the large deviation limit.…”
Section: Level-one primal cost function
confidence: 99%
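
For context, the large-deviation statement this excerpt appeals to can be sketched as a Sanov-type result. The notation below (prior law p, candidate law q, empirical measure \hat{q}_n) is generic and illustrative, not taken from the cited papers:

```latex
% For n i.i.d. draws from the prior law p, the empirical measure \hat{q}_n
% concentrates around p, and the probability of observing a law q decays as
\[
  P\bigl(\hat{q}_n \approx q\bigr) \;\asymp\; \exp\bigl(-n\, D(q \,\|\, p)\bigr),
  \qquad
  D(q \,\|\, p) \;=\; \sum_i q_i \log \frac{q_i}{p_i},
\]
% so $\exp(-D(q\,\|\,p))$ measures the deviation of $q$ around the prior law $p$
% in the large-deviation limit, as the excerpt states.
```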
“…Indeed, it can be shown that when we optimize a convex criterion subject to the data constraints, we are optimizing the entropy of some quantity related to the unknowns, and vice versa. However, as we have mentioned, in this approach the data and the model are basically assumed to be exact, even if some extensions of the approach give the possibility to account for the errors [9]. In the next section, we see how the Bayesian approach can naturally account for both uncertainties on the data and on the unknown parameters x.…”
Section: Maximum entropy in the mean
confidence: 99%
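
As a purely illustrative rendering of the exact-data setting this excerpt describes, the sketch below minimizes a relative-entropy criterion subject to the hard constraint Ax = z and positivity. The operator A, data z, prior guess m, and the choice of optimizer are all invented toy quantities, not the cited papers' construction:

```python
# Hypothetical sketch: "maximum entropy" reconstruction with exact data constraints.
# Minimize the relative entropy of x w.r.t. a prior guess m, subject to A x = z.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))          # under-determined forward operator (toy)
x_true = rng.uniform(0.5, 2.0, size=8)   # positive "true" object
z = A @ x_true                           # exact (noiseless) data
m = np.ones(8)                           # flat prior guess

def relative_entropy(x):
    # Kullback-Leibler divergence D(x || m) for positive measures
    return np.sum(x * np.log(x / m) - x + m)

res = minimize(
    relative_entropy,
    x0=m.copy(),
    method="SLSQP",
    bounds=[(1e-9, None)] * 8,                                   # positivity
    constraints=[{"type": "eq", "fun": lambda x: A @ x - z}],    # exact data fit
)
print(res.x)   # entropy-minimizing reconstruction consistent with z = A x
```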
“…If, on the other hand, the problem is ill-posed (e.g., X is not invertible), then the solution is not unique, and a combination of the two methods above (Eqs. 16 and 18) can be used. This yields the regularization method, which consists of finding E such that Eq. (20) is achieved (see, for example, Donoho et al. [25] for a nice discussion of regularization within the ME formulation). Traditionally, the positive penalization parameter D is specified to favor small-sized reconstructions, meaning that out of all possible reconstructions with a given discrepancy, those with the smallest norms are chosen.…”
Section: The general case
confidence: 99%
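
A minimal sketch of the regularization idea the excerpt describes, using the simplest quadratic (Tikhonov/ridge) penalty as a stand-in: among reconstructions with a given discrepancy, the penalized criterion favors those with small norm. The matrix A, data z, and the weight lambda_reg are invented for illustration and are not the cited papers' notation:

```python
# Trade off data discrepancy against a penalty on the size of the reconstruction:
# minimize ||A x - z||^2 + lambda_reg * ||x||^2, solved via the normal equations.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))                 # ill-posed: more unknowns than data
x_true = np.zeros(50)
x_true[[5, 17, 33]] = 1.0
z = A @ x_true + 0.01 * rng.standard_normal(20)   # noisy data

lambda_reg = 0.1                                  # positive penalization weight
x_hat = np.linalg.solve(A.T @ A + lambda_reg * np.eye(50), A.T @ z)
print(np.linalg.norm(A @ x_hat - z), np.linalg.norm(x_hat))
```

Increasing lambda_reg shrinks the norm of the reconstruction at the cost of a larger discrepancy, which is exactly the trade-off the excerpt attributes to the penalization parameter.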
“…See also Gzyl and Velásquez [2], which builds upon Golan and Gzyl [18], where the synthesis was first proposed. If, in addition, the data are ill-conditioned, one often has to resort to the class of regularization methods (e.g., Hoerl and Kennard [19], O'Sullivan [20], Breiman [21], Tibshirani [22], Titterington [23], Donoho et al. [24], Besnerais et al. [25]). A reference for regularization in statistics is Bickel and Li [26].…”
Section: Introduction
confidence: 99%