2019
DOI: 10.1137/18m1174076

Revisiting Maximum-A-Posteriori Estimation in Log-Concave Models

Abstract: Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in imaging sciences, where high dimensionality is often addressed by using Bayesian models that are log-concave and whose posterior mode can be computed efficiently by convex optimisation. However, despite its success and wide adoption, MAP estimation is not theoretically well understood yet. In particular, the prevalent view in the community is that MAP estimation is not proper Bayesian estimation in the sense of Bayesian decision theory…

Cited by 22 publications (22 citation statements) · References 42 publications (86 reference statements)
“…[tail of a pseudocode listing: x_{t+1} := x_t + (a_t − x_t)/t; end for] …estimation as an optimization problem. In some cases the MAP problem (3) is a convex problem [4], and it can then be solved efficiently. In addition, in probabilistic models, we usually study the MAP problem in high dimensions.…”
Section: Related Work (mentioning)
Confidence: 99%
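The excerpt above poses MAP estimation as an optimization problem that becomes tractable when it is convex. As an illustrative sketch, not the cited paper's model: with a Gaussian likelihood y = Ax + noise and a Laplace prior on x, the negative log-posterior is an l1-regularised least-squares objective, which proximal gradient descent (ISTA) minimises. All names, dimensions, and parameter values below are assumptions.

```python
import numpy as np

# Hedged sketch: MAP estimation as convex optimisation.
# Negative log-posterior:  0.5*||A x - y||^2 + lam*||x||_1
# minimised by ISTA (gradient step on the smooth term, then the
# proximal map of the l1 prior, i.e. soft-thresholding).

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def map_ista(A, y, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)   # prox of the l1 prior
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]                # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_map = map_ista(A, y, lam=0.1)
```

Because the objective is convex, ISTA converges to the global posterior mode regardless of initialisation, which is the computational appeal the excerpt alludes to.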
“…It plays an essential role in various practical scenarios where there are hidden variables or uncertainty. Applications include image processing [3], [4], text analysis [5]–[7], recommender systems [8], and protein design and protein side-chain prediction problems [9], [10]. Adding prior probability information reduces overdependence on the observed data for parameter estimation; MAP estimation can be seen as a regularization of Maximum Likelihood Estimation (MLE), and it copes well with limited training data.…”
Section: Introduction (mentioning)
Confidence: 99%
“…In the rest of this article we assume f(x) and g_y(x) are closed convex functions that are not necessarily differentiable, and the objective functional (3) is computationally tractable with respect to x given the value of µ. For further details about MAP estimation see, e.g., [18]. A main computational advantage of the MAP estimator (3) is that it can be computed very efficiently, even in high dimensions, by using convex optimisation algorithms (e.g.…”
Section: Problem Formulation (mentioning)
Confidence: 99%
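The quoted formulation minimises f(x) + g_y(x) with closed convex, possibly nondifferentiable, terms. A minimal sketch of why such MAP estimators stay cheap in high dimensions, under the assumed toy choices g_y(x) = 0.5·||x − y||² and f(x) = µ·||x||₁ (not necessarily the article's model): the minimiser is exactly the proximal operator of f evaluated at y, a componentwise soft-threshold, so the mode is available in closed form at the cost of one vector pass.

```python
import numpy as np

# Hedged sketch: for the assumed denoising model
#   argmin_x { mu*||x||_1 + 0.5*||x - y||^2 }
# the MAP estimate is prox_{mu*||.||_1}(y), i.e. soft-thresholding.

def map_denoise_l1(y, mu):
    return np.sign(y) * np.maximum(np.abs(y) - mu, 0.0)

y = np.array([3.0, 0.2, -1.5, 0.05])
x_map = map_denoise_l1(y, mu=0.5)   # → [2.5, 0., -1., 0.]
```

Each coordinate is handled independently, so the cost is linear in the dimension; this is the kind of scalability the excerpt attributes to convex MAP estimation.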