2018
DOI: 10.1007/s10107-018-1241-0

Maximum a posteriori estimators as a limit of Bayes estimators

Abstract: Maximum a posteriori and Bayes estimators are two common methods of point estimation in Bayesian statistics. It is commonly accepted that maximum a posteriori estimators are a limiting case of Bayes estimators with 0-1 loss. In this paper, we provide a counterexample which shows that in general this claim is false. We then correct the claim by providing a level-set condition on posterior densities under which the result holds. Since both estimators are defined in terms of optimization problems, the tools of…
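
The limiting argument discussed in the abstract can be made concrete numerically. The sketch below is my own illustration, not material from the article: the Gaussian-mixture posterior and the grid are assumed purely for demonstration. It computes, for a toy one-dimensional posterior, the Bayes estimator under the 0-1 loss on an ε-ball and shows it approaching the posterior mode as ε shrinks.

```python
# Minimal numerical sketch (not from the paper): under the 0-1 loss
# L_eps(u, x) = 1 - 1{|x - u| < eps}, the Bayes estimator maximises the
# posterior mass of an eps-ball around u; for this well-behaved toy posterior
# it approaches the MAP estimator (posterior mode) as eps -> 0.
import numpy as np

grid = np.linspace(-5, 5, 20001)
dx = grid[1] - grid[0]

# toy posterior: unequal mixture of two Gaussians, mode near x = -1
post = 0.7 * np.exp(-0.5 * (grid + 1) ** 2) + 0.3 * np.exp(-0.5 * (grid - 2) ** 2)
post /= post.sum() * dx                      # normalise on the grid

map_est = grid[np.argmax(post)]              # posterior mode (MAP estimate)

for eps in (1.0, 0.5, 0.1, 0.01):
    # posterior mass of the ball (u - eps, u + eps) for every candidate u
    half = int(round(eps / dx))
    kernel = np.ones(2 * half + 1)
    ball_mass = np.convolve(post, kernel, mode="same") * dx
    bayes_est = grid[np.argmax(ball_mass)]   # minimiser of expected 0-1 loss
    print(f"eps={eps:5.2f}  Bayes estimator={bayes_est:+.3f}  MAP estimator={map_est:+.3f}")
```

For posteriors that violate the paper's level-set condition this convergence can fail, which is exactly what the counterexample mentioned in the abstract demonstrates.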

Cited by 98 publications (48 citation statements)
References 15 publications
“…This optimization is called maximum a posteriori (MAP) estimation. [68][69][70] (2) Instance-based, which is the standard kNN approach, in which a new case is classified by a majority vote of its neighbors, the case being assigned to the class that is most common among its k nearest neighbors as measured by a distance function. If k is 1, then the case is simply assigned to the class of its nearest neighbor.…”
Section: Discussion (mentioning)
confidence: 99%
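
As a concrete reading of the kNN rule described in this excerpt, here is a minimal sketch; the data, distance choice, and the helper name knn_predict are illustrative assumptions, not from the citing paper.

```python
# Minimal sketch of the instance-based kNN rule from the excerpt: a new case
# is assigned to the class most common among its k nearest neighbours under a
# distance function (k = 1 reduces to the nearest neighbour's class).
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    """Classify x_new by a majority vote of its k nearest training points."""
    dists = np.linalg.norm(X_train - x_new, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]                   # indices of the k closest points
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]                 # majority class

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [1.1, 0.9]])
y = np.array(["a", "a", "b", "b"])
print(knn_predict(X, y, np.array([0.9, 1.0]), k=3))   # -> "b"
```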
“…The analysis is further complicated by the fact that p(x|y) may have an infinite number of maximisers in disconnected areas of the parameter space. As mentioned previously, the derivation of MAP estimation as an approximation arising from the degenerate loss Lε(u, x) = 1 − 1{‖x − u‖ < ε} with ε → 0 also fails in this case [8]. Interestingly, MMSE estimation may also struggle here given that models in this class may not have a posterior mean [38].…”
Section: Models With Heavy Tails (mentioning)
confidence: 92%
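
The remark that such models may lack a posterior mean can be illustrated with a quick sketch of my own; the standard Cauchy density stands in for a heavy-tailed posterior and is not taken from the citing paper.

```python
# Small illustration of why MMSE estimation can break down for heavy-tailed
# models: a standard Cauchy "posterior" has no mean, so empirical averages of
# samples do not settle down as the sample size grows.
import numpy as np

rng = np.random.default_rng(1)
samples = rng.standard_cauchy(1_000_000)
for n in (10**3, 10**4, 10**5, 10**6):
    print(f"n={n:>7}  running mean = {samples[:n].mean():+.3f}")

# The running mean keeps jumping instead of converging, unlike a robust
# location summary such as the median, which stays near 0:
print("median:", np.median(samples))
```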
“…Currently the predominant view is that MAP estimation is not formal Bayesian estimation in the decision-theoretic sense postulated by Definition 1.1 because it does not generally minimise a known expected loss. The prevailing interpretation is that MAP estimation is in fact an approximation arising from the degenerate loss Lε(u, x) = 1 − 1{‖x − u‖ < ε} with ε → 0 [38] (this derivation holds for all log-concave models, but is not generally true [8]). However, this asymptotic derivation does not lead to a proper Bayesian estimator.…”
Section: Introduction (mentioning)
confidence: 99%
“…The methods stated here, like the least squares estimator (LSE), are commonly used for estimating parameters in linear regression models, assuming normally distributed observation errors and estimates that are linear combinations of the observed values with minimum error variance. The maximum likelihood estimate (MLE), which yields an unbiased estimator, the method of moments, Bayesian estimators, minimum variance estimators [6,7] and the Cramer–Rao lower bound [8], used to obtain the uniformly minimum variance unbiased estimator (UMVUE), are some of the techniques employed when estimating the unknown parameters.…”
Section: Methods (mentioning)
confidence: 99%
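
As a small illustration of the least squares estimator mentioned in this excerpt, the sketch below is my own; the simulated data and model are assumptions for demonstration, not from the citing paper. Under Gaussian errors the ordinary least squares solution of the normal equations coincides with the maximum likelihood estimate of the regression coefficients.

```python
# Minimal sketch: for a linear model y = X beta + e with Gaussian errors,
# the least squares solution is also the Gaussian maximum likelihood estimate.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
beta_true = np.array([1.5, -2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)        # Gaussian observation errors

# OLS / Gaussian MLE: least squares solution of X beta ~= y
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated coefficients:", beta_hat)               # close to [1.5, -2.0]
```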