1996
DOI: 10.1287/mnsc.42.10.1420

Maximum Entropy Aggregation of Expert Predictions

Abstract: This paper presents a maximum entropy framework for the aggregation of expert opinions, where the opinions concern the prediction of the outcome of an uncertain event. The event to be predicted and the individual predictions rendered are assumed to be discrete random variables. A measure of expert competence is defined using a distance metric between the actual outcome of the event and each expert's predicted outcome. Following Levy and Delic (Levy, W. B., H. Delic. 1994. Maximum entropy aggregation of indiv…

Cited by 48 publications (26 citation statements)
References 29 publications
“…First, it draws heavily from approaches to maximizing the entropy of a discrete probability distribution given a set of hard constraints on that distribution, such as knowing some expected values over the distribution with certainty (Jaynes, 1957a, 1957b; Jaynes and Bretthorst, 2003). For example, Myung, Ramamoorti, and Bailey (1996) use a maximum entropy approach to aggregate the opinions of experts about the moments of a probability distribution, with the analysis running in exponential time in the number of experts.…”
Section: The Maximum Entropy/Minimum Penalty Fusion Methods Overview
confidence: 99%
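The Jaynes-style constrained maximization described in that excerpt can be sketched with the classic dice example: find the distribution over the faces 1–6 with maximum entropy subject to a fixed mean. This is an illustrative sketch only, not the aggregation method of the paper under discussion; the function name `maxent_dist` and the bisection bounds are my own choices.

```python
import math

def maxent_dist(values, target_mean, lo=-5.0, hi=5.0, iters=100):
    """Maximum-entropy distribution over a finite support with a fixed mean.

    For a single mean constraint the solution is exponential-family,
    p_i proportional to exp(lam * x_i); since the implied mean is monotone
    in lam, we can find lam by bisection.
    """
    values = list(values)

    def mean_for(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# Jaynes's dice example: faces 1..6 constrained to have mean 4.5.
p = maxent_dist(range(1, 7), 4.5)
```

With the mean constrained above the uniform value of 3.5, the resulting probabilities increase with the face value, as the exponential-family form requires.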
“…* It should be noted that MEMP (and the other methods in this paper) have some similarities to expert opinion aggregation mechanisms that generate a fused estimate of an event probability from a number of experts' assessments of that probability. We have already noted the maximum entropy method of Myung, Ramamoorti, and Bailey (1996); more broadly, a great deal of work has been done in this area. As just one example, Satopää et al. (2014) provide a review of aggregation approaches as part of developing a logistic regression approach to aggregation that yields additional certainty about a prediction when multiple experts consistently arrive at similar estimates.…”
Section: The Maximum Entropy/Minimum Penalty Fusion Methods Overview
confidence: 99%
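Log-odds pooling is one simple mechanical aggregation rule of the kind surveyed in that literature: average the experts' probabilities on the logit scale, then map back. The sketch below is illustrative; the names `logodds_pool`, `logit`, and `sigmoid` are my own, and the extremizing parameter `a` loosely mirrors the recalibration idea in Satopää et al. (2014), not their exact estimator.

```python
import math

def logit(p):
    """Map a probability in (0, 1) to the log-odds scale."""
    return math.log(p / (1.0 - p))

def sigmoid(x):
    """Inverse of logit: map log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def logodds_pool(probs, a=1.0):
    """Pool expert probabilities by averaging their log-odds.

    With a = 1 this is plain log-odds averaging; a > 1 extremizes the
    consensus, pushing agreeing experts' pooled estimate toward 0 or 1.
    """
    avg = sum(logit(p) for p in probs) / len(probs)
    return sigmoid(a * avg)

pooled = logodds_pool([0.6, 0.7, 0.8])
```

The pooled estimate always lies between the most and least confident expert when `a = 1`, and becomes strictly more extreme as `a` grows.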
“…In the latter case the issue has been the subject of research for more than 200 years and has resulted in well-known paradoxes, such as rank reversals in the Borda count, the Condorcet paradox of non-transitivity, and Arrow's impossibility theorem [4]. Risk management methods for the aggregation of expert assessments have been addressed by Beinat et al. [3], Cooke [9], DeWispelare et al. [10], Sandri et al. [20], and Myung et al. [18]. The procedures proposed in the literature to aggregate expert assessments can be classified as behavioral or mechanical [10].…”
Section: ORM for Multi-Expert Situations
confidence: 99%
“…One of these approaches is based on maximum entropy modeling (see [17], [19]). Maximum entropy is a versatile modeling technique that makes it easy to integrate various constraints, such as correlation between experts, the reliability of these experts, etc.…”
Section: Introduction
confidence: 99%
“…While the main idea is similar, our model differs from [19] in the formulation of the problem (we focus on quantities that are relevant to classification problems and can easily be computed for classifiers: success rate, degree of agreement, etc.) and in the way the individual opinions are aggregated. Furthermore, we also tackle the problem of incompatible constraints; that is, when there is no feasible solution to the problem, a situation that is not mentioned in [19].…”
Section: Introduction
confidence: 99%
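As a trivial illustration of the infeasibility issue raised in that last excerpt: a single mean constraint over a finite support admits some probability distribution exactly when the target mean lies within the range of the support. The helper name below is hypothetical, chosen for this sketch.

```python
def mean_constraint_feasible(values, target_mean):
    """Check whether the constraint E[X] = target_mean can be satisfied
    by any probability distribution over the finite support `values`.

    The achievable means are exactly the interval [min(values), max(values)],
    since any value in that range is a convex combination of the endpoints.
    """
    values = list(values)
    return min(values) <= target_mean <= max(values)

mean_constraint_feasible(range(1, 7), 4.5)  # True: 4.5 lies in [1, 6]
mean_constraint_feasible(range(1, 7), 7.0)  # False: no distribution on 1..6 has mean 7
```

With several simultaneous constraints the check is no longer a simple interval test, which is why the cited work treats infeasibility as a problem in its own right.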