2021
DOI: 10.1016/j.ejor.2020.07.052

Maximum entropy distributions with quantile information

Abstract highlights:
- Explore maximum entropy minimum elaborations of simpler maximum entropy models.
- Compare maximum entropy priors with parametric models fitted to elicited quantiles.
- Measure the uncertainty and disagreement of forecasters based on their probability forecasts.
- Include the profit-maximizing quantile in the newsvendor's demand distribution.

Cited by 10 publications (8 citation statements); references 33 publications.
“…Wang and Lahiri (2021) avoided the discrete entropy and applied the information framework of Shoja and Soofi (2017), using entropies of beta and triangular distributions fitted to the histograms of the subjective probabilities. Bajgiran et al. (2021) developed piecewise uniform ME models that use only the quantiles given by the forecasters' subjective probabilities. Quantile constraints can be represented in terms of the following expectations:…”
Section: Uncertainty and Disagreement of Forecasters
Confidence: 99%
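The piecewise uniform ME model cited above can be sketched numerically: subject only to quantile constraints, the entropy-maximizing density is flat within each inter-quantile bin, with height equal to the bin probability divided by the bin width. The quantile values below are hypothetical illustration numbers, not taken from the paper.

```python
import numpy as np

def piecewise_uniform_me_pdf(quantiles, cum_probs):
    """Maximum entropy density subject to quantile constraints.

    Given support points q_0 < ... < q_m and their cumulative probabilities
    alpha_0 = 0 < ... < alpha_m = 1, the ME density is uniform on each bin
    [q_{k-1}, q_k] with height (alpha_k - alpha_{k-1}) / (q_k - q_{k-1}).
    Returns a callable evaluating the density.
    """
    q = np.asarray(quantiles, dtype=float)
    a = np.asarray(cum_probs, dtype=float)
    heights = np.diff(a) / np.diff(q)  # bin probability / bin width

    def pdf(x):
        x = np.asarray(x, dtype=float)
        # locate the bin of each evaluation point
        idx = np.clip(np.searchsorted(q, x, side="right") - 1,
                      0, len(heights) - 1)
        inside = (x >= q[0]) & (x <= q[-1])
        return np.where(inside, heights[idx], 0.0)

    return pdf

# Hypothetical elicited quartiles of a demand distribution
pdf = piecewise_uniform_me_pdf([0.0, 2.0, 5.0, 10.0], [0.0, 0.25, 0.5, 1.0])
```

By construction the heights integrate to one, so the result is a proper density histogram over unequal bins.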
“…This piecewise uniform PDF is a density histogram with unequal bins $B_k$. Bajgiran et al. (2021) observed that, unlike the ME with moment constraints, the ME model subject to the mixtures of the averages $\bar{p}_k = \sum_{i=1}^{n} p_i\, p_{ik}$ of the bin probabilities…”
Section: Uncertainty and Disagreement of Forecasters
Confidence: 99%
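The averaging of bin probabilities in the quote, and the entropy of the resulting density histogram over unequal bins, can be illustrated with a minimal sketch. The forecaster probabilities, weights, and bin widths below are hypothetical, chosen only to show the computation.

```python
import numpy as np

# Hypothetical subjective bin probabilities of n = 3 forecasters over K = 4 bins
P = np.array([[0.1, 0.4, 0.4, 0.1],
              [0.2, 0.3, 0.3, 0.2],
              [0.0, 0.5, 0.4, 0.1]])
w = np.full(3, 1.0 / 3.0)                    # forecaster weights p_i (equal here)
bin_widths = np.array([2.0, 3.0, 3.0, 2.0])  # |B_k|, unequal bins

p_bar = w @ P  # averaged bin probabilities p̄_k = sum_i p_i p_ik

# Differential entropy of the piecewise uniform (density histogram) model:
# H = sum_k p̄_k * log(|B_k| / p̄_k), with the convention 0 log 0 = 0
mask = p_bar > 0
H = np.sum(p_bar[mask] * np.log(bin_widths[mask] / p_bar[mask]))
```

The averaged probabilities still sum to one, and the entropy depends on both the mixture weights and the unequal bin widths, which is what distinguishes this piecewise uniform model from an equal-bin histogram.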
“…(1996) studied elaboration by including a prior distribution for θ and embedding $F_0$ in the Dirichlet process prior, where θ represents the degree of belief (dispersion) parameter and $F_0$ is obtained in the limit $\theta \to \infty$. Bajgiran et al. (2021) derived the minimum information elaboration by including an additional constraint to a maximum entropy model, where θ is the Lagrange multiplier and $\theta_0 = 0$.…”
Section: Introduction
Confidence: 99%
“…This quantity measures the average surprisal of the random vector [17]. Finding the distribution that maximizes the continuous entropy under given restrictions is a well-studied problem; some examples can be found in [1,4,29]. Given the variances and some of the covariances between a collection of variables, the GMRF distribution (over the graph whose nodes are adjacent if and only if the covariance is specified) maximizes the continuous entropy, since the resulting covariance matrix maximizes the determinant [6].…”
Section: Introduction
Confidence: 99%
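The determinant-maximization point in this last quote rests on a standard fact: among all densities with a fixed covariance, the Gaussian attains the maximum differential entropy, and that entropy grows with the log-determinant of the covariance matrix. A minimal sketch (the 2×2 covariance values are illustrative assumptions):

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a d-variate Gaussian:
    H = 0.5 * (d * log(2*pi*e) + log det(cov))."""
    d = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)  # stable log-determinant
    assert sign > 0, "covariance must be positive definite"
    return 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)

# Fixing unit variances, adding correlation shrinks the determinant
# and hence the entropy relative to independent components.
H_indep = gaussian_entropy(np.eye(2))
H_corr = gaussian_entropy(np.array([[1.0, 0.8], [0.8, 1.0]]))
```

This is why, when only some covariances are constrained, the entropy-maximizing choice of the free covariances is the one that maximizes the determinant, which yields the GMRF structure described in the quote.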