2015
DOI: 10.1007/978-3-319-23699-5_1

Approximate Bayesian inference for simulation and optimization

Cited by 5 publications (4 citation statements)
References 23 publications
“…The main result of this section is that, by following offline KG, we are guaranteed to explore every pre-decision state (and, in some cases, every post-decision state) infinitely often. By itself, this fact is not equivalent to statistical consistency of the approximation V^n, which is generally quite difficult to show for approximate Bayesian models (Ryzhov 2015). However, it does provide insight into the behaviour of the algorithm; we see that the policy is driven to explore a large part of the state space and does not get stuck.…”
Section: Asymptotic Analysis of Offline KG (mentioning)
confidence: 99%
“…We would like to retain the multivariate normal distribution in order to use the power of correlated beliefs. Since this is not possible using standard Bayesian updating, we use the methods of approximate Bayesian inference (Ryzhov 2015). Essentially, if the posterior distribution is not conjugate with the prior, we replace it by a simpler distribution that does belong to our chosen family (multivariate normal), and optimally approximates the true, non‐normal posterior.…”
Section: Demand Model (mentioning)
confidence: 99%
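The projection described in this excerpt — replacing a non-conjugate posterior with the member of a chosen family that best approximates it — can be sketched in one dimension by matching the first two moments of the true posterior. The prior, likelihood, and grid settings below are illustrative assumptions, not taken from the cited work:

```python
# Moment-matching sketch: project a non-conjugate posterior onto a normal
# distribution by matching its mean and variance. The prior/likelihood
# choices here are hypothetical, for illustration only.
import math

def moment_match(prior_mu, prior_var, log_lik, grid_width=8.0, n=2001):
    """Approximate the posterior p(theta) ∝ N(theta; mu, var) * lik(theta)
    by a normal with the same mean and variance (numerical integration)."""
    s = math.sqrt(prior_var)
    lo, hi = prior_mu - grid_width * s, prior_mu + grid_width * s
    h = (hi - lo) / (n - 1)
    grid = [lo + i * h for i in range(n)]
    # Unnormalized log posterior on a uniform grid.
    logw = [-(t - prior_mu) ** 2 / (2 * prior_var) + log_lik(t) for t in grid]
    m = max(logw)  # subtract the max before exponentiating, for stability
    w = [math.exp(x - m) for x in logw]
    z = sum(w)
    mean = sum(wi * t for wi, t in zip(w, grid)) / z
    var = sum(wi * (t - mean) ** 2 for wi, t in zip(w, grid)) / z
    return mean, var

# Example: N(0, 1) prior and one Bernoulli "success" observed through a
# logistic link -- the exact posterior is not normal, so we project it.
def log_lik(theta):
    return -math.log1p(math.exp(-theta))  # log sigmoid(theta)

mu, var = moment_match(0.0, 1.0, log_lik)
```

The observed success tilts the belief upward (posterior mean above 0) and slightly tightens it (posterior variance below 1), which the normal approximation preserves by construction.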
“…However, there is no analogous model for logistic regression, making it difficult to represent beliefs about logistic demand curves. We approach this problem using approximate Bayesian inference (Ryzhov 2015), and create a new learning mechanism that allows us to maintain and update a multivariate normal belief on the regression coefficients using rigorous statistical approximations. We then develop a "Bayes-greedy" pricing strategy that optimizes an estimate of expected revenue by averaging over all possible revenue curves.…”
Section: Introduction (mentioning)
confidence: 99%
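One standard way to maintain a multivariate normal belief on logistic regression coefficients is a Laplace approximation around the MAP estimate. This is offered as a plausible sketch of the kind of approximation the excerpt describes, not necessarily the mechanism the authors derive; the data and prior below are synthetic:

```python
# Laplace-approximation sketch for Bayesian logistic regression: fit the
# MAP coefficients by Newton's method, then use the negative inverse
# Hessian as the posterior covariance. Illustrative only; not the cited
# paper's updating equations.
import numpy as np

def laplace_update(mu, cov, X, y, n_iter=25):
    """Return a normal (Laplace) approximation N(beta_map, post_cov) to
    the posterior of logistic regression with prior N(mu, cov)."""
    prec = np.linalg.inv(cov)
    beta = mu.copy()
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))          # predicted probabilities
        grad = X.T @ (y - p) - prec @ (beta - mu)    # gradient of log posterior
        W = p * (1.0 - p)                            # logistic curvature weights
        H = -(X.T * W) @ X - prec                    # Hessian of log posterior
        beta = beta - np.linalg.solve(H, grad)       # Newton step
    # Posterior covariance = inverse of the observed information at the MAP.
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    post_cov = np.linalg.inv((X.T * W) @ X + prec)
    return beta, post_cov

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_beta = np.array([1.0, -2.0])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
mu0, cov0 = np.zeros(2), np.eye(2) * 10.0
beta_map, post_cov = laplace_update(mu0, cov0, X, y)
```

Because the output is again a multivariate normal, the belief can serve as the prior for the next batch of observations, preserving the correlated-beliefs structure the excerpt emphasizes.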
“…For the opponent y, we apply (37)-(38) with x and y reversed. The updating equations are derived using the moment-matching technique (Ryzhov, 2015a). This learning model is an approximation, as the normal prior is not conjugate with the binary observation.…”
Section: Application to Competitive Online Gaming (mentioning)
confidence: 99%
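A minimal one-dimensional illustration of the moment-matching idea in this excerpt: a normal belief on a skill parameter updated after a binary win/loss observation under a probit model. The probit choice and the noise constant are illustrative assumptions, not a reconstruction of the paper's equations (37)-(38):

```python
# Moment-matched update of a normal belief from a binary observation.
# The exact posterior under a probit likelihood is skew-normal; we keep
# its exact first two moments and project back onto a normal. Sketch
# under assumed constants, not the cited paper's derivation.
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probit_update(mu, var, win, noise_var=1.0):
    """One moment-matching step: N(mu, var) prior on skill theta, and an
    observed win with probability Phi(theta / sqrt(noise_var))."""
    s = math.sqrt(var + noise_var)
    z = (mu / s) if win else (-mu / s)
    ratio = norm_pdf(z) / norm_cdf(z)          # inverse Mills ratio
    sign = 1.0 if win else -1.0
    new_mu = mu + sign * (var / s) * ratio
    new_var = var * (1.0 - (var / (var + noise_var)) * ratio * (ratio + z))
    return new_mu, new_var

mu1, var1 = probit_update(0.0, 1.0, win=True)   # belief rises after a win
```

As the excerpt notes, this is an approximation: the normal prior is not conjugate with the binary observation, but projecting the skew-normal posterior back onto a normal keeps the update in closed form and lets it be iterated.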