2014
DOI: 10.3390/e16084353
Learning Functions and Approximate Bayesian Computation Design: ABCD

Abstract: A general approach to Bayesian learning revisits some classical results, which study which functionals on a prior distribution are expected to increase, in a preposterior sense. The results are applied to information functionals of the Shannon type and to a class of functionals based on expected distance. A close connection is made between the latter and a metric embedding theory due to Schoenberg and others. For the Shannon type, there is a connection to majorization theory for distributions. A computational …

Cited by 11 publications (9 citation statements)
References 28 publications
“…To discuss this connection more precisely in our particular setting, let us consider the following definition. Then it is easy to see that any functional of the form H(ν) = H ′ (ν β ) is DoA, where H ′ denotes a DoA functional defined on some appropriate subset of the set of all probability measures on R p ; the reader is referred to [30] for a variety of examples of such functionals. Section 4.2 provides an example of this construction, with p = 1 and H ′ the variance functional.…”
Section: Uncertainty Functionals and Uncertainty Reduction
confidence: 99%
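The decreasing-on-average ("DoA") property of the variance functional mentioned in the statement above can be checked by simulation. A minimal sketch, assuming a Beta(2, 2) prior and Binomial(5, θ) data — a hypothetical conjugate model chosen for illustration, not taken from the cited work:

```python
import random

def beta_var(a, b):
    """Variance of a Beta(a, b) distribution."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

random.seed(1)
a0, b0, n = 2.0, 2.0, 5          # assumed prior parameters and sample size
prior_var = beta_var(a0, b0)     # H(prior): variance under the prior

# Preposterior expectation of the posterior variance, by Monte Carlo:
# draw theta from the prior, simulate data, and use the exact conjugate
# posterior Beta(a0 + y, b0 + n - y).
n_sim = 20000
total = 0.0
for _ in range(n_sim):
    theta = random.betavariate(a0, b0)
    y = sum(random.random() < theta for _ in range(n))
    total += beta_var(a0 + y, b0 + n - y)
expected_post_var = total / n_sim

# Variance is DoA: on average it can only decrease after observing data.
assert expected_post_var < prior_var
```

In this conjugate example the preposterior expected variance is also available in closed form; the simulation is only meant to mirror the simulation-based viewpoint of the ABCD approach.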
“…We agree that the consideration of the expected performance (or worst‐case performance as in ) for a set of possible ν would be more robust and could be more or less straightforwardly implemented in our framework. Another possibility would be to take a fully Bayesian approach and apply a simulation‐based technique such as ABCD (see and ).…”
Section: Dependence Upon Parameter Values
confidence: 99%
“…Hainy et al. 2014), which is also called the expected utility. As utility function, one would typically use convex functionals of the posterior distribution, such as the Kullback-Leibler divergence between the (uninformative) prior and the posterior distribution, to measure the additional information gained by conducting the experiment (Chaloner and Verdinelli 1995).…”
Section: Introduction
confidence: 99%
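The expected utility named in the statement above — the preposterior mean of a convex functional of the posterior, here the Kullback-Leibler divergence between prior and posterior — can be estimated by simulation. A minimal sketch for a conjugate normal model (all parameter values are assumptions for illustration, and the exact conjugate update stands in for the ABC approximation that an intractable model would require):

```python
import math
import random

def kl_normal(m1, v1, m0, v0):
    """KL( N(m1, v1) || N(m0, v0) ), closed form for univariate normals."""
    return 0.5 * (v1 / v0 + (m1 - m0) ** 2 / v0 - 1.0 + math.log(v0 / v1))

random.seed(0)
tau2, sigma2 = 4.0, 1.0                       # assumed prior / noise variances
post_var = 1.0 / (1.0 / tau2 + 1.0 / sigma2)  # conjugate posterior variance

# Expected utility U = E_y[ KL(posterior || prior) ]:
# draw theta from the prior, simulate one observation, update, accumulate KL.
n_sim = 50000
total = 0.0
for _ in range(n_sim):
    theta = random.gauss(0.0, math.sqrt(tau2))
    y = random.gauss(theta, math.sqrt(sigma2))
    post_mean = post_var * y / sigma2         # conjugate posterior mean
    total += kl_normal(post_mean, post_var, 0.0, tau2)
expected_utility = total / n_sim

# In this model the exact value is the mutual information
# 0.5 * log(1 + tau2 / sigma2), a useful check on the simulation.
exact = 0.5 * math.log(1.0 + tau2 / sigma2)
```

In a model with an intractable likelihood, the conjugate update would be replaced by a simulated (ABC) posterior, which is the setting the ABCD approach targets.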