2016
DOI: 10.1002/mma.4086

A learning algorithm for source aggregation

Abstract: The problem of model aggregation from various information sources of unknown validity is addressed in terms of a variational problem in the space of probability measures. A weight allocation scheme to the various sources is proposed, which is designed to lead to the best aggregate model compatible with the available data and the set of prior measures provided by the information sources. Copyright © 2016 John Wiley & Sons, Ltd.
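To make the aggregation idea concrete, the following is a minimal sketch of one way such a weight allocation could look in practice: each source supplies a probability vector on a common grid, and the weights of the convex combination are fitted against an empirical data histogram. The grid, the synthetic data, and the KL-based fit criterion are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical setup: three "sources" provide probability vectors on a common grid,
# and data_hist plays the role of the empirical histogram of the observed data.
grid = np.linspace(-3.0, 3.0, 50)
sources = [np.exp(-0.5 * (grid - m) ** 2) for m in (-1.0, 0.0, 1.5)]
sources = [p / p.sum() for p in sources]
data_hist = 0.6 * sources[1] + 0.4 * sources[2]  # synthetic "observations"

def kl_to_data(w):
    # Aggregate model = convex combination of the source measures;
    # fit is measured by KL(data || aggregate), one possible criterion.
    agg = sum(wi * pi for wi, pi in zip(w, sources))
    return float(np.sum(data_hist * np.log(data_hist / np.maximum(agg, 1e-12))))

k = len(sources)
res = minimize(
    kl_to_data,
    x0=np.full(k, 1.0 / k),
    bounds=[(0.0, 1.0)] * k,
    constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
)
print("estimated source weights:", np.round(res.x, 3))
```

The optimizer assigns most of the mass to the sources that best explain the data, which is the qualitative behaviour the abstract describes for its weight allocation scheme.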

Cited by 10 publications (12 citation statements) | References 12 publications (23 reference statements)

Citation statements
“…where θ := (r − λ(1 − β))/β. Then, substituting (34) into (22), we obtain the stated result (19) in Proposition 1. Also, combining equations (25) and (34) and substituting into (21), the optimal controls stated in (20) in Proposition 1 are obtained.…”
Section: 1 | mentioning
confidence: 84%
“…Lastly, the issue of model validity is something one has to be aware of, especially in cases where the decision-making process takes place in a dynamic setting and new evidence may become available during the process. In this direction, learning schemes that determine the optimal weight allocation to each model, taking into account criteria assessing the models' performance compared to the recorded data, are proposed and investigated under different metric tools in [19].…”
mentioning
confidence: 99%
“…As points on this space correspond to probability measures, the Wasserstein barycenter of a collection of probability measures ℳ corresponds to the best approximation of the entire collection by a single probability measure. Wasserstein barycenters have recently been a very active field of research and have been studied from various perspectives (see, e.g., Agueh & Carlier; Kim & Pass; Le Gouic & Loubes), including statistical learning (Papayiannis & Yannacopoulos).…”
Section: Weight Selection Approaches for the Aggregate Models | mentioning
confidence: 99%
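For intuition on the barycenter construction mentioned in the statement above, here is a small sketch for the one-dimensional case, where the W2 barycenter can be obtained by averaging quantile functions; the function name and sample sizes below are arbitrary choices, not code from the cited works.

```python
import numpy as np

def wasserstein2_barycenter_1d(samples_per_source, weights, n_quantiles=200):
    """W2 barycenter of one-dimensional empirical measures.

    In 1-D the barycenter's quantile function is the weighted average of the
    sources' quantile functions, which this helper exploits directly.
    """
    qs = np.linspace(0.0, 1.0, n_quantiles)
    quantiles = np.array([np.quantile(s, qs) for s in samples_per_source])  # (k, n_quantiles)
    return quantiles.T @ np.asarray(weights)  # quantile values of the barycenter

rng = np.random.default_rng(1)
sources = [rng.normal(-1.0, 1.0, 1000), rng.normal(2.0, 0.5, 1000)]
bary_q = wasserstein2_barycenter_1d(sources, weights=[0.3, 0.7])
print("barycenter mean ≈", round(float(bary_q.mean()), 2))  # ≈ 0.3*(-1) + 0.7*2 = 1.7
```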
“…In this section, we consider a learning scheme for the optimal weight selection of the aggregate prediction model, which can be viewed as an extension of the algorithm presented in the previous Section 3.2 and relies on the algorithmic approach proposed in the work of Papayiannis and Yannacopoulos (). The proposed scheme relies on the superior performance of the Wasserstein distance when used as a loss/merit function in the space of probability measures.…”
Section: Weight Selection Approaches for the Aggregate Models | mentioning
confidence: 99%
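As a rough illustration of such a weight-learning scheme, and not a reconstruction of the cited algorithm, one could run projected gradient descent on the simplex under the simplifying assumption of one-dimensional measures, where the squared W2 loss becomes a quadratic in the quantile representation. The helper names, step size, and synthetic data are hypothetical.

```python
import numpy as np

def project_to_simplex(v):
    # Euclidean projection onto the probability simplex.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def learn_weights(source_samples, data_samples, steps=2000, lr=0.02, n_q=200):
    """Hypothetical sketch: choose simplex weights so that the 1-D W2 barycenter
    of the sources is as close as possible (in squared W2) to the data."""
    qs = np.linspace(0.0, 1.0, n_q)
    Q = np.stack([np.quantile(s, qs) for s in source_samples], axis=1)  # (n_q, k)
    q_data = np.quantile(data_samples, qs)
    w = np.full(Q.shape[1], 1.0 / Q.shape[1])  # start from uniform weights
    for _ in range(steps):
        grad = 2.0 * Q.T @ (Q @ w - q_data) / n_q  # gradient of the squared W2 loss
        w = project_to_simplex(w - lr * grad)
    return w

rng = np.random.default_rng(2)
sources = [rng.normal(m, 1.0, 2000) for m in (-2.0, 0.0, 3.0)]
data = rng.normal(0.9, 1.0, 2000)  # synthetic data lying "between" the sources
print("learned weights:", np.round(learn_weights(sources, data), 3))
```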