1996
DOI: 10.1103/PhysRevLett.77.4693

Field Theories for Learning Probability Distributions

Abstract: Imagine being shown N samples of random variables drawn independently from the same distribution. What can you say about the distribution? In general, of course, the answer is nothing, unless you have some prior notions about what to expect. From a Bayesian point of view one needs an a priori distribution on the space of possible probability distributions, which defines a scalar field theory. In one dimension, free field theory with a normalization constraint provides a tractable formulation of the problem, an…
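The construction summarized in the abstract can be written out explicitly. The following is a hedged sketch using conventions common in the Bayesian field-theory literature; the field phi(x) and the smoothness scale ell are illustrative symbols, and the paper's exact normalization conventions may differ.

% Parametrize the unknown density Q(x) by a scalar field \phi(x), and place a
% free-field (smoothness) prior on \phi; normalization of Q is enforced separately.
\[
  Q(x) = \frac{e^{-\phi(x)}}{\int dx'\, e^{-\phi(x')}}, \qquad
  P[\phi] \propto \exp\!\left[-\frac{\ell}{2}\int dx\,\bigl(\partial_x \phi\bigr)^2\right]
\]
% Conditioning on N independent samples x_1, ..., x_N gives the posterior
\[
  P[\phi \mid x_1,\dots,x_N] \propto P[\phi]\,\prod_{i=1}^{N} Q(x_i),
\]
% whose saddle point (MAP estimate) minimizes the effective action
\[
  S[\phi] = \frac{\ell}{2}\int dx\,\bigl(\partial_x \phi\bigr)^2
            + \sum_{i=1}^{N}\phi(x_i) + N\log\!\int dx\, e^{-\phi(x)}.
\]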

Cited by 102 publications (153 citation statements)
References 3 publications
“…However, an "optimal" value for the smoothness controlling parameter was derived from the data itself, a topic also addressed by Stoica et al [46] and by a follow up publication to ours [47]. Bialek et al [45] also recognized, as we do, that an IFT can easily be non-local.…”
Section: Statistical and Bayesian Field Theory (mentioning)
confidence: 93%
“…Bialek et al [45] applied a field theoretical approach to recover a probability distribution from data. Here, a Bayesian prior was used to regularize the solution, which was set up ad-hoc to enforce smoothness of the reconstruction, obtained from the classical (or saddlepoint, or maximum a posteriori) solution of the problem.…”
Section: Statistical and Bayesian Field Theory (mentioning)
confidence: 99%
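A minimal numerical sketch of that classical (saddle-point / MAP) reconstruction, assuming the discretized form of the smoothness prior written out after the abstract above; the grid resolution, the smoothness scale ell, and the use of scipy.optimize.minimize are illustrative choices, not taken from the cited works.

import numpy as np
from scipy.optimize import minimize

def map_density(samples, x_grid, ell=1.0):
    """MAP estimate of a 1D density Q(x) = exp(-phi(x)) / Z under a smoothness prior.

    Minimizes the discretized action
        S[phi] = (ell/2) * integral (phi')^2 dx + sum_i phi(x_i) + N * log integral exp(-phi) dx,
    i.e. the negative log posterior for the field phi on a uniform grid.
    """
    dx = x_grid[1] - x_grid[0]
    N = len(samples)
    # Assign each sample to its nearest grid point; the counts enter the likelihood term.
    idx = np.clip(np.searchsorted(x_grid, samples), 0, len(x_grid) - 1)
    counts = np.bincount(idx, minlength=len(x_grid))

    def action(phi):
        smooth = 0.5 * ell * np.sum(np.diff(phi) ** 2) / dx   # (ell/2) * integral (phi')^2 dx
        log_z = np.log(np.sum(np.exp(-phi)) * dx)             # log integral exp(-phi) dx
        return smooth + np.dot(counts, phi) + N * log_z

    phi = minimize(action, np.zeros_like(x_grid), method="L-BFGS-B").x
    q = np.exp(-phi)
    return q / (np.sum(q) * dx)                               # normalized density estimate

# Illustrative usage on synthetic data.
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=200)
x_grid = np.linspace(-4.0, 4.0, 100)
q_hat = map_density(samples, x_grid, ell=0.5)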
“…It encodes a priori assumptions about the typical variability of model functions f with the input x. Such Gaussian process models (GP) have attracted considerable attention in recent years as they represent a flexible and widely applicable concept [4,5,6,7]. GP models can be understood as a limit of Bayesian feed-forward neural networks when the number of hidden units grows to infinity [5].…”
(mentioning)
confidence: 99%
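To make the statement about "typical variability" concrete, here is a small illustrative sketch of drawing function samples from a Gaussian process prior; the squared-exponential kernel and its length scale are assumptions for illustration, not details of the cited works.

import numpy as np

def gp_prior_samples(x, length_scale=0.5, n_samples=3, jitter=1e-6):
    # Squared-exponential covariance: nearby inputs get strongly correlated outputs,
    # so the length scale encodes how quickly f is assumed to vary with x.
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / length_scale ** 2)
    L = np.linalg.cholesky(K + jitter * np.eye(len(x)))  # jitter for numerical stability
    return L @ np.random.randn(len(x), n_samples)         # each column is one draw of f

x = np.linspace(0.0, 1.0, 100)
f_samples = gp_prior_samples(x)   # smooth random functions consistent with the prior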
“…There it has also been suggested that the decay of the estimation error with the number of available data points can be made largely independent of model mismatch by optimal hyperparameter tuning [18,19,20,21,22,23,24,25].…”
Section: Results (mentioning)
confidence: 99%