2001
DOI: 10.1063/1.1381874

Maximum entropy, fluctuations and priors

Abstract: The method of maximum entropy (ME) is extended to address the following problem: Once one accepts that the ME distribution is to be preferred over all others, the question is to what extent are distributions with lower entropy supposed to be ruled out. Two applications are given. The first is to the theory of thermodynamic fluctuations. The formulation is exact, covariant under changes of coordinates, and allows fluctuations of both the extensive and the conjugate intensive variables. The second application is…

Cited by 27 publications (42 citation statements)
References 16 publications
“…We deal with an effectively smaller fluid sample. The actual calculation of N_eff will be pursued elsewhere; it is not difficult to see that the ME method itself still applies [9], all that is needed is a broader family of trial distributions.…”
Section: A More Complete ME Analysis (mentioning)
confidence: 99%
“…[1,9] This question is an inquiry about the probability of r_d, P_d(r_d). Thus, we are uncertain not just about q_N given r_d, but also about the right r_d, and what we actually seek is the joint probability of q_N and r_d, P_J(q_N, r_d).…”
Section: A More Complete ME Analysis (mentioning)
confidence: 99%
“…In other words, to what extent do we rule out all those distributions with entropies less than the maximum? This matter is addressed in Section 4, following the treatment in [17]. In Section 5 we collect miscellaneous remarks on the choice and nature of the prior distribution, on using entropy as a measure of amount of information, on choosing constraints, and on the choices of axioms and how they are justified by other authors.…”
Section: Introduction (mentioning)
confidence: 99%
“…Unfortunately, they are derived mainly for continuous spaces and for large sample sets, and their application to classification is not necessarily appropriate. Entropic priors are derived from the maximization of model entropy and seem to be an excellent candidate for solving some of the inconsistencies related to model uncertainties [20,19,13,3,4,7,14]. We have presented a discussion of entropic priors in [17] and applied it to target classification [15], to graphical models [5] and to AR process classification [16].…”
Section: Introduction (mentioning)
confidence: 99%