Quantum mechanics is derived as an application of the method of maximum entropy. No appeal is made to any underlying classical action principle, whether deterministic or stochastic. Instead, the basic assumption is that in addition to the particles of interest x there exist extra variables y whose entropy S(x) depends on x. The Schrödinger equation follows from their coupled dynamics: the entropy S(x) drives the dynamics of the particles x, while the particles in turn determine the evolution of S(x). In this "entropic dynamics" time is introduced as a device to keep track of change. A welcome feature of such an entropic time is that it naturally incorporates an arrow of time. Both the magnitude and the phase of the wave function are given statistical interpretations: the magnitude gives the distribution of x in agreement with the usual Born rule, and the phase carries information about the entropy S(x) of the extra variables. Extending the model to include external electromagnetic fields yields further insight into the nature of the quantum phase.
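A schematic rendering of that statistical reading, in standard polar notation (the symbols ρ and φ are assumed here for illustration, not taken from the paper):

\[
\Psi(x) = \rho^{1/2}(x)\, e^{i\phi(x)}, \qquad |\Psi(x)|^{2} = \rho(x),
\]

so the squared magnitude reproduces the Born-rule distribution ρ(x), while the phase φ(x) is where the information about the entropy S(x) of the extra variables resides.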
We show that Skilling's method of induction leads to a unique general theory of inductive inference, the method of Maximum relative Entropy (ME). The main tool for updating probabilities is the logarithmic relative entropy; other entropies, such as those of Rényi or Tsallis, are ruled out. We also show that Bayes updating is a special case of ME updating and thus that the two are completely compatible.
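A sketch of the Bayes-as-ME result in generic notation (assumed here for illustration): start from a joint prior q(x, θ) = q(θ) q(x|θ), impose the observed data x′ through the constraint p(x) = δ(x − x′), and maximize the relative entropy

\[
S[p, q] = -\int dx\, d\theta\; p(x, \theta)\, \log \frac{p(x, \theta)}{q(x, \theta)}.
\]

The resulting marginal for θ is p(θ) = q(θ | x′) ∝ q(θ) q(x′ | θ), which is precisely Bayes' rule.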
I explore the possibility that the laws of physics might be laws of inference rather than laws of nature. What sort of dynamics can one derive from well-established rules of inference? Specifically, I ask: Given relevant information codified in the initial and the final states, what trajectory is the system expected to follow? The answer follows from a principle of inference, the principle of maximum entropy, and not from a principle of physics. The entropic dynamics derived this way exhibits some remarkable formal similarities with other generally covariant theories such as general relativity.
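One generic way to formalize this question (my notation; the paper's own formalism may differ): among all distributions P[x] over trajectories x(t) compatible with the given initial and final states, select the one that maximizes the relative entropy

\[
S[P, Q] = -\int \mathcal{D}x\; P[x]\, \log \frac{P[x]}{Q[x]}
\]

with respect to a prior Q[x], and compute the expected trajectory from the selected P. The dynamics then follows from inference alone, with no separate physical postulate.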
The problem of assigning probability distributions that objectively reflect the prior information available about an experiment is one of the major stumbling blocks in the use of Bayesian methods of data analysis. In this paper the method of Maximum (relative) Entropy (ME) is used to translate the information contained in the known form of the likelihood into a prior distribution for Bayesian inference. The argument is inspired and guided by intuition gained from the successful use of ME methods in statistical mechanics. For experiments that cannot be repeated, the resulting "entropic prior" is formally identical with the Einstein fluctuation formula. For repeatable experiments, however, the expected value of the entropy of the likelihood turns out to be relevant information that must be included in the analysis. The important case of a Gaussian likelihood is treated in detail.
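Schematically, and in notation assumed here for illustration, the entropic prior for the non-repeatable case takes the form

\[
\pi(\theta) \propto e^{S(\theta)}, \qquad S(\theta) = -\int dx\; p(x|\theta)\, \log \frac{p(x|\theta)}{\mu(x)},
\]

where p(x|θ) is the likelihood and μ(x) an underlying measure; this is the sense in which the prior mirrors Einstein's fluctuation formula W ∝ e^{ΔS/k}. For repeatable experiments the construction must, as stated above, also incorporate the expected entropy of the likelihood.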
We discuss how the method of maximum entropy, MaxEnt, can be extended beyond its original scope as a rule for assigning a probability distribution into a full-fledged method for inductive inference. The central concept is the (relative) entropy S[p|q], which is designed as a tool to update from a prior probability distribution q to a posterior probability distribution p when new information in the form of a constraint becomes available. The extended method not only selects a single posterior p but also addresses the question of how much less probable other distributions are. Our approach clarifies how the entropy S[p|q] is used while avoiding the question of its meaning: ultimately, entropy is a tool for induction that needs no interpretation. Finally, since the method is built by generalizing from special examples, we ask whether the functional form of the entropy depends on the choice of those examples, and we find that it does. The conclusion is that there is no single general theory of inductive inference and that alternative expressions for the entropy are possible.
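As a concrete illustration of this kind of updating, here is a minimal numerical sketch in Python (a toy example of my own; the function name me_update and the choice of a discrete expectation constraint are assumptions, not anything specified in the paper). Maximizing S[p|q] subject to an expectation constraint <f> = F yields an exponentially tilted posterior, p_i proportional to q_i * exp(lam * f_i), with the multiplier lam fixed by the constraint:

import numpy as np
from scipy.optimize import brentq

def me_update(q, f, F):
    # Maximize S[p|q] = -sum_i p_i log(p_i / q_i) subject to sum_i p_i f_i = F.
    # The maximizer is the exponential tilt p_i = q_i exp(lam * f_i) / Z(lam).
    q = np.asarray(q, dtype=float)
    f = np.asarray(f, dtype=float)

    def moment_gap(lam):
        w = q * np.exp(lam * f)
        p = w / w.sum()
        return p @ f - F  # zero when the constraint is met

    lam = brentq(moment_gap, -50.0, 50.0)  # solve for the Lagrange multiplier
    p = q * np.exp(lam * f)
    return p / p.sum()

# Example: update a uniform prior over die faces so the mean is 4.5 instead of 3.5.
q = np.full(6, 1.0 / 6.0)
faces = np.arange(1, 7)
p = me_update(q, faces, 4.5)
print(np.round(p, 4), p @ faces)

The update tilts the prior only as much as the constraint demands and leaves it otherwise undisturbed, which is exactly the minimal-updating behavior the abstract attributes to S[p|q].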