For a broad class of input-output maps, arguments based on the coding theorem from algorithmic information theory (AIT) predict that simple (low Kolmogorov complexity) outputs are exponentially more likely to occur upon uniform random sampling of inputs than complex outputs are. Here, we derive probability bounds that are based on the complexities of the inputs as well as the outputs, rather than just on the complexities of the outputs. The more that outputs deviate from the coding theorem bound, the lower the complexity of their inputs. Our new bounds are tested for an RNA sequence to structure map, a finite state transducer and a perceptron. These results open avenues for AIT to be more widely used in physics.

Deep links between physics and theories of computation [1, 2] are being increasingly exploited to uncover new fundamental physics and to provide novel insights into theories of computation. For example, advances in understanding quantum entanglement are often expressed in sophisticated information theoretic language, while providing new results in computational complexity theory such as polynomial time algorithms for integer factorization [3]. These connections are typically expressed in terms of Shannon information, with its natural analogy with thermodynamic entropy.

There is, however, another branch of information theory, called algorithmic information theory (AIT) [4], which is concerned with the information content of individual objects. It has been much less applied in physics (although notable exceptions occur, see [5] for a recent overview). Reasons for this relative lack of attention include that AIT's central concept, the Kolmogorov complexity K_U(x) of a string x, defined as the length of the shortest program that generates x on a universal Turing machine (UTM) U, is formally uncomputable due to its link to the famous halting problem of UTMs [6].
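The simplicity-bias prediction above can be illustrated with a toy sketch. The map and the complexity measure below are illustrative assumptions, not taken from this work: a hypothetical map that ANDs pairs of input bits, and the zlib-compressed length of the output as a crude, computable stand-in for Kolmogorov complexity.

```python
import random
import zlib
from collections import Counter

random.seed(0)

def toy_map(bits):
    """Hypothetical toy input-output map (not the maps studied here):
    each output bit is the AND of two adjacent input bits, so structured
    outputs such as all-zeros are heavily favoured."""
    return tuple(bits[2 * i] & bits[2 * i + 1] for i in range(len(bits) // 2))

def complexity_proxy(out):
    """Crude computable stand-in for Kolmogorov complexity:
    length of the zlib-compressed output string."""
    return len(zlib.compress("".join(map(str, out)).encode()))

# Uniformly sample 20-bit inputs and record output frequencies.
counts = Counter()
for _ in range(100_000):
    x = [random.randint(0, 1) for _ in range(20)]
    counts[toy_map(x)] += 1

# The most frequent output is the simplest one (all zeros), in line
# with the qualitative coding-theorem prediction that simple outputs
# dominate under uniform random sampling of inputs.
mode, freq = counts.most_common(1)[0]
print(mode, freq / 100_000, complexity_proxy(mode))
```

Under this toy map each output bit is 1 with probability 1/4, so the all-zeros output occurs with probability (3/4)^10, far more often than any individual complex output.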
Moreover, many important results, such as the invariance theorem, which states that for two UTMs U and W the Kolmogorov complexities agree up to an additive constant, K_U(x) = K_W(x) + O(1), hold asymptotically up to O(1) terms that are independent of x but not always well understood, and therefore hard to control.

A further obstacle to applying AIT to many practical problems can be understood in terms of hierarchies of computing power. For example, one of the oldest such categorisations, the Chomsky hierarchy [7], ranks automata into four different classes, of which the UTMs are the most powerful, and finite state machines (FSMs) are the least. Many key results in AIT are derived by exploiting the power of UTMs. Interestingly, if physical processes can be mapped onto UTMs, then certain properties can be shown to be uncomputable [8, 9]. However, many problems in physics are fully computable, and therefore lower on the Chomsky hierarchy than UTMs. For example, finite Markov processes are equivalent to FSMs, and RNA secondary structure (SS) folding algorithms can be recast as context-free grammars, the second level in the hierarchy. Th...
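A map low on the Chomsky hierarchy can be made concrete with a minimal two-state transducer. The transducer below is a hypothetical example for illustration, not the one tested in this work: its state remembers the previous input bit, and its output flags occurrences of two consecutive 1s. Exhaustive enumeration of all inputs shows the simplest output dominating.

```python
import itertools
from collections import Counter

def transduce(bits):
    """Hypothetical two-state finite state transducer: the state stores
    the previous input bit; the output bit is 1 exactly when the current
    and previous input bits are both 1."""
    prev = 0
    out = []
    for b in bits:
        out.append(prev & b)
        prev = b
    return tuple(out)

# Enumerate every 10-bit input exhaustively and count the outputs.
counts = Counter(transduce(bits) for bits in itertools.product((0, 1), repeat=10))

# The all-zeros output is produced by every input string avoiding the
# substring "11"; there are 144 such strings of length 10 (a Fibonacci
# count), so the simplest output is by far the most probable one under
# uniform sampling of the 1024 inputs.
print(counts.most_common(1)[0])
```

Even this weakest class of automaton exhibits a strong bias toward simple outputs, which is why simplicity bias can be probed in computable settings well below the power of UTMs.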