Information has emerged as a language in its own right, bridging several disciplines that analyze natural phenomena and man-made systems. Integrated information has been introduced as a metric to quantify the amount of information generated by a system beyond the information generated by its elements. Yet this intriguing notion comes at the price of being prohibitively expensive to calculate, since the calculations require an exponential number of sub-divisions of a system. Here we introduce a novel framework connecting algorithmic randomness and integrated information, and a numerical method for estimating integrated information using a perturbation test rooted in algorithmic information dynamics. This method quantifies the change in program size of a system when it is subjected to a perturbation. The intuition is that if an object is random, then random perturbations have little to no effect, since no shorter program describes the perturbed object any better; but when an object can move in both directions (towards or away from randomness), it is better integrated in the sense of a measure of sophistication, one that tells randomness and simplicity apart from structure. We show that an object with a high integrated information value is also more compressible, and is therefore more sensitive to perturbations. We find that such a perturbation test, quantifying compression sensitivity, provides a system with a means to extract explanations (causal accounts) of its own behaviour. Our technique can reduce the number of calculations needed to arrive at bounds or estimations, as the algorithmic perturbation test guides an efficient search for estimating integrated information. Our work sets the stage for a systematic exploration of connections between algorithmic complexity and integrated information at the level of both theory and practice.
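The perturbation test described above can be sketched in a few lines. The following is a minimal illustration, not the authors' exact procedure: it uses zlib's compressed size as a crude stand-in for program size (true algorithmic complexity is uncomputable), flips each bit in turn as the perturbation, and reports the mean absolute change in compressed size as a sensitivity score. The function names and the scoring rule are assumptions made for this sketch.

```python
import random
import zlib

def compressed_size(bits: str) -> int:
    """Length in bytes of the zlib-compressed bit string
    (a rough, computable proxy for program size)."""
    return len(zlib.compress(bits.encode()))

def perturbation_sensitivity(bits: str) -> float:
    """Mean absolute change in compressed size when each bit
    is flipped in turn (the illustrative perturbation test)."""
    base = compressed_size(bits)
    deltas = []
    for i in range(len(bits)):
        flipped = bits[:i] + ('0' if bits[i] == '1' else '1') + bits[i + 1:]
        deltas.append(abs(compressed_size(flipped) - base))
    return sum(deltas) / len(bits)

# A highly structured string versus a pseudo-random one of the same length.
# On the abstract's intuition, one would expect the structured string to
# react more strongly to perturbations than the random one.
structured = '01' * 64
random.seed(0)
noisy = ''.join(random.choice('01') for _ in range(128))
print(perturbation_sensitivity(structured), perturbation_sensitivity(noisy))
```

Real compressors add fixed overhead and only approximate algorithmic complexity from above, so on short strings the scores are noisy; the sketch is meant only to make the "compression sensitivity" idea concrete.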
Computationalism is a relatively vague term used to describe attempts to apply Turing's model of computation to phenomena outside its original purview: in modelling the human mind, in physics, mathematics, etc. Early versions of computationalism faced strong objections from many (and varied) quarters, from philosophers to practitioners of the aforementioned disciplines. Here we will not address the fundamental question of whether computational models are appropriate for describing some or all of the wide range of processes that they have been applied to, but will focus instead on whether 'renovated' versions of the new computationalism shed any new light on or resolve previous tensions between proponents and skeptics. We find this, however, not to be the case, because the new computationalism falls short by using limited versions of "traditional computation", or proposing computational models that easily fall within the scope of Turing's original model, or else proffering versions of hypercomputation with its many pitfalls.