Drawing on various notions from theoretical computer science, we present a novel numerical approach, motivated by the notion of algorithmic probability, to the problem of approximating the Kolmogorov-Chaitin complexity of short strings. The method is an alternative to the traditional lossless compression algorithms, which it may complement, the two being serviceable for different string lengths. We provide a thorough analysis for short binary strings by running all Turing machines with 5 states and 2 symbols (with reduction techniques), using the most standard formalism of Turing machines, the one used, for example, in the Busy Beaver problem. We address the question of stability and error estimation, the sensitivity of the continued application of the method for wider coverage and better accuracy, and provide statistical evidence suggesting robustness. As with compression algorithms, this work promises to deliver a range of applications and to provide insight into the question of complexity calculation for finite (and short) strings. Additional material can be found at the Algorithmic Nature Group website at http://www.algorithmicnature.org. An Online Algorithmic Complexity Calculator implementing this technique and making the data available to the research community is accessible at http://www.complexitycalculator.com.
We propose a measure based upon the fundamental theoretical concept in algorithmic information theory that provides a natural approach to the problem of evaluating n-dimensional complexity by using an n-dimensional deterministic Turing machine. The technique is interesting because it provides a natural algorithmic process for symmetry breaking, generating complex n-dimensional structures from perfectly symmetric and fully deterministic computational rules, and producing a distribution of patterns as described by algorithmic probability. Algorithmic probability also elegantly connects the frequency of occurrence of a pattern with its algorithmic complexity, hence effectively providing estimates of the complexity of the generated patterns. Experiments to validate estimations of algorithmic complexity based on these concepts are presented, showing that the measure is stable in the face of some changes in computational formalism and that the results agree with those obtained using lossless compression algorithms where the two methods overlap in their range of applicability. We then use the output frequency of the set of 2-dimensional Turing machines to classify the algorithmic complexity of the space-time evolutions of Elementary Cellular Automata.
As human randomness production has come to be more closely studied and used to assess executive functions (especially inhibition), many normative measures for assessing the degree to which a sequence is randomlike have been suggested. However, each of these measures focuses on one feature of randomness, leading researchers to have to use multiple measures. Although algorithmic complexity has been suggested as a means for overcoming this inconvenience, it has never been used, because standard Kolmogorov complexity is inapplicable to short strings (e.g., of length l ≤ 50), due to both computational and theoretical limitations. Here, we describe a novel technique (the coding theorem method) based on the calculation of a universal distribution, which yields an objective and universal measure of algorithmic complexity for short strings that approximates Kolmogorov-Chaitin complexity.
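The coding theorem method described above rests on the relation K(s) ≈ −log₂ m(s), where m(s) is the algorithmic probability of a string s, approximated by its output frequency over a large set of small Turing machines. A minimal sketch in Python, using purely hypothetical frequency counts (the real distribution comes from exhaustively running the machines, not from numbers like these):

```python
import math

# Hypothetical output-frequency counts: how often each string was
# produced across a set of small halting Turing machines.
# (Illustrative numbers only, not real experimental data.)
counts = {"0000": 9000, "0101": 4000, "0110": 3500, "0010": 2500}
total = sum(counts.values())

def ctm_complexity(s: str) -> float:
    """Coding-theorem estimate: K(s) ~ -log2(m(s)), with m(s)
    approximated by the observed output frequency of s."""
    return -math.log2(counts[s] / total)

# Strings produced more often receive lower complexity estimates.
assert ctm_complexity("0000") < ctm_complexity("0010")
```

The ordering, not the absolute values, is what carries over to real data: frequently produced strings are assigned low complexity, rare ones high complexity.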
Critical thinking is of paramount importance in our society. People regularly assume that critical thinking is a way to reduce conspiracy belief, although the relationship between critical thinking and conspiracy belief has never been tested. We conducted two studies (Study 1, N = 86; Study 2, N = 252), in which we found that critical thinking ability, measured by an open-ended test emphasizing several areas of critical thinking ability in the context of argumentation, is negatively associated with belief in conspiracy theories. Additionally, we did not find a significant relationship between self-reported (subjective) critical thinking ability and conspiracy belief. Our results support the idea that conspiracy believers have less developed critical thinking ability and stimulate discussion about the possibility of reducing conspiracy beliefs via the development of critical thinking.
Belief in conspiracy theories has often been associated with a biased perception of randomness, akin to a nothing-happens-by-accident heuristic. Indeed, a low prior for randomness (i.e., believing that randomness is a priori unlikely) could plausibly explain the tendency to believe that a planned deception lies behind many events, as well as the tendency to perceive meaningful information in scattered and irrelevant details; both of these tendencies are traits diagnostic of conspiracist ideation. In three studies, we investigated this hypothesis and failed to find the predicted association between low prior for randomness and conspiracist ideation, even when randomness was explicitly opposed to malevolent human intervention. Conspiracy believers' and nonbelievers' perceptions of randomness were not only indistinguishable from each other but also accurate compared with the normative view arising from the algorithmic information framework. Thus, the motto "nothing happens by accident," taken at face value, does not explain belief in conspiracy theories.
We show that real-value approximations of Kolmogorov-Chaitin complexity (K_m), calculated via the algorithmic Coding theorem from the output frequency of a large set of small deterministic Turing machines with up to 5 states (and 2 symbols), are in agreement with the number of instructions used by the Turing machines producing a string s, which is consistent with strict integer-valued program-size complexity. Nevertheless, K_m proves to be a finer-grained measure and a potential alternative approach to lossless compression algorithms for small entities, where compression fails. We also show that neither K_m nor the number of instructions used shows any correlation with Bennett's Logical Depth LD(s) other than what is predicted by the theory. The agreement between theory and numerical calculations shows that despite the undecidability of these theoretical measures, approximations are stable and meaningful, even for small programs and for short strings. We also announce a first Beta version of an Online Algorithmic Complexity Calculator (OACC), based on a combination of theoretical concepts, as a numerical implementation of the Coding Theorem Method.
Random Item Generation tasks (RIG) are commonly used to assess high cognitive abilities such as inhibition or sustained attention. They also draw upon our approximate sense of complexity. A detrimental effect of aging on pseudo-random productions has been demonstrated for some tasks, but little is as yet known about the developmental curve of cognitive complexity over the lifespan. We investigate the complexity trajectory across the lifespan of human responses to five common RIG tasks, using a large sample (n = 3429). Our main finding is that the developmental curve of the estimated algorithmic complexity of responses is similar to what may be expected of a measure of higher cognitive abilities, with a performance peak around 25 and a decline starting around 60, suggesting that RIG tasks yield good estimates of such cognitive abilities. Our study illustrates that very short strings of only 10 items are sufficient for their complexity to be reliably estimated, allowing the documentation of an age-dependent decline in the approximate sense of complexity.
Kolmogorov-Chaitin complexity has long been believed to be impossible to approximate when it comes to short sequences (e.g., of length 5-50). However, with the newly developed coding theorem method, the complexity of strings of length 2-11 can now be numerically estimated. We present the theoretical basis of algorithmic complexity for short strings (ACSS) and describe an R package providing functions based on ACSS that will cover psychologists' needs and improve upon previous methods in three ways: (1) ACSS is now available not only for binary strings but for strings based on up to 9 different symbols, (2) ACSS no longer requires time-consuming computing, and (3) a new approach based on ACSS gives access to an estimation of the complexity of strings of any length. Finally, three illustrative examples show how these tools can be applied to psychology.

Keywords: Algorithmic complexity · Randomness · Subjective probability · Coding theorem method

Randomness and complexity are two concepts which are intimately related and are both central to numerous recent developments in various fields, including finance (Taufemback et al., 2011; Brandouy et al., 2012), linguistics (Gruber, 2010; Naranan, 2011), neuropsychology (Machado et al., 2010; Fernández et al., 2011, 2012), psychiatry (Yang and Tsai, 2012; Takahashi, 2013), genetics (Yagil, 2009; Ryabko et al., 2013), sociology (Elzinga, 2010) and the behavioral sciences (Watanabe et al., 2003; Scafetta et al., 2009). In psychology, randomness and complexity have recently attracted interest, following the realization that they could shed light on a diversity of previously undeciphered behaviors and mental processes. It has been found, for instance, that the subjective difficulty of a concept is directly related to its "boolean complexity", defined as the shortest logical description of a concept (Feldman, 2000, 2003, 2006).
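One way to extend short-string ACSS values to strings of arbitrary length, in the spirit of point (3) of the abstract above, is to average the coding-theorem values of overlapping short substrings. The sketch below uses made-up complexity values; `short_K` and `sliding_estimate` are hypothetical names for illustration, not functions of the actual R package:

```python
# Toy lookup of coding-theorem complexity values for length-2 substrings
# (hypothetical numbers; real values come from ACSS tables).
short_K = {"00": 2.5, "01": 3.0, "10": 3.0, "11": 2.5}

def sliding_estimate(s: str, span: int = 2) -> float:
    """Approximate the complexity of a longer string by averaging the
    short-string values of its overlapping length-`span` substrings."""
    subs = [s[i:i + span] for i in range(len(s) - span + 1)]
    return sum(short_K[x] for x in subs) / len(subs)

# A repetitive string scores lower than an alternating one.
assert sliding_estimate("00000000") < sliding_estimate("01010110")
```

Because every substring is short enough to have a tabulated value, the estimate is available for strings of any length, at the cost of ignoring regularities longer than the window.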
In the same vein, visual detection of shapes has been shown to be related to contour complexity (Wilder et al., 2011). More generally, perceptual organization itself has been described as based on simplicity or, equivalently, likelihood (Chater, 1996; Chater and Vitányi, 2003), in a model reconciling the complexity approach (perception is organized to minimize complexity) and a probability approach (perception is organized to maximize likelihood), very much in line with our view in this paper. Even the perception of similarity may be viewed through the lens of (conditional) complexity (Hahn et al., 2003). Randomness and complexity also play an important role in modern approaches to selecting the "best" among a set of candidate models (i.e., model selection; e.g., Myung et al., 2006; Kellen et al., 2013), as discussed in more detail below in the section called "Relationship to complexity based model selection". Complexity can also shed light on short-term memory storage and recall, more specifically, on the process underlying chunking. It is well known that the short-term memory span lies b...