Two experimental tasks in psychology, the two-stage gambling game and the Prisoner's Dilemma game, show that people violate the sure thing principle of decision theory. These paradoxical findings have resisted explanation by classical decision theory for over a decade. A quantum probability model, based on a Hilbert space representation and Schrödinger's equation, provides a simple and elegant explanation for this behaviour. The quantum model is compared with an equivalent Markov model and it is shown that the latter is unable to account for violations of the sure thing principle. Accordingly, it is argued that quantum probability provides a better framework for modelling human decision-making.
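The contrast between the Markov and quantum accounts comes down to how the two "paths" through the opponent's unknown move combine: a Markov model adds path probabilities, as the law of total probability requires, while the quantum model adds path amplitudes before squaring, producing an interference term that can push the total probability below both conditionals. A minimal numerical sketch (the amplitudes are invented for illustration, not fitted values):

```python
# Hypothetical probability amplitudes (not fitted values) for the two paths
# that end in "I defect": via "opponent defects" and via "opponent cooperates".
amp_via_defect = 0.45 + 0.30j
amp_via_coop = -0.50 + 0.25j

# Known condition (Markov-style): each path is resolved, so path
# probabilities add, as the law of total probability requires.
p_markov = abs(amp_via_defect) ** 2 + abs(amp_via_coop) ** 2

# Unknown condition (quantum): the paths remain in superposition, so the
# amplitudes add first and are squared afterwards.
p_quantum = abs(amp_via_defect + amp_via_coop) ** 2

# The difference is the interference term 2*Re(a1 * conj(a2)); when it is
# negative, the total probability falls below the classical mixture, which
# is the signature of a sure thing principle violation.
interference = p_quantum - p_markov
print(round(p_markov, 3), round(p_quantum, 3), round(interference, 3))
```

With these illustrative amplitudes the unknown-condition probability is well below either resolved condition, which a Markov mixture can never produce.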
A quantum probability model is introduced and used to explain human probability judgment errors, including the conjunction and disjunction fallacies, averaging effects, unpacking effects, and order effects on inference. On the one hand, quantum theory is similar to other categorization and memory models of cognition in that it relies on vector spaces defined by features, and on similarities between vectors, to determine probability judgments. On the other hand, quantum probability theory is a generalization of Bayesian probability theory because it is based on a set of (von Neumann) axioms that relax some of the classic (Kolmogorov) axioms. The quantum model is compared and contrasted with other competing explanations for these judgment errors, including the anchoring-and-adjustment model of probability judgment. The quantum model introduces a new fundamental concept to cognition: the compatibility versus incompatibility of questions and the effect this can have on the sequential order of judgments. We conclude that quantum information processing principles provide a viable and promising new way to understand human judgment and reasoning.
Over 30 years ago, Kahneman and Tversky (1982) began their influential program of research to discover the heuristics and biases that form the basis of human probability judgments. Since that time, a great deal of new and challenging empirical phenomena has been discovered, including conjunction and disjunction fallacies, unpacking effects, and order effects on inference (Gilovich, Griffin, & Kahneman, 2002).
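The role of incompatibility can be illustrated with a toy two-dimensional model of the Linda problem: when "feminist" and "bank teller" are incompatible questions, the conjunction is evaluated as a sequence of projections, and its probability can exceed that of the single unlikely event. All vectors and angles below are hypothetical choices for illustration, not parameters from any fitted model:

```python
import numpy as np

def unit(deg):
    """Unit vector at the given angle in a two-dimensional real Hilbert space."""
    t = np.deg2rad(deg)
    return np.array([np.cos(t), np.sin(t)])

psi = unit(0)            # belief state after hearing the Linda story
feminist = unit(45)      # "yes" axis for the feminist question (hypothetical)
bank_teller = unit(89)   # "yes" axis for bank teller, nearly orthogonal to psi

# Judging "bank teller" directly: a single projection of the state.
p_bank_teller = float(psi @ bank_teller) ** 2

# Judging the conjunction: with incompatible questions the more likely
# event is evaluated first, so the state is projected onto "feminist"
# and the collapsed state is then projected onto "bank teller".
p_conjunction = float(psi @ feminist) ** 2 * float(feminist @ bank_teller) ** 2

# The sequential route can make the conjunction seem MORE likely than
# the single event -- the conjunction fallacy.
print(p_bank_teller < p_conjunction)
```

The intermediate projection acts as a stepping stone: the state first moves to the nearby "feminist" axis, from which "bank teller" is much less remote than it was from the initial state.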
Although heuristic concepts (such as representativeness, availability, and anchoring-and-adjustment) initially served as a guide to researchers in this area, there is a growing need to move beyond these intuitions and develop more coherent, comprehensive, and deductive theoretical explanations (Shah & Oppenheimer, 2008). The purpose of this article is to propose a new way of understanding human probability judgment using quantum probability principles (Gudder, 1988). At first, it might seem odd to apply quantum theory to human judgments. Before we address this general issue, we point out that we are not claiming the brain to be a quantum computer; rather, we only use quantum principles to derive cognitive models and leave the neural basis for later research. That is, we use the mathematical principles of quantum probability detached from the physical meaning associated with quantum mechanics. This approach is similar to the application of complexity theory or stochastic processes to domains outside of physics. There are at least five reasons for doing so: (1) judgment is not a simple readout from a pre-existing or recorded state; instead, it is constructed from the question and the cognitive state created by the current context; from this first point it then follows that (2) drawing a conclus...
Decisions about using addictive substances are influenced by distractions by addiction-related stimuli, of which the user might be unaware. The addiction-Stroop task is a paradigm used to assess this distraction. The empirical evidence for the addiction-Stroop effect is critically reviewed, and meta-analyses of alcohol-related and smoking-related studies are presented. Studies finding the strongest effects were those in which participants had strong current concerns about an addictive substance or such concerns were highlighted through experimental manipulations, especially those depriving participants of the substance. Theories to account for addiction-related attentional bias are discussed, of which the motivational theory of current concerns appears to provide the most complete account of the phenomenon. Recommendations are made for maximizing the precision of the addiction-Stroop test in future research.
Classical (Bayesian) probability (CP) theory has led to an influential research tradition for modeling cognitive processes. Cognitive scientists have been trained to work with CP principles for so long that it is hard even to imagine alternative ways to formalize probabilities. However, in physics, quantum probability (QP) theory has been the dominant probabilistic approach for nearly 100 years. Could QP theory provide us with any advantages in cognitive modeling as well? Note first that both CP and QP theory share the fundamental assumption that it is possible to model cognition on the basis of formal, probabilistic principles. But why consider a QP approach? The answers are that (1) there are many well-established empirical findings (e.g., from the influential Tversky, Kahneman research tradition) that are hard to reconcile with CP principles; and (2) these same findings have natural and straightforward explanations with quantum principles. In QP theory, probabilistic assessment is often strongly context- and order-dependent, individual states can be superposition states (that are impossible to associate with specific values), and composite systems can be entangled (they cannot be decomposed into their subsystems). All these characteristics appear perplexing from a classical perspective. However, our thesis is that they provide a more accurate and powerful account of certain cognitive processes. We first introduce QP theory and illustrate its application with psychological examples. We then review empirical findings that motivate the use of quantum theory in cognitive theory, but also discuss ways in which QP and CP theories converge. Finally, we consider the implications of a QP theory approach to cognition for human rationality.
Keywords: category membership; classical probability theory; conjunction effect; decision making; disjunction effect; interference effects; judgment; quantum probability theory; rationality; similarity ratings
1. Preliminary issues
1.1. Why move toward quantum probability theory?
In this article we evaluate the potential of quantum probability (QP) theory for modeling cognitive processes. What is the motivation for employing QP theory in cognitive modeling? Does the use of QP theory offer the promise of any unique insights or predictions regarding cognition? Also, what do quantum models imply regarding the nature of human rationality? In other words, is there anything to be gained by seeking to develop cognitive models based on QP theory? Especially over the last decade, there has been growing interest in such models, encompassing publications in major journals, special issues, dedicated workshops, and a comprehensive book (Busemeyer & Bruza 2012). Our strategy in this article is to briefly introduce QP theory, summarize progress with selected QP models, and motivate answers to the above questions. We note that this article is not about the application of quantum physics to brain physiology. This is a controversial issue (Hameroff 2007; Litt et al. 2006) about which ...
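The order dependence mentioned above follows directly from non-commuting projectors: answering incompatible question A first collapses the state before B is asked, so the two question orders pass through different intermediate states. A small sketch with hypothetical "yes" axes and an arbitrary initial state:

```python
import numpy as np

def unit(deg):
    """Unit vector at the given angle in a two-dimensional real Hilbert space."""
    t = np.deg2rad(deg)
    return np.array([np.cos(t), np.sin(t)])

def yes_prob(state, axis):
    """Probability of a 'yes' answer when the question's 'yes' subspace
    is spanned by the unit vector `axis`."""
    return float(state @ axis) ** 2

psi = unit(20)   # initial belief state (hypothetical angle)
a = unit(0)      # 'yes' axis for question A
b = unit(45)     # 'yes' axis for question B; A and B are incompatible

# Answering A first collapses the state onto a before B is asked, and
# vice versa, so the two orders give different joint probabilities even
# though the transition probability between a and b is symmetric.
p_ab = yes_prob(psi, a) * yes_prob(a, b)
p_ba = yes_prob(psi, b) * yes_prob(b, a)
print(round(p_ab, 4), round(p_ba, 4))
```

In a classical (Kolmogorov) model the joint event "yes to A and yes to B" has a single probability, so no such order effect is possible without extra machinery.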
Artificial grammar learning (AGL) is one of the most commonly used paradigms for the study of implicit learning and the contrast between rules, similarity, and associative learning. Despite five decades of extensive research, however, a satisfactory theoretical consensus has not been forthcoming. Theoretical accounts of AGL are reviewed, together with relevant human experimental and neuroscience data. The author concludes that satisfactory understanding of AGL requires (a) an understanding of implicit knowledge as knowledge that is not consciously activated at the time of a cognitive operation; this could be because the corresponding representations are impoverished or they cannot be concurrently supported in working memory with other representations or operations, and (b) adopting a frequency-independent view of rule knowledge and contrasting rule knowledge with specific similarity and associative learning (co-occurrence) knowledge.
The authors examine the role of similarity in artificial grammar learning (AGL; A. S. Reber, 1989). A standard finite-state language was used to create stimuli that were arrangements of embedded geometric shapes (Experiment 1), connected lines (Experiment 2), and sequences of shapes (Experiment 3). Main effects for well-known predictors from the literature (grammaticality, associative global and anchor chunk strength, novel global and anchor chunk strength, length of items, and edit distance) were observed, thus replicating previous work. However, the authors extend previous research by using a widely known similarity-based exemplar model of categorization (the generalized context model; R. M. Nosofsky, 1989) to fit grammaticality judgments via nested regression analyses. The results suggest that any explanation of AGL based on the existing theories is incomplete without a similarity process as well. Also, the results provide a foundation for further interpreting AGL in the wider context of categorization research.
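The generalized context model used in these fits derives category endorsement from summed, exponentially decaying similarity to stored exemplars. A stripped-down sketch of that choice rule (the feature vectors, Euclidean distances, and sensitivity parameter below are illustrative stand-ins; AGL fits over letter strings would instead use something like edit distance):

```python
import numpy as np

def gcm_choice_prob(test, cat_a, cat_b, c=1.0):
    """Generalized context model choice rule: endorsement of category A is
    driven by summed, exponentially decaying similarity to its exemplars.
    c is the sensitivity (similarity gradient) parameter."""
    def summed_similarity(exemplars):
        d = np.linalg.norm(np.asarray(exemplars, float) - test, axis=1)
        return np.exp(-c * d).sum()
    s_a, s_b = summed_similarity(cat_a), summed_similarity(cat_b)
    return s_a / (s_a + s_b)

# Hypothetical 2-D feature codes standing in for training exemplars
# ("grammatical" items) and ungrammatical foils.
grammatical = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.3]]
foils = [[1.0, 1.0], [0.9, 1.2]]

# A test item close to the grammatical cluster is endorsed with high probability.
test_item = np.array([0.15, 0.15])
p_grammatical = gcm_choice_prob(test_item, grammatical, foils)
print(round(p_grammatical, 2))
```

A regression analysis of the kind described would enter this similarity-based prediction alongside chunk strength and grammaticality to test whether it explains unique variance.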
The distinction between rules and similarity is central to our understanding of much of cognitive psychology. Two aspects of existing research have motivated the present work. First, in different cognitive psychology areas we typically see different conceptions of rules and similarity; for example, rules in language appear to be of a different kind compared to rules in categorization. Second, rules processes are typically modeled as separate from similarity ones; for example, in a learning experiment, rules and similarity influences would be described on the basis of separate models. In the present article, I assume that the rules versus similarity distinction can be understood in the same way in learning, reasoning, categorization, and language, and that a unified model for rules and similarity is appropriate. A rules process is considered to be a similarity one where only a single or a small subset of an object's properties are involved. Hence, rules and overall similarity operations are extremes in a single continuum of similarity operations. It is argued that this viewpoint allows adequate coverage of theory and empirical findings in learning, reasoning, categorization, and language, and also a reassessment of the objectives in research on rules versus similarity.
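On this view, a rule is just a similarity computation whose attention weights collapse onto a single (or a few) criterial properties, while overall similarity spreads attention across all of them. A minimal sketch of that continuum (the items, weights, and exponential similarity form are hypothetical illustrations, not the article's formal model):

```python
import numpy as np

def attention_weighted_similarity(x, y, w, c=1.0):
    """Exemplar-style similarity with attention weights over properties;
    the weights are normalized so that rule-like operation is just an
    extreme setting of the same formula."""
    x, y, w = (np.asarray(v, float) for v in (x, y, w))
    w = w / w.sum()
    return float(np.exp(-c * np.sqrt(np.sum(w * (x - y) ** 2))))

# Two items that match on property 0 but differ a lot elsewhere (hypothetical).
item_1 = [1.0, 0.0, 5.0]
item_2 = [1.0, 9.0, -3.0]

# Overall-similarity operation: attention spread across all properties.
overall = attention_weighted_similarity(item_1, item_2, [1, 1, 1])

# Rule-like operation: attention collapsed onto the single criterial
# property, the other extreme of the same continuum.
rule_like = attention_weighted_similarity(item_1, item_2, [1, 0, 0])
print(round(overall, 4), rule_like)
```

Under the rule-like weighting the two items count as identical (they satisfy the same "rule" on property 0), while under overall similarity they are nearly maximally dissimilar, showing how one formula spans both operations.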
We address the problem of predicting how people will spontaneously divide into groups a set of novel items. This is a process akin to perceptual organization. We therefore employ the simplicity principle from perceptual organization to propose a simplicity model of unconstrained spontaneous grouping. The simplicity model predicts that people would prefer the categories for a set of novel items that provide the simplest encoding of these items. Classification predictions are derived from the model without information either about the number of categories sought or information about the distributional properties of the objects to be classified. These features of the simplicity model distinguish it from other models in unsupervised categorization (where, for example, the number of categories sought is determined via a free parameter), and we discuss how these computational differences are related to differences in modeling objectives. The predictions of the simplicity model are validated in four experiments. We also discuss the significance of simplicity in cognitive modeling more generally.
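The simplicity principle can be cashed out in minimum description length terms: a grouping is preferred when naming each item's cluster plus describing the within-cluster deviations yields the shortest total code. The sketch below is a generic two-part code of this kind, an assumed stand-in rather than the specific codelength scheme of the simplicity model itself:

```python
import numpy as np

def two_part_code_length(items, labels):
    """Crude two-part description length: bits to name each item's cluster
    plus bits reflecting how spread out each cluster is. Smaller = simpler."""
    items, labels = np.asarray(items, float), np.asarray(labels)
    clusters = sorted(set(labels.tolist()))
    index_bits = len(items) * np.log2(max(len(clusters), 2))
    error_bits = 0.0
    for c in clusters:
        members = items[labels == c]
        spread = members.std(axis=0).sum()
        error_bits += len(members) * np.log2(1.0 + spread)
    return index_bits + error_bits

# Four hypothetical items forming two obvious clumps.
items = [[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]]

tight = two_part_code_length(items, [0, 0, 1, 1])   # the "natural" grouping
mixed = two_part_code_length(items, [0, 1, 0, 1])   # a scrambled grouping
print(tight < mixed)   # the natural grouping yields the shorter code
```

As in the simplicity model, nothing here fixes the number of categories in advance: the code length itself penalizes both too many clusters (index cost) and too-loose clusters (error cost).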