How should tests (or queries, questions, or experiments) be selected? Does it matter if only a single test is allowed, or if a sequential test strategy can be planned in advance? This article contributes two sets of theoretical results bearing on these questions. First, for selecting a single test, several Optimal Experimental Design (OED) ideas have been proposed in statistics and other disciplines. The OED models are mathematically nontrivial. How is it that they often predict human behavior well? One possibility is that simple heuristics can approximate or exactly implement OED models. We prove that heuristics can identify the highest information value queries (as quantified by OED models) in several situations, thus providing a possible algorithmic-level theory of human behavior. Second, we address whether OED models are optimal for sequential search, as is frequently presumed. We consider the Person Game, a 20-questions scenario, as well as a two-category, binary feature scenario, both of which have been widely used in psychological research. In each task, we demonstrate via specific examples and extended computational simulations that neither the OED models nor the heuristics considered in the literature are optimal. Little research addresses human behavior in such situations. We call for experimental research into how people approach the sequential planning of tests, and theoretical research on what sequential planning procedures are most successful, and we offer a number of testable predictions for discriminating among candidate models.
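The claim that a simple heuristic can recover the highest-value query admits a small illustration (a toy sketch of our own, not code from the article; all names and data are invented): for noiseless yes/no questions under a uniform prior over hypotheses, expected information gain reduces to the binary entropy of the "yes" proportion, so a split-half heuristic, which simply prefers the question whose "yes" proportion is closest to 1/2, selects the same question as the OED criterion.

```python
import math

def binary_entropy(p):
    """Entropy in bits of a yes/no outcome with P(yes) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Toy Person-Game-like setup (our own example data): hypotheses are
# people, questions are binary features.
people = [
    {"glasses": 1, "hat": 0, "beard": 0},
    {"glasses": 1, "hat": 1, "beard": 0},
    {"glasses": 0, "hat": 0, "beard": 1},
    {"glasses": 0, "hat": 0, "beard": 0},
]
features = ["glasses", "hat", "beard"]

def p_yes(feature):
    """Proportion of remaining hypotheses that answer 'yes'."""
    return sum(person[feature] for person in people) / len(people)

# OED criterion: maximize expected information gain.  With a uniform
# prior and noiseless answers, EIG(q) = binary_entropy(p_yes(q)).
best_by_eig = max(features, key=lambda f: binary_entropy(p_yes(f)))

# Split-half heuristic: pick the question closest to an even split.
best_by_heuristic = min(features, key=lambda f: abs(p_yes(f) - 0.5))
```

In this toy set, "glasses" splits the four hypotheses evenly (p = 0.5) while the other features do not, so both the OED computation and the heuristic pick it; this is the single-test agreement the abstract describes, not the sequential case, where the abstract shows such greedy choices need not be optimal.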
A primary goal in recent research on contextuality has been to extend this concept to cases of inconsistent connectedness, where observables have different distributions in different contexts. This article proposes a solution within the framework of probabilistic causal models, which extend hidden-variables theories, and then demonstrates an equivalence to the contextuality-by-default (CbD) framework. CbD distinguishes contextuality from direct influences of context on observables, defining the latter purely in terms of probability distributions. Here, we take a causal view of direct influences, defining direct influence within any causal model as the probability of all latent states of the system in which a change of context changes the outcome of a measurement. Model-based contextuality (M-contextuality) is then defined as the necessity of stronger direct influences to model a full system than when its parts are considered individually. For consistently connected systems, M-contextuality agrees with standard contextuality. For general systems, it is proved that M-contextuality is equivalent to the property that any model of a system must contain ‘hidden influences’, meaning direct influences that go in opposite directions for different latent states, or, equivalently, signalling between observers that carries no information. This criterion can be taken as formalizing the ‘no-conspiracy’ principle that has been proposed in connection with CbD. M-contextuality is then proved to be equivalent to CbD-contextuality, thus providing a new interpretation of CbD-contextuality as the non-existence of a model for a system without hidden direct influences. This article is part of the theme issue ‘Contextuality and probability in quantum mechanics and beyond’.
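The causal definition of direct influence lends itself to a small computational sketch (a toy formalisation under our own naming conventions and numbers, not code or notation from the article): given a distribution over latent states and a response function mapping a (latent state, context) pair to an outcome, the direct influence on an observable is the total probability of latent states whose outcome depends on the context.

```python
def direct_influence(latent_dist, response, contexts):
    """Total probability of latent states for which changing the
    context changes the outcome of the measurement."""
    return sum(p for lam, p in latent_dist.items()
               if len({response(lam, c) for c in contexts}) > 1)

# Toy model (our own numbers): three latent states; the outcome is
# context-sensitive only in state "b".
latent = {"a": 0.5, "b": 0.3, "c": 0.2}
outcomes = {("a", "c1"): 0, ("a", "c2"): 0,
            ("b", "c1"): 0, ("b", "c2"): 1,
            ("c", "c1"): 1, ("c", "c2"): 1}

infl = direct_influence(latent, lambda lam, c: outcomes[(lam, c)],
                        ("c1", "c2"))
```

Here the direct influence is 0.3, the probability mass of the single context-sensitive latent state; the "hidden influences" of the abstract would correspond to two such states whose outcomes move in opposite directions as the context changes.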
Simple heuristics are often regarded as tractable decision strategies because they ignore a great deal of information in the input data. One puzzle is why heuristics can outperform full-information models, such as linear regression, which make full use of the available information. These "less-is-more" effects, in which a relatively simpler model outperforms a more complex model, are prevalent throughout cognitive science, and are frequently argued to demonstrate an inherent advantage of simplifying computation or ignoring information. In contrast, we show at the computational level (where algorithmic restrictions are set aside) that it is never optimal to discard information. Through a formal Bayesian analysis, we prove that popular heuristics, such as tallying and Take-the-Best, are formally equivalent to Bayesian inference in the limit of infinitely strong priors. Varying the strength of the prior yields a continuum of Bayesian models, with the heuristics at one end and ordinary regression at the other. Critically, intermediate models perform better across all our simulations, suggesting that down-weighting information with the appropriate prior is preferable to ignoring it entirely. Our analyses suggest that heuristics perform well not because of their simplicity, but because they implement strong priors that approximate the actual structure of the environment. We end by considering how new heuristics could be derived by infinitely strengthening the priors of other Bayesian models. These formal results have implications for work in psychology, machine learning and economics.
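The continuum between tallying and ordinary regression can be sketched numerically (a minimal illustration with our own toy data, assuming positively oriented cues so that the tallying weights are all +1; this is a standard MAP ridge formula, not the article's exact derivation): a Gaussian prior centred on the tallying weights m with precision λ gives the estimate w = (XᵀX + λI)⁻¹(Xᵀy + λm), so λ → 0 recovers least squares and λ → ∞ recovers tallying.

```python
def ridge_with_prior_mean(X, y, lam, m):
    """MAP estimate for a two-feature linear model under a Gaussian
    prior centred on m with precision lam:
        w = (X'X + lam*I)^-1 (X'y + lam*m)
    lam -> 0 recovers ordinary least squares; lam -> inf recovers m."""
    a = sum(x0 * x0 for x0, _ in X) + lam
    b = sum(x0 * x1 for x0, x1 in X)
    d = sum(x1 * x1 for _, x1 in X) + lam
    u = sum(x0 * t for (x0, _), t in zip(X, y)) + lam * m[0]
    v = sum(x1 * t for (_, x1), t in zip(X, y)) + lam * m[1]
    det = a * d - b * b
    return ((d * u - b * v) / det, (a * v - b * u) / det)

# Toy data (ours): two binary cues predicting a noisy criterion.
X = [(1, 0), (1, 1), (0, 1), (1, 0), (0, 0)]
y = [1.0, 1.4, 0.3, 0.9, 0.1]
tally = (1.0, 1.0)  # equal unit weights, i.e. tallying with positive cues

w_ols = ridge_with_prior_mean(X, y, 1e-9, tally)   # ~ ordinary regression
w_mid = ridge_with_prior_mean(X, y, 2.0, tally)    # intermediate model
w_tally = ridge_with_prior_mean(X, y, 1e9, tally)  # ~ tallying weights
```

Intermediate λ values shrink the least-squares weights toward the equal tallying weights, which is the "down-weighting rather than ignoring" regime the abstract argues performs best.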
Adapting flexibly to recent events is essential in everyday life. A robust measure of such adaptive behavior is the congruency sequence effect (CSE) in the prime-probe task, which refers to a smaller congruency effect after incongruent trials than after congruent trials. Prior findings indicate that the CSE in the prime-probe task reflects control processes that modulate response activation after the prime onsets but before the probe appears. They also suggest that similar control processes operate even in a modified prime-probe task wherein the initial prime is a relevant target, rather than merely a distractor. Because adaptive behavior frequently occurs in the absence of irrelevant stimuli, the present study investigates the nature of the control processes that operate in this modified prime-probe task. Specifically, it investigates whether these control processes modulate only the response cued by the prime (response-specific control) or also other responses (response-general control). To make this distinction, we employed a novel variant of the modified prime-probe task wherein primes and probes are mapped to different responses (i.e., effectors), such that only response-general control processes can engender a CSE. Critically, we observed a robust CSE in each of 2 experiments. This outcome supports the response-general control hypothesis. More broadly, it suggests that the control processes underlying the CSE overlap with general mechanisms for adapting to sequential dependencies in the environment.

Public Significance Statement: Adapting flexibly to recent events is a crucial aspect of cognitive control. For example, after discovering that a passenger's directions for reaching one destination are incorrect, a driver may become cautious about following the same passenger's directions to a second destination.
It remains unclear, however, exactly how control processes adapt flexibly to whether or not advance information (e.g., driving directions) was recently useful. More specifically, it remains unclear whether they adapt solely by modulating the response that advance information currently cues (e.g., by inhibiting a "turn left" response that a passenger suggests) or also by modulating a different response (e.g., by activating an alternative "turn right" response). Our findings support the latter possibility and thereby distinguish between competing accounts of adaptive control.
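The CSE measure used here can be made concrete with a short sketch (toy reaction-time data of our own invention, not data from the study): the congruency effect is mean incongruent RT minus mean congruent RT, and the CSE is that effect computed after congruent trials minus the same effect computed after incongruent trials.

```python
def congruency_sequence_effect(trials):
    """CSE = (congruency effect after congruent trials)
           - (congruency effect after incongruent trials),
    where the congruency effect is mean incongruent RT minus mean
    congruent RT.  `trials` is a list of (is_congruent, rt_ms)."""
    def mean(xs):
        return sum(xs) / len(xs)
    # Bin each trial's RT by (previous congruency, current congruency).
    bins = {("C", "C"): [], ("C", "I"): [], ("I", "C"): [], ("I", "I"): []}
    for prev, cur in zip(trials, trials[1:]):
        key = ("C" if prev[0] else "I", "C" if cur[0] else "I")
        bins[key].append(cur[1])
    ce_after_congruent = mean(bins[("C", "I")]) - mean(bins[("C", "C")])
    ce_after_incongruent = mean(bins[("I", "I")]) - mean(bins[("I", "C")])
    return ce_after_congruent - ce_after_incongruent

# Toy RT sequence (ours): True = congruent trial, times in ms.
trials = [(True, 400), (False, 480), (True, 430), (True, 390),
          (False, 470), (False, 460), (True, 410)]
cse = congruency_sequence_effect(trials)
```

A positive value indicates a smaller congruency effect after incongruent trials, the signature adaptive pattern the abstracts describe.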