Behavioural studies spanning half a century indicate that making categorical choices alters beliefs about the state of the world. People appear biased to confirm previous choices and to suppress contradicting information. These choice-dependent biases imply a fundamental bound on human rationality. However, it remains unclear whether these effects extend to lower-level decisions, and little is known about the computational mechanisms underlying them. Building on the framework of sequential-sampling models of decision-making, we developed novel psychophysical protocols that enable us to quantitatively dissect how choices affect the way decision-makers accumulate additional noisy evidence. We find robust choice-induced biases in the accumulation of abstract numerical (experiment 1) and low-level perceptual (experiment 2) evidence. These biases degrade estimates of the mean value of the numerical sequence (experiment 1) and reduce the likelihood of revising decisions (experiment 2). Computational modelling reveals that choices trigger a reduction of sensitivity to subsequent evidence via multiplicative gain modulation, rather than shifting the decision variable towards the chosen alternative in an additive fashion. Our results thus show that categorical choices alter the evidence-accumulation mechanism itself, rather than just its outcome, rendering the decision-maker less sensitive to new information.
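The two candidate mechanisms contrasted by the modelling can be sketched in a toy accumulator. This is a minimal illustration, not the fitted model; the parameter values (`bias`, `gain`) and function names are assumptions for exposition.

```python
def accumulate(evidence, choice_sign, mode, bias=0.5, gain=0.6):
    """Integrate noisy evidence after an initial categorical choice.

    mode='additive': shift the decision variable toward the chosen side
                     by a constant increment on every sample.
    mode='gain':     multiplicatively down-weight all post-choice evidence,
                     reducing sensitivity to new information.
    (Illustrative sketch; parameter values are assumptions.)
    """
    dv = 0.0
    for e in evidence:
        if mode == "additive":
            dv += e + choice_sign * bias   # constant push toward the choice
        else:
            dv += gain * e                 # attenuated impact of each sample
    return dv
```

Note the qualitative difference: the additive account predicts a bias that grows with the number of post-choice samples regardless of their content, whereas gain modulation predicts that disconfirming and confirming evidence are attenuated alike.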
We investigated the mechanisms by which humans estimate numerical averages. Participants were presented with 4, 8, or 16 two-digit numbers, serially and rapidly (2 numerals/second), and were instructed to report the sequence average. As predicted by a dual-component, but not a single-component, account, we found a non-monotonic influence of set size on accuracy. Moreover, we observed a marked decrease in RT as set size increases, and an RT-accuracy tradeoff in the 4-number, but not the 16-number, condition. These results indicate that, in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/second), we find that, while performance remains high, estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population coding underlying intuitive averaging and working-memory-mediated symbolic procedures underlying analytic averaging, with flexible allocation between the two.
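The flexible allocation between the two routes can be caricatured as a capacity-gated strategy choice. This is a deliberately minimal sketch, not the fitted model; the capacity and noise values are illustrative assumptions.

```python
import numpy as np

def estimate_average(numbers, wm_capacity=4, intuitive_noise=3.0, rng=None):
    """Dual-process averaging sketch (assumed parameterization).

    Sets within working-memory capacity are averaged analytically
    (exact, slow); larger sets fall back on a fast but noisy
    holistic/intuitive estimate.
    """
    rng = rng or np.random.default_rng(0)
    if len(numbers) <= wm_capacity:
        return float(np.mean(numbers))                        # analytic route
    return float(np.mean(numbers) + rng.normal(0, intuitive_noise))  # intuitive route
```

The gate reproduces the qualitative pattern in the abstract: exact estimates (with an RT cost) for small sets, approximate but fast estimates for large ones.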
The distinction between access consciousness and phenomenal consciousness is a subject of intensive debate. According to one view, visual experience overflows the capacity of the attentional and working memory systems: we see more than we can report. According to the opposing view, this perceived richness is an illusion: we are aware only of information that we can subsequently report. This debate remains unresolved because of the inevitable reliance on report, which is limited in capacity. To bypass this limitation, this study utilized color diversity, a summary statistic that is sensitive to detailed visual information. Participants were shown a Sperling-like array of colored letters, one row of which was precued. After reporting a letter from the cued row, participants estimated the color diversity of the noncued rows. Results showed that people could estimate the color diversity of the noncued array without a cost to letter report, which suggests that color diversity is registered automatically, outside focal attention, and without consuming additional working memory resources.
The minimal state of consciousness is sentience. This includes any phenomenal sensory experience – exteroceptive, such as vision and olfaction; interoceptive, such as pain and hunger; or proprioceptive, such as the sense of bodily position and movement. We propose unlimited associative learning (UAL) as the marker of the evolutionary transition to minimal consciousness (or sentience), its phylogenetically earliest sustainable manifestation and the driver of its evolution. We define and describe UAL at the behavioral and functional level and argue that the structural-anatomical implementations of this mode of learning in different taxa entail subjective feelings (sentience). We end with a discussion of the implications of our proposal for the distribution of consciousness in the animal kingdom, suggesting testable predictions, and revisiting the ongoing debate about the function of minimal consciousness in light of our approach.
Perceptual decisions are thought to be mediated by a mechanism of sequential sampling and integration of noisy evidence, whose temporal weighting profile affects decision quality. To examine temporal weighting, participants were presented with two brightness-fluctuating disks for 1, 2, or 3 seconds and were requested to choose the overall brighter disk at the end of each trial. By employing a signal-perturbation method, which deploys across trials a set of systematically controlled temporal dispersions of the same overall signal, we were able to quantify the participants' temporal weighting profile. Results indicate that, for intervals of 1 or 2 sec, participants exhibit a primacy bias. However, for longer stimuli (3 sec) the temporal weighting profile is non-monotonic, with concurrent primacy and recency, which is inconsistent with the predictions of previously suggested computational models of perceptual decision-making (drift-diffusion and Ornstein-Uhlenbeck processes). We propose a novel, dynamic variant of the leaky-competing accumulator model as a potential account for this finding, and we discuss potential neural mechanisms.
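A minimal (non-dynamic) leaky-competing accumulator, the model family on which the proposed dynamic variant builds, can be sketched for two alternatives as follows. The parameter values are illustrative assumptions, not the fitted ones; the dynamic variant would additionally let leak and inhibition vary over time.

```python
import numpy as np

def lca_trial(inputs, dt=0.01, leak=0.2, inhib=0.2, noise=0.1, rng=None):
    """Leaky-competing accumulator for two units.

    inputs: array of shape (T, 2) with momentary evidence per alternative.
    Each unit is driven by its input, decays with `leak`, is suppressed
    by the other unit with `inhib`, and is clipped at zero (activations
    cannot go negative). Returns the full trajectories, shape (T, 2).
    """
    rng = rng or np.random.default_rng(0)
    x = np.zeros(2)
    traj = np.zeros_like(inputs, dtype=float)
    for t, I in enumerate(inputs):
        dx = (I - leak * x - inhib * x[::-1]) * dt \
             + noise * np.sqrt(dt) * rng.standard_normal(2)
        x = np.maximum(x + dx, 0.0)   # non-negativity constraint
        traj[t] = x
    return traj
```

With leak dominating, early samples decay away and the model produces recency; with inhibition dominating, an early leader suppresses its competitor and the model produces primacy. A fixed setting yields one or the other, which is why a time-varying (dynamic) parameterization is needed for concurrent primacy and recency.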
Humans possess a remarkable ability to rapidly form coarse estimations of numerical averages. This ability is important for making decisions that are based on streams of numerical or value-based information, as well as for preference formation. Nonetheless, the mechanism underlying rapid approximate numerical averaging remains unknown, and several competing mechanisms may account for it. Here, we tested the hypothesis that approximate numerical averaging relies on perceptual-like processes, instantiated by population coding. Participants were presented with rapid sequences of numerical values (four items per second) and were asked to convey the sequence average. We manipulated the sequences' length, variance, and mean magnitude and found that, as in perceptual averaging, the precision of the estimations improves with sequence length and deteriorates with higher variance or higher magnitude. To account for the results, we developed a biologically plausible population-coding model and showed that it is mathematically equivalent to a population vector. Using both quantitative and qualitative model comparison methods, we compared the population-coding model to several competing models, such as a step-by-step running average (based on leaky integration) and a midrange model. We found that the data support the population-coding model. We conclude that humans' ability to rapidly form estimations of numerical averages has many properties of the perceptual (intuitive) system rather than the arithmetic, linguistic-based (analytic) system, and that population coding is likely to be its underlying mechanism.
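The population-vector readout at the heart of such a model can be sketched as follows. The linear number line, the detector grid, and the Gaussian tuning width are assumptions for illustration, not the paper's fitted parameters.

```python
import numpy as np

def population_average(sequence, preferred=None, tuning_width=5.0):
    """Population-vector readout of a number sequence's average.

    Each presented number activates detectors with Gaussian tuning
    curves on the number line; activity accumulates across the sequence,
    and the estimate is the activity-weighted mean of the detectors'
    preferred values. (Illustrative sketch.)
    """
    preferred = np.arange(0, 100) if preferred is None else preferred
    activity = np.zeros(len(preferred), dtype=float)
    for n in sequence:
        activity += np.exp(-0.5 * ((preferred - n) / tuning_width) ** 2)
    return np.sum(preferred * activity) / np.sum(activity)
```

Because activity simply sums across items, the readout approximates the mean without any explicit division by the item count, in contrast to a step-by-step running average.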
The parietal cortex has been implicated in a variety of numerosity and numerical cognition tasks and has been proposed to encompass dedicated neural populations that are tuned for analogue magnitudes as well as for symbolic numerals. Nonetheless, it remains unknown whether the parietal cortex plays a role in approximate numerical averaging (rapid, yet coarse, computation of numbers' mean), a process fundamental to preference formation and decision-making. To causally investigate the role of the parietal cortex in numerical averaging, we conducted a transcranial direct current stimulation (tDCS) study, in which participants were presented with rapid sequences of numbers and asked to convey their intuitive estimation of each sequence's average. During the task, the participants underwent anodal (excitatory) tDCS (or sham), applied to either a parietal or a frontal region. We found that, although participants exhibited above-chance accuracy in estimating the average of numerical sequences in all conditions, they did so with higher precision under parietal stimulation. In a second experiment, we replicated this finding and confirmed that the effect is number-specific rather than domain-general or attentional. We present a neurocomputational model postulating population coding underlying rapid numerical averaging to account for our findings. According to this model, stimulation of the parietal cortex elevates neural activity in dedicated number-tuned detectors, leading to an increase in the system's signal-to-noise level and thus resulting in more precise estimations.
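The proposed gain account can be illustrated with a toy simulation: multiplying detector activity by a gain factor (a stand-in for anodal stimulation) before adding readout noise raises the signal-to-noise ratio and tightens the estimates. All parameter values here are illustrative assumptions.

```python
import numpy as np

def noisy_population_estimate(sequence, gain=1.0, tuning_width=5.0,
                              noise_sd=0.5, rng=None):
    """Population-coding estimate with multiplicative gain (tDCS analogue).

    Stimulation is modelled as a gain factor on number-tuned detectors;
    independent noise is added before readout, so higher gain means a
    higher signal-to-noise ratio and a more precise estimate. (Sketch.)
    """
    rng = rng or np.random.default_rng(0)
    preferred = np.arange(0.0, 100.0)
    activity = np.zeros_like(preferred)
    for n in sequence:
        activity += gain * np.exp(-0.5 * ((preferred - n) / tuning_width) ** 2)
    # readout noise, clipped so firing rates stay non-negative
    activity = np.maximum(activity + rng.normal(0, noise_sd, preferred.shape), 0)
    return np.sum(preferred * activity) / np.sum(activity)
```

Simulating many trials at low versus high gain shows the spread of the estimates shrinking as gain grows, mirroring the higher precision observed under parietal stimulation.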
The quantity and nature of the processes underlying recognition memory remain an open question. A majority of behavioral, neuropsychological, and brain studies have suggested that recognition memory is supported by two dissociable processes: recollection and familiarity. It has conversely been argued, however, that recollection and familiarity map onto a single continuum of mnemonic strength and hence that recognition memory is mediated by a single process. Previous electrophysiological studies found marked dissociations between recollection and familiarity, which have been widely held to corroborate the dual-process account. However, it remains unknown whether a strength interpretation can likewise apply to these findings. Here we describe an ERP study, using a modified remember-know (RK) procedure, which allowed us to control for mnemonic strength. We find that ERPs of high and low mnemonic strength mimicked the electrophysiological distinction between R and K responses in a lateral positive component (LPC), 500-1000 msec post-stimulus onset. Critically, when contrasting strength with RK experience, by comparing weak R to strong K responses, the electrophysiological signal mapped onto strength, not onto subjective RK experience. Invoking the LPC as support for dual-process accounts may, therefore, be amiss.