It is well established that neural imaging technology can predict preferences for consumer products. However, the applicability of this method to consumer marketing research remains uncertain, partly because of the expense involved. In this article, the authors demonstrate that neural measurements made with a relatively low-cost and widely available measurement method—electroencephalography (EEG)—can predict future choices of consumer products. In the experiment, participants viewed individual consumer products in isolation, without making any actual choices, while their neural activity was measured with EEG. At the end of the experiment, participants were offered choices between pairs of the same products. The authors find that neural activity measured at a midfrontal electrode shows an increased N200 component and weaker theta-band power for more preferred products. Using recent techniques for relating neural measurements to choice prediction, they demonstrate that these measures predict subsequent choices. Moreover, the accuracy of prediction depends on both the ordinal and cardinal distance of the EEG data: the larger the difference in EEG activity between two products, the better the predictive accuracy.
Normalization is a widespread neural computation, mediating divisive gain control in sensory processing and implementing a context-dependent value code in decision-related frontal and parietal cortices. Although decision-making is a dynamic process with complex temporal characteristics, most models of normalization are time-independent and little is known about the dynamic interaction of normalization and choice. Here, we show that a simple differential equation model of normalization explains the characteristic phasic-sustained pattern of cortical decision activity and predicts specific normalization dynamics: value coding during initial transients, time-varying value modulation, and delayed onset of contextual information. Empirically, we observe these predicted dynamics in saccade-related neurons in monkey lateral intraparietal cortex. Furthermore, such models naturally incorporate a time-weighted average of past activity, implementing an intrinsic reference-dependence in value coding. These results suggest that a single network mechanism can explain both transient and sustained decision activity, emphasizing the importance of a dynamic view of normalization in neural coding.
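To make the phasic-sustained intuition concrete, here is a minimal sketch of a dynamic divisive-normalization model. The equations, parameter names, and values are illustrative assumptions, not the authors' published model: an output rate is driven by value input divided by a gain signal that integrates contextual drive more slowly, so the response overshoots early (phasic transient) and then relaxes to a normalized sustained level.

```python
import numpy as np

def simulate(value_in, context_in, tau_r=0.02, tau_g=0.1,
             sigma=0.5, dt=0.001, T=1.0):
    """Illustrative dynamic normalization: dr/dt = (-r + V/(sigma+g))/tau_r,
    dg/dt = (-g + context)/tau_g. Gain g integrates slowly, producing a
    phasic transient followed by a sustained, normalized response."""
    steps = int(T / dt)
    r, g = 0.0, 0.0
    trace = np.zeros(steps)
    for t in range(steps):
        dr = (-r + value_in / (sigma + g)) / tau_r
        dg = (-g + context_in) / tau_g
        r += dr * dt
        g += dg * dt
        trace[t] = r
    return trace

high = simulate(value_in=2.0, context_in=2.0)  # higher-value option
low = simulate(value_in=1.0, context_in=2.0)   # lower-value option
```

Because the gain term lags the input, early activity mainly reflects raw value (value coding during initial transients), while the sustained level reflects value relative to the integrated context, consistent with the dynamics described above.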
Empirical decision-making in diverse species deviates from the predictions of normative choice theory, but why such suboptimal behavior occurs is unknown. Here, we propose that deviations from optimality arise from biological decision mechanisms that have evolved to maximize choice performance within intrinsic biophysical constraints. Sensory processing utilizes specific computations such as divisive normalization to maximize information coding in constrained neural circuits, and recent evidence suggests that analogous computations operate in decision-related brain areas. These adaptive computations implement a relative value code that may explain the characteristic context-dependent nature of behavioral violations of classical normative theory. Examining decision-making at the computational level thus provides a crucial link between the architecture of biological decision circuits and the form of empirical choice behavior.
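The context-dependence of a relative value code can be illustrated with a static divisive-normalization toy example. This is an assumed textbook form of the computation, not the specific model in the abstract: each option's coded value is its raw value divided by a term that grows with the summed value of the choice set, so adding an alternative compresses the coded difference between two fixed options — a context effect that normative theory rules out.

```python
def normalized(values, sigma=1.0):
    """Assumed static divisive normalization: V_i = v_i / (sigma + sum(v))."""
    denom = sigma + sum(values)
    return [v / denom for v in values]

two = normalized([3.0, 2.0])          # choice set {A, B}
three = normalized([3.0, 2.0, 1.5])   # add a third alternative C
gap_two = two[0] - two[1]             # coded advantage of A over B
gap_three = three[0] - three[1]       # shrinks once C is in the set
```

Under noisy readout of these coded values, the compressed gap makes choices between A and B more stochastic when C is present, even though A and B themselves are unchanged.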
Humans are often inconsistent (irrational) when choosing among simple bundles of goods, even without any particular changes to framing or context. However, the neural computations that give rise to such inconsistencies are still unknown. Similar to sensory perception and motor output, we propose that a substantial component of inconsistent behavior is due to variability in the neural computation of value. Here, we develop a novel index that measures the severity of inconsistency of each choice, enabling us to directly trace its neural correlates. We find that the BOLD signal in the vmPFC, ACC, and PCC is correlated with the severity of inconsistency on each trial and with the subjective value of the chosen alternative. This suggests that deviations from rational choice arise in the regions responsible for value computation. We offer a computational model of how variability in value computation is a source of inconsistent choices.
The standard framework for modeling stochastic choice, the random utility model, is agnostic about the temporal dynamics of the decision process. In contrast, a general class of bounded accumulation models from psychology and neuroscience explicitly relates decision times to stochastic choice behavior. This article demonstrates that a random utility model can be derived from the general class of bounded accumulation models, and characterizes how the resulting distribution of random utility depends on response time. This relationship can bias the estimation of structural preference parameters. The bias can be alleviated by including standard observables directly in the econometric specification, or by incorporating novel observables such as response time or neurobiological data. Examples of estimating risk and brand preferences are presented.
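A minimal simulation illustrates the link between bounded accumulation and stochastic choice. This is a generic drift-diffusion sketch with illustrative parameters, not the article's derivation: noisy evidence accumulates until it hits an upper or lower bound, the bound reached determines the choice, and the hitting time is the response time, so choice probabilities and response times arise from one process.

```python
import random

def ddm_trial(drift, bound=1.0, noise=1.0, dt=0.001, rng=random):
    """One bounded-accumulation (drift-diffusion) trial.
    Returns (choice, response_time): choice 1 if the upper bound
    is hit first (option A), 0 otherwise."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        # Euler step: deterministic drift plus Gaussian diffusion noise
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return (1 if x > 0 else 0), t

rng = random.Random(0)
trials = [ddm_trial(drift=0.5, rng=rng) for _ in range(2000)]
p_choose_a = sum(c for c, _ in trials) / len(trials)
```

With positive drift toward A, the simulated choice frequencies are logistic in the drift (as in a random utility model), while the same parameters also pin down the response-time distribution — the dependence the article exploits.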