A common question in perceptual science is to what extent different stimulus dimensions are processed independently. General recognition theory (GRT) offers a formal framework within which different notions of independence can be defined and tested rigorously, while also dissociating perceptual from decisional factors. This article presents a new GRT model that overcomes several shortcomings of previous approaches, including a clearer separation between perceptual and decisional processes and a more complete description of those processes. The model assumes that different individuals share similar perceptual representations but vary in their attention to dimensions and in the decisional strategies they use. We apply the model to the analysis of interactions between identity and emotional expression during face recognition, a problem for which previous research has produced disparate results. Participants identified four faces, created by combining two identities and two emotional expressions. An analysis using the new GRT model showed a complex pattern of dimensional interactions: the perception of emotional expression was not affected by changes in identity, but the perception of identity was affected by changes in emotional expression. There were violations of decisional separability of expression from identity and of identity from expression, with the former being more consistent across participants than the latter. One explanation for the disparate results in the literature is that decisional strategies may have varied across studies and influenced the results of tests of perceptual interactions, as previous studies lacked the ability to dissociate perceptual from decisional interactions.
Many research questions in visual perception involve determining whether stimulus properties are represented and processed independently. In visual neuroscience, there is great interest in determining whether important object dimensions are represented independently in the brain. For example, theories of face recognition have proposed either completely or partially independent processing of identity and emotional expression. Unfortunately, most previous research has only vaguely defined what is meant by “independence,” which hinders its precise quantification and testing. This article develops a new quantitative framework that links signal detection theory from psychophysics and encoding models from computational neuroscience, focusing on a special form of independence defined in the psychophysics literature: perceptual separability. The new theory allowed us, for the first time, to precisely define separability of neural representations and to theoretically link behavioral and brain measures of separability. The framework formally specifies the relation between these different levels of perceptual and brain representation, providing the tools for a truly integrative research approach. In particular, the theory identifies exactly what valid inferences can be made about independent encoding of stimulus dimensions from the results of multivariate analyses of neuroimaging data and psychophysical studies. In addition, commonly used operational tests of independence are re-interpreted within this new theoretical framework, providing insights into their correct use and interpretation. Finally, we apply this new framework to the study of separability of brain representations of face identity and emotional expression (neutral/sad) in a human fMRI study with male and female participants.
Feedback is highly contingent on behavior if it eventually becomes easy to predict, and weakly contingent on behavior if it remains difficult or impossible to predict even after learning is complete. Many studies have demonstrated that humans and nonhuman animals are highly sensitive to feedback contingency, but no known studies have examined how feedback contingency affects category learning, and current theories assign little or no importance to this variable. Two experiments examined the effects of contingency degradation on rule-based and information-integration category learning. In rule-based tasks, optimal accuracy is possible with a simple explicit rule, whereas optimal accuracy in information-integration tasks requires integrating information from two or more incommensurable perceptual dimensions. In both experiments, participants each learned rule-based or information-integration categories under either high or low levels of feedback contingency. The exact same stimuli were used in all four conditions and optimal accuracy was identical in every condition. Learning was good in both high-contingency conditions, but most participants showed little or no evidence of learning in either low-contingency condition. Possible causes of these effects are discussed, as well as their theoretical implications.
Most behaviors unfold in time and include a sequence of submovements or cognitive activities. In addition, most behaviors are automatic and repeated daily throughout life. Yet, relatively little is known about the neurobiology of automatic sequence production. Past research suggests a gradual transfer from the associative striatum to the sensorimotor striatum, but a number of more recent studies challenge this role of the basal ganglia (BG) in automatic sequence production. In this article, we propose a new neurocomputational model of automatic sequence production in which the main role of the BG is to train cortico-cortical connections within the premotor areas that are responsible for automatic sequence production. The new model is used to simulate four different data sets from human and nonhuman animals, including (1) behavioral data (e.g., response times), (2) electrophysiology data (e.g., single-neuron recordings), (3) macrostructure data (e.g., TMS), and (4) neurological circuit data (e.g., inactivation studies). We conclude with a comparison of the new model with existing models of automatic sequence production and discuss a possible new role for the BG in automaticity and its implications for Parkinson's disease.
Many research questions in visual perception involve determining whether stimulus properties are represented and processed independently. In visual neuroscience, there is great interest in determining whether important object dimensions are represented independently in the brain. Unfortunately, most previous research has only vaguely defined what is meant by "independence," which hinders its precise quantification and testing. Here we present a new quantitative framework that links general recognition theory (GRT) and encoding models from computational neuroscience, focusing on a special form of independence: perceptual separability. Without loss of generality, consider the special case in which stimuli vary along two stimulus dimensions, A and B, each with only two levels, indexed by i = 1, 2 for dimension A and j = 1, 2 for dimension B. A stimulus is represented by a combination of these dimension levels, A_i B_j. In the computational neuroscience literature, an encoding model is a formal representation of the relation between stimuli and the responses of a number of channels (single neurons or neural populations), represented by r. The channel responses are assumed to be random variables, and thus the response of the model is characterized by a probability distribution p(r | A_i B_j, θ), where θ represents a set of parameters describing neural noise. Encoding separability of dimension A from dimension B holds when the encoding of the value of A does not change with the stimulus's value on B; that is, if and only if, for all values of r and i:

p(r | A_i B_1, θ) = p(r | A_i B_2, θ).     (1)

The term neural decoding refers both to a series of methods used by researchers to extract information about a stimulus from neural data and to the mechanisms used by readout neurons to extract similar information.
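The encoding separability condition in Equation 1 can be illustrated with a minimal simulation. The sketch below assumes a hypothetical two-channel Gaussian encoding model (the means in `mu` and the noise level `sigma` are illustrative choices, not part of the original framework), in which channel 1 encodes dimension A and its response distribution is unaffected by the level of B:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Gaussian encoding model: each stimulus A_i B_j drives two
# channels with mean responses mu[(i, j)] and independent noise sigma.
sigma = 1.0
mu = {
    (1, 1): np.array([2.0, 0.0]),  # channel means for stimulus A1B1
    (1, 2): np.array([2.0, 3.0]),  # A1B2: channel 1's mean unchanged by B
    (2, 1): np.array([5.0, 0.0]),
    (2, 2): np.array([5.0, 3.0]),
}

def sample_responses(i, j, n=10_000):
    """Draw n samples of the channel response vector r for stimulus A_i B_j."""
    return rng.normal(loc=mu[(i, j)], scale=sigma, size=(n, 2))

# Encoding separability of A from B requires that channel 1's response
# distribution to A_i be identical across levels of B (Equation 1). Here the
# model satisfies it by construction, so the empirical means agree closely.
for i in (1, 2):
    r_b1 = sample_responses(i, 1)[:, 0]
    r_b2 = sample_responses(i, 2)[:, 0]
    assert abs(r_b1.mean() - r_b2.mean()) < 0.1
```

Changing channel 1's mean across levels of B (e.g., making `mu[(1, 2)][0]` differ from `mu[(1, 1)][0]`) would violate Equation 1, since the response distribution for A_1 would then depend on the level of B.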
If a dimension is encoded by N channels, then the decoded estimate of a dimensional value is Â = g(r), where g() is a function from R^N to R. Because r is a random vector, the decoded value Â is a random variable that follows a probability distribution p(Â | A_i B_j, θ). Decoding separability of dimension A from dimension B holds when the distribution of decoded values of A does not change with the value of B in the stimulus; that is, if and only if, for all values of Â and i:

p(Â | A_i B_1, θ) = p(Â | A_i B_2, θ).

It can be shown that encoding separability and decoding separability are related as summarized in Figure 1. In addition, when decoding separability is measured through the L1 distance between kernel density estimates of p(Â | A_i B_j, θ), the relations in Figure 1 hold even if decoding is performed on indirect measures of neural activity contaminated with error, such as those obtained through fMRI. Figure 1 entails that when decoding separability is measured and fails, one can make the valid inference that encoding separability fails as well; but when decoding separability holds, only weak evidence that encoding separability holds has been obtained.

[Figure 1: the possible combinations of encoding separability (holds/fails) and decoding separability (holds/fails).]

Importantly, it is possible to link these ideas to GRT by assumi...
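The measurement procedure described above, the L1 distance between kernel density estimates of the decoded values, can be sketched as follows. This is a minimal illustration, not the authors' analysis code: the decoded samples are synthetic, and the grid-based integration of |p_x − p_y| is one common way to approximate the L1 distance:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def l1_kde_distance(x, y, grid_size=512):
    """Approximate the L1 distance between kernel density estimates of two
    samples of decoded values by integrating |p_x - p_y| on a shared grid."""
    lo = min(x.min(), y.min())
    hi = max(x.max(), y.max())
    grid = np.linspace(lo, hi, grid_size)
    px = gaussian_kde(x)(grid)
    py = gaussian_kde(y)(grid)
    return np.sum(np.abs(px - py)) * (grid[1] - grid[0])

# Hypothetical decoded estimates of dimension A for stimuli A_1B_1 vs. A_1B_2.
# Same distribution across levels of B: decoding separability holds,
# so the L1 distance should be near 0.
a_hat_b1 = rng.normal(0.0, 1.0, 5_000)
a_hat_b2 = rng.normal(0.0, 1.0, 5_000)
# Shifted distribution: decoding separability fails, distance is larger.
a_hat_b2_shifted = rng.normal(2.0, 1.0, 5_000)

d_holds = l1_kde_distance(a_hat_b1, a_hat_b2)
d_fails = l1_kde_distance(a_hat_b1, a_hat_b2_shifted)
assert d_holds < d_fails
```

Per Figure 1, a clearly nonzero distance (as in the shifted case) licenses the inference that encoding separability fails, whereas a distance near zero provides only weak evidence that encoding separability holds.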