Theories of generalization distinguish between elemental and configural stimulus processing, depending on whether stimuli in a compound are processed independently or as distinct entities. Evidence for elemental processing comes from findings of summation in animals, where a compound of two stimuli that independently predict an outcome is deemed to be more predictive of the outcome than each stimulus alone. Configural processing, on the other hand, is supported by experiments that fail to find this effect when the compound comprises similar stimuli. In humans, by contrast, summation seems to be robust and independent of similarity. We show how these results are best explained by an alternative view in which generalization comes about from a visual search process in which subjects process the most predictive or salient stimulus in a compound. We offer empirical support for this theory in three human experiments on causal learning and formalize a new elemental visual search model based on reinforcement learning principles which can capture the present and previous data on generalization, bridging two different research areas in psychology into a unitary framework.
Theories of learning distinguish between elemental and configural stimulus processing depending on whether stimuli are processed independently or as whole configurations. Evidence for elemental processing comes from findings of summation in animals, where a compound of two dissimilar stimuli is deemed to be more predictive than each stimulus alone, whereas configural processing is supported by experiments using similar stimuli in which summation is not found. However, in humans the summation effect is robust and impervious to similarity manipulations. In three experiments on human predictive learning, we show that summation can be obliterated when partially reinforced cues are added to the summands in training and tests. This lack of summation only holds when the partially reinforced cues are similar to the reinforced cues (experiment 1) and seems to depend on participants sampling only the most salient cue in each trial (experiments 2a and 2b) in a sequential visual search process. Rather than attributing these and other failures to find summation to the customary idea of configural processing, we offer a formal subsampling rule that may apply to situations in which the stimuli are hard to parse from each other.
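The contrast between summation under an elemental rule and its absence under a subsampling rule can be illustrated with a minimal sketch. This is not the authors' formal model; the function names, weights, and salience values below are hypothetical, chosen so that each cue alone fully predicts the outcome.

```python
# Illustrative sketch (not the paper's formal model): compare an elemental
# summation rule with a "process only the most salient cue" subsampling rule
# for a compound AB. All weights and salience values are hypothetical.

def elemental_prediction(weights, compound):
    # Elemental view: the compound's predictive value is the sum of the
    # values of its components, so AB is more predictive than A or B alone.
    return sum(weights[c] for c in compound)

def subsampling_prediction(weights, salience, compound):
    # Subsampling view: only the most salient cue in the compound is
    # processed, so the compound is no more predictive than a single cue.
    most_salient = max(compound, key=lambda c: salience[c])
    return weights[most_salient]

weights = {"A": 1.0, "B": 1.0}    # both cues trained to predict the outcome
salience = {"A": 0.6, "B": 0.4}   # hypothetical salience values

print(elemental_prediction(weights, "AB"))              # 2.0 -> summation
print(subsampling_prediction(weights, salience, "AB"))  # 1.0 -> no summation
```

Under the elemental rule the compound AB is rated twice as predictive as either cue; under the subsampling rule it is rated exactly as predictive as the single sampled cue, which is the signature of a null summation effect.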
Many research questions in sensory neuroscience involve determining whether the neural representation of a stimulus property is invariant to some irrelevant stimulus change (e.g., viewpoint-invariant face representations, or modality-invariant object representations). Most neuroimaging studies have tested invariance using operational tests that have only face validity, of which the most popular in recent years is the cross-classification test. A recently proposed theoretical framework suggests that operational tests of invariance commonly used in the neuroimaging literature, such as cross-classification, might lead to invalid conclusions. Here, we provide empirical and computational evidence supporting this theoretical insight. In our empirical study, we use encoding of orientation and spatial position in primary visual cortex as a case study, as previous research has established that these properties are not encoded in an invariant way. In a functional MRI study with human participants of both sexes, we show that the cross-classification test produces false positives, in many cases leading to the conclusion that orientation is encoded invariantly from spatial position, and that spatial position is encoded invariantly from orientation, in primary visual cortex. The results of two simulations further suggest that the test can lead to the conclusion of invariance when no sensible definition of invariance holds at the neural level, and that encoding strategies known to be used in cortex may easily lead to such false positives. On the other hand, we show that it is possible to provide evidence against invariance (i.e., context-dependent or configural encoding) through appropriate theory-driven decoding tests.
Many research questions in sensory neuroscience involve determining whether the neural representation of a stimulus property is invariant or specific to a particular stimulus context (e.g., Is object representation invariant to translation? Is the representation of a face feature specific to the context of other face features?). Between these two extremes, representations may also be context-tolerant or context-sensitive. Most neuroimaging studies have used operational tests in which a target property is inferred from a significant test against the null hypothesis of the opposite property. For example, the popular cross-classification test concludes that representations are invariant or tolerant when the null hypothesis of specificity is rejected. A recently developed neurocomputational theory suggests two insights regarding such tests. First, tests against the null of context-specificity, and for the alternative of context-invariance, are prone to false positives due to the way in which the underlying neural representations are transformed into indirect measurements in neuroimaging studies. Second, jointly performing tests against the nulls of invariance and specificity allows one to reach more precise and valid conclusions about the underlying representations, particularly when the null of invariance is tested using the fine-grained information from classifier decision variables rather than only accuracies (i.e., using the decoding separability test). Here, we provide empirical and computational evidence supporting both of these theoretical insights. In our empirical study, we use encoding of orientation and spatial position in primary visual cortex as a case study, as previous research has established that these properties are encoded in a context-sensitive way. 
Using fMRI decoding, we show that the cross-classification test produces false-positive conclusions of invariance, but that more valid conclusions can be reached by jointly performing tests against the nulls of invariance and specificity. The results of two simulations further support both of these conclusions. We conclude that more valid inferences about invariance or specificity of neural representations can be reached by jointly testing against both hypotheses, and by using neurocomputational theory to guide the interpretation of results.
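The logic of the cross-classification test described above can be sketched with simulated data. This is a toy illustration, not the paper's analysis pipeline: a decoder for orientation is trained on response patterns recorded at one spatial position and then tested on patterns from a different position, and above-chance cross accuracy is what the operational test reads as invariance. All pattern shapes and noise levels below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_voxels(orientation, position, n_trials=200, n_voxels=50):
    # Toy voxel patterns: orientation sets the pattern shape and position
    # adds a further modulation, so the measured pattern is not identical
    # across positions. Purely illustrative; no claim about real V1 coding.
    base = np.linspace(0, 1, n_voxels)
    pattern = np.sin(base * (orientation + 1) * 3) + 0.5 * position * base
    return pattern + rng.normal(0, 0.3, size=(n_trials, n_voxels))

def cross_classify(train_a, train_b, test_a, test_b):
    # Nearest-centroid decoder: centroids are fit at the training position,
    # then patterns from the *other* position are classified (the "cross"
    # step of the cross-classification test).
    ca, cb = train_a.mean(axis=0), train_b.mean(axis=0)
    tests = np.vstack([test_a, test_b])
    labels = np.array([0] * len(test_a) + [1] * len(test_b))
    preds = np.array([
        0 if np.linalg.norm(x - ca) < np.linalg.norm(x - cb) else 1
        for x in tests
    ])
    return np.mean(preds == labels)

# Decode orientations 0 vs. 1: train at position 0, test at position 1.
acc = cross_classify(
    simulate_voxels(0, 0), simulate_voxels(1, 0),
    simulate_voxels(0, 1), simulate_voxels(1, 1),
)
print(acc)  # above chance (0.5) even though position modulates the patterns
```

The point of the sketch is that cross-decoding accuracy can be well above chance even when the measured patterns change with the context variable, which is why above-chance cross-classification alone cannot license a conclusion of invariance and why the joint tests described above are needed.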