Individuals with Autism Spectrum Disorder (ASD) appear to show a general face discrimination deficit across a range of tasks, including social–emotional judgments as well as identification and discrimination. However, functional magnetic resonance imaging (fMRI) studies probing the neural bases of these behavioral differences have produced conflicting results: while some studies have reported reduced or absent face-evoked activity in ASD in the Fusiform Face Area (FFA), a key region in human face processing, others have suggested more typical activation levels, possibly reflecting limitations of conventional fMRI techniques in characterizing neuron-level processing. Here, we test the hypotheses that face discrimination abilities are highly heterogeneous in ASD and are mediated by FFA neurons, with differences in face discrimination abilities quantitatively linked to variations in the estimated selectivity of face neurons in the FFA. Behavioral results revealed a wide distribution of face discrimination performance in ASD, ranging from typical to chance-level performance. Despite this heterogeneity in perceptual abilities, individual face discrimination performance was well predicted by neural selectivity to faces in the FFA, estimated via both a novel analysis of local voxel-wise correlations and the more commonly used fMRI rapid adaptation technique. Thus, face processing in ASD appears to rely on the FFA as in typical individuals, differing quantitatively but not qualitatively. These results mechanistically link, for the first time, variations in the ASD phenotype to specific differences in the typical face processing circuit, identifying promising targets for interventions.
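The abstract above refers to a novel analysis of local voxel-wise correlations without specifying the computation. As a rough illustration only, the Python sketch below shows one plausible way to summarize how correlated each voxel's face-response profile is with its spatial neighbours within an ROI; the function name, the inputs (`betas`, `coords`), and the neighbourhood rule are all hypothetical assumptions, not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' exact method): summarize local
# voxel-wise correlation of condition-response profiles within an ROI.
import numpy as np

def local_voxelwise_correlation(betas, coords, radius=1.5):
    """Mean correlation between each voxel and its spatial neighbours.

    betas  : (n_voxels, n_conditions) response estimates per voxel,
             e.g., betas for different face identities (hypothetical input).
    coords : (n_voxels, 3) voxel coordinates within the ROI.
    radius : neighbourhood radius in voxel units.
    """
    # Pairwise correlations of condition profiles across voxels.
    corr = np.corrcoef(betas)
    # Pairwise Euclidean distances between voxel coordinates.
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    local_means = []
    for i in range(betas.shape[0]):
        neighbours = (dist[i] > 0) & (dist[i] <= radius)
        if neighbours.any():
            local_means.append(corr[i, neighbours].mean())
    return float(np.mean(local_means))

# Toy usage with random data (illustration only).
rng = np.random.default_rng(0)
betas = rng.standard_normal((50, 12))             # 50 voxels x 12 conditions
coords = rng.integers(0, 5, size=(50, 3)).astype(float)
print(local_voxelwise_correlation(betas, coords))
```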
Changes in the spectral content of wide-band auditory stimuli have been repeatedly implicated as a possible cue to the distance of a sound source. Few of the previous studies of this factor, however, have considered whether the cue provided by spectral content serves as an absolute or a relative cue. That is, can differences in spectral content indicate systematic differences in distance even on their first presentation to a listener, or must the listener be able to compare sounds with one another in order to perceive some change in their distances? An attempt to answer this question, and simultaneously to evaluate the possibly confounding influence of changes in the sound level and/or the loudness of the stimuli, is described in this paper. The results indicate that a decrease in high-frequency content (as might physically be produced by passage through a greater amount of air) can lead to increases in perceived auditory distance, but only when compared with similar sounds having a somewhat different high-frequency content; i.e., spectral information can serve as a relative cue for auditory distance, independent of changes in overall sound level.
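To make the stimulus manipulation concrete, here is a minimal Python sketch of one way to attenuate high-frequency content in a wide-band noise, roughly as air absorption would over a longer path, while equating overall level to avoid the loudness confound discussed above. The cutoff frequencies, duration, and filter order are invented for illustration and are not the stimuli used in the study.

```python
# Illustrative sketch of the spectral manipulation, with invented parameters.
import numpy as np
from scipy.signal import butter, lfilter

fs = 44100                                        # sample rate (Hz)
rng = np.random.default_rng(0)
noise = rng.standard_normal(int(fs * 0.5))        # 0.5 s wide-band stimulus

def lowpass(x, cutoff_hz, fs, order=4):
    """Butterworth low-pass filter; lower cutoffs mimic greater distance."""
    b, a = butter(order, cutoff_hz / (fs / 2), btype="low")
    return lfilter(b, a, x)

near = lowpass(noise, 12000, fs)                  # more high-frequency energy
far = lowpass(noise, 6000, fs)                    # reduced high-frequency energy

# Equate RMS level so sound-level differences do not confound the cue.
far *= np.sqrt(np.mean(near**2) / np.mean(far**2))
```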
Professions such as radiology and aviation security screening that rely on visual search—the act of looking for targets among distractors—often cannot provide operators with immediate feedback, which can create situations where performance may be largely driven by the searchers' own expectations. For example, if searchers do not expect relatively hard-to-spot targets to be present in a given search, they may find easy-to-spot targets but systematically quit searching before finding more difficult ones. Without feedback, searchers can create self-fulfilling prophecies in which they incorrectly reinforce initial biases (e.g., first assuming, and then perhaps wrongly concluding, that hard-to-spot targets are rare). In the current study, two groups of searchers completed an identical visual search task but with a single difference in their initial task instructions before the experiment started: those in the “high-expectation” condition were told that each trial could have one or two targets present (i.e., correctly implying no target-absent trials), and those in the “low-expectation” condition were told that each trial would have up to two targets (i.e., incorrectly implying there could be target-absent trials). Compared to the high-expectation group, the low-expectation group had a lower hit rate and a lower false-alarm rate and quit trials more quickly, consistent with a lower quitting threshold (i.e., performing less exhaustive searches) and a potentially higher target-present decision criterion. The expectation effect was present from the start and persisted across the experiment: despite exposure to the same true distribution of targets, the groups' performances remained divergent, primarily driven by the different subjective experiences caused by each group's self-fulfilling prophecy. The effects were limited to the single-target trials, which provides insight into the mechanisms affected by the initial expectations set by the instructions. In sum, initial expectations can have dramatic influences: searchers who do not expect to find a target are less likely to find one because they are more likely to quit searching earlier.
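The hit-rate, false-alarm-rate, and decision-criterion quantities invoked above come from standard signal detection theory. As a minimal sketch, not the authors' analysis code, the Python below computes sensitivity (d') and criterion (c) from raw trial counts; the counts in the usage example are invented solely to show how a group with both a lower hit rate and a lower false-alarm rate comes out with a more conservative (higher) criterion.

```python
# Standard signal-detection measures from trial counts (illustrative only).
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion) from raw trial counts."""
    z = NormalDist().inv_cdf
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hr) - z(far)
    criterion = -0.5 * (z(hr) + z(far))
    return d_prime, criterion

# Invented counts: similar d', but the second (low-expectation-like)
# pattern of fewer hits AND fewer false alarms yields a higher criterion.
print(sdt_measures(70, 30, 10, 90))   # ~ (1.78, 0.37)
print(sdt_measures(55, 45, 4, 96))    # ~ (1.82, 0.79)
```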
The ability to recognize objects in clutter is crucial for human vision, yet the underlying neural computations remain poorly understood. Previous single-unit electrophysiology recordings in inferotemporal cortex in monkeys and fMRI studies of object-selective cortex in humans have shown that responses to pairs of objects can sometimes be well described as a weighted average of the responses to the constituent objects. Yet, from a computational standpoint, it is not clear how the challenge of object recognition in clutter can be solved if downstream areas must disentangle the identities of an unknown number of individual objects from the confounded average neuronal responses. An alternative idea is that recognition is based on a subpopulation of neurons that are robust to clutter, i.e., that do not show response averaging but rather retain object-selective responses in the presence of clutter. Here we show that simulations using the HMAX model of object recognition in cortex can fit the aforementioned single-unit and fMRI data, demonstrating that averaging-like responses can be understood as responses of object-selective neurons to suboptimal stimuli. Moreover, the model shows how object recognition can be achieved by a sparse readout of neurons whose selectivity is robust to clutter. Finally, the model provides a novel prediction about human object recognition performance: target recognition ability should show a U-shaped dependency on the similarity of simultaneously presented clutter objects. This prediction is confirmed experimentally, supporting a simple, unifying model of how the brain performs object recognition in clutter.
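The weighted-average description tested above can be made concrete with a short sketch. Assuming hypothetical response arrays (not the recorded data), the Python below fits the response to an object pair as a weighted sum of the responses to each object alone: pure averaging predicts weights near (0.5, 0.5), whereas a clutter-robust neuron would keep a weight near 1 for its preferred object.

```python
# Sketch of the response-averaging account: fit paired-object responses
# as a weighted sum of isolated-object responses (hypothetical data).
import numpy as np

def fit_pair_weights(r_a, r_b, r_pair):
    """Least-squares weights w such that r_pair ≈ w[0]*r_a + w[1]*r_b.

    r_a, r_b : (n_neurons,) responses to each object presented alone.
    r_pair   : (n_neurons,) responses to the two objects together.
    """
    X = np.column_stack([r_a, r_b])
    w, *_ = np.linalg.lstsq(X, r_pair, rcond=None)
    return w

# Simulated population whose pair responses follow pure averaging.
rng = np.random.default_rng(1)
r_a, r_b = rng.random(100), rng.random(100)
r_pair = 0.5 * r_a + 0.5 * r_b + 0.02 * rng.standard_normal(100)
print(fit_pair_weights(r_a, r_b, r_pair))   # ~[0.5, 0.5] under averaging
```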