Processing of binocular disparity is thought to be widespread throughout cortex, highlighting its importance for perception and action. Yet the computations and functional roles underlying this activity across areas remain largely unknown. Here, we trace the neural representations mediating depth perception across human brain areas using multivariate analysis methods and high-resolution imaging. Presenting disparity-defined planes, we determine functional magnetic resonance imaging (fMRI) selectivity for near versus far depth positions. First, we test the perceptual relevance of this selectivity, comparing the pattern-based decoding of fMRI responses evoked by random dot stereograms that support depth perception (correlated RDS) with the decoding of stimuli containing disparities to which the perceptual system is blind (anticorrelated RDS). Preferential disparity selectivity for correlated stimuli in dorsal (visual and parietal) areas and higher ventral area LO (lateral occipital area) suggests encoding of perceptually relevant information, in contrast to early (V1, V2) and intermediate ventral (V3v, V4) visual cortical areas that show similar selectivity for both correlated and anticorrelated stimuli. Second, manipulating disparity parametrically, we show that dorsal areas encode the metric disparity structure of the viewed stimuli (i.e., disparity magnitude), whereas ventral area LO appears to represent depth position in a categorical manner (i.e., disparity sign). Our findings suggest that activity in both visual streams is commensurate with the use of disparity for depth perception, but the neural computations may differ. Intriguingly, perceptually relevant responses in the dorsal stream are tuned to disparity content and emerge at an earlier stage than categorical representations for depth position in the ventral stream.
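The pattern-based decoding described above can be illustrated with a minimal sketch. The data here are synthetic, and the nearest-centroid classifier with leave-one-out cross-validation is only one common choice for multivoxel pattern analysis (the abstract does not specify the classifier used); all names and parameters are hypothetical:

```python
import math
import random

def nearest_centroid_decode(patterns, labels):
    """Leave-one-out decoding accuracy with a nearest-centroid classifier."""
    correct = 0
    for i in range(len(patterns)):
        # training set excludes the held-out trial i
        train = [(p, l) for j, (p, l) in enumerate(zip(patterns, labels)) if j != i]
        centroids = {}
        for lab in set(labels):
            members = [p for p, l in train if l == lab]
            centroids[lab] = [sum(v) / len(members) for v in zip(*members)]
        # assign the held-out trial to the nearest class centroid
        pred = min(centroids, key=lambda lab: math.dist(patterns[i], centroids[lab]))
        correct += pred == labels[i]
    return correct / len(patterns)

# synthetic "voxel" patterns: near/far conditions shift the mean
# response of the first 10 of 50 voxels (hypothetical numbers)
random.seed(0)
def make_trial(cond, n_vox=50):
    signal = 0.5 if cond == "near" else -0.5
    return [random.gauss(signal if v < 10 else 0.0, 1.0) for v in range(n_vox)]

labels = ["near", "far"] * 40
patterns = [make_trial(c) for c in labels]
acc = nearest_centroid_decode(patterns, labels)
print(f"decoding accuracy: {acc:.2f}")  # well above the 0.5 chance level
```

The logic mirrors the abstract's test: if an area carries no information about disparity sign (as with anticorrelated RDS in perceptually relevant areas), the signal term would be zero and accuracy would fall to chance.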
A method for estimating the configurational (i.e., non-kinetic) part of the entropy of internal motion in complex molecules is introduced that does not assume any particular parametric form for the underlying probability density function. It is based on the nearest-neighbor (NN) distances of the points of a sample of internal molecular coordinates obtained by a computer simulation of a given molecule. As the method does not make any assumptions about the underlying potential energy function, it accounts fully for any anharmonicity of internal molecular motion. It provides an asymptotically unbiased and consistent estimate of the configurational part of the entropy of the internal degrees of freedom of the molecule. The NN method is illustrated by estimating the configurational entropy of internal rotation of capsaicin and two stereoisomers of tartaric acid, and by providing a much closer upper bound on the configurational entropy of internal rotation of a pentapeptide molecule than that obtained by the standard quasi-harmonic method. As a measure of dependence between any two internal molecular coordinates, a general coefficient of association based on the information-theoretic quantity of mutual information is proposed. Using NN estimates of this measure, statistical clustering procedures can be employed to group the coordinates into clusters of manageable dimensions, characterized by minimal dependence between coordinates belonging to different clusters.
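The single-nearest-neighbor entropy estimator underlying this approach (the Kozachenko–Leonenko form) can be sketched as follows. This is a toy illustration on a uniform sample rather than molecular simulation data, and the brute-force neighbor search is for clarity only:

```python
import math
import random

def nn_entropy(sample, dim):
    """Kozachenko-Leonenko nearest-neighbor entropy estimate (in nats).

    sample: list of points, each a tuple of length `dim`.
    """
    n = len(sample)
    # distance from each point to its nearest neighbor (brute force, O(n^2))
    log_r = []
    for i, x in enumerate(sample):
        r = min(math.dist(x, y) for j, y in enumerate(sample) if j != i)
        log_r.append(math.log(r))
    # volume of the unit ball in `dim` dimensions
    v_d = math.pi ** (dim / 2) / math.gamma(dim / 2 + 1)
    euler_gamma = 0.5772156649015329
    return (dim / n) * sum(log_r) + math.log(v_d) + euler_gamma + math.log(n - 1)

random.seed(1)
sample = [(random.random(),) for _ in range(2000)]  # uniform on [0, 1)
h = nn_entropy(sample, dim=1)
print(f"estimated differential entropy: {h:.3f}")  # true value is 0 for U(0, 1)
```

The key property stated in the abstract is visible in the formula: nothing is assumed about the shape of the density, so anharmonic (e.g., multimodal) distributions of internal coordinates are handled the same way as harmonic ones. In practice a k-d tree or similar structure would replace the brute-force search.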
Extensive psychophysical and computational work proposes that the perception of coherent and meaningful structures in natural images relies on neural processes that convert information about local edges in primary visual cortex to complex object features represented in the temporal cortex. However, the neural basis of these mid-level vision mechanisms in the human brain remains largely unknown. Here, we examine functional MRI (fMRI) selectivity for global forms in the human visual pathways using sensitive multivariate analysis methods that take advantage of information across brain activation patterns. We use Glass patterns, parametrically varying the perceived global form (concentric, radial, translational) while ensuring that the local statistics remain similar. Our findings show a continuum of integration processes that convert selectivity for local signals (orientation, position) in early visual areas to selectivity for global form structure in higher occipitotemporal areas. Interestingly, higher occipitotemporal areas discern differences in global form structure rather than low-level stimulus properties with higher accuracy than early visual areas while relying on information from smaller but more selective neural populations (smaller voxel pattern size), consistent with global pooling mechanisms of local orientation signals. These findings suggest that the human visual system uses a code of increasing efficiency across stages of analysis that is critical for the successful detection and recognition of objects in complex environments.
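The stimulus construction described above can be sketched in a few lines: a Glass pattern pairs each randomly placed seed dot with a partner displaced along a direction set by a global transformation, so that dot density and pair separation are matched across conditions and only the orientation structure differs. The function name and the specific displacement value are hypothetical:

```python
import math
import random

def glass_dipoles(n, form, shift=0.02):
    """Generate n dot pairs (dipoles) for a Glass pattern in [-1, 1]^2.

    Each seed dot gets a partner displaced by `shift` along a direction
    determined by the global form, leaving local statistics matched.
    """
    dipoles = []
    for _ in range(n):
        x = random.uniform(-1, 1)
        y = random.uniform(-1, 1)
        if form == "concentric":       # partner along the local tangent
            theta = math.atan2(y, x) + math.pi / 2
        elif form == "radial":         # partner along the local radius
            theta = math.atan2(y, x)
        elif form == "translational":  # one fixed orientation everywhere
            theta = math.pi / 4
        else:
            raise ValueError(f"unknown form: {form}")
        partner = (x + shift * math.cos(theta), y + shift * math.sin(theta))
        dipoles.append(((x, y), partner))
    return dipoles

random.seed(2)
pattern = glass_dipoles(200, "concentric")
```

Because only the dipole orientation rule changes between conditions, decoding differences between, say, concentric and radial patterns cannot be attributed to low-level cues such as dot density, which is the logic the experiment relies on.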
Making successful decisions under uncertainty due to noisy sensory signals is thought to benefit from previous experience. However, the human brain mechanisms that mediate flexible decisions through learning remain largely unknown. Comparing behavioral choices of human observers with those of a pattern classifier based on multivoxel single-trial fMRI signals, we show that category learning shapes processes related to decision variables in frontal and higher occipitotemporal regions rather than signal detection or response execution in primary visual or motor areas. In particular, fMRI signals in prefrontal regions reflect the observers' behavioral choice according to the learned decision criterion only in the context of the categorization task. In contrast, higher occipitotemporal areas show learning-dependent changes in the representation of perceived categories that are sustained after training independent of the task. These findings demonstrate that learning shapes selective representations of sensory readout signals in accordance with the decision criterion to support flexible decisions.