The current experiment investigated the extent to which perceptual categorization of animacy (i.e., the ability to discriminate animate and inanimate objects) is facilitated by image-based features that distinguish the two object categories. We show that, with nominal training, naïve macaques could classify a trial-unique set of 1000 novel images with high accuracy. To test whether image-based features that naturally differ between animate and inanimate objects, such as curvilinear and rectilinear information, contribute to the monkeys’ accuracy, we created synthetic images using an algorithm that distorted the global shape of the original animate/inanimate images while maintaining their intermediate features (Portilla & Simoncelli, 2000). Performance on the synthesized images was significantly above chance and was predicted by the amount of curvilinear information in the images. Our results demonstrate that, without training, macaques can use an intermediate image feature, curvilinearity, to facilitate their categorization of animate and inanimate objects.
The right and left cerebral hemispheres are important for face and word recognition, respectively—a specialization that emerges over human development. The question is whether this bilateral distribution is necessary or whether a single hemisphere, be it left or right, can support both face and word recognition. Here, face and word recognition accuracy in patients (median age 16.7 y) with a single hemisphere following childhood hemispherectomy was compared against matched typical controls. In experiment 1, participants viewed stimuli in central vision. Across both face and word tasks, accuracy of both left and right hemispherectomy patients, while significantly lower than controls' accuracy, averaged above 80% and did not differ from each other. To compare patients' single hemisphere more directly to one hemisphere of controls, in experiment 2, participants viewed stimuli in one visual field to constrain initial processing chiefly to a single (contralateral) hemisphere. Whereas controls had higher word accuracy when words were presented to the right than to the left visual field, there was no field/hemispheric difference for faces. In contrast, left and right hemispherectomy patients, again, showed comparable performance to one another on both face and word recognition, albeit significantly lower than controls. Altogether, the findings indicate that a single developing hemisphere, either left or right, may be sufficiently plastic for comparable representation of faces and words. However, perhaps due to increased competition or “neural crowding,” constraining cortical representations to one hemisphere may collectively hamper face and word recognition, relative to that observed in typical development with two hemispheres.
Humans can label and categorize objects in a visual scene with high accuracy and speed, a capacity well characterized in studies using static images. However, motion is another cue that the visual system could use to classify objects. To determine how motion-defined object category information is processed by the brain in the absence of luminance-defined form information, we created a novel stimulus set of “object kinematograms” to isolate motion-defined signals from other sources of visual information. Object kinematograms were generated by extracting motion information from videos of 6 object categories and applying the motion to limited-lifetime random dot patterns. Using functional magnetic resonance imaging (fMRI; n = 15, 40% women), we investigated whether category information from the object kinematograms could be decoded within the occipitotemporal and parietal cortex and evaluated whether the information overlapped with category responses to static images from the original videos. We decoded object category for both stimulus formats in all higher-order regions of interest (ROIs). More posterior occipitotemporal and ventral regions showed higher accuracy in the static condition, whereas more anterior occipitotemporal and dorsal regions showed higher accuracy in the dynamic condition. Further, decoding across the two stimulus formats was possible in all regions. These results demonstrate that motion cues can elicit widespread and robust category responses on par with those elicited by static luminance cues, even in ventral regions of visual cortex that have traditionally been associated primarily with image-defined form processing.

SIGNIFICANCE STATEMENT Much research on visual object recognition has focused on recognizing objects in static images. However, motion is a rich source of information that humans might also use to categorize objects.
Here, we present the first study to compare neural representations of several animate and inanimate objects when category information is presented in two formats: static cues or isolated dynamic motion cues. Our study shows that, while higher-order brain regions differentially process object categories depending on format, they also contain robust, abstract category representations that generalize across format. These results expand our previous understanding of motion-derived animate and inanimate object category processing and provide useful tools for future research on object category processing driven by multiple sources of visual information.
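The limited-lifetime random-dot construction described above can be illustrated with a minimal sketch. This is not the authors' stimulus code: the `flow` function here is a hypothetical stand-in for the motion vectors extracted from the source videos, and all parameter names (`lifetime`, `extent`, `n_dots`) are illustrative. The key idea the sketch captures is that each dot drifts along the motion field for only a few frames before being replotted at a random location, so coherent motion, not any persistent dot configuration, is the only carrier of object information.

```python
import numpy as np

def step_kinematogram(positions, ages, flow, lifetime, extent, rng):
    """Advance a limited-lifetime random-dot pattern by one frame.

    positions : (N, 2) float array of dot (x, y) coordinates
    ages      : (N,) int array counting frames each dot has been alive
    flow      : callable mapping (N, 2) positions -> (N, 2) motion vectors
                (a hypothetical stand-in for motion extracted from video)
    lifetime  : frames a dot survives before being replotted at random
    extent    : width/height of the square display, in pixels
    """
    positions = positions + flow(positions)  # move dots along the motion field
    ages = ages + 1
    # Replot expired (or out-of-bounds) dots at random locations and reset
    # their age, so no individual dot carries long-lived form information.
    expired = (ages >= lifetime) | np.any(
        (positions < 0) | (positions >= extent), axis=1
    )
    positions[expired] = rng.uniform(0, extent, size=(int(expired.sum()), 2))
    ages[expired] = 0
    return positions, ages

# Usage: a uniform rightward flow as a trivial stand-in motion field.
rng = np.random.default_rng(0)
n_dots, extent, lifetime = 200, 256, 5
pos = rng.uniform(0, extent, size=(n_dots, 2))
ages = rng.integers(0, lifetime, size=n_dots)

def rightward_flow(p):
    # 2 px/frame to the right, zero vertical motion, for every dot.
    return np.column_stack([np.full(len(p), 2.0), np.zeros(len(p))])

for _ in range(10):
    pos, ages = step_kinematogram(pos, ages, rightward_flow, lifetime, extent, rng)
```

In an actual kinematogram, `flow` would interpolate the per-pixel motion field estimated from a given video frame at each dot's position; staggering the initial ages (as above) avoids all dots expiring on the same frame.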