Large-scale topographic representations of the body have long been established in the somatosensory and motor cortices. Using functional imaging, we identified a topographically organized body part map within the occipitotemporal cortex (OTC), with distinct clusters of voxels showing clear preference for different visually presented body parts. This representation was consistent both across hemispheres and across participants. Using converging methods, we demonstrated that the preference for specific body parts is robust and does not merely reflect shape differences between the categories. Finally, execution of (unseen) movements with different body parts resulted in a limited topographic representation of the limbs and trunk, which partially overlapped with the visual body part map. This motor-driven activation in the OTC could not be explained solely by visual or motor imagery of the body parts. This suggests that visual and motor-related information converge within the OTC in a body-part-specific manner.
In the absence of vision, perception of space is likely to be highly dependent on memory. As previously reported, the blind tend to code spatial information in the form of "route-like" sequential representations [1-3]. Thus, serial memory, indicating the order in which items are encountered, may be especially important for the blind in generating a mental picture of the world. Accordingly, we find that the congenitally blind are remarkably superior to sighted peers in serial memory tasks. Specifically, subjects heard a list of 20 words and were instructed to recall the words according to their original order in the list. The blind recalled more words than the sighted (indicating better item memory), but their greatest advantage was in recalling longer word sequences in their original order. We further show that the serial memory superiority of the blind is not merely a result of their advantage in item recall per se (as we additionally confirm via a separate recognition memory task). These results suggest the refinement of a specific cognitive ability to compensate for blindness in humans.
The recall of a list of items in a serial order is a basic cognitive skill. However, it is unknown whether a list of arbitrary items is remembered by associations between sequential items or by associations between each item and its ordinal position. Here, to study the nonverbal strategies used for such memory tasks, we trained three macaque monkeys on a delayed sequence recall task. Thirty abstract images, divided into ten triplets, were presented repeatedly in fixed temporal order. On each trial the monkeys viewed three sequentially presented sample stimuli, followed by a test stimulus consisting of the same three images and a distractor image (chosen randomly from the remaining 27). The task was to touch the three images in their original order without touching the distractor. The most common error was touching the distractor when it had the same ordinal number (in its own triplet) as the correct image. Thus, the monkeys' natural tendency was to categorize images by their ordinal number. Additional secondary strategies were eventually used to avoid the distractor images. These included memory of the sample images (working memory) and associations between sequence triplet members. Thus, monkeys use multiple mnemonic strategies according to their innate tendencies and the requirements of the task.
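The trial structure and the signature ordinal-position error described above can be sketched as follows. This is a minimal illustration, not the authors' analysis code: the integer image labels, the triplet layout, and the error-classification helper are assumptions made for the sketch.

```python
import random

# 30 images in 10 fixed triplets; image i belongs to triplet i // 3
# and occupies ordinal position i % 3 within its own triplet.
IMAGES = list(range(30))
TRIPLETS = [IMAGES[i:i + 3] for i in range(0, 30, 3)]

def make_trial(rng):
    """Pick one sample triplet and a distractor from the remaining 27 images."""
    sample = rng.choice(TRIPLETS)
    distractor = rng.choice([img for img in IMAGES if img not in sample])
    return sample, distractor

def same_ordinal_error(distractor, touch_index):
    """Was the distractor touched at the same ordinal position (0, 1, or 2)
    that it holds in its own home triplet?  This was the monkeys' most
    common error, indicating categorization by ordinal number."""
    return (distractor % 3) == touch_index

rng = random.Random(0)
sample, distractor = make_trial(rng)
# e.g. touching the distractor in place of the second image of the triplet:
was_ordinal_confusion = same_ordinal_error(distractor, 1)
```

The sketch makes the reported error pattern concrete: a distractor whose home-triplet position matches the position currently being recalled is the confusable case.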
Can the brain repurpose neural resources originally developed to support hand function for the control of artificial limbs? By studying individuals with congenital or acquired hand loss using functional MRI, van den Heiligenberg et al. show that prosthesis usage shapes brain activity and connectivity. Neural resources can be repurposed to support artificial limbs.
We typically recognize visual objects using the spatial layout of their parts, which are present simultaneously on the retina. Therefore, shape extraction is based on integration of the relevant retinal information over space. The lateral occipital complex (LOC) can represent shape faithfully in such conditions. However, integration over time is sometimes required to determine object shape. To study shape extraction through temporal integration of successive partial shape views, we presented human participants (both men and women) with artificial shapes that moved behind a narrow vertical or horizontal slit. Only a tiny fraction of the shape was visible at any instant, at the same retinal location. However, observers perceived a coherent whole shape instead of a jumbled pattern. Using fMRI and multivoxel pattern analysis, we searched for brain regions that encode temporally integrated shape identity. We further required that the representation of shape be invariant to changes in slit orientation. We show that slit-invariant shape information is most accurate in the LOC. Importantly, the slit-invariant shape representations matched the conventional whole-shape representations assessed during full-image runs. Moreover, when the same slit-dependent shape slivers were shuffled, thereby preventing their spatiotemporal integration, slit-invariant shape information was reduced dramatically. The slit-invariant representation of the various shapes also mirrored the structure of shape perceptual space as assessed by perceptual similarity judgment tests. Therefore, the LOC is likely to mediate temporal integration of slit-dependent shape views, generating a slit-invariant whole-shape percept. These findings provide strong evidence for a global encoding of shape in the LOC, regardless of the integration processes required to generate the shape percept.

Visual objects are recognized through spatial integration of features available simultaneously on the retina. The lateral occipital complex (LOC) represents shape faithfully in such conditions, even if the object is partially occluded. However, shape must sometimes be reconstructed over both space and time. Such is the case in anorthoscopic perception, when an object moves behind a narrow slit. In this scenario, spatial information is limited at any moment, so the whole-shape percept can only be inferred by integrating successive shape views over time. We find that the LOC carries shape-specific information recovered through such temporal integration. The shape representation is invariant to slit orientation and is similar to that evoked by a fully viewed image. Existing models of object recognition lack such capabilities.
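The slit paradigm can be illustrated with a toy sketch (the ASCII "shape", its size, and the one-column slit width are invented for illustration): at each time step only a single narrow sliver of the drifting shape is visible at a fixed retinal location, so the whole shape is recoverable only by aligning the slivers over time, and shuffling the slivers destroys it.

```python
# Toy anorthoscopic display: a 2-D "shape" drifts horizontally behind a
# one-column-wide slit, so each frame reveals exactly one column of the
# shape at the same retinal location.
SHAPE = [
    "#.....",
    "##....",
    "#.#...",
    "#..#..",
    "#...##",
]

def slit_frames(shape):
    """One frame per column: the vertical sliver visible through the slit."""
    width = len(shape[0])
    return [[row[x] for row in shape] for x in range(width)]

def integrate(frames):
    """Temporal integration: realign the slivers by presentation order to
    recover the whole shape (the observer's coherent percept)."""
    height = len(frames[0])
    return ["".join(frame[y] for frame in frames) for y in range(height)]

frames = slit_frames(SHAPE)
recovered = integrate(frames)
# Presenting the same slivers out of order (the shuffled condition)
# prevents spatiotemporal integration and yields a jumbled pattern.
jumbled = integrate(list(reversed(frames)))
```

The design point mirrors the experiment: identical retinal input in both conditions, with only the temporal order determining whether a whole shape can be recovered.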
Macaque monkeys were trained to recognize the repetition of one of the images already seen in a sequence of random length. On average, performance decreased with sequence length. However, this reflected a complex combination of factors: for trials with sequences of fixed length, performance decreased with the separation in the sequence between the cue (an image's first appearance) and the test (its repetition). In contrast, for equal cue-test separations, performance improved as a function of sequence length. Reaction times followed a complementary trend: they increased with cue-test separation and decreased with sequence length. The frequency of false positives (FPs) indicates that images are not always removed from working memory between successive trials, and that the monkeys rarely confuse different images. The probability of miss errors depends on the number of intervening stimulus presentations, whereas FPs depend on elapsed time. We propose a simple two-state stochastic model of multi-item working memory that accounts for the main effects on performance and false positives, as well as their interaction. In the model, images enter WM when they are presented, or by spontaneous jump-in; misses are due to spontaneous jump-out of images previously seen.
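The miss side of a two-state model of this kind can be sketched in a few lines. This is a toy illustration under stated assumptions, not the authors' fitted model: the jump-out probability (0.1) is invented, jump-out is assumed independent at each intervening presentation, and the jump-in process that produces false positives is omitted.

```python
import random

def p_in_wm(separation, p_jump_out):
    """Analytic prediction: an image enters WM at its cue presentation and
    may spontaneously jump out (prob. p_jump_out) at each of the
    `separation` intervening presentations, so the probability of still
    holding it at test decays with the number of intervening items."""
    return (1.0 - p_jump_out) ** separation

def simulate_miss_rate(separation, p_jump_out, n_trials, rng):
    """Monte Carlo estimate of the miss rate under the same jump-out
    process: a miss occurs if the image leaves WM before the test."""
    misses = 0
    for _ in range(n_trials):
        for _ in range(separation):
            if rng.random() < p_jump_out:  # spontaneous jump-out
                misses += 1
                break
    return misses / n_trials

# Miss rate grows with cue-test separation, matching the reported
# dependence of misses on the number of intervening presentations.
miss_rates = [1.0 - p_in_wm(k, 0.1) for k in (1, 3, 6)]
```

In the full model, false positives would arise from a separate jump-in process whose rate depends on elapsed time rather than on the number of intervening presentations, which is what dissociates the two error types.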
Serial memory is the ability to encode and retrieve a list of items in their correct temporal order. To study nonverbal strategies involved in serial memory, we trained four macaque monkeys on a novel delayed sequence-recall task and analysed the mechanisms underlying their performance in terms of a neural network model. Thirty fractal images, divided into 10 triplets, were presented repeatedly in fixed temporal order. On each trial the monkeys viewed three sequentially presented sample images, followed by a test stimulus consisting of the same triplet of images and a distractor image (chosen randomly from the remaining 27). The task was to touch the three images in their original order, avoiding the distractor. The monkeys' most common error was touching the distractor when it had the same ordinal position (in its own triplet) as the correct image. This finding suggests that monkeys naturally categorize images by their ordinal number. Additional secondary strategies were eventually used to avoid distractor images. These included memory of the sample images (working memory) and associations between triplet members. Further direct evidence for ordinal number categorization was provided by a transfer of learning to untrained images of the same ordinal category, following reassignment of image categories within each triplet. We propose a generic three-tier neuronal framework that can explain the components and the complex set of characteristics of the observed behavior. This framework, with its intermediate level representing ordinal categories, can also explain the transfer of learning following category reassignment.