The somatotopic representation of the body is a well-established organizational principle in the human brain. Classic invasive direct electrical stimulation, long used for somatotopic mapping, cannot be applied to map the whole-body topographical representation in healthy individuals. Functional magnetic resonance imaging (fMRI) has therefore become an indispensable tool for the noninvasive investigation of the somatotopic organization of the human brain using voluntary movement tasks. Unfortunately, body movements during fMRI scanning often cause large head-motion artifacts. Consequently, there remains a lack of publicly accessible fMRI datasets for whole-body somatotopic mapping. Here, we present a public high-resolution fMRI dataset for mapping somatotopic organization based on motor movements in a large cohort of healthy adults (N = 62). In contrast to previous studies, which were mostly designed to distinguish only a few body representations, a broad set of body parts is covered, including the toe, ankle, leg, finger, wrist, forearm, upper arm, jaw, lip, tongue, and eyes. Moreover, the fMRI data are denoised by combining spatial independent component analysis (ICA) with manual component identification to remove head-motion artifacts associated with body movements.
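The component-based denoising described above can be illustrated with a minimal, hypothetical sketch: given a voxel-by-time data matrix and the time courses of components flagged as motion artifacts (e.g., by spatial ICA plus manual inspection), the artifact time courses are regressed out of every voxel's signal. All names and dimensions here are illustrative, not taken from the dataset itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 50 voxels, 200 time points, contaminated by two
# artifact components with random spatial mixing weights.
n_vox, n_t = 50, 200
signal = rng.normal(size=(n_vox, n_t))
artifact = rng.normal(size=(2, n_t))      # flagged "noise" component time courses
mixing = rng.normal(size=(n_vox, 2))
data = signal + mixing @ artifact         # observed, artifact-contaminated data

# Least-squares regression of the artifact time courses from each voxel
X = artifact.T                            # (n_t, 2) design matrix
beta, *_ = np.linalg.lstsq(X, data.T, rcond=None)
clean = data - (X @ beta).T               # residuals = denoised data

# After cleaning, every voxel's signal is orthogonal to the artifacts
residual_corr = np.abs(clean @ artifact.T).max()
```

Regressing out the flagged time courses is one common way to apply an ICA-based noise classification; the actual pipeline used for the dataset may differ in detail.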
Deep convolutional neural networks (DCNNs) can nowadays match human performance in challenging complex tasks, but it remains unknown whether DCNNs achieve human-like performance through human-like processes. Here we applied a reverse-correlation method to make explicit the representations used by DCNNs and humans when performing face gender classification. We found that humans and a typical DCNN, VGG-Face, used similar critical information for this task, which mainly resided at low spatial frequencies. Importantly, prior task experience seemed necessary for this representational similarity: VGG-Face was pre-trained to process faces at the subordinate level (i.e., identification), as humans do, whereas AlexNet, a DCNN pre-trained to process objects at the basic level (i.e., categorization), succeeded at gender classification but relied on a completely different representation. In sum, although DCNNs and humans rely on different sets of hardware to process faces, they can use a similar, implementation-independent representation to achieve the same computational goal.
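The reverse-correlation logic used above can be sketched in miniature: random noise stimuli are presented to an observer (human or network), and averaging the noise conditioned on the observer's response yields a "classification image" revealing which pixels drive the decision. The toy observer and 16×16 stimulus below are hypothetical stand-ins, not the actual faces or models from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

size = 16  # toy stimulus resolution

def observer(stimulus):
    """Toy observer: responds 1 when the left half is brighter than the right."""
    left = stimulus[:, : size // 2].mean()
    right = stimulus[:, size // 2 :].mean()
    return 1 if left > right else 0

# Classic noise-only reverse correlation: collect noise fields by response
n_trials = 5000
noise_by_resp = {0: [], 1: []}
for _ in range(n_trials):
    noise = rng.normal(size=(size, size))
    noise_by_resp[observer(noise)].append(noise)

# Classification image: mean noise for response 1 minus response 0.
# Pixels the observer actually uses (here, the left half) carry
# systematically positive weight.
ci = np.mean(noise_by_resp[1], axis=0) - np.mean(noise_by_resp[0], axis=0)
```

Comparing such classification images across observers (e.g., humans vs. VGG-Face vs. AlexNet) is one standard way to quantify representational similarity; the published analysis may use a more elaborate stimulus and comparison pipeline.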