The ABCD study is recruiting over 10,000 9–10-year-olds and following their brain development and health through adolescence. The imaging component of the study was developed by the ABCD Data Analysis and Informatics Center (DAIC) and the ABCD Imaging Acquisition Workgroup. Imaging methods and assessments were selected, optimized, and harmonized across all 21 sites to measure brain structure and function relevant to adolescent development and addiction. This article provides an overview of the imaging procedures of the ABCD study, the basis for their selection, and preliminary quality assurance and results that provide evidence for the feasibility and age-appropriateness of the procedures and the generalizability of findings to the existing literature.
The growing consensus that language is distributed into large-scale cortical and subcortical networks has brought with it an increasing focus on the connectional anatomy of language, or how particular fibre pathways connect regions within the language network. Understanding the connectivity of the language network could provide critical insights into function, but recent investigations using a variety of methodologies in both humans and non-human primates have provided conflicting accounts of pathways central to language. Some of the pathways classically considered language pathways, such as the arcuate fasciculus, are now argued to be domain-general rather than specialized, which represents a radical shift in perspective. Other pathways described in the non-human primate remain to be verified in humans. In this review, we examine the consensus and controversy in the study of fibre pathway connectivity for language. We focus on seven fibre pathways proposed to support language in the human: the superior longitudinal fasciculus, the arcuate fasciculus, the uncinate fasciculus, the extreme capsule, the middle longitudinal fasciculus, the inferior longitudinal fasciculus, and the inferior fronto-occipital fasciculus. We examine the methods used in humans and non-human primates to investigate the connectivity of these pathways, the historical context leading to the most current understanding of their anatomy, and the functional and clinical correlates of each pathway with reference to language. We conclude with a challenge for researchers and clinicians to establish a coherent framework within which fibre pathway connectivity can be systematically incorporated into the study of language.
With the advancement of cognitive neuroscience and neuropsychological research, the field of language neurobiology is at a crossroads with respect to its framing theories. The central thesis of this article is that the major historical framing model, the Classic "Wernicke-Lichtheim-Geschwind" model, and its associated terminology, is no longer adequate for contemporary investigations into the neurobiology of language. We argue that the Classic model (1) is based on an outdated brain anatomy; (2) does not adequately represent the distributed connectivity relevant for language; (3) offers a modular and "language centric" perspective; and (4) focuses on cortical structures, for the most part leaving out subcortical regions and relevant connections. To make our case, we discuss the issue of anatomical specificity with a focus on the contemporary usage of the terms "Broca's and Wernicke's area", including the results of a survey that was conducted within the language neurobiology community. We demonstrate that there is no consistent anatomical definition of "Broca's and Wernicke's Areas", and propose to replace these terms with more precise anatomical definitions. We illustrate the distributed nature of the language connectome, which extends far beyond the single-pathway notion of arcuate fasciculus connectivity established in Geschwind's version of the Classic Model. By illustrating the definitional confusion surrounding "Broca's and Wernicke's areas", and by illustrating the difficulty of integrating the emerging literature on perisylvian white matter connectivity into this model, we hope to expose the limits of the model, argue for its obsolescence, and suggest a path forward in defining a replacement.
Everyday communication is accompanied by visual information from several sources, including co-speech gestures, which provide semantic information listeners use to help disambiguate the speaker's message. Using fMRI, we examined how gestures influence neural activity in brain regions associated with processing semantic information. The BOLD response was recorded while participants listened to stories under three audiovisual conditions and one auditory-only (speech alone) condition. In the first audiovisual condition, the storyteller produced gestures that naturally accompany speech. In the second, she made semantically unrelated hand movements. In the third, she kept her hands still. In addition to inferior parietal and posterior superior and middle temporal regions, the bilateral posterior superior temporal sulcus and left anterior inferior frontal gyrus responded more strongly to speech when it was further accompanied by gesture, regardless of the semantic relation to speech. However, the right inferior frontal gyrus was sensitive to the semantic import of the hand movements, demonstrating more activity when hand movements were semantically unrelated to the accompanying speech. These findings show that perceiving hand movements during speech modulates the distributed pattern of neural activation involved in both biological motion perception and discourse comprehension, suggesting listeners attempt to find meaning not only in the words speakers produce, but also in the hand movements that accompany speech.

Keywords: discourse comprehension; fMRI; gestures; semantic processing; inferior frontal gyrus

Face-to-face communication is based on more than speech alone. Audible speech is only one component of a communication system that also includes co-speech gestures, hand and arm movements that accompany spoken language (Kendon, 1994; McNeill, 1992; McNeill, 2005). Such co-speech gestures serve an important role in face-to-face communication for both speaker and listener.
Listeners not only process the words that speakers produce, but also continuously integrate gestures with speech and with other visual information (e.g., the speaker's lips, mouth, and eyes) to arrive at the speaker's meaning (Goldin-Meadow, 2006; Kendon, 1994; McNeill, 2005). Despite the importance of co-speech gesture to communicative