The present study addresses word-recognition automaticity in Spanish-speaking adults who are neoliterate by assessing the event-related potential N170 for word stimuli. Participants engaged in two reading conditions that varied the degree of attention required for the linguistic components of reading: (a) an implicit reading task, in which they detected immediate repetitions of words and symbols (one-back paradigm); and (b) an explicit reading task, in which they determined whether pairs of visual-auditory words matched (reading verification task). Results were compared to those of a group who learned to read in childhood. N170 amplitudes over left and right occipito-temporal regions were recorded for each condition. A left-lateralized N170 for word stimuli was taken as an index of word-reading automaticity. No left-lateralized N170 was found for the neoliterate group in either condition. In addition, N170 amplitude for words was larger over the right than the left occipito-temporal region in the reading verification task. Participants in the comparison group showed left-lateralized N170 amplitudes for words in both conditions. The findings suggest that the neoliterate group investigated here had not yet acquired automaticity of word recognition but may be showing evidence of word familiarization.
When a speaker talks, the consequences can be both heard (audio) and seen (visual). A novel visual phonemic restoration task was used to assess behavioral discrimination and neural signatures (event-related potentials, or ERPs) of audiovisual processing in typically developing children with a range of social and communicative skills, assessed using the Social Responsiveness Scale, a measure of traits associated with autism. An auditory oddball design presented two types of stimuli to the listener: a clear exemplar of an auditory consonant–vowel syllable /ba/ (the more frequently occurring standard stimulus), and a syllable in which the auditory cues for the consonant were substantially weakened, creating a stimulus that is more like /a/ (the infrequently presented deviant stimulus). All speech tokens were paired with a face producing /ba/ or a face with a pixelated mouth containing motion but no visual speech. In this paradigm, the visual /ba/ should cause the auditory /a/ to be perceived as /ba/, creating an attenuated oddball response; in contrast, a pixelated video (without articulatory information) should not have this effect. Behaviorally, participants showed visual phonemic restoration (reduced accuracy in detecting the deviant /a/) in the presence of a speaking face. In addition, ERPs were observed in both an early time window (N100) and a later time window (P300) that were sensitive to speech context (/ba/ or /a/) and modulated by face context (speaking face with visible articulation or with pixelated mouth). Specifically, the oddball responses for the N100 and P300 were attenuated in the presence of a face producing /ba/ relative to a pixelated face, representing a possible neural correlate of the phonemic restoration effect. Notably, individuals with more traits associated with autism (yet still in the non-clinical range) had smaller P300 responses overall, regardless of face context, suggesting generally reduced phonemic discrimination.
Learning at any age is neurobiological: a process occurring through alterations in the microscopic structure and functioning of the brain. The inputs, processes, and outputs of learning are brain functions. Learning can be visualized, located, and measured through brain imaging techniques that depend methodologically on the biological nature of perception, memory, and learning. The stages of cognitive development, which represent the cumulative neurobiological effects of many interactions between persons and the world around them, are generated by multitudes of changes in cells, circuits, and networks of the brain. There is no mind without brain; the experiences of consciousness, thinking, learning, and memory are physical expressions of the work of the brain. The state of mind/brain is a major determinant of a learner’s readiness to learn; recognizing the oneness of mind and brain—and therefore of mind and body—should cause reassessment of many structures, policies, and practices in education.
Extant research documents impaired language among children with prenatal cocaine exposure (PCE) relative to non-drug-exposed (NDE) children, suggesting that cocaine alters the development of neurobiological systems that support language. The current study examines behavioral and neural (electrophysiological) indices of language function in older adolescents. Specifically, we compare performance of PCE (N = 59) and NDE (N = 51) adolescents on a battery of cognitive and linguistic assessments that tap word reading, reading comprehension, semantic and grammatical processing, and IQ. In addition, we examine event-related potential (ERP) responses in a subset of these children across three experimental tasks that examine word-level phonological processing (rhyme priming), word-level semantic processing (semantic priming), and sentence-level semantic processing (semantic anomaly). Findings reveal deficits across a number of reading and language assessments, after controlling for socioeconomic status and exposure to other substances. Additionally, ERP data reveal atypical orthography-to-phonology mapping (reduced N1/P2 response) and atypical rhyme and semantic processing (N400 response). These findings suggest that PCE continues to impact language and reading skills into the late teenage years.
Visual information on a talker's face can influence what a listener hears. Commonly used approaches to study this include mismatched audiovisual stimuli (e.g., McGurk-type stimuli) or visual speech in auditory noise. In this paper we discuss potential limitations of these approaches and introduce a novel visual phonemic restoration method. This method always presents the same visual stimulus (e.g., /ba/) dubbed with either a matched auditory stimulus (/ba/) or one that has weakened consonantal information and sounds more /a/-like. When this reduced auditory stimulus (/a/) is dubbed with the visual /ba/, a visual influence will effectively 'restore' the weakened auditory cues so that the stimulus is perceived as /ba/. An oddball design was used in which participants detected the /a/ among a stream of more frequently occurring /ba/s while viewing either a speaking face or a face with no visual speech. In addition, the same paradigm was presented for a second contrast in which participants detected /pa/ among /ba/s, a contrast that should be unaltered by the presence of visual speech. Behavioral and some ERP findings reflect the expected phonemic restoration for the /ba/ vs. /a/ contrast; specifically, we observed reduced accuracy and P300 response in the presence of visual speech. Further, we report an unexpected finding of reduced accuracy and P300 response for both speech contrasts in the presence of visual speech, suggesting overall modulation of the auditory signal in the presence of visual speech. Consistent with this, we observed a mismatch negativity (MMN) effect for the /ba/ vs. /pa/ contrast only, which was larger in the absence of visual speech. We discuss the potential utility of this paradigm for listeners who cannot respond actively, such as infants and individuals with developmental disabilities.
Perceptual studies of children with autism spectrum disorders (ASD) strongly implicate deficits in the processing of audiovisual (AV) speech. Previous research with AV stimuli has typically been conducted in the context of auditory noise or with mismatched auditory and visual ("McGurk") stimuli. Although both types of stimuli are well-established methods for testing typically developing (TD) participants, they may create additional processing problems for children with ASD. To more precisely examine AV speech perception in children with ASD, we developed a novel measure of AV processing that involves neither noise nor AV cross-category conflict. The speech stimuli include clear exemplars of the syllable /ba/ and a modified version of /ba/ in which the consonant is substantially weakened so that the syllable is heard as "/a/". These are dubbed with a video of the speaker saying /ba/. Audiovisual integration should result in the visual information effectively "restoring" the weakened auditory "/a/" cues so that the stimulus is perceived as /ba/. Using event-related potentials (ERPs), we will present evidence from typically developing adults and preliminary results from children with ASD and TD children to examine whether children with ASD are weaker in AV speech integration.