Individuals with developmental dyslexia (DD) may experience other speech-related processing deficits in addition to reading problems. Here, we examined the influence of visual articulatory information (lip-read speech) at various levels of background noise on auditory word recognition in children and adults with DD. We found that children with a documented history of DD have deficits in their ability to gain benefit from lip-read information that disambiguates noise-masked speech. We show with another group of adult individuals with DD that these deficits persist into adulthood. These deficits could not be attributed to impairments in unisensory auditory word recognition. Rather, the results indicate a specific deficit in audio-visual speech processing and suggest that impaired multisensory integration might be an important aspect of DD.
Recent studies suggest that sub-clinical levels of autistic symptoms may be related to reduced processing of artificial audiovisual stimuli. It is unclear whether these findings extend to more natural stimuli such as audiovisual speech. The current study examined the relationship between autistic traits measured by the Autism-Spectrum Quotient and audiovisual speech processing in a large non-clinical population, using a battery of experimental tasks assessing audiovisual perceptual binding, visual enhancement of speech embedded in noise, and audiovisual temporal processing. Several associations were found between autistic traits and audiovisual speech processing. Increased autistic-like imagination was related to reduced perceptual binding measured by the McGurk illusion. Increased overall autistic symptomatology was associated with reduced visual enhancement of speech intelligibility in noise. Participants reporting increased levels of rigid and restricted behaviour were more likely to bind audiovisual speech stimuli over longer temporal intervals, while an increased tendency to focus on local aspects of sensory inputs was related to a narrower temporal binding window. These findings demonstrate that increased levels of autistic traits may be related to alterations in audiovisual speech processing, and are consistent with the notion of a spectrum of autistic traits that extends to the general population.
Autism spectrum disorder is a pervasive neurodevelopmental disorder that has been linked to a range of perceptual processing alterations, including hypo- and hyperresponsiveness to sensory stimulation. A recently proposed theory that attempts to account for these symptoms states that autistic individuals have a decreased ability to anticipate upcoming sensory stimulation due to overly precise internal prediction models. Here, we tested this hypothesis by comparing the electrophysiological markers of prediction errors in auditory prediction by vision between a group of autistic individuals and a group of age-matched individuals with typical development. Between-group differences in prediction error signaling were assessed by comparing event-related potentials evoked by unexpected auditory omissions in a sequence of audiovisual recordings of a handclap in which the visual motion reliably predicted the onset and content of the sound. Unexpected auditory omissions induced an increased early negative omission response in the autism spectrum disorder group, indicating that violations of the prediction model produced larger prediction errors in the autism spectrum disorder group compared to the typical development group. The current results show that autistic individuals have alterations in visual-auditory predictive coding, and support the notion of impaired predictive coding as a core deficit underlying atypical sensory perception in autism spectrum disorder. Lay abstract: Many autistic individuals experience difficulties in processing sensory information (e.g. increased sensitivity to sound). Here we show that these difficulties may be related to an inability to process unexpected sensory stimulation. In this study, 29 older adolescents and young adults with autism and 29 age-matched individuals with typical development participated in an electroencephalography study.
The electroencephalography study measured the participants’ brain activity during unexpected silences in a sequence of videos of a handclap. The results showed that the brain activity of autistic individuals during these silences was increased compared to individuals with typical development. This increased activity indicates that autistic individuals may have difficulties in processing unexpected incoming sensory information, and might explain why autistic individuals are often overwhelmed by sensory stimulation. Our findings contribute to a better understanding of the neural mechanisms underlying the different sensory perception experienced by autistic individuals.
The amplitude of the auditory N1 component of the event‐related potential (ERP) is typically attenuated for self‐initiated sounds, compared to sounds with identical acoustic and temporal features that are triggered externally. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. The predictive coding account of autistic symptomatology states that individuals with autism spectrum disorder (ASD) have difficulties anticipating upcoming sensory stimulation due to a decreased ability to infer the probabilistic structure of their environment. Without precise internal forward prediction models to rely on, perception in ASD could be less affected by prior expectations and more driven by sensory input. Following this reasoning, one would expect diminished attenuation of the auditory N1 due to self‐initiation in individuals with ASD. Here, we tested this hypothesis by comparing the neural response to self‐ versus externally‐initiated tones between a group of individuals with ASD and a group of age‐matched neurotypical controls. ERPs evoked by tones initiated via button‐presses were compared with ERPs evoked by the same tones replayed at identical pace. Significant N1 attenuation effects were found only in the neurotypical control group. Self‐initiation of the tones did not attenuate the auditory N1 in the ASD group, indicating that they may be unable to anticipate the auditory sensory consequences of their own motor actions. These results show that individuals with ASD have alterations in sensory attenuation of self‐initiated sounds, and support the notion of impaired predictive coding as a core deficit underlying autistic symptomatology. Autism Res 2019, 12: 589–599. © 2019 The Authors. Autism Research published by International Society for Autism Research and Wiley Periodicals, Inc.
Lay Summary: Many individuals with ASD experience difficulties in processing sensory information (for example, increased sensitivity to sound). Here we show that these difficulties may be related to an inability to anticipate upcoming sensory stimulation. Our findings contribute to a better understanding of the neural mechanisms underlying the different sensory perception experienced by individuals with ASD.
The amplitude of the auditory N1 component of the event‐related potential (ERP) is typically suppressed when a sound is accompanied by visual anticipatory information that reliably predicts the timing and identity of the sound. While this visually induced suppression of the auditory N1 is considered an early electrophysiological marker of fulfilled prediction, it is not yet fully understood whether this internal predictive coding mechanism is primarily driven by the temporal characteristics or by the identity features of the anticipated sound. The current study examined the impact of temporal and identity predictability on suppression of the auditory N1 by visual anticipatory motion with an ecologically valid audiovisual event (a video of a handclap). Predictability of auditory timing and identity was manipulated in three different conditions in which sounds were either played in isolation, or in conjunction with a video that reliably predicted the timing of the sound, the identity of the sound, or both the timing and identity. The results showed that N1 suppression was largest when the video reliably predicted both the timing and identity of the sound, and reduced when either the timing or identity of the sound was unpredictable. The current results indicate that predictions of timing and identity are both essential elements for predictive coding in audition.
Humans quickly adapt to variations in the speech signal. Adaptation may surface as recalibration, a learning effect driven by error-minimisation between a visual face and an ambiguous auditory speech signal, or as selective adaptation, a contrastive aftereffect driven by the acoustic clarity of the sound. Here, we examined whether these aftereffects occur for vowel identity and voice gender. Participants were exposed to male, female, or androgynous tokens of speakers pronouncing /e/ or /ø/ (embedded in words with a consonant-vowel-consonant structure), or an ambiguous vowel halfway between /e/ and /ø/ dubbed onto the video of a male or female speaker pronouncing /e/ or /ø/. For both voice gender and vowel identity, we found assimilative aftereffects after exposure to auditory ambiguous adapter sounds, and contrastive aftereffects after exposure to auditory clear adapter sounds. This demonstrates that similar principles for adaptation in these dimensions are at play.
When listening to distorted speech, does one become a better listener by looking at the face of the speaker or by reading subtitles that are presented along with the speech signal? We examined this question in two experiments in which we presented participants with spectrally distorted speech (4-channel noise-vocoded speech). During short training sessions, listeners received auditorily distorted words or pseudowords that were partially disambiguated by concurrently presented lipread information or text. After each training session, listeners were tested with new degraded auditory words. Learning effects (based on proportions of correctly identified words) were stronger if listeners had trained with words rather than with pseudowords (a lexical boost), and adding lipread information during training was more effective than adding text (a lipread boost). Moreover, the advantage of lipread speech over text training was also found when participants were tested more than a month later. The current results thus suggest that lipread speech may have surprisingly long-lasting effects on adaptation to distorted speech.