2021
DOI: 10.1186/s11689-020-09348-9

Read my lips! Perception of speech in noise by preschool children with autism and the impact of watching the speaker’s face

Abstract: Background: Adults and adolescents with autism spectrum disorders show greater difficulties comprehending speech in the presence of noise. Moreover, while neurotypical adults use visual cues on the mouth to help them understand speech in background noise, differences in attention to human faces in autism may affect use of these visual cues. No work has yet examined these skills in toddlers with ASD, despite the fact that they are frequently faced with noisy, multitalker environments. …

Cited by 11 publications (9 citation statements). References 64 publications.
“…We propose that the absence of a relationship between attention to the speaker’s face and mouth regions at 12 months and language at 18 months in high‐risk infants may be due to a disruption in the processing and integration of dynamic facial and speech cues and that, unlike impaired social attention, which appears to be specific to ASD, this deficit may be shared amongst children across the autism risk spectrum. This idea is consistent with findings suggesting that 9‐month‐old high‐risk infants exhibit reduced audiovisual speech integration ability (Guiraud et al., 2012) and that preschoolers (Newman, Kirby, Von Holzen, & Redcay, 2021) and school‐aged children (Smith & Bennetto, 2007) with ASD benefit less from lip movements during speech decoding, with the effect driven largely by lower audiovisual integration skills (Smith & Bennetto, 2007). Findings also show that preschoolers with ASD do not prefer synchronous over asynchronous audiovisual events, suggesting altered multisensory speech processing and that diminished preference for synchronous speech was linked with lower language skills (Righi et al., 2018).…”
Section: Discussion (supporting)
confidence: 87%
“…Our findings not only deepen the understanding of the underlying mechanisms of audiovisual speech integration in the McGurk task in autism, but also provide important insights for supporting strategies targeting audiovisual speech integration in AC. In addition, our findings confirmed the role of visual information for speech perception in AC (Newman et al., 2021).…”
Section: Discussion (supporting)
confidence: 90%
“…In contrast, the ASD group shows a detrimental effect of noise and increased arousal in the harder condition (Keith et al. 2019). Children with ASD who attend to the stimulus for longer, such as looking longer at the speaker’s face, show better listening performance (Newman et al. 2021). When different levels of signal-to-noise ratio (SNR) are compared, ASD and NT groups exhibited greater benefit at low SNRs relative to high SNRs in phoneme recognition tasks (Stevenson et al. 2017).…”
Section: Results and Critical Discussion (mentioning)
confidence: 99%
“…Regarding other assessment paradigms, children with ASD seem to struggle with face-to-face matching when compared to voice-face and word-face combinations (Golan et al. 2018), with the worst performance in noisy environments (Newman et al. 2021). The ability to integrate face–voice cues seems to be correlated with socialization skills in children with ASD (Golan et al. 2018).…”
Section: Results and Critical Discussion (mentioning)
confidence: 99%