2019
DOI: 10.31234/osf.io/r5gp3
Preprint
Speech perception at birth: the brain encodes fast and slow temporal information

Abstract: Although infants show sophisticated speech perception abilities, it is not clear whether they rely on the same acoustic information as adults. When perceiving speech in quiet, adults mainly use the slowest temporal envelope, or amplitude modulation (AM), cues (<16 Hz), while they rely more on the faster AM and frequency modulation (FM) cues when perceiving speech in noise. The present study investigated how newborns process the slow and fast AM cues to discriminate phonemes in quiet. We combined near-infr…
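The slow-versus-fast AM distinction in the abstract can be made concrete with a minimal signal-processing sketch (not taken from the study): extract a signal's temporal envelope via the Hilbert transform, then low-pass it below 16 Hz to isolate the slow AM cues adults rely on in quiet. The toy signal and parameter choices here are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 16000  # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)

# Toy "speech-like" signal: a 500 Hz carrier with a slow (4 Hz)
# and a faster (40 Hz) amplitude modulation.
signal = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)
            + 0.3 * np.sin(2 * np.pi * 40 * t)) * np.sin(2 * np.pi * 500 * t)

# Temporal envelope = magnitude of the analytic signal.
envelope = np.abs(hilbert(signal))

# Keep only the slow AM cues (< 16 Hz): 4th-order Butterworth low-pass,
# applied forward and backward (filtfilt) for zero phase shift.
b, a = butter(4, 16 / (fs / 2), btype="low")
slow_am = filtfilt(b, a, envelope)
```

After filtering, `slow_am` retains the 4 Hz modulation while the 40 Hz modulation is strongly attenuated, mirroring the separation of slow and fast envelope cues described in the abstract.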

Cited by 10 publications (12 citation statements). References 52 publications (70 reference statements).
“…The hemodynamic response is slow and as a result, NIRS has low temporal resolution (in the second range), but offers precise spatial localization, as it is not subject to the inverse problem [even if the resolution is low compared with magnetic resonance imaging (MRI)]. NIRS and EEG can also be combined (Cabrera & Gervain 2020; Telkemeyer et al. 2009; Wallois et al. 2012), as the two signals do not interfere with each other and the two types of sensors can be placed into the same headgear, typically a stretch cap. NIRS-EEG co-recording has the advantage of offering both high spatial and temporal resolution.…”
Section: How To Study Newborns?
confidence: 99%
“…Recent EEG results (Cabrera & Gervain 2020; Ortiz Barajas et al. 2021) suggest that newborns, like adults, are indeed able to track the speech envelope (i.e. the amplitude modulation) of the speech signal, which roughly corresponds to the syllables / syllabic rate.…”
Section: Universal Speech Perception Abilities
confidence: 99%
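Envelope tracking of the kind described in this excerpt is often quantified as the correlation between a neural signal and the speech envelope across plausible response lags. A crude, illustrative index (not the analysis used in the cited studies; the function name and lag range are assumptions) could look like:

```python
import numpy as np

def envelope_tracking_score(neural, envelope, max_lag_samples=50):
    """Crude envelope-tracking index: best Pearson correlation between
    a neural signal and the speech envelope over non-negative lags
    (the neural response is delayed relative to the stimulus)."""
    scores = []
    for lag in range(max_lag_samples + 1):
        n = neural[lag:]
        e = envelope[:len(envelope) - lag]
        scores.append(np.corrcoef(n, e)[0, 1])
    return max(scores)
```

Real analyses typically use regularized regression (temporal response functions) rather than a single best-lag correlation, but the underlying idea is the same: the infant brain's activity co-varies with the slow amplitude contour of speech.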
“…The EEG results revealed a mismatch response between standard and deviant syllables in both conditions. This suggests that at birth, the brain is able to detect a consonant change in syllables when only the slowest envelope cues (under 8 Hz) are preserved (81). More notably, the vascular responses recorded using NIRS revealed a different time course as well as a different region of activation between the two vocoder conditions, suggesting that the processing of fast and slow speech envelope may not rely on the same neural mechanisms.…”
Section: Coding Of Temporal Information For Speech Signal
confidence: 96%
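The vocoder manipulation described above (preserving only envelope cues below 8 Hz) follows the general noise-vocoding recipe: extract the envelope, low-pass it, and use it to modulate a noise carrier. A minimal single-band sketch, assuming SciPy and not reproducing the authors' exact stimuli (the function name and parameters are illustrative):

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def vocode_slow(signal, fs, cutoff_hz=8.0, seed=0):
    """Single-band noise vocoder keeping only slow envelope cues.

    Replaces the fine structure of `signal` with white noise,
    amplitude-modulated by the signal's Hilbert envelope low-passed
    at `cutoff_hz` (8 Hz here, as in the slow-envelope condition).
    """
    envelope = np.abs(hilbert(signal))
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    slow_env = np.clip(filtfilt(b, a, envelope), 0.0, None)
    rng = np.random.default_rng(seed)
    carrier = rng.standard_normal(len(signal))
    return slow_env * carrier
```

Applied to a syllable recording, this yields noise whose loudness follows only the syllable's slow amplitude contour; actual vocoded speech stimuli split the signal into several frequency bands and vocode each band separately.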
“…At onset of CI hearing, their auditory and speech processing areas were unresponsive to sounds except in the right ATL, and were incapable of discriminating sound types (speech vs. noise), suggesting that their brain was in a very early stage of auditory functional development. In contrast, the healthy brain at birth was able to recognize their mother's voice from unfamiliar voices (DeCasper and Fifer 1980, Moon, Christine et al. 2000), native from nonnative vowels (Moon, C. et al. 2013), and different consonants (Cabrera and Gervain 2020). Even before term birth, left superior temporal and supramarginal regions of preterm infants (25-37 weeks of post menstrual age) are already activated by speech sounds (Baldoli, Scola et al. 2015), and inferior frontal regions of preterm infants (28-32 weeks of gestational age) are able to detect change of human voice and change of phoneme (Mahmoudzadeh, Dehaene-Lambertz et al. 2013).…”
Section: Effect Of Auditory Deprivation Early In Life
confidence: 98%