2019
DOI: 10.1080/23273798.2019.1701691
Contextual speech rate influences morphosyntactic prediction and integration

Abstract: Understanding spoken language requires the integration and weighting of multiple cues, and may call on cue integration mechanisms that have been studied in other areas of perception. In the current study, we used eye-tracking (visual-world paradigm) to examine how contextual speech rate (a lower-level, perceptual cue) and morphosyntactic knowledge (a higher-level, linguistic cue) are iteratively combined and integrated. Results indicate that participants used contextual rate information immediately, which we i…

Cited by 10 publications (15 citation statements)
References 77 publications
“…The chief prediction regarding structure and meaning from the architecture is that low-frequency power and phase synchronization should increase as structure and meaning build up in time. This has been attested in the literature (Brennan & Martin, 2019; Kaufeld, Naumann, et al., 2019; Kaufeld, Ravenschlag, et al., 2019; Meyer, 2018; Ding et al., 2016; Meyer, Henry, Gaston, Schmuck, & Friederici, 2016; Bastiaansen et al., 2005, 2008) but needs more careful investigation. It is likely that low-frequency phase organization reflects the increasingly distributed nature of the neural assemblies being (de)synchronized as structure and meaning are inferred, rather than reflecting a phrasal or sentential oscillator.…”
Section: Predictions
confidence: 91%
“…Perceptual inference asserts that sensory cues activate latent representations in the neural system that have been learned through experience. 4 In line with this idea, there is ever-accumulating evidence that “lower level” cues like speech rate and phoneme perception (e.g., Kaufeld, Ravenschlag, Meyer, Martin, & Bosker, 2019; Kaufeld, Naumann, Meyer, Bosker, & Martin, 2019; Heffner, Dilley, McAuley, & Pitt, 2013; Dilley & Pitt, 2010), morphology (e.g., Gwilliams, Linzen, Poeppel, & Marantz, 2018; Martin, Monahan, & Samuel, 2017), foveal and parafoveally processed orthography (e.g., Cutter, Martin, & Sturt, 2019; Veldre & Andrews, 2018; Schotter, Angele, & Rayner, 2012), as well as “higher level” sentential (e.g., Kutas, Ferreira, & Martin, 2018; Martin & McElree, 2008; van Alphen & Table 1.…”
Section: Linguistic Representation as Perceptual Inference
confidence: 99%
“…One interesting constraint for future investigation is prosody. Recent research has argued that prosodic cues, for instance intonation or speech rate, can guide listeners’ expectations (Kaufeld, Naumann, et al., 2020; Kaufeld, Ravenschlag, et al., 2020; Kurumada et al., 2014). Our speaker produced DM like with naturalistic prosody, yet perhaps more pronounced prosodic cues to like being used as a discourse marker (e.g., decrease in fundamental frequency; slower speech rate; Drager, 2011; Schleef & Turton, 2018) could modulate the extent to which cohort competitors are activated in online word recognition.…”
Section: Discussion
confidence: 68%
“…And the stress patterns that punctuate speech rhythm affect perceptual grouping (Lee & Todd, 2004; Martin, 1972). For example, one can be pushed between hearing “crisis turnip” or “cry sister nip” depending on the prior rhythmic context and the metrical expectancies it induces (Brown et al., 2011, 2015; Dilley & McAuley, 2008; Kaufeld et al., 2019). Rhythm can even make syllables perceptually disappear altogether (Baese-Berk et al., 2019; Dilley & Pitt, 2010; Morrill et al., 2014).…”
Section: Introduction
confidence: 99%