2015
DOI: 10.1037/a0038509

Can we hear morphological complexity before words are complex?

Abstract: Previous research has shown that listeners can tell the difference between phonemically identical onsets of monomorphemic words (e.g., cap and captain) using acoustic cues (Davis, Marslen-Wilson, & Gaskell, 2002). This study investigates whether this finding extends to multimorphemic words, asking whether listeners can use phonetic information to distinguish unsuffixed from suffixed words before they differ phonemically (e.g., clue vs. clueless). We report 4 experiments investigating this issue using forced-ch…

Cited by 23 publications (21 citation statements)
References 34 publications
“…Results showed that participants fixated faster on the target image when the recording contained assimilated nasal consonants, suggesting that participants used their knowledge of this process to anticipate the identity of an upcoming consonant. Similar effects have been found for other cues as well, including vowel formant transitions (Dahan et al. 2001; Salverda et al. 2014; Mahr et al. 2015; Paquette-Smith et al. 2016), vowel nasalization (Beddor et al. 2013; Paquette-Smith et al. 2016; Zamuner et al. 2016; Desmeules-Trudel and Zamuner 2019), and segment duration (Salverda et al. 2003; Blazej and Cohen-Goldberg 2015).…”
Section: Introduction (supporting)
confidence: 74%
“…The long-distance nature of sibilant harmony and other types of consonant harmony is of particular interest when we consider its potential to facilitate language processing. A growing body of research has demonstrated that during spoken word recognition, listeners can use a variety of cues to anticipate an upcoming sound before it is realized (Dahan et al. 2001; Salverda et al. 2003, 2014; Gow and McMurray 2007; Beddor et al. 2013; Mahr et al. 2015; Blazej and Cohen-Goldberg 2015; Paquette-Smith et al. 2016; Zamuner et al. 2016; Desmeules-Trudel and Zamuner 2019). This literature, however, has focused on local dependencies between adjacent segments, as opposed to long-distance phenomena.…”
Section: Introduction (mentioning)
confidence: 99%
“…Further, even if such stem-internal duration patterns did apply to our stimuli, there is evidence that listeners do not require such nuanced cues to draw on durational information in perception. Blazej and Cohen-Goldberg (2015), for example, manipulated stem duration uniformly, without distinguishing between onsets, nuclei and codas. They nevertheless found that listeners were sensitive to stem duration as a cue for upcoming suffixes in the same way that Kemps and colleagues' listeners were.…”
Section: Discussion (mentioning)
confidence: 99%
“…Yet that stream is rife with systematic patterns, and listeners are quick to exploit them to their advantage. In the realm of pure phonetics they use nasal coarticulation to predict upcoming nasal consonants (Beddor, McGowan, Boland, Coetzee, & Brasher, 2013); [ɹ]-coloring on preceding sonorants to predict upcoming rhotics (Heinrich, Flory, & Hawkins, 2010); stem duration to predict upcoming suffixes (Blazej & Cohen-Goldberg, 2015; Kemps, Wurm, Ernestus, Schreuder, & Baayen, 2005); and syllable duration to predict upcoming word boundaries (Davis, Marslen-Wilson, & Gaskell, 2002; Salverda, Dahan, & McQueen, 2003). In the more abstract realm of distributional statistics, they use frequency distributions within the lexicon, within morphological families, and within inflectional paradigms to help identify and name words (Baayen, Levelt, Schreuder, & Ernestus, 2008; Baayen, Wurm, & Aycock, 2007; Moscoso Del Prado Martín, Kostić, & Baayen, 2004; Tabak, Schreuder, & Baayen, 2005).…”
(mentioning)
confidence: 99%
“…What is more, phonetic studies have shown that adding an affix to a base also affects the acoustic properties of the base, such that a base occurring on its own systematically differs acoustically from its realization as part of a derived word, for example in duration and pitch. For example, the base help is generally shorter when it occurs in helper than when it occurs as a free morpheme (Lehiste 1972; Kemps et al. 2005; Frazier 2006; Blazej and Cohen-Goldberg 2015). Importantly, two of these studies (i.e.…”
Section: The Morpheme As a Minimal Sign (mentioning)
confidence: 99%