2009
DOI: 10.1111/j.1551-6709.2009.01012.x

Lexical and Sublexical Units in Speech Perception

Abstract: Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word-segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In …

Cited by 76 publications (151 citation statements) · References 32 publications
Citation types: 16 supporting, 119 mentioning, 0 contrasting
“…If learners are calculating (or representing) transitional probabilities, they should be able to differentiate these subcomponents from items with low transitional probabilities. However, when asked to differentiate between subcomponent items and items with low transitional probabilities, participants perform poorly [43,44]. This is consistent with memory-based accounts that argue that learners are extracting chunked representations, rather than calculating transitional probabilities between syllables.…”
Section: Statistical Learning as Memory (supporting)
confidence: 78%
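The excerpt above contrasts two hypothesized computations. To make the first concrete, here is a minimal sketch (my own illustration, not code from any of the cited papers) of forward transitional probabilities over a Saffran-style syllable stream; the word inventory and parameters are invented for the example.

```python
# A minimal sketch (illustrative, not from the paper): forward transitional
# probabilities over a syllable stream, TP(b | a) = count(a b) / count(a).
import random
from collections import Counter
from itertools import pairwise  # Python 3.10+

def forward_tps(syllables):
    """Map each syllable bigram (a, b) to P(b | a)."""
    bigrams = Counter(pairwise(syllables))
    firsts = Counter(syllables[:-1])  # syllables that have a successor
    return {(a, b): n / firsts[a] for (a, b), n in bigrams.items()}

# Toy stream built from three invented 'words', in the style of
# Saffran-type artificial languages.
random.seed(0)
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["da", "ko", "ti"]]
stream = [syl for _ in range(200) for syl in random.choice(words)]

tps = forward_tps(stream)
print(tps[("tu", "pi")])          # within-word pair: 1.0 by construction
print(tps.get(("ro", "go"), 0))   # word-boundary pair: ~1/3 on average
```

Within-word bigrams come out at TP = 1.0 by construction, while bigrams spanning a word boundary hover around 1/3; that contrast is the statistical signal the segmentation literature appeals to, and the memory-based accounts in the excerpt claim learners store chunks rather than computing it.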
“…Indeed, alternative theoretical accounts assume that the seeming sensitivity to transitional statistics emerges from chunking due to the repetition of groups of elements (e.g. [32,46–48]; see also [49]). Importantly, though, comparing potentially different kinds of computations in correlational designs requires careful attention to the detailed probability structure of such computations.…”
Section: Endnotes (mentioning)
confidence: 99%
“…The study by Giroux and Rey (2009), whose results are simulated in this research, provides evidence that when chunks are learned, the subunits making up those chunks are forgotten unless they are refreshed independently (cited in French et al., 2011). This would imply that chunks are encoded as atomic entities rather than as associations between their constituent elements.…”
Section: Introduction (mentioning)
confidence: 63%
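To make the forgetting claim concrete, here is a deliberately toy sketch (my own assumption, not Giroux and Rey's procedure or the simulation in French et al., 2011): stored units decay on every step unless re-encountered, so once the whole chunk is the unit being refreshed, its embedded subunit fades. The decay and boost constants are hypothetical.

```python
# Toy illustration (hypothetical constants; not the cited authors' model):
# every stored unit decays per time step, and only units actually observed
# on that step are refreshed.
DECAY = 0.95  # assumed per-step retention factor
BOOST = 1.0   # assumed increment for a refreshed unit

def update_strengths(strengths, observed_units):
    """Decay all stored units, then refresh the ones observed this step."""
    for unit in strengths:
        strengths[unit] *= DECAY
    for unit in observed_units:
        strengths[unit] = strengths.get(unit, 0.0) + BOOST
    return strengths

# Once 'tupiro' has been chunked, each new occurrence refreshes the chunk
# as a whole; the embedded subunit 'tupi' is never refreshed on its own.
strengths = {"tupiro": 1.0, "tupi": 1.0}
for _ in range(20):
    update_strengths(strengths, ["tupiro"])
print({unit: round(s, 3) for unit, s in strengths.items()})
```

After 20 exposures the whole chunk's strength grows toward its asymptote while the unrefreshed subunit 'tupi' has decayed to roughly a third of its initial strength, matching the qualitative pattern described in the excerpt.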
“…The raw auditory signal generated by human speech is notoriously hard to segment into words because breaks in the continuity of the signal are poorly correlated with actual word boundaries (Cole & Jakimik, 1980). TRACX successfully models adults' better learning of words over part-words in the context of (a) differential within-word versus between-word forward TPs (Perruchet & Desaulty, 2008) in an artificial language with frequency-controlled test words and part-words; (b) differential within-word versus between-word backward TPs (Perruchet & Desaulty, 2008) in an artificial language with frequency-controlled test words and part-words; (c) gradual forgetting of subchunks found inside chunks (Giroux & Rey, 2009), if these subchunks are not independently refreshed; (d) sentence length and the fact that words become harder to extract as the length of the sentences in which they are found increases (Frank et al., 2010); and (e) vocabulary size and the fact that words become harder to extract as the number and length of the words to be extracted increases (Frank et al., 2010).…”
Section: Research Background (mentioning)
confidence: 99%
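Since this excerpt distinguishes forward from backward TPs, here is the backward counterpart of the earlier sketch (again my own illustration; TRACX itself is a connectionist chunking model, not this statistic).

```python
# Companion sketch to the forward-TP example: backward transitional
# probabilities, BTP(a | b) = count(a b) / count(b), the direction
# Perruchet & Desaulty (2008) manipulated.
import random
from collections import Counter
from itertools import pairwise  # Python 3.10+

def backward_tps(syllables):
    """Map each syllable bigram (a, b) to P(a | b)."""
    bigrams = Counter(pairwise(syllables))
    seconds = Counter(syllables[1:])  # syllables that have a predecessor
    return {(a, b): n / seconds[b] for (a, b), n in bigrams.items()}

random.seed(0)
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["da", "ko", "ti"]]
stream = [syl for _ in range(200) for syl in random.choice(words)]

btps = backward_tps(stream)
print(btps[("tu", "pi")])         # within-word: 1.0 ('pi' only follows 'tu')
print(btps.get(("ro", "go"), 0))  # boundary: ~1/3, 'go' follows any word ending
```

In this toy language the forward and backward statistics pattern together; the experimental point of Perruchet and Desaulty (2008) was to pit them against each other, and these sketches only make the two directional computations concrete.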