2016
DOI: 10.1111/desc.12390
Co‐occurrence statistics as a language‐dependent cue for speech segmentation

Abstract: To what extent can language acquisition be explained in terms of different associative learning mechanisms? It has been hypothesized that distributional regularities in spoken languages are strong enough to elicit statistical learning about dependencies among speech units. Distributional regularities could be a useful cue for word learning even without rich language-specific knowledge. However, it is not clear how strong and reliable the distributional cues are that humans might use to segment speech. We inves…

Cited by 38 publications (75 citation statements). References 75 publications (112 reference statements).
“…We reasoned that phantoms may be accepted as legitimate elements of an artificial language due to their statistical congruency with word-like constituents. 22,27 Rejecting phantoms may therefore rely on the fact that they are not supported by memory representations; phantoms were not encountered and extracted as whole units during the learning phase. Hence, we reasoned that by splitting the tokens into accepted and rejected the chances of dissociating the brain basis of words versus phantom processing would increase.…”
Section: Behavioral Results
confidence: 99%
“…21,26 Furthermore, individual differences and the native language of the listener can influence whether or not phantoms are confused with words. 22,27 A key goal of the present study is to provide novel insights into the mechanisms supporting statistical learning using a novel behavioral and neuroimaging protocol to partial out the processing of words, phantoms, and pseudorandom sequences during learning and subsequent recognition.…”
Section: Introduction
confidence: 99%
“…We chose the forward dependency measure over the backward and mutual-information measures, since it has been shown to perform better on English. A possible explanation is that forward TP carries more information in Subject-Verb-Object languages [24]. Compared to DiBS, TPs demand more memory, as the number of possible syllables encountered is much greater than the number of possible phones.…”
Section: Word Segmentation Algorithms
confidence: 99%
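The forward-TP strategy discussed in the excerpt above can be sketched in a few lines of Python. This is a minimal illustration only: the toy syllable stream and the local-minimum boundary rule are common textbook choices, not necessarily the exact procedure of the cited paper.

```python
from collections import Counter

def forward_tps(syllables):
    """Forward transitional probability: TP(a -> b) = count(a, b) / count(a)."""
    unigrams = Counter(syllables)
    bigrams = Counter(zip(syllables, syllables[1:]))
    return {pair: n / unigrams[pair[0]] for pair, n in bigrams.items()}

def segment(syllables):
    """Cut the stream wherever the forward TP between adjacent syllables is a
    strict local minimum -- a dip in predictability marks a likely word boundary."""
    tps = forward_tps(syllables)
    gaps = [tps[pair] for pair in zip(syllables, syllables[1:])]
    cuts = [0]
    for j, tp in enumerate(gaps):
        left = gaps[j - 1] if j > 0 else float("inf")
        right = gaps[j + 1] if j + 1 < len(gaps) else float("inf")
        if tp < left and tp < right:
            cuts.append(j + 1)  # boundary falls after syllable j
    cuts.append(len(syllables))
    return [" ".join(syllables[s:e]) for s, e in zip(cuts, cuts[1:])]

# Two artificial "words", go-la-bu and pa-bi-ku, concatenated in varying order:
stream = "go la bu pa bi ku go la bu pa bi ku pa bi ku go la bu".split()
print(segment(stream))
# -> ['go la bu', 'pa bi ku', 'go la bu', 'pa bi ku', 'pa bi ku', 'go la bu']
```

Within-word TPs here are near 1.0, while across word boundaries they drop (e.g. TP(bu -> pa) = 2/3), so the local minima recover the word boundaries. This also illustrates the memory point made in the quote: the TP table grows with the square of the syllable inventory, whereas a phone-based model like DiBS works over a much smaller symbol set.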
“…These findings countered the prevailing thought of the time that infant learning was too rudimentary to explain how children learn language in the span of a few short years, and that language must instead develop according to a special language programme. Thus, despite continued debate about the nature of learning in language acquisition [3][4][5], there is now widespread appreciation that children rely on a variety of mechanisms to acquire language, not all innately determined or specific to language. There are also now many such presentations at the BUCLD and many important publications on SL in children and adults.…”
Section: Introduction
confidence: 99%