2019 | DOI: 10.1111/cogs.12700
What Are You Waiting For? Real‐Time Integration of Cues for Fricatives Suggests Encapsulated Auditory Memory

Abstract: Speech unfolds over time and the cues for even a single phoneme are rarely available simultaneously. Consequently, to recognize a single phoneme listeners must integrate material over several hundred milliseconds. Prior work contrasts two accounts: 1) a memory buffer account in which listeners accumulate auditory information in memory and only access higher level representations (i.e., lexical representations) when sufficient information has arrived; and 2) an immediate integration scheme in which lexical repr…
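The contrast between the two accounts in the abstract can be illustrated with a toy simulation. This is a minimal sketch, not the authors' model: the cue values, threshold, and update rule below are hypothetical, and the only point is the difference in when lexical hypotheses are updated (continuously vs. only after enough of the signal has been buffered).

# Toy contrast between a "memory buffer" listener and an "immediate
# integration" listener. Purely illustrative; all numbers are made up
# and are not taken from the paper.

def immediate_integration(cue_evidence):
    """Update the lexical hypothesis after every incoming cue."""
    belief = 0.0
    trajectory = []
    for t, cue in enumerate(cue_evidence):
        belief += cue                    # lexical access engages right away
        trajectory.append((t, belief))
    return trajectory

def memory_buffer(cue_evidence, threshold=1.0):
    """Hold cues in an auditory buffer; contact the lexicon only once the
    accumulated evidence crosses a (hypothetical) sufficiency threshold."""
    buffered = []
    trajectory = []
    for t, cue in enumerate(cue_evidence):
        buffered.append(cue)             # store raw auditory material
        if sum(buffered) >= threshold:   # enough signal: integrate all at once
            trajectory.append((t, sum(buffered)))
    return trajectory

# Hypothetical evidence for a fricative arriving over the frication noise
# plus the following vowel (the late cue).
cues = [0.2, 0.2, 0.2, 0.6]
print("immediate:", immediate_integration(cues))
print("buffered: ", memory_buffer(cues))

Under this sketch the immediate-integration listener's lexical commitment grows with each cue, while the buffer listener shows no lexical activity until the final cue arrives, which is the kind of timing difference the paper's eye-tracking measures are designed to distinguish.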

Cited by 28 publications (90 citation statements) | References 105 publications (193 reference statements)

“…After that, looks to the target increase, and competitors receive more looks than unrelated objects briefly. Fricatives (Panels D–F) showed more competition, and a slower timecourse than stops (Panels A–C), consistent with prior studies (Galle, 2014; Galle, Klein-Packard, Schreiber, & McMurray, submitted). A few developmental patterns are apparent (Supplement S4 for complete analyses).…”
Section: Results (supporting, confidence: 89%)
“…We cannot assume that any phonetic contrast is representative of speech as a whole. Categorical perception, for example, is less robust in vowels (Fry, Abramson, Eimas, & Liberman, 1962) and fricatives (Healy & Repp, 1982), and fricatives may be integrated with other portions of the signal differently from other speech sounds (Galle et al, submitted; Ishida, Samuel, & Arai, 2016). Moreover, frequency differences among phonemes could contribute to the rate or robustness of their development (Thiessen & Pavlik, 2016).…”
Section: Discussion (mentioning, confidence: 99%)
“…Recently, researchers have used eye‐tracking tasks to investigate the extent to which gradient information is maintained over time (McMurray, Tanenhaus, & Aslin, 2009; Zellou & Dahan, 2019). Debate continues over the precise nature of these gradient representations, whether they are based on low‐level acoustic cues (Galle, Klein‐Packard, Schreiber, & McMurray, 2019) or gradient higher‐level representations (Brown‐Schmidt & Toscano, 2017; Falandays, Brown‐Schmidt, & Toscano, 2020). In either case, however, behavioral measures make it difficult to separate early perception from later categorization (Gerrits & Schouten, 2004; Schouten et al, 2003).…”
Section: Gradient Representations (mentioning, confidence: 99%)
“…At a broader timescale, AOC clarifies the interpretation of listeners' sensitivity to within-category acoustic variation. Past work showing that performance on memory tasks depends on acoustic clarity (Crowder & Morton, 1969; Frankish, 2008) or that sensitivity is maintained across syllables (Brown-Schmidt & Toscano, 2017; Falandays et al, 2020; McMurray et al, 2009), or integrated over a delay (Galle et al, 2019; Gwilliams et al, 2018), did not address the internal contents of the representations that support such sensitivity. The present findings provide direct evidence in favor of the position that gradience is maintained through probabilistic uncertainty about potential categories.…”
Section: Discussion (mentioning, confidence: 98%)