2019
DOI: 10.3758/s13414-018-01659-3

Natural speech statistics shift phoneme categorization

Abstract: All perception takes place in context. Recognition of a given speech sound is influenced by the acoustic properties of surrounding sounds. When the spectral composition of earlier (context) sounds (e.g., more energy at lower first-formant [F1] frequencies) differs from that of a later (target) sound (e.g., a vowel with intermediate F1), the auditory system magnifies this difference, biasing target categorization (e.g., towards higher-F1 /ɛ/). Historically, these studies used filters to force context sounds t…

Cited by 11 publications (8 citation statements). References 91 publications (116 reference statements).
“…Despite being matched to filtered contexts in duration (Experiment 1) and long-term average spectra (Experiments 1 and 2), unfiltered contexts produced smaller (if any) SCEs than filtered contexts in each experiment. This finding parallels Stilp and Assgari (2019), where unfiltered context sentences produced smaller and more variable SCEs in vowel categorization than filtered renditions of a single context sentence. In that study, each unfiltered block presented two different sentences sometimes spoken by two different talkers, similar to two different musical passages played by two different musical instruments (and musicians) presented here.…”
Section: Discussion (supporting, confidence: 76%)
“…contribute to perception in more variable (and naturalistic) listening conditions. Recently, Stilp and Assgari (2019) observed SCEs in vowel categorization following highly controlled filtered sentences (as in previous studies) as well as sentences that naturally possessed the desired spectral properties without filtering, significantly enhancing the ecological validity of SCEs. Additionally, SCEs produced by unfiltered sentences were smaller than SCEs produced by filtered sentences, shedding considerable light on the precise degree to which these effects shape everyday perception.…”
Section: Introduction (mentioning, confidence: 57%)
“…Later, Stilp () tested this question by presenting sentence contexts before /da/‐/ga/ targets. He measured the inherent balance of spectral energy across two frequency regions in sentences (low F3 region: 1,700–2,700 Hz; high F3 region: 2,700–3,700 Hz) using mean spectral differences (MSDs; Stilp & Assgari, ). MSDs were measured in two different temporal windows of the context sentences: the last 500 ms of the sentence (the Late window) and everything preceding the last 500 ms (the Early window).…”
Section: Spectral Context Effects (mentioning, confidence: 99%)
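The excerpt above outlines how mean spectral differences (MSDs) are computed: compare the energy in a low-F3 band (1,700–2,700 Hz) against a high-F3 band (2,700–3,700 Hz), separately for the Early window (everything before the last 500 ms of the sentence) and the Late window (the last 500 ms). The sketch below is only an illustration of that idea, not the authors' analysis code; the function names (band_level_db, mean_spectral_difference, early_late_msd), the Welch spectral estimate, and the dB-difference formulation are assumptions.

import numpy as np
from scipy.signal import welch

def band_level_db(x, fs, f_lo, f_hi):
    # Mean power (in dB) of signal x within [f_lo, f_hi) Hz, estimated from a Welch PSD.
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 1024))
    band = (freqs >= f_lo) & (freqs < f_hi)
    return 10.0 * np.log10(np.mean(psd[band]) + 1e-12)

def mean_spectral_difference(x, fs):
    # Low-F3 band level minus high-F3 band level (dB); positive values indicate
    # relatively more energy in the low-F3 region (illustrative definition).
    return band_level_db(x, fs, 1700.0, 2700.0) - band_level_db(x, fs, 2700.0, 3700.0)

def early_late_msd(sentence, fs, late_dur_s=0.5):
    # Split the sentence into the Early window (all but the last 500 ms) and the
    # Late window (the last 500 ms), and compute an MSD for each.
    n_late = int(round(late_dur_s * fs))
    early, late = sentence[:-n_late], sentence[-n_late:]
    return mean_spectral_difference(early, fs), mean_spectral_difference(late, fs)

For example, early_msd, late_msd = early_late_msd(waveform, 44100) would return one value per window, with the sign of each value indicating which F3 region carries relatively more energy in that portion of the context sentence.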