2012
DOI: 10.1121/1.3682054

The influence of stop consonants’ perceptual features on the Articulation Index model

Abstract: Studies on consonant perception under noise conditions typically describe the average consonant error as exponential in the Articulation Index (AI). While this AI formula nicely fits the average error over all consonants, it does not fit the error for any consonant at the utterance level. This study analyzes the error patterns of six stop consonants /p, t, k, b, d, g/ with four vowels (/ɑ/, /ɛ/, /ɪ/, /æ/), at the individual consonant (i.e., utterance) level. The findings include that the utterance error is es…
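For context, the exponential relation mentioned in the abstract is commonly written in this literature as e(AI) = e_chance^(1−AI) · e_min^AI, so that the predicted error falls from the chance level at AI = 0 to a floor e_min at AI = 1. The short sketch below evaluates that form; the specific parameter values are illustrative assumptions, not figures taken from the paper.

import numpy as np

def ai_error(ai, e_chance=15/16, e_min=0.015):
    """Average consonant error as an exponential function of the
    Articulation Index: e(AI) = e_chance**(1 - AI) * e_min**AI.
    Both parameter values are illustrative assumptions (e.g., 15/16
    chance error for a 16-consonant set)."""
    ai = np.asarray(ai, dtype=float)
    return e_chance ** (1.0 - ai) * e_min ** ai

# Predicted error at a few AI values (illustrative):
for ai in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"AI = {ai:.2f} -> predicted error = {float(ai_error(ai)):.3f}")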

Cited by 19 publications (43 citation statements) | References: 60 publications
“…The reported deviations from simple feature structure are broadly consistent with the feature interactions suggested by previous work on confusions induced by speech-weighted noise (Phatak and Allen 2007; Singh and Allen 2012). Such deviations provide at least some support for the segment as basic perceptual unit (Nearey 1990; Norris et al. 2000).…”

Section: Implications and Limitations (supporting)
confidence: 78%
“…As mentioned above, a number of previous studies of speech in noise have focused on the perception of large sets of consonants, typically analyzing responses pooled across (sets of) listeners (Miller and Nicely 1955; Benkí 2003; Cutler et al. 2004; Allen 2005; Phatak and Allen 2007; Singh and Allen 2012). By way of contrast, in the experiments described below, feature perception is analyzed for the smaller set of consonants [p] […]. The focus of the present work is restricted to place and voicing in this subset of English stop consonants, in part so that the results are more or less directly comparable to similar results in previous studies with similarly narrow scope (e.g., Sawusch and Pisoni 1974; Oden and Massaro 1978), and in part because categories defined by the factorial combination of two levels on each of two dimensions map directly onto the simplest full factorial GRT model (described in detail below).…”

Section: Interactions vs. Independence Between Dimensions (mentioning)
confidence: 99%
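As a small illustration of the "two levels on each of two dimensions" structure referred to in the quote (not the cited study's own materials or labels), the factorial combination yields four categories, which is the stimulus layout the simplest full factorial GRT model assumes.

from itertools import product

# Two levels on each of two perceptual dimensions (voicing and place)
# define four categories. The level names are generic placeholders,
# not the specific consonants analyzed in the cited experiments.
voicing = ("voiceless", "voiced")
place = ("place A", "place B")

for v, p in product(voicing, place):
    print(f"{v:9s} x {p}")   # four categories in total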
“…Thus the total number of presentations of each consonant ranged from N = 40 to 80 for each HI ear (total N = 5–10 over 2 sessions × 2 tokens × 4 SNRs). The Vysochanskij-Petunin inequality (Vysochanskij and Petunin, 1980) was used to verify that the number of trials was sufficient to determine correct perception within a 95% confidence interval (see appendix of Singh and Allen, 2012).…”

Section: Methods (mentioning)
confidence: 99%
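For orientation, the Vysochanskij-Petunin inequality bounds the tail probability of any unimodal distribution by 4/(9λ²) for deviations of more than λ standard deviations (valid for λ > √(8/3)), which yields a multiplier of about 2.98 for a 95% interval without assuming normality. The sketch below applies that bound to a binomial error proportion to illustrate the kind of trial-count check described above; the exact procedure in the cited appendix may differ, and the numbers are illustrative.

import math

# Vysochanskij-Petunin inequality: for a unimodal distribution,
# P(|X - mu| >= lam * sigma) <= 4 / (9 * lam**2), valid for lam > sqrt(8/3).
# Solving 4 / (9 * lam**2) = 0.05 gives the 95% multiplier (~2.98).
LAM_95 = math.sqrt(4.0 / (9.0 * 0.05))

def vp_halfwidth(p_hat, n):
    """Half-width of a 95% Vysochanskij-Petunin interval for an error
    proportion p_hat estimated from n trials (illustrative sketch)."""
    sigma = math.sqrt(p_hat * (1.0 - p_hat) / n)
    return LAM_95 * sigma

# With 40-80 presentations per consonant and a nominal 10% error rate:
for n in (40, 80):
    print(f"N = {n}: 95% half-width ~ {vp_halfwidth(0.10, n):.3f}")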
“…To ensure that tokens were unambiguous and robust to noise, each token was selected based on a criterion of 3.1% error for a population of 16 NH listeners, calculated by combining results in quiet and at a −2 dB signal-to-noise ratio (SNR) (i.e., no more than 1 error over a total N = 32, per token) (Phatak and Allen, 2007). Such tokens are representative of the LDC database; Singh and Allen (2012) show, for the majority of tokens, a ceiling effect for NH listeners above −2 dB SNR. One token of /fɑ/ (male talker, label m112) was damaged in the preparation of the tokens, thus it has not been included in this analysis.…”

Section: Speech Materials (mentioning)
confidence: 99%
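The 3.1% figure quoted above corresponds to at most 1 error in N = 32 presentations (16 normal-hearing listeners × 2 conditions, quiet and −2 dB SNR), since 1/32 ≈ 3.1%. Below is a minimal sketch of that screening rule; the token records are hypothetical.

def is_unambiguous(errors_quiet, errors_noise, max_errors=1):
    """Keep a token only if its combined errors over the quiet and
    -2 dB SNR presentations (N = 32 total) do not exceed the
    screening criterion of 1 error (1/32 ~ 3.1%)."""
    return (errors_quiet + errors_noise) <= max_errors

print(is_unambiguous(0, 1))  # 1/32 ~ 3.1% error -> True (token kept)
print(is_unambiguous(1, 1))  # 2/32 ~ 6.3% error -> False (token excluded)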