2017
DOI: 10.1121/1.4976054

Predicting consonant recognition and confusions in normal-hearing listeners

Abstract: The perception of consonants in background noise has been investigated in various studies and was shown to critically depend on fine details in the stimuli. In this study, a microscopic speech perception model is proposed that represents an extension of the auditory signal processing model by Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102, 2892-2905]. The model was evaluated based on the extensive consonant perception data set provided by Zaar and Dau [(2015). J. Acoust. Soc. Am. 138, 1253-126…

Cited by 10 publications (15 citation statements)
References 46 publications
“…The consonant perception model of Zaar and Dau (2017) was considered for predicting the perceptual data obtained with the HA-processed CVs as well as with the CI-processed VCVs. Figure 1 shows the model, which combines the auditory model front end of Dau et al. (1996, 1997) with a temporally dynamic correlation-based back end.…”
Section: Model Description
Confidence: 99%
“…1. Scheme of the consonant perception model (reprinted from Zaar and Dau, 2017). For the test signal and a set of templates, the noisy speech and the noise alone were passed separately through the auditory model, consisting of a gammatone filterbank, an envelope extraction stage, a chain of adaptation loops, and a modulation filterbank.…”
Section: Simulation Procedures
Confidence: 99%
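For readers unfamiliar with the front-end chain named in the quote above (gammatone filterbank → envelope extraction → adaptation loops → modulation filterbank), the following is a highly simplified numerical sketch of that processing order. It is not the published Dau et al. implementation: the FFT-domain band-pass stands in for gammatone filters, a single divisive feedback loop stands in for the model's cascade of five adaptation loops, and all band edges and time constants are illustrative placeholders.

```python
import numpy as np

def fft_bandpass(x, fs, lo, hi):
    """Crude FFT-domain band-pass (illustrative stand-in for a gammatone filter)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

def envelope(x, fs, cutoff=150.0):
    """Envelope extraction: half-wave rectification plus a crude low-pass."""
    return fft_bandpass(np.maximum(x, 0.0), fs, 0.0, cutoff)

def adaptation_loop(env, fs, tau=0.005, floor=1e-5):
    """One divisive feedback loop (the actual model chains five, tau = 5-500 ms)."""
    state = floor
    alpha = 1.0 / (tau * fs)  # first-order low-pass coefficient
    out = np.empty_like(env)
    for i, e in enumerate(env):
        out[i] = max(e, floor) / state          # divide input by adaptation state
        state = max(state + alpha * (out[i] - state), floor)  # state tracks output
    return out

def internal_representation(x, fs,
                            audio_bands=((100, 400), (400, 1600), (1600, 6400)),
                            mod_bands=((0, 4), (4, 16), (16, 64))):
    """Filterbank -> envelope -> adaptation -> modulation filterbank.

    Returns an array of shape (n_audio_bands, n_mod_bands, n_samples);
    all band edges here are placeholder values.
    """
    rep = np.empty((len(audio_bands), len(mod_bands), len(x)))
    for i, (lo, hi) in enumerate(audio_bands):
        band = fft_bandpass(x, fs, lo, hi)
        adapted = adaptation_loop(envelope(band, fs), fs)
        for j, (mlo, mhi) in enumerate(mod_bands):
            rep[i, j] = fft_bandpass(adapted, fs, mlo, mhi)
    return rep
```

In the model described in the quote, this representation is computed separately for the noisy speech and the noise alone, and the back end then correlates the test representation against a set of templates over time.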