2016
DOI: 10.1121/1.4948772
A simulation framework for auditory discrimination experiments: Revealing the importance of across-frequency processing in speech perception

Abstract: A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107] that was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments that included psychoacoustic as well as speech recognition exper…
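As a rough illustration of the simulation idea (not the paper's implementation), the following Python sketch presents noisy tokens to a toy recognizer over a range of SNRs and reads off the SNR at which recognition first reaches 50% correct. The synthetic feature vectors and the nearest-template classifier are stand-in assumptions for the auditory feature extraction and GMM/HMM back end used in FADE.

```python
# Toy sketch of a FADE-style simulation: sweep the SNR, measure the
# recognition rate of a simple classifier, and read off the threshold.
# Everything below (templates, classifier, trial counts) is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_dims, n_trials = 10, 40, 200
templates = rng.normal(size=(n_words, n_dims))   # clean "word" feature vectors

def recognition_rate(snr_db):
    """Fraction of noisy tokens assigned to the correct template."""
    sigma = 10.0 ** (-snr_db / 20.0)             # noise std for a unit-power signal
    correct = 0
    for _ in range(n_trials):
        word = rng.integers(n_words)
        noisy = templates[word] + sigma * rng.normal(size=n_dims)
        # Nearest-template decision, a toy stand-in for HMM decoding.
        guess = int(np.argmin(np.linalg.norm(templates - noisy, axis=1)))
        correct += guess == word
    return correct / n_trials

# Sweep the SNR and take the first SNR at which performance reaches
# 50% correct -- a coarse speech recognition threshold (SRT) estimate.
snrs = np.arange(-30.0, 2.5, 2.5)
rates = np.array([recognition_rate(s) for s in snrs])
srt = snrs[np.argmax(rates >= 0.5)]
print(f"simulated SRT ~ {srt:.1f} dB SNR")
```

The same loop carries over to psychoacoustic experiments by swapping the word tokens for the stimuli of a discrimination task (e.g., tone-in-noise detection) and keeping the threshold read-out unchanged.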

Cited by 35 publications (63 citation statements)
References 27 publications
“…These models can predict speech intelligibility in different listening situations based on individual auditory profiles and on various auditory features (e.g. Rhebergen et al.; Meyer & Brand; Schädler et al.). However, they do not consider cognitive abilities, such as selective auditory attention, as a parameter in estimating speech recognition.…”
Section: Introduction
confidence: 97%
“…In the current study, a framework for auditory discrimination experiments [2,8], which uses an automatic speech recognition (ASR) system and requires neither calibration with empirical data nor temporal alignment of the to-be-recognized signal, was used as a microscopic model to simulate, and hence predict, the outcome of the matrix test across several languages and noise conditions, as empirically determined in [6]. FADE was shown to accurately predict the speech intelligibility of the German matrix test in different stationary noise conditions [7].…”
Section: Introduction
confidence: 99%
“…FADE was shown to accurately predict the speech intelligibility of the German matrix test in different stationary noise conditions [7]. Its scope was then successfully extended to fluctuating noise conditions and even to basic psychoacoustic experiments [8]. The matrix sentence test was developed for several languages in order to make speech recognition measurements as comparable as possible across languages.…”
Section: Introduction
confidence: 99%
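The outcome measure predicted here, the matrix-test SRT, is conventionally defined as the SNR at which 50% of the words are recognized. One common way to extract it from simulated (or measured) recognition scores is to fit a logistic psychometric function; the sketch below uses illustrative toy data and SciPy's curve_fit, not FADE's actual fitting procedure.

```python
# Hedged sketch: fit a logistic psychometric function to recognition
# scores and read off the SRT. The function form and the data points
# are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, srt, k):
    """Logistic word-recognition probability as a function of SNR (dB)."""
    return 1.0 / (1.0 + np.exp(-k * (snr - srt)))

# Toy recognition rates, as a recognizer might produce across an SNR sweep.
snrs = np.array([-15.0, -12.5, -10.0, -7.5, -5.0, -2.5, 0.0])
rates = np.array([0.05, 0.12, 0.31, 0.55, 0.78, 0.93, 0.98])

(srt, k), _ = curve_fit(psychometric, snrs, rates, p0=(-7.0, 1.0))
# For this logistic, the slope of the psychometric function at the
# 50% point is k/4 (in proportion correct per dB).
print(f"SRT = {srt:.1f} dB SNR, slope at threshold = {k / 4:.2f} per dB")
```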
“…In comparison with the recently published FADE model [20], which uses a GMM/HMM recognizer trained specifically on the same matrix test material as used for the prediction, the current DNN/HMM model employs a more sophisticated recognizer that has not seen the exact speech token under test before and could potentially operate on an open speech recognition set as well. However, this comes at the cost of more training data needed for this approach and further restrictions, such as the much greater computational resources required to fit the model parameters for the respective experiment to be modelled.…”
Section: Discussion
confidence: 99%
“…However, this comes at the cost of more training data needed for this approach and further restrictions, such as the much greater computational resources required to fit the model parameters for the respective experiment to be modelled. In addition, the current modelling approach has not yet been tested for predicting psychoacoustic experiments and the effect of hearing impairment in the way successfully implemented with FADE [20].…”
Section: Discussion
confidence: 99%