2009
DOI: 10.1121/1.3238257

Consonant recognition loss in hearing impaired listeners

Abstract: This paper presents a compact graphical method for comparing the performance of individual hearing impaired (HI) listeners with that of an average normal hearing (NH) listener on a consonant-by-consonant basis. This representation, named the consonant loss profile (CLP), characterizes the effect of a listener's hearing loss on each consonant over a range of performance. The CLP shows that the consonant loss, which is the signal-to-noise ratio (SNR) difference at equal NH and HI scores, is consonant-dependent…
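
To illustrate the consonant-loss definition quoted above (the SNR difference between the HI and NH listener at equal recognition score), here is a minimal Python sketch that interpolates each listener's score-versus-SNR data for a single consonant. The data values, function names, and interpolation approach are assumptions for illustration only, not the paper's implementation.

import numpy as np

def snr_at_score(snrs, scores, target):
    # Interpolate the SNR (dB) at which a listener reaches a given consonant
    # recognition score; scores are assumed to increase monotonically with SNR.
    return np.interp(target, scores, snrs)

def consonant_loss(snrs, nh_scores, hi_scores, target=0.5):
    # Consonant loss per the abstract's definition: the SNR difference between
    # the HI and the average NH listener at equal recognition score.
    return snr_at_score(snrs, hi_scores, target) - snr_at_score(snrs, nh_scores, target)

# Hypothetical score data for one consonant (proportion correct vs. SNR in dB):
snrs = np.array([-12.0, -6.0, 0.0, 6.0, 12.0])
nh = np.array([0.20, 0.55, 0.85, 0.97, 1.00])   # average normal-hearing listener
hi = np.array([0.05, 0.20, 0.45, 0.70, 0.90])   # individual hearing-impaired ear

print(f"Consonant loss at 50% correct: {consonant_loss(snrs, nh, hi):.1f} dB")

A positive consonant loss means the HI ear needs that much more SNR than the average NH listener to reach the same score on that consonant; computing it per consonant over a range of scores yields the profile described in the abstract.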

Cited by 39 publications (50 citation statements); references 31 publications.

“…Ears having even a slight hearing loss (HL) experience significant and systematic consonant errors on these very same zero-error sounds. In our experience, any two ears having the same hearing loss, as characterized in terms of the pure tone average (PTA) or speech reception thresholds (SRT), never have similar errors (Phatak et al., 2009; Yoon et al., 2012; Han, 2011). Our several studies of consonant errors, in both normal hearing and hearing impaired ears, show that average scores fundamentally mischaracterize this idiosyncratic consonant speech loss (Phatak et al., 2009; Han, 2011).…”
Section: B. Aims of This Study (mentioning)
Confidence: 95%

“…In our experience, any two ears having the same hearing loss, as characterized in terms of the pure tone average (PTA) or speech reception thresholds (SRT), never have similar errors (Phatak et al., 2009; Yoon et al., 2012; Han, 2011). Our several studies of consonant errors, in both normal hearing and hearing impaired ears, show that average scores fundamentally mischaracterize this idiosyncratic consonant speech loss (Phatak et al., 2009; Han, 2011). This observation leads to many difficult yet important questions, such as: Why are /pa/'s from some of the talkers confused with /ta/ while others are rarely confused, and why are certain consonant utterances more robust to masking noise than others?…”
Section: B. Aims of This Study (mentioning)
Confidence: 95%

“…Clinically, it is defined as the difference between the speech-in-noise thresholds of NH and HI listeners when the speech is presented at audible levels (Killion et al., 2004). SNR loss is ideally an audibility-independent phenomenon, but Phatak et al. (2009) cautioned that audibility loss may sometimes be misinterpreted as SNR loss due to insufficient gain. The current study verified that HI listeners indeed do not exhibit SNR loss for vowel recognition in the SS masker and in low-rate (12 Hz) modulated maskers when audibility is restored.…”
Section: B. Amplification, Audibility, and SNR Loss (mentioning)
Confidence: 98%

“…Although the tests all differ slightly in composition, research shows that a common advantage of these tests is that they simulate everyday listening conditions, which are realistic for measuring the speech perception ability of HI listeners [2,17]. However, these tests fail to fully reflect HI speech perception in terms of acoustic and speech cues, because a contextual bias is inherent in these word/sentence tests [18,19]. Boothroyd (1994) clearly demonstrated that HI listeners decode consonant-vowel-consonant (CVC) syllables using both direct sensory evidence and indirect contextual evidence [20].…”
Section: Current Clinical Measurements (mentioning)
Confidence: 99%