2016
DOI: 10.1097/aud.0000000000000234
Text as a Supplement to Speech in Young and Older Adults

Abstract (Objective): The purpose of this experiment was to quantify the contribution of visual text to auditory speech recognition in background noise. Specifically, we tested the hypothesis that partially accurate visual text from an automatic speech recognizer could be used successfully to supplement speech understanding in difficult listening conditions in older adults with normal or impaired hearing. Our working hypotheses were based on what is known regarding audiovisual speech perception in the elderly from speech…

Cited by 17 publications (23 citation statements). References 52 publications.
“…Interestingly, they actually found evidence for increases in listening effort (operationalized as reaction time) with the visual task, despite subjective reports that the task was easier. In contrast, our results fit with the prior literature on assistive text captioning which suggest that visual text cues can provide a substantial benefit to speech processing (e.g., Krull & Humes, 2016;Gordon-Salant & Callahan, 2009;Grossman & Rajan, 2017).…”
Section: Discussion (supporting)
confidence: 89%
“…One recent paper estimated that some popular ASR systems have word error rates between 9 and 34% (Këpuska & Bohouta, 2017). As noted previously, both Krull and Humes (2016) and Zekveld and colleagues (2008) used ASR technology to generate the text that accompanied the speech, which would allow for the introduction of realistic captioning errors.…”
Section: Discussion (mentioning)
confidence: 99%
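The word error rates cited above are conventionally computed as the word-level Levenshtein edit distance between the ASR hypothesis and the reference transcript, divided by the number of reference words. A minimal sketch (the function name, whitespace tokenization, and example strings are illustrative, not taken from the cited papers):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

On this definition, a single substituted word in a six-word caption yields a WER of 1/6, roughly 17%, which sits inside the 9–34% range quoted above.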
“…Captions may be helpful in a variety of conditions where, for example, the sound system is poor, there are foreign accents, there is interfering speech or noise, or the viewer’s speech perception is affected by hearing loss. Both older and younger individuals modulate their emphasis on captions versus spoken speech depending on a variety of factors, including the SNR of the acoustic signal and the accuracy of the caption (Krull & Humes, 2016). Assuming an accurate caption, one can learn the content of the spoken message simply by reading the caption text.…”
Section: Discussion (mentioning)
confidence: 99%
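The acoustic SNR referred to above is standardly expressed in decibels as ten times the base-10 logarithm of the signal-to-noise power ratio. A minimal sketch (the function name and the toy sample lists are illustrative assumptions, not from the cited work):

```python
import math

def snr_db(signal: list[float], noise: list[float]) -> float:
    """Signal-to-noise ratio in dB from two sampled waveforms."""
    p_signal = sum(s * s for s in signal) / len(signal)  # mean power
    p_noise = sum(n * n for n in noise) / len(noise)
    return 10 * math.log10(p_signal / p_noise)
```

Doubling the signal amplitude quadruples its power, raising the SNR by about 6 dB, which is the scale on which masked-speech conditions like those above are typically specified.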
“…In other words, spatially separating the target from the masker ‘releases’ the target from auditory masking. Somewhat surprisingly, providing listeners with information from a different modality prior to or during the presentation of masked sentences facilitates speech reception in noise (Freyman, Balakrishnan, & Helfer, 2004; Krull & Humes, 2016). For example, Freyman, Balakrishnan, and Helfer had participants report the last word of a syntactically correct but semantically anomalous sentence (e.g., “A rose could paint a fish.”) when it was masked by two other simultaneously presented sentences of the same type spoken in two different voices.…”
mentioning
confidence: 99%