Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility 2017
DOI: 10.1145/3132525.3132541
Deaf and Hard-of-Hearing Perspectives on Imperfect Automatic Speech Recognition for Captioning One-on-One Meetings

Cited by 42 publications (11 citation statements)
References 36 publications
“…To sum up, the perceived usefulness of the compressed subtitles is mixed with a tendency towards full subtitles because the participants are used to them (as stated in their final feedback). This is in line with the findings of Berke et al. [37], who also observe that people tend to reject unknown subtitle designs.…”
Section: Results (supporting)
confidence: 92%
“…Users are engaged in an attention-demanding task of viewing a video with multiple sources of visual information in parallel to the caption-text stream, and if the choice of highlighting style or the frequency with which words are highlighted is suboptimal, then such visual decoration of the text could be distracting. This speculation is supported by prior research that has investigated the effect of different visual markup of caption texts in another context: to convey the confidence scores of a captioning service (e.g., ASR system) as to the accuracy of its caption output [6]. In a study with 107 DHH users, Berke et al. discovered that although participants were receptive to the idea of having visual indicators of the confidence of an automatic caption system, they were concerned about distraction from changes in text appearance.…”
Section: Visual Markup of Text in Captions (mentioning)
confidence: 91%
“…Such highlighting in captions may require special consideration: Unlike text documents, captions are dynamic, with shorter text segments, which are usually shown in 1 or 2 lines, for 2 to 4 seconds [28]. Moreover, users are known to be sensitive to caption display parameters such as speed, font size, or decorations: Several researchers have measured the influence of such visual parameters of caption appearance on the readability of captions for DHH users [6,28,43].…”
Section: Introduction (mentioning)
confidence: 99%
“…they may fail on unexpected cases or fail in ways that are unlike how humans fail. In our research on using ASR to automatically generate captions for DHH users during live conversations with hearing colleagues, we have found that users are still interested in having access to such technology, even if it is not yet perfect [5]. However, they would like to have more information to help them, as end-users, determine when they should trust the output of these systems.…”
Section: Lack of Interpretability (mentioning)
confidence: 99%