Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3411764.3445572

Say It All: Feedback for Improving Non-Visual Presentation Accessibility

Abstract: Figure 1: Presentation A11y parses presentation slides and transcribes the presenter's speech in real time to give presenters element-level feedback on whether they have verbally described the visual content. Presenters can use the real-time feedback as a prompt to speak about unaddressed slide elements, or review post-presentation feedback to help them revise their slides.
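The core idea of element-level feedback — checking each slide element against the live transcript — can be illustrated with a toy heuristic. This is a hypothetical sketch, not the paper's actual pipeline (which involves slide parsing and real-time speech transcription); the word-overlap threshold and function names are assumptions for illustration only.

```python
import re

def covered_elements(slide_elements, transcript, threshold=0.5):
    """Toy element-level coverage check (hypothetical heuristic, not the
    paper's model): an element counts as verbally addressed if at least
    `threshold` of its content words appear in the spoken transcript."""
    spoken = set(re.findall(r"[a-z']+", transcript.lower()))
    report = {}
    for element in slide_elements:
        words = re.findall(r"[a-z']+", element.lower())
        hits = sum(w in spoken for w in words)
        report[element] = bool(words) and hits / len(words) >= threshold
    return report

# Elements still False in the report would trigger a real-time prompt.
report = covered_elements(
    ["Quarterly revenue chart", "Team photo"],
    "Here you can see our quarterly revenue growing in this chart",
)
```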

Cited by 25 publications (8 citation statements)
References 33 publications
“…Creating an accessible tool for authoring videos is challenging partially due to the inaccessibility of videos themselves. Videos are inaccessible to BLV audiences when the visual content in the video is not described by the audio (e.g., travel videos with scenic shots set to music) [45,46,60]. To make videos accessible, video creators [57], volunteers [31], or professional audio describers [1] add audio descriptions to describe important visual content that is not understandable from the audio alone.…”
Section: Video Accessibility
confidence: 99%
“…To make videos accessible, video creators [57], volunteers [31], or professional audio describers [1] add audio descriptions to describe important visual content that is not understandable from the audio alone. As authoring audio descriptions is challenging, prior work developed tools that help creators gain feedback on audio descriptions [49,69], respond to audience requests for descriptions [31], optimize descriptions to fit within time available [57], and recognize mismatches between audio and visuals to add descriptions as they capture [60] or edit [46] videos. Beyond helping creators author accessible videos, prior work makes inaccessible videos accessible on demand by generating automatic [83] or interactive [29,59] visual descriptions.…”
Section: Video Accessibility
confidence: 99%
“…Previous researchers have conducted studies to understand the general accessibility and usability issues of screen readers on mobile phones and how to make visual content (e.g. texts, images, UI elements and components) more accessible for screen readers [8,26,28,38,41,42,47]. For touchscreen accessibility, researchers have investigated how screen size and key size could affect BLV people's interaction with touchscreens [13,40].…”
Section: 2.2
confidence: 99%
“…We investigated extractive and abstractive summarization techniques to address this issue (Tas and Kiyani 2007). Of these, abstractive summarization methods are mainly trained and tested on structured documents such as news articles and are known to perform poorly on less structured texts (Peng et al 2021). Therefore, we selected five different extractive summarization methods for modelling content importance: a custom implementation of SMMRY, the algorithm behind Reddit's TLDR bot (https://smmry.com), and four different pre-trained models: BART (Lewis et al 2020), GPT-2 (Radford et al 2019), XLNET (Yang et al 2019), and T5 (Raffel et al 2020).…”
Section: Summarization
confidence: 99%
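The SMMRY-style approach mentioned in the citation above — frequency-based extractive summarization — can be sketched in a few lines. This is a minimal illustration of the general technique (score sentences by the document-level frequency of their content words, keep the top-scoring ones in original order), not the cited authors' implementation; the stopword list and scoring details are assumptions.

```python
import re
from collections import Counter

# Minimal stopword list for illustration; a real implementation would use
# a fuller list (e.g. from NLTK).
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and",
             "in", "on", "it", "that", "with"}

def summarize(text, n_sentences=2):
    """Frequency-based extractive summary in the spirit of SMMRY:
    score each sentence by the average document-level frequency of its
    content words, then return the top n sentences in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS]
    freq = Counter(words)
    scored = []
    for i, sent in enumerate(sentences):
        toks = [w for w in re.findall(r"[a-z']+", sent.lower())
                if w not in STOPWORDS]
        score = sum(freq[w] for w in toks) / (len(toks) or 1)
        scored.append((score, i, sent))
    top = sorted(scored, reverse=True)[:n_sentences]
    return " ".join(s for _, _, s in sorted(top, key=lambda t: t[1]))
```

Sentences whose words recur throughout the document score highest, which is why such methods favour topic-central sentences over asides.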