Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility 2021
DOI: 10.1145/3441852.3471234
Slidecho: Flexible Non-Visual Exploration of Presentation Videos

Cited by 10 publications (10 citation statements) | References 42 publications
“…We explore how community members familiar with the domain may be able to provide descriptions for live rather than recorded videos. While audio descriptions typically occur within gaps in video narration [3,37], adequate gaps do not always occur (e.g., for short videos [11] or videos with frequent speech [37,38]). To address this time constraint, prior work used rich audio to convey video themes [11], and provided users control over how often or when to pause a video to receive additional descriptions [37,38].…”
Section: Video Accessibility
confidence: 99%
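The gap constraint described in this statement is concrete enough to sketch. Below is a minimal, hypothetical Python illustration of detecting narration gaps long enough to hold an audio description, assuming word-level transcript timestamps are available; the data layout and the 2.0 s threshold are illustrative assumptions, not taken from any of the cited systems.

```python
# Hypothetical sketch: find silent gaps in narration long enough to hold
# an audio description, given word-level transcript timestamps.
# The 2.0 s threshold is an illustrative assumption, not from cited work.

def find_description_gaps(words, min_gap_s=2.0):
    """words: list of (text, start_s, end_s), sorted by start time.
    Returns (gap_start, gap_end) spans where a description could play."""
    gaps = []
    for (_, _, prev_end), (_, next_start, _) in zip(words, words[1:]):
        if next_start - prev_end >= min_gap_s:
            gaps.append((prev_end, next_start))
    return gaps

# Example: only the 3.0 s pause after "talk" qualifies as a usable gap.
transcript = [("welcome", 0.0, 0.6), ("to", 0.7, 0.8),
              ("the", 0.9, 1.0), ("talk", 1.1, 1.5),
              ("first", 4.5, 5.0)]
print(find_description_gaps(transcript))  # [(1.5, 4.5)]
```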
“…audio descriptions. Prior work explored how to create audio descriptions for recorded videos such as films [3,36,46], user-generated videos [19,28,29,37,50], slide presentations [38,39], and GIFs [11] by providing computational description support [29,37,38,50,54] and proposing what to describe for specific video types (e.g., GIFs [11], films [46]). Previous work has not yet explored technology to support live descriptions or description preferences for livestream-specific content (e.g., long expert streams).…”
Section: Introduction
confidence: 99%
“…As authoring audio descriptions is challenging, prior work developed tools that help creators gain feedback on audio descriptions [49,69], respond to audience requests for descriptions [31], optimize descriptions to fit within the time available [57], and recognize mismatches between audio and visuals to add descriptions as they capture [60] or edit [46] videos. Beyond helping creators author accessible videos, prior work makes inaccessible videos accessible on demand by generating automatic [83] or interactive [29,59] visual descriptions. While such prior work gives BLV audience members access to visual content in videos, these approaches were designed to make videos accessible for consumption rather than authoring, so they lack information important for video authoring tasks (e.g., lighting, camera stability).…”
Section: Video Accessibility
confidence: 99%
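As one hedged illustration of the "fit descriptions within the time available" idea this statement attributes to [57]: if the spoken duration of a description is estimated from its word count, an overlong description can be trimmed to the available gap. The speaking rate and greedy truncation below are assumptions for illustration only; the cited tool may optimize quite differently.

```python
# Hypothetical sketch: trim a description so its estimated spoken length
# fits an available narration gap. The 2.5 words/second speaking rate is
# an illustrative assumption, not a figure from the cited work.

WORDS_PER_SECOND = 2.5

def fit_description(description: str, gap_s: float) -> str:
    words = description.split()
    max_words = int(gap_s * WORDS_PER_SECOND)
    if len(words) <= max_words:
        return description
    return " ".join(words[:max_words]) + "…"

print(fit_description("The presenter points to a bar chart comparing "
                      "error rates across all four conditions", 2.0))
# -> "The presenter points to a…" (5 words fit a 2.0 s gap)
```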
“…To help video consumers skim and navigate to content of interest, prior work introduced approaches to navigate videos based on transcripts [33,54,55], high-level chapters and scenes [13,19,34,54,56,80,84], or key objects and concepts [12,44,59]. While transcripts help users efficiently search for words used in the video [33,54,55], they can be difficult to skim as they are often long, unstructured, and contain disfluencies present in speech [56].…”
Section: Video Navigation Interaction Techniques
confidence: 99%
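Transcript-based navigation as described in this statement reduces, at its simplest, to seeking by time-aligned words. A minimal sketch, assuming a (word, start-time) transcript format (an assumed layout, not any cited system's actual data model):

```python
# Hypothetical sketch of transcript-based navigation: search a
# time-aligned transcript for a query word and return the timestamps
# a video player could seek to.

def search_transcript(words, query):
    """words: list of (text, start_s); returns start times of matches."""
    q = query.lower()
    return [start for text, start in words if text.lower() == q]

transcript = [("results", 12.4), ("show", 12.9), ("the", 13.1),
              ("results", 95.2), ("generalize", 95.8)]
print(search_transcript(transcript, "results"))  # [12.4, 95.2]
```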