Designing Interactive Systems Conference 2022
DOI: 10.1145/3532106.3533522
“So What? What's That to Do With Me?” Expectations of People With Visual Impairments for Image Descriptions in Their Personal Photo Activities

Abstract: People with visual impairments (PVI) access photos through image descriptions. Thus far, research has studied what PVI expect in these descriptions mostly regarding functional purposes (e.g., identifying an object) and when engaging with online, publicly available images. Extending this research, we interviewed 30 PVI to understand their expectations for image descriptions when viewing, taking, searching, and reminiscing with personal photos on their own devices. We show how their expectations varied across ph…

Cited by 8 publications (3 citation statements)
References 70 publications
“…For instance, a noteworthy challenge for accessibility arises from screen readers struggling to identify unlabeled buttons, an aspect that is ignored by developers when designing interface buttons [69]. Additionally, researchers have observed that visually impaired individuals' engagement with images is context-dependent, and the provided textual descriptions do not consistently align with their requirements [1,43,82]. These accessibility challenges indicate that accessibility is not always a technical problem but also a social issue.…”
Section: Blind People and Social Media Accessibility
confidence: 99%
“…Prior work has also emphasized that BLV people's preferences for image descriptions vary based on an image's context [17]. Specifically, preferences differ based on the source or content of the image [4,21,49,60,63,78,79,111,114]. For example, Stangl et al [111,114] found that BLV people wanted different details for images associated with different sources and user goals.…”
Section: Image Accessibility
confidence: 99%
“…This task represents a standard example of multi-modal learning, bridging the domains of Computer Vision (CV) and Natural Language Processing (NLP). Image captioning models have utility across diverse domains, with applications including assistance for individuals with visual impairments [1,2], automatic medical image captioning [3] and diagnosis [4], and enhancing human–computer interactions [5]. Motivated by the achievements of deep learning techniques in machine translation [6], the majority of image captioning models adopt the encoder–decoder framework coupled with a visual attention mechanism [7,8].…”
Section: Introduction
confidence: 99%