2018
DOI: 10.1109/mprv.2018.011591059

The Impact of Low Vision on Touch-Gesture Articulation on Mobile Devices

Cited by 19 publications (10 citation statements)
References 18 publications
“…4.1.10 Vision. Datasets under this group (a total of 11) typically include people who are blind [14,24] (6) or have low vision [165]. When datasets are collected with real-world assistive applications, where disability status as well as visual acuity or age of onset are not known, the umbrella term "people with visual impairments" or "visually impaired" is used to describe those contributing data (e.g.…”
Section: Communities of Focus
Citation type: mentioning
Confidence: 99%
“…[92]). Datasets are typically collected in the context of accessibility, such as navigation [65], object recognition [154], and accessibility of web or touchscreen interfaces [24,165]. There is one exception, where the context is clinical, focusing on screening of Proliferative Diabetic Retinopathy based on retina images [84].…”
Section: Communities of Focus
Citation type: mentioning
Confidence: 99%
“…For example, letter "E" could present users with an edge-enhanced view of the physical reality, and letter "C" would activate color enhancement. However, it is known that people with low vision produce stroke gestures that exhibit larger variations in terms of the Length-Error and Bending-Error features [98] compared to people without visual impairments [102], which reflects negatively on the accuracy rates of gesture recognizers. Our practitioner would like to know how large this variation is for the particular symbols "E" and "C" chosen for implementation in their user interface and wearable prototype.…”
Section: Motivating Examples
Citation type: mentioning
Confidence: 99%
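The Length-Error and Bending-Error features quoted above belong to the family of relative accuracy measures for stroke gestures (e.g., Vatavu, Anthony, and Wobbrock's relative accuracy measures): after resampling a gesture and its reference template to the same number of points, Length-Error averages the differences between corresponding segment lengths, and Bending-Error averages the differences between corresponding turning angles. The following is a minimal Python sketch of that idea; the (x, y)-tuple representation, the helper names, and n = 32 points are illustrative assumptions, not the cited papers' exact implementation.

```python
import math

# A minimal sketch, assuming gestures are lists of (x, y) tuples.
# The resampling helper and n = 32 are illustrative choices.

def resample(points, n=32):
    """Resample a stroke to n roughly equidistant points."""
    total = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    interval = total / (n - 1)
    pts = list(points)
    out = [pts[0]]
    acc = 0.0
    i = 1
    while i < len(pts):
        seg = math.dist(pts[i - 1], pts[i])
        if seg > 0 and acc + seg >= interval:
            t = (interval - acc) / seg
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # continue measuring from the inserted point
            acc = 0.0
        else:
            acc += seg
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def _turning_angles(pts):
    """Signed turning angle at each interior point, wrapped to (-pi, pi]."""
    angles = []
    for i in range(1, len(pts) - 1):
        a1 = math.atan2(pts[i][1] - pts[i - 1][1], pts[i][0] - pts[i - 1][0])
        a2 = math.atan2(pts[i + 1][1] - pts[i][1], pts[i + 1][0] - pts[i][0])
        d = a2 - a1
        if d > math.pi:
            d -= 2 * math.pi
        elif d <= -math.pi:
            d += 2 * math.pi
        angles.append(d)
    return angles

def length_error(candidate, template, n=32):
    """Mean absolute difference between corresponding segment lengths."""
    c, t = resample(candidate, n), resample(template, n)
    return sum(abs(math.dist(c[i], c[i + 1]) - math.dist(t[i], t[i + 1]))
               for i in range(n - 1)) / (n - 1)

def bending_error(candidate, template, n=32):
    """Mean absolute difference between corresponding turning angles."""
    c = _turning_angles(resample(candidate, n))
    t = _turning_angles(resample(template, n))
    return sum(abs(a - b) for a, b in zip(c, t)) / (n - 2)
```

In the practitioner scenario from the quote, one could compute these two features for each participant's articulations of "E" and "C" against a reference template and compare the spread of values between the two user groups.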
“…Moreover, they lack consistent descriptions and require manual screening. We believe that this repository not only contributes to transparency but also moves us a step […] Figure 1: Examples of datasets generated by people with disabilities, including (from left to right) photos taken by blind people [11], sign language videos and annotations [4,19], stroke gestures by people with low vision [31], mobility app logs from people with visual impairments [17], audio recordings from people with dysphonia [3], and text written by people with dyslexia [25].…”
Citation type: mentioning
Confidence: 99%