Visualization techniques such as bar graphs and pie charts let sighted users quickly understand and explore numerical data. These techniques remain largely inaccessible to visually impaired users, and even when made accessible, they are slow and cumbersome, and not as useful as they are to sighted users. Previous research has studied two methods of improving the perception and speed of navigating auditory graphs: using non-speech audio (such as tones) instead of speech to communicate data, and using two audio streams in parallel instead of in series. However, these studies were done in the early 2000s, and speech synthesis techniques have improved considerably since then, as has the familiarity of visually impaired users with smartphones and speech systems. We systematically compare user performance on four modes of generating auditory graphs: parallel-tone, parallel-speech, serial-tone, and serial-speech. We conducted two within-subjects studies: one with 20 sighted users and the other with 20 visually impaired users. Each user group performed point estimation and point comparison tasks with each technique on two sizes of bar graphs. We assessed task time, errors, and user preference. We found that while tone was faster than speech, speech was more accurate than tone. The parallel modality was faster than the serial modality, and visually impaired users were faster than their sighted counterparts. Further, users showed a strong personal preference for the serial-speech technique. To the best of our knowledge, this is the first empirical study to systematically compare these four techniques.
Cross-cultural differences and cultural sensitivities have not yet received much attention in the areas of accessibility, assistive technologies, inclusive design, and methods for working with disabled and older users. However, it is important to consider the challenges of developing accessible and usable technologies for people with disabilities and older people in different cultural contexts. This chapter presents the background to the topic and then considers three particular issues: the accessibility of interactive systems in the home and the implications for emerging markets; accessibility problems in a multilingual society such as India; and finally, the cultural biases of the methods used when working with users within a user-centered design lifecycle or a "double diamond" methodology, whether those users are mainstream, disabled, or older.
Swarachakra is an Abugida text input keyboard available in 12 Indian languages. We enhanced an accessible version of Swarachakra Marathi with speech input. However, speech input can be error-prone, especially for languages where speech recognition technologies are new. Such errors can either slow the user down due to the need for editing, or go unnoticed, leading to high uncorrected error rates. We therefore conducted a within-subject empirical study with 11 novice visually impaired users to compare the user performance of the keyboard-only input method with the keyboard+speech input method. We found that keyboard+speech input was almost 11 times faster, reaching 182 characters per minute, and had a lower uncorrected error rate than keyboard-only input, despite having a higher corrected error rate. Though we used a wide variety of phrases in our study, we observed that all phrases were faster on average with the keyboard+speech input method. To the best of our knowledge, ours is the first empirical study to evaluate the performance of speech-enabled text input in Marathi for visually impaired people. This is the highest reported speed by visually impaired users in any Indian language.
Though several keyboards for Indic languages are available on the Google Play Store, few are accessible to the visually impaired. In particular, none of the gesture-based keyboards are accessible. We developed an accessible prototype of the popular gesture-based, logically organised Hindi keyboard Swarachakra. In this paper, we present findings from a two-part study. In the first part, we conducted a qualitative study of Swarachakra with 12 visually impaired users. In the second part, we conducted a longitudinal, within-subject evaluation comparing Swarachakra and the Google Indic keyboard. By the end of the two-week-long study, 10 participants had spent an average of 6.5 hours typing, including training and text input tasks. Our study establishes a benchmark for text input speeds on virtual keyboards for Indic languages by visually impaired users. The mean typing speed on Swarachakra was 14.53 cpm and that on Google Indic was 12.79 cpm; the mean speeds in the last session were 21.72 cpm and 18.36 cpm respectively. Regression analysis indicates that the effect of keyboard was significant. In addition, we report user preferences, the challenges faced, and qualitative findings relevant to future research in Indic-language text input by visually impaired users.