Audio CAPTCHAs were introduced as an accessible alternative for those unable to use the more common visual CAPTCHAs, but anecdotal accounts have suggested that they may be more difficult to solve. This paper demonstrates in a large study of more than 150 participants that existing audio CAPTCHAs are clearly more difficult and time-consuming to complete than visual CAPTCHAs for both blind and sighted users. To address this concern, we developed and evaluated a new interface for solving CAPTCHAs, optimized for non-visual use, that can be added in place to existing audio CAPTCHAs. In a subsequent study, the optimized interface increased the success rate of blind participants on audio CAPTCHAs by 59%, illustrating a broadly applicable principle of accessible design: the most usable audio interfaces are often not direct translations of existing visual interfaces.
For Deaf people, access to the mobile telephone network in the United States is currently limited to text messaging, forcing communication in English as opposed to American Sign Language (ASL), the preferred language. Because ASL is a visual language, mobile video phones have the potential to give Deaf people access to real-time mobile communication in their preferred language. However, even today's best video compression techniques cannot yield intelligible ASL at limited cell phone network bandwidths. Motivated by this constraint, we conducted one focus group and two user studies with members of the Deaf Community to determine the intelligibility effects of video compression techniques that exploit the visual nature of sign language. Inspired by eye tracking results showing that high-resolution foveal vision is maintained around the face, we studied region-of-interest encodings (where the face is encoded at higher quality) as well as reduced frame rates (where fewer, better-quality frames are displayed every second). At all bit rates studied here, participants preferred moderate quality increases in the face region, sacrificing quality in other regions. They also preferred slightly lower frame rates because they yield better-quality frames for a fixed bit rate. The limited processing power of cell phones is a serious concern because a real-time video encoder and decoder will be needed. Choosing less complex settings for the encoder can reduce encoding time, but will affect video quality. We studied the intelligibility effects of this tradeoff and found that we can significantly speed up encoding time without severely affecting intelligibility. These results show promise for real-time access to the current low-bandwidth cell phone network through sign-language-specific encoding techniques.
For many millions of users, 3D virtual worlds provide an engaging, immersive experience heightened by a synergistic combination of visual realism with dynamic control of the user's movement within the virtual world. For individuals with visual or dexterity impairments, however, one or both of those synergistic elements are impacted, reducing the usability and therefore the utility of the 3D virtual world. This article considers what features are necessary to make virtual worlds usable by such individuals. Empirical work has been based on a multiplayer 3D virtual world game called PowerUp, into which we have built an extensive set of accessibility features. These features include in-world navigation and orientation tools, font customization, self-voicing text-to-speech output, key remapping options, and keyboard-only and mouse-only navigation. Through empirical work with legally blind teenagers and adults with cerebral palsy, these features have been refined and validated. Whereas accessibility support for users with visual impairment often revolves around keyboard navigation, these studies emphasized the need to support visual aspects of pointing device actions too. Other notable findings include the use of speech to supplement sound effects for novice users, and, for those with cerebral palsy, a general preference for using a pointing device to look around the world, rather than keys or on-screen buttons. The PowerUp accessibility features provide a core level of accessibility for the user groups studied.
The MobileASL project aims to increase accessibility by enabling Deaf people to communicate over video cell phones in their native language, American Sign Language (ASL). Real-time video over cell phones can be a computationally intensive task that quickly drains the battery, rendering the cell phone useless. Properties of conversational sign language allow us to save power and bits: namely, lower frame rates are possible when one person is not signing due to turn-taking, and signing can potentially employ a lower frame rate than fingerspelling. We conducted a user study with native signers to examine the intelligibility of varying the frame rate based on activity in the video. We then describe several methods for automatically determining, in real time, whether the video stream shows signing or not signing. Our results show that varying the frame rate during turn-taking is a good way to save power without sacrificing intelligibility, and that automatic activity analysis is feasible.
Welcome to the new look of the online edition of the SIGACCESS Newsletter: a new layout, the use of a larger sans-serif font throughout, left justification, and the inclusion of authors' short biographies and photographs (so that you can say hi when you meet them at meetings and conferences). Following the tradition of including a variety of work from around the world, this issue encompasses a variety of topics, from a report from Italy on improving the accessibility of a question-answering system to an investigation of the problems older Malaysians face when using mobile phones. This issue also includes a report on the International Cross-Disciplinary Conference on Web Accessibility 2008 and an article from the USA that outlines writing guidelines for authors writing about technology for people with disabilities, focusing on currently accepted terminology.