The need for security features to stop spam and bots has prompted research aimed at developing human interaction proofs (HIPs) that are both secure and easy to use. The primarily visual techniques used in these HIP tools present difficulties for users with visual impairments. This article reports on the development of Human-Interaction Proof, Universally Usable (HIPUU), a new approach to human-interaction proofs based on identification of a series of sound/image pairs. Simultaneous presentation of a single, unified task in two alternative modalities provides multiple paths to successful task completion. We present two alternative task completion strategies, based on differing input methods (menu-based vs. free text entry). Empirical results from studies involving both blind and sighted users validate the usability and accessibility of these differing strategies, with blind users achieving successful task completion rates above 90%. The strengths of the alternate task completion strategies are discussed, along with possible approaches for improving the robustness of HIPUU.
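The core idea above, that a single challenge is presented simultaneously as an image and as a sound identifying the same concept, and that the user may answer either by selecting from a menu or by typing free text, can be illustrated with a minimal sketch. All names, file paths, and the concept list below are assumptions for illustration, not the authors' implementation:

```python
import random

# Hypothetical challenge bank: each concept is represented in two
# modalities (an image and a sound), so either one suffices to answer.
CHALLENGES = {
    "dog":   {"image": "dog.png",   "audio": "bark.wav"},
    "piano": {"image": "piano.png", "audio": "piano.wav"},
    "train": {"image": "train.png", "audio": "train.wav"},
}

def new_challenge():
    """Pick a concept and return both modalities for the same task."""
    answer = random.choice(list(CHALLENGES))
    media = CHALLENGES[answer]
    return answer, media["image"], media["audio"]

def check_menu(answer, selection):
    """Menu-based entry: the user selects one of the known labels."""
    return selection == answer

def check_free_text(answer, text):
    """Free text entry: tolerate case and surrounding whitespace."""
    return text.strip().lower() == answer
```

Menu-based entry avoids spelling errors but constrains the answer space, while free text entry is more flexible at the cost of needing lenient matching; the two verification functions above make that trade-off concrete.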
Despite growing interest in designing usable systems for managing privacy and security, recent efforts have generally failed to address the needs of users with disabilities. Because security and privacy tools often rely upon subtle visual cues or other potentially inaccessible indicators, users with perceptual limitations may find such tools particularly challenging. To understand the needs of an important group of users with disabilities, a focus group was conducted with blind users to determine their perceptions of security-related challenges. Human-interaction proof (HIP) tools, commonly known as CAPTCHAs, are used by websites to defeat robots and were identified in the focus group as a major concern. Therefore, a usability test was conducted to see how well blind users were able to use audio equivalents of these graphical tools. Finally, an accessible HIP tool was developed which combines audio with matching images, supporting both visual and audio output. Encouraging results from a small usability evaluation of the prototype with five sighted users and five blind users show that this new form of HIP is preferred by both blind and sighted users to previous forms of text-based HIPs. Future directions for research are also discussed.