With content rapidly moving to the electronic space, access to graphics for individuals with visual impairments is a growing concern. Recent research has demonstrated the potential for representing basic graphical content on touchscreens using vibrations and sounds, yet few guidelines or processes exist to guide the design of multimodal, touchscreen-based graphics. In this work, we seek to address this gap by synthesizing our collective research efforts over the past eight years and distilling our findings into a compilation of recommendations, which we validate through an iterative design process and user study. We start by reviewing previous work and then collate findings into a set of design guidelines for generating basic elements of touchscreen-based multimodal graphics. We then use these guidelines to generate exemplary graphics in mathematics, specifically bar charts and geometry concepts. We discuss the iterative design process of moving from guidelines to actual graphics and highlight its challenges. We then present a formal user study with 22 participants with visual impairments, comparing learning performance with touchscreen-rendered graphics to that with embossed graphics. We conclude with qualitative feedback from participants on the touchscreen-based approach and identify areas of future investigation as these recommendations are expanded to include more complex graphical concepts.
Graphical access is one of the most pressing challenges for individuals who are blind or visually impaired. This chapter discusses some of the factors underlying the graphics access challenge, reviews prior approaches to addressing this longstanding information access barrier, and describes some promising new solutions. We specifically focus on touchscreen-based smart devices, a relatively new class of information access technologies, which our group believes represent an exemplary model of user-centered, needs-based design. We highlight both the challenges and the vast potential of these technologies for alleviating the graphics accessibility gap and share the latest results in this line of research. We close by recommending shifts in mindset about how we approach this vexing access problem, shifts that will complement the technological and perceptual advancements rapidly being uncovered by a growing research community in this domain.
Vibration plays a significant role in the way users interact with touchscreens. For many users, vibration affords tactile alerts and other enhancements. For eyes-free users and users with visual impairments, vibration can also serve a more primary role in the user interface, such as indicating streets on maps, conveying information about graphs, or even specifying basic graphics. However, vibration is rarely used in current user interfaces beyond basic cuing. Furthermore, designers and developers who do use vibration more extensively are often unable to determine the exact properties of the vibration signals they are implementing, owing to out-of-the-box software and hardware limitations. We make two contributions in this work. First, we investigate the contextual properties of touchscreen vibrations and how vibrations can effectively convey traditional embossed elements, such as dashes and dots. To do so, we developed an open-source, Android-based library that generates vibrations which are perceptually salient and intuitive, improving upon existing vibration libraries. Second, we conducted a user study with 26 blind or visually impaired users to evaluate and categorize these effects with respect to traditional tactile line profiles. We have established a range of vibration effects that can be reliably generated by our haptic library and are both perceptible and distinguishable by users.
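As a rough illustration of what such a library does, the sketch below (not the authors' actual library) uses the stock Android vibration API to render dot- and dash-like line profiles. All timing and amplitude values here are illustrative assumptions, not the perceptually validated parameters from the study.

```kotlin
import android.content.Context
import android.os.Build
import android.os.VibrationEffect
import android.os.Vibrator

// Minimal sketch: distinct "line profile" effects, analogous to embossed
// dashes and dots, generated with the stock Android vibration API.
// Timings and amplitudes are illustrative assumptions, not study parameters.
fun playLineProfile(context: Context, profile: String) {
    val vibrator = context.getSystemService(Context.VIBRATOR_SERVICE) as Vibrator
    // Waveform timings alternate off/on durations in milliseconds;
    // amplitudes range 1..255, with 0 marking the off segments.
    val (timings, amplitudes) = when (profile) {
        // Short pulses with long gaps approximate a dotted line.
        "dots"   -> longArrayOf(0, 40, 120, 40, 120, 40) to intArrayOf(0, 255, 0, 255, 0, 255)
        // Longer pulses with short gaps approximate a dashed line.
        "dashes" -> longArrayOf(0, 150, 60, 150, 60, 150) to intArrayOf(0, 255, 0, 255, 0, 255)
        // A continuous buzz approximates a solid line.
        else     -> longArrayOf(0, 500) to intArrayOf(0, 200)
    }
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        vibrator.vibrate(VibrationEffect.createWaveform(timings, amplitudes, /* repeat = */ -1))
    } else {
        @Suppress("DEPRECATION")
        vibrator.vibrate(timings, -1)  // Pre-O devices fall back to binary on/off patterns.
    }
}
```

One motivation for a dedicated library, as the abstract notes, is that the exact output of such calls varies with the device's hardware actuator, so perceptually equivalent profiles may require per-device calibration.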
While text-to-speech software has largely made textual information accessible in the digital space, analogous access to graphics remains an unsolved problem. Because of their portability and ubiquity, several studies have pointed to touchscreens as a potential platform for such access, yet there is still a gap in our understanding of multimodal information transfer in the context of graphics. The current research demonstrates the feasibility of following lines, a fundamental graphical concept, via vibrations and sounds on commercial touchscreens. Two studies were run with 21 blind and visually impaired participants (N = 12 and N = 9). The first study examined the presentation of straight lines using several line representations: vibration-only, auditory-only, vibration lines with auditory borders, and auditory lines with vibration borders. The results demonstrated that both auditory- and vibration-bordered lines were optimal for precise tracing, although vibration-only and auditory-only lines were also sufficient for following, with minimal deviation. The second study examined the presentation of curving, non-linear lines; conditions differed in the number of auditory reference points presented at inflection and deflection points. Participants showed minimal deviation from the lines during tracing, performing nearly equally in the 1-point and 3-point conditions. From these studies, we demonstrate that line following via multimodal feedback is possible on touchscreens, and we present guidelines for the presentation of such non-visual graphical concepts.
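The bordered-line conditions imply a simple feedback-zoning scheme: the finger's perpendicular distance from the line selects whether the line-body channel (e.g., vibration) or the border channel (e.g., a tone) fires. The sketch below is a hypothetical reconstruction of that logic; the names, band widths, and thresholds are our assumptions, not values from the paper.

```kotlin
import kotlin.math.abs
import kotlin.math.hypot

// Hypothetical feedback zones for a bordered line, per the assumptions above.
enum class Feedback { LINE_BODY, BORDER, NONE }

// Perpendicular distance from a touch point (px, py) to the infinite line
// through (x1, y1) and (x2, y2).
fun distanceToLine(px: Float, py: Float, x1: Float, y1: Float, x2: Float, y2: Float): Float {
    val len = hypot(x2 - x1, y2 - y1)
    if (len == 0f) return hypot(px - x1, py - y1)  // Degenerate line: distance to the point.
    return abs((x2 - x1) * (y1 - py) - (x1 - px) * (y2 - y1)) / len
}

// Map the finger's distance from the line to a feedback zone. The default
// widths (in pixels) are illustrative only: inside the body one channel
// fires continuously, and crossing into the border switches to the other
// channel, warning the user before the finger slides off the line entirely.
fun feedbackFor(distance: Float, lineHalfWidth: Float = 20f, borderWidth: Float = 15f): Feedback =
    when {
        distance <= lineHalfWidth -> Feedback.LINE_BODY
        distance <= lineHalfWidth + borderWidth -> Feedback.BORDER
        else -> Feedback.NONE
    }
```

On this reading, the study's finding that bordered lines support the most precise tracing corresponds to the border zone acting as an early warning band around the line body.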