Web-2.0 applications turn static Web documents into dynamic user interfaces. They epitomize the final realization of the vision "The Document Is The Interface!". This transition from static Web pages to interactive Web applications also requires a fresh set of innovations in how such applications are accessed in conjunction with adaptive technologies. Asynchronous JavaScript and XML (AJAX) breathes life into static Web pages; ARIA live regions bring that interaction to life when used in conjunction with adaptive technologies such as screen readers and self-voicing browsers. This paper introduces the motivation behind live regions in ARIA, and describes how this support can be used to enhance the user interaction provided by Google Talk, an instant-messaging client that is integrated into the GMail Web interface. We describe the interaction model as it is surfaced to the end-user, and show how the introduction of live regions makes all aspects of the resulting UI usable with adaptive technologies. Web-2.0 applications, especially mashups, excel at creating end-user solutions that are greater than the sum of their individual building blocks. We demonstrate this by bringing together Google Talk, live regions, and natural-language translation to build a multilingual talking translation interface, the result of speech-enabling these applications with the Google AxsJAX framework.
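As a concrete illustration (not drawn from the paper itself), a minimal ARIA live region is an ordinary HTML element carrying the `aria-live` attribute: when a script updates its content, the adaptive technology announces the change without the user moving focus. The sketch below, with hypothetical element IDs and message text, shows the pattern a chat client such as Google Talk could use:

```html
<!-- A chat transcript that a screen reader announces as messages arrive.
     aria-live="polite" waits for the user's current speech to finish;
     "assertive" would interrupt immediately. -->
<div id="chat-log" aria-live="polite" aria-atomic="false"></div>

<script>
  // Appending a child node to the live region is enough to trigger
  // an announcement by the adaptive technology.
  function appendMessage(text) {
    var line = document.createElement("div");
    line.textContent = text;
    document.getElementById("chat-log").appendChild(line);
  }
  appendMessage("Alice: Hello!");
</script>
```

Here `aria-atomic` controls whether the whole region or only the changed node is re-announced, and `aria-relevant` (additions, removals, text) further filters which mutations are spoken.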
Screen-readers (computer software that enables a visually impaired user to read the contents of a visual display) have been available for more than a decade. Screen-readers are separate from the user application. Consequently, they have little or no contextual information about the contents of the display. The author has used traditional screen-reading applications for the last five years. The design of the speech-enabling approach described here has been implemented in Emacspeak to overcome many of the shortcomings he has encountered with traditional screen-readers. The approach used by Emacspeak is very different from that of traditional screen-readers. Screen-readers allow the user to listen to the contents appearing in different parts of the display; but the user is entirely responsible for building a mental model of the visual display in order to interpret what an application is trying to convey. Emacspeak, on the other hand, does not speak the screen. Instead, applications provide both visual and speech feedback, and the speech feedback is designed to be sufficient by itself. This approach reduces cognitive load on the user and is relevant to providing general spoken access to information. Producing spoken output from within the application, rather than speaking the visually displayed information, vastly improves the quality of the spoken feedback. Thus, an application can display its results in a visually pleasing manner; the speech-enabling component renders the same in an aurally pleasing way.