This paper describes an experiment to investigate the effectiveness of adding sound to progress bars. Progress bars have usability problems because they present temporal information graphically: to keep abreast of this information, users must repeatedly scan the progress bar visually. Adding sound to a progress bar lets users monitor its state without diverting their visual focus. Nonspeech sounds called earcons were used to indicate the current state of the task as well as the completion of the download. Results showed a significant reduction in the time taken to perform the task in the audio condition. Participants were aware of the state of the progress bar without having to remove their visual focus from the foreground task.
Future human-computer interfaces will use more than just graphical output to display information. In this paper we suggest that sound and graphics together can be used to improve interaction. We describe an experiment to improve the usability of standard graphical menus by the addition of sound. One common difficulty is slipping off a menu item by mistake when trying to select it. One of the causes of this is insufficient feedback. We designed and experimentally evaluated a new set of menus with much more salient audio feedback to solve this problem. The results from the experiment showed a significant reduction in the subjective effort required to use the new sonically-enhanced menus along with significantly reduced error recovery times. A significantly larger number of errors were also corrected with sound.
Although most of us communicate using multiple sensory modalities in everyday life, and many of our computers are similarly capable of multimodal interaction, most human-computer interaction remains predominantly visual. This paper describes a toolkit of widgets that can present themselves in multiple modalities and, further, can adapt their presentation to suit the contexts and environments in which they are used. This is of increasing importance as the use of mobile devices becomes ubiquitous.
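The idea of a widget adapting its presentation to context can be sketched as follows. This is a minimal illustration, not the toolkit's real API: the names `Context`, `Button`, and the modality-selection rule are all hypothetical assumptions.

```java
// Hypothetical sketch of a context-adaptive widget.
// All names and the selection rule are illustrative, not the toolkit's API.

enum Modality { VISUAL, AUDIO }

// A functional interface standing in for whatever context sensing
// the real toolkit performs (e.g. detecting a mobile, eyes-busy user).
interface Context {
    boolean userIsMobile();
}

class Button {
    // Present the widget in the modality best suited to the context:
    // an audio presentation (earcon) when the user is mobile,
    // a visual one otherwise.
    String present(Context ctx) {
        Modality m = ctx.userIsMobile() ? Modality.AUDIO : Modality.VISUAL;
        return m == Modality.AUDIO ? "earcon:button" : "draw:button";
    }
}

public class AdaptiveWidgetDemo {
    public static void main(String[] args) {
        Button b = new Button();
        System.out.println(b.present(() -> true));   // mobile context: audio
        System.out.println(b.present(() -> false));  // desktop context: visual
    }
}
```

The design point is that the widget, not the application, owns the modality decision, so applications built on such a toolkit adapt without per-widget special casing.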
This paper describes the evolution of the design and implementation of a distributed run-time system that is itself designed to support the evolution of the topology and implementation of an executing, distributed system. The three versions of the run-time architecture that have been designed and implemented are presented, together with how each addresses the problems of topological and functional evolution. The reasons for the rapid evolution of the design and implementation of the architecture are also described. From the lessons learned both in evolving the design of the architecture and in trying to provide a run-time system that can support run-time evolution, this paper discusses two generally applicable observations: evolution happens all the time, and it is not possible to anticipate how systems will evolve; and the designs of large, run-time systems do not follow a predictable path. In addition, rapid prototyping proved extremely useful in producing the three architectures; this kind of prototyping was made much easier by designing the core set of Java abstractions in terms of interfaces; and building an architecture that allows as many decisions as possible to be made at run-time produced a support system that is more responsive both to the user and to the distributed environment in which it executes.
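The design choice named above, defining the core Java abstractions as interfaces so that implementation decisions can be deferred to run-time, can be sketched roughly as below. The interface and class names here are invented for illustration; the paper's actual abstractions are not shown.

```java
// Hypothetical sketch: core abstractions as interfaces, with the choice of
// concrete implementation deferred to run-time. Names are illustrative only.

interface NodeConnector {
    String connect(String parent);
}

class DirectConnector implements NodeConnector {
    public String connect(String parent) { return "direct:" + parent; }
}

class RelayConnector implements NodeConnector {
    public String connect(String parent) { return "relay:" + parent; }
}

public class RuntimeChoice {
    // The decision of which implementation to use is made at run-time,
    // based on the environment the system finds itself executing in.
    static NodeConnector choose(boolean behindFirewall) {
        return behindFirewall ? new RelayConnector() : new DirectConnector();
    }

    public static void main(String[] args) {
        NodeConnector c = choose(false);
        System.out.println(c.connect("node-1"));
    }
}
```

Because callers depend only on the `NodeConnector` interface, a new implementation can be substituted, or even introduced, without changing or recompiling the code that uses it, which is what makes both rapid prototyping and run-time evolution easier.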
Increasingly, lab evaluations of mobile applications are incorporating mobility. The inclusion of mobility alone, however, is insufficient to generate a realistic evaluation context, since real-life users will typically be required to monitor their environment while moving through it. While field evaluations represent a more realistic evaluation context, such evaluations pose difficulties, including data capture and environmental control, which mean that a lab-based evaluation is often a more practical choice. This paper describes a novel evaluation technique that mimics a realistic mobile usage context in a lab setting. The technique requires that participants monitor their environment and change the route they are walking to avoid dynamically changing hazards (much as real-life users would be required to do). Two studies that employed this technique are described, and the results (which indicate the technique is useful) are discussed.
The development of appropriate lab-based evaluation techniques for mobile technologies requires continued research attention. In particular, experimental design needs to account for the environmental context in which such technologies will ultimately be used. This requires, in part, that relevant environmental distractions be incorporated into evaluations. This chapter reflects on different techniques that were used in three separate lab-based mobile evaluation experiments to present visual distractions to participants and to measure the participants’ cognizance of the distractions during the course of mobile evaluations of technology. The different techniques met the different needs of the three studies with respect to the fidelity of the data captured, the impact of acknowledging distractions on the evaluation task, and the typical context of use for the technology being evaluated. The results of the studies showed that the introduction of visual distractions did have an impact on the experimental task and indicate that future work is required in this area.