Navigating and understanding the source code of a program are highly challenging activities. This paper introduces a fisheye view of source code to a Java programming environment. The fisheye view aims to support a programmer's navigation and understanding by displaying those parts of the source code that have the highest degree of interest given the current focus. An experiment was conducted that compared the usability of the fisheye view with a common, linear presentation of source code. Sixteen participants performed tasks significantly faster with the fisheye view, although results varied depending on the task type. The participants generally preferred the interface with the fisheye view. We analyse participants' interaction with the fisheye view and suggest how to improve its performance. In the calculation of the degree of interest, we suggest emphasizing those parts of the source code that are semantically related to the programmer's current focus.
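As a concrete illustration of such a degree-of-interest calculation, the sketch below combines an a-priori importance per line, the distance from the current focus, and a semantic-relatedness term. The function names, weights, and the semantic term are illustrative assumptions, not the exact formula used in the paper.

```python
# A minimal sketch of a Furnas-style degree-of-interest (DOI) score for
# source-code lines. The weights and the semantic term are illustrative
# assumptions, not the formula used in the paper.

def degree_of_interest(line, focus_line, a_priori, relatedness, w_sem=1.0):
    """Higher DOI = more interesting given the current focus.

    a_priori[i]    -- intrinsic importance of line i (e.g. a method signature
                      scores higher than a deeply nested statement)
    relatedness[i] -- semantic relation of line i to the focus (e.g. shared
                      identifiers or call relations), assumed to be in [0, 1]
    """
    distance = abs(line - focus_line)          # distance from the current focus
    return a_priori[line] - distance + w_sem * relatedness[line]

def fisheye_selection(num_lines, focus_line, a_priori, relatedness, budget):
    """Pick the `budget` lines with the highest DOI to keep visible."""
    ranked = sorted(range(num_lines),
                    key=lambda i: degree_of_interest(i, focus_line,
                                                     a_priori, relatedness),
                    reverse=True)
    return sorted(ranked[:budget])             # restore source order for display
```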
People typically interact with information visualizations using a mouse. Their physical movement, orientation, and distance to visualizations are rarely used as input. We explore how to use such spatial relations among people and visualizations (i.e., proxemics) to drive interaction with visualizations, focusing here on the spatial relations between a single user and visualizations on a large display. We implement interaction techniques that zoom and pan, query and relate, and adapt visualizations based on tracking of users' position in relation to a large high-resolution display. Alternative prototypes are tested in three user studies and compared with baseline conditions that use a mouse. Our aim is to gain empirical data on the usefulness of a range of design possibilities and to generate more ideas. Among other things, the results show promise for changing zoom level or visual representation with the user's physical distance to a large display. We discuss possible benefits and potential issues to avoid when designing information visualizations that use proxemics.
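As one example of a proxemics-driven technique of this kind, the sketch below maps a tracked user's distance from the display to a zoom level, so that stepping closer shows more detail. The distance range and zoom bounds are assumed values, not the parameters used in the studies.

```python
# A minimal sketch of distance-dependent zooming: the user's tracked distance
# from the display is linearly mapped to a zoom level. The distance range and
# zoom bounds are assumptions for illustration, not the studies' parameters.

def distance_to_zoom(distance_m, near=0.5, far=3.0, zoom_near=4.0, zoom_far=1.0):
    """Interpolate zoom between zoom_near (standing close) and zoom_far (far away)."""
    d = min(max(distance_m, near), far)        # clamp to the tracked range
    t = (d - near) / (far - near)              # 0 at near, 1 at far
    return zoom_near + t * (zoom_far - zoom_near)

# Example: a user standing 1.2 m from the display gets roughly 3.2x zoom
print(distance_to_zoom(1.2))
```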
Multitouch wall-sized displays afford new forms of collaboration: They can be used up close by several users simultaneously, offer high resolution, and provide sufficient space for intertwining individual and joint work. How they differ from displays without these capabilities is not well understood. To better understand the collaboration of groups around high-resolution multitouch wall displays, we conducted an exploratory study. Pairs collaborated on a problem-solving task using a 2.8m × 1.2m multitouch display with 24.8 megapixels. The study examines how participants collaborate; navigate relative to the display and to each other; and interact with and share the display. Participants physically navigated among different parts of the display, switched fluidly between parallel and joint work, and shared the display evenly. The results contrast with earlier research that suggests difficulties in sharing and collaborating around wall displays. The study suggests that multitouch wall displays can support different collaboration styles and fluid transitions in group work.
Large, high-resolution displays offer new opportunities for visualizing and interacting with data. However, interaction techniques for such displays mostly support window manipulation and pointing, ignoring many activities involved in data analysis. We report on 11 workshops with data analysts from various fields, including artistic photography, phone log analysis, astrophysics, and health care policy. Analysts were asked to walk through recent tasks using actual data on a large whiteboard, imagining it to be a large display. From the resulting comments and a video analysis of behavior in the workshops, we generate ideas for new interaction techniques for large displays. These ideas include supporting sequences of visualizations with backtracking and fluid exploration of alternatives; using distance to the display to change visualizations; and fixing variables and data sets on the display or relative to the user.
Most text entry methods require users to have physical devices within reach. In many contexts of use, such as around large displays where users need to move freely, device-dependent methods are ill suited. We explore how selection-based text entry methods may be adapted for use in mid-air. Initially, we analyze the design space for text entry in mid-air, focusing on single-character input with one hand. We propose three text entry methods: H4 MidAir (an adaptation of a game controller-based method by MacKenzie et al. [21]), MultiTap (a mid-air variant of a mobile phone text entry method), and Projected QWERTY (a mid-air variant of the QWERTY keyboard). After six sessions, participants reached an average of 13.2 words per minute (WPM) with the most successful method, Projected QWERTY. Users rated this method highest on satisfaction, and it resulted in the least physical movement.
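To illustrate the kind of mapping a method such as Projected QWERTY relies on, the sketch below projects a tracked 3D hand position onto a flat keyboard layout and selects the nearest key. The layout coordinates and the simple nearest-key rule are assumptions for illustration, not the paper's implementation.

```python
# A minimal sketch of projecting a tracked 3D hand position onto a virtual
# QWERTY layout and selecting the nearest key. The layout coordinates and the
# nearest-key rule are assumptions, not the method's actual implementation.

QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_centers(key_size=1.0):
    """Return {char: (x, y)} centers for a staggered QWERTY layout."""
    centers = {}
    for row, chars in enumerate(QWERTY_ROWS):
        for col, ch in enumerate(chars):
            centers[ch] = (col * key_size + row * 0.5 * key_size, row * key_size)
    return centers

def project_to_key(hand_xyz, centers):
    """Drop the depth axis and pick the key whose center is closest."""
    x, y, _z = hand_xyz
    return min(centers, key=lambda ch: (centers[ch][0] - x) ** 2 +
                                       (centers[ch][1] - y) ** 2)

print(project_to_key((3.1, 0.9, 0.4), key_centers()))  # -> 'f'
```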
Word-gesture keyboards enable fast text entry by letting users draw the shape of a word on the input surface. Such keyboards have been used extensively for touch devices, but not in mid-air, even though their fluent gestural input seems well suited for this modality. We present Vulture, a word-gesture keyboard for mid-air operation. Vulture adapts touch-based word-gesture algorithms to work in mid-air, projects users' movement onto the display, and uses pinch as a word delimiter. A first 10-session study suggests text-entry rates of 20.6 Words Per Minute (WPM) and finds hand-movement speed to be the primary predictor of WPM. A second study shows that with training on a few phrases, participants reach 28.1 WPM, 59% of the text-entry rate of direct touch input. Participants' recall of trained gestures in mid-air was low, suggesting that visual feedback is important but also limits performance. Based on data from the studies, we discuss improvements to Vulture and some alternative designs for mid-air text entry.
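To give a sense of how a word-gesture keyboard can decode such input, the sketch below resamples the drawn path to a fixed number of points and compares it against the ideal path through each candidate word's keys, choosing the closest match. The resampling scheme, distance measure, and tiny vocabulary are assumptions, not Vulture's actual recognizer.

```python
# A minimal sketch of word-gesture decoding: resample the drawn path and
# compare it, point by point, with the ideal path through each candidate
# word's key centers. The resampling scheme, distance measure, and the tiny
# vocabulary below are assumptions, not Vulture's actual recognizer.

import math

def resample(path, n=32):
    """Resample a polyline (list of (x, y) points) to n evenly spaced points."""
    if len(path) < 2:
        return [path[0]] * n
    seg = [math.dist(path[i], path[i + 1]) for i in range(len(path) - 1)]
    total = sum(seg) or 1e-9
    step, out, acc, i = total / (n - 1), [path[0]], 0.0, 0
    for k in range(1, n):
        target = k * step
        while i < len(seg) - 1 and acc + seg[i] < target:
            acc += seg[i]
            i += 1
        t = (target - acc) / (seg[i] or 1e-9)
        (x0, y0), (x1, y1) = path[i], path[i + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def path_distance(a, b):
    """Mean point-to-point distance between two equal-length paths."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def decode(drawn_path, vocabulary, centers):
    """Return the word whose ideal key path best matches the drawn gesture."""
    drawn = resample(drawn_path)
    return min(vocabulary,
               key=lambda w: path_distance(drawn, resample([centers[c] for c in w])))

# Tiny example with assumed key coordinates: the gesture ends near 't', so 'cat' wins
keys = {"c": (3.5, 2.0), "a": (0.5, 1.0), "t": (4.0, 0.0), "n": (6.5, 2.0)}
print(decode([(3.5, 2.0), (2.0, 1.5), (0.5, 1.0), (2.0, 0.5), (4.0, 0.0)],
             ["cat", "can"], keys))
```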