Very-high-resolution wall-sized displays offer new opportunities for interacting with large data sets. While pointing on this type of display has been studied extensively, higher-level, more complex tasks such as pan-zoom navigation have received little attention. It thus remains unclear which techniques are best suited to multiscale navigation in these environments. Building upon empirical data from studies of pan-and-zoom on desktop computers and of remote pointing, we identified three key factors for the design of mid-air pan-and-zoom techniques: unimanual vs. bimanual interaction, linear vs. circular movements, and level of guidance for accomplishing the gestures in mid-air. After an extensive phase of iterative design and pilot testing, we ran a controlled experiment aimed at better understanding the influence of these factors on task performance. All three factors had significant effects: bimanual interaction, linear gestures, and a high level of guidance significantly improved performance. Moreover, the interaction effects among some of the factors suggest promising combinations for more complex, real-world tasks.
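To make the bimanual/linear design point concrete, here is a minimal sketch of how the distance between two tracked hands could drive a zoom factor. The function name, reference distance, and sensitivity constant are illustrative assumptions, not the techniques evaluated in the paper:

```python
import math

def zoom_factor(hand_distance_m: float, ref_distance_m: float = 0.3,
                sensitivity: float = 2.0) -> float:
    # Spreading the hands beyond the reference distance zooms in;
    # bringing them closer zooms out. exp() keeps zooming multiplicative,
    # so equal hand movements produce equal scale ratios at any zoom level.
    return math.exp(sensitivity * (hand_distance_m - ref_distance_m))
```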
Rich interaction with high-resolution wall displays is not limited to remotely pointing at targets. Other relevant types of interaction include virtual navigation, text entry, and direct manipulation of control widgets. However, most work on remotely acquiring targets with high precision has studied pointing in isolation, focusing on pointing efficiency and ignoring the need to support these other types of interaction. We investigate high-precision pointing techniques capable of acquiring targets as small as 4 millimeters on a 5.5-meter-wide display while leaving up to 93% of a typical tablet device's screen space available for task-specific widgets. We compare these techniques with state-of-the-art distant pointing techniques and show that two of them, a purely relative technique and one that uses head orientation, perform as well as or better than the best pointing-only input techniques while using a fraction of the interaction resources.
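A minimal sketch of the relative-pointing idea, assuming a constant control-display gain applied to finger displacements measured on a small tablet pad; the gain value and the wall height are placeholders (only the 5.5 m width comes from the abstract):

```python
def update_wall_cursor(cursor, delta, cd_gain=8.0, wall_mm=(5500.0, 1800.0)):
    # cursor and delta are (x, y) in millimetres. The finger displacement
    # on the tablet pad is scaled by a constant gain and the result is
    # clamped to the wall surface. Lifting the finger (clutching) moves
    # nothing, which is why a small pad area suffices for a huge display.
    x = min(max(cursor[0] + cd_gain * delta[0], 0.0), wall_mm[0])
    y = min(max(cursor[1] + cd_gain * delta[1], 0.0), wall_mm[1])
    return (x, y)
```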
The advent of ultra-high-resolution wall-size displays and their use for complex tasks require a more systematic analysis and deeper understanding of their advantages and drawbacks compared with desktop monitors. While previous work has mostly addressed search, visualization, and sense-making tasks, we designed an abstract classification task that involves explicit data manipulation. Based on our observations of real uses of a wall display, this task is representative of a large category of applications. We report on a controlled experiment that uses this task to compare physical navigation in front of a wall-size display with virtual navigation using pan-and-zoom on the desktop. Our main finding is a robust interaction effect between display type and task difficulty: while the desktop can be faster than the wall for simple tasks, the wall gains a sizable advantage as the task becomes more difficult. A follow-up study shows that other desktop techniques (overview+detail, lens) do not perform better than pan-and-zoom and are therefore also slower than the wall for difficult tasks.
The WILD room (wall-sized interaction with large datasets) serves as a testbed for exploring the next generation of interactive systems by distributing interaction across diverse computing devices, enabling multiple users to easily and seamlessly create, share, and manipulate digital content.
Ultra-high-resolution wall-sized displays ("ultra-walls") are effective for presenting large datasets, but their size and resolution make traditional pointing techniques inadequate for precision pointing. We study mid-air pointing techniques that can be combined with other, domain-specific interactions. We first explore the limits of existing single-mode remote pointing techniques and demonstrate theoretically that they do not support high-precision pointing on ultra-walls. We then explore solutions to improve mid-air pointing efficiency: a tunable acceleration function and a framework for dual-precision techniques, both with precise tuning guidelines. We designed novel pointing techniques following these guidelines, several of which outperform existing techniques in controlled experiments that involve pointing difficulties never tested prior to this work. We discuss the strengths and weaknesses of our techniques to help interaction designers choose the best technique according to the task and equipment at hand. Finally, we discuss the cognitive mechanisms that affect pointing performance with these techniques.
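The abstract does not reproduce the paper's acceleration function; a common shape for such tunable functions is a sigmoid of input speed, which the following sketch illustrates with placeholder constants (none of them are the paper's tuned values):

```python
import math

def cd_gain(hand_speed: float, g_min: float = 1.0, g_max: float = 60.0,
            v_mid: float = 0.25, k: float = 20.0) -> float:
    # Control-display gain rises smoothly (sigmoid) with hand speed (m/s):
    # slow, careful movements get near-unit gain for millimetre precision,
    # while fast sweeps get a high gain that crosses the whole wall in
    # a single gesture.
    return g_min + (g_max - g_min) / (1.0 + math.exp(-k * (hand_speed - v_mid)))
```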
Focus+context interfaces provide in-place magnification of a region of the display, smoothly integrating the focus of attention into its surroundings. Two representations of the data exist simultaneously at two different scales, providing an alternative to classical pan & zoom for navigating multiscale interfaces. For many practical applications, however, the magnification range of focus+context techniques is too limited. This paper addresses this limitation by exploring the quantization problem: the mismatch between visual and motor precision in the magnified region. We introduce three new interaction techniques that solve this problem by integrating fast navigation and high-precision interaction in the magnified region. Speed couples precision to navigation speed. Key and Ring use a discrete switch between precision levels, the former via a keyboard modifier, the latter by decoupling the cursor from the lens' center. We report on three experiments showing that our techniques make interacting with lenses easier while increasing the range of practical magnification factors, and that performance can be further improved by integrating speed-dependent visual behaviors.
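A sketch of the discrete precision switch underlying a Key-style technique; the state keys and default magnification are illustrative assumptions, not the paper's API:

```python
def apply_motion(state, dx, dy, magnification=8.0, precise=False):
    # Without the modifier, motor motion pans the lens at 1:1 scale.
    # With the modifier held, the same motion drives the in-focus cursor
    # at 1/magnification, so motor precision matches the magnified
    # visual precision and the quantization mismatch disappears.
    if precise:
        state["cursor_x"] += dx / magnification
        state["cursor_y"] += dy / magnification
    else:
        state["lens_x"] += dx
        state["lens_y"] += dy
    return state
```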
Targets of only a few pixels are notoriously difficult to acquire. Despite many attempts at facilitating pointing, the reasons for this difficulty are poorly understood. We confirm a strong departure from Fitts' Law for small target acquisition using a mouse and investigate three potential sources of problems: motor accuracy, legibility, and quantization. We find that quantization is not a problem, but both motor and visual sizes are limiting factors. This suggests that small targets should be magnified in both motor and visual space to facilitate pointing. Since performance degrades exponentially as targets get very small, we further advocate the exploration of uniform, target-agnostic magnification strategies. We also confirm Welford's 1969 proposal that motor inaccuracy can be modeled by subtracting a "tremor constant" from target size. We argue for the adoption of this model, rather than Fitts' law, when reflecting on small target acquisition.
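In Fitts'-law terms, Welford's correction amounts to subtracting a small constant from the target width in the index of difficulty. A sketch of the resulting model, using the Shannon formulation with the usual symbols (movement time MT, constants a and b, distance D, width W) and ε for the tremor constant:

```latex
% Shannon form of Fitts' law with Welford's tremor correction:
MT = a + b \log_2\!\left(\frac{D}{W - \varepsilon} + 1\right)
```

As W approaches ε, the predicted difficulty grows without bound, which is consistent with the exponential degradation for very small targets reported above.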