Despite typically receiving little emphasis in visualization research, interaction in visualization is the catalyst for the user's dialogue with the data and, ultimately, for the user's actual understanding of and insight into this data. There are many possible reasons for this skewed balance between the visual and interactive aspects of a visualization. One reason is that interaction is an intangible concept that is difficult to design, quantify, and evaluate. Unlike for visual design, there are few examples that show visualization practitioners and researchers how to best design the interaction for a new visualization. In this paper, we attempt to address this issue by collecting examples of visualizations with "best-in-class" interaction and using them to extract practical design guidelines for future designers and researchers. We call this concept fluid interaction, and we propose an operational definition in terms of the direct manipulation and embodied interaction paradigms, the psychological concept of "flow", and Norman's gulfs of execution and evaluation.
We present HuddleLamp, a desk lamp with an integrated RGB-D camera that precisely tracks the movements and positions of mobile displays and hands on a table. This enables a new breed of spatially-aware multi-user and multi-device applications for around-the-table collaboration without an interactive tabletop. At any time, users can add or remove displays and reconfigure them in space in an ad-hoc manner, without installing any software or attaching markers. Additionally, hands are tracked to detect interactions above and between displays, enabling fluent cross-device interactions. We contribute a novel hybrid sensing approach that uses RGB and depth data to increase tracking quality, together with a technical evaluation of its capabilities and limitations. To enable installation-free ad-hoc collaboration, we also introduce a web-based architecture and JavaScript API for future HuddleLamp applications. Finally, we demonstrate the resulting design space using five examples of cross-device interaction techniques.
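The web-based architecture suggests an event-driven client model: the camera back end detects displays entering, moving on, and leaving the table, and web clients subscribe to these changes. As a purely illustrative sketch (all names such as `DeviceRegistry`, `deviceadded`, and `devicemoved` are our own assumptions, not the actual HuddleLamp API), a minimal spatially-aware device registry might look like this:

```javascript
// Minimal, self-contained sketch of a spatially-aware device registry.
// All names here are illustrative assumptions, not the real HuddleLamp API.
class DeviceRegistry {
  constructor() {
    this.devices = new Map(); // deviceId -> { x, y, angle } in table coordinates
    this.listeners = { deviceadded: [], devicemoved: [], deviceremoved: [] };
  }
  on(event, fn) { this.listeners[event].push(fn); }
  emit(event, payload) { this.listeners[event].forEach(fn => fn(payload)); }
  // Called by the tracking back end whenever the camera sees a display.
  update(id, pose) {
    const known = this.devices.has(id);
    this.devices.set(id, pose);
    this.emit(known ? 'devicemoved' : 'deviceadded', { id, ...pose });
  }
  // Called when a display leaves the camera's view.
  remove(id) {
    if (this.devices.delete(id)) this.emit('deviceremoved', { id });
  }
}

// Usage: displays join and leave ad hoc; clients only react to events.
const registry = new DeviceRegistry();
registry.on('deviceadded', d => console.log(`device ${d.id} joined at (${d.x}, ${d.y})`));
registry.update('tablet-1', { x: 120, y: 80, angle: 0 });
registry.update('tablet-1', { x: 130, y: 80, angle: 5 }); // fires 'devicemoved'
registry.remove('tablet-1');
```

Keeping all state on the server side of such a registry is what makes the approach installation-free: a browser tab is the only client software a joining device needs.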
We introduce Blended Interaction, a new conceptual framework that helps to explain when users perceive user interfaces as "natural" or not. Based on recent findings from embodied cognition and cognitive linguistics, Blended Interaction provides a novel and more accurate description of the nature of human-computer interaction (HCI). In particular, it introduces the notion of conceptual blends to explain how users rely on familiar and real-world concepts whenever they learn to use new digital technologies. We apply Blended Interaction in the context of post-"Windows, Icons, Menus, Pointer" interactive spaces. These spaces are ubiquitous computing environments for computer-supported collaboration of multiple users in a physical space or room, e.g., meeting rooms, design studios, or libraries, augmented with novel interactive technologies and digital computation, e.g., multi-touch walls, tabletops, and tablets. Ideally, in these spaces, the virtues of the familiar physical and social world are combined with those of the digital realm in a considered manner, so that desired properties of each are preserved and a seemingly "natural" HCI is achieved. To support designers in this goal, we explain how the users' conceptual systems use blends to tie together familiar concepts with the novel powers of digital computation. Furthermore, we introduce four domains of design to structure the underlying problem and design space: individual interaction, social interaction, workflow, and physical environment. We introduce our framework by discussing related work, e.g., metaphors, mental models, direct manipulation, image schemas, and reality-based interaction, and illustrate Blended Interaction using design decisions we made in recent projects.
We present a proof-of-concept of a mobile navigational aid that uses the Microsoft Kinect and optical marker tracking to help visually impaired people find their way inside buildings. The system is the result of a student project and is entirely based on low-cost hardware and software. It provides continuous vibrotactile feedback at the person's waist to give an impression of the environment and to warn about obstacles. Furthermore, optical markers can be used to tag points of interest within the building, enabling synthesized voice instructions for point-to-point navigation.
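The core idea of continuous obstacle feedback can be illustrated with a toy mapping from depth data to vibration intensity (our own assumption for illustration, not the project's actual code): the nearest obstacle in a depth frame drives the motor, with closer obstacles producing stronger vibration.

```javascript
// Toy illustration (an assumption, not the project's code): map the nearest
// obstacle in a Kinect-style depth frame (millimeters, 0 = invalid pixel)
// to a vibration intensity in [0, 1].
function vibrationIntensity(depthsMm, minRange = 500, maxRange = 3000) {
  const valid = depthsMm.filter(d => d > 0);       // drop invalid pixels
  if (valid.length === 0) return 0;                // nothing measured
  const nearest = Math.min(...valid);
  if (nearest >= maxRange) return 0;               // no obstacle nearby
  const clamped = Math.max(nearest, minRange);
  // Linear ramp: 1.0 at minRange (or closer), 0.0 at maxRange.
  return (maxRange - clamped) / (maxRange - minRange);
}

console.log(vibrationIntensity([0, 3200, 4000])); // 0 (all obstacles far away)
console.log(vibrationIntensity([500, 1200]));     // 1 (obstacle at minimum range)
```

In a real system this would run per body-centered sector (left, center, right) so that each vibration motor on the waist belt encodes the obstacle distance in its own direction.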
Recent findings from embodied cognition reveal strong effects of arm and hand movement on spatial memory. This suggests that input devices may have a far greater influence on users' cognition and users' ability to master a system than we typically believe, especially for spatial panning or zooming & panning user interfaces. We conducted two experiments to observe whether multi-touch instead of mouse input improves users' spatial memory and navigation performance for such UIs. We observed increased performance for panning UIs but not for zooming & panning UIs. We present our results, provide initial explanations, and discuss opportunities and pitfalls for interaction designers.
We present two experiments examining the impact of navigation techniques on users' navigation performance and spatial memory in a zoomable user interface (ZUI). The first experiment with 24 participants compared the effect of egocentric body movements with traditional multi-touch navigation. The results indicate a 47% decrease in path lengths and a 34% decrease in task time in favor of egocentric navigation, but no significant effect on users' spatial memory immediately after a navigation task. However, an additional second experiment with 8 participants revealed a significant effect on long-term spatial memory: the results of a recall task administered after a 15-minute distractor task indicate a significant advantage of 27% in spatial memory for egocentric body movements. Furthermore, a questionnaire about the subjects' workload revealed that the physical demand of egocentric navigation was significantly higher, but the mental demand was lower.
Cross-device interaction between multiple mobile devices is a popular field of research in HCI. However, the appropriate design of this interaction is still an open question, with competing approaches such as spatially-aware vs. spatially-agnostic techniques. In this paper, we present the results of a two-phase user study that explores this design space: In phase 1, we elicited gestures for typical mobile cross-device tasks from 4 focus groups (N=17). The results show that 71% of the elicited gestures were spatially-aware and that participants strongly associated cross-device tasks with interacting and thinking in space. In phase 2, we implemented one spatially-agnostic and two spatially-aware techniques from phase 1 and compared them in a controlled experiment (N=12). The results indicate that spatially-aware techniques are preferred by users and can decrease mental demand, effort, and frustration, but only when they are designed with great care. We conclude with a summary of findings to inform the design of future cross-device interactions.
Purpose: Big Data introduces large amounts and new forms of structured, unstructured, and semi-structured data into the field of accounting, and this requires alternative data management and reporting methods. Generating insights from these new data sources highlights the need for different, interactive forms of visualization from the field of visual analytics. Nonetheless, a considerable gap between the recommendations in research and the current usage in practice is evident. In order to understand and overcome this gap, a detailed analysis of the status quo as well as the identification of potential barriers to adoption is vital. The paper aims to discuss this issue.
Design/methodology/approach: A survey of 145 business accountants from Austrian companies across a wide array of business sectors and all hierarchy levels was conducted. The survey is targeted toward the purpose of this study: identifying barriers, clustered as human-related and technology-related, as well as investigating current practice with respect to interactive visualization use for Big Data.
Findings: The lack of knowledge and experience regarding new visualization types and interaction techniques, and the sole focus on Microsoft Excel as a visualization tool, can be identified as the main barriers, while the use of multiple data sources and the gradual implementation of further software tools are the first drivers of adoption.
Research limitations/implications: Because the data were collected with a standardized survey, there was no possibility of dealing with participants individually, which could lead to a misinterpretation of the given answers. Further, the sample population is Austrian, which might limit the generalizability of the results to other geographical or cultural contexts.
Practical implications: The study shows that those knowledgeable about and familiar with interactive Big Data visualizations indicate high perceived ease of use. It is, therefore, necessary to offer sufficient training as well as user-centered visualizations and technological support to further increase usage within the accounting profession.
Originality/value: A lot of research has been dedicated to the introduction of novel forms of interactive visualization. However, little focus has been laid on the impact of these new tools for Big Data from a practitioner's perspective and on practitioners' needs.