Figure 1: A protein-protein interaction dataset (100,000 nodes and 1,000,000 edges) visualized using ZAME at two different levels of zoom.

ABSTRACT
We present the Zoomable Adjacency Matrix Explorer (ZAME), a visualization tool for exploring graphs at a scale of millions of nodes and edges. ZAME is based on an adjacency matrix graph representation aggregated at multiple scales. It allows analysts to explore a graph at many levels, zooming and panning with interactive performance from an overview to the most detailed views. Several components work together in ZAME to make this possible: efficient matrix ordering algorithms group related elements; individual data cases are aggregated into higher-order meta-representations; and aggregates are arranged into a pyramid hierarchy that allows on-demand paging to GPU shader programs to support smooth multiscale browsing. Using ZAME, we are able to explore the entire French Wikipedia (over 500,000 articles and 6,000,000 links) with interactive performance on standard consumer-level computer hardware.
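The aggregation pyramid described above can be sketched as follows. This is a minimal illustration under simplifying assumptions (a dense NumPy matrix, plain 2×2 block summation, and a made-up function name); ZAME's actual implementation supports richer per-cell aggregates and sparse data.

```python
import numpy as np

def build_pyramid(adj, levels):
    """Aggregate an adjacency matrix into a pyramid of coarser levels.

    Each level sums 2x2 blocks of the previous one, so a cell at level k
    holds the total edge weight between two groups of 2**k nodes.
    """
    pyramid = [adj]
    current = adj
    for _ in range(levels):
        n = current.shape[0]
        if n % 2:  # pad to an even size with empty rows/columns
            current = np.pad(current, ((0, 1), (0, 1)))
            n += 1
        # collapse each 2x2 block into a single aggregate cell
        current = current.reshape(n // 2, 2, n // 2, 2).sum(axis=(1, 3))
        pyramid.append(current)
    return pyramid

# toy 4-node graph with edges (0,1) and (2,3)
adj = np.zeros((4, 4))
adj[0, 1] = adj[1, 0] = 1
adj[2, 3] = adj[3, 2] = 1
levels = build_pyramid(adj, 2)
```

Because each level halves the matrix dimensions, a zoomable viewer can page in only the level (and region) matching the current zoom, which is what makes GPU-side rendering of million-edge graphs tractable.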
We present a visual exploration of the field of human-computer interaction through the author and article metadata of four of its major conferences: the ACM conferences on Computer-Human Interaction (CHI), User Interface Software and Technology (UIST), and Advanced Visual Interfaces (AVI), and the IEEE Symposium on Information Visualization (InfoVis). This article describes many global and local patterns we discovered in this dataset, together with the exploration process that produced them. Some expected patterns emerged: like most social networks, the co-authorship and citation networks exhibit a power-law degree distribution, with a few widely collaborating authors and highly cited articles. Also, the prestigious and long-established CHI conference has the highest impact (citations by the others). Unexpected insights included that the years when a given conference was most selective are not correlated with those that produced its most highly referenced articles, and that influential authors have distinct patterns of collaboration. An interesting sidelight is that methods from the HCI field itself (exploratory data analysis by information visualization and direct-manipulation interaction) proved useful for this analysis. They allowed us to take an open-ended, exploratory approach, guided by the data itself. As we answered our original questions, new ones arose; as we confirmed patterns we expected, we discovered refinements, exceptions, and fascinating new ones. The great strength of exploratory analysis is its ability to raise unexpected questions. The drawback is that analysis can become a very drawn-out process, as the answer to one question raises many others that require further analysis.
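The power-law claim above can be probed with a rough sketch like the following: count how many nodes have each degree, then fit a line to the distribution in log-log space, where a power law p(k) ~ k**-alpha appears as a straight line of slope -alpha. Function names are illustrative, and a least-squares fit on raw counts is only a quick diagnostic; rigorous power-law fitting uses logarithmic binning or maximum-likelihood estimation.

```python
import math
from collections import Counter

def degree_distribution(edges):
    """Map degree -> number of nodes with that degree, for an
    undirected graph given as (u, v) edge pairs (e.g. co-authorships)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return Counter(deg.values())

def loglog_slope(dist):
    """Least-squares slope of log(count) vs log(degree)."""
    pts = [(math.log(k), math.log(c)) for k, c in dist.items()]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den
```

For instance, a star graph (one hub connected to three leaves) yields the distribution {1: 3, 3: 1}, and an exactly geometric distribution such as {1: 8, 2: 4, 4: 2, 8: 1} fits with slope -1.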
In this article, we describe our exploration process and provide a subset of interesting points for reflection, but we cannot hope to present a complete analysis of the field of human-computer interaction. This article is organized as follows: we present a discussion of related work, and then describe the process of dataset collection and cleaning, our approach to visual exploration, and how the visualizations were created. The central part of the article is the actual analysis, divided into three sections: an overview of the field describing important work, key researchers, and the main topics across time for the four conferences; information about how articles reference each other and the patterns of citations between authors; and the collaboration networks that compare the community structure across conferences.
To be most useful, evaluation requires detailed observation and effective analysis of a full spectrum of system use. We have developed an approach and architecture for in-depth data collection and analysis of all use of a visualization system. User interface components in a large visualization and analysis platform automatically record user actions, and can restore previous system states on demand. Audio and text annotations are collected and indexed to states, allowing users to find a comment, restore the system state in which they made it, and then explore the actions before and after it. History is itself visible as data, so a variety of visual displays and analysis techniques may be used to develop insights about the user's experience. The states of any part of the interface may be analyzed separately. Actions are categorized in a taxonomy as the user interface is built, allowing comparison of similar patterns across all tools. History data can coexist with other data during exploration, supporting further individual or group analysis.
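The record/annotate/restore cycle described above can be sketched as follows. The class and method names here are our own illustration of the pattern, not the platform's actual API, and a real implementation would capture state snapshots per interface component rather than as a single dictionary.

```python
from dataclasses import dataclass, field

@dataclass
class HistoryEntry:
    action: str        # category from the tool's action taxonomy
    state: dict        # restorable snapshot of system state
    annotations: list = field(default_factory=list)

class History:
    """Minimal sketch: record actions with state snapshots, index
    annotations to states, and restore any recorded state on demand."""

    def __init__(self):
        self.entries = []

    def record(self, action, state):
        """Append a new history entry; return its index."""
        self.entries.append(HistoryEntry(action, dict(state)))
        return len(self.entries) - 1

    def annotate(self, index, text):
        """Attach a text annotation to the state at `index`."""
        self.entries[index].annotations.append(text)

    def find_annotation(self, text):
        """Return the index of the first state whose annotations match."""
        for i, entry in enumerate(self.entries):
            if any(text in a for a in entry.annotations):
                return i
        return None

    def restore(self, index):
        """Return a copy of the state snapshot to reapply to the UI."""
        return dict(self.entries[index].state)
```

Because every entry carries both an action category and a full snapshot, the same log supports finding a comment and jumping back to its context, as well as treating the history itself as data for later analysis.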
We describe interactions between kinetic (moving) and static information displays. We have implemented "moxel" kinetic displays in a classic discovery platform with many standard information visualization and analytic tools, and experimented with interactions between them. Moxels, which generalize pixels, are an advanced, moving form of iconographic display of the kind first developed in static form by Pickett and White (1966). As with the static graphic icons of those early displays, moxels provide a way of mapping multiple data variables together in one image, but with the added expressive power of in-place motion. We show examples of how the two kinds of displays have been integrated, and discuss issues with combining dynamic and static visualizations in a single environment. We discuss several interaction paradigms between them, including linked brushing, multiple selections, and operations on selected regions.
To be most useful, evaluation metrics should be based on detailed observation and effective analysis of a full spectrum of system use. Because observation is costly, ideally we want a system to provide in-depth data collection with allied analyses of the key user interface elements. We have developed a visualization and analysis platform [1] that automatically records user actions and states at a high semantic level [2, 3], and can be restored directly to any recorded state. Audio and text annotations are collected and indexed to states, allowing users to comment on their current situation as they work or as they review a session. These capabilities let users support usability evaluation of the system by describing the problems they encounter or suggesting improvements to the environment. Additionally, computed metrics are provided at each state [3, 4, 5]. We believe that the metrics and the associated history data will allow us to deduce patterns of data exploration, to compare users, to evaluate tools, and to understand in a more automated way the usability of the visualization system as a whole.