In this paper we introduce multidimensional visualization and interaction techniques that extend related work on parallel histograms and dynamic querying. Bargrams are, in effect, histograms whose bars have been tipped over and lined up end to end. We discuss the affordances of parallel bargrams in the context of systems that support consumer information exploration and choice based on the attributes of the items in the choice set. Our tool, EZChooser, has enabled a number of prototypes in domains such as Internet shopping, investment decisions, and college choice, and a limited version has been deployed for car shopping. Evaluations of the techniques include an experiment indicating that trained users prefer EZChooser to static tables for choice tasks among sets of 50 items with 7-9 attributes.
It is well established that humans possess the cognitive ability to process images extremely rapidly. At GTE Laboratories we have been experimenting with Web-based browsing interfaces that take advantage of this human facility. We have prototyped a number of browsing applications in different domains that offer the advantages of high interactivity and visual engagement. Our hypothesis, confirmed by user evaluations and a pilot experiment, is that many users will be drawn to interfaces that provide rapid presentation of images for browsing tasks in many contexts, among them online shopping, multimedia title selection, and people directories. In this paper we present our application prototypes, built with a system called PolyNav™, and discuss the imaging requirements for applications like these. We also suggest that if the Web industry at large standardized on an XML format for meta-content that included images, rapid-fire image browsing could become a standard part of the Web experience for content selection in a variety of domains.
In this paper we propose new visual interface technology to address multidimensional data exploration and browsing tasks. MultiNav, a prototype from GTE Laboratories, is based upon a multidimensional information model that affords new data exploration and semantically structured browsing interactions. The primary visual metaphor is based on sliding rods, each of which is associated with an information dimension from the underlying model. Users can interactively select value ranges along the rods in order to reveal hidden relationships as well as query and restrict the set through direct manipulation. A novel focus+context view is afforded in which detail about individual items is revealed within the context of the global multidimensional attribute space. We propose a novel interaction technique to change focus, which is based on dragging rods from side to side. We relate this work on multidimensional information visualization to other research in the area, including Parallel Coordinates, Dynamic Histograms, Dynamic Queries, and focus+context tables.

Keywords: Visual interface design, multidimensional information visualization, focus+context, shopping interfaces.