In this paper we introduce multidimensional visualization and interaction techniques that extend related work on parallel histograms and dynamic querying. Bargrams are, in effect, histograms whose bars have been tipped over and lined up end-to-end. We discuss the affordances of parallel bargrams in the context of systems that support consumer-based information exploration and choice based on the attributes of the items in the choice set. Our tool, EZChooser, has enabled a number of prototypes in domains such as Internet shopping, investment decisions, and college choice, and a limited version has been deployed for car shopping. Evaluations of the techniques include an experiment indicating that trained users prefer EZChooser over static tables for choice tasks among sets of 50 items with 7-9 attributes.
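The bargram idea described above can be sketched in a few lines. This is a hypothetical layout helper, not EZChooser's actual code: each attribute value becomes one segment of a single row, with width proportional to its count, as if the histogram's bars were tipped over and laid end-to-end.

```python
# Hypothetical bargram layout: map a histogram {value: count} to
# segments [(value, x_start, width)] laid end-to-end along one row.

def bargram_segments(counts, total_width=100.0):
    """Return one (value, x_start, width) tuple per attribute value."""
    total = sum(counts.values())
    segments, x = [], 0.0
    for value, count in counts.items():
        width = total_width * count / total  # width proportional to count
        segments.append((value, x, width))
        x += width
    return segments

# Example: distribution of body styles in a 50-item car choice set.
print(bargram_segments({"sedan": 25, "SUV": 15, "coupe": 10}))
# → [('sedan', 0.0, 50.0), ('SUV', 50.0, 30.0), ('coupe', 80.0, 20.0)]
```

Laying several such rows in parallel, one per attribute, gives the parallel-bargram display the abstract refers to.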
In this paper we propose a new model for a class of rapid serial visual presentation (RSVP) interfaces [16] in the context of consumer video devices. The basic spatial layout "explodes" a sequence of image frames into a 3D trail in order to provide more context for a spatial/temporal presentation. As the user plays forward or back, the trail advances or recedes while the image in the foreground focus position is replaced. The design can incorporate a variety of methods for analyzing or highlighting images in the trail. Our hypotheses are that users can navigate more quickly and precisely to points of interest than with conventional consumer browsing, channel flipping, or fast-forwarding techniques. We report on an experiment testing our hypotheses in which we found that subjects were more accurate, but not faster, in browsing to a target of interest in recorded television content with a TV remote.
UIST 2003. This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved.
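The "exploded" trail geometry can be illustrated with a minimal sketch. The depth step and shrink factor here are assumptions for illustration, not the paper's actual parameters: each frame is placed along the trail according to its distance from the focus frame, receding into depth and shrinking, while the focus frame sits in the foreground at full size.

```python
# Assumed trail geometry (illustrative, not the paper's parameters):
# frame i recedes by its distance from the focus frame and shrinks
# geometrically, so the focus frame dominates the foreground.

def trail_layout(num_frames, focus, depth_step=1.0, shrink=0.85):
    """Return one (depth, scale) pair per frame index."""
    layout = []
    for i in range(num_frames):
        d = abs(i - focus)  # distance along the trail from the focus
        layout.append((d * depth_step, shrink ** d))
    return layout

# Playing forward advances the focus index, so the trail advances
# while the image in the foreground focus position is replaced.
print(trail_layout(5, focus=2))
```

A player loop would simply recompute this layout as the focus index moves, which is what makes the trail appear to advance or recede.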
It is well established that humans possess cognitive abilities to process images extremely rapidly. At GTE Laboratories we have been experimenting with Web-based browsing interfaces that take advantage of this human facility. We have prototyped a number of browsing applications in different domains that offer the advantages of high interactivity and visual engagement. Our hypothesis, confirmed by user evaluations and a pilot experiment, is that many users will be drawn to interfaces that provide rapid presentation of images for browsing tasks in many contexts, among them online shopping, multimedia title selection, and people directories. In this paper we present our application prototypes using a system called PolyNav™ and discuss the imaging requirements for applications like these. We also suggest that if the Web industry at large standardized on an XML format for meta-content that included images, rapid-fire image browsing could become a standard part of the Web experience for content selection in a variety of domains.
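A meta-content record of the kind suggested above might look like the following. The element names and URL are hypothetical, invented here for illustration; the point is simply that bundling an image reference with item metadata in a standard XML shape would let any rapid-fire browser fetch and present items uniformly.

```python
# Hypothetical meta-content record (element names and URL are invented
# for illustration); parsed with the standard-library ElementTree.
import xml.etree.ElementTree as ET

record = """\
<item domain="shopping">
  <title>Example product</title>
  <image href="http://example.com/thumb.jpg" width="120" height="90"/>
</item>"""

root = ET.fromstring(record)
print(root.find("title").text)             # → Example product
print(root.find("image").get("href"))      # → http://example.com/thumb.jpg
```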
The need for effective search for television content is growing as the number of choices for TV viewing and recording explodes. In this paper we describe a preliminary prototype of a multimodal Speech-In List-Out (SILO) interface in which users' input is unrestricted by vocabulary or grammar. We report on usability testing with a sample of six users. The prototype enables search through video content metadata downloaded from an electronic program guide (EPG) service. Our setup for testing included adding a microphone to a TV remote control and running an application on a PC whose visual interface was displayed on a TV.
ACM Advanced Visual Interfaces, May 2006.
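The Speech-In List-Out pattern can be sketched minimally. The word-overlap scoring below is an assumption for illustration; the prototype's actual ranking method is not described in the abstract. The essential idea is that an unconstrained speech transcript is matched against each program's EPG metadata and the best matches are listed out.

```python
# Minimal SILO sketch (assumed word-overlap scoring, not the
# prototype's actual ranking): match a free-form speech transcript
# against EPG metadata and list out the top-ranked programs.

def silo_search(transcript, programs, top_k=3):
    """Rank programs by how many transcript words appear in their metadata."""
    query = set(transcript.lower().split())
    scored = []
    for prog in programs:
        words = set(prog["metadata"].lower().split())
        scored.append((len(query & words), prog["title"]))
    scored.sort(key=lambda s: -s[0])  # highest overlap first
    return [title for score, title in scored[:top_k] if score > 0]

epg = [
    {"title": "Nature Hour", "metadata": "documentary wildlife africa lions"},
    {"title": "Cook It", "metadata": "cooking show pasta italian recipes"},
]
print(silo_search("wildlife documentary about africa", epg))
# → ['Nature Hour']
```

In the tested setup, the transcript would come from the microphone added to the TV remote, and the ranked list would be rendered on the TV screen.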