Abstract. Target selection is a fundamental aspect of interaction and is particularly challenging when targets are moving. We address this problem by introducing a novel selection technique, Hold, which temporarily pauses the content while selection is in progress to provide a static target. In a user study, we evaluate our method against two others for acquiring moving targets in one and two dimensions, with variations in target size and velocity. Results demonstrate that Hold outperforms traditional approaches in 2D for small or fast-moving targets. Additionally, we investigate a new model, based on Fitts' Law, that describes the acquisition of moving targets in 2D, and we validate this model empirically. The model has applications in the development of acquisition techniques for 2D moving targets in domains such as hyperlinked video and video games.
Abstract. We present a work-in-progress novel framework for the creation, delivery and viewing of multi-view hypermedia intended for mobile platforms. We utilise abstractions over the creation and delivery of content, and a unified language scheme (through XML) for communication between components. The delivery mechanism incorporates server-side processing to allow the inclusion of additional features such as computer vision-based analysis or visual effects. Multi-view video is streamed live to mobile devices, which offer several mechanisms for viewing hypermedia and selecting perspectives.
We propose to demonstrate our novel rich media interface, MediaDiver, which showcases our new interaction techniques for viewing and annotating multi-view video. The demonstration allows attendees to experience novel moving target selection methods (called Hold and Chase), new multi-view selection techniques, automated quality-of-view analysis that switches viewpoints to follow targets, integrated annotation methods for viewing or authoring meta-content, and advanced context-sensitive transport and timeline functions. As users have become increasingly sophisticated at navigating and viewing hyper-documents, they transfer these expectations to new media. Our proposal demonstrates the technology required to meet these expectations for video: users can directly click on objects in the video to link to more information or other video, easily change camera views, and mark up the video with their own content. The applications of this technology stretch from home video management to broadcast-quality media production, consumable on both desktop and mobile platforms.