Learning classifiers to be used as filters within the analytical reasoning process creates new challenges and aggravates existing ones. Such classifiers are typically trained ad hoc, under tight time constraints that limit the amount and quality of annotation data and, thus, also the users' trust in the trained classifier. We approach the challenges of ad-hoc training with interactive learning, which extends active learning by integrating human experts' background knowledge to a greater extent. In contrast to active learning, interactive learning not only includes the users' expertise by posing queries of data instances for labeling, but also supports the users in comprehending the classifier model through visualization. Besides the annotation of manually or automatically selected data instances, users are empowered to directly adjust complex classifier models. Our model visualization thereby facilitates the detection and correction of inconsistencies between the classifier model trained by examples and the user's mental model of the class definition. Visual feedback on the training process helps the users assess the performance of the classifier and, thus, build up trust in the filter created. We demonstrate the capabilities of interactive learning in the domain of video visual analytics and compare its performance with the results of random sampling and uncertainty sampling of training sets.
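Uncertainty sampling, used above as a baseline, selects for labeling those unlabeled instances on which the current classifier is least confident. A minimal sketch of least-confidence scoring (function names and the toy probabilities are illustrative, not from the paper):

```python
import numpy as np

def uncertainty_sample(probabilities, k):
    """Select the k instances whose predictions are least confident.

    Least-confidence score: 1 - max predicted class probability,
    so scores near 0.5 (for two classes) mark the most uncertain cases.
    """
    scores = 1.0 - probabilities.max(axis=1)
    # Indices of the k highest scores, most uncertain first.
    return np.argsort(scores)[-k:][::-1]

# Toy example: predicted class probabilities for 4 unlabeled instances.
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.70, 0.30],
                  [0.51, 0.49]])
print(uncertainty_sample(probs, 2))  # → [3 1]
```

Random sampling, by contrast, ignores the scores and picks instances uniformly, which is what the comparison in the study measures interactive learning against.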
In recent years, a collection of new techniques that deal with video as input data has emerged in computer graphics and visualization. In this survey, we report the state of the art in video-based graphics and video visualization. We provide a review of techniques for making photo-realistic or artistic computer-generated imagery from videos, as well as methods for creating summary and/or abstract visual representations to reveal important features and events in videos. We provide a new taxonomy to categorize the concepts and techniques in this newly emerged body of knowledge. To support this review, we also give a concise overview of the major advances in automated video analysis, as some techniques in this field (e.g. feature extraction, detection, tracking and so on) have been featured in video-based modelling and rendering pipelines for graphics and visualization.
There is an increasing number of rapidly growing repositories capturing the movement of people in space-time. Compressing movement trajectories thus becomes a necessity for coping with such growing data volumes. This paper introduces the concept of semantic trajectory compression (STC). STC allows for substantially compressing trajectory data with acceptable information loss. It exploits the fact that human urban mobility typically occurs in transportation networks that define a geographic context for the movement. In STC, a semantic representation of the trajectory, consisting of reference points localized in a transportation network, replaces raw, highly redundant position information (e.g., from GPS receivers). An experimental evaluation with real and synthetic trajectories demonstrates the power of STC in reducing trajectories to essential information and illustrates how trajectories can be restored from compressed data. The paper discusses possible application areas of STC trajectories.
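The core idea of replacing redundant positions with network-localized reference points can be sketched as follows; the edge matcher and street names here are hypothetical stand-ins for a real map-matching step, not the paper's implementation:

```python
def compress(trajectory, match_edge):
    """Keep one reference point per network edge.

    Consecutive positions matched to the same edge (e.g. the same
    street segment) are redundant and are dropped; only the first
    point on each new edge is retained as a reference point.
    """
    compressed = []
    prev_edge = None
    for point in trajectory:
        edge = match_edge(point)
        if edge != prev_edge:
            compressed.append((point, edge))
            prev_edge = edge
    return compressed

# Hypothetical matcher: assigns a street name by x-coordinate band.
def match_edge(p):
    return "Main St" if p[0] < 5 else "Oak Ave"

traj = [(0, 0), (1, 0), (2, 0), (6, 0), (7, 0)]
print(compress(traj, match_edge))
# → [((0, 0), 'Main St'), ((6, 0), 'Oak Ave')]
```

Five raw positions reduce to two semantic reference points; decompression would interpolate along the matched edges of the network.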
We investigate visual task solution strategies when exploring traditional, orthogonal, and radial node-link tree layouts, four orientations of the non-radial layouts, as well as varying task difficulty. The strategies are identified by examining eye movement data recorded in a controlled user study previously conducted by Burch et al. For detailed analysis of the spatio-temporal structures and patterns in the eye tracking data, we employ visual analytics techniques adopted from related methodology for geographic movement data by Andrienko et al. In this way, we complement the statistical analysis of task completion times and error rates reported by Burch et al. with spatio-temporal strategies that explain the variation in completion times. We identify differences between task solution strategies depending on layout type, orientation, and task difficulty. Furthermore, we examine differences between groups of participants split according to completion time. Our analysis shows that for all layouts it took nearly the same time to find the task solution node, but in the radial layout the solution was not confirmed directly. Instead, more frequent cross-checking occurred afterwards, which is the main reason for the impaired performance of radial layouts.