Figure 1: EmbeddingVis consists of (1) a control panel, (2) a graph view, (3) a cluster transition view, (4) a pairwise ranking view, and (5) a structural view. Using EmbeddingVis to verify preserved metrics at the instance level: (a) users lasso a cluster of nodes, and the corresponding nodes are highlighted and linked across different embedding spaces (b1-e1). Clicking one "hub" node in this cluster (label 20) in the graph view generates four neighbor ranking lists for this "hub" node: (b2) the graph space, (c2) DeepWalk, (d2) struc2vec, and (e2) node2vec. The highlighted rectangles (c2, d2, e2) indicate the metrics preserved by each embedding. (f) Users can click "Filter" to highlight the nodes of each ranking list in the cluster transition view (b3-e3). (g) The Euclidean distance between adjacent nodes in struc2vec is more volatile than in the other two spaces.

Abstract: Constructing latent vector representations for nodes in a network through embedding models has shown its practicality in many graph analysis applications, such as node classification, clustering, and link prediction. However, despite the high efficiency and accuracy of learning an embedding model, people have little insight into what information about the original network is preserved in the embedding vectors. The abstractness of the low-dimensional vector representation, the stochastic nature of the construction process, and non-transparent hyper-parameters all obscure understanding of network embedding results. Visualization techniques have been introduced to facilitate embedding vector inspection, usually by projecting the embedding …
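To make the instance-level inspection concrete, the sketch below shows one way such per-space neighbor ranking lists could be computed: for a clicked node, rank all other nodes by Euclidean distance in each embedding space, as in panels (b2)-(e2). This is an illustrative reconstruction, not the authors' implementation; the graph and the randomly initialized embedding matrices are stand-ins for real DeepWalk, struc2vec, and node2vec outputs.

# Illustrative sketch (not the authors' code): ranking a "hub" node's
# neighbors by Euclidean distance in each embedding space.
# `graph` and the embedding matrices below are hypothetical stand-ins.
import numpy as np
import networkx as nx

def neighbor_ranking(node, embedding, k=10):
    """Return the k nearest nodes to `node` by Euclidean distance."""
    dists = np.linalg.norm(embedding - embedding[node], axis=1)
    order = np.argsort(dists)
    return [int(i) for i in order if i != node][:k]

graph = nx.karate_club_graph()                    # stand-in network
rng = np.random.default_rng(0)
embeddings = {                                    # placeholders for trained models
    "DeepWalk": rng.normal(size=(graph.number_of_nodes(), 128)),
    "struc2vec": rng.normal(size=(graph.number_of_nodes(), 128)),
    "node2vec": rng.normal(size=(graph.number_of_nodes(), 128)),
}
hub = 20                                          # the clicked "hub" node
for name, emb in embeddings.items():
    print(name, neighbor_ranking(hub, emb))

Comparing each list against the graph-space ranking (e.g., by shortest-path distance or shared metrics) is what reveals which properties an embedding preserves.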
Fig. 1: Our visualization system supports emotion analysis across three modalities (i.e., face, text, and audio) at different levels of detail. The video view (a) summarizes each video in the collection and enables quick identification of videos of interest. The channel coherence view (b) shows the emotion coherence of the three modalities at the sentence level and provides extracted features for channel exploration. The detail view (c) supports detailed exploration of a selected sentence, with highlighted features and transition points. The sentence clustering view (d) provides a summary of the video and reveals temporal patterns of emotion information. The word view (e) enables efficient quantitative analysis of the video transcript at the word level.

Abstract: Emotions play a key role in human communication and public presentations. Human emotions are usually expressed through multiple modalities. Therefore, exploring multimodal emotions and their coherence is of great value for understanding emotional expression in presentations and improving presentation skills. However, manually watching and studying presentation videos is often tedious and time-consuming, and there is a lack of tool support for efficient, in-depth multi-level analysis. Thus, in this paper, we introduce EmoCo, an interactive visual analytics system that facilitates efficient analysis of emotion coherence across the facial, text, and audio modalities in presentation videos. Our visualization system features a channel coherence view and a sentence clustering view that together enable users to obtain a quick overview of emotion coherence and its temporal evolution. In addition, a detail view and a word view enable detailed exploration and comparison at the sentence and word levels, respectively. We thoroughly evaluate the proposed system and visualization techniques through two usage scenarios based on TED Talk videos and interviews with two domain experts. The results demonstrate the effectiveness of our system in gaining insights into emotion coherence in presentations.
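As a rough illustration of the sentence-level coherence that the channel coherence view encodes, the sketch below counts how many of the three modalities agree on a dominant emotion label per sentence. The emotion labels here are invented placeholders; EmoCo's actual feature extraction and coherence measure are not specified in the abstract.

# Hedged sketch: per-sentence emotion coherence as modality agreement.
# The sentence records are fabricated examples, not EmoCo pipeline output.
from collections import Counter

def coherence(face, text, audio):
    """Return how many of the three modalities share the majority emotion (1-3)."""
    counts = Counter([face, text, audio])
    return counts.most_common(1)[0][1]

sentences = [
    {"face": "happy", "text": "happy", "audio": "neutral"},
    {"face": "sad", "text": "neutral", "audio": "angry"},
]
for i, s in enumerate(sentences):
    agree = coherence(s["face"], s["text"], s["audio"])
    print(f"sentence {i}: {agree}/3 modalities agree")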
To design a successful Multiplayer Online Battle Arena (MOBA) game, the ratio of snowballing and comeback occurrences to all matches played must be kept at a certain level to ensure fairness and engagement. Although these two types of occurrences are easy to identify, game developers often find it difficult to determine their causes and triggers among the many game design choices and game parameters involved. In addition, the huge amounts of MOBA game data are often heterogeneous, multi-dimensional, and highly dynamic in space and time, which poses special challenges for analysts. In this paper, we present a visual analytics system that helps game designers find the key events and game parameters that result in snowballing or comeback occurrences in MOBA game data. We followed a user-centered design process, developing the system with game analysts and testing it with real data from a trial version of a MOBA game by NetEase Inc. We apply novel visualization techniques in conjunction with well-established ones to depict the evolution of players' positions and status and the occurrences of events. Our system can reveal players' strategies and performance throughout a single match and suggest patterns, e.g., specific players' actions and game events, that led to the final occurrences. We further demonstrate a workflow for leveraging human-analyzed patterns to improve the scalability and generality of match data analysis. Finally, we validate the usability of our system by showing that the identified patterns are representative of snowballing and comeback matches in a one-month-long MOBA tournament dataset.
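Although the abstract does not give the detection rules, a minimal sketch of how snowballing and comeback matches might be flagged from a per-minute gold-difference series is shown below; the threshold and example series are assumptions for illustration only, not NetEase's criteria.

# Hedged sketch: flagging snowballing vs. comeback matches from a
# per-minute gold-difference series (positive = eventual winner ahead).
# The threshold and sample series are illustrative assumptions.

def classify_match(gold_diff, lead_threshold=2000):
    """gold_diff[t]: the winner's gold lead at minute t."""
    max_deficit = min(gold_diff)
    led_after_early_game = all(d > 0 for d in gold_diff[len(gold_diff) // 4:])
    if max_deficit < -lead_threshold:
        return "comeback"       # winner recovered from a large deficit
    if led_after_early_game and gold_diff[-1] > lead_threshold:
        return "snowballing"    # winner led continuously and widened the gap
    return "neither"

print(classify_match([500, 1500, 3000, 5000, 8000]))     # snowballing
print(classify_match([-1000, -3000, -2500, 500, 4000]))  # comeback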