Network Medicine applies network science approaches to investigate disease pathogenesis. Many analytical methods have been used to infer relevant molecular networks, including protein-protein interaction networks, correlation-based networks, gene regulatory networks, and Bayesian networks. Network Medicine applies these integrated approaches to Omics Big Data (including genetics, epigenetics, transcriptomics, metabolomics, and proteomics) using computational biology tools and thereby has the potential to improve the diagnosis, prognosis, and treatment of complex diseases. We briefly discuss the types of molecular data used in molecular network analyses, survey the analytical methods for inferring molecular networks, and review efforts to validate and visualize molecular networks. Successful applications of molecular network analysis have been reported in pulmonary arterial hypertension, coronary heart disease, diabetes mellitus, chronic lung diseases, and drug development. Important knowledge gaps in Network Medicine include the incompleteness of the molecular interactome, challenges in identifying key genes within genetic association regions, and limited applications to human diseases.
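To make the correlation-based networks mentioned above concrete, here is a minimal sketch of inferring such a network from an expression matrix. The data, threshold, and function name are illustrative and not taken from any of the surveyed methods:

```python
import numpy as np

def correlation_network(expr, threshold=0.8):
    """Infer a simple correlation-based network.

    expr: (n_samples, n_genes) expression matrix.
    Returns a boolean adjacency matrix connecting gene pairs whose
    absolute Pearson correlation meets or exceeds the threshold.
    """
    corr = np.corrcoef(expr, rowvar=False)  # (n_genes, n_genes)
    adj = np.abs(corr) >= threshold
    np.fill_diagonal(adj, False)            # drop self-loops
    return adj

# Toy data: gene 1 is a linear function of gene 0,
# while gene 2 is constructed to be uncorrelated with both.
expr = np.array([
    [1,  2,  1],
    [2,  4, -1],
    [3,  6,  1],
    [4,  8, -1],
    [5, 10,  1],
], dtype=float)
adj = correlation_network(expr)
# Only genes 0 and 1 end up connected.
```

Real pipelines additionally correct for multiple testing and may use partial correlations to separate direct from indirect associations; the hard threshold above is the simplest possible edge criterion.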
With today's technical possibilities, a stable visualization scenario can no longer be taken for granted, as the underlying data and the targeted display setup are much more in flux than in traditional scenarios. Incremental visualization approaches are a means to address this challenge, as they permit the user to interact with, steer, and change the visualization at intermediate time points, not just after it has been completed. In this paper, we put forward a model for incremental visualizations that is based on the established Data State Reference Model, but extends it to also represent partitioned data and visualization operators so as to facilitate intermediate visualization updates. Partitioned data and operators can be used independently or in combination to strike tailored compromises between output quality, shown data quantity, and responsiveness, i.e., frame rates. We showcase the expressive power of this model by discussing the opportunities and challenges of incremental visualization in general and its usage in a real-world scenario in particular.
Big Data technology has discarded traditional data modelling approaches as no longer applicable to distributed data processing. It is, however, largely recognised that Big Data impose novel challenges in data and infrastructure management. Indeed, multiple components and procedures must be coordinated to ensure a high level of data quality and accessibility for the application layers, e.g. data analytics and reporting. In this paper, the third of its kind co-authored by mem-
Situational awareness is a key concept in cyber-defence. Its goal is to make the user aware of different and complex aspects of the network he or she is monitoring. This paper proposes PERCIVAL, a novel visual analytics environment that contributes to situational awareness by allowing the user to understand the network security status and to monitor security events that are happening on the system. The proposed visualization allows for comparing the proactive security analysis with the actual attack progress, providing insights into the effectiveness of the mitigation actions the system has triggered against the attack and giving an overview of the possible evolution of the attack. Moreover, the same visualization can be fruitfully used in proactive analysis, since it allows for getting details on computed attack paths and evaluating the mitigation actions that have been proactively computed by the system. A preliminary user study provided positive feedback on the prototype implementation of the system. A video of the system is available at: https://youtu.be/uMpYCJCX95k
Progressive Visual Analytics (PVA) has gained increasing attention over the past years. It brings the user into the loop during otherwise long-running and non-transparent computations by producing intermediate partial results. These partial results can be shown to the user for early and continuous interaction with the emerging end result, even while it is still being computed. Yet as clear-cut as this fundamental idea seems, the existing body of literature puts forth various interpretations and instantiations that have created a research domain of competing terms, various definitions, and long lists of practical requirements and design guidelines spread across different scientific communities. This makes it increasingly difficult to get a succinct understanding of PVA's principal concepts, let alone an overview of this increasingly diverging field. The review and discussion of PVA presented in this paper address these issues and provide (1) a literature collection on this topic, (2) a conceptual characterization of PVA, and (3) a consolidated set of practical recommendations for implementing and using PVA-based visual analytics solutions.

Keywords: visual analytics; progressive visualization; incremental visualization; online algorithms

Motivation

With data growing in size and complexity, and analysis methods getting more sophisticated and computationally intensive, the idea of Progressive Visual Analytics (PVA) [1,2] becomes increasingly appealing. A PVA approach can either subdivide the data to process each data chunk individually, or it can subdivide the analytic process into computational steps that iteratively refine analytic results [3]. By doing so, PVA yields partial results of increasing completeness or approximate results of increasing correctness, respectively.
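Of the two progressive strategies just described, the data-chunking one can be sketched as a generator that emits a partial result after each chunk. This is a toy running mean, with all names illustrative:

```python
def progressive_mean(data, chunk_size):
    """Data-chunking flavour of PVA: process one chunk at a time
    and yield partial results of increasing completeness."""
    total, count = 0.0, 0
    for start in range(0, len(data), chunk_size):
        chunk = data[start:start + chunk_size]
        total += sum(chunk)
        count += len(chunk)
        # Each yield is an intermediate result the user could
        # already inspect, steer, or cancel on.
        yield count, total / count

partials = list(progressive_mean([2, 4, 6, 8, 10, 12], chunk_size=2))
# partials == [(2, 3.0), (4, 5.0), (6, 7.0)]: the estimates
# converge to the exact mean (7.0) as more chunks are processed.
```

The iterative-refinement flavour would instead keep revisiting the full data and yield approximate results of increasing correctness, as in anytime clustering or progressive dimensionality reduction.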
This is useful in a wide range of visual analytics scenarios:

• to realize responsive client-server visualizations using incremental data transmissions

Because of its versatility, the progressive approach to data analysis and visualization is alternatively seen as a paradigm for computation, for interaction, for data transmission, or for visual presentation. It is thus not surprising that PVA-related research is distributed over multiple disciplines, motivated by various underlying problems, and described in different, sometimes overloaded terms, at different levels of detail, for different audiences.
In this paper, we present a new benchmark to validate the suitability of database systems for interactive visualization workloads. While there exist proposals for evaluating database systems on interactive data exploration workloads, none rely on real user traces for database benchmarking. To this end, our long-term goal is to collect user traces that represent workloads with different exploration characteristics. In this paper, we present an initial benchmark that focuses on "crossfilter"-style applications, which are a popular interaction type for data exploration and a particularly demanding scenario for testing database system performance. We make our benchmark materials, including input datasets, interaction sequences, corresponding SQL queries, and analysis code, freely available as a community resource, to foster further research in this area: https://osf.io/9xerb/?view_only=81de1a3f99d04529b6b173a3bd5b4d23

CCS CONCEPTS
• Information systems → Data management systems; Data analytics;
• Human-centered computing → Visualization systems and tools.
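To illustrate why crossfilter-style interaction is so demanding: every brush movement re-issues an aggregate query for each linked view, filtered by the selections in all the other views. A hypothetical sketch of that query generation, where the table and column names are invented and not the benchmark's actual datasets:

```python
def crossfilter_query(table, brushes, target_dim):
    """Build the SQL for one histogram view: aggregate over
    target_dim, filtered by the brushes on all *other* dimensions."""
    predicates = [
        f"{dim} BETWEEN {lo} AND {hi}"
        for dim, (lo, hi) in brushes.items()
        if dim != target_dim
    ]
    where = f" WHERE {' AND '.join(predicates)}" if predicates else ""
    return (f"SELECT {target_dim}, COUNT(*) FROM {table}"
            f"{where} GROUP BY {target_dim}")

# One brush move updates every linked view, so N views -> N queries,
# potentially dozens of times per second while the user drags.
sql = crossfilter_query("flights",
                        {"distance": (0, 500), "delay": (10, 60)},
                        target_dim="delay")
```

Since each drag event can fan out into many such queries, the database must sustain high query throughput at interactive latencies, which is exactly the stress profile the benchmark's recorded interaction sequences reproduce.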