Knowledge about visualization tasks plays an important role in choosing or building suitable visual representations to pursue them. Yet, tasks are a multi-faceted concept and it is thus not surprising that the many existing task taxonomies and models all describe different aspects of tasks, depending on what these task descriptions aim to capture. This results in a clear need to bring these different aspects together under the common hood of a general design space of visualization tasks, which we propose in this paper. Our design space consists of five design dimensions that characterize the main aspects of tasks and that have so far been distributed across different task descriptions. We exemplify its concrete use by applying our design space in the domain of climate impact research. To this end, we propose interfaces to our design space for different user roles (developers, authors, and end users) that allow users of different levels of expertise to work with it.
Fig. 1. Shaded relief of the Caucasus Mountains created with a neural network trained with a manual relief shading of Switzerland.
Abstract. Networks such as transportation, water, and power are critical lifelines for society. Managers plan and execute interventions to guarantee the operational state of their networks under various circumstances, including after the occurrence of (natural) hazard events. Creating an intervention program demands knowing the probable direct and indirect consequences (i.e., risk) of the various hazard events that could occur in order to be able to mitigate their effects. This paper introduces a methodology to support network managers in the quantification of the risk related to their networks. The methodology is centered on the integration of the spatial and temporal attributes of the events that need to be modeled to estimate the risk. Furthermore, the methodology supports the inclusion of the uncertainty of these events and the propagation of these uncertainties throughout the risk modeling. The methodology is implemented through a modular simulation engine that supports the updating and swapping of models according to the needs of network managers. This work demonstrates the usefulness of the methodology and simulation engine through an application to estimate the potential impact of floods and mudflows on a road network located in Switzerland. The application includes the modeling of (i) multiple time-varying hazard events; (ii) their physical and functional effects on network objects (i.e., bridges and road sections); (iii) the functional interrelationships of the affected objects; (iv) the resulting probable consequences in terms of expected costs of restoration, cost of traffic changes, and duration of network disruption; and (v) the restoration of the network.
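The core idea of the abstract — estimating risk as probable consequences of uncertain hazard events, with uncertainty propagated through the model chain — can be illustrated with a toy Monte Carlo sketch. All numbers (hazard probability, cost distributions) are illustrative assumptions, not values from the paper, and the function name is hypothetical:

```python
import random

def expected_cost(n_samples=10_000, seed=42):
    """Toy Monte Carlo risk estimate: sample hazard occurrence and
    uncertain direct/indirect consequences, return the mean total cost.
    All parameters are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        if rng.random() < 0.05:  # assumed annual flood probability
            # uncertain restoration cost (lognormal, assumed parameters)
            direct = rng.lognormvariate(13.0, 0.6)
            # indirect cost of traffic changes, assumed proportional to direct cost
            indirect = rng.uniform(0.5, 2.0) * direct
            total += direct + indirect
    return total / n_samples
```

Fixing the seed makes the estimate reproducible; in a modular engine as described, each sampled term would come from a swappable hazard, damage, or restoration model rather than a hard-coded distribution.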
Extracting features from printed maps has been a challenge for decades; historical maps pose an even larger problem due to manual, inconsistent drawing or scribing, low printing quality, and geometrical distortions. In this article, a new workflow is introduced, consisting of a segmentation step and a vectorization step to acquire high‐quality polygon representations of building footprints from the Siegfried map series. For segmentation, an ensemble of U‐Nets is trained, yielding pixel‐based predictions with an average intersection over union of 88.2% and an average precision of 98.55%. For vectorization, methods based on contour tracing and orientation‐based clustering are proposed to approximate idealized polygonal representations. The workflow has been tested on 10 randomly selected map sheets from the Siegfried map, showing that the time required to manually correct these polygons drops to about 45 min per map sheet. Of this sample, approximately 10% of buildings required manual corrections. This workflow can serve as a blueprint for similar vectorization efforts.
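The segmentation quality metric reported above, intersection over union (IoU), compares a predicted mask with ground truth. A minimal sketch using NumPy (toy masks, not data from the Siegfried map):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union

# Toy 4x4 masks: prediction covers 4 pixels, ground truth 6, overlap 4.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[1, 1, 1, 0],
                  [1, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(iou(pred, truth))  # 4 / 6 ≈ 0.667
```

In practice the per-building or per-tile IoU values would be averaged to obtain figures like the 88.2% reported for the U-Net ensemble.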
Cartographic maps have been shown to provide cognitive benefits when interpreting data in relation to a geographic location. In visualization, the term map‐like describes techniques that incorporate characteristics of cartographic maps in their representation of abstract data. However, the field of map‐like visualization is vast and currently lacks a clear classification of the existing techniques. Moreover, choosing the right technique to support a particular visualization task is further complicated, as techniques are scattered across different domains, with each considering different characteristics as map‐like. In this paper, we give an overview of the literature on map‐like visualization and provide a hierarchical classification of existing techniques along two general perspectives: imitation and schematization of cartographic maps. Each perspective is further divided into four principal categories that group common map‐like techniques along the visual primitives they affect. We further discuss this classification from a task‐centered view and highlight open research questions.
Emerging methodologies for natural hazard risk assessments involve the execution of a multitude of different interacting simulation models that produce vast amounts of spatio-temporal datasets. This data pool is further enlarged when such simulation results are post-processed using GIS operations, for example to derive information for decision-making. The novel approach presented in this paper makes use of the GPU-accelerated rendering pipeline to perform such operations on-the-fly without storing any results on secondary memory, thus saving large amounts of storage space. In particular, algorithms for three frequently used geospatial analysis methods are provided, namely for the computation of difference maps using map algebra and overlay operations, distance maps and buffers as examples of proximity analyses, as well as kernel density estimation and inverse distance weighting as examples of statistical surfaces. In addition, a visualization tool is presented that integrates these methods using a node-based data flow architecture. The application of this visualization tool to the results of a real-world risk assessment methodology used in civil engineering shows that the memory footprint of post-processing datasets can be reduced by amounts on the order of terabytes. Although the technique has several limitations, most notably the reduced interoperability with conventional analysis tools, it can be beneficial for other use cases. When integrated into desktop GIS applications, for example, it can be used to quickly generate a preview of the results of complex analysis chains or it can reduce the amount of data to be transferred to web or mobile GIS applications.
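One of the statistical-surface methods named above, inverse distance weighting (IDW), interpolates scattered sample values onto query points by weighting each sample with an inverse power of its distance. A minimal CPU sketch with NumPy (the paper implements such operations on the GPU rendering pipeline; this only shows the math):

```python
import numpy as np

def idw(points, values, queries, power=2.0, eps=1e-12):
    """Inverse distance weighting.
    points:  (N, 2) sample coordinates
    values:  (N,)   sample values
    queries: (Q, 2) query coordinates
    Returns (Q,) interpolated values."""
    # pairwise distances, shape (Q, N)
    d = np.linalg.norm(queries[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power  # clamp to avoid division by zero
    return (w @ values) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [1.0, 0.0]])
vals = np.array([0.0, 10.0])
query = np.array([[0.5, 0.0]])
print(idw(pts, vals, query))  # the midpoint gets the unweighted mean, 5.0
```

On a GPU, the same weighted sum is evaluated per fragment in a shader, which is what allows the surface to be rendered without ever materializing it in secondary storage.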
Maps contain abundant geospatial information, such as roads, settlements, and river networks, to name a few. The need to access this information to carry out analyses (e.g., in transportation, landscape planning, or ecology), as well as advances in software and hardware technologies, have driven the development of workflows to efficiently extract features from maps. The aim of this article is to provide a comprehensive overview of such methods to extract road features from raster maps. The methods are categorized based on the classes of techniques they employ (e.g., line extraction), as well as their subclasses (e.g., line tracing, Hough transform), the amount of user intervention required (e.g., interactive, automatic), the required data (e.g., scanned maps, contemporary vector data) and the produced results (e.g., raster‐based predictions, vector‐based results, attributes). Additionally, recent road extraction methods from overhead imagery, together with evaluation methods that will possibly benefit road extraction from raster maps, are reviewed. Furthermore, the evolution of this research field is analyzed over the past 35 years and the limitations of the current techniques, as well as possible future directions, are discussed.
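Among the line-extraction subclasses mentioned above, the Hough transform detects straight features by letting each foreground pixel vote for all line parameters (rho, theta) consistent with it. A minimal sketch over a binary raster, assuming a simple accumulator without smoothing or peak suppression:

```python
import numpy as np

def hough_lines(mask, n_theta=180):
    """Minimal Hough transform: each foreground pixel of a binary raster
    votes for the (rho, theta) parameters of lines passing through it."""
    ys, xs = np.nonzero(mask)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*mask.shape)))  # largest possible |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    for x, y in zip(xs, ys):
        # rho = x*cos(theta) + y*sin(theta), shifted so indices are non-negative
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    return acc, thetas, diag

mask = np.zeros((5, 5), dtype=bool)
mask[2, :] = True  # horizontal line of 5 pixels at y = 2
acc, thetas, diag = hough_lines(mask)
peak = np.unravel_index(acc.argmax(), acc.shape)
# peak[0] - diag recovers rho = 2; thetas[peak[1]] is near pi/2
```

Real road-extraction pipelines add preprocessing (color separation, thinning) and post-processing (peak clustering, segment linking) around this voting core.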
A quantitative approach to conduct a specific type of stress test on road networks is presented in this article. The objective is to help network managers determine whether their networks would perform adequately during and after the occurrence of hazard events. Conducting a stress test requires (i) modifying an existing risk model (i.e., a model to estimate the probable consequences of hazard events) by representing at least one uncertainty in the model with values that are considerably worse than median or mean values, and (ii) developing criteria to conclude if the network has an adequate post-hazard performance. Specifically, the stress test conducted in this work is focused on the uncertain behavior of individual objects that are part of a network when these are subjected to hazard loads. Here, the relationships between object behavior and hazard load are modeled using fragility functions and functional capacity loss functions. To illustrate the quantitative approach, a stress test is conducted for an example road network in Switzerland, which is affected by floods and rainfall-triggered mudflows. Beyond the focus of the stress test, this work highlights the importance of using a probabilistic approach when conducting stress tests for temporally and spatially distributed networks.
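Fragility functions, as used above, give the probability that an object reaches a damage state as a function of hazard intensity. They are commonly parameterized as a lognormal CDF; the sketch below assumes that common form with illustrative parameters, not the functions calibrated in the paper:

```python
from math import erf, log, sqrt

def fragility(intensity, median, beta):
    """Lognormal fragility function.
    intensity: hazard load on the object (e.g., flow depth)
    median:    intensity at which exceedance probability is 50%
    beta:      lognormal dispersion (uncertainty in object behavior)
    Returns P(damage state is reached | intensity)."""
    if intensity <= 0:
        return 0.0
    z = log(intensity / median) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# At the median intensity the exceedance probability is exactly 0.5.
print(fragility(2.0, median=2.0, beta=0.5))  # 0.5
```

A stress test in the sense of the article would then replace median parameter values with considerably worse ones (e.g., a lower median or larger beta) and re-run the risk model.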