High-resolution elevation and bathymetry data for coastal zones is extremely valuable to many researchers; however, acquiring such data is prohibitively expensive for most research budgets, as it relies on specialized hardware. Mass-produced, off-the-shelf consumer cameras and sensors are becoming increasingly powerful and can be affordable alternatives for collecting data. Microsoft's original Kinect sensor was repurposed to collect data for Earth science research, but its low depth resolution hindered its usefulness for creating accurate maps. In this paper, we evaluate Microsoft's next-generation Kinect for Windows v2 sensor, which employs time-of-flight technology. Based on our results, the new sensor has great potential for use in coastal mapping and other Earth science applications where budget constraints preclude the use of traditional remote sensing data acquisition technologies.
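The time-of-flight principle behind the Kinect v2 can be summarized with a short sketch (illustrative only; the actual sensor measures phase shifts of modulated infrared light rather than raw pulse timing):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_depth(round_trip_seconds):
    """Depth from time of flight: the light travels out and back,
    so the distance to the target is half the round trip."""
    return C * round_trip_seconds / 2.0

# a ~20 ns round trip corresponds to roughly 3 m of depth
print(round(tof_depth(20e-9), 2))  # 3.0
```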
Traditional geospatial information visualizations often present views that restrict the user to a single perspective. When zoomed out, local trends and anomalies become suppressed and lost; when zoomed in for local inspection, spatial awareness and comparison between regions become limited. In our model, coordinated visualizations are integrated within individual probe interfaces, which depict the local data in user-defined regions of interest. Our probe concept can be incorporated into a variety of geospatial visualizations to empower users with the ability to observe, coordinate, and compare data across multiple local regions. It is especially useful when dealing with complex simulations or analyses where behavior in various localities differs from other localities and from the system as a whole. We illustrate the effectiveness of our technique over traditional interfaces by incorporating it within three existing geospatial visualization systems: an agent-based social simulation, a census data exploration tool, and a 3D GIS environment for analyzing urban change over time. In each case, the probe-based interaction enhances spatial awareness, improves inspection and comparison capabilities, expands the range of scopes, and facilitates collaboration among multiple users.
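The probe idea can be illustrated with a minimal sketch (hypothetical data layout: `(x, y, value)` samples and a circular region of interest; the actual systems use richer coordinated views):

```python
def probe_summary(samples, cx, cy, radius):
    """Report local statistics for a circular probe region -- the kind of
    per-region summary a probe interface would display alongside the map."""
    inside = [v for x, y, v in samples
              if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2]
    return {"count": len(inside),
            "mean": sum(inside) / len(inside) if inside else None}

samples = [(0, 0, 10.0), (1, 1, 20.0), (50, 50, 99.0)]
print(probe_summary(samples, 0, 0, 5))  # {'count': 2, 'mean': 15.0}
```

Several such probes, each with its own summary view, can then be compared side by side without zooming the whole map.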
Mesh simplification and discrete levels of detail (LOD) are well-studied areas of research in computer graphics. However, until recently, most of the developed algorithms have focused on simplification and viewing of a single object with a large number of polygons. When these algorithms are applied to a large collection of simple models, many objects may be completely erased, leading to results that are misleading to the viewer. In this paper, we present an approach to simplifying city-sized collections of 2.5D buildings based on the principles of "urban legibility" as defined by architects and city planners. We demonstrate that our method, although similar to traditional simplification methods when compared quantitatively, better preserves the legibility and understandability of a complex urban space at all levels of simplification.
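One way to preserve legibility is to aggregate nearby footprints into blocks rather than erase small buildings outright. A minimal sketch of that idea (greedy merging of axis-aligned footprints within a hypothetical `gap` tolerance; the paper's actual method is guided by urban-legibility principles and is more nuanced):

```python
def near(a, b, gap):
    """True if two axis-aligned footprints (xmin, ymin, xmax, ymax)
    are within `gap` of touching."""
    return (a[0] - gap <= b[2] and b[0] - gap <= a[2] and
            a[1] - gap <= b[3] and b[1] - gap <= a[3])

def merge_blocks(buildings, gap=5.0):
    """Greedy aggregation: nearby footprints merge into one block,
    so small structures are absorbed rather than deleted."""
    blocks = []
    for b in buildings:
        b = list(b)
        merged = True
        while merged:
            merged = False
            for blk in blocks:
                if near(b, blk, gap):
                    blocks.remove(blk)
                    b = [min(b[0], blk[0]), min(b[1], blk[1]),
                         max(b[2], blk[2]), max(b[3], blk[3])]
                    merged = True
                    break
        blocks.append(b)
    return blocks

# two adjacent houses merge into one block; the distant tower survives alone
print(merge_blocks([(0, 0, 10, 10), (12, 0, 22, 10), (100, 100, 110, 110)]))
```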
A perennially interesting research topic in the field of visual analytics is how to effectively develop systems that support organizational users' decision-making and reasoning processes. The problem, however, is that domain analytical practices generally vary from organization to organization. This leads to diverse designs of visual analytics systems for incorporating domain analytical processes, making it difficult to generalize success from one domain to another. Exacerbating this problem is the dearth of general models of analytical workflows available to enable such timely and effective designs. To alleviate these problems, we present a two-stage framework for informing the design of a visual analytics system. This design framework builds upon and extends current practices pertaining to analytical workflows and focuses, in particular, on incorporating both general domain analysis processes and individuals' analytical activities. We illustrate both stages and their design components through examples, and hope this framework will be useful for designing future visual analytics systems. We validate the soundness of our framework with two visual analytics systems, namely Entity Workspace [8] …

Previous studies [14,41,46] suggest that the establishment of a general design framework is significant. The objective for such a VA design framework is threefold: first, the framework must inform designers how to systematically incorporate support for domain analytical processes. Second, the framework should provide a basis for designers to evaluate their system and further help them identify a cohesive technology transition process, from system design and implementation to its release and deployment [49]. Finally, the framework must serve an educational purpose, contributing to the identification of potential course materials necessary to educate others regarding the field of VA [1]. However, constructing a convincing and appropriate design framework is challenging.
The framework must be validated against existing systems and, more importantly, it must give researchers and designers new ideas regarding how to evaluate and improve their own work. Given the need to incorporate successes from diverse VA systems, it is difficult to generate a framework that can summarize and instruct all the design requirements from a top-down perspective. High-level VA design frameworks like [14,41,44,55] are certainly of great value. Nonetheless, little specific guidance or recommendation is currently available to articulate the boundaries within which particular design assumptions apply, leaving system design to be based solely on designers' prior experience. For example, how does a designer know which analysis method is suitable for characterizing an organization? Are there components that a designer should follow to systematically incorporate a domain analytical process? Further, what recommendations exist that specify appropriate methods for supporting both general and individual analytical workflows? Encouraged by the di...
Many previous approaches to detecting urban change from LIDAR point clouds interpolate the points into rasters, perform pixel-based image processing to detect changes, and produce 2D images as output. We present a method of LIDAR change detection that maintains accuracy by only using the raw, irregularly spaced LIDAR points, and extracts relevant changes as individual 3D models. We then utilize these models, alongside existing GIS data, within an interactive application that allows the chronological exploration of the changes to an urban environment. A three-tiered level-of-detail system maintains a scale-appropriate, legible visual representation across the entire range of view scales, from individual changes such as buildings and trees, to groups of changes such as new residential developments, deforestation, and construction sites, and finally to larger regions such as neighborhoods and districts of a city that are emerging or undergoing revitalization. Tools are provided to assist the visual analysis by urban planners and historians through semantic categorization and filtering of the changes presented.
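The point-based comparison at the heart of such a pipeline can be sketched as follows (brute-force nearest neighbour on toy data; a real implementation would use a spatial index such as a k-d tree, and the authors' own criteria for which changes are relevant):

```python
import math

def detect_changes(old_pts, new_pts, threshold=2.0):
    """Flag points in the new survey with no nearby counterpart in the
    old survey (appearances); calling with the arguments swapped finds
    removals.  Works directly on irregular 3D points, no rasterization."""
    changes = []
    for p in new_pts:
        nearest = min(math.dist(p, q) for q in old_pts)
        if nearest > threshold:
            changes.append(p)
    return changes

old = [(0, 0, 0), (10, 0, 0), (20, 0, 0)]
new = [(0, 0, 0), (10, 0, 5), (20, 0, 0)]   # one point raised by 5 m
print(detect_changes(old, new))  # [(10, 0, 5)]
```

Clusters of flagged points would then be meshed into the individual 3D change models the system displays.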
Modern ocean flow simulations are generating increasingly complex, multi-layer 3D ocean flow models. However, most researchers are still using traditional 2D visualizations to visualize these models one slice at a time. Properly designed 3D visualization tools can be highly effective for revealing the complex, dynamic flow patterns and structures present in these models. However, the transition from visualizing ocean flow patterns in 2D to 3D presents many challenges, including occlusion and depth ambiguity. Further complications arise from the interaction methods required to navigate, explore, and interact with these 3D datasets. We present a system that employs a combination of stereoscopic rendering, to best reveal and illustrate 3D structures and patterns, and multi-touch interaction, to allow for natural and efficient navigation and manipulation within the 3D environment. Exploratory visual analysis is facilitated through the use of a highly interactive toolset which leverages a smart particle system. Multi-touch gestures allow users to quickly position dye-emitting tools within the 3D model. Finally, we illustrate the potential applications of our system through examples of real-world significance.
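The dye tools rest on classic particle advection through the flow field; a minimal midpoint-integration sketch (the smart particle system described here is assumed to be considerably richer):

```python
def advect(pos, velocity_at, dt, steps):
    """Trace a dye particle through a steady 3D flow field with
    midpoint (RK2) integration; velocity_at is any callable
    (x, y, z) -> (u, v, w)."""
    x, y, z = pos
    path = [(x, y, z)]
    for _ in range(steps):
        u1, v1, w1 = velocity_at(x, y, z)
        # resample the field at the midpoint of the tentative step
        u2, v2, w2 = velocity_at(x + 0.5 * dt * u1,
                                 y + 0.5 * dt * v1,
                                 z + 0.5 * dt * w1)
        x, y, z = x + dt * u2, y + dt * v2, z + dt * w2
        path.append((x, y, z))
    return path

# uniform eastward current of 1 m/s
path = advect((0.0, 0.0, 0.0), lambda x, y, z: (1.0, 0.0, 0.0), dt=1.0, steps=3)
print(path[-1])  # (3.0, 0.0, 0.0)
```

Emitting many such particles from a user-positioned source yields the familiar dye-streak visualization.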
Previous perceptual research and human factors studies have identified several effective methods for texturing 3D surfaces to ensure that their curvature is accurately perceived by viewers. However, most of these studies examined the application of these techniques to static surfaces. This paper explores the effectiveness of applying these techniques to dynamically changing surfaces. When these surfaces change shape, common texturing methods, such as grids and contours, induce a range of different motion cues, which can draw attention and provide information about the size, shape, and rate of change. A human factors study was conducted to evaluate the relative effectiveness of these methods when applied to dynamically changing pseudo-terrain surfaces. The results indicate that, while no technique is most effective for all cases, contour lines generally perform best, and that the pseudo-contour lines induced by banded color scales convey similar benefits.
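The pseudo-contours arise from simple quantization of the scalar field into discrete color bands; a sketch (band count and value range are illustrative):

```python
def band_index(value, vmin, vmax, n_bands):
    """Map a scalar to one of n discrete bands; the boundaries between
    bands appear as pseudo-contour lines on the shaded surface."""
    t = (value - vmin) / (vmax - vmin)
    return max(0, min(n_bands - 1, int(t * n_bands)))

# elevations 0..100 m split into 5 bands of 20 m each
print([band_index(e, 0, 100, 5) for e in (5, 25, 47, 99)])  # [0, 1, 2, 4]
```

Wherever adjacent surface samples fall into different bands, the rendered color changes abruptly, producing a contour-like edge that moves as the surface deforms.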
Three-dimensional vector fields are common datasets throughout the sciences. Visualizing these fields is inherently difficult due to issues such as visual clutter and self-occlusion. Cutting planes are often used to overcome these issues by presenting more manageable slices of data. The existing literature provides many techniques for visualizing the flow through these cutting planes; however, there is a lack of empirical studies focused on the underlying perceptual cues that make popular techniques successful. This paper presents a quantitative human factors study that evaluates static monoscopic depth and orientation cues in the context of cutting-plane glyph designs for exploring and analyzing 3D flow fields. The goal of the study was to ascertain the relative effectiveness of various techniques for portraying the direction of flow through a cutting plane at a given point, and to identify which visual cues, alone and in combination, contribute to accurate performance. It was found that increasing the dimensionality of line-based glyphs into tubular structures enhances their ability to convey orientation through shading, and that increasing their diameter intensifies this effect. These tube-based glyphs were also less sensitive to visual clutter issues at higher densities. Adding shadows to lines was also found to improve perception of flow direction. Implications of the experimental results are discussed and extrapolated into a number of guidelines for designing more perceptually effective glyphs for 3D vector field visualizations.
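A glyph at a cutting-plane sample must convey both the in-plane and out-of-plane parts of the flow; the underlying decomposition is elementary (sketch assumes a unit-length plane normal):

```python
def decompose_flow(v, normal):
    """Split a flow vector at a cutting-plane sample into its
    out-of-plane component (along the unit plane normal) and its
    in-plane component -- the two quantities a glyph must convey."""
    dot = sum(a * b for a, b in zip(v, normal))
    out_of_plane = tuple(dot * n for n in normal)
    in_plane = tuple(a - b for a, b in zip(v, out_of_plane))
    return in_plane, out_of_plane

# flow (1, 2, 3) through a z = const cutting plane
in_p, out_p = decompose_flow((1.0, 2.0, 3.0), (0.0, 0.0, 1.0))
print(in_p, out_p)  # (1.0, 2.0, 0.0) (0.0, 0.0, 3.0)
```

A tube glyph would be oriented along the full vector, with its shading and diameter helping viewers judge how far it tilts out of the plane.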