Abstract: This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically appl…
“…Representing image and volume data via probability density functions (PDFs) allows for consistent multi-resolution rendering and processing of both image [14] and volume data [27,40]. These PDFs can be approximated sparsely and, e.g., be stored in sparse PDF maps [14] and sparse PDF volumes [27], respectively. These representations are very similar to standard mipmaps [36], but enable the accurate and efficient evaluation of color mapping and non-linear filtering.…”
Section: Related Work
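The advantage the excerpt above attributes to PDF-based representations over standard mipmaps stems from a basic fact: a non-linear transfer function does not commute with linear (averaging) filtering, whereas a stored pdf lets the mapping be evaluated in expectation. A minimal numeric sketch of the discrepancy (hypothetical intensities and a hypothetical transfer function, not the papers' actual data):

```python
import numpy as np

# Two neighboring voxel intensities (hypothetical values).
voxels = np.array([0.1, 0.9])

# A non-linear "transfer function" mapping intensity to opacity.
def tf(x):
    return x ** 4  # sharp ramp: only high intensities become opaque

# Standard mipmap: average the voxels first, then apply the mapping.
mip_then_tf = tf(voxels.mean())   # tf(0.5) = 0.0625

# PDF-based: keep the neighborhood's intensity distribution and
# apply the transfer function in expectation over it.
tf_then_mip = tf(voxels).mean()   # (0.0001 + 0.6561) / 2 = 0.3281

print(mip_then_tf, tf_then_mip)   # the two disagree substantially
```

Because the two results differ by a factor of about five here, a mipmap that averaged intensities before color mapping would render the coarse level visibly wrong; evaluating the mapping against the stored pdf keeps coarse levels consistent with the full-resolution image.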
“…These representations are very similar to standard mipmaps [36], but enable the accurate and efficient evaluation of color mapping and non-linear filtering. Accurate approximation of PDFs is possible using isotropic Gaussians [14,27]. Our approach uses …”
[Figure caption from the citing paper: This makes it impossible to discern small features (orange circles) in large particle data, such as the Copper/Silver mixture shown here, during interactive exploration.]
Section: Related Work
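The excerpt above mentions approximating PDFs with isotropic Gaussians [14,27]. One way to picture how such a representation stays sparse across resolution levels is mixture reduction: a coarser voxel's pdf is the average of its children's Gaussian mixtures, greedily merged back down to a few components. This is only an illustrative sketch in 1D (intensity only), with hypothetical data, not the actual algorithm of [14] or [27]:

```python
import math

# A pdf is a small mixture of 1D Gaussians: [(weight, mean, sigma), ...].

def merge(g1, g2):
    """Moment-preserving merge of two weighted Gaussian components."""
    (w1, m1, s1), (w2, m2, s2) = g1, g2
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    # Combined variance = weighted variances + spread of the means.
    var = (w1 * (s1**2 + (m1 - m)**2) + w2 * (s2**2 + (m2 - m)**2)) / w
    return (w, m, math.sqrt(var))

def downsample(children, max_terms=4):
    """Coarser-level pdf: average the child mixtures, then greedily
    merge the pair of components with the closest means until sparse."""
    mix = [(w / len(children), m, s)
           for pdf in children for (w, m, s) in pdf]
    while len(mix) > max_terms:
        i, j = min(((a, b) for a in range(len(mix))
                    for b in range(a + 1, len(mix))),
                   key=lambda ab: abs(mix[ab[0]][1] - mix[ab[1]][1]))
        gj = mix.pop(j)
        mix[i] = merge(mix[i], gj)
    return mix

# Eight children, each a single-Gaussian pdf (hypothetical intensities).
children = [[(1.0, m, 0.05)]
            for m in [0.1, 0.12, 0.11, 0.5, 0.52, 0.9, 0.88, 0.91]]
coarse = downsample(children, max_terms=3)
print(coarse)  # three components, roughly around 0.11, 0.51, 0.90
```

The coarse pdf keeps total probability mass 1 while storing only three components instead of eight, which is the kind of sparsity the 4D-coherence argument in the abstract relies on.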
“…An example of a hand-drawn transfer function is shown in the supplemental material. Analogously to scale-consistent transfer functions for volumes [27], accurate filtering is indispensable for hand-drawn transfer functions, as sparsely sampled or linearly filtered normals would result in lookups that do not reflect the actual sub-pixel structures present in the data.…”
Section: Exploration Via the S-NDF Explorer Widget
“…Several techniques (e.g., [Kraus and Bürger 2008; Sicat et al. 2014]) have been developed previously to interpolate voxelized data for efficient 3D visualization. Although these approaches could be adapted to handle some scattering parameters, they normally offer limited accuracy.…”
[Figure 1 panel labels: Reference (size in GB) vs. Ours (size in MB); relative-error color scale from 0% to 100%.]
Figure 1: We present a new approach to compute scattering parameters at reduced resolutions. Many detailed appearance models involve high-resolution volumetric representations (top-left). Such level of detail leads to high storage but is usually unnecessary especially when the object is rendered at a distance. However, naïve downsampling often loses intrinsic shadowing structures and brightens resulting images (see the insets). Our method computes scaled phase functions, a combined representation of single-scattering albedo and phase function, and provides significantly better accuracy while reducing the data size by almost three orders of magnitude (top-right).
Abstract: Volumetric micro-appearance models have provided remarkably high-quality renderings, but are highly data intensive and usually require tens of gigabytes in storage. When an object is viewed from a distance, the highest level of detail offered by these models is usually unnecessary, but traditional linear downsampling weakens the object's intrinsic shadowing structures and can yield poor accuracy. We introduce a joint optimization of single-scattering albedos and phase functions to accurately downsample heterogeneous and anisotropic media. Our method is built upon scaled phase functions, a new representation combining albedos and (standard) phase functions. We also show that modularity can be exploited to greatly reduce the amortized optimization overhead by allowing multiple synthesized models to share one set of downsampled parameters. Our optimized parameters generalize well to novel lighting and viewing configurations, and the resulting data sets offer several orders of magnitude storage savings.
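The abstract's claim that linear downsampling of volumetric media yields poor accuracy follows from the same non-linearity problem: light transport quantities such as transmittance depend exponentially on density, so a single averaged density cannot reproduce the behavior of a heterogeneous block. A small sketch with hypothetical densities (this illustrates the motivation only, not the paper's scaled-phase-function optimization):

```python
import numpy as np

# Densities along rays through a heterogeneous block (hypothetical):
# half the rays pass through dense fibers, half through near-empty gaps.
sigma = np.array([8.0, 0.1, 8.0, 0.1])
L = 1.0  # path length through the block

# Reference: average the per-ray Beer-Lambert transmittances.
true_T = np.exp(-sigma * L).mean()    # ~0.45: gaps let much light through

# Naive linear downsampling: average densities first, then one exponential.
naive_T = np.exp(-sigma.mean() * L)   # exp(-4.05) ~ 0.017: far too opaque

print(true_T, naive_T)
```

The two transmittances differ by more than an order of magnitude, and the error depends on the spatial structure of the medium, which is why the paper optimizes the downsampled scattering parameters against reference renderings instead of averaging them.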
“…[YMC06] have proposed improving the visual quality of multi‐resolution volume rendering by approximating the voxel data distribution by its mean and variance at each level of detail. The recently introduced sparse pdf volumes [SKMH14] and sparse pdf maps [HSB*12], respectively, represent the data distributions more accurately. For sparse pdf volumes, this allows for consistent multi‐resolution volume rendering [SKMH14], i.e.…”
Section: Data Representation and Storage
This survey gives an overview of the current state of the art in GPU techniques for interactive large‐scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga‐, tera‐ and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out‐of‐core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e. ‘output‐sensitive’ algorithms and system designs. This leads to recent output‐sensitive approaches that are ‘ray‐guided’, ‘visualization‐driven’ or ‘display‐aware’. In this survey, we focus on these characteristics and propose a new categorization of GPU‐based large‐scale volume visualization techniques based on the notions of actual output‐resolution visibility and the current working set of volume bricks—the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context—the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we present in this survey.
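The survey's notion of address translation over a working set of volume bricks can be sketched as a page-table lookup: a global voxel address is split into a brick coordinate and a local offset, and only resident bricks have cache slots. A minimal CPU-side sketch (hypothetical layout and names; real systems implement this on the GPU with texture atlases and LRU eviction):

```python
# Virtual volume split into bricks; a page table maps brick coordinates
# to slots in a brick cache holding only the current working set.

BRICK = 32  # voxels per brick edge

page_table = {}  # (bx, by, bz) -> cache slot index, for resident bricks
cache = {}       # slot -> brick payload (stand-in for a GPU texture atlas)

def upload(brick_coord, payload):
    """Make a brick resident (real systems evict via LRU when full)."""
    slot = len(cache)
    cache[slot] = payload
    page_table[brick_coord] = slot

def sample(x, y, z):
    """Translate a global voxel address into a cache-local address."""
    brick = (x // BRICK, y // BRICK, z // BRICK)
    slot = page_table.get(brick)
    if slot is None:
        # Page fault: ray-guided renderers record the missing brick so
        # the streaming system can fetch it for the next frame.
        return None
    local = (x % BRICK, y % BRICK, z % BRICK)
    return cache[slot], local

upload((1, 0, 0), payload="brick-data")
hit = sample(40, 3, 7)   # lies in brick (1, 0, 0), which is resident
miss = sample(5, 5, 5)   # brick (0, 0, 0) is not resident: page fault
print(hit, miss)
```

This is exactly where the survey's output-sensitivity argument bites: the cost of `sample` is independent of the full volume size, and only bricks that rays actually touch ever need to become resident.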