Abstract: In this paper we present a novel GPU-friendly real-time voxelization technique for rendering homogeneous media defined by particles, e.g., fluids obtained from particle-based simulations such as Smoothed Particle Hydrodynamics (SPH). Our method computes view-adaptive binary voxelizations with on-the-fly compression of a tiled perspective voxel grid, achieving higher resolutions than previous approaches. It allows for interactive generation of realistic images, enabling advanced rendering techniques suc…
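The binary voxelization described above can be illustrated with a minimal sketch. This is not the paper's GPU implementation: it is a toy CPU version, assuming particle positions in the unit cube, where each (x, y) cell of a grid stores an integer bitmask marking occupied slices along the depth axis, standing in for the compressed, view-aligned binary grids such methods build.

```python
def voxelize_binary(particles, radius, grid_n, depth_bits=32):
    """Splat spherical particles into a binary voxel grid.

    Each (x, y) cell holds an integer bitmask whose set bits mark
    occupied slices along the depth (z) axis.  Positions are assumed
    to lie in the unit cube [0, 1)^3; all parameters are illustrative.
    """
    grid = [[0] * grid_n for _ in range(grid_n)]
    for px, py, pz in particles:
        # Conservative axis-aligned bounds of the particle's sphere.
        x0 = max(int((px - radius) * grid_n), 0)
        x1 = min(int((px + radius) * grid_n), grid_n - 1)
        y0 = max(int((py - radius) * grid_n), 0)
        y1 = min(int((py + radius) * grid_n), grid_n - 1)
        z0 = max(int((pz - radius) * depth_bits), 0)
        z1 = min(int((pz + radius) * depth_bits), depth_bits - 1)
        # One contiguous run of set depth bits covers the occupied slices.
        mask = ((1 << (z1 - z0 + 1)) - 1) << z0
        for y in range(y0, y1 + 1):
            for x in range(x0, x1 + 1):
                grid[y][x] |= mask
    return grid
```

Storing a whole depth column as one machine word is what makes such grids cheap to build and traverse; a GPU version would compute the masks in parallel per particle and combine them with atomic OR operations.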
“…In order to make the results as comparable as possible we have to account for the three major factors affecting the rendering time: the number of points/particles in the scene, the screen resolution, and the number of spp. All methods use 1 spp except [53] and [47], which use 2 spp for particle inter-reflections, making the methods' spp counts fairly comparable when assessing performance. We have categorized methods as achieving real-time (≥ 10 FPS) or interactive (≥ 1 FPS) frame rates, mutually exclusively, and reported the maximum number of points still renderable at real-time or interactive frame rates.…”
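The mutually exclusive frame-rate categories quoted above can be expressed as a small classifier; the thresholds are the survey's, while the "offline" label for everything below 1 FPS is an assumption of this sketch.

```python
def classify_frame_rate(fps):
    """Mutually exclusive categories using the survey's thresholds:
    real-time at >= 10 FPS, interactive at >= 1 FPS, otherwise
    "offline" (a label assumed here, not taken from the survey)."""
    if fps >= 10.0:
        return "real-time"
    if fps >= 1.0:
        return "interactive"
    return "offline"
```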
Section: Summary and Discussion
“…A view-space-aligned voxelized sphere particle method fully implemented on the GPU was published in [53]. With photorealistic interparticle refractions and reflections between up to seven fluid layers, they achieved interactive frame rates.…”
Readily available RGB-D cameras in smartphones and improving 3D scanning technologies have made it possible to produce detailed point cloud and point-based models of real-world objects even in real time. Rendering such models in high quality and at satisfactory frame rates is needed for realistic extended reality (XR) applications. This publication reviews real-time photorealistic point cloud rendering methods which directly ray trace or rasterize point cloud models, with an emphasis on ray tracing and real-time performance. We found that real-time direct point cloud ray tracing research has focused on static, non-animated content; open research possibilities thus include adapting modern dedicated ray tracing hardware for increased performance on animated and live-captured scenes, and adding path tracing techniques to increase photorealistic effects in the rendering result. A categorization and discussion of the capabilities of state-of-the-art photorealistic point cloud rendering methods is presented by surveying both real-time and offline methods, which are assumed to become real-time capable with advances in near-future hardware. Challenges and future trends are derived by comparing different rasterization and ray tracing methods as well as acceleration structures for point clouds in terms of produced rendering effects and speed.
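Direct point cloud ray tracing, as surveyed above, at minimum requires intersecting rays against the point primitives. A common simplest choice is to treat each point as a small sphere; the brute-force sketch below (no acceleration structure, fixed world-space radius, unit-length ray direction assumed) shows the core intersection test that real systems wrap in a BVH or hardware ray tracing.

```python
import math

def trace_points(points, radius, ray_o, ray_d):
    """Nearest ray hit against point splats treated as spheres of a
    fixed radius.  Brute force over all points; ray_d is assumed to
    be unit length.  Returns the hit distance t, or None on a miss."""
    best_t = math.inf
    for c in points:
        oc = [ray_o[k] - c[k] for k in range(3)]
        # Half-b form of the ray-sphere quadratic (a == 1 for unit ray_d).
        b = sum(oc[k] * ray_d[k] for k in range(3))
        disc = b * b - (sum(v * v for v in oc) - radius * radius)
        if disc < 0.0:
            continue  # ray misses this sphere
        t = -b - math.sqrt(disc)  # nearer of the two intersections
        if 0.0 < t < best_t:
            best_t = t
    return best_t if best_t < math.inf else None
```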
In this paper, we present a novel method for the direct volume rendering of large smoothed‐particle hydrodynamics (SPH) simulation data without transforming the unstructured data to an intermediate representation. By directly visualizing the unstructured particle data, we avoid long preprocessing times and large storage requirements. This enables the visualization of large, time‐dependent, and multivariate data both as a post‐process and in situ. To address the computational complexity, we introduce stochastic volume rendering that considers only a subset of particles at each step during ray marching. The sample probabilities for selecting this subset at each step are thereby determined both in a view‐dependent manner and based on the spatial complexity of the data. Our stochastic volume rendering enables us to scale continuously from a fast, interactive preview to a more accurate volume rendering at higher cost. Lastly, we discuss the visualization of free‐surface and multi‐phase flows by including a multi‐material model with volumetric and surface shading into the stochastic volume rendering.
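The stochastic volume rendering idea above can be sketched in a few lines: during ray marching, each particle is evaluated at a given step only with some probability, and its contribution is reweighted by the inverse of that probability so the accumulated optical depth stays unbiased in expectation. The kernel and all parameters below are illustrative, not the paper's (which also adapts the probabilities per step from view and data complexity).

```python
import random

def stochastic_ray_march(particles, ray_o, ray_d, n_steps, step, p_sample, rng):
    """Accumulate optical depth along a ray, evaluating each particle
    only with probability p_sample per step and reweighting by
    1/p_sample to keep the estimate unbiased.  Particles are
    (x, y, z, support_radius, mass); the quadratic falloff is a toy
    stand-in for an SPH kernel."""
    tau = 0.0
    for i in range(n_steps):
        # Sample position at the midpoint of step i.
        x = [ray_o[k] + (i + 0.5) * step * ray_d[k] for k in range(3)]
        for (px, py, pz, h, mass) in particles:
            if rng.random() >= p_sample:
                continue  # skipped this step; compensated by the 1/p weight
            d2 = (x[0] - px) ** 2 + (x[1] - py) ** 2 + (x[2] - pz) ** 2
            if d2 < h * h:
                w = mass * (1.0 - d2 / (h * h))
                tau += step * w / p_sample
    return tau
```

Setting `p_sample = 1.0` recovers plain ray marching over all particles; lowering it trades variance for speed, which is the continuous preview-to-quality scaling the abstract describes.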
“…Dos Santos Brito et al. [dSBVeSTT18] presented a ray tracing method for particle data that achieves several frames per second for millions of particles. Zirr and Dachsbacher [ZD18] proposed an on-the-fly voxelization method for particle data as well as an accelerated ray casting approach for rendering that achieves a considerable speedup compared to the method by Fraedrich et al. [FAW10]. While these approaches focus on more general particle data, several techniques specifically targeted at molecular visualization have been presented.…”
Molecular surface representations are an important tool for the visual analysis of molecular structure and function. In this paper, we present a novel method for the visualization of dynamic molecular surfaces based on the Gaussian model. In contrast to previous approaches, our technique does not rely on the construction of intermediate representations such as grids or triangulated surfaces. Instead, it operates entirely in image space, which enables us to exploit visibility information to efficiently skip unnecessary computations. With this visibility‐driven approach, we can visualize dynamic high‐quality surfaces for molecules consisting of millions of atoms. Our approach requires no preprocessing, allows for the interactive adjustment of all properties and parameters, and is significantly faster than previous approaches, while providing superior quality.
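The Gaussian model underlying such surfaces defines a scalar density as a sum of per-atom Gaussians and takes the molecular surface as an isosurface of that density. A minimal sketch, with illustrative per-atom widths and iso value (the paper's image-space evaluation and visibility culling are not reproduced here):

```python
import math

def gaussian_density(atoms, x, y, z):
    """Sum-of-Gaussians density: each atom (ax, ay, az, sigma)
    contributes exp(-d^2 / (2*sigma^2)) at distance d.  The molecular
    surface is the isosurface density == iso; sigma and iso are
    illustrative parameters, not taken from the paper."""
    rho = 0.0
    for (ax, ay, az, sigma) in atoms:
        d2 = (x - ax) ** 2 + (y - ay) ** 2 + (z - az) ** 2
        rho += math.exp(-d2 / (2.0 * sigma * sigma))
    return rho

def inside_surface(atoms, p, iso=0.5):
    """True where the density meets or exceeds the iso value."""
    return gaussian_density(atoms, *p) >= iso
```

An image-space renderer would evaluate this density (or its analytic gradient, for shading) only along visible rays, which is where the visibility-driven skipping described above pays off.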