Abstract: We present a framework to volume-render three-dimensional data cubes interactively using distributed ray-casting and volume-bricking over a cluster of workstations, each powered by one or more graphics processing units (GPUs) and a multi-core central processing unit (CPU). The main design target for this framework is to provide an in-core visualization solution able to deliver three-dimensional interactive views of terabyte-sized data cubes. We tested the presented framework using a computing cluster comprising 64 n…
“…The worst-case memory usage of a single process when using the k-d tree structure is O(v), where v is the number of voxels in the volume. The risk that a high data imbalance occurs limits the use of k-d tree based dynamic load balancing in large-scale applications, where even small data imbalances can result in some processes running out of memory [5].…”
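The quoted O(v) worst case arises because cost-driven k-d splits can assign nearly all voxels to a single leaf. A minimal sketch of this failure mode, assuming a toy skewed cost model; the function names, the cost numbers, and the greedy split search are illustrative assumptions, not code from the cited paper:

```python
# Hypothetical sketch: k-d tree partitioning of a voxel volume among
# processes, splitting by estimated render cost rather than voxel count.

def split_volume(extent, costs, n_procs, axis=0):
    """Recursively split a 3-D extent into n_procs leaf regions.

    extent: ((x0, x1), (y0, y1), (z0, z1)) half-open voxel ranges
    costs:  function mapping an extent to an estimated render cost
    Returns a list of leaf extents, one per process.
    """
    if n_procs == 1:
        return [extent]
    left_procs = n_procs // 2
    lo, hi = extent[axis]
    total = costs(extent)
    # Pick the split plane whose left-side cost share best matches
    # the share of processes assigned to the left subtree.
    best_cut, best_err = lo + 1, float("inf")
    for cut in range(lo + 1, hi):
        left = list(extent); left[axis] = (lo, cut)
        err = abs(costs(tuple(left)) / total - left_procs / n_procs)
        if err < best_err:
            best_cut, best_err = cut, err
    left = list(extent); left[axis] = (lo, best_cut)
    right = list(extent); right[axis] = (best_cut, hi)
    nxt = (axis + 1) % 3
    return (split_volume(tuple(left), costs, left_procs, nxt)
            + split_volume(tuple(right), costs, n_procs - left_procs, nxt))

def voxel_count(extent):
    n = 1
    for lo, hi in extent:
        n *= hi - lo
    return n

# Skewed cost model: nearly all render cost lies in a thin slab, so
# cost-balanced splits hand one process almost the whole volume.
def skewed_cost(extent):
    (x0, x1), _, _ = extent
    return sum(1000 if x < 2 else 1 for x in range(x0, x1))

leaves = split_volume(((0, 16), (0, 16), (0, 16)), skewed_cost, 4)
print(max(voxel_count(l) for l in leaves))  # one leaf holds most of the voxels
```

With this skew, one of the four leaves ends up holding the bulk of the 16³ volume, which is exactly the memory-imbalance risk the excerpt describes.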
Section: A Low Scheduling Complexity the Strict K-d Tree Load
confidence: 99%
“…In large-scale applications the data sets could consist of multiple terabytes of data, meaning that even small-scale data transfers can be time consuming and result in some processes exceeding their available amount of memory. As such, many large-scale visualization projects have utilized static techniques [5], [12] or have limited load balancing to equalizing the data distribution, rather than explicitly lowering the total render time [10], [13]. Dorier et al. [13] developed a technique that identifies important simulation data in situ and reduces less important data subject to a time-limit constraint.…”
Section: Related Work
confidence: 99%
“…However, dynamically redistributing data can result in large memory imbalances between processes. For large data sets some processes might run out of memory, making many dynamic load balancing techniques unsuitable for large-scale visualization [4], [5]. Commonly used dynamic load balancing techniques are based on tree structures, e.g., a k-d tree [6].…”
We propose a novel compositing pipeline and a dynamic load balancing technique for volume rendering which utilizes a two-layered group structure to achieve effective and scalable load balancing. The technique enables each process to render data from non-contiguous regions of the volume with minimal impact on the total render time. We demonstrate the effectiveness of the proposed technique by performing a set of experiments on a modern GPU cluster. The experiments show that using the technique results in up to 35.7% lower worst-case memory usage compared to a dynamic k-d tree load balancing technique, whilst simultaneously achieving similar or higher render performance. The proposed technique was also able to lower the amount of transferred data during the load balancing stage by up to 72.2%. The technique has the potential to be used in many scenarios where other dynamic load balancing techniques have proved to be inadequate, such as during large-scale visualization.
Key words: large-scale visualization, distributed computing, load balancing, GPU
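The excerpts do not spell out the paper's two-layered algorithm, but the general idea of hierarchical load balancing (equalize load between groups of processes, then within each group) can be sketched as follows. The greedy policies, the threshold, the brick-cost numbers, and all names here are illustrative assumptions, not the authors' method:

```python
# Hypothetical sketch of two-layered load balancing: processes are split
# into groups; the most- and least-loaded groups exchange whole bricks,
# then render cost is equalized within each group.

def balance_within(group):
    """Greedily move bricks from the busiest to the idlest process."""
    for _ in range(100):
        group.sort(key=lambda p: sum(p["bricks"]))
        lo, hi = group[0], group[-1]
        if not hi["bricks"]:
            break
        brick = min(hi["bricks"])
        if sum(hi["bricks"]) - brick < sum(lo["bricks"]) + brick:
            break  # moving this brick would overshoot; stop
        hi["bricks"].remove(brick)
        lo["bricks"].append(brick)
    return group

def balance_groups(groups, threshold=1.5):
    """Move one brick at a time between groups while the heaviest group
    exceeds the lightest by more than the threshold factor, then balance
    inside each group."""
    def load(g):
        return sum(sum(p["bricks"]) for p in g)
    for _ in range(100):
        groups.sort(key=load)
        if load(groups[-1]) <= threshold * load(groups[0]):
            break
        donor = max(groups[-1], key=lambda p: sum(p["bricks"]))
        recv = min(groups[0], key=lambda p: sum(p["bricks"]))
        brick = min(donor["bricks"])
        donor["bricks"].remove(brick)
        recv["bricks"].append(brick)
    for g in groups:
        balance_within(g)
    return groups

# Each brick list holds per-brick render costs (arbitrary units).
groups = [
    [{"bricks": [9, 8, 7]}, {"bricks": [6]}],   # overloaded group
    [{"bricks": [1]}, {"bricks": [1]}],         # underloaded group
]
balance_groups(groups)
```

Because inter-group transfers stop as soon as the imbalance falls under the threshold, far fewer bricks move than under a global rebalance, which is consistent with the reduced data-transfer volume the abstract reports, though the real pipeline is certainly more involved.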
“…With adequate computational resources, distributed computing can be highly efficient in handling many large-scale scientific problems [27]-[32]. For instance, Wijerathne et al. [33] used a cluster of workstations to simulate the seismic damage of buildings in Tokyo.…”
“…Visualizing and animating output for inspection or study can be run as parallel processes or on a graphics processing unit (GPU; Hassan et al. 2012). Once keyframes have been inserted, the frames generated by the render process become independent.…”
ABSTRACT. Astronomical data take on a multitude of forms: catalogs, data cubes, images, and simulations. The availability of software for rendering high-quality three-dimensional graphics lends itself to the paradigm of exploring the incredible parameter space afforded by the astronomical sciences. The software program Blender gives astronomers a useful tool for displaying data in a manner used by three-dimensional (3D) graphics specialists and animators. The interface to this popular software package is introduced with attention to features of interest in astronomy. An overview of the steps for generating models, textures, animations, camera work, and renders is outlined. An introduction is presented on the methodology for producing animations and graphics with a variety of astronomical data. Examples from subfields of astronomy with different kinds of data are shown with resources provided to members of the astronomical community. An example video showcasing the outlined principles and features is provided along with scripts and files for sample visualizations.