2014 International Conference on High Performance Computing & Simulation (HPCS)
DOI: 10.1109/hpcsim.2014.6903746

Multi-step image compositing for massively parallel rendering

Abstract: High performance visualization has played an important role in computer-aided scientific discovery and has become an indispensable tool for computational scientists. Sort-last parallel rendering is a proven approach to visual data analytics, extracting meaningful information from the huge data sets generated by large-scale scientific computing. Image compositing is the last stage of the sort-last parallel rendering pipeline; it combines the images generated by the rendering nodes to produce the final im…

Cited by 4 publications (5 citation statements); references 20 publications.
“…In massively parallel rendering environments where tens of thousands of rendering nodes are involved, it is important to have a scalable image composition code for this degree of parallelism. However, scalability problems in massively parallel image composition environments have already been reported on different HPC systems such as the T2K Open Supercomputer [9], IBM Blue Gene/L [10], IBM Blue Gene/P [11], and the K computer [12]. On the K computer, besides this performance degradation when using large composition node counts, there is a more critical issue: MPI_Gatherv, an MPI collective function used to implement the aforementioned image composition algorithms, does not work above 50K composition node counts due to a buffer error (MRQ overflow) in the current MPI implementation for the K computer.…”
Section: Sort-first
confidence: 99%
“…Although it might be a temporary problem, large-scale parallel image composition using 64K composition nodes, such as that presented in [11], is not directly possible. In this paper, we focused on the Multi-Step approach presented in [12] and expanded the investigation to include a floating-point pixel format, to meet high-quality rendering requirements, and a method for selecting the maximum group size to be used in each…”
Section: Sort-first
confidence: 99%
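The multi-step idea referenced in the excerpts above — composite within bounded-size groups rather than in one global gather, then composite the group results — can be sketched in plain Python. This is a minimal illustration only: the `max_group` parameter, the `(depth, color)` per-pixel representation, and all function names are assumptions for the sketch, not the paper's actual API; a real implementation would perform each group's composition with per-group MPI communicators and collectives.

```python
def composite_pair(a, b):
    # Depth compositing: per pixel, keep the fragment closest to the viewer
    # (smallest depth value). Each image is a list of (depth, color) pixels.
    return [pa if pa[0] <= pb[0] else pb for pa, pb in zip(a, b)]

def composite_group(images):
    # Sequentially composite all images belonging to one group.
    result = images[0]
    for img in images[1:]:
        result = composite_pair(result, img)
    return result

def multi_step_composite(images, max_group):
    # Multi-step composition: repeatedly partition the current set of images
    # into groups of at most max_group members and composite within each
    # group, so no single step ever gathers more than max_group images.
    while len(images) > 1:
        images = [composite_group(images[i:i + max_group])
                  for i in range(0, len(images), max_group)]
    return images[0]
```

With `max_group = 2` and four single-pixel images at depths 3, 1, 2, and 5, the first step produces two group results (depths 1 and 2) and the second step composites those into the final image at depth 1. Bounding the group size is what sidesteps a single huge collective such as a global MPI_Gatherv over all composition nodes.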
“…As the scale of computational science simulations grows to deal with increasingly complex problems, we also observe a significant increase in the volume of data produced [Roten et al 2016]. To derive meaningful information from these huge datasets, leading to scientific discoveries and breakthroughs, scientists and engineers rely upon large-scale visualization and data analysis systems [Nonaka et al 2014]. Such data-intensive applications put great pressure on the shared back-end storage system of modern high performance computing (HPC) environments.…”
Section: Introduction
confidence: 99%