The calculation of pairwise correlation coefficients over a dataset, known as the correlation matrix, is widely used in data analysis, signal processing, pattern recognition, image processing, and bioinformatics. With state-of-the-art Graphics Processing Units (GPUs), whose massive numbers of cores can deliver up to several Gflops of throughput, correlation matrix calculation can be accelerated several times over traditional CPUs. However, due to the rapid growth of data in the digital era, the calculation becomes so compute-intensive that it needs to be executed on multiple GPUs. GPUs are now common components in the data centers of many institutions, and their deployment tends toward GPU clusters in which each node is equipped with one or more GPUs. In this paper, we propose a parallel computing approach based on hybrid MPI/CUDA programming for fast and efficient Pearson correlation matrix calculation on GPU clusters. At the coarse-grained level, the correlation matrix is partitioned into tiles that are distributed via MPI to execute concurrently on many GPUs. At the fine-grained level, a CUDA kernel on each node performs massively parallel computation on its GPU. To balance the load across all GPUs, we adopt a work pool model in which a master node manages tasks in the work pool and dynamically assigns them to worker nodes. Evaluation results show that the proposed approach balances the load across different GPUs and thus achieves better execution time than simple static data partitioning.
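The tiling scheme described above can be illustrated with a minimal single-process sketch. This is not the paper's MPI/CUDA implementation: NumPy matrix products stand in for the CUDA kernels, a plain task list stands in for the master's work pool, and the tile size and function names are our own illustrative choices. It does show the key ideas the abstract names: standardize each variable once, enumerate only the upper-triangular tiles as independent tasks, and fill the lower triangle by symmetry.

```python
import numpy as np

def pearson_tile(Z, i0, i1, j0, j1):
    # For standardized rows (zero mean, unit norm), a Pearson correlation
    # tile reduces to a small matrix product -- the per-GPU kernel's job.
    return Z[i0:i1] @ Z[j0:j1].T

def correlation_matrix_tiled(X, tile=4):
    # X: (n_vars, n_samples); rows are variables. Assumes no constant row,
    # otherwise the norm below is zero and the division is undefined.
    Z = X - X.mean(axis=1, keepdims=True)
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)
    n = X.shape[0]
    C = np.empty((n, n))
    # Work pool stand-in: each upper-triangular tile is one independent
    # task; in the paper's design a master would hand these to GPU workers.
    tasks = [(i, j) for i in range(0, n, tile) for j in range(i, n, tile)]
    for i, j in tasks:
        i1, j1 = min(i + tile, n), min(j + tile, n)
        T = pearson_tile(Z, i, i1, j, j1)
        C[i:i1, j:j1] = T
        C[j:j1, i:i1] = T.T  # symmetry: only half the tiles are computed
    return C
```

Because the tiles are mutually independent, dynamic assignment from a shared pool lets faster GPUs simply take more tiles, which is what gives the load balance over static partitioning.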
Data centers constantly face peak and fluctuating resource demand. Building a data center large enough to meet peak demand is not cost effective. The emergence of the Cloud computing model allows a data center to dynamically acquire additional resources on demand and pay only for the resources actually used. The data center thus gains the flexibility to satisfy users with higher QoS while keeping the investment in IT infrastructure at an affordable cost. In this work, we present an architecture for autonomic provisioning in a High Throughput Computing (HTC) cluster system using resources from the Cloud. The system is developed as an extension to Rocks Clusters. It allows a data center to transparently and securely extend a local cluster onto remote Cloud computing resources on demand through a dynamic provisioning mechanism. We also introduce a set of provisioning policies that are aware of the different resource requirements of each job and adapt the system accordingly. Experiments carried out on our testbed show that the proposed system is self-configuring and self-organizing.
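One way to make a job-aware provisioning policy concrete is the backlog-driven sketch below. This is an illustrative policy of our own, not one of the policies from the paper: the `Job` fields, the per-node capacity, and the decision rule (acquire enough cloud nodes to cover the queued core demand that idle local cores cannot absorb) are all assumptions chosen for the example.

```python
import math
from dataclasses import dataclass

@dataclass
class Job:
    cores: int       # cores requested by the job
    mem_gb: float    # memory requested by the job

def cloud_nodes_needed(queue, idle_local_cores, node_cores=8):
    """Hypothetical backlog policy: total queued core demand, minus what
    idle local cores can absorb, rounded up to whole cloud nodes of
    node_cores cores each. Returns how many nodes to acquire."""
    demand = sum(j.cores for j in queue)
    shortfall = max(0, demand - idle_local_cores)
    return math.ceil(shortfall / node_cores)
```

An autonomic controller would evaluate such a rule periodically, acquire or release nodes to match its output, and register the new nodes with the cluster scheduler, which is the self-configuring behavior the abstract describes.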