Large experiments and high-performance computer models generate many petabytes of data. While cloud computing systems may meet the needs of analyzing these petabytes by harnessing the computing power of many distributed computers, the key challenge in effectively utilizing such a distributed system is the data management process, including storing, indexing, searching, accessing, and transferring data. Most analysis tasks perform computations on a subset of a large collection of data records satisfying user-specified constraints on attribute (variable) values. This subsetting procedure is extremely important in that it reduces the network traffic to and from the cloud facilities. However, the selected data records often span many different data files, and extracting the values out of these files can be time-consuming, especially when the number of files is large. This work addresses the challenge of working with a large number of files. We use an astronomical data set as an example and apply an efficient database indexing technique, called FastBit, to significantly speed up the subsetting and thus optimize network usage. Overall, we aim to provide transparent and highly efficient attribute-based data access to scientists through a web-based Astronomy Data Analysis Portal. We discuss the system design and options for managing an extremely large number of files while minimizing network usage and latency.
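The subsetting technique above rests on bitmap indexing. The following is a minimal Python sketch of the equality-encoded bitmap-index idea behind FastBit, written only to illustrate the concept; it is not the FastBit C++ API, and all class and function names here are hypothetical.

```python
# Minimal sketch of equality-encoded bitmap indexing, the core idea
# behind FastBit-style attribute subsetting. Illustration only; not
# the FastBit C++ API. All names here are hypothetical.

class BitmapIndex:
    """One bitmap per distinct attribute value; bit i is set when
    record i holds that value."""
    def __init__(self, values):
        self.n = len(values)
        self.bitmaps = {}
        for i, v in enumerate(values):
            self.bitmaps[v] = self.bitmaps.get(v, 0) | (1 << i)

    def select(self, predicate):
        """Return indices of records whose value satisfies the
        predicate, by OR-ing the bitmaps of all matching values."""
        mask = 0
        for v, bm in self.bitmaps.items():
            if predicate(v):
                mask |= bm
        return [i for i in range(self.n) if (mask >> i) & 1]

# Example: select records with temperature above 5000 K.
temps = [3200, 5800, 9100, 4500, 7700]
idx = BitmapIndex(temps)
print(idx.select(lambda t: t > 5000))  # -> [1, 2, 4]
```

Because the per-value bitmaps are combined with cheap bitwise ORs, range conditions over many distinct values remain fast, which is what makes this index family well suited to read-mostly scientific data.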
In Korea, specialized centers are being designated in 10 strategic fields so that supercomputing resources can be utilized jointly at the national level. Under the "National Supercomputing Innovation Strategy," the government plans to select 10 centers in three stages by 2030, and completed the designation of the first-stage specialized centers in 2022. With the second designation ahead in 2024, it is urgent to review and improve the existing designation system for a fairer and more effective selection of specialized centers. Therefore, this paper analyzed the influence of the evaluation items on the evaluation results using logistic regression analysis and network centrality analysis, in order to prepare improvement plans for the existing evaluation model. From this analysis, improvement measures were derived, such as subdividing evaluation items with low impact, expanding the items, and lowering the point allocation of evaluation items with low impact.
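The logistic-regression step above can be sketched as follows. This is a hedged illustration on synthetic data, assuming a binary selected/not-selected outcome and per-item scores; the item names, scores, and fitted weights are invented for the example and are not the actual evaluation records or results.

```python
# Hedged sketch: estimating the influence of evaluation items on a
# pass/fail designation outcome with logistic regression, fit by
# plain gradient descent. All data here are synthetic illustrations.

import math

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Return one weight per evaluation item plus an intercept;
    larger |weight| indicates more influence on the outcome."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * d
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(selected)
            err = p - yi
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Synthetic scores for two hypothetical items; item 0 drives the
# outcome, item 1 carries almost no signal, so its fitted weight
# should stay close to zero.
X = [[0.9, 0.2], [0.8, 0.9], [0.2, 0.8], [0.1, 0.1], [0.7, 0.5], [0.3, 0.6]]
y = [1, 1, 0, 0, 1, 0]
w, b = fit_logistic(X, y)
print([round(wi, 2) for wi in w])
```

Items whose fitted weight is near zero correspond to the "low impact" evaluation items that the paper proposes to subdivide or reweight.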
Several fields of science have traditionally demanded large-scale workflow support, which requires thousands of CPU cores or more. In this paper, we investigate ways to support these scientific workflows in a heterogeneous environment in which cluster computing resources are integrated with cloud computing resources. Specifically, we first propose an architecture that utilizes cloud resources to address load-balancing issues. The proposed architecture measures the status of the job queue on the front-end node and dynamically creates virtual machines from cloud pools based on the measured results to expand the computing resources of the cluster. We then present experimental results on computational performance in a hybrid infrastructure where virtual and physical nodes are mixed.
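The queue-driven provisioning step described above can be sketched as a scaling decision function: poll the front-end job queue and decide how many cloud VMs to create or release. The thresholds, `jobs_per_vm` ratio, and pool cap below are assumptions chosen for illustration, not the paper's actual implementation.

```python
# Hedged sketch of queue-driven VM provisioning for a hybrid
# cluster/cloud infrastructure. Thresholds and the pool interface
# are illustrative assumptions, not the paper's implementation.

def scale_decision(queued_jobs, idle_vms, current_vms=0,
                   jobs_per_vm=4, max_vms=16):
    """Return the number of cloud VMs to create (positive) or
    release (negative) based on queue pressure."""
    # VM demand implied by the queue, rounded up, capped by the pool.
    needed = min(max_vms, -(-queued_jobs // jobs_per_vm))
    if needed > current_vms:
        return needed - current_vms   # expand: add cloud VMs
    if queued_jobs == 0 and idle_vms > 0:
        return -idle_vms              # shrink back to physical nodes
    return 0                          # steady state

# 10 queued jobs, 1 VM already running: demand is 3 VMs, so add 2.
print(scale_decision(queued_jobs=10, idle_vms=0, current_vms=1))  # -> 2
# Empty queue with 3 idle VMs: release all of them.
print(scale_decision(queued_jobs=0, idle_vms=3, current_vms=3))   # -> -3
```

A front-end daemon would run this decision on each polling interval and translate the result into create/destroy calls against the cloud pool's API.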
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.