Abstract - The placement of tasks of a parallel application on specific nodes of a supercomputer can significantly affect performance. Traditionally, task mapping has focused on reducing the distance between communicating tasks on the physical network. This minimizes the number of hops that point-to-point messages travel and thus reduces link sharing between messages and, in turn, contention. However, for applications that use collectives over sub-communicators, this heuristic may not be optimal. Many collectives can benefit from an increase in bandwidth even at the cost of an increased hop count, especially when sending large messages. For example, placing communicating tasks in a cube configuration rather than a plane or a line on a torus network increases the number of possible paths messages might take, which increases the available bandwidth and can lead to significant performance gains. We have developed Rubik, a tool that provides a simple and intuitive interface for creating a wide variety of mappings for structured communication patterns. Rubik supports a number of elementary operations, such as splits, tilts, and shifts, that can be combined into a large number of unique patterns. Each operation can be applied to disjoint groups of processes involved in collectives to increase the effective bandwidth. We demonstrate the use of Rubik for improving the performance of two parallel codes, pF3D and Qbox, which use collectives over sub-communicators.
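To make the idea of elementary mapping operations concrete, the following is a minimal sketch (in Python with NumPy) of shift- and tilt-style operations on a 3D grid of MPI ranks. The function names and semantics here are illustrative assumptions for this sketch and are not Rubik's actual API.

```python
# Illustrative sketch of shift/tilt-style mapping operations on a 3D
# grid of MPI ranks; names and semantics are assumptions, not Rubik's API.
import numpy as np

def make_grid(shape):
    """Lay out ranks 0..N-1 in row-major order on a torus-shaped grid."""
    return np.arange(np.prod(shape)).reshape(shape)

def shift(grid, axis, amount):
    """Circularly shift ranks along one torus dimension."""
    return np.roll(grid, amount, axis=axis)

def tilt(grid, axis, along, sign=1):
    """Skew the grid: plane i along `axis` is rotated i steps along
    `along`, so formerly aligned ranks spread over more network links."""
    out = np.empty_like(grid)
    for i in range(grid.shape[axis]):
        idx = [slice(None)] * grid.ndim
        idx[axis] = i
        plane = grid[tuple(idx)]
        # Map the `along` axis of the full grid onto the plane's axes.
        plane_axis = along if along < axis else along - 1
        out[tuple(idx)] = np.roll(plane, sign * i, axis=plane_axis)
    return out

# Example: 64 ranks on a 4x4x4 torus, shifted in z and then tilted.
grid = make_grid((4, 4, 4))
mapped = tilt(shift(grid, axis=2, amount=1), axis=0, along=1)
# mapped[x, y, z] is the MPI rank placed at torus coordinate (x, y, z).
```

Expressing a mapping as a permutation of a rank array makes it easy to compose such operations and to write the result out as a map file for the job launcher.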
Fig. 1. Network traffic resulting from two different runs of the parallel simulation pF3D. This simulation models laser-plasma interaction inside a hohlraum by decomposing the domain into a set of blocks (left). Depending on how data blocks are mapped to processor cores (middle), different communication patterns occur. When staggering the data placement (bottom right), we observe significantly more balanced communication than with a default mapping that mirrors the domain decomposition (top right).

Abstract - The performance of massively parallel applications is often heavily impacted by the cost of communication among compute nodes. However, determining how best to use the network is a formidable task, made challenging by the ever-increasing size and complexity of modern supercomputers. This paper applies visualization techniques to help parallel application developers understand network activity by enabling a detailed exploration of the flow of packets through the hardware interconnect. To visualize this large and complex data, we employ two linked views of the hardware network. The first is a 2D view that represents the network structure as one of several simplified planar projections; it is designed to allow a user to easily identify trends and patterns in the network traffic. The second is a 3D view that augments the 2D view by preserving the physical network topology and providing a context familiar to application developers. Using the massively parallel multi-physics code pF3D as a case study, we demonstrate that our tool provides valuable insight that we use to explain and optimize pF3D's performance on an IBM Blue Gene/P system.
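As a rough illustration of how a 3D torus can be shown as a simplified planar projection, the sketch below lays the Z-slices of the torus out side by side in a 2D drawing plane. The layout scheme is an assumption chosen for illustration and is not necessarily the projection used by the tool.

```python
# Illustrative planar projection of a 3D torus: lay the Z-slices side by
# side in the drawing plane. This layout is an assumption for the sketch,
# not necessarily the projection the tool uses.
def planar_coords(x, y, z, dims):
    """Map torus coordinate (x, y, z) to a 2D drawing position."""
    X, Y, Z = dims
    tile_w = X + 1                    # one-column gap between Z-slices
    return (z * tile_w + x, y)

# Example: on an 8x8x8 torus, node (3, 5, 2) draws at column 21, row 5.
print(planar_coords(3, 5, 2, (8, 8, 8)))
```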
As scientific applications target exascale, challenges related to data and energy are becoming dominant concerns. For example, coupled simulation workflows are increasingly adopting in-situ data processing and analysis techniques to address the costs and overheads of data movement and I/O. However, it is also critical to understand these overheads and the associated trade-offs from an energy perspective. The goal of this paper is to explore data-related energy/performance trade-offs for end-to-end simulation workflows running at scale on current high-end computing systems. Specifically, this paper presents: (1) an analysis of the data-related behaviors of a combustion simulation workflow with an in-situ data analytics pipeline, running on the Titan system at ORNL; (2) a power model based on system power and data exchange patterns, which is empirically validated; and (3) the use of the model to characterize the energy behavior of the workflow and to explore energy/performance trade-offs on current as well as emerging systems.
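To give a sense of the kind of trade-off such a power model captures, here is a toy energy estimate that charges static node power over the run time plus a per-byte cost for data moved between the simulation and the in-situ analytics. The functional form and all coefficients are illustrative assumptions, not the empirically validated model from the paper.

```python
# Toy workflow energy model (illustrative assumptions only, not the
# paper's validated model): static node power over the run time plus a
# per-byte cost for data exchanged with the in-situ analytics pipeline.
def workflow_energy(n_nodes, node_power_w, runtime_s,
                    bytes_moved, joules_per_byte):
    compute_energy = n_nodes * node_power_w * runtime_s   # Joules
    data_energy = bytes_moved * joules_per_byte           # Joules
    return compute_energy + data_energy

# Made-up example: 1024 nodes at 200 W for one hour, moving 10 TB of
# analysis data at an assumed 0.5 nJ/byte.
total_j = workflow_energy(1024, 200.0, 3600.0, 10e12, 0.5e-9)
print(f"{total_j / 3.6e6:.1f} kWh")   # ~204.8 kWh, dominated by node power
```

In this toy setting the static power term dominates, which illustrates why run-time effects and data-movement costs have to be weighed together rather than in isolation.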
The growth in size and complexity of scaling applications and the systems on which they run poses challenges in analyzing and improving their overall performance. With metrics coming from thousands or millions of processes, visualization techniques are necessary to make sense of the increasing amount of data. To aid the process of exploration and understanding, we announce the initial release of Boxfish, an extensible tool for manipulating and visualizing data pertaining to application behavior. Combining and visually presenting data and knowledge from multiple domains, such as the application's communication patterns and the hardware's network configuration and routing policies, can yield the insight necessary to discover the underlying causes of observed behavior. Boxfish allows users to query, filter, and project data across these domains to create interactive, linked visualizations.

I. PROJECTING DATA ACROSS DOMAINS

We describe the association of elements that exist in one domain with the elements of another as a projection. A map file that associates integer MPI ranks with coordinate-denoted hardware nodes and threads is an example of a commonly used projection. Schulz et al. [1] advocated the use of projections in interpreting performance data and defined three domains of interest: hardware, application, and communication. The hardware domain includes performance counters. The application domain includes information relating to the application, such as physics measurements in a simulation or matrix properties in a linear algebra library. The communication domain includes messages sent among subsets of processors. Boxfish recognizes these domains by default, but contributed modules may add others.

Boxfish is designed to support the projection of data across domains. When filters or queries are written requiring attributes from multiple domains, or when a view requires attribute information in its native domain, Boxfish searches its available projections to make the necessary transformations. This allows users to view data such as the load on nodes that had a certain range of values in a previous run, or the average wait time for communicators in a particular phase of the application. Data tables may have default preferred projections. Projections can be added from files, created based on data attributes, or composed from existing ones. More projections may be added through future or contributed modules. Figure 1 shows a projection from the communication domain on...

Fig. 1. A 3D torus network represented in 2D (left) and 3D (right). Both views represent elements of the hardware domain; however, nodes are colored by their sub-communicators, which belong to the communication domain. Links are colored by the number of packets sent over them. These views are rendered side by side in Boxfish, indicating that they are siblings in the filter hierarchy and show the same data. In the 2D view, selected nodes are displayed at a slightly larger size. In the 3D view, the same nodes are selected and highlighted by their relative opacity.
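As a minimal sketch of the projection idea described above, the snippet below treats a projection as a table associating MPI ranks (communication/application domain) with torus coordinates (hardware domain) and uses it to re-key performance data from one domain into the other. The names and values are illustrative; this is not Boxfish's actual API.

```python
# Minimal sketch of a cross-domain projection: a table associating MPI
# ranks with torus coordinates, used to re-key data between domains.
# Names and values are illustrative; this is not Boxfish's actual API.
rank_to_coord = {0: (0, 0, 0), 1: (0, 0, 1), 2: (0, 1, 0), 3: (0, 1, 1)}

# Data keyed in the communication domain: bytes each rank sent.
bytes_sent_by_rank = {0: 1.2e6, 1: 3.4e6, 2: 0.8e6, 3: 2.1e6}

# Projected into the hardware domain, so a node-centric view can color
# each torus coordinate by the traffic of the rank placed there.
bytes_sent_by_node = {rank_to_coord[r]: v
                      for r, v in bytes_sent_by_rank.items()}

# Filter in one domain, display in another: nodes whose rank sent > 1 MB.
hot_nodes = [coord for coord, v in bytes_sent_by_node.items() if v > 1e6]
print(hot_nodes)   # [(0, 0, 0), (0, 0, 1), (0, 1, 1)]
```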