The popularity of general-purpose Graphics Processing Units (GPUs) is largely attributed to the tremendous concurrency enabled by their underlying single-instruction, multiple-thread (SIMT) architecture. It keeps the contexts of a large number of threads in registers to enable fast "context switches" when the processor stalls due to execution dependences, memory requests, and so on. The SIMT architecture has a large register file that is evenly partitioned among all concurrent threads. Per-thread register usage therefore determines the number of concurrent threads, which strongly affects whole-program performance. Existing register allocation techniques, extensively studied over the past several decades, are oblivious to the register contention caused by the concurrent execution of many threads. They are prone to making optimization decisions that benefit a single thread but degrade whole-application performance. Is it possible for compilers to make register allocation decisions that maximize whole-application performance on GPUs? We tackle this important question from two different aspects in this paper. First, we propose a unified on-chip memory allocation framework that uses scratch-pad memory to (1) alleviate single-thread register pressure and (2) increase whole-application throughput. Second, we propose a characterization model for the SIMT execution model that derives a desired on-chip memory partition given the register pressure of a program. Overall, we discovered that it is possible to automatically determine, at compile time, an on-chip memory resource allocation that maximizes concurrency while ensuring good single-thread performance. We evaluated our techniques on a representative set of GPU benchmarks with non-trivial register pressure. We achieve up to 1.70x speedup over the baseline of the traditional register allocation scheme that maximizes single-thread performance.
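The trade-off described above can be illustrated with a minimal occupancy sketch. The hardware limits and numbers below are assumptions for illustration, not figures from the paper:

```python
# Simplified model (hypothetical limits, not from the paper) showing how
# per-thread register usage caps the number of concurrently resident
# threads on one GPU multiprocessor.

REGISTER_FILE_SIZE = 65536   # 32-bit registers per SM (assumed)
MAX_RESIDENT_THREADS = 2048  # hardware cap on resident threads (assumed)

def resident_threads(regs_per_thread: int) -> int:
    """Threads that fit in the register file, capped by the hardware limit."""
    return min(MAX_RESIDENT_THREADS, REGISTER_FILE_SIZE // regs_per_thread)

# Lowering per-thread register pressure (e.g., by moving some values to
# scratch-pad memory) can raise concurrency:
#   64 regs/thread -> 1024 resident threads
#   32 regs/thread -> 2048 resident threads
```

In this toy model, halving register pressure from 64 to 32 registers per thread doubles resident threads, which is exactly why a per-thread-optimal allocation can be globally suboptimal.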
Graph edge partition models have recently become an appealing alternative to graph vertex partition models for distributed computing, due to their flexibility in balancing loads and their effectiveness in reducing communication cost [6, 16]. In this paper, we propose a simple yet effective graph edge partitioning algorithm. In practice, our algorithm provides good partition quality (better than similar state-of-the-art edge partition approaches, at least for power-law graphs) while maintaining low partition overhead. In theory, previous work [6] showed that an approximation guarantee of O(d_max √(log n log k)) applies to graphs with m = Ω(k²) edges, where k is the number of partitions. We rigorously prove that this approximation guarantee holds for all graphs. We also show how our edge partition model can be applied to parallel computing. We draw our example from GPU program locality enhancement and demonstrate that the graph edge partition model applies not only to distributed computing across many compute nodes, but also to parallel computing within a single node equipped with a many-core processor.
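To make the edge-partition model concrete, here is a minimal greedy sketch of the general idea (an illustration only, not the paper's algorithm): edges are assigned to partitions, a vertex is replicated once per partition that touches it, and a good partitioner keeps both replication and load imbalance low.

```python
# Hedged sketch of a greedy edge (vertex-cut) partitioner. All names and
# the tie-breaking rule are illustrative assumptions, not the paper's method.

def greedy_edge_partition(edges, k):
    """Assign each edge to one of k partitions, preferring partitions that
    already hold its endpoints (less replication) under a simple load cap."""
    cap = -(-len(edges) // k)                  # ceil(m / k) load-balance cap
    part_vertices = [set() for _ in range(k)]  # vertices replicated per part
    part_load = [0] * k                        # edges per partition
    assignment = {}
    for (u, v) in edges:
        def score(p):
            # More endpoint overlap is better; break ties by lighter load.
            overlap = (u in part_vertices[p]) + (v in part_vertices[p])
            return (-overlap, part_load[p])
        candidates = [p for p in range(k) if part_load[p] < cap] or list(range(k))
        p = min(candidates, key=score)
        part_vertices[p].update((u, v))
        part_load[p] += 1
        assignment[(u, v)] = p
    replication = sum(len(s) for s in part_vertices)  # total vertex copies
    return assignment, replication
```

On a 4-cycle with k = 2, this heuristic keeps the two partitions balanced at two edges each while each partition replicates three of the four vertices.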
The Quantum Approximate Optimization Algorithm (QAOA) is one of the most advocated variational algorithms for optimization problems such as MAX-CUT. It is a parameterized, combined quantum-classical algorithm that is feasible in the current era of Noisy Intermediate-Scale Quantum (NISQ) computing. Like all other quantum algorithms, a QAOA circuit has to be converted to a hardware-compliant circuit with some additional SWAP gates inserted, a process called qubit mapping. QAOA has a special kind of unit block called CPHASE. Commuting CPHASE blocks can be scheduled in any order, which grants more freedom to the quantum program compilation process in the scope of qubit mapping. Because of qubit decoherence, the number of inserted SWAP gates and the depth of the converted circuit need to be as small as possible. After analyzing a vast number of SWAP insertion and gate re-ordering combinations, we discovered a simple yet effective gate-scheduling pattern called the CommuTativity-Aware Graph (CTAG). This pattern can be used to schedule any QAOA circuit while greatly reducing gate count and circuit depth. Our CTAG-based method yields circuits with depth bounded by O(2N − 2), as long as the given architecture contains a line embedding of length N. Compared to the state-of-the-art QAOA compilation solution, a heuristic approach that uses the commutativity feature, we achieve up to 90% reduction in circuit depth and 75% reduction in gate count on linear architectures for input graphs with up to 120 vertices. Even larger improvements can be achieved as the input graph grows beyond that size.
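The scheduling freedom granted by commuting CPHASE blocks can be sketched with a toy greedy layering pass (an illustration of the general idea, not the paper's CTAG method): since commuting gates may run in any order, the compiler is free to pack them into parallel layers whose gates touch disjoint qubits, shrinking circuit depth.

```python
# Hedged sketch: greedy layering of mutually commuting two-qubit CPHASE
# blocks. Function name and representation are assumptions for illustration.

def schedule_commuting_gates(gates):
    """gates: list of (q1, q2) qubit pairs, all mutually commuting.
    Returns layers of gates with pairwise-disjoint qubits; the number of
    layers is the depth contributed by these blocks."""
    layers = []  # each entry: (set of busy qubits, list of gates in layer)
    for (a, b) in gates:
        for busy, layer in layers:
            if a not in busy and b not in busy:  # gate fits this layer
                busy.update((a, b))
                layer.append((a, b))
                break
        else:  # no existing layer has both qubits free: open a new one
            layers.append(({a, b}, [(a, b)]))
    return [layer for _, layer in layers]
```

For the CPHASE blocks of a 4-cycle interaction graph, this packs four gates into two layers; a commutativity-oblivious scheduler forced to keep program order could need up to four.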