As more processing elements are integrated on a single chip, embedded software design becomes more challenging: it turns into parallel programming for nontrivial heterogeneous multiprocessors with diverse communication architectures, under design constraints such as hardware cost, power, and timeliness. In the current practice of parallel programming with MPI or OpenMP, the programmer must manually optimize the parallel code for each target architecture and for the design constraints. As a result, the cost of design-space exploration for MPSoCs (multiprocessor systems-on-chip) becomes prohibitively large as the software development overhead grows drastically. To solve this problem, we develop a parallel-programming framework based on a novel programming model called common intermediate code (CIC). In a CIC, the functional parallelism and data parallelism of application tasks are specified independently of the target architecture and design constraints. A CIC translator then translates the CIC into the final parallel code, taking the target architecture and design constraints into account, which makes the CIC retargetable. Experiments with preliminary examples, including an H.263 decoder, show that the proposed parallel-programming framework significantly increases the design productivity of MPSoC software.
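As a concrete illustration of the separation this abstract describes, the sketch below marks a data-parallel kernel with an invented, translator-visible annotation; the pragma name and the lowering comment are assumptions for illustration only, not the actual CIC format.

```c
/* A minimal sketch of the CIC idea, not the actual CIC syntax: the kernel
 * declares its data parallelism abstractly (via an invented pragma), and a
 * translator would lower it to OpenMP, MPI, or bare-metal code for each
 * target architecture and set of design constraints. */
#include <stddef.h>
#include <stdio.h>

#pragma cic data_parallel              /* invented, translator-visible marker */
static void scale(float *dst, const float *src, size_t n, float k)
{
    for (size_t i = 0; i < n; i++)     /* iterations are independent */
        dst[i] = k * src[i];
    /* one possible lowering: #pragma omp parallel for on this loop */
}

int main(void)
{
    float in[4] = {1, 2, 3, 4}, out[4];
    scale(out, in, 4, 2.0f);
    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```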
Pipelining algorithms are typically concerned with improving only the steady-state performance, or the kernel time. The pipeline setup occurs only once and can therefore be negligible compared to the kernel time. However, for Coarse-Grained Reconfigurable Architectures (CGRAs) used as a coprocessor to a main processor, pipeline setup can take much longer due to the communication delay between the two processors, and can become significant if it is repeated in an outer loop of a loop nest. In this paper we evaluate the overhead of such non-kernel execution time when mapping nested loops onto CGRAs, and propose a novel architecture-compiler cooperative scheme that reduces the overhead while also minimizing the number of extra configurations required. Our experimental results using loops from the multimedia and scientific domains demonstrate that our proposed techniques can increase the performance of nested loops by up to 2.87 times compared to the conventional approach of accelerating only the innermost loops. Moreover, the mappings generated by our techniques require only a modest number of configurations, which can fit in recent reconfigurable architectures.
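The toy cost model below illustrates the overhead this abstract targets: if only the innermost loop is mapped to the CGRA, the setup and communication cost is paid again on every outer iteration. The cycle counts and structure are invented for illustration, not measurements from the paper.

```c
/* Why non-kernel overhead matters when only the innermost loop of a
 * nest is offloaded to a CGRA coprocessor: the per-invocation setup
 * repeats once per outer iteration and can dominate total time. */
#include <stdio.h>

#define SETUP_CYCLES  500   /* assumed per-invocation setup/communication cost */
#define KERNEL_CYCLES 200   /* assumed steady-state (kernel) cost per inner loop */

int main(void)
{
    long outer = 1000, total = 0;
    for (long i = 0; i < outer; i++) {
        total += SETUP_CYCLES;   /* paid on every outer iteration */
        total += KERNEL_CYCLES;  /* useful pipelined work */
    }
    printf("setup share of total time: %.0f%%\n",
           100.0 * SETUP_CYCLES * outer / (double)total);
    return 0;
}
```

With these assumed numbers, roughly 71% of the total time is spent outside the kernel, which is the kind of overhead a nest-level mapping scheme can eliminate.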
Coarse-grained reconfigurable architectures (CGRAs) promise high performance at high power efficiency. They fulfill this promise by keeping the hardware extremely simple and moving the complexity to application mapping. One major challenge comes in the form of data mapping. For reasons of power efficiency and complexity, CGRAs use a multibank local memory, and each row of PEs shares memory access. To allow each row of PEs to access any memory bank, a hardware arbiter sits between the memory requests generated by the PEs and the banks of the local memory. A fundamental restriction remains, however: a bank cannot be accessed by two different PEs at the same time. We propose to meet this challenge by mapping application operations onto PEs and data into memory banks in a way that avoids such conflicts. To further improve performance on multibank memories, we propose a compiler optimization for CGRA mapping that reduces the number of memory operations by exploiting data reuse. Our experimental results on kernels from multimedia benchmarks demonstrate that our local-memory-aware compilation approach can generate mappings that are up to 53% better in performance (26% on average) than those of a memory-unaware scheduler.
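The restriction described above can be stated compactly: two memory operations scheduled in the same cycle must not target the same bank. The following minimal sketch, with invented data structures, checks a schedule for such conflicts; it is a statement of the constraint, not the paper's mapping algorithm.

```c
/* Bank-conflict constraint for a multibank CGRA local memory: memory
 * operations issued in the same cycle must go to distinct banks. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { int cycle; int bank; } MemOp;  /* invented representation */

static bool has_conflict(const MemOp *ops, int n)
{
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (ops[i].cycle == ops[j].cycle && ops[i].bank == ops[j].bank)
                return true;   /* same bank, same cycle: arbiter must stall */
    return false;
}

int main(void)
{
    MemOp sched[] = { {0, 0}, {0, 1}, {1, 0} };  /* a conflict-free schedule */
    printf("conflict: %s\n", has_conflict(sched, 3) ? "yes" : "no");
    return 0;
}
```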
Reconfigurable architectures (RAs), which provide extremely high energy efficiency for certain domains of applications, have one problem: current mapping algorithms for them do not scale well with the number of cores. One approach to this problem is to use the SIMD (single instruction, multiple data) paradigm. However, SIMD can complicate the mapping problem by adding an additional dimension, iteration mapping, to the already interdependent problems of data mapping and operation mapping, and can significantly affect performance through memory bank conflicts. In this paper we introduce a SIMD reconfigurable architecture that allows SIMD mapping at multiple levels of granularity, and investigate ways to minimize bank conflicts in a SIMD reconfigurable architecture while taking the related sub-problems into consideration. We further present data tiling and evaluate a conflict-free scheduling algorithm as a way to eliminate bank conflicts for a certain class of iteration and data mappings.
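The sketch below illustrates one simple form of the data-tiling idea: giving each SIMD lane a private tile in its own bank, so that accesses issued simultaneously by different lanes cannot collide. The lane, bank, and tile sizes are invented, and this identity lane-to-bank mapping is only the simplest instance of the approach, not the paper's algorithm.

```c
/* Data tiling for a SIMD reconfigurable architecture: each lane owns a
 * tile in its own bank, making same-cycle lane accesses conflict-free
 * by construction. All sizes are illustrative. */
#include <stdio.h>

#define LANES 4
#define BANKS 4
#define TILE  8   /* elements per lane-private tile */

static int bank_of(int lane)           { return lane % BANKS; }
static int addr_of(int lane, int elem) { return bank_of(lane) * TILE + elem; }

int main(void)
{
    /* with an identity lane-to-bank mapping, lane l only touches bank l */
    for (int lane = 0; lane < LANES; lane++)
        printf("lane %d -> bank %d, tile base %d\n",
               lane, bank_of(lane), addr_of(lane, 0));
    return 0;
}
```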
A tandem ring-closing metathesis reaction using a ruthenium catalyst was carried out to synthesize various fused bicyclic compounds containing both small and large rings. Fast ring closure of the small ring and slow ring closure of the large ring resulted in the formation of only one isomer. Further manipulation, such as a Diels-Alder reaction, was carried out to prepare a complex molecule containing multiple rings of different sizes.