The observation of Higgs boson production in association with two top quarks in proton-proton collisions is one of the highlights of LHC Run 2. Driven by the theoretical description of the underlying physics processes, the Matrix Element Method (MEM) consists of computing the probability that an event is compatible with the signal hypothesis (ttH) or with one of the background hypotheses. It is a powerful classification tool, but it requires the computation of high-dimensional integrals. The deployment of our MEM production code on GPU platforms will be described. What follows will focus on the adaptation of the main components of the computation into OpenCL kernels, namely the MadGraph matrix-element code generator, VEGAS, and LHAPDF. Finally, the gain obtained on GPU platforms compared with classical CPU platforms will be assessed.
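Since the abstract only sketches the method, a minimal, hedged illustration may help: the MEM weight for each hypothesis is a high-dimensional integral over parton-level phase space, which VEGAS estimates by adaptive Monte Carlo sampling. The C++ sketch below uses plain (non-adaptive) Monte Carlo with a placeholder integrand; in the production code the integrand would combine the MadGraph matrix element, LHAPDF parton densities, and detector transfer functions, and batches of sample points would be evaluated inside OpenCL kernels.

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Placeholder integrand standing in for |M|^2 x PDFs x transfer functions;
    // in the real MEM code this is evaluated by generated matrix-element code.
    double integrand(const double* x, int dim) {
        double f = 1.0;
        for (int i = 0; i < dim; ++i)
            f *= std::exp(-10.0 * (x[i] - 0.5) * (x[i] - 0.5));
        return f;
    }

    int main() {
        const int dim = 8;          // illustrative phase-space dimensionality
        const long n = 1000000;     // number of Monte Carlo samples
        std::mt19937_64 rng(42);
        std::uniform_real_distribution<double> u(0.0, 1.0);

        double sum = 0.0, sum2 = 0.0, x[dim];
        for (long i = 0; i < n; ++i) {
            for (int d = 0; d < dim; ++d) x[d] = u(rng);
            double f = integrand(x, dim);
            sum += f;
            sum2 += f * f;
        }
        double mean = sum / n;
        double err = std::sqrt((sum2 / n - mean * mean) / n);
        std::printf("integral ~ %g +/- %g\n", mean, err);
        // VEGAS improves on this by adapting the sampling grid to the
        // integrand, which sharply reduces the variance of the estimate.
        return 0;
    }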
The amount of remote sensing data available to applications is constantly growing due to the rise of very-high-resolution sensors and short repeat-cycle satellites. Consequently, tackling computational complexity in Earth Observation information extraction is emerging as a major challenge. Resorting to High Performance Computing (HPC) is becoming common practice, since it provides environments and programming facilities able to speed up processing. In particular, clusters are flexible, cost-effective systems able to perform data-intensive tasks and to scale to virtually any computational requirement. However, their use typically implies a significant coding effort to build proper implementations of specific processing pipelines. This paper presents a generic framework for the development of RS image processing applications targeting cluster computing. It is based on common open-source libraries and supports the transparent parallelization of a wide variety of image processing pipelines. Performance on typical RS tasks implemented with the proposed framework demonstrates great potential for the effective and timely processing of large amounts of data.
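The abstract does not name the framework's underlying libraries, so the following is only a generic sketch of the kind of transparent cluster parallelization it describes: a large raster is split into tiles that are distributed round-robin over MPI ranks, each rank applying the same per-tile pipeline. The tile count and the process_tile stub are hypothetical.

    #include <cstdio>
    #include <mpi.h>

    // Hypothetical per-tile stage standing in for a real RS pipeline
    // (e.g. filtering or classification); here it is just a stub.
    static void process_tile(int tile) {
        std::printf("tile %d processed\n", tile);
    }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int n_tiles = 256;  // assumed tiling of the input image
        // Round-robin assignment: rank r handles tiles r, r+size, ...
        for (int t = rank; t < n_tiles; t += size)
            process_tile(t);

        MPI_Barrier(MPI_COMM_WORLD);  // all tiles done before any mosaicking
        MPI_Finalize();
        return 0;
    }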
Exascale imposes a major prerequisite in terms of energy efficiency, as an improvement of an order of magnitude must be reached in order to stay within an acceptable power envelope of 20 MW. To address this objective and to continue to sustain performance, HPC architectures have to become denser, embedding many-core processors (up to several hundred computing cores), and/or become heterogeneous, that is, use graphics processors or FPGAs. These energy-saving constraints will also affect the underlying hardware architectures (e.g., memory and storage hierarchies, networks) as well as system software (runtimes, resource managers, file systems, etc.) and programming models. While some of these architectures, such as hybrid machines, have existed for a number of years and occupy noticeable ranks in the TOP500 list, they are still limited to a small number of scientific domains and, moreover, require significant porting effort. However, recent developments of new paradigms (especially around OpenMP and OpenACC) make these architectures much more accessible to programmers. In order to make the most of these breakthrough upcoming technologies, GENCI and its partners have set up a technology watch group and are leading collaborations with vendors, relying on HPC experts and early-adopted HPC solutions. The two main objectives are to provide guidance and to prepare the scientific communities for the challenges of exascale architectures. The work performed on the OpenPOWER platform, one of the platforms targeted for exascale, is described in this paper.
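As a minimal illustration of the directive-based paradigms mentioned above (standard OpenMP 4.5 offloading, not code from the GENCI collaborations): a single pragma suffices to move a loop onto an accelerator such as the GPUs of an OpenPOWER node, with the compiler handling kernel generation and data transfers.

    #include <cstdio>
    #include <vector>

    int main() {
        const int n = 1 << 20;
        std::vector<float> a(n, 1.0f), b(n, 2.0f);
        float* pa = a.data();
        float* pb = b.data();

        // Offload the loop to an accelerator if one is available; with
        // OpenACC the equivalent would be "#pragma acc parallel loop".
        #pragma omp target teams distribute parallel for \
            map(tofrom: pa[0:n]) map(to: pb[0:n])
        for (int i = 0; i < n; ++i)
            pa[i] += 2.0f * pb[i];

        std::printf("a[0] = %f\n", pa[0]);  // expect 5.0
        return 0;
    }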