Background: Recent studies have shown an association of short-term exposure to fine particulate matter (PM) with transient increases in blood pressure (BP), but it is unclear whether long-term exposure has an effect on arterial BP and hypertension.
Objectives: We investigated the cross-sectional association of residential long-term PM exposure with arterial BP and hypertension, taking short-term variations of PM and long-term road traffic noise exposure into account.
Methods: We used baseline data (2000–2003) on 4,291 participants, 45–75 years of age, from the Heinz Nixdorf Recall Study, a population-based prospective cohort in Germany. Urban background exposure to PM with aerodynamic diameter ≤ 2.5 μm (PM2.5) and ≤ 10 μm (PM10) was assessed with a dispersion and chemistry transport model. We used generalized additive models, adjusting for short-term PM, meteorology, traffic proximity, and individual risk factors.
Results: An interquartile increase in PM2.5 (2.4 μg/m3) was associated with estimated increases in mean systolic and diastolic BP of 1.4 mmHg [95% confidence interval (CI): 0.5, 2.3] and 0.9 mmHg (95% CI: 0.4, 1.4), respectively. The observed relationship was independent of long-term exposure to road traffic noise and robust to the inclusion of many potential confounders. Residential proximity to high traffic and traffic noise exposure showed a tendency toward higher BP and an elevated prevalence of hypertension.
Conclusions: We found an association of long-term exposure to PM with increased arterial BP in a population-based sample. This finding supports our hypothesis that long-term PM exposure may promote atherosclerosis, with air-pollution–induced increases in BP being one possible biological pathway.
Our study shows a clear association of long-term exposure to PM2.5 with atherosclerosis. This finding strengthens the hypothesized role of PM2.5 as a risk factor for atherogenesis.
In this article we present SkePU 2, the next generation of the SkePU C++ skeleton programming framework for heterogeneous parallel systems. We critically examine the design and limitations of the SkePU 1 programming interface. We present a new, flexible and type-safe interface for skeleton programming in SkePU 2, and a source-to-source transformation tool that is aware of SkePU 2 constructs such as skeletons and user functions. We demonstrate how the source-to-source compiler transforms programs to enable efficient execution on parallel heterogeneous systems. We show how SkePU 2 enables new use cases and applications by increasing the flexibility over SkePU 1, and how programming errors can be caught earlier and more easily thanks to improved type safety. We propose a new skeleton, Call, unique in the sense that it does not impose any predefined skeleton structure and can encapsulate arbitrary user-defined multi-backend computations. We also discuss how the source-to-source compiler can enable a new optimization opportunity by selecting among multiple user function specializations when building a parallel program. Finally, we show that the performance of our prototype SkePU 2 implementation closely matches that of SkePU 1.
In this paper, we discuss the role, design and implementation of smart containers in the SkePU skeleton library for GPU-based systems. These containers provide an interface similar to C++ STL containers but internally perform runtime optimization of data transfers and runtime memory management for their operand data on the different memory units. We discuss how these containers can help in achieving asynchronous execution for skeleton calls while providing implicit synchronization capabilities in a data-consistent manner. Furthermore, we discuss the limitations of the original, already optimizing memory management mechanism implemented in SkePU containers, and propose and implement a new mechanism that provides stronger data consistency and improves performance by reducing communication and memory allocations. With several applications, we show that our new mechanism can achieve significantly (up to 33.4 times) better performance than the initial mechanism for page-locked memory on a multi-GPU based system.

Keywords: SkePU · Smart containers · Skeleton programming · Memory management · Runtime optimizations · GPU-based systems

1 Introduction

Skeleton programming [4] for GPU-based systems is becoming increasingly popular for mapping common computational patterns. Several skeleton libraries have been written from scratch to target GPU-based systems, including SkePU [10, 6], SkelCL [24] and Marrow [20]. Moreover, many existing skeleton libraries initially written for execution on MPI clusters and/or multicore CPUs have been ported for GPU execution, such as FastFlow [12] and Muesli [11]. These libraries differ in their approach and feature offering, but they all aim to provide performance comparable to hand-written code while requiring much less programming effort. Providing high-level abstraction with good execution performance in a library requires special design consideration.
The question comes down to what is exposed to the programmer and what is handled implicitly by the skeleton library. For example, the Marrow library exposes concurrency to the application program by executing skeleton calls asynchronously; it returns a handle which can be used to synchronize execution when needed. This allows Marrow to effectively overlap computation and communication from different skeleton computations. SkelCL makes data distribution explicit so that the application programmer can choose how to map a computation to the underlying computing platform.

Another important aspect of GPU computation is managing communication between CPU (main) memory and GPU (device) memory over the PCIe interconnect. In Muesli, FastFlow, SkePU and SkelCL, skeleton calls can execute on a single or multicore CPU as well as on a GPU. Considering that CPUs and GPUs have separate physical memory, execution on a certain compute device may require transferring data back and forth to its associated memory if the data is not already available in that memory. For example, in the following code,

// 1D arrays: v0, v1
skel_c...