2017
DOI: 10.1007/978-3-319-64203-1_48

Shared Memory Pipelined Parareal

Abstract: For the parallel-in-time integration method Parareal, pipelining can be used to hide some of the cost of the serial correction step and improve its efficiency. The paper introduces an OpenMP implementation of pipelined Parareal and compares it to a standard MPI-based variant. Both versions yield almost identical runtimes, but, depending on the compiler, the OpenMP variant consumes about 7% less energy and has a significantly smaller memory footprint. However, its higher implementation complexity might …
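The mechanism behind the abstract is that pipelining lets the fine propagation of each time slice overlap the serial chain of coarse corrections. Below is a minimal, self-contained sketch of a task-based pipelined Parareal loop in C with OpenMP for a scalar test problem; it is not the paper's code, and the propagators, the test ODE y' = λy, and all parameter choices are illustrative assumptions.

```c
#include <math.h>
#include <stdio.h>
#include <omp.h>

/* Illustrative test problem: y'(t) = lambda * y(t). */
static const double lambda = -1.0;

/* Coarse propagator G: one explicit Euler step over an interval of length dt. */
static double G(double y, double dt) { return y + dt * lambda * y; }

/* Fine propagator F: many explicit Euler sub-steps over the same interval. */
static double F(double y, double dt, int M) {
    double h = dt / M;
    for (int m = 0; m < M; m++) y += h * lambda * y;
    return y;
}

int main(void) {
    enum { N = 8, K = 4 };          /* N time slices, K Parareal iterations */
    const double T = 2.0, dt = T / N;
    const int M = 1000;             /* fine sub-steps per slice */

    /* u[k][n]: k-th Parareal iterate at time point n; g and f store the
       coarse and fine propagator results (kept per iteration for clarity). */
    static double u[K + 1][N + 1], g[K + 1][N + 1], f[K + 1][N + 1];

    for (int k = 0; k <= K; k++) u[k][0] = 1.0;   /* exact initial value */
    for (int n = 0; n < N; n++) {                 /* serial initial coarse sweep */
        g[0][n + 1] = G(u[0][n], dt);
        u[0][n + 1] = g[0][n + 1];
    }

    #pragma omp parallel
    #pragma omp single
    for (int k = 0; k < K; k++) {
        for (int n = 0; n < N; n++) {
            /* Fine propagation of slice n from iterate k: may start as soon
               as u[k][n] is available, independently of other slices. */
            #pragma omp task depend(in: u[k][n]) depend(out: f[k][n + 1])
            f[k][n + 1] = F(u[k][n], dt, M);

            /* Serial correction step, pipelined behind the fine tasks. */
            #pragma omp task depend(in: u[k + 1][n], f[k][n + 1], g[k][n + 1]) \
                             depend(out: u[k + 1][n + 1], g[k + 1][n + 1])
            {
                g[k + 1][n + 1] = G(u[k + 1][n], dt);
                u[k + 1][n + 1] = g[k + 1][n + 1] + f[k][n + 1] - g[k][n + 1];
            }
        }
    }   /* implicit barrier at the end of single: all tasks have completed */

    printf("Parareal: %.6e   exact: %.6e\n", u[K][N], exp(lambda * T));
    return 0;
}
```

The depend clauses encode the Parareal data flow: each fine task may run as soon as its input iterate exists, while the coarse corrections form a serial chain that is hidden behind the fine work. This mirrors the pipelining idea the abstract describes; the paper itself compares such a shared-memory realisation against an MPI-based one.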

Cited by 8 publications (10 citation statements)
References 30 publications
“…Furthermore, it was assumed that the computational cost of 𝒢 is negligible. A comparison between theoretical and effective speedup of Parareal can be found, for example, in References 25,26.…”
Section: Parallelization Of the Stieltjes Procedures (mentioning)
confidence: 99%
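For context, the theoretical speedup this quote refers to is usually derived from a simple cost model for (non-pipelined) Parareal. The sketch below uses notation of our own choosing (N time slices, K iterations, per-slice fine and coarse costs c_f and c_g) and states the standard estimate, not a formula taken from the citing paper.

```latex
% Common Parareal cost model (notation is ours / illustrative):
%   N        number of time slices (= processes in time)
%   K        number of Parareal iterations
%   c_f, c_g per-slice wall-clock cost of the fine / coarse propagator
\[
  S(N) \;\approx\; \frac{N\,c_f}{(K+1)\,N\,c_g \,+\, K\,c_f}
  \;\le\; \frac{N}{K},
\]
% so if the cost of the coarse propagator G is assumed negligible
% (c_g -> 0), the theoretical speedup is bounded by N/K; effective
% (measured) speedups stay below this bound because c_g and the
% communication overheads are never exactly zero.
```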
“…We first analyse the parallel speedup, defined as the ratio of the sequential to the parallel execution time for a given number of processes. As pointed out in [1,2,5,35], the Parareal algorithm is flexible enough to accommodate various implementations based on different programming paradigms. In the modeling, we consider a distributed memory implementation to handle the parallelization in time.…”
Section: Expected Parallel Performance Of Parareal (mentioning)
confidence: 99%
“…In the modeling, we consider a distributed memory implementation to handle the parallelization in time. We refer the reader to [35] for a discussion and analysis of other strategies. We consider a total of N_proc processes for the space-time parallelism with N processes being devoted to the parallelization in time.…”
Section: Expected Parallel Performance Of Pararealmentioning
confidence: 99%
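Making the two quoted definitions explicit in symbols (the symbol names below are ours, not necessarily the citing paper's):

```latex
% Parallel speedup: sequential over parallel execution time.
\[
  S(N) \;=\; \frac{T_{\mathrm{seq}}}{T_{\mathrm{par}}(N)} .
\]
% Space-time parallelism: of N_proc processes in total, N are devoted to
% the parallelization in time, leaving N_proc / N for space.
\[
  N_{\mathrm{proc}} \;=\; N \times N_{\mathrm{space}} .
\]
```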
“…This observation naturally encapsulates a family of powerful algorithms referred to as parallel in time or parareal methods, often invoked to simulate the system's dynamics on heterogeneous classical hardware [13,14]. The latter techniques effectively take advantage of the fact that a part of the evolution can be distributed and carried out in parallel.…”
mentioning
confidence: 99%