2014
DOI: 10.1007/978-3-319-05789-7_61
Integrating an N-Body Problem with SDC and PFASST

Cited by 5 publications (4 citation statements)
References 23 publications
“…Thus, the integration of Boris-SDC into the novel PFASST++ framework [41] is planned for future work. This requires extending Boris-SDC to multiple levels in space and time with adequate coarsening strategies; see [25] for a description of multi-level SDC and [42,43] for a first idea for particle-based coarsening. An efficient time-parallel method tailored for particle simulations could greatly aid in better exploiting the computational resources of massively parallel high-performance computing systems for plasma physics applications.…”
Section: Discussion
confidence: 99%
“…In these original papers from 2012 and 2014, the PFASST algorithm was introduced, its implementation was discussed, and it was applied to a first set of problems. In the following years, PFASST has been applied to more and more problems and coupled to different space-parallel solvers, ranging from a Barnes–Hut tree code to geometric multigrid. Together with spatial parallelization, PFASST was demonstrated to run and scale on up to 458,752 cores of an IBM Blue Gene/Q installation.…”
Section: Introduction
confidence: 99%
“…In the following years, PFASST has been applied to more and more problems and coupled to different space-parallel solvers, ranging from a Barnes–Hut tree code to geometric multigrid [12][13][14][15]. Together with spatial parallelization, PFASST was demonstrated to run and scale on up to 458,752 cores of an IBM Blue Gene/Q installation. Yet, while applications, implementation, and improvements are discussed frequently, a solid and reliable convergence theory is still missing.…”
Section: Introduction
confidence: 99%
“…The FAS correction allows different strategies for spatial coarsening to be used within the mesh hierarchy, leading to improved parallel efficiency. An accuracy study of PFASST as well as serial SDC for a first-order particle-based discretization can be found in [8]. A successful demonstration of PFASST's efficacy in extreme-scale parallel particle simulations was presented in [9], where PFASST was combined with the parallel Barnes–Hut tree code PEPC [10] to simulate a spherical vortex sheet.…”
Section: Introduction
confidence: 99%
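The excerpts above all build on SDC sweeps as the serial kernel inside PFASST. As a rough illustration of the basic idea only — not the Boris-SDC or PFASST implementations from the cited works — the following is a minimal single-step SDC sketch for a scalar ODE, using three equispaced quadrature nodes and forward-Euler correction sweeps; all names are ours, not from the paper:

```python
import math

def sdc_step(f, y0, dt, n_sweeps):
    """One SDC time step on [0, dt] with 3 equispaced nodes
    (Simpson-type quadrature) and forward-Euler sweeps."""
    h = dt / 2.0
    # Integration matrix: S[m][j] is the integral of the Lagrange
    # basis polynomial l_j over [t_m, t_{m+1}] for nodes 0, h, 2h.
    S = [[5*h/12, 8*h/12, -h/12],
         [-h/12,  8*h/12, 5*h/12]]
    # Provisional solution: forward Euler along the nodes.
    y = [y0, 0.0, 0.0]
    for m in range(2):
        y[m+1] = y[m] + h * f(y[m])
    # Correction sweeps: each sweep raises the order by one,
    # up to the order of the underlying quadrature.
    for _ in range(n_sweeps):
        f_old = [f(v) for v in y]              # f at previous iterate
        y_new = [y0, 0.0, 0.0]
        for m in range(2):
            quad = sum(S[m][j] * f_old[j] for j in range(3))
            y_new[m+1] = y_new[m] + h * (f(y_new[m]) - f_old[m]) + quad
        y = y_new
    return y[-1]

# Example: y' = -y, y(0) = 1, one step of size dt = 0.1.
exact = math.exp(-0.1)
euler = sdc_step(lambda y: -y, 1.0, 0.1, 0)   # no sweeps = plain Euler
sdc   = sdc_step(lambda y: -y, 1.0, 0.1, 4)   # four correction sweeps
```

With four sweeps, the error drops from the first-order Euler level to roughly the accuracy of the underlying quadrature; PFASST then runs such sweeps concurrently on many time slices, coupled through an FAS correction between coarse and fine levels.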