2019
DOI: 10.1002/cpe.5642
An evaluation of MPI and OpenMP paradigms in finite‐difference explicit methods for PDEs on shared‐memory multi‐ and manycore systems

Abstract: This paper focuses on parallel implementations of three two‐dimensional explicit numerical methods on the Intel® Xeon® Scalable Processor and the Knights Landing coprocessor. In this study, the performance of hybrid parallel programming with the Message Passing Interface (MPI) and Open Multi‐Processing (OpenMP) and of a pure MPI implementation used with two thread‐binding policies is compared with an improved OpenMP‐based implementation in three explicit finite‐difference methods for solving partial differentia…

Cited by 8 publications (5 citation statements) · References 16 publications
“…17 We also plan to run the implementation on machines with the third and fourth generations of Intel® Xeon® Scalable processors. We also intend to use the MPI standard for inter‐socket communication and the OpenMP standard for intra‐socket communication, as performed by Cabral et al.19 We also plan to run these implementations using several nodes of the SDumont supercomputer.20…”
Section: Acknowledgments (mentioning)
confidence: 99%
“…To verify whether the OMP-EWS strategy would offer the same advantages in other similar algorithms, Cabral et al. [Cabral et al. 2019] compared three two‐dimensional numerical methods on different Intel® processors. In that work, hybrid MPI and OMP-EWS versions of the Hopmoc method and of fully explicit finite‐difference methods for the two‐dimensional heat and Laplace equations were used.…”
Section: Related Work (unclassified)
“…Algorithm 1 gives an overview of the pseudocode of the Hopmoc method. The hybrid version of the Hopmoc method is a strategy that uses both OpenMP threads and MPI processes to divide the workload and enable efficient parallelization [Cabral et al. 2019]. The input mesh is initially divided into smaller sections of equal size.…”
Section: Hybrid Version of the Hopmoc Method Based on MPI and OpenMP (unclassified)
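The equal division of the input mesh mentioned above can be sketched in a few lines. This is a hypothetical illustration, not code from the cited implementation: `partition_rows` is an assumed helper that splits a 2D mesh into near‐equal row blocks, one per MPI rank, with each block then available for a team of OpenMP threads to process.

```python
# Hypothetical sketch (not the authors' code): split a 2D mesh's rows into
# near-equal contiguous blocks, one per MPI rank. Block sizes differ by at
# most one row, which is the usual "equal sections" decomposition.
def partition_rows(n_rows: int, n_ranks: int) -> list[tuple[int, int]]:
    """Return half-open (start, end) row ranges, one per rank."""
    base, extra = divmod(n_rows, n_ranks)  # base rows per rank, leftover rows
    ranges, start = [], 0
    for rank in range(n_ranks):
        size = base + (1 if rank < extra else 0)  # first `extra` ranks get one more
        ranges.append((start, start + size))
        start += size
    return ranges

# Example: a 10-row mesh split among 3 ranks.
print(partition_rows(10, 3))  # [(0, 4), (4, 7), (7, 10)]
```

In the hybrid scheme the paper describes, each rank would then iterate over its own row block in an OpenMP parallel loop, exchanging only the boundary rows with neighboring ranks via MPI.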
“…Partial differential equations (PDEs) are widely employed in many scientific and engineering applications. Cabral et al.4 studied the performance of three numerical methods for solving PDEs on a 2D domain and evaluated them on two shared‐memory architectures, namely the multi‐core Skylake (SKL) and the manycore Knights Landing (KNL). The authors studied several implementations combining OpenMP and MPI; the best configuration depends on characteristics such as the problem size, the number of synchronization points, and properties of the NUMA architecture.…”
Section: (mentioning)
confidence: 99%