2019
DOI: 10.1007/978-3-030-06228-6_9
Efficient Inter-process Communication in Parallel Implementation of Grid-Characteristic Method

Cited by 15 publications (6 citation statements)
References 13 publications
“…At this stage, we made all the calculations one after the other. Later on, the calculations will become faster with the help of parallelization, for example, using the MPI technology [20]. Figures 1, 2, 3 depict wave patterns of the seismic impulse spread (namely, the normal component of the stress tensor σ_xx).…”
Section: Results (mentioning)
confidence: 99%
“…At the contact boundaries between subdomains with structured and unstructured computational grids, and between subdomains with differing media parameters, the contact condition of complete adhesion was used for system (1), (2): (7) and system (3), (4):…”
Section: Models and Methods (mentioning)
confidence: 99%
“…The grid-characteristic method was proposed in [5]. Recently, new modifications of this method [6] and the corresponding computational algorithms for high-performance computer systems [7] are being developed. The grid-characteristic method has been successfully applied to solve direct [8][9][10] and inverse [11] seismic prospecting problems.…”
Section: Introduction (mentioning)
confidence: 99%
“…To run on distributed-memory systems, the software package is parallelized using the MPI technology [16]. The parallelization relies on the standard algorithms of computational-domain decomposition and exchange of near-boundary cells that are widely used for explicit grid methods [17][18][19][20]. Since this implementation of the grid-characteristic solver uses regular grids, the nodes can be stored in 2D/3D arrays (in memory they are laid out as contiguous one-dimensional arrays).…”
Section: MPI Parallelization (unclassified)
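The domain decomposition with near-boundary (ghost) cell exchange described in the quote above can be sketched as follows. This is a minimal serial illustration of the pattern, not the paper's implementation: the subdomains here are plain Python lists in one process, and the two copy statements stand in for the point-to-point MPI transfers (e.g. `MPI_Sendrecv`) that would occur between ranks in a real distributed run. The function names and the 1D decomposition are assumptions made for clarity.

```python
# Serial sketch of the ghost-cell (halo) exchange used by explicit grid
# methods. In real MPI code each subdomain lives on its own rank and the
# copies below would be message exchanges between neighbouring ranks.

def decompose(grid, nparts, ghost=1):
    """Split a flat grid into subdomains, each padded with `ghost` cells
    on both sides to hold copies of the neighbours' boundary values."""
    n = len(grid) // nparts
    parts = []
    for p in range(nparts):
        interior = grid[p * n:(p + 1) * n]
        parts.append([0.0] * ghost + interior + [0.0] * ghost)
    return parts

def exchange_halos(parts, ghost=1):
    """Copy each subdomain's boundary cells into the neighbour's ghost
    layer (the stand-in for the MPI send/receive step)."""
    for p in range(len(parts) - 1):
        left, right = parts[p], parts[p + 1]
        # rightmost interior cells of `left` -> left ghost layer of `right`
        right[:ghost] = left[-2 * ghost:-ghost]
        # leftmost interior cells of `right` -> right ghost layer of `left`
        left[-ghost:] = right[ghost:2 * ghost]
    return parts

grid = [float(i) for i in range(8)]
parts = exchange_halos(decompose(grid, 2))
print(parts)
# [[0.0, 0.0, 1.0, 2.0, 3.0, 4.0], [3.0, 4.0, 5.0, 6.0, 7.0, 0.0]]
```

After the exchange, each subdomain can advance one explicit time step using only local data, since its stencil reads at the boundary are satisfied by the freshly copied ghost values; the exchange is then repeated every step.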