2005
DOI: 10.1016/j.cageo.2005.02.002

Parallel processing of Prestack Kirchhoff Time Migration on a PC Cluster

Cited by 12 publications (5 citation statements)
References 21 publications
“…The processing time is longer than the communication time, so the time elapsed is inversely proportional to the number of CPUs, and thus using more CPU nodes can reduce the time elapsed and improve the efficiency (Dai 2005). In this study, we propose a complete GPU solution for PSTM.…”
Section: Output: Common Reflection Gather I[ix][iy][ih][it] (mentioning)
confidence: 93%
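A minimal sketch of the scaling argument behind that statement, written as an idealized timing model; the symbols T_comp, T_comm and N are illustrative and not taken from Dai (2005):

```latex
% Idealized elapsed-time model behind the "inversely proportional to the
% number of CPUs" claim; symbols are illustrative, not from Dai (2005).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
  T_{\text{elapsed}}(N) &\approx \frac{T_{\text{comp}}}{N} + T_{\text{comm}}
      && \text{($N$ CPU nodes, computation split evenly)}\\
  T_{\text{elapsed}}(N) &\approx \frac{T_{\text{comp}}}{N} \propto \frac{1}{N}
      && \text{when } \frac{T_{\text{comp}}}{N} \gg T_{\text{comm}}.
\end{align*}
\end{document}
```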
“…However, the practical application of PSTM to tasks during large 3D surveys is still computationally intensive. To accelerate the processing of migration, the parallel processing of prestack time migration has been implemented routinely on distributed parallel computers (Schleicher and Copeland 1993, Chen et al 1993), as well as on PC clusters (Morton et al 1999, Hellman 2000, Dai 2005). In recent years, many other devices have also been used to accelerate PSTM such as FPGAs (He et al 2005).…”
Section: Introduction (mentioning)
confidence: 98%
“…Dai et al. introduced an embarrassingly parallel PKTM program to run in a single‐core cluster where each output trace is individually processed by each working node. Yerneni et al. studied computational demands of PKTM and presented a parallel implementation for a cluster of workstations.…”
Section: Related Work (mentioning)
confidence: 99%
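A minimal sketch of the embarrassingly parallel scheme described in that statement, assuming a round-robin assignment of output traces to MPI ranks; migrate_output_trace() and NUM_OUTPUT_TRACES are hypothetical placeholders, not the routines used in Dai (2005):

```c
/* Sketch: round-robin distribution of PKTM output traces over MPI ranks.
 * Each output trace is migrated independently, so ranks need no exchange
 * of intermediate results (embarrassingly parallel). */
#include <mpi.h>
#include <stdio.h>

#define NUM_OUTPUT_TRACES 100000L   /* illustrative survey size */

/* Hypothetical kernel: Kirchhoff summation of input traces into one output trace. */
static void migrate_output_trace(long trace_id) { (void)trace_id; /* ... */ }

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Rank r processes output traces r, r+size, r+2*size, ... */
    for (long t = rank; t < NUM_OUTPUT_TRACES; t += size)
        migrate_output_trace(t);

    printf("rank %d finished its share of output traces\n", rank);
    MPI_Finalize();
    return 0;
}
```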
“…Owing to PKTM's inherent characteristics that allow data parallelism [Rizvandi et al 2011], many works such as [Xu et al 2014], [Shi et al 2011], [Panetta et al 2012] and [Sun and Shi 2012] have proposed solutions to reduce the execution time of this algorithm by adopting accelerator devices such as Graphics Processing Units (GPU) or Field-Programmable Gate Arrays (FPGA). Other works such as [Dai 2005] proposed dividing and distributing PKTM processing across a computational cluster using the MPI (Message Passing Interface) standard. Still other works such as [Yang et al 2011] proposed a hybrid solution that uses both distributed-memory parallelism (MPI) and shared-memory parallelism (OpenMP), while also exploiting GPU acceleration.…”
Section: Introduction (unclassified)
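For the hybrid distributed/shared-memory approach mentioned last, a minimal sketch might combine MPI rank-level blocks of output traces with OpenMP threads inside each node; all names are again illustrative placeholders rather than Yang et al.'s actual code:

```c
/* Sketch: hybrid MPI + OpenMP parallelism for PKTM.
 * Ranks take contiguous blocks of output traces; threads split each block. */
#include <mpi.h>
#include <omp.h>

#define NUM_OUTPUT_TRACES 100000L   /* illustrative survey size */

static void migrate_output_trace(long trace_id) { (void)trace_id; /* Kirchhoff summation */ }

int main(int argc, char **argv)
{
    int provided, rank, size;
    /* FUNNELED: only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED)
        MPI_Abort(MPI_COMM_WORLD, 1);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Contiguous block of output traces for this rank. */
    long per_rank = (NUM_OUTPUT_TRACES + size - 1) / size;
    long first = rank * per_rank;
    long last  = first + per_rank;
    if (last > NUM_OUTPUT_TRACES) last = NUM_OUTPUT_TRACES;

    /* Shared-memory parallelism within the node. */
    #pragma omp parallel for schedule(dynamic)
    for (long t = first; t < last; ++t)
        migrate_output_trace(t);

    MPI_Finalize();
    return 0;
}
```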