2015
DOI: 10.48550/arxiv.1510.08982
Preprint

Asynchronous Parallel Computing Algorithm implemented in 1D Heat Equation with CUDA

Abstract: In this note, we present the stability as well as performance analysis of an asynchronous parallel computing algorithm implemented for the 1D heat equation with CUDA. The primary objective of this note lies in the dissemination of the asynchronous parallel computing algorithm by providing CUDA code for fast and easy implementation. We show that simulations carried out on an NVIDIA GPU device with the asynchronous scheme outperform the synchronous parallel computing algorithm. In addition, we also discuss some drawbacks of asynchron…
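The note's own CUDA listing is not reproduced on this page. For orientation, here is a minimal sketch of the synchronous baseline such a comparison is made against: an explicit (FTCS) finite-difference update u_i^new = u_i + r (u_{i-1} - 2 u_i + u_{i+1}) with double buffering and one kernel launch per time step acting as the global synchronization point. The grid size N, diffusion number R, step count, and kernel name heat_step are illustrative assumptions, not values from the paper.

#include <cstdio>
#include <utility>
#include <vector>

// Illustrative parameters (assumptions, not from the paper): grid size and
// diffusion number r = alpha*dt/dx^2, which must satisfy r <= 0.5 for FTCS stability.
constexpr int   N     = 1 << 20;
constexpr float R     = 0.25f;
constexpr int   STEPS = 1000;

// One synchronous FTCS step: every interior point reads the old buffer and writes the new one.
__global__ void heat_step(const float* __restrict__ u_old, float* __restrict__ u_new,
                          int n, float r)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1)
        u_new[i] = u_old[i] + r * (u_old[i - 1] - 2.0f * u_old[i] + u_old[i + 1]);
}

int main()
{
    std::vector<float> h_u(N, 0.0f);
    h_u[N / 2] = 1.0f;                      // point source as an example initial condition

    float *d_a, *d_b;
    cudaMalloc(&d_a, N * sizeof(float));
    cudaMalloc(&d_b, N * sizeof(float));
    cudaMemcpy(d_a, h_u.data(), N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_u.data(), N * sizeof(float), cudaMemcpyHostToDevice);  // keeps boundary values in both buffers

    dim3 block(256), grid((N + block.x - 1) / block.x);
    for (int s = 0; s < STEPS; ++s) {
        heat_step<<<grid, block>>>(d_a, d_b, N, R);
        std::swap(d_a, d_b);                // the kernel boundary is the global synchronization point
    }

    cudaMemcpy(h_u.data(), d_a, N * sizeof(float), cudaMemcpyDeviceToHost);
    printf("u[N/2] after %d steps: %f\n", STEPS, h_u[N / 2]);
    cudaFree(d_a); cudaFree(d_b);
    return 0;
}

Compiled with nvcc and timed over the loop, this serves as the synchronous reference against which an asynchronous scheme would be measured.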

Cited by 1 publication (2 citation statements)
References 9 publications

“…2) Parallel computing for fixed-point iteration - Parallel computing is a widely used technique to speed up the computation of fixed-point iterations. For example, in [22] and [23], the one-dimensional heat equation is solved with a finite difference method by parallel computing, in which a group of grid points of the finite difference scheme is assigned to each CPU or GPU core. Thus, in parallel computing the values for each group of grid points are computed by a different core, followed by communication between cores to update the values at the boundary grid points of the group.…”
Section: Consensus Critical Applications
Mentioning confidence: 99%
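The decomposition described in this quotation can be made concrete with a small CUDA sketch (the kernel name and tile size are assumptions, not code from [22] or [23]): each thread block owns a contiguous group of grid points, stages them in shared memory together with one halo value on each side, and that halo load is the communication at the boundary grid points of the group.

// Illustrative sketch: each block owns TILE grid points, loads them plus one halo
// value per side into shared memory, then each thread updates its own point from
// the staged copy.
#define TILE 256

__global__ void heat_step_tiled(const float* __restrict__ u_old,
                                float* __restrict__ u_new, int n, float r)
{
    __shared__ float tile[TILE + 2];                   // +2 for the left/right halo points

    int i   = blockIdx.x * blockDim.x + threadIdx.x;   // global grid index
    int lid = threadIdx.x + 1;                         // local index, shifted past the left halo

    if (i < n)
        tile[lid] = u_old[i];

    // Boundary grid points of the group: fetch the neighbouring groups' edge values.
    if (threadIdx.x == 0)
        tile[0] = (i > 0) ? u_old[i - 1] : 0.0f;
    if (threadIdx.x == blockDim.x - 1 || i == n - 1)
        tile[lid + 1] = (i < n - 1) ? u_old[i + 1] : 0.0f;

    __syncthreads();                                   // make all staged values visible block-wide

    if (i > 0 && i < n - 1)
        u_new[i] = tile[lid] + r * (tile[lid - 1] - 2.0f * tile[lid] + tile[lid + 1]);
}

Launched with blockDim.x equal to TILE, this computes the same update as a plain per-point kernel; the shared-memory tile only makes the group/boundary structure of the decomposition explicit.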
“…In this case, asynchronous updates can improve the computing performance by avoiding the synchronization bottleneck, but they consequently lead to a different consensus value (i.e. a different solution to the heat equation) [23]. Thus, we get an erroneous solution to the problem with asynchronous updates.…”
Section: Consensus Critical Applications
Mentioning confidence: 99%
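A hedged sketch of the asynchronous variant referred to here (an illustration of the idea, not the authors' code): dropping the double buffer and updating in place removes the per-step synchronization, but each thread then reads whichever neighbour values happen to be in memory, old or new, so the computed fixed point depends on thread scheduling and can differ from the synchronous solution.

// Asynchronous sketch (assumption, not the exact code of [23]): a single buffer is
// updated in place, so a thread may read a neighbour that has already been overwritten
// with its new value. Example launch: heat_step_async<<<grid, block>>>(d_u, N, 0.25f, 1000);
__global__ void heat_step_async(float* u_raw, int n, float r, int inner_iters)
{
    // volatile forces an actual load on every access, so other threads' in-place
    // writes are observed as they land, in an unspecified order.
    volatile float* u = u_raw;

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i <= 0 || i >= n - 1) return;

    for (int k = 0; k < inner_iters; ++k) {
        // No __syncthreads() or kernel boundary here: neighbours may be old or new.
        float left   = u[i - 1];
        float centre = u[i];
        float right  = u[i + 1];
        u[i] = centre + r * (left - 2.0f * centre + right);
    }
}

The nondeterministic interleaving of these in-place reads and writes is exactly what the quoted statement flags as problematic for consensus-critical applications.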