2010 5th International Conference on Industrial and Information Systems
DOI: 10.1109/iciinfs.2010.5578685
Recent trends in software and hardware for GPGPU computing: A comprehensive survey

Cited by 12 publications (5 citation statements)
References 6 publications
“…Each core can preserve a number of thread contexts, the count being specific to the architecture. CUDA provides zero-overhead scheduling: data-fetch latency is tolerated by switching between threads ([20], [21]).…”
Section: A GPU Architecture (mentioning)
confidence: 99%
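The zero-overhead scheduling described in the excerpt above comes down to simple occupancy arithmetic: the scheduler needs enough resident warps that some warp always has arithmetic work while others wait on memory. A minimal sketch, with illustrative cycle counts that are assumptions rather than figures from the survey:

```python
import math

def warps_to_hide_latency(mem_latency_cycles, compute_cycles_per_warp):
    """Minimum number of resident warps so that, while one warp waits on a
    memory fetch, the scheduler can always switch to another warp that has
    arithmetic work ready (the zero-overhead switching described above)."""
    return math.ceil(mem_latency_cycles / compute_cycles_per_warp) + 1

# Assumed figures: ~400-cycle global-memory latency, and each warp supplies
# 20 cycles of independent arithmetic between its memory requests.
print(warps_to_hide_latency(400, 20))  # → 21
```

With fewer resident warps than this, the cores stall on memory; with more, the extra warps add no further latency hiding but do consume register and context resources.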
“…The memory requests from a warp to shared memory can also create a bank conflict, which increases the number of clock cycles required to service the warp's memory access. The GPU hides the latency of one warp with the computation of another warp to overcome memory access delays.…”
Section: GPU Architecture and CUDA Programming Model (mentioning)
confidence: 99%
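The bank-conflict behavior quoted above can be checked with a small model: shared memory is divided into banks, each word maps to bank `(word_index mod num_banks)`, and the conflict degree is the largest number of distinct words a warp directs at one bank. A sketch assuming the common 32-bank, 4-byte-word layout (the bank count and word size are assumptions; they vary by architecture):

```python
from collections import Counter

def max_bank_conflict(addresses, num_banks=32, word_bytes=4):
    """Degree of bank conflict for one warp's shared-memory request: the
    largest number of distinct words mapping to the same bank. 1 means
    conflict-free; k means the access is serialized into k transactions.
    Duplicate addresses are collapsed first, modeling a broadcast."""
    banks = Counter((addr // word_bytes) % num_banks for addr in set(addresses))
    return max(banks.values())

# Stride-1 word accesses from 32 threads touch 32 distinct banks: conflict-free.
print(max_bank_conflict([4 * t for t in range(32)]))  # → 1
# Stride-2 word accesses touch only 16 banks, two words each: 2-way conflict.
print(max_bank_conflict([8 * t for t in range(32)]))  # → 2
```

This is why strided or column-wise shared-memory access patterns take more clock cycles per warp than unit-stride ones, as the excerpt notes.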
“…GPU hides the latency of one warp with a computation of another warp to overcome the memory access delays [5]. A warp cannot be launched until all the memory requests in that warp are executed.…”
Section: Introduction (mentioning)
confidence: 99%
“…With the advent of GPGPU, it is now possible to exploit the massive computing capability of a GPU through an API that makes it easily programmable. One such programming API, introduced by NVIDIA for parallel computing on the GPU, is the compute unified device architecture (CUDA). CUDA enables programmers to write a sequence of code that can implicitly run on the GPU for better performance.…”
Section: Introduction (mentioning)
confidence: 99%
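The CUDA programming model mentioned in the excerpt above maps one logical thread to one data element via the standard global-index formula `i = blockIdx * blockDim + threadIdx`. A minimal sketch that emulates this mapping sequentially on the CPU (the SAXPY kernel and the `launch` helper are illustrative constructions, not part of CUDA itself):

```python
def saxpy_kernel(thread_id, block_id, block_dim, a, x, y, out):
    """Body of a CUDA-style SAXPY kernel: each logical thread computes
    one output element from its global index."""
    i = block_id * block_dim + thread_id
    if i < len(x):                       # guard: grid may overshoot the data
        out[i] = a * x[i] + y[i]

def launch(kernel, grid_dim, block_dim, *args):
    """Sequentially emulate a CUDA <<<grid, block>>> launch: run the kernel
    body once per (block, thread) pair."""
    for b in range(grid_dim):
        for t in range(block_dim):
            kernel(t, b, block_dim, *args)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [10.0] * 5
out = [0.0] * 5
launch(saxpy_kernel, 2, 4, 2.0, x, y, out)   # 2 blocks of 4 threads cover 5 elements
print(out)  # → [12.0, 14.0, 16.0, 18.0, 20.0]
```

On a real GPU the same kernel body runs concurrently across warps, which is what lets the hardware overlap one warp's memory latency with another warp's arithmetic, as the other excerpts describe.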