2009
DOI: 10.1007/978-3-642-01970-8_92

Power Consumption of GPUs from a Software Perspective

Abstract: GPUs are now considered serious challengers for high-performance computing solutions. Their power consumption can reach 300 W, which may lead to power supply and thermal dissipation problems in computing centers. In this article we investigate, using measurements, how and where modern GPUs use energy during various computations in a CUDA environment.

Cited by 73 publications (54 citation statements)
References 5 publications

“…Likewise, the activity within the functional units can be reduced during 22 % of the operations executed in GPGPU computations. The power reduction brought by this technique, proportional to the activity reduction, is known to be a critical issue for GPUs [10]. Future work has to quantify it precisely.…”
Section: Results and Validation
confidence: 99%
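
For context, the proportionality invoked in this excerpt matches the standard CMOS dynamic power relation (the textbook formula, not one stated in the excerpt itself):

    P_dyn = \alpha C V^2 f

where \alpha is the switching-activity factor, C the switched capacitance, V the supply voltage, and f the clock frequency. Suppressing activity during 22 % of the operations lowers \alpha, and P_dyn scales linearly with it, hence the stated proportionality.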
“…Relative power and purchase costs vary constantly, and the power consumption of a GPU running CUDA applications depends on the configuration and the operations performed [13]. For instance, global memory accesses account for a large fraction of the power consumption and the effect on the power consumption of arithmetic instructions depends more on their throughput than on their type.…”
Section: Performance Results
confidence: 99%
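
A minimal sketch of the kind of microbenchmark such observations rest on: two CUDA kernels touching the same data, one dominated by global memory traffic and one by arithmetic throughput (kernel names, loop counts, and the measurement procedure are illustrative assumptions, not taken from the paper).

    // Memory-bound kernel: each thread streams one element through global memory.
    __global__ void memoryBound(const float* in, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = in[i];                     // cost dominated by global memory accesses
    }

    // Arithmetic-bound kernel: same data volume, but a long FMA chain per element.
    __global__ void arithmeticBound(const float* in, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            float x = in[i];
            for (int k = 0; k < 1024; ++k)             // iteration count is illustrative
                x = x * 1.000001f + 0.5f;              // compiles to fused multiply-adds
            out[i] = x;
        }
    }

    // Host side (not shown): launch each kernel in a long loop while an external
    // power meter, or NVML power readings on recent GPUs, samples board power,
    // then compare the averaged power of the two runs.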
“…functions run by the GPU and called by kernels or other device functions. A kernel is run in parallel on the GPU. The execution pattern is given at launch time by specifying a grid.…”
Section: CUDA Programming Model
confidence: 99%
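
A minimal, self-contained illustration of this model: a device function callable only from GPU code, a kernel, and a launch whose grid and block sizes (illustrative values, not taken from the cited work) fix the execution pattern.

    #include <cuda_runtime.h>

    // Device function: runs on the GPU, callable from kernels or other device functions.
    __device__ float square(float x) { return x * x; }

    // Kernel: executed in parallel by every thread of the grid.
    __global__ void scale(float* data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] = square(data[i]);
    }

    int main() {
        const int n = 1 << 20;
        float* d_data = nullptr;
        cudaMalloc(&d_data, n * sizeof(float));

        // The execution pattern is specified at launch time: here a 1D grid of
        // (n + 127) / 128 blocks, each with 128 threads (sizes chosen arbitrarily).
        scale<<<(n + 127) / 128, 128>>>(d_data, n);
        cudaDeviceSynchronize();

        cudaFree(d_data);
        return 0;
    }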
“…The velocity index may correspond to any of the dimensions but the minor one, in order to preserve coalescence. According to [1,24], the SMs contain translation look-aside buffers (TLB) for global memory. Using the least significant dimension possible to span the velocity distribution reduces the occurrences of TLB misses.…”
Section: Proposed Implementation
confidence: 99%
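
A hedged sketch of the layout this describes, for a distribution array indexed by space and velocity (the kernel name, the f[z][y][v][x] ordering, and the identity update are assumptions made for illustration, not taken from the cited implementation).

    // f stored as f[z][y][v][x]: the spatial x index is the minor (fastest-varying)
    // dimension, so consecutive threads touch consecutive addresses and global
    // memory accesses stay coalesced. The velocity index v is the least significant
    // of the remaining dimensions, which keeps the nv values of one lattice site in
    // a small address range and thus reduces TLB misses; making v the most
    // significant index would spread them over many pages.
    __global__ void streamStep(const float* f, float* fOut,
                               int nx, int ny, int nz, int nv) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;   // minor dimension
        int y = blockIdx.y;
        int z = blockIdx.z;
        if (x >= nx) return;

        for (int v = 0; v < nv; ++v) {
            size_t idx = (((size_t)z * ny + y) * nv + v) * nx + x;
            fOut[idx] = f[idx];   // placeholder for the actual streaming/collision update
        }
    }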