2017
DOI: 10.1145/3075619

Scratchpad Sharing in GPUs

Abstract: GPGPU applications exploit on-chip scratchpad memory available in Graphics Processing Units (GPUs) to improve performance. The amount of thread-level parallelism (TLP) present in the GPU is limited by the number of resident threads, which in turn depends on the availability of scratchpad memory in its streaming multiprocessor (SM). Since the scratchpad memory is allocated at thread block granularity, part of the memory may remain unutilized. In this paper, we propose architectural and compiler optimization…
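As a rough illustration of why per-block scratchpad allocation caps TLP, consider the CUDA sketch below. The kernel, tile size, and names are hypothetical and not taken from the paper; it only shows that each resident thread block is charged its full __shared__ request, so the SM's shared-memory budget divided by that request bounds how many blocks (and hence threads) can be resident at once, even when part of the tile goes unused.

#include <cstdio>
#include <cuda_runtime.h>

#define TILE 1024   // floats of scratchpad requested per thread block (hypothetical size)

// Hypothetical kernel: stages data through shared memory (the on-chip
// scratchpad) and writes back a scaled copy. Assumes blockDim.x <= TILE.
__global__ void scaleWithScratchpad(const float *in, float *out, int n) {
    __shared__ float tile[TILE];              // per-block scratchpad allocation
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n) tile[threadIdx.x] = in[idx]; // stage through scratchpad
    __syncthreads();                          // every thread reaches the barrier
    if (idx < n) out[idx] = 2.0f * tile[threadIdx.x];
}

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    size_t perBlock = TILE * sizeof(float);   // scratchpad charged per resident block
    // The SM can host at most sharedMemPerMultiprocessor / perBlock blocks,
    // even if a block leaves part of its tile unused; that under-utilization
    // is what scratchpad sharing targets.
    printf("shared memory per SM : %zu bytes\n", prop.sharedMemPerMultiprocessor);
    printf("requested per block  : %zu bytes\n", perBlock);
    printf("resident-block bound : %zu (from scratchpad alone)\n",
           prop.sharedMemPerMultiprocessor / perBlock);
    return 0;
}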

Cited by 1 publication (1 citation statement)
References 33 publications
“…Jenga [33] and Hotpads [34] organize the hierarchy of caches as a collection of SRAM banks. GPUs [16] and many-cores use software-managed scratchpads. These approaches target CPUs or GPUs and push address translation and page-table walking into software.…”
Section: Related Work
confidence: 99%