High Performance Parallelism Pearls 2015
DOI: 10.1016/b978-0-12-803819-2.00010-0
Exploring Use of the Reserved Core

Cited by 4 publications (1 citation statement)
References 0 publications
“…For example, from its inception, the GPU-enabled Unified scheduler has been designed for an arbitrary number of GPUs per node [56], not just single-GPU nodes. Furthermore, multiple MICs have also been considered [57,58]. The team originally ran to machine capacity on both NSF Keeneland systems, each with 3 GPUs per node, and they regularly run on their own local 2-GPU/node cluster at the SCI Institute.…”
Section: Performance Portability
confidence: 99%