2019 IEEE 30th International Conference on Application-Specific Systems, Architectures and Processors (ASAP)
DOI: 10.1109/asap.2019.00013
Understanding Performance Gains of Accelerator-Rich Architectures

Cited by 5 publications (3 citation statements)
References 20 publications
“…Hardware specialization is the main technique for improving power-performance efficiency in emerging compute platforms. By customizing compute engines, memory hierarchies, and data representations [12,21,43], hardware accelerators provide efficient computation in various application domains like artificial intelligence, image processing, and graph analysis [15, 28-30, 61, 87]. At the same time, there is a growing trend in using domain-specific languages (DSLs) for boosting development productivity, e.g., TensorFlow for deep learning [1], Halide for image processing [59], and GraphIt for graph applications [88].…”
Section: Introduction (mentioning)
confidence: 99%
“…Recent work [5][6][7][8] shows that CGRAs accompanied with spatial dataflows are well suited to exploit parallelism and are tractable for the mapping of dataflows on the architecture. Furthermore, several researchers have shown that this technology is making further progress with efforts in related areas: DSL and compilers, stream input/output [9], and architectures [7], bringing up the relevance of research in the area.…”
(mentioning)
confidence: 99%
“…Recent work (Fang et al., 2019; Akbari et al., 2019; Liu et al., 2019b; Weng et al., 2019) shows that CGRAs accompanied with spatial dataflows are well suited to exploit parallelism and are tractable for the mapping of dataflows on the architecture. Furthermore, several researchers have shown that this technology is making further progress with efforts in related areas: DSL and compilers, stream input/output (Nowatzki et al., 2017), and architectures (Liu et al., 2019b), bringing up the relevance of research in the area.…”
Section: Introduction (mentioning)
confidence: 99%