2021
DOI: 10.1109/tc.2021.3128266
Enabling One-size-fits-all Compilation Optimization across Machine Learning Computers for Inference

Cited by 2 publications (2 citation statements); References 29 publications.
Citing publications: 2023, 2024.
“…TVM [12] is an end-to-end machine learning compiler framework for CPUs, GPUs, and accelerators. It is an intermediary platform that can integrate various applications and systems, including blockly applications [13], runtime support for Android NNAPI [14], compiler optimizations across many machine learning computers [15] and underlying integration with a GPU [16], [17]. Fig.…”
Section: Background, A. TVM and Hybrid Script (mentioning)
confidence: 99%
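For context on the framework named in the statement above, the following is a minimal sketch (Python) of the end-to-end flow the citing papers refer to: a small model is lowered and run with TVM's Relay frontend and graph executor. This is an illustration only, not code from the cited work, and module paths such as tvm.contrib.graph_executor vary slightly between TVM releases.

```python
# A minimal sketch, assuming a recent TVM build with the Relay frontend and
# graph executor available. It compiles a tiny function for a CPU target.
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Build a tiny Relay program: y = x + 1
x = relay.var("x", shape=(1, 4), dtype="float32")
y = relay.add(x, relay.const(1.0, "float32"))
mod = tvm.IRModule.from_expr(relay.Function([x], y))

# Compile for the local CPU; a "cuda" or accelerator target string would
# retarget the same module to other backends.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm")

# Run the compiled artifact through TVM's graph executor.
dev = tvm.cpu(0)
runtime = graph_executor.GraphModule(lib["default"](dev))
runtime.set_input("x", np.zeros((1, 4), dtype="float32"))
runtime.run()
print(runtime.get_output(0).numpy())  # expected: [[1. 1. 1. 1.]]
```

Swapping the target string is how the same Relay module is retargeted to GPUs or accelerators, which is the kind of retargetability that cross-machine compilation work such as the cited paper builds on.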
“…These languages allow programmers to easily specify the essential structure of a problem without concern for low-level details. Crucially, this separation of concerns enables domain-specific compilers [54], [16] to efficiently map programs down to a wide range of idiosyncratic accelerators.…”
Section: Introduction (mentioning)
confidence: 99%