2022 IEEE/ACM International Symposium on Code Generation and Optimization (CGO) 2022
DOI: 10.1109/cgo53902.2022.9741277
SPNC: An Open-Source MLIR-Based Compiler for Fast Sum-Product Network Inference on CPUs and GPUs

Cited by 3 publications (2 citation statements)
References 6 publications
“…While our overall contribution is to connect a simulating infrastructure for the study of the human heart to compiler technology, our work shows a proof-of-concept MLIR code generation that does not require an additional dialect proposal. Though this could be perceived as lacking since many previous work on MLIR [9,28,29] have proposed new dialects, we have found no justification for the addition of a new language. Indeed, MLIR includes all necessary dialects and operations required to optimize the execution of ionic models.…”
Section: Representation of Ionic Models (contrasting)
Confidence: 59%
“…The approach has shown significant speedup in comparison to state-of-the-art solutions for climate and weather simulation, proving that extra levels of abstractions can help to devise new optimizations. Sommer et al [29] propose a dialect, and a lowering process to optimize sum-product network inference in both CPUs and GPUs, while DistIR [28] is an IR for distributed computation that employs MLIR to optimize neural networks. Recently, many works have proposed to extend MLIR with new dialects to analyze, optimize and accelerate heterogeneous applications in a variety of domains [12,18,24].…”
Section: Related Work (mentioning)
Confidence: 99%