2019 IEEE/ACM International Workshop on Heterogeneous High-Performance Reconfigurable Computing (H2RC)
DOI: 10.1109/h2rc49586.2019.00009
High-Throughput Multi-Threaded Sum-Product Network Inference in the Reconfigurable Cloud

Cited by 6 publications (2 citation statements)
References 5 publications
“…The work in [36] targets PCs through fully-spatial mapping on FPGA. However, since large PCs cannot be completely spatially mapped, their approach is limited to only small DAGs with fewer than 500 nodes.…”
Section: FPGA
Citation type: mentioning (confidence: 99%)
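The excerpt above concerns probabilistic circuits (PCs), i.e. sum-product networks, whose inference is a bottom-up pass over a DAG of weighted-sum and product nodes. As a rough illustration of why a fully-spatial mapping is bounded by DAG size, the following minimal Python sketch (the node classes and the toy circuit are illustrative, not the accelerator design from [36]) evaluates such a DAG: in a fully-spatial FPGA design, every sum and product node below would become its own dedicated arithmetic unit, so resource usage grows with the node count and only small DAGs fit on the device.

```python
# Minimal sum-product network (probabilistic circuit) sketch, evaluated bottom-up.
# In a fully-spatial FPGA mapping, each Sum/Product node is a dedicated
# adder/multiplier, so hardware cost scales with the number of DAG nodes.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Leaf:
    var: int        # input variable index
    p: float        # Bernoulli parameter of this leaf distribution
    def evaluate(self, x):
        return self.p if x[self.var] == 1 else 1.0 - self.p

@dataclass
class Product:
    children: List = field(default_factory=list)
    def evaluate(self, x):
        result = 1.0
        for c in self.children:          # one multiplier per child edge in hardware
            result *= c.evaluate(x)
        return result

@dataclass
class Sum:
    children: List = field(default_factory=list)
    weights: List[float] = field(default_factory=list)
    def evaluate(self, x):
        return sum(w * c.evaluate(x)     # one multiply-accumulate per child edge
                   for w, c in zip(self.weights, self.children))

# Toy PC over two binary variables: a weighted mixture of two product nodes.
spn = Sum(children=[Product([Leaf(0, 0.8), Leaf(1, 0.3)]),
                    Product([Leaf(0, 0.2), Leaf(1, 0.9)])],
          weights=[0.6, 0.4])
print(spn.evaluate([1, 0]))  # likelihood of one input sample: 0.344
```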
“…The same accelerator was also used to demonstrate TaPaSCo's portability: As domain experts from the machine learning community usually do not run and maintain FPGA boards in on-premise setups, the accelerator architecture was ported [51] to the reconfigurable cloud, namely the F1 instances found in Amazon's AWS EC2 cloud. As the TaPaSCo architecture is completely platform-independent and TaPaSCo provides suitable platform integration for the F1 instances, the accelerator could be ported without any changes to the core itself.…”
Section: Custom HDL-based Accelerators
Citation type: mentioning (confidence: 99%)
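The statement above describes a separation of concerns: the accelerator core is platform-independent, while TaPaSCo supplies the device-specific platform integration (here for AWS F1 instances), so porting required no changes to the core. The sketch below is a hypothetical Python illustration of that design principle only; it is not the actual TaPaSCo API, and all class and method names are made up.

```python
# Hypothetical sketch of the portability principle described above (NOT the
# real TaPaSCo API): the inference core stays unchanged, and only a thin
# platform layer differs between an on-premise board and the AWS F1 cloud.

from abc import ABC, abstractmethod

class PlatformLayer(ABC):
    """Device-specific integration: bitstream loading, DMA, kernel launch."""
    @abstractmethod
    def load_bitstream(self, path: str) -> None: ...
    @abstractmethod
    def run_kernel(self, kernel_id: int, args: list) -> int: ...

class OnPremisePCIePlatform(PlatformLayer):
    def load_bitstream(self, path):
        print(f"flashing local PCIe FPGA with {path}")
    def run_kernel(self, kernel_id, args):
        print("launching via local driver")
        return 0

class AwsF1Platform(PlatformLayer):
    def load_bitstream(self, path):
        print(f"loading AFI built from {path} on an F1 instance")
    def run_kernel(self, kernel_id, args):
        print("launching via the F1 shell integration")
        return 0

class SpnInferenceAccelerator:
    """Platform-independent core wrapper: identical on every target."""
    KERNEL_ID = 42                       # made-up kernel identifier
    def __init__(self, platform: PlatformLayer):
        self.platform = platform
    def infer(self, samples: list) -> int:
        return self.platform.run_kernel(self.KERNEL_ID, samples)

# The same core wrapper runs on-premise or in the cloud; only the platform changes.
for platform in (OnPremisePCIePlatform(), AwsF1Platform()):
    platform.load_bitstream("spn_inference.bit")
    SpnInferenceAccelerator(platform).infer([1, 0])
```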