Proceedings of the 51st International Conference on Parallel Processing 2022
DOI: 10.1145/3545008.3545091
From RTL to CUDA: A GPU Acceleration Flow for RTL Simulation with Batch Stimulus

Cited by 16 publications (2 citation statements)
References 6 publications
“…The result is that it is currently infeasible to evaluate the performance of large workloads on the RTL level. Recent works [27]–[29] addressed the problem of accelerating RTL simulation by leveraging techniques like batch processing, task-level dataflow execution, low-level parallelism, selective execution, etc. These orthogonal techniques to speed up simulation may not scale well for very large workloads.…”
Section: E. The Quest for Advanced and Efficient Sampling
Citation type: mentioning; confidence: 99%
“…RTLFlow [21] is a GPU-accelerated RTL simulator that exploits stimulus-level parallelism to speed up simulation by running many independent simulations on a GPU. RTLFlow improves execution speed by up to 40× over Verilator for many stimuli, but it runs an order of magnitude slower than Verilator with a single stimulus.…”
Section: Parallel RTL Simulation
Citation type: mentioning; confidence: 99%
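
To make the idea of stimulus-level parallelism concrete, the sketch below shows one way a GPU can simulate many independent stimuli at once: every thread advances its own copy of a toy design (an accumulator register behind an adder) through the same cycle loop. The kernel and variable names (simulate_batch, stimuli, results) and the toy design are illustrative assumptions, not the paper's actual flow, which compiles real RTL designs into CUDA.

```cuda
// Minimal sketch of stimulus-level parallelism for RTL simulation.
// Each GPU thread simulates the same toy design against its own
// independent stimulus stream, so the whole stimulus batch advances
// one cycle at a time in lock-step.

#include <cstdio>
#include <cuda_runtime.h>

__global__ void simulate_batch(const unsigned* stimuli,  // [num_stimuli * num_cycles]
                               unsigned* results,        // [num_stimuli]
                               int num_stimuli,
                               int num_cycles) {
    int s = blockIdx.x * blockDim.x + threadIdx.x;        // one thread per stimulus
    if (s >= num_stimuli) return;

    unsigned reg = 0;                                     // per-stimulus register state
    for (int c = 0; c < num_cycles; ++c) {
        unsigned in = stimuli[s * num_cycles + c];        // this stimulus's input at cycle c
        reg += in;                                        // combinational logic + register update
    }
    results[s] = reg;                                     // final state for this stimulus
}

int main() {
    const int num_stimuli = 1 << 14;                      // 16K independent stimuli
    const int num_cycles  = 1024;
    size_t in_bytes  = size_t(num_stimuli) * num_cycles * sizeof(unsigned);
    size_t out_bytes = size_t(num_stimuli) * sizeof(unsigned);

    unsigned *d_stimuli, *d_results;
    cudaMalloc(&d_stimuli, in_bytes);
    cudaMalloc(&d_results, out_bytes);
    cudaMemset(d_stimuli, 0, in_bytes);                   // placeholder stimulus data (all zeros)

    int threads = 256;
    int blocks  = (num_stimuli + threads - 1) / threads;
    simulate_batch<<<blocks, threads>>>(d_stimuli, d_results, num_stimuli, num_cycles);

    unsigned first = 0;
    cudaMemcpy(&first, d_results, sizeof(unsigned), cudaMemcpyDeviceToHost);
    printf("result of stimulus 0: %u\n", first);

    cudaFree(d_stimuli);
    cudaFree(d_results);
    return 0;
}
```

Because each thread owns its own state, this batched approach only pays off when many stimuli are available, which is consistent with the citing paper's observation that the speedup over Verilator disappears for a single stimulus.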