2020
DOI: 10.1007/978-3-030-48340-1_19
DataRaceOnAccelerator – A Micro-benchmark Suite for Evaluating Correctness Tools Targeting Accelerators

Cited by 6 publications (3 citation statements) | References 17 publications
“…In particular, we used the Java benchmarks from the IBM Contest suite [19], Java Grande suite [60], DaCapo [9], and SIR [16]. In addition, we used OpenMP benchmark programs, whose execution lengths and number of threads can be tuned, from DataRaceOnAccelerator [54], DataRaceBench [33], OmpSCR [17], and the NAS parallel benchmarks [6], as well as large OpenMP applications contained in the following benchmark suites: CORAL [1, 2], ECP proxy applications [3], and the Mantevo project [4]. Each benchmark was instrumented and executed in order to log a single concurrent trace, using the tools RV-Predict [51] (for Java programs) and ThreadSanitizer [56] (for OpenMP programs).…”
Section: Methods
Citation type: mentioning
Confidence: 99%
“…Also, each author uses a different set of benchmarks. It would be interesting to test all the mentioned tools with the benchmark suite created by the work of Schmitz et al. [34], for a fair comparison between tools.…”
Section: Available Solutions
Citation type: mentioning
Confidence: 99%
“…In the papers presenting the various tools, those tools are compared with each other to show that, for specific kernels, the new tool is the best at that point in time. It would be better to use a standard benchmark suite, like the suite by Schmitz et al. [34], that is uniformly used and addresses the errors we mention in this paper. Additionally, it should support all the CUDA and OpenCL features.…”
Section: Research Directions
Citation type: mentioning
Confidence: 99%