2016 IEEE 23rd International Conference on Software Analysis, Evolution, and Reengineering (SANER)
DOI: 10.1109/saner.2016.70
AutoBench: Finding Workloads That You Need Using Pluggable Hybrid Analyses

Abstract: Researchers often rely on benchmarks to demonstrate the feasibility or efficiency of their contributions. However, finding the right benchmark suite can be a daunting task: existing benchmark suites may be outdated, known to be flawed, or simply irrelevant for the proposed approach. Creating a proper benchmark suite is challenging, extremely time consuming, and also (unless it becomes widely popular) a thankless endeavor. In this paper, we introduce a novel approach to help researchers find relevant workloads for thei…

Cited by 8 publications (6 citation statements)
References 18 publications
“…We also provide a list of publicly available open‐source software projects that heavily use stream processing (see Table 2), which could be considered as potential workload candidates for benchmarking by researchers and tool builders. Indeed, prior work 20,48 has shown promising results on using JUnit tests as workloads for benchmarking. Suitable workload candidates would use stream code from modern open‐source applications, complementing benchmark suites made either from stream code adaptations of traditional MapReduce algorithms 78 or by converting workloads designed for relational databases to stream‐based code 79 …”
Section: Discussion
confidence: 99%
“…Some authors have explored unit tests as a source of workloads to create benchmark suites. Zheng et al 48 explore the feasibility of using unit tests available on open‐source projects as workloads for custom benchmarks. They find more than 500 Java and Scala projects containing unit tests that can be considered as good candidate workloads for benchmarking purposes.…”
Section: Related Work
confidence: 99%
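The mining step that Zheng et al. describe (scanning open-source projects for unit tests that could serve as benchmark workloads) could be sketched with a simple file-name heuristic. This is an illustrative assumption, not the actual selection criterion used by AutoBench or Zheng et al.:

```python
import os

def find_candidate_test_workloads(root):
    """Walk a project tree and collect JUnit-style Java/Scala test sources
    that could serve as candidate benchmark workloads.

    The naming convention checked here (FooTest.java, TestBar.scala) is a
    hypothetical heuristic for illustration only."""
    candidates = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            # Only consider JVM source files, since the cited work targets
            # Java and Scala projects.
            is_jvm_source = name.endswith((".java", ".scala"))
            stem = name.rsplit(".", 1)[0]
            looks_like_test = stem.startswith("Test") or stem.endswith("Test")
            if is_jvm_source and looks_like_test:
                candidates.append(os.path.join(dirpath, name))
    return sorted(candidates)
```

Applied over a corpus of checked-out repositories, a scan like this would yield a first cut of candidate workloads, which would then still need to be filtered by whether the tests actually exercise representative application behavior.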
“…To be effective, it is crucial that DPA is applied to workloads covering a significant portion of application code, as DPA cannot provide any information on code that is not executed. Previous work [15], [16] has shown that unit tests are a viable source of workloads for discovering runtime properties with DPA. In this section, we show that the test suite provided by TESA can also be used to extend the effectiveness of DPA tools.…”
Section: Case Study: Running DPA With Extended Code Coverage
confidence: 99%
“…A possible future direction could be to extend the results presented here by conducting a large-scale characterization of task granularity, performing related optimizations on a broad range of applications. To this end, one could integrate tgp into existing frameworks for large-scale dynamic analysis (such as AutoBench [151]), to automatically collect task-granularity profiles from many publicly available open-source workloads. Moreover, task-granularity analysis could be performed on multiple environments, including Cloud-based ones.…”
Section: Large-Scale Task-Granularity Analysis and Optimization
confidence: 99%