Proceedings of the 9th ACM SIGPLAN-SIGSOFT Workshop on Program Analysis for Software Tools and Engineering (PASTE 2010)
DOI: 10.1145/1806672.1806684

Towards a unified fault-detection benchmark

Abstract: Developing a unified benchmark to compare and contrast ways to detect faults is an important aspect for the future of fault detection. In this paper, we explore benchmarks used in the evaluation of popular static analysis tools in order to raise awareness for the community to work towards a unified benchmark. Additionally, we introduce an initial design for a bottom-up repository to integrate benchmarks directly with the web interface of the accessible fault taxonomy, the Common Weakness Enumeration (CWE). The…

Cited by 2 publications (3 citation statements)
References 36 publications (48 reference statements)
“…Finally, Schmeelk [20] has recently introduced the design of a repository, in order to integrate benchmarks with publicly available fault taxonomies like the CWE. He also pinpoints the need for a unified benchmarking framework.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
“…Benchmarks are very important for evaluating pRogram Analysis and Testing (RAT) algorithms and tools [6][7][8][9][10]. Different benchmarks exist to evaluate different RAT aspects, such as how scalable RAT tools are, how fast they can achieve high test coverage, how thoroughly they handle different language extensions, how well they translate and refactor code, how effective RAT tools are in executing applications symbolically or concolically, and how efficient these tools are in optimizing, linking, and loading code in compiler-related technologies, as well as profiling.…”
Section: Introduction (citation type: mentioning)
confidence: 99%