2011 IEEE 11th International Working Conference on Source Code Analysis and Manipulation
DOI: 10.1109/scam.2011.24

Counting Bugs is Harder Than You Think

Abstract: Software Assurance Metrics And Tool Evaluation (SAMATE) is a broad, inclusive project at the U.S. National Institute of Standards and Technology (NIST) with the goal of improving software assurance by developing materials, specifications, and methods to test tools and techniques and measure their effectiveness. We review some SAMATE sub-projects: web application security scanners, malware research protocol, electronic voting systems, the SAMATE Reference Dataset, a public repository of thousands of example pro…

Cited by 10 publications (9 citation statements); references 16 publications. Citing publications span 2013–2023.

“…The challenge is in ensuring that the count is well-defined, repeatable, and reproducible. A simple concept like "software bug" can have hidden ambiguity that interferes with counting [40].…”
Section: Abatement of Philosophical Quagmires
confidence: 99%
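
To make the counting ambiguity concrete, here is a minimal illustrative C sketch (not taken from the cited paper; the helper name, buffer size, and inputs are invented): a single missing bounds check in one helper causes overflows at two call sites, so a tool or analyst can defensibly report one, two, or more "bugs" for the same underlying flaw.

/* Illustrative only: one root cause, several defensible bug counts. */
#include <stdio.h>
#include <string.h>

#define BUF_LEN 8

/* Root cause: no bounds check before copying (potential CWE-120). */
static void copy_name(char *dst, const char *src) {
    strcpy(dst, src);
}

int main(void) {
    char first[BUF_LEN];
    char last[BUF_LEN];

    /* Two overflow sites share the single flawed helper above:
       is this 1 bug (the helper), 2 bugs (each call site), or
       more (one per feasible path)? Different tools count differently. */
    copy_name(first, "Bartholomew");    /* 11 chars + NUL > BUF_LEN */
    copy_name(last, "Featherstone");    /* 12 chars + NUL > BUF_LEN */

    printf("%s %s\n", first, last);
    return 0;
}
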
“…Though bug counting is widely practiced and arguably "successful" as a measurement technique, there is no consensus definition of "bug" that is tight enough to ensure reproducible counts [40].…”
Section: Challenges
confidence: 99%
“…Complementing our better understanding of where software faults lie, organizations such as NIST and MITRE have developed code repositories containing known faults [5,61,39]. This provides a benchmark by which analysis tools can be compared.…”
Section: Software Faults and Failures
confidence: 99%
“…The comparison of results across tools, and for accuracy against the actual number of weaknesses, is performed by human analysts. These comparisons have proved to be intractable [1], and illustrate the need for precision and automation, of the Juliet Test Suite variety, for real-world comparisons to be feasible. This paper reports on our current project, supported by the Department of Homeland Security (DHS), to provide such precision and automation for real-world comparisons using formal methods technology that is currently available.…” [Footnote in the citing paper: This work is sponsored by DHS contract FA8750-12-C-0277.]
Section: Introduction
confidence: 99%