2013
DOI: 10.6028/nist.sp.500-297
Report on the Static Analysis Tool Exposition (SATE) IV

Abstract: The NIST Software Assurance Metrics And Tool Evaluation (SAMATE) project conducted the fourth Static Analysis Tool Exposition (SATE IV) to advance research in static analysis tools that find security defects in source code. The main goals of SATE were to enable empirical research based on large test sets, encourage improvements to tools, and promote broader and more rapid adoption of tools by objectively demonstrating their use on production software. Briefly, eight participating tool makers ran their tools on…

Cited by 54 publications (57 citation statements). References 8 publications (11 reference statements).
“…For our experiment, we used over 21,000 test cases for 22 C/C++ CWEs and over 7500 test cases for 19 Java CWEs. Previous evaluations based on Juliet either did not report detailed quantitative results [13,14] or used a very small sample of only 152 test cases related to vulnerabilities in C code only [15]. Our study reports several performance metrics (i.e., accuracy, recall, probability of false alarm, and G-score) for individual CWEs, as well as across all considered CWEs.…”
Section: Related Work
confidence: 96%
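The citation above reports accuracy, recall, probability of false alarm, and G-score per CWE. As a hedged sketch, these can all be derived from a per-CWE confusion matrix; the G-score definition used here (harmonic mean of recall and 1 − probability of false alarm) is an assumption and may differ from the cited study's exact formulation.

```python
# Sketch: tool-evaluation metrics from a confusion matrix.
# tp/fp/tn/fn counts are illustrative, not from the SATE IV data.

def detection_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)            # true positive rate
    pfa = fp / (fp + tn)               # probability of false alarm
    # Assumed G-score: harmonic mean of recall and (1 - pfa).
    g_score = 2 * recall * (1 - pfa) / (recall + (1 - pfa))
    return accuracy, recall, pfa, g_score

acc, rec, pfa, g = detection_metrics(tp=40, fp=10, tn=90, fn=10)
```

Computing all four metrics per CWE, rather than one aggregate number, is what lets a study like the one quoted compare tool behavior across weakness classes.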
“…Note that the exact matching is likely to underestimate the tools performance because CWEs are related and form a hierarchical structure in which lower level CWEs are more specific instances of their parent(s) CWEs. Earlier works used groupings of CWEs in the evaluation [14], but concluded that these groupings have to be improved. In general, due to the complex relationships among CWEs, there is no easy and consistent way to group CWEs in groups of related types of weaknesses [14].…”
Section: Threats To Validity
confidence: 97%