Proceedings of the 9th Annual Cyber and Information Security Research Conference (CISR '14), 2014
DOI: 10.1145/2602087.2602101
Towards modeling the behavior of static code analysis tools

Cited by 8 publications (10 citation statements); citing publications span 2015–2022.
References 3 publications.
“…We start with description of related works that used the Juliet benchmark as an input in evaluation of static code analysis tools [13][14][15][16][17]. These works are the closest to our work.…”
Section: Related Work
confidence: 99%
“…Recently, researchers have started to use Juliet test suite for evaluation of static code analysis [15,17]. A small subset of Juliet test suite was used in [15] to compare the performance of nine tools in detecting security vulnerabilities in C code.…”
Section: Related Work
confidence: 99%
“…More specifically, we integrated each static code analysis tool into the Static Code Analysis Tool Evaluator (SCATE) [19], a framework for evaluating the quality of static code analysis tools. SCATE uses rules similar to the foregoing to automatically classify static code analysis tool warnings in the Juliet test suite.…”
Section: // JULIET CWE476_NULL_Pointer_Dereference__char_73b.c
confidence: 99%
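The excerpt does not spell out SCATE's classification rules, but Juliet's well-known naming conventions (the target CWE encoded in the file name, flaws seeded in bad* functions, fixed code in good* variants) make the general idea easy to sketch. The following is a minimal, hypothetical Python sketch of such rules; the Warning record and the classify() helper are illustrative assumptions, not SCATE's actual interface.

```python
# Hypothetical sketch of Juliet-style warning classification.
# File/function naming follows Juliet's documented conventions;
# the Warning type and classify() helper are illustrative only.
import re
from dataclasses import dataclass

@dataclass
class Warning:
    file: str       # e.g. "CWE476_NULL_Pointer_Dereference__char_73b.c"
    function: str   # e.g. "bad" or "goodG2B"
    cwe: int        # CWE id reported by the tool, e.g. 476

def expected_cwe(file: str):
    """Extract the target CWE id from a Juliet test-case file name."""
    m = re.match(r"CWE(\d+)_", file)
    return int(m.group(1)) if m else None

def classify(w: Warning) -> str:
    """Label a tool warning against Juliet's built-in ground truth."""
    target = expected_cwe(w.file)
    if target is None:
        return "unknown"            # not a Juliet test case
    if w.cwe != target:
        return "false positive"     # wrong defect class for this test case
    if "bad" in w.function.lower():
        return "true positive"      # flaws are seeded in the bad* functions
    if "good" in w.function.lower():
        return "false positive"     # good* variants contain the fixed code
    return "unknown"

print(classify(Warning("CWE476_NULL_Pointer_Dereference__char_73b.c", "bad", 476)))
# -> true positive
```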
“…For example, one study evaluated how different static code analysis tools performed on the Juliet test suite [18] from the National Institute of Standards and Technology and discovered a large number of false positives [19]. For two commercial off-the-shelf static code analysis tools used in the study, as many as 59% and 63%, respectively, of the warnings generated were false positives.…”
confidence: 99%
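For clarity, a false-positive rate in this sense is the share of all emitted warnings that are wrong, i.e. fp / (fp + tp). A tiny sketch with made-up counts (the 590/410 split is purely illustrative, chosen only to reproduce the quoted 59%):

```python
# Illustration of the false-positive-rate figure quoted above.
# The warning counts are placeholders; only the formula is the point.
def false_positive_rate(fp: int, tp: int) -> float:
    """Fraction of emitted warnings that are false positives."""
    return fp / (fp + tp)

# e.g. a tool emitting 590 false and 410 true warnings -> 59%
print(f"{false_positive_rate(590, 410):.0%}")
```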