Abstract: The NIST Software Assurance Metrics And Tool Evaluation (SAMATE) project conducted the third Static Analysis Tool Exposition (SATE) in 2010 to advance research in static analysis tools that find security defects in source code. The main goals of SATE were to enable empirical research based on large test sets, encourage improvements to tools, and promote broader and more rapid adoption of tools by objectively demonstrating their use on production software.

Briefly, participating tool makers ran their tools on a set of programs. Researchers led by NIST performed a partial analysis of the tool reports. The results and experiences were reported at the SATE 2010 Workshop in Gaithersburg, MD, in October 2010. The tool reports and analysis were made publicly available in 2011.

This special publication consists of the following three papers. "The Third Static Analysis Tool Exposition (SATE 2010)," by Vadim Okun, Aurelien Delaitre, and Paul E. Black, describes the SATE procedure and provides observations based on the data collected. The other two papers are written by participating tool makers. "Goanna Static Analysis at the NIST Static Analysis Tool Exposition," by Mark Bradley, Ansgar Fehnker, Ralf Huuck, and Paul Steckler, introduces Goanna, which uses a combination of static analysis and model checking, and describes its SATE experience, tool results, and some of the lessons learned in the process. Serguei A. Mokhov introduces a machine learning approach to static analysis and presents MARFCAT's SATE 2010 results in "The use of machine learning with signal and NLP processing of source code to fingerprint, detect, and classify vulnerabilities and weaknesses with MARFCAT."
Keywords: Software security; static analysis tools; security weaknesses; vulnerability

Certain instruments, software, materials, and organizations are identified in this paper to specify the exposition adequately. Such identification is not intended to imply recommendation or endorsement by NIST, nor is it intended to imply that the instruments, software, or materials are necessarily the best available for the purpose.

NIST SP 500-283

This paper describes the SATE procedure and provides our observations based on the data collected. We improved the procedure based on lessons learned from our experience with previous SATEs. One improvement was selecting programs based on entries in the Common Vulnerabilities and Exposures (CVE) dataset. Other improvements were the selection of tool warnings that identify the CVE entries, expanding the C track to a C/C++ track, having larger test cases of up to 4 million lines of code, further clarifying the analysis categories, and having much more detailed analysis criteria.

This paper identifies several ways in which the released data and analysis are useful. First, the output from running many tools on production software can be used for empirical research. Second, the analysis of tool reports indicates actual weaknesses that exist in the software and that are reported by the tools. Third, the CVE-selected test cases conta...