
This is a work of the U.S. Government and is not subject to copyright protection in the United States.

The OWASP Foundation

OWASP

AppSec DC, October 2005

http://www.owasp.org/

The Software Assurance Metrics and Tool Evaluation (SAMATE) Project

Paul E. Black
Computer Scientist, NIST
paul.black@nist.gov
+1 301-975-4794


Outline

Overview of the NIST SAMATE project

Purposes of tool and technique evaluation

Software and effectiveness metrics

Report of workshop on Defining the State of the Art in Software Security Tools

Final comments


The NIST SAMATE Project

Surveys
– Tools
– Researchers and companies

Workshops & conference sessions
– Taxonomy of software assurance (SwA) functions & techniques
– Order of importance (cost/benefit, criticalities, …)
– Gaps and research agendas
– Studies to develop metrics

Enable tool evaluations
– Write detailed specification
– Develop test plans and reference material
– Collect tool evaluations, case studies, and comparisons

http://samate.nist.gov/


Taxonomy of SwA Tool Functions and Techniques

Concept or business need
– Use cases
– Changes to current systems

Requirements and design
– Consistency
– Completeness
– Compliance

Implementation
– The usual …

Assessment and acceptance
– External
  – Automated vulnerability scanners
  – Penetration test assistants
  – Other standard testing techniques: usage, spec-based, statistical, worst-case/criticality, etc.
– Insider
  – Automated code scanners: syntactic (e.g., “grep”) or semantic (a minimal syntactic scanner is sketched after this list)
  – Code review assistants: source code, virtual machine code (e.g., Java bytecode or .NET intermediate code), or binary (debugger, decompiler)

Operation
– Operator training
– Auditing
– Penetration test
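To make the syntactic/semantic distinction concrete, here is a minimal sketch of a syntactic (grep-style) scanner over C source; the deny-list and output format are illustrative only, not any particular tool's.

```python
import re
import sys

# Illustrative deny-list of C library calls that syntactic scanners
# commonly flag; any real tool's list would be far longer.
RISKY_CALLS = re.compile(r"\b(gets|strcpy|strcat|sprintf|scanf)\s*\(")

def scan(path):
    """Print each line of a C source file that calls a risky function."""
    with open(path) as src:
        for lineno, line in enumerate(src, start=1):
            match = RISKY_CALLS.search(line)
            if match:
                print(f"{path}:{lineno}: call to {match.group(1)!r}: {line.strip()}")

if __name__ == "__main__":
    for path in sys.argv[1:]:
        scan(path)
```

A semantic scanner would instead parse the program and reason about whether attacker-controlled data can actually reach the call, which is why it can suppress many of the false positives such a pattern match produces.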


Planned Workshops

Enumerate SwA functions and techniques
– Approach (code vs. spec, static vs. dynamic)
– Software type (distributed, real time, secure)
– Type of fault detected

Recruit focus groups
– Which are the most “important”?
  – Highest cost/benefit ratio? (a toy ranking is sketched after this list)
  – Finds highest-priority vulnerabilities?
  – Most widely used?

Critique reference dataset

Identify gaps in functions and recommend research

Plan and initiate studies for metrics
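One way a focus group might operationalize the cost/benefit question, as a minimal sketch; the scores are invented placeholders, not workshop data.

```python
# Hypothetical cost and benefit scores (arbitrary units) for a few
# techniques; the numbers are invented placeholders, not workshop data.
techniques = {
    "automated code scanner": {"cost": 2.0, "benefit": 6.0},
    "penetration test":       {"cost": 8.0, "benefit": 9.0},
    "formal code review":     {"cost": 10.0, "benefit": 12.0},
}

# Rank by benefit per unit cost, highest first.
ranked = sorted(techniques.items(),
                key=lambda kv: kv[1]["benefit"] / kv[1]["cost"],
                reverse=True)

for name, s in ranked:
    print(f"{name}: benefit/cost = {s['benefit'] / s['cost']:.2f}")
```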


Outline

Overview of the NIST SAMATE project

Purposes of tool and technique evaluation

Software and effectiveness metrics

Report of workshop on Defining the State of the Art in Software Security Tools

Final comments


Purposes of Tool or Technique Evaluations

Precisely document what a tool does (and doesn’t) do … in order to …

Provide feedback to tool developers
– Simple changes to make
– Directions for future releases

Inform users
– Match the tool or technique to a particular situation
– Understand the significance of tool results
– Know how techniques work together


Developing a Specification

After a technique or tool function is selected by the working group …

NIST and the focus group develop a clear, testable specification

Specification posted for public comment

Comments incorporated

Develop a measurement methodology (a toy harness is sketched after this list):
– Test cases
– Procedures
– Reference implementations and data
– Scripts and auxiliary programs
– Interpretation criteria
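A minimal sketch of how test cases, reference data, and interpretation criteria could be tied together in a scoring harness. The manifest format, output convention, and tool command are assumptions for illustration, not the SAMATE materials.

```python
import json
import subprocess

def run_tool(tool_cmd, test_file):
    """Run the tool under test on one reference case. Assumes, purely for
    illustration, that the tool prints flawed line numbers to stdout."""
    out = subprocess.run(tool_cmd + [test_file], capture_output=True, text=True)
    return {tok for tok in out.stdout.split() if tok.isdigit()}

def score(tool_cmd, manifest_path):
    """Score a tool against a hypothetical JSON manifest of reference cases:
    [{"file": "cases/ex1.c", "flaw_lines": ["12", "40"]}, ...]."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    tp = fp = fn = 0
    for case in manifest:
        expected = set(case["flaw_lines"])
        reported = run_tool(tool_cmd, case["file"])
        tp += len(reported & expected)   # real flaws found
        fp += len(reported - expected)   # spurious reports
        fn += len(expected - reported)   # flaws missed
    return tp, fp, fn

if __name__ == "__main__":
    # Hypothetical tool command and manifest path.
    print(score(["./scanner"], "manifest.json"))
```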

SAMATE Project Timeline

[Timeline figure spanning months 1–24. Workshop 1 enumerates SwA classes; Workshop 2 identifies research gaps; Workshop 3 launches metrics studies. A function taxonomy and tool survey lead to a survey publication. For each SwA class, a focus group selects a function and drives a strawman spec through a draft to Spec0 and Spec1, with a draft test plan, test plan, and tool testing matrix alongside.]


Why Look at Checking First?

Vital for software developed outside, i.e., when the process is not visible

Applicable to legacy software

Feedback for process improvement

Process experiments are expensive

Many are working on process (SEI, PSP, etc.)


Outline

Overview of the NIST SAMATE project

Purposes of tool and technique evaluation

Software and effectiveness metrics

Report of workshop on Defining the State of the Art in Software Security Tools

Final comments


But is the tool or methodology effective?

Is this program secure (enough)?

How secure does tool X make a program?

How much more secure does technique X make a program after techniques Y and Z?

Do they really find or prevent bugs and vulnerabilities?

Dollar for dollar, does methodology P or S give more reliability?


Toward Software Metrics

                             (temperature)            (software)
Qualitative comparison       warmer, colder           buggy, secure
Formally defined quantity    temperature              quality? confidence?
Unit and scale               degree, Kelvin           ?
Measured value
Derived units                heat energy = s·m·Δt     software assurance = p·t?
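The software column remains open questions. As one hedged illustration, a formally defined quantity for tool effectiveness could be precision and recall against a reference dataset; this is an assumption for the example, not a SAMATE definition.

```python
def precision_recall(true_pos, false_pos, false_neg):
    """Precision: fraction of reports that are real flaws.
    Recall: fraction of real flaws the tool reports."""
    precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
    recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
    return precision, recall

# Example: a tool makes 50 reports on a reference dataset; 40 are real
# flaws (true positives), 10 are spurious, and 20 real flaws are missed.
p, r = precision_recall(true_pos=40, false_pos=10, false_neg=20)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.80, recall=0.67
```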


Benefits of SAMATE Project

Define metrics for evaluating SwA tools

Users can make more informed tool choices

Neutral test program

Tool creators make better tools


Outline

Overview of the NIST SAMATE project

Purposes of tool and technique evaluation

Software and effectiveness metrics

Report of workshop on Defining the State of the Art in Software Security Tools

Final comments


Workshop on Defining the State of the Art in Software Security Tools

Workshop Characteristics
– NIST, Gaithersburg, 10 & 11 August 2005
– http://samate.nist.gov/softSecToolsSOA
– 45 people from government, universities, vendors and service providers, and research companies attended
– Proceedings, including discussion notes and submitted material, should be available from the above URL by the time you read this.

Goals
– Understand the state of the art of software security assurance tools in detecting security flaws and vulnerabilities.
– Discuss metrics to evaluate the effectiveness of such tools.
– Collect flawed and “clean” software for a reference dataset.
– Publish a report on classes of software security vulnerabilities.


Outcomes of Workshop I

Understand the state of the art of software security assurance tools in detecting security flaws and vulnerabilities.
– A report is being written.

Discuss metrics to evaluate tool effectiveness.
– All agreed that software metrics and tool effectiveness metrics are a good idea.
– No consensus on how to approach the challenge.


Outcomes of Workshop II

Collect flawed and “clean” software to be a reference (a toy entry format is sketched after this list).
– Several collections emerged: MIT, Fortify, etc.
– Attendees agreed that a shared reference dataset would help.
– NIST reference dataset in development. Prototype available at (URL forthcoming)

Report on classes of software security vulnerabilities
– Discussed several existing flaw taxonomies: CLASP, PLOVER (CVE), etc.
– Attendees agreed a common taxonomy would help.
– Discussions continuing on the samate email list.
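To make the idea concrete, here is a sketch of what a single reference-dataset entry might contain; the schema and field names are invented for illustration and need not match the actual SAMATE dataset.

```python
# Hypothetical schema for one reference-dataset entry; the field names
# are invented for illustration, not the actual SAMATE dataset format.
entry = {
    "id": "example-0001",
    "language": "C",
    "flaw_class": "buffer overflow",   # a taxonomy label (CLASP/PLOVER style)
    "flaw_line": 1,
    "flawed_code": 'void f(char *s) { char buf[8]; strcpy(buf, s); }',
    "clean_code":  'void f(char *s) { char buf[8]; '
                   'strncpy(buf, s, sizeof buf - 1); buf[7] = 0; }',
}

# An evaluation would run a tool on both variants: it should flag the
# flawed version at flaw_line and stay silent on the clean one.
print(entry["flaw_class"], "expected at line", entry["flaw_line"])
```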


Outline

Overview of the NIST SAMATE project

Purposes of tool and technique evaluation

Software and effectiveness metrics

Report of workshop on Defining the State of the Art in Software Security Tools

Final comments


Society has 3 options:

Learn how to make software that works

Limit size or authority of software

Accept failing software


Contact to Participate

Paul E. Black
Project Leader
Software Diagnostics & Conformance Testing Division, Software Quality Group, Information Technology Laboratory, NIST

paul.black@nist.gov
