
Application security metrics from the organization on down to the vulnerabilities

Chris Wysopal

CTO

Veracode

cwysopal@veracode.com

AppSec DC

November 13, 2009

11:30am-12:30pm

Copyright © The OWASP Foundation

Permission is granted to copy, distribute and/or modify this document under the terms of the OWASP License.

The OWASP Foundation http://www.owasp.org

Agenda

1. Why use metrics?

2. Challenges & Goals for Application Security Metrics

3. Enumerations

4. Organizational Metrics

5. Testing Metrics

6. Application Metrics

7. WASC Web Application Security Statistics Project 2008

8. Future Plans


To measure is to know.

James Clerk Maxwell, 1831-1879

Measurement motivates.

John Kenneth Galbraith, 1908-2006


Metrics do matter

1. Metrics quantify the otherwise unquantifiable

2. Metrics can show trends and trends matter more than measurements do

3. Metrics can show if we are doing a good or bad job

4. Metrics can show if you have no idea where you are

5. Metrics establish where “You are here” really is

6. Metrics build bridges to managers

7. Metrics allow cross sectional comparisons

8. Metrics set targets

9. Metrics benchmark yourself against the opposition

10. Metrics create curiosity

Source: Andy Jaquith, Yankee Group, Metricon 2.0


Metrics don’t matter

• It is too easy to count things for no purpose other than to count them

• You cannot measure security, so stop trying

• The following are all that matter, and you can't map security metrics to them:

  » Maintenance of availability

  » Preservation of wealth

  » Limitation on corporate liability

  » Compliance

  » Shepherding the corporate brand

• The cost of measurement is not worth the benefit

Source: Mike Rothman, Security Incite, Metricon 2.0


Bad metrics are worse than no metrics


Security metrics can drive executive decision making

• How secure am I?

• Am I better off than this time last year?

• Am I spending the right amount of $$?

• How do I compare to my peers?

• What risk transfer options do I have?

Source: Measuring Security Tutorial, Dan Geer


Goals of Application Security Metrics

• Provide quantifiable information to support enterprise risk management and risk-based decision making

• Articulate progress towards goals and objectives

• Provide a repeatable, quantifiable way to assess, compare, and track improvements in assurance

• Focus activities on risk mitigation in order of priority and exploitability

• Facilitate adoption and improvement of secure software design and development processes

• Provide an objective means of comparing and benchmarking projects, divisions, organizations, and vendor products

Source: Practical Measurement Framework for Software Assurance and Information Security, DHS SwA Measurement Working Group


Use Enumerations

Enumerations help identify specific software-related items that can be counted, aggregated, and evaluated over time (a small counting sketch follows the list):

• Common Vulnerabilities and Exposures (CVE)

• Common Weakness Enumeration (CWE)

• Common Attack Pattern Enumeration and Classification (CAPEC)
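As an illustration, here is a minimal sketch of using CWE identifiers as the counting key so findings can be aggregated and compared across scan cycles. The finding lists are hypothetical, not data from this talk:

    from collections import Counter

    # Hypothetical per-scan finding lists, keyed by CWE identifier.
    findings_2008 = ["CWE-89", "CWE-79", "CWE-89", "CWE-22", "CWE-79", "CWE-89"]
    findings_2009 = ["CWE-79", "CWE-89", "CWE-352"]

    by_cwe_2008 = Counter(findings_2008)
    by_cwe_2009 = Counter(findings_2009)

    for cwe in sorted(set(by_cwe_2008) | set(by_cwe_2009)):
        print(f"{cwe}: {by_cwe_2008[cwe]} in 2008, {by_cwe_2009[cwe]} in 2009")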


Organizational Metrics

• Percentage of application inventory developed with an SDLC (which version of the SDLC?)

• Business criticality of each application in the inventory

• Percentage of application inventory tested for security (what level of testing?)

• Percentage of application inventory remediated and meeting assurance requirements

• Roll-up of testing results (a sketch of such a roll-up follows below)
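A minimal sketch of how such an inventory roll-up might be computed. The record fields (used_sdlc, security_tested, meets_assurance) and the sample applications are assumptions for illustration only:

    from dataclasses import dataclass

    @dataclass
    class AppRecord:
        name: str
        criticality: str       # e.g. "high", "medium", "low"
        used_sdlc: bool        # developed under a secure SDLC?
        security_tested: bool  # any security testing performed?
        meets_assurance: bool  # remediated to its assurance requirement?

    def pct(flag, inventory):
        # Percentage of the inventory for which flag(app) is true.
        return 100.0 * sum(1 for app in inventory if flag(app)) / len(inventory)

    inventory = [
        AppRecord("payroll", "high", True, True, False),
        AppRecord("intranet wiki", "low", False, False, False),
        AppRecord("customer portal", "high", True, True, True),
    ]

    print(f"Developed with SDLC: {pct(lambda a: a.used_sdlc, inventory):.0f}%")
    print(f"Security tested:     {pct(lambda a: a.security_tested, inventory):.0f}%")
    print(f"Meeting assurance:   {pct(lambda a: a.meets_assurance, inventory):.0f}%")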


Organizational Metrics

• Cost to fix defects at different points in the software lifecycle (see the worked example below)

• Cost of data breaches related to software vulnerabilities
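For the cost-to-fix metric, the shape of the calculation is simple: count defects by the phase in which they were found and weight by a per-phase fix cost. The defect counts and relative cost multipliers below are illustrative assumptions, not figures from this talk:

    # Relative cost multipliers are assumptions for illustration.
    defects_found = {"design": 4, "coding": 25, "testing": 12, "production": 3}
    relative_cost = {"design": 1, "coding": 5, "testing": 15, "production": 50}

    total = sum(defects_found[p] * relative_cost[p] for p in defects_found)
    for phase in defects_found:
        share = 100.0 * defects_found[phase] * relative_cost[phase] / total
        print(f"{phase:<11} {share:5.1f}% of total remediation cost")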


Testing Metrics

• Number of threats identified in threat model

• Size of attack surface identified

• Percentage code coverage (static and dynamic)

• Coverage of defect categories (CWE), as in the sketch below

• Coverage of attack pattern categories (CAPEC)
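One way to express CWE category coverage is the fraction of targeted weakness classes actually exercised by the test campaign. The target set and results below are hypothetical:

    # Weakness classes the test plan is supposed to exercise (assumed target set).
    TARGET_CWES = {"CWE-79", "CWE-89", "CWE-120", "CWE-22", "CWE-352", "CWE-798"}

    # CWE IDs attached to findings (or executed test cases) from this cycle.
    exercised = {"CWE-79", "CWE-89", "CWE-22"}

    coverage = 100.0 * len(TARGET_CWES & exercised) / len(TARGET_CWES)
    missed = ", ".join(sorted(TARGET_CWES - exercised))
    print(f"CWE category coverage: {coverage:.0f}% (not exercised: {missed})")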


SANS Top 25 Mapped to Application Security Methods

Source: 2009 Microsoft


Weakness Class Prevalence based on 2008 CVE data

Category                  Count      %
SQL Injection               941  19.4%
XSS                         681  14.0%
Buffer Overflow             455   9.4%
Directory Traversal         298   6.1%
PHP Include                 135   2.8%
Symbolic Link               133   2.7%
Authorization Bypass        113   2.3%
DoS Malformed Input          97   2.0%
Information Leak             84   1.7%
Integer Overflow             78   1.6%
CSRF                         57   1.2%
Bad Permissions              40   0.8%
Unnecessary Privileges       36   0.7%
Hard coded Password          36   0.7%
Upload of code               34   0.7%
Weak Crypto                  30   0.6%
Format String                26   0.5%
Insufficient Randomness      24   0.5%
Metacharacter Injection      23   0.5%
Search Path                  20   0.4%
Memory Leak                  18   0.4%
Sensitive data root          16   0.3%
Race Condition               13   0.3%
DoS Flood                    10   0.2%
CRLF Injection                8   0.2%
Eval Injection                8   0.2%
Numeric Error                 7   0.1%

4855 total flaws tracked by CVE in 2008
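The percentage column is simply each category's count divided by the 4855 total, for example:

    total = 4855
    for name, count in [("SQL Injection", 941), ("XSS", 681)]:
        print(f"{name}: {count}/{total} = {100.0 * count / total:.1f}%")  # 19.4%, 14.0%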


Basic Metrics: Defect counts

• Design and implementation defects

• CWE identifier

• CVSS score

• Severity

• Likelihood of exploit

A minimal per-defect record combining these labels is sketched below.
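Here is a minimal sketch of such a per-defect record. The severity buckets follow the CVSS v2 qualitative ranges used by NVD (Low 0.0-3.9, Medium 4.0-6.9, High 7.0-10.0); the field names are illustrative:

    from dataclasses import dataclass

    def severity(cvss_base: float) -> str:
        # NVD's CVSS v2 qualitative ranges.
        if cvss_base >= 7.0:
            return "High"
        if cvss_base >= 4.0:
            return "Medium"
        return "Low"

    @dataclass
    class Defect:
        cwe_id: str              # e.g. "CWE-89" (SQL Injection)
        cvss_base: float         # CVSS v2 base score, 0.0-10.0
        exploit_likelihood: str  # e.g. "high", "medium", "low"

        @property
        def rating(self) -> str:
            return severity(self.cvss_base)

    d = Defect(cwe_id="CWE-89", cvss_base=7.5, exploit_likelihood="high")
    print(d.cwe_id, d.rating, d.exploit_likelihood)  # CWE-89 High high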


Automated Code Analysis Techniques

Static Analysis (White Box Testing): Similar to a line-by-line code review. The benefit is complete coverage of the entire source or binary; the downside is that a perfect analysis is computationally impossible.

» Static Source – analyze the source code

» Static Binary – analyze the binary executable

» Source vs. Binary – You don't always have all the source code, and you may not want to part with your source code to get a 3rd-party analysis

Dynamic Analysis (Black Box Testing): Run-time analysis, more like traditional testing. The benefit is perfect modeling of a particular input, so you can demonstrate exploitability; the downside is that you cannot generate all inputs in reasonable time.

» Automated dynamic testing (also known as penetration testing) using tools

» Manual Penetration Testing (with or without the use of tools)

• Create lists of defects that can be labeled with CWE, CVSS, and exploitability (see the merging sketch below)
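A hypothetical sketch of that last step: merging static and dynamic findings into a single list keyed by weakness and location, so each defect is counted once and carries its CWE, CVSS, and exploitability evidence. Tool output formats vary; the dictionaries below are assumed shapes, not any particular product's schema:

    # Findings from a static tool and a dynamic tool (assumed shapes).
    static_findings = [
        {"cwe": "CWE-89", "where": "orders.py:88", "cvss": 7.5},
        {"cwe": "CWE-79", "where": "search.py:41", "cvss": 4.3},
    ]
    dynamic_findings = [
        {"cwe": "CWE-89", "where": "orders.py:88", "cvss": 7.5, "exploited": True},
    ]

    merged = {}
    for finding in static_findings + dynamic_findings:
        key = (finding["cwe"], finding["where"])
        # Dynamic evidence enriches the matching static hit instead of duplicating it.
        merged.setdefault(key, {}).update(finding)

    for (cwe, where), f in merged.items():
        status = "confirmed exploitable" if f.get("exploited") else "not confirmed"
        print(f"{cwe} at {where}: CVSS {f['cvss']} ({status})")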


Manual Analysis

Manual Penetration Testing – can discover some issues that cannot be determined automatically because a human can understand issues related to business logic or design

Manual Code Review – typically focused only on specific high risk areas of code

Manual Design Review – can determine some vulnerabilities early on in the design process before the program is even built.

• Threat Modeling


WASC Web Application Security Statistics Project 2008

• Purpose

  » A collaborative, industry-wide effort to pool sanitized website vulnerability data and gain a better understanding of the web application vulnerability landscape.

  » Ascertain which classes of attacks are the most prevalent, regardless of the methodology used to identify them; in effect, a MITRE CVE-style project for custom web applications.

• Goals

  » Identify the prevalence and probability of different vulnerability classes.

  » Compare testing methodologies against the types of vulnerabilities they are likely to identify.


Project Team

• Project Leader

  » Sergey Gordeychik

• Project Contributors

  » Sergey Gordeychik, Dmitry Evteev (Positive Technologies)

  » Chris Wysopal, Chris Eng (Veracode)

  » Jeremiah Grossman (WhiteHat Security)

  » Mandeep Khera (Cenzic)

  » Shreeraj Shah (Blueinfy)

  » Matt Lantinga (HP Application Security Center)

  » Lawson Lee (dns, used WebInspect)

  » Campbell Murray (Encription Limited)


Summary

• 12,186 web applications with 97,554 detected vulnerabilities

• More than 13%* of all reviewed sites can be compromised completely automatically

• About 49% of web applications contain high-risk vulnerabilities detectable by scanning

• Manual and automated assessment using white-box methods detects these high-risk vulnerabilities with probability of up to 80-96%

• 99% of web applications are not compliant with the PCI DSS standard

* Web applications with Brute Force Attack, Buffer Overflow, OS Commanding, Path Traversal, Remote File Inclusion, SSI Injection, Session Fixation, SQL Injection, Insufficient Authentication, or Insufficient Authorization vulnerabilities detected by automated scanning.


Compared to the 2007 WASS Project

• Number of sites with SQL Injection fell by 13%

• Number of sites with Cross-Site Scripting fell by 20%

• Number of sites with different types of Information Leakage rose by 24%

• Probability of compromising a host automatically rose from 7% to 13%


Probability to detect a vulnerability


% of total vulnerabilities


White box vs. black box


Full Report

• http://projects.webappsec.org/Web-Application-Security-Statistics


Future Plans

• Veracode processes over 100 applications and 500 million lines of code per month

• Collecting data:

  » Vulnerabilities found/fixed

  » Application metadata: industry, time in development cycle, application type

  » Vulnerability trends (see the sketch below)

  » Industry/platform/language differences
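A sketch of one trend metric such a dataset supports: a per-cycle fix rate for a single application. The scan cycles and counts are hypothetical:

    # Hypothetical per-cycle data for one application.
    scans = [
        {"cycle": "2009-Q1", "found": 120, "fixed": 30},
        {"cycle": "2009-Q2", "found": 95,  "fixed": 60},
        {"cycle": "2009-Q3", "found": 70,  "fixed": 55},
    ]

    for s in scans:
        fix_rate = 100.0 * s["fixed"] / s["found"]
        print(f"{s['cycle']}: {s['found']} found, {s['fixed']} fixed ({fix_rate:.0f}% fix rate)")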


Further reading on software security metrics & testing

• NIST, Performance Measurement Guide for Information Security, http://csrc.nist.gov/publications/nistpubs/800-55-Rev1/SP800-55-rev1.pdf

• Security Metrics: Replacing Fear, Uncertainty, and Doubt by Andrew Jaquith

• The Art of Software Security Testing by Chris Wysopal, Lucas Nelson, Dino Dai Zovi, and Elfriede Dustin


Q&A

cwysopal@veracode.com

AppSec DC

