Information Security Decision Making

Tools to Make Objective Information Security Decisions

The Trust Economics Methodology

Aad van Moorsel

Newcastle University

Centre for Cybercrime and Computer Security aad.vanmoorsel@ncl.ac.uk

Part I security metrics and measurements


motivation


security and trust

data loss: http://www.youtube.com/watch?v=JCyAwYv0Ly0
identity theft: http://www.youtube.com/watch?v=CS9ptA3Ya9E
worms: http://www.youtube.com/watch?v=YqMt7aNBTq8
http://www.informationweek.com/news/software/showArticle.jhtml?articleID=221400323


© Aad van Moorsel, Newcastle University, 2010

cyber security impact: money lost in UK

the most recent Garlik UK Cybercrime report, with numbers for the year 2008, for the UK:

– over 3.6 million criminal acts online

– identity theft tripled, to 87 thousand cases

– online banking losses doubled, to £52 million

– 44 thousand phishing web sites targeting UK banks

– of £41 billion in online shopping, £600 million credit card fraud; online fraud rose from 4% to 8% in the last two years

– online harassment: 2.4 million times

– FBI: median amount of money lost per scam victim: $1,000

cybercrime impact: convictions in the US

security and trust

security: protection of a system against malicious attacks

information security: preservation of confidentiality, integrity and availability of information

CIA properties

• confidentiality

• integrity

• availability


why metrics and why quantification?

two uses:

• gives the ability to monitor the quality of a system as it is being used (using measurement)

• gives the ability to predict the future quality of a design or a system (using modelling)

a good metric is critical to make measurement and modelling useful


security and trust

trust: a personal, subjective perception of the quality of a system or personal, subjective decision to rely on a system

evaluation trust: the subjective probability by which an individual A expects that another individual B performs a given action on which A’s welfare depends (Gambetta 1988)

decision trust: the willingness to depend on something or somebody in a given situation with a feeling of relative security, even though negative consequences are possible (McKnight and Chervany 1996)


security and trust

what is more important, security or trust?

“Security Theatre and Balancing Risks” (Bruce Schneier)
http://www.cato.org/dailypodcast/podcastarchive.php?podcast_id=8


quality of service metrics

Aad van Moorsel

Newcastle University

Centre for Cybercrime and Computer Security aad.vanmoorsel@newcastle.ac.uk

measure security

Lord Kelvin:

“what you cannot measure, you cannot manage”

how true is this:

• in science?

• in engineering?

• in business?

and how true is:

“we only manage what we measure”?


classifying metrics (part one)

quantitative vs. qualitative: quantitative metrics can be expressed through some number, while qualitative metrics are concerned with TRUE or FALSE

quality-of-service (QoS) metrics: express a grade of service as a quantitative metric

non-functional properties are system properties beyond the strictly necessary functional properties

IT management is mostly about quantitative/QoS/non-functional metrics


classifying metrics (part two)

performance metrics: timing and usage metrics

• CPU load

• throughput

• response time

dependability or reliability metrics: metrics related to accidental failure

• MTTF

• availability

• reliability

security metrics: metrics related to malicious failures (attacks)

• ?

business metrics: metrics related to cost or benefits

• number of buy transactions on a web site

• cost of ownership

• return on investment


common metrics (performance)

throughput = number of tasks a resource can complete per time unit:

• jobs per second

• requests per second

• millions of instructions per second (MIPS)

• floating point operations per second (FLOPS)

• packets per second

• kilobits per second (Kbps)

• transactions per second

• ...
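The throughput definition above can be sketched in a few lines; this is a minimal illustration, and all the workload numbers below are invented for the example.

```python
# Throughput = number of tasks a resource completes per time unit.

def throughput(tasks_completed, seconds):
    """Tasks completed per second."""
    return tasks_completed / seconds

# 1,200 requests handled in a 60-second window:
req_per_sec = throughput(1200, 60)          # 20.0 requests/second

# 3,000,000 bits transferred in 2 seconds, expressed in Kbps:
kbps = throughput(3_000_000, 2) / 1_000     # 1500.0 Kbps
```

The same function covers all the unit variants listed above; only the kind of "task" counted changes.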


common metrics (performance)

response time, waiting time, propagation delay:

[timeline: user request → arrival at server buffer → start processing → server response → reply received by user]

response time (server perspective) = waiting time + processing time
response time (client perspective) = propagation delay + waiting time + processing time + propagation delay


common metrics (performance)

capacity = maximum sustainable number of tasks

load = offered number of tasks

overload = load is higher than capacity

utilization = the fraction of resource capacity in use (CPU, bandwidth); for a CPU, this corresponds to the fraction of time the resource is busy (sometimes imprecisely called the CPU load)
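The relations between load, capacity, overload and utilization on this slide can be written down directly; a minimal sketch, with invented request rates.

```python
# Relations from the slide: utilization is the fraction of capacity in
# use, and overload occurs when the offered load exceeds capacity.

def utilization(load, capacity):
    """Fraction of resource capacity in use (capped at 1.0 = fully busy)."""
    return min(load / capacity, 1.0)

def overloaded(load, capacity):
    return load > capacity

# A CPU that can sustain 400 requests/s, offered 100 requests/s:
u = utilization(100, 400)   # 0.25, i.e. the resource is busy 25% of the time
```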


some related areas: performance

measuring performance:

• we know CPU speed (and Intel measured it)

• we can easily measure sustained load (throughput)

• we can model for performance reasonably well (queuing, simulations)

• we’re pretty good at adapting systems for performance, through load balancing etc.

• there is a TOP 500 for supercomputers

• we buy PCs based on performance, and their performance is advertised

• companies buy equipment based on performance


common metrics (dependability/reliability)

systems with failures:

[timeline: the system alternates between up (operating) and down (failed) periods, with failures and repairs marking the transitions]


common metrics (dependability/reliability)

Mean Time To Failure (MTTF) = average length of an up period

Mean Time To Repair (MTTR) = average length of a down period

availability = fraction of time the system is up = fraction up time = MTTF / (MTTF + MTTR)

unavailability = 1 – availability = fraction down time

reliability at time t = probability the system does not go down before t


relation between dependability metrics

availability and associated yearly down time:

availability   yearly down time
0.9            37 days
0.99           4 days
0.999          9 hours
0.9999         50 minutes
0.99999        5 minutes


relation between dependability metrics if you have a system with 1 day MTTF and 1 hour MTTR, would you work on the repair time or the failure time to improve the availability?

availability   required MTTF if MTTR = 1 hour   required MTTR if MTTF = 1 day
0.96           1 day                            1 hour
0.99           4 days                           14 minutes
0.999          6 weeks                          1 ½ minutes
0.9999         14 months                        9 seconds
0.99999        11 years                         1 second
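Rearranging A = MTTF / (MTTF + MTTR) gives the two columns of this table directly; a small sketch that reproduces two of the entries.

```python
def required_mttf(target_avail, mttr):
    """MTTF needed for a target availability, from A = MTTF/(MTTF+MTTR)."""
    return mttr * target_avail / (1 - target_avail)

def required_mttr(target_avail, mttf):
    """MTTR allowed for a target availability, given a fixed MTTF."""
    return mttf * (1 - target_avail) / target_avail

# reproducing two table entries (all times in hours):
mttf = required_mttf(0.9999, mttr=1.0)     # ~9999 hours, about 14 months
mttr = required_mttr(0.99, mttf=24.0)      # ~0.24 hours, about 14 minutes
```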


five nines


some related areas: availability

measuring availability:

• we do not know much about CPU reliability (although Intel measures it)

• it is easy, but time consuming to measure down time

• we can model for availability reasonably well (Markov chains), although we do not know parameter values for fault and failure occurrences

• we’re rather basic at adapting systems for availability, but there are various fault tolerance mechanisms

• there is no TOP 500 for reliable computers

• we do not buy PCs based on availability, and their availability is rarely advertised

• companies buy equipment based on availability only for top-end applications (e.g. goods and finances administration of supermarket chains)


how about security

measuring security:

• we do not know much about the level of CPU security (and Intel does not know how to measure it)

• it is possible to measure security breaches, but how much do they tell you?

• we do not know how to model for levels of security, for instance we do not know what attacks look like

• we’re only just starting to research adapting systems for security—there sure are many security mechanisms available

• there is no TOP 500 for secure computers

• we do not buy PCs based on privacy or security, and their privacy/security is rarely advertised

• companies are very concerned about security, but do not know how to measure it and show improvements


what’s special about security

security is a hybrid between a functional and a non-functional (performance/availability) property

• it is tempting to think security is binary: it is secured or not → a common mistake

• security deals with loss and attacks

– you can measure after the fact, but would like to predict

– maybe loss can still be treated like accidental failures (as in availability)

– attacks certainly require knowledge of attackers, how they act, when they act, what they will invent

• security level (even if we somehow divined it) is meaningless:

– what’s the possible consequence

– how do people react to it (risk averse?)


how do people now measure security?

reporting after the fact:

• industry and government are obligated to report breaches (NIST database and others)

• measure how many non-spam emails went through, etc.

some predictive metrics as ‘substitute’ for security:

• how many CPU cycles needed to break encryption technique?

• risk analysis: likelihood X impact, summed for all breaches

→ but we know neither likelihood nor impact
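The risk-analysis metric above (likelihood × impact, summed over all breaches) is easy to sketch; every breach, likelihood and impact figure below is invented, which is exactly the slide's point: in practice we know neither.

```python
# Risk = sum over breaches of likelihood x impact (an annualized loss
# expectancy). All numbers here are invented for illustration.

breaches = [
    # (name, likelihood per year, impact in pounds)
    ("laptop lost",       2.0,    5_000),
    ("phishing success",  0.5,   50_000),
    ("database breach",   0.25, 500_000),
]

risk = sum(likelihood * impact for _, likelihood, impact in breaches)
print(risk)  # 160000.0
```

The arithmetic is trivial; the hard part, as the slide says, is that the likelihood and impact inputs are guesses.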


why is measurable security important?

without good measures

• security is sold as all or nothing

• security purchase decisions are based on scare tactics

• system configuration (including cloud, SaaS) cannot be judged for resulting security


security metrics

Aad van Moorsel

Newcastle University

Centre for Cybercrime and Computer Security aad.vanmoorsel@newcastle.ac.uk

CIA metrics

how about CIA?

confidentiality: keep the organisation’s data confidential (privacy for the organisation)

integrity: data is unaltered

availability: data is available for use

• you can score them, and sum them up (see CVSS later)

• you can measure for integrity and availability if there is centralized control

• you cannot easily predict them


good metrics

in practice, good metrics should be:

• consistently measured, without subjective criteria

• cheap to gather, preferably in an automated way

• a cardinal number or percentage, not qualitative labels

• using at least one unit of measure (defects, hours, ...)

• contextually specific—relevant enough to make decisions

from Jaquith’s book ‘Security Metrics’


good metrics

in practice, metrics cover four different aspects:

• perimeter defenses

– # spam detected, # virus detected

• coverage and control

– # laptops with antivirus software, # patches per month

• availability and reliability

– host uptime, help desk response time

• application risks

– vulnerabilities per application, assessment frequency per application

from Jaquith’s book ‘Security Metrics’


good metrics

how good do you think the metrics from Jaquith’s book ‘Security Metrics’ are?

it’s the best we can do now, but as the next slide shows, there are a lot of open issues


good metrics

ideally, good metrics should:

• not measure the process used to design, implement or manage the system, but the system itself

• not depend on things you will never know (such as in risk management)

• be predictive about future security, not just report the past

but these are very challenging requirements that we do not yet know how to fulfill


data collection and security metrics


honeypots

a honeypot:

• pretends to be a resource with value to attackers, but is actually isolated and monitored, in order to:

• misguide attackers and analyze their behaviour.


honeypots

two types:

high-interaction:

• real services, real OS, real applications → higher risk of being used to break in or attack others

• honeynets (two or more honeypots in a network)

low-interaction:

• emulated services → low risk

• honeypots like nepenthes
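A low-interaction honeypot can be surprisingly small; this is an illustrative sketch, not how nepenthes works: it emulates nothing but an SSH-style banner (borrowed from the transcript two slides on) and logs every connection attempt. The banner, the in-memory log and the self-probing client are all invented for the demo.

```python
# Minimal low-interaction honeypot sketch: present a fake banner, log
# who connected, offer no real session behind it.
import socket
import threading

BANNER = b"SSH-1.99-OpenSSH_3.0\r\n"  # fake service banner
log = []  # a real honeypot would write timestamped records to disk

def serve_once(server):
    conn, addr = server.accept()
    log.append(addr)       # record who probed us
    conn.sendall(BANNER)   # present the fake service
    conn.close()           # low interaction: nothing more happens

server = socket.socket()
server.bind(("127.0.0.1", 0))  # ephemeral port, for this demo only
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# simulate an attacker probing the port
client = socket.create_connection(("127.0.0.1", port))
banner = b""
while True:
    chunk = client.recv(64)
    if not chunk:
        break
    banner += chunk
client.close()
```

Everything of analytical value lives in `log`; the attacker only ever sees the banner.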


an example of a hacker in a honeypot

SSH-1.99-OpenSSH_3.0
SSH-2.0-GOBBLES
GGGGO*GOBBLE*
uname -a;id
OpenBSD pufferfish 3.0 GENERIC#94 i386
uid=0(root) gid=0(wheel) groups=0(wheel)
ps -aux|more
USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND
root 16042 0.0 0.1 372 256 ?? R 2:48PM 0:00.00 more (sh)
root 25892 0.0 0.2 104 452 ?? Ss Tue02PM 0:00.14 syslogd
root 13304 0.0 0.1 64 364 ?? Is Tue02PM 0:00.00 portmap
...
root 1 0.0 0.1 332 200 ?? Is Tue02PM 0:00.02 /sbin/init
id
uid=0(root) gid=0(wheel) groups=0(wheel)
who
cat inetd.conf

attempt to edit the configuration file for network services


data from a honeypot

data from 2003: number of different ‘attack’ sources

Pouget, Dacier, Debar: “Attack processes found on the Internet”


data from honeypots

a lot of other data can be obtained:

• how do worms propagate?

• how do attackers use zombies?

• what kinds of attackers exist, and which ones start denial-of-service attacks?

• which countries do the attacks come from?

• ...

Pouget, Dacier, Debar: “Attack processes found on the Internet”


honeypots a honeynet is a network of honeypots and other information system resources

T. Holz, “Honeypots and Malware Analysis—Know Your Enemy”


honeynets three tasks in a honeynet:

1. data capture

2. data analysis

3. data control: especially highinteraction honeypots are vulnerable of being misused by attackers  control of data flow, neither to come inside the organisation, nor to other innocent parties

T. Holz, “Honeypots and Malware Analysis—Know Your Enemy”


US CERT and CVSS


US-CERT security vulnerabilities

United States Computer Emergency Readiness Team

people submit vulnerability notes, e.g.:

Vulnerability Note VU#120541

SSL and TLS protocols renegotiation vulnerability

A vulnerability exists in SSL and TLS protocols that may allow attackers to execute an arbitrary HTTP transaction.

Credit: Marsh Ray of PhoneFactor


CVSS scoring in US-CERT

US-CERT uses a scoring system to determine how serious a vulnerability is: the Common Vulnerability Scoring System (CVSS)

P. Mell et al, “CVSS—A Complete Guide to the CVSS Version 2.0”

© Aad van Moorsel, Newcastle University, 2010

45

CVSS

BaseScore = 0.6 × Impact + 0.4 × Exploitability

Impact = 10.41 × (1 − (1 − ConfImpact) × (1 − IntegImpact) × (1 − AvailImpact))

ConfImpact = case ConfidentialityImpact of

• none: 0.0

• partial: 0.275

• complete: 0.660

...

P. Mell et al, “CVSS—A Complete Guide to the CVSS Version 2.0”
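The impact equation above can be computed directly from the slide's case values; a small sketch. Note that the base-score line uses the slide's simplified weighting (the full CVSS v2 equation also subtracts a constant and zeroes the score when impact is 0), and the exploitability value is left as an input here.

```python
# CVSS v2 impact sub-score, using the case values from the slide.
IMPACT_VALUE = {"none": 0.0, "partial": 0.275, "complete": 0.660}

def impact(conf, integ, avail):
    """Impact = 10.41 x (1 - (1-C)(1-I)(1-A))."""
    c, i, a = (IMPACT_VALUE[x] for x in (conf, integ, avail))
    return 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))

def base_score(impact_score, exploitability):
    """Simplified weighting as on the slide."""
    return 0.6 * impact_score + 0.4 * exploitability

# a vulnerability that completely compromises all three CIA properties:
full = impact("complete", "complete", "complete")
print(round(full, 1))  # 10.0
```

The 10.41 constant is calibrated exactly so that a complete compromise of all three properties scores (almost exactly) 10.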


DataLossDB from the Open Security Foundation

Aad van Moorsel

Newcastle University

Centre for Cybercrime and Computer Security aad.vanmoorsel@newcastle.ac.uk

Open Security Foundation

OSF wants to provide knowledge and resources so that organisations may properly detect, protect, and mitigate information security risks.

OSF maintains two databases:

1. OSVDB: the Open Source Vulnerability Database, with all kinds of computer security breaches

2. DataLossDB: data loss incidents

– improve awareness: for consumers, CISOs, governments, legislators, citizens

– gain a better understanding of the effects of, and effectiveness of, "compliance"


DataLossDB from Open Security Foundation

• Improve awareness of data security and identity theft threats to consumers.

• Provide accurate statistics to CSOs and CTOs to assist them in decision making.

• Provide governments with reliable statistics to assist with their consumer protection decisions and initiatives.

• Assist legislators and citizens in measuring the effectiveness of breach notification laws.

• Gain a better understanding of the effects of, and effectiveness of "compliance".


DataLossDB, an incident each day


DataLossDB: types of incidents


DataLossDB: outsiders vs. insiders note: interviews with CISOs suggest that the real number is 75% insider incidents


Part II trust economics methodology

motivation: the need for metrics of information security

Forrester report 2010

in ‘The Value of Corporate Secrets: How Compliance and Collaboration Affect Enterprise Perceptions of Risk’, Forrester finds:

1. secrets comprise two-thirds of information value

2. compliance, not security, drives security budgets

3. focus on preventing accidents, but theft is 10 times costlier

4. more value correlates with more incidents

5. CISOs do not know how effective their security controls are


the value of top-five data assets in the knowledge industry

about 70% of this is secrets, 30% custodial data (credit card, customer data, etc.)


compliance drives budgets, but doesn’t protect secrets


most incidents are employee accidents

75% of incidents are insider (accident or theft)


but thefts are much more costly than accidents


do CISOs know?

CISO at a high-value firm scores its security at 2.5 out of 3

CISO at a low-value firm scores its security at 2.6 out of 3

high-value firms have 4 times as many accidents as low-value firms, with 20 times more valuable data

so, the CISOs seem to think security is okay/the same, despite the differences in actual accidents at their firms...

Forrester concludes: to understand more objectively how well their security programs perform, enterprises will need better ways of generating key performance indicators and metrics


example of compliance:

PCI DSS for credit card companies

PCI DSS

PCI DSS: Payment Card Industry Data Security Standard

Mastercard, VISA, American Express, Discover, ..., all come together to define a data security compliance standard, mostly concerned with protecting customer data

– 12 requirements

– testing procedures for the 12 requirements

– assessors will go on-site to see if a company passes the testing procedures


PCI DSS example


PCI DSS requirements

1. Install and maintain a firewall

2. Do not use vendor-supplied defaults for passwords etc.

3. Protect stored cardholder data

4. Encrypt cardholder data across open, public networks

5. Use and regularly update anti-virus software

6. Develop and maintain secure systems and applications

7. Restrict access to cardholder data by business need-to-know

8. Assign a unique ID to each person with computer access

9. Restrict physical access to cardholder data

10. Track and monitor all access to network and cardholder data

11. Regularly test security systems and processes

12. Maintain a policy that addresses information security


PCI DSS

observations:

– it will take a company a lot of effort to show compliance

– you do not know how secure it actually makes your company

– you hope it protects you against loss of custodial data, which is indeed very embarrassing and brings bad press

– but these are not the most costly breaches (losing secrets is costlier)

so, how good is such a standard for an industry?

would the industry do worse without the standard?


a case for trust economics

would it be worse without the compliance standard?

answering this question is very difficult:

– from a business perspective, you would (potentially...) be able to optimize your security investments better

– from the CISO perspective, does he value things exactly the same as the company, if something ‘minor’ is embarrassing enough to get fired for?

– from a legal perspective, how does one show negligence? isn’t it nice to have something written down, even if it makes people waste time?

– the psychology of the customer is to listen to the extreme cases, even if they are very rare; how to take that into consideration?

nevertheless, we are going to try to make a few steps towards answering these questions using Trust Economics


introduction to the trust economics methodology

trust economics methodology for security decisions

trade off: legal issues, human tendencies, business concerns, ...

stakeholders discuss a model of the information system


trust economics research from the trust economics methodology, the following research follows:

1. identify human, business and technical concerns

2. develop and apply mathematical modelling techniques

3. glue concerns, models and presentation together using a trust economics information security ontology

4. use the models to improve the stakeholder discourse and decisions


1. identify human concerns

Find out about how users behave, what the business issues are:

CISO1: Transport is a big deal.

Interviewer1: We’re trying to recognise this in our user classes.

CISO1: We have engineers on the road, have lots of access, and are more gifted in IT.

Interviewer1: Do you think it would be useful to configure different user classes?

CISO1: I think it’s covered.

Interviewer1: And different values, different possible consequences if a loss occurs. I’m assuming you would want to be able to configure.

CISO1: Yes. Eg. customer list might or might not be very valuable.

Interviewer1: And be able to configure links with different user classes and the assets.

CISO1: Yes, if you could, absolutely.

Interviewer1: We’re going to stick with defaults at first and allow configuration if needed later. So, the costs of the password policy: running costs, helpdesk staff, trade-off of helpdesk vs. productivity

CISO1: That’s right.


1. identify human concerns

Find out about how users behave, what the business issues are:

Discussion of "Productivity Losses":

CISO2: But it’s proportional to amount they earn. This is productivity. eg. $1m salary but bring $20m into the company. There are expense people and productivity people.

Interviewer1: We have execs, “road warrior”, office drone. Drones are just a cost.

Interviewer2: And the 3 groups have different threat scenarios.

CISO2: Risk of over-complicating it, hard to work out who is income-earner and what proportion is income earning.

Interviewer2: But this is good point.

CISO2: Make it parameterisable, at choice of CISO.

CISO2: So, need to be able to drill down into productivity, cost, - esp in small company.


2. develop modeling techniques


3. develop ontology as glue of tools and methodology

[class diagram: the ontology relates Chapter, Section, Guideline and Guideline Step (via contains) to Threat, Vulnerability, Behaviour, Control, Control Type, Infrastructure/Process, Asset and Role, through relationships such as exploitedBy, hasFoundation, managesRiskOf, hasRiskApproach, isMitigatedBy, hasVulnerability, hasStakeholder, hasSubject and ownedBy]


4. facilitate stakeholder discourse

[screenshot: Password Policy Composition Tool, showing projected breaches (full / composite / partial), productivity and costs per annum for a 100-user sample, broken down over three user classes, with configurable policy properties: minimum password length, change notification days, password complexity (character classes), maximum login retries and change frequency]


Newcastle’s involvement

1. identify human, business and technical concerns

– are working on a case study in Access Management (Maciej, James, with Geoff and Hilary from Bath)

2. develop and apply mathematical modelling techniques

– are generalising concepts to model human behaviour, and are validating it with data collection (Rob, Simon, with Doug, Robin and Bill from UIUC)

– do a modelling case study in DRM (Wen)

3. glue concerns, models and presentation together using a trust economics information security ontology

– developed an information security ontology, taking into account human behavioural aspect (Simon)

– made an ontology editing tool for CISOs (John)

– are working on a collaborative web-based tool (John, Simon, Stefan from SBA, Austria)

4. use the models to improve the stakeholder discourse and decisions

– using participatory design methodology, are working with CISOs to do a user study (Simon, Philip and Angela from UCL)


example of the trust economics methodology

USB sticks

USB stick model

• Tests the hypothesis that there is a trade-off between the components of investments in information security that address confidentiality and availability;

• Captures the trade-off between availability and confidentiality using a model inspired by a macroeconomic model of the Central Bank Problem;

• Conducts an empirical study together with a (rigorously structured) simulation embodying the dynamics of the conceptual model;

• Empirical data is obtained from semi-structured interviews with staff at two organizations;

• Demonstrates the use of the model to explore the utility of trade-offs between availability and confidentiality.

Modelling the Human and Technological Costs and Benefits of USB Memory Stick Security,

Beautement et al, WEIS 2008


central bank’s inflation-unemployment model

[figure: security investment trades off threats to confidentiality against threats to availability]


optimize utility

• you can set the value of I, the investment

– more monitoring of employees

– more training

• given that investment level, find out how humans would behave

– user will use encryption if it optimizes their personal utility function (human scoring function)

• plug this encryption level in the system behavioural model, and determine the utility


the USB stochastic model

a base discrete-event stochastic model


USB stochastic model (in Möbius)


USB model

translate human behavioural aspects into a model parameter


USB model

humans take actions depending on the personal utility they get out of them: in this model, users will encrypt with a probability that optimizes the user’s score

e.g., more reprimands will lower the human score, so in the model humans will behave to avoid reprimands → use encryption


some results

• a company can invest in more help desk staff, or in more monitoring of employees → which of the two investments is chosen makes little difference

• if investment increases, one would expect a gradual increase in users encrypting → instead, there is a sudden sharp increase at some investment level

• one would expect the user to vary their proportion of encryption → the optimal proportion seems to be always 0 or 1


human behaviour

• based on the human behaviour score function, we find out what the optimal encryption level for users is

– more encryption, less embarrassment

– more encryption, more annoying time wasting

• take all the human scoring functions together and determine what the optimal encryption level is for each investment level

• plug that into the model, and solve it for the utility function
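The per-user optimization step above can be sketched as a tiny grid search; the scoring function and its weights are invented for illustration, not taken from the WEIS 2008 model. The sketch also shows why a score that is linear in the encryption probability pushes the optimum to an endpoint, matching the "always 0 or 1" observation.

```python
def user_score(p, embarrassment_cost=8.0, hassle_cost=3.0):
    """Hypothetical per-user score for encrypting a USB stick with
    probability p; more encryption -> less expected embarrassment,
    more annoying time wasted. Weights are invented."""
    return -embarrassment_cost * (1 - p) - hassle_cost * p

def optimal_encryption(score, steps=100):
    """Grid search for the encryption probability maximizing the score."""
    grid = [i / steps for i in range(steps + 1)]
    return max(grid, key=score)

print(optimal_encryption(user_score))  # 1.0 -- an endpoint, as in the slides
```

Flipping the weights (hassle outweighing embarrassment) moves the optimum to the other endpoint, 0.0; no interior optimum exists for a linear score.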


confidentiality/availability utility

[plot: investment on the horizontal axis, encryption probability on the vertical axis; linear conf/avail utility function as a few slides back]


other stochastic models: effort models


privilege graphs (Dacier 1994)

nodes are privileges; a path from an attacker to a privilege implies a vulnerability; here, arcs are labelled with classes of attacks


for comparison: attack tree model


Markov security model with ‘effort’ (Kaaniche ’96)

observation: it takes effort to carry out an attack

add exponentially distributed times to the privilege graph
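With exponentially distributed effort on each arc, the mean total effort along a single attack path is just the sum of the per-stage means (the mean of an exponential with rate r is 1/r); a minimal sketch, with invented attack rates, of the simplest case with no branching.

```python
def mean_path_effort(rates):
    """Expected total effort for a sequence of exponential attack
    stages with the given rates: sum of the stage means 1/rate."""
    return sum(1.0 / r for r in rates)

# three privilege escalations with hypothetical attack rates per day:
effort = mean_path_effort([2.0, 4.0, 5.0])  # roughly 0.95 days on average
```

The full model solves the Markov chain over the whole privilege graph, so branches and alternative paths also contribute; the path case above is the building block.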


results for an example


results: the metric is an availability metric


defining the problem space: ontology of information security


ontologies

• a collection of interrelated terms and concepts that describe and model a domain

• used for knowledge sharing and reuse

• provide machine-understandable meaning to data

• expressed in a formal ontology language (e.g. OWL, DAML+OIL)


ontology features

• common understanding of a domain

– formally describes concepts and their relationships

– supports consistent treatment of information

– reduces misunderstandings

• explicit semantics

– machine understandable descriptions of terms and their relationships

– allows expressive statements to be made about the domain

– reduces interpretation ambiguity

– enables interoperability


ontology features (cont.)

• expressiveness

– ontologies built using expressive languages

– languages able to represent formal semantics

– enable human and software interpretation and reasoning

• sharing information

– information can be shared, used and reused

– supported by explicit semantics

– applications can interoperate through a shared understanding of information


core information security ontology elements

• information assets being accessed

– information that is of value to the organisation, which individuals interact with and which must be secured to retain its value

• the vulnerabilities

– within IT infrastructure, but also within the processes that a ‘user’ may partake in

• the intentional or unintentional threats

– not just to IT infrastructure, but to process security and productivity

• the potential process controls that may be used and their identifiable effects

– these may be technical, but also actions within a business process

• this formalised content is then encoded in an ontology

– e.g., represented in the Web Ontology Language (OWL)
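Before encoding in OWL, the four element types and their relationships can be prototyped as plain records; the class and instance names below are illustrative assumptions, not taken from any published ontology:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    vulnerabilities: list = field(default_factory=list)

@dataclass
class Vulnerability:
    name: str
    exploited_by: list = field(default_factory=list)   # threats

@dataclass
class Threat:
    name: str
    mitigated_by: list = field(default_factory=list)   # controls

@dataclass
class Control:
    name: str
    control_type: str    # e.g. "Reduction", "Acceptance", "Transfer"

# e.g. a customer database made vulnerable by weak passwords
weak_pw = Vulnerability("Weak password chosen")
guessing = Threat("Password guessed by attacker")
policy = Control("Enforce password complexity policy", "Reduction")
weak_pw.exploited_by.append(guessing)
guessing.mitigated_by.append(policy)
customer_db = Asset("Customer database", [weak_pw])
```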


security ontology: relationships

Fenz, ASIACCS’09, Formalizing Information Security Knowledge


security ontology: concepts

Fenz, ASIACCS’09, Formalizing Information Security Knowledge


security ontology: example of fire threat

Fenz, ASIACCS’09, Formalizing Information Security Knowledge


an information security ontology incorporating human-behavioural implications

Simon Parkin, Aad van Moorsel

Newcastle University

Centre for Cybercrime and Computer Security

UK

Robert Coles,

Bank of America, Merrill Lynch

UK

trust economics ontology

• we want to have a set of tools that implement the trust economics methodology

• needs to work for different case studies

• need a way to represent, maintain and interrelate relevant information

• glue between

– problem space: technical, human, business

– models

– interfaces


using an ontology

• We chose to use an ontology to address these requirements, because:

– An ontology helps to formally define concepts and taxonomies

– An ontology serves as a means to share knowledge

• Potentially across different disciplines

– An ontology can relate fragments of knowledge

• Identify interdependencies


business, behaviour and security

• Example: Password Management

– There is a need to balance security and ease-of-use

– A complex password may be hard to crack, but might also be hard to remember

• Is there a way to:

– Identify our choices in these situations?

– Consider the potential outcomes of our choices in a reasoned manner?


requirements

• Standards should be represented

– Information security mechanisms are guided by policies, which are increasingly informed by standards

• The usability and security behaviours of staff must be considered

– Information assets being accessed;

– The vulnerabilities that users create;

– The intentional or unintentional threats user actions pose, and;

– The potential process controls that may be used and their identifiable effects

• CISOs must be able to relate ontology content to the security infrastructure they manage

– Representation of human factors and external standards should be clear, unambiguous, and illustrate interdependencies


information security ontology

• We created an ontology to represent the human-behavioural implications of information security management decisions

– Makes the potential human-behavioural implications visible and comparable

• Ontology content is aligned with information security management guidelines

– We chose the ISO27002: “Code of Practice” standard

– Provides a familiar context for information security managers (e.g. CISOs, CIOs, etc.)

– Formalised content is encoded in the Web Ontology Language (OWL)

• Human factors researchers and CISOs can contribute expertise within an ontology framework that connects their respective domains of knowledge

– Input from industrial partners and human factors researchers helps to make the ontology relevant and useful to prospective users


ontology - overview

[Class diagram: ontology overview. Classes: Chapter, Section, Guideline, Guideline Step, Threat (Infrastructure/Procedural), Behavioural Foundation, Behaviour Control, Control Type, Vulnerability, Asset, Role. Relationships: contains, hasFoundation, exploitedBy, managesRiskOf, isMitigatedBy, hasRiskApproach, hasVulnerability, hasStakeholder, hasSubject, ownedBy.]

ontology – password policy example

Chapter 11: “Access Control”

Section 11.3: “User Responsibilities” (Objective: ...)

Guideline 11.3.1: “Password Use” (Control: ...; Implementation Guidance (Additional): ...; Other Information: ...)

Implementation Guidance Step 11.3.1 (d): “select quality passwords with sufficient minimum length which are: 1) easy to remember; ...”

hasSubject: Password

hasVulnerability: Single Password Memorisation Difficult
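The hierarchy in this example (Chapter, Section, Guideline, Guidance Step) can be captured as nested data; this sketch transcribes the slide's content and keeps its elisions as-is:

```python
# ISO27002 password example, transcribed from the slide
iso27002_example = {
    "chapter": {"number": "11", "name": "Access Control"},
    "section": {"number": "11.3", "name": "User Responsibilities",
                "objective": "..."},
    "guideline": {"number": "11.3.1", "name": "Password Use",
                  "control": "...",
                  "implementation_guidance_additional": "...",
                  "other_information": "..."},
    "guidance_steps": [
        {"number": "11.3.1 (d)",
         "guidance": "select quality passwords with sufficient minimum "
                     "length which are: 1) easy to remember; ..."},
    ],
    # links into the behavioural side of the ontology
    "hasSubject": "Password",
    "hasVulnerability": "Single Password Memorisation Difficult",
}

# a simple traversal: which guidance steps back this guideline?
steps = [s["number"] for s in iso27002_example["guidance_steps"]]
```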

example – password memorisation

[Diagram: the vulnerability “Single Password Memorisation Difficult” is managed by “Maintain Password Policy” (Acceptance) or mitigated by “Make Password Easier To Remember” (Reduction); it is exploited by the threat “Single Password Forgotten” (behavioural foundation: Capability), with consequence “User temporarily without access”.

Key: classes are Vulnerability, Procedural Threat, Behavioural Foundation, Infrastructure Threat, Behaviour Control, Asset, Control Type, Threat Consequence; relationships are mitigated by, has vulnerability, exploited by, manages risk of.]

example – recall methods

[Diagram: “Single Password Memorisation Difficult” is mitigated (Reduction) by “Educate Users in Recall Techniques” (behavioural foundation: Mindset). “Password Stored Externally to Avoid Recall” introduces the threat that an insecure storage medium can be exploited by a malicious party, itself mitigated (Reduction) by implementing ISO27002 Guideline 11.3.1 (b), “avoid keeping a record of passwords”.]

example – password reset function

[Diagram: “Single Password Memorisation Difficult” handled by transfer: “Helpdesk Password Reset Management” addresses “Single Password Forgotten” (Capability; consequence: user temporarily without access) but relies on “Helpdesk Provided With Identity Verification Details”, with the risk “User Account Details Stolen” (Mindset; malicious party gains access). A “Password Reset Process Laborious” or “Helpdesk Busy” means the “Employee Becomes Impatient” (Temporal; user compliance diminished) or the “IT Helpdesk Cannot Satisfy Reset Request” (user temporarily without access); reductions include an “Automated Password Reset System” and “Additional Helpdesk Staff”.]

conclusions

• CISOs need an awareness of the human-behavioural implications of their security management decisions

• Human factors researchers need a way to contribute their expertise and align it with concepts that are familiar to CISOs

– standards

– IT infrastructure

– business processes

• we provided an ontology as a solution

– serves as a formalised base of knowledge

– one piece of the Trust Economics tools


an ontology for structured systems economics

Adam Beautement

UCL, HP Labs

David Pym

HP Labs, University of Bath

ontology to link with the models

thus far, the trust economics ontology represents technology and human behavioural issues

how to glue this to the mathematical models?


ontology


example process algebra model


conclusion on trust economics ontology

the trust economics ontology is work in progress:

- added human behavioural aspects to IT security concepts

- provided an abstraction that allows IT to be represented, tailored to the process algebraic model

to do:

- complete as well as simplify...

- proof is in the pudding: someone needs to use it in a case study


user evaluation for trust economics software

Simon Parkin

Aad van Moorsel

Philip Inglesant

Angela Sasse

UCL

participatory design of a trust economics tool

assume we have all pieces together:

• ontology

• models

• CISO interfaces

what should the tool look like?

we conduct a participatory design study with CISOs from:

• ISS

• UCL

• National Grid

method: get a wish list from CISOs, show a mock-up tool and collect feedback, improve, add the model in the background, try it out with CISOs, etc.


information security management

find out about how users behave, what the business issues are:

CISO1: Transport is a big deal.

Interviewer1: We’re trying to recognise this in our user classes.

CISO1: We have engineers on the road, have lots of access, and are more gifted in IT.

Interviewer1: Do you think it would be useful to configure different user classes?

CISO1: I think it’s covered.

Interviewer1: And different values, different possible consequences if a loss occurs. I’m assuming you would want to be able to configure.

CISO1: Yes. Eg. customer list might or might not be very valuable.

Interviewer1: And be able to configure links with different user classes and the assets.

CISO1: Yes, if you could, absolutely.

Interviewer1: We’re going to stick with defaults at first and allow configuration if needed later. So, the costs of the password policy: running costs, helpdesk staff, trade-off of helpdesk vs. productivity

CISO1: That’s right.


information security management

find out about how users behave, what the business issues are:

Discussion of "Productivity Losses":

CISO2: But it’s proportional to amount they earn. This is productivity. eg. $1m salary but bring $20m into the company. There are expense people and productivity people.

Interviewer1: We have execs, “road warrior”, office drone. Drones are just a cost.

Interviewer2: And the 3 groups have different threat scenarios.

CISO2: Risk of over-complicating it, hard to work out who is income-earner and what proportion is income earning.

Interviewer2: But this is good point.

CISO2: Make it parameterisable, at choice of CISO.

CISO2: So, need to be able to drill down into productivity, cost, - esp in small company.


modelling concepts and model validation

Rob Cain (funded by HP)

Simon Parkin

Aad van Moorsel

Doug Eskin (funded by HP)

Robin Berthier

Bill Sanders

University of Illinois at Urbana-Champaign

project objectives

• performance models traditionally have not included human behavioural aspects

• we want to have generic modelling constructs to represent human behaviour, tendencies and choices:

– compliance budget

– risk propensity

– impact of training

– role dependent behaviour

• we want to validate our models with collected data

– offline data, such as from interviews

– online data, measure ‘live’

• we want to optimise the data collection strategy

• in some cases, it makes sense to extend our trust economics methodology with a strategy for data collection


presentation of Möbius


sample Möbius results

[Chart: Utility and HB Score as a function of the probability of encryption (0 to 1), without compliance budget feedback; vertical axis from 220 to 380.]

sample Möbius results (cont.)

[Chart: Utility and HB Score as a function of the probability of encryption (0 to 1), using compliance budget feedback; vertical axis from 220 to 380.]

criticality of using data

• the goal of using data is to provide credibility to the model:

– by defining and tuning input parameters according to individual organization

– by assessing the validity of prediction results

• issues:

– numerous data sources

– collection and processing phases are expensive and time consuming

– no strategy to drive data monitoring

– mismatch between model and data that can be collected


data collection approach

[Diagram: stakeholders feed data sources, which are classified by cost/quality and importance and feed the model, used for input parameter definition and output validation.]

1. Design a specialized model according to requirements

2. Classify potential data sources according to their cost and quality

3. Optimize collection of data according to parameter importance

4. Run data validation and execute the model


data sources classification

• Cost:

– Cost to obtain

– Time to obtain

– Transparency

– Legislative process

• Quality:

– Accuracy

– Applicability

• Importance:

– Influence of parameter value on output


Organization Budget Parameters (all inputs in the Budget category; influence rated low/medium/high)

• Total security investment: IT budget; default is 100. Influence: medium. Data sources and cost: IT security survey (http://www.gartner.com, http://www.gocsi.com)

• Training investment: training budget; always, one-off, 100. Data sources: interviews with IT directors, public government budget data

• Support proportion of budget: experimental value; proportion of Active Security Investment used for support. Data sources: interviews with IT directors, public government budget data

• Monitoring proportion of budget: experimental value; 1 – (support proportion of budget). Data sources: interviews with IT directors, public government budget data

• Variables: USB stick = 100, software = 0, install and maintenance = 0

Overall Human Parameters (inputs; category: user behavior; data source: user survey)

• Compliance budget: effort willing to spend conforming with security policy that doesn’t benefit you. Variables (generalised): understanding, investment, incentives

• Perceived benefit of task: effort willing to put in without using the compliance budget
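The compliance budget construct can be sketched as a toy simulation (all numbers and the depletion rule below are invented for illustration): each security task consumes the effort not offset by its perceived benefit, and compliance stops once the budget is exhausted:

```python
def simulate_compliance(budget, tasks):
    """tasks: list of (effort, perceived_benefit) pairs. Returns how
    many tasks the user complies with before the budget runs out."""
    complied = 0
    for effort, benefit in tasks:
        net = max(0.0, effort - benefit)   # benefit absorbs some effort
        if net > budget:
            break                          # budget exhausted: non-compliance
        budget -= net
        complied += 1
    return complied

# ten identical encryption tasks: effort 2.0, perceived benefit 0.5,
# so each task draws 1.5 from a budget of 10.0
n = simulate_compliance(budget=10.0, tasks=[(2.0, 0.5)] * 10)   # 6
```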

password: probability of break-in

input parameters (influence: medium):

• prob. of leaving default password (culture of organization; variables: organization policy, user training)

• password strength (user behavior; variables: organization policy, user training)

• attacker determination

• password strength threshold (user behavior)

• password update frequency (user behavior; variables: organization policy, user training)

• prob. of being locked out (user interface; variables: organization policy, user training when password is forgotten)

• prob. of finding lost password (user interface; variables: efficiency of password recovery technique)

• prob. of needing support (user behavior; #support queries / #users)

• management reprimands (user behavior)

output parameters:

• negative support experiences (user behavior)

• prob. password can be compromised (compromised by brute force attack; variables: password strength, attacker determination)

• prob. of forgetting password (influence: high)

• security, availability: #successful data transfer (influence: high)

• security, confidentiality: #exposures + #reveals (influence: high)

data collection research

four sub-problems:

• determine which data is needed to validate the model:

– provide input parameter values

– validate output parameters

• technical implementation of the data collection

• optimize data collection such that cost is within a certain bound: need to find the important parameters and trade off with cost of collecting it

• add data collection to the trust economics methodology:

– a data collection strategy will be associated with the use of a model
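The third sub-problem, keeping collection cost within a bound, can be sketched as a greedy importance-per-cost selection; all parameter names, importance scores and costs below are invented for illustration:

```python
# (parameter name, importance score, collection cost) -- hypothetical
PARAMETERS = [
    ("prob. password compromised", 9, 3.0),
    ("password update frequency",  5, 1.0),
    ("training investment",        2, 4.0),
    ("compliance budget",          8, 5.0),
]

def plan_collection(params, budget):
    """Greedily pick parameters by importance per unit cost until the
    collection budget is spent; a cheap stand-in for exact knapsack."""
    chosen, spent = [], 0.0
    for name, importance, cost in sorted(params,
                                         key=lambda p: p[1] / p[2],
                                         reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

plan = plan_collection(PARAMETERS, budget=9.0)
# the low-importance, expensive parameter is dropped
```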


conclusion

trust economics:

• ontology for human behavioural aspects, incl. editor and community version

• tool design with CISOs

• case studies: password, USB, DRM

• data collection strategies for validation to be expanded:

• generic ontology for trust economics, underlying the tools

• actual tool building

• evaluation of the methodology


trust economics info

http://www.trust-economics.org/

Publications:

An Information Security Ontology Incorporating Human-Behavioural Implications. Simon Parkin, Aad van Moorsel, Robert Coles. International Conference on Security of Information and Networks, 2009.

Risk Modelling of Access Control Policies with Human-Behavioural Factors. Simon Parkin and Aad van Moorsel. International Workshop on Performability Modeling of Computer and Communication Systems, 2009.

A Knowledge Base for Justified Information Security Decision-Making. Daria Stepanova, Simon Parkin, Aad van Moorsel. International Conference on Software and Data Technologies, 2009.

Architecting Dependable Access Control Systems for Multi-Domain Computing Environments. Maciej Machulak, Simon Parkin, Aad van Moorsel. Architecting Dependable Systems VI, R. De Lemos, J. Fabre, C. Gacek, F. Gadducci and M. ter Beek (Eds.), Springer, LNCS 5835, pp. 49-75, 2009.

Trust Economics Feasibility Study. Robert Coles, Jonathan Griffin, Hilary Johnson, Brian Monahan, Simon Parkin, David Pym, Angela Sasse and Aad van Moorsel. Workshop on Resilience Assessment and Dependability Benchmarking, 2008.

The Impact of Unavailability on the Effectiveness of Enterprise Information Security Technologies. Simon Parkin, Rouaa Yassin-Kassab and Aad van Moorsel. International Service Availability Symposium, 2008.

Technical reports:

Architecture and Protocol for User-Controlled Access Management in Web 2.0 Applications. Maciej Machulak, Aad van Moorsel. CS-TR 1191, 2010.

Ontology Editing Tool for Information Security and Human Factors Experts. John Mace, Simon Parkin, Aad van Moorsel. CS-TR 1172, 2009.

Use Cases for User-Centric Access Control for the Web. Maciej Machulak, Aad van Moorsel. CS-TR 1165, 2009.

A Novel Approach to Access Control for the Web. Maciej Machulak, Aad van Moorsel. CS-TR 1157, 2009.

Proceedings of the First Trust Economics Workshop. Philip Inglesant, Maciej Machulak, Simon Parkin, Aad van Moorsel, Julian Williams (Eds.). CS-TR 1153, 2009.

A Trust-economic Perspective on Information Security Technologies. Simon Parkin, Aad van Moorsel. CS-TR 1056, 2007.


conclusion

state of security metrics:

• no good practical system metrics have been devised (on par with downtime, throughput, etc.)

• metrics in abundance (read Jaquith):

– macro measures of what’s happening in the world (# worms, etc.)

– process metrics, often for compliance

– ‘derivative’ system properties (# spam messages, virus detected, etc.)


conclusion

state of security models:

• models:

– are all based on attack tree ideas: what can happen, an attacker, defence mechanisms, in what order, etc.

– often metrics are ‘traditional’: MTTF, success probabilities, etc.

– plethora of imaginative modelling approaches, e.g., economics based, psychology ideas, etc.

• to do

– validation (beyond loose interviews)

– convergence to and widespread use of the best approaches


conclusion

must the lesson be:

it’s not about the metric, stupid!

it’s about justified decision-making!

and models are the way to add rigour?

