SERENE | Software Engineering for Resilient Systems

SERENE Spring School, Birkbeck College, UK April 14, 2010
Tools to Make Objective
Information Security Decisions
—
The Trust Economics Methodology
Aad van Moorsel
Newcastle University
Centre for Cybercrime and Computer Security
aad.vanmoorsel@ncl.ac.uk
motivation
security and trust
data loss
http://www.youtube.com/watch?v=JCyAwYv0Ly0
identity theft
http://www.youtube.com/watch?v=CS9ptA3Ya9E
worms
http://www.youtube.com/watch?v=YqMt7aNBTq8
http://www.informationweek.com/news/software/showArticle.jhtml?articleID=221400323
© Aad van Moorsel, Newcastle University, 2010
3
security and trust
security:
protection of a system against malicious attacks
information security:
preservation of confidentiality, integrity and availability of
information
CIA properties
• confidentiality
• integrity
• availability
why metrics and why quantification?
two uses:
• gives the ability to monitor the quality of a system as it is
being used (using measurement)
• gives the ability to predict the future quality of a
design or a system (using modelling)
a good metric is critical to make measurement and modelling
useful
security and trust
trust: a personal, subjective perception of the quality of a
system, or a personal, subjective decision to rely on a system
evaluation trust: the subjective probability by which an
individual A expects that another individual B performs a
given action on which A’s welfare depends (Gambetta
1988)
decision trust: the willingness to depend on something or
somebody in a given situation with a feeling of relative
security, even though negative consequences are possible
(McKnight and Chervany 1996)
security and trust
what is more important,
security or trust?
“Security Theatre and
Balancing Risks” (Bruce
Schneier)
http://www.cato.org/dailypodcast/podcastarchive.php?podcast_id=812
defining the problem space:
ontology of security
ontologies
• a collection of interrelated terms and concepts
that describe and model a domain
• used for knowledge sharing and reuse
• provide machine-understandable meaning to data
• expressed in a formal ontology language (e.g. OWL,
DAML+OIL)
ontology features
• common understanding of a domain
– formally describes concepts and their relationships
– supports consistent treatment of information
– reduces misunderstandings
• explicit semantics
– machine understandable descriptions of terms and their
relationships
– allows expressive statements to be made about domain
– reduces interpretation ambiguity
– enables interoperability
ontology features (cont.)
• expressiveness
– ontologies built using expressive languages
– languages able to represent formal semantics
– enable human and software interpretation and
reasoning
• sharing information
– information can be shared, used and reused
– supported by explicit semantics
– applications can interoperate through a shared
understanding of information
ontology example
ontology structure
• ontologies describe data semantics
• express semantics by:
– defining information representation building
blocks
– describe relationships between building blocks
– describe relationships within building blocks
• building blocks are classes, individuals and
properties
ontology structure (cont.)
• classes
– represent groups of object instances with similar
properties
– related to object class concept in OO programming
– general statements can be made to include all of a
class’ member objects at once
• individuals
– represent class object instances
– similar to objects in OO programming but do not have
associated functionality
ontology structure (cont.)
• properties
– associate an individual to a value
– values can be simple data values or an object
– individuals may have multiple properties
– similar to accessor methods in OO programming
– can be associated with multiple unrelated classes,
leading to reusability of property descriptions
relating information
• need to describe relationships between classes,
individuals and properties
• the most important relationships are:
– individual to class “is an instance of”
– individual to property “has value of”
– restrictions between classes and properties
• individual to class
– relationship between an individual and its owning class
must be explicitly stated
– supports identification of a class’ members
relating information (cont.)
• individual to property
– individuals have values described by properties
– relationship allows the specification of values for
particular attributes of the individual
• restrictions between classes and properties
– can define which classes have which properties
– can constrain property values to be of a certain class
(range) or to only describe particular classes (domain)
core information security ontology elements
• information assets being accessed
– information that is of value to the organisation, which
individuals interact with and which must be secured to
retain its value
• the vulnerabilities that users create
– within IT infrastructure, but also within the processes that
a ‘user’ may partake in
• the intentional or unintentional threats user actions pose
– not just to IT infrastructure, but to process security and
productivity
• the potential process controls that may be used and their
identifiable effects
– these may be technical, but also actions within a business
process
• this formalised content is then encoded in an ontology
– represented in the Web Ontology Language (OWL)
security ontology: relationships
Fenz, ASIACCS’09, Formalizing Information Security Knowledge
security ontology: concepts
Fenz, ASIACCS’09, Formalizing Information Security Knowledge
security ontology: example of fire threat
Fenz, ASIACCS’09, Formalizing Information Security Knowledge
base metrics
objective
• understand the basic metrics important when assessing IT
systems
classifying metrics (part one)
quantitative vs. qualitative: quantitative metrics can be
expressed through some number, while qualitative metrics
take categorical values such as TRUE or FALSE
quality-of-service (QoS) metrics: express a grade of service
as a quantitative metric
non-functional properties are system properties beyond the
strictly necessary functional properties
IT management is mostly about quantitative/QoS/non-functional metrics
common metrics (performance)
response time, waiting time, propagation delay:
[timeline figure: a user request incurs propagation delay to the
server, waits in the server buffer (waiting time), is processed
(processing time), and the reply incurs propagation delay back to
the user; response time from the server perspective spans waiting
plus processing, while response time from the client perspective
additionally includes both propagation delays]
quantitative metrics
if you measure a metric several times, you will not always (if
ever) get exactly the same result
metrics can be represented as a random variable X, which
has characteristics such as mean, standard deviation,
variance and distribution
also for response time: mean response time, variance of the
response time, distribution of the response time
in a later section we see how to derive these characteristics
given a set of measurement data
classifying metrics (part two)
performance metrics: timing and usage metrics
• CPU load
• throughput
• response time
dependability or reliability metrics: metrics related to accidental
failure
• MTTF
• availability
• reliability
security metrics: metrics related to malicious failures (attacks)
• ?
business metrics: metrics related to cost or benefits
• number of buy transactions on a web site
• cost of ownership
• return on investment
common metrics (performance)
throughput = number of tasks a resource can complete per
time unit:
• jobs per second
• requests per second
• millions of instructions per second (MIPS)
• floating point operations per second (FLOPS)
• packets per second
• kilobits per second (kbps)
• transactions per second
• ...
common metrics (performance)
capacity = maximum sustainable number of tasks
load = offered number of tasks
overload = load is higher than capacity
utilization = the fraction of resource capacity in use (CPU,
bandwidth);
for a CPU, this corresponds to the fraction of time the
resource is busy (sometimes imprecisely called the CPU
load)
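these definitions can be sketched in a few lines (a minimal sketch; function names are mine, not from the slides):

```python
def utilization(load, capacity):
    """Fraction of capacity in use; capped at 1.0 once the system is overloaded."""
    return min(load / capacity, 1.0)

def is_overload(load, capacity):
    """Overload: offered load exceeds the maximum sustainable number of tasks."""
    return load > capacity

print(utilization(80.0, 100.0))   # 0.8
print(is_overload(120.0, 100.0))  # True
```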
relation between performance metrics
utilization versus response time, throughput and load:
[figure: three plots: response time rises sharply as utilization
approaches 1 (overload); utilization grows linearly with offered
load until it saturates at 1; throughput grows with utilization
up to the capacity limit at utilization 1]
common metrics (dependability/reliability)
systems with failures:
[timeline figure: the system alternates between up periods
(operating, ended by a failure) and down periods (failed, ended
by a repair)]
common metrics (dependability/reliability)
Mean Time To Failure (MTTF)
= average length of an up period
Mean Time To Repair (MTTR)
= average length of a down period
availability
= fraction of time system is up
= MTTF / (MTTF + MTTR)
unavailability
= 1 – availability
= fraction of time system is down
reliability at time t
= probability the system does not go down before t
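the availability formula translates directly into code (a sketch; function names are mine):

```python
def availability(mttf_hours, mttr_hours):
    """Availability = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

def yearly_downtime_hours(avail):
    """Unavailability (1 - availability) applied to the hours in a year."""
    return (1 - avail) * 365 * 24

print(availability(24.0, 1.0))                 # 0.96  (1 day MTTF, 1 hour MTTR)
print(round(yearly_downtime_hours(0.999), 2))  # 8.76  (hours, i.e. roughly 9 hours)
```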
relation between dependability metrics
availability and associated yearly down time:
availability   yearly down time
0.9            37 days
0.99           4 days
0.999          9 hours
0.9999         50 minutes
0.99999        5 minutes
relation between dependability metrics
if you have a system with 1 day MTTF and 1 hour MTTR,
would you work on the repair time or the failure time to
improve the availability?
availability   required MTTF        required MTTR
               if MTTR = 1 hour     if MTTF = 1 day
0.96           1 day                1 hour
0.99           4 days               14 minutes
0.999          6 weeks              1½ minutes
0.9999         14 months            9 seconds
0.99999        11 years             1 second
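the table follows from inverting availability = MTTF / (MTTF + MTTR); a sketch (function names are mine):

```python
def required_mttf(avail, mttr_hours):
    """MTTF needed for a target availability, given a fixed MTTR."""
    return avail / (1 - avail) * mttr_hours

def required_mttr(avail, mttf_hours):
    """MTTR needed for a target availability, given a fixed MTTF."""
    return (1 - avail) / avail * mttf_hours

print(round(required_mttf(0.96, 1.0), 6))   # 24.0  hours, i.e. 1 day
print(round(required_mttr(0.96, 24.0), 6))  # 1.0   hour
```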
five nines
conclusion
discussed the basic metrics important when managing IT
systems:
• performance metrics: throughput, utilization, response
time
• dependability metrics: availability, MTTF, MTTR
• security metrics
• business metrics
security metrics
measure security
Lord Kelvin:
“what you cannot measure, you cannot manage”
how true is this:
• in science?
• in engineering?
• in business?
and how true is:
“we only manage what we measure”?
some related areas: performance
measuring performance:
• we know CPU speed (and Intel measured it)
• we can easily measure sustained load (throughput)
• we can model performance reasonably well (queuing,
simulations)
• we’re pretty good at adapting systems for performance,
through load balancing etc.
• there is a TOP 500 for supercomputers
• we buy PCs based on performance, and their performance
is advertised
• companies buy equipment based on performance
some related areas: availability
measuring availability:
• we do not know much about CPU reliability (although Intel
measures it)
• it is easy, but time consuming to measure down time
• we can model availability reasonably well (Markov
chains), although we do not know parameter values for
fault and failure occurrences
• we’re rather basic in adapting systems for availability, but
there are various fault tolerance mechanisms
• there is no TOP 500 for reliable computers
• we do not buy PCs based on availability, and their
availability is rarely advertised
• companies buy equipment based on availability only for
top end applications (e.g. goods and finances admin of
super market chains)
how about security
measuring security:
• we do not know much about level of CPU security (and
Intel does not know how to measure it)
• it is possible to measure security breaches, but how much
do they tell you?
• we do not know how to model for levels of security, for
instance we do not know what attacks look like
• we’re only just starting to research adapting systems for
security, though many security mechanisms are available
• there is no TOP 500 for secure computers
• we do not buy PCs based on privacy or security, and their
privacy/security is rarely advertised
• companies are very concerned about security, but do not
know how to measure it and show improvements
what’s special about security
security is a hybrid between functional and non-functional
(performance/availability) property
• it is tempting to think security is binary: it is secured or
not → a common mistake
• security deals with loss and attacks
– you can measure after the fact, but would like to
predict
– maybe loss can still be treated like accidental failures
(as in availability)
– attacks certainly require knowledge of attackers, how
they act, when they act, what they will invent
• security level (even if we could somehow measure it) is
meaningless on its own:
– what’s the possible consequence
– how do people react to it (risk averse?)
how do people now measure security?
reporting after the fact:
• industry and government are obligated to report breaches
(NIST database and others)
• measure how many non-spam emails went through, etc.
some predictive metrics as ‘substitute’ for security:
• how many CPU cycles needed to break encryption
technique?
• risk analysis: likelihood × impact, summed over all breaches
→ but we know neither likelihood nor impact
why is measurable security important?
without good measures
• security is sold as all or nothing
• security purchase decisions are based on scare tactics
• system configuration (including cloud, SaaS) cannot be
judged for resulting security
CIA metrics
how about CIA
confidentiality: keep organisation’s data confidential
(privacy for the organisation)
integrity: data is unaltered
availability: data is available for use
• you can score them, and sum them up (see CVSS later)
• you can measure for integrity and availability if there is
centralized control
• you cannot easily predict them
good metrics
in practice, good metrics should be:
• consistently measured, without subjective criteria
• cheap to gather, preferably in an automated way
• a cardinal number or percentage, not qualitative labels
• using at least one unit of measure (defects, hours, ...)
• contextually specific—relevant enough to make decisions
from Jaquith’s book ‘Security Metrics’
good metrics
in practice, metrics cover four different aspects:
• perimeter defenses
– # spam detected, # virus detected
• coverage and control
– # laptops with antivirus software, # patches per month
• availability and reliability
– host uptime, help desk response time
• application risks
– vulnerabilities per application, assessment frequency for
an application
from Jaquith’s book ‘Security Metrics’
good metrics
how good do you think the metrics from Jaquith’s book
‘Security Metrics’ are?
it’s the best we can do now, but as the next slide shows,
there are a lot of open issues
good metrics
ideally good metrics should:
• not measure the process used to design, implement or
manage the system, but the system itself
• not depend on things you will never know (such as in risk
management)
• be predictive about the future security, not just reporting
the past
but these are very challenging properties we do not yet
know how to deliver
trust metrics
trust
trust [Gambetta]:
“expectation of an actor (trustor) that another
actor (trustee) will emit certain behavior,
in a context where the trustor may not be able to
monitor the trustee,
and where outcome of a social phenomenon
involving the trustor is dependent on that
behavior of the trustee”
trust: very complex notion
trust
trust is needed when uncertainty exists, either because
– something is not known, or
– one does not trust the source of the info
uncertainty in online transactions:
– the customer cannot monitor the provider
– the customer does not believe the legal institutions in a
different country will act honestly in case of disputes
– the customer does not believe a technological claim,
e.g., transactional properties
uncertainty may even exist when a proof is given: why
believe the proof is correct?
trust
uncertainty may even exist when a proof is given:
why believe the proof is correct?
that is, uncertainty is in the eye of the beholder
so, by definition, no technology solution exists for
trust
note furthermore:
‘trust is cheap, security is expensive’
‘the more trust exists, the less security is needed’
trust
no technology solution exists for trust
but we still can work toward improving trust
let us look at trust in cloud, where cooperation
between parties leads to business interactions
to establish cooperation, trust is needed → study
cooperation; tools to establish cooperation are
trust enablers
theory of Axelrod
how does trust develop?
Axelrod on cooperation
mutuality of preferences:
cooperation is beneficial for all
shadow of the future:
if one does not cooperate, it may hurt later
sanctioning:
if one does not cooperate, other parties (legal
institutions, mafia, ...) will sanction
how does trust develop?
Scott on institutions
to achieve Axelrod’s incentives, we use institutions:
constraints on behaviour of participating parties
• a legal institution can be expected to deter parties from
interactions that are against the law (‘regulative
institutions’)
• a norm from society or parents will influence how parties
behave (‘normative institutions’)
• the expectation that all parties are in it to make money
will restrict choices (‘cognitive institutions’)
Scott on institutions
technological institutions
for online interactions, technology also constrains behaviour
→ technological institutions
examples:
• a dedicated communication link instead of WWW reduces
uncertainty
• encryption reduces danger of losing important information
(perhaps at the cost of performance jitter)
• automated contract and non-repudiation
• automated agent-based negotiation
• ...
removing uncertainty through institutions
[diagram: starting from all states, each institution removes
states: states impossible because they would not be economically
rational, states impossible because of laws, and states
impossible because technology does not support them]
one party’s view, including (dis)trust
[diagram: of all states, only the remaining states from one
party’s perspective are considered possible, reflecting that
party’s trust and distrust]
cooperation
a party will participate if it believes there are
beneficial states
a party will remain participating as long as the
states remain beneficial (can be relaxed)
a market will remain functioning if
– overlap in states for participating parties
– states will remain in overlap area
conclusion
an institutional framework for trust in cloud and
other online interactions
agents may implement state machines
– depicting the state the agent believes to be in
– determining the next action to take
basis for middleware for trust (uncertainty tolerance)
web of trust and
reputation systems
trust and reputation
reputation is a belief about a person’s (or thing’s)
character or standing (Jøsang, 2009)
could call it ‘trust in dependability of someone or
something’
so: reputation of B = expected [dependability trust
in B]
a quantitative metric!
trust and reputation
reputation is a belief about a person’s (or thing’s) character or
standing (Jøsang, 2009)
characteristics:
• reputation is public
• reputation is shared, but not necessarily adopted by all people
(they might not ‘trust’ the reputation number)
reputation is the trust level reported by someone else (person A),
but you may decide:
1. you do not trust the judgement of A
2. you do not trust the honesty of A
so you can have:
• I trust you despite your bad reputation
• I trust you because of your good reputation
security and trust
from Jøsang, IFIPTM 2009
• add cost to this figure
• add productivity loss to this figure
• add emergence of new applications to this figure
is trust transitive?
from Jøsang, IFIPTM 2009
PGP: Pretty Good Privacy
PGP is an asymmetric key solution:
• symmetric: we share a password
• asymmetric: we each have a public and a private
password, and only share the public one
use of asymmetric keys:
1. signing: I use my private key to encrypt a document →
everyone can use my public key to decrypt it, but if the
decryption succeeds, it is proof it came from me
2. encryption: I use your public key to encrypt a document
→ only your private key can decrypt it
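a toy illustration of these two uses, with the small textbook RSA key pair n = 61 × 53 (purely illustrative; real PGP uses much larger keys and message padding):

```python
# public key (n, e) is shared; private key exponent d stays secret
n, e, d = 3233, 17, 2753   # n = 61 * 53, and e * d = 1 mod (60 * 52)

def sign(m):
    # signing: "encrypt" with the private key
    return pow(m, d, n)

def verify(m, sig):
    # anyone can check a signature with the public key
    return pow(sig, e, n) == m

def encrypt(m):
    # encryption: anyone can encrypt with the public key
    return pow(m, e, n)

def decrypt(c):
    # only the private key holder can decrypt
    return pow(c, d, n)

msg = 65
assert verify(msg, sign(msg))
assert decrypt(encrypt(msg)) == msg
```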
PGP: Pretty Good Privacy
can we trust in PGP?
main issue: if you give me your public key, why
should I trust it comes from you and has not been
tampered with?
solutions:
1. PKI: Public Key Infrastructure
2. Web of Trust (PGP’s solution)
PKI Certification Authority
Certification Authority (CA) assures the identity of the
owner of a public key
if you want a private/public key pair, you go to the CA
• it checks who you are and what kind of trust people can
place in you
• all your details and the public key are put in a public key
certificate
• you receive the public key certificate and the
private/public key pair
when you send the public key certificate to someone, the
person can check with the CA if you are who you say you
are
PKI
from Jøsang, IFIPTM 2009
Web of Trust
no certification authority
• you, A, as a new participant, create
your own public key certificate
• you go to someone, B, who is already
in the web of trust and ask that
person to sign your public key
certificate
• anyone who trusts B will now trust
your public key
• others may ask you to sign their
public key certificate
reputation systems
assume a system, with a number of participants
• all participants give feedback about other participants
• some mathematical formula computes resulting reputation
– reputation = sum positive scores – sum negative scores
(eBay)
– reputation = average all scores
– reputation = (1 + sum positive) / (1 + sum all scores)
– ...
• reputation computation can be centralized or distributed
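the scoring rules above can be sketched directly (function names are mine; the formulas are the ones listed):

```python
def ebay_score(num_positive, num_negative):
    # reputation = sum positive scores - sum negative scores (eBay-style)
    return num_positive - num_negative

def average_score(scores):
    # reputation = average of all scores
    return sum(scores) / len(scores)

def ratio_score(num_positive, num_total):
    # reputation = (1 + sum positive) / (1 + sum all scores), as on the slide
    return (1 + num_positive) / (1 + num_total)

print(ebay_score(120, 5))           # 115
print(average_score([1, 0, 1, 1]))  # 0.75
```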
data collection: honeypots and
security breach databases
honeypots
a honeypot
• pretends to be a resource with value to attackers, but is
actually isolated and monitored,
in order to:
• misguide attackers and analyse their behaviour.
honeypots
two types:
– high-interaction:
• real services, real OS, real application → higher risk
of being used to break in or attack others
• honeynets (two or more honeypots in a network)
– low-interaction:
• emulated services → low risk
• honeypots like nepenthes
an example of a hacker in a honeypot
SSH-1.99-OpenSSH_3.0
SSH-2.0-GOBBLES
GGGGO*GOBBLE*
uname -a;id
OpenBSD pufferfish 3.0 GENERIC#94 i386
uid=0(root) gid=0(wheel) groups=0(wheel)
ps -aux|more
USER PID   %CPU %MEM VSZ RSS TT STAT STARTED TIME    COMMAND
root 16042 0.0  0.1  372 256 ?? R    2:48PM  0:00.00 more (sh)
root 25892 0.0  0.2  104 452 ?? Ss   Tue02PM 0:00.14 syslogd
root 13304 0.0  0.1  64  364 ?? Is   Tue02PM 0:00.00 portmap
...
root 1     0.0  0.1  332 200 ?? Is   Tue02PM 0:00.02 /sbin/init
id
uid=0(root) gid=0(wheel) groups=0(wheel)
who
cat inetd.conf
(an attempt to edit the configuration file for network services)
data from a honeypot
data from 2003, number of different ‘attack’ sources
Pouget, Dacier, Debar: “Attack processes found on the Internet”
data from honeypots
a lot of other data can be obtained:
• how do worms propagate?
• how do attackers use zombies?
• what kind of attackers exist, and which ones start
denial-of-service attacks?
• which country do the attacks come from?
• ...
Pouget, Dacier, Debar: “Attack processes found on the Internet”
honeypots
a honeynet is a network of honeypots and other information
system resources
T. Holz, “Honeypots and Malware Analysis—Know Your Enemy”
honeynets
three tasks in a honeynet:
1. data capture
2. data analysis
3. data control: especially high-interaction honeypots are
vulnerable to being misused by attackers → control the
data flow, so that attacks neither reach inside the
organisation, nor spread to other innocent parties
T. Holz, “Honeypots and Malware Analysis—Know Your Enemy”
US-CERT security vulnerabilities
United States Computer Emergency Readiness Team
people submit vulnerability notes, e.g.:
Vulnerability Note VU#120541
SSL and TLS protocols renegotiation vulnerability
A vulnerability exists in SSL and TLS protocols that may allow
attackers to execute an arbitrary HTTP transaction.
Credit: Marsh Ray of PhoneFactor
CVSS scoring in US-CERT
US-CERT uses a scoring system to determine how serious a
vulnerability is: the Common Vulnerability Scoring System
(CVSS)
P. Mell et al, “CVSS—A Complete Guide to the CVSS Version 2.0”
CVSS
BaseScore = 0.6 x Impact + 0.4 x Exploitability
Impact = 10.41 x (1 - (1 - ConfImpact) x (1 - IntegImpact) x (1 - AvailImpact))
ConfImpact = case ConfidentialityImpact of
• none: 0.0
• partial: 0.275
• complete: 0.660
...
P. Mell et al, “CVSS—A Complete Guide to the CVSS Version 2.0”
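a sketch of this (simplified) base-score arithmetic; note that the full CVSS v2 base score also applies an adjustment factor that the simplified combination above omits, and CVSS v2 uses the same none/partial/complete values for the integrity and availability impacts as it does for confidentiality:

```python
# CVSS v2 impact values for none / partial / complete
CIA_IMPACT = {"none": 0.0, "partial": 0.275, "complete": 0.660}

def impact(conf, integ, avail):
    # Impact = 10.41 x (1 - (1 - ConfImpact)(1 - IntegImpact)(1 - AvailImpact))
    return 10.41 * (1 - (1 - conf) * (1 - integ) * (1 - avail))

def base_score(impact_value, exploitability):
    # the slide's simplified combination of impact and exploitability
    return 0.6 * impact_value + 0.4 * exploitability

i = impact(CIA_IMPACT["partial"], CIA_IMPACT["partial"], CIA_IMPACT["none"])
print(round(i, 2))  # 4.94
```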
statistics for modelling
and measurement
objective
• understand the basic statistics so you can take your own
measurements
measurements
x1, x2, x3, ..., xN are statistically independent measurement
samples (for instance response time measurements)
mathematically, the xi, i = 1...N, are realizations of a random
variable X (for instance, X represents the measured
response time)
an estimator is a function of the samples (notation X̂)
X has a mean value (expectation), E[X]
what would be a reasonable estimator X̂ for E[X]?
unbiased estimators
X̂ represents the estimator based on the measured response time X
the real response time is represented by the random variable R
mathematically, an unbiased estimator implies E[X̂] = E[X]
source of estimator bias:
• mathematical subtleties (see standard deviation for one example)
for an experimentalist, an unbiased experiment also implies
E[X] = E[R]
sources of experiment bias:
• measurements not precise
• measurement points not placed exactly correctly
• environment of experiments not representative
unbiased estimators
mean:
E[X] ≈ (1/N) Σ_{i=1..N} x_i

variance (Var):
Var = E[(X − E[X])²] ≈ (1/(N−1)) Σ_{i=1..N} (x_i − E[X])²
(dividing by N−1 rather than N makes this estimator unbiased)

standard deviation (SD):
SD = √Var

100α percentile:
smallest x such that Pr[X ≤ x] ≥ α,
estimated by the smallest x with (# x_i ≤ x) / N ≥ α
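these estimators translate directly into code (a minimal sketch; function names are mine):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    # unbiased estimator: divide by N - 1, not N
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def std_dev(xs):
    return math.sqrt(variance(xs))

def percentile(xs, alpha):
    # smallest x such that the fraction of samples <= x is at least alpha
    ordered = sorted(xs)
    k = math.ceil(alpha * len(ordered))
    return ordered[k - 1]

xs = [1.0, 3.0, 2.0]
print(mean(xs), variance(xs), std_dev(xs))  # 2.0 1.0 1.0
```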
confidence intervals
central limit theorem: if I carry out an experiment often
enough, and average the result, then this average
converges, and takes on values close to a Normal
distribution
this allows one to construct “p-percent confidence intervals”:
with p percent certainty, the real value lies within the
confidence interval around the estimated value
note: that implies that 100−p percent of the time you’re off...
confidence interval
p-percent confidence interval:
[ E[X] − cp · SD / √N , E[X] + cp · SD / √N ]

(with E[X] and SD the estimates computed from the samples)
where the constant cp depends on p, and can be read from
tables for Normal distribution percentiles:
p = 95% → cp = 1.96
p = 99% → cp = 2.58
example
you measure response time of a web site
x1 = 1 sec., x2 = 3 sec., x3 = 2 sec.
using the unbiased estimators:
E[X] = 2 sec and SD = 1 sec
95% confidence interval: [ 0.87 , 3.13 ]
99% confidence interval: [ 0.51 , 3.49 ]
increase the number of samples, 99% confidence interval:
N = 100:
[ 1.74 , 2.26 ]
N = 10000:
[ 1.97 , 2.03 ]
N = 1000000:
[ 1.997 , 2.003 ]
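a sketch of the interval computation, using the standard two-sided Normal percentile 1.96 for a 95% interval (function name is mine):

```python
import math

def confidence_interval(xs, cp):
    """cp is the two-sided Normal percentile, e.g. 1.96 for 95%, 2.58 for 99%."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)   # unbiased variance
    half = cp * math.sqrt(var) / math.sqrt(n)       # cp * SD / sqrt(N)
    return m - half, m + half

lo, hi = confidence_interval([1.0, 3.0, 2.0], 1.96)
print(round(lo, 2), round(hi, 2))  # 0.87 3.13
```

for N as small as 3, Student-t percentiles would strictly be more appropriate than Normal percentiles; the Normal values are used here to match the slides.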
confidence intervals prerequisite:
independent samples from identical distributions
the confidence interval statistics in this section apply only when
samples are statistically independent and drawn from one
identical distribution (called i.i.d., or independent and
identically distributed)
examples of dependence:
• response time of subsequent web page customers: they might
both be influenced by the server being busy
• down time of a system in consecutive days: they might both be
influenced by a virus that was spread just around these days
example of independence:
• subsequent throws of a die or tosses of a coin
• in systems, often samples are reasonably independent, but you
can never be sure
© Aad van Moorsel, Newcastle University, 2010
96
dealing with dependent samples
if you incorrectly assume independence, your confidence
interval will typically be much narrower than it should be → you
will have more confidence in your estimate than is justified
to deal with dependence you can use the method of batch
means (also called the method of batches)
the method of batches puts dependent samples in the same
batch → independence between the batches can be assumed
(and then the confidence interval theory applies again)
© Aad van Moorsel, Newcastle University, 2010
97
method of batch means
1. take the number of batches M to be 30 or more
2. create the batches: put x1 to xN/M in batch 1, xN/M+1 to x2N/M in
batch 2, etc.
(note: you will have M batches, each with N/M samples)
3. compute the means of each batch and call this yj for the j-th batch
4. then compute confidence intervals using y1 to yM instead of the
original samples x1 to xN
explanation: batches will be reasonably independent, as long as each
batch contains a large number of samples
handy check: try it with different values of M (for instance both 30
and 100 batches); if the batches are independent, the resulting
confidence intervals should be about the same (unfortunately, the
converse doesn’t hold, so use with care)
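A minimal sketch of steps 1 to 4 in Python (assuming, for simplicity, that N is a multiple of M; any remainder samples are dropped):

```python
import math
from statistics import NormalDist

def batch_means_ci(xs, m=30, p=0.95):
    """p-confidence interval for the mean via the method of batch means."""
    n = len(xs) // m                                         # samples per batch
    ys = [sum(xs[j * n:(j + 1) * n]) / n for j in range(m)]  # batch means y_1..y_M
    ybar = sum(ys) / m
    sd = math.sqrt(sum((y - ybar) ** 2 for y in ys) / (m - 1))
    c = NormalDist().inv_cdf((1 + p) / 2)                    # ~1.96 for p = 0.95
    half = c * sd / math.sqrt(m)
    return ybar - half, ybar + half
```

the handy check above then amounts to calling, say, batch_means_ci(xs, 30) and batch_means_ci(xs, 100) and comparing the resulting intervals.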
© Aad van Moorsel, Newcastle University, 2010
98
conclusion
have discussed the basics behind doing good
system analysis
• metrics
• statistics
© Aad van Moorsel, Newcastle University, 2010
99
human factors and
economic drivers
Aad van Moorsel
Newcastle University
Centre for Cybercrime and Computer Security
aad.vanmoorsel@newcastle.ac.uk
trust economics
• research project funded by the UK Government’s
Technology Strategy Board (TSB)
• members include both academic institutions and industrial
organisations
– universities: Newcastle, Bath, UCL, Aberdeen
– companies: HP Labs Bristol, Merrill Lynch, National Grid
• the work builds on efforts to further the understanding of a
number of diverse factors in information security
management, and how they are related
– human factors (UCL, Bath)
– economics of security (Bath, Aberdeen)
– business processes (HP, Merrill Lynch)
– security management tools (Newcastle)
• encapsulate insights from these strands of research
© Aad van Moorsel, Newcastle University, 2010
101
trust economics
• human factors
– user behaviour studies (e.g. USB sticks, password behaviours)
– rationalising human/security interaction
• economics of security
– economic modelling
– formalising relationships between security choices and economic
influences
• business processes
– utility functions
– system modelling
• security management tools
– information security & human factors ontology
– information security knowledge base tools
© Aad van Moorsel, Newcastle University, 2010
102
Newcastle ontology
• explicitly represent human factors in information security
management, and relate them to infrastructure components and
processes that are familiar to a Chief Information Security Officer
(CISO)
– elements of IT infrastructure that must be secured and controlled
– security standards that guide policy decisions (e.g. ISO27K)
• provides support for IT security management decision-making
– clearly and unambiguously represent elements of the IT security
infrastructure, but also the human behaviours that may arise or be
controlled as a result of deploying specific security mechanisms
• provides a means of encoding expert knowledge of human factors as
relates to IT security management
– this knowledge was previously unavailable to CISOs, or hidden away
in a form that made it difficult to communicate
© Aad van Moorsel, Newcastle University, 2010
103
use of an ontology
• an ontology helps to formally describe concepts and
taxonomies within information security management
– the exact meanings of terms can often be lost as they are
communicated or internalised within an organisation
• use of an ontology provides scope to identify the
interdependencies between various elements of
information security management
• an ontology can also be used to relate and communicate
different domains of knowledge across shared concepts,
and demonstrate the hierarchies that exist within them
• an ontology provides scope to reason about captured
knowledge
• reasoning may be achieved manually or by a machine
© Aad van Moorsel, Newcastle University, 2010
104
ontology structure
[Class diagram: the ISO27002 document structure (a Chapter contains
Sections, a Section contains Guidelines, a Guideline contains
Guideline Steps) is linked to the security concepts: an Asset
(ownedBy a Role) hasVulnerability Vulnerabilities; a Vulnerability is
exploitedBy Infrastructure or Procedural Threats and isMitigatedBy a
Behaviour Control, which managesRiskOf the Threat, hasFoundation a
Behavioural Foundation and hasRiskApproach a Control Type; Guideline
Steps have subjects (hasSubject) and stakeholders (hasStakeholder);
associations carry 1 and * multiplicities]
© Aad van Moorsel, Newcastle University, 2010
105
ontology – password policy case study
Chapter (Number: 11; Name: “Access Control”)
  contains Section (Number: 11.3; Name: “User Responsibilities”;
  Objective: ...)
  contains Guideline (Number: 11.3.1; Name: “Password Use”;
  Control: ...; Implementation Guidance (Additional): ...;
  Other Information: ...)
  contains Implementation Guidance Step (Number: 11.3.1 (d);
  Guidance: “select quality passwords with sufficient minimum length
  which are:
  1) easy to remember;
  ...”)
the Guideline Step hasSubject “Password”, which hasVulnerability
“Single Password Memorisation Difficult”
© Aad van Moorsel, Newcastle University, 2010
106
case study – recall methods
[Diagram: the vulnerability “Single Password Memorisation Difficult”
is mitigated by the Reduction controls “Educate Users in Recall
Techniques” and “Implement ISO27002 Guideline 11.3.1 (b), ‘avoid
keeping a record of passwords’”; it is exploited by the Mindset
threat “Password Stored Externally to Avoid Recall”, with the
consequence “Insecure storage medium can be exploited by malicious
party”. Key: classes are Vulnerability, Procedural Threat,
Behavioural Foundation, Infrastructure Threat, Behaviour Control,
Asset, Control Type and Threat Consequence; relationships are
“mitigated by”, “has vulnerability”, “exploited by” and
“manages risk of”]
© Aad van Moorsel, Newcastle University, 2010
107
case study – password reset function
[Diagram: the vulnerability “Single Password Memorisation Difficult”
is exploited by the threat “Single Password Forgotten” (consequence:
user temporarily without access); the risk is managed by the
Transfer control “Helpdesk Password Reset Management” and the
Reduction controls “Helpdesk Provided With Identity Verification
Details”, “Automated Password Reset System” and “Additional Helpdesk
Staff”; a laborious password reset process and a busy helpdesk
(Temporal, Mindset and Capability foundations) lead to “Employee
Becomes Impatient” (user compliance diminished), “IT Helpdesk Cannot
Satisfy Reset Request” (user temporarily without access) and “User
Account Details Stolen” (malicious party gains access)]
© Aad van Moorsel, Newcastle University, 2010
108
conclusion
Aad van Moorsel
Newcastle University
Centre for Cybercrime and Computer Security
aad.vanmoorsel@newcastle.ac.uk
conclusion
part 1: assessment of security and trust
• used an ontology to describe the problem space
• noted subtle differences between security and trust
• realised that several security metrics have been proposed and used: CIA,
CVSS, Jaquith’s, but that
– security metrics are extremely challenging to define effectively: how
to make them representative and predictive?
• noted that trust metrics are used in reputation systems and web of trust,
but that these are
– equally challenging to make representative and predictive
• provided basic methodology for good measurement studies
you now know some basics; apply them in the field and improve on the
methodologies used in information security assessment!
© Aad van Moorsel, Newcastle University, 2010
110
trust economics methodology
trust economics methodology for security decisions
[Diagram: stakeholders discuss trade-offs (legal issues, human
tendencies, business concerns, ...), informed by a model of the
information system]
© Aad van Moorsel, Newcastle University, 2010
112
trust economics research
from the trust economics methodology, the following
research activities follow:
1. identify human, business and technical concerns
2. develop and apply mathematical modelling
techniques
3. glue concerns, models and presentation together using
a trust economics information security ontology
4. use the models to improve the stakeholders’ discourse
and decisions
© Aad van Moorsel, Newcastle University, 2010
113
our involvement
1. identify human, business and technical concerns
– are working on a case study in Access Management (Maciej, James, with
Geoff and Hilary from Bath)
2. develop and apply mathematical modelling techniques
– are generalising concepts to model human behaviour, and are validating it
with data collection (Rob, Simon, with Doug, Robin and Bill from UIUC)
– do a modelling case study in DRM (Wen)
3. glue concerns, models and presentation together using a trust economics
information security ontology
– developed an information security ontology, taking into account human
behavioural aspects (Simon)
– made an ontology editing tool for CISOs (John)
– are working on a collaborative web-based tool (John, Simon, Stefan from
SBA, Austria)
4. use the models to improve the stakeholders’ discourse and decisions
– using participatory design methodology, are working with CISOs to do a user
study (Simon, Philip and Angela from UCL)
© Aad van Moorsel, Newcastle University, 2010
114
example of the trust economics methodology
passwords
Information Security Management
Find out about how users behave, what the business issues
are:
CISO1: Transport is a big deal.
Interviewer1: We’re trying to recognise this in our user classes.
CISO1: We have engineers on the road, have lots of access, and are more gifted in
IT.
Interviewer1: Do you think it would be useful to configure different user
classes?
CISO1: I think it’s covered.
Interviewer1: And different values, different possible consequences if a loss
occurs. I’m assuming you would want to be able to configure.
CISO1: Yes. Eg. customer list might or might not be very valuable.
Interviewer1: And be able to configure links with different user classes and the
assets.
CISO1: Yes, if you could, absolutely.
Interviewer1: We’re going to stick with defaults at first and allow configuration
if needed later. So, the costs of the password policy: running costs, helpdesk
staff, trade-off of helpdesk vs. productivity
CISO1: That’s right.
© Aad van Moorsel, Newcastle University, 2010
116
Information Security Management
Find out about how users behave, what the business issues
are:
Discussion of "Productivity Losses":
CISO2: But it’s proportional to amount they earn. This is productivity. eg. $1m
salary but bring $20m into the company. There are expense people and
productivity people.
Interviewer1: We have execs, “road warrior”, office drone. Drones are just a
cost.
Interviewer2: And the 3 groups have different threat scenarios.
CISO2: Risk of over-complicating it, hard to work out who is income-earner and
what proportion is income earning.
Interviewer2: But this is good point.
CISO2: Make it parameterisable, at choice of CISO.
…
CISO2: So, need to be able to drill down into productivity, cost, - esp in small
company.
© Aad van Moorsel, Newcastle University, 2010
117
a model of the IT system
© Aad van Moorsel, Newcastle University, 2010
118
tool to communicate
the result to a CISO
[Screenshot: the Password Policy Composition Tool: a window with
File and Help menus; input panels for Policy Properties (password
length, password complexity / character classes, password change
frequency, password change notification days, password login
attempts / max retries, each with lower and upper bounds) and for
Organisation Properties and User Properties; output charts of
projected per-annum Breaches (Full, Composite, Partial),
Productivity and Costs for a 100-user sample, broken down per user
class; “Generate Output” and “Export Policy” buttons]
© Aad van Moorsel, Newcastle University, 2010
119
an information security
ontology incorporating
human-behavioural implications
Simon Parkin, Aad van Moorsel
Newcastle University
Centre for Cybercrime and Computer Security
UK
Robert Coles,
Bank of America, Merrill Lynch
UK
trust economics ontology
• we want to have a set of tools that implement the trust
economics methodology
• needs to work for different case studies
• need a way to represent, maintain and interrelate relevant
information
• glue between
– problem space: technical, human, business
– models
– interfaces
© Aad van Moorsel, Newcastle University, 2010
121
using an ontology
• We chose to use an ontology to address these
requirements, because:
– An ontology helps to formally define concepts and
taxonomies
– An ontology serves as a means to share knowledge
• Potentially across different disciplines
– An ontology can relate fragments of knowledge
• Identify interdependencies
© Aad van Moorsel, Newcastle University, 2010
122
business, behaviour and security
• Example: Password Management
– There is a need to balance security and ease-of-use
– A complex password may be hard to crack, but might
also be hard to remember
• Is there a way to:
– Identify our choices in these situations?
– Consider the potential outcomes of our choices in a
reasoned manner?
© Aad van Moorsel, Newcastle University, 2010
123
requirements
• Standards should be represented
– Information security mechanisms are guided by policies, which are
increasingly informed by standards
• The usability and security behaviours of staff must be
considered
– Information assets being accessed;
– The vulnerabilities that users create;
– The intentional or unintentional threats user actions pose, and;
– The potential process controls that may be used and their
identifiable effects
• CISOs must be able to relate ontology content to the
security infrastructure they manage
– Representation of human factors and external standards should be
clear, unambiguous, and illustrate interdependencies
© Aad van Moorsel, Newcastle University, 2010
124
information security ontology
• We created an ontology to represent the human-behavioural implications of information security
management decisions
– Makes the potential human-behavioural implications visible and
comparable
• Ontology content is aligned with information security
management guidelines
– We chose the ISO27002: “Code of Practice” standard
– Provides a familiar context for information security managers (e.g.
CISOs, CIOs, etc.)
– Formalised content is encoded in the Web Ontology Language
(OWL)
• Human factors researchers and CISOs can contribute
expertise within an ontology framework that connects
their respective domains of knowledge
– Input from industrial partners and human factors researchers helps
to make the ontology relevant and useful to prospective users
© Aad van Moorsel, Newcastle University, 2010
125
ontology - overview
[Class diagram: the ISO27002 document structure (a Chapter contains
Sections, a Section contains Guidelines, a Guideline contains
Guideline Steps) is linked to the security concepts: an Asset
(ownedBy a Role) hasVulnerability Vulnerabilities; a Vulnerability is
exploitedBy Infrastructure or Procedural Threats and isMitigatedBy a
Behaviour Control, which managesRiskOf the Threat, hasFoundation a
Behavioural Foundation and hasRiskApproach a Control Type; Guideline
Steps have subjects (hasSubject) and stakeholders (hasStakeholder);
associations carry 1 and * multiplicities]
© Aad van Moorsel, Newcastle University, 2010
126
ontology – password policy example
Chapter (Number: 11; Name: “Access Control”)
  contains Section (Number: 11.3; Name: “User Responsibilities”;
  Objective: ...)
  contains Guideline (Number: 11.3.1; Name: “Password Use”;
  Control: ...; Implementation Guidance (Additional): ...;
  Other Information: ...)
  contains Implementation Guidance Step (Number: 11.3.1 (d);
  Guidance: “select quality passwords with sufficient minimum length
  which are:
  1) easy to remember;
  ...”)
the Guideline Step hasSubject “Password”, which hasVulnerability
“Single Password Memorisation Difficult”
© Aad van Moorsel, Newcastle University, 2010
127
example – password memorisation
[Diagram: the vulnerability “Single Password Memorisation Difficult”
is exploited by the threat “Single Password Forgotten” (consequence:
user temporarily without access) and mitigated by the controls
“Maintain Password Policy” (Acceptance, Capability) and “Make
Password Easier To Remember” (Reduction). Key: classes are
Vulnerability, Procedural Threat, Behavioural Foundation,
Infrastructure Threat, Behaviour Control, Asset, Control Type and
Threat Consequence; relationships are “mitigated by”, “has
vulnerability”, “exploited by” and “manages risk of”]
© Aad van Moorsel, Newcastle University, 2010
128
example – recall methods
[Diagram: the vulnerability “Single Password Memorisation Difficult”
is mitigated by the Reduction controls “Educate Users in Recall
Techniques” and “Implement ISO27002 Guideline 11.3.1 (b), ‘avoid
keeping a record of passwords’”; it is exploited by the Mindset
threat “Password Stored Externally to Avoid Recall”, with the
consequence “Insecure storage medium can be exploited by malicious
party”. Key: classes are Vulnerability, Procedural Threat,
Behavioural Foundation, Infrastructure Threat, Behaviour Control,
Asset, Control Type and Threat Consequence; relationships are
“mitigated by”, “has vulnerability”, “exploited by” and
“manages risk of”]
© Aad van Moorsel, Newcastle University, 2010
129
example – password reset function
[Diagram: the vulnerability “Single Password Memorisation Difficult”
is exploited by the threat “Single Password Forgotten” (consequence:
user temporarily without access); the risk is managed by the
Transfer control “Helpdesk Password Reset Management” and the
Reduction controls “Helpdesk Provided With Identity Verification
Details”, “Automated Password Reset System” and “Additional Helpdesk
Staff”; a laborious password reset process and a busy helpdesk
(Temporal, Mindset and Capability foundations) lead to “Employee
Becomes Impatient” (user compliance diminished), “IT Helpdesk Cannot
Satisfy Reset Request” (user temporarily without access) and “User
Account Details Stolen” (malicious party gains access)]
© Aad van Moorsel, Newcastle University, 2010
130
conclusions
• CISOs need an awareness of the human-behavioural
implications of their security management decisions
• human factors researchers need a way to
contribute their expertise and align it with concepts
that are familiar to CISOs
– standards
– IT infrastructure
– business processes
• we provided an ontology as a solution
– serves as a formalised base of knowledge
– one piece of the Trust Economics tools
© Aad van Moorsel, Newcastle University, 2010
131
an ontology for structured systems economics
Adam Beaument
UCL, HP Labs
David Pym
HP Labs, University of Bath
ontology to link with the models
thus far, the trust economics ontology represents technology
and human behavioural issues
how to glue this to the mathematical models?
© Aad van Moorsel, Newcastle University, 2010
133
ontology
© Aad van Moorsel, Newcastle University, 2010
134
example process algebra model
© Aad van Moorsel, Newcastle University, 2010
135
conclusion on trust economics ontology
trust economics ontology is work in progress
- added human behavioural aspects to IT security
concepts
- provided an abstraction that allows IT to be
represented in a form tailored to the process algebraic models
to do:
- complete as well as simplify...
- proof is in the pudding: someone needs to use it in a
case study
© Aad van Moorsel, Newcastle University, 2010
136
an ontology editor and a community ontology
John Mace (project student)
Simon Parkin
Aad van Moorsel
Stefan Fenz
SBA, Austria
stakeholders
• Chief Information Security Officers (CISOs)
• Human Factors Researchers
• Ontology experts
© Aad van Moorsel, Newcastle University, 2010
138
current ontology development
• requires use of an ontology creation tool
• graphical or text based tools
• both create machine readable ontology file from user
input
• user must define underlying ontology structure
© Aad van Moorsel, Newcastle University, 2010
139
current development issues
• knowledge required of ontology development and tools
• development knowledge held by ontology experts and not
those whose knowledge requires capture
• current tools are complex and largely aimed at ontology
experts
• process is time-consuming and error-prone
© Aad van Moorsel, Newcastle University, 2010
140
how would you want to write ontology content?
<Vulnerability rdf:about="#SinglePasswordMemorisationDifficult">
<mitigatedBy rdf:resource="#MakePasswordEasierToRemember"/>
<exploitedBy rdf:resource="#SinglePasswordForgotten"/>
</Vulnerability>
© Aad van Moorsel, Newcastle University, 2010
141
proposed solution
• a simple, intuitive tool to create/modify ontology in
graphical form
• captures knowledge of domain experts while removing
need to know of ontology construction techniques
• underlying information security ontology structure is
predefined
• interactive help system and mechanisms to minimise error
© Aad van Moorsel, Newcastle University, 2010
142
implementation overview
[Diagram: a Chief Information Security Officer (CISO) or Human
Factors Researcher (HFR) enters content into the Ontology Editor;
ontology diagrams are saved to and loaded from an Ontology Diagram
Store; a Java Translation Program translates the current diagram
into an ontology file, which is kept in an Ontology File Store]
© Aad van Moorsel, Newcastle University, 2010
143
ontology editor
© Aad van Moorsel, Newcastle University, 2010
144
adding new concept
© Aad van Moorsel, Newcastle University, 2010
145
ontology diagram
© Aad van Moorsel, Newcastle University, 2010
146
Java translation program
[Diagram: the Ontology Editor saves the ontology diagram to a Temp
folder; the Java Translation Program retrieves it, applies
user-defined parameters and creates the ontology file, which is
saved to the Ontology File Store; imported Java libraries: Java 1.5
API, Xerces API and OWL API]
© Aad van Moorsel, Newcastle University, 2010
147
ontology file
• written in the machine-readable Web Ontology Language (OWL)
• created using OWL API
• file structure:
– header
– classes
– data properties
– object properties
– individuals
© Aad van Moorsel, Newcastle University, 2010
148
ontology file example
<Vulnerability rdf:about="#SinglePasswordMemorisationDifficult">
<mitigatedBy rdf:resource="#MakePasswordEasierToRemember"/>
<exploitedBy rdf:resource="#SinglePasswordForgotten"/>
</Vulnerability>
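As a sketch, such a file can be read back with the Python standard library (the tool itself works with the OWL API; the namespace URI below is a made-up placeholder, not the project's real one):

```python
import xml.etree.ElementTree as ET

# A minimal RDF/XML wrapper around the fragment above, so it parses standalone.
owl = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                  xmlns="http://example.org/infosec#">
  <Vulnerability rdf:about="#SinglePasswordMemorisationDifficult">
    <mitigatedBy rdf:resource="#MakePasswordEasierToRemember"/>
    <exploitedBy rdf:resource="#SinglePasswordForgotten"/>
  </Vulnerability>
</rdf:RDF>"""

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"   # rdf: prefix
NS = "{http://example.org/infosec#}"                    # default namespace

root = ET.fromstring(owl)
for vuln in root.iter(NS + "Vulnerability"):
    name = vuln.get(RDF + "about")
    controls = [e.get(RDF + "resource") for e in vuln.findall(NS + "mitigatedBy")]
    print(name, "is mitigated by", controls)
```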
© Aad van Moorsel, Newcastle University, 2010
149
summary
• need for an information security ontology editing tool
• proposed tool allows domain experts to develop an ontology
without knowledge of ontology construction
• delivers machine readable ontology files
• simplifies the development process
• allows further development of the ‘base’ ontology
© Aad van Moorsel, Newcastle University, 2010
150
future developments
• ontology too large for small group to develop effectively
• vast array of knowledge held globally
• ontology development needs to be a collaborative process
to be effective
• web-oriented collaborative editing tool
• basis for 3rd year dissertation
© Aad van Moorsel, Newcastle University, 2010
151
user evaluation for trust economics software
Simon Parkin
Aad van Moorsel
Philip Inglesant
Angela Sasse
UCL
participatory design of a trust economics tool
assume we have all pieces together:
• ontology
• models
• CISO interfaces
what should the tool look like?
we conduct a participatory design study with CISOs from:
• ISS
• UCL
• National Grid
method: get wish list from CISOs, show a mock-up tool and
collect feedback, improve, add model in background, try it
out with CISOs, etc.
© Aad van Moorsel, Newcastle University, 2010
153
information security management
find out about how users behave, what the business issues
are:
CISO1: Transport is a big deal.
Interviewer1: We’re trying to recognise this in our user classes.
CISO1: We have engineers on the road, have lots of access, and are more gifted in
IT.
Interviewer1: Do you think it would be useful to configure different user
classes?
CISO1: I think it’s covered.
Interviewer1: And different values, different possible consequences if a loss
occurs. I’m assuming you would want to be able to configure.
CISO1: Yes. Eg. customer list might or might not be very valuable.
Interviewer1: And be able to configure links with different user classes and the
assets.
CISO1: Yes, if you could, absolutely.
Interviewer1: We’re going to stick with defaults at first and allow configuration
if needed later. So, the costs of the password policy: running costs, helpdesk
staff, trade-off of helpdesk vs. productivity
CISO1: That’s right.
© Aad van Moorsel, Newcastle University, 2010
154
information security management
find out about how users behave, what the business issues
are:
Discussion of "Productivity Losses":
CISO2: But it’s proportional to amount they earn. This is productivity. eg. $1m
salary but bring $20m into the company. There are expense people and
productivity people.
Interviewer1: We have execs, “road warrior”, office drone. Drones are just a
cost.
Interviewer2: And the 3 groups have different threat scenarios.
CISO2: Risk of over-complicating it, hard to work out who is income-earner and
what proportion is income earning.
Interviewer2: But this is good point.
CISO2: Make it parameterisable, at choice of CISO.
…
CISO2: So, need to be able to drill down into productivity, cost, - esp in small
company.
© Aad van Moorsel, Newcastle University, 2010
155
example of the trust economics methodology
access management
Maciej Machulak (also funded by JISC SMART)
James Turland (funded by EPSRC AMPS)
Wen Zeng (for DRM)
Aad van Moorsel
Geoff Duggan
Hilary Johnson
University of Bath
SMART project description
• the SMART (Student-Managed Access to Online Resources)
project will develop an online data access management
system based on the User-Managed Access (UMA) Web
protocol, deploy it within Newcastle University and
evaluate the system through a user study.
– The project team will also contribute to the
standardisation effort of the UMA protocol by actively
participating in the User-Managed Access Work Group
(UMA WG – charter of the Kantara Initiative)
© Aad van Moorsel, Newcastle University, 2010
157
project description - UMA
• User-Managed Access protocol – allows an individual to control
the authorisation of data sharing and service access made
between online services on the individual's behalf.
Source: http://kantarainitiative.org/confluence/display/uma/UMA+Explained
© Aad van Moorsel, Newcastle University, 2010
158
project description – objectives
• objectives:
– Define scenario for UMA use case within Higher
Education (HE) environments
– Develop UMA-based authorisation solution
– Deploy the UMA-based solution within Newcastle
University:
• Integrate the system with institutional Web
applications
• Evaluate the system through a user study
– Contribute with the scenario, software and project
findings to the UMA WG and actively participate in the
standardisation effort of the UMA Web protocol.
– Demonstrate, document and disseminate project
outputs
© Aad van Moorsel, Newcastle University, 2010
159
trust economics applied to access management
• we build the application
• we build models to quantify trust or CIA properties
• we investigate user interfaces and user behaviour to input
into the model
related: we also build DRM models, trading off productivity
and confidentiality
© Aad van Moorsel, Newcastle University, 2010
160
modelling concepts and model validation
Rob Cain (funded by HP)
Simon Parkin
Aad van Moorsel
Doug Eskin (funded by HP)
Robin Berthier
Bill Sanders
University of Illinois at Urbana-Champaign
project objectives
• performance models traditionally have not included human
behavioural aspects
• we want to have generic modelling constructs to represent
human behaviour, tendencies and choices:
– compliance budget
– risk propensity
– impact of training
– role dependent behaviour
• we want to validate our models with collected data
– offline data, such as from interviews
– online data, measure ‘live’
• we want to optimise the data collection strategy
• in some cases, it makes sense to extend our trust economics
methodology with a strategy for data collection
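As an illustration, the compliance budget construct can be prototyped in a few lines; this is a hypothetical sketch, not the project's actual model (task efforts, perceived benefits and the budget unit are invented):

```python
def simulate_compliance(tasks, budget):
    """tasks: list of (effort, perceived_benefit) pairs; returns per-task compliance."""
    outcomes = []
    for effort, benefit in tasks:
        if benefit >= effort:
            outcomes.append(True)        # compliance costs the user nothing net
        elif budget >= effort - benefit:
            budget -= effort - benefit   # shortfall drawn from the compliance budget
            outcomes.append(True)
        else:
            outcomes.append(False)       # budget exhausted: non-compliance
    return outcomes

# three tasks against a compliance budget of 3 effort units
print(simulate_compliance([(2, 3), (4, 1), (4, 1)], budget=3))
```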
162
© Aad van Moorsel, Newcastle University, 2010
presentation of Möbius
© Aad van Moorsel, Newcastle University, 2010
163
sample Möbius results
[Plot: Utility and HB Score (range roughly 220-380) against
probability of encryption (0 to 1), without compliance budget
feedback]
© Aad van Moorsel, Newcastle University, 2010
164
sample Möbius results (cont.)
[Plot: Utility and HB Score (range roughly 220-380) against
probability of encryption (0 to 1), using compliance budget
feedback]
© Aad van Moorsel, Newcastle University, 2010
165
criticality of using data
• the goal of using data is to provide credibility to the
model:
– by defining and tuning input parameters according to
individual organization
– by assessing the validity of prediction results
• issues:
– numerous data sources
– collection and processing phases are expensive and
time-consuming
– no strategy to drive data monitoring
– mismatch between model and data that can be
collected
© Aad van Moorsel, Newcastle University, 2010
166
data collection approach
[Diagram: stakeholders linked to the model and to data sources,
classified by cost/quality and importance, via numbered steps:]
1. Input parameter definition
2. Output validation
3. Design specialized model according to requirements
4. Classify potential data sources according to their cost and quality
• Optimize collection of data according to parameter importance
• Run data validation and execute model
© Aad van Moorsel, Newcastle University, 2010
167
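The optimisation step can be approximated with a simple greedy heuristic; the sketch below is purely illustrative (the source names, costs, quality and importance scores are invented):

```python
def select_sources(sources, budget):
    """sources: (name, cost, quality, importance) tuples; greedy pick by
    (quality * importance) per unit cost, within the collection budget."""
    ranked = sorted(sources, key=lambda s: s[2] * s[3] / s[1], reverse=True)
    chosen, spent = [], 0
    for name, cost, quality, importance in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

sources = [
    ("user survey",             5, 0.60, 0.9),
    ("interview IT directors",  8, 0.90, 0.7),
    ("public gov. budget data", 1, 0.50, 0.3),
    ("live monitoring",        20, 0.95, 0.8),
]
print(select_sources(sources, budget=10))
```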
data sources classification
• Cost:
– Cost to obtain
– Time to obtain
– Transparency
– Legislative process
• Quality:
– Accuracy
– Applicability
• Importance:
– Influence of parameter value on output
© Aad van Moorsel, Newcastle University, 2010
168
Organization Budget Parameters
all parameters are inputs in the category “Budget”:

Total security investment
  description: IT budget; default is 100
  influence: medium
  data sources: IT security surveys (http://www.gartner.com,
  http://www.gocsi.com); interviews with IT directors; public gov.
  budget data

Training investment
  description: training budget; always a one-off 100
  influence: low
  data sources: interviews with IT directors; public gov. budget data

Support proportion of budget
  description: experimental value; proportion of the active security
  investment used for support
  variables: USB stick = 100, software = 0, install and maintenance = 0
  influence: high
  data sources: interviews with IT directors; public gov. budget data

Monitoring proportion of budget
  description: experimental value; 1 − (support proportion of budget)
  influence: high
  data sources: interviews with IT directors; public gov. budget data

(influence is rated low / medium / high)
© Aad van Moorsel, Newcastle University, 2010
169
Overall Human Parameters
both parameters are inputs in the category “User behavior”:

Compliance budget
  description: effort a user is willing to spend conforming with a
  security policy that doesn't benefit them
  variables: generalised as understanding, investment, incentives
  data sources: user survey

Perceived benefit of task
  description: effort a user is willing to put in without drawing on
  the compliance budget
  data sources: user survey

© Aad van Moorsel, Newcastle University, 2010
170
password: probability of break-in
inputs (all with influence “medium”):

Prob. of leaving default password
  category: culture of organization
  variables: organization policy, user training

Password strength
  category: user behavior
  variables: organization policy, user training

Password strength determination threshold
  category: attacker
  variables: password strength, attacker determination

Password update frequency
  category: user behavior
  variables: organization policy, user training

Prob. of being locked out when password is forgotten
  category: user behavior
  variables: organization policy, user training

Prob. of finding lost password
  category: user interface
  variables: efficiency of password recovery tech.

Prob. of needing support (#support queries / #users)
  category: user interface
  variables: prob. of forgetting password

Management reprimands
  category: user behavior

Negative support experiences
  category: user behavior

outputs (all with influence “high”):

Prob. password can be compromised
  category: user behavior
  description: compromised by brute force attack

Availability
  category: security
  variables: #successful data transfers

Confidentiality
  category: security
  variables: #exposures + #reveals

© Aad van Moorsel, Newcastle University, 2010
171
data collection research
four sub problems:
• determine which data is needed to validate the model:
– provide input parameter values
– validate output parameters
• technical implementation of the data collection
• optimize data collection such that cost stays within a certain
bound: one needs to find the important parameters and trade
them off against the cost of collecting the data
• add data collection to the trust economics methodology:
– a data collection strategy will be associated with the
use of a model
172
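The cost-bounded optimization step above amounts to a selection problem: given each parameter's influence on the model output and the cost of collecting data for it, choose which parameters to measure within the budget. A minimal greedy sketch (parameter names, sensitivities and costs are purely illustrative, not taken from the password model):

```python
# Hypothetical parameters, each with an assumed model sensitivity
# (influence on the output) and a data-collection cost.
params = {
    "password strength":       {"sensitivity": 0.40, "cost": 3.0},
    "update frequency":        {"sensitivity": 0.15, "cost": 1.0},
    "prob. default password":  {"sensitivity": 0.30, "cost": 2.0},
    "negative support events": {"sensitivity": 0.05, "cost": 4.0},
}

def plan_collection(params: dict, budget: float) -> list[str]:
    """Greedily pick parameters with the best sensitivity-per-cost ratio
    until the data-collection budget is exhausted."""
    chosen, spent = [], 0.0
    ranked = sorted(params,
                    key=lambda p: params[p]["sensitivity"] / params[p]["cost"],
                    reverse=True)
    for p in ranked:
        if spent + params[p]["cost"] <= budget:
            chosen.append(p)
            spent += params[p]["cost"]
    return chosen

print(plan_collection(params, budget=4.0))
```

A greedy ratio heuristic is only one possible strategy; exact selection under a budget is a knapsack problem, but the sketch shows the trade-off the slide describes.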
conclusion
trust economics research in Newcastle:
• ontology for human behavioural aspects, incl. editor and
community version
• tool design with CISOs
• modelling: DRM and Access Management
• data collection strategies for validation
work to be done:
• generic ontology for trust economics, underlying the tools
• actual tool building
• evaluation of the methodology
173
trust economics info
http://www.trust-economics.org/
Publications:
• An Information Security Ontology Incorporating Human-Behavioural Implications. Simon Parkin, Aad van Moorsel,
Robert Coles. International Conference on Security of Information and Networks, 2009
• Risk Modelling of Access Control Policies with Human-Behavioural Factors. Simon Parkin and Aad van Moorsel.
International Workshop on Performability Modeling of Computer and Communication Systems, 2009.
• A Knowledge Base for Justified Information Security Decision-Making. Daria Stepanova, Simon Parkin, Aad van Moorsel.
International Conference on Software and Data Technologies, 2009.
• Architecting Dependable Access Control Systems for Multi-Domain Computing Environments. Maciej Machulak, Simon
Parkin, Aad van Moorsel. Architecting Dependable Systems VI, R. De Lemos, J. Fabre, C. Gacek, F. Gadducci and M. ter
Beek (Eds.), Springer, LNCS 5835, pp. 49-75, 2009.
• Trust Economics Feasibility Study. Robert Coles, Jonathan Griffin, Hilary Johnson, Brian Monahan, Simon Parkin, David
Pym, Angela Sasse and Aad van Moorsel. Workshop on Resilience Assessment and Dependability Benchmarking, 2008.
• The Impact of Unavailability on the Effectiveness of Enterprise Information Security Technologies. Simon Parkin,
Rouaa Yassin-Kassab and Aad van Moorsel. International Service Availability Symposium, 2008.
Technical reports:
• Architecture and Protocol for User-Controlled Access Management in Web 2.0 Applications. Maciej Machulak, Aad van
Moorsel. CS-TR 1191, 2010
• Ontology Editing Tool for Information Security and Human Factors Experts. John Mace, Simon Parkin, Aad van Moorsel.
CS-TR 1172, 2009
• Use Cases for User-Centric Access Control for the Web, Maciej Machulak, Aad van Moorsel. CS-TR 1165, 2009
• A Novel Approach to Access Control for the Web. Maciej Machulak, Aad van Moorsel. CS-TR 1157, 2009
• Proceedings of the First Trust Economics Workshop. Philip Inglesant, Maciej Machulak, Simon Parkin, Aad van Moorsel,
Julian Williams (Eds.). CS-TR 1153, 2009.
• A Trust-economic Perspective on Information Security Technologies. Simon Parkin, Aad van Moorsel. CS-TR 1056, 2007
174