Secure Software Development
Marcus Bendtsen
Institutionen för Datavetenskap (IDA)
Avdelningen för Databas- och Informationsteknik (ADIT)
Agenda
•  Securing the software development life cycle
•  Example of a formal secure development method
•  Secure architectural, design and implementation patterns
2
Introduction
We do not simply write code, and then as an
afterthought test and patch it to ensure that it
fulfills a functional requirement:
If we want a piece of code that sums integers, then we
state this before we start coding and specifically write
the code to sum integers. We do not randomly write
code and then try to patch the code into summing integers.
3
Introduction
For non-functional requirements such as quality and security, the
same logic applies: we do not patch a piece of code to ensure it fulfills
a non-functional requirement.
•  Non-functional requirements are not met simply by stating them;
dedicated activities are required.
•  Security considerations must permeate all phases of the
software development life cycle.
4
Software Development Life Cycle
5
[Diagram: Requirements – gather requirements and use cases; Architecture and Design – plan how the system shall work and how code should be written; Implementation – code and make test plans; Verification – test and ensure that requirements and design are fulfilled; Release & Maintenance – release, patch, release, patch, …]

Software Development Life Cycle
6
[Diagram: the same life cycle annotated with security touchpoints – security requirements, risk analysis, static analysis, risk-based security tests, risk analysis and penetration testing.]

Software Development Life Cycle
7
[Diagram: the same annotated life cycle as on the previous slide.]

Security Requirements
•  Requirements are gathered during the initial phase of the
software development life cycle.
•  This is an opportunity to not only gather functional
requirements, but also security requirements.
•  Several methods exist for gathering security requirements.
•  We will look at misuse cases, which can be seen as a
method in their own right, but which also form part of more
elaborate methods (such as SQUARE).
8
Use cases and Misuse cases
•  A use case illustrates required usage of a system – i.e.
expected functionality.
•  However, it is equally important to illustrate how one should not
be able to use the system.
•  Misuse cases are used to identify threats and required
countermeasures.
9
Misuse case legend
Authorized user
Insider
Outside attacker
Use case
Vulnerability
Threat
<<extend>>
<<exploit>>
<<threaten>>
<<mitigate>>
<<include>>
Fig. 2. Extended misuse case legend
10
Image from Lillian Røstad – An extended misuse case notation: Including vulnerabilities and the insider threat
Misuse case example
•  Electronic Patient Record (EPR)
•  Under normal circumstances patients are registered in the
system and linked to a specific ward – only personnel with access
to the patients at this ward can then read the patients' records.
•  During emergencies the organization and the law allow the use of
an emergency access control function – which gives immediate
access to any records needed.
•  For such an emergency control to be useful, it must be available at
all times. This effectively creates a backdoor in the system that
insiders can use to snoop around.
•  By identifying emergency access as a vulnerability we can also
consider proper countermeasures – auditing (enables traceability
and detection) and awareness training (making sure that users
are aware of the consequences of misuse).
11
[Figure: extended misuse case diagram for the Electronic Patient Record Access Control (AC) – the authorized user's Read EPR use case with Normal AC and Emergency AC, plus an Emergency read EPR use case; the insider's Unauthorized read EPR misuse case exploits the emergency access and is mitigated by Auditing and Awareness training.]
Fig. 3. Extended misuse case example: access control
12
Image from Lillian Røstad – An extended misuse case notation: Including vulnerabilities and the insider threat
User input in web-based systems
[Figure: extended misuse case diagram – the authorized user's Enter username, Enter password and Use system use cases are threatened by the attacker's Injection attack and Overflow attack misuse cases, which exploit the user input; Input validation mitigates them.]
Fig. 4. Extended misuse case example: user input
13
Image from Lillian Røstad – An extended misuse case notation: Including vulnerabilities and the insider threat

For example, a disgruntled employee working on a development project may inject code into a system that opens up a backdoor that attackers may exploit, as Figure 5 illustrates.
An insider on the system development team
[Figure: extended misuse case diagram – the system developer's Implement system use case; the insider's Inject backdoor and Inject bug misuse cases; Code audit and Security testing mitigate them.]
Fig. 5. Extended misuse case example: insider in development team
14
Image from Lillian Røstad – An extended misuse case notation: Including vulnerabilities and the insider threat
Requirements
•  Misuse cases are one method of gathering requirements.
•  Other, more complex methods exist, ranging up to full-fledged
risk analysis methods.
•  Misuse cases are valuable due to their simplicity, which increases
the probability that they will actually be used.
•  When requirements have been gathered they are transferred to
the design and architecture phase.
15
Software Development Life Cycle
[Diagram: the software development life cycle annotated with security touchpoints – security requirements, risk analysis, static analysis, risk-based security tests, risk analysis and penetration testing.]
16

Risk analysis
•  Risk analysis is used at the architecture & design phase and
at the verification phase (and to some degree also at the requirements stage).
•  It helps us find and quantify risks, and then allows us to change
our architecture and design.
•  We will look briefly at CORAS (more in the Info-Sec course) and in more
detail at attack trees (overlap with the Info-Sec course).
17
CORAS (overview)
Step 1 – Experts and clients decide upon which system is to be analyzed and which parts of the system should be focused upon.
Step 2 – The target is formalized, assets are identified, and a high-level risk analysis is conducted.
[Figure: pages from the CORAS paper for Steps 1-2 of the telecardiology example – target descriptions (class, collaboration and activity diagrams of the GP and cardiologist), the asset diagram for the Ministry of Health (health records, provision of telecardiology service, patient's health, public's trust in the system), and the high-level risk table.]
18
Images from Braber et al. – Model-based security analysis in seven steps – a guided tour to the CORAS method
CORAS (overview)
Step 3 – Prioritize assets, create a risk evaluation matrix.
[Figure: pages from the CORAS paper for Step 3 – the asset table, the consequence scale for 'Health records', the likelihood scale, and the risk evaluation matrix whose cells are marked 'Acceptable' or 'Must be evaluated'.]
(A small code sketch of a risk evaluation matrix follows this slide.)
19
Images from Braber et al. – Model-based security analysis in seven steps – a guided tour to the CORAS method
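To make the idea of a risk evaluation matrix concrete, here is a minimal C++ sketch. The likelihood and consequence scales follow the slides above; the placement of the acceptance boundary is an illustrative assumption, not the one agreed in the CORAS example.

#include <iostream>

// Likelihood and consequence scales as on the slide.
enum class Likelihood  { Rare, Unlikely, Possible, Likely, Certain };
enum class Consequence { Insignificant, Minor, Moderate, Major, Catastrophic };

// A risk is acceptable when the combined weight of likelihood and
// consequence is low; everything else must be evaluated for treatment.
// The boundary used here (<= 3) is an assumption for illustration only.
bool isAcceptable(Likelihood l, Consequence c) {
    return static_cast<int>(l) + static_cast<int>(c) <= 3;
}

int main() {
    Likelihood l = Likelihood::Possible;
    Consequence c = Consequence::Major;
    std::cout << (isAcceptable(l, c) ? "Acceptable" : "Must be evaluated") << "\n";
    return 0;
}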
CORAS (overview)
Step 4 – Create threat diagrams through structured brainstorming (workshop).
Step 5 – Estimate risks (consequence and likelihood).
[Figure: pages from the CORAS paper for Steps 4-5 – threat diagrams showing threats, vulnerabilities, threat scenarios and unwanted incidents for the telecardiology example, first without and then with likelihood and consequence estimates (Figs 11 and 12).]
20
Images from Braber et al. – Model-based security analysis in seven steps – a guided tour to the CORAS method
CORAS (overview)
Step 6 – Risk evaluation.
Step 7 – Risk treatment.
[Figure: pages from the CORAS paper for Steps 6-7 – the risk evaluation matrix with the identified risks placed in it (Table 7) and the treatment diagram (Fig 14), with treatments such as extending the training programme by 1-2 days and revising access lists.]
21
Images from Braber et al. – Model-based security analysis in seven steps – a guided tour to the CORAS method
Attack trees
Represent attacks against the system in a tree structure, with the
goal as the root node and different ways of achieving that goal as
leaf nodes.
22
Attack Trees
[Figure: attack tree with the root goal "Open Safe" and the attacks "Pick lock", "Learn combo", "Cut open" and "Install improperly"; "Learn combo" is refined into "Find written combo" and "Get combo from target", which in turn is refined into "Threaten", "Blackmail", "Eavesdrop" and "Bribe"; "Eavesdrop" is refined into "Listen to conversation" and "Get target to state combo".]
23
Attack Trees
[Figure: the same attack tree, now with an "and" arc under "Eavesdrop" showing that it requires both "Listen to conversation" and "Get target to state combo"; the other branches are alternatives.]
24
Attack Trees
[Figure: the same attack tree with each node annotated as possible (P) or impossible (I).]
25
Attack Trees
[Figure: the same attack tree annotated with the cost of each attack (e.g. Pick lock $30, Threaten $60, Blackmail $100, Bribe $20, Cut open $10); the root "Open Safe" carries $10, the cost of the cheapest possible attack.]
26
Attack Trees
•  We can annotate the attack tree with many different kinds of
Boolean and continuous values:
•  "Legal" versus "Illegal"
•  "Requires special equipment" versus "No special equipment"
•  Probability of success, likelihood of attack, etc.
•  Once we have annotated the tree we can query it (see the sketch after this slide):
•  Which attacks cost less than $10?
•  Legal attacks that cost more than $50?
•  Would it be worth paying a person $80 so they are less susceptible
to bribes? (In reality you also need to consider the probability of success.)
27
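A minimal sketch of how an annotated attack tree can be represented and queried in code. The node names and cost annotations are taken loosely from the safe example on the previous slides (the tree is abbreviated), and the possible/impossible flags are illustrative; OR nodes take the cheapest possible child, AND nodes require all children and sum their costs.

#include <algorithm>
#include <iostream>
#include <limits>
#include <memory>
#include <string>
#include <vector>

struct Node {
    std::string name;
    bool isAnd = false;       // false = OR node: any child achieves the goal
    bool possible = true;     // leaf annotation: P (possible) or I (impossible)
    double cost = 0.0;        // leaf annotation: cost of the attack
    std::vector<std::shared_ptr<Node>> children;
};

// Cheapest cost of achieving this (sub)goal; infinity if impossible.
double cheapest(const Node& n) {
    if (n.children.empty())
        return n.possible ? n.cost : std::numeric_limits<double>::infinity();
    if (n.isAnd) {
        double sum = 0.0;
        for (const auto& c : n.children) sum += cheapest(*c);
        return sum;
    }
    double best = std::numeric_limits<double>::infinity();
    for (const auto& c : n.children) best = std::min(best, cheapest(*c));
    return best;
}

int main() {
    auto leaf = [](std::string name, double cost, bool possible = true) {
        auto n = std::make_shared<Node>();
        n->name = std::move(name); n->cost = cost; n->possible = possible;
        return n;
    };

    auto eavesdrop = std::make_shared<Node>();
    eavesdrop->name = "Eavesdrop";
    eavesdrop->isAnd = true;  // requires BOTH children
    eavesdrop->children = {leaf("Listen to conversation", 20),
                           leaf("Get target to state combo", 40)};

    auto getCombo = std::make_shared<Node>();
    getCombo->name = "Get combo from target";
    getCombo->children = {leaf("Threaten", 60), leaf("Blackmail", 100),
                          eavesdrop, leaf("Bribe", 20)};

    auto openSafe = std::make_shared<Node>();
    openSafe->name = "Open Safe";
    openSafe->children = {leaf("Pick lock", 30, false), getCombo,
                          leaf("Cut open", 10), leaf("Install improperly", 100, false)};

    std::cout << "Cheapest possible attack: $" << cheapest(*openSafe) << "\n";
    return 0;
}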
Attack Trees
•  First you identify possible attack goals.
•  Each goal forms a separate tree.
•  Add all attacks you can think of to the tree.
•  Expand each attack downwards in the tree as if it were a goal in its own right.
•  Let somebody else look at your tree, get comments from
experts, iterate and re-iterate.
•  Keep your trees updated and use them to make security
decisions throughout the software life cycle.
28
Software Development Life Cycle
[Diagram: the software development life cycle annotated with security touchpoints – security requirements, risk analysis, static analysis, risk-based security tests, risk analysis and penetration testing.]
29

Software Development Life Cycle
[Diagram: the same annotated life cycle as on the previous slide.]
30

Software Development Life Cycle
[Diagram: the same annotated life cycle as on the previous slide.]
31

Software development process
Implementa>on Verifica>on Release & Maintenance Code and make test plans Test and ensure that requirements and design are fulfilled Release, patch, release, patch, … Software development process
•  The software development life cycle is generic, can be modified
to fit into any development process:
•  Iterative (SCRUM, Kanban, etc)
•  Waterfall
•  Adopting a secure software development process entails
adding the security touchpoints discussed.
•  Examples of formal development processes that include
security touchpoints are: SDL, TSP, CLASP.
32
SECURITY DEVELOPMENT
LIFECYCLE (SDL)
33
Security Development Lifecycle (SDL)
If a software development project is determined to be subject to
the security development lifecycle (SDL) then the team must
successfully complete sixteen mandatory security activities to
comply with the Microsoft SDL process.
-Simplified Implementation of the Microsoft SDL
34
Pre-SDL → Phase 1 → Phase 2 → Phase 3 → Phase 4 → Phase 5 → Final security review
Pre-SDL: Security training
•  All members must receive appropriate training to stay
informed about security basics and recent trends in security
and privacy.
•  Topics include:
•  Threat modeling (e.g. design implications)
•  Secure coding (e.g. buffer overruns, cross-site scripting)
•  Privacy (e.g. types of privacy-sensitive data)
•  This is only the baseline training, specialization and advanced
training may be necessary.
35
Pre-SDL → Phase 1 → Phase 2 → Phase 3 → Phase 4 → Phase 5 → Final security review
Phase 1: Requirements
36
•  Specify security requirements for the application as it is designed to
run in its planned operational environment.
•  A project team must define quality gates (e.g. all compiler warnings
must be fixed before committing code); these are defined for each
phase of the development and are negotiated with a security advisor.
•  Bug bars must be defined, which can be seen as quality gates for the
entire project, e.g. no known vulnerabilities with a "critical" or
"important" rating in the application at the time of release.
Pre-SDL → Phase 1 → Phase 2 → Phase 3 → Phase 4 → Phase 5 → Final security review
Identify functional aspects of the software that require deep review:
•  Which portions of the project will require security design reviews
before release?
•  Which portions of the project will require penetration testing by a
mutually agreed upon group that is external to the project team?
•  What is the privacy impact rating?
•  P1: High privacy risk, e.g. installs software
•  P2: Moderate privacy risk, e.g. one-time user-initiated data transfer
•  P3: Low privacy risk, e.g. no anonymous or personal data is transferred
37
Pre-SDL → Phase 1 → Phase 2 → Phase 3 → Phase 4 → Phase 5 → Final security review
Phase 2: Design
•  All design specifications should describe how to securely
implement all functionality provided by a given feature or
function:
•  Attack surface reduction (giving attackers less opportunity to exploit a
potential weak spot).
•  Threat modeling (risk analysis) of components or features that have
meaningful security risks (can be defined by the security risk assessment
during requirements).
•  Secure design patterns (discussed later)
38
Pre-SDL → Phase 1 → Phase 2 → Phase 3 → Phase 4 → Phase 5 → Final security review
Phase 3: Implementation
•  Publish a list of approved tools and their associated security
checks, such as compiler/linker options and warnings.
•  The list is to be approved by an external security advisor.
•  Teams should analyze all functions and APIs that will be used
in conjunction with a software development project and
prohibit those that are determined to be unsafe.
•  Once a prohibited list is defined, all code should be scanned
for these functions and APIs and modified accordingly (a small
example follows this slide).
•  Static analysis of code should be performed.
39
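To illustrate the kind of change a prohibited-function list forces, here is a small, self-contained sketch; strcpy is a classic example of an API that such lists ban because it performs no bounds checking. The function and buffer names are hypothetical.

#include <cstdio>

// Replaces a banned, unbounded copy with a bounded copy that can never
// overflow the destination buffer (the result is truncated instead).
void copy_name(char (&dst)[32], const char* src) {
    // Banned:  strcpy(dst, src);
    std::snprintf(dst, sizeof(dst), "%s", src);
}

int main() {
    char name[32];
    copy_name(name, "a possibly attacker-controlled, very long string .....");
    std::printf("%s\n", name);
    return 0;
}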
Pre-SDL → Phase 1 → Phase 2 → Phase 3 → Phase 4 → Phase 5 → Final security review
Phase 4: Verification
•  Dynamic program analysis: monitor the application for problems with
memory corruption, user privilege issues, etc.
•  Fuzz testing: deliberately introduce malformed or random data
to an application during dynamic analysis (a minimal harness is
sketched after this slide).
•  Update the threat model and attack surface analysis: account for
any design or implementation changes to the system, and
ensure that any new threats/attacks are reviewed and mitigated.
40
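A minimal sketch of a fuzzing loop, assuming a hypothetical parseMessage() function under test (the stub below stands in for the real parser). Real fuzz testing would normally use a dedicated tool with coverage feedback, but the idea is the same: feed the target large amounts of malformed or random input and watch for crashes, hangs and memory errors.

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

// Hypothetical stand-in for the parser under test; returns false on rejected input.
bool parseMessage(const std::vector<uint8_t>& data) {
    return !data.empty() && data[0] == 0x7E;  // toy protocol check
}

int main() {
    std::mt19937 rng(12345);  // fixed seed so runs are reproducible
    std::uniform_int_distribution<int> byteValue(0, 255);
    std::uniform_int_distribution<std::size_t> length(0, 4096);

    for (int i = 0; i < 100000; ++i) {
        std::vector<uint8_t> input(length(rng));
        for (auto& b : input) b = static_cast<uint8_t>(byteValue(rng));
        parseMessage(input);  // a crash or hang here is a finding
    }
    std::cout << "fuzzing run finished without crashing\n";
    return 0;
}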
Pre-SDL → Phase 1 → Phase 2 → Phase 3 → Phase 4 → Phase 5 → Final security review
Phase 5: Release
•  An incident response plan must be in place:
•  A first point of contact in an emergency.
•  On-call contacts with decision-making authority that are available
24 hours a day.
•  Security servicing plans for code inherited from other groups in the
organization.
•  Security servicing plans for third-party code (and if appropriate the
right to make changes).
41
Pre-SDL → Phase 1 → Phase 2 → Phase 3 → Phase 4 → Phase 5 → Final security review
Final security review: Includes an examination of threat models, tool
output, performance against quality gates, and bug bars.
42
•  Pass FSR – Good to go.
•  Pass FSR with exceptions – Issues that can be fixed in the next release.
•  FSR with escalation – Go back and address whatever SDL requirement is
not fulfilled, or escalate to executive management for a decision.
•  Release to manufacturing (RTM) or release to web (RTW) is conditional
on the FSR.
•  If at least one component has privacy rating P1, then a privacy advisor
must certify that the privacy requirements are satisfied.
•  All specifications, code, binaries, threat models, plans, etc. must be
archived so that the product can be serviced at a later stage.
Pre-SDL → Phase 1 → Phase 2 → Phase 3 → Phase 4 → Phase 5 → Final security review
•  Security advisors can request that for some critical software
additional activities are completed, e.g.:
•  Manual code review
•  Penetration testing
•  Vulnerability analysis of similar applications
•  SDL is not a "one-size-fits-all" process; teams must implement
SDL in a fashion that is appropriate to their time and resources.
•  There exist variants, such as SDL for Agile.
43
SECURE DESIGN PATTERNS
44
Software Development Life Cycle
[Diagram: the software development life cycle annotated with security touchpoints – security requirements, risk analysis, static analysis, risk-based security tests, risk analysis and penetration testing.]
45

Secure design patterns
Implementa>on Verifica>on Release & Maintenance Code and make test plans Test and ensure that requirements and design are fulfilled Release, patch, release, patch, … Secure design patterns
•  Descriptions or templates describing a general solution to a
security problem that can be applied in many different
situations.
•  The design patterns are meant to eliminate the accidental
insertion of vulnerabilities into code or to mitigate the
consequence of vulnerabilities.
•  Categorized by abstraction: architecture, design or
implementation.
46
Categories
•  Architectural-level patterns: Focus on high-level allocation of responsibilities
between different components and define the interaction between those
high-level components.
  •  Privilege separation (PrivSep)
•  Design-level patterns: Address problems in the internal design of a single
high-level component.
  •  Secure factory
  •  Secure chain of responsibility
•  Implementation-level patterns: Low-level security issues, applicable to
specific functions or methods in the system.
  •  Secure logger
  •  Clear sensitive information
47
Privilege separation (PrivSep)
•  Intent: Reduce the amount of code that runs with special
privilege without affecting or limiting the functionality of the
program.
•  Motivation: In many applications, a small set of simple
operations require elevated privileges, while a much larger set
of complex and security error-prone operations can run in the
context of a normal, unprivileged user.
Privilege separation (PrivSep)
[Diagram: PrivSep sequence – a root process opens a socket and listens for connections; on a request from an unauthenticated user it spawns a child process with the least possible privilege, which runs the complex authentication code and returns the user's identity; the root process then spawns a child with the privileges of the authorized user, which does the work as that user.]
49

Privilege separation (PrivSep)
•  The majority of the code is run without elevated privileges.
•  If there is a vulnerability and somebody gets control of the
process, then they are confined within the same level of privilege.
•  Extra testing, verification, reviews etc. can be focused on
the code that runs with elevated privileges.
[Diagram: the same PrivSep sequence as on the previous slide. A POSIX-flavoured code sketch follows this slide.]
50
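A minimal POSIX-flavoured C++ sketch of the PrivSep flow above. The account names, the pretend authentication step and the use of a pipe to return the identity are illustrative assumptions; real implementations (for example in OpenSSH) are considerably more involved, and error handling is abbreviated here.

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <pwd.h>
#include <cstdlib>

// Drop to an unprivileged account before touching untrusted input.
static void dropToUnprivileged() {
    struct passwd *pw = getpwnam("nobody");
    if (!pw || setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0)
        _exit(EXIT_FAILURE);
}

int main() {
    // Privileged parent: open socket and listen for connections
    // (requires privilege, so this code is kept as small as possible).

    // 1. Spawn a child with the least possible privilege to run the
    //    complex, error-prone authentication code.
    int fds[2];
    if (pipe(fds) != 0) return EXIT_FAILURE;
    if (fork() == 0) {
        close(fds[0]);
        dropToUnprivileged();
        const char who[] = "alice";  // pretend authentication succeeded (illustrative)
        if (write(fds[1], who, sizeof(who)) < 0) _exit(EXIT_FAILURE);
        _exit(EXIT_SUCCESS);
    }
    close(fds[1]);
    char user[64] = {0};
    if (read(fds[0], user, sizeof(user) - 1) <= 0) return EXIT_FAILURE;
    close(fds[0]);
    wait(nullptr);

    // 2. Spawn a child with the privileges of the authorized user.
    if (fork() == 0) {
        struct passwd *pw = getpwnam(user);
        if (!pw || setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0)
            _exit(EXIT_FAILURE);
        // ... do some work as the user ...
        _exit(EXIT_SUCCESS);
    }
    wait(nullptr);
    return 0;
}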
Secure Factory
•  Intent: Separate the security dependent logic involved in
creating an object from the basic functionality of the created
object.
•  Motivation: An application may make use of an object whose
behavior is dependent on the privileges of the user running the
application.
51
Secure Factory
AbstractSecureFactory
  +getInstance() : AbstractSecureFactory
  +getObject(givenCredentials : SecurityCredentials) : SomeObject
ConcreteSecureFactory1
  +getObject(givenCredentials : SecurityCredentials) : SomeObject
ConcreteSecureFactory2
  +getObject(givenCredentials : SecurityCredentials) : SomeObject
Getting SomeObject is done by making the call:
AbstractSecureFactory.getInstance().getObject(securityCredentials)
The returned object, SomeObject, is an object that operates with the correct privileges.
52
Secure Factory
•  Inside the factory:
1.  Use the current concrete implementation of AbstractSecureFactory
2.  Look at the security credentials that were passed in the call
3.  Create an instance of the appropriate concrete version of SomeObject
4.  Further specialise settings in SomeObject
[Diagram: class hierarchy – SomeObject with subclasses LowPrivilegeSomeObject, MidPrivilegeSomeObject and HighPrivilegeSomeObject.]
(A C++ sketch of the factory follows this slide.)
53
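A minimal C++ sketch of the Secure Factory structure on the previous slides. The class names follow the diagram (the mid-privilege variant is omitted for brevity); SecurityCredentials, the privilege levels and the selection rule inside the factory are simplified assumptions.

#include <memory>

enum class PrivilegeLevel { Low, High };

struct SecurityCredentials {
    PrivilegeLevel level;
};

// The product: its behaviour depends on the privileges of the caller.
class SomeObject {
public:
    virtual ~SomeObject() = default;
    virtual void read() = 0;
};

class LowPrivilegeSomeObject : public SomeObject {
public:
    void read() override { /* read-only behaviour */ }
    // No write(): functions not callable at this privilege level are simply not implemented.
};

class HighPrivilegeSomeObject : public SomeObject {
public:
    void read() override { /* full behaviour */ }
    void write() { /* only the high-privilege variant can write */ }
};

class AbstractSecureFactory {
public:
    virtual ~AbstractSecureFactory() = default;
    static AbstractSecureFactory& getInstance();  // returns the configured concrete factory
    virtual std::unique_ptr<SomeObject>
        getObject(const SecurityCredentials& credentials) = 0;
};

class ConcreteSecureFactory1 : public AbstractSecureFactory {
public:
    std::unique_ptr<SomeObject>
    getObject(const SecurityCredentials& credentials) override {
        // The factory, not the caller, inspects the credentials and picks
        // the SomeObject variant with the correct behaviour.
        if (credentials.level == PrivilegeLevel::High)
            return std::make_unique<HighPrivilegeSomeObject>();
        return std::make_unique<LowPrivilegeSomeObject>();
    }
};

AbstractSecureFactory& AbstractSecureFactory::getInstance() {
    static ConcreteSecureFactory1 factory;  // swapping the concrete factory changes the policy
    return factory;
}

int main() {
    SecurityCredentials credentials{PrivilegeLevel::Low};
    auto obj = AbstractSecureFactory::getInstance().getObject(credentials);
    obj->read();  // behaves according to the caller's privileges
    return 0;
}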
Secure Factory
•  The caller and SomeObject do not have to contain logic for checking
privileges. The object is always returned by the factory, and the factory picks the
SomeObject with the correct behavior.
•  It is easy to change how security credentials are handled by changing which
concrete factory is used. (This is similar to another pattern which we will not
discuss in class.)
•  Concrete versions of SomeObject do not have to implement code for
functions that are not callable at the level of privilege for which they are
developed.
  •  The LowPrivilegeSomeObject does not need to implement the Write
function.
54
Secure Chain of Responsibility
•  Intent: Decouple the logic that determines privileges from the
portion of the program that is requesting the functionality.
•  Motivation: Applications sometimes need to allow and disallow
certain functions depending on the role of the user.
55
Secure Chain of Responsibility
[Diagram: the caller supplies a request and user credentials to a chain of handlers – Manager Report Generator, Sales Analyst Report Generator, Sales Intern Report Generator. Each handler either handles the request or, if its credential check fails, passes the request on; the last handler handles or rejects it.]
56
Secure Chain of Responsibility
•  The selection of functionality is hidden from the caller; the handler is
selected based on the user credentials.
•  The caller is not aware of which handler has dealt with the request.
•  It is easy to change the behavior of the system (add/remove handlers).
This can even be done dynamically at runtime by changing the links.
(A C++ sketch follows this slide.)
[Diagram: the same handler chain as on the previous slide.]
57
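A minimal C++ sketch of the Secure Chain of Responsibility above. The handler and role names follow the diagram (the Sales Analyst handler is omitted for brevity); Request, Credentials and the credential checks are simplified assumptions.

#include <iostream>
#include <memory>
#include <string>

struct Credentials { std::string role; };
struct Request { std::string reportType; };

class ReportGenerator {
public:
    explicit ReportGenerator(std::unique_ptr<ReportGenerator> next)
        : next_(std::move(next)) {}
    virtual ~ReportGenerator() = default;

    // Handle the request if the credentials allow it, otherwise pass it on.
    void handle(const Request& req, const Credentials& creds) {
        if (allowed(creds)) {
            generate(req);
        } else if (next_) {
            next_->handle(req, creds);          // pass request if credential check fails
        } else {
            std::cout << "Request rejected\n";  // end of chain: reject
        }
    }

protected:
    virtual bool allowed(const Credentials& creds) const = 0;
    virtual void generate(const Request& req) = 0;

private:
    std::unique_ptr<ReportGenerator> next_;
};

class ManagerReportGenerator : public ReportGenerator {
public:
    using ReportGenerator::ReportGenerator;
protected:
    bool allowed(const Credentials& c) const override { return c.role == "manager"; }
    void generate(const Request&) override { std::cout << "Full management report\n"; }
};

class SalesInternReportGenerator : public ReportGenerator {
public:
    using ReportGenerator::ReportGenerator;
protected:
    bool allowed(const Credentials& c) const override { return c.role == "intern"; }
    void generate(const Request&) override { std::cout << "Restricted intern report\n"; }
};

int main() {
    // Build the chain: Manager -> Sales Intern.
    ManagerReportGenerator chain(std::make_unique<SalesInternReportGenerator>(nullptr));
    chain.handle(Request{"monthly"}, Credentials{"intern"});  // handled by the intern generator
    return 0;
}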
Secure Logger
•  Intent: Prevent an attacker from gathering potentially useful
information about the system from system logs and to prevent
an attacker from hiding their actions by editing system logs.
•  Motivation: System logs usually contain a great deal of
information about the system itself and its users.
58
Secure Logger
[Diagram: the application logs through the Secure Logger, which writes protected data; a Log Reader turns the protected data back into unprotected data for the Log Viewer.]
59
Secure Logger
•  Standard mechanisms for reading log files will not work, as the
data is encrypted or otherwise protected.
•  The log reader is necessary to access log files, and it requires
authentication and authorization.
•  Any adversary that gets hold of the log files cannot use their
content.
•  (A possible implementation could use existing disk encryption systems;
a sketch of a tamper-evident logger follows this slide.)
[Diagram: the same Secure Logger / Log Reader structure as on the previous slide.]
60
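One possible way to realise part of the Secure Logger pattern is to make the log tamper-evident by chaining entries together; the sketch below illustrates the idea only. std::hash is not a cryptographic hash and the entries are not encrypted here; a real secure logger would use an HMAC or digital signatures plus encryption of the protected data.

#include <functional>
#include <iostream>
#include <string>
#include <vector>

class SecureLogger {
public:
    void log(const std::string& message) {
        // Chain each entry to the previous digest so tampering is detectable.
        lastDigest_ = std::hash<std::string>{}(std::to_string(lastDigest_) + message);
        entries_.push_back({message, lastDigest_});
    }

    // The "log reader": verifies the chain before exposing the contents.
    bool verify() const {
        std::size_t digest = 0;
        for (const auto& e : entries_) {
            digest = std::hash<std::string>{}(std::to_string(digest) + e.message);
            if (digest != e.digest) return false;  // entry was modified or removed
        }
        return true;
    }

private:
    struct Entry { std::string message; std::size_t digest; };
    std::vector<Entry> entries_;
    std::size_t lastDigest_ = 0;
};

int main() {
    SecureLogger logger;
    logger.log("user alice logged in");
    logger.log("user alice opened record 42");
    std::cout << (logger.verify() ? "log intact" : "log tampered with") << "\n";
    return 0;
}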
Clear Sensitive Information
•  Intent: It is possible that sensitive information has been stored
in reusable resources after a user session or application has
run. Sensitive information should be cleared from reusable
resources.
•  Motivation: In many cases the action of returning a reusable
resource to the pool of resources simply marks the resource as
available. The contents of the resource are left intact until the
resource is actually reused. This could potentially lead to
leaking of private information.
(Resources include files, memory allocations, etc.)
61
Clear Sensitive Information
[Diagram: the application gets a resource from the pool; before the resource is returned to the pool, the data in it is scrubbed – it is not simply released back. A small sketch of such a pool follows this slide.]
62
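A minimal sketch of scrubbing a reusable resource before it goes back to the pool, as in the diagram above. The BufferPool type is an illustrative assumption; note that in low-level code a compiler may optimise away a plain overwrite of memory that is never read again, so scrubbing functions such as explicit_bzero (where available) are preferred.

#include <algorithm>
#include <cstddef>
#include <utility>
#include <vector>

class BufferPool {
public:
    std::vector<char> acquire(std::size_t size) {
        if (!free_.empty()) {
            std::vector<char> buf = std::move(free_.back());
            free_.pop_back();
            buf.resize(size);
            return buf;
        }
        return std::vector<char>(size);
    }

    void release(std::vector<char> buf) {
        // Scrub the contents before the buffer becomes reusable, so the
        // next user of the pool cannot read leftover sensitive data.
        std::fill(buf.begin(), buf.end(), '\0');
        free_.push_back(std::move(buf));
    }

private:
    std::vector<std::vector<char>> free_;
};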
Clear Sensitive Information
ClientInfo::~ClientInfo() {
    // Overwrite fields that may hold sensitive data before the object is destroyed.
    this->ipAddr = 0;
    this->trustLevel = BOGUS;
    this->numFaultyRequests = 0;
}
An example of clearing sensitive information in the destructor of an object. In this
way the sensitive information stored in memory is cleared before the object is
destroyed.
63
Secure Design Patterns
•  Secure design patterns are important for all developers,
regardless of platform or language.
•  Their main purpose is to:
•  Eliminate the accidental insertion of vulnerabilities into code or
to mitigate the consequence of vulnerabilities.
•  Using design patterns you are taking advantage of many years
of learning from mistakes made by others, and you are using
best practices.
•  It also helps when communicating about code with other
developers.
•  There are many more very useful patterns:
•  C. Dougherty, K. Sayre, R. C. Seacord, D. Svoboda, K. Togashi.
Secure Design Patterns. Technical Report CMU/SEI-2009-TR-010.
64
Software Development Life Cycle
[Diagram: the software development life cycle with the touchpoints covered in this lecture – misuse cases (security requirements), CORAS / attack trees (risk analysis), static analysis, risk-based security tests, risk analysis and penetration testing, secure design patterns, and SDL.]
65
www.liu.se