Security Modelling: What is Security?
Clark Thomborson
Tsinghua University
12 March 2010

Questions to be (Partially) Answered
• What is security? What is trust?
• “What would be the shape of an organisational theory applied to security?” [Anderson, 2008]
• How can an organisation control itself, and its environment, to increase its functionality and security?
• How can an organisation exploit, and nurture, its trusting relationships?

The Importance of Modelling
• Assertion: A human can analyse simple systems (≤ 7 elements or concepts).
• Implications:
  • If we want to analyse complex systems, we must use models (simplifications).
  • If we want to have confidence in our analyses, we must validate our models.
• Validation: Do our analytic results (predictions) match our observations?
  • Error sources: model, application, observation.

Still more questions...
• What are the most important parts of a security model?
• How can we validate a security model?
• How can we validate an application of a security model?
• How can we validate our observations of a secure system?
• A journey of a thousand miles! We’ll take some initial steps...

Human-based security!
• Axioms:
  • A1. Security and distrust are determined by human fears.
  • A2. Functionality and trust are determined by human desires.
• If nobody could be harmed or helped by a system, then...
  • How could this system be secure or insecure?
  • How could it be functional or non-functional?

Systems and Actors: Definitions
• A system is a structured entity that interacts with other systems.
• Every system is composed of atomic units called actors.
• Every system has a distinguished actor called its constitution, which specifies
  • its constituent actors and their relationships;
  • its interactional behaviour; and
  • how the constitution will change as a result of its system’s interactions.
• A constitution is rarely a complete specification.
  • If we insisted on completeness, we could not include humans in our models.

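These definitions translate directly into types. A minimal TypeScript sketch (all names are illustrative and the behavioural part is left abstract; this is not part of the lecture’s formalism):

    // A system as a structured set of actors, one of which is distinguished
    // as its constitution (a deliberately incomplete specification).
    interface Actor {
      name: string;
    }
    interface System {
      actors: Actor[];      // the constituent actors
      constitution: Actor;  // specifies actors, relationships, and behaviour
      interact(message: string, other: System): void;
    }
    const trivial: System = {
      actors: [{ name: "constitution" }],
      constitution: { name: "constitution" },
      interact: () => { /* behaviour is specified by the constitution */ },
    };
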
System Architecture
Three types of relationships between actors:
1. Hierarchical: a superior (owning) actor and its inferior actors (subsystems).
2. Peering: anonymous equals, with voting rights.
3. Aliased: to represent the different roles played by the same human or real-world system.

Interactions
• Axiom A3: System activity can be decomposed into interactions:
    A: M(B) → C
  • A, B, and C are systems. Note: A, B, or C may be null, e.g. M → C.
  • M is a message: information (mass, or energy) that is transmitted from A to C, and which may be a function of B.
  • B is the subject of the message. For example, “A introduces B to C”.

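Axiom A3’s interactions can be recorded as simple data. A TypeScript sketch (the field names are illustrative):

    // A: M(B) → C as a record; A, B, or C may be absent (e.g. M → C).
    interface Interaction {
      sender?: string;   // A
      subject?: string;  // B, the subject of the message
      receiver?: string; // C
      message: string;   // M: information transmitted from A to C
    }
    // “A introduces B to C”:
    const introduction: Interaction = {
      sender: "A", subject: "B", receiver: "C", message: "introduce",
    };
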
The Caja Project at Google
• Rewrite JavaScript, to enforce capabilities.
    Alice: foo(Carol) → Bob
• Alice authorises Carol to provide “foo” to Bob.

Modelling a Caja Guard
• Alice has authority to call foo(Carol).
  • Carol is an external service provider.
  • foo() is a JavaScript object in Alice’s secure browser.
  • Bob is an untrusted JavaScript object.
• Alice uses Caja to build gfoo(foo(Carol)).
• Alice gives gfoo() to Bob.
• Bob is unable to access foo(Carol) except by calling gfoo(), because Caja uses a capability-safe subset of JavaScript.

[Granovetter Diagram: Alice passes Gift(gfoo()) to Bob; gfoo() wraps foo(), which refers to Carol.]

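The guard pattern itself is easy to sketch. Below is a minimal TypeScript approximation of a revocable guard as a closure; it is not Caja’s API, and all names are illustrative:

    // A guard as a closure: Bob receives gfoo, never a direct reference to foo.
    function makeGuard<A extends unknown[], R>(
      target: (...args: A) => R,
      allowed: () => boolean,
    ): (...args: A) => R {
      return (...args: A) => {
        if (!allowed()) {
          throw new Error("capability revoked");
        }
        return target(...args); // the only path from the caller to the target
      };
    }

    let granted = true;
    const foo = (query: string) => `Carol's answer to ${query}`; // Alice's object
    const gfoo = makeGuard(foo, () => granted); // Alice builds gfoo(foo(Carol))
    console.log(gfoo("status"));  // Bob calls gfoo; foo itself stays unreachable
    granted = false;              // Alice revokes; further calls throw

Because Bob only ever holds gfoo, Alice can attenuate or revoke Bob’s access without Bob’s cooperation; this is the essence of capability discipline.
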
Owners and Sentience
• Axiom A4: Every system has an owner, and every owner is a system.
• If a constitutional actor C is a subsystem of itself (i.e. if C owns C, and |C| = 1), then we say that “C is a sentient actor”.
• We use sentient actors to model humans.

Judgement Actors
• Axiom A5: Every system has a distinguished actor called its “judgement actor”, which specifies its security and functionality requirements.
• When a judgement actor is sent a message containing a list of actions, it may reply to the sender with a judgement.
  • A list of actions resulting in a positive judgement is a functional behaviour.
  • A list of actions resulting in a negative judgement is a security fault.

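A hypothetical judgement actor in TypeScript (the interface and the toy requirement are illustrative, not from the lecture):

    // A judgement actor replies to a list of actions with a judgement.
    type Judgement = "positive" | "negative";
    interface Action { actor: string; verb: string; object: string }
    interface JudgementActor {
      judge(actions: Action[]): Judgement;
    }
    // Toy requirement: only "owner" may read "secret".
    const judgementActor: JudgementActor = {
      judge: (actions) =>
        actions.some(a => a.verb === "read" && a.object === "secret"
                          && a.actor !== "owner")
          ? "negative"   // this list of actions is a security fault
          : "positive",  // this list of actions is a functional behaviour
    };
    console.log(judgementActor.judge(
      [{ actor: "guest", verb: "read", object: "secret" }])); // "negative"
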
Analyses
• A descriptive and interpretive report of a judgement actor’s (likely) responses to a (possible) series of system events is called an analysis of this system.
  • If an analysis considers only security faults, then it is a security analysis.
  • If an analysis considers only functional behaviour, then it is a functional analysis.
• We can model an analyst as an actor in our systems!

The Hierarchy
• Control is exerted by a superior power.
  • Prospective controls are not easy to evade.
  • Retrospective controls are punishments.
• The Hierarch (a King, President, Chief Justice, Pope, or …) grants allowances to inferiors (peons, illegal immigrants, felons, excommunicants, or …).
• The Hierarch can impose and enforce obligations.
• In the Bell-LaPadula model, the Hierarch is concerned with confidentiality. Inferiors are prohibited from reading superiors’ data. Superiors are allowed to read their inferiors’ data.

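The Bell-LaPadula reading rule is easy to state in code. A minimal sketch, assuming integer sensitivity levels (all names are illustrative):

    // “No read up”: a reader's clearance must dominate the data's label.
    type Level = number; // e.g. 0 = unclassified ... 3 = top secret
    const mayRead = (clearance: Level, label: Level): boolean =>
      clearance >= label;
    console.log(mayRead(3, 1)); // true: superiors may read inferiors' data
    console.log(mayRead(1, 3)); // false: inferiors may not read superiors' data
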
The Alias (in an email use case)
• We use aliases every time we send personal email from our work computer.
• We have a different alias in each organisation.
• We are prohibited from revealing “too much” about our organisations.
• We are prohibited from accepting dangerous goods and services.
• Each of our aliases is in a different security environment.
• Managing aliases is difficult, and our computer systems aren’t very helpful…

[Diagram: one human C with two aliases: “C, acting as a governmental agent” inside Agency X, and “C, acting as a Gmail client” inside Gmail.]

The Peerage
• A peerage’s members are peers: group members, citizens of an ideal democracy, …
• The peers define the goals of their peerage.
• If a peer misbehaves, their peers may punish them only by ignoring them (shunning).
• Peers can trade goods and services.
• The trusted servants of a peerage (a Facilitator, Moderator, Democratic Leader, …) do not exert control over peers.
• The trusted servants may be aliases of peers, or they may be automata.

Example: A Peerage Exerting Audit Control on a Hierarchy

[Diagram: an OS Root Administrator and an Auditor in the hierarchy; Users/Peers, Inspector-Generals IG1 and IG2 (the Inspector-General is an elected officer), and a Chair of the User Assurance Group in the peerage.]

• Peers elect one or more Inspector-Generals.
• The OS Administrator makes a Trusting appointment when granting auditor-level privilege to an alias of an Inspector-General.
• The Auditor discloses an audit report to their Inspector-General alias.
• The audit report can be read by any Peer.
• Peers may disclose the report to non-Peers.

Owner-Centric Security
• Axiom A6: The judgement actor of a system is a representation of the desires and fears of its owner.
• Requirements are poorly defined if the analyst’s point of view isn’t stated.
• Stakeholder analysis: The analyst should consider the (likely) security requirements of anyone who is (likely to be) affected by a system, when helping an owner define the judgement actor for their system.
• The stakeholder analysis may reveal that the owner has some privacy requirements, if the owner fears that their system will reveal private information about its users.

What can an owner do?
• An owner might pursue their desires by modifying their system, or by controlling its environment.
  • These are functional enhancements.
• A fearful owner may seek security enhancements
  • by modifying their own system, or
  • by exerting control over other systems.
• Security enhancements may cause functional degradations, and vice versa.
  • Separating the two analyses may help an owner understand their options.
  • Technologically-oriented analysts may not consider a full range of control options.

Lessig’s Taxonomy of Control

[Quadrant diagram with four axes of control: legal/illegal, moral/immoral, inexpensive/expensive, easy/difficult.]

• Governments make things legal or illegal.
• Our culture makes things moral or immoral.
• The world’s economy makes things inexpensive or expensive.
• Computers make things easy or difficult.

Temporal & Organisational Dimensions
• Prospective controls:
  • Architectural security (easy/hard)
  • Economic security (inexpensive/expensive)
• Retrospective controls:
  • Legal security (legal/illegal)
  • Normative security (moral/immoral)
• Temporality = {prospective, retrospective}.
• Organisation = {hierarchy, peerage}.

Security Requirements (Traditional)
1. Confidentiality: no one is allowed to read, unless they are authorised.
2. Integrity: no one is allowed to write, unless they are authorised.
3. Availability: all authorised reads and writes will be performed by the system.
• Authorisation: giving someone the authority to do something.
• Authentication: being assured of someone’s identity.
• Identification: knowing someone’s name or ID#.
• Auditing: maintaining (and reviewing) records of security decisions.

Micro to Macro Security Req’ts
• “Static security”: system properties (Confidentiality, Integrity, Availability).
• “Dynamic security”: system processes (Authentication, Authorisation, Audit).
  • Beware the “gold-plated” system design!
• “Security Governance”: human oversight of
  • Specification, or Policy (answering the question of what the system is supposed to do),
  • Implementation (answering the question of how to make the system do what it is supposed to do), and
  • Assurance (answering the question of whether the system is meeting its specifications).

Clarifying Static Security
• Confidentiality, Integrity, and Availability are appropriate for read/write data.
• What about security for executables? What about security for directories, services, ...?
  • Unix directories have “rwx” permission bits: XXXity!
• Each level of a taxonomy should have a few categories which cover all the possible cases. Each case should belong to one category.
• Confidentiality, Integrity, XXXity, “etc”ity are all Prohibitions.
• Availability is a Permission.

[Diagram: two taxonomy trees for static security (SS). The first puts C, I, X, A directly below SS; the refined tree groups C, I, X under Prohibitions (Pro) and A under Permissions (Per).]

Prohibitions and Permissions
• Prohibition: forbid something from happening.
• Permission: allow something to happen.
• There are two types of P-secure systems:
  • In a prohibitive system, all operations are forbidden by default. Permissions are granted in special cases.
  • In a permissive system, all operations are allowed by default. Prohibitions are special cases.
  • Prohibitive systems have permissive subsystems. Permissive systems have prohibitive subsystems.
• Prohibitions and permissions are properties of hierarchies, such as a judicial system.
  • Most legal controls (“laws”) are prohibitive. A few are permissive.

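A sketch of the two defaults in TypeScript (the policy encoding and the operation names are illustrative, not from the lecture):

    // Default-deny (prohibitive) vs default-allow (permissive) systems.
    interface Policy {
      defaultAllow: boolean;
      specialCases: Set<string>; // operations that override the default
    }
    const decide = (p: Policy, op: string): boolean =>
      p.specialCases.has(op) ? !p.defaultAllow : p.defaultAllow;

    const prohibitive: Policy = { defaultAllow: false, specialCases: new Set(["read-manual"]) };
    const permissive: Policy  = { defaultAllow: true,  specialCases: new Set(["edit-audit-log"]) };
    console.log(decide(prohibitive, "read-manual"));    // true: a granted permission
    console.log(decide(prohibitive, "format-disk"));    // false: forbidden by default
    console.log(decide(permissive, "edit-audit-log"));  // false: a special-case prohibition
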
Extending our Requirements Taxonomy
• Contracts are non-hierarchical: agreed between peers.
  • Obligations are promises to do something in the future.
  • Exemptions are exceptions to an obligation.
• There are two types of O-secure systems.
  • Obligatory systems have exemptive subsystems.
  • Exemptive systems have obligatory subsystems.
• If a party alleges that another party has not met an obligation, then the contract’s enforcement clauses are invoked. Typically...
  • Arbitration: a mutually-trusted peer attempts to find a mutually-acceptable resolution to the contractual difficulty.
  • Litigation: the contract specifies a legal person (i.e. an alias of the obligated peer) who is ultimately responsible for contract fulfilment.

Enforceable Contracts are OP-secure!

[Diagram: a Judge above the legal persons (Peers), who are bound by a Contract and assisted by an Arbitrator (a Trusted Third Party).]

• A legal person can petition the Judge.
• The Judge controls all legal persons, and may require or prohibit specific actions and inactions: P-secure.
• A typical contract includes an obligation to submit to a binding arbitration, during the dispute-resolution process: O-secure.
• Contracts are based on trust between peers, with OP-security as a backstop.
• Cloud security is currently problematic, in part because of a lack of contractual trust.

Review: Inactions and Actions
• Four types of static security requirements:
  • Obligations are forbidden inactions, e.g. “I.O.U. $1000.”
  • Exemptions are allowed inactions, e.g. “You need not repay me if you have a tragic accident.”
  • Prohibitions are forbidden actions.
  • Permissions are allowed actions.
• Two classification axes:
  • Strictness = {forbidden, allowed},
  • Activity = {action, inaction}.
• “Natural habitat” of these requirements:
  • Peerages typically forbid and allow inactions,
  • Hierarchies typically forbid and allow actions.

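The two axes pick out the four requirement types mechanically. A tiny TypeScript illustration (the function name is illustrative):

    // The two axes pick out the four static requirement types.
    type Strictness = "forbidden" | "allowed";
    type Activity = "action" | "inaction";
    function requirementType(s: Strictness, a: Activity): string {
      if (a === "inaction") return s === "forbidden" ? "Obligation" : "Exemption";
      return s === "forbidden" ? "Prohibition" : "Permission";
    }
    console.log(requirementType("forbidden", "inaction")); // "Obligation", e.g. "I.O.U. $1000."
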
Review: Today’s Questions
1. What is security?
  • Three layers: static, dynamic, governance.
  • Static security requirements: (forbidden, allowed) x (action, inaction).
  • Unanswered: how to characterise dynamic and governance requirements?
2. How can owners understand and improve the security and functionality of their systems?
  • Controls: (prospective, retrospective) x (hierarchy, peerage).
3. What is trust?

Niklas Luhmann, on Trust
• A prominent, and controversial, sociologist.
• Thesis: Modern systems are so complex that we must use them, or avoid using them, without carefully examining all risks, benefits, and alternatives.
• Trust is a reliance without an assessment.
  • We cannot control any risk we haven’t assessed.
  • We trust any system which might harm us. (This is the usual definition.)
• Distrust is an avoidance without an assessment.

Security, Trust, Distrust, ...
• Dimensions 1-2 are the requirements: (forbidden, allowed) x (action, inaction).
• Dimensions 3-4 are the controls: (prospective, retrospective) x (hierarchy, peerage).
• The fifth dimension in our framework is assessment, with three cases:
  • Cognitive assessment (of security & functionality),
  • Optimistic non-assessment (of trust & coolness),
  • Pessimistic non-assessment (of distrust & uncoolness).

Security vs. Functionality
• Sixth dimension: Feedback (negative vs. positive) to the owner of the system.
• We treat security as a property right.
  • Every system has an owner, otherwise we cannot define its security or functionality.
  • The owner reaps the benefits from functional behaviour, and pays the penalties for security faults. (Controls are applied to the owner, ultimately.)
  • The analyst must understand the owner’s desires and fears.

Summary of our Taxonomy
• Requirements:
  • Strictness = {forbidden, allowed},
  • Activity = {action, inaction},
  • Feedback = {negative, positive},
  • Assessment = {cognitive, optimistic, pessimistic}.
• Controls:
  • Temporality = {prospective, retrospective},
  • Organisation = {hierarchy, peerage}.
• Layers = {static, dynamic, governance}.

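The whole taxonomy fits in a few union types. A TypeScript sketch (the type and field names are my own, for illustration):

    // Each requirement is a point in a small product space.
    type Strictness   = "forbidden" | "allowed";
    type Activity     = "action" | "inaction";
    type Feedback     = "negative" | "positive";
    type Assessment   = "cognitive" | "optimistic" | "pessimistic";
    type Temporality  = "prospective" | "retrospective";
    type Organisation = "hierarchy" | "peerage";
    type Layer        = "static" | "dynamic" | "governance";

    interface Requirement {
      strictness: Strictness; activity: Activity;
      feedback: Feedback; assessment: Assessment;
    }
    interface Control { temporality: Temporality; organisation: Organisation }

    // Availability (see the access-control slides): a forbidden inaction
    // with negative feedback, assessed cognitively.
    const availability: Requirement = {
      strictness: "forbidden", activity: "inaction",
      feedback: "negative", assessment: "cognitive",
    };
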
Application: Access Control
• An owner may fear losses as a result of unauthorised use of their system.
• This fear induces an architectural requirement (prospective, hierarchical):
  • Accesses are forbidden, with allowances for specified users.
• It also induces an economic requirement, if access rights are traded in a market economy.
  • If the peers are highly trusted, then the architecture need not be very secure.

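The induced architectural requirement is the familiar default-deny access-control list. A TypeScript sketch (the user names and operations are illustrative):

    // Accesses forbidden by default, with allowances for specified users.
    const allowances = new Map<string, Set<string>>([
      ["alice", new Set(["read", "write"])],
      ["bob",   new Set(["read"])],
    ]);
    const isAuthorised = (user: string, op: string): boolean =>
      allowances.get(user)?.has(op) ?? false; // default-deny
    console.log(isAuthorised("bob", "read"));  // true: an explicit allowance
    console.log(isAuthorised("bob", "write")); // false: forbidden by default
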
Access Control (cont.)
• Legal requirement (retrospective, hierarchical): Unauthorised users are prosecuted.
  • Must collect evidence; this is another architectural requirement.
• Normative requirement (retrospective, peering): Unauthorised users are penalised.
  • Must collect deposits and evidence, if peers are not trusted.

Functions of Access Control
• If an owner desires authorised accesses, then there will be functional requirements.
  • Forbidden inaction, positive feedback (reliability).
• If an owner fears losses from downtime, then there are also security requirements.
  • Forbidden inaction, negative feedback (availability).
• Security and functionality are intertwined!
  • The analyst must understand the owner’s motivation, before writing the requirements.
  • The analyst must understand the likely attackers’ motivation and resources, before prioritising the requirements.

Summary
• What is security? What is trust?
  • Four qualitative dimensions in requirements: Strictness, Activity, Feedback, and Assessment.
  • Two qualitative dimensions in control: Temporality and Power.
• Can security be organised? Can organisations be secured?
  • Yes: Static, Dynamic, and Governance levels.
  • Hybrids of peerages and hierarchies seem very important.

Open Questions
• Can our framework be extended to dynamic systems, e.g. Clark-Wilson?
  • How should we model introspection?
  • How should changes to architectures, and to judgement actors, be specified and controlled?
• Would an analysis, in our framework, be helpful in the debate over ECMAScript (JavaScript) harmonisation?
  • Capabilities (as in Caja) are natural in our models, but will be difficult to specify if analysts aren’t able to describe them to owners...

Lecture Plan
3. Techniques for software watermarking and fingerprinting.
4. Techniques for software obfuscation and tamperproofing.
5. Steganography: functions and threats.
6. Axiomatic and behavioural trust.