A Comparison of Commercial and Military Computer Security Policies

David D. Clark* and David R. Wilson**

*Senior Research Scientist, MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139
**Director, Information Security Services, Ernst & Whinney, 2000 National City Center, Cleveland, OH 44114

CH2416-6/87/0000/0184$01.00 © 1987 IEEE

ABSTRACT

Most discussions of computer security focus on control of disclosure. In particular, the U.S. Department of Defense has developed a set of criteria for computer mechanisms to provide control of classified information. However, for that core of data processing concerned with business operation and control of assets, the primary security concern is data integrity. This paper presents a policy for data integrity based on commercial data processing practices, and compares the mechanisms needed for this policy with the mechanisms needed to enforce the lattice model for information security. We argue that a lattice model is not sufficient to characterize integrity policies, and that distinct mechanisms are needed to control disclosure and to provide integrity.

INTRODUCTION

Any discussion of mechanisms to enforce computer security must involve a particular security policy that specifies the security goals the system must meet and the threats it must resist. For example, the high-level security goals most often specified are that the system should prevent unauthorized disclosure or theft of information, should prevent unauthorized modification of information, and should prevent denial of service. Traditional threats that must be countered are system penetration by unauthorized persons, unauthorized actions by authorized persons, and abuse of special privileges by systems programmers and facility operators. These threats may be intentional or accidental.

Imprecise or conflicting assumptions about desired policies often confuse discussions of computer security mechanisms. In particular, in comparing commercial and military systems, a misunderstanding about the underlying policies the two are trying to enforce often leads to difficulty in understanding the motivation for certain mechanisms that have been developed and espoused by one group or the other.

This paper discusses the military security policy, presents a commercial security policy valid in many situations, and then compares the two policies to reveal important differences between them.

The military security policy we are referring to is a set of policies that regulate the control of classified information within the government. This high-level information security policy is well understood: all classified information shall be protected from unauthorized disclosure or declassification. Mechanisms used to enforce this policy include the mandatory labeling of all documents with their classification level, and the assigning of user access categories based on the investigation (or "clearing") of all persons permitted to use this information. During the last 15 to 20 years, considerable effort has gone into determining which mechanisms should be used to enforce this policy within a computer. Mechanisms such as identification and authorization of users, generation of audit information, and association of access control labels with all information objects are well understood. This policy is defined in the Department of Defense Trusted Computer System Evaluation Criteria [DOD], often called the "Orange Book" from the color of its cover. It articulates a standard for maintaining confidentiality of information and is, for the purposes of our paper, the "military" information security policy. The term "military" is perhaps not the most descriptive characterization of this policy; it is relevant to any situation in which access rules for sensitive material must be enforced. We use the term "military" as a concise tag which at least captures the origin of the policy.
In the commercial environment, preventing disclosure is often important, but preventing unauthorized data modification is usually paramount. In particular, for that core of commercial data processing that relates to management and accounting for assets, preventing fraud and error is the primary goal. This goal is addressed by enforcing the integrity rather than the privacy of the information. For this reason, the policy we will concern ourselves with is one that addresses integrity rather than disclosure. We will call this a commercial policy, in contrast to the military information security policy.

We are not suggesting that integrity plays no role in military concerns. However, to the extent that the Orange Book is the articulation of the military information security policy, there is a clear difference of emphasis in the military and commercial worlds. While the accounting principles that are the basis of fraud and error control are well known, there is as yet no Orange Book for the commercial sector that articulates how these policies are to be implemented in the context of a computer system. This makes it difficult to answer the question of whether the mechanisms designed to enforce military information security policies also apply to enforcing commercial integrity policies. It would be very nice if the same mechanisms could meet both goals, thus enabling the commercial and military worlds to share the development costs of the necessary mechanisms. However, we will argue that two distinct classes of mechanism will be required, because some of the mechanisms needed to enforce disclosure controls and integrity controls are very different.

Therefore, the goal of this paper is to defend two conclusions. First, there is a distinct set of security policies, related to integrity rather than disclosure, which are often of highest priority in the commercial data processing environment. Second, some separate mechanisms are required for enforcement of these policies, disjoint from those of the Orange Book.
MILITARY SECURITY POLICY

The policies associated with the management of classified information, and the mechanisms used to enforce these policies, are carefully defined and well understood within the military. However, these mechanisms are not necessarily well understood in the commercial world, which normally does not have such a complex requirement for control of unauthorized disclosure. Because the military security model provides a good starting point, we begin with a brief summary of computer security in the context of classified information control.

The top-level goal for the control of classified information is very simple: classified information must not be disclosed to unauthorized individuals. At first glance, it appears the correct mechanism to enforce this policy is a control over which individuals can read which data items. This mechanism, while certainly needed, is much too simplistic to solve the entire problem of unauthorized information release. In particular, enforcing this policy requires a mechanism to control writing of data as well as reading it. Because the control of writing data is superficially associated with ensuring integrity rather than preventing theft, and the policy concerns control of theft, confusion has arisen about the fact that the military classification mechanism includes strong controls over who can write which data.

Informally, the line of reasoning that leads to this mechanism is as follows. To enforce this policy, the system must protect itself from the authorized user as well as the unauthorized user. There are a number of ways for the authorized user to declassify information. He can do so as a deliberate illegal action, as a result of a mistake, or because he invokes a program on his behalf that, without his knowledge, declassifies data as a malicious side effect of its execution. This class of program, sometimes called a "Trojan Horse" program, has received much attention within the military. To understand how to control this class of problem in the computer, consider how a document can be declassified in a noncomputerized context. The simple technique involves copying the document, removing the classification labels from the copy with a pair of scissors, and then making another copy that does not have the classification labels. This second copy, which appears to be physically unclassified, can then be carried past the security guards who are responsible for controlling theft of classified documents. Declassification occurs by copying.

To prevent this in a computer system, it is necessary to control the ability of an authorized user to copy a data item. In particular, once a computation has read a data item of a certain security level, the system must ensure that any data items written by that computation have a security label at least as restrictive as the label of the item previously read. It is this mandatory check of the security level of all data items whenever they are written that enforces the high-level security policy.
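This floating-label rule can be sketched in a few lines of code. The sketch below is our own illustration, not part of the paper; all names in it (Computation, LEVELS, high_water) are hypothetical, and a simple total order of levels stands in for the full lattice of levels and categories.

```python
# Sketch of the mandatory check described above: once a computation reads
# an item at some level, everything it writes must carry a label at least
# as restrictive as the most restrictive level it has read so far.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

class Computation:
    def __init__(self):
        self.high_water = "unclassified"  # most restrictive level read so far

    def read(self, item_level):
        if LEVELS[item_level] > LEVELS[self.high_water]:
            self.high_water = item_level

    def write(self, target_level):
        # Mandatory check: the target label must dominate the high-water mark.
        if LEVELS[target_level] < LEVELS[self.high_water]:
            raise PermissionError("write would declassify data")
        return "written"

c = Computation()
c.read("secret")
c.write("top secret")        # permitted: at least as restrictive as "secret"
try:
    c.write("unclassified")  # blocked: copying would declassify the data
except PermissionError as e:
    print(e)
```

Note that the check is applied by the system on every write, regardless of what the program intends, which is what makes it effective against the Trojan Horse described above.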
An important component of this mechanism is that the checking of the security level on all reads and writes is mandatory and enforced by the system, as opposed to being at the discretion of the individual user or application. In a typical time sharing system not intended for multilevel secure operation, the individual responsible for a piece of data determines who may read or write that data. Such discretionary controls are not sufficient to enforce the military security rules because, as suggested above, the authorized user (or programs running on his behalf) cannot be trusted to enforce the rules properly. The mandatory controls of the system constrain the individual user so that any action he takes is guaranteed to conform to the security policy. Most systems intended for military security provide traditional discretionary control in addition to the mandatory classification checking, to support what is informally called "need to know." By this mechanism, it is possible for the user to further restrict the accessibility of his data, but it is not possible to increase the scope in a manner inconsistent with the classification levels.

In 1983, the U.S. Department of Defense produced the Orange Book, which attempts to organize and document the mechanisms that should be found in a computer system designed to enforce the military security policies. This document stresses the importance of mandatory controls if effective enforcement of a policy is to be achieved within a system. To enforce the particular policy of the Orange Book, the mandatory controls relate to data labels and user access categories. Systems in division C have no requirement for mandatory controls, while systems in divisions B and A specifically have these mandatory controls for checking and maintenance of labels and user rights. (Systems in Division A are distinguished from those in B, not by additional function, but by having been designed to permit formal verification of the security principles of the system.)

Several security systems used specifically in the commercial environment, ACF/2, RACF, and CA-TopSecret, were recently evaluated using the Orange Book criteria. The C ratings that these security packages received indicated that they did not meet the mandatory security requirements of the model of the Orange Book. Yet, these packages are commonly used in industry and viewed as being rather effective in their meeting of industry requirements. This would suggest that industry views security requirements somewhat differently than the security policy described in the Orange Book. The next section of the paper begins a discussion of this industry view.

COMMERCIAL SECURITY POLICY FOR INTEGRITY

Clearly, control of confidential information is important in both the commercial and military environments. However, a major goal of commercial data processing, often the most important goal, is to ensure integrity of data to prevent fraud and errors. No user of the system, even if authorized, may be permitted to modify data items in such a way that assets or accounting records of the company are lost or corrupted. Some mechanisms in the system, such as user authentication, are an integral part of enforcing both the commercial and military policies. However, other mechanisms are very different.

The high-level mechanisms used to enforce commercial security policies related to data integrity were derived long before computer systems came into existence. Essentially, there are two mechanisms at the heart of fraud and error control: the well-formed transaction, and separation of duty among employees.

The concept of the well-formed transaction is that a user should not manipulate data arbitrarily, but only in constrained ways that preserve or ensure the integrity of the data. A very common mechanism in well-formed transactions is to record all data modifications in a log so that actions can be audited later. (Before the computer, bookkeepers were instructed to write in ink, and to make correcting entries rather than erase in case of error. In this way the books themselves, being write-only, became the log, and any evidence of erasure was indication of fraud.)

Perhaps the most formally structured example of well-formed transactions occurs in accounting systems, which model their transactions on the principles of double entry bookkeeping. Double entry bookkeeping ensures the internal consistency of the system's data items by requiring that any modification of the books comprises two parts, which account for or balance each other. For example, if a check is to be written (which implies an entry in the cash account) there must be a matching entry on the accounts payable account. If an entry is not performed properly, so that the parts do not match, this can be detected by an independent test (balancing the books). It is thus possible to detect such simple frauds as the issuing of unauthorized checks.
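The double entry discipline lends itself to a short sketch: a well-formed transaction that refuses unbalanced postings, plus an independent balancing test. This is our own hypothetical illustration (the names ledger, post, and books_balance are ours, not the paper's), using integer amounts for simplicity.

```python
# Sketch: a posting is well-formed only if its debits and credits balance;
# balancing the books is an independent test that re-checks every entry.
ledger = []  # each entry: (account, debit, credit)

def post(entries):
    """Well-formed transaction: accept a posting only if it balances."""
    debits = sum(d for _, d, _ in entries)
    credits = sum(c for _, _, c in entries)
    if debits != credits:
        raise ValueError("unbalanced posting rejected")
    ledger.extend(entries)

def books_balance():
    """Independent test: total debits must equal total credits."""
    return sum(d for _, d, _ in ledger) == sum(c for _, _, c in ledger)

# Writing a check: the cash entry must be matched in accounts payable.
post([("accounts payable", 100, 0), ("cash", 0, 100)])
print(books_balance())  # True

# A lone, unmatched check entry is rejected by the transaction itself; if
# it were forced into the ledger anyway, books_balance() would detect it.
try:
    post([("cash", 0, 50)])
except ValueError as e:
    print(e)
```

The two checks play different roles: post enforces the well-formed transaction at entry time, while books_balance is the after-the-fact audit that catches anything that bypassed it.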
The second mechanism to control fraud and error, separation of duty, attempts to ensure the external consistency of the data objects: the correspondence between the data object and the real world object it represents. Because computers do not normally have direct sensors to monitor the real world, computers cannot verify external consistency directly. Rather, the correspondence is ensured indirectly by separating all operations into several subparts and requiring that each subpart be executed by a different person. For example, the process of purchasing some item and paying for it might involve the subparts: authorizing the purchase order, recording the arrival of the item, recording the arrival of the invoice, and authorizing payment. The last subpart, or step, should not be executed unless the previous three are properly done. If each step is performed by a different person, the external and internal representation should correspond unless some of these people conspire. If one person can execute all of these steps, then a simple form of fraud is possible, in which an order is placed and payment made to a fictitious company without any actual delivery of items. In this case, the books appear to balance; the error is in the correspondence between real and recorded inventory.

Perhaps the most basic separation of duty rule is that any person permitted to create or certify a well-formed transaction may not be permitted to execute it (at least against production data). This rule ensures that at least two people are required to cause a change in the set of well-formed transactions.

The separation of duty method is effective except in the case of collusion among employees. For this reason, a standard auditing disclaimer is that the system is certified correct under the assumption that there has been no collusion. While this might seem a risky assumption, the method has proved very effective in practical control of fraud. Separation of duty can be made very powerful by thoughtful application of the technique, such as the random selection of the sets of people to perform some operation, so that any proposed collusion is safe only by chance. Separation of duty is thus a fundamental principle of commercial integrity control. Therefore, if a commercial computer system is to be used for data processing, specific mechanisms are needed to enforce these two rules.

To ensure that data items are manipulated only by means of well-formed transactions, it is first necessary to ensure that a data item can be manipulated only by a specific set of programs. These programs must be inspected for proper construction, and controls must be provided on the ability to install and modify these programs, so that their continued validity is ensured. To ensure separation of duties, each user must be permitted to use only certain sets of programs. The assignment of people to programs must again be inspected to ensure that the desired controls are actually met.

These integrity mechanisms differ in a number of important ways from the mandatory controls for military security as described in the Orange Book. First, with these integrity controls, a data item is not necessarily associated with a particular security level, but rather with a set of programs permitted to manipulate it. Second, a user is not given authority to read or write certain data items, but to execute certain programs on certain data items. The distinction between these two mechanisms is fundamental. With the Orange Book controls, a user is constrained by what data items he can read and write. If he is authorized to write a particular data item he may do so in any way he chooses. With commercial integrity controls, the user is constrained by what programs he can execute, and the manner in which he can read or write data items is implicit in the actions of those programs. Because of separation of duties, it will almost always be the case that a user, even though he is authorized to write a data item, can do so only by using some of the transactions defined for that data item. Other users, with different duties, will have access to different sets of transactions related to that data.
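The purchasing example above can be sketched as a mechanical check. This is our own hypothetical illustration (the Purchase class, step names, and person names are ours); it enforces two things: no person performs two steps of the same operation, and payment is authorized only after the three earlier steps are complete.

```python
# Sketch of separation of duty for the purchasing example: four subparts,
# each performed by a different person, with payment last.
STEPS = ["authorize_order", "record_item_arrival",
         "record_invoice", "authorize_payment"]

class Purchase:
    def __init__(self):
        self.done = {}  # step -> person who performed it

    def perform(self, step, person):
        if person in self.done.values():
            raise PermissionError("separation of duty: person already acted")
        if step == "authorize_payment" and len(self.done) < 3:
            raise PermissionError("payment before earlier steps complete")
        self.done[step] = person

p = Purchase()
p.perform("authorize_order", "alice")
p.perform("record_item_arrival", "bob")
p.perform("record_invoice", "carol")
p.perform("authorize_payment", "dave")   # allowed: four different people

q = Purchase()
q.perform("authorize_order", "mallory")
try:
    q.perform("record_item_arrival", "mallory")  # one-person fraud blocked
except PermissionError as e:
    print(e)
```

As the text notes, a check of this kind defeats the single dishonest employee but not collusion: two conspirators can still split the steps between them.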
MANDATORY COMMERCIAL CONTROLS
The concept of mandatory control is central to the mechanisms for military security, but the term is not usually applied to commercial systems. That is, commercial systems have not reflected the idea that certain functions, central to the enforcement of policy, are designed as a fundamental characteristic of the system. However, it is important to understand that the mechanisms described in the previous section, in some respects, are mandatory controls. They are mandatory in that the user of the system should not, by any sequence of operations, be able to modify the list of programs permitted to manipulate a particular data item or to modify the list of users permitted to execute a given program. If the individual user could do so, then there would be no control over the ability of an untrustworthy user to alter the system for fraudulent ends.

In the commercial integrity environment, the owner of an application and the general controls implemented by the data processing organization are responsible for ensuring that all programs are well-formed transactions. As in the military environment, there is usually a separate staff designated as responsible for assuring that users can execute transactions only in such a way that the separation of duty rule is enforced. The system ensures that the user cannot circumvent these controls. This is a mandatory rather than a discretionary control.

The two mandatory controls, military and commercial, are very different mechanisms. They do not enforce the same policy. The military mandatory control enforces the correct setting of classification levels. The commercial mandatory control enforces the rules that implement the well-formed transaction and separation of duty model. When constructing a computer system to support these mechanisms, very different low-level tools are implemented.

An interesting example of these two sets of mechanisms can be found in the Multics operating system, marketed by Honeywell Information Systems and evaluated by the Department of Defense in Class B2 of its evaluation criteria. A certification in Division B implies that Multics has mandatory mechanisms to enforce security levels, and indeed those mechanisms were specifically implemented to make the system usable in a military multilevel secure environment [WHITMORE]. However, those mechanisms do not provide a sufficient basis for enforcing a commercial integrity model. In fact, Multics has an entirely different set of mechanisms, called protection rings, that were developed specifically for this purpose [SCHROEDER]. Protection rings provide a means for ensuring that data bases can be manipulated only by programs authorized to use them. Multics thus has two complete sets of security mechanisms, one oriented toward military multilevel operation and the other designed for the commercial model of integrity.
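The binding that protection rings enforce, that a data base can be touched only by the programs authorized to use it, can be sketched as follows. This is our own hypothetical illustration (the names bindings, invoke, rebind, and "officer" are ours); the key point it shows is that the binding itself is mandatory: ordinary users cannot rewrite it.

```python
# Sketch: each data item is bound to the set of programs permitted to
# manipulate it, and only the security officer may change the binding.
bindings = {"payroll_db": {"payroll_update", "payroll_report"}}
SECURITY_OFFICER = "officer"

def invoke(program, item):
    # Mandatory check: arbitrary programs may not touch the item.
    if program not in bindings.get(item, set()):
        raise PermissionError(f"{program} may not manipulate {item}")
    return f"{program} ran on {item}"

def rebind(user, item, programs):
    # The binding is not at user discretion.
    if user != SECURITY_OFFICER:
        raise PermissionError("only the security officer may change bindings")
    bindings[item] = set(programs)

print(invoke("payroll_update", "payroll_db"))  # permitted program
try:
    invoke("text_editor", "payroll_db")        # arbitrary program blocked
except PermissionError as e:
    print(e)
```

A general-purpose editor being unable to open the payroll file, no matter who runs it, is exactly the property that distinguishes this control from a discretionary read/write permission.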
COMMERCIAL EVALUATION CRITERIA

As discussed earlier, RACF, ACF/2, and CA-TopSecret were all reviewed using the Department of Defense evaluation criteria described in the Orange Book. Under these criteria, these systems did not provide any mandatory controls. However, these systems, especially when executed in the context of a telecommunications monitor system such as CICS or IMS, constitute the closest approximation the commercial world has to the enforcement of a mandatory integrity policy. There is thus a strong need for a commercial equivalent of the military evaluation criteria to provide a means of categorizing systems that are useful for integrity control.

Extensive study is needed to develop the depth of detail associated with a document such as the Department of Defense evaluation criteria. But, as a starting point, we propose the following criteria, which we compare to the fundamental computer security requirements from the "Introduction" to the Orange Book. First, the system must separately identify and authenticate every user, so that his actions can be controlled and audited. (This is similar to the Orange Book requirement for identification.) Second, the system must ensure that specified data items can be manipulated only by a restricted set of programs, and the data center controls must ensure that these programs meet the well-formed transaction rule. Third, the system must associate with each user a valid set of programs to be run, and the data center controls must ensure that these sets meet the separation of duty rule. Fourth, the system must maintain an auditing log that records every program executed and the name of the authorizing user. (This is superficially similar to the Orange Book requirement for accountability, but the one is designed specifically for military multilevel operation and the other for the commercial model of integrity; the events to be audited are quite different.)

The analogy between the two forms of mandatory control is not perfect. In the integrity control model, there must be more discretion left to the administrator of the system, because the determination of what constitutes proper separation of duty can be done only by a comparison with application-specific criteria. The separation of duty determination can be rather complex, because all the transactions interact. This greater discretion means that there is also greater scope for error by the security officer or system owner, and that the system is less able to prevent the security officer, as opposed to the user, from misusing the system. To the system user, however, the behavior of the two mandatory controls is similar: the rules are seen as a fundamental part of the system, and may not be circumvented, only further restricted, by any other discretionary control that exists.

In addition to these criteria, the military and commercial environments share two requirements. First, the computer system must contain mechanisms to ensure that the system enforces its requirements. And second, the mechanisms in the system must be protected against tampering or unauthorized change. These requirements, which ensure that the system actually does what it asserts it does, are clearly an integral part of any security policy. These are generally referred to as the "administrative" or "general" controls in a commercial data center.
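The fourth criterion, an auditing log recording every program executed and the authorizing user, can be sketched briefly. This is a hypothetical illustration of ours (AuditLog, run, and the user names are our own); it shows a log that supports appending only, so that existing records cannot be rewritten, in the spirit of the bookkeeper's ink.

```python
# Sketch: every program execution is appended to an audit log that
# exposes no operation for rewriting or deleting existing records.
class AuditLog:
    def __init__(self):
        self._records = []

    def append(self, user, program):
        self._records.append((user, program))

    def records(self):
        return tuple(self._records)  # read-only view of the history

log = AuditLog()

def run(user, program):
    log.append(user, program)       # logged before anything else happens
    return f"{user} ran {program}"

run("alice", "payroll_update")
run("bob", "payroll_report")
print(log.records())
```

In a real system the log itself would have to be protected by the same mandatory controls as any other critical data item, a point the formal model below makes explicit by treating the log as a constrained data item.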
By
definition
valid
state
as
follows.
it
will
take
the
CDIS
into
a valid
state
valid
state
before
if
they
were
in
a
execution
of
the
TP.
But
this
precondition
was
ensured
by
execution
of
the
IVP.
For
each
TP
in
turn,
we
can
step
to
ensure
this
necessary
repeat
that,
at
any
point
after
a sequence
of
the
system
is
still
valid.
This
TPs ,
proof
method
resembles
the
mathematical
valid
induction,
and
is
method
of
that
only
provided
the
system
ensures
TPs can
manipulate
the
CDIS.1
While
the
system
can
ensure
that
only
TPs
manipulate
CDIS,
it
cannot
ensure
that
the
TP
performs
a
well-formed
transformation.
The
validity
of
a
TP
(or
an
IVP)
can
be
determined
only
by
certifying
it
with
respect
to
a specific
integrity
policy.
In
the
case
of
the
bookkeeping
example,
a
INTEGRITY
In
this
section,
we introduce
a more
formal
model
for
data
integrity
within
systems,
and
compare
our
work
computer
We use
with
other
efforts
in
this
area.
integrity
specific
the
examples
as
accounting
with
associated
policies
practices,
but
we believe
our
model
is
applicable
to
a wide
range
of
integrity
policies.
To begin,
we must
identify
and
label
those
data
items
within
the
system
to
must
be
model
integrity
the
which
these
“Constrained
call
applied.
We
The
particular
CDIS.
Data
Items,”
or
each
TP would
be
certified
to
implement
transactions
that
lead
to
properly
segregated
double
entry
accounting.
The
certification
function
is
usually
a
operation,
although
manual
some
automated
aids
may be available.
thus
a
Integrity
assurance
is
two-part
process:
certification,
which
is
done
by
the
security
officer,
system
owner,
and
system
custodian
with
policy;
integrity
respect
to
an
and
done
by
which
is
the
enforcement,
Our
model
to
this
point
can
be
system.
summarized
in
the
following
three
rules:
integrity
policy
desired
is
defined
by
Integrity
of
procedures:
classes
two
IVPS ,
and
Procedures,
or
Verification
The
Transformation
Procedures,
or
TPs.
purpose
of
an IVP
is
to
confirm
that
all
of
the
CDIS
in
the
system
conform
to
the
integrity
specification
at
the
time
the
accounting
the
executed.
In
IVP
is
example,
this
corresponds
to
the
audit
are
books
the
which
function,
in
balanced
and
reconciled
to
the
external
The
TP corresponds
to
our
environment.
concept
of
the
well-formed
transaction.
The
purpose
of
the
TPs
is
to
change
the
CDIS
from
one
valid
state
to
set
of
In
the
accounting
example,
a
another.
TP
would
correspond
to
a
double
entry
transaction.
the
integrity
of
the
To
maintain
the
system
must
ensure
that
only
a
CDIS,
It
is
this
TP can
manipulate
the
CDIS.
term
the
that
motivated
constraint
this
Given
Item.
Data
constrained
can
argue
that,
at
any
constraint~
we
given
time,
the
CDIS
meet
the
integrity
requirements.
(We call
this
condition
a
We can
assume
that
at
“valid
state.”)
some
time
in
the
past
the
system
was
in
IVP
was
state,
because
an
valid
a
this.
Reasoning
verify
executed
to
forward
from
this
sequence
of
the
For
the
executed.
can
assert
that
it
point,
TPs
first
left
All
that
state
IVPS
all
at
the
must
CDIS
time
Cl:
(Certification)
properly
ensure
are
in
a valid
the
IVP
is
run.
C2:
All
TPs
must
be
certified
to
be
valid.
That
is,
they
must
take
a
CDI
to
a
valid
final
state,
given
that
it
is
in
a
valid
state
to
begin
with.
For
each
TP,
and
each
set
of
CDIS
that
it
may
manipulate,
the
security
specify
must
a
officer
defines
“relation,”
which
that
execution.
A
relation
is
thus
--- -----
------
lThere
is
- ---an
detail
which
additional
the
system
must
enforce,
which
is
to
ensure
that
TPs
are
executed
serially,
During
the
rather
than
several
at
once.
eXeCUtiOn
of
a
TP,
the
mid-point
of
there
is
no
requirement
that
the
system
If
another
TP
be
in
a
valid
state.
begins
execution
at
this
point,
there
is
no
assurance
that
the
final
state
will
be valid.
To address
this
problem,
most
modern
data
base
systems
have
mechanisms
that
appear
to
TPs
to
ensure
have
serial
fashion,
executed
in
a strictly
they
were
actually
executed
even
if
concurrently
for
efficiency
reasons.
we can
examine
been
that
have
TP executed,
we
the
system
in
a
189
of
the
form:
(TPi,
(CDIa,
CDIC,
. . .)),
where
the
CDIS
defines
a particular
arguments
for
which
the
been
certified.
El:
commercial
policy
is
likely
to
be
based
separation
of
on
responsibility
among
two
or more
users.
There
may
be
other
restrictions
on
the
validity
of
a
TP.
In
each
case,
this
restriction
will
be manifested
as a
certification
rule
and
enforcement
rule.
For
example,
if
a
TP
is
valid
only
during
certain
hours
of
the
day,
then
the
system
must
provide
a
trustworthy
clock
(an
enforcement
rule)
and
the
TP must
be certified
to
read
the
clock
properly.
Almost
all
integrity
enforcement
systems
require
that
all
TP execution
be
logged
to
audit
provide
an
trail.
However,
no
special
enforcement
rule
is
needed
to
implement
this
facility;
the
log
can
be modeled
as
another
CDI,
with
an
associated
TP
that
only
appends
to
The
only
rule
the
existing
CDI
value.
required
is:
CDIb,
list
of
set
of
TP
has
The
system
must
(Enforcement)
maintain
the
list
of
relations
and
must
specified
in
rule
c2,
ensure
that
the
only
manipulation
of
any
CDI
is
by
a
TP,
where
the
TP is
operating
on
the
CDI
as
specified
in
some
relation.
rules
provide
the
basic
above
The
framework
to
ensure
internal
consistency
To provide
a mechanism
for
of
the
CDIS.
the
separation
of
external
consistency,
we need
additional
rules
duty
mechanism,
can
execute
which
persons
control
to
which
programs
on specified
CDIS:
E2 :
C3:
The
system
must
maintain
a list
the
form:
of
relations
of
(UserID,
TPi,
(CDIa,
CDIb,
CDIC,
which
relates
a user,
a
. . . )),
TP,
and
the
data
objects
that
TP
may
reference
on
behalf
of
that
It
must
ensure
that
only
user.
executions
described
in
one
of
the
relations
are
performed.
The
list
of
relations
in
E2
meet
be
certified
to
separation
of
duty
requirement.
Formally, the relations specified for rule E2 are more powerful than those for E1, so E1 is unnecessary. However, for both philosophical and practical reasons, it is helpful to have both sorts of relations. Philosophically, keeping E1 and E2 separate helps to indicate that there are two basic problems to be solved: internal and external consistency. As a practical matter, the existence of both forms together permits complex relations to be expressed with shorter lists, by use of identifiers within the relations that use "wild card" characters to match classes of TPs or CDIs.

The above relation made use of UserID, an identifier for a user of the system. This implies the need for a rule to define these:

E3: The system must authenticate the identity of each user attempting to execute a TP.

Rule E3 is relevant to both commercial and military systems. However, those two classes of systems use the identity of the user to enforce very different policies. The relevant policy in the military context, as described in the Orange Book, is based on level and category of clearance, while the commercial policy is based on separation of duty.

C4: All TPs must be certified to write to an append-only CDI (the log) all information necessary to permit the nature of the operation to be reconstructed.

There is one more critical component to this integrity model. Not all data is constrained data. In addition to CDIs, most systems contain data items not covered by the integrity policy that may be manipulated arbitrarily, subject only to discretionary controls. These Unconstrained Data Items, or UDIs, are relevant because they represent the way new information is fed into the system. For example, information typed by a user at the keyboard is a UDI; it may have been entered or modified arbitrarily. To deal with this class of data, it is necessary to recognize that certain TPs may take UDIs as input values, and may modify or create CDIs based on this information. This implies a certification rule:

C5: Any TP that takes a UDI as an input value must be certified to perform only valid transformations, or else no transformations, for any possible value of the UDI. The transformation should take the input from a UDI to a CDI, or the UDI is rejected. Typically, this is an edit program.
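As an illustration of rule C5, the following sketch shows an edit program that either maps an arbitrary input string to a well-formed record or rejects it, performing no transformation for any other possible value of the UDI. The record format and field names are invented for the example:

```python
# Illustrative sketch of rule C5: a TP that accepts a UDI (raw user
# input) must either map it to a well-formed CDI or reject it outright.

def edit_program(udi_text):
    """Validate an untrusted input line 'account,amount' into a record,
    or reject it (no transformation) for any other possible value."""
    parts = udi_text.strip().split(",")
    if len(parts) != 2:
        raise ValueError("UDI rejected: wrong field count")
    account, amount = parts
    if not account.isalnum():
        raise ValueError("UDI rejected: bad account id")
    try:
        cents = int(amount)
    except ValueError:
        raise ValueError("UDI rejected: amount not an integer")
    if cents <= 0:
        raise ValueError("UDI rejected: non-positive amount")
    # Only now does the data become constrained (a CDI).
    return {"account": account, "cents": cents}

assert edit_program("A42,1500") == {"account": "A42", "cents": 1500}
```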
For this model to be effective, the various certification rules must not be bypassed. For example, if a user can create and run a new TP without having it certified, the system cannot meet its goals. For this reason, the system must ensure certain additional constraints. Most obviously:

E4: Only the agent permitted to certify entities may change the list of such entities associated with other entities: specifically, the list of TPs associated with a CDI and the list of users associated with a TP. An agent that can certify an entity may not have any execute rights with respect to that entity.

Other examples exist. Separation of duty might be enforced by analysis of the sets of accessible CDIs for each user. We believe that further research on specific aspects of integrity policy would lead to a new generation of tools for integrity control.
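The analysis of accessible CDI sets suggested above can be sketched as follows; the users, CDI names, and the conflict list are invented for the example:

```python
# Sketch: check each user's set of accessible CDIs against pairs of
# CDIs that no single user should control end to end.

ACCESS = {
    "alice": {"orders", "receiving"},
    "bob":   {"orders", "payments"},
}

# Pairs of CDIs whose combination in one user's hands defeats
# separation of duty (e.g. ordering goods and paying for them).
CONFLICTS = [{"orders", "payments"}]

def duty_violations(access, conflicts):
    """Return (user, conflicting CDI pair) for every violation found."""
    return [(user, tuple(sorted(pair)))
            for user, cdis in access.items()
            for pair in conflicts
            if pair <= cdis]

assert duty_violations(ACCESS, CONFLICTS) == [("bob", ("orders", "payments"))]
```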
OTHER MODELS OF INTEGRITY
Other attempts to model integrity have tried to follow more closely the structure for data security defined by Bell and LaPadula [BELL], the formal basis of the military security mechanisms. Biba [BIBA] defined an integrity model that is the inverse of the Bell and LaPadula model. His model states that data items exist at different levels of integrity, and that the system should prevent lower level data from contaminating higher level data. In particular, once a program reads lower level data, the system prevents that program from writing to (and thus contaminating) higher level data.
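Biba's no-write-up discipline can be sketched as follows; the class and level names are invented, and the two-level assignment mirrors the two levels of the present model:

```python
# A small sketch of Biba's strict integrity rules as described above:
# reading lower-integrity data lowers the subject, and a subject may
# never write data above its own integrity level.

LEVELS = {"low": 0, "high": 1}   # two integrity levels

class Subject:
    def __init__(self, level):
        self.level = level

    def read(self, obj_level):
        # Reading lower-integrity data drags the subject down with it.
        self.level = min(self.level, obj_level)

    def can_write(self, obj_level):
        # No write-up: a contaminated subject cannot touch higher data.
        return self.level >= obj_level

p = Subject(LEVELS["high"])
assert p.can_write(LEVELS["high"])
p.read(LEVELS["low"])                    # program reads lower level data...
assert not p.can_write(LEVELS["high"])   # ...and may no longer write upward
```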
Our model has two levels of integrity: the lower level UDIs and the higher level CDIs. CDIs would be considered higher level because they can be verified using an IVP. In Biba's model, any conversion of a UDI to a CDI could be done only by a security officer or trusted process. This restriction is clearly unrealistic; data input is the most common system function, and should not be done by a mechanism essentially outside the security model. Our model permits the security officer to certify the method for integrity upgrade (in our terms, those TPs that take UDIs as input values), and thus recognizes the fundamental role of the TP (i.e., trusted process) in our model. More generally, Biba's model lacks any equivalent of rule E1 (CDIs changed only by authorized TP), and thus cannot provide the specific idea of constrained data.
Another attempt to describe integrity using the Bell and LaPadula model is Lipner [LIPNER]. He recognizes that the category facility of this model can be used to distinguish the general user from the systems programmer or the security officer. Lipner also recognizes that data should be manipulated only by certified (production) programs. In attempting to express this in terms of the lattice model, he is constrained to attach lists of users to programs and data separately, rather than attaching a list of programs to a data item. His model thus has no way to express our rule E1. By combining a lattice security model with the Biba integrity model, he more closely approximates the desired model,
but still cannot effectively express the idea that data may be manipulated only by specified programs (rule E1).

This last rule (E4) makes this integrity enforcement mechanism mandatory rather than discretionary. For this structure to work overall, the ability to change permission lists must be coupled to the ability to certify, and not to some other ability, such as the ability to execute a TP. This coupling is the critical feature that ensures that the certification rules govern what actually happens when the system is run.

Together, these nine rules define a system that enforces a consistent integrity policy. The integrity rules are summarized in Figure 1, which shows the way the rules control the system operation. The figure shows a TP that takes certain CDIs as input and produces new versions of certain CDIs as output. These two CDIs represent two sets of successive valid states of the system. The figure also shows an IVP reading the collected CDIs in the system in order to verify the CDIs' validity. Associated with each part of the system is the rule (or rules) that governs it to ensure integrity.

Figure 1: Summary of System Integrity Rules. [Diagram: users invoke TPs that read and write CDIs in some system state, with an IVP validating the collected CDIs. Annotations: C1: IVP validates CDI state; C2: TPs preserve valid state; C5: TPs validate UDIs; E1: CDIs changed only by authorized TPs; E4: authorization lists changed only by the certifying agent.]

Central to this model is the idea that there are two classes of rules: certification rules and enforcement rules. Enforcement rules correspond to the application-independent security functions, while certification rules permit the application-specific integrity definitions to be incorporated into the model. It is desirable to minimize certification rules, because the certification process is complex, prone to error, and must be repeated after each program change. In extending this model, therefore, an important research goal must be to shift as much of the security burden as possible from certification to enforcement.

For example, a common integrity constraint is that TPs are to be executed in a certain order. In this model (and in most systems of today), this idea can be captured only by storing control information in some CDI, and executing explicit program steps in each TP to test this information. The result of this style is that the desired policy is hidden within the program, rather than being stated as an explicit rule that the system can then enforce.
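The sequencing example just given can be sketched concretely; the TP names and the layout of the control CDI are invented for the illustration:

```python
# Sketch: with no explicit ordering rule in the model, each TP must
# itself test a control CDI recording how far the sequence has
# progressed -- the ordering policy lives hidden in the program.

ORDER = ["enter", "approve", "post"]     # TPs must run in this order

def run_tp(tp, control_cdi):
    """Explicit program step inside every TP: check and advance the
    control CDI before doing any real work."""
    expected = ORDER[control_cdi["step"]]
    if tp != expected:
        raise RuntimeError(f"out of order: expected {expected!r}")
    control_cdi["step"] += 1

state = {"step": 0}
run_tp("enter", state)
run_tp("approve", state)
assert state["step"] == 2
```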
Our integrity model is less related to the Bell and LaPadula model than it is to the models constructed in support of security certification of systems themselves. The iterative process we use to argue that TPs preserve integrity, which starts with a known valid state and then validates incremental modifications, is also the methodology often used to verify that a system, while executing, continues to meet its requirements for enforcing security. In this comparison, our CDIs would correspond to the data structures of the system, and the TPs to the system code. This comparison suggests that the certification tools developed for system security certification may be relevant for the certifications that must be performed on this model.
For example, if an Orange Book for industry were created, it also might have rating levels. Existing systems such as ACF/2, RACF, and CA-TopSecret would certainly be found wanting in comparison to the model. This model would suggest that, to receive higher ratings, these security systems must provide: better facilities for end-user authentication; segregation of duties within the security officer functions, such as the ability to segregate the person who adds and deletes users from those who write a user's rules, and the restriction of the security function from user passwords; and the need to provide much better rule capabilities to govern the execution of programs and transactions.
The commercial sector would be very interested in a model that would lead to and measure these kinds of changes. Further, for the commercial world, these changes would be much more valuable than to take existing operating systems and security packages to B or A levels as defined in the Orange Book.
We do not mean to imply that the commercial world has no use for control of disclosure, or that the military is not concerned with integrity. Indeed, much data processing within the military matches commercial practices exactly. However, taking the Orange Book as the most organized articulation of military concerns, there is a clear difference in priority between the two sectors. For the core of traditional commercial data processing, preservation of integrity is the central and critical goal. This difference in priority has impeded the introduction of Orange Book mechanisms into the commercial sector. If the Orange Book mechanisms could enforce commercial integrity policies as well as those for military information control, the difference in priority would not matter, because the same system could be used for both.
Regrettably, this paper argues there is not an effective overlap between the mechanisms needed for the two. The Bell and LaPadula lattice model cannot directly express the idea that manipulation of data must be restricted to well-formed transformations, and that control of subjects must be based on separation of duty with respect to these transformations. The evaluation of ACF/2, RACF, and CA-TopSecret against the Orange Book criteria has made clear that many of these criteria are not central to the security concerns of the commercial sector. What would be needed in the commercial world is a new set of criteria that is more revealing with respect to integrity enforcement. This paper offers a first cut at such a set of criteria. We hope that we can stimulate
further effort to refine and formalize the integrity model, with an eventual goal of providing better security tools and systems in the commercial sector. There is no reason to believe that this effort would be irrelevant to military concerns. Indeed, incorporation of some form of integrity controls into the Orange Book might lead to systems that better meet the needs of both groups.
CONCLUSION
With the publication of the Orange Book, a great deal of public and governmental interest has focused on the evaluation of computer systems for security. However, it has been difficult for the commercial sector to evaluate the relevance of the Orange Book criteria, because there is no clear articulation of the goals of commercial security. This paper has attempted to identify and describe one such goal, a goal that is central to much of commercial data processing: information integrity. In using the words commercial and military in describing these models, we ...
ACKNOWLEDGMENTS
The authors would like to thank Frank S. Smith, III, Robert G. Andersen, and Ralph S. Poore (Ernst & Whinney, Information Security Services) for their assistance in preparing this paper. We also thank Steve Lipner (Digital Equipment Corporation) and the referees for their very helpful comments on the paper.
REFERENCES
[DoD] Department of Defense, Trusted Computer System Evaluation Criteria, CSC-STD-001-83, Department of Defense Computer Security Center, Fort Meade, MD, August 1983.

[Bell] Bell, D. E. and L. J. LaPadula, "Secure Computer Systems," ESD-TR-73-278 (Vol. I-III) (also Mitre TR-2547), Mitre Corporation, Bedford, MA, April 1974.

[Biba] Biba, K. J., "Integrity Considerations for Secure Computer Systems," Mitre TR-3153, Mitre Corporation, Bedford, MA, April 1977.

[Lipner] Lipner, S. B., "Non-Discretionary Controls for Commercial Applications," Proceedings of the 1982 IEEE Symposium on Security and Privacy, Oakland, CA, April 1982.

[Schroeder] Schroeder, M. D. and J. H. Saltzer, "A Hardware Architecture for Implementing Protection Rings," Comm. ACM, Vol. 15, 3, March 1972.

[Whitmore] Whitmore, J. C. et al., "Design for Multics Security Enhancements," ESD-TR-74-176, Honeywell Information Systems, 1974.