
UNIT 1 CONTENT

Trusted System
A trusted system is a hardware device, such as a computer or home entertainment
center, that is equipped with digital rights management (DRM) software. DRM software
controls the uses that can be made of copyrighted material in the secondary (after-sale)
market, in violation of established principles of U.S. copyright law; however, copyright
law restrictions are evaded by software licenses denying that a sale has taken place.
Software for such systems specifies transport rights (permission to copy the media, loan it
to another user, or transfer the license to another user), rendering rights (permission to
view or listen to the content), and derivative-work rights (permission to extract and reuse
content from the protected work). See copyright, Digital Millennium Copyright Act
(DMCA), digital rights management (DRM), first sale.
Technipages Explains Trusted System
Taking a closer look at the terminology, "trusted system" involves understanding and
describing trust. The word "secure" reflects a dichotomy: something is either secure or
not secure. If secure, it should withstand all attacks, today, tomorrow, and a century
from now, and if we claim that it is secure, you either accept our assertion (and buy and
use it) or reject it (and either do not use it, or use it but do not trust it).
So we say that software is trusted software if we know that the code has been
rigorously developed and analyzed, giving us reason to believe that the code does what
it is expected to do and nothing more. Typically, trusted code can be a foundation on
which other untrusted code runs, i.e. the untrusted system’s quality depends, in part, on
the trusted code; the trusted code establishes the baseline for the security of the overall
system. In particular, an operating system can be trusted software when there is a basis
for trusting that it correctly controls the accesses of components or systems run from it.
For example, the operating system might be expected to limit users’ access to specific
files.
Common Uses of Trusted System

- A trusted system can protect against malicious attacks from future bugs or viruses.
- The code of a trusted system is passed through rigorous analysis and development.
- A trusted system and an untrusted system can share a similar foundation.

Common Misuses of Trusted System

- A trusted system cannot withstand attacks from future malware.
What is a security policy?
A security policy is a document that states in writing how a company plans to
protect its physical and information technology (IT) assets. Security policies
are living documents that are continuously updated and changing as
technologies, vulnerabilities and security requirements change.
A company's security policy may include an acceptable use policy. These
describe how the company plans to educate its employees about protecting
the company's assets. They also include an explanation of how security
measures will be carried out and enforced, and a procedure for
evaluating the effectiveness of the policy to ensure that necessary corrections
are made.
Why are security policies important?
Security policies are important because they protect an organization's assets,
both physical and digital. They identify all company assets and all threats to
those assets.
Physical security policies are aimed at protecting a company's physical
assets, such as buildings and equipment, including computers and other IT
equipment. Data security policies protect intellectual property from costly
events, like data breaches and data leaks.
Physical security policies
Physical security policies protect all physical assets in an organization,
including buildings, vehicles, inventory and machines. These assets include IT
equipment, such as servers, computers and hard drives.
Protecting IT physical assets is particularly important because the physical
devices contain company data. If a physical IT asset is compromised, the
information it contains and handles is at risk. In this way, information security
policies are dependent on physical security policies to keep company data
safe.
Physical security policies include the following information:

- sensitive buildings, rooms and other areas of an organization;
- who is authorized to access, handle and move physical assets;
- procedures and other rules for accessing, monitoring and handling these assets; and
- responsibilities of individuals for the physical assets they access and handle.
Security guards, entry gates, and door and window locks are all used to
protect physical assets. Other, more high-tech methods are also used to keep
physical assets safe. For example, a biometric verification system can limit
access to a server room. Anyone accessing the room would use a fingerprint
scanner to verify they are authorized to enter.
Information security policies
These policies provide the following advantages.
Protect valuable assets. These policies help ensure the confidentiality,
integrity and availability -- known as the CIA triad -- of data. They are often
used to protect sensitive customer data and personally identifiable
information.
Guard reputations. Data breaches and other information security incidents
can negatively affect an organization's reputation.
Ensure compliance with legal and regulatory requirements. Many legal
requirements and regulations are aimed at securing sensitive information. For
example, the Payment Card Industry Data Security Standard dictates how
organizations handle consumer payment card information. The Health Insurance
Portability and Accountability Act details how companies handle protected
health information. Violating these regulations can be costly.
Dictate the role of employees. Every employee generates information that
may pose a security risk. Security policies provide guidance on the conduct
required to protect data and intellectual property.
Identify third-party vulnerabilities. Some vulnerabilities stem from interactions with
other organizations that may have different security standards. Security policies
help identify these potential security gaps.
New security concerns have emerged as employees moved into remote workspaces in
response to the COVID-19 pandemic. Companies must consider these as they update
their security policies.
Types of security policies
Security policy types can be divided into three types based on the scope and
purpose of the policy:
1. Organizational. These policies are a master blueprint of the entire
organization's security program.
2. System-specific. A system-specific policy covers security procedures for
an information system or network.
3. Issue-specific. These policies target certain aspects of the larger
organizational policy. Examples of issue-related security policies include
the following:
- Acceptable use policies define the rules and regulations for employee
  use of company assets.
- Access control policies say which employees can access which
  resources.
- Change management policies provide procedures for changing IT
  assets so that adverse effects are minimized.
- Disaster recovery policies ensure business continuity after a service
  disruption. These policies typically are enacted after the damage from
  an incident has occurred.
- Incident response policies define procedures for responding to a
  security breach or incident as it is happening.
The National Institute of Standards and Technology (NIST) frames incident response as a
cycle instead of a list of steps, which is a more proactive approach.
Key elements in a security policy
Some of the key elements of an organizational information security policy
include the following:

- statement of the purpose;
- statement that defines to whom the policy applies;
- statement of objectives, which usually encompasses the CIA triad;
- authority and access control policy that delineates who has access to
  which resources;
- data classification statement that divides data into categories of sensitivity --
  the data covered can range from public information to information that
  could cause harm to the business or an individual if disclosed;
- data use statement that lays out how data at any level should be handled --
  this includes specifying the data protection regulations, data backup
  requirements and network security standards for how data should be
  communicated, with encryption, for example;
- statement of the responsibilities and duties of employees and who will be
  responsible for overseeing and enforcing policy;
- security awareness training that instructs employees on security best
  practices -- this includes education on potential security threats, such as
  phishing, and computer security best practices for using company devices; and
- effectiveness measurements that will be used to assess how well security
  policies are working and how improvements will be made.
What to consider when creating a security policy
Security professionals must consider a range of areas when drafting a
security policy. They include the following:

Cloud and mobile. It is important for organizations to consider how they
are using the cloud and mobile applications when developing security
policies. Data is increasingly distributed through an organization's network
over a spectrum of devices. It is important to account for the increased
number of vulnerabilities that a distributed network of devices creates.

Data classification. Improperly categorizing data can lead to the exposure
of valuable assets or resources expended protecting data that doesn't
need to be protected.

Continuous updates. An organization's IT environment and the
vulnerabilities it is exposed to change as the organization grows, industries
change and cyberthreats evolve. Security policies must evolve to reflect
these changes.

Policy frameworks. The National Institute of Standards and Technology
(NIST) offers its Cybersecurity Framework, which provides guidance for
creating a security policy. The NIST approach helps businesses detect,
prevent and respond to cyber attacks.
The NIST Cybersecurity Framework provides guidance for creating security policies.
The takeaway
Data is one of an IT organization's most important assets. It is always being
generated and transmitted over an organization's network, and it can be
exposed in countless ways. A security policy guides an organization's strategy
for protecting data and other assets.
It is up to security leaders -- like chief information security officers -- to ensure
employees follow the security policies to keep company assets safe. Failing to
do so can result in the following:

- customer data in jeopardy;
- fines and other financial repercussions; and
- damage to a company's reputation.
Good cybersecurity strategies start with good policies. The best policies
preemptively deal with security threats before they have the chance to
happen.
This chapter discusses the design of a trusted operating system and differentiates
this concept from that of a secure operating system. This chapter includes a
discussion of security policy and models of security, upon which a trusted design can
be based.
Definitions: Trust vs. Security
Here we discuss the term “trusted operating system” and specify why we prefer the
term to something such as “secure operating system”. Basically, security is not a
quality that can be quantified easily. Either a system is secure or it is not secure. If a
system is called secure, it should be able to resist all attacks. The claim of security is
something that has to be taken as is, either one accepts the claim or one does not.
Trust, on the other hand, is something that can be quantified. A system is called
trusted if it meets the intended security requirements; thus one can assign a level of
trust to a system depending on the degree to which it meets a specific set of
requirements.
The evaluation of the trust to be accorded a system is undertaken by the user of the
system and depends on a number of factors, all of which can be assessed:
1) the enforcement of security policy, and
2) the sufficiency of its measures and mechanisms.
Security Policies
A security policy is a statement of the security we expect a given system to
enforce. A system can be characterized as trusted only to the extent that it satisfies
a security policy.
All organizations require policy statements. The evolution of policy is perhaps a fairly
dull job, but it is necessary. Policy sets the context for the rules that are
implemented by an organization. For this course, we focus on information security
policy, used to give a context for the rules and practices of information
security. Policy sets the strategy – it is the “big picture”, while rules are often seen
as the “little picture”. The text states that “Policy sets rules”. The author of these
notes would state that “Policy sets the context for rules”.
Another way to look at policy is that rules and procedures say what to do while the
policy specifies why it is done.
Sections of a Policy
Each policy must have four sections.

Purpose: Why has the policy been created and how does the company benefit?
Scope: What section of the company is affected by this policy?
Responsibility: Who is held accountable for the proper implementation of the policy?
Authority: A statement of who issued the policy and how that person has the
authority to define and enforce the policy.
Types of Policy
Information security policy must cover a number of topics. The major types of policy
that are important to an organization are the following.
Information Policy
Security Policy
Computer Use Policy
Internet Use Policy
E-Mail Use Policy
User Management Procedures
System Management Procedures
Incident Response Procedures
Configuration Management Policy
Information Policy
Companies process and use information of various levels of sensitivity. Much of the
information may be freely distributed to the public, but some should not. Within the
category of information not freely releasable to the public, there are usually at least
two levels of sensitivity. Some information, such as the company telephone book,
would cause only a nuisance if released publicly. Other information, such as details of
competitive bids, would cause the company substantial financial loss if made public
prematurely.
One should note that most information becomes less sensitive with age – travel
plans of company officials after the travel has been completed, details of competitive
bids after the bid has been let, etc.
Military Information Security Policy
The information security policy of the U.S. Department of Defense, U.S. Department
of Energy and similar agencies is based on classification of information by the
amount of harm its unauthorized release would cause to the national security. The
security policy of each agency is precisely spelled out in appropriate documentation;
those who are cleared for access to classified information should study those
manuals carefully.
Department of Defense (DOD) policy is not a proper course of study for this civilian
course, but it provides an excellent model for the types of security we are
studying. The first thing to note may not be applicable to commercial concerns: the
degree of classification of any document or other information is determined only by
the damage its unauthorized release would cause; possibility of embarrassment or
discovery of incompetent or illegal actions is not sufficient reason to classify
anything.
There are four levels of classification commonly
used: Unclassified, Confidential, Secret, and Top Secret. There is a subset of
Unclassified Data called For Official Use Only, with the obvious implications. Each
classification has requirements for storage, accountability, and destruction of the
information. For unclassified information, the only requirement is that the user
dispose of the information neatly. For FOUO (For Official Use Only) information, the
requirement is not to leave it on top of a desk and to shred it when discarding it.
For Secret and Top Secret information, requirements include GAO (Government
Accounting Office) approved storage containers (more stringent requirements for
Top Secret), hand receipts upon transfer to establish accountability, and complete
destruction (with witnessed destruction certificates) upon discard.
As a word of caution to everyone, the DOD anti-espionage experts give each level of
classification a “life” – the average amount of time before it is known to the
enemy. When this author worked for the U.S. Air Force, the numbers were three
years for Secret and seven years for Top Secret information. Nothing stays secure
for ever.
The U.S. Government uses security clearances as a method to establish the
trustworthiness of an individual or company to access and protect classified
data. The clearances are named identically to the levels of information sensitivity of
information (except that there are no clearances for Unclassified or FOUO) and
indicate the highest level of sensitivity a person is authorized to access. For example,
a person with a Secret clearance is authorized for Secret and Confidential material,
but not for Top Secret.
The granting of a security clearance is based on some determination that the person
is trustworthy. At one time Confidential clearances (rarely issued) could be granted
based on the person presenting a birth certificate. Secret clearances are commonly
based on a check of police records to insure that the person has no criminal
history. Top Secret clearances always require a complete background check,
involving interviews of a person’s family and friends by an agent of the U. S.
Government.
Each security clearance must be granted by the branch of the U. S. Government that
owns the classified data to which access is being granted. It is usual for one branch
of the government to accept clearances issued by another branch, but this author
knows people granted Top Secret clearances by the U. S. Air Force who transferred
to the U. S. Navy and had their clearances downgraded to Secret pending the
completion of another background check.
Information access is restricted by need-to-know. Formally this phrase implies that
one must have access to this information in order to complete his or her assigned
duties. Depending on the level of sensitivity, the criteria for need-to-know
differ. Conventionally, the criterion for access to Secret and lower-level data is a
credible request from a person known to be cleared and working in an area related
to the data. The criteria for Top Secret information usually are more formal, such as
a specific authorization by a senior person who already has access to the data.
For some areas, the need-to-know is formalized by creating compartments. All
information related to nuclear weapons is classified as Restricted Data, with
additional security controls.
Another well-known area is “Crypto”, that area related to the ciphers and codes used
to transmit classified data. Other sensitive information specific to a given project is
assigned to what is called a compartment, and is called Sensitive Compartmented
Information or SCI. Contrary to popular belief, not all SCI information is classified as
Top Secret.
As an example of a well-known project that must have had SCI controls, consider the
famous spy plane commonly called the Black Bird. It was the follow-on to the U2,
another project that must have had SCI controls. Suppose that the project was called
Blackbird (quite unlikely, but we need a name). Some information, such as detailed
design of this advanced plane would have been classified Top Secret and labeled as
“Handle via Blackbird channels only”. Administrative information on the project
might be classified as Secret with the same distribution restrictions. All of the
information to be handled within the Blackbird channels would be releasable only to
those people who had been specifically cleared for access to Blackbird information.
This author’s opinion of the classification process is that the U.S. Government had to
pay money to develop the data; others should not get it for free.
Company Security Policy
This section covers how a company might adapt the DOD security policy to handle its
own proprietary data; companies that handle U. S. Government sensitive data must
follow the appropriate policies as specified in the contract allowing the company
access to such data.
The first idea is that of multi-level security. It should be obvious that some
company data are more sensitive than other data and require more protection. For
many companies a three-level policy might suffice. Suggested levels of classification
include:

3-level: Public Release, Internal Use Only, and Proprietary
4-level: Public Release, Internal Use Only, Proprietary, and Company Confidential.
It cannot be overemphasized that the terms "Secret" and "Top Secret" should not be
used to classify company-sensitive data, as this can lead to serious embarrassment
when an auditor visits from a U. S. Government agency and asks what contract
allows the company to possess data considered by the U. S. Government to be
classified.
While companies normally do not have a formal clearance system, there are certain
aspects of the U. S. Government system that should be applied. Every company
should do some sort of background investigation on all of its employees (did you just
hire a known felon?) and delegate to specific managers the authority to grant access
to each sensitive project as needed to further the interests of the
company. The need-to-know policy should be enforced in that a person should have
access to company sensitive information only when a project manager determines
that such access is in the best interest of the company.
The idea of compartmented information comes naturally to companies; how many of
us working on sensitive projects need access to payroll data and other personnel
files? Again, one is strongly cautioned not to use the term SCI to refer to any
information other than that information so labeled by the U. S. Government.
Company policy must include instructions for storing, transferring, and destroying
sensitive information. Again, the DOD policy provides a good starting point.
Marking Sensitive Information
The U.S. Department of Defense has developed a standard approach to marking
sensitive information and protecting it with cover sheets that are themselves not
sensitive. These suggestions follow the DOD practice. The student should remember
to avoid the terms “Secret” and “Top Secret” in companies that have any dealings
with the government, as considerable embarrassment might arise were company
data confused and considered to be sensitive and so classified under some U.S.
Government regulation.
The suggested practice for paper documents involves cover sheets, page markings,
and paragraph markings. Each paragraph, table, and figure should be labeled
according to the sensitivity of the information contained. Each page should be
marked with the sensitivity of the most sensitive information on the page, and each
document should be labeled with the sensitivity of the most sensitive information in
the document and given an appropriate cover sheet. When stored in electronic
form, the document should contain the markings as if it were to be printed or
displayed on a monitor. These precautions should exist in addition to any
precautions to label the disk drive itself to show it contains sensitive information.
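
To make the roll-up rule concrete, here is a minimal Python sketch, assuming an
illustrative company scheme (the level names and the highest() helper are invented
for the example, not taken from any DOD standard), that marks each page with the
most sensitive paragraph it contains and marks the document with the most sensitive
page.

```python
# Minimal sketch: roll paragraph markings up to page and document level.
# The level names below are illustrative of a company scheme, not a
# reproduction of any official DOD marking standard.
LEVELS = ["Public Release", "Internal Use Only", "Proprietary", "Company Confidential"]
RANK = {name: i for i, name in enumerate(LEVELS)}

def highest(markings):
    """Return the most sensitive marking in a collection."""
    return max(markings, key=lambda m: RANK[m])

# Each page is represented by the list of its paragraph markings.
pages = [
    ["Public Release", "Proprietary"],
    ["Internal Use Only"],
]

page_markings = [highest(p) for p in pages]   # marking for each page
document_marking = highest(page_markings)     # marking for the whole document

print(page_markings)      # ['Proprietary', 'Internal Use Only']
print(document_marking)   # Proprietary
```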
When designing cover sheets for physical documents and labels for electronic
storage media, one should note that the goal is to indicate that the attached
document is sensitive without actually revealing any sensitive information; the cover
sheet itself should be publicly releasable. The DOD practice is to have blue cover
sheets for Confidential documents and red cover sheets for Secret documents. The
figure below illustrates two cover sheets.
The policy for company sensitive information should list the precautions required for
storage and transmission of the various levels of sensitivity.
Processing Sensitive Information
Again, we mention a policy that is derived from DOD practice. When a computer is
being used to process or store sensitive information, access to that computer should
be restricted to those employees explicitly authorized to work with that
information. This practice is a result of early DOD experiments with “multi-level
security” on time-sharing computers, in which some users were able to gain access
to information for which they were not authorized.
Destruction of Sensitive Information
Sensitive information on paper should be destroyed either by shredding with a crosscut shredder or by burning. Destruction of information on disk drives should be
performed by professionals; the delete command of the operating system does not
remove any data.
A common practice is to destroy the disk physically, possibly by melting it.
Security Policy
There are a number of topics that should be addressed. Identification and
authentication are two major topics – how are the users of the system identified and
authenticated. User ID’s and passwords are the most common mechanisms, but
others are possible.
The audit policy should specify what events are to be logged for later analysis. One
of the more commonly logged classes of events covers failed logins, which can
identify attempts to penetrate the system. One should remember, however, that
event logs can be useful only if there is a method for scanning them systematically
for significant events. Manual log reading is feasible only when an event has been
identified by other means – people are not good at reading long lists of events.
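
As a sketch of what systematic scanning could look like, the short Python fragment
below counts failed logins per user and flags repeat offenders; the comma-separated
log format and the LOGIN_FAILED event name are assumptions made for illustration.

```python
# Minimal sketch: scan an audit log for repeated failed logins.
# The log format (timestamp,event,user) and the event names are
# assumptions for illustration; real systems each have their own layout.
from collections import Counter

def failed_login_report(lines, threshold=3):
    failures = Counter()
    for line in lines:
        timestamp, event, user = line.strip().split(",")
        if event == "LOGIN_FAILED":
            failures[user] += 1
    # Report only users whose failure count reaches the threshold.
    return {user: n for user, n in failures.items() if n >= threshold}

log = [
    "2023-01-01T08:00:00,LOGIN_FAILED,alice",
    "2023-01-01T08:00:05,LOGIN_FAILED,alice",
    "2023-01-01T08:00:09,LOGIN_FAILED,alice",
    "2023-01-01T08:01:00,LOGIN_OK,bob",
]
print(failed_login_report(log))   # {'alice': 3}
```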
Any policy must include a provision for waivers; that is, what to do when the
provisions of the policy conflict with a pressing business need. When a project
manager requests a waiver of the company security policy, it must be documented
formally. Items to include are:
- the system in question,
- the section of the security policy that will not be met,
- how the non-compliance will increase the risk to the company,
- the steps being taken to manage that risk, and
- the plans for bringing the system into compliance with the policy.
Computer Use Policy
The policy should state clearly that an employee enters into an implicit agreement
with the company when using a computer issued by the company. Some important
items are:
1) All computers and network resources are owned by the company,
2) The acceptable use (if any) of non-company-owned computers within
the company business environment,
3) That, with the exception of customer data (which are owned by the customer),
all information stored on or used by the company computers is owned by the
company,
4) That the employee is expected to use company-owned computers only for
purposes that are related to work, and
5) That an employee has no expectation of privacy for information stored on
company computers or network assets.
System Administration Policies
These should specify how software patches and upgrades are to be distributed in the
company and who is responsible for making these upgrades. There should also be
policies for identifying and correcting vulnerabilities in computer systems.
There should also be a policy for responding to security incidents, commonly called
an IRP or Incident Response Policy. There are a number of topics to be covered:
1) how to identify the incident,
2) how to escalate the response as necessary until it is appropriate, and
3) who should contact the public press or law-enforcement authorities.
Creating and Deploying Policy
The most important issue with policy is gaining user acceptance – it should not be
grudging. The first step in creating a policy is the identification of stakeholders –
those who are affected by the policy. These must be included in the process of
developing the policy.
Another important concept is “buy-in”, which means that people affected by the
policy must agree that the policy is important and agree to abide by it. This goal is
often achieved best by a well-designed user education policy. Face it – if security is
viewed only as a nuisance imposed by some bureaucratic “bean counter”, it will be
ignored and subverted.
Here I must recall a supposedly true story about a company that bought a building
from another company that had been a defense contractor. The company
purchasing the building was not a defense contractor and had no access to
information classified by the U. S. Government. Imagine the company’s surprise,
when as a part of their renovation they removed the false ceiling and were showered
with documents indicating that they were the property of the U. S. Department of
Defense and marked SECRET.
It turned out that the security officer of the previous company was particularly
zealous. It was required that every classified document be properly locked up in the
company’s safe at the end of the working day. Accessing the safe was a nuisance, so
the engineers placed the documents above the false ceiling to avoid the security
officer discovering them on one of his frequent inspections. Here we have an
obvious case of lack of buy-in.
Models of Security
It is common practice, when we want to understand a subject, to build a logical
model and study that logical model. Of course, the logical model is useful only to the
extent that it corresponds to the real system, but we can try to get better
models. Models of security are used for a number of purposes.
1) To test the policy for consistency and adequate coverage.
Note that I do not say “completeness” – one can only show a policy to be
incomplete.
2) To document the policy.
3) To validate the policy; i.e. to determine that the policy meets its
requirements.
There are many useful models of security, most of which focus on multi-level
security. We shall discuss some of these, despite this author’s documented
skepticism that multi-level security systems are feasible with today’s hardware
running today’s operating systems.
Multi-Level Security
The idea of multi-level security is that some data are more sensitive than
others. When we try to formalize a model of multi-level security using the most
obvious model, we arrive at a slight problem. Consider the four traditional security
classifications and their implied order.
Unclassified ≤ Confidential ≤ SECRET ≤ Top Secret
This is an example of what mathematicians call a total ordering. A total ordering is a
special case of an ordering on a set. We first define partial ordering.
A partial order (or partial ordering) is defined for a set S as follows.
1) There is an equality operator, =, and by implication an inequality operator, ≠.
Any two elements of the set, a ∈ S and b ∈ S, can be compared.
Either a = b or a ≠ b. All sets share this property.
2) There is an ordering operator ≤, and by implication the operator ≥.
If a ≤ b, then b ≥ a. Note that the operator could be indicated by another
symbol.
3) The operator is transitive.
For any a ∈ S, b ∈ S, c ∈ S, if a ≤ b and b ≤ c, then a ≤ c.
4) The operator is antisymmetric.
For any a ∈ S, b ∈ S, if a ≤ b and b ≤ a, then a = b.
If, in addition to the above requirements for a partial ordering, it is the case that for
any two elements a ∈ S, b ∈ S, either a ≤ b or b ≤ a, then the relation is a total
ordering. We are fairly familiar with sets that support a total ordering; consider the
set of positive integers.
In models of the security world, it is often the case that two items cannot be
compared by an ordering operator. It has been discovered that the mathematical
object called a lattice provides a better model of security.
A lattice is a set S that supports a partial order, with the following additional
requirements.
1) Every pair of elements a ∈ S, b ∈ S possesses a common upper bound; i.e.,
there is an element u ∈ S such that a ≤ u and b ≤ u.
2) Every pair of elements a ∈ S, b ∈ S possesses a common lower bound; i.e.,
there is an element l ∈ S such that l ≤ a and l ≤ b.
Obviously a total ordering is a special case of a lattice. For any two
elements a ∈ S, b ∈ S in a set with a total ordering, let l = min(a, b) and u = max(a, b)
to satisfy the lattice property.
The most common example of a lattice is the relationship of divisibility in the set of
positive integers. Note that addition of zero to the set ruins the divisibility property.
The divisibility operator is denoted by the symbol "|"; we say a | b if the
integer a divides the integer b, equivalently that the integer b is an integer multiple
of the integer a. Let's verify that this operator on the set of positive integers satisfies
the requirements of a partial order.
1) Both equality and inequality are defined for the set of integers.
2) We are given the ordering operator "|".
3) The operator is transitive.
For any a ∈ S, b ∈ S, c ∈ S, if a | b and b | c, then a | c. The proof is easy.
If b | c, then there exists an integer q such that c = qb.
If a | b, then there exists an integer p such that b = pa.
Thus c = qb = q(pa) = (qp)a, and a | c.
4) The operator is antisymmetric.
For any a ∈ S, b ∈ S, if a | b and b | a, then a = b.
If the divisibility operator imposed a total order on the set of integers, then it would
be the case that for any two integers a and b, either a | b or b | a. It is easy to
falsify this claim by picking two prime numbers, say a = 5 and b = 7. Admittedly,
there are many pairs of integers that are not prime and still falsify the claim (27 = 3³
and 25 = 5²), but one pair is enough. We now ask if the set of integers under the
divisibility operator forms a lattice.
It turns out that the set does form a lattice as it is quite easy to form the lower and
upper bounds for any two integers. Let a  S and b  S, where S is the set of positive
integers.
A lower bound that always works is l = 1 and an upper bound that always works
is u = ab. Admittedly, these are not the greatest lower bound or least upper bound,
but they show that such bounds do exist. To illustrate the last statement, consider
this example.
a = 4 and b = 6, with ab = 24.
The greatest lower bound is l = 2, because 2 | 4 and 2 | 6, and the number 2
is the largest integer to have that property.
The least upper bound is u = 12, because 4 | 12 and 6 | 12, and the number 12
is the smallest integer to have that property.
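
A few lines of Python make these bounds concrete: in the divisibility lattice on the
positive integers, the greatest lower bound of two elements is their greatest common
divisor and the least upper bound is their least common multiple (a standard fact,
shown here as a sketch).

```python
# In the divisibility lattice on the positive integers, the greatest
# lower bound of a and b is gcd(a, b) and the least upper bound is
# lcm(a, b); 1 and a*b are always bounds, just not the tightest ones.
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

a, b = 4, 6
print(gcd(a, b))   # 2  -> greatest lower bound of 4 and 6
print(lcm(a, b))   # 12 -> least upper bound of 4 and 6
print(a * b)       # 24 -> an upper bound, but not the least one
```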
The lattice model has been widely accepted as a model for security systems because
it incorporates two of the basic requirements.
1) There is a sense of the idea that some data are more sensitive than other
data.
2) It is not always possible to rank the sensitivity of two distinct sets of data.
The figure below, adapted from figure 5-6 on page 241 of the textbook, shows a
lattice model based on the factors of the number 60 = 2²·3·5.
This figure is a directed acyclic graph (DAG) although the arrows are not shown on
the edges as drawn. Depending on the relation being modeled, the arrows all point
up or the arrows all point down. Note that this makes a good model of security, in
that some elements may in a sense be “more sensitive” than others without being
directly comparable. In the above DAG, we see that 12 is larger than 5 in the sense
of traditional comparison, but that the two numbers cannot be compared within the
rules of the lattice.
Before proceeding with security models that allow for multi-level security, we should
first mention that there are two problems associated with multi-level security. We
mention the less severe problem first and then proceed with the one discussed in
the text.
By definition, a multi-level security system allows for programs with different levels
of security to execute at the same time. Suppose that your program is processing
Top Secret data and producing Top Secret results (implying that you are cleared for
Top Secret), while my program is processing SECRET data and producing SECRET
results. A leak of data from your program into my program space is less severe if I
also am cleared for Top Secret, but just happen to be running a SECRET program. If I
am not cleared for access to Top Secret data, then we have a real security violation.
For the duration of this discussion, we shall assume the latter option – that a number
of users are processing data, with each user not being authorized to see the other
user’s data.
The Bell-LaPadula Confidentiality Model
The goal of this model is to identify allowable flows of information in a secure
system. While we are applying this to a computer system running multiple processes
(say a server with a number of clients checking databases over the Internet), I shall
illustrate the model with a paper-oriented example of collaborative writing of a
document to be printed. In this example, I am assuming that I have a SECRET
clearance.
This model is concerned with subjects and objects, as are other models. Each
subject and object in the model has a fixed security class, defined as follows.
C(S): for a subject S, this is the person's clearance.
C(O): for an object O (data or a program), this is its classification.
The first property is practically a definition of the meaning of a security clearance.
Simple Security Property: A subject S may have read access to an object O
only if C(S) ≥ C(O).
In my example, this implies that I may show my SECRET parts of the report only to
those who are cleared for SECRET-level or higher information. Specifically, I cannot
show the information to someone cleared only for access
to Confidential information.
*-Property: A subject S who has read access to an object O (thus C(S) ≥ C(O))
may have write access to an object P only if C(O) ≤ C(P).
This property seems a bit strange until one thinks about it. Notice first what
this does not say – that the subject has read access to the object P. In our example,
this states that if you are cleared for access to Top Secret information and are writing
a report classified Top Secret, that I (having only a SECRET clearance) may submit a
chapter classified SECRET for inclusion into your report. You accept the chapter and
include it. I never get to see the entire report as my clearance level is not sufficient.
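
A minimal Python sketch of the two properties follows; the numeric ranking of the
clearance levels is an assumption made for illustration and is not part of the model
itself.

```python
# Minimal sketch of the Bell-LaPadula rules. The numeric ranks are an
# assumed encoding of the usual ordering (higher number = more sensitive).
RANK = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_clearance, object_class):
    # Simple Security Property: read access only if C(S) >= C(O).
    return RANK[subject_clearance] >= RANK[object_class]

def can_write(read_level, target_class):
    # *-Property: a subject reading at level C(O) may write to P only if
    # C(O) <= C(P), so information never flows downward.
    return RANK[read_level] <= RANK[target_class]

print(can_read("Secret", "Confidential"))   # True:  reading down is allowed
print(can_read("Secret", "Top Secret"))     # False: no reading up
print(can_write("Secret", "Top Secret"))    # True:  a SECRET chapter may go
                                            #        into a Top Secret report
print(can_write("Top Secret", "Secret"))    # False: no writing down
```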
The strict interpretation of the *-Property places a severe constraint on information
flow from one program to a program of less sensitivity. In actual practice, such flows
are common with a person taking responsibility for removing sensitive data. The
problem here is that it is quite difficult for a computer program to scan a document
and detect the sensitivity of data. For example, suppose I have a document classified
as SECRET. A computer program scanning this document can easily pick out the
classification marks, but cannot make any judgments about what it is that causes the
document to be so classified. Thus, the strict rule is that if you are not cleared for
the entire document, you cannot see any part of it.
The author of these notes will share a true story dating from his days working for Air
Force intelligence. As would be expected, much of the information handled by the
intelligence organization was classified Top Secret, with most of that associated with
sensitive intelligence projects. People were hired based on a SECRET security
clearance and were assigned low-level projects until their Top Secret clearance was
obtained.
Information is the life blood of an intelligence organization. The basic model is that
the people who collect the intelligence pass it to the analysts who then determine its
significance. Most of what arrives at such an organization is quickly destroyed, but
this is the preferable mode as it does not require those who collect the information
to assess it.
There were many sensitive projects that worked with both SECRET and Top Secret
data. As the volume of documents to be destroyed was quite large, it was the
practice for the data that was classified only SECRET to be packaged up, sent out of
the restricted area, and given to the secretaries waiting on their Top Secret clearance
to handle for destruction. Thus we had a data flow from an area handling Top Secret
to an area authorized to handle data classified no higher than SECRET. This author
was present when the expected leak happened.
This author walked by the desk of a secretary engaged in the destruction of a large
pile of SECRET documents. At the time, both she and I had SECRET security
clearances and would soon be granted Top Secret clearances (each of us got the
clearance in a few months). In among the pile of documents properly delivered was
a document clearly marked Top Secret with a code word indicating that it was
associated with some very sensitive project. The secretary asked this author what to
do with the obviously misplaced document. This author could not think of anything
better than to report it to his supervisor, who he knew to have the appropriate
clearance. Result – MAJOR FREAKOUT, and a change in policy.
The problem at this point was a large flow of data from a more sensitive area to a
less sensitive area. Here is the question: this was only one document out of tens of
thousands. How important is it to avoid such a freak accident?
If one silly story will not do the job, let’s try for two with another story from this
author’s time in Dayton, Ohio. At the time an adult movie-house (porn theater) was
attempting to reach a wider audience, so it started showing children’s movies during
the day. This author attended the first showing. While the movie was G rated,
unfortunately nobody told the projectionist that the previews of coming attractions
could not be X rated. The result was a lot of surprised parents and amazed
children. There was no second showing for children.
The Biba Integrity Model
The Biba integrity model is similar to the Bell-LaPadula model, except that it is
designed to address issues of integrity of data. Security addresses prevention of
unauthorized disclosure of data, while integrity addresses prevention of unauthorized
modification of data. The student should note the similarities of the two models.
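
For comparison, here is a small sketch of the Biba rules under their standard reading
(no reading down and no writing up, with respect to integrity levels); the integrity
level names and ranks are illustrative assumptions.

```python
# Minimal sketch of the Biba integrity rules, the dual of Bell-LaPadula.
# The integrity level names and ranks below are illustrative.
INTEGRITY = {"Untrusted": 0, "User": 1, "System": 2}

def can_read(subject_level, object_level):
    # Simple Integrity Property: read only if I(O) >= I(S) (no reading down,
    # so a subject never bases its work on lower-integrity data).
    return INTEGRITY[object_level] >= INTEGRITY[subject_level]

def can_write(subject_level, object_level):
    # *-Integrity Property: write only if I(O) <= I(S) (no writing up,
    # so low-integrity subjects cannot corrupt high-integrity data).
    return INTEGRITY[object_level] <= INTEGRITY[subject_level]

print(can_read("System", "Untrusted"))   # False: no reading down
print(can_write("User", "System"))       # False: no writing up
```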
Design of a Trusted Operating System
Here we face the immediate problem of software quality. It is almost impossible to
create a complete and consistent set of requirements for any large software system,
and even more difficult to insure that the software system adheres to that set of
requirements and no other. Now we are asked to make an operating system adhere
to a set of requirements specifying security – perhaps both the Bell-La Padula model
and the Biba integrity model. This is quite a chore. The difficulty of the chore does
not excuse us from trying it.
The main difficulty in insuring the security of an operating system is the fact that the
operating system is interrupt-driven. Imagine an ordinary user program, perhaps
one written for a class project. One can think of this as a deterministic system
(although it might not be) in that the program does only what the instructions say to
do. Admittedly what the instructions say to do may be different from what the
author of the program thinks they say to do, but that is always a problem.
The main job of an operating system is to initialize the execution environment of the
computer and then enter an idle state, just waiting for interrupts. Its job is to
respond to each of the interrupts according to a fixed priority policy and to execute
the program associated with the interrupt. The association of programs with
interrupts is established when the execution environment is set up; for further study
consult a book on computer architecture.
When an interrupt causes the operating system to suspend the execution of one
program and initiate the execution of another program, the operating system
performs a context switch, basically loading the new program and establishing its
execution environment. It is this context switch that introduces some indeterminacy
into the operating system. Another concern is that the time and resources taken by
the context switch itself are part of the overhead of the operating system – cost to
the executing program that does not directly benefit the executing program. Thus,
there is pressure to make each context switch as efficient as possible. Introducing
security code into the context switch slows it down.
There are three main services of operating systems that interact with security.

User Interface: authenticates a user, allows him access to the system, and handles
all interaction with the user.
Service Management: allows a user access to many of the low-level services of the
operating system.
Resource Allocation: allocates resources, such as memory, I/O devices, time on the
CPU, etc.
In a trusted operating system, designed from the beginning with security in mind,
each of these main services is written as a distinct object with its own security
controls, especially user authentication, least privilege (don’t let a user do more
than is necessary), and complete mediation (verifying that the input is of the
expected form and adheres to the “edit” rules). Here the UNIX operating system
shows its major flaw – users are either not trusted or, being super-users, given
access to every resource.
Consider figure 5-11 on page 255 of the textbook. This shows the above strategy
taken to its logical and preferable conclusion. We have postulated that the resource
allocator have a security front-end to increase its security. Each of the resources
allocated by this feature should be viewed also as an object – a data structure with
software to manage its access.
The bottom line here is that computers are fast and memory is cheap. A recent
check
(10/31/2003) of the Gateway web site found a server configured with a 3.08 GHz
processor, 512 KB of cache memory, and 4GB of main memory. We might as well
spend a few of these inexpensive resources to do the job correctly.
Some of the features of a security-oriented operating system are obvious, while
other features require a bit of explanation. We discuss those features that are not
obvious.
Mandatory access control (MAC) refers to the granting of access by a central
authority, not by individual users. If I have SECRET data to show you and you do not
have a SECRET clearance, I cannot of my own volition grant you a SECRET clearance
(although I have actually seen it done – I wonder what the Defense Department
would think of that). MAC should exist along with discretionary access control (DAC)
in that objects not managed by the central authority can be managed by the
individual user owning them.
Object reuse protection refers to the complete removal of an object before it is
returned to the object pool for reuse. The simplest example of this is protection of
files. What happens when a file is deleted? In many operating systems, the file
allocation table is modified to no longer reference the object and to place its data
sectors on the free list as available for reuse. Note that the data sectors are not
overwritten, so that the original data remains. In theory, I could declare a large file
and, without writing anything to it, just read what is already there, left over from
when its sectors were used by a number of other files, now deleted.
Object reuse protection also has a place in large object-oriented systems. In these
systems, the creation of some objects is often very computationally intense. This
leads to the practice of pooling the discarded objects rather than actually destroying
the object and releasing the memory when the object is no longer in use. A program
attempting to create a new object of the type in the pool will get an object already
created if one exists in the pool. This leads to more efficient operation, but also
introduces a security hole.
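
A small Python sketch of one way to close this hole follows: the pool scrubs an
object's contents before recycling it, so the next requester cannot read leftover
data. The Buffer and BufferPool classes are purely illustrative.

```python
# Minimal sketch of object reuse protection in an object pool: contents
# are overwritten before a recycled object is handed to a new user.
# The Buffer and BufferPool classes are purely illustrative.
class Buffer:
    def __init__(self, size):
        self.data = bytearray(size)

class BufferPool:
    def __init__(self):
        self._free = []

    def acquire(self, size):
        # Reuse a pooled buffer if one exists; otherwise build a new one.
        return self._free.pop() if self._free else Buffer(size)

    def release(self, buf):
        # Object reuse protection: zero the contents before the buffer
        # becomes available to another caller.
        for i in range(len(buf.data)):
            buf.data[i] = 0
        self._free.append(buf)

pool = BufferPool()
b = pool.acquire(8)
b.data[0:5] = b"hello"
pool.release(b)
b2 = pool.acquire(8)            # same buffer object, recycled
print(bytes(b2.data))           # b'\x00\x00\x00\x00\x00\x00\x00\x00'
```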
Audit log management refers to the practice of logging all events with potential
security impact, protecting that log from unauthorized access and modification, and
creation of procedures and software to examine the log periodically and analyze it
for irregularities. A security log is of no use if nobody looks at it.
Intrusion detection refers to the creation and use of system software that scans all
activity looking for unusual events. Such software is hard to write, but one should
try. For example, this author has a 128 MB flash drive that he occasionally attaches
to his computer at work via the USB port. The intrusion detection software always
reports that the number of hard drives on the system has changed and says to call
the administrator if this was not an intentional act.
Kernelized Design
A kernel is the part of an operating system that performs low-level functions. This is
distinct from the high-level services part of the operating system that does things
such as handle shared printers, provides for e-mail and Internet access, etc. The
kernel of an operating system is often called the nucleus, and rarely the core. In an
operating system designed with security in mind there are two kernels: the security
kernel and the operating system kernel, which includes the security kernel.
The security kernel is responsible for enforcing the security mechanisms of the
operating system, including the handling of most of the functions normally allocated
to the operating system kernel itself, as most of these low-level facilities have impact
on security.
The reference monitor is one of the most important parts of the security
kernel. This is the process that controls access to all objects, including devices, files,
memory, interprocess communication, and other objects. Naturally, the reference
monitor must monitor access to itself and include protection against its being
modified in an unauthorized way.
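
As a rough illustration of the idea, the Python sketch below funnels every access
request through a single routine that consults a rule table and logs its decision;
the subjects, objects, and rules are assumptions made up for the example.

```python
# Minimal sketch of a reference monitor: one routine through which every
# access request must pass, consulting a rule table and logging the
# decision. The subjects, objects, and rules here are made up.
RULES = {
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}
audit_log = []

def reference_monitor(subject, obj, operation):
    allowed = operation in RULES.get((subject, obj), set())
    audit_log.append((subject, obj, operation, "ALLOW" if allowed else "DENY"))
    return allowed

print(reference_monitor("bob", "payroll.db", "write"))    # True
print(reference_monitor("alice", "payroll.db", "write"))  # False, and logged
print(audit_log[-1])   # ('alice', 'payroll.db', 'write', 'DENY')
```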
The Trusted Computing Base (TCB)
The trusted computing base is the name given to the part of the operating system
used to enforce security policy. Naturally, this must include the security
kernel. Functions of the TCB include the following:
1) hardware management, including processors, memory, registers, and I/O
devices,
2) process management, including process scheduling,
3) interrupt handling, including management of the clocks and timing functions,
and
4) management of primitive low-level I/O operations.
Virtualization is one of the more important tools of a trusted operating system. By
this term we mean that the operating system emulates a collection of the computer
system’s sensitive resources. Obviously virtualized objects must be supported by
real objects, but the idea is that these real objects can be managed via the virtual
objects.
As an example of a virtualized object, consider a shared printer. The printer is a real
object to which it is possible to print directly. Simultaneous execution of several
programs, each with direct access to the printer would yield an output with the
results of each program intermixed – a big mess. In fact the printer is virtualized and
replaced by the print spooler, which is the only process allowed to print directly to
the printer. Each process accessing the virtualized printer is really accessing the print
spooler, which writes the data to a disk file associated with the process. When the
process is finished with the printer, the spooler closes the file, and queues it up for
being printed on the real printer.
A virtual machine is a collection of hardware facilities, each of which could be real or
simulated in software. One common feature is virtual memory, in which each
process appears to have access to all of the memory of the computer, with the
possible exception of memory allocated to the operating system.
Assurance in Trusted Operating Systems
For an operating system designed to be secure, assurance is the mechanism for
convincing others that the security model is correct, as are the design and
implementation of the OS. How does one gain confidence that an operating system
should be trusted? One way is by gaining confidence that a number of the more
obvious security vulnerabilities have been addressed in the design of the system.
Input/Output processing represents one of the larger vulnerabilities in operating
systems. There are a number of reasons for the vulnerability of this processing,
including
1) the fact that I/O processing is interrupt driven, and
2) the fact that I/O processing is often performed by independent hardware
systems, and
3) the complexity of the I/O code itself, and
4) the desire to have the I/O process bypass the security monitors as an
efficiency issue.
Methods for gaining assurance include testing by the creator of the software, formal
testing by a unit that is independent of the software development process, formal
verification (when possible – it is very difficult), and formal validation by an outside
vendor. The author of these notes had been part of a software V&V (verification and
validation) team, assigned to be sure that the code was written correctly and that it
adhered to the requirements.
Formal Evaluation
We now turn to formal evaluation of an operating system against a published set of
criteria. One of the earliest attempts for formal evaluation was called the Trusted
Computer System Evaluation Criteria (TCSEC), more loosely the “Orange Book”
because that was the color of the book. This was first published in 1983 by the
U. S. Department of Defense. The TCSEC defined a number of levels of assurance.
D  – basically, no protection. Any system can get this level.
C1 – discretionary access control.
C2 – controlled access protection (a finer-grained discretionary access control).
B1 – labeled security protection. Each object is assigned a security level and
     mandatory access controls are used.
B2 – structured protection. This is level B1 with formal testing of a verified design.
B3 – security domains. The security kernel must be small and testable.
A1 – verified design. A formal design exists and has been thoroughly examined.
The TCSEC was a good document for its day, but it was overtaken by the arrival of
the Internet and connectivity to the Internet. Several operating systems were rated
as C1 or better, provided that the system was running without connection to the
Internet.
More recently, the U. S. Government has published the Combined Federal Criteria,
followed in 1998 by the Common Criteria. This document proposed a number of
levels of assurance (seven, I think) with higher levels being more secure and the top
level being characterized as “ridiculously secure”. The book has a discussion of these
criteria, but few details.
Assurance in Trusted Operating Systems
This chapter has moved our discussion from the general to the particular. We began by
studying different models of protection systems. By the time we reached the last section, we
examined three principles (isolation, security kernel, and layered structure) used in designing
secure operating systems, and we looked in detail at the approaches taken by designers of
particular operating systems. Now, we suppose that an operating system provider has taken
these considerations into account and claims to have a secure design. It is time for us to
consider assurance, ways of convincing others that a model, design, and implementation
are correct.
What justifies our confidence in the security features of an operating system? If someone
else has evaluated the system, how have the confidence levels of operating systems been
rated? In our assessment, we must recognize that operating systems are used in different
environments; in some applications, less secure operating systems may be acceptable.
Overall, then, we need ways of determining whether a particular operating system is
appropriate for a certain set of needs. Both in Chapter 4 and in the previous section, we
looked at design and process techniques for building confidence in the quality and
correctness of a system. In this section, we explore ways to actually demonstrate the
security of an operating system, using techniques such as testing, formal verification, and
informal validation. Snow [SNO05] explains what assurance is and why we need it.
Typical Operating System Flaws
Periodically throughout our analysis of operating system security features, we have used
the phrase "exploit a vulnerability." Throughout the years, many vulnerabilities have been
uncovered in many operating systems. They have gradually been corrected, and the body
of knowledge about likely weak spots has grown.
Known Vulnerabilities
In this section, we discuss typical vulnerabilities that have been uncovered in operating
systems. Our goal is not to provide a "how-to" guide for potential penetrators of operating
systems. Rather, we study these flaws to understand the careful analysis necessary in
designing and testing operating systems. User interaction is the largest single source of
operating system vulnerabilities, for several reasons:
o The user interface is performed by independent, intelligent hardware subsystems. The
human-computer interface often falls outside the security kernel or security restrictions
implemented by an operating system.
o Code to interact with users is often much more complex and much more dependent on the
specific device hardware than code for any other component of the computing system. For
these reasons, it is harder to review this code for correctness, let alone to verify it formally.
o User interactions are often character oriented. Again, in the interest of fast data transfer,
the operating systems designers may have tried to take shortcuts by limiting the number of
instructions executed by the operating system during actual data transfer. Sometimes the
instructions eliminated are those that enforce security policies as each character is
transferred.
A second prominent weakness in operating system security reflects an ambiguity in access
policy. On one hand, we want to separate users and protect their individual resources. On
the other hand, users depend on shared access to libraries, utility programs, common data,
and system tables. The distinction between isolation and sharing is not always clear at the
policy level, so the distinction cannot be sharply drawn at implementation.
A third potential problem area is incomplete mediation. Recall that Saltzer [SAL74]
recommended an operating system design in which every requested access was checked
for proper authorization. However, some systems check access only once per user interface
operation, process execution, or machine interval. The mechanism is available to implement
full protection, but the policy decision on when to invoke the mechanism is not complete.
Therefore, in the absence of any explicit requirement, system designers adopt the "most
efficient" enforcement; that is, the one that will lead to the least use of machine resources.
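To make the trade-off concrete, here is a minimal Python sketch (not from the text; the toy
access-control list, the function names, and the block-transfer model are all invented for
illustration) contrasting a check-once design with complete mediation of every transfer.

```python
# Illustrative sketch only: complete mediation vs. check-once enforcement.

ACL = {("alice", "/data/report.txt"): {"read"}}   # toy access-control list

def authorized(user, path, right):
    """Return True if the toy ACL grants `right` on `path` to `user`."""
    return right in ACL.get((user, path), set())

def read_blocks_check_once(user, path, blocks):
    """Check-once design: authorization is tested only when the operation
    starts; later transfers proceed unchecked."""
    if not authorized(user, path, "read"):
        raise PermissionError(path)
    return [block for block in blocks]      # no further checks per block

def read_blocks_fully_mediated(user, path, blocks):
    """Complete mediation: every block transfer re-validates the request,
    so a revoked right (or a redirected request) is caught mid-operation."""
    result = []
    for block in blocks:
        if not authorized(user, path, "read"):
            raise PermissionError(path)     # enforcement at each transfer
        result.append(block)
    return result
```

The second version costs more per transfer, which is exactly the "most efficient enforcement"
temptation described above.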
Generality is a fourth protection weakness, especially among commercial operating
systems for large computing systems. Implementers try to provide a means for users to
customize their operating system installation and to allow installation of software packages
written by other companies. Some of these packages, which themselves operate as part of
the operating system, must execute with the same access privileges as the operating
system. For example, there are programs that provide stricter access control than the
standard control available from the operating system. The "hooks" by which these packages
are installed are also trapdoors for any user to penetrate the operating system.
Thus, several well-known points of security weakness are common to many commercial
operating systems. Let us consider several examples of actual vulnerabilities that have
been exploited to penetrate operating systems.
Examples of Exploitations
Earlier, we discussed why the user interface is a weak point in many major operating
systems. We begin our examples by exploring this weakness in greater detail. On some
systems, after access has been checked to initiate a user operation, the operation
continues without subsequent checking, leading to classic time-of-check to time-of-use
flaws. Checking access permission with each character transferred is a substantial
overhead for the protection system. The command often resides in the user's memory
space. Any user can alter the source or destination address of the command after the
operation has commenced. Because access has already been checked once, the new
address will be used without further checking; it is not checked each time a piece of data is
transferred. By exploiting this flaw, users have been able to transfer data to or from any
memory address they desire.
Another example of exploitation involves a procedural problem. In one system a special
supervisor function was reserved for the installation of other security packages. When
executed, this supervisor call returned control to the user in privileged mode. The
operations allowable in that mode were not monitored closely, so the supervisor call could
be used for access control or for any other high-security system access. The particular
supervisor call required some effort to execute, but it was fully available on the system.
Additional checking should have been used to authenticate the program executing the
supervisor request. As an alternative, the access rights for any subject entering under that
supervisor request could have been limited to the objects necessary to perform the function
of the added program.
The time-of-check to time-of-use mismatch described in Chapter 3 can introduce security
problems, too. In an attack based on this vulnerability, access permission is checked for a
particular user to access an object, such as a buffer. But between the time the access is
approved and the access actually occurs, the user changes the designation of the object, so
that instead of accessing the approved object, the user now accesses another,
unacceptable, one.
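A minimal Python sketch of this race follows. It is our illustration, not an attack recipe; the
policy test in the second function is purely illustrative. The point is the gap between the
check (os.access) and the use (open), and the safer pattern of checking the object actually
opened.

```python
import os

def copy_if_allowed(path, dest):
    """Time-of-check to time-of-use flaw: the permission check and the
    actual open() are separate steps, so the object named by `path` can
    be swapped (for example, re-pointed at a protected file) in between."""
    if os.access(path, os.R_OK):          # time of check
        # ... window in which another process can re-point `path` ...
        with open(path, "rb") as f:       # time of use
            data = f.read()
        with open(dest, "wb") as g:
            g.write(data)

def copy_checked_after_open(path, dest):
    """Safer pattern: open first, then check properties of the object
    actually opened (the file descriptor), so check and use refer to the
    same object. The world-readable test is an illustrative policy only."""
    with open(path, "rb") as f:
        st = os.fstat(f.fileno())         # examine the opened object itself
        if not (st.st_mode & 0o004):
            raise PermissionError(path)
        data = f.read()
    with open(dest, "wb") as g:
        g.write(data)
```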
Other penetrations have occurred by exploitation of more complex combinations of
vulnerabilities. In general, however, security flaws in trusted operating systems have
resulted from a faulty analysis of a complex situation, such as user interaction, or from an
ambiguity or omission in the security policy. When simple security mechanisms are used to
implement clear and complete security policies, the number of penetrations falls
dramatically.
Assurance Methods
Once we understand the potential vulnerabilities in a system, we can apply assurance
techniques to seek out the vulnerabilities and mitigate or eliminate their effects. In this
section, we consider three such techniques, showing how they give us confidence in a
system's correctness: testing, verification, and validation. None of these is complete or
foolproof, and each has advantages and disadvantages. However, used with
understanding, each can play an important role in deriving overall assurance of the
systems' security.
Testing
Testing, first presented in Chapter 3, is the most widely accepted assurance technique. As
Boebert [BOE92] observes, conclusions from testing are based on the actual product being
evaluated, not on some abstraction or precursor of the product. This realism is a security
advantage. However, conclusions based on testing are necessarily limited, for the following
reasons:
o Testing can demonstrate the existence of a problem, but passing tests does not
demonstrate the absence of problems.
o Testing adequately within reasonable time or effort is difficult because the combinatorial
explosion of inputs and internal states makes testing very complex.
o Testing based only on observable effects, not on the internal structure of a product, does
not ensure any degree of completeness.
o Testing based on the internal structure of a product involves modifying the product by
adding code to extract and display internal states. That extra functionality affects the
product's behavior and can itself be a source of vulnerabilities or mask other vulnerabilities.
o Testing real-time or complex systems presents the problem of keeping track of all states
and triggers. This problem makes it hard to reproduce and analyze problems reported as
testers proceed.
Ordinarily, we think of testing in terms of the developer: unit testing a module, integration
testing to ensure that modules function properly together, function testing to trace
correctness across all aspects of a given function, and system testing to combine hardware
with software. Likewise, regression testing is performed to make sure a change to one part
of a system does not degrade any other functionality. But for other tests, including
acceptance tests, the user or customer administers tests to determine if what was ordered
is what is delivered. Thus, an important aspect of assurance is considering whether the
tests run are appropriate for the application and level of security. The nature and kinds of
testing reflect the developer's testing strategy: which tests address what issues.
Similarly, it is important to recognize that testing is almost always constrained by a project's
budget and schedule. The constraints usually mean that testing is incomplete in some way.
For this reason, we consider notions of test coverage, test completeness, and testing
effectiveness in a testing strategy. The more complete and effective our testing, the more
confidence we have in the software. More information on testing can be found in Pfleeger
and Atlee [PFL06a].
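As a small illustration of the developer-side tests named above (the function under test and
the test cases are invented for this sketch), a unit test and a regression test might look like
the following, using Python's unittest framework.

```python
import unittest

def minimum(values):
    """Function under test: return the smallest element of a non-empty list."""
    smallest = values[0]
    for v in values[1:]:
        if v < smallest:
            smallest = v
    return smallest

class MinimumUnitTests(unittest.TestCase):
    """Unit tests: exercise the module in isolation."""
    def test_single_element(self):
        self.assertEqual(minimum([7]), 7)

    def test_unsorted_input(self):
        self.assertEqual(minimum([5, 3, 9, 3]), 3)

class MinimumRegressionTests(unittest.TestCase):
    """Regression test: pins behavior a (hypothetical) earlier change once
    broke, so later edits cannot silently reintroduce the fault."""
    def test_negative_values_still_handled(self):
        self.assertEqual(minimum([-1, -10, 4]), -10)

if __name__ == "__main__":
    unittest.main()
```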
Penetration Testing
A testing strategy often used in computer security is called penetration testing, tiger team
analysis, or ethical hacking. In this approach, a team of experts in the use and design of
operating systems tries to crack the system being tested. (See, for example, [RUB01,
TIL03, PAL01].) The tiger team knows well the typical vulnerabilities in operating systems
and computing systems, as described in previous sections and chapters. With this
knowledge, the team attempts to identify and exploit the system's particular vulnerabilities.
The work of penetration testers closely resembles what an actual attacker might do
[AND04, SCH00b].
Penetration testing is both an art and a science. The artistic side requires careful analysis
and creativity in choosing the test cases. But the scientific side requires rigor, order,
precision, and organization. As Weissman observes [WEI95], there is an organized
methodology for hypothesizing and verifying flaws. It is not, as some might assume, a
random punching contest.
Using penetration testing is much like asking a mechanic to look over a used car on a sales
lot. The mechanic knows potential weak spots and checks as many of them as possible. It
is likely that a good mechanic will find significant problems, but finding a problem (and fixing
it) is no guarantee that no other problems are lurking in other parts of the system. For
instance, if the mechanic checks the fuel system, the cooling system, and the brakes, there
is no guarantee that the muffler is good. In the same way, an operating system that fails a
penetration test is known to have faults, but a system that does not fail is not guaranteed to
be fault-free. Nevertheless, penetration testing is useful and often finds faults that might
have been overlooked by other forms of testing. One possible reason for the success of
penetration testing is its use under real-life conditions. Users often exercise a system in
ways that its designers never anticipated or intended. So penetration testers can exploit this
real-life environment and knowledge to make certain kinds of problems visible.
Penetration testing is popular with the commercial community, which assumes that skilled
hackers will test (attack) a site and find its problems in hours, if not days. These people do
not realize that finding flaws in complex code can take weeks, if not months. Indeed, the original military red
teams to test security in software systems were convened for 4- to 6-month exercises.
Anderson et al. [AND04] point out the limitation of penetration testing. To find one flaw in a
space of 1 million inputs may require testing all 1 million possibilities; unless the space is
reasonably limited, this search is prohibitive. Karger and Schell [KAR02] point out that even
after they informed testers of a piece of malicious code they inserted in a system, the
testers were unable to find it. Penetration testing is not a magic technique for finding
needles in haystacks.
Formal Verification
The most rigorous method of analyzing security is through formal verification, which was
introduced in Chapter 3. Formal verification uses rules of mathematical logic to demonstrate
that a system has certain security properties. In formal verification, the operating system is
modeled and the operating system principles are described as assertions. The collection of
models and assertions is viewed as a theorem, which is then proven. The theorem asserts
that the operating system is correct. That is, formal verification confirms that the operating
system provides the security features it should and nothing else.
Proving correctness of an entire operating system is a formidable task, often requiring
months or even years of effort by several people. Computer programs called theorem
provers can assist in this effort, although much human activity is still needed. The amount
of work required and the methods used are well beyond the scope of this book. However,
we illustrate the general principle of verification by presenting a simple example that
uses proofs of correctness. You can find more extensive coverage of this topic in [BOW95],
[CHE81], [GRI81], [HAN76], [PFL06a], and [SAI96].
Consider the flow diagram of Figure 5-22, illustrating the logic in a program to determine the
smallest of a set of n values, A[1] through A[n]. The flow chart has a single identified
beginning point, a single identified ending point, and five internal blocks, including an if-then
structure and a loop.
Figure 5-22. Flow Diagram for Finding the Minimum Value.
In program verification, we rewrite the program as a series of assertions about the
program's variables and values. The initial assertion is a statement of conditions on entry to
the module. Next, we identify a series of intermediate assertions associated with the work of
the module. We also determine an ending assertion, a statement of the expected result.
Finally, we show that the initial assertion leads logically to the intermediate assertions that
in turn lead logically to the ending assertion.
We can formally verify the example in Figure 5-22 by using four assertions. The first
assertion, P, is a statement of initial conditions, assumed to be true on entry to the
procedure.
n > 0 (P)
The second assertion, Q, is the result of applying the initialization code in the first box.
n > 0 and (Q)
1 ≤ i ≤ n and
min = A[1]
The third assertion, R, is the loop assertion. It asserts what is true at the start of each
iteration of the loop.
n > 0 and (R)
1 ≤ i ≤ n and
for all j, 1 ≤ j ≤ i - 1, min ≤ A[j]
The final assertion, S, is the concluding assertion, the statement of conditions true at the
time the loop exit occurs.
n > 0 and (S)
i = n + 1 and
for all j, 1 ≤ j ≤ n, min ≤ A[j]
These four assertions, shown in Figure 5-23, capture the essence of
the flow chart. The next step in the verification process involves showing the logical
progression of these four assertions. That is, we must show that, assuming P is true on
entry to this procedure, Q is true after completion of the initialization section, R is true the
first time the loop is entered, R is true each time through the loop, and the truth of R implies
that S is true at the termination of the loop.
Figure 5-23. Verification Assertions.
Clearly, Q follows from P and the semantics of the two statements in the second box. When
we enter the loop for the first time, i = 2, so i - 1 = 1. Thus, the assertion about min applies
only for j = 1, which follows from Q. To prove that R remains true with each execution of the
loop, we can use the principle of mathematical induction. The basis of the induction is
that R was true the first time through the loop. With each iteration of the loop the value
of i increases by 1, so it is necessary to show only that min ≤ A[i] for this new value of i. That
proof follows from the meaning of the comparison and replacement statements.
Therefore, R is true with each iteration of the loop. Finally, S follows from the final iteration
value of R. This step completes the formal verification that this flow chart exits with the
smallest value of A[1] through A[n] in min.
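For readers who prefer code to flow charts, the same algorithm can be written with the
assertions P, Q, R, and S checked at run time. Executing asserts is testing rather than
proof, but it shows where each assertion sits in the control flow; the sketch below is ours
(using 0-based indexing), not the book's.

```python
def find_min(A):
    """Find the smallest of A[0..n-1], checking the verification assertions
    (adapted to 0-based indexing) as runtime asserts."""
    n = len(A)
    assert n > 0                                      # P: initial condition

    minimum = A[0]
    i = 1
    assert n > 0 and minimum == A[0]                  # Q: after initialization

    while i < n:
        # R: loop assertion - minimum is the smallest of A[0..i-1]
        assert all(minimum <= A[j] for j in range(i))
        if A[i] < minimum:
            minimum = A[i]
        i += 1

    # S: concluding assertion - i == n and minimum <= every element
    assert i == n and all(minimum <= A[j] for j in range(n))
    return minimum
```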
The algorithm (not the verification) shown here is frequently used as an example in the first
few weeks of introductory programming classes. It is quite simple; in fact, after studying the
algorithm for a short time, most students convince themselves that the algorithm is correct.
The verification itself takes much longer to explain; it also takes far longer to write than the
algorithm itself. Thus, this proof-of-correctness example highlights two principal difficulties
with formal verification methods:
o Time. The methods of formal verification are time consuming to perform. Stating the
assertions at each step and verifying the logical flow of the assertions are both slow
processes.
o Complexity. Formal verification is a complex process. For some systems with large
numbers of states and transitions, it is hopeless to try to state and verify the assertions.
This situation is especially true for systems that have not been designed with formal
verification in mind.
These two difficulties constrain the situations in which formal verification can be used
successfully. Gerhart [GER89] succinctly describes the advantages and disadvantages of
using formal methods, including proof of correctness. As Schaefer [SCH89a] points out, too
often people focus so much on the formalism and on deriving a formal proof that they ignore
the underlying security properties to be ensured.
Validation
Formal verification is a particular instance of the more general approach to assuring
correctness: verification. As we have seen in Chapter 3, there are many ways to show that
each of a system's functions works correctly. Validation is the counterpart to verification,
assuring that the system developers have implemented all requirements. Thus, validation
makes sure that the developer is building the right product (according to the specification),
and verification checks the quality of the implementation [PFL06a]. There are several
different ways to validate an operating system.
o Requirements checking. One technique is to cross-check each operating system
requirement with the system's source code or execution-time behavior (see the sketch after
this list). The goal is to demonstrate that the system does each thing listed in the functional
requirements. This process is a narrow one, in the sense that it demonstrates only that the
system does everything it should do. In security, we are equally concerned about prevention:
making sure the system does not do the things it is not supposed to do. Requirements
checking seldom addresses this aspect of requirements compliance.
o Design and code reviews. As described in Chapter 3, design and code reviews usually
address system correctness (that is, verification). But a review can also address
requirements implementation. To support validation, the reviewers scrutinize the design or
the code to ensure traceability from each requirement to design and code components,
noting problems along the way (including faults, incorrect assumptions, incomplete or
inconsistent behavior, or faulty logic). The success of this process depends on the rigor of
the review.
o System testing. The programmers or an independent test team select data to check the
system. These test data can be organized much like acceptance testing, so behaviors and
data expected from reading the requirements document can be confirmed in the actual
running of the system. The checking is done in a methodical manner to ensure
completeness.
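The sketch below illustrates the requirements-checking idea from the list above; the
requirement identifiers and the set of observed behaviors are invented for illustration.

```python
# Hypothetical cross-check of functional requirements against observed
# execution-time behavior (both collections are invented for illustration).

requirements = {
    "REQ-01": "System authenticates every user at login",
    "REQ-02": "System writes an audit record for each file access",
    "REQ-03": "System labels all exported documents",
}

# Requirement IDs whose behavior was demonstrated during a test run.
observed = {"REQ-01", "REQ-02"}

def check_requirements(requirements, observed):
    """Report which requirements were demonstrated and which were not.
    Note the limitation stated in the text: this shows only that the system
    does what it should, not that it avoids doing what it should not."""
    missing = [rid for rid in requirements if rid not in observed]
    for rid in sorted(requirements):
        status = "demonstrated" if rid in observed else "NOT demonstrated"
        print(f"{rid}: {status} - {requirements[rid]}")
    return missing

if __name__ == "__main__":
    unmet = check_requirements(requirements, observed)   # ['REQ-03']
```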
Open Source
A debate has opened in the software development community over so-called open
source operating systems (and other programs), ones for which the source code is freely
released for public analysis. The arguments are predictable: With open source, many critics
can peruse the code, presumably finding flaws, whereas closed (proprietary) source makes
it more difficult for attackers to find and exploit flaws.
The Linux operating system is the prime example of open source software, although the
source of its predecessor Unix was also widely available. The open source idea is catching
on: According to a survey by IDG Research, reported in the Washington Post [CHA01], 27
percent of high-end servers now run Linux, as opposed to 41 percent for a Microsoft
operating system, and the open source Apache web server outruns Microsoft Internet
Information Server by 63 percent to 20 percent.
Lawton [LAW02] lists additional benefits of open source:
o Cost: Because the source code is available to the public, if the owner charges a high fee,
the public will trade the software unofficially.
o Quality: The code can be analyzed by many reviewers who are unrelated to the
development effort or the firm that developed the software.
o Support: As the public finds flaws, it may also be in the best position to propose the fixes
for those flaws.
o Extensibility: The public can readily figure how to extend code to meet new needs and can
share those extensions with other users.
Opponents of public release argue that giving the attacker knowledge of the design and
implementation of a piece of code allows a search for shortcomings and provides a
blueprint for their exploitation. Many commercial vendors have opposed open source for
years, and Microsoft is currently being quite vocal in its opposition. Craig Mundie, senior
vice president of Microsoft, says open source software "puts at risk the continued vitality of
the independent software sector" [CHA01]. Microsoft favors a scheme under which it would
share source code of some of its products with selected partners, while still retaining
intellectual property rights. The Alexis de Tocqueville Institution argues that "terrorists trying
to hack or disrupt U.S. computer networks might find it easier if the Federal government
attempts to switch to 'open source' as some groups propose," citing threats against air
traffic control or surveillance systems [BRO02].
But noted computer security researchers argue that open or closed source is not the real
issue to examine. Marcus Ranum, president of Network Flight Recorder, has said, "I don't
think making [software] open source contributes to making it better at all. What makes good
software is single-minded focus." Eugene Spafford of Purdue University [LAW02] agrees,
saying, "What really determines whether it is trustable is quality and care. Was it designed
well? Was it built using proper tools? Did the people who built it use discipline and not add a
lot of features?" Ross Anderson of Cambridge University [AND02] argues that "there are
more pressing security problems for the open source community. The interaction between
security and openness is entangled with attempts to use security mechanisms for
commercial advantage, to entrench monopolies, to control copyright, and above all to
control interoperability."
Anderson presents a statistical model of reliability that shows that after open or closed
testing, the two approaches are equivalent in expected failure rate [AND05]. Boulanger
[BOU05] comes to a similar conclusion.
Evaluation
Most system consumers (that is, users or system purchasers) are not security experts. They
need the security functions, but they are not usually capable of verifying the accuracy or
adequacy of test coverage, checking the validity of a proof of correctness, or determining in
any other way that a system correctly implements a security policy. Thus, it is useful (and
sometimes essential) to have an independent third party evaluate an operating system's
security. Independent experts can review the requirements, design, implementation, and
evidence of assurance for a system. Because it is helpful to have a standard approach for
an evaluation, several schemes have been devised for structuring an independent review.
In this section, we examine three different approaches: from the United States, from
Europe, and a scheme that combines several known approaches.
U.S. "Orange Book" Evaluation
In the late 1970s, the U.S. Department of Defense (DoD) defined a set of distinct,
hierarchical levels of trust in operating systems. Published in a document [DOD85] that has
become known informally as the "Orange Book," the Trusted Computer System Evaluation
Criteria (TCSEC) provides the criteria for an independent evaluation. The National
Computer Security Center (NCSC), an organization within the National Security Agency,
guided and sanctioned the actual evaluations.
The levels of trust are described as four divisions, A, B, C, and D, where A has the most
comprehensive degree of security. Within a division, additional distinctions are denoted with
numbers; the higher numbers indicate tighter security requirements. Thus, the complete set
of ratings ranging from lowest to highest assurance is D, C1, C2, B1, B2, B3, and A1. Table
5-7 (from Appendix D of [DOD85]) shows the security requirements for each of the seven
evaluated classes of NCSC certification. (Class D has no requirements because it denotes
minimal protection.)
Table 5-7. Trusted Computer System Evaluation Criteria.
For each evaluated class (D, C1, C2, B1, B2, B3, A1), every criterion below is marked in the
original table as having no requirement, the same requirement as the previous class, or an
additional requirement.
Security Policy: discretionary access control; object reuse; labels; label integrity;
exportation of labeled information; labeling human-readable output; mandatory access
control; subject sensitivity labels; device labels.
Accountability: identification and authentication; audit; trusted path.
Assurance: system architecture; system integrity; security testing; design specification and
verification; covert channel analysis; trusted facility management; configuration
management; trusted recovery; trusted distribution.
Documentation: security features user's guide; trusted facility manual; test documentation;
design documentation.
The table's pattern reveals four clusters of ratings:
o D, with no requirements
o C1/C2/B1, requiring security features common to many commercial operating systems
o B2, requiring a precise proof of security of the underlying model and a narrative
specification of the trusted computing base
o B3/A1, requiring more precisely proven descriptive and formal designs of the trusted
computing base
These clusters do not imply that classes C1, C2, and B1 are equivalent. However, there are
substantial increases of stringency between B1 and B2, and between B2 and B3 (especially
in the assurance area). To see why, consider the requirements for C1, C2, and B1. An
operating system developer might be able to add security measures to an existing operating
system in order to qualify for these ratings. However, security must be included in
the design of the operating system for a B2 rating. Furthermore, the design of a B3 or A1
system must begin with construction and proof of a formal model of security. Thus, the
distinctions between B1 and B2 and between B2 and B3 are significant.
Let us look at each class of security described in the TCSEC. In our descriptions, terms in
quotation marks have been taken directly from the Orange Book to convey the spirit of the
evaluation criteria.
Class D: Minimal Protection
This class is applied to systems that have been evaluated for a higher category but have
failed the evaluation. No security characteristics are needed for a D rating.
Class C1: Discretionary Security Protection
C1 is intended for an environment of cooperating users processing data at the same level of
sensitivity. A system evaluated as C1 separates users from data. Controls must seemingly
be sufficient to implement access limitation, to allow users to protect their own data. The
controls of a C1 system may not have been stringently evaluated; the evaluation may be
based more on the presence of certain features. To qualify for a C1 rating, a system must
have a domain that includes security functions and that is protected against tampering. A
keyword in the classification is "discretionary." A user is "allowed" to decide when the
controls apply, when they do not, and which named individuals or groups are allowed
access.
Class C2: Controlled Access Protection
A C2 system still implements discretionary access control, although the granularity of
control is finer. The audit trail must be capable of tracking each individual's access (or
attempted access) to each object.
Class B1: Labeled Security Protection
All certifications in the B division include nondiscretionary access control. At the B1 level,
each controlled subject and object must be assigned a security level. (For class B1, the
protection system does not need to control every object.)
Each controlled object must be individually labeled for security level, and these labels must
be used as the basis for access control decisions. The access control must be based on a
model employing both hierarchical levels and nonhierarchical categories. (The military
model is an example: its hierarchical levels are unclassified, classified, secret, and top
secret, and its nonhierarchical categories are need-to-know category sets.) The mandatory
access policy is the Bell-La Padula model. Thus, a B1 system must implement Bell-La
Padula controls for all accesses, with user discretionary access controls to further limit
access.
Class B2: Structured Protection
The major enhancement for B2 is a design requirement: The design and implementation of
a B2 system must enable a more thorough testing and review. A verifiable top-level design
must be presented, and testing must confirm that the system implements this design. The
system must be internally structured into "well-defined largely independent modules." The
principle of least privilege is to be enforced in the design. Access control policies must be
enforced on all objects and subjects, including devices. Analysis of covert channels is
required.
Class B3: Security Domains
The security functions of a B3 system must be small enough for extensive testing. A
high-level design must be complete and conceptually simple, and a "convincing argument" must
exist that the system implements this design. The implementation of the design must
"incorporate significant use of layering, abstraction, and information hiding."
The security functions must be tamperproof. Furthermore, the system must be "highly
resistant to penetration." There is also a requirement that the system audit facility be able to
identify when a violation of security is imminent.
Class A1: Verified Design
Class A1 requires a formally verified system design. The capabilities of the system are the
same as for class B3. But in addition there are five important criteria for class A1
certification: (1) a formal model of the protection system and a proof of its consistency and
adequacy, (2) a formal top-level specification of the protection system, (3) a demonstration
that the top-level specification corresponds to the model, (4) an implementation "informally"
shown to be consistent with the specification, and (5) formal analysis of covert channels.
European ITSEC Evaluation
The TCSEC was developed in the United States, but representatives from several
European countries also recognized the need for criteria and a methodology for evaluating
security-enforcing products. The European efforts culminated in the ITSEC, the Information
Technology Security Evaluation Criteria [ITS91b].
Origins of the ITSEC
England, Germany, and France independently began work on evaluation criteria at
approximately the same time. Both England and Germany published their first drafts in
1989; France had its criteria in limited review when these three nations, joined by the
Netherlands, decided to work together to develop a common criteria document. We
examine the British and German efforts separately, followed by their combined output.
German Green Book
The (then West) German Information Security Agency (GISA) produced a catalog of criteria
[GIS88] five years after the first use of the U.S. TCSEC. Keeping with tradition, the security
community began to call the document the German Green Book because of its green cover.
The German criteria identified eight basic security functions, deemed sufficient to enforce a
broad spectrum of security policies:
o identification and authentication: unique and certain association of an identity with a
subject or object
o administration of rights: the ability to control the assignment and revocation of access
rights between subjects and objects
o verification of rights: mediation of the attempt of a subject to exercise rights with respect
to an object
o audit: a record of information on the successful or attempted unsuccessful exercise of
rights
o object reuse: reusable resources reset in such a way that no information flow occurs in
contradiction to the security policy
o error recovery: identification of situations from which recovery is necessary and invocation
of an appropriate action
o continuity of service: identification of functionality that must be available in the system and
what degree of delay or loss (if any) can be tolerated
o data communication security: peer entity authentication, control of access to
communications resources, data confidentiality, data integrity, data origin authentication,
and nonrepudiation
Note that the first five of these eight functions closely resemble the U.S. TCSEC, but the
last three move into entirely new areas: integrity of data, availability, and a range of
communications concerns.
Like the U.S. DoD, GISA did not expect ordinary users (that is, those who were not security
experts) to select appropriate sets of security functions, so ten functional classes were
defined. Classes F1 through F5 corresponded closely to the functionality requirements of
U.S. classes C1 through B3. (Recall that the functionality requirements of class A1 are
identical to those of B3.) Class F6 was for high data and program integrity requirements,
class F7 was appropriate for high availability, and classes F8 through F10 relate to data
communications situations. The German method addressed assurance by defining eight
quality levels, Q0 through Q7, corresponding roughly to the assurance requirements of U.S.
TCSEC levels D through A1, respectively. For example,
o The evaluation of a Q1 system is merely intended to ensure that the implementation more
or less enforces the security policy and that no major errors exist.
o The goal of a Q3 evaluation is to show that the system is largely resistant to simple
penetration attempts.
o To achieve assurance level Q6, it must be formally proven that the highest specification
level meets all the requirements of the formal security policy model. In addition, the source
code is analyzed precisely.
These functionality classes and assurance levels can be combined in any way, producing
potentially 80 different evaluation results, as shown in Table 5-8. The region in the
upper-right portion of the table represents requirements in excess of U.S. TCSEC
requirements, showing higher assurance requirements for a given functionality class. Even
though assurance and functionality can be combined in any way, there may be limited
practical use for a low-assurance multilevel system (for example, F5 with Q1). The Germans
did not assert that all possibilities would necessarily be useful, however.
Table 5-8. Relationship of German and U.S. Evaluation Criteria.
The table pairs functionality classes F1-F10 (rows) with quality levels Q0-Q7 (columns):
o F1 at Q1 corresponds roughly to U.S. C1; F2 at Q2 to U.S. C2; F3 at Q3 to U.S. B1; F4 at
Q4 to U.S. B2; F5 at Q5 to U.S. B3; and F5 at Q6 to U.S. A1.
o Combinations with higher quality levels than these (the upper-right region of the table, up
through Q7) are beyond U.S. A1.
o F6 through F10 are new functional classes with no U.S. counterpart.
Another significant contribution of the German approach was to support evaluations by
independent, commercial evaluation facilities.
British Criteria
The British criteria development was a joint activity between the U.K. Department of Trade
and Industry (DTI) and the Ministry of Defence (MoD). The first public version, published in
1989 [DTI89a], was issued in several volumes.
The original U.K. criteria were based on the "claims" language, a metalanguage by which a
vendor could make claims about functionality in a product. The claims language consisted
of lists of action phrases and target phrases with parameters. For example, a typical
action phrase might look like this:
This product can [not] determine … [using the mechanism described in paragraph n of this
document] …
The parameters product and n are, obviously, replaced with specific references to the
product to be evaluated. An example of a target phrase is
… the access-type granted to a [user, process] in respect of a(n) object.
These two phrases can be combined and parameters replaced to produce a claim about a
product.
This access control subsystem can determine the read access granted to all subjects in
respect to system files.
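The way an action phrase and a target phrase combine can be mimicked with ordinary string
templates. The phrases below paraphrase the examples above, and the code is only an
illustration of the idea, not part of the U.K. scheme.

```python
from string import Template

# Action and target phrases from the claims language, with their parameters
# marked as template placeholders (illustrative only).
action = Template("This $product can determine ")
target = Template("the $access_type access granted to $subject "
                  "in respect of $object.")

def make_claim(product, access_type, subject, obj):
    """Combine an action phrase and a target phrase into a single claim."""
    return (action.substitute(product=product)
            + target.substitute(access_type=access_type,
                                subject=subject, object=obj))

claim = make_claim("access control subsystem", "read",
                   "all subjects", "system files")
# -> "This access control subsystem can determine the read access granted
#     to all subjects in respect of system files."
print(claim)
```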
The claims language was intended to provide an open-ended structure by which a vendor
could assert qualities of a product and independent evaluators could verify the truth of those
claims. Because of the generality of the claims language, there was no direct correlation of
U.K. and U.S. evaluation levels.
In addition to the claims language, there were six levels of assurance evaluation, numbered
L1 through L6, corresponding roughly to U.S. assurance C1 through A1 or German Q1
through Q6.
The claims language was intentionally open-ended because the British felt it was impossible
to predict which functionality manufacturers would choose to put in their products. In this
regard, the British differed from Germany and the United States, who thought
manufacturers needed to be guided to include specific functions with precise functionality
requirements. The British envisioned certain popular groups of claims being combined into
bundles that could be reused by many manufacturers.
The British defined and documented a scheme for Commercial Licensed Evaluation
Facilities (CLEFs) [DTI89b], with precise requirements for the conduct and process of
evaluation by independent commercial organizations.
Other Activities
As if these two efforts were not enough, Canada, Australia, and France were also working
on evaluation criteria. The similarities among these efforts were far greater than their
differences. It was as if each profited by building upon the predecessors' successes.
Three difficulties, which were really different aspects of the same problem, became
immediately apparent.
o Comparability. It was not clear how the different evaluation criteria related. A German
F2/E2 evaluation was structurally quite similar to a U.S. C2 evaluation, but an F4/E7 or
F6/E3 evaluation had no direct U.S. counterpart. It was not obvious which U.K. claims would
correspond to a particular U.S. evaluation level.
o Transferability. Would a vendor get credit for a German F2/E2 evaluation in a context
requiring a U.S. C2? Would the stronger F2/E3 or F3/E2 be accepted?
o Marketability. Could a vendor be expected to have a product evaluated independently in
the United States, Germany, Britain, Canada, and Australia? How many evaluations would a
vendor support? (Many vendors suggested that they would be interested in at most one
because the evaluations were costly and time consuming.)
For reasons including these problems, Britain, Germany, France, and the Netherlands
decided to pool their knowledge and synthesize their work.
ITSEC: Information Technology Security Evaluation Criteria
In 1991 the Commission of the European Communities sponsored the work of these four
nations to produce a harmonized version for use by all European Union member nations.
The result was a good amalgamation.
The ITSEC preserved the German functionality classes F1 through F10, while allowing the
flexibility of the British claims language. There is similarly an effectiveness component to the
evaluation, corresponding roughly to the U.S. notion of assurance and to the German E0
through E7 effectiveness levels.
A vendor (or other "sponsor" of an evaluation) has to define a target of evaluation (TOE),
the item that is the evaluation's focus. The TOE is considered in the context of an
operational environment (that is, an expected set of threats) and security enforcement
requirements. An evaluation can address either a product (in general distribution for use in
a variety of environments) or a system (designed and built for use in a specified setting).
The sponsor or vendor states the following information:
o system security policy or rationale: why this product (or system) was built
o specification of security-enforcing functions: security properties of the product (or system)
o definition of the mechanisms of the product (or system) by which security is enforced
o a claim about the strength of the mechanisms
o the target evaluation level in terms of functionality and effectiveness
The evaluation proceeds to determine the following aspects:
o suitability of functionality: whether the chosen functions implement the desired security
features
o binding of functionality: whether the chosen functions work together synergistically
o vulnerabilities: whether vulnerabilities exist either in the construction of the TOE or in how
it will work in its intended environment
o ease of use
o strength of mechanism: the ability of the TOE to withstand direct attack
The results of these subjective evaluations determine whether the evaluators agree that the
product or system deserves its proposed functionality and effectiveness rating.
Significant Departures from the Orange Book
The European ITSEC offers the following significant changes compared with the Orange
Book. These variations have both advantages and disadvantages, as listed in Table 5-9.
Table 5-9. Advantages and Disadvantages of ITSEC Approach vs. TCSEC.
New functionality requirement classes
o Advantages: surpasses the traditional confidentiality focus of the TCSEC; shows additional
areas in which products are needed.
o Disadvantages: complicates the user's choice.
Decoupling of features and assurance
o Advantages: allows a low-assurance or high-assurance product.
o Disadvantages: requires user sophistication to decide when high assurance is needed;
some functionality may inherently require high assurance but not guarantee receiving it.
Permitting new feature definitions; independence from specific security policy
o Advantages: allows evaluation of any kind of security-enforcing product; allows the vendor
to decide what products the market requires.
o Disadvantages: complicates comparison of evaluations of differently described but similar
products; requires the vendor to formulate requirements to highlight the product's features;
preset feature bundles are not necessarily hierarchical.
Commercial evaluation facilities
o Advantages: subject to market forces for time, schedule, and price.
o Disadvantages: the government does not have direct control of the evaluation; the
evaluation cost is paid by the vendor.
U.S. Combined Federal Criteria
In 1992, partly in response to other international criteria efforts, the United States began a
successor to the TCSEC, which had been written over a decade earlier. This successor,
the Combined Federal Criteria [NSA92], was produced jointly by the National Institute of
Standards and Technology (NIST) and the National Security Agency (NSA) (which formerly
handled criteria and evaluations through its National Computer Security Center, the NCSC).
The team creating the Combined Federal Criteria was strongly influenced by Canada's
criteria [CSS93], released in draft status just before the combined criteria effort began.
Although many of the issues addressed by other countries' criteria were the same for the
United States, there was a compatibility issue that did not affect the Europeans, namely, the
need to be fair to vendors that had already passed U.S. evaluations at a particular level or
that were planning for or in the middle of evaluations. Within that context, the new U.S.
evaluation model was significantly different from the TCSEC. The combined criteria draft
resembled the European model, with some separation between features and assurance.
The Combined Federal Criteria introduced the notions of security target (not to be confused
with a target of evaluation, or TOE) and protection profile. A user would generate
a protection profile to detail the protection needs, both functional and assurance, for a
specific situation or a generic scenario. This user might be a government sponsor, a
commercial user, an organization representing many similar users, a product vendor's
marketing representative, or a product inventor. The protection profile would be an abstract
specification of the security aspects needed in an information technology (IT) product. The
protection profile would contain the elements listed in Table 5-10.
Table 5-10. Protection Profile.
Rationale: protection policy and regulations; information protection philosophy; expected
threats; environmental assumptions; intended use.
Functionality: security features; security services; available security mechanisms (optional).
Assurance: profile-specific assurances; profile-independent assurances.
Dependencies: internal dependencies; external dependencies.
In response to a protection profile, a vendor might produce a product that, the vendor would
assert, met the requirements of the profile. The vendor would then map the requirements of
the protection profile in the context of the specific product onto a statement called
a security target. As shown in Table 5-11, the security target matches the elements of the
protection profile.
Table 5-11. Security Target.
Rationale: implementation fundamentals; information protection philosophy; countered
threats; environmental assumptions; intended use.
Functionality: security features; security services; security mechanisms selected.
Assurance: target-specific assurances; target-independent assurances.
Dependencies: internal dependencies; external dependencies.
The security target then becomes the basis for the evaluation. The target details which
threats are countered by which features, to what degree of assurance and using which
mechanisms. The security target outlines the convincing argument that the product satisfies
the requirements of the protection profile. Whereas the protection profile is an abstract
description of requirements, the security target is a detailed specification of how each of
those requirements is met in the specific product.
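One way to picture the pairing is as two parallel structures in which the security target
answers the protection profile point by point. The following dataclasses are a rough sketch
of that pairing (field names loosely follow Tables 5-10 and 5-11), not an official schema, and
the example values are invented.

```python
from dataclasses import dataclass, field

@dataclass
class ProtectionProfile:
    """Abstract statement of protection needs (after Table 5-10)."""
    expected_threats: list = field(default_factory=list)
    security_features: list = field(default_factory=list)
    assurances: list = field(default_factory=list)

@dataclass
class SecurityTarget:
    """Vendor's product-specific answer to a profile (after Table 5-11)."""
    countered_threats: list = field(default_factory=list)
    security_features: list = field(default_factory=list)
    assurances: list = field(default_factory=list)

def uncovered_threats(profile, target):
    """Threats the profile expects but the target does not claim to counter.
    A real evaluation argues this mapping in far more detail."""
    return [t for t in profile.expected_threats
            if t not in target.countered_threats]

profile = ProtectionProfile(expected_threats=["eavesdropping", "tampering"],
                            security_features=["labels", "audit"])
target = SecurityTarget(countered_threats=["eavesdropping"],
                        security_features=["labels", "audit"])
print(uncovered_threats(profile, target))   # ['tampering']
```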
The criteria document also included long lists of potential requirements (a subset of which
could be selected for a particular protection profile), covering topics from object reuse to
accountability and from covert channel analysis to fault tolerance. Much of the work in
specifying precise requirement statements came from the draft version of the Canadian
criteria.
The U.S. Combined Federal Criteria was issued only once, in initial draft form. After
receiving a round of comments, the editorial team announced that the United States had
decided to join forces with the Canadians and the editorial board from the ITSEC to produce
the Common Criteria for the entire world.
Common Criteria
The Common Criteria [CCE94, CCE98] approach closely resembles the U.S. Federal
Criteria (which, of course, was heavily influenced by the ITSEC and Canadian efforts). It
preserves the concepts of security targets and protection profiles. The U.S. Federal
Criteria were intended to have packages of protection requirements that were complete and
consistent for a particular type of application, such as a network communications switch, a
local area network, or a stand-alone operating system. The example packages received
special attention in the Common Criteria.
The Common Criteria defined topics of interest to security, shown in Table 5-12. Under
each of these classes, they defined families of functions or assurance needs, and from
those families, they defined individual components, as shown in Figure 5-24.
Table 5-12. Classes in Common Criteria.
Functionality classes: identification and authentication; trusted path; security audit;
invocation of security functions; user data protection; resource utilization; protection of the
trusted security functions; privacy; communication.
Assurance classes: development; testing; vulnerability assessment; configuration
management; life-cycle support; guidance documents; delivery and operation.
Figure 5-24. Classes, Families, and Components in Common Criteria.
Individual components were then combined into packages of components that met some
comprehensive requirement (for functionality) or some level of trust (for assurance), as
shown in Figure 5-25.
Figure 5-25. Functionality or Assurance Packages in Common Criteria.
Finally, the packages were combined into requirements sets, or assertions, for specific
applications or products, as shown in Figure 5-26.
Figure 5-26. Protection Profiles and Security Targets in Common Criteria.
Summary of Evaluation Criteria
The criteria were intended to provide independent security assessments in which we could
have some confidence. Have the criteria development efforts been successful? For some, it
is too soon to tell. For others, the answer lies in the number and kinds of products that have
passed evaluation and how well the products have been accepted in the marketplace.
Evaluation Process
We can examine the evaluation process itself, using our own set of objective criteria. For
instance, it is fair to say that there are several desirable qualities we would like to see in an
evaluation, including the following:
o Extensibility. Can the evaluation be extended as the product is enhanced?
o Granularity. Does the evaluation look at the product at the right level of detail?
o Speed. Can the evaluation be done quickly enough to allow the product to compete in the
marketplace?
o Thoroughness. Does the evaluation look at all relevant aspects of the product?
o Objectivity. Is the evaluation independent of the reviewer's opinions? That is, will two
different reviewers give the same rating to a product?
o Portability. Does the evaluation apply to the product no matter what platform the product
runs on?
o Consistency. Do similar products receive similar ratings? Would one product evaluated by
different teams receive the same results?
o Compatibility. Could a product be evaluated similarly under different criteria? That is, does
one evaluation have aspects that are not examined in another?
o Exportability. Could an evaluation under one scheme be accepted as meeting all or certain
requirements of another scheme?
Using these characteristics, we can see that the applicability and extensibility of the TCSEC
are somewhat limited. Compatibility is being addressed by combination of criteria, although
the experience with the ITSEC has shown that simply combining the words of criteria
documents does not necessarily produce a consistent understanding of them. Consistency
has been an important issue, too. It was unacceptable for a vendor to receive different
results after bringing the same product to two different evaluation facilities or to one facility
at two different times. For this reason, the British criteria documents stressed consistency of
evaluation results; this characteristic was carried through to the ITSEC and its companion
evaluation methodology, the ITSEM. Even though speed, thoroughness, and objectivity are
considered to be three essential qualities, in reality evaluations still take a long time relative
to a commercial computer product delivery cycle of 6 to 18 months.
Criteria Development Activities
Evaluation criteria continue to be developed and refined. If you are interested in doing
evaluations, in buying an evaluated product, or in submitting a product for evaluation, you
should follow events closely in the evaluation community. You can use the evaluation goals
listed above to help you decide whether an evaluation is appropriate and which kind of
evaluation it should be.
It is instructive to look back at the evolution of evaluation criteria documents, too. Figure
5-27 shows the timeline for different criteria publications; remember that the writing preceded
the publication by one or more years. The figure begins with Anderson's original Security
Technology Planning Study [AND72], calling for methodical, independent evaluation. To see
whether progress is being made, look at the dates when different criteria documents were
published; earlier documents influenced the contents and philosophy of later ones.
Figure 5-27. Criteria Development Efforts.
The criteria development activities have made significant progress since 1983. The U.S.
TCSEC was based on the state of best practice known around 1980. For this reason, it
draws heavily from the structured programming paradigm that was popular throughout the
1970s. Its major difficulty was its prescriptive manner; it forced its model on all
developments and all types of products. The TCSEC applied most naturally to monolithic,
stand-alone, multiuser operating systems, not to the heterogeneous, distributed, networked
environment based largely on individual intelligent workstations that followed in the next
decade.
Experience with Evaluation Criteria
To date, the military has paid close attention to criteria efforts, but those efforts have not
led to much commercial acceptance of trusted products. The computer security research
community is heavily dominated by defense needs because much of the funding for security
research is derived from defense departments. Ware [WAR95] points out the following
about the initial TCSEC:
o It was driven by the U.S. Department of Defense.
o It focused on threat as perceived by the U.S. Department of Defense.
o It was based on a U.S. Department of Defense concept of operations, including cleared
personnel, strong respect for authority and management, and generally secure physical
environments.
o It had little relevance to networks, LANs, WANs, Internets, client-server distributed
architectures, and other more recent modes of computing.
When the TCSEC was introduced, there was an implicit contract between the U.S.
government and vendors, saying that if vendors built products and had them evaluated, the
government would buy them. Anderson [AND82] warned how important it was for the
government to keep its end of this bargain. The vendors did their part by building numerous
products: KSOS, PSOS, Scomp, KVM, and Multics. But unfortunately, the products are now
only of historical interest because the U.S. government did not follow through and create the
market that would encourage those vendors to continue and other vendors to join. Had
many evaluated products been on the market, support and usability would have been more
adequately addressed, and the chance for commercial adoption would have been good.
Without government support or perceived commercial need, almost no commercial
acceptance of any of these products has occurred, even though they have been developed
to some of the highest quality standards.
Schaefer [SCH04a] gives a thorough description of the development and use of the
TCSEC. In his paper he explains how the higher evaluation classes became virtually
unreachable for several reasons, and thus the world has been left with less trustworthy
systems than before the start of the evaluation process. The TCSEC's almost exclusive
focus on confidentiality would have permitted serious integrity failures (as obliquely
described in [SCH89b]).
On the other hand, some major vendors are actively embracing low and moderate
assurance evaluations: As of May 2006, there are 78 products at EAL2, 22 at EAL3, 36 at
EAL4, 2 at EAL5 and 1 at EAL7. Product types include operating systems, firewalls,
antivirus software, printers, and intrusion detection products. (The current list of completed
evaluations (worldwide) is maintained at www.commoncriteriaportal.org.) Some vendors
have announced corporate commitments to evaluation, noting that independent evaluation
is a mark of quality that will always be a stronger selling point than so-called emphatic
assertion (when a vendor makes loud claims about the strength of a product, with no
independent evidence to substantiate those claims). Current efforts in criteria-writing
support objectives, such as integrity and availability, as strongly as confidentiality. This
approach can allow a vendor to identify a market niche and build a product for it, rather than
building a product for a paper need (that is, the dictates of the evaluation criteria) not
matched by purchases. Thus, there is reason for optimism regarding criteria and
evaluations. But realism requires everyone to accept that the market, not a criteria
document, will dictate what is desired and delivered. As Sidebar 5-7 describes, secure
systems are sometimes seen as a marketing niche: not part of the mainstream product line,
and that can only be bad for security.
It is generally believed that the market will eventually choose quality products. The
evaluation principles described above were derived over time; empirical evidence shows us
that they can produce high-quality, reliable products deserving our confidence. Thus,
evaluation criteria and related efforts have not been in vain, especially as we see dramatic
increases in security threats and the corresponding increased need for trusted products.
However, it is often easier and cheaper for product proponents to speak loudly than to
present clear evidence of trust. We caution you to look for solid support for the trust you
seek, whether that support be in test and review results, evaluation ratings, or specialized
assessment.
Sidebar 5-7: Security as an Add-On
In the 1980s, the U.S. State Department handled its diplomatic office functions
with a network of Wang computers. Each American embassy had at least one
Wang system, with specialized word processing software to create documents,
modify them, store and retrieve them, and send them from one location to another.
Supplementing Wang's office automation software was the State Department's
own Foreign Affairs Information System (FAIS).
In the mid-1980s, the State Department commissioned a private contractor to add
security to FAIS. Diplomatic and other correspondence was to be protected by a
secure "envelope" surrounding sensitive materials. The added protection was
intended to prevent unauthorized parties from "opening" an envelope and reading
the contents.
To design and implement the security features, the contractor had to supplement
features offered by Wang's operating system and utilities. The security design
depended on the current Wang VS operating system design, including the use of
unused words in operating system files. As designed and implemented, the new
security features worked properly and met the State Department requirements.
But the system was bound for failure because the evolutionary goals of VS were
different from those of the State Department. That is, Wang could not guarantee
that future modifications to VS would preserve the functions and structure required
by the contractor's security software. Eventually, there were fatal clashes of intent
and practice.