
IIIA Technical Paper 08-01
Toward an Organization for
Software System Security
Principles and Guidelines
Samuel T. Redwine, Jr.
Associate Professor of Computer Science
James Madison University
Copyright © 2008. The Institute for Infrastructure & Information Assurance at James Madison University, Harrisonburg, Virginia, USA
and the Individual Authors. All Rights Reserved. No part of this publication may be reproduced, stored in any retrieval system, or
transmitted in any form or by any means - electronic, mechanical, digital, photocopy, recording, or any other - except for brief
quotations in printed reviews, without the prior explicit written permission of the publisher, editors or respective author(s).
Contents Copyright © 2008 Samuel T. Redwine, Jr. All Rights Reserved
Author: Samuel T. Redwine, Jr.
Editor: Samuel T. Redwine, Jr.
Institute for Infrastructure and Information Assurance, James Madison University, IIIA Technical Paper 08-01
Version 1.00.113
Citation:
Samuel T. Redwine, Jr. Towards an Organization for Software System Security Principles and Guidelines version 1.0,
Institute for Infrastructure and Information Assurance, James Madison University, IIIA Technical Paper 08-01.
February 2008.
NO WARRANTY
THIS MATERIAL IS FURNISHED ON AN "AS-IS" BASIS. THE AUTHOR, CONTRIBUTORS, REVIEWERS,
ENDORSERS, AND OTHERS INVOLVED, AND EMPLOYERS; JAMES MADISON UNIVERSITY AND ALL
OTHER ENTITIES ASSOCIATED; AND ENTITIES AND PRODUCTS MENTIONED WITHIN THIS
DOCUMENT MAKE NO WARRANTIES OF ANY KIND, EITHER EXPRESSED OR IMPLIED, AS TO ANY
MATTER INCLUDING, BUT NOT LIMITED TO, WARRANTY OF FITNESS FOR PURPOSE OR
MERCHANTABILITY, EXCLUSIVITY, OR RESULTS OBTAINED FROM USE OF THE MATERIAL. NO
WARRANTY OF ANY KIND IS MADE WITH RESPECT TO FREEDOM FROM PATENT, TRADEMARK, OR
COPYRIGHT INFRINGEMENT.
Use of any trademarks in this guide is not intended in any way to infringe on the rights of the trademark holder.
References in this document to any specific commercial products, process, or service by trade name, trademark,
manufacturer, or otherwise, do not necessarily constitute or imply its endorsement, recommendation, or favoring by
any of the parties involved in or related to this document.
No warranty is made that use of the guidance in this document or from any site or supplier, which is the source of this
document, will result in systems or software that are secure or more secure. Examples are for illustrative purposes and
are not intended to be used as is or without undergoing analysis.
Copyright © 2008 Samuel T. Redwine, Jr. All Rights Reserved.
Towards an Organization for
Software System Security Principles
and Guidelines
Samuel T. Redwine, Jr.
Version 1.00, February 2008
Institute for Infrastructure and Information Assurance
IIIA Technical Paper 08-01
James Madison University
Table of Contents
0. INTRODUCTION
  0.1. PURPOSE
  0.2. SCOPE
  0.3. REASONING UNDERLYING THE ORGANIZATION
  0.4. ORGANIZATION OF REMAINDER OF DOCUMENT
1. THE ADVERSE
  1.1. LIMIT, REDUCE, OR MANAGE VIOLATORS
    1.1.1. Adversaries are Intelligent and Malicious
    1.1.2. Limit, Reduce, or Manage Set of Violators
    1.1.3. Limit, Reduce, or Manage Attempted Violations
    1.1.4. Think like an Attacker
  1.2. LIMIT, REDUCE, OR MANAGE BENEFITS TO VIOLATORS OR ATTACKERS
    1.2.1. Unequal Attacker Benefits and Defender Losses
    1.2.2. Limit, Reduce, or Manage Violators’ Ability to Exploit Success for Gain
  1.3. INCREASE ATTACKER LOSSES
    1.3.1. Limit, Reduce, or Manage Violators’ Ease in Taking Steps towards Fruitful Violation
    1.3.2. Increase Losses and Likely Penalties for Preparation
    1.3.3. Increase Expense of Attacking
    1.3.4. Increase Attacker Losses and Likely Penalties
  1.4. INCREASE ATTACKER UNCERTAINTY
    1.4.1. Conceal Information Useful to Attacker
    1.4.2. Exploit Deception
2. THE SYSTEM
  2.1. LIMIT, REDUCE, OR MANAGE VIOLATIONS
    2.1.1. Specify Security Requirements
    2.1.2. Limit, Reduce, or Manage Opportunities for Violations
    2.1.3. Limit, Reduce, or Manage Actual Violations
    2.1.4. Limit, Reduce, or Manage Lack of Accountability
  2.2. IMPROVE BENEFITS OR AVOID ADVERSE EFFECTS ON SYSTEM BENEFITS
    2.2.1. Access Fulfills Needs and Facilitates User
    2.2.2. Encourage and Ease Use of Security Aspects
    2.2.3. Articulate the Desired Characteristics and Tradeoff among Them
    2.2.4. Efficient Security
    2.2.5. Provide Added Benefits
    2.2.6. Learn, Adapt, and Improve
  2.3. LIMIT, REDUCE, OR MANAGE SECURITY-RELATED COSTS
    2.3.1. Limit, Reduce, or Manage Security-Related Adverse Consequences
    2.3.2. Limit, Reduce, or Manage Security-Related Expenses across the Lifecycle
  2.4. LIMIT, REDUCE, OR MANAGE SECURITY-RELATED UNCERTAINTIES
    2.4.1. Identify Uncertainties
    2.4.2. Limit, Reduce, or Manage Security-Related Unknowns
    2.4.3. Limit, Reduce, or Manage Security-Related Assumptions
    2.4.4. Limit, Reduce, or Manage Lack of Integrity or Validity
    2.4.5. Limit, Reduce, or Manage Lack of Reliability or Availability of Security-related Resources
    2.4.6. Predictability – Limit, Reduce, or Manage Unpredictability of System Behavior
    2.4.7. Informed Consent
    2.4.8. Limit, Reduce, or Manage Consequences or Risks related to Uncertainty
    2.4.9. Increase Assurance regarding Product
3. THE ENVIRONMENT
  3.1. NATURE OF ENVIRONMENT
    3.1.1. Security is a System, Organizational, and Societal Problem
    3.1.2. The Conflict Extends beyond Computing
    3.1.3. New Technologies Have Security Problems
  3.2. BENEFITS TO AND FROM ENVIRONMENT
    3.2.1. Utilize Security Mechanisms Existing in Environment to Enhance One’s Security
    3.2.2. Create, Learn, and Adapt and Improve Organizational Policy
    3.2.3. Learn from Environment
    3.2.4. Help, but do not Help Attackers
  3.3. LIMIT, REDUCE, OR MANAGE ENVIRONMENT-RELATED LOSSES
    3.3.1. Do Not Cause Security Problems for Systems in the Environment
    3.3.2. Do Not Thwart Security Mechanisms in Environment
    3.3.3. Avoid Dependence
    3.3.4. Presume Environment is Dangerous
  3.4. LIMIT, REDUCE, OR MANAGE ENVIRONMENT-RELATED UNCERTAINTIES
    3.4.1. Know One’s Environment
    3.4.2. Limit, Reduce, or Manage Trust
    3.4.3. Ensure Adequate Assurance for Dependences
    3.4.4. Third-Parties are Sources of Uncertainty
4. CONCLUSION
5. APPENDIX A: PRINCIPLES OF WAR
6. APPENDIX B: PURPOSE-CONDITION-ACTION-RESULT MATRIX
7. BIBLIOGRAPHY
8. ACKNOWLEDGEMENTS
0. Introduction
0.1. Purpose
Principles and guidelines for software system[1] security have originated variously over thirty-plus years, and their
authors have tended to provide flat lists occasionally organized topically, by major lifecycle stages, or by the author’s
judgment of importance. The result is hundreds of items whose relationships to each other are unclear and therefore
hard to systematically learn, remember, and teach. Therefore, this report contains a set of software system security
principles and guidelines organized in a logical, in-depth fashion. As well as providing coherence, the structure
provides grounds for arguing completeness – at least at the higher levels.
This is the first highly organized presentation of such a comprehensive set of principles and guidelines. Its structure
emphasizes how they relate to each other. The organization aims to start with the most basic, abstract, or inclusive ones
and recursively identify the ones that are logically subordinate to each – generally as parts or causes of them. Thus, it
aims to begin to bring needed coherence and intellectual manageability to the area.
I was inspired by the insistence of first Matt Bishop and then by the participants in the August 17-18, 2006 Software
Assurance Common Body of Knowledge Workshop at the Naval Postgraduate School in Monterey, CA.[2] The result was that I agreed to lead and serve as editor/author for an effort by the participants and a small – but open – set of interested
parties. Soon, any intended editorship folded into actual authorship.
Generally, the principles and guidelines are stated from the viewpoint of the stakeholders with interests in
adequate/better system security. They and related educators, trainers, consultants, and other interested parties are the
intended audience for this document. Some background in computer or software security would be useful for better
understanding and appreciating this document. The viewpoint is often similar to the viewpoints of software system
analysts, designers, or their clients, but some may be more relevant to others such as law or policy makers. However,
no limitation is placed on the kinds of security-supportive stakeholders that might use the principles or guidelines.
The organization of items in this document was constructed to bring increased intellectual coherence and
understandability to the area of software system security principles and guidelines. I hope readers will find this report a
useful step toward doing this as well as serving as a basis for systematically structured learning and teaching. Some
may also find it useful – in whole or part – as a checklist.[3]
0.2. Scope
What various authors have meant by the words “principle” and “guideline” has often been unclear, and this report will
not attempt to draw any boundaries. Characteristically, each item included in this report is essentially intended to be an
‘ideal’ and usually contains guidance (possibly implied) for certain aspects of a secure software system or how
activities related to systems and/or software should be performed. These principally relate to producing software
systems with significant emphasis on their design, but many others relate to other parts of the lifecycle.
This report emphasizes structure while leaving out detailed explanations or definitions for many individual principles
or guidelines.[4]
Semantically, the principles or guidelines vary from absolute commands to conditional suggestions. However, following the long tradition within computing security started in Saltzer and Schroeder’s seminal 1975 article, the list below is often in noun phrases rather than commands or suggestions. Generally, you can implicitly place one of the following in front of each noun phrase:
• Strive to
• Use
• Find and use

[1] No sharp boundary exists between principles relevant to software vs. systems. The medium used to provide some feature or functionality is not relevant at the level of principles and of most guidelines in this document. It is only mentioned when it is relevant, for example in guidelines regarding software code.
[2] While many sources were used, this effort originated from two sources: (1) a set of principles or guidelines compiled by Redwine, appearing first topically organized in a 2005 technical report [Redwine 2005] and later in the common body of knowledge [Redwine 2006]; (2) urgings from reviewers of early drafts of Software Assurance to bring a sound organization to these, allowing teaching and learning starting from the – then yet to be identified – first principles.
[3] Collections exist of general design principles, computer science principles, and software engineering principles; a few of these also appear herein.
[4] Many elaborations or definitions exist elsewhere, for example in [Benzel 2005], [Redwine 2005], [Berg 2006], and [Redwine 2006], as well as in the other items in the bibliography. To aid understanding, a number of short explanations or definitions of principles or guidelines are included herein.
At the higher levels, completeness is a goal, and the items listed aim for intellectual coherence and completeness of
coverage. At the middle and lower levels an attempt was made to include a substantial number of items derived from a
variety of sources – in part to verify the completeness of the higher levels.[5] However, no attempt was made to include
every item anyone ever labeled a “principle” much less a “guideline”. In addition, technology or domain specific items
were generally excluded.
0.3. Reasoning Underlying the Organization
The ultimate goal of systems or software security is to minimize real-world, security-related adverse consequences.[6] In
practice this often means to limit, reduce, or manage security-related losses or expenses. These tend to fall into three
categories.
• Adverse consequences resulting from security violations
• Adverse effects on system benefits because of security requirements’ effects on the system
• Additional security-related developmental and operational expenses and time
A logical approach might be to start with these.
However, readers found an earlier presentation starting from this ultimate point and working backward – roughly equal to going right to left across Figure 1 – unnatural and hard to follow. So, this report is in a more natural order, starting with the existence of potential security violators and software system development and working through to the ultimate consequences – roughly left to right.

[Figure 1: Simplified Security "Cause and Effect" Diagram]

It starts with attackers and their attempted attacks, and the opportunities for successful or partially successful attacks provided by the system and possibly facilitated by its environment.[7] Adequately performed attacks via opportunities offered by the system are the usual basis for successful attacks on computing systems. These must then be exploited for gain by the attacker while the defenders try to limit their losses.
[5] Due to the process used by some of the secondary sources and the emphasis in this report on organization, some principles or guidelines are individually labeled as to their source of publication (possibly secondary), but most are not. However, an attempt was made to ensure that all the sources used are listed in the bibliography. The author welcomes help in better identifying origins.
[6] Providing possible security-related positive benefits, e.g. valuable privacy, can also be a goal, but a generally less important one. This is covered within the body of the document.
Software system security is an ongoing conflict involving a variety of stakeholders. As partially reflected in Figure 1, high-level goals for security-supportive stakeholders might include:
1. Fewer and less capable or less motivated sources of danger
2. Fewer and less serious attempts to violate security
3. Fewer and less serious opportunities or preconditions for violations throughout the lifecycle
4. Dealing adequately with attempted and actual violations
5. Limiting attacker gains
6. Limiting adverse consequences resulting from security violations
7. Providing security-related benefits (e.g. privacy-related ones)
8. Limiting adverse effects on system benefits because of security requirements’ effects on the system
9. Limiting additional security-related developmental and operational expenses and time
10. Not harming the environment
11. Taking advantage of the environment without overdependence
12. Managing trust
These goals have associated concerns for security-related costs, benefits, and uncertainties. The more detailed specifics
vary over time and the life of a system.
All secure software system goals and issues fall naturally into three streams flowing through time and their interactions
and changes (including variations over time). These streams[8] will be labeled:
• The adverse – related to malicious and non-malicious attempts to violate security or exploit violations (Goals 1, 2, 5)
• The system – includes the “software system” of interest and security-supportive stakeholders (Goals 3, 4, 6, 7, 8, 9)
• The environment – the milieu and connecting infrastructure in which the system and its stakeholders, including attackers, operate (Goals 10, 11, 12)
This last is the context in which the conflict takes place. You might roughly think of these three as the offense, the
defense, and the arena of conflict. Thus, the security principles and guidelines coverage includes the following along
with their related uncertainties, expenses, and consequences
• Potential and actual violators and their activities
• System conception, development, deployment, operation, evolution, changing ownership or control, and retirement and disposal, including the opportunities the system presents for security violations
• Aspects of environments with implications for security-related interactions across the software system’s lifespan and possibly beyond
Entities in all three streams have security-related costs, benefits, and uncertainties – in particular, these involve dangers
and damage because, “Software systems are in danger all their lives.”[9]
Starting at the top, the organization used for principles and guidelines first divides by these three streams – the adverse,
system, and environment – and then, within each stream, its second level is divided into four parts. The first of these
four second-level parts covers the stream’s key entities or phenomena:
1. Key entities or phenomena’s relevant size or nature
[7] Such opportunities are commonly called vulnerabilities.
[8] Readers who prefer to can also think of these as currents or portions within a single all-encompassing stream.
[9] The phrase, “Software is in danger all its life,” is attributed to Sam Redwine.
Completing the second level, parts two through four fill out the division of each stream with additional parts motivated by the desire for benefits, dislike of losses, and yearning for confidence – namely:
2. Benefits
3. Losses
4. Uncertainty
Below the second level, the organizational structure emphasizes cause-and-effect and whole-part relationships.
In these lower levels, several themes frequently recur across the three streams and their four second-level divisions.
These themes involve the goals listed above and other aspects including fewer and less capable or less motivated
sources of problems, fewer and less serious attempts, fewer opportunities for undesirable events, avoiding
preconditions for undesirable events or losses (and sequences of conditions and acts leading to these), limiting damage,
having awareness, and learning and improving. Occasionally, the same item may have multiple motivations or embody
multiple themes and appear in multiple places in this document’s hierarchical presentation of the structure.
Thus, the principles and guidelines often concern limiting (sometimes at the top and sometimes at the bottom),
increasing, reducing, or managing motivations, capabilities, attempts, or chances or opportunities for something and
related benefits, losses, and/or uncertainties. One example of this is the organization of principles and guidelines related
to violations.
Each stream involves sets of entities – potentially overlapping. Usually these can be identified by where their interests
or desires lie, for example stakeholders with interests in or supportive of adequate/better system security would belong
(at least) in the system stream.
Figure 1 gives a simplified conceptual view with a system (blue) and its adverse (red) interacting on a green field of
conflict and suggesting a flow of “time” or cause and effect – going generally left to right.
To achieve benefits, avoid losses, and have confidence in these, the entities in each stream need the proper
• wherewithal and
• decision-making
to use it successfully. Generally, better decisions result from possession of the relevant kinds of information and from it having less uncertainty. In addition, the right information having lower uncertainty allows decisions to be arrived at more quickly and easily (and with less anxiety). Thus, they have needs for
• Relevant information with
• Less uncertainty
with these combining to make conclusions with adequately low uncertainty suitable for the decisions stakeholders need to make.

[Figure 2: The Three Streams Oversimplified]
Entities in a stream also want the other two streams to facilitate and not harm them. When conflicts are involved, this
usually means they also wish to hinder and discourage their opponents and to find, cooperate with, and encourage
allies. Thus, the needed wherewithal also depends on other entities and conditions beyond one entity’s control and
includes tangibles and intangibles such as morale, mental agility, persistence, and discipline. Unsurprisingly, these
motivations and needs of the entities involved, along with their nature and the nature of their situations and often-competitive relationships, influence the principles and guidelines that are organized below.
As previously stated, below the second level, the organization emphasizes cause-and-effect and whole-part
relationships.
0.4. Organization of Remainder of Document
Separate numbered sections cover each of the three streams:
1. The Adverse – Emphasizes violators, violator gains, and attempted violations
2. The System – Emphasizes opportunities for violations, violations, and potential and actual losses
3. The Environment – Emphasizes environment of conflict, dependence on environment, and trust
One aspect of the boundary between the first section (The Adverse) and the second (The System) is that it is also the boundary between attempted violations and actual violations.
As stated above, each of the three streams’ sections is organized into four numbered subsections concerned with:
1. Number, size, or amount of the key entities, phenomena, opportunities, or events involved
2. Benefits
3. Losses
4. Uncertainties
For example, subsection 2.4 concerns uncertainties related to the system and its supportive stakeholders. The document
ends with a Conclusion; two appendices, one on the principles of war and another that shows some of my underlying
thinking; and a Bibliography.
At the higher levels, the individual principles and guidelines often concern limiting, reducing, or managing[10] the amount or kind of offense, defense, or environment events, attributes, or aspects.
The organization is general, and principles and guidelines that a particular kind of reader (e.g. designer) would find
most relevant may be next to ones he or she might not find directly relevant. This is particularly true in section 1 on
The Adverse, and readers may need to move quickly over any irrelevant ones to the ones beyond. Readers mainly
interested in the internals of systems may want to start with section 2 on The System and return to section 1 after
reading sections 2 and 3. In all cases, for full understanding readers need to read all three sections – 1, 2, and 3. In
many cases, the reader’s richness of understanding of a principle or guideline can be enhanced by the awareness of
certain other ones.
The organization of items in this document is constructed to bring increased intellectual coherence, manageability, and
understandability to the area of software system security principles and guidelines. The two characteristics explicitly
sought are completeness and coherence, thereby easing understanding, learning, and use, including in professional learning, practice, policy making, education, and training.[11]
[10] For brevity, sometimes in discursive text only one of these three words is used even though more might apply. Note that “reduce” only applies if something to reduce already exists.
[11] Please send any suggestions for improvements, particularly discussion of gaps and additional “principles” or guidelines, and their sources, to the author at redwinst@jmu.edu.
1. The Adverse
The adverse includes malicious and non-malicious sources of danger or threat. This section addresses what the different
stakeholders interested in good security need to know and might do about these sources and their thinking, activities,
and gains.
Following the organization’s scheme for the second level, subsections exist concerning (1) amount, nature, and
activities of violators, and their (2) benefits, (3) losses, and (4) uncertainties. The important goals of these subsections’ principles and guidelines include:
• Fewer and less capable or less motivated sources of danger
• Fewer and less serious attempts to violate security
To repeat, no limitation is placed on which kind of security-supportive stakeholder might have these goals or use a
principle or guideline. For some items, the natural users might be system developers, and for others they might be
lawmakers or enforcers. Readers with specialized or mainly technical interests should be careful to not stop reading or
reject the document as a whole because they find several entries irrelevant or feel, “I could never do anything about
that.”
1.1. Limit, Reduce, or Manage Violators
Violators are the immediate source of a security violation. Their violations may be intentional or unintentional,
malicious or non-malicious, and they may be intrinsically or extrinsically motivated and their own agent or the agent of
others – voluntarily or involuntarily. These violators may act anywhere in the lifecycle and may be outsiders or insiders
with a variety of roles including developer or user. In any case, reducing their number and influencing their
motivations, capabilities, intentions, and behavior can reduce the danger to a system. In addition, the system acting
alone may cause violations. This section discusses violators, their characteristics, and influencing or affecting them.
1.1.1. Adversaries are Intelligent and Malicious
The dangers faced may come from a number of sources including intelligent, skilled adversaries. When possession,
damage, or denial of assets would be highly valuable to someone else, that value could justify bringing considerable skill and resources to bear to cause them. Whenever poor software security makes these actions relatively easy and risk free, even lower-value targets may become attractive, either en masse or individually.
One cannot simply use a probabilistic approach to one’s analyses because, for example, serious, intelligent opponents
tend to attack where and when one least expects them – where the estimate of the probability of such an attack is
relatively low.[12]
1.1.1.1. Attackers Vary Widely
Attackers vary in nature, motivation and objectives, capability, resources, persistence, and whether
targeting specifically or targeting more generally.
1.1.1.1.1. System Suitable or Fit for Use
A system needs to be suitable and fit for its uses in its environment given the attackers and other
dangers it will face.
1.1.1.1.2. Both Human and Automated Attackers Exist
Normally, one needs to be prepared for both. This is especially so if one has something of significant
value to attackers.
1.1.1.1.2.1. Armies of Bots
An attack might use a large number of people, especially where people are less expensive. However, large groups of automated attackers, usually spread over an equal number of machines, are generally larger, cheaper, and faster.
[12] In theory, game theory techniques that do not require the probabilities to be known could be applicable, but limited progress has been made towards doing this.
1.1.1.2. Everything is Possible
Every possibility is not just possible but also relevant. Beware of all possibilities inside or outside the
computing system as beliefs that things are unlikely can be exploited for surprise. Thus, probabilistic
analyses do not always apply.
Murphy’s Law can be deliberately malicious. Murphy’s Law in its non-malicious form still applies,
but attackers can be quite intelligent, creative, and malicious in imitating it.
1.1.1.2.1. Correctness not Statistical Reliability
While this principle or guideline about the system should and does appear elsewhere under The
System, it is such an important corollary of “Everything is Possible” that it is included here as well.
1.1.1.3. Potential for significant benefits to attackers will attract correspondingly capable attackers
This includes attack and concealment capabilities, resources, and persistence as well as an ability to
further exploit gains. Some nation states and criminal enterprises have considerable abilities and
successful experience.
1.1.1.3.1. Do not Presume all Attackers Weak or Unable to Improve
Obviously, such presumptions might be dangerous. Gathering evidence and arguing convincingly that
all will be weak are also hard.
1.1.1.4. Do Not Fight the Last War
This means do not plan to fight as if the current or upcoming war will be like the prior war – a
frequent mistake made by nations that lose (at least initially). Take the initiative and plan to do
software system security as the situation will be and not as it was.
1.1.2. Limit, Reduce, or Manage Set of Violators
As mentioned above, violators are the source of violations. While violations may arise from self-initiated, intrinsic
behavior of the software system itself, the violators of interest here are generally people and organizations, possibly
acting through bots or other systems. Limiting and influencing them can be beneficial. This might be done in a number
of ways.
1.1.2.1. Insiders and Users
1.1.2.1.1. Ensure Security Awareness
1.1.2.1.2. Have an Acceptable Use Policy
1.1.2.1.3. Ensure Users Know Acceptable Use Policy (and what is Abuse)
1.1.2.1.4. Ensure Users can and do Use System Properly
1.1.2.1.5. Limit or Reduce Number of Malicious Insiders
For example, this might be done through good personnel practices and not giving cause for disgruntlement.
1.1.2.1.5.1. Counter-intelligence and Mole Prevention and Hunting
In addition to traditional police, investigative, and counter-intelligence techniques, monitoring and
analyzing computing-related behaviors can help identify suspects and confirm suspicions.
1.1.2.1.5.2. Software that is malicious or susceptible to subversion is as dangerous as humans who are malicious or susceptible to subversion
Such software may already be inside. Some may be unintentionally dangerous software.
1.1.2.1.6. Software or System can Cause Violations without outside Intervention
As has been shown repeatedly in regard to safety, systems and software can behave dangerously and
cause damage without involving outside agency or intervention. This has often been characterized as
primarily a result of complexity.
1.1.2.1.7. Tools can Cause Vulnerable Products
This has long been an issue in safety, and a classic Turing Award lecture – Thompson’s “Reflections on Trusting Trust” – discusses malicious compilers.
Tools may do this without involving an outside agency or intervention. Likewise, this has been
characterized as primarily a result of tool complexity and poor reliability, but it could be deliberate.
1.1.2.2. Limit, Reduce, or Manage Set of Attackers
1.1.2.2.1. Limit, Remove, and Discourage Aspiration to be an Attacker
One might
• Encourage and reward good ethics and law-abiding behavior
• Minimize attractiveness of being an attacker such as by limiting suggestive stimuli; role models; and apparent ease, benefits, and lack of risk
• Deter
1.1.2.2.2. Prevent or Discourage Each Step towards Becoming an Attacker
1.1.2.2.3. Hinder and Break Up Attacker Alliances, Association Networks, and Communications
1.1.2.2.4. Block Proliferation
1.1.2.2.5. Discourage Others Motivating Someone to Attempt to Violate
Discourage recruitment or coercion, and support by industry (industrial espionage), organized crime,
etc.
1.1.2.2.6. Require Greater Capability, Resources, or Persistence to Violate
Raising these requirements can exclude persons or organizations not meeting them from the set of
violators as well as deter them.
1.1.3. Limit, Reduce, or Manage Attempted Violations[13]
This is the ultimate goal of much of section 1. Relevant concerns covered here or elsewhere in the section include
motivation, intent, and capability.
1.1.3.1. Discourage Violations
1.1.3.1.1. By Non-malicious Humans
1.1.3.1.1.1. Ease Secure Operation
1.1.3.1.1.2. Acceptable Security
1.1.3.1.1.3. Psychological Acceptability
1.1.3.1.1.4. Ergonomic Security
1.1.3.1.1.5. Sufficient User Aids
Documentation, help facilities, training, help desk services, and other such user aids all need to
support and ease security-related learning and interactions.
1.1.3.1.1.6. Administrative Controllability
This is addressed in [Berg 2006].
• Provide easy-to-use administrative tools
• Provide for decentralized administration or application of security policy
• Define privileges orthogonally
For simplicity, define privileges using orthogonal aspects. Permissions for resources should be defined to be orthogonal. Also, see fine-grained privileges.
[13] Reduce number and severity.
1.1.3.1.1.7. Manageability
Ease use of security aspects at the enterprise level. Particularly ensure ease of updating.
1.1.3.1.2. By Malicious Humans
1.1.3.1.2.1. Exploit Deception and Hiding
Appear less attractive, hinder, misinform, waste or cost attacker resources and time, and gather
intelligence, for example by a honeynet [Rowe 2004a], [Rowe 2004b], and [Cohen 2001]. The Soviet
Maskirovka approach is potentially an attractive framework. Several lists of general principles of
deception exist in the literature and military manuals, but for brevity are not listed here.
1.1.3.1.2.2. Design Attacker’s Experience
This allows a systematic approach to exploiting deception, hiding, and hindering the attacker and wasting
or costing attacker resources and time. One designs the experience of legitimate users to create an
overall effect, so do the same for the illegitimate users.
1.1.3.1.2.3. Deter Attackers
Make the possible negative consequences for an attack appear significant enough to affect attacker
decisions to attack.
1.1.3.1.2.4. Low Attractiveness
Minimize attractiveness of attempts such as might be created by suggestive stimuli; examples; and
apparent ease, benefits, and lack of risk.
1.1.3.1.2.4.1. Appear less attractive than other potential victims
Let others be attacked first and possibly provide warning to you.
1.1.4. Think like an Attacker
To better understand and predict what may happen and better design and prepare for it, one needs to be able to think
like one’s opponents. This is true here as in all conflict.
1.1.4.1. Security Mechanisms can be Attack Mechanisms
Mechanisms to control access may be exploitable in a denial of service attack. For example, allowing
only three incorrect tries of passwords and then rejecting further attempts (for some time period) can
be used to deny service by simply making three tries with random passwords. Another example is the
wiretapping mechanisms intended for use by police placed in an Athens, Greece mobile telephone
system being used by others for illegal wiretapping.[14]
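As an illustration of the lockout example above, the sketch below shows one common mitigation: throttling failed attempts per account-and-source pair with an increasing delay rather than hard-locking the account, so an attacker submitting a few bad passwords cannot deny the legitimate user access. The class name and thresholds are illustrative assumptions, not part of the original guideline.

```python
# Illustrative sketch (not from the original paper): per-(account, source)
# exponential backoff instead of a hard account lockout, reducing the chance
# that the lockout mechanism itself becomes a denial-of-service tool.
import time
from collections import defaultdict

class LoginThrottle:
    def __init__(self, base_delay=1.0, max_delay=300.0):
        self.base_delay = base_delay            # first delay after a failure, in seconds
        self.max_delay = max_delay              # cap on the imposed delay
        self.failures = defaultdict(int)        # (account, source) -> consecutive failures
        self.blocked_until = defaultdict(float)

    def may_attempt(self, account, source):
        """True when this (account, source) pair is not currently delayed."""
        return time.time() >= self.blocked_until[(account, source)]

    def record_failure(self, account, source):
        key = (account, source)
        self.failures[key] += 1
        delay = min(self.base_delay * 2 ** (self.failures[key] - 1), self.max_delay)
        self.blocked_until[key] = time.time() + delay

    def record_success(self, account, source):
        # A successful login clears the history for that pair only.
        self.failures.pop((account, source), None)
        self.blocked_until.pop((account, source), None)
```

Because the delay is keyed to the requesting source as well as the account, a few random-password tries from an attacker do not lock out the account’s owner, though a determined attacker with many sources can still impose some cost; the tradeoff still deserves the attacker’s-eye analysis this subsection recommends.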
1.2. Limit, Reduce, or Manage Benefits to Violators or Attackers
Serious attackers ultimately attack to obtain some gain, so limiting their gain is an obvious way to discourage them.
The values for the same events may be different for attackers and other stakeholders. Reducing the value of success to
attackers is the issue here.
1.2.1. Unequal Attacker Benefits and Defender Losses
Often, this is not a zero-sum conflict where losses of one side equal the gain of the other. The utility functions and
situations are not the same for both.
[14] For information on the Athens eavesdropping case, see IEEE Spectrum, October 2007.
1.2.1.1. Attacker’s context is different
Attackers have different world views, skill sets, and ways of thinking from defenders or other
stakeholders as well as differing among themselves. Sometimes this derives from different lives,
ethics, affiliations, and roles.
1.2.2. Limit, Reduce, or Manage Violators’ Ability to Exploit Success for Gain
Generally, serious attackers need to take further steps to exploit the immediate result of a successful attack – such as
disclosure, data corruption, or denial of service – for gain, for example monetary gain from blackmail or from use of a competitor’s research results, the arrest of a spy, or success in battle. Making these follow-on steps more difficult or less rewarding may affect the attacker’s gain and possibly even discourage an attack. Sometimes, this may also decrease the
losses to supportive stakeholders.
1.3. Increase Attacker Losses
Losses can take a variety of forms including increased expenses, delays, surveillance, restrictions, and punishments.
1.3.1. Limit, Reduce, or Manage Violators’ Ease in Taking Steps towards Fruitful Violation
Make the required steps slow, difficult, and expensive – discouraging or otherwise hindering the use and success of the attacker’s capability, resources, or persistence.
1.3.1.1. Detect and hinder scouting and pre-attack information collection
Detect but do not respond (at least not truthfully) to attempts to footprint the system or otherwise
suspiciously gather information. Deceiving scouts and creating favorable misleading impressions has
a long and honorable history.
1.3.1.2. Hinder entities suspected of bad intent
1.3.1.2.1. Block Sources of Prior Attempts
Doing this without causing problems to legitimate users may require careful policies and judgments.
1.3.2. Increase Losses and Likely Penalties for Preparation
This might relate to planning, conspiracies, recruitment, aiding and abetting, and information collection and scouting.
1.3.3. Increase Expense of Attacking
Try to increase the resources and time an attacker expends before success – or better yet failure. Emotional degradation
of the attacker and of his or her will, confidence, and competence are also useful objectives.
1.3.4. Increase Attacker Losses and Likely Penalties
Deterrence and removal of attackers (e.g. judicially) can both be useful.
1.3.4.1. Adequate detection and forensics
Unless a violation is detected and adequately understood, adequate response and dynamic loss
limitation are difficult. In addition, proper situational awareness and understanding may not be
possible and false confidence may result. Finally, countermeasures toward and punishment of the
source are unlikely.
1.3.4.2. Rapid violation detection and violator identification
Just as for a police detective, the system and stakeholders can take more effective action and better
prevent future attacks or adverse consequences when the violation or preparatory step towards a
violation is quickly detected and the actual or potential violator identified.
1.3.4.2.1. Prevent or Limit Attacker Anonymity
1.3.4.3. Enforce policies regarding violations and abuses
Continued violations are more likely and deterrence weakened if known violators are not punished.
This is true whether the violator is an insider or outsider. However, one needs to be careful about
exposing weaknesses or significant potential rewards to other potential violators.
1.3.4.4. Follow and provide support for legal processes that lead to successful prosecution
The tradeoff with an entity’s time, success, and expense often makes this a questionable path. However,
ultimately for most it is the only avenue open for taking the fight to the opponent, particularly the
outsider.
1.4. Increase Attacker Uncertainty
Keep attackers uninformed – or possibly misinformed – restricting information useful to them. Make apparent
uncertainties high regarding ease and benefits thereby influencing attackers’ decisions (both potential and actual
attackers). The one possible exception to this is in deterrence where one may better deter an attack by reducing
potential attackers’ uncertainties and convincing them that retaliation or other adverse consequences will follow an
attack.
1.4.1. Conceal Information Useful to Attacker
1.4.1.1. Conceal Information from Intelligence Gatherers
1.4.1.1.1. Conceal Information from Scanning or Footprinting
1.4.1.1.2. Secure Directories and Other Online Sources of Information about System
1.4.1.1.3. Do Not Publish Information about System
1.4.1.1.4. Do Not Place Sensitive Information about System in Insecure Environment
One example could be an insecure internal, contractor, or beta test environment.
1.4.1.2. Do not reveal information useful to attacker in error messages
1.4.1.2.1. Upon access denial, provide no information useful to an attacker
See [Berg 2006, p.187].
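A minimal sketch of this guideline follows, assuming a simple username/password login; the function names and the use of a dictionary as the credential store are illustrative assumptions. The point is that every failure path returns the same generic message, while the specific reason is kept in server-side logs for defenders.

```python
# Illustrative sketch: the caller cannot tell whether the username exists or
# the password was wrong; both failures produce the same response.
import hashlib, hmac, logging

def _digest(password: str) -> bytes:
    # Simplified for this sketch; see the salted, slow one-way function sketch
    # under "Identification Verification Data Protected By One-Way Function"
    # for how verification data should really be stored.
    return hashlib.sha256(password.encode()).digest()

def login(username: str, password: str, users: dict) -> str:
    """users maps username -> stored digest."""
    stored = users.get(username, _digest("!dummy!"))   # dummy keeps the work similar
    if username in users and hmac.compare_digest(stored, _digest(password)):
        return "Login succeeded."
    logging.info("authentication failure for %r", username)  # detail stays server-side
    return "Login failed: invalid username or password."
```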
1.4.1.3. Do not reveal metadata useful to attacker
This may include implicit information such as the existence of the data as well as explicit metadata.
1.4.1.3.1. Actionability
Mere possession of bits (ones and zeros) or characters may not be enough for an attacker to
successfully take actions based on them. Exploitation may require decryption, metadata, or
understanding the meaning or context of the data. In addition, some way to exploit them for gain or to
cause harm must exist – now or in the future. For example, some data such as troop locations may
rapidly decay in actionability as time passes.[15]
1.4.2. Exploit Deception
While deception should never be relied on exclusively, it has useful purposes such as aiding
concealment or obfuscation, gathering intelligence, misleading attackers for purposes of Maskirovka,
deterring or confusing them, or causing them to waste their time and resources. While it provides no
guarantees, deception can be as useful in computing security as it is in many other conflict situations.
[15] For example, techniques with “one-time” in their name often limit what is actionable and the possibilities for exploitation.
2. The System
This section addresses the “software system” (system and/or software) under consideration, and security-supportive
stakeholders and their interests. It includes concern for system conception, development, deployment, operation,
evolution, changing ownership or control, and retirement and disposal. It emphasizes opportunities for violations,
violations, detecting and dealing with them, and potential and actual losses as well as having a subsection on benefits.
2.1. Limit, Reduce, or Manage Violations
This is one of the key objectives of security. So not surprisingly, it appears near the top of the hierarchy as an
organizing principle. Obviously to do this, an unambiguous definition of what is (or is not) a violation needs to exist.
This is done by establishing and documenting security-relevant system requirements and unambiguously specifying
constraints or policy defining legitimate (or illegitimate) system behaviors.
In this section, two prominent themes are (1) to limit, reduce, or manage opportunities or preconditions for violations
throughout the lifecycle and all along attack paths from attempts onward (see coverage of limiting attempts in section
above) and (2) dealing adequately with attempted and actual violations.
2.1.1. Specify Security Requirements
To limit, reduce, or manage security violations, one must know what is and is not a violation. This starts with system
requirements that are refined into an unambiguous statement of system security policy – that is, the constraints on system behaviors and possibly other items besides behaviors, particularly the existence of capabilities necessary to perform required behaviors.
2.1.1.1. Consider both Immediate and Future Security Requirements
Today, both organizations and computer security tend to change rapidly, and as is true for other requirements, one needs appropriate capabilities or ease of change for likely changes. This means that future kinds of attacks need to be considered and an approach developed for addressing them. Remember that countermeasures beget counter-countermeasures, and whoever anticipates correctly or adjusts more quickly often wins.
2.1.1.1.1. Avoid Preparing to Fight the Last War
Do not expect future conflicts or security problems to be (too) similar to the past ones. One’s
opponents will keep advancing their capabilities, approaches, and preparations including preparatory
intelligence gathering and subversions even if one does not.
2.1.1.1.2. Specify Considering Likely Dangerous Changes
Consider possible dangers from likely changes and severe dangers of possible changes. As one
possible example, one might implement countermeasures to dangers arising from software aging of
operational software.
2.1.1.1.3. Base Requirements for Future on what Attackers might Achieve
Do not base them on the details of how. Do not try to predict all the possible future methods of attack or means of achieving partial successes, but rather what these partial successes might be.
2.1.1.2. Unambiguously Specify System Security Policy
One needs to specify policy unambiguously to allow building and operation of the system to be done
correctly and securely.
2.1.1.2.1. System Security Policy Conforms to Organizational Security Policy
Of course, the system must conform to all relevant governance documents. The external ones related
to the organization should have been distilled into the organization’s security policy, which, along with internally originated policy, provides input to determining the policy or range of policies that the
system must have the capability to support.
2.1.1.2.2. Specify Security-related Constraints on System Behavior
This includes more than any requirements for specific functionality. Remember that, because everything may happen and maliciousness makes it all relevant, verification of correctness is preferred to statistical reliability.
2.1.1.2.2.1. Specify Security in a Way that Allows Verification that Design Conforms
Possibly, a stronger word than “allows” would be appropriate, say “eases” or “allows ready”. This is a basic need in system development, as is the next item.
2.1.1.2.2.2. Specify Security in a Way that Allows Verification that System as Built Conforms
2.1.1.3. Specify Security Requirements in a Way that Allows Verification that System as Operated Conforms
2.1.2. Limit, Reduce, or Manage Opportunities for Violations
Limit the origination or continuing existence of opportunities or possible ways for performing violations throughout
the system’s lifecycle/lifespan.
2.1.2.1. Hazards
To avoid violations, avoid the preconditions allowing, promoting, or resulting in them. A duality
exists between not having occurrences of bad events and avoiding the preconditions for them. By
definition, events only occur when their preconditions are true.
2.1.2.1.1. Avoid Preconditions for Possible Violations
2.1.2.1.2. Avoid Preconditions and Events Leading to Preconditions for Violations, etc.
2.1.2.1.3. Keep System in Safely Secure States
2.1.2.2. Accurate Identification
One of the most important principles in security is to adequately identify entities or requestors. This
may include concern for related attributes or environmental attributes that influence decisions about access and the granting of privileges. The preservation of security policies or properties that specify who is allowed to do what requires good identification of the passive and active entities involved, particularly the actors (e.g. user, process, or device), because they are often the more problematic.
2.1.2.2.1. Positive Identification
Is the identity established with proper integrity as legitimate, from a trustworthy source, adequately verified, and current? Avoid assuming, for example, that identification made sometime in the past is still accurate. Never presume that an entity has the identity it claims – rather than some other identity or identities – simply because it is claimed, and never place more trust in an identification than is justified.
2.1.2.2.2. Adequate Authentication
Identification and other relevant characteristics of an entity need to be adequately verified before
making decisions depending upon it or them (e.g. decisions granting access).
An initial identification might be done by the entity claiming an identity, by inference, from metadata, or by recognition. Authentication is the verification of the initial identification of an entity,
often as a prerequisite to determining the entity’s authorization to access resources in a system.
Authentication involves evidence used for the verification. This may be something the entity knows
or possesses, the context or location of the entity, an inherent characteristic of the entity, or
verification provided by a third party.
2.1.2.2.2.1. More Evidence Strengthens Authentication
Requiring additional means of authentication of the same or different kinds (e.g. two passwords or a password and a smartcard) may strengthen an authentication, but one also needs to consider its validity and integrity as addressed below.
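The sketch below illustrates requiring two independent kinds of evidence – a memorized password and a time-based one-time code from a separate device – and accepting the entity only when both verify. The one-time code routine follows the common RFC 6238 construction; all parameter choices are illustrative assumptions rather than guidance from this paper.

```python
# Illustrative sketch: two-factor check in which both pieces of evidence must
# independently verify before authentication succeeds.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, when=None, step=30, digits=6):
    """Time-based one-time code (RFC 6238 style, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if when is None else when) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

def authenticate(password, code, salt, pw_digest, totp_secret):
    pw_ok = hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000), pw_digest)
    code_ok = hmac.compare_digest(code, totp(totp_secret))
    return pw_ok and code_ok           # both factors, not either one, must pass
```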
2.1.2.2.2.2. Unforgeable Proof of Identity
The evidence used in identification is better (higher quality) if it is harder to forge, forgeries are less available, available forgeries are of low quality or otherwise recognizable, it is hard to modify, or a forgery (or illegitimate possession or use of the original) has a higher cost or price. Often, a key consideration is the source of the evidence or credential – and that source’s trustworthiness, processes, and willingness to verify its credentials on request.
2.1.2.2.2.3. Use Single Sign-on Sparingly
While convenient, single sign-on has problems similar to situations where “one key fits all locks” or
“if you can enter the building, you can enter the vault”. See [Berg 2006, p. 277].
2.1.2.2.3. Securely Identify All Resources
Unidentified or insecurely identified entities are potential sources of attack and problems for
accountability. See [Berg 2006, p.248].
2.1.2.2.3.1. Canonicalize All Security-Sensitive Names
Confusion regarding multiple forms of names or IDs can lead to problems [Berg 2006, p. 250].
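As a minimal sketch of this guideline (not from the source; illustrative Python with a hypothetical ALLOWED_ROOT directory), a requested file name is reduced to one canonical form before any security decision is made, so aliases such as "..", "./", and symbolic links cannot slip past the check:

# Illustrative sketch: canonicalize a security-sensitive name before checking it.
import os

ALLOWED_ROOT = os.path.realpath("/srv/app/public")   # hypothetical protected root

def open_public_file(requested_name: str):
    # Resolve ".." segments, symbolic links, and redundant separators first.
    canonical = os.path.realpath(os.path.join(ALLOWED_ROOT, requested_name))
    # Authorize against the canonical name only, never the raw request.
    if os.path.commonpath([canonical, ALLOWED_ROOT]) != ALLOWED_ROOT:
        raise PermissionError(f"access outside permitted area: {requested_name}")
    return open(canonical, "rb")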
2.1.2.2.4. Identification Verification Data Protected By One-Way Function
One should not be able to forge an authenticator (or a real-world identity for that matter) from
retained identification data. Thus, the practice exists of storing hashes of passwords.
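For illustration only (not from the source), the following Python sketch stores a salted, slow one-way hash instead of the password itself, so the authenticator cannot be recovered from the retained verification data; the iteration count shown is an assumed value:

# Illustrative sketch: store only a salted one-way hash of the password.
import hashlib, hmac, os

def make_verifier(password: str):
    salt = os.urandom(16)                       # unique salt per account
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest                         # store these, never the password

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison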
2.1.2.2.5. Valid, Tamper-Proof Identification-Related Data
Data needs to be accurate, up-to-date, and timely; and adequately resistant to tampering.
2.1.2.2.5.1. Provide for Timely Revocation of Privileges
Delays in revocation may cause invalidity of privilege-related data and allow a period of
vulnerability. See [Berg 2006, p. 216].
2.1.2.3. Separate Identity from Privilege
Privileges extended to a particular entity may change – as may entities given a particular privilege.
No entity or its persona surrogate within the system should be “hardwired” with a particular
privilege or privileges. Keeping one or more levels of indirection between them is essential for
dealing with change and mistakes.
2.1.2.3.1. Separate External Identity from Internal Persona
Of course, more accurately this could be stated as separating external identities from personae, as
many-to-many relationships may exist between external entities and their personae, surrogates, or
“stand-ins” within the software, for example accounts.
2.1.2.3.2. Separate Internal Persona from Privileges
Privileges for a persona should not be hardwired as they are likely to change, for example due to
personnel changes.
2.1.2.4. Distinguish Trust from Authorization
Authorization should be based on willingness and appropriateness to extend trust [Berg 2006, p.124].
The traditional mindset is often called “need to know,” but sometimes it might be better described as
“allowed to know” or “privileged to know”. Allowing only those with demonstrated “need to know”
information to access it conforms to the principle of Least Privilege. “Need to Share” might be stated
as all those who should know are allowed to know. “Push Information” attempts to ensure that all
those who should know do know. “Risk-based Access Control” attempts to trade off the benefits of
access against the resulting dangers of undesirable disclosures and damage.
2.1.2.4.1. Need to Know
2.1.2.4.2. Need to Share
2.1.2.4.3. Push Information to where it is Needed
2.1.2.4.4. Risk-based Access Control
2.1.2.5. Positive Authorization16
This principle calls for basing access decisions on permission rather than exclusion – that is, access
should be specific and positively stated, not stated negatively (all but X), and not granted by default or
via a general/blanket authorization. Thus, the default situation is lack of access, and the protection
scheme identifies conditions under which access is permitted. To be conservative, a design must be
based on arguments stating why objects should be accessible, rather than why they should not.
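A minimal sketch of this default-deny idea (illustrative Python with a hypothetical in-memory permission store) is shown below; anything not explicitly and positively granted is refused:

# Illustrative sketch: access is granted only when an explicit permission exists.
PERMISSIONS = {                      # hypothetical permission store
    ("alice", "payroll-report"): {"read"},
    ("bob", "payroll-report"): {"read", "write"},
}

def is_allowed(user: str, obj: str, action: str) -> bool:
    # Default is denial: a missing entry means "no access", not "ask later".
    return action in PERMISSIONS.get((user, obj), set())

assert is_allowed("alice", "payroll-report", "read")
assert not is_allowed("alice", "payroll-report", "write")   # never granted, so denied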
2.1.2.6. Least Exposure
Eliminate or reduce exposure of assets or interests to danger or damage as well as opportunities to
access or damage. This applies throughout the lifecycle including during normal development and
operation and in abnormal conditions such as loss, theft, or capture of equipment.
2.1.2.6.1. Broadly Eliminate Exposure
Preferably, eliminate universally as opposed to in a specific situation. When protecting, protect
against everything or at least against the current or future attacker capabilities and not just the ones
seen in the past – do not fight the last war.
2.1.2.6.1.1. Protect throughout the Lifecycle
Protect everything sensitive – while it is sensitive. This includes software and other kinds of information and
data as well as anything that might be damaged or result in losses including to stakeholder interests.
This can extend before system development and beyond retirement and include disposal or transfer of
control.
2.1.2.6.1.1.1. Protect System or Software throughout at the Maximum Level ever Required
Do not allow easier before-the-fact subversion of the system and, in some cases, after-the-fact
compromise.
2.1.2.6.1.2. Protect Valuable Assets
The requirement for broad or universal protection has resulted in a number of principles or
guidelines.
2.1.2.6.1.2.1. Continuous Protection of Assets
Why protect sometimes but leave a window of vulnerability at other times? Avoid exposing assets or
weaknesses at any time or in any system state. Among others, this includes during startup, operation,
shutdown, exception handling, system failure, updating, recovery, and loss or disposal.
2.1.2.6.1.2.2. Protect It Everyplace It Goes
A sensitive asset requires protection regardless of its location in the system which may change, for
example from primary to secondary memory. Possibly more commonly stated in a different context
as, “Protect him everywhere he goes.”
2.1.2.6.1.2.2.1. End-to-end Protection
Often used in the context of a communication, but more broadly it means from origin to the end of use
or destruction – source to sink.
2.1.2.6.1.2.2.2. Protect All Media
A variety of storage media may be used including thumbdrives and other portable devices.
16 Part of [Saltzer 1975] Fail-Safe Defaults – explicit, positively stated access privilege required.
2.1.2.6.1.2.3. Protect (All) Copies
This certainly applies to secrets but also can apply in other circumstances. Each copy or instance
generates its own associated risks.
2.1.2.6.1.2.4. Protect all Forms or Guises
2.1.2.6.1.2.5. Eliminate (All) Hazards
Preclude preconditions allowing mishaps or damage. A similar, but ambiguous statement is
“Eliminate All Threats”.
2.1.2.6.1.2.6. Protect against All Threats
Protect against everything – all sources of threats, all methods of attack, all means of exploiting
success, all kinds of damage or losses, etc.
2.1.2.6.1.2.6.1. Guard All Approaches
2.1.2.6.1.2.6.2. Guard Adequately
2.1.2.6.1.3. Correctness not Statistical Reliability
Everything is possible and attackers tend to attack where defenders (would) assign them a low
probability of doing so and where defenders readiness is thereby low. Thus, handling every
possibility correctly is important for security where maliciousness exists as well as whenever
consequences could be intolerably severe or catastrophic.
2.1.2.6.1.4. Non-Bypassibility
Allow no bypassing of required security functionality to (learn about or) access sensitive capabilities,
assets, or resources. Functionality is always invoked whenever needed to achieve required security.
2.1.2.6.1.4.1. Ensure identification, authorization, and access mechanisms cannot be bypassed
These are the most common of the mechanisms that should not be bypassed.
2.1.2.6.1.5. Security Functionality does not equal Security
The existence of security functionality does not ensure it will always be invoked or never bypassed.
2.1.2.6.1.6. Isolation from Source of Danger
2.1.2.6.1.6.1. Isolation of user groups
2.1.2.6.1.6.1.1. Isolate publicly accessible systems from mission-critical resources (e.g., data, processes).
This publicly accessible “system” might be just a portion of a system, and the isolation might be:
• Physical isolation
• Logical isolation
Physical separation or layers of security services and mechanisms might be established between
public systems and secure systems responsible for protecting sensitive or mission-critical resources.
2.1.2.6.1.6.2. Isolation and separation are fundamental
Knowing which entities (absolutely) cannot interact (communicate, interfere) with each other is
crucial to security-related design and operation. This allows a concentration of concern on those
which might be able to interact.17 In practice, this separation may sometimes mean they can interact
only through rigidly controlled means or conditions.
2.1.2.6.1.6.2.1. Separation Protects
Given that information flows (including from attackers) are a central concern, ensuring proper
separation or isolation has a powerful potential to protect.
17 This has recently been emphasized by John Rushby.
2.1.2.6.1.6.3. Domain Isolation
Use physical isolation, or boundary or guardian mechanisms (e.g. guards, encryption) to separate
computing systems and network infrastructures to control the flow of information and access across
trust domain or network boundaries, and enforce proper separation of user groups.
2.1.2.6.1.6.4. Prevent Direct Access to Software Infrastructure Internals
This includes preventing direct software access to library internals, for example by means of
information hiding and abstraction.
2.1.2.6.1.7. Complete Mediation (of Accesses)
Every access to every (security-sensitive) object must be checked for proper authorization, and access
denied if it violates authorizations. When systematically applied, this principle is the primary
underpinning of the protection system, and it implies the existence and integrity of methods to (1)
identify the source of every request, (2) ensure the request is unchanged since its origination, (3)
check the relevant authorizations, and (4) ensure the access request is blocked if not authorized (and
not blocked if authorized). It also requires examining skeptically any design proposal to allow
access by remembering the result of a prior authorization.
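The following illustrative Python sketch (not the author’s mechanism) mediates every read and write through one checked accessor supplied with an authorization callback, so each access is re-checked rather than relying on a remembered earlier decision:

# Illustrative sketch: all accesses to protected objects pass through one checked accessor.
class MediatedStore:
    def __init__(self, authorizer):
        self._objects = {}            # name -> object, reachable only via this class
        self._authorize = authorizer  # callable(user, name, action) -> bool

    def put(self, user, name, value):
        if not self._authorize(user, name, "write"):
            raise PermissionError(f"{user} may not write {name}")
        self._objects[name] = value

    def get(self, user, name):
        # Checked on every access, not only the first one.
        if not self._authorize(user, name, "read"):
            raise PermissionError(f"{user} may not read {name}")
        return self._objects[name]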
2.1.2.6.1.8. Separate Policy from Mechanism
This general design principle has applications in security – for example, in the flexibility for
changing access authorizations provided by unchanging access control mechanisms.
2.1.2.6.1.9. Least Privilege
Least privilege is a principle whereby each entity (user, process, or device) is granted the most
restrictive set of privileges needed for the performance of that entity’s authorized tasks. Application
of this principle limits the damage that can result from accident, error, or unauthorized use of a
system. Least privilege also reduces the number of potential interactions among privileged processes
or programs, so that unintentional, unwanted, or improper uses of privilege are less likely to occur.
2.1.2.6.1.9.1. Fine-grained Privileges
Small granularity of privileges can help with achieving least privilege and with fewer unintentional,
unwanted, or improper uses of privilege. A straightforward example is granting privileges per field
within a database row rather than per row, and per row rather than per table. Privileges may also be
given smaller granularity by allowing or using multiple dimensions or parameters in defining privileges.
2.1.2.6.1.9.2. Dynamic Privileges
Changing privileges so needed ones are granted only as immediately needed and removed
immediately after each (instance or period of) need ends also aids in implementing least privilege and
in lessening unintentional, unwanted, or improper uses of privilege. The ability to change privileges
quickly can also aid flexibility, but might leave too little time to exercise adequate control of changes.
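As an illustrative sketch (Python, with a hypothetical in-memory privilege table), a context manager grants a privilege only for the duration of the task that needs it and removes it even if the task fails:

# Illustrative sketch: grant a privilege only while it is immediately needed.
from contextlib import contextmanager

current_privileges = {}                        # hypothetical table: user -> set of privileges

@contextmanager
def temporarily_granted(user: str, privilege: str):
    current_privileges.setdefault(user, set()).add(privilege)   # grant just in time
    try:
        yield
    finally:
        current_privileges[user].discard(privilege)             # removed even on error

with temporarily_granted("alice", "approve-payment"):
    assert "approve-payment" in current_privileges["alice"]
assert "approve-payment" not in current_privileges["alice"]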
2.1.2.6.1.10. Tamperproof or Resistant
This integrity characteristic is theoretically important in almost all sensitive situations and practically
important in many. Tampering is a powerful avenue of attack, and it is easy to overlook the importance
and criticality of resisting it.
2.1.2.6.1.11. Reference Monitor
The reference monitor concept combines complete mediation with tamperproof into a (conceptual)
mechanism that is tamperproof and mediates all accesses or requests for access.
2.1.2.6.1.12. Cryptology
Cryptology offers techniques for separation and access control; and for providing credentials,
authenticating, splitting (misleadingly called secret sharing), sharing, signing, and hiding secrets as
well as for integrity and impersonation or repudiation problems. It can provide protection of
confidentiality through hashing and encryption even when attackers gain access to material (but not
to means of decryption). Useful in many circumstances, cryptology can therefore provide a last line of
defense when needed.
2.1.2.6.1.12.1. No Secrets in Plain Text
Secrets should never unnecessarily be in plain text, since plain text makes them immediately
understandable and actionable.
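A minimal sketch of keeping a secret out of plain text at rest, assuming the third-party Python "cryptography" package and a key held in a separate key store, might look like this:

# Illustrative sketch: store the secret encrypted; decrypt only at the moment of use.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                        # in practice, held in a separate key store
token = Fernet(key).encrypt(b"db-password-123")    # hypothetical secret; this ciphertext is what gets stored

# ... later, only where the secret is actually needed ...
plaintext = Fernet(key).decrypt(token)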
2.1.2.6.1.12.2. Do not Do Cryptology Yourself
This clearly does not apply to the small set of needed experts within each cryptographic subspecialty,
but it does apply to everyone else.
2.1.2.6.1.12.3. Use Strong Cryptography
Using weak cryptology that is supposedly “strong enough” is seldom the right choice in the longer run.
In the short term, few have the ability to accurately know or predict the state of the art in
cryptographic analysis.

Table 1: Cryptology Based on Three Fundamental Ideas
• One-way Function
• Pseudorandom Generation
• Zero-Knowledge Proof

2.1.2.6.1.12.4. Use Certified Cryptology Implemented by Experts
The non-expert should never produce and use their own cryptography software (or hardware) – and
neither should anyone else use it. The usual commercial-level certification is to the latest FIPS 140
standard under NIST’s Cryptographic Module Validation Program.
2.1.2.6.1.12.5. Use Cryptology Expertly
2.1.2.6.1.12.5.1. Match Cryptographic Capabilities to Requirements
2.1.2.6.1.12.5.2. Know the Threats a Technology Does and Does not Protect Against
2.1.2.6.1.12.5.3. Separate Key from Encrypted Material
Whenever an encrypted item and its key are in the same location, the opportunity to steal them
together is increased – as is the opportunity to decrypt and then steal.
2.1.2.6.1.12.5.4. Combine Cryptographic Techniques Properly
Generally, combining techniques either in the same or different parts of a system or environment
needs special (expert) care and explicit, convincing assurance.
2.1.2.6.2. Eliminate Exposure in Each Situation
To provide continuous protection one must provide it in each situation.
2.1.2.6.2.1. Defend against Large (Broadest Possible) Categories of Attacks
Use practices that prevent or deal with as general a set of attack methods or situations as possible. For
example, use one that defends against all SQL injection attacks, not ones for particular kinds.
2.1.2.6.2.2. Develop it as Securely as it will Need to Operate
2.1.2.6.2.2.1. Develop in at least as Secure an Environment as the Security Level Required during Operation and Use
2.1.2.6.2.3. Distribute and Deploy Securely
2.1.2.6.2.4. Update Securely
2.1.2.6.2.5. Secure Defaults
Deploy with secure initial defaults and use secure defaults (not necessarily the same) throughout the
lifecycle.
2.1.2.6.2.6. Secure Timeout
Employ a secure and accurate mechanism to time out and securely stop excessive capture, use, or
“hanging” of resources.
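An illustrative Python sketch of such a mechanism (the session table and timeout value are hypothetical) expires idle sessions rather than letting them hold resources or privileges indefinitely:

# Illustrative sketch: idle sessions time out and are securely discarded.
import time

SESSION_TTL_SECONDS = 15 * 60
sessions = {}                                   # hypothetical: token -> (user, last_used monotonic time)

def touch_or_expire(token: str):
    user, last_used = sessions.get(token, (None, 0.0))
    if user is None or time.monotonic() - last_used > SESSION_TTL_SECONDS:
        sessions.pop(token, None)               # expired or unknown: release it
        raise PermissionError("session expired; re-authenticate")
    sessions[token] = (user, time.monotonic())  # still valid: refresh last-use time
    return user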
2.1.2.6.2.7. Secure Failure
2.1.2.6.2.7.1. Store failure-related artifacts securely
See [Berg 2006, p.258].
2.1.2.6.2.8. Secure Recovery
2.1.2.6.2.8.1. Secure Recovery from Failure during Recovery
2.1.2.6.2.9. Secure Diagnosis
2.1.2.6.2.10. Secure System Modification
2.1.2.6.2.10.1. Secure Repair
2.1.2.6.2.11. Secure Shutdown
2.1.2.6.2.11.1. Secure during process to shutdown
2.1.2.6.2.11.2. Secure while shutdown and powered down
Although a system may be powered down, critical information still resides on the system and could
be retrieved by an unauthorized user or organization.
2.1.2.6.2.12. Secure Acquisition
2.1.2.6.2.12.1. Do Not Violate Security during Acquisition
For example, do not supply sensitive information to suppliers without adequate guarantees of its
security. Limit the sensitive information given to suppliers to that which they need.
2.1.2.6.2.12.2. Acquire Software Systems Shown to be Adequately Secure
2.1.2.6.2.12.2.1. Ask Supplier How they Know it is Secure18
2.1.2.6.2.13. Test Securely
2.1.2.6.2.13.1. Do not Use Sensitive Data in Testing
2.1.2.6.2.13.1.1. Do Not Use Unsanitized Real Data in Testing
2.1.2.6.2.13.1.1.1. Sanitize with Care
Ensuring adequate sanitization may not be easy. Note that inference may be an issue.

Table 2: Google’s Software Principles
• Software should not trick you into installing it.
• When an application is installed or enabled, it should inform you of its principal and significant functions.
• It should be easy for you to figure out how to disable or delete an application.
• Applications that affect or change your user experience should make clear they are the reason for those changes.
• If an application collects or transmits your personal information such as your address, you should know.
• Application providers should not allow their products to be bundled with applications that do not meet these guidelines.
Source: http://www.google.com/corporate/software_principles.html (accessed 20080207)
2.1.2.6.2.14. Protect against Malware
2.1.2.6.2.15. Protect against False or Misleading Directions, Guidance, or Rules
18 A suggestion made by Gary McGraw in IEEE Security and Privacy.
2.1.2.6.2.16. Operate Securely
This is a big topic and, while not specifically emphasized in this document, much of what is
emphasized is to allow, prepare for, facilitate, and help ensure secure operation.
2.1.2.6.2.16.1. Configure for Secure Operation
2.1.2.6.2.16.1.1. Secure Settings
2.1.2.6.2.16.1.2. Operate within Operating System Mode/Partition with Lowest Feasible Privileges
2.1.2.6.2.16.2. Accredit/Assess Suitability before Operation
2.1.2.6.2.16.3. Follow Security Policy and Guidance
2.1.2.6.2.16.4. Operate within the Conditions in Assurance Case
2.1.2.6.2.16.5. Build Security In
Operate software that has security built in rather than suffer from repeated “Penetrated, Panic, Patch
and Pray.”
2.1.2.6.2.16.5.1. Acquire Secure Software
2.1.2.6.2.16.6. Apply Security Patches

Table 3: Seven Design Requirements for Web 2.0 Threat Prevention
Secure Computing Corporation recommends:
1. Deploy proactive real-time reputation-based URL and message filtering for all domains—even those not yet categorized
2. Deploy anti-malware protection utilizing real-time, local “intent-based” analysis of code to protect against unknown threats, as well as signature-based anti-malware protection for known threats
3. Implement bi-directional filtering and application control at the gateway for all Web traffic including Web protocols from HTTP to IM, including encrypted traffic
4. Monitor for, and protect against, data leakage on all key Web and messaging protocols
5. Ensure that when deployed, all proxies and caches are fully security-aware
6. Design layered defenses with a minimal number of proven and secured devices
7. Use robust management and audit reporting tools for all Web and messaging protocols, services, and solutions including filtering, malware, and caching.
Source: [Secure Computing 2007]

2.1.2.6.2.17. Secure if Lost or Stolen
This relates closely to the entry above about being secure while shut down and powered down, but a
lost or stolen machine may also be turned on and have its own capabilities exploited. In large part, this
is an important problem because roughly half a million laptops are stolen each year and thumb drives
and other portable media have become common.
2.1.2.6.2.18. Safe Sale or Lease
This is another kind of transfer of control and shares concerns with “lost or stolen” and “disposal”.
However, here some sensitive resources may be supposed to transfer and some not.
2.1.2.6.2.19. Secure Disposal
At the end of a system’s life-cycle, procedures must be implemented to ensure system hard drives,
volatile memory, and other media are purged to an acceptable level and do not retain residual
information.
2.1.2.6.2.20. Equal Protection for All Copies
Equivalent instances of entities need the same level of protection as the original. The same is true of
non-identical information from which the same sensitive information can be derived.
2.1.2.6.2.21. Control Information Flows
Related entries exist in many places including ones involving protection and separation.
2.1.2.6.2.21.1. Trusted Communication Channels
2.1.2.6.2.21.2. Limited Access Paths
2.1.2.6.2.21.2.1. Reduce number of paths
2.1.2.6.2.21.2.2. Reduce number of kinds of access paths
2.1.2.6.2.21.2.3. Reduce capacity of paths
2.1.2.6.2.21.2.4. Reduce input/output points
2.1.2.6.2.21.2.5. Reduce attack surface (weighted points for I/O)
2.1.2.6.2.21.3. Indirect or Covert Observability
One should not presume that because something is not obviously sensitive, or not labeled sensitive, it
cannot somehow be exploited. Any connection or means of observation may be used.
2.1.2.6.2.21.3.1. Eavesdropping
2.1.2.6.2.21.3.1.1. Traffic Analysis
2.1.2.6.2.21.3.2. Inference
2.1.2.6.2.21.3.3. From Product
2.1.2.6.2.21.3.3.1. “Temporary Code” e.g. Test, Debug, and Instrumentation Code or Capability
For example, remove debugger hooks (implicit and explicit), other developer backdoors, unused calls,
data-collecting trapdoors, and relative pathnames.
2.1.2.6.2.21.3.3.2. User-Viewable Code
For example, if the software contains user-viewable source code or identifiable sensitive contents in
object code, remove all hard-coded credentials, sensitive comments, and pathnames to sensitive,
unreferenced, hidden, and unused files.
2.1.2.6.2.21.3.3.3. Secrets in or Exposable via Software
See subsection on “clients”. One partial countermeasure might be to isolate program control data
from user-accessible data.
2.1.2.6.2.21.3.3.4. Analysis of Changes
Attackers analyze differences between new versions or patches issued to resolve security problems
and older versions to locate vulnerabilities in the older versions that can be exploited wherever the
change has not yet been installed.
2.1.2.6.2.21.3.3.5. Covert Channel
These include channels that are based on
• Timing
• Resources
2.1.2.6.2.21.3.4. Interference
2.1.2.6.2.21.3.5. Radio Frequency Emission
2.1.2.6.2.21.3.6. Power Supply
2.1.2.6.2.21.4. Minimize Sharing
2.1.2.6.2.21.4.1. Least Common Mechanism
Minimize the security mechanisms common to more than one user or depended on by multiple users
or levels of sensitivity. Whenever the same executing process services multiple users or handles data
from multiple security levels or compartments, this potentially creates an opportunity for illegitimate
information flow. Every shared mechanism (as opposed, for example, to non-communicating
multiple instances) represents a potential information path between users or across security
boundaries and must be designed with great care to ensure against unintentionally compromising
security. Virtual machines each with their own copies of the operating system are an example of not
sharing the usage of the operating system mechanisms – a single instance is not common across the
users of the virtual machines. Thus, one desires the least possible sharing of instances of common
mechanisms.
2.1.2.6.2.21.5. Non-interference
Non-interference and related concepts provide ways to define and characterize separation. However,
currently they are more theoretical than practical.
2.1.2.6.2.22. Limit Trust
Limiting the placement of trust is one way to limit risks. Trust is treated at greater length in section 3.
2.1.2.6.2.22.1. Trust Only Components Known to be Trustworthy
Reuse designs and components only if known to be (or can be shown to be) secure in the fashion
required for this system.
2.1.2.6.2.22.2. Lattice Trust for Components
2.1.2.6.2.22.2.1. Do not invoke components whose trustworthiness is less, not known, or not commensurate
2.1.2.6.2.22.2.2. Hierarchical Trust for Components
2.1.2.6.2.22.2.2.1. Do not invoke less-trusted components from within more-trusted ones
2.1.2.6.2.22.2.2.2. Do not invoke untrustworthy (untrusted) components from within trusted ones
2.1.2.7. Duality of Confidentiality and Integrity
The freely allowed information flows of confidentiality and integrity are in opposite directions. This
and other comparisons have led some to state that they are duals. It is certainly the case that in many
situations they are both concerns and trying to simultaneously make access rules for both explicitly
hierarchical can lead to difficulties.
2.1.3. Limit, Reduce, or Manage Actual Violations
Violations may be detected or not. They may also be immediately dangerous or not. If attackers must perform several
violations before significant damage can result, then their costs go up and the system’s chance of detecting them may
increase. Ensuring this need for multiple kinds of violations is called defense in depth.
2.1.3.1. Limit, Reduce, or Manage Undetected Violations
If a violation is not detected, then one neither has an accurate knowledge of reality nor is able to take
actions to deal with it and with possible similar events in the future. In addition, one lacks an important
basis for learning and improving. Also see Section 2.1.3.3.
2.1.3.1.1.1. Fewer Opportunities for Undetected Violations Leads to Fewer Undetected Violations
While not always true, this is an often useful heuristic.
2.1.3.1.1.2. Detection of Violations
Detecting violations has many advantages and provides a basis for recovery, countermeasures, and
learning and improvement.
2.1.3.1.1.3. Effective Detection
Detection needs to occur as specified and be properly recorded and reported. If a type of event shall be
detected, then all the subtypes of the event – legitimate or not – must be detected.
2.1.3.1.1.3.1.1. Avoid False Positives
Crying wolf too often has the expected effect: people waste resources or ignore warnings.
2.1.3.1.1.3.1.2. Balance Type I and Type II Errors
In practice, imperfect detection techniques have errors, and the best one can do is to try to achieve the
most desirable tradeoff between false positives and false negatives.
2.1.3.1.1.4. Design in Anomaly Awareness
2.1.3.1.1.5. Self Analysis
Self-analysis involves a portion of a system or the entire system testing or analyzing itself for
problems. This could involve built-in testing.
2.1.3.1.1.6. No Need to Detect?
Deciding not to attempt to detect certain kinds of occurrences must be done very carefully and is
often dangerous. Suggested possibilities might be:
• Non-Sensitive Events: If nothing sensitive or security-related is involved, why make the effort to detect? Several reasons might exist – for example, following a full trail of actions where only some are sensitive, supporting recovery, or looking for suspicious behavior.
• Universal Actions: One might claim an action that covers the entire system need not be detected in all its individual effects.
• Inevitable Actions: If an action will certainly occur, does its occurrence need to be detected?
2.1.3.1.1.7. Make Cheating Detectable
This particularly applies to the playing of games. Making cheating detectable by a central server or a
trusted third party is easier than providing such detection capability to players. To consistently detect
cheating, a player should at least know the relevant rules of the game.
2.1.3.1.1.8. Recording of Compromises
If a system’s defense was not fully successful, trails of evidence should exist to aid understanding,
recovery, diagnosis and repair, forensics, and accountability. Likewise, records of suspicious
behavior and “near misses”, and records of legitimate behaviors can also have value.
2.1.3.1.1.9. Know Any Vulnerabilities
Knowing the existing vulnerabilities allows easier prevention, detection, and recovery from attempts
to exploit them – not to mention repair.
2.1.3.2. Limit, Reduce, or Manage Detected Violations
2.1.3.2.1. The Sooner One Knows (Something is Wrong) the Better
This is a general principle that is certainly applicable to software system security. Examples include
early knowledge of vulnerabilities and attacks.
2.1.3.2.1.1. Warn Soon Enough to Get Ready
Warnings that do not come in time are less useful than ones that do – possibly even worthless or
counterproductive. Warning time needs to be at least as long as the time to take corrective action,
prepare, or achieve readiness. Stated the opposite way, the time it takes to get ready needs to be no
longer than the time between the warning about an event and the event.
2.1.3.2.2. Incident Response
Response and follow-up to significant incidents relates to recovery, damage control, repair, learning,
and accountability. Incidents may include violations, attempted violations, suspicious behavior, and
“close calls.”
2.1.3.2.2.1. Response Readiness
2.1.3.2.2.2. Plan and Train to Respond
2.1.3.2.2.3. Practice Response
2.1.3.2.2.4. Test Ability to Respond under Realistic Conditions
2.1.3.2.2.5. Notify/Inform Only Those with a Need to Know
Unnecessary distraction and dissemination of (mis)information about the event are to be avoided.
2.1.3.2.2.6. Tolerate
See below for entries related to tolerance of violations and attempted violations.
2.1.3.3. Defense in Depth
Defense in depth is a strategy in which human, technological, and operational capabilities are
integrated to establish variable protective barriers across multiple layers and dimensions of a system.
This principle ensures that an attacker must compromise more than one protection mechanism to
successfully exploit a system. Diversity of mechanisms can make the attacker’s problem even harder.
The increased cost of an attack may dissuade an attacker from continuing the attack. Note that
multiple less expensive but weak mechanisms do not necessarily make a stronger barrier than fewer
more expensive and stronger ones.
Deciding whether to add yet another defense can be difficult. Where judging risks is difficult, one possible
approach is to make them at least not unacceptable (tolerable) and “as low as reasonably practicable”
(ALARP).19 In employing the ALARP approach, judgments about what to do are based on the cost-benefit
of techniques – not on total budget. However, additional techniques or efforts are not
employed once risks have achieved acceptable (not just tolerable) levels. This approach is attractive
from an engineering viewpoint, but the amount of benefit cannot always be adequately established.
2.1.3.3.1. Avoid Single-Point Security Failure
Avoid any security failure requiring only one thing to go wrong.20
2.1.3.3.1.1. Eliminate “Weak Links”
2.1.3.3.1.1.1. Know Weaknesses and Vulnerabilities
2.1.3.3.1.2. Avoid Multiple Losses from Single Attack Success
Avoid common causes of multiple security failures.
2.1.3.3.1.3. Require Multi-Pronged Attack
2.1.3.3.1.3.1. Collusion
Design so multiple people must collude to steal a secret, tamper, or otherwise violate security or
cause harm. Include the ability and requirement to report requests to collude and observations of
violations, attempts to violate, suspicious behavior, and abuse.
2.1.3.3.1.3.1.1. Separation of Privilege
A protection mechanism that requires two keys to unlock it is more robust and flexible than one that
allows access to the presenter of a single key. By requiring two keys, no single accident, or breach of
trust is sufficient to compromise the protected information.
2.1.3.3.1.3.1.1.1. Separation of duties
Redundancy is also used in the traditional “separation of duties” in human financial processes, e.g. a
different person fills out a check than signs it.
2.1.3.3.1.3.2. Require Multiple Penetrations
Use separation of related data or metadata, fragmentation, and cryptographic secret sharing
particularly onto multiple computers, but possibly in separate files or databases especially if
differently encrypted.
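As a minimal illustration of secret splitting (not the author’s scheme), a simple two-way XOR split in Python yields shares that individually reveal nothing; both must be captured to recover the secret:

# Illustrative sketch: two-way XOR secret splitting; store shares on separate machines.
import os

def split_secret(secret: bytes):
    share1 = os.urandom(len(secret))                       # uniformly random share
    share2 = bytes(a ^ b for a, b in zip(secret, share1))  # secret XOR share1
    return share1, share2

def recombine(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

s1, s2 = split_secret(b"launch-code")
assert recombine(s1, s2) == b"launch-code"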
2.1.3.3.1.4. Implement Layered Security
Layered and compartmentalized security can provide natural boundaries to defend and to localize
needs to recover.
19 ALARP is a significant concept in UK law, and an excellent engineering-oriented discussion of it appears in Annex B of Part 2 of DEF STAN 00-56, with a more general discussion in Section 10 of Part 1 [Ministry of Defence 2007].
20 Separation of duties, privileges, and mechanisms can aid in avoiding a “single point of total vulnerability,” analogous to avoiding a “single point of failure” for a system. As an additional example, one might require secret sharing (splitting) across machines for added confidentiality, or redundancy across machines for improved availability and integrity. Of course, secret sharing (splitting) across machines increases the number of machines that must be accessed to access the secret.
2.1.3.3.1.5. Defend against both Improper Input and Improper Output
In addition to ensuring no unauthorized accesses occur, exiting data can be checked to doubly ensure
legitimacy and to guard against dangers residing internally.
2.1.3.3.1.5.1. Validate Input
This is one of the most powerful and essential actions for integrity, but also can contribute to
bettering other security-related properties and behaviors.
2.1.3.3.1.5.1.1. Do not trust parameters that come directly from components in environment
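An illustrative Python sketch of validating an externally supplied parameter against an explicit allowlist pattern (the account-identifier format and helper function are hypothetical) follows:

# Illustrative sketch: reject anything not positively known to be well formed.
import re

ACCOUNT_ID = re.compile(r"\A[A-Z]{2}\d{6}\Z")   # hypothetical format such as "AB123456"

def load_account(raw_id: str):
    if not ACCOUNT_ID.match(raw_id):
        raise ValueError("malformed account identifier")
    return lookup_account(raw_id)               # hypothetical downstream call

def lookup_account(account_id: str):
    return {"id": account_id}                   # placeholder for real data access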
2.1.3.3.1.6. Surround Critical or Untrustworthy Elements
This may be done through establishing a perimeter, using wrappers, or other means. This can be
especially important when off-the-shelf (OTS) products are used. Close-in defenses may also be more
easily made harder to bypass.
2.1.3.3.1.7. Design to Defend Perfectly, then Design to Continue Defense even if Some Defenses Fail
Do not automatically concede that defense is impossible, but prepare to continue to defend even after
an initial misstep by or bypassing of part of the defense (or even multiple defensive errors).
2.1.3.3.1.8. Eggshell Defense
Avoid a single-layer perimeter defense that may crack catastrophically like an eggshell when
penetrated. Moving away from reliance on such a perimeter has sometimes been called “deperimeterization”.
2.1.3.3.1.9. Diversity in Defenses
Possible lack of variety can be indicated not just by items being identical but also by common
heritage of software, common involvement of persons or practices, common settings, and common
components. To be most effective, combine physical, procedural, and software-related security
[Moffett 2003] and [Moffett 2004].
2.1.3.3.1.10. Measures Encounter Countermeasures
Despite designers’ positive approach to assuring protection, in practice one also engages in a
measure-countermeasure cycle between offense and defense. Currently, reducing the attackers’
relative capabilities and increasing systems’ resilience dominates many approaches. Anonymity of
attackers has led to asymmetric situations where defenders must defend everywhere and always,21
and attackers can choose the time and place of their attacks. Means for reducing anonymity – thereby
making deterrence and removal of offenders more effective – could somewhat calm the current riot of
illicit behavior.
2.1.3.3.1.11. Survivable Security
Concern for survivability must include concern for the survival of security. Likewise one desires
security that survives degradation, failure, or capture of parts of the system or its infrastructure.
2.1.3.3.1.12. Secure Recovery
2.1.3.3.1.12.1. Secure Recovery from Failure during Recovery
2.1.3.3.1.13. Secure Diagnosis and Repair
2.1.3.4. Limit, Reduce, or Manage Violations Unable to Respond to Acceptably or Learn From
Establish capabilities to detect, respond to, and learn from any violations. In particular, try to record
enough information to allow reconstruction, learning, (root cause) analysis, and improvement. To a
great extent, the needed capabilities here parallel those needed for accountability.
21 Defending everything may not be possible or may waste resources. “He who defends everything defends nothing.” – Frederick II
2.1.4. Limit, Reduce, or Manage Lack of Accountability
Accountability is aided by knowing what existed, what happened, and actors not being able to deny or repudiate their
actions. Accountability can be aided by prompt detection, accurate identification, effective record keeping as well as
lack of anonymity.
2.1.4.1.1. Accurate Clock
In order to reconstruct events and their timing (or at least their sequence), one needs to be able to rely
on (or reliably analyze) timestamps derived from clocks, usually the system clock. Problems in
synchronization and in corrections (resetting) of the clock (particularly resetting backwards) can
create difficulties.
2.1.4.1.2. Traceability Supports Accountability
2.1.4.1.3. Non-Repudiation
Non-repudiation is important to accountability because it prevents an entity from repudiating or
denying its acts.
2.1.4.1.3.1. Deniability Allows Repudiation
2.1.4.1.3.2. Possibility of Impersonation Implies Deniability
If one could possibly have been impersonated, then one can claim that one was indeed impersonated.
2.1.4.1.3.3. Use Cryptographic Signing to Ensure Non-Repudiation
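A minimal sketch of signing for non-repudiation, assuming the third-party Python "cryptography" package and an Ed25519 key pair held only by the accountable actor, might look like this:

# Illustrative sketch: sign a record so the signer cannot later deny producing it.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()      # private key held only by the accountable actor
record = b"2008-02-07T12:00Z alice approved transfer #4711"   # hypothetical audit record
signature = signing_key.sign(record)

verifying_key = signing_key.public_key()        # distributed to verifiers and auditors
verifying_key.verify(signature, record)         # raises InvalidSignature if forged or altered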
2.1.4.1.4. Prevent or Limit Attacker Anonymity
2.1.4.1.5. Support Investigation of Violations
This includes keeping logs, configuration management, or other means of preservation of valid
evidence, reconstruction, and identification of resources and actors. Support for prosecution requires
special considerations. Forensics is also mentioned elsewhere.
2.2. Improve Benefits or Avoid Adverse Effects on System Benefits
Fulfilling security requirements could negatively affect the usage and benefits that the system would otherwise provide.
One would like to minimize any such effects or even avoid them entirely. This section covers several principles or
guidelines that may aid in this including ensuring the desired work or uses can still be performed, easing any burdens
security requirements might place on users or operators, and performing tradeoff studies.
2.2.1. Access Fulfills Needs and Facilitates User
Accesses, including any security aspects, should be done in ways actively helpful to the user.
2.2.1.1. Authorizations Fulfill Needs
Users’ needs must be able to be met. See the discussion under Distinguish Trust from Authorization.
2.2.2. Encourage and Ease Use of Security Aspects
2.2.2.1. Acceptable Security
Avoid users either not using or bypassing security because of inconvenience or dislike.
2.2.2.2. Psychological Acceptability
It is essential that the human interface be designed for ease of use so that users routinely and
automatically apply the protection mechanisms correctly.
2.2.2.3. Sufficient User Documentation
Users should have proper documentation about security-related aspects [Benzel 2004, p. 18]. However, do not depend
on users to read documentation in order to establish a “secure” system [Schoonover 2005].
2.2.2.4. Ergonomic Security
The security-related user interface should be easy to learn and use, not mistake prone, and appealing
to the user. It should provide guidance and warnings to aid secure use but not prevent users from
being productive [Benzel 2004, p. 16].
2.2.2.5. Do Not Depend on Users for Actions Critical to Security
Do not rely on users to read documentation, make informed decisions, or disable features
[Schoonover 2005]. This is particularly the case for initially establishing the system.
2.2.2.5.1. Security Should Not Depend On Users Reading Documentation
2.2.2.6. Ease Secure Operation
[Berg 2006] addresses two concerns that are often not end-user concerns: administrative
controllability and manageability of security.
2.2.2.6.1. Administrative Controllability
2.2.2.6.2. Manageability
Table 4: Human Interface Security Design Principles
Path of Least Resistance. The most natural way to do any task should also be the most secure way.
Appropriate Boundaries. The interface should expose, and the system should enforce, distinctions between objects and between actions along boundaries that matter to the user.
Explicit Authorization. A user’s authorities must only be provided to other actors as a result of an explicit user action that is understood to imply granting.
Visibility. The interface should allow the user to easily review any active actors and authority relationships that would affect security-relevant decisions.
Revocability. The interface should allow the user to easily revoke authorities that the user has granted, wherever revocation is possible.
Expected Ability. The interface must not give the user the impression that it is possible to do something that cannot actually be done.
Trusted Path. The interface must provide an unspoofable and faithful communication channel between the user and any entity trusted to manipulate authorities on the user’s behalf.
Identifiability. The interface should enforce that distinct objects and distinct actions have unspoofably identifiable and distinguishable representations.
Expressiveness. The interface should provide enough expressive power (a) to describe a safe security policy without undue difficulty; and (b) to allow users to express security policies in terms that fit their goals.
Clarity. The effect of any security-relevant action must be clearly apparent to the user before the action is taken.
Source: [Yee 2003]

2.2.3. Articulate the Desired Characteristics and Tradeoff among Them
This was stated as a general principle in [jabir 1998].
Tradeoffs exist between security and efficiency, speed,
and usability. For some systems, other significant
tradeoffs with security may also exist. Design decisions may exacerbate or ease these tradeoffs. For example,
innovative user interface design may ease security’s impact on usability. [Cranor 2005]
Attempting defense in depth raises the tradeoff between fewer layers of defenses each constructed with more resources
or more layers each costing less resources. An attacker may overcome several weak lines of defense with less difficulty
than fewer but stronger lines of defense.
Business tradeoffs also exist (or are believed to exist) for software producers between effort and time-to-market, and
security. Finally, producers want users and other developers to use features, especially new features, of a product but
the likelihood of this is reduced if these are shipped “turned off”.
2.2.4. Efficient Security
Security mechanisms should only enforce the necessary level of security. Similarly these mechanisms should not be
more costly than the entity they protect [Benzel 2004, p. 15]. In general, security should be efficient and not
excessively impact system efficiency, speed, throughput, or capacity.
2.2.4.1. Do Not Pay for Security Not Used
Whenever a kind of security is available but not in use, avoid spending computing cycles or other
resources on it – for example, when the finest granularity of privileges is not being used.
2.2.4.2. Lattice Protection
In theory, entities must protect themselves from entities known to be less trustworthy and entities
whose trustworthiness is unknown (i.e. incommensurate) but not from equal or more trustworthy
entities. This may need to be supplemented by the question of trustworthy for what responsibility,
when, under what conditions.
Less generally, a trustworthy system component does not require protection from more trustworthy
components. Likewise, this is the case for users. Correspondingly, components must protect
themselves from less trustworthy components. This less general form is called Hierarchical
Protection in [Benzel 2004, p. 10].
2.2.4.3. Quickly Mediated Access
If security checking of access takes too long, users will object and productivity will suffer.
2.2.4.4. Efficiently Mediated Access
Access mechanism should be efficient as well as quick. [Benzel 2004, p. 6].
2.2.4.5. High-Performance Security
Efficiency is important but so are characteristics like speed. As stated in [Benzel 2004, p. 16], “… security
mechanisms should not be unnecessarily detrimental to system performance. …choose components that
provide the most security with the least possible overhead.”
2.2.5. Provide Added Benefits
Security can be attractive to customers and value
adding to a software system. Supposing that security
provides forms of confidentiality, integrity,
availability, and accountability, stakeholders may also
derive benefits they did not originally expect in the
form of additional forms of these or other properties.
For example, privacy is primarily an aspect of
confidentiality, IRM of accountability and
confidentiality, and business continuity of
availability.
2.2.5.1. Privacy Benefit
Protect personally (and family or
household) identifiable data
ensuring its confidentiality,
accuracy, integrity. One should not
be able to forge a real-world
identity from retained identification
data. Both entities to whom the data
relates, and system owners and
users will benefit as exposure and
losses for violating privacy can
occur to the latter as well.
Table 5: Safe Harbor Principles
Notice: Organizations must notify individuals about the purposes for which they collect and use information about them.
Choice: Organizations must give individuals the opportunity to choose (opt out) whether their personal information will be disclosed to a third party or used for a purpose incompatible with the purpose for which it was originally collected or subsequently authorized by the individual.
Onward Transfer (Transfers to Third Parties): To disclose information to a third party, organizations must apply the notice and choice principles.
Access: Individuals must have access to personal information about them that an organization holds and be able to correct, amend, or delete that information where it is inaccurate, except where the burden or expense would be disproportionate to the risks, or where the rights of persons other than the individual would be violated.
Security: Organizations must take reasonable precautions to protect personal information from loss, misuse and unauthorized access, disclosure, alteration and destruction.
Data integrity: Personal information must be relevant for the purposes for which it is to be used. An organization should take reasonable steps to ensure that data is reliable for its intended use, accurate, complete, and current.
Enforcement: There must be (a) readily available and affordable independent recourse mechanisms; (b) procedures for verifying that the commitments companies make to adhere to the safe harbor principles have been implemented; and (c) obligations to remedy problems arising out of a failure to comply with the principles. Sanctions must be sufficiently rigorous.
Condensed from: http://www.export.gov/safeharbor/sh_overview.html

2.2.5.1.1. Anonymity
Anonymity can be of significant benefit for individual privacy and in some organizational activities.
One example is anonymity of responders to personnel evaluation questionnaires. However, it can hide
criminal as well as legal activities. Ethically, this can be a serious human rights issue when oppressive
regimes outlaw certain behaviors.
2.2.5.2. Intellectual Property Management
Intellectual property law and usage agreements (including rules covering employees) may restrict
either initial disclosure or usage such as copying or distribution. This may often be outside the
immediate control of the property owner, or other holder or enforcer of rights.
2.2.5.3. Business Continuity
2.2.5.3.1. Secure Disaster Recovery
2.2.5.4. Provide Reliability
Because adequate security normally requires adequate correctness, this can have a side effect of
improving system reliability. The same may be true for availability and integrity.
2.2.6. Learn, Adapt, and Improve
The attack and defense of software-intensive systems is a normal but not a mundane situation; it is a serious conflict
situation with serious adversaries such as criminal organizations, terrorist groups, and nation states, and competitors
committing industrial espionage. Serious analyses of past and current experience can improve tactics and strategy.
One should not forget how to be successful in conflicts. While it is difficult to state the principles of conflict in a brief
manner, some principles exist, such as exploiting the arenas in which the conflict occurs; using initiative, speed,
movement, timing, and surprise; using and trading off quality and quantity including technology and preparation;
carefully defining success and pursuing it universally and persistently but flexibly; and hindering adversaries.
Do not lie to oneself – for example, record the actual configuration. Never record what is intended as if actual.
2.3. Limit, Reduce, or Manage Security-related Costs
2.3.1. Limit, Reduce, or Manage Security-Related Adverse Consequences
The ultimate goal of systems or software security is to minimize real-world, security-related costs and adverse
consequences. In practice this often means to limit, reduce, or manage security-related losses or costs. These tend to fall
into three categories.
• Adverse consequences resulting from security violations
• Adverse effects on system benefits because of security requirements’ effects on the system
• Additional security-related developmental and operational expenses and time
Limiting the first, adverse consequences, is why one incurs the costs of the second and third bullets, and it is the subject
of this subsection.
2.3.1.1. All Actions have Consequences
While all such consequences are not necessarily adverse, this truth is still highly relevant to security.
One is also reminded of the potential relevance of the Law of Unintended Consequences that roughly
states that in complex systems such as societies all actions will have unintended consequences.
2.3.1.2. Losses can take Many Forms
Protecting humans and organizations from harm involves many concerns, for example: wealth, health,
death, power, freedom, pain, psychological damage, privacy [Cannon 2005], reputation,
damage to or illegitimate use of equipment and facilities, and freedom from (cyber) stalking, abuse,
and criminal acts.
2.3.1.2.1. All Interests
Potentially, any interests of supportive stakeholders may be in danger. These can be nearby or far-flung,
organizational or personal, etc. Opponents may not make clear distinctions.
2.3.1.3. Values of a Consequence Vary among Stakeholders
A consequence may have differing effects, for example on suppliers, users, and maintainers of a
system.
2.3.1.4. Predict Consequences
2.3.1.4.1. Identify Possible Consequences
2.3.1.4.2. Estimate Consequence Values
Estimate values of each consequence to attackers, users, and other significant stakeholders. Values
often involve measures of size, chance of occurrence, and associated uncertainties.
2.3.1.4.3. Incorporate Consequences in Decision Making and Assurance Case
Assurance cases for security are analogous to safety cases for safety. They include a top-level claim (or
claims) and the rationale for its (or their) truth (or possibly falsity) with associated uncertainty (or
uncertainties). They usually have multiple levels of supporting arguments and sub-claims, and
evidence or assumptions supporting the arguments or sub-claims.
2.3.1.5. Software that is Malicious or Susceptible to Subversion is as Dangerous as
Humans who are Malicious or Susceptible to Subversion
2.3.1.6. Limit, Reduce, or Manage Post-Violation Consequences
Damage control is an important, even essential, function that directly attempts to decrease damage and
losses, although possibly not to acceptable or tolerable levels. Also see the Intelligent Adversaries,
Tolerate Security Violations (where tolerable levels are an objective), and Continuous Risk
Management subsections.
One can take active or passive measures to limit adverse consequences. These can be direct or the
reduction of opportunities for consequences. Adverse consequences can involve damage, loss of
benefit, recovery and repair cost, and increased uncertainty. Among the kinds of losses to be
prevented or otherwise limited are intellectual property losses, loss of opportunities for benefit or
recovery.
2.3.1.6.1. Limit, Reduce, or Manage Damage
2.3.1.6.1.1. Minimax
2.3.1.6.2. Limit, Reduce, or Manage Loss of Benefits
2.3.1.6.2.1. Limit, Reduce, or Manage Security-related Improper Information Usage
2.3.1.6.2.1.1. Limit Intellectual Property Losses
See Intellectual Property Management.
2.3.1.6.2.2. Limit Loss of Opportunities
One can lose not only expected benefits but also chances to achieve even more benefits.
2.3.1.6.3. Exclusion of Dangerous Assets
Consider not including assets that are potential sources of or contributors to substantial losses.
2.3.1.6.4. Retain Minimal State
Attackers will have shorter and fewer opportunities to find information or to execute illegitimate
actions.
2.3.1.6.4.1. One-Time Representation
Avoid duplication. Duplication may increase opportunities for violations of confidentiality and
integrity, though it may also improve availability and integrity. One-time representation is also
important for engineering artifacts.
2.3.1.6.5. Limit Harmful Side Effects and Byproducts
One needs to consider not only primary products and effects, but importantly also secondary,
derivative, or incidental ones. Their consequences may also be significant, for example in
environmental damage.
2.3.1.7. Tolerate Security Violations
Tolerate bad events or conditions, eliminating or reducing their adverse consequences.
2.3.1.7.1. Forecast Problems and Avoid
2.3.1.7.2. Limit Damage
2.3.1.7.2.1. Limit damage in size, location, duration, occurrence, propagation, and nature
2.3.1.7.2.2. Limit or contain vulnerabilities’ impacts
2.3.1.7.3. Be Resilient in Response to Events
2.3.1.7.4. Choose Safe Default Actions and Values
2.3.1.7.5. Self-Limit Program Consumption of Resources
Attempts to exhaust system resources (e.g., memory, processing time) are common attacks.

Table 6: Tolerance-related Activities
Forecasting violations
Detection of violations
Notification
Damage isolation or confinement
Continuing service although possibly degraded
Recovery of the system to a legitimate state
Diagnosis of vulnerability
Repair
Recording
Tactics that adapt to attackers’ actions
Follow-on
• Repair of similar vulnerabilities elsewhere
• Avoid similar vulnerabilities in future products
2.3.1.7.5.1. Add capabilities into the program to prevent overusing system resources
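As a hedged illustration (Python, using the Unix-only resource module; the limits shown are assumed values), a program can cap its own memory and CPU consumption so a runaway or attacker-driven request cannot exhaust the host:

# Illustrative sketch: the program limits its own resource consumption (Unix only).
import resource

def self_limit(max_memory_bytes: int, max_cpu_seconds: int) -> None:
    resource.setrlimit(resource.RLIMIT_AS, (max_memory_bytes, max_memory_bytes))
    resource.setrlimit(resource.RLIMIT_CPU, (max_cpu_seconds, max_cpu_seconds))

self_limit(512 * 1024 * 1024, 30)   # assumed policy: 512 MB of memory, 30 s of CPU per worker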
2.3.1.7.6. Design for Ease of Diagnosis and Repair
2.3.1.7.7. Explicitly Design Responses to All Failures
2.3.1.7.8. Design for Survivability
See [Ellison 2003].
2.3.1.7.9. Survivable Security
Essential security requirements need to continue to be met even when parts of a system degrade, fail,
or are penetrated or captured.
2.3.1.7.10. Fail Securely
2.3.1.7.10.1. Ensure System has a Well-Defined Status after Failure, either Preferably to a Secure Failure State or Possibly via a Recovery Procedure to a Known Secure State
Recovery to a secure (and safe) state might be done via
o Rollback,
o Failing forward,
o Compensating change, or
o Partial or full shutdown and/or disconnect
See [Avizienis 2004]
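An illustrative Python sketch of failing securely (not from the source): if the authorization check itself fails, the result is treated as a denial rather than falling through to an open state:

# Illustrative sketch: a failure of the security check leads to the denying (secure) state.
def guarded_read(user, name, authorize, fetch):
    try:
        allowed = authorize(user, name, "read")   # the check itself may fail or time out
    except Exception:
        allowed = False                           # failure of the check means "deny"
    if not allowed:
        raise PermissionError(f"{user} may not read {name}")
    return fetch(name)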
2.3.1.8. Recover
2.3.1.8.1.1. Recover rapidly
2.3.1.8.1.2. Be able to recover from system failure in any state
2.3.1.8.1.2.1. Be able to recover from failure during recovery (applies recursively)
2.3.1.8.1.3. Make sure it is possible to reconstruct events
2.3.1.8.1.3.1. Record secure audit logs
2.3.1.8.1.3.2. Facilitate periodical review to confirm reconstruction is possible
Review could also ensure system resources are functioning, and identify unauthorized users or abuse.
2.3.1.8.1.3.3. Help focus response and reconstitution efforts to those areas that are most in need
2.3.1.9. Support Forensics and Incident Investigations
2.3.1.10. Allocation of Defenses according to Consequences
2.3.1.10.1. Protection Corresponds to Consequences
The level of protection should correspond to the level of value of the item being protected and of the
consequences of related security violations.
2.3.1.10.1.1. Tamper-Resistance Corresponds to Trust
A component’s defenses against tampering or modification need to correspond to the level of trust
placed in it, its claimed trustworthiness, or its criticality with a higher level of protection provided for
more critical components. This is called Inverse Modification Threshold in [Benzel 2005] with the
“threshold” for protection and difficulty of unauthorized modification being higher. Likewise, the
more trusted a component is (or the greater its claimed trustworthiness), the more care needs to be taken
during authorized modifications.
2.3.1.10.1.2. Protection of Secret Corresponds to Consequence of Compromise
Secrets normally deserve more protection than non-secrets as presumably they are secrets because
value exists in restricting knowledge of them. This may be violated for deceptive purposes, for
example to mislead by exploiting the thieves’ principle that “the most valuable items are behind the
strongest locks”.
2.3.1.10.1.2.1. Secret’s Protection Corresponds to its Value
2.3.1.10.1.2.2. Periodically Replace Changeable Secrets
To increase the effort attackers need, and to reduce both the period during which a compromised secret is useful to adversaries and the size of the consequences, regularly change encryption keys and other changeable secrets. This can also protect older secret messages if the current encryption key is compromised. Generally, the more dire the consequences the more frequently the secret should be changed.
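A minimal sketch of such rotation, assuming a daily interval and a 32-byte key chosen purely for illustration (neither is from the original text):

    import time, secrets

    ROTATION_INTERVAL = 24 * 60 * 60          # illustrative: rotate daily

    class RotatingKey:
        def __init__(self):
            self.key = secrets.token_bytes(32)
            self.created = time.time()

        def current(self):
            # Replace the key once it is older than the rotation interval,
            # limiting how long a compromised key remains useful.
            if time.time() - self.created > ROTATION_INTERVAL:
                self.key = secrets.token_bytes(32)
                self.created = time.time()
            return self.key

In practice, rotation also requires securely distributing the new key and handling data protected under older keys.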
2.3.1.10.2. Work Factor
The cost of performing to a higher level of quality or of a countermeasure to eliminate or mitigate a
vulnerability should be commensurate with the cost of a loss if a successful attack were to otherwise
occur. Generally, the more valuable the asset targeted by an attacker, the more effort and resources
that attacker is willing to expend, and therefore the more effort and resources the defender should
expend to prevent or thwart the attack.
2.3.1.10.3. Use Highly Reliable Components for Sensitive Functions
Wording of this principle or guideline is from [Berg 2006]. This might also be stated as “securitycritical” or just “critical” functions. Critical components need to be such that they perform correctly
not only when attacked but also when not attacked but are used normally – i.e. they are reliable. Also
see Correctness not Statistical Reliability.
2.3.2. Limit, Reduce, or Manage Security-Related Expenses across the Lifecycle
Expenses or costs are incurred to provide trustworthiness, protection, and other security-enhancing
and consequence reducing capabilities as well as when reducing uncertainties. This, of course, needs
to be done across the lifecycle not just for development and operations, but these are often the chief
costs.
2.3.2.1. Limit, Reduce, or Manage Security-Related Developmental and Operational
Expenses
This specialization of the principle or guideline above it is a common way to state this principle or
guideline.
2.3.2.2. Cannot Retrofit Security
Experience shows it is expensive, difficult, or impossible to retrofit security to an existing system because, among other things, the architecture is often unsuitable – rather, build security in.
2.3.2.3. Ease Downstream Security-related Activities
Earlier activities should be done in a way to ease later activities – keeping total benefit, cost, and time
in mind.
2.3.2.3.1. Ease Preserving Security while Performing Changes in Product and Assurance Case
2.3.2.3.2. Ease (Cost-Effective and Timely) Certification and Accreditation
2.3.2.4. Reuse only Adequately Specified and Assured Components
Attempting to find and eliminate exploitable faults and weaknesses in acquired or reused components
is not enough. Performing due diligence to find and fix is hard to define and mainly helps in
litigation. Adequate assurance is what is needed – an adequate assurance case.
2.4. Limit, Reduce, or Manage Security-related Uncertainties
Do this for those stakeholders with interests in adequate or better system security. This is not for potential attackers,
except possibly for deterrence or convincing deception. Note that the issue of “trust” is mainly covered in the
Environment section.
2.4.1. Identify Uncertainties
In order to limit, reduce, or manage uncertainties one needs to know what they are. In part, this can be done by
identifying where they might come from. Uncertainties often share a single cause or otherwise correlate, affecting their
values and impacts and allowing some of them to be treated together.
2.4.1.1. Identify Sources of Uncertainty
Identify all (possibly and actually) significant relevant sources of uncertainty. These provide one
basis for claims relating to combined uncertainties, total risk, or collective consequences.
2.4.1.2. Identify Individual Uncertainties
2.4.1.3. Identify Relationships among Uncertainties
2.4.2. Limit, Reduce, or Manage Security-Related Unknowns
The more that is not known the more uncertainty one should have. A thorough investigation and understanding of the
situation, the state of the art, and current and future possibilities and their consequences is, as always, wise.
2.4.3. Limit, Reduce, or Manage Security-Related Assumptions
When not well-founded, assumptions can be particularly dangerous as once made they may not be given much
additional consideration. The probability of the truth of any assumption should be estimated.
2.4.3.1. Reasoned Assumptions
An assumption must have a good, explicitly recorded reason. Ignorance and laziness are not good
reasons.
2.4.3.2. Avoid Critical Assumptions
Avoid assumptions important to technical or managerial reasoning or to adequate prediction of
consequences.
2.4.3.2.1. Test Critical Assumptions
2.4.3.2.2. Defaults can Hide Ignorance
2.4.4. Limit, Reduce, or Manage Lack of Integrity or Validity
These are almost always critical issues. Do not become concerned with confidentiality to the detriment of integrity,
accuracy, or validity.
2.4.4.1. Representation of Reality is Not Reality
In case of disagreement between the terrain and the map, the terrain dominates and is the “ground”
truth. The possible variance of information from reality is a source of uncertainty.
2.4.4.2. Possible Lack of Integrity is a Source of Uncertainty
The possibility for lower (or maybe higher) than intended or claimed integrity can cause uncertainty
about its achievement.
2.4.4.3. Limit, reduce, or manage lack of integrity or validity of security-related resources
2.4.4.3.1.1. Valid, tamper-proof security-related data
Data needs to be accurate, up-to-date, and timely; and adequately resistant to tampering. Data may
consist of information or software.
2.4.4.3.1.1.1. Use up-to-date security-related data
Do not use possibly dirty cached values for security-critical data. Data needs to be up-to-date. This is particularly the case when an authorization or trust service is unavailable [Berg 2006, p.175].
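A hedged sketch of this guideline (the auth_service object is hypothetical): cached authorization decisions are honored only within a short freshness bound, and when the authorization service is unreachable the code fails closed rather than falling back to a possibly stale value:

    import time

    CACHE_TTL = 60        # illustrative bound on how stale a decision may be
    _cache = {}           # (user, action) -> (decision, timestamp)

    def is_authorized(user, action, auth_service):
        entry = _cache.get((user, action))
        if entry and time.time() - entry[1] < CACHE_TTL:
            return entry[0]                    # fresh enough to use
        try:
            decision = auth_service.check(user, action)
        except ConnectionError:
            return False                       # service unavailable: fail closed
        _cache[(user, action)] = (decision, time.time())
        return decision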
2.4.5. Limit, Reduce, or Manage Lack of Reliability or Availability of Security-related Resources
This includes facilities, tools, consultants, supplies, computers, networks, and any other resources.
2.4.6. Predictability – Limit, Reduce, or Manage Unpredictability of System Behavior
2.4.6.1. Use repeatable engineering process, means, and environment to produce predictably behaving product
The process used to produce a product is a substantial factor in the resulting product as well as in trustworthy information about it.
2.4.6.1.1. The Capability to Produce Low-Defect Software is a Prerequisite
Some might argue that this is not a logical necessity, but it appears to be generally true in practice.
This includes the ability to produce reliable and available software and preserve data integrity at least
under non-malicious conditions.
2.4.6.1.2. Overcome Maliciousness in All Parts of the Lifecycle
Security-related dangers exist throughout the lifecycle, and ill-intentioned persons, organizations, and
software likewise can exist anywhere within the lifecycle.
2.4.6.1.2.1. Make Collusion Necessary
The more people or resources that must all act against one for an attack to succeed, the less likely the attack is to succeed.
2.4.6.1.2.1.1. Minimize Substitution of Incompetence for Collusion
For example an incompetent reviewer who overlooks a backdoor in the code has the same result as a
reviewer who is in collusion with the author. The straightforward approach to this problem is to
ensure competence in personnel, and accuracy and trustworthiness in tools and the computing
environment.
2.4.6.1.2.2. Production Process Must Overcome Maliciousness
During the development of software, an insider could intentionally implant malicious code. The
malicious code could be an intentional backdoor to allow someone to remotely log in to the system
running the software or could be a time or logic bomb. Alternatively, the malicious code could be an
intentionally implanted vulnerability 22 that would allow the attacker to later exploit the vulnerability.
This method would provide the software company with plausible deniability of intent should the
vulnerable code be found.
2.4.6.1.2.3. Deployment Process Must Overcome Maliciousness
During deployment an attacker might:
• Change the product or update it after approval and before distribution;
• Usurp or alter the means of distribution;
• Change the product at user site before or during (as well as after) installation.
2.4.6.2. Ensure Engineering Artifacts Exist that Show How the System Meets Assured Requirements
See the discussion below covering assurance cases.
2.4.6.2.1. Engineering Artifacts Exist at All Levels Needed for Assurance
2.4.6.2.1.1.1. Ensure an architecture exists that shows how the system meets assured requirements
2.4.6.2.1.1.2. Ensure relevant rationales are recorded
2.4.6.3. Verifiability
2.4.6.3.1. Analyzability
Systems whose behavior is analyzable from their engineering descriptions such as design
specifications and code have a higher chance of performing correctly because relevant aspects of
their behavior can be predicted in advance. In any field, analysis techniques are never available for all
structures and substances. To ensure analyzability one must restrict structural arrangements and other
aspects to those that are analyzable.23
2.4.6.3.1.1. Predict Dynamic Behavior from Static Representation
2.4.6.3.1.2. Compositionality
One needs to be able to analyze and predict the behavior of compositions of components.
2.4.6.3.2. Empirical Verifiability
2.4.6.3.2.1. Testability
2.4.6.3.2.2. Simulatability
2.4.6.3.3. Reviewability
This includes all forms of review including technical and managerial ones and audits.
22 Such as the buffer overruns or race conditions discussed in Section 7 on Secure Software Construction.
23 Certainly one would never want a critical structure such as a bridge to be built using a structural arrangement whose behavior could not be analyzed and predicted. Why should this be different for critical software?
2.4.7. Informed Consent
Users and others deciding to use or otherwise depend on a system in circumstances that are dangerous or could result in
significant losses should do so with informed consent. While possibly not a legal requirement, this is both an ethical
question and, in advanced societies, a societal norm. As it is in some other industries such as air travel, some might
conceivably delegate their concerns about risks to third parties.
2.4.8. Limit, Reduce, or Manage Consequences or Risks related to Uncertainty
Consequences can exist related to the intended, claimed, or believed state of the situation including the system and its
future. But additional consequences may be possible because the real situation might be different – an uncertainty
exists. These must also be considered and sometimes dominate concerns.
2.4.8.1. Continuous Risk Management
Risk management is fundamental to security. Risk management must consider events or conditions to
which probabilities can be assigned and those where they cannot. A number of sections and their
principles and guidelines address limiting, reducing, or managing a variety of kinds of adverse
consequences, damage, or losses. These are also relevant here.
2.4.8.1.1. Consider Security and Assurance Risks throughout the Lifecycle
Consider security and assurance risks from conception and include them in all decisions where relevant
throughout the product and project lifespans – and, because of lingering consequences, beyond.
2.4.8.1.2. Challenge Assumptions
Assumptions may hide risks. Assumptions are discussed elsewhere under assurance.
2.4.8.1.3. Consider Consequences
2.4.8.1.3.1. Identify Possible Damage or Losses
2.4.8.1.3.1.1. Consider Mission or Service
2.4.8.1.3.2. Establish Sizes and Relationships
2.4.8.1.4. Limit, Reduce, or Manage Total Risks
2.4.8.1.4.1. Consider Security or Assurance Risks Together With Other Risks
Prevent, avoid, limit, reduce, or manage combined, possibly interrelated, risks. This includes all
risks, including project risks.
2.4.8.2. Risk Sharing
Risk-sharing arrangements can exist among stakeholders. For example, these might be between a
developer and customer or with a government or an insurance company.
2.4.9. Increase Assurance regarding Product
This is a restatement of reducing the uncertainty that the product meets its (security-related) claims. Claims should normally
involve security policy and consequences.
2.4.9.1. System Assurability
In order to provide adequate assurance, a system with the requisite assurability is needed.
2.4.9.2. Reduce Danger from other Software or Systems
2.4.9.2.1. Avoid and Work Around the Environment’s Security-Endangering Weaknesses
2.4.9.2.2. System Does what the Specification Calls for and Nothing Else
2.4.9.3. Limit or Reduce Complexity
2.4.9.3.1. Economy of Mechanism
“Keep the design as simple and small as possible” applies to any aspect of a system, but it deserves
emphasis for security-relevant mechanisms in order to allow adequate review and analysis.
Addressing every possibility is important, because obscure design and implementation faults
that result in unwanted access paths may not be noticed during normal use.
2.4.9.3.2. Keep It Small
2.4.9.3.2.1. Implement no unnecessary functionality
If a feature or functionality is not needed or not in the specifications, ensure that it is not included.
However, also ensure all that is needed is in the specifications.
2.4.9.3.2.2. Minimize the portion of the software that must be trusted
2.4.9.3.2.2.1. Minimal retained processes and state
For example, do not keep startup processes or state after startup, or normal running ones during shutdown [Schell 2005].
2.4.9.3.2.3. Minimized Security Elements
2.4.9.3.2.3.1. Limit number, size, and complexity.
• Minimize support for functionality that is outside the trusted software component(s)
• Exclude non-security-relevant functionality
• Localize or constrain dependencies of security elements
• Separate policy and mechanism
• Virtualize roles of hardware and device drivers
• Implement no unnecessary security functionality
2.4.9.3.2.3.2. Minimize the portion of the software that must be trusted
• Exclude non-security-relevant functionality
• Separate policy and mechanism
• Localize or constrain dependencies
• Maintain minimal retained state – e.g., do not keep startup processes or state after startup, or normal running ones during shutdown [Schell 2005]
• Virtualize roles of hardware and device drivers24
• Minimize support for functionality that is outside the trusted software component(s)
• Other ways to reduce possibilities include not implementing unnecessary functionality
24 For general information on virtualization, see the IEEE Computer special issue on virtualization: Renato Figueiredo, Peter A. Dinda, and Jose Fortes, “Guest Editors’ Introduction: Resource Virtualization Renaissance,” Computer, vol. 38, no. 5, pp. 28–31, May 2005.
2.4.9.3.2.3.3. Limit dependences
Localize or constrain dependencies, limiting security-relevant dependences for security functionality
and other elements.
2.4.9.3.3. Simplicity
The advantages for reduced uncertainty and enhanced confidence provided by simplicity are
appreciated throughout engineering as immortalized in the KISS principle. A recent book [Maeda
2006] lists and discusses a number of general guidelines that may be of interest.
2.4.9.3.3.1. Localization
2.4.9.3.3.2. Orderliness
Regularity and repeating patterns can aid this. A brute-force solution may be simple and orderly but
possibly inelegant.
2.4.9.3.3.3. Elegance
Elegant solutions are usually smaller and simpler than inelegant ones, although they may require
knowledge of additional insights to understand.
2.4.9.3.3.4. Control complexity with multiple perspectives and multiple levels of abstraction
Control complexity with multiple perspectives and multiple levels of abstraction by such methods as
listed below.
2.4.9.3.3.4.1. Use information hiding and encapsulation
2.4.9.3.3.4.2. Clear Abstractions
2.4.9.3.3.4.3. Partially ordered Dependencies
2.4.9.3.3.4.3.1. Use layering
This is an example of a larger idea of constraining dependency.
2.4.9.3.3.4.3.1.1. Layers should not be bypassed
See [Berg 2006, p. 233]. This is a tenet of layered virtual machine architecture, but it is sometimes
violated by designs with portions covering more than one layer.
2.4.9.3.4. Straightforward Composition
2.4.9.3.4.1. Trustworthy Components
The requirement to deal with components of inadequate trustworthiness complicates the problem as
they then require supplemental defenses and/or avoidance of dependence on them.
2.4.9.3.4.2. Self-reliant Trustworthiness
The trustworthiness of the component does not rely on the trustworthiness of other components.
Accountability issues can also arise – for example, does the component keep logs?
2.4.9.3.4.3. Trustworthy Composition Mechanisms
For example, remote procedure calls, communication, and shared memory – persistent or transient –
must be adequately trustworthy. See also Trust Management.
2.4.9.3.4.3.1. Trustworthy Invocation
2.4.9.3.4.3.2. Trustworthy Information Sharing
2.4.9.3.4.3.3. Trustworthy Concurrency
Concurrency is difficult for humans to do correctly, and therefore often a source of uncertainty.
2.4.9.3.4.3.3.1. Simple, Machine Analyzable Concurrency
2.4.9.3.4.3.3.2. Atomic Transactions/Interactions
2.4.9.3.4.3.3.3. Trustworthy Distributed System
2.4.9.3.4.3.3.3.1. Trustworthy Communication – Reliable, Available, with Integrity, Accountability and Non-repudiation, and Confidentiality
2.4.9.3.4.3.3.3.2. Handle Distributed Element or Communication Failures
2.4.9.3.4.3.3.3.3. Guard Boundaries among Elements
2.4.9.3.4.3.3.3.4. Delays Prohibit Simultaneity of Shared Values
2.4.9.3.4.3.3.3.5. Use GPS for Clocks
2.4.9.3.4.4. Composability and Additivity
See below.
2.4.9.3.5. To Improve Design, Study Previous Solutions to Similar Problems
See [jabir 1998].
2.4.9.3.5.1. Use known security techniques and solutions
2.4.9.3.5.2. Employ security design patterns
2.4.9.3.5.3. Use standards25
2.4.9.4. Predictable Change
Of course, changed systems must meet the requirements for assurance at least as well as the original.
In addition, several guidelines exist related to changes.
2.4.9.4.1. Plan and Design for Change
Include consideration for security and its assurance.
2.4.9.5. Change Slowly
2.4.9.5.1. Use a Stable Architecture
While one main reason is to ease maintenance of the assurance case, a stable architecture is also useful to
• Ease evolution of the system, as the architecture must be learned only once and relevant interfaces of components can remain unchanged
• Facilitate achievement of security requirements and evolution
• Eliminate possibilities for violations – particularly of information flow policies
2.4.9.5.1.1. Amenable to Supporting Assurance Arguments and Evidence
The design and assurance case for security-relevant portions or facets should be
• Amenable to review and analysis including formal proofs
• Organized to facilitate localization of arguments and evidence, particularly for local or likely changes
A separate trusted computing base can isolate security policy concerns and localize reasoning and
assurance arguments.
2.4.9.5.1.2. Amenable to system change
The items under the prior entry also facilitate change, particularly localization.
25 Within design constraints and ensuring analyzability, security based on sound, open standards may aid in evolution, portability, and interoperability, as well as assurance.
2.4.9.6. Assure Security of Product
2.4.9.6.1. Create and Maintain an Assurance Case
A number of principles or guidelines for improving the quality of assurance cases are given in
[Redwine 2007] – too many to include them all here. All interested readers should see [Redwine
2007].
2.4.9.6.1.1. Better Decisions
The primary purpose of assurance cases is to improve stakeholder decision making by reducing
uncertainties related to claims in the assurance case.
2.4.9.6.1.1.1. Consider Users of the Assurance Case in its Requirements and Design
Both kinds of users and their decisions, and particular users or decision makers may need to be
considered. For example, the kinds of doubts harbored by its key decision making users and the kinds
of evidence that gives them the most confidence influence requirements and design of an assurance
case.
2.4.9.6.1.1.2. Present Assurance Case in Forms Suitable to Support Decision-making
2.4.9.6.1.2. Eliminate or Analyze elsewhere Consequences or Risks Not Addressed in Assurance Case
2.4.9.6.2. All Claims are Conditional
2.4.9.6.3. Grounds for Confidence is Not Same as Confidence
2.4.9.6.4. Trustworthiness is Not Same as Placing Trust
2.4.9.6.5. Verifiability
2.4.9.6.5.1. Design to Ease Traceability, Verification, Validation, and Evaluation
Includes certification and accreditation
2.4.9.6.5.1.1. Employ Security Design Patterns that are Verifiable
2.4.9.6.5.2. Review, Analyze, and Test to Provide Evidence for Assurance Case
The objectives should relate to providing support or refutation to an argument or arguments within
the assurance case often through the support of a sub-claim. Each test should have an established
objective and its results should be interpretable and preferably significant – have meaning and be
meaningful.
2.4.9.6.5.3. Reviewability
Make work products easier to review, for example organized and understandable, and with atomic
identifiable items, separation of concerns, and comparable organizations across artifacts that will
need to be compared. More concerning reviews is covered under Rigorous Engineering and
elsewhere below
2.4.9.6.5.4. Analyzability
For security concerns, this means that, at a minimum, security-relevant behavioral aspects need to be
analyzable. This analysis might include sequential behavior and concurrency for the existence of the
potential to violate the required security properties. Almost all concurrent structures require
automated analysis for adequate confidence.
Object orientation has merit; but beware the difficulties of ensuring the correctness of inheritance,
polymorphism, and generics. Indeed, one may need to avoid these unless adequate means of analysis
and assurance for their use are in place.
Many kinds of analysis exist, both manual and automated. These vary in power, rigor, credibility,
and validity.
2.4.9.6.5.4.1. Cover all possibilities
2.4.9.6.5.4.1.1. Review Everything
More concerning this is covered under Rigorous Engineering below.
2.4.9.6.5.4.1.2. Use formal methods
2.4.9.6.5.4.1.2.1. Design to Ease Proofs
Make both verification conditions and proofs simpler or easier.
2.4.9.6.5.4.1.2.2. Deterministic, Total Functions
Generally, proofs are easier when the system’s operations handle all possible input and always have
the same result for the same input (possibly including state). Avoid timing dependences. Some
advocate synchronizing all tasks and threads in multitasking/multithreaded systems.
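For illustration only (the function and its categories are invented for this example, not taken from the original text), a deterministic, total function handles every possible input, including invalid values, with a defined result:

    def classify_age(age: int) -> str:
        # Total over all integer inputs: every case, including invalid ones,
        # maps to a defined result, so behavior is predictable and easier to
        # reason about in proofs.
        if age < 0:
            return "invalid"
        if age < 18:
            return "minor"
        return "adult"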
2.4.9.6.5.4.1.2.3. Beware of attacks beneath the level of analysis
For example, attacks via binary code after analysis based on source code.
2.4.9.6.5.4.1.2.4. Beware of attacks lower in the uses hierarchy
That is attacks on or caused by entities or items one is dependent upon.
2.4.9.6.5.4.1.3. Beware of attacks after analysis (or review)
These attacks will be closer to occurrences of execution. While some such attacks will also fall under
the above warnings to beware, some may not. Integrity of or changes to items analyzed or recorded
results of analyses may be issues.
2.4.9.6.5.4.2. Do static analyses for weaknesses
This can apply to many artifacts describing the system. In particular:
2.4.9.6.5.4.2.1. Do static analyses for code weaknesses
2.4.9.6.5.4.2.2. Do static analyses for design security weaknesses
2.4.9.6.5.4.3. Compositionality
All compositions must be analyzable for the relevant properties and preferably for correctness.
2.4.9.6.5.5. Testability
2.4.9.6.5.5.1. Testability and Testing’s Incompleteness
2.4.9.6.5.5.1.1. Random Tests
Since everything is possible, random testing – while not covering all possibilities – may expose
conditions that have not been considered. Even though a precise, all-encompassing oracle for such
testing may be hard to construct, random testing can be useful especially for older software.
However, good software should never fail such a random or fuzz test when a crude oracle such as
“did it crash” is used.
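A minimal fuzzing sketch along these lines (the parse function under test is an assumed placeholder); the crude oracle is simply that no unhandled exception escapes, whatever the input:

    import random

    def fuzz(parse, iterations=10_000, max_len=256):
        for _ in range(iterations):
            data = bytes(random.randrange(256)
                         for _ in range(random.randrange(max_len)))
            try:
                parse(data)
            except ValueError:
                pass                    # controlled rejection is acceptable
            except Exception as e:
                print("crash on input", data[:40], "->", repr(e))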
2.4.9.6.5.5.1.2. Testing can show the existence of vulnerabilities but not their absence
Attack or penetration testing seldom if ever allows the conclusion that a system is of high quality.
Likewise, for weaknesses, one wishes to ensure the absence of whole categories not one by one.
2.4.9.6.6. Composability
John Rushby has stated one should have:
2.4.9.6.6.1. No unknown interfaces
Generally, this implies no unspecified interfaces and all included in relevant analyses.
2.4.9.6.6.1.1. No unintended interaction, including through shared resources
2.4.9.6.6.1.1.1. Isolation, separation, controlled boundaries, or partitioning
These are central to showing lack of interactions.
2.4.9.6.6.1.2. No interference of components with each other, or with partitioning or separation mechanisms
2.4.9.6.6.2. Ensure security preserving composition at all levels of detail
2.4.9.6.6.3. Secure Distributed Composition
This is a common problem where concurrency, communication, delays, and survivability
(particularly survivable security) are also usually concerns.
2.4.9.6.6.4. Additivity
Components combine to achieve overall properties even though properties and functionality may be
different. For example, operating system combined with file system.
2.4.9.6.6.5. Ease Production of an Accompanying Assurance Case for the Security-Related Correctness of Compositions
2.4.9.6.7. Chain of Custody
Ensure all (computing) entities (e.g. information) that one depends upon have a history that ensures
adequate integrity and preferably validity. This may include origination, storage, and communication
both one-way and multi-way.
2.4.9.6.8. Chain of Trust
Understand and enforce the chain of trust established by validating each component of hardware
and software from the bottom up to ensure that only trusted software and hardware can be used or is
depended upon.
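As a hedged sketch of the idea (the file names, digests, and protected manifest are all hypothetical placeholders), each component is checked against an expected digest before anything is allowed to depend on it:

    import hashlib

    # Manifest of expected digests; assumed to be protected itself, e.g.,
    # anchored in hardware or signed by an already-trusted component.
    EXPECTED_SHA256 = {
        "bootloader.bin": "ad7f...",      # placeholder digest
        "kernel.img": "9c1e...",          # placeholder digest
    }

    def verify_component(path):
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest != EXPECTED_SHA256.get(path):
            raise RuntimeError(path + " failed integrity check; refusing to continue")

    for component in ("bootloader.bin", "kernel.img"):
        verify_component(component)       # validate each layer before relying on it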
2.4.9.7. Use Production Process and Means that Ease and Increase Assurance
2.4.9.7.1. Build the Right Thing, Build it Right, and Show it is Right
2.4.9.7.1.1. Showing it is Right is Easier if it Is
2.4.9.7.1.1.1. Showing it is Right is Easier if it was Built Right
2.4.9.7.1.1.2. Do it Right Starting from the Beginning
2.4.9.7.1.2. Showing it is Right is Easier if Planned from Beginning
2.4.9.7.1.3. Feasibility includes Feasibility of Adequately Showing it is Adequate
2.4.9.7.2. Ease Creation and Maintenance of an Assurance Case
This issue has been mentioned in several guises already, but it is an important concern.
2.4.9.7.3. Process
2.4.9.7.3.1. Use Repeatable, Documented Procedures
2.4.9.7.3.2. Make Assurance Case Integral to Process
2.4.9.7.3.2.1. Assurance Case is Considered from Beginning and Influences all Activities and Products
2.4.9.7.3.2.2. Plan, Develop, and Maintain Assurance Case Concurrently with Software System
2.4.9.7.3.2.3. Starting from Conception every Activity Considers and is Influenced by Assurance Case
2.4.9.7.3.3. Be Ready and have Wherewithal
Be ready (including proper skills) and have wherewithal before needed for a task.
2.4.9.7.3.4. Adequate Time
2.4.9.7.3.5. Flexibility to Increase Resources and Time if Needed
2.4.9.7.3.6. If process output doesn’t work under non-malicious conditions, it won’t work under malicious ones
Normally, the best behavior one can expect of a product in use is under non-malicious conditions.
2.4.9.7.3.7. Address Maliciousness
2.4.9.7.3.8. Continuous Improvement
2.4.9.7.4. High-Quality Means and Resources
2.4.9.7.4.1. Trustworthy Means and Resources
2.4.9.7.4.1.1. Trustworthy People
2.4.9.7.4.1.2. Trustworthy Tools
2.4.9.7.4.1.3. Trustworthy Environment for Development
2.4.9.7.4.1.4. Tolerate Untrustworthiness and Maliciousness
2.4.9.7.4.1.5. Eliminate Untrustworthiness and Maliciousness
2.4.9.7.4.1.6. Trustworthy Environment
2.4.9.7.4.2. Capable Means and Resources
2.4.9.7.4.2.1. Powerful, Reliable Toolset
2.4.9.7.4.2.2. Tools are further addressed under Engineering Rigor.
2.4.9.7.4.2.3. High Levels of Expertise and Skill
2.4.9.7.4.2.3.1. Software Engineering (and other required engineering fields)
2.4.9.7.4.2.3.2. Security Engineering and Technology
2.4.9.7.4.2.3.3. Management of High-Assurance Projects
2.4.9.7.4.2.3.4. Have expertise in technologies being used and in application domain
2.4.9.7.4.3. Readiness of Means and Resources
This includes high (or at least adequate) readiness of persons, organizations, tools and mechanisms,
facilities, and other means and resources.
2.4.9.7.4.4. Everything Traditionally done for Correctness and Quality has Potential for Security
This includes measurement, process, defects, personnel and other areas. Sometimes essentially the
same thing is suitable – possibly more rigorously done – and sometimes an analogy is appropriate.
On the other hand recognize that high security requires more than just more of the same; do not try to
get to the moon by climbing higher and higher trees.
2.4.9.7.4.5. Continuous Improvement
This is a powerful concept and central to traditional approaches to quality. No organization can be
expected to achieve and maintain high levels of quality without continuously striving to improve.
2.4.9.7.5. Engineering Rigor
Roughly, the term rigor means unrelenting strictness or toughness in the use of precise and exacting
methods and unwillingness to make allowances. One may also speak of the degree thereof. Security
is often about dealing with intelligent adversaries and all the possibilities. Under these conditions,
casual engineering is quite likely to result in problems. Many of the relevant principles and guidelines
were covered above under analyzability and related topics. These included the use of formal methods.
Here the emphases are on tools and reviews.
2.4.9.7.5.1. Engineer Thoroughly and Rigorously
2.4.9.7.5.1.1. Everything under Configuration Management
This is an essential aid for integrity and accountability. See [Berg 2006, p. 547].
2.4.9.7.5.1.2. Implement Secure Configuration Management
Secure CM procedures and tools help prevent, for example, insertion during development of
malicious code or intentional vulnerability.
2.4.9.7.5.1.3. Choose and use notations and tools that facilitate achieving security and its assurance
2.4.9.7.5.1.3.1. Know One’s Tools
Security and reliability are likely to be better if tool selectors and users thoroughly understand the
tools and notations and users are skilled in their use.
2.4.9.7.5.1.3.1.1. Tool Usability
2.4.9.7.5.1.3.1.2. Skilled at Security-related Aspects of Tools
2.4.9.7.5.1.3.2. Tool Performance Rigor and Assurance
Tool performs correctly (and nothing else) and an assurance case exists to show this.
2.4.9.7.5.1.3.3. Tool Product Rigor and Assurability
Tool product is correct and the tool detects (and possibly deals with) problems with input and output as
well as possibly its own integrity or other relevant properties. A second tool that (independently and
reliably) checks the results of the first can reduce uncertainty. For example, a theorem prover paired
with a separately developed, small (easy to assure) proof checker.
2.4.9.7.5.1.3.3.1. Use strongly typed languages preferably with safe typing
2.4.9.7.5.1.3.3.2. Use notations with formal semantics
2.4.9.7.5.1.3.3.3. Employ tools having rigorously verifiable results
2.4.9.7.5.1.3.3.4. Employ standards and guidelines for tool and notation usage
2.4.9.7.5.1.3.3.5. Use coding standards suitable for security
This includes the areas of importance for correctness, predictability and analyzability, and
reviewability and change as well as security. For example, consistent style and naming, strong and
safe typing, and avoiding exceptions and implementing safe exception handling.
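The following fragment merely illustrates the flavor of such rules (the function and constant are invented for this example): named constants instead of magic numbers, explicit type annotations, validated input, and no exception able to escape:

    from typing import Optional

    MAX_NAME_LENGTH = 64                  # named constant, not a magic number

    def find_user_id(name: str, directory: dict) -> Optional[int]:
        # Reject out-of-range input explicitly; no exception escapes.
        if not name or len(name) > MAX_NAME_LENGTH:
            return None
        return directory.get(name)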
2.4.9.7.5.1.3.3.6. Deal explicitly with finiteness of computer from beginning
For example, all types are finite.
2.4.9.7.5.1.3.4. Tool’s Products’ Self-Defense
Does the product the tool produces have self-detection or protection?
2.4.9.7.5.1.3.4.1. Compiler Produces Product That Detects (and repairs) Tampering
Preserves integrity of executing code
2.4.9.7.5.1.3.4.2. Code produced supports all operations needed for security
For example, it does not produce code where the final overwrite of secrets is optimized away.
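A rough analogue in Python (the load_secret and do_work_with callables are hypothetical): key material is held in a mutable buffer so a final overwrite is actually possible, and the point of the guideline is that tooling must not silently drop that seemingly useless last write:

    def use_secret(load_secret, do_work_with):
        # Immutable str/bytes cannot be reliably erased, so use a bytearray.
        secret = bytearray(load_secret())
        try:
            do_work_with(secret)
        finally:
            for i in range(len(secret)):   # explicit final overwrite of the secret
                secret[i] = 0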
2.4.9.7.5.2. Review Everything
This is important as faults in any engineering or management artifact could potentially propagate
causing serious consequences. It is also one of the key ways to ensure quality and reduce defects and
risks.
2.4.9.7.5.2.1. Review Everything Thoroughly and Rigorously
This is important to any engineering artifact that affects serious consequences. It is also one of the
key ways to ensure quality and reduce defects and risks.
2.4.9.7.5.2.1.1. Reviews include Comparing
All reviews for the purpose of verification (build it right) should involve at least two artifacts – for
example, specification and design, or module specification and code. Whenever possible, this should
also be true within reviews to aid validation (build the right thing).
2.4.9.7.5.2.1.1.1. Review against Security Policy
In particular, one must compare high-level specifications to the system’s security policy (security
requirements, preferably formalized) and could benefit by comparing other kinds of artifacts
whenever practical.
2.4.9.7.5.2.2. Reviewed by Different Relevant Areas of Expertise
This is always needed but is particularly important in software system security where the expertise is
not widespread and can involve multiple specialties even within security.
2.4.9.7.5.2.3. Open Design
Security mechanisms should not depend on the ignorance of potential attackers, but rather on
assurance of correctness and/or the possession of specific, more easily protected secrets such as keys or
passwords. This and an explicit, recorded design permit the mechanisms to be examined by a
number of reviewers without concern that the review may itself compromise the safeguards.
The practice of openly exposing one’s design to scrutiny is not universally accepted. The notion that
the mechanism should not depend on ignorance is generally accepted, but some would argue that its
design should remain secret since a secret design may have the additional advantage of significantly
raising the price of penetration. This principle still can be applied, however, restricted to within an
organization or a “trusted” group.
2.4.9.7.5.2.3.1. Documentation of Design Assumptions
Assumptions need to be recorded so they are remembered, and then reviewed and analyzed.
2.4.9.7.5.2.3.2. Review for Use of Design Principles (and guidelines)
This is one of the key potential uses of the information in this document and other more concrete
guidelines.
2.4.9.7.5.2.3.3. Vulnerability Analyses
Perform Vulnerability Analyses at each level of abstraction including architecture risk analysis.
2.4.9.7.5.2.3.3.1. Identify Weaknesses
The analysis to show that a weakness or poor security practice does not actually result in a vulnerability
may require significant effort and care, and weaknesses might become vulnerabilities as the system changes
during its evolution. Thus, many suggest that
• As far as practical, correct weaknesses whenever found
2.4.9.7.5.3. Test Thoroughly and Rigorously
This involves both security-oriented and non-security-oriented testing.
2.4.9.7.5.3.1. Risk-based Testing
Normally, testing resources are limited and decisions must be made on how to use them. These
decisions should be based on the assurance case and the risks reflected in it.
2.4.9.7.5.3.1.1. Test to Support Assurance Case
This involves both attempts to confirm and to contradict or disconfirm.
2.4.9.7.5.3.1.2. Test Critical Assumptions
2.4.9.7.5.3.2. Continue Security Testing after Delivery
Not only might this allow one to discover vulnerabilities in delivered products before others do, but
new kinds of attacks become known that one did not previously know to use in testing.
2.4.9.7.6. Avoid Known Pitfalls
Avoid known pitfalls, omissions, and mistakes. Do not place them (or leave them) in products and
avoid them in others’ products. Have expertise, technologies, and the other wherewithal sufficient for
success.
2.4.9.7.6.1. Avoid Known Vulnerabilities
2.4.9.7.6.1.1. Avoid Security Weaknesses and Poor Practices
2.4.9.7.6.1.1.1. Eliminate Root Causes
2.4.9.7.6.2. Avoid and Workaround Tools’ Security Endangering Weaknesses
2.4.9.7.6.3. Avoid Non-malicious Pitfalls
2.4.9.7.6.3.1. Avoid Known Poor Practices
2.4.9.7.6.3.2. Avoid Common and Prior Mistakes
2.4.9.7.6.3.3. Avoid Sources of Mistakes and Faults
2.4.9.7.6.3.3.1. Eliminate Root Causes
2.4.9.7.6.3.3.2. Avoid Mistake-prone Processes
2.4.9.7.6.3.3.3. Avoid Mistake-prone Entities
In particular, do not use mistake-prone people or tools.
2.4.9.7.6.4. If Software System Doesn’t Work under Non-Malicious Conditions, It Won’t Work Under Malicious Ones
Normally, the best behavior one can expect of a product is under non-malicious conditions.
2.4.9.7.6.5. Avoid Maliciousness-related Pitfalls
This involves not just the results of maliciousness, but also sources and opportunities for it as well as
detection and investigation. Thus, it encompasses processes plus potentially all kinds of security –
physical, personnel, operational, etc. Various pitfalls are mentioned throughout this document, but
others exist and are documented in the literature related to for example, conflict, law enforcement,
and security.
2.4.9.7.6.5.1. Prevent or Eliminate Root Causes
2.4.9.7.6.5.2. Find and Fix
3. The Environment
The environment is everything other than the system and the known actual or potential attackers or violators covered in
the adversary stream. However, similar attackers or violators may reside in the environment with their identities
unknown or uncertain. Therefore, one of the concerns is to identify and understand sources of danger in the
environment.
This section addresses some relevant characteristics of environments and environment-related benefits, losses, and
uncertainties. In particular, the last addresses trust and trust management issues.
3.1. Nature of Environment
Driven in part by the Internet, the environment of computing systems has become quite complex. Interaction and
interdependence continue to increase. This, of course, provides increasing opportunities for not only the owners and
users of systems but also attackers. Attackers often exist embedded in the environment, communicate through it, and
attack through the system’s boundary with its environment.
3.1.1. Security is a System, Organizational, and Societal Problem
Security is not merely a software concern, and mistaken decisions can result from confining attention to only software.
Attempts to break security are often part of a larger conflict such as business competition, crime and law enforcement,
social protest, political rivalry, or conflicts among nation states, e.g. espionage. Specifically in secure software or
system development, the social norms and ethics of members of the development team and organization as well as
suppliers deserve serious attention. Insiders constitute one of the most dangerous populations of potential attackers, and
communicating successfully with them concerning their responsibilities and obligations is important in avoiding
subversion or other damage.
One could make many obvious points regarding the environment, and not all are included here. Some of these might be:
• Malicious Entities Exist
• Incompetence is Widespread
• Don’t Trust What You Hear26
Many of the items under The Adversary section and some under The System have aspects relevant to or could apply in
some way to The Environment. This section repeats only the most salient of these, but one should keep all in mind.
3.1.2. The Conflict Extends beyond Computing
Many motivations and conflicts that result in computer security problems originate in larger conflicts resulting from the
goals and interests of nation states, organized crime and its victims (and law enforcement), and ranging down to such
situations as estranged spouses. Computing is often only one of several battlespaces27 in which the conflict is taking
place, and defenders should not act as if it is the only one in which fighting occurs and gains are achieved or losses suffered.
3.1.2.1. Non-computing Aspects of Security are Important
This includes physical, operational, human including personnel, and communication security as well as possibly any of
the many security subspecialties such as the judiciary, bodyguards, and counter-intelligence. Attackers will seek the
weak points whether directly computing-related or not.
26 Misinformation, lying, concealing information, incorrect observation, and misrepresentation and misinterpretation are common. Motivation need not be malicious. Studies show most people frequently lie or omit relevant information for such reasons as smoothing or improving interpersonal relationships, making excuses, avoiding embarrassment, and following cultural norms. Relevant clichés include, “Don’t believe everything you read in the newspaper,” and the military officers’ one that, “The first report (from a battle) is always wrong.”
27 The term “battlespace” is a generalization of the term “battlefield”.
3.1.2.2. “Security-affecting” Entities
Even exclusive of outside attackers, the set of entities that affect security related to a computing system operation and
use includes operators, users, facilities, hardware, software, services used, and others. (Also, remember sometimes a
system is even called a system of systems.) Many of these might be in a system’s environment.
3.1.3. New Technologies Have Security Problems
Intentionally or unintentionally, persons originating and producing new technologies seldom have the expertise,
interest, funds, or time to consider and accommodate security, including privacy. Even if the technology-related risks
are low in its original use, successful technology is likely to spread to dangerous uses and environments. Recent
examples include Ajax website technology, RFID, net-centric warfare, and mobile phones, including the recent addition of an
ability to pay vending machines that potentially allows mass man-in-the-middle attacks taking 9.99 from each phone.
The history of new technologies has lessons for both technology developers and users – and attackers. Generally, only
the last appears to have learned these lessons.
3.2. Benefits to and from Environment
Some of the benefits to entities in the environment come from supplying secure services and what is learned from the
experience related to the system throughout its lifecycle. The other concerns include not causing harm to entities in the
environment and their security as well as providing benefits to the environment and benefiting from it while not adding
unnecessary risks (or at least unwarranted ones).
3.2.1. Utilize Security Mechanisms Existing in Environment to Enhance One's Security
The environment’s security mechanisms should have no unwarranted trust extended to them and might best be used as
a supplement. However, sometimes the most trustworthy mechanisms are supplied by the environment.
3.2.1.1. Do Not Allow Software to Continue Operating when Exposed because Relied-upon External Defenses Fail28
3.2.2. Create, Learn, and Adapt and Improve Organizational Policy
An organization should have an organizational security policy relevant to computing. During the lifecycle of a system
discoveries may be made and lessons may be learned that should be codified in the organization’s security policy.
3.2.3. Learn from Environment
Use information from the environment – from suppliers, users of the same products, security vendors, software experts,
publications, the Internet, and other sources. Learning from others’ problems, experiences, and accumulated knowledge is
generally less expensive than learning by recreating it.
3.2.3.1. Gather and Analyze Intelligence
Much information is available on attackers and their behavior as well as newly recognized
vulnerabilities.
3.2.4. Help, but do not Help Attackers
A tradeoff exists between sharing information and guidance among those interested in improving the security of
software systems and the possibility that sharing information may make it more likely to be disclosed to potential
attackers.
3.2.4.1. Notifying Suppliers
Notify developers of any vulnerability you discover in software that you use, but do so using their
method of secure reporting.
28 Suggested by Karen M. Goertzel.
3.2.4.2. Supply Customers with Information for their Decisions, but Not Information that would Aid Attackers
Customers benefit from the information they need to make system-security-related decisions.
Normally, this needed information does not include technical details of vulnerabilities or other
actionable information for attackers.
3.2.4.3. Form Alliances with Trustworthy Allies
Sharing information and experience with others can be mutually beneficial. Groups or entities already
exist for this purpose – many of them industry-oriented.
3.3. Limit, Reduce, or Manage Environment-Related Losses
3.3.1. Do Not Cause Security Problems for Systems in the Environment
Care needs to be taken to not be a danger or cause damage to others.
3.3.2. Do Not Thwart Security Mechanisms in Environment
While thwarting protections that exist in the environment may sometimes seem convenient, this can increase the risks
for others as well as oneself.
3.3.3. Avoid Dependence
Avoiding dependence is a general principle that can apply both inside and outside the system. This is particularly true
of dependence over which one does not have control and about which one lacks knowledge or most importantly an
adequate assurance case. However, within the system, avoiding dependence seldom adequately justifies having more than
a single instance of something. On the other hand, it can motivate having multiple means of doing the same thing to
avoid overdependence on one.
Dependence is an underlying issue in many principles and guidelines. For example, see the discussions under
uncertainty of (1) assurance in the system section and (2) trust in the next subsection.
3.3.3.1. Dependence means Exposure
If one relies on an entity or service, then one is exposed to whatever consequences exist of its
undependability or untrustworthiness – or of one’s misunderstandings regarding it.
3.3.3.2. Avoid Dependence on Protection by Environment
Relying on protections provided outside the system can raise questions of control, quality, and
trustworthiness.
3.3.4. Presume Environment is Dangerous
Unless shown otherwise, presume the environment and connections to it are potential sources of danger and damage.
Some of this comes from attackers using the environment, but some derives from the environment itself, even when
non-malicious.
3.3.4.1. Secure System in Insecure Environment
At some level, most systems attempting to be secure have the problem of attempting it in an insecure
environment. This includes not only connections to the outside but any infrastructure upon which it is
dependent. These surrounding insecurities may fundamentally undermine many systems, and most
attempt to survive on half-measures and mitigations – and an ability to (somewhat) recover. Real
separation from entities can provide useful protection. Cross-checking can also be useful somewhat
similarly to its usefulness in survivability.
3.3.4.2. Presume Attackers Use Environment
Much of the interaction that outside attackers may have with the system would be through the
environment. Generally, one cannot depend on interactions with the environment to never be with
attackers or their proxies.
3.3.4.3. Protect against Environment
In addition to maliciousness, non-malicious entities can nevertheless be dangerous. Incompetence,
ignorance, innocent mistakes, exploration, acts of nature, or sheer complexity can all cause
problems. Non-malicious suppliers of software, services, and other items may have faults and failures
that unless dealt with may cause security problems. Entities interacted with over networks or other
connecting communication paths may behave in undesirable ways. Everything is possible, and one
needs adequate assurance before placing trust.
3.3.4.3.1. Do Not Rely Only on Hiding for Protection from Environment
This may or may not be successful as inadvertent undesirable interaction is still possible. So one
should have additional defenses and not rely primarily on obfuscation or hiding.
3.3.4.4. Implement Trustworthy Distribution Protections and Procedures
This should be carefully designed and assured, and become routine.
3.4. Limit, Reduce, or Manage Environment-Related Uncertainties
To limit uncertainties about the environment one needs to have adequate knowledge about it and manage trust
appropriately. This subsection addresses these plus the dangers of third parties.
3.4.1. Know One’s Environment
As in any conflict, knowledge of the terrain or playing field and entities involved is useful – and not knowing is risky.
3.4.1.1. Avoid Assumptions about Environment
Assumptions should not cover up ignorance or fears. On the other hand, assumptions should be things
with lots of evidence that they are true but not worth the effort to collect and analyze.
3.4.1.1.1. Do Not Assume an Entity is Human
3.4.1.1.1.1. Use Reverse Turing Test
3.4.1.2. Make only Weak, Non-critical Assumptions about Environment
This should be true individually and collectively. Preferably, assumptions should have
• Low uncertainty
• Low criticality
  o Low risk because of assumptions’ locations in design rationale and assurance arguments
  o Small effect if false rather than true
• Weakness
  o Only applies in limited circumstances
  o (Weak) conclusion allowing wide latitude in assumption
  o Changes little from not making the assumption
and be few in number.
3.4.2. Limit, Reduce, or Manage Trust
Usually, for one to place trust in an entity (or group of entities) may expose one to losses. However, trust also
includes an element of uncertainty, and therefore trust is mainly covered in this subsection in order to bring many of
the trust-related principles or guidelines together into a more comprehensive structure.
3.4.2.1. Trust Only Services or Components in the Environment that are Adequately Known to be Adequately Trustworthy
“Shown to be trustworthy” or “with adequately assured trustworthiness” might be even better words
to replace “known to be trustworthy” in this principle or guideline as possibly might “known not to
be dangerous.” Reuse designs and components only if known to be (or can be shown to be) secure
and dependably responsible in the fashion required for this system.
3.4.2.2. Nothing is Trustworthy for Everything for All Time
3.4.2.2.1. Conditional Trust
Explicitly decide and record the conditions and applicability related to trustworthiness and the
placing of trust.
3.4.2.2.2. Limit Trust Extended
Trust responsibilities not extended are risks not taken.
3.4.2.2.2.1. Circumscribe Roles of Trusted Third-Parties
Limit the power of and protect against problems deriving from third parties.
3.4.2.2.2.2. Trustworthiness Does Not Necessitate Trust
Just because an entity is (believed to be) trustworthy does not mean one must place trust in it.
3.4.2.2.3. Trust Does Not Scale Well29
3.4.2.2.3.1. Technology Scales Up Easier than Trust
3.4.2.3. More Trustworthy Components Do Not Depend on Less Trustworthy Services or Entities in Environment
More trustworthy components should not have dependences on less trustworthy ones.
3.4.2.3.1. Identify all dependencies
This is a fundamental necessity to identify all risks.
3.4.2.3.2. Do not invoke from within the system external services that are untrustworthy or of unknown trustworthiness
This is an analogue of Lattice Protection. These kinds of invocations have been a temptation to many
who are designing or employing service-oriented architectures. However, they are dependences with
the usual associated risks plus those associated with lack of control when one does not control
services and entities one depends upon.
3.4.2.3.2.1. Avoid Known Security Weaknesses in Environment
Use only appropriate, safe calls and APIs to external entities and resources; and validate parameters
from external components. Avoid use of services, protocols, and technologies known to contain
security faults or defects.
3.4.2.3.2.2. Avoid Opportunities for Environment-related Security Problems
This includes not only validating inputs, but avoiding the many other kinds of problems mentioned
throughout this document.
3.4.3. Ensure Adequate Assurance for Dependences
One not only needs adequately low uncertainty about one’s own components and the services on which one depends but
also about those supplied by others.
3.4.3.1. Require assurance case(s) for dependencies
One assurance case could cover one or several dependencies.
29 The author first heard this from Brian D. Snow.
3.4.3.2. Do not trust without knowledge
3.4.3.2.1. Do Not Trust what is Invisible
3.4.3.2.2. Do Not Trust what You Do Not Understand
3.4.3.2.3. Do Not Trust what Might have Changed
3.4.3.2.4. Do Not Trust Input from (or interactions with) Unknown or Unauthenticated Source
3.4.3.3. Do not trust what you do not control
3.4.3.3.1. Do Not Trust Clients
The clients referred to here are nominally clients in client-server architectures. Particularly on the
Internet, clients often are entities that are totally under the control of a different party than that of the
server. Thus, the server should not trust the client.
3.4.3.3.1.1. Ensure identification of client and security of communication
3.4.3.3.1.2. Do not trust input from client
3.4.3.3.1.2.1. Do not trust unsupported claims by client
3.4.3.3.1.3. Do not put any secrets in client software
This has been a particular problem in the online game industry. This also includes ensuring that
secrets cannot be inferred from what is in the client.
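A brief server-side sketch of these guidelines (the form, session, and perform_admin_action names are hypothetical, not from the original text): every client-supplied field is validated on the server, and privileges are taken from server-held state rather than from anything the client claims:

    import re

    USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{1,32}$")

    def handle_request(form, session, perform_admin_action):
        username = form.get("username", "")
        if not USERNAME_RE.fullmatch(username):
            raise ValueError("rejected malformed username")
        # Never accept authorization claims sent by the client; use the
        # server's own session state to decide privileges.
        if not session.get("is_admin", False):
            raise PermissionError("administrative action not permitted")
        return perform_admin_action(username)   # privileged operation supplied by the server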
3.4.3.3.2. Do Not Send Any Unauthorized Secrets to the Client
This is simply a repeat of what is implied or stated in other items.
3.4.4. Third-Parties are Sources of Uncertainty
Balance any benefits against the potential costs, including those deriving from the uncertainty regarding the third
parties’ behavior.
3.4.4.1. Trusted Third-Parties are Sources of Uncertainty
Placing trust involves uncertainty even if it is sometimes quite small. However, one’s analysis and
judgment can be wrong, and the trusted entity and its trustworthiness can change over time.
3.4.4.2. Equitable Adjudication is Never Certain
The courts or arbitration may decide in a manner one thinks is correct or they may not. The chances
of which will happen vary from situation to situation with many variables involved, but equity is
never a sure outcome.
3.4.4.3. Involve Third Parties Sparingly
Generally, the fewer parties involved the simpler and more predictable the situation becomes.
4. Conclusion
Certainly, one could get a sense of completeness from the triplets of good guys, bad guys, and arena of conflict; and
benefits, losses, and uncertainties. The desired intellectual coherence must, of course, extend beyond the top two levels,
and the structure supplied herein attempts to do this mainly by higher-level items subsuming or being generalizations of
lower-level ones, or by lower-level ones being causes or partial or alternate solutions of higher-level ones.
The concept of an attempted attack needing to match with an opportunity for success offered by the system helps
connect sections 1 and 2. The concept of limiting, reducing, or managing entities, attributes, activities, or interactions is
widespread and adds an additional element of commonality to the sense of coherence.
I hope that readers find this report a useful step toward understandable coherence, intellectual mastery, and a
sense of completeness. Finally, I hope it can form a basis for teaching and learning based on first principles and
elaborated through many subordinate principles and guidelines.
5. Appendix A: Principles of War
The following table gives a comprehensive list of the principles of war that Samuel T. Redwine III extracted in 2001 through an extensive survey of the literature of the last 2,500 years, plus the additional principle of the effectiveness of elimination (permanent removal; historically death, exile, or sale into slavery).
Table 7: Principles of War
Objective
Initiative
Unity of Effort
Concentration
Economy of Effort
Orchestration
Clarity
Surprise
Security
Logistics
Readiness and Training
Moral-Political Factor
Speed
Morale
Deception
Intelligence
Decision Making
Minds as Battlespaces
Simultaneous Operations in Depth
Reserves
Technology
Fortification
Terrain
Health
Courage
Discipline
Elimination
Regardless of when they were first written down, they have proven timeless. Definitions of each are not included here. Rather, because the list is long and somewhat hard to remember and master as an unorganized list, I suggest below organizing it under five more abstract principles. One way to organize the list is to separate concerns into aspects such as where conflict occurs, time, quality, and hindering opponents. One might have principles of:
• Universality: consider all arenas or battlespaces and their combinations and interactions; identify, understand, shape, and exploit them
• Purposeful Dynamism: think in terms of dynamic events and change – past, present, and future – be prepared, decide rapidly and well, persevere, and sustain by changing and acting to achieve the conditions and outcomes that will best meet one's objectives into the future
• Quality: systematically improve value, ability, and potential
• Location of Victory: defeat ultimately occurs in the adversaries' minds or via death, and belief by survivors in one's moral and practical superiority eventually dominates
• Hindrance and Exploitation: if something would be good for adversaries or potential adversaries, hinder their ability to do it except where explicit reasons favor oneself; exploit adversaries' weaknesses and failures to follow principles (voluntary or because hindered) while using one's strengths
While these bullets give good synopses of these principles, they are worth elaboration and discussion. A
beginning is provided below.
A Principle of Universality might say all possible arenas of conflict or battlespaces (or other interactions)
need consideration and that a set of principles applies across them. Battlespaces include human minds
(perceptions, beliefs, intellectual capabilities, motivations, intentions, persistence, courage, and emotions);
cyberspace; outer space; human bodies, intelligence and counter-intelligence; economies, markets,
infrastructures, production capabilities, brainpower (e.g. education, emigration), intellectual property,
technology; diplomacy; political power; and public information as well as air or space, and the surface and
subsurface of land or water including natural and artificial features. The interactions and interdependencies
among these battlespaces lead to the inter-service and interagency joint or combined efforts (including
combined arms and more) central to effective strategy and tactics.
Across these battlespaces, adversaries may include terrorists, insurgents, organized criminals and bandits,
illegal immigrants, organized disruptors, espionage agents and agents of influence, corrupt officials, private
militia, and infiltrators. Elements that can be pluses or minuses include media, religious personages and
organizations, “purchasers” and manipulators of public opinion and political power, business, special
interest organizations, civilian populations, private security or military companies, politicians, public safety
and health personnel, ethnic/racial or tribal groups and leaders, supra-national organizations (e.g. UN or
EU) or alliances, and non-governmental organizations (NGOs). The potential extent and complexities of conflict (and cooperation) need to be remembered and considered by planners and combatants as well as analysts and historians, all of whom need to consider the larger context of their immediate concern.
Principle of Purposeful Dynamism might encompass establishing and meeting one's objectives for the future through preparedness and sustained perseverance, achieved by meeting the need to identify, understand, forecast, influence, detect, tolerate, recover and learn from, and exploit one's own and others' (ally, adversary, and third-party) past histories and current situations, capabilities, actions, and minds – including outlooks, predispositions, plans, intentions, uncertainties, and decision-making processes. Be agile and flexible; learn and change. Think in terms of strategies and tactics covering concurrent and sequential sets of events – actions and reactions, conflicts and tranquility – extending from the past, through the present, and into the uncertain future, trading off benefit and risk to best meet objectives extending into the future. Achieve the preconditions for success ("condition the battlefield") and then success.
Principle of Quality involves enhancing one's grasp of reality, competence and capabilities, speed, efficiency, opportunities, sustainability, morale, courage, and satisfaction while decreasing the likelihood of and vulnerability to dangers (including mistakes) and their impacts – particularly decisive losses and a catastrophic, unrecoverable eventual outcome, and especially avoiding an outcome due to one's own composition and arrangement.
Location of Victory: Generally, one can change one’s adversary’s intentions or capabilities. Violent
conflict may end by destroying the effectiveness or existence of the enemies’ capabilities or their
immediate willingness to fight, but capabilities can be regenerated and willingness regained. Defeat
ultimately occurs in the adversaries’ minds or via death (or possibly via removal into captivity). Obtaining
the surviving (or new) adversary populations’ (and one’s, allied, and third party populations’) belief in
one’s moral and practical superiority eventually dominates.30 One example is the West’s “victory” in the
Cold War. Even if one “loses” the violent war, one may win the peace as in several “successful conquests”
of China.
Principle of Hindrance and Exploitation involves decreasing adversaries' (real and potential) ability to do all of the above (and to follow any other principles) except as is desirable, e.g., for credible deterrence or enough command and control to effectively surrender. Exploit opponents' weaknesses and one's strengths. This can call for a number of things, including security, stealth, deception, and exploiting others' lack of consideration for Universality and Dynamism through surprise actions or in unexpected areas. Exploit adversaries' weaknesses and failures to follow principles, whether those failures exist voluntarily or because caused by one's actions, hindering or otherwise.
By separating concerns and organizing numerous key elements within each principle, the resulting short list
provides a suggestive example and allows users of these principles wide-ranging guidance for minimal
recall. While the basic concept of each is clear and easily grasped, many implications and specific
applications exist. However, the end result is a set of timeless, universal guideposts adaptable to changing
situations.
30. Thus, unsurprisingly, despite their abhorrence to many, their effectiveness causes genocide and ethnic cleansing to continue to occur. The influencing of populations, and particularly the influencing or indoctrination of youth, are gentler and frequently in many ways positive alternatives and can sometimes be effective – although some indiscriminately label such actions "destruction of cultures" and undesirable.
6. Appendix B: Purpose-Condition-Action-Result Matrix
I found this matrix helpful in my thinking, and I have included it here in the hope that it will be helpful to others –
particularly those preparing instruction.
For each attacker, defender, or environmental high-level activity, the actor needs to develop a capability and an intention to do it, attempt it, and succeed to some degree. Likewise, to continue, the actor needs to sustain a capability and an intention. For each of these, the actor must have an opportunity and must actually perform at some level of proficiency, resulting in particular benefits, losses, and uncertainties.
The table below arranges this in a way that may help some readers think about the situation. Down the left-hand side it lists the high-level activities of entities related to the attacker, the system, and the environment. Across the top it lists stages in such activities. Three items apply to every intersection of activity and stage, but to avoid clutter they are listed only once.
Stages (columns): Develop Capability to | Develop Intention to | Attempt to | (Partially or Wholly) Succeed at | Sustain (Enhanced) Capability to | Continue (or Strengthen) Intention to
Activities (rows):
Attacker – Prepare; Affect Others' Preparations; Attack; Affect Follow-on Consequences
Defender – Prepare; Affect Others' Preparations; Defend; Affect Follow-on Consequences
Environment – Prepare; Affect Others' Preparations; Interact (e.g. provide service); Affect Follow-on Consequences
Every cell (activity × stage) contains the same three items, listed here once:
• Have Opportunity
• Actually Perform at Some Level of Proficiency
• Yielding Benefits, Losses, and Uncertainties
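To make the cross-product structure concrete – a small illustrative sketch, not part of the original matrix – one can enumerate every (actor, activity, stage) cell together with the three items that apply to each:

# Illustrative sketch: enumerate the matrix as a cross-product of activities
# and stages, attaching the three per-cell items to every cell.
from itertools import product

ACTIVITIES = {
    "Attacker": ["Prepare", "Affect Others' Preparations", "Attack", "Affect Follow-on Consequences"],
    "Defender": ["Prepare", "Affect Others' Preparations", "Defend", "Affect Follow-on Consequences"],
    "Environment": ["Prepare", "Affect Others' Preparations", "Interact (e.g. provide service)", "Affect Follow-on Consequences"],
}
STAGES = ["Develop Capability to", "Develop Intention to", "Attempt to",
          "(Partially or Wholly) Succeed at", "Sustain (Enhanced) Capability to",
          "Continue (or Strengthen) Intention to"]
PER_CELL = ["Have Opportunity", "Actually Perform at Some Level of Proficiency",
            "Yielding Benefits, Losses, and Uncertainties"]

cells = {(actor, activity, stage): PER_CELL
         for actor, activities in ACTIVITIES.items()
         for activity, stage in product(activities, STAGES)}

print(len(cells))  # 3 actors x 4 activities x 6 stages = 72 cells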
7. Bibliography
Avizienis, Algirdas, Jean-Claude Laprie, Brian Randell, and Carl Landwehr, "Basic Concepts and Taxonomy of Dependable and Secure Computing," IEEE Transactions on Dependable and Secure Computing, vol. 1, no. 1, pp. 11-33, Jan.-Mar. 2004. Available at http://csdl.computer.org/dl/trans/tq/2004/01/q0011.pdf
Benzel, T. V., Irvine, C. E., Levin, T. E., Bhaskara, G., Nguyen, T. D., and Clark, P. C. Design Principles for Security.
NPS-CS-05-010, Naval Postgraduate School, September 2005. Available at
www.nps.navy.mil/.../faculty/irvine/Publications/Publications2005/Design_Principles_for_Security.pdf
Berg, Clifford J, High-Assurance Design: Architecting Secure and Reliable Enterprise Applications, Addison Wesley,
2006.
P. Boudra, Jr. Report on rules of system composition: Principles of secure system design. Technical Report, National
Security Agency, Information Security Systems organization, Office of Infosec Systems Engineering, I9 Technical
Report 1-93, Library No. S-240, 330, March 1993.
J. C. Cannon. Privacy, Addison Wesley, 2005.
Fred Cohen "Deception and Perception Management in Cyber-Terrorism," 1988. http://all.net/journal/deception/terrorpm.html
Cohen, Fred, Dave Lambert, Charles Preston, Nina Berry, Corbin Stewart, and Eric Thomas, A Framework for
Deception, Final Report IFIP-TC11, 2001.
Department of Defense Strategic Defense Initiative organization, Trusted Software Development Methodology, SDI-SSD-91-000007, 17 June 1992 Volume 1.
Ellison, Robert J., and Andrew P. Moore. Trustworthy Refinement Through Intrusion-Aware Design (TRIAD).
Technical Report CMU/SEI-2003-TR-002. Software Engineering Institute, October 2002 Revised March 2003.
Renato Figueiredo, Peter A. Dinda, Jose Fortes, "Guest Editors' Introduction: Resource Virtualization Renaissance,"
Computer, vol. 38, no. 5, pp. 28-31, May, 2005.
Mark G. Graff and Kenneth R. Van Wyk, Secure Coding: Principles and Practices, O'Reilly & Associates, June 2003.
Carl Hunt, Jeffrey R. Bowes, and Doug Gardner. Net force Maneuver: A JTF-GNO Construct. Proceedings of the 2005
IEEE Workshop on Information Assurance and Security, IEEE, 2005a
Carl Hunt. Presentation, IEEE Workshop on Information Assurance and Security, 2005b
Ministry of Defence. Defence Standard 00-56, Safety Management Requirements for Defence Systems, 17 June 2007.
John Maeda. The Laws of Simplicity. MIT Press 2006.
jabir and J. W. Moore, A Search for Software Engineering Principles, Computer Standards and Interfaces, vol. 19, pp. 155-160, 1998.
Jonathan D. Moffett and Bashar A. Nuseibeh. A Framework for Security Requirements Engineering. Report YCS 368,
Department of Computer Science, University of York, 2003.
Jonathan D. Moffett, Charles B. Haley, and Bashar Nuseibeh, Core Security Requirements Artefacts, Security
Requirements Group, The Open University, UK, 2004.
P.G. Neumann. Practical architectures for survivable systems and networks. Technical report, Final Report, Phase
Two, Project 1688, SRI International, Menlo Park, California, 2000.
P. G. Neumann, Principled Assuredly Trustworthy Composable Architectures (Draft), December 2003.
Charles P. Pfleeger and Shari Lawrence Pfleeger. Security in Computing. Prentice Hall, 2003.
Samuel T. Redwine, Jr., Principles for Secure Software: A Compilation of Lists, Commonwealth Information Security
Center, Technical Report CISC-TR-2005-002, 2005.
Samuel T. Redwine, Jr. (Editor). Software Assurance: A Guide to the Common Body of Knowledge to Produce,
Acquire, and Sustain Secure Software Version 1.1. US Department of Homeland Security, September 2006.
Copyright © 2008 Samuel T. Redwine, Jr. All Rights Reserved.
61
Samuel T. Redwine, Jr., "The Quality of Assurance Cases," In Proceedings of the Workshop on Assurance Cases for Security – The Metrics Challenge, at the 37th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, DSN 2007, June 25-28, Edinburgh, UK, 2007.
Rowe, Neil C., “Designing Good Deceptions in Defense of Information Systems,” ACSAC, 2004a.
Available at: http://www.acsac.org/2004/abstracts/36.html
Rowe, N., and H. Rothstein, “Two Taxonomies of Deception for Attacks on Information Systems,” Journal of
Information Warfare, Vol. 3, No. 2, pp. 27-39, July 2004b.
J.H. Saltzer and M. D. Schroeder. “The protection of information in computer systems.” Proceedings of the IEEE,
63(9):1278-1308, 1975.
Roger Schell, Keynote Talk, International Workshop on Information Assurance, March 24, 2005.
Glenn Schoonover, Presentation, Software Assurance Summit, National Defense Industries Association, September 7-8,
2005.
Secure Computing, Seven Design Requirements for Web 2.0 Threat Prevention. White Paper, Secure Computing
Corporation, 2007. Available at http://www.securecomputing.com/webform.cfm?id=219&ref=scurhpwp (200712)
Stoneburner, Gary, Hayden, Clark and Feringa, Alexis. Engineering Principles for Information Technology Security (A
Baseline for Achieving Security), Revision A, NIST Special Publication 800-27 Rev A June 2004.
UK CAA. CAP 670 Air Traffic Services Safety Requirements. UK Civil Aviation Authority Safety Regulation Group, 12 June 2003, amended through 30 June 2006.
Kenneth van Wyk and Gary McGraw, After the Launch: Security for App Deployment and Operations, presentation at
Software Security Summit, April 2005.
Yee, Ka-Ping, Secure Interaction Design and the Principle of Least Authority. Workshop on Human Computer
Interaction and Security Systems. April 6 2003 (www.chi2003.org).
Elizabeth D. Zwicky, Simon Cooper, and D. Brent Chapman. Building Internet Firewalls (2nd Ed.), O'Reilly, 2000.
8. Acknowledgements
I was inspired to do this attempt at organizing computing security principles and guidelines by the insistence of first
Matt Bishop and then the participants in the August 17-18, 2006 Software Assurance Common Body of Knowledge
Workshop at the Naval Postgraduate School in Monterey, CA. The principles and guidelines included come from many
sources but primarily from S. Redwine (ed.), Software Assurance Common Body of Knowledge document v. 1.1, 2006;
P. G. Neumann, Principled Assuredly Trustworthy Composable Architectures (Draft), December 2003; Benzel, T. V.,
Irvine, C. E., Levin, T. E., Bhaskara, G., Nguyen, T. D., and Clark, P. C. Design Principles for Security. NPS-CS-05-010, Naval Postgraduate School, September 2005; jabir and J. W. Moore, A Search for Software Engineering
Principles, Computer Standards and Interfaces, vol. 19 p. 155-160, 1998; and Berg, Clifford J, High-Assurance
Design: Architecting Secure and Reliable Enterprise Applications, Addison Wesley, 2006. The original origin of many
items can be traced through these and their predecessor documents.
In addition to other more limited sources, including a few safety-related ones, Matt Bishop and Karen M. Goertzel made specific contributions. Additionally, I originated items from the zeitgeist and my own work, as well as to fill gaps identified within the organizational structure.
Special thanks go to Matt Bishop for his suggestion of using the concept of “limiting” undesirable things as well as to
Matt, Cynthia Irvine, Carol Taylor, and Joe Jarzombek for organizing the 2006 Monterey Workshop and to its
participants. I wish to thank all who commented, reviewed, or supplied materials, including Matt Bishop, Carol Taylor,
Cynthia Irvine, Gary Stoneburner, Joe Jarzombek, Luiz Felipe Perrone, Karen M. Goertzel, Ed Schneider, Jeff
Ingalsbe, Scott Duncan, and my students and graduate assistants at James Madison University. Thanks also to the
Institute for Infrastructure and Information Assurance at James Madison University for publishing this report.
Institute for Infrastructure and Information Assurance
Towards an Organization for Software System Security Principles and Guidelines
MSC 3804
James Madison University
Harrisonburg, VA 22807
(540) 568-4442 (540-JMU-IIIA)
(540) 568-8933 fax
www.jmu.edu/iiia
Samuel T. Redwine, Jr.
Institute for Infrastructure and Information Assurance
IIIA Technical Paper 08-01
James Madison University