
Increasing Security for Regulatory Constrained
Systems, Through New Systems and Architecture
Methodologies
by
Sarah Pramanik
M.S. CS, University of Colorado at Colorado Springs, 2009
B.S. CS, University of Colorado at Colorado Springs, 2006
A dissertation submitted to the Graduate Faculty of the
University of Colorado at Colorado Springs
in partial fulfillment of the
requirements for the degree of
Doctor of Philosophy
Department of Computer Science
2013
© Copyright by Sarah Pramanik 2013
All Rights Reserved
This dissertation for
Doctor of Philosophy Degree by
Sarah Pramanik
has been approved for the
Department of Computer Science
By
__________________________________________________
Dr. Edward Chow, Chair
__________________________________________________
Dr. Albert Chamillard
__________________________________________________
Dr. Qing Yi
__________________________________________________
Dr. Terrance Boult
__________________________________________________
Dr. Stephen O’Day
______________________
Date
Pramanik, Sarah (Ph.D. in Engineering, Focus in Security)
Increasing Security for Regulatory Constrained Systems, Through New Systems and
Architecture Methodologies
Dissertation directed by Professor Edward Chow
In the Department of Defense (DoD), vulnerabilities can result in casualties, destruction of assets, or potentially allow an enemy to either escape or infiltrate a system. The same is true of systems in the medical and finance communities. Vulnerabilities must be mitigated and risks reduced to an acceptable level for a system to be considered compliant with mandatory regulations. Beyond the overall security posture, programmatic consequences can also significantly change the system design and implementation. If security is applied incorrectly or too late, these effects range from extra cost and schedule slips to potentially not being allowed to operate the system.
The author provides new methodologies in several areas of security. The author first
offers a way to efficiently integrate security into the systems engineering lifecycle. This
includes describing the process for developing a security architecture as well as the
software assurance aspects of developing a system. The author also presents ideas on
reducing future cost associated with incorporating security into the total system lifecycle.
The author also describes the mechanics of automating the assessment of the probability of regulatory compliance. The evaluation of a security architecture and the security posture of a system is currently subjective: there are different ways of implementing security functionality and different ways of interpreting security requirements. Automating the calculation of the probability of compliance makes it easier to build a better case for acceptable risk.
To My Wonderful Family, Without You This
Would Never Have Been Possible
Soli Deo Gloria
Acknowledgements
First I would like to thank my parents. Without their constant reviews, prayers and
tireless support I would never have made it to this point. I would also like to thank my
husband for his love and patience as he has supported me in this endeavor. He has
encouraged me throughout this process. Thank you to my grandparents as well for their
support. Thank you to mom and dad in India for your prayers and support. Thank you
also to all my prayer warriors who have tirelessly lifted me up through the years.
A special thanks to Dr. Chow for his help and guidance. He taught my first security
class in undergrad and has inspired me to keep pursuing it ever since. It has shaped my
career, both academically and professionally. For this I will always be grateful.
Many thanks to my advisory committee members: Dr. Boult, Dr. Chamillard, Dr.
Lewis, Dr. Qing Yi and Dr. O’Day. Thank you for your suggestions, support, and
understanding. Dr. O’Day thank you for everything! Thank you as well to Patricia Rea
who has been a huge help. Thank you Dan Wilkerson for your legal help.
Thank you to the Northrop Grumman Corporation for your generous funding. A great
big thank you to Mike Marino and Charlie Crimi for your help with completing the
software. Thank you to Dr. Yi-Liang Chen, Steve Micallef, John Whitson, Jim Morris,
Dave Barish (Thanks for taking a chance on me!), Bob Boylan, Pete Gustafson, Barbara
Williams, Sandy Haffey, Gene Fraser, Karen Tokashiki, William Bohdan, Anne Marie
Edwards, Robert Schruhl, Robyn Woisin, Kathy Henry, Dr. Bob Duerr, Bonnie Kuethen
and all the others who have all provided encouragement, reviews and support.
And lastly, but most importantly, thank you to my gracious Savior who has set my
path before me and has given me the strength to finish. Thank you.
Table of Contents
Chapter 1 ............................................................................................................................. 1
1.1 Introduction .......................................................................................................... 1
1.2 Motivation ............................................................................................................ 2
1.3 Research Methodology ........................................................................................ 3
1.4 Dissertation Outline ............................................................................................. 7
Chapter 2 System Security Engineering Methodology ...................................................... 9
2.2 Defining Security ............................................................................................... 11
2.3 Contracted Acquisition and Systems Engineering ............................................. 16
2.4 Customer Contracted Acquisition ...................................................................... 24
Chapter 3 Requirement Writing ........................................................................................ 58
3.1 Requirements Engineering and IA ..................................................................... 60
3.2 Potential Solutions ............................................................................................. 62
3.3 Preferred Approach ............................................................................................ 68
Chapter 4 Security Architectures ...................................................................................... 71
4.2 Background ........................................................................................................ 73
4.3 Building the Architecture ................................................................................... 81
4.4 A Different Approach to Architecture ............................................................... 96
Chapter 5 Threats ............................................................................................................ 109
5.2 Securing A System ........................................................................................... 110
5.3 Threats .............................................................................................................. 112
Chapter 6 Software Assurance ........................................................................................ 119
6.2 Requirements Phase ......................................................................................... 121
6.3 Design Phase .................................................................................................... 123
6.4 Development Phase .......................................................................................... 125
6.5 Testing Phase ................................................................................................... 133
6.6 Secure Coding .................................................................................................. 138
Chapter 7 Cost of Security .............................................................................................. 152
7.2 Background ...................................................................................................... 153
7.3 Approaching Security from a Software Perspective ........................................ 155
7.4 Principles into Metrics ..................................................................................... 160
7.5 Comparison ...................................................................................................... 169
Chapter 8 Cryptography Overview ................................................................................. 172
8.2 Purpose of Cryptography ................................................................................. 173
8.3 Cryptographic Type ......................................................................................... 175
8.4 Cryptographic Application ............................................................................... 177
8.5 Cryptographic Placement ................................................................................. 180
8.6 Cryptographic Boundary .................................................................................. 181
8.7 Key Material .................................................................................................... 181
8.8 Incident Response ............................................................................................ 183
Chapter 9 Regulatory System Compliance ..................................................................... 184
9.2 Requirements ................................................................................................... 185
9.3 Counting Requirements ................................................................................... 186
9.4 A Bayesian Approach to Compliance .............................................................. 200
9.5 Conclusion ....................................................................................................... 212
Chapter 10 Lessons Learned ........................................................................................... 214
10.1 Cryptography ................................................................................................. 214
Chapter 11 Future Research and Conclusion .................................................................. 218
11.1 Known Concerns and Solutions ..................................................................... 218
11.2 Evaluation of Success Criteria ....................................................................... 219
11.3 Contributions .................................................................................................. 221
11.4 Future Research ............................................................................................. 221
11.5 Conclusion ..................................................................................................... 222
Works Cited .................................................................................................................... 223
Appendix A – Patient Monitoring System ...................................................................... 236
Appendix B - Engagement Questions ............................................................................. 244
Appendix C - Acronym List ........................................................................................... 250
Appendix D ..................................................................................................................... 253
List of Tables
Table 1 Customer Acquisition and Systems Engineering................................................. 25
Table 2 Security Views ..................................................................................................... 87
Table 3 Ports, Protocols and Services ............................................................................. 104
Table 4: Compiler Options.............................................................................................. 145
Table 5: Risky Permissions ............................................................................................. 148
Table 6 Security Metrics ................................................................................................. 160
Table 7 First Level Decomposition................................................................................. 191
Table 8 Second Level Decomposition ............................................................................ 193
Table 9 Third Level Decomposition ............................................................................... 195
Table 10 Fourth Level Decomposition ........................................................................... 197
List of Figures
Figure 2-1: Layers of Information Assurance [29] ........................................................... 14
Figure 2-2 Requirement Sets ............................................................................................ 37
Figure 2-3 Boundary Crossing Block Diagram ................................................................ 41
Figure 2-4 Ports, Protocols and Services .......................................................................... 44
Figure 4-1 SABSA Matrix [66] ........................................................................................ 79
Figure 4-2 System Flows .................................................................................................. 83
Figure 4-3 Protection Domains ......................................................................................... 83
Figure 4-4 IA Control Mapping ........................................................................................ 85
Figure 4-5 Requirement Mapping ..................................................................................... 85
Figure 4-6 High Level Data Exchanges ............................................................................ 97
Figure 4-7 Security Patching ............................................................................................ 99
Figure 4-8 Mobile Code .................................................................................................. 100
Figure 4-9 Auditing......................................................................................................... 101
Figure 4-10 Environmental Mitigations.......................................................................... 103
Figure 4-11 Ports, Protocols and Services ...................................................................... 106
Figure 5-1: Threat Motivations ....................................................................................... 113
Figure 5-2: Attack Variables ........................................................................................... 114
Figure 5-3: Human Threat .............................................................................................. 115
Figure 5-4: Natural Disasters .......................................................................................... 116
Figure 5-5: Natural Threats ............................................................................................. 117
Figure 7-1 Patient Monitoring System ............................................................................ 162
Figure 9-1 Requirements Decomposition ....................................................................... 186
Figure 9-2. Patient Monitoring System ........................................................................... 189
Figure 9-3 Bayesian Network for Simple Requirement Compliance ............................. 203
Figure 9-4 Bayesian Network for System Compliance .................................................. 207
Figure A-0-1 Total Architecture ..................................................................................... 237
Figure A-0-2 Security Overlay ....................................................................................... 238
Chapter 1
1.1 Introduction
This work aims to explore the issues surrounding the incorporation of security into
systems under significant regulatory constraints such as those found in the DoD, health
and finance sectors. There is no perfectly secure system. Threats are evolving and
vulnerabilities are continually being exposed. The goal of system security engineering is
to reduce the risk to an acceptable level. Security changes with technology: every new application or system brings new threats, vulnerabilities and risks. A threat can only affect a system if vulnerabilities exist, so vulnerabilities must be mitigated to the extent possible. Residual risk is the risk that remains when vulnerabilities cannot be fully mitigated. The application of security to a system can be seen through the layered views of information assurance.
In the commercial world there are certain information assurance laws and standards,
such as Sarbanes-Oxley that govern the security requirements that must be applied to a
system [1][2].
Another is the Health Insurance Portability and Accountability Act
(HIPAA) [3] [4]. In the Department of Defense one of the documents for security is the
DoD Instruction 8500.2 [5]. There are multiple other security documents that are used,
but the framework from the DoDI 8500.2 is fundamental in the DoD. There is currently a
move [5][6] from the DoDI 8500.2 to the NIST SP 800-53 [7]. The supplemental
guidance provided in [7] is very helpful to the system security engineer. The
documentation, rules and requirements that must be followed are as ever changing as
technology. This transition is also useful, as it will bring the DoD into alignment with
Federal and some commercial sector systems, easing some of the regulatory compliance
issues brought on by the abundance of varying regulations. In some respects the system
security engineer must think like a lawyer in understanding the requirements, laws,
regulations and their application to the system. In regulatory constrained systems, those
providing the system are on contract to meet a certain set of requirements. Proper
application of security is not as simple as applying a few security products, but involves a
paper trail of documentation and analysis. Security engineers use a smattering of tools to
create architectures, decompose requirements, and use experience to determine if risk is
acceptable. This is somewhat subjective and not always well organized.
As I introduce these new concepts, a simplified medical system will be used to show their application. I provide a system security engineering methodology that includes the building of security architectures, secure coding, the process of protecting software, and the cost of security. I also provide an approach using Bayesian Networks to automate the calculation of the probability of compliance, as well as research into the motivation behind attacks.
1.2 Motivation
Security is an ever evolving part of engineering a system due to global network
threats. In 2008, there were 54,640 attacks [8]. In 2012, it was reported that attacks
against the U.S. had risen 680% in six years [9]. Any measures put in place by system
security engineers may be challenged by hackers. This affects not only commerce,
personal computers, and critical infrastructure, but government systems as well.
My research focuses on systems that must meet heavy regulatory compliance, because many of these systems face unique challenges in meeting the myriad of requirements that fall out from the need to be compliant. The DoD fields platforms such as ships and airplanes, which add to the complexity of designing in an appropriate security solution. The difficulties faced by these platforms are similar to those encountered by various systems in the health and finance sectors. Confidentiality, integrity and availability are critical aspects of the regulatory compliance involved in securing these systems. The current systems engineering process does not fully encompass security needs. Many of the regulatory compliance issues encountered by these types of systems are security requirements.
These regulations can be vague and do not address concerns for non-enterprise style systems. Protecting a patient's information on new medical equipment can be just as tricky as protecting the confidentiality of command and control messages coming to a submarine. The research described herein aims to ameliorate some of the difficulties encountered by security engineers as they work to protect systems constrained by various regulations.
1.3 Research Methodology
I propose a new method of securing regulatory compliant systems. Although security
is both a science and an art, the subjectivity involved with determining whether or not a
system meets acceptable risk leads to difficulty in making objective decisions on
mitigation strategy.
In order to better understand how this might be accomplished, I examined the subject from two perspectives, academic and practical.
In academia it is necessary to look to those who have come before and glean ideas,
theories, approaches and lessons learned. This seemed a good first step.
Literature
surveys were conducted to understand common methods used for applying security to
systems. The first hurdle was to look at the systems engineering methodology and
determine if security had already been interposed or if this was still lacking. I needed to
understand what was being done practically as well in this area, in order for this effort to
have useful meaning to those practicing security engineering on systems under heavy
regulation.
I have spent the last several years working and studying and correlating patterns in
the application of security to different systems. During this time, I worked with other
security engineers to understand their approach and I also applied commercial and
academic theories as well. This led to some trial and error in finding the most
efficient and practical approach to securing a system. Many times it required that I go
back and perform a literature survey to determine if a particular approach had been tried.
This was due in part to the fact that I came onto projects at different stages of the
system life cycle. Security engineers each had their own way of doing things, even when working from the same regulatory document with the same compliance needs, since a standard method is not built into best practices [10]. There was no set system security engineering
methodology, security architecture methodology, secure coding standard, software
assurance approach, threat outlook, security metrics or tool set. The only constant was a
need to meet a regulatory document and provide documentation to prove compliance.
This led to a desire to provide a comprehensive approach to regulatory compliance in
order to ease the difficulties that many security engineers face.
The first part was to determine an appropriate system security engineering
methodology that would fit into the context of the regulated systems. System engineering
was studied to see how best to fit in the necessary security aspects. I studied academically
what was suggested and reviewed what was being done practically. Aside from the fact
that the practical approach did not completely mimic the academic approach, there was
some correlation, which allowed me to see how security might be interposed.
As this was studied, it became apparent that there was a need for a standard security
architecture methodology. As I was fashioning the security architecture for a project,
various architecture methods were studied to see what would be the best approach to
take. I spent a year applying different methodologies to arrive at a suitable security
architecture. Once it was clear that none of the available methodologies and taxonomies
alone fit the need, the original methodologies described in this work were applied in
creating the example medical system, as will be shown throughout this work. This
methodology was aimed at helping future engineers architecting the security functionality
of a system. I also had to ensure that the mitigations recommended in the architecture
came to fruition. Although I do not provide any new cryptographic ideas, this work
contains a section on cryptography as this is a fundamental mitigation that engineers must
understand to adequately protect their system. It is one of the essential protections that
are used and so I provide some insight into when certain cryptographic mitigations might
be appropriate.
One difficulty encountered was that a software assurance approach did not exist on
the current project. There are a multitude of papers on software assurance, but it is not
yet a best practice in all sectors. There was not a specific requirement for a total software
assurance approach, so an approach had not been created. In some regulatory documents,
there is a requirement to reduce software vulnerabilities; however, this is a vague requirement left to interpretation [5], [7], [11]. The author endeavored to extract
those elements that would increase the security of software and build an approach for
future programs to use. Another grey area is that of threats: should a system be designed to reduce vulnerabilities alone, or should the design also take into account known threats? I provide insight into categorizing threats.
The validation of the methodologies and tool is done in two parts. First, a notional
architecture for a toy system was created to illustrate the key concepts and procedures.
The methodologies were applied step by step to show how the architecture changes and
grows. It can be shown through comparison to security best practices that the outcome of
the methodologies results in a secure, regulatory compliant platform. A Bayesian
Network approach to modeling these ideas is provided to show how automation of
predicting regulatory compliance is possible. A second validation on a complex system
will be offered to the committee alone, due to proprietary restrictions.
1.3.1 Contributions
Throughout this dissertation, I provide multiple contributions in various areas of
security engineering. I offer my approach to securing regulatory constrained systems in a
manner that takes into account the customer acquisition life-cycle as well as the systems
engineering methodology. I discuss a method for handling requirements appropriately. I
provide a new method of building security architectures to ensure regulatory compliance.
I present a discussion on software assurance. Related to software are the security metrics
that I developed in an effort to reduce the future cost of securing a system. I also offer a
method of categorizing threat motivations which allows one to look at the potential of
attack. I then give insight into the application of cryptography in a system. Lastly, I
provide a Bayesian Network to calculate the probability of regulatory compliance.
1.4 Dissertation Outline
This dissertation is outlined as follows. In Chapter 2, I present a System Security
Engineering Methodology. This encompasses a cradle to grave approach and provides a
framework for all the other research discussed herein. Although it covers the life of a
project from acquisition through maintenance and end of life, it explores in detail what
accomplishments are necessary during the development phase. This leads into Chapters
3-8, where I provide different facets of secure development specifically with regard to
systems that must meet regulatory compliance.
Chapter 3 provides my insight on writing security requirements. This is an area that
can affect not only the end implementation of a system, but also its cost and usability.
These requirements are the basis upon which an architecture is created.
Chapter 4 provides my techniques for the development of security architectures. It
provides a new methodology for creating security architectures when dealing with
platforms that have size, weight and power (SWaP) constraints, such as a pacemaker. In
ensuring that a security architecture is complete, it is necessary to understand the threats
to a system. Chapter 5 contains my mapping of motivation to threats and what security
engineers need to do in order to protect the system from the threats.
Chapter 6 provides my research and insights into the means of ensuring that the
software security portion of secure development is met. As with software and systems
engineering, cost is a defining factor when building a system. Creating a secure system
that not only meets its requirements, but that is also affordable is critical in the fiscal
climate that engineers are currently operating in. Chapter 7 focuses on my proposed
security metrics that can be used to reduce the cost of security in the future. Metrics are
used in other areas of engineering, and the proposed metrics are in keeping with those
in other areas of engineering. In Chapter 8, I provide a general overview of cryptography
and its use in securing systems. Cryptography is a versatile tool in the architect’s tool
belt and so a basic understanding of it is necessary in order to assure the security posture
of a system. Cryptography is also an area that is guided by regulatory compliance.
Once this foundation has been laid, in Chapters 2-8, I address the use of Bayesian
Networks to determine the probability of compliance in Chapter 9. The ability to
probabilistically view compliance allows for an understanding of where risk might occur.
Bayesian Networks are useful tools when dealing with uncertainty.
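As a minimal sketch of the idea (the requirement names, probabilities, and "waiver" parameter are all hypothetical illustrations, not the model developed in Chapter 9), one can treat each security requirement as a binary node with an elicited probability of being met, and compute the probability of overall compliance by enumerating the joint states:

```python
from itertools import product

# Hypothetical per-requirement compliance probabilities, e.g. elicited
# from an assessor's confidence that each security requirement is met.
p_req = {"audit": 0.95, "encryption": 0.90, "access_control": 0.85}

# Noisy-AND assumption: each unmet requirement independently has some
# chance of being waived as acceptable risk.
WAIVER = 0.2

def p_system_compliant(p_req, waiver=WAIVER):
    """P(system compliant) by full enumeration of requirement states."""
    total = 0.0
    names = list(p_req)
    for states in product([True, False], repeat=len(names)):
        # Joint probability of this assignment of requirement states
        joint = 1.0
        for name, met in zip(names, states):
            joint *= p_req[name] if met else 1 - p_req[name]
        # P(compliant | states): each unmet requirement must be waived
        p_comp = 1.0
        for met in states:
            if not met:
                p_comp *= waiver
        total += joint * p_comp
    return total

print(f"P(system compliant) = {p_system_compliant(p_req):.3f}")
```

Because the requirements are assumed independent here, the result factors into a product over requirements; a real compliance network would add dependencies between requirements, which is exactly where a Bayesian Network earns its keep.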
I then present lessons learned in Chapter 10 and a conclusion and discussion of future research in Chapter 11. First, known issues and how they were dealt with are considered. Then I provide a self-evaluation of the success criteria and research contributions, as well as lessons learned.
The first criterion is to create a new system security engineering methodology that is
integrated with the systems engineering process. The second criterion is to provide
separate literature reviews on threats, cost, software assurance and architecture
frameworks. The last criterion is the means to assess a system for regulatory compliance.
This is followed by a section on how this research might be furthered in future research.
Chapter 2
System Security Engineering Methodology
In today’s security environment, it is necessary for the system security engineer to understand all aspects of systems engineering. In the realm of systems engineering, security engineering is considered a specialty that can be added into the design phase, if concurrent engineering is being employed, but it is too often an afterthought. In other cases it is seen purely as an assessment tool and not something fundamental to the system design. It is expensive and, to most, it seems unnecessary. Although the perception of its necessity is changing, the engagement of security engineering has not moved far enough along.
Many security engineers understand the security issues of a system, but do not
recognize how they fit into the systems engineering development lifecycle. This may be
because security engineers are considered specialists and are not given training in
systems engineering. The other reason may be that many security certifications such as
the CompTIA Security+, EC-Council’s Certified Ethical Hacker (CEH) and ISC2’s
Certified Information System Security Professional (CISSP) do not require an
understanding of systems engineering [12].
In most cases the systems security engineer needs to be a core member of the systems
engineering team in order to be involved with all relevant aspects of the system. Every
interface in a system, whether a functional flow or a physical flow, should be examined by
the system security engineer for vulnerabilities.
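As a sketch of this practice (the model, field names, and example interfaces below are hypothetical, not drawn from any specific program), a security engineer might enumerate the interfaces in a system model and flag the flows that cross protection domains without protection for closer vulnerability review:

```python
from dataclasses import dataclass

# Hypothetical system model: each interface records its endpoints and
# whether the flow is protected (e.g. encrypted) in transit.
@dataclass
class Interface:
    name: str
    src_domain: str
    dst_domain: str
    encrypted: bool = False

def flag_for_review(interfaces):
    """Return the interfaces to examine first: flows that cross a
    protection-domain boundary without encryption."""
    return [i.name for i in interfaces
            if i.src_domain != i.dst_domain and not i.encrypted]

ifaces = [
    Interface("sensor_feed", "patient_net", "patient_net"),
    Interface("billing_export", "patient_net", "finance_net"),
    Interface("remote_admin", "finance_net", "internet", encrypted=True),
]
print(flag_for_review(ifaces))  # boundary-crossing, unencrypted flows
```

The point is not the triage rule itself, which is deliberately simplistic, but that a machine-readable interface inventory lets the security engineer examine every functional and physical flow systematically rather than by inspection alone.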
Information assurance or Information System Security Engineering should not be
cornered into one piece of the system. Systems Engineering typically considers security to just be a specialization that should be done in parallel with the systems lifecycle.
This doesn’t truly cover the magnitude of effects that security has on a system, especially
in systems that must meet significant regulatory compliance with respect to security such
as is necessary in the various sectors [13]. The mindset of security being just an add-on
to the system as part of “specialization” can increase risk to the overall project. The idea
that security is “specialty engineering” is brought to light in [14] in the author’s
explanation of concurrent engineering. This view of security needs to be re-examined, as
the vulnerabilities and risks to systems increase. The National Institute of Standards and Technology (NIST) has developed some insights into this in [6], but this does not cover overall system implementation and development.
The author of [15] would agree that security engineers should be system engineers,
or at least have an understanding of system engineering. The other issue exposed in [16]
and [17] is that “there is no common framework in any current evaluation scheme or
criteria that directly supports measuring the combined security effectiveness of products.”
This needs to be rectified and is addressed in this work. There are multiple frameworks that are useful for Systems Engineering, and some have put together frameworks for security engineering, but these are not enough to fully integrate security into a large system needing to be compliant with large sets of security regulations [18][19][20][21][22]. With the difficulty of trying to fit security into the systems engineering world, a new method is necessary.
2.1.1 Contributions
In this chapter, I provide a new methodology for integrating security into the systems
engineering process. At each phase of the process, I provide the security perspective that
should be included in the life-cycle. Systems that must meet significant regulatory
constraints with respect to their security can use this method in order to prove
compliance. I also provide insights throughout the chapter on pitfalls and common
misunderstandings that security engineers should try to avoid. The following methods
will allow the user to develop their system so that it will meet regulatory compliance.
2.2 Defining Security
Before a discussion of integrating security into the overall systems engineering lifecycle can begin, it is necessary to define what is meant by security. In the Department of
Defense there are two major areas of security: Information Assurance (IA) and Anti-Tamper (AT). Commercially these are typically referred to as computer security and the prevention of reverse engineering, although each industry has its own terms for these concepts. IA focuses on the confidentiality, integrity and availability of a system and
its data. This is the definition of security used throughout this document.
One typical misunderstanding is that AT and IA are the same thing. This can lead to
a misallocation of cost and schedule on projects. Although they are closely related, they
are not the same. Information assurance is the art of protecting information throughout
its entire life-cycle. Anti-tamper is primarily concerned with the prevention of reverse
engineering critical technologies [23]. AT in many cases can ride upon the overarching
security architecture and they can mutually share the same technologies in the protection
of information, but they do diverge [23]. An example of this is in software security.
In the realm of IA, software security is ensuring that code is written in such a way as
to prevent vulnerabilities from existing within the code. This is to ensure that the
software cannot be used as an attack vector on the system. Static code analysis and other
methods are used to reduce such flaws and to strengthen the integrity of the code [24].
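As an illustrative sketch, not taken from this work, of the kind of flaw static code analysis looks for, consider how untrusted input is combined with a command; the function names and payload below are hypothetical:

```python
def build_ping_command_unsafe(host: str) -> str:
    # Flawed pattern a static analyzer would flag: untrusted input is
    # spliced into a shell command string, so a value such as
    # "8.8.8.8; rm -rf /" smuggles in a second command.
    return "ping -c 1 " + host

def build_ping_command_safe(host: str) -> list:
    # Safer pattern: build an argument vector and execute it without a
    # shell, so the whole hostile value stays one inert argument.
    return ["ping", "-c", "1", host]

hostile = "8.8.8.8; rm -rf /"
unsafe = build_ping_command_unsafe(hostile)  # a shell would see two commands
safe = build_ping_command_safe(hostile)      # one harmless argument
```

The safe variant never hands the string to a shell parser, which removes the attack vector the analyzer is warning about.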
This would not be the focus of AT. AT would look at using a technology to ensure
that an attacker cannot understand what the software does or how it works. This would
lead to the use of a technique such as code obfuscation [25], with watchdog timers to
prevent the disassembly of the software.
Both IA and AT may rely on technologies such as encryption, but the intent of the
application is different. This is critical to understand, because they are different security
specialties. Asking an AT expert for advice in the realm of IA, or vice versa, is similar to approaching a neurosurgeon about a heart problem. The wrong specialist can
provide some general guidance, but not the specialty information. Some system security
engineers may be cross-trained in both disciplines, but this isn’t always the case, even
though they are both referred to as Information System Security Engineers (ISSE) [23].
As stated before, the purpose of security engineering is to secure the system, with the intent of protecting it from attackers. The primary means of protecting the system is
through the application of IA principles. Although AT is an important aspect of
preventing reverse engineering, it is not considered in this dissertation for the purposes of
defining security.
2.2.1 Information Assurance
Information Assurance (IA) is the practice of managing risks related to information.
IA primarily deals with the Confidentiality, Integrity, and Availability of information in
order to ensure proper business or mission execution. In the commercial world, business
execution is critical, as is compliance with regulations. Application of IA in a manner
consistent with defense in depth is part of system security.
Confidentiality limits the access of information to authorized personnel. Integrity
ensures data is not changed without proper authorization. Availability guarantees the
accessibility of information to approved personnel [26]. IA is also concerned with
authentication and non-repudiation. Authentication requires that correct identification
occurs before an entity is given access to a system. Non-repudiation guarantees that the
identification of the sender and recipient is known and cannot be faked, nor can the
sender or recipient deny involvement.
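These properties can be made concrete with a small sketch using Python’s standard library. An HMAC tag gives integrity and origin authentication over a shared key; the key and messages here are illustrative, and full non-repudiation would require asymmetric signatures, since either holder of a shared key could forge a tag:

```python
import hashlib
import hmac

SHARED_KEY = b"illustrative-shared-key"  # hypothetical pre-shared key

def tag_message(message: bytes) -> str:
    # Only a holder of SHARED_KEY can compute a matching tag, so a valid
    # tag authenticates the sender and proves the bytes were not altered.
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify_message(message: bytes, received_tag: str) -> bool:
    # compare_digest avoids timing side channels during comparison.
    return hmac.compare_digest(tag_message(message), received_tag)

t = tag_message(b"status: nominal")
ok = verify_message(b"status: nominal", t)         # unmodified message
tampered = verify_message(b"status: CRITICAL", t)  # integrity violation
```

Any change to the message after tagging causes verification to fail, which is exactly the integrity guarantee described above.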
Information assurance can be defined as measures that protect and defend information
and information systems by ensuring their availability, integrity, authentication,
confidentiality, and non-repudiation [27]. These measures include providing for
restoration of information systems by incorporating protection, detection, and reaction
capabilities [28]. It should be noted that there is no standard definition. Every industry
has varying terms for securing the system. According to [29], the term Information
Security should be used as opposed to Information Assurance, as IA is supposedly a
subset of Information Security. However, in the realm of the DoD, IA is used as the comprehensive term, as defined in [27]. IA will be used interchangeably with security to
mean both IA and Information Security throughout the rest of this document. The
principles discussed herein can be used in multiple sectors such as health and finance to
meet regulatory compliance.
Including the elements of IA into a system throughout the life cycle is the primary
purpose of the system security engineer. The protection of the innermost layers, such as software, starts with the assumption that the system cannot be physically manipulated.
Figure 2-1: Layers of Information Assurance [29]
Physical security provides this assurance. Physical Security as required by the DoDI
8500.2 [5] and in the newer NIST SP 800-53 [7] should include automatic fire
suppression system, automated fire alarms connected to the dispatching emergency
response center, voltage controls,
physical barriers, emergency lighting, humidity
controls, a master power switch, temperature controls (for both equipment and humans),
plan for uninterrupted power supply, disaster recovery procedures, as well as the
requirement for personnel to be inspected (through badge inspection) and to present a second form of identification before being allowed access. The Health Insurance Portability and
Accountability Act (HIPAA) Security Rule also requires physical security controls. NIST
SP 800-30 provides a means of meeting HIPAA compliance [30]. Sections 302 and 404 of Sarbanes-Oxley (SOX) do not call out physical security explicitly; however, physical security is a necessary part of the overall security posture, as discussed in [31].
Next the boundary of the systems from a cyber-security purview must be assessed.
Boundary Defense is a crucial aspect of the security posture. This is the first line of
protection for the cyber aspects of a system. Boundary defense is comprised of items
such as firewalls and routers with access control lists (ACLs). Without identifying and
protecting the boundary, unnecessary chinks in the armor will appear. This will be
covered in depth in Chapter 3.
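As a hedged sketch of what boundary defense looks like in code, the following models a first-match, default-deny access control list of the kind configured on boundary routers and firewalls; the subnets, ports, and rules are hypothetical:

```python
from ipaddress import ip_address, ip_network

# Hypothetical boundary ACL: evaluated top-down, first match wins,
# with an implicit "deny all" at the end (as on most boundary devices).
ACL = [
    ("permit", ip_network("10.0.0.0/24"), 443),   # enclave clients, HTTPS only
    ("permit", ip_network("10.0.1.0/24"), None),  # trusted mgmt subnet, any port
]

def permit_packet(src_ip: str, dst_port: int) -> bool:
    for action, network, port in ACL:
        # A rule matches when the source falls in its network and the
        # port either matches or the rule applies to any port (None).
        if ip_address(src_ip) in network and port in (None, dst_port):
            return action == "permit"
    return False  # implicit deny: unmatched traffic never crosses the boundary
```

The default-deny fallthrough is the important design choice: anything not explicitly identified as legitimate boundary traffic is dropped.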
I have found that once boundary defenses are established, it is necessary to look into
how the system will protect data in transit. Data that flows within the system and to and
from the enclave must be available and their confidentiality and integrity must be
protected.
Data in Transit is one of the largest concerns in Information Assurance. It involves
confidentiality, availability, integrity, and non-repudiation. Ensuring that data is safely
moved from point A to B and that data integrity remains is one of the priorities, no matter
what sector a system is being built for.
A potential problem for data in transit is a breach of confidentiality. If the enemy can hijack the information, it must be unusable (although it is best if the hijacking itself can be prevented). Confidentiality protection can be handled through the use of
cryptography or cross domain solutions. The National Security Agency (NSA) was given
the responsibility of handling all cryptography for National Security Systems through the
Warner Amendment [32]. NIST approved cryptography and FIPS compliant algorithms
can be used in commercial systems and are typically required in Federal systems. The
Unified Cross Domain Management Office (UCDMO) supports the delivery of cross
domain solutions (CDS) [33]. A cross domain solution is a product that allows for the
extraction of a lower classification of data from a network or device running at a higher
classification level. Another area of concern is availability. There is a need to prevent
bottlenecks or single points of failure within the system. Each of these aspects of IA must
be applied to a system in order to provide a correct security posture. The focus of the
next sections is to describe the best way to apply these protections to a system.
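In practice, confidentiality and integrity for data in transit usually come from a vetted protocol rather than hand-rolled cryptography. As a minimal sketch using Python’s standard library (an illustration, not a statement about any particular system in this work), a TLS client context can be configured to require certificate validation and modern protocol versions:

```python
import ssl

# create_default_context() enables certificate validation and hostname
# checking, giving server authentication on top of the confidentiality
# and integrity protection provided by the TLS record layer.
context = ssl.create_default_context()

# Refuse legacy protocol versions with known weaknesses.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# A socket wrapped with this context, via context.wrap_socket(...),
# would encrypt and integrity-protect all application data in transit.
```

Using the library defaults and then tightening them, rather than building a context from scratch, is the conventional way to avoid accidentally disabling a protection.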
2.3 Contracted Acquisition and Systems Engineering
Systems engineering performed internally and systems engineering that is done on
contract can be different. This is due in part to the fact that when a customer decides to
acquire a specific capability, they begin the concept development, but it evolves when the
contracting company begins its work. In order to best describe how security needs to be
added, it needs to be addressed in both contexts. I will first describe an approach to
include security with respect to the high level system engineering concepts and then will
describe in detail how the approach should be added to the contracted acquisition life
cycle.
2.3.1.1 Systems Engineering
The systems engineering life-cycle model has three stages, broken into nine phases
[14]. The first stage is Concept Development which consists of: the Needs Analysis
phase, Concept Exploration phase and Concept Definition phase. The second stage is
Engineering Development, broken into the Advanced Development, Engineering Design
and the Integration and Evaluation phases. The last stage is Post Development which
includes Production, Operation and Support and finally the Phase out and Disposal
phases.
2.3.1.2 Stage 1: Concept Development
The purpose of concept development is to formulate and define the “best” system
concept such that it satisfies a valid need. This requires an analysis of needs. Once a
valid need is decided upon, the systems engineer must explore various designs and finally
choose the “best design.” It should be noted that in the customer contracted setting
concept development occurs twice. The customer typically comes up with a concept, and
then once the contracting company is put on contract, they typically come up with a
different concept. This will be discussed in detail later on.
There are longer lead times associated with certain concepts such as new
cryptographic devices and cross domain solutions. Before a concept is selected, I have
found it prudent to identify any outside entities that might have an impact on
development time, such as NIST, SEC, NSA or the UCDMO.
Needs analysis establishes a valid need for the new system that is technically and
economically feasible. Needs analysis encompasses operations analysis, functional
analysis, feasibility definition and ends with needs validation. This is sometimes done
through simulation [34]. If there is not a valid need, or the requested addition goes
outside the bounds of the requirements or intent of the customer, then this can lead to
scope creep which will increase the need for money, lengthen the schedule and perhaps
increase risk.
In the concept exploration phase of concept development, the systems engineer
should examine and analyze potential system concepts, then formulate (and validate) a set of system performance requirements. In order for this to occur the systems engineer
must perform operations requirements analysis, performance requirements formulation,
implementation concept exploration and performance requirements validation. Many of
the security requirements fall into performance requirements rather than system
requirements.
This is another vital aspect of the systems engineering development
lifecycle in which the security engineer should be involved.
The last phase of concept development is concept definition. Concept definition
involves selecting the “best” system concept, defining functional characteristics, and then
developing a detailed plan for engineering, production, and operational deployment of the
system. In order for this phase to be complete, the system engineer must perform performance requirements analysis as explained earlier, along with functional analysis and formulation; then the final concept must be selected and validated. Throughout this phase and the other phases
in this first stage of systems engineering, the expertise of a security engineer should be
employed to ensure that security is truly “baked in.”
2.3.1.3 Stage 2: Engineering Development
The purpose of the second stage, engineering development, is to translate the system
concept into a validated physical system design. The design must meet operational, cost,
and schedule requirements. It is in this phase that engineers develop new technology,
create the design and focus on the cost of production and use. This is the most intense
stage for the Integrated Product Team (IPT) engineers, and this is the stage in which the
system materializes. An IPT is a group of engineers that are dedicated to a particular
piece of system functionality. An IPT should consist of engineers from varying
disciplines. Engineering development has three phases.
The beginning phase is advanced development, in which the system engineers
develop cutting edge technology for the selected system concept and then validate its
capabilities to meet requirements.
This requires that the system engineer perform
requirements analysis, functional analysis and design, prototype development and finally
development testing. As explained previously, it is imperative that the security engineer
be involved with advanced technology choices. It is also necessary that the security
engineer understand the functional analysis and design. One of the ways that the security
team can better understand the functional design is by using the questions in Appendix B
to facilitate communication with the various IPTs. I developed Appendix B for just this
purpose. A security engineer may be assigned to each IPT, if cost allows, otherwise a
security engineer may have to handle multiple IPTs. The answers to the Appendix B
questions allow the security engineer to understand the security posture of a particular
subsystem or component within the system.
It is also at this stage that the software development team will be laying out the software architecture descriptions and beginning to prototype software modules.
Secure
design patterns should be included as part of this design. The design should also take into
account the need to stay up to date with security patches. These can come in the form of
Common Vulnerability Enumerations, Information Assurance Vulnerability Alerts
(IAVAs), or from a particular vendor, such as Cisco or Microsoft. This is covered more
thoroughly in Chapter 6.
The security engineer should also be involved in the prototype development and
testing aspects of this phase. If prototypes are built to show functionality but the security measures are not part of the prototype, results can be skewed, causing problems later in development. Testing is important as well, to assure the customer that the security aspects of the system will be correctly incorporated.
The next phase is engineering design. This phase can include the development of a
prototype system satisfying performance, reliability, maintainability, and safety
requirements. Without the creation of a prototype, it is still in this phase that the
engineers must show how the system will meet the performance requirements. As stated
earlier many of the security requirements are performance based, and so this phase
requires extra consideration on the part of the security engineer. Again in an iterative
fashion the systems engineer must perform requirements analysis, functional analysis and
design, component design and design validation.
The last phase is integration and evaluation. The systems engineer must show the
ability for economical production and use then demonstrate operational effectiveness and
suitability of the system. With respect to the security world, this is where certification
and accreditation activities are at their peak.
This last phase also includes test planning and preparation. This includes a system
requirements review to accommodate changes in customer requirements, technology or
project plans. The systems engineer must pay special attention to all the test items:
management oversight of the Test and Evaluation (T&E) functions, test resource
planning, equipment and facilities, system integration, developmental system testing and
then operational test and evaluation. On some projects a separate IPT is created for
testing activities, although a systems engineer is usually included on the team. It is also
necessary for a security engineer to assist as well. Test activities allow the security
engineer to provide verification that all the security elements have been incorporated
correctly.
This is critical to proving compliance with any necessary regulatory
documents.
It is also possible at this stage for a penetration testing team to become involved.
This team can attack the system from a black box or white box method. If the penetration
team has no previous knowledge of the system, this is considered a black box test and can
be useful in providing an understanding of how easy an outside attack would be. White
box testing may be even more useful at this stage. The white box team would be given
knowledge of the system before the attack, which could allow them to more easily exploit
any lingering vulnerabilities. This is a useful tool for risk assessments. For systems
needing to be compliant with SOX, this is also when external auditors may become
involved to prove compliance. Internal business assessments should be completed first
[35].
Final testing or auditing is typically performed by an outside party and is not done by
the contractor or builder of the system. This assures impartiality and reduces the potential
of collusion. Documentation gathered throughout the life cycle of the system is assembled into a package to show regulatory compliance and is provided for customer review. This package is given to the approving authority for use in determining if the security controls have been met. In the case of SOX, the Securities and Exchange Commission (SEC) determines compliance.
If there are any discrepancies, the security engineer must work with the IPTs to rectify
the problem.
It is at the end of this stage that the security engineer should have completed the security mitigation implementation. However, as will be shown, changes are always necessary, and security must be incorporated through the entire cradle-to-grave lifecycle. In most systems that have to meet regulatory compliance, the proof that the
system is still in compliance must be done on a yearly basis [31].
2.3.1.4 Stage 3: Post Development
The last stage is to produce, deploy, operate, and provide system support throughout
the system’s useful life and provide for safe system phase-out and disposal. There are
three phases: production, operation and phase out.
During the production phase, the goal is to economically produce, develop, and
manufacture the selected system. The chosen system must meet the specifications. It is in
this phase that the engineers must begin engineering for production. This necessitates a
transition from development to production. Design and production require two very
different types of engineering. The production team must know how production is going
to operate and they must have a knowledge base on how to produce the system. This is
especially critical if the system is going to be mass produced. In the case where only one
or two systems will be created, the production operation is going to look quite different
from that of the mass produced system. During the production time frame there will be
security changes. New vulnerabilities will be found, new attack vectors will be formed
and additional threats to the system may arise. While the system is being produced, the
security engineer must continue to research and understand the ever evolving security
issues. Without the security engineer being involved during production, the security posture of the system will change.
The operational phase is when the system is being used. Typically it is the system
administrator that is assigned the security tasks during operation. Instructions and manuals should be provided by the security engineers to the system administrator to ensure that
the security posture stays intact. During operation there is a need to continually update
the security policies and configurations, and to patch the system for new vulnerabilities.
Patches should not be applied to a system without first being tested in a lab to
ensure that the new patches will not adversely affect the system.
This also raises supply chain security issues [36]: how does the system administrator know that the patches can be trusted when they are sent from the lab? Or, if patches are downloaded from the same site, how can it be assured that the same versions are used?
There are a few techniques, such as digital signatures and hashes that should be
incorporated into this patching lifecycle.
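A minimal sketch of the hash-based half of such a scheme, using Python’s standard library (the patch bytes and digest here are illustrative; a complete solution would also place a digital signature over the digest so the publisher cannot be impersonated):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Lab side: compute and publish the digest of the approved patch.
# The digest is distributed out-of-band from the patch itself.
approved_patch = b"illustrative patch contents v1.2.3"
published_digest = sha256_hex(approved_patch)

def patch_is_untampered(downloaded: bytes, expected_digest: str) -> bool:
    # Administrator side: recompute the digest over the downloaded bytes.
    # Any corruption or tampering in transit changes the digest.
    return hashlib.sha256(downloaded).hexdigest() == expected_digest

ok = patch_is_untampered(approved_patch, published_digest)
bad = patch_is_untampered(b"modified patch contents", published_digest)
```

The out-of-band distribution of the digest is what makes this useful: an attacker who can modify the download channel must also compromise the separate channel carrying the digest.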
The last phase is disposal. This can lead to vulnerabilities for other systems. When a
system is being disposed of, information can still remain. Whether this information is
encrypted or in the clear, it needs to be removed. Also, reverse engineering of old products can provide attackers with insight into how to attack similar systems.
The replacement system will most likely be built upon existing technology. This
means that if any of the products in the system that is being disposed of provide insight
into the new system, or if the same or similar technology is being used, the new system
will have a high risk of being exploited. The security engineer should be involved in the
disposal of the system to provide protection for other systems, and for the replacement
system.
2.4 Customer Contracted Acquisition
The customer contracted acquisition life-cycle follows the systems engineering
process, although some have formalized the documentation process and the review
process. One type of customer process requires that milestones and reviews, such as the system requirements review (SRR), be passed in order for funding to continue. The systems engineering process as defined in [14] doesn’t cover funding issues. Table 2-1 describes my understanding of how they compare. I have also provided the security aspects that should be considered at each step. Other customers may not be focusing on
development, as much as an assessment of capability and regulatory compliance [35].
This leads to a different structure of meeting regulatory compliance documents such as
SOX. The focus of this section is on development and not just compliance assessment.
The first column lays out the customer’s view of the life-cycle including review gates, as
explained in [37], [35]. The third column of Table 2-1 shows my proposed inclusion of security into the steps of the systems engineering life-cycle, based on hands-on experience. The following sections in this portion of the chapter describe my method to
introduce security into the overall life cycle. It should be noted that many of the activities
are cyclical in nature. Before the system security engineering process can begin, the right
people have to be in place. I provide insight into the skill sets required to have a
successful team.
Table 2-1: Customer Acquisition and Systems Engineering

Customer | Contractor | Security Inclusion | Systems Engineering Phase
Needs Analysis (system studies, technology assessment, operational analysis) | | | Concept Development: Concept Exploration
Request for Proposal (RFP) | Pre-proposal | Include IA personnel on proposal team |
 | Concept development | Look at residual risk in concept |
 | Proposal | Create conceptual security architecture and initial trade studies, based on known operational needs | Concept Definition
Contract Award | Pre-planning | |
System Requirements Review (SRR) / System Functional Review (SFR) | Planning; Needs Analysis: system studies, technology assessment (operational analysis done by government and provided to contractor) | Schedule IA tasking, build IA team, study operational analysis for security impacts |
 | Requirements Decomposition | Work with IPTs, using the questions in Appendix B, to assess component risk |
 | Trade Studies | Perform trade studies on potential IA boundaries |
 | System Concept | Define IA boundaries and the involvement of other agencies, such as NIST, SEC, NSA, UCDMO, etc. |
 | Functional Allocation | Review interfaces to begin looking at security attributes of functional flows; define CIA functionality | Concept Exploration
 | Functional Architecture | Work with the functional architecture team to ensure appropriate flow-down of security functions is captured in the functional allocation | Concept Definition
 | IPT Requirements Allocation | Work with each IPT, using the questions in Appendix B, to determine security requirement allocation | Development
 | Physical Interface Design | Begin the analysis of ports, protocols and services (PPS) that will be used in the system to determine risk | Advanced Development
 | Physical Architecture | Create security architecture from the physical architecture to ensure correct placement of network mitigations, such as firewalls, IDS, CDS, etc. |
Preliminary Design Review (PDR) or Initial Design Review | Update Functional Architecture | As development proceeds, it may be necessary to change which functions provide security-relevant features; continue work with each IPT to ensure correct inclusion of necessary mitigations, working with outside agencies such as NIST as necessary | Engineering Design
 | Software Architecture | Ensure secure design patterns are used, ensure that software protocols and services are not vulnerable, and continue work on PPS mapping |
 | Software Design | Ensure secure coding standards, software assurance plans and static code analysis are being performed; begin working on abuse cases pertinent to software interfaces; work through the security patching scheme |
 | IPT Design | Continue work with each IPT to ensure correct inclusion of necessary mitigations; continue work with outside agencies as necessary |
 | Prototyping / Trade Studies | Perform trade studies on potential IA mitigations such as firewalls, CDS, etc. |
 | Architecture Reviews | Security architecture should be complete |
 | Develop and test components (Sys I Test) | Ensure individual components meet their security requirements; begin hardening of individual components |
Test Readiness Review (TRR) | Integrate components into subsystems (Sys II Test) | Ensure subsystems meet security requirements; harden subsystems and perform necessary regression testing |
Secondary Readiness Review | Integrate subsystems in lab (Sys III Test) | As the system is put together, ensure integration does not negate security hardening; make configuration changes, add security patches and regression test as necessary |
Critical Design Review (CDR) or Final Design Review | | | Engineering Design; Integration and Evaluation
System Verification Review (SVR) | System tests, or other such as business internal testing | Perform penetration tests; submit all certification and accreditation or compliance reporting documentation |
 | Update and regression test as necessary | Maintain certifications/compliance |
Production Readiness Review (PRR) | | Maintain certifications/compliance |
Operational Test Readiness Review | External auditor testing | Maintain certifications/compliance |
Initial Production | Build a few fully functioning systems | Maintain certifications through security patching; if any changes are made, review for security impact | Post Development: Production
Operational Test | | Maintain certifications/compliance |
Initial Operating Capability (IOC) | | Maintain certifications/compliance |
Full Rate Production (FRP) | Production | Maintain certifications/compliance |
Maintenance | Maintenance Plan | Maintain certifications/compliance | Operations and Support
End of Life | | Ensure sanitization and destruction of all sensitive components | Phase out and Disposal
2.4.1.1 Security Team
While going through the process of securing a platform that must work under
regulatory constraints there are a few peculiarities that must be accounted for that are not
seen in the rest of industry. The first is the need to understand who has the ability to
determine whether or not the system is at an acceptable level of risk. This person may be
the Designated Approval Authority (DAA), External Auditor or other authority,
depending on who the system is being built for such as the medical industry, financial
industry, etc. In some cases it is possible for a system to fall under multiple approval
authorities. The combination of approvers must be considered to streamline the process.
For example if a system has both an internal approver and an external approver, the
security engineers should work to see if there is commonality in the way compliance or
certification artifacts can be presented. Customer representatives should be available to
provide guidance to the system security engineer throughout the development life cycle.
It may be possible to create one certification and accreditation (C&A) or compliance
package that will be sufficient for all approvers. This is important to understand, because
the security engineer’s management does not have the final authority to determine
whether or not the system is secure. For example, with respect to publicly traded financial systems, the Securities and Exchange Commission (SEC) has authority under the
Sarbanes-Oxley Act.
The next group to consider is the contracted security team. Security will never be taken seriously unless it has the full support of upper management. Appropriately hiring, training and supporting the security engineers is necessary. Security engineers have
primarily been associated with systems and are called system security engineers, but this
is something of a misnomer. A security engineer must understand not only system
design, but also requirements decomposition, software, networking, testing, and other
aspects of system development.
When it comes to hiring security engineers, just because someone has a certification
does not mean they are qualified to perform any security task. However, certifications are
necessary. In Department of Defense (DoD) programs, DoD Directive 8570.01-M establishes the certification requirements for those performing IA work [12]. There are
similar certifications required by customers in industry for working on financial or
medical systems. In addition to the appropriate certification, the hiring manager should
work with known IA personnel to ensure that during the interview process, the right types
of questions are asked. I have found that questions should be focused on the necessary
task at hand. If a security architect is needed, having the candidate describe how they
would incorporate security into the system and the approach to handling various design
scenarios would be appropriate.
This might be too difficult for a more novice IA
engineer. In any event, working with knowledgeable security personnel to provide
feedback of answers is key in hiring the right security engineers. Once engineers are
hired, keeping them up to date is crucial in ensuring that the system will be protected
using the most effective, and cost-efficient methods.
Part of training for security engineers should require them to maintain IA
certifications. Any IA certification of significance requires continuing education points
to stay in good standing, or recertification. Providing security engineers with access to
security journals and publications is one way of supporting their growth. Another is to
provide them with access to security conferences. Although training budgets are always
tight, this is an area that should not be skipped. The protections of the system are only as
good as their implementation. If the security engineers are not up to speed on the newest
threats, vulnerabilities and mitigations, the system will not be protected adequately.
The security team is only as good as its individual engineers, but the team must also
be composed appropriately. My experience suggests that a security team should include
a security architect, security requirements engineer, network security engineer, software
security engineer, and a general security engineer for each segment of the system, as
defined below.
The security architect is responsible for the security architecture and design of
security mitigations within the system. The security architect should also be responsible
for ensuring that all the necessary security design artifacts are in place for certification
and accreditation. This includes the security architecture (encompassing both system and
software design) as well as vulnerability assessment of the architecture.
The security
architect cannot accomplish this without a thorough understanding of every aspect of
system, software and network design. This is why there are multiple members of the
security team, although in some cases it may be necessary for one engineer to wear
multiple hats.
As a system evolves, the security requirements can change. This is why there must
be a security requirements engineer. This engineer must work with the security architect
to establish the baseline set of security requirements that should be flowed to each
segment. Working in concert with the security engineers assigned to each segment, a
feedback loop should be established to ensure that the security architect is aware of any
changes that are occurring in each segment that may impact security. The security
architect then must work with the security requirements engineer to assess the security
requirements impact.
This will be covered more extensively in the requirements
decomposition section. Although it may not be its own integrated product team (IPT) the
network team will need to understand the impact of security requirements. This requires
a working combination of the security architect, security requirements engineer and the
network security engineer.
The network security engineer must work with the network architect to ensure that
appropriate mitigations are in place. In some instances, companies refer to the network
security engineer as the security architect. Or they may want the network architect to
also have the security background.
The terminology may differ, but the need for
engineers with these skills is critical. The network security engineer must work with the
network team to understand all data flows in the system. This should encompass every
port, protocol and service running in the system, with all origins and destinations
understood. This cannot be done without an understanding of how the software
communicates. The network engineer must work with the security architect to establish
all of the network protections that are necessary. The security architect in turn must
coordinate with the software security engineer to ensure that software is communicating
securely.
There must be coordination between the software security engineer and
network security engineer to ensure that this communication is understood and
documented, because this affects the ability for a system to be hardened against attacks.
Software security is one area of system development that is often overlooked. The
software security engineer must work with the security architect to develop a software
security architecture that integrates into the system security architecture. This should
define how the software is being protected at each layer. The integration of the software
security architecture and system security architecture should show not only the physical
protections of the system, but should show the flow of software and how it is securely
operating.
The software security engineer is also responsible for ensuring that secure code
development is performed throughout the software development life cycle. They should
ensure that static code analysis is performed, and that vulnerabilities are reduced in the
software in every way possible. This means overseeing everything from version control
of software to the implementation of code hashing. The responsibilities are described
more in depth in Chapter 6.
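The code-integrity part of these responsibilities can be illustrated with a short sketch: record a hash for each delivered artifact, then report anything that has drifted from the recorded baseline. This is a minimal illustration under stated assumptions, not the method this dissertation develops; the artifact names are invented, and a real program would hash the delivered binaries under configuration management control.

```python
import hashlib

def hash_artifacts(artifacts):
    """Record a SHA-256 digest for each delivered artifact (name -> bytes)."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in artifacts.items()}

def integrity_drift(recorded, current):
    """Names whose hash changed, plus artifacts added or removed since the baseline."""
    return sorted({name for name in recorded.keys() | current.keys()
                   if recorded.get(name) != current.get(name)})

# Hypothetical artifacts for illustration only.
baseline = hash_artifacts({"fsw.bin": b"v1", "config.xml": b"cfg"})
later = hash_artifacts({"fsw.bin": b"v2", "config.xml": b"cfg", "debug.log": b""})
drift = integrity_drift(baseline, later)  # fsw.bin changed, debug.log appeared
```

Any non-empty drift report would then trigger a review under the program's configuration management process.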
Aside from these main security engineers, it is beneficial to have a security engineer
assigned to each segment of the system. Depending on the complexity of the segment, it
may be possible to have a security engineer assigned to more than one segment, or it may
be necessary to have multiple security engineers assigned to a single segment.
These
engineers must work with the responsible engineer for their piece of the system and
provide that information back to the security architect. The security architect will view
the security posture of the total system and provide guidance on the type of mitigations
necessary for each piece. The security engineers are then responsible for ensuring that
the mitigations required in their part of the system are implemented and verified. In order
for a system to pass certification and accreditation, it is important that the security
architecture can be traced to individual implementations and be verified through
functional testing. Some security requirements will fall outside the realm of functional
testing, but they still must be verified.
The creation and maintenance of a solid security team will result in smoother
implementation of security requirements, and a better security posture for the system. It
will also allow a platform to have IA “baked-in” which will reduce cost and schedule
impact.
2.4.1.2 Proposal Phase
The proposal phase is the first time an IA team will begin to understand what is
necessary for a new system or update to an existing system. As IA is a newer practice, in some instances the request for proposal (RFP) will not be clear on the aspects of security that are expected. It may even list every compliance process possible without the understanding that only a subset may apply, or, on the flip side, it may not list any process at all. During the proposal phase, it may be possible to send questions back to the customer at certain points.
The security engineers should respectfully request any
necessary clarification on issues such as which certification process will be followed.
These questions may or may not be answered. As such, the team should state any
assumptions they are making about the system, such as security boundaries, accreditation
processes, etc.
There are other elements of IA to consider in a proposal. The proposal should explain
how the team will uniquely address IA as an integrated solution that is harmonious with
functional needs such as maintainability, and net centricity. The proposal should also
address specific concerns such as multi-level security that will affect the overall
architecture and system design.
The proposal team may also want to include the
certifications of the people that will be on the team. This is necessary in some industries,
as many customers now require those working IA to maintain specific certifications,
depending on the part they play in security [12].
Contingent upon how well known the company is with respect to IA, partnering may
be required. It is also essential to know who the customer is and the relationship with that
particular customer. For example, if the customer has always used company X for IA, it
may be difficult to convince the customer that you have the expertise. This is why including the certifications of those on your team should be given consideration in the proposal.
There are different types of proposals. Some may be asking for studies to be done,
such as compliance assessments for SOX, while others are looking for actual system
development. IA should be explicitly addressed in any of these types of proposals, but the
degree to which it is addressed may differ.
When preparing a proposal for a
developmental project, IA may be addressed differently than functional needs. IA is
necessary, but does not necessarily add to the functionality of the system. In many cases,
this leads to a lack of budgeting for IA. Adequate allocation of time and resources to
perform IA must be part of the proposal.
Proposals are typically bid to win, so there will be pressure to make do with less when it comes to security. Do not underbid to the point where the security team cannot function. If this is done, the system risks going over cost or risks non-compliance. Although not truly an IA concern, this is a concern to management and so must be taken into consideration.
2.4.1.3 Planning Phase
The schedule needs to include adequate time for IA activities. I have found that Level of Effort (LOE) charging may be frowned upon, but many IA activities are LOE, meaning there is no specific timetable or schedule associated with the work. Some
activities, such as requirements decomposition are discrete tasks that can be managed
against a work package. It is advisable to use a combination of work packages and LOE
accounts when planning for IA. The hours spent on IA activities need to be tracked one
way or another to show the benefit to the customer.
The type of project should be considered when planning IA. Similar to software
engineering it is easier to incorporate IA from the beginning of a project and much more
costly to add in later. A system design and development project is at an advantage for
building IA in from the beginning.
Discrete tasks should include requirements decomposition, creation of system
security architecture, creation of software security architecture, boundary defense design,
hardening activities, determination of ports, protocols and services, analysis of static code
analysis results and development of compliance artifacts. An LOE account should be
maintained for the support of responsible engineers, software team and customer
interaction.
As discussed in the beginning section, there needs to be a security architect, network
security engineer, software security engineer, security requirements engineer and one or
more security engineers for the system segments. Ideally the security engineers should
be integrated into each IPT and be able to provide feedback to the security architect. It
may be possible to have one security engineer for several smaller segments. During the
planning phase, the security architect should be brought on board to determine how many
security engineers will be needed.
The number of security engineers necessary will correlate with the life-cycle model and the timeline. For example, under a spiral approach the number of security engineers needed may ebb and flow, while under a waterfall model the same number of engineers may be needed throughout the life-cycle. Agile development requires security throughout the life-cycle; it can also reduce the risk associated with security, if done
correctly. IA and accreditation activities will have objectives at each review to show that
the security architecture is on track. These should be part of the overall schedule.
For
each component in the system, the set of IA controls will have to be applied. The
application of IA controls cannot be fully understood until an initial threat assessment
occurs. This also means that the IA requirements cannot be levied until this
understanding is complete. In the event that the project is an upgrade or an assessment, a
vulnerability assessment may be even more important than a threat assessment, as it will
provide insight into the issues at hand.
2.4.1.4 System Requirement Review
At System Requirement Review (SRR) the basics of how the IA requirements need to
apply to the system should be laid out.
A customer document will provide an
overarching requirement set as well as a certification and accreditation (C&A) or
compliance regulatory requirement. For example, the requirement set might be the DoDI
8500.2 [5] and the C&A portion might be DIACAP. In the federal realm, the
requirements might be the NIST SP 800-53 [7] and the certification might be the NIST
Risk Management Framework codified in the NIST SP 800-37[6]. There are different
requirement sets, compliance regulations and C&A paths depending on the customer.
Requirement sets and regulatory documents seem to change frequently, so the IA team
must stay abreast of developments. Examples of these are seen in Figure 2-2 Requirement
Sets.
ISO 27001 is a commonly used commercial standard for C&A and is used not only in the United States but also in North Atlantic Treaty Organization (NATO) programs. Within federal systems, the National Information Assurance Certification and Accreditation Process (NIACAP) is commonly used [38].
Systems needing to be compliant with Sarbanes-Oxley can use several different constructs, such as Control Objectives for Information and Related Technology (COBIT) [39], ISO 17799, or the Statement on Standards for Attestation Engagements (SSAE 16) [40].
Figure 2-2 Requirement Sets
It should be noted that COBIT is a management framework and not truly a technical
framework. The U.S. Department of Health and Human Services suggests the use of
NIST SP 800-30 as a framework for HIPAA and HITECH compliance [41] with regards
to risk analysis. The NIACAP, DIACAP, and National Information Systems Certification and Accreditation Process (NISCAP) are similar in content. As such, there is a movement toward a unified risk management framework for C&A and compliance activities. The
National Institute of Standards and Technology (NIST) Risk Management Framework
(RMF) is currently the framework that is being adopted. It is codified in the NIST
Special Publication (SP) 800-37 [6]. The RMF is flexible and allows for application to non-standard systems. Like NIST SP 800-30, the RMF can be used to show SOX compliance [30].
The National Industrial Security Program Operating Manual (NISPOM) is a little
different. This is the requirement set that contractor facilities must meet if they are
handling classified information. Depending on its implementation, it can be difficult to
work with when building a non-standard system. It would be useful for NISPOM, for the
sake of consistency, to also move to the RMF, but currently there is no evidence that this
is the case.
When a project is at the SRR stage, the requirement set should be laid out to meet the
system’s needs.
The security architect should work with the security requirements
engineer and customer or auditor, to determine what is applicable. The overarching
requirement set typically contains requirements for facilities, customer actions, technical
and non-technical specifications. The confidentiality, availability and integrity level(s)
must be taken into account in determining how the requirement set will apply. For
example, the NIST RMF contains requirements for emergency lighting [7].
The
application of this to an unmanned vehicle is unreasonable. A requirement like this must
be discussed and justified. This requirement may apply to the control station, so it would
still need to remain in the system level requirement set, but during the next phase of
decomposition, it would only flow to the control station portion. Justification for each
requirement either being or not being applied to the system should be documented.
The justification here would note that emergency lighting is meant to allow personnel to maneuver should the main lights be compromised; as there are no personnel in an unmanned vehicle, the requirement is not applicable to it.
As the development progresses, these requirements will be sifted and refined to apply to specific subsystems and components.
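The tailoring decisions described above can be captured as simple applicability records, so each accepted or excluded requirement carries its justification forward to the approver. The sketch below is illustrative only: the control identifier "PE-12" and the segment names are assumptions made for the unmanned-vehicle example, not an allocation from any real requirement set.

```python
from dataclasses import dataclass

@dataclass
class Applicability:
    requirement: str      # e.g. an emergency-lighting control identifier
    segment: str          # portion of the system the decision covers
    applicable: bool
    justification: str    # rationale documented for the approver

def tailor(decisions):
    """Split decisions into requirements flowed down and those justified out."""
    flowed = [d for d in decisions if d.applicable]
    excluded = [d for d in decisions if not d.applicable]
    return flowed, excluded

decisions = [
    Applicability("PE-12", "control station", True,
                  "Personnel occupy the control station, so emergency lighting applies."),
    Applicability("PE-12", "air vehicle", False,
                  "The unmanned platform carries no personnel."),
]
flowed, excluded = tailor(decisions)
```

Keeping the justification alongside the decision means the documentation the approvers expect is produced as a by-product of tailoring rather than reconstructed later.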
It should be noted that performance requirements should take security features into account: encryption can add tremendous overhead, and firewalls and intrusion detection and prevention systems can restrict flow. Redundancy, and where necessary load balancing, is advised.
2.4.1.5 System Functional Review/ System Design Review
By the time development has reached the System Functional Review (SFR) a top
level security architecture should be created. The system concept will have been defined,
which will drive the security architecture.
Some systems architects have the tendency to lump all the security functionality into
a single function in the functional architecture. This will not allow for adequate system
engineering design. The security architect should work with the systems engineers to
ensure that the functional architecture accurately represents the security architecture.
There are multiple ways of developing a security architecture, although a formal
method should be followed to ensure the completeness of the architecture. My approach
is covered in Chapter 4. One of the difficulties lies in the fact that IA requires adherence
to both technical and non-technical specifications. For example, the IA requirement that a project follow strict configuration management processes cannot be mapped into the functional architecture, but must be part of the overall security model.
The functional architecture should model confidentiality, availability and integrity flows, based on the system concept. This is tightly coupled to interoperability, if interoperability requirements exist. Specifically, all internal and external needs for communication should have been defined. These will impact compliance and/or
accreditation. Once the system security concept is defined, the IA team will then begin
working towards a preliminary design.
2.4.1.6 Initial or Preliminary Design Review (PDR)
The Initial or Preliminary Design Review (PDR) is the point at which the technical
design is assessed for adherence to system requirements. Before going to the review, the IA team should have accomplished several tasks: providing a conceptual security architecture, flowing requirements to the subsystem and component level, beginning to understand data flows, and performing an initial vulnerability analysis.
The security architect must work in tandem with the system and network architects to
understand how the system will fit together. There are some IA restrictions on how to
design the system, such as requiring boundary defense at the edge of the system that the
system and network architects must be aware of when they are designing the system.
Working with the security architect to ensure these functional elements are built into the
design, will prevent a bolt on IA solution. One method that is helpful is for the security
architect to have a block diagram of the system that shows the inputs and outputs of each
component in the system, similar to that shown in Figure 2-3 Boundary Crossing Block
Diagram.
The security architect can then take each IA area and assess it against the components
in the system. The architect will end up with a view of the system for each area of
security concern. There should be a view for boundary defense, host-based intrusion
detection/anti-virus, hardening, etc.
Figure 2-3 Boundary Crossing Block Diagram
My method allows the security architect to work through the system concept and
begin to understand how to break down the security for all the components. Although the
functional elements may help, modeling the system this way will allow for a more
concise allocation of IA requirements. This is particularly important to do before a
supplier is put on contract, as a misallocation can be quite costly.
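One way to sketch this per-area modeling: each component lists its interfaces and the IA areas that apply to it, and a filtered view is generated for each area of security concern. This is a hedged illustration of the idea, not the dissertation's tooling; the component names, interface names, and area labels below are all hypothetical.

```python
# Hypothetical components; each lists its interfaces and applicable IA areas.
components = {
    "ground_station": {"inputs": ["uplink"], "outputs": ["status"],
                       "areas": {"boundary_defense", "hardening", "host_ids"}},
    "gps_receiver":   {"inputs": ["gps_rf"], "outputs": ["nav_data"],
                       "areas": {"hardening"}},
}

def area_view(components, area):
    """Return the subset of the block diagram relevant to one security area."""
    return {name: spec for name, spec in components.items()
            if area in spec["areas"]}

hardening_view = area_view(components, "hardening")        # both components
boundary_view = area_view(components, "boundary_defense")  # ground_station only
```

Each view corresponds to one of the per-area diagrams described above (boundary defense, host-based intrusion detection/anti-virus, hardening, and so on).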
As this straw architecture is created, the security architect should work with the
requirements engineer to decompose requirements to the subsystems and components. It
is at this phase that the security engineer from each IPT should be involved. Although
some pieces of the system may be built from scratch, many will most likely be off the
shelf (OTS). The items may be modified or used as is.
The security engineers ingrained in each IPT should provide feedback to the security architect and security requirements engineer on the attributes of their part of the system; this includes the network security engineer providing network feedback. For example, if the system happens to contain a Global Positioning System (GPS), this will most likely be an off-the-shelf solution. The security engineer dealing with this piece should provide information on how this part of the system is to be used: how the device communicates (such as whether it only receives transmissions but does not transmit), the type of memory contained in the box, which components it is going to connect to, and other information about the component. The list of questions that should be asked at a
minimum are found in Appendix B.
This information will not only help during
development, but will be useful in later phases of the life-cycle.
This feedback will allow for better requirements decomposition and will allow the
architect to begin a vulnerability analysis on the emerging security architecture. Some
may relegate a vulnerability analysis to running scans against a built system, but unless vulnerabilities are found and mitigated early in the development life-cycle, no amount of scanning at the end will compensate for missed mitigations. Tools such as SCAP, Gold Disk, Retina and SRR scripts can be run against a system to see whether it has applied all the newest patches and whether its configurations match those required by the customer, similar to what is described in [42]. This is different from the way the term is used in academia, as described in [43], [44], [45], and [46].
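The configuration side of such scanning reduces to comparing actual settings against a required baseline. The sketch below illustrates only that comparison; it is not an interface to any of the tools named above, and the setting names are invented for the example.

```python
def compliance_gaps(required, actual):
    """Settings that are missing or deviate from the hardening baseline."""
    return {key: {"required": want, "actual": actual.get(key, "<missing>")}
            for key, want in required.items() if actual.get(key) != want}

# Hypothetical hardening baseline and observed configuration.
required = {"min_password_length": 14, "telnet_enabled": False, "audit_logging": True}
actual = {"min_password_length": 8, "telnet_enabled": False, "audit_logging": True}
gaps = compliance_gaps(required, actual)  # only the password setting deviates
```

Each reported gap is either remediated or documented as a functionality-driven deviation, mirroring the process described for hardening at TRR.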
Vulnerabilities are those holes in the system that can be exploited by a threat [47]. The potential for this exploitation is known as risk, which leads to the classic equation Risk = Threat × Vulnerability, and its derivatives. By identifying vulnerabilities early on, the security engineers can identify known unknowns as well as unknown unknowns and reduce programmatic risk.
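The multiplicative relation can be made concrete with a toy scoring sketch. The 1-5 ordinal scales and the level thresholds below are illustrative assumptions only; real programs define their own scales and risk acceptance matrices.

```python
def risk_score(threat, vulnerability):
    """Classic Risk = Threat x Vulnerability on illustrative 1-5 ordinal scales."""
    if not (1 <= threat <= 5 and 1 <= vulnerability <= 5):
        raise ValueError("scores are defined on a 1-5 scale in this sketch")
    return threat * vulnerability

def risk_level(score):
    """Hypothetical binning of the 1-25 score range into acceptance levels."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"
```

A high level would indicate risk above the acceptable threshold, the condition under which certification or compliance fails.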
Although traditional systems engineering would mandate requirements be broken
down to the component level, some projects have failed to adequately decompose
requirements all the way [14]. One reason this occurs is that the IPT takes over requirements decomposition at the component level without interaction from the systems engineering team. This is a general systems engineering problem, but it can be
greatly magnified in the security world. For example, the requirement to harden the
system should be flowed to most components in a system. If the requirement is only
flowed to the subsystem level, this causes confusion in the IPT.
For example, if the X IPT is responsible for the X subsystem, which contains multiple components, perhaps only one component needs hardening while the other six do not. Without a security engineer to work with, the IPT would not know how to appropriately flow the requirement to that single component.
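The flow-down decision can be recorded explicitly so the IPT sees exactly which components a requirement reaches. In the sketch below the subsystem, component, and requirement names are all hypothetical stand-ins for the X-subsystem example.

```python
# Hypothetical subsystem X with seven components.
subsystem_components = {"X": ["x1", "x2", "x3", "x4", "x5", "x6", "x7"]}

# The security engineer's allocation: which components each requirement reaches.
allocation = {"harden-os": {"x1"}}

def flow_down(subsystem_components, allocation, requirement):
    """Return (applicable, not_applicable) components for one requirement."""
    reached = allocation.get(requirement, set())
    everything = {c for comps in subsystem_components.values() for c in comps}
    return sorted(reached), sorted(everything - reached)

applicable, not_applicable = flow_down(subsystem_components, allocation, "harden-os")
```

An explicit allocation table like this removes the ambiguity that arises when a requirement is flowed only to the subsystem level.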
2.4.1.7 Final or Critical Design Review (CDR)
The Final or Critical Design Review (CDR) is the agreement on the final design that
is to be built. The security architecture should be complete. At this point the software security architecture should also be complete, as should the first cut of the ports, protocols and services map. There should be an IP baseline showing all network interactions; on this baseline should be mapped all necessary open ports, the protocols the software uses to communicate, and the services using those protocols. One way to document this is to use a combination of spreadsheets or a database and Visio diagrams, as shown in Figure 2-4.
Figure 2-4 Ports, Protocols and Services
This will be discussed further in Chapter 4. The ability for users to use pull down
menus to view all the flows will allow the network team to visualize all the network
communications. This can show where some flows may be incorrect or missing. For
example, if different versions of Interface Design Documents (IDDs) are used this can
lead to incorrect flows. The creation of the ports, protocols and services (PPS) artifact
can increase the team’s ability to communicate the network flows to other teams as well
as the customer. This flow also allows the security architect and network security
engineer to perform a more accurate vulnerability analysis on the system.
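The PPS baseline itself can be modeled as a set of flow records, which also makes it straightforward to flag observed traffic that the baseline does not document. This is a minimal sketch of the idea; the endpoint and service names are invented, not drawn from any real system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    source: str
    destination: str
    port: int
    protocol: str    # e.g. "tcp" or "udp"
    service: str     # the service using the flow

def undocumented_flows(observed, baseline):
    """Flows seen on the network that are absent from the PPS baseline."""
    return [flow for flow in observed if flow not in baseline]

# Hypothetical baseline and observation.
baseline = {Flow("gcs", "vehicle", 443, "tcp", "c2_link")}
observed = [Flow("gcs", "vehicle", 443, "tcp", "c2_link"),
            Flow("gcs", "vehicle", 23, "tcp", "telnet")]
findings = undocumented_flows(observed, baseline)  # the telnet flow is a finding
```

Comparing observation against the baseline is one way mismatched Interface Design Document versions would surface as incorrect or missing flows.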
2.4.1.8 Software Security Architecture
A Software Architecture Description (SAD) document is created by the software
architect, and details the software interactions of the system. The software security
architecture should be interwoven in the SAD. The software security architect should
ensure that all software interactions are secure. This may include ensuring the integrity of
file transfers, authentication between software elements, etc.
2.4.1.9 Test Readiness Review (TRR)
IA requirements can be tested at several levels, but they primarily need to be tested at
the component and subsystem level.
When it comes time for integration, if the
requirements are tested at the lower level, it will be possible to minimize the type of
testing done at the integration level. I am assuming that integration test is meant for
testing between subsystems to ensure the system as a whole will cooperate.
There are two particular items that need to be considered with respect to Test
Readiness Review (TRR). The first is the hardening of equipment. Before integration
begins each component should be hardened according to customer requirements. This can
mean using Security Technical Implementation Guides (STIGs), NIST guidance, vendor
specifications or other industry documents. Depending on the requirement set there will
be items that have to be changed for functionality. These items must be documented. The
second part is patching. The customer must agree on the period of applying patches.
Security patches come frequently and must be added to the system. These must be tested
before inclusion into the system. This is best done by incorporating them into a software
version. This may mean that patches are several months behind. This may or may not be
acceptable depending on the customer and accrediting authority such as the SEC.
An agreement must be reached early in the project, or else the baseline for test will continually change.
When the system is handed over to the customer, patches are
expected to be up to date.
The PPS should be complete and will help the integration team during test.
Depending on the complexity of the network a separate network lab is advised.
Throughput and bandwidth are affected by IA mitigations such as firewalls and
encryption. Therefore the network elements should be tested with the mitigations in
place in order to ascertain the true effect on the network.
Test plans are created and should be reviewed by the IA team. The IA team should provide each IPT the expectation of how a particular IA item should be tested; this way, when they review the test items, there are no surprises.
As discussed in section 2.2.3.4, if the customer is the DoD, then the IA team will
have to work with both Defense Security Service (DSS) and the customer to provide the
necessary IA artifacts for certification and accreditation compliance. If a lab is being
used for testing, an agreement must be made early on as to the settings used in the lab. If
the lab or other part of the system needs to be connected to a DoD test network, a
separate accreditation on the lab equipment must be performed. In some cases a bailment
might be necessary. This is when the customer takes possession of the equipment so that
it is no longer under contractor control.
The IA team must work towards this accreditation early on; otherwise the test schedule may be impacted. This accreditation
may or may not have useful items for the final C&A package.
By TRR the IA team should have an initial cut of the compliance documentation.
This information should be a contractual item. By making this a contractually obligatory
item, it will allow for better feedback from the customer and forces collaboration with all
IPTs. The package contents will vary between accreditation and compliance processes,
but at a minimum it should include the security architecture, software security
architecture, PPS, and hardening artifacts. Hardening artifacts are typically the Retina and Gold Disk scans, although Nessus and Nmap scans can also be provided as part of the
evidence, depending on what sector the system is being designed for such as health or
finance. Any other verification artifact, such as low level testing should also be included.
It is advisable to create a matrix that maps all components to their IA requirements
and then to the test method, steps and results. This matrix should be created preceding
PDR and should be maintained through the life-cycle of the project. This is especially true if the project has multiple increments, as updated information will be needed for each increment. When dealing with Sarbanes-Oxley, these
packages will need yearly updates [31].
The final package needs to coincide with
operational assessment when the system is hooking into the final network and is being
handed over to the customer.
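The matrix can be kept as simple tabular records and queried for open items throughout the life-cycle. The component names, requirement identifiers, and test-case label below are invented for illustration.

```python
# One row per (component, requirement); result stays None until verified.
matrix = [
    {"component": "gcs", "requirement": "harden-os", "method": "scan",
     "step": "run baseline scan", "result": "pass"},
    {"component": "gcs", "requirement": "audit-log", "method": "functional test",
     "step": "TC-017", "result": None},
]

def unverified(matrix):
    """Rows still lacking a verification result, for tracking toward C&A."""
    return [row for row in matrix if row["result"] is None]

open_items = unverified(matrix)
```

Requirements that fall outside functional testing would still carry a method in this matrix (for example, inspection or analysis) so that every row eventually records a verification result.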
2.4.1.10 Secondary Readiness Review
In some platforms a secondary readiness review is conducted to ensure the platform is
safe. From an IA standpoint, network security may be applicable during the secondary
readiness review. If any part of the system is considered to affect human safety, the
protection schemes may have to vary slightly. For example, an IPS may be considered a
safety hazard, so an IDS, which allows packets to continue through but provides an alert
to an operator, could be used instead. If, for instance, medical equipment is monitoring a
patient, the data is being reviewed remotely, and it must be handled in real time, a
network glitch could be disastrous.
If IA has been appropriately ingrained into the overall system development lifecycle, it should not be an issue during this secondary review. If, however, it is not done
correctly, the system will be considered non-compliant. In the realm of the DoD, this can
keep a system from operating. In the financial world this can carry hefty criminal and
punitive damages. The author of [48] explains that the costs of SOX non-compliance can
be in the billions of dollars. The cost of non-compliance with respect to HIPAA can be
up to $1.5 million, against any specific doctor/institution [49]. Non-compliance or failure
of certification occurs when the risk is assessed to be at an unacceptable level. This can
come from multiple issues, but the biggest culprits tend to be poor configuration
management, a failure to patch, missing boundary defense or a failure to adequately
harden the system.
2.4.1.11 System Verification Review (SVR)
During the System Verification Review (SVR), the matrix mapping components to their
compliance status should be included, as should patching activities. Where necessary,
hardening should continue as well. Scans are only acceptable within a certain time
frame, depending on the customer. This means that the IA team must continue routine
scanning of the system. The customer may require witnessed testing. Arrangements
should be made to accomplish this in a timely manner.
2.4.1.12 System Hand Off or Initial Operating Capability (IOC)
At Initial Operating Capability (IOC), or when the system is being handed off, the
final compliance documentation should be submitted in order to obtain an approval. If
the security team has been appropriately integrated, and the risks against the system have
been mitigated, then being judged in compliance is possible. If there is a problem with
the security posture non-compliance will be assessed. This is significant and everything
possible should be done to avoid the issuance of non-compliance. It is also crucial that
the security team has worked closely with the customer to ensure the approvers have all
the information necessary. The system may still receive an approval, but it could be
delayed, resulting in a schedule slip. If there are multiple increments within a project, the
IA work will be cyclical. As there will be a design, implementation and test phase for
each increment, it will be necessary to plan for IA work at each phase.
2.4.2 Encompassing Methodology
Requirements decomposition is detailed in Chapter 3 and the probability of
compliance is covered in Chapter 9. Requirements are the contractual vehicle between
contractor and customer, as well as contractor and supplier. Requirements drive design
and so should be crafted carefully to reduce any confusion. This can be difficult when
working with the vast number of requirements that come from needing to meet regulatory
compliance.
2.4.2.1 Design Phase
Each component in the system must incorporate IA requirements, depending on how
it is situated in the system. In order to do this, a basic threat assessment must be performed.
The IA team must work together and with the IPTs to create an acceptable design.
The security architect must create a comprehensive security architecture. This must
incorporate all elements of the security design including: crypto, key management,
sanitization, data at rest, data in transit, etc. The IA members embedded in each of the
IPTs must work with the architect and IPT to determine an appropriate design that
provides for IA mitigations. Regular meetings with the IPT can be useful. Those
embedded into the IPT must understand the purpose of the component, its connections
and its vulnerabilities.
The IA team member must then work with the security
requirements engineer and architect to ensure that the appropriate requirements and
mitigations are in the design. IA is new to many engineers, so in working with the IPT it
is helpful to provide a trade study on the protections of the system. A cost analysis should be part of
the trade study, to ensure that the mitigations are within the budget for that component
and adequately protect the component.
It is also important to ensure that the systems
engineering and networking teams understand the flows of the system that may be
affected by IA mitigations.
The architect should work with the customer to request a threat assessment. A threat
assessment provides an idea of what might attack the system. The security architecture
will be a living document that the security team works with to ensure that the risk of the
system is brought to an acceptable level.
2.4.2.2 Implementation Phase
During implementation, the IA team must maintain continuing conversations with the
IPTs. Although the design may be firm, elements very often change during
implementation. It is necessary that the IA team be made aware of any changes, to ensure that
they don’t affect the security posture of the system.
Each of the IPT security engineers should work with the responsible engineers during
implementation to ensure that the IA requirements in the design are actually
implemented. This is important, because in many cases the original design may have to
be modified during implementation. Training the engineers around the team on what the
IA team is looking for will help. Inculcating everyone on the team with a degree of IA
awareness will allow the IPTs to make better decisions and will provide the IA team extra
sets of eyes on the design. This can reduce the amount of rework necessary due to
mitigation flaws.
Data flows including ports, protocols and services must be created to ensure that only
necessary ones are enabled. This should also include quality of service information for
each flow. This should be created by the network team, although the IA team should be
involved.
Some flows may compromise the security posture of the system. This
information also provides a good understanding of how firewalls and intrusion detection
should be configured. The system should deny by default any port, protocol or service
that is not specified in the PPS. The network security engineer should be the point of
contact for the IA team for this artifact.
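The deny-by-default posture against the PPS can be illustrated with a small sketch. The flows listed here are invented examples; a real system would enforce this in its firewall and intrusion detection configuration rather than in application code.

```python
# Hypothetical PPS whitelist: only flows explicitly documented in the PPS
# artifact are permitted; everything else is denied by default.
ALLOWED_FLOWS = {
    ("tcp", 443),   # example: HTTPS to a monitoring console
    ("udp", 123),   # example: NTP for audit-log time stamps
}

def flow_permitted(protocol: str, port: int) -> bool:
    """Deny by default: a flow passes only if the PPS explicitly lists it."""
    return (protocol.lower(), port) in ALLOWED_FLOWS
```

The key property is that the permitted set is closed: adding a new service requires updating the PPS artifact first, which keeps the documentation and the enforced configuration in agreement.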
The software security engineer should ensure that all software being written
minimizes software vulnerabilities. The software team should incorporate a secure
coding standard and software assurance plan into their software process, as discussed in
Chapter 6.
2.4.2.3 Test Phase
The IA team must at a minimum review all the test plan sections on IA to ensure that
the requirements are being correctly tested. The IA team must also ensure that any IA
requirements that cannot be met are documented and justified. As
today’s schedules are severely condensed, it is most likely that due to cost and schedule,
certain components will not be able to meet all of their IA requirements. Ideally the IA
team should be involved in the creation of all IA test documents. Working with each
individual responsible engineer (RE) is necessary to ensure that the testing of the IA
requirements is done appropriately. Although requirement language may be similar,
depending on how the requirement applies to a particular box, the test may be different.
For example, auditing on a tactical box may look quite different from auditing on an
enterprise-level server, even though both have a similar requirement. In some cases, it is useful to
provide a test template to all the IPTs with a generic understanding of how to test the
requirements. The IA engineers embedded in each IPT should then sit with each RE to
go over the nuances of how to test their particular requirements. Doing this during the
system requirements phase is a good idea, although modifications may be necessary at
this point of the life-cycle.
Sys I and Sys II should test most of the IA requirements. Each component should
come with a letter of volatility and sanitization method. These documents should describe
the memory types included in a particular component and the means of removing
information from the memory if necessary. There are some components that cannot be
sanitized, only destroyed. This must be documented. Hardening should be applied
throughout the test life-cycle. In some cases, customer hardening requirements may not
be written for a particular component. When this occurs, the security team should
follow the hardening advice provided by the manufacturer. This can be difficult when
a stable system configuration is needed for test. Defining an approach for this work
up front will reduce some of the difficulty.
In order to harden the system appropriately, it is easiest to start with the use of an
automated scanning tool.
Each sector has a preferred tool set for this. Tool sets
change, and so the team must keep abreast of any changes to theirs. The IA team
should have access to necessary tools such as: a secure coding standard, a software
assurance plan, a static code analysis tool, SCAP tools (or Gold Disk), security technical
implementation guides (STIGs), NMAP, Nessus, HITECH checklists, ControlPanelGRC,
eEye Retina, Visio Pro, Word, Excel, PowerPoint, and a fuzzing framework.
Sys III should test a handful of IA requirements and synthesize the results from Sys I,
II, and III to provide verification of all IA requirements.
Sys IV is special testing. Typically this phase will test crypto management and
zeroization of key material. It will also test centralized sanitization of private data.
In Sys V, the customer takes control of the system and performs their tests. The
IA team should participate when it is possible to do so. In some cases the customer
may perform different tests to verify compliance with an IA requirement. In
non-enterprise systems, where IA requirements are interpreted, it is especially important
to work closely with the customer test team to ensure they understand the reasoning and
justification of IA decisions. This is also when third-party auditors may become involved,
such as for SOX-compliant systems, where the SEC needs assurance of compliance
before the customer can actually begin using the system with real data.
2.4.2.4 Working with DSS
At defense contracting facilities, the Defense Security Service (DSS) is responsible for
the security of classified data and material. At a contractor site, an Information System
Security Manager (ISSM) is responsible for interfacing with DSS to process paperwork
and ensure the security of the facility meets National Industrial Security Program
Operating Manual (NISPOM) requirements [50].
In this situation, the IA team is not responsible for the security of facilities. This
means that it is necessary to work with those accrediting the facilities, especially if the
facility happens to be a lab. During the testing phase of a project, labs are typically built
in order to perform integration testing. Although hardening a system should be the same
across the board, in practice it is not. There are differences between the STIGs and the NISPOM
hardening required by DSS.
In some cases a risk acceptance letter may be written to
provide either guidance or relief on how to deal with the differences. This is because a
project cannot afford to test to one set of configurations and then change them when
done.
The risk acceptance letter allows the government customer to take on the risk
associated with a piece of equipment. For example, a tactical piece of equipment should
be associated with the risk acceptance letter, as it cannot be hardened in accordance with
the NISPOM.
The decision on how to proceed with a piece of equipment requires that the
IPT coordinate with the ISSMs.
One of the difficulties is providing information to the ISSMs early enough to
minimize schedule impact due to changes in the lab. When a new piece of software or
hardware goes into the lab, it must be first approved. Each piece of equipment should be
accompanied by a letter of volatility and sanitization procedure. These will be of use
both in getting the equipment into the lab, and as evidence for IA artifacts.
The letter of volatility should contain a listing of all types of memory.
The
sanitization procedure should explain how to sanitize each piece of memory in the
component. DSS will require sanitization procedures; however, each project may have
its own specific sanitization requirements.
2.4.2.5 Production Phase
During production it is critical that the security components of the system are not
compromised. For most regulatory constrained systems, it is necessary to run scans on
each system produced, to ensure that configurations are correct. It is also important
that the integrity of the software is maintained as it is being loaded. Software
should come from a version control system and should be encrypted for confidentiality.
A hash or checksum can also be used to verify the integrity of the software, depending on
the level of integrity necessary. For most regulatory constrained systems, it is
necessary to run assessments against the system on at least a yearly basis [31]; some
regulations only define these audits as periodic [41]. The outcome of these assessments is paramount to
maintaining the ability to operate. In the financial realm, as CEOs and CFOs are
personally held responsible for poor outcomes, during the production phase and into the
maintenance phase, the security posture can have monumental impact either for good or
bad.
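The hash-based integrity check mentioned above might look like the following sketch, assuming the expected digest is recorded alongside the release in the version control system. The function names are illustrative only.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a software image, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

def verify_load(path: str, expected_digest: str) -> bool:
    """Refuse the load if the image does not match the recorded digest."""
    return sha256_of(path) == expected_digest
```

A plain checksum detects accidental corruption only; a cryptographic hash such as SHA-256 (or a digital signature over it) is needed when tampering is part of the threat model.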
2.4.2.6 Maintenance Phase
Patch management is a critical part of maintaining the security posture of the system.
In order to handle IA in the maintenance phase, developers of the system must ensure
that maintenance considerations are integrated during the design phase. If the contractor is on contract for
maintenance, it is necessary that patches be up to date. If patches are not maintained, the
system risks non-compliance.
In some systems, certain types of information are necessary for maintenance of the
system. In some cases maintainers are not allowed access to parts of the system, and so
the designers must ensure that necessary information can be easily accessed. A cross
domain solution provides the ability to extract information at a lower protection level out
of a system protected at a higher level.
Such solutions can take years to achieve
certification. If this is part of the design, the needed data must be designated as early as
possible in the life cycle. This must be designed down to the message level in a
document such as an interface design document.
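Designating releasable data down to the message level can be sketched as a simple field filter. The field names below are invented for illustration; in practice they would come from the interface design document.

```python
# Hypothetical set of fields designated releasable to the lower domain,
# as would be specified in the interface design document.
RELEASABLE_FIELDS = {"engine_hours", "fault_code"}

def downgrade(message: dict) -> dict:
    """Pass only fields explicitly designated for the lower protection level."""
    return {k: v for k, v in message.items() if k in RELEASABLE_FIELDS}
```

Because the releasable set is enumerated explicitly, any field added to a message later is withheld until it is deliberately designated, which is the behavior a cross domain certification expects.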
As in production, throughout the maintenance phase of a system, the security posture
must be assessed regularly and any issues found must be remediated. The cost of
non-compliance can cripple a company, especially a small one, such as a small
doctor’s office.
2.4.2.7 Disposal
Although the customer is generally responsible for the system at this phase, there are
some times when this phase falls to the contractor. The sanitization of equipment is one
of the biggest IA chores related to disposal.
The sanitization procedures and any
documentation relating to memory status should be maintained through the life of the
system, as these will provide the basis for system sanitization during disposal. Some items
may be useful after system disposal. These items can only be reused if they can be
sanitized. For example, if a computer contains health records of patients, the hard drive
would have to be scrubbed if the computer were to be reused. The safest way to handle
the computer would be to destroy the hard drive and replace it.
2.4.2.8 Conclusion
I have provided a means of including security throughout the whole of the systems
engineering life-cycle. This is necessary in order to create and maintain the security
posture of a system, especially for regulatory constrained systems. If security is
disregarded at any phase of a project, there is the possibility of non-compliance, which
can result in fines and in some cases criminal charges. Meeting regulatory compliance is
possible, if using a systematic approach such as I have detailed in this chapter.
Chapter 3
Requirement Writing
There are many thoughts on the best way to write requirements. The main theme is
that the requirements must not be vague and must be testable [14]. It is no different in the
realm of information assurance (IA), but there are nuances in security that can make this
an especially daunting task. This is particularly true when working on a Department of
Defense (DoD) program because stringent protection of information is critical.
Proposals for DoD programs are often bid to win. This does not mean “low-balling”
the estimate, but finding a way to meet the government’s need at the lowest cost possible
while assuring the system to an acceptable level of risk. Proposals can also occur on a
very tight schedule where time for requirement decomposition is not adequately
provided. In other cases, the misunderstanding of cost estimation is a key reason why IA
is not bid or contracted out correctly. Unlike other aspects of systems engineering [14]
the concept of building IA in from the beginning in DoD programs is still relatively new,
and so it is harder to estimate cost. When a prime contractor bids a project, they are
typically required, as part of the proposal, to share some of the work with smaller
companies. This often comes in the form of using the smaller companies as
suppliers.
Government contracting is regulated by the Federal Acquisition Regulation (FAR), which
has different rules than commercial contracting. Contracts that a government contractor
executes for the government can vary greatly from those with their commercial suppliers.
The government typically provides the prime contractor with a high level set of
requirements that must be met. Modern engineering process and change management
allow for these requirements to be re-negotiated between the prime contractor and the
government. However, when the prime contractor goes to work with suppliers,
requirements flowed to the supplier at the time of subcontract are written as a contractual
obligation. The prime contractor contains the program execution risk by ensuring that the
supplier requirements do not cause problems for the program. This means that engineers
who write the requirements need to consider both the technical and the legal implications
of the text that goes into the supplier requirements. Whether it is a contractor team or
government team writing the requirement, IA requirements and the necessary
deliverables must be clear, understood and agreed upon.
In the typical procurement cycle, requirements decomposition would occur at the
beginning of a program and would be complete before design and development begin.
With the use of concurrent engineering and condensed schedules, there is higher
incidence of requirements volatility and rework. Therefore, it is necessary to have a
concurrent process to refine the IA requirements. This becomes difficult when working
with suppliers who are already on contract, especially if it is a cost plus contract. It is
expensive and time consuming to change a supplier’s contract midstream. In a cost plus
contract any change made to the contract will incur a fee. Increased cost and rework
must be taken into account when working through the IA requirements.
In working with supplier requirements, the requirement needs to function as part of a
binding obligation, but it must also meet the functional need. It is essential that the
engineer be able to understand what is expected when reading each requirement and the
tester must be able to design and execute a test to verify that each requirement has been
met, as well. If a requirement is too vague, then this cannot be done. If a supplier team
does not understand the requirement, or the requirement can be too broadly interpreted
for implementation, the contractor cannot be assured that they are receiving what is
expected. The intent of the requirement may make it onto paper, but the language can
make contracts difficult to execute.
3.1 Requirements Engineering and IA
Requirements engineering is a challenging field. In their discussions of requirements
engineering difficulties, the authors of [51] describe some of the difficulties in both
general requirements engineering and, more specifically, requirements engineering with
respect to software engineering. It is pointed out that requirements and software reside in
two different spaces. Requirements reside in the problem space, and software resides in
the solution space. Information assurance is also in the solution space, but has the added
complexity of being in a non-technical space. It is not always possible to meet an IA
requirement through a technical solution.
As stated in [51], “Requirements analysts start with ill-defined, and often conflicting,
ideas of what the proposed system is to do, and must progress towards a single, detailed,
technical specification of the system.” This certainly defines the beginning state of IA
requirements decomposition in the world of DoD. The authors of [51] also bring to light
the necessity of refining security requirements.
Laying out clear requirements allows for accurate scheduling, which leads to better
cost management. Value for cost is the bottom line. If an integrated product team agrees
to an IA requirement, but doesn’t fully understand its impacts, the team will not schedule
adequate time for the implementation. This can lead to one of two things: either a
schedule slip to include the IA implementation or a less secure system because necessary
IA functionality is not included. In many cases, award fees are directly tied to contractor
performance parameters such as requirements volatility and completion. For a contractor
to achieve a high award fee, performance is crucial. The contractor cannot meet the key
performance parameters if the requirements are not clear enough.
The security posture of a system is the end product of information system security
engineering. If the necessary mitigations to be implemented are not understood, because
of poor requirements, the system security posture will be lacking. This could potentially
lead to a system not having an acceptable level of risk. Risk is the combination of threat
and vulnerability. Risk mitigation, such that risk is reduced to an acceptable level, is the
goal of security engineering. Risk assessments look at residual vulnerabilities with
respect to threats against the system. Relevant risk assessments cannot take place if the
contractor claims that IA requirements are being met, but there isn’t an understanding of
where mitigations must be implemented. The root of the issue is directly tied to poor
requirements decomposition. Without clear requirements that are written so that they are
contractually binding and executable, a clear risk assessment cannot be done.
The other difficulty when developing information assurance requirements is that they
are there to support a component, not necessarily dictate its main functionality. This
means that a security engineer must have an understanding of how the component is to
function, before applicable requirements can be determined.
For example, if a
component on the system is an unmodified commercial-off-the-shelf (COTS) product,
some of the typical IA requirements must be reinterpreted to fit. An IA requirement such
as limiting the number of software vulnerabilities during development may not be
feasible. Instead the engineer must write language into the supplier’s statement of work
that provides this reinterpretation.
This might include a statement that requires
documentation and other evidence to prove that software is maintained under version
control and developers have limited access. The prime contractor could also require that
the supplier run the source code through a static code analysis tool such as Fortify 360™
and provide the results. This can be difficult when the software is proprietary or has
limited data rights.
Developing a set of well-defined security requirements is a difficult task, but not an
impossible one. In working with supplier teams or even internally, a contracting team
must choose the right approach to create a set of clear, executable security requirements.
This chapter presents a potential approach to meet this goal.
3.2 Potential Solutions
In working with a set of IA requirements, such as the DoDI 8500.2 [5], a system
security engineer must take abstract requirements and make them into a set of concrete
“shall” statements. Depending on the type of component that a high level requirement
must apply to, the high level requirement should be reinterpreted to fit each individual
component. It is not sufficient to take a set of IA requirements and arbitrarily assign
them to components based upon generic or abstract wording. In some cases, security
engineers are tempted to apply the whole of a security requirement set to a component,
based on the idea that someday they might apply. This is a dangerous approach.
Several of the suggestions described in [51], such as simplifying the problem space by
putting constraints around the environment, would be helpful for security engineering.
This may be more of a challenge to those working on mobile platforms
such as ships or air vehicles.
The author of [52] asserts that levying requirements against a program requires an
understanding of the risks against that program’s system. Unfortunately, accurately
defining the system early enough to perform a total risk assessment is difficult. This
is because system design is a process, and the application of accurate security
requirements follows the design to some degree. High level security requirements
such as the need for boundary defense can be done early on, but more intricate
requirements such as password management on a particular component may not be fully
understood. For example, when laying out the block diagram of the physical security
architecture, the placement of firewalls and intrusion detection emerges as part of the
process. When inserting a box that will be purchased as a COTS item, however, it is not
immediately apparent from the initial block diagram whether or not the piece of
equipment will interact with the system in such a way as to require a password for access.
When concurrent engineering is unavoidable, it is common to be in the process of
decomposing requirements while the integrated product teams (IPT) are beginning to
design and develop the system implementation. It is critical to perform the
initial breakdown of the system as suggested in [52]; however, it is important to go one
step further to the component level.
When developing security requirements, it is necessary to go down to the component
level. If the requirements are not decomposed to the component level, it is likely that the
engineers will misunderstand where the security requirements apply. For example, if
there are five components in a segment and a security requirement only needs to apply to
two of them, it is ambiguous for the engineer reviewing the requirements. Such
ambiguity leads to unnecessary rework later.
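Component-level allocation can be made explicit with a simple structure. The sketch below is illustrative; the component names and the control are hypothetical.

```python
# Hypothetical component-level allocation of a single security control.
# Segment-level allocation ("applies to the segment") leaves applicability
# ambiguous; recording it per component removes the guesswork.
ALLOCATION = {
    "individual identification and authentication": {
        "gateway": True,
        "mission server": True,
        "display": False,        # no interactive login on this component
        "printer": False,
        "tactical radio": False,
    },
}

def applicable_components(control: str) -> list:
    """List the components to which a control actually applies."""
    return sorted(c for c, applies in ALLOCATION[control].items() if applies)
```

Recording a per-component True/False, with a rationale for each False, gives the reviewing engineer an unambiguous answer and provides the justification evidence the accreditation package will later need.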
Another part of [51] refers to the need to include stakeholders.
During IA
requirements decomposition it is necessary to include the responsible engineer for each
component as a stakeholder. On a large program, an engineer experienced in security is
not necessarily embedded into each integrated product team. This means information
assurance requirements must be broken down into discrete pieces that engineers
of varying backgrounds can understand. On the other hand, security engineers do not
necessarily have the background to understand each component in the system. This is
especially true when dealing with complex systems such as mobile platforms, e.g., a ship
or plane. It is essential that the security team and responsible engineers work together
and are both stakeholders with respect to security requirement decomposition.
It is not adequate to just avoid known vulnerabilities or defend against a specific
threat. Often, requirements must be defined before a threat assessment is complete or
vulnerabilities are known. This chapter proposes that the ability to derive correct
requirements with or without this type of information is crucial. There are three potential
methods to define information assurance requirements. These have to be written such
that they are binding for suppliers as well.
3.2.1 Method 1
Apply all security requirements from the high level requirement set and use terms
such as “as applicable.” An example of this would be the application of the DoDI 8500.2
[5] requirements for a Mission Assurance Category 1, classified system to all components
in the system. In some cases this works in supplier specs, because the security engineer
can ensure that the requirement makes it into the specification. Although it is bad
practice to add in a requirement, “just in case,” it is sometimes necessary. In the realm of
IA, contracts are often agreed upon before enough is understood about how a
component will affect the security posture of the system. This leads to a need for open
ended requirements: a security engineer needs to ensure that components will
comply with necessary IA controls but won’t be able to determine
applicability until later in the program.
The downside is that this is typically not strong enough language for a contract and is
not testable. The use of open ended terms can lead to long disputes over whether the
requirement is applicable or not. If the supplier feels it isn’t applicable but the security
engineer disagrees, there may be nothing the security engineer can do to enforce the
requirement.
The requirement isn’t binding. Many suppliers charge based on the
requirement set, so this means that the contracting team will be charged for requirements
that will never be met, because they are unnecessary.
One example of this is a requirement that passwords be at least 8 characters long on a
box that has no passwords associated with it. The supplier team will have to find some
way of verifying that they meet the requirement in order to fulfill the contract. The
supplier team will most likely record it as “not applicable” for their verification, but
the contractor will still be charged for the time it took the supplier team to reach that
conclusion. The supplier team may also come to the conclusion much later on in the
development cycle, and so may even charge the contracting team for having to work with
the requirement. This can lead to unnecessary costs for the contracting team.
Unless it is completely qualified, this type of wording should only be used
internally. Even if it is used internally, these types of requirements should be revisited
when more information about the component is known. Once better understood, the
requirement should be revised to become more concrete. This needs to be done as early in
the lifecycle as possible to manage requirements volatility.
One way of qualifying these types of requirements is to have a contract letter that
clearly states the intent of applicability. The contract letter should direct the application
of a requirement to one or more specific components. This can alleviate some of the
difficulty mentioned above.
3.2.2 Method 2
Security engineers must work closely with each responsible engineer to determine the necessary requirements, then apply the closest subset of requirements, understanding that some may have to be removed or added at a later date. When dealing with a supplier, any change to the supplier contract can cause the supplier to ask for additional funding or time. If the component is well understood and won't be modified, this is the best approach.
In this method it is possible to make the requirements clear and testable. The security engineer has to ensure that the requirement is also worded such that it is contractually binding when working with suppliers, which requires the engineer to be aware of every word in the requirement. In some cases it may be useful to have legal and contract experts review the requirements to ensure that they can be enforced.
3.2.3 Method 3
Apply all controls and remove unnecessary ones at a later date, once the component is better understood. This is one of the most popular methods, but it can lead to confusion. It can also be costly, as any change to a contract can result in fees, and removing large numbers of requirements at a later date can generate large costs. Internally, this approach also leads to requirements volatility, which can disrupt the design and development processes and eventually impact schedule.
Suppliers should be wary of accepting requirements that they do not understand or do not know how to test. IA requirements can be convoluted and should be discussed with the prime before they are accepted. Verification of requirements is tied directly to payment; a requirement that is not applicable, and therefore not testable, will lead to contract issues such as payment refusal.
This method is often employed when a schedule does not include sufficient time for requirement decomposition. The security engineers take the controls that apply to the system as a whole and apply them to every component in the system. This is not the most efficient means from an overall lifecycle perspective, but it is attractive to engineers who must complete a decomposition in an unreasonable amount of time. Providing a schedule that allows an appropriate amount of time for requirements decomposition can go a long way toward preventing this approach from being used.
This approach also occurs when engineers not versed in requirements decomposition are asked to perform the task. Security engineers are not typically requirements decomposition experts, so it is wise to have the security expert work with a requirements expert to derive the correct decomposition. This pairing allows for a preferred approach to requirements decomposition.
Regardless of who decomposes the security requirements or what method is used, a
peer review system should be in place to ensure that requirements are reviewed for
correctness, clarity and testability. The peer review process should include an IA expert, a requirements expert and the responsible engineer for the component. Any agreements
or changes that occur from this process should be documented.
3.3 Preferred Approach
Combinations of Methods 1 and 2 can be used to provide clear, concise requirements that are contractually binding on a supplier. Done correctly, this is one way to decrease extraneous costs due to contract changes. Providing clear IA requirements early on also helps ensure that mitigations are built into a system rather than "bolted on" or re-engineered at a later date. In many cases, if the IA requirements are understood at the beginning, IPTs can integrate the necessary mitigations as part of the normal functionality of the system. In my experience, IA requirements are typically ignored if they are not well understood, or if they are broad enough to circumvent implementation.
This chapter proposes that the construct of a system's network greatly impacts the IA mitigations necessary on a particular component. In my experience, defense in depth is the ultimate goal, but it is sometimes necessary, due to cost considerations, to change a particular mitigation. Depending on how the requirements are written, such changes can be complicated when dealing with a supplier. Also, a change to a component that is being modified can have unforeseen IA impacts, meaning that an IA requirement originally deemed unnecessary may suddenly become necessary. For example, an IA control in the DoDI 8500.2 (IAIA-2) [5] requires that default usernames and passwords be removed or changed.
If a component originally had no passwords, but due to a design change now requires a password, a new IA control applies. If support for the security requirement was not originally in the supplier's specification, there will be a schedule and cost impact to have it added. In such cases, there are three possible paths. The first is to pay the cost and assure the intended security posture. The second is to find another way to mitigate the situation, such as restricting access to the component. The third is to assume the security risk created by the change. This is typically the least costly solution for the near term, but unintended consequences often emerge later in such circumstances, such as leaving out a small mitigation only to find that it resulted in a backdoor to the system.
It is necessary for engineers to "think like lawyers" and assure proper legal and technical language for suppliers when developing information assurance requirements. This means that every word and phrase in a requirement must be scrutinized for alternate meanings. Open-ended terms should be avoided. It may be worth the time to have a couple of engineers from other fields read through the requirement and provide feedback on their interpretation. If the readers reach different interpretations, the requirement is not clear enough. If it can be applied multiple ways, this must be addressed by adding context to the requirement.
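The scrutiny described above can be partly mechanized. The following is a hypothetical sketch of a simple requirement "linter" that flags open-ended terms before wording reaches a supplier contract; the term list and sample requirements are illustrative assumptions, not a standard vocabulary.

```python
# Illustrative only: the ambiguous-term list is an invented starting point,
# not an authoritative vocabulary. Naive substring matching is used, so the
# output is a prompt for human review, never a verdict.
AMBIGUOUS_TERMS = [
    "as appropriate", "as needed", "adequate", "sufficient",
    "where applicable", "reasonable", "and/or",
]

def flag_open_ended(requirement: str) -> list[str]:
    """Return any open-ended terms found in a requirement statement."""
    text = requirement.lower()
    return [term for term in AMBIGUOUS_TERMS if term in text]

# Hypothetical requirement statements for demonstration.
reqs = [
    "The segment shall provide adequate boundary defense as appropriate.",
    "The segment shall provide one or more firewalls at the boundary of the system.",
]
for r in reqs:
    hits = flag_open_ended(r)
    print("NEEDS REWORK" if hits else "OK", hits, "-", r)
```

A flagged requirement would then be rewritten with the missing context added, in the manner shown next, before legal and contract experts review the final wording.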
Instead of decomposing a boundary defense requirement as "The segment shall provide boundary defense," it should be defined as discrete, non-negotiable items such as "The segment shall provide one or more firewalls at the boundary of the system. The segment shall be configured such that all traffic must proceed through a firewall before entering the system. The segment shall provide one or more intrusion detection systems at the boundary of the system. The segment shall be configured such that all traffic must proceed through an intrusion detection system before entering the system."
The first decomposition allows for a great deal of room in interpretation. A supplier
may argue that boundary defense is only a firewall. This also leads to disagreement on
how to test the requirement. The second set of decomposed requirements provides
unambiguous meaning and application. When requirements are written in a discrete,
unambiguous manner, so that there cannot be any secondary meaning, the engineer has
begun to think like a lawyer.
Several methods were defined in this chapter to address the challenges in managing security requirements. Thinking like a lawyer, working as a team and following a peer review process can aid in writing requirements that are usable by internal teams and suppliers alike. Ultimately, IA requirements must be clear and testable. When developing IA requirements for suppliers, it is even more important that the requirements properly function as a contractually binding obligation. Defining IA requirements is a challenging task, but it can be accomplished. The methods proposed in this chapter can ameliorate some of the difficulties in decomposing security requirements for DoD programs.
Chapter 4
Security Architectures
Security architecture development is key in determining whether a system will have an adequate security posture. A good security architecture allows system security engineers to understand what mitigations are necessary and how they need to be placed into a system. It also allows for an understanding of information flows between elements of the system and flows out of the system. Without this type of information it is not possible to ensure that the risk level of a system has been reduced to an adequate level, which is necessary for systems constrained by regulatory bodies. There are many taxonomies, frameworks and methodologies for creating security architectures, but there is no required standard in industry, whether in the DoD, health or finance sector. The lack of a standard architecture approach prevents security architects from having a common language, and without a systematic approach there is no guarantee that an architect will capture all the information necessary to meet compliance. I will compare the current frameworks and present a systematic approach to handling security architectures, with a focus on regulatory compliance.
There are several thoughts on approaching security architectures. The first approach, and the most common, is to add security into a normal enterprise-level architecture; in the DoD, use of the Department of Defense Architecture Framework (DoDAF) [22] is required. The second is to have a separate system security architecture, such as described in [53]. Many of the papers on security architectures, such as [34], define a specific architecture for a specific situation and are not meant as a pattern on which to base the creation of a system security architecture. These approaches are difficult to work with in designing non-standard systems. Designing security architectures for a financial institution or medical equipment is quite different from building security architectures for an office environment.
Even though the requirement sets are identical in some cases, the interpretation and resulting implementation may look radically different. There are a myriad of documents that cover security requirements and frameworks, depending on the particular sector for which a system is being built; requirement sets include the DoD Instruction 8500.2 [5], the Information Assurance Technical Framework (IATF) [54], HITECH [41], SAS 70 [35], COBIT [39], and NIST SP 800-37 [6]. Each system will be designed according to a specific set of security requirements, but the requirement sets are the same whether it is an enterprise-type system or a non-standard system. The security engineer must understand the purpose of the requirement and the underlying security principles well enough to translate them into a security architecture that will protect non-enterprise style systems.
4.1.1 Contributions
In this chapter, we make the following contributions: (1) We compare various methodologies commonly used to create security architectures and provide suggestions on how to apply them to non-enterprise systems that are constrained by regulations. Many of the existing methodologies provide a solid basis for beginning an architecture, although others are sorely lacking. Choosing which methodology to follow is crucial, as it will be the basis on which a security architect will potentially have to work for many years; the wrong framework can lead to great headaches for engineers and to poor security postures. (2) We propose a new security architecture approach that allows for an iterative design driven by customer collaboration and a growing understanding of the system as the concepts develop, allows those following it to ensure that they meet regulatory compliance, and eases issues during development as well as during the critical maintenance phase.
4.2 Background
There are multiple frameworks and taxonomies for the creation of system architectures [55]. Much of the research attempts to give a specific architecture to solve a problem, e.g., transient trust [56], email [57] or public computers with trusted end points [58]. One interesting paper is on architectures for fault tolerant systems [59], because it may be possible to abstract some of its principles and apply them to systems that cannot tolerate faults. Reference [57] discusses security architectures for networks, although it is primarily focused on what should be taught to computer science students so they can better understand security for distributed systems. Some of these works are specific to system security architectures, and others are system oriented with security views.
The Defense Architecture Framework [22] is a system architecture framework with a security view. In contrast are the concepts discussed in [26]: the authors describe various technologies that could be added to an architecture to make it secure, but do not truly provide an actual architecture. The authors of [60] describe a multi-layered approach for modeling the security architecture of a complex system. Multi-level security is one aspect of some architectures, but it is not the only focus of a security architecture [61]. In [21] the authors describe various architecture frameworks and methods that are available, trying to define what is meant by security architecture. System security architecture frameworks are described in some of the papers, such as Sherwood's approach in [18], furthered in [53]. Some have even tried to abstract it to
an architecture pattern, as described in [62]. There are many ideas associated with security architectures, but in any case the system security architecture must be totally aligned with the system architecture. This can be especially difficult when designing a non-enterprise type system, or one that must meet multiple regulatory documents.
The security architecture is a detailed explanation of how the requirements need to fit within the system. This needs to be done in such a way as to ensure that all vulnerabilities are covered, and not just by one layer of defense, hence the heavy use of the term "defense in depth". In some cases, the need to meet compliance may even outweigh reducing all the vulnerabilities. Many methods have been suggested [27], [63], [18], [57], [55], [22], [29], [64], [65]. Each has a similar goal: build security into the system as a part of the architecture. Often security is instead bolted onto the end of system development, or added as an upgrade after the system is completed. Because much of the finance and health sectors focus on compliance assessment as opposed to development, these systems can be particularly difficult. Each framework and taxonomy has a different way of representing the security elements of a system. One of the earliest taxonomies developed for building enterprise architectures is the Zachman Framework [64].
The Zachman Framework is used for organizing architectural artifacts. It did not supply a formal methodology or process for collecting the artifacts, so other frameworks were created from this base. The Department of Defense Architecture Framework (DoDAF) was built off of this framework, and SALSA [18] and the Sherwood Applied Business Security Architecture (SABSA) [66] are both built off of the Zachman Framework as well. SABSA is one of the few frameworks developed to focus on the security aspects of a system. Its goal is different from that of the National Institute of Standards and Technology (NIST) Risk Management Framework (RMF). The intent of the RMF is to improve information security, strengthen risk management processes, and encourage reciprocity among federal agencies. The authors of [67] provide a new overarching framework for HIPAA systems, although it does not take into account the whole development life-cycle. COBIT provides a managerial framework for SOX compliance [39]. Each of the frameworks and taxonomies has strengths and weaknesses.
The biggest complaint made in previous research is that security is done in an ad hoc manner [26]. If an overall security picture of the system is not developed in the beginning, technologies and procedures will be thrown at the system in hopes that something will stick, and then in the hope that whatever sticks is good enough. How can you know what is good enough without a strategic understanding of the system, its environment, its operations, its users and the like? Simply, you can't. It is not possible to do a risk assessment of a system if all of the information flows, vulnerabilities, threats and mitigations are not known. The frameworks, taxonomies and methodologies all aim to lay out this information.
4.2.1 DoDAF
The DoD requires the use of DoDAF for functional system architectures. The goal of
the functional architecture is to show how different types of data should move through
the various functions of the system. From this model the system engineers are to derive
the interface requirements. This model is also supposed to help define the subsystems
and allocation of requirements.
In looking at the DoDAF, each view takes a system and tries to explain it in a
different fashion from the other views. There are operational views, system views,
technical views, and all views [22]. These describe a system using specific techniques.
The operational views (OV) provide a very high level view. They pictorially depict
how different pieces of the system should communicate. These flows do not show any
component level information.
They show the flows between various organizations,
specifically activities, rules, states, events and interoperability flows [68].
The system views (SV) start to breakdown the flows into system level components.
The flows show services, interfaces and data exchanges. This is also the view that is
supposed to start to break down the system into a physical schema [68].
These views do not always show actual protocols. This means that if one segment is planning on using TCP/IP for a connection and another is planning on UDP, the mismatch will not necessarily be evident in this view. It truly depends on how an organization chooses to use the framework, and this can lead to confusion between segments. Many times these details are left up to the individual designers of each segment; if there is no communication between the designers, then the segments won't communicate.
The technical views (TV) explain the technology standards that should be used in the system, as well as emerging standards. These can be difficult to pin down: a system cannot be on contract for an incomplete or unsigned standard, and in a system where development will take several years to complete, it is unknown at project start which draft documents will become standards. Typically, this means that if a project lasts for any substantial amount of time, the system will not be created using the newest standards, just those that are on contract. From a security standpoint this affects certain protocols and communication types. The TV may require the use of a technology that has known security issues; it is then necessary to ensure that mitigations are in place [68].
The all views (AV) provide summary information and a glossary. These are helpful in defining the language the developers will use to communicate. DoD programs are notorious for speaking in "alphabet soup," and many of the acronyms used on one program are used on a different program with a different meaning. The AV can be one of the most useful tools in the framework for developers: the system can only be created if all the designers, developers and architects can communicate clearly, and the use of a system-wide dictionary can help facilitate this communication [68].
In each of the views, it is possible to add security attributes. However, this does not adequately express all the security considerations, and security issues can easily be lost in the overall enterprise architecture. This is one of the reasons that, for some systems, it becomes important to break the system security architecture out of the enterprise architecture into its own architecture. This must be done carefully, to ensure that the security architecture aligns with the system architecture.
Another aspect that must be considered in upcoming programs is that the DoD is moving from DIACAP to the RMF. This means that a program will potentially be required to create security views in the DoDAF and follow the NIST RMF. Ensuring that these two methods do not disagree with each other, and that inefficiencies are not increased, will be a new task for system security architects.
4.2.2 NIST Risk Management Framework (RMF)
The Risk Management Framework is designed to allow security engineers to understand the risk level associated with a system. It is supposed to work in conjunction with normal systems engineering, and its concepts provide a good basis for DoD, health and finance systems. The RMF, in conjunction with the allocated information assurance controls from [7], provides something similar to the DoDI 8500.2 [5] and DoDI 8510.01 [69]. The wording in [7] is less vague than that in [5] and provides a more granular way of allocating controls. In and of itself it does not provide guidance on what the security architecture should look like, nor what it should contain, similar to the suggestions made in [21]. Its only requirement is that the controls be allocated in such a way as to reduce risk to an acceptable level; it is still up to the security architect to select a method of describing the architecture.
The RMF provides good guidance on boundary determinations, as well as words of caution in dealing with service oriented architectures (SOAs) [46]. It also provides a list of criteria that a system security architect should consider when drawing boundary lines and when assessing the overall risk of a system, along with other information that security engineers would do well to heed. It is also meant to be cyclical: once controls are allocated, security engineers are to continually monitor how well they abate the risk and modify mitigations when necessary. Although this is a noble goal, it becomes more challenging if a security architect has not laid out the information in a way that is clear and understandable. Failure to lay out this information can be ruthlessly detrimental to systems needing to be SOX or HIPAA compliant, as the punitive damages are severe [49], [48].
4.2.3 Sherwood Applied Business Security Architecture (SABSA)
SABSA is a matrix-based taxonomy very similar to the Zachman Framework, but with a security twist, as can be seen in [53]. Each layer of the SABSA matrix is meant to force the security architect to ask questions to ensure they understand all the various attributes of security mitigations. Each layer takes a different view of the system and provides an explanation to a different set of developers and designers. It is meant to provide a basis for conversation between the security team and all the other engineers. Although this model provides a good framework for asking questions, it does not provide enough of a framework for systems that must comply with significant regulatory restraints.
Figure 4-1 SABSA Matrix [66]
For example, in looking at the SABSA matrix, the facility manager's view is primarily focused on backups and handling support for what is supposed to be a fixed facility. Although this is a good start, it needs tweaking for most systems under regulatory constraints. Physical security must also be kept in mind when dealing with both fixed and mobile platforms. Although physical security is discussed in [66], it assumes fixed facilities; in some environments, guns, guards, gates and dogs may not always be available to protect a mobile platform. The issue of mobile versus fixed platforms must not only be part of the concept of operations (CONOPS) and functional/conceptual architecture, but must also be folded into the detailed architectures.
4.2.4 Information Assurance Technical Framework (IATF)
The IATF is very different from the RMF and other frameworks. It is a living
document that contains information on how to use specific security mitigations such as
cross domain solutions and firewalls. There are elements of the document that are not
complete and leave room for new information as technology changes.
The IATF is best used in conjunction with the other frameworks to provide the
security engineer better information on a particular mitigation than is provided in any of
the other frameworks. In and of itself the IATF does not provide a discussion on how to
go about putting a security architecture together, but provides useful information on
various pieces that might be incorporated into an architecture.
4.2.5 Control Objectives for Information and Related Technology (COBIT)
COBIT is a best practice used in the financial industry to meet SOX compliance. First released in 1996, it was meant to bridge the gap between business and information technology needs [39], and it serves as an umbrella framework to integrate other industry best practices. Although quite useful for management and as a roadmap, it may overwhelm new users [70], and it does not necessarily address certain aspects of IT and system development.
4.3 Building the Architecture
In order to ensure that all aspects of an architecture are covered, a systematic approach is necessary. The initial task in creating a security architecture is outlining communication between components within a system and across the system boundaries. This is true whether a system is an enterprise or a non-enterprise system. The security architecture must define which of the flows need protection, such as flows carrying a person's health record. This piece of the architecture will be built upon throughout the security architecture process.
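The flow outline described above can be captured as simple structured data long before formal diagrams exist. The sketch below is illustrative: the component names, data types, and protected-data set are invented examples, not part of any real system.

```python
# Hypothetical sketch: record inter-component flows and mark which carry
# data needing protection (e.g. a health record). All names are invented.
from dataclasses import dataclass

@dataclass
class Flow:
    source: str
    destination: str
    data_type: str
    crosses_boundary: bool  # True if the flow leaves the system boundary

# Assumed categories of data requiring protection, for illustration only.
PROTECTED_DATA = {"health_record", "credentials", "keying_material"}

flows = [
    Flow("remote_sensor", "control_station", "health_record", crosses_boundary=True),
    Flow("control_station", "operator_display", "status", crosses_boundary=False),
]

needs_protection = [f for f in flows if f.data_type in PROTECTED_DATA]
for f in needs_protection:
    print(f"{f.source} -> {f.destination}: {f.data_type} requires protection")
```

A listing like this becomes the seed of the first block diagram, and the `crosses_boundary` flag identifies the flows that later protection-domain analysis must examine first.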
Along with this, the security architect must understand the concept of operation
(CONOPS) and the intended use of each part of the system. This will outline the high
level data flows for the system. There is a distressing tendency for system engineers to
just throw all the security attributes into one function in the functional architecture. This
is especially true if the security engineers are not part of the functional architecture team,
or if security is bolted on at the end.
One of the unique aspects of non-enterprise type systems is that portions of the system may be mobile. If working with a remote medical sensor, the architect must take into account not only the primary control station, but the sensor itself. The system may be connecting through differing networks at various protection levels. If the system is going through unprotected territory, there must be a means of protecting that information, such as cryptography. This understanding comes both from the flow of information and from the CONOPS.
The security architecture needs both functional and physical details, which must meet the operational needs of the mission. These need to be tied together to ensure that the physical manifestation of the system performs the functions that are intended. The details should include such things as key assets, the most critical pieces of information that need to be protected, user interfaces, unsafe environments, etc. Once these are identified, the next step is to figure out how to mitigate or eliminate the associated vulnerabilities, as described in the following approach.
4.3.1 A Modified Approach
Before creating the security architecture it is necessary to understand the requirements of the system. Aside from the information assurance requirements, such as NIST SP 800-53 [7], the concept of operations, data protection needs and data exchanges must be recognized. This information should be translated into a series of block diagrams. Although a functional architecture must exist showing this information, it will necessarily have to overlay on a physical diagram at the end. One thing to note is that as soon as a diagram goes under configuration control, it becomes difficult to maintain.
The first diagram should show internal and external flows of information similar to
the one shown in Figure 4-2. The security architect should ensure that the functional
information matches the physical architecture from a security standpoint.
The
complexity of this view can increase significantly when dealing with multiple levels of
security as opposed to operating at a single security level.
Figure 4-2 System Flows
Figure 4-3 Protection Domains
Once the basic flows are understood, the protection level of each flow should be identified, with notations of where data will cross a protection boundary, as shown in Figure 4-3. This information lets the security architect know where specific solutions such as encryption, cross domain solutions or data guards might need to be employed. Appropriate cryptography is assumed on the wireless links, so they do not need a CDS.
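The boundary-crossing check just described can be expressed mechanically. The sketch below is a hedged illustration: the level ordering and the mitigation names are assumptions made for demonstration, not prescribed policy, and it encodes the text's working assumption that wireless links already carry appropriate cryptography.

```python
# Illustrative only: protection levels and mitigations are invented examples.
LEVELS = {"public": 0, "private": 1, "proprietary": 2}

def mitigation_for(data_level: str, link_level: str, wireless: bool):
    """Suggest a mitigation when data crosses into a lower-protection domain."""
    if LEVELS[data_level] <= LEVELS[link_level]:
        return None  # no protection boundary is crossed
    if wireless:
        # Appropriate cryptography is assumed present on wireless links,
        # so no cross domain solution (CDS) is required there.
        return "cryptography (assumed present on wireless links)"
    return "cross domain solution or data guard"

print(mitigation_for("proprietary", "public", wireless=False))
print(mitigation_for("private", "proprietary", wireless=False))
```

Running each flow from the diagram through a check like this gives the architect a first-cut list of where encryption, guards, or CDS candidates belong, to be refined by human judgment.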
As the functional architecture is being completed, a corresponding physical
architecture is typically being created, especially when the project is employing
concurrent engineering.
The physical architecture should be of great interest to the
security architect. As the physical architecture is being designed, it will go through
multiple iterations. As the first physical architecture is laid out, it becomes a base on
which to build the physical security architecture.
One of the primary purposes of the security architecture is to ensure that the security requirements are being met in the system. As subsystems and components are defined, the security engineers should begin listing the security requirements applicable to each item. In the beginning, a spreadsheet can be maintained for each major component; Microsoft Access or another database can also be used.
The first tab should map the IA controls to the parts of the component and to the
expected mitigation, as seen in Figure 4-4. The engineer can include notes and
justification for clarification. The second tab should map the IA control to the
requirements associated with the control, as seen in Figure 4-5. The IA Control is used to
link the mapping and the requirement tabs together.
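The two-tab structure can be sketched as linked records, with the IA control ID acting like a foreign key between the tabs. The control ID, component, and requirement wording below are invented examples for illustration, not entries from any real program.

```python
# Illustrative sketch of the two mapping "tabs". All values are hypothetical.
control_mapping = [  # tab 1: IA control -> component -> expected mitigation
    {"control": "IAIA-2", "component": "router",
     "mitigation": "change default credentials",
     "notes": "defaults documented in vendor manual"},
]

requirement_mapping = [  # tab 2: IA control -> contractual requirement text
    {"control": "IAIA-2",
     "requirement": "The router shall have all default usernames and passwords changed prior to deployment."},
]

def requirements_for(control_id: str) -> list[str]:
    """Follow the control-ID link from tab 1 into tab 2."""
    return [r["requirement"] for r in requirement_mapping
            if r["control"] == control_id]

print(requirements_for("IAIA-2"))
```

Keeping the link machine-followable, whether in a spreadsheet, Access, or a script like this, is what later allows each control to be traced through to its verification evidence.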
Figure 4-4 IA Control Mapping
Figure 4-5 Requirement Mapping
This mapping allows the security engineers to double check that requirement wording has been allocated appropriately. In the example, the security engineer should note the mismatch and the need for a change in the procurement specification wording. This is also an important part of the overall system security engineering method, because not all aspects of the security requirements can be laid out on a physical diagram. Requirements such as configuration management must be met for every component, but are more accurately addressed using language rather than diagrams. Later in the life cycle, the mapping can also be used to link each control to its verification.
Dynamic Object-Oriented Requirements System (DOORS) or a similar requirement
repository under configuration control should be used to maintain the information once it
has reached a baseline [71]. The mappings will change as the security architecture
matures, but in order to begin creating the security views of the system, the security
architect must first understand the basic function and contribution of each subsystem and
component.
There are IA concepts to consider in most systems, regardless of the IA requirement set. These can be defined by the security views I developed, listed in Table 2 Security Views, and will change over time as components are added or changed in the system. Also, there are times when control of a subsystem function may move from one integrated product team (IPT) to another. For example, boundary defense may originally be allocated to both the communications IPT and the ground segment IPT. As the system grows, the network piece may come to be owned solely by the communications IPT, meaning the communications IPT now owns the responsibility for all boundary defense in the system. The requirement allocation would stay the same, and the diagram would stay the same, but the ground segment would claim inheritance for meeting boundary defense. This would then be added back into the requirements mapping discussed earlier.
Table 2 Security Views

Domains and Interconnections
- Interconnections with other enclaves or networks
- Accreditation boundaries
- Should show the protection level of data within the system, such as public, private, proprietary, etc.
- Points of private/public separation

Network Flow
- Boundary Defense (firewalls, IDS/IPS, DMZ chain, etc.)
- Data at Rest (cryptography, zeroization, etc.)
- Data in Transit (cryptography, separate physical line, etc.)
- Network management data (SNMP, etc.)
- Partitioning of User Interfaces and Storage
- Physical or logical separation of users and backend
- VoIP/Phone lines (indicate protection: SIP or other)

Cryptographic Usage
- Certificate Authority
- Key Storage
  o Asymmetric
  o Symmetric
- Interfaces to cryptographic keying instruments, and separate cables for keying if appropriate
- Key loading areas (at the box, or a harness, etc.)
- Key zeroization paths (hardware, software paths)
- Cryptographic type: Type-1, Type-2, etc.
  o Embedded, separate hardware, etc.

Shared Control (for remote systems)
- Outline positive control steps
  o How control is acquired
  o How control is passed
  o Include authentication steps
  o *Non-repudiation of control items are critical and should be indicated

Patch Management
- Indicate which pieces of equipment have software/firmware that will require patching
- Indicate how patches will be downloaded, tested, protected and applied
- *Patches will require regression testing of system elements

Mobile Code
- Indicate pieces of the system that potentially have access to mobile code
- Level of mobile code
- Protections against unwanted mobile code

Audit
- Indicate elements of the system with audit capability
- What is audited (e.g., can only log user, but not time stamp)
- Central audit repository
- Audit process

Backup/Recovery
- Location of recovery software (e.g., backup images in a safe)
- Location of data that has to be backed up
- Location of backup repository
- Timing of backups (weekly, daily, etc.)
- *This provides useful information for the disaster recovery plan

Zeroization/Sanitization
- Define the memory elements in every component (volatile, non-volatile)
- Type of memory
- Classification of data in each memory type
- Sanitization approach for non-volatile memory
- Zeroization approach for volatile memory
- Associated analysis for components that plug into private networks, but are claiming to remain public at shutdown

System State Changes
- Identify when the system is considered private, public (or range of classification)
- Should have a view for the system running, powered off, and in transit (if applicable)
- Status of platform (mobile, fixed, semi-fixed, etc.)

IA Physical Security (typically associated with facilities requirements)
- Fire suppression systems (type and placement)
- Smoke detectors (placement and type)
- Temperature/humidity controls (for humans and equipment)
- Lighting (emergency)
- Power (backup generators, failover period, etc.)
- *Depending on the system, these items may already be in place and a site survey will provide this data

Other Physical Security
- If a new building is being constructed, a separate physical security architecture will have to be created. These items are not generally covered in IA documentation. For reference see [72].

Authentication
- Indicate how/where users will authenticate to the system
  o Single sign-on, role-based access control, group accounts, etc.
- Indicate machine-to-machine authentication
  o Note how the authentication is being achieved
Each view is to be overlaid onto the physical architecture. By layering these views,
using a tool such as Microsoft™ Visio, the defense-in-depth picture can be seen. It also
allows the architect to see where vulnerabilities may still exist. The earlier in the life
cycle of a project that an accurate architecture can be created, the easier it is for security
engineers to develop affordable security mitigations.
In some cases there may be multiple physical architectures. For example, if a local
system and a remote system are being designed, creation of two physical architectures is
necessary.
They must show the interconnections between the two pieces and any
interconnections to outside networks that may be used to allow them to communicate.
Developing security views for both architectures is also essential.
One of the difficulties is ensuring that as the physical architecture changes, the
security architecture changes with it. This is especially true when the network diagrams
and ports, protocols and services are in the process of being defined. These flows must be
captured as part of the network view. They will determine how access control lists and
firewall settings are configured in the final system.
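The flow-to-configuration link above can be sketched in code. The following is an illustrative sketch only, assuming a simplified flow record and a Cisco-like rule syntax; the field names and rule format are invented for illustration, not taken from any real system.

```python
# Hypothetical sketch: deriving firewall ACL entries from captured
# network-flow records. Field names and rule syntax are illustrative.
from dataclasses import dataclass

@dataclass
class Flow:
    src_ip: str
    dst_ip: str
    protocol: str   # "TCP" or "UDP"
    low_port: int
    high_port: int

def to_acl_rule(flow: Flow) -> str:
    """Render one permit rule in a Cisco-like syntax (illustrative only)."""
    if flow.low_port == flow.high_port:
        ports = f"eq {flow.low_port}"
    else:
        ports = f"range {flow.low_port} {flow.high_port}"
    return (f"permit {flow.protocol.lower()} host {flow.src_ip} "
            f"host {flow.dst_ip} {ports}")

flows = [
    Flow("192.168.10.2", "192.168.12.45", "TCP", 22, 22),
    Flow("192.168.10.2", "192.168.12.45", "UDP", 123, 123),
]
for f in flows:
    print(to_acl_rule(f))
# Any flow not captured in the view falls through to an implicit deny-all.
```

The value of maintaining the flows as data is that the firewall configuration can be regenerated whenever the network view changes, keeping the two artifacts consistent.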
Security concepts can somewhat overlap between the security views. For example,
access control could be its own view, although its components are covered in
authentication, domains and interconnections, network flow, shared control and physical
security. It is possible to change the security views if a particular customer wants to see
an attribute highlighted, but as a base I consider the following as necessary.
4.3.1.1 Domains and Interconnections
The sketch of domains and interconnections must show how the system will interact
at a high level with other systems and networks. This shows how interconnected the
system is, whether it is standalone or highly connective. This is important to understand,
as each interconnection has a different effect on the overall security posture. Showing
where the accreditation boundaries are is also important. In HIPAA systems, any business
associates should be noted, as they can present liabilities. Although their individual
systems may not be touched, it is important for the assessments to know that they are part
of the total boundary. The accreditation boundary represents which pieces of a
system/network that the designer or customer is responsible for. As part of this, the
protection levels must be noted. For example, if dealing with a health system, patient
information must be protected and should not be visible outside a specific area of the
system. Other areas of the system may process publicly available information. These
items should be annotated, as well as areas of the system where these two pieces might
touch. For example, if there are two subnets, one that deals with private information and
another with public information, the router that separates the subnets would be designated
as a boundary point. In some systems that deal with classified and unclassified data,
these separation points are referred to as RED/BLACK separation.
4.3.1.2 Network Flow
The network provides rich opportunity both in terms of mitigation and vulnerability.
Boundary defense is the first line of protection for a cyber system. Firewalls, intrusion
detection/prevention and equipment to monitor network traffic should be outlined as part
of the network flow sketch to show where the design intends to handle basic attacks. It’s
not enough just to protect the borders of a system. It must be protected through and
through, which is why understanding how data is handled throughout the system is
necessary.
This leads to the need to outline where data is going to be transiting in the system and
where it is being stored and processed. If data is being held, it should be protected, either
by encryption or in some cases, it may be necessary to remove data once it is processed.
If data is flowing through the system, or into or out of the system, this must be
understood.
The high level sketch provides an understanding of how the data is
processed. As will be discussed later in this chapter, the ports, protocols and services
artifacts should be developed to flesh this information out.
One type of data that will flow through the network is that of network management
data. This information can be diagnostically useful, or even required for the system to
function. From a security standpoint, it can also provide an attacker an advantage if they
can gain access to it. This is why it is necessary to understand how the network
management data will flow in the system.
This management of the system can also take other forms such as database or server
administration.
In order to ensure that the management of the system is not
compromised, it is necessary to partition user interfaces and these backend systems. This
also prevents the normal user from unwittingly wreaking havoc upon the system.
One area that can be forgotten is that of voice data, specifically with respect to voice
over IP (VoIP).
The protection of voice communications can play a key role in
protecting the system, depending on the type of system. This is sometimes done via
encryption.
4.3.1.3 Cryptographic Usage
In using encryption to protect VoIP or other data in a system, there are specific
elements that must be dealt with. One of the most common forms of cryptography as
explained in Chapter 8 is asymmetric cryptography, aka public key encryption (PKE),
which requires a public key infrastructure (PKI). Part of this is the need for a certificate
authority and a place to store certificates. How this is handled in the system should be
outlined in a sketch.
Some forms of cryptography require special cabling or interfaces for being keyed. All
of this type of equipment should be laid out as part of the diagram. In some cases, an
approving authority will require the engineer to outline how the key material will be
loaded and how it can be zeroized when needed. All special crypto requirements should
be accounted for in the final crypto architecture.
4.3.1.4 Shared Control
As systems are becoming more automated, it is sometimes necessary for a system to
be controlled from different locations or operators. In the event that control has to be
handed off from one operator to another, a sketch should be constructed that shows all the
details on this process. This should include how control is acquired, how it is passed,
and, if control is dropped, how it is re-established.
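As a toy model of the hand-over steps such a sketch should capture, the following authenticates a control transfer between two operators. The operator names, token scheme, and shared secret are all invented assumptions; a real positive-control system would use approved cryptography and digital signatures rather than a simple MAC.

```python
# Hypothetical sketch of an authenticated control hand-over, illustrating
# the acquire/authenticate/pass steps the shared-control view outlines.
import hmac
import hashlib

SHARED_SECRET = b"demo-only-secret"   # assumption: pre-placed keying material

def handover_token(from_op: str, to_op: str) -> str:
    """Authenticate a hand-over message in this toy model (a real system
    would use signatures for non-repudiation, not a shared-key MAC)."""
    msg = f"{from_op}->{to_op}".encode()
    return hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()

def verify_handover(from_op: str, to_op: str, token: str) -> bool:
    """The receiving operator checks the token before accepting control."""
    return hmac.compare_digest(handover_token(from_op, to_op), token)

t = handover_token("ground_station_1", "ground_station_2")
assert verify_handover("ground_station_1", "ground_station_2", t)
assert not verify_handover("ground_station_1", "intruder", t)
```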
4.3.1.5 Patch Management
Almost all types of systems will have to have patches applied at some point in the
system's life cycle. If the system is going to maintain its security posture, patches will
have to be applied on a frequent basis. A sketch containing which pieces of equipment
contain commercial software should be put together, so that it is understood which pieces
will need to be addressed. Aside from the sketch, a database containing a listing of all
equipment and its associated software should be maintained. This will also aid in the
patching process.
A procedure should be written that explains the process of downloading, testing and
applying patches. This should also cover the process of regression testing the system after
patches are applied.
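The equipment/software database suggested above can be sketched with a minimal schema. The table layout and example entries are assumptions for illustration only.

```python
# A minimal sketch (assumed schema) of the equipment/software inventory
# database, used to find which components belong in the next patch cycle.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE inventory (
    equipment TEXT, software TEXT, version TEXT, patchable INTEGER)""")
conn.executemany("INSERT INTO inventory VALUES (?, ?, ?, ?)", [
    ("Server1",  "OS build",    "10.1", 1),
    ("Sensor",   "Embedded OS", "2.3",  0),   # no vendor patches released
    ("Firewall", "Firmware",    "7.0",  1),
])

# Which pieces of equipment must be included in the next patch cycle?
rows = conn.execute(
    "SELECT equipment, software FROM inventory WHERE patchable = 1").fetchall()
for equipment, software in rows:
    print(equipment, software)
```

Keeping the inventory queryable means the patch procedure can enumerate affected components mechanically instead of relying on memory.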
4.3.1.6 Mobile Code
Mobile code can be very useful, but also dangerous. Depending on the customer,
certain types of mobile code may not be allowed. Mobile code is code that is brought
over a network and run on a remote system, such as ActiveX. Unallowable mobile code
should be blocked at the boundary of the system, and in some cases at the browser level.
A sketch should be created that shows where mobile code may run and where it should be
blocked.
4.3.1.7 Audit
There are multiple areas of a system that need to be audited. This is especially
critical in finance systems. The sketch should cover all the different types of audit logs:
those associated with network devices (such as the authentication, authorization, and
accounting server), user logins, system audit logs, etc. This could potentially be broken
into multiple sketches, one for networks, one for users, etc. depending on the complexity
of the system.
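The central-repository idea from the audit view can be sketched as a record normalizer. The record fields and source names below are assumptions, not a real system's schema; the point is that sources with limited audit capability still produce a record with defaulted fields.

```python
# Illustrative sketch: normalizing audit records from different log
# sources into one central repository, as the audit view recommends.
import json
from datetime import datetime, timezone

central_repository = []   # stands in for the central audit server

def ingest(source: str, event: str, user: str = "unknown"):
    """Append a normalized audit record; some devices cannot supply every
    field (e.g. a user but no local timestamp), so fields default."""
    central_repository.append({
        "source": source,
        "event": event,
        "user": user,
        "received": datetime.now(timezone.utc).isoformat(),
    })

ingest("router1", "config change", user="admin")
ingest("flight_computer", "control hand-over")   # limited audit capability
print(json.dumps(central_repository, indent=2))
```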
4.3.1.8 Backup/Recovery
The backup and recovery sketch should cover the different attributes associated with
getting a system’s information backed up, as well as what things would have to occur to
bring the system back, if there was an issue. The first portion of backup/recovery should
be what software or hardware is critical. This should be annotated as part of a
hardware/software matrix. The critical assets should be outlined, as well as dependencies
to ensure smooth recovery.
Backup is not just focused on the data in the system, but also the software that runs
the system. Creating this sketch early on in development can allow for smarter ways of
doing recovery. For example, instead of planning on rebuilding a workstation from
scratch, using an image to rebuild would be a faster option. If the system is hardened and
then all the software added, and an image of this is taken, it will significantly reduce the
time necessary to recover the system when something goes wrong. The sketch should
also show where the data will be stored, and how often data backups will occur.
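The image-based recovery idea above can be sketched as follows. This is a conceptual sketch under stated assumptions: the "golden image" stands in for a hardened OS-plus-applications image, and the hash check models verifying the stored image against a hash recorded at creation time before restoring from it.

```python
# Illustrative sketch: rebuild from a hardened "golden" image plus the
# latest data backup, rather than from scratch. Names are hypothetical.
import hashlib

def verify_image(image_bytes: bytes, expected_sha256: str) -> bool:
    """Check the stored golden image against its recorded hash before use."""
    return hashlib.sha256(image_bytes).hexdigest() == expected_sha256

golden_image = b"hardened OS + applications"        # stands in for the image
recorded = hashlib.sha256(golden_image).hexdigest() # taken at image creation

# Recovery step 1: confirm the image is intact; then restore it and
# replay the most recent data backup on top of it.
assert verify_image(golden_image, recorded)
assert not verify_image(b"tampered image", recorded)
```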
4.3.1.9 Zeroization/Sanitization
The ability to remove information from systems can sometimes be as important as the
ability to put information there in the first place. Zeroization pertains to the removal of
information from volatile memory, whereas sanitization refers to the process of removing
information from non-volatile memory. All memory in the system should be identified,
the type of memory, as well as the approach for removing information from it.
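The zeroization step can be illustrated conceptually. This is a sketch only: real zeroization is performed in hardware or low-level code, and a garbage-collected language such as Python gives no guarantee that copies of the data do not linger elsewhere in memory.

```python
# Conceptual sketch only: overwriting a sensitive buffer in place before
# release. Do not rely on this for actual key zeroization.
def zeroize(buf: bytearray) -> None:
    """Overwrite every byte of a mutable buffer with zeros."""
    for i in range(len(buf)):
        buf[i] = 0

key_material = bytearray(b"\x13\x37\xca\xfe")
zeroize(key_material)
assert all(b == 0 for b in key_material)
```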
4.3.1.10 System State Changes
If a system contains information that changes states this should be noted. For
example, if a server contains private health information that has to be protected on
shutdown, this constitutes a state change. The server has to go from private to public.
Anything in the system like this should be annotated in the sketch. This ensures that
potentially unseen confidentiality issues can be caught early.
4.3.1.11 IA Physical Security
Physical security may or may not be a requirement on the contract, but the operating
environment of the system must be understood. There is no point in putting a server in a
back room if everyone has access to it; that defeats the purpose. Physical security can
mean the guns, guards, gates and dogs associated with an airport, or it can mean the
locks used on a server room.
The type of physical security necessary depends on the type of system and the
concept of operations.
At a minimum, the physical security should be understood.
Information such as fire suppression, smoke detectors, temperature/humidity controls,
lighting and power should be understood. The type of fire suppression systems can affect
humans and equipment and should be considered carefully.
4.3.1.12 Other Physical Security
If the IA team is on contract to handle the physical security of a building a separate
security architecture would be necessary for this.
Although an important part of
protecting a system, it is a large enough piece that it is not covered in this dissertation.
For further information, I suggest [72] as a place to get started.
4.3.1.13 Authentication
Authentication, whether user authentication or entity authentication is a cornerstone
with respect to securing a system. A sketch needs to be created showing how the system
will handle authentication. As there are multiple types of authentication, such as single
sign-on, role-based, rule-based, etc., it is necessary to explain why specific types of
authentication procedures are chosen.
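As a small illustration of the kind of authorization decisions the authentication sketch should capture, the following models role-based access control. The role and permission names are invented for illustration.

```python
# A minimal role-based access control sketch. Roles map to permission
# sets; an action is allowed only if the user's role grants it.
ROLE_PERMISSIONS = {
    "operator": {"view_status", "acknowledge_alert"},
    "admin":    {"view_status", "acknowledge_alert", "apply_patch"},
}

def is_authorized(role: str, action: str) -> bool:
    """Unknown roles get the empty permission set (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_authorized("admin", "apply_patch")
assert not is_authorized("operator", "apply_patch")
assert not is_authorized("guest", "view_status")   # unknown role: denied
```

Deny-by-default for unknown roles mirrors the implicit-deny posture recommended for the network boundary.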
Creating a security architecture for regulatory constrained systems requires the
architect to be in line with both the information assurance control allocation and the
required certification and accreditation process. This means that the architecture must
show how the controls have been allocated, why they have been allocated and how it
affects the system. This must be documented in a way that allows the security architect
to provide justification for allocation of controls as well as an understanding of the
security posture of the system if a control is not met or something in the system changes.
This is fundamental to ensuring the security posture of the system remains at an
acceptable level.
If the security architect defines all of the above views and maintains the requirements
mappings, the evidence necessary for certification and accreditation is also available. The
vulnerabilities in the system can be examined and either mitigated or justified.
4.4 A Different Approach to Architecture
As discussed in my approach above, the first sketches can be done using PowerPoint
or a physical drawing with pen and paper, but eventually the architecture must be
rendered in an acceptable format when overlaid onto the physical baseline. The final security
architecture diagram should show all the mitigations on the physical baseline. During
concurrent engineering, pieces of the system may initially be left out. They will have to be
added in, but redoing sketches costs time and money. If anything in the baseline is not
covered in the sketches, the security architect should pay extra attention to those items
and ensure that the appropriate mitigations are covered. The first
view to consider is the security domains and interconnections as shown in Figure 4-3.
In Figure 4-6, those items marked with a 1 denote boundary defense items. Boundary
defense is a critical part of the system. At a minimum firewalls and intrusion detection
need to be in place. Although ACLs are necessary, they are not sufficient on their own to
protect the boundary. There are varying types of firewalls, such as stateful, stateless,
proxy, etc.
If there is enough funding, DMZ chaining can be used, with firewalls from different
vendors. For example, if the first firewall is produced by Cisco, then you would choose
something by Juniper or Palo Alto, etc. for the inside firewall.
This can be frustrating to maintainers, so the protection must be absolutely necessary.
By doing this, it forces an attacker to understand two sets of protection mechanisms that
would have to be broken through. Although end to end encryption can be used as part of
the boundary defense, alone it is not always sufficient. Intrusion Detection Systems
(IDS) can be used when traffic still must pass through. For example, in some medical
platforms, safety engineers may object to packets being immediately dropped. This
means that an alert to the user is preferred. In other systems, the ability to immediately
drop malicious packets is preferred. In this event, an intrusion prevention system (IPS)
should be used. Limited bandwidth or throughput may force the IDS to be placed out of
line, passively monitoring a copy of the traffic while the traffic itself flows directly,
which increases performance. Inline inspection is modeled in the diagram.
Figure 4-6 High Level Data Exchanges
Those items in Figure 4-6, marked with a 2 denote data at rest. Data at rest should be
protected.
The encryption of the data is typically the way to protect data at rest.
Classified data is typically protected using NSA Type-1 encryption. This level is not
always necessary and can be expensive. It is essential to work with the customer early on
to determine what the expectation is and if it fits within the budget. There are new
encryption devices specifically made for data at rest. The appropriate encryption should
be chosen. For example, a bulk encryptor like a HAIPE will not appropriately encrypt a
disk. Lesser encryption may also be a better solution, and in some cases the risk may be
low enough to forgo encryption entirely. This must be agreed upon by the customer in
writing. If the customer determines that a higher level of encryption is necessary, ensure
that it is in the contract. If not, an engineering change proposal (ECP) is necessary.
Those items in Figure 4-6, marked with a 3 refer to data in transit. If data is flowing
through a network at a lower level, it is necessary to use mandated encryption. If it is
flowing through a network at the same level, lesser encryption should be sufficient [5]. If
you have unclassified data flowing through a higher network, it can be separated out
using encryption as well.
Those items designated by a 4 represent partitioning user interfaces and backend
servers. Users should not have direct access to servers, nor should there be any direct
access from outside the enclave to backend servers or databases. If a web server is
employed, it should sit in the DMZ, so that users outside the enclave do not have access
to the system behind the inner firewall(s). If Voice over IP is a necessary part of the
system, this should also be included in the network sketch. If it is a large system, it
should have its own sketch.
The next sketch contains a view of patch management. This is seen in Figure 4-7.
Patch management is critical. All of the items marked with a 1 will need to have patch
management. The sensor is assumed to be using an embedded OS for which security
patches are not released.
Figure 4-7 Security Patching
As for the server, it will need to have patches, but in this example it also acts as the
patch management server. It should be understood that all patches will be tested in a lab
before being pushed from the server.
Also, the patch management server must be
logically or physically separated from the other components. This can be accomplished
using VMware or a physically separate machine.
The next sketch depicts where mobile code must be mitigated. Depending on the
type of mobile code, it needs to be blocked at the firewalls. All of the workstations
should have their browsers configured to block unauthorized mobile code. These are
denoted by a 1.
There are several classifications of mobile code. Some are allowed, given that
certifications are signed or that the code is executed in a constrained environment. The
DoDI 8500.2 contains the control DCMC-1 [5]. Each requirement set has a similar
requirement. The security architect should be familiar with [60], which outlines the use
of mobile code in systems. [60] should be the basis for the mobile code sketch for DoDI
8500.2 systems, and it is also a good security best practice for other systems.
Figure 4-8 Mobile Code
Auditing is the next sketch. This is shown in Figure 4-9. Those listed as 1 show
auditing networking equipment. This can be done using tools built into the equipment as
well as outside tools such as SNARE or Nagios.
The flight computer listed at 2
represents those pieces of equipment that can’t be audited in a normal way. For example,
a flight computer in a UAV should audit any hand-over situations, as well as any entity
logging into the flight computer, such as a ground station. The COMM Links are marked
with a 3, as a separate auditing tool would most likely be needed. The server is marked
with a 4, to annotate that there needs to be one centralized auditing management tool.
This will reduce the cost of maintenance. The goal is to reduce total life-cycle costs of the
system, not just the cost of development.
Figure 4-9 Auditing
If the security architect and network security engineer work with the IPTs early in the
system life-cycle, such as at the phase when these sketches should be done, it can reduce
the impact of adding an auditing management system. It also should be mentioned that
many times the requirement will be to off-load the audit logs onto a separate system.
This should be taken into account in the sketch. In the above example it is assumed that
the audit logs will be burned to CD from the server and stored in a container.
Another sketch should be done for backup and recovery. Most requirement sets
oblige the contractor to provide for backup of critical systems and the ability to recover in
the event of a disaster. Aside from a continuity of operations plan, the architect should
sketch out how the system will perform these functions.
The zeroization and sanitization sketch should overlap with the network sketch.
Zeroization applies in two areas. Zeroization can refer to the removal of key material
from a cryptographic device or it can refer to the removal of data from volatile memory
when power has been lost. Sanitization refers to the removal of data from non-volatile
memory, such that it cannot be accessed through laboratory means. Every box in the
system must be examined for how the memory will be dealt with. Also, for National
Security Systems, the NSA will have to approve the zeroization method for the crypto
devices.
Temperature and humidity controls are necessary to protect the longevity of IT
equipment. Many systems may have other environmental factors that they must meet,
such as the ability to withstand corrosion, salt and sand. These are not the responsibility
of the IA team. The IA team should ensure that temperature and humidity controls are in
place. Related to this is number 4, which is the requirement to have voltage regulators
and backup power. The next environmental issue that should be sketched is numbered 2.
Fire suppression will look different in a fixed facility as opposed to a mobile platform. In
a fixed facility, with humans the fire suppression system must be installed in conjunction
with a smoke detector. Any suppression chemicals should not be toxic to humans and
should not damage equipment. The smoke detectors, marked as 3, must be connected to
the local fire department. These are seen in Figure 4-10.
Physical security is often included as part of the facilities document. The IA team
must ensure that the physical security mitigations are in place. No amount of technical
security controls can protect a physically unprotected system. Although this is not fully
covered in many IA requirements, the best practices can be found in [34]. Physical
security is sometimes assumed to be in place.
The IA team should verify any
assumptions about physical security.
Figure 4-10 Environmental Mitigations
The basic assumptions such as fencing, lighting, cameras, access control, guards,
dogs and standoff zones should all be stated. The placement of these mitigations should
be discussed with the customer. This is especially important if an enclave is made up of
mobile units, or a combination of mobile and stationary pieces. For example, if part of
the platform is an air vehicle, there is a possibility that it could land at both physically
protected and unprotected locations. The security architect must keep the operational
needs in mind when designing for mobility.
Cryptographic Interfacing is the last of the sketches that should be created. This is
necessary if the system supports a public key infrastructure or other forms of
cryptography. A structure for maintaining certificates and how the certificates will be
published and revoked must be understood.
The certificate authority must also be
established. If other cryptographic devices such as NSA Type-1 end cryptographic units
are used, the method for keying and zeroizing must be established [73].
If cryptography is being used to protect National Security System data, NSA will
review the implementation plan for cryptography to ensure that it meets the security
doctrine. Zeroization of key material is one of the most critical aspects of correct
cryptographic implementation [74].
Once the initial sketches are created, they will need to be overlaid on the physical
and network baseline, as described in the following section. There should be a
layer for each IA attribute, such as hardening, audit, encryption, etc.
4.4.1 Ports, Protocols and Services (PPS)
The ports, protocols and services (PPS) should be mapped as flows onto a diagram to
ensure that all flows in the system are covered. Table 3, Ports, Protocols and Services,
shows the information that must be captured and associated with the flows in the diagram.
Table 3 Ports, Protocols and Services

Low Port | High Port | Protocol | Service | Direction | Traffic Type
123      | 123       | UDP      | NTP     | Listening | Network Time
22       | 22        | TCP      | SSH     | Outbound  | Sensor Output

Each flow also carries source and destination detail:

Source Segment Name | Source Segment Hardware | Source IP Address | External? | Connection Origination Point | Destination Segment | Destination Segment Hardware | Destination IP Address | Connection Termination Point
Patient Monitoring | Sensor | 192.168.10.2 | N | Internal | Healthcare Control | Server1 | 192.168.12.45 | Internal
Patient Monitoring | Sensor | 192.168.10.2 | N | Internal | Healthcare Control | Server1 | 192.168.12.45 | Internal
Figure 4-11 Ports, Protocols and Services
As the network information is constantly changing during design, it is useful to have a
database of all the flow information. It should then be linked to a diagram, so that as
designers are looking at the information they have a visual. An example of this is shown
in Figure 4-11 Ports, Protocols and Services.
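Such a flow database can be sketched directly from the Table 3 fields. The record layout below is an assumption for illustration; in practice this would live in a configuration-controlled database linked to the diagram. The values mirror the table's example entries.

```python
# Sketch of the PPS flow database using the Table 3 fields.
pps_flows = [
    {"low_port": 123, "high_port": 123, "protocol": "UDP", "service": "NTP",
     "direction": "Listening", "traffic": "Network Time",
     "src": ("Patient Monitoring", "Sensor", "192.168.10.2"),
     "dst": ("Healthcare Control", "Server1", "192.168.12.45")},
    {"low_port": 22, "high_port": 22, "protocol": "TCP", "service": "SSH",
     "direction": "Outbound", "traffic": "Sensor Output",
     "src": ("Patient Monitoring", "Sensor", "192.168.10.2"),
     "dst": ("Healthcare Control", "Server1", "192.168.12.45")},
]

def flows_touching(ip: str):
    """All flows whose source or destination uses the given address --
    the query a designer runs when a component's address changes."""
    return [f for f in pps_flows if ip in (f["src"][2], f["dst"][2])]

print(len(flows_touching("192.168.10.2")))   # both flows involve the sensor
```

Because the design changes constantly, a query like `flows_touching` is what keeps the diagram and the flow data synchronized.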
The PPS diagram, the security architecture and the verification of requirements are three
key pieces of information that must be presented to approvers. Figure 4-11 shows an
example of the information that should be captured as part of the PPS. The use of the appropriate security
architecture methodology, taxonomy or framework defines the degree of difficulty an
architect will have in adequately describing the security functionality of a system. The
modified approach provided in this chapter allows for incorporating the compliance
needs of regulatory constrained systems into the equation. If followed, it also ensures that
the security requirements are accounted for. This is critical to achieving compliance.
Appendix A provides an in depth application of the above approach.
4.4.2 Conclusion
Security architecture approaches are critical to ensuring the appropriate security
posture is reached and that a system’s design will allow for it to be compliant with the
countless number of regulations that may apply. My approach allows for an iterative
design being driven by customer collaboration and a growing understanding of the
system as the concepts develop. My systematic approach also allows for those following
it to ensure that they meet regulatory compliance. The ability to accurately reflect the
security design is a significant part of meeting compliance. It is also the roadmap from
which the system is implemented. If the security architecture is not appropriately created,
there is a risk that the system’s overall design will not meet compliance. If the design is
non-compliant the probability of the implementation being compliant is slim.
The security architecture affects the system development and the ability to maintain
the system’s security posture. As many regulatory constrained systems are required to be
assessed every year, the ability to maintain the security posture is necessary. My
approach allows for easing the issues during development as well as during the critical
maintenance phase.
Chapter 5
Threats
In order for an attack vehicle to be feasible, the attacker must have the right skill
level, funding, amount of time and tools. For an attack to occur, the attacker must be
motivated to attack that particular system type. A nation state might attack a government
network, but would most likely not go after a mom and pop shop website.
For an attack to be successful the attack has to occur and it has to be paired with the
right attack vehicle to hit a particular residual vulnerability within the system being
attacked. If the system has mitigated some vulnerabilities, it lessens the pool from which
to draw an attack. The success level depends on what type of vulnerability is used.
Some vulnerabilities may allow for limited entry, but not total privilege escalation. For
example, if a system is not something a nation state would attack, but is one that a script
kiddie might go after, this might change the type of mitigations. A system may only need
commercial cryptography instead of high assurance cryptography if this is the threat.
Intelligence agencies focus on attacks that would harm national interest, but this
doesn’t necessarily help those outside the federal/military arena understand potential
threats to their systems. Some of the threat research has been done in the military arena,
although there is some research being done in the public forum as well.
A common thought in the military, as described in [75], is that “threat assessments are made to form a
basis for making decisions about counter measures. It is not necessary to distinguish
between threats which do not affect the resulting command decision.” This may be true
in the tactical world, but not necessarily when developing a system. To better allocate
mitigations in a system, it is necessary to look at threats, their resources, motivations, and
attack vehicles.
5.1.1.1 Contributions
We provide a way of grouping threat motivations in order to view the security
of a system from a different perspective, and a cause-and-effect view of the motivation
to attack.
5.2 Securing A System
There are multiple ways to look at how to secure a system. Most security engineers
focus on reducing vulnerabilities, as these are within the engineer’s control. [76] uses
Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and
Elevation of privilege (STRIDE) as a method for identifying mitigations to be added
during system development. The authors focus on brainstorming to come up with
potential threats. They then look at how components are deployed, their environment,
functionality, entry points, etc. The focus is really the exploitation of vulnerabilities and
not the threats themselves.
[77] looks at (a) sparse and ambiguous indicators of potential or actualized threat
activity buried in massive background data; and (b) uncertainty in threat capabilities,
intent and opportunities. This work is a good starting point for the information engineers
should use to understand the threats to their system. The authors suggest that by
understanding the threat’s cost-benefit analysis, one can understand the probability of
attack. Some papers such as [78] look at attacks themselves as threats. Denial of service,
malware, etc. are looked at as the threat, as opposed to the tool used by the attacker.
Although this is useful for setting a particular configuration, it does not go to the root of
the problem.
Others have proposed a threat ontology. As stated in [79], “threat assessment results support multiple decision makers with
different goals, objectives, functions, and information needs and therefore it is necessary
to consider different threat assessment functions and items.”
[79] creates a formal
ontology that relates the intent, capability and opportunity to provide a better
understanding of vulnerabilities. This ontology is meant to represent threats in multiple
domains.
According to [79], threats can be understood as dispersed wholes. This means that
there is a characteristic or feature that unites the pieces of the whole, but they are not
contiguous. The goal is to identify the relationships between the pieces.
This is done
through defining a set of characteristics and relations, proximity measures of those
characteristics and relations as well as a function on those measures. Then a boundary
must be determined, in order to separate this dispersed whole from a collection of items.
Although [79] includes intent, capability and opportunity, motive is treated somewhat as
a given. In order to better understand intent, it is necessary to study motive.
The method presented herein uses many ideas found in the papers discussed in this
chapter, but goes a step further to try to identify a root cause of threats. There are two
main categories defined. The first is the human threat, which is the most complex to
comprehend. The second is the natural threat, those things in nature that pose a threat to
a system.
5.3 Threats
The human threat should be looked at from both a scientific and philosophical
standpoint. In the security world, terms like black hat and white hat are used to describe
the bad guys and the good guys. Threats are the people or things that may be after the
system to exploit its vulnerabilities. In trying to come up with a superset of threats, one of
the intriguing pieces was motivation. In a sense, can threats be seen through the lens of
good vs. evil? If so, then the motivations must be defined in order to understand what the
threat truly is.
Even though technology has changed, mankind seemingly has not
changed in character. It seems reasonable then to look at the scope of man’s motivation
through the ages. Historical documents such as the Bible and Sun Tzu’s “The Art of
War,” are good starting points to understand motivation.
I decided to look at these documents as a starting point on what motivates an attack.
This is partly due to the idea that threats begin to enter the realm of philosophy and
anthropology as opposed to hard science. Cyber security is a realm in which philosophy
and science intertwine. Therefore it may be possible that by applying certain aspects of
philosophy a security engineer can better grasp how a system might be attacked. Like a
chess game, understanding the opponent is a large part of understanding what move to
make next. The art of cyber security is anticipating the opponent’s attack and responding
preemptively.
It is also responding quickly and effectively to an attack that was
unanticipated, in other words resiliency.
If the motivation behind an attack can be understood, it may be possible to predict an
attack. Varying philosophies and belief systems have different names for some of these
motivating behaviors. Whether looking at the motivation of an individual or collection of
people, the motivations can be similarly grouped. These are shown in Figure 5-1. In
some instances it can re-scope how a system needs to be protected. It is understood that
an attacker must have the correct tools, funding, skill level and timing to attack a system.
That alone cannot determine threat. Many professional developers have the correct skill
set, tools and perhaps even funding to carry out attacks on a myriad of systems. However
without motivation they would not attack a system.
Figure 5-1: Threat Motivations
An identity thief would most likely not attack a military system, because he is
motivated by greed, not pride. He is looking for a system that would allow him to gain
access to identities. A nation state however would look to attack a military system. This
is not to say that an identity thief would never attack a military system, it just is less
likely.
When building security into a system, it is about risk mitigation not risk elimination.
If a security engineer looks at the potential threats to a system based on motive as well as
other factors, it should change the way the mitigations are introduced, especially when
resources are limited.
If the threat to a system is an identity thief, then the confidentiality of personal data
becomes of paramount importance. The security engineer should focus resources to
protect the system from this threat. In this scenario, money is better spent on encryption
equipment than on bullet proof windows. The concept also moves from individual motive
to the collective. The collective motive becomes more complex, as it is made up of
individuals, each of whom may have different motives. This motive determines the
“attacker type.”

Figure 5-2: Attack Variables
Each attacker type has differing skill levels, funding, motivation, risk tolerance and
time. All of these variables lead to the probability level of an attack occurring. These
variations are seen in Figure 5-2.
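As a hedged illustration of how these variables might be combined, the sketch below scores a hypothetical attacker profile. The 0-3 scales, the additive weighting, and the gating on motivation are assumptions chosen for illustration, not a validated model:

```c
#include <assert.h>

/* Hypothetical attacker profile: each factor scored 0-3. The scales and
 * weighting below are illustrative assumptions, not a validated model. */
struct attacker_profile {
    int skill;          /* 0 = none .. 3 = expert           */
    int funding;        /* 0 = none .. 3 = nation-state     */
    int motivation;     /* 0 = none .. 3 = highly motivated */
    int risk_tolerance; /* 0 = averse .. 3 = reckless       */
    int time;           /* 0 = hours .. 3 = years           */
};

/* Relative likelihood of attack: motivation gates everything (without a
 * motive there is no attack); the remaining factors contribute additively. */
int attack_likelihood(const struct attacker_profile *a)
{
    if (a->motivation == 0)
        return 0;
    return a->motivation *
           (a->skill + a->funding + a->risk_tolerance + a->time);
}
```

The gating term captures the observation above that skilled, well-equipped professionals are not threats absent motivation.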
Taking all the variables into account is the first part of understanding the attack. The
threat can only exploit residual vulnerabilities: those items in the system that leave it
exposed after all the protections are in place. The chain of events leading to a successful
human attack is shown in Figure 5-3.
Figure 5-3: Human Threat
The human threat is only half the equation. Humans play the largest role in attacking
systems, but natural disasters can also have devastating effects on the system, if not
planned for. Such natural disasters are outlined in Figure 5-4.
Although natural disasters are not motivated to attack a system, there are a number of
types of natural disasters that must be planned for. The way they affect a system is shown
in Figure 5-5. These are labeled as attack types to show consistency across how a system
reacts.
As an engineer looks at how best to lay out the mitigations, both the human and the
natural threats must be taken into account. Understanding the motivations of potential
threats provides an understanding of the attack vehicles that would be used against the
system.
Figure 5-4: Natural Disasters
Understanding the potential attack vehicles being used across the system allows
security engineers to better determine where funds should be spent. As a system is being
developed, it is important to look at the individual types of attackers and determine which
ones will most likely attack the system. Nation states would have more resources and
time to attack a system than a lone wolf hacker. They may use similar tools, but they
may have different skill levels. For example a nation state may be able to throw a
hundred engineers at a problem, but if they are not resourceful they may not be able to
get into a system, whereas a lone wolf may have more at stake and be more resourceful.
The time involved is different as well. A nation state may be willing to spend thousands
of hours trying to get into a system, but a lone wolf may only have a day or two. This
changes the way an attack may unfold.
Figure 5-5: Natural Threats
Getting into a system is not just the technical aspect, such as using Metasploit, but
involves the human element as well. Social engineering is one of the most effective
avenues of attack. It takes time to plan and execute. There are different types of social
engineering: in person, phishing, phone calls, each taking a different amount of time.
Technical protections will not work against this type of attack. Users have to be trained,
so policy is very important. This is an attack vehicle that must be considered and
accounted for outside the system. Depending on the motive this type of attack may or
may not be used.
Engineers should first determine which attackers are most likely to target their
system, and then identify the most common attack vehicles used by those attackers.
Some of this information is not fully understood at this time
and warrants further investigation. After the attack vehicles are understood, the engineers
should then put in mitigations to block these types of attacks.
Understanding the motives of attackers can help understand how to better engineer a
system. Using the above models to walk through the steps necessary to understand the
threats that can come at systems allows for an improved systematic approach to looking
at threats. Although any part of a system might be threatened by an attack, one of the
most prominent attack vectors is against a system’s software. The next chapter describes
ways of reducing this threat vector.
Chapter 6
Software Assurance
Software security and security software are not the same. Security software can be
insecure if the software is not built securely, so writing code that is secure is vital to
producing code that is more resistant to vulnerabilities. With the advent of point-and-click
attack tools, attackers are increasingly able to take advantage of code flaws.
Previously only elite attackers had the knowledge and skill set to exploit seemingly
obscure code vulnerabilities. It is also an increasing issue due to the supply chain risk
[36].
In today’s culture though, script kiddies can use a list of tools to attack without
understanding the underlying software construction. Script kiddies are those that pretend
to be hackers, but rely solely on premade tools and have little to no knowledge of
software and underlying systems. These are becoming more prevalent as vulnerabilities
are becoming commercialized and tools are being built to take advantage of them
[80]. This requires developers to build software capable of withstanding the barrage of
attacks it will encounter. Attackers will continue to increase their abilities, so it is
necessary to reduce any attack vectors they could use.
“Secure software is software that is able to resist most attacks, tolerate the majority
of attacks it cannot resist, and recover quickly with a minimum of damage from the very
few attacks it cannot tolerate” [81].
That is to say, when attacked, the code will still behave in the way it was intended.
Not only should the code be correct, reliable, and fault tolerant, but it must perform these
functions under duress. There are several areas in the software lifecycle that must be
modified to produce secure code.
Each piece of software in the program must be considered. The complexity and scale
of software production increase the inherent risk associated with creating
software.
The creation of secure code is not trivial. It requires the use of stringent
controls, and code reviews, as well as special care taken in the formation of the code.
These elements are covered throughout the various aspects of the secure coding lifecycle.
The following sections will cover objectives, the changes needed in the requirements
phase, design phase, development phase, testing, integration and deployment phase, then
finally the maintenance phase. Once the phases have been discussed, an overview of
other development needs will be addressed as part of the summary.
Creating secure code provides assurance that software will function effectively on
several levels. The software assurance objectives are [82]:
1. Dependability (Correct and Predictable Execution): Justifiable confidence can be
attained that software, when executed, functions only as intended;
2. Trustworthiness: No exploitable vulnerabilities or malicious logic exist in the
software, either intentionally or unintentionally inserted;
3. Resilience (and Survivability): If compromised, damage to the software will be
minimized, and it will recover quickly to an acceptable level of operating capacity;
4. Conformance: A planned and systematic set of multi-disciplinary activities will
be undertaken to ensure software processes and products conform to requirements and
applicable standards and procedures.
These objectives must have tests associated with them to verify they exist in the
software. These will be met through the use of testable requirements, coding standards,
configuration management, and the other changes to the software development lifecycle.
Many programs follow a concurrent engineering model, which condenses the
development lifecycle. This increases the need to incorporate security objectives into
each element of the life cycle early and efficiently.
When these objectives are met and verified, there is an assurance that the software
will behave as expected, under normal and duress conditions. This will ensure that if
attacked, the software will degrade gracefully, without damaging other areas of the
system. Software that meets these objectives will be easier to maintain, and will provide
deterministic capabilities.
6.1.1.1 Contribution
We provide in this chapter a literature survey with respect to incorporating security
into the software life-cycle, specifically for systems needing to meet regulatory
compliance.
6.2 Requirements Phase
The requirements phase is iterative on many programs, as there are several
increments. Information assurance requirements for many programs levy secure software
practices. At each new phase of the program, it will be necessary to ensure that these
requirements are flowed down to each component.
The Department of Defense Instruction (DoDI) 8500.2 [5] contains several
requirements that focus on the development, deployment and maintenance of software.
These are the requirements to which software must adhere for many programs.
These IA controls apply to all software written for programs on contract for DoDI
8500.2. Personnel being brought on for both management and development should have
some background in security, or training should take place. Developers in particular
should have at least basic training in the art of secure coding practices.
Training must include the use of static code analysis tools, general secure coding
principles and language specific security issues. The security software engineers must
work with developers to ensure they will follow a secure coding standard.
Developers are required to follow the secure coding and language specific coding
standards. The language specific coding standard will cover normal coding conventions,
such as naming conventions and commenting. This type of coding standard increases
readability as well as sustainability. It also allows for more productive code reviews and
maintainability of the Capability Maturity Model Integration (CMMI) level [83]. As
discussed previously, there are four objectives with respect to software assurance. The
enforcement of good processes can aid in the pursuit of secure code. CMMI is a method
for categorizing the maturity of a company’s processes.
The secure coding standard should cover some language specific aspects as well. The
details of the secure coding standard will be covered in greater depth in the development
phase section.
Another area of guidance that affects application development is the Application
Security and Development (ASD) Security Technical Implementation Guide (STIG) [84].
The ASD STIG considers three categories of vulnerabilities. The first category is CAT I
and represents the most egregious violations of IA policy.
CAT II represents less
offensive vulnerabilities and CAT III the least. For example, a vulnerability that allows
an attacker to bypass a firewall or gain immediate access would be a CAT I finding. If
the vulnerability has a high probability of allowing unauthorized access, it is a CAT II
finding.
If the vulnerability provides information that might allow an attacker to
potentially compromise the system then it is a CAT III [84].
Each of the vulnerabilities may or may not exist in different portions of a system. In
the event that a listed vulnerability exists, mitigations must be put into place. Not all
vulnerabilities will appear in all applications. For example, in applications that do not
involve user interaction, login passwords being shown in clear text do not apply.
During the requirements phase, some portions of the guidance will be put into formal
requirements and other portions will apply as design guidance or development guidance.
6.3 Design Phase
In the design phase, developers will take the vetted requirements and create the
framework for the software. The design should incorporate security aspects, to ensure
that software assurance is designed into the system, not included as an afterthought.
There are several design areas that play a significant role in securing the system.
Access control, authentication, integrity, confidentiality, availability, and nonrepudiation are attributes that will affect aspects of software design. When two separate
software elements communicate over a network, it is necessary that there be a form of
authentication and access control, and that the communication remains confidential. The
communication must also have integrity, to ensure proper communication.
Communication integrity can be as simple as using TCP instead of UDP or as
complex as using public key encryption. Authentication can be done using user name
and password, client certificates, or using biometrics. The systems in adherence to the
DoDI 8500.2 should include security attributes as part of the software design [5]. The
security attributes are defined as confidentiality, integrity and availability, authentication
and non-repudiation.
These attributes are tied to the four objectives mentioned at the beginning of this
chapter: dependability, trustworthiness, resilience and conformance. Dependability is
ensuring that there is correct and predictable execution.
One method of ensuring
software dependability is by building modular software, and decoupling. Each module
should only provide one function. Calls to and from the function should be explicitly
handled. The structure of the code is important to dependability. For example, if looking
at code written for use in a civilian aircraft, it must meet DO-178
requirements for dependability. This structuring is critical in aviation systems, as a
failure could result in catastrophic consequences [85]. In the end it is necessary to be able
to trace source code to object code. Although not truly part of the design phase, it is
necessary to choose a compiler wisely, as this can lead to software behaving differently
than intended. The software must be designed around a trusted base. The application
may be dependable, but if the underlying operating system is not deterministic, then the
overall operation is not dependable. The authors of [86] suggest that dependability is
typically done through process and test, not necessarily design. Instead they recommend
the use of a dependability case, which provides an argument as to the connection between
the code and its dependability.
The second objective is trustworthiness such that there aren’t any exploitable
vulnerabilities. During design, any human interface should be carefully approached.
Ensuring that any input is correctly handled is paramount, although this is true for any
machine to machine interaction as well. The design should also separate the interfaces
and the backend [84]. This reduces the attack surface, limiting the opportunity for
vulnerabilities to be exploited.
The third objective is resiliency, also known as survivability. The goal is to allow
software to recover after an attack, and to minimize the extent of the attack. In
embedded systems, one way to handle this is through the use of software images. The
system is designed to hold only a small boot loader in non-volatile memory, with
everything else running in volatile memory. A copy of the image is held in secure
storage; if an attack occurs, the system shuts down and reboots, pulling the clean
version of the software from the secure storage. In order for this to work, it has to be
designed in from the beginning.
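A minimal sketch of the restore step, assuming the clean copy in secure storage is protected by a recorded checksum. The checksum scheme and all function names are illustrative placeholders (a real design would use a cryptographic hash), not part of any cited system:

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

/* Illustrative checksum standing in for a stronger integrity check such as
 * a cryptographic hash; the scheme and names here are placeholders. */
static unsigned checksum(const unsigned char *p, size_t n)
{
    unsigned sum = 0;
    while (n--)
        sum = sum * 31 + *p++;
    return sum;
}

/* On detected compromise: restore the clean copy held in secure storage,
 * but only if that copy still matches its recorded checksum. Returns 0 if
 * a verified image was restored; the caller would then reboot into it. */
int restore_clean_image(unsigned char *running, const unsigned char *golden,
                        size_t len, unsigned expected_sum)
{
    if (checksum(golden, len) != expected_sum)
        return -1;                 /* stored copy corrupted: do not boot it */
    memcpy(running, golden, len);  /* replace the compromised image         */
    return 0;
}
```

Verifying the golden copy before restoring it guards against an attacker who has reached the secure storage itself.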
The last objective is conformance.
This is the need to ensure that there is an
enforceable approach that requires the system to conform to applicable standards and
procedures. In systems that must meet the DoDI 8500.2, the criteria for conformance are
immense.
Multiple IA controls within the DoDI 8500.2 require the designers and
engineers to look at other directives and instructions sets. This creates a “daisy chain”
effect that makes conformance difficult. Such processes as configuration management,
version control, and restricting access to production code can all help with conformance.
Prototyping software is part of risk reduction. During the prototype phase developers
will begin designing software using patterns. Designers should use security design
patterns [87]. Similar ideas are found in [88].
6.4 Development Phase
During development there are several areas that must be considered. Adherence to
secure coding standards; secure software principles and practices; sound architecture and
design; and the use of supportive tools, secure software configuration management
systems, and processes are all crucial to the development of secure code.
Developers are more likely to write secure code if given the right tools and an
understanding of why it matters. If the developers are trained in secure coding prior
to development, the understanding of vulnerabilities will have been covered. These tools
include compilers with security compiler options, static code analysis tools, and
configuration management tools that allow for easy versioning control. The following
sub-sections will explain the application of these principles and tools within the
development phase of the program.
6.4.1 Use of Secure Coding Practices
Adherence to the secure coding standard prevents the introduction of vulnerabilities
into the software. This is vital to development, as a critical system cannot afford to
suffer from insecure code. The secure coding standard requires developers to refrain
from the use of deprecated functions as well as dangerous functions.
Dangerous functions are those that are known to cause instability within the code.
Vulnerabilities can be caused through buffer overflows, string or integer manipulation,
etc. Some of the dangerous functions can be used, if certain precautions are taken. An
example of this would be memcpy() in C. This function can be used if memory spaces
are predefined (properly initialized) and code is added to ensure that the boundary
conditions are verified, to prevent data writes into memory spaces that are not expected.
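A bounds-checked wrapper of the kind described above might look like the following. The name `safe_copy` and its return convention are illustrative assumptions, not a standard API:

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

/* Bounds-checked wrapper in the spirit described above; the name safe_copy
 * and the return convention are illustrative, not a standard API. */
int safe_copy(void *dst, size_t dst_size, const void *src, size_t n)
{
    if (dst == NULL || src == NULL)
        return -1;   /* refuse null pointers                */
    if (n > dst_size)
        return -1;   /* copy would overflow the destination */
    memcpy(dst, src, n);
    return 0;
}
```

Because the destination size travels with the call, the boundary check cannot be forgotten at individual call sites.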
Dangerous functions are language dependent. A secure coding standard should be set
for each language a program is using. Many embedded systems use C or C++. This is
127
also a common language for avionics equipment as it can be written deterministically,
which is a requirement for flight critical software.
Software which interacts with users must be written to ensure sanitization of input.
Sanitizing input means that only data meeting specific criteria will be allowed through;
anything not meeting those criteria will not be allowed into the program.
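An allowlist-style validator illustrating this idea; the specific criteria (alphanumeric only, bounded length) are assumptions chosen for the example, and a real system would tailor the allowlist to the field being validated:

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>

/* Allowlist validation: accept input only if every character is alphanumeric
 * and the length is within bounds. The criteria are illustrative; real
 * systems would tailor the allowlist to the field being validated. */
int input_is_valid(const char *s, size_t max_len)
{
    size_t i;
    if (s == NULL)
        return 0;
    for (i = 0; s[i] != '\0'; i++) {
        if (i >= max_len)
            return 0;   /* too long                        */
        if (!isalnum((unsigned char)s[i]))
            return 0;   /* character not on the allowlist  */
    }
    return i > 0;       /* reject empty input as well      */
}
```

Defining what is allowed, rather than enumerating what is forbidden, is what distinguishes sanitization from ad hoc filtering.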
The restricted use of unsafe functions and input sanitization are only one part of the secure
coding standard. The secure coding standard also requires that programmers use good
programming practices, such as proper error handling, setters/getters, avoiding null
pointers, not dividing by zero, etc. As the software is being created, it must be handled
correctly. This is accomplished through best practices and principles.
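One of the practices listed above, defensive error handling, can be sketched as follows. Returning a status and writing the quotient through an out-parameter is one common convention, shown here purely as an illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Defensive division: report failure instead of faulting. Returning a status
 * and writing the quotient through an out-parameter is one common
 * convention; errno or sentinel values are alternatives. */
bool checked_div(int numerator, int denominator, int *result)
{
    if (result == NULL)
        return false;   /* nowhere to store the quotient */
    if (denominator == 0)
        return false;   /* avoid a divide-by-zero fault  */
    *result = numerator / denominator;
    return true;
}
```

The caller is forced to acknowledge the failure case, which is the point of proper error handling.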
6.4.1.1 Best Practices
Best practices for software subsume code reviews, the use of tools, and software handling, as
well as the application of coding standards. The use of tools aids in expediting code
reviews and in handling software correctly. There are many tools that can be used for
software development on programs such as: Microsoft Visual Studio 2010, Eclipse, Wind
River Workbench, HP Fortify 360, IBM Rational ClearCase and IBM Rational
ClearQuest. Each of these tools provides a different way to reduce security issues with
software. Although the information below is on these tools, a different set of tools could
be used to the same effect.
6.4.1.1.1 Visual Studio
The Microsoft Visual Studio 2010 integrated development environment (IDE) allows
developers to write, debug, and compile software.
The IDE contains compiler options
that can be used to force code to be compiled in a secure fashion and other security
features [89]. An example of this is “/analyze” (for C/C++) which turns on Enterprise
Code Analysis. This option analyzes the software and reports potential security
issues such as buffer overruns, uninitialized memory, null pointer dereferencing, and
memory leaks.
There are a couple of open source options that developers can use instead:
Code::Blocks 8.02, SharpDevelop 3.0, Qt Creator 1.2.1 and MonoDevelop 2.0 [90]. It is
possible to set the compiler options in Code::Blocks.
In conjunction with Visual Studio 2010, programs can use HP Fortify 360 as their
static code analysis tool. These two tools will provide the developers with insight into
vulnerabilities that may be contained in the code.
6.4.1.1.2 Fortify 360
Fortify 360 tests the software for vulnerabilities and provides suggestions for
mitigating the flaw. Fortify analyzes code on several levels. Fortify is comprised of two
major components, the Fortify 360 Server and Fortify SCA.
Fortify SCA is integrated into the IDE and scans the code for vulnerabilities within
the developer’s code, during compile time. SCA also should be installed on a separate
server with access to the code repository to ensure the entire code base is scanned on a
regular basis.
Fortify 360 Server is a collaboration module to view and track
vulnerability metrics over time. It is critical to run the tool against code being developed
and code checked in to the base. Fortify follows the code as it is compiled to analyze the
potential vulnerabilities. This requires the entirety of the code base to be present.
Fortify 360 verifies the code for adherence to the Application STIG. This will provide
insight into the STIG vulnerabilities mentioned previously. During the Certification and
Accreditation (C&A) process, the system will be evaluated against the STIG. In an effort
to prevent rework and noncompliance, the use of a tool such as Fortify should be
required of all developers. A tool such as this should be used on all software before it is
allowed to be added to the code base.
There can be a number of false positive issues that appear. As the code is reviewed,
these false positives should be noted. In some cases the tool will find an issue that is
being corrected elsewhere in the code, this justification should be documented as well.
Fortify 360 does allow for custom rules and filters to handle some of these issues, but
as with any tool it is not foolproof, so the developers and security testers should be wary of
this.
Code used on the program, but not developed specifically for the program must also
be tested. Third Party software, including free and open source code must be tested for
vulnerabilities and approved by the Designated Approval Authority (DAA) for use.
Although the DoDI 8500.2 [5] requirement is for binaries and executables, there is a new
threat in the supply chain from open source software [36]. This means that any software
libraries such as those pulled off of SourceForge.net should be tested for vulnerabilities.
Legacy code must also be tested, but it is impractical, and in most cases
unaffordable, to test all of it; legacy code should therefore be tested to the extent
possible. Reusing untested legacy code can be risky if there is no documentation or
evidence that it performs as expected. Wrappers can be used to integrate legacy code into
a new system [91]. The primary purpose of wrapping legacy code is to allow for existing
software that is outdated to be reused. This is practical as systems must be updated to
meet current technology, but the time and effort put into the legacy code might still be of
use [92]. Wrappers can be used to add security mechanisms to legacy code [93]; however,
this does not actually fix the original software. It can provide mechanisms such as
message level security. This does reduce the risk in some ways, but still does not prevent
attacks that occur at the assembly level.
The goal is to remove dead code, a vulnerability that can enable attacks at
the assembly level, as well as to find and mitigate any other glaring vulnerabilities.
Documentation will show which code was not tested. It will also include any reasoning
for why mitigations could not be put in place. As software assurance is incorporated into
newer code, the need for extensive changes to legacy code will reduce. In turn this will
make code reuse more affordable and secure. Although this will take time, the long term
goal is to have tested all code within a system.
There are a number of free static code analysis tools [94]. Each tool looks at a
different piece of the code. One may only look at coding style and another may only look
at race conditions. In order to get the same level of analysis provided by Fortify 360, it is necessary to use a combination of these open source tools. The Application Security
Development STIG requires that static code analysis be run against all code in a platform
needing to be compliant with the DoDI 8500.2.
All new code for the system should be tested by developers before being uploaded
into a configuration management tool such as IBM Rational ClearCase. As software
builds are added to the code base, the integrated builds should also be tested. This
ensures that any integration has not introduced vulnerabilities.
6.4.1.1.3 IBM Rational ClearCase
The newer DoD IA requirement sets require a configuration management process [5],
[6]. The use of a tool such as IBM Rational ClearCase allows a development team to
partially meet this control. There is also a requirement for a configuration control board, in which the IA team must participate. The tool provides assurance that the
code being integrated has been handled correctly and has not been changed accidentally.
If it has been changed, there is a way to see who made the changes. ClearCase also
provides version control, which makes it easier to track what software baseline is in use
and what modifications it contains. There are other tools that provide similar version
control functionality. This is also a requirement from the IA requirement sets. The tool
used will be documented as an artifact for the C&A process, in order to show
verification.
Subversion is an open source option for version control [95]. It provides similar
functionality to that of ClearCase. Whether the team decides to use an open source
product, or something a bit more expensive, version control is necessary. In DoDI
8500.2, version control is a requirement. If version control cannot be shown, then the
risk level is considered to be increased.
6.4.1.1.4 IBM Rational ClearQuest
IBM Rational ClearQuest is a companion tool of IBM Rational ClearCase. It provides
management of defect tracking. This provides confidence that vulnerability mitigation
has occurred. Developers are not allowed to make changes to software without approval
from the configuration control board. This means that a process must be in place to
adjudicate any known security vulnerabilities as well as other defects. The C&A
processes also require verification that such a process exists [5], [6]. Although these
mechanisms could be implemented by hand, it is my experience that automated tools
afford developers and code reviewers a more expedient process.
Bugzilla could be used instead of ClearQuest for those wishing for an open source
substitute [96]. The ability to track software issues is important to the overall quality of
software. Security flaws can be handled like any other software issue and should be
tracked in the bug tracking system.
6.4.1.2 Code Reviews
All software must go through a code review. Developers cannot always find flaws in
their own code. During code reviews, a specific category for security flaws should be
defined. A person should also be designated to review code with the intent of finding
vulnerabilities. The use of automated tools allows developers and reviewers to find security vulnerabilities that might be missed if reviewing the code only by hand.
As with any flaw, the earlier security issues are discovered and remediated, the easier
they are to address. It is difficult to fix code that has not been touched in months. Even
with good commenting, it is sometimes an arduous task to remember why certain
elements were added or removed from the code.
Code reviews shall be done in
conjunction with unit testing as well as integration testing to help developers discover
security flaws as early as possible. This will help in two ways. First, it allows the
developers to approach the code while it is fresh in their mind. Secondly it will prevent
(to some degree) new code from depending on flawed code. Even with these measures in
place, thorough testing of integrated code is still essential. Abuse cases should also be used during code reviews. It is not enough to test whether the software behaves as intended; it must also be verified that improper input does not affect the functioning of the system.
6.5 Testing Phase
Security testing practices should focus on verifying the dependability, trustworthiness, and sustainability of the software being tested. Testing for security issues ensures that software executes as expected, even under duress. This means that positive test cases, those that look for correct functionality, are not enough. Test cases need to be added that abuse the system. Abuse cases must encompass not only unintended input, but malicious input as well. This is especially critical in applications with human interfaces, such as web applications.
Requiring software developers to use good practices in creating code is one of the
first steps of creating secure software. If the software is not adequately tested though, the
assurance of the software security cannot be verified. This leads to questioning whether
or not the code was actually written in a secure manner, using best software security
practices. This type of testing also leads to solid software quality.
Testing is an integral part not only to software quality but software assurance. Both
quality and assurance require that software is deterministic and reliable. Testing provides
verification of a piece of software’s ability. Software testing is done in several stages:
unit testing, computer software configuration item (CSCI) testing, integration testing and
regression testing [97]. There is additional testing for air vehicles, ships, etc. [97]. Each
of these stages must take security into consideration.
During unit testing, the developers should use a static code analysis tool on their own
code. This will provide a solid basis from which to start integration testing. During CSCI
testing, a static code analysis tool should be run on the code base in the background, to
ensure that the integration does not introduce vulnerabilities. Once code is integrated it
should also be tested, as a whole. Regression testing should also include a run through a
static code analysis tool.
Static code analysis is not the only security tool to be used during testing. Another important aspect of testing is abuse cases, in other words, testing for functionality when
given an unexpected or explicitly malicious input. This is especially important in testing
interfaces.
Abuse cases should be written to explicitly try and break the software. Abuse cases
should include tests for interfaces as well as string manipulation, database manipulation
(e.g. SQL injection) [98], integer manipulation, buffer overflows, etc. Abuse cases alone
may not prevent all attack vectors, but do allow for a systematic way of reducing
vulnerabilities.
Interfacing with human users provides many opportunities for both malicious and
accidental input. These interfaces need to be checked for input sanitization and for other
manipulation opportunities, such as cross-site scripting, and database retrieval/deletion.
Another tool that is helpful for testing unexpected functionality is fuzz testing.
Fuzz testing is a technique that purposely injects invalid, unexpected, or random data
to the inputs of a program. File formats and network protocols are the most common
targets of fuzz testing, but any type of program input can be fuzzed. Interesting inputs
include environment variables, keyboard and mouse events, and sequences of API calls
[99]. I would recommend the Peach Fuzzing Platform, a smart fuzzing tool [100]. Fuzz
testing can be viewed as defensive coding as described by the author of [101]. Since
June 2010 over 3.4 billion constraints have been solved using whitebox fuzz testing
[102].
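As an illustrative sketch (the parser under test and the class name are hypothetical, not part of Peach or any cited tool), a minimal fuzz loop feeds random byte strings to a program input and flags any failure mode the input handler does not document:

```java
import java.util.Random;

public class MiniFuzzer {
    // Hypothetical input handler under test: parses a TCP port number.
    static int parsePort(String input) {
        int value = Integer.parseInt(input.trim());
        if (value < 1 || value > 65535) {
            throw new IllegalArgumentException("out of range");
        }
        return value;
    }

    // Feed random byte strings to the parser. Returns the number of
    // failures outside the parser's documented failure modes; any such
    // failure would indicate a robustness bug worth investigating.
    public static int fuzz(int iterations, long seed) {
        Random rng = new Random(seed);
        int unexpected = 0;
        for (int i = 0; i < iterations; i++) {
            byte[] raw = new byte[rng.nextInt(16) + 1];
            rng.nextBytes(raw);
            String input = new String(raw);
            try {
                parsePort(input);
            } catch (IllegalArgumentException expected) {
                // NumberFormatException (a subclass) and the range check
                // are the documented failure modes: fine.
            } catch (RuntimeException bug) {
                unexpected++;
            }
        }
        return unexpected;
    }

    public static void main(String[] args) {
        System.out.println("unexpected failures: " + fuzz(10_000, 42L));
    }
}
```

Real fuzzing tools add input mutation, coverage feedback, and crash triage; this sketch only illustrates the core idea of rejecting undocumented failure modes.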
Both fuzz testing and the use of static code analysis complement abuse case and
functional testing. Once all of this testing is complete, both for normal and security
testing, the security engineer can be confident that due diligence was done to reduce the
vulnerabilities in the software. These vulnerabilities should be mitigated or justified
before presentation to the C&A team.
6.5.1 Distribution
Secure distribution and deployment practices and mechanisms should be incorporated
into the program. When software is handed over, there needs to be assurance that the
software has not been altered.
This is done primarily through confidentiality and
integrity mechanisms.
The most notable mechanisms are hashing and signing. When software is ready for
delivery to the integration lab or to the customer, it should be hashed using SHA-256, SHA-384, SHA-512, or the newest standard [103].
A digital signature should also be
included. The software, hash and digital signature should all be included in the delivery.
At arrival, the digital signature must be verified and the software will be rehashed. The
new hash and the hash included with the delivery should be compared, to verify the
integrity of the code.
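The hash-and-compare step can be sketched with the standard Java MessageDigest API; the artifact contents and class name here are illustrative, and a real delivery would also verify the digital signature:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DeliveryHash {
    // Compute a SHA-256 hash of the delivered artifact's bytes, rendered
    // as lowercase hex for comparison against the hash shipped with the
    // delivery.
    public static String sha256Hex(byte[] artifact) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(artifact)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            // SHA-256 is mandated by the JCA specification, so this
            // should never happen on a conforming JVM.
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] software = "release-1.0.bin contents".getBytes(StandardCharsets.UTF_8);
        String shippedHash = sha256Hex(software);  // computed at the sender
        String recomputed  = sha256Hex(software);  // recomputed on arrival
        System.out.println("integrity verified: " + shippedHash.equals(recomputed));
    }
}
```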
6.5.2 Maintenance
Secure sustainment practices must be incorporated into the program, so that software
will adequately perform throughout the life of the program.
Maintenance from a
functional aspect requires that code be written modularly. This principle is known as
loose coupling. This will allow for easier maintenance of the code.
Software assurance also requires maintenance. One of the primary areas that must be
developed is patch management. Every month, new Information Assurance Vulnerability Alerts (IAVAs) are issued; these identify security patches that must be integrated into the system.
Software patches must not be integrated directly into the system, but must be tested
for patch dependability and functional effects before being put into the operational realm.
Patch management is used to correct security flaws in software and firmware.
Vulnerabilities within software are typically classified at the same level as the software.
Therefore the patches must be handled at the same level as the software they will be
integrated into. Many patches rely on older patches in order to work. A system must be
created to maintain the patches and which patches have been applied to the different
systems. All patches must be tested in the lab before being put onto the production
system, to prevent any unintended side effects. This is especially critical in flight-critical systems and the like, where a bad patch could have catastrophic consequences.
Once patches have been downloaded, and tested, the patches must be hashed then
signed before being distributed for inclusion into the system. This is to ensure correct
configuration management of the patches. Once the patches have been distributed, the
patches need to be installed and tested to ensure they were installed correctly. A copy of
the patch must be sent to maintenance to ensure it is put into the database. A database
containing patches and a listing of which patches are on which system are crucial in
ensuring that all systems get patched, and with the right patches (version control). For
example if dealing with a fleet of unmanned aircraft, there could be serious
interoperability consequences if there is a mismatch on patches between aircraft and
ground station.
The patches must then be deployed to the systems during regular maintenance. The
system must then be tested to ensure that the patch has been installed correctly. Once the
patch has been installed, the maintenance database must be updated to reflect this. A copy of both the un-patched software and the patched software must be backed up.
6.5.3 Backup
Backup of software both during development and during operation is essential. If
there are any attacks on the system, a reliable backup should be available to rebuild the
system. During routine work on the system, there are cases when systems or software are
corrupted. Availability is a core concept of information assurance.
Availability of the software relies not only on well written code, but having a plan in
case of a disaster. If there is no backup software, the restoration of a system will be
virtually impossible.
Backup of baselined software should be done at least weekly, if not daily. Each
developer should be in the habit of saving and backing up work periodically throughout
the work day. During operations, the DoDI 8500.2 IA controls require that copies of all
operational software are held offsite, or are onsite in fire rated containers.
Each phase of the software development lifecycle presents unique opportunities for including security. Software assurance is the main objective and will be reached if each element in the lifecycle meets its security objective. The end result will be secure
software that is reliable, dependable and effective.
6.6 Secure Coding
Writing code that is secure is vital to producing code that is more resistant to
vulnerabilities. With the advent of point-and-click attack tools, attackers are increasingly able to take advantage of code flaws. The program’s Software Assurance Plan
covers the software development lifecycle aspects that help provide security.
The development of secure code relies heavily on including foresight of security in
each stage of the development. One of the largest stages is the actual writing of the
software.
The outcome of secure software is achieved using safe functions, safe designs, and
compiler options. Each language contains unique functions that can contribute to secure
code or can create vulnerabilities. C/C++ and Java have potentially unsafe functions.
The languages being developed using the .NET framework, C# and Visual Basic, rely on built-in security supported by the .NET framework, and have different security
considerations than C/C++ and Java.
Although there are different security issues from language to language, these all stem
from the same types of vulnerabilities. These include integer/math errors, string handling, buffer overflows, command injection, and user interfacing. These vulnerabilities and
the principles for preventing them are covered in the following section.
6.6.1.1 Secure Coding Principles
The main cause of costly vulnerabilities comes from input. Input can come from a
human, another system, or internally. Secure code will run correctly whether the input is
correct or unexpected. Malicious attacks stem from the ability to manipulate input
flowing to the software. Unexpected input can cause the same undesired effect as a
malicious attack if the software is not fashioned to handle it. In order to create software
that is robust enough to handle all types of input, the following areas must be considered
[84], [104].
6.6.1.2 Input Validation
Do not trust any entity to input good data. When validating data the application shall
check for known-good data and reject any data not meeting these criteria. It does not
matter where the input comes from. Input may come from a user, data store, network
socket, or other source. For web applications, in order to prevent some types of
manipulation, the developers shall ensure that the character set on the web page is set,
and is not allowed to be variable.
One method for handling input validation is checking the input for any characters that
are invalid. For example, if the user is supposed to be entering a phone number, only
numbers should be allowed, any other character should be scrubbed. Input validation can
prevent multiple types of attacks such as SQL injections, command injections, etc.
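The phone-number check above can be sketched as an allow-list validation in Java; the class and method names are illustrative:

```java
import java.util.regex.Pattern;

public class InputValidation {
    // Allow-list validation: accept only strings that match the known-good
    // pattern (here, exactly ten digits for a US phone number) and reject
    // everything else, rather than trying to enumerate bad characters.
    private static final Pattern PHONE = Pattern.compile("[0-9]{10}");

    public static boolean isValidPhone(String input) {
        return input != null && PHONE.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidPhone("7195551234"));            // accepted
        System.out.println(isValidPhone("719555123"));             // too short: rejected
        System.out.println(isValidPhone("7195551234; rm -rf /"));  // extra characters: rejected
    }
}
```

Checking against known-good criteria, rather than scrubbing known-bad characters, is the safer default because the reject set is never complete.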
6.6.1.3 Injection Vulnerabilities
One of the most vulnerable areas is the database [105]. One of the most common languages used to set up databases is SQL. In order to prevent injection
attacks, input validation is critical. Validate all user input, allowing only known good
input through. The developers shall use prepared or parameterized statements. They
shall not use string concatenation or string replacement to build SQL queries. Access to
the database shall be through views not directly to underlying tables in the database.
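A minimal Java sketch of the difference between string concatenation and a parameterized JDBC statement follows; the table and column names are hypothetical, and the safe variant assumes an already-open JDBC connection:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {
    // UNSAFE: concatenation lets input such as  x' OR '1'='1  rewrite the
    // query so the WHERE clause is always true.
    static String unsafeQuery(String username) {
        return "SELECT * FROM users WHERE name = '" + username + "'";
    }

    // SAFE: the SQL text is fixed; the input is bound as a parameter and
    // can never change the statement's structure.
    static ResultSet lookup(Connection conn, String username) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM users WHERE name = ?");
        ps.setString(1, username);
        return ps.executeQuery();
    }

    public static void main(String[] args) {
        String attack = "x' OR '1'='1";
        // The concatenated statement has been rewritten by the input:
        System.out.println(unsafeQuery(attack));
    }
}
```

Parameterization should still be combined with input validation and with access through views, as described above.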
This is not the only type of injection vulnerabilities, another is command injection.
Command injection attacks are attempts to inject unwanted data into an application for
the purpose of executing operating system shell commands. As is expected, this is most
often exploited through un-validated input.
Preventing this type of vulnerability can be done by [84]:
• Identifying all potential interpreters or compilers used to pass data.
• Identifying the characters that modify the interpreters’ behavior.
• Constructing input strings containing these characters, passing them to the application, and observing whether they cause a visible effect on the system.
• Ensuring all input vectors are tested in this manner.
6.6.1.4 Integer and Math Vulnerabilities
There are three kinds of integer arithmetic issues that could lead to security vulnerabilities [84]:
• Signed vs. unsigned mismatches (for example, comparing a signed integer to an unsigned integer).
• Truncation (for example, incorrectly truncating a 32-bit integer to a 16-bit integer).
• Underflow and overflow (for example, the sum of two numbers exceeds the largest possible value for the integer size in question) [106].
These can be caught using static code analysis or manual code reviews. In order to find these vulnerabilities, the following tests shall be performed [84]:
• Input negative values for numeric input.
• Input border case values.
• Input extremely large string values (> 64k).
• Input strings whose lengths equal border cases (32k, 32k-1, 64k, 64k-1).
• As a general best practice, fuzz testing can uncover many integer related vulnerabilities.
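A small Java sketch of guarding against overflow and truncation, using the standard Math.addExact and an explicit range check (the class name is illustrative):

```java
public class SafeArithmetic {
    // Overflow guard: Math.addExact throws ArithmeticException on
    // overflow instead of silently wrapping around.
    public static int checkedAdd(int a, int b) {
        return Math.addExact(a, b);
    }

    // Truncation guard: range-check before narrowing a 32-bit value to
    // 16 bits, instead of casting blindly.
    public static short toShortChecked(int value) {
        if (value < Short.MIN_VALUE || value > Short.MAX_VALUE) {
            throw new ArithmeticException("value does not fit in 16 bits: " + value);
        }
        return (short) value;
    }

    public static void main(String[] args) {
        System.out.println(checkedAdd(1, 2));
        try {
            checkedAdd(Integer.MAX_VALUE, 1);
        } catch (ArithmeticException e) {
            System.out.println("overflow caught: " + e.getMessage());
        }
    }
}
```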
6.6.1.5 String Vulnerabilities
String vulnerabilities are an issue in C/C++ code, but can be present in other
languages, if they do not perform input validation [107]. In C/C++ the %p (pointer)
specifier should be used with caution, and the %n (number of characters written) should
be avoided. Validate all input before passing it to a function, allowing only good data to
pass. Format strings used by the application should only be accessible by privileged
users [84].
6.6.1.6 Buffer Overflow
A buffer overflow is a vulnerability where data is written beyond the end of an allocated memory block or below an allocated block (a buffer underflow). Buffer overflows are usually exploited through un-validated input. In order to find buffer overflows during testing:
• Use static analysis tools that are known to find this class of vulnerability with few false positives.
• Validate all input before use, allowing only known-good input through.
• Replace known-insecure functions with safer functions (many of these are listed in the appendices).
• Recheck all calculations to ensure buffer sizes are calculated correctly.
• Recheck all array access and flow control calculations.
• (C++) Replace character arrays with STL string classes.
• Use compile-time options that add compiler buffer overrun defenses.
6.6.1.7 Canonical Representation
Canonical representation issues arise when the name of a resource is used to control
resource access. An application relying solely on a resource name to control access may
incorrectly make an access control decision if the name is specified in an unrecognized
format. Do not rely solely on resource names to control access [104]. If using resource
names to control access, validate the names to ensure they are in the proper format; reject
all names not fitting the known-good criteria [84], [108]. Use operating system based
access control mechanisms such as permissions and ACLs.
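In Java, one sketch of this principle is to compare canonical paths before granting access; the directory and file names are illustrative, and as noted above, operating system permissions and ACLs should still be applied:

```java
import java.io.File;
import java.io.IOException;

public class CanonicalCheck {
    // Decide access by comparing canonical paths, so that "../" sequences
    // and alternate spellings of the same resource name cannot escape the
    // permitted directory.
    public static boolean isInsideAllowedDir(File allowedDir, String requestedName) {
        try {
            String base = allowedDir.getCanonicalPath() + File.separator;
            String requested = new File(allowedDir, requestedName).getCanonicalPath();
            return requested.startsWith(base);
        } catch (IOException e) {
            return false; // fail closed on names that cannot be resolved
        }
    }

    public static void main(String[] args) {
        File docs = new File("docs");
        System.out.println(isInsideAllowedDir(docs, "report.txt"));    // inside: allowed
        System.out.println(isInsideAllowedDir(docs, "../etc/passwd")); // escapes: rejected
    }
}
```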
6.6.1.8 Hidden Fields
A “hidden” field vulnerability results when hidden fields on a web page, values in a cookie, or variables included in the URL can be used for malicious purposes. Remove hidden elements from web pages if they are not needed. Do not use hidden elements to store values affecting user access privileges.
6.6.1.9 Application Information Disclosure
Information disclosure vulnerabilities are leaks of information from an application
which are used by the attacker to perform a malicious attack against the application. This
information itself may be the target of an attacker, or the information could provide an
attacker with data needed to compromise the application or system in a subsequent attack.
Attacks may come in several forms such as:
• Inducing errors in the program to verify the contents of error messages.
• Attempting variations of invalid input combinations to determine if the response contents or timing reveal information about the system.
• Attempting to access data the user should not be able to access.
To prevent disclosures:
• Ensure an access control policy is in place to enforce access control of the data.
• Display generic error messages to end users; do not give specifics.
• Log specific error information for application administrators.
• Ensure the application responses do not divulge unneeded details.
6.6.1.10 Race Conditions
A race condition occurs when two applications, processes, or threads attempt to
manipulate the same object. Race conditions occur when developers do not consider
what will happen if another entity modifies an object while it is in use. To prevent race conditions, developers shall minimize the use of global variables and use thread-safe and reentrant versions of functions. Also, developers shall ensure ACLs are enforced on application resources, restricting which users can access the protected resources [84], [109], [110].
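A minimal Java sketch of avoiding a lost-update race by replacing a shared count++ with an atomic operation (the class name and counts are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterRace {
    // Thread-safe counter: AtomicInteger makes the read-modify-write a
    // single atomic operation, so two threads cannot interleave and lose
    // updates the way they can with a plain count++ on a shared int.
    static final AtomicInteger count = new AtomicInteger();

    public static int incrementManyTimes(int threads, int perThread) {
        count.set(0);
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) {
                    count.incrementAndGet(); // a bare count++ here would race
                }
            });
            workers[t].start();
        }
        try {
            for (Thread w : workers) w.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
        return count.get();
    }

    public static void main(String[] args) {
        // With the atomic counter the result is always exact.
        System.out.println(incrementManyTimes(4, 100_000));
    }
}
```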
6.6.1.11 Dead Code
Dead Code should be removed. If unused code is left in a program, it can be
manipulated to change working code. This is especially dangerous in code where there is
user interaction.
6.6.2 Secure C# and .NET
Secure software can only be achieved if the development environment, development
team, and coding practices follow a solid security policy. Policy does not deal with the
technical implications of security.
In order for the program to meet information
assurance directives, a combination of policy and technical requirements must be levied.
On the policy side, the program must follow a strict CMMI process for code
production, typically CMMI level 3 and above is required. Code must be developed in a
secure environment, requiring configuration management and version control.
The
development environment must contain both personnel policy and technical controls.
This is also known as evidence-based security [111].
Development is often done in integrated development environments (IDEs) such as Visual Studio 2010 and Eclipse. Some IDEs have options that can be used to create securely compiled code. Eclipse will be discussed in the Java section.
6.6.2.1 Integrated Development Environments
The Visual Studio 2010 IDE comprises both a compiler and a debugger. Visual Studio can target the .NET framework when developing applications for the Windows environment; it can also compile C and C++ code without relying on the .NET framework. The two flags that should be turned on during compile time are shown in Table 4 [112].
Aside from using compiler flags, the .NET framework provides several security features, including the common language runtime, class libraries, and assemblies [106], [113], [112], [104], [114], [115]. C# is built on .NET and so can use these features.
Option      Description
/GS         This compiler option, which is on by default, instructs the compiler to inject overrun detection code into functions that are at risk of being exploited. When an overrun is detected, execution is halted.
/SAFESEH    This linker option applies to x86 only. The option instructs the linker to include in the output image a table containing the address of each exception handler. At run time, the operating system uses this table to make sure that only legitimate exception handlers are run. This helps prevent the execution of exception handlers introduced by a run-time hacker attack. The option is enabled by default in the OS build system, but disabled by default when the linker is invoked directly.
Table 4: Compiler Options
6.6.2.2 Assemblies
The common language runtime (CLR) performs just-in-time (JIT) compilation. This
translation from the managed code to the native language allows for close control of
potential security flaws within the code.
Code compilation is also a point at which security must be considered. Code access security (CAS) checks assembly code to ensure that granted permissions are not exceeded while executing on a computer system [116].
Developers can actively run security checks as well. There are two types: imperative and declarative. At runtime, an imperative check calls the core security engine to request a demand. An imperative check can also override portions of the stack walk operation. Similar checks are provided by declarative checks, but these are run as custom attributes, which are evaluated at compile time. Declarative checks have additional measures that can be implemented at JIT-time.
The last step for safety of managed code at runtime is the verification process. During
JIT compilation, the CLR verifies all managed code to ensure memory type safety.
6.6.2.3 .NET Libraries and Other Features
The .NET framework also provides protection through managed wrappers [117].
Managed wrappers are used when some useful functionality is implemented in native
code that you want to make available to managed code. To do this, use either platform
invoke or COM interop [111]. Those needing to use the wrappers must have unmanaged
code rights to be compatible. It is better to have the wrapper code contain the rights,
instead of the underlying functionality. Wherever a resource is exposed, the code must
first ensure the permissions are appropriate [118].
Developers shall include permission requests in applications that access protected
resources [119], [120]. Code shall request the minimum permissions it must receive to
run. Developers shall ensure code receives only the permissions that it actually needs. A
PolicyException is raised if the correct permissions are missing and the code will not run.
Developers should explicitly decline access to resources which are not needed, using
the “Refuse” request. Limiting the scope of the permission set allows developers to
enforce least privilege.
Another security feature, IsolatedStorage, allows for protection of access permissions. It supports a special file storage mechanism that forces repositories to be kept isolated from each other [118]. It also ensures that specific file system characteristics are not revealed.
Code that has security functionality must be protected with extra mitigations. The
best way to protect data in memory is to declare the data as private or internal.
Developers should be aware that the use of reflection mechanisms can allow get and set
access to private members. For serialization, highly trusted code can effectively get and
set private members as long as there is access to the corresponding data in the serialized
form of the object. The last issue is that under debugging, this data can be read [121].
Data can be declared as "protected," with access limited to the class and its
derivatives [116]. However, developers should take the following additional precautions due to the additional exposure:
• Control what code is allowed to derive from a class by restricting it to the same assembly, or by using declarative security to require some identity or permissions in order for code to derive from your class.
• Ensure that all derived classes implement similar protection or are sealed.
Some methods might not be suitable for arbitrary untrusted code to call. When this is the case [119]:
• Limit the scope of accessibility to the class, assembly, or derived classes, assuming they can be trusted. This is the simplest way to limit method access. Do not infer trust from the keyword protected, which is not necessarily used in the security context.
• Limit the method access to callers of a specified identity.
• Limit the method access to callers having specific permissions.
Similarly, declarative security allows control of inheritance of classes. InheritanceDemand allows the following [119]:
• Require derived classes to have a specified identity or permission.
• Require derived classes that override specific methods to have a specified identity or permission.
Risky permissions are listed in Table 5.
Developers shall strictly control permissions. Developers shall not use the previous
methods unless absolutely necessary. In the event that it is necessary to use one of the
functions, mitigations will be put in place to ensure that the security state is not
compromised.
Permission                  Potential Risk
SecurityPermission
  UnmanagedCode             Allows managed code to call into unmanaged code, which is often dangerous.
  SkipVerification          Without verification, the code can do anything.
  ControlEvidence           Invalidated evidence can fool security policy.
  ControlPolicy             The ability to modify security policy can disable security.
  SerializationFormatter    The use of serialization can circumvent accessibility mechanisms. For details, see Security and Serialization.
  ControlPrincipal          The ability to set the current principal can trick role-based security.
  ControlThread             Manipulation of threads is dangerous because of the security state associated with threads.
ReflectionPermission
  MemberAccess              Can use private members to defeat accessibility mechanisms.
Table 5: Risky Permissions
6.6.3 Secure Java
Java, like C/C++, contains deprecated functions, which are listed in the appendices.
These functions should not be used. There are several IDEs that will be used in the
program for Java. Eclipse is one of these. Eclipse is built on OSGi. As explained in
[122] OSGi specifies three different security permissions, namely AdminPermission,
PackagePermission, and ServicePermission. The purpose of each of these permissions is
as follows (for more detailed descriptions refer to the OSGi specification):
• AdminPermission - grants the authority to a bundle to perform administrative tasks, like bundle life cycle operations or any task that accesses sensitive information.
• PackagePermission - grants the authority to a bundle to import or export a given package or set of packages.
• ServicePermission - grants the authority to a bundle to register or get a given service interface or set of service interfaces.
There are several guidelines that Java applications should follow [123], [124]. These
are similar to many of the C/C++ issues.
6.6.4 Secure C/C++
Developing secure code in C and C++ relies on correct program design and the
avoidance of deprecated and unsafe functions [125], [126]. In the event that a deprecated
or unsafe function must be used, the vulnerability it presents must be mitigated.
For example if a developer wants to use a function that could lead to a buffer
overflow, the developer shall provide boundary checks on the object before it is used.
These mitigations are outlined previously through best practices. Robert Seacord’s book [107] is an excellent source of information on security flaws that should be avoided when writing C/C++ code. Each of the areas covered previously, such as buffer overflows, string vulnerabilities, and integer vulnerabilities, is covered in depth in [107].
6.6.5 Database Security
Database security relies heavily on input sanitization, as explained in the best practices section. The author of [105] goes into many aspects of protecting databases, such as access control and encryption options. However, relying on outside protection is not enough. Databases are especially vulnerable to un-validated input.
There are several remedies for input validation. The first is the sanitization of data as it comes in. When a user can send requests to a database (such as a username and password), the data should be checked to ensure that it contains only appropriate data. For example, characters such as =, ', and * should not be allowed.
The second mitigation is input parameterization. Parameterization can be used to keep
input from corrupting a SQL statement. Instead of dropping input into the statement, the
input is associated with a parameter. This means that the SQL command will act on the
parameter and not on the actual input. This will provide some protection from rogue
input affecting tables.
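Parameterization as described above can be sketched with Python's built-in sqlite3 module (the table, column names and data are illustrative): the ? placeholder binds the input as a parameter, so the SQL command acts on the parameter rather than on the raw input.

```python
import sqlite3

# In-memory database standing in for a real application database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup(name: str):
    # The input is bound to the ? parameter; it cannot alter the
    # structure of the SQL statement itself.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection string is treated as literal data and matches nothing.
assert lookup("alice") == [("s3cret",)]
assert lookup("' OR '1'='1") == []
```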
Database security should have access mechanisms as well as other security measures
in place, but those mechanisms should not be used without considering input validation.
If these mechanisms are used in conjunction with proper sanitization and with
parameterization, this will provide a solid security posture for the database.
Software Assurance is a necessary step in securing a platform, but it comes at a price.
This is true of all aspects of security. One of the difficulties is that the price is
currently not calculable. The next chapter addresses this issue.
Chapter 7
Cost of Security
As with anything in business it all comes down to money. Whether looking short
term or long term, the end goal of a business is to make money. In the defense industry,
there is a goal to provide for the war-fighter certainly, but defense companies also have
shareholders to which they are accountable. On the government side, they have a specific
budget with which to work.
Building security into the cyber infrastructure is an essential part of adequately
providing for the war-fighter.
However, security has traditionally been seen as a
roadblock to providing affordable systems. Although cyber security has been around for a
couple of decades now, building security into the entire life cycle of a DoD program is
still complicated. This leads to challenges in several areas of providing affordable
security.
Without early integration of security requirements, significant expense may be
incurred by the organization later in the life cycle to address security considerations that
were not included in the initial design [6]. One area of difficulty is cost estimation,
because historical cost data for total cyber security costs is virtually non-existent.
Another area is understanding total life cycle cost, since the maintenance of built-in
security is just beginning. Some of the information can be extrapolated, but to take the
information and apply it to programs of different complexity can be difficult.
In other areas of engineering, metrics are used to measure the effectiveness of how
money is being spent. In software engineering, source lines of code (SLOC) count and
other metrics are used as a basis of estimating software expense [127]. In networking
there are metrics associated with the cost of running a network operations center and the
installation/maintenance of network infrastructure [128]. In security there are metrics on
the security of a system, such as adequate boundary defense and mitigations for protecting
confidentiality, although these tend to be somewhat subjective. I could not find a
defined set of metrics on the cost of security that would spot inefficiencies and make
security more affordable.
7.1.1 Contributions
The rest of this chapter will look at the metrics used in software engineering disciplines
and how they relate to security engineering. We offer (1) a means of reducing future cost
through new security metrics, and (2) a comparison of the proposed security metrics to
already proven software metrics, showing that the proposed metrics can be used to reduce
cost.
In order to show the potential effectiveness of the proposed security metrics, I will show
their relation to software metrics. The software metrics already in use have been proven
to reduce cost, and so I endeavor to show that a similar outcome may come from the
application of the proposed security metrics.
7.2 Background
Security economics deals with decision making on allocating scarce resources among
several purposes [129]. Cost justification is seen as the calculation of cost savings from
implementing security measures, as well as the break-even, time-based calculation
technique known as Net Present Value (NPV). NPV is calculated by multiplying the
expected benefits and costs by a present value factor, usually related to the expected
Return on Investment (ROI) [130]. This is useful in determining if adding a security
feature will be of benefit, but alone does not fully describe the cost of building security
into a system from scratch. This is the typical calculation done by security engineers in
determining if a mitigation should be implemented. A different type of metric is
necessary when looking at the whole of a security architecture.
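The NPV calculation described above can be sketched as follows (the discount rate and cash flows are illustrative assumptions, not figures from any program): each period's net benefit is discounted back to present value, and a positive total suggests the mitigation pays for itself.

```python
# Minimal NPV sketch for a security investment decision.
def npv(rate: float, cash_flows) -> float:
    """Discount each period's net (benefit - cost) back to present value."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: purchase and integrate a mitigation (net outflow);
# years 1-3: expected loss avoidance minus licensing (net inflows).
flows = [-50_000.0, 22_000.0, 22_000.0, 22_000.0]
decision = npv(0.10, flows)  # positive suggests the mitigation is worthwhile
```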
One of the difficulties in achieving an acceptable level of risk in regulatory
constrained systems is that often security risk aversion is very high [131]. Without
having appropriate metrics, it is difficult to tell if it is too expensive to meet the risk level.
As of now, many security engineers are either forced to go over budget to get the system
to an acceptable level or must seek acceptance of a security posture that does not
necessarily meet the perceived need. I have observed this on multiple projects.
Risk-based planning in the software engineering world offers some valuable direction
that can be applied to cyber security. The need for defining the cost of software led to
numerous research papers on calculating the cost of a line of code. One author writes of
the Correctness by Construction methodology, shown to reduce cost [132]. With this
method, there are seven key principles that software developers should follow. I will
show how these principles are very similar to my proposed security metrics. The first
principle is to expect requirements to change. The second is to know why you are
testing. The third is to eliminate errors before testing. The fourth is to write software that
is easy to verify. The fifth is to develop incrementally. The sixth is that some aspects of
software development are just hard. The seventh is the idea that software is not useful by
itself, that it must be developed in concert with the enterprise architecture, business
processes, concept of operations (CONOPs), etc. How these principles can be similarly
applied in the security arena will be discussed in the following subsections.
7.3 Approaching Security from a Software Perspective
The expectation of requirements change is also directly applicable to security. As a
system is designed, and components change their interactions, the information assurance
requirements must change to ensure that appropriate mitigations remain. Requirements
volatility is an element of cost in every program. Aside from changing design, there is a
cost associated with putting together a change request and having it reviewed by all the
stakeholders. There is a relationship between requirements volatility and cost [133].
Volatility comes from multiple sources and affects the whole software development
life cycle [133]. This relates strongly to the requirements volatility of a system life cycle.
The authors of [106] describe three dimensions of uncertainty. The first is requirements
instability, which reflects the extent of changes in user requirements during the entire
project. The second is requirements diversity, which relates to how users differ about the
understanding of a requirement. The third dimension is requirements analyzability: the
extent to which an objective requirement can be laid out based on a requirements
specification.
Not only does requirements volatility affect cost, it also increases a
project’s risk.
An example of requirements volatility can be seen when security boundaries change
during a program. This can change the security posture of a system with respect to the
certification and accreditation portion of the system. Common practice allows for the
inheritance of mitigations from one piece of a system to another piece of the system. This
can easily be done if the equipment is within the same boundary. It is more difficult to
accomplish if they are under different accreditation boundaries.
The frequent interaction of developers and customers leads to a lessening of
requirements volatility [133]. From experience, I have seen this to be true in the security
realm as well. It is critical for security engineers to work with their customers as early in
the life cycle as possible in an effort to stabilize requirements.
Knowing why you are testing, the second principle, is also important in the world of
security. In the world of software, you are typically testing for specific functionality.
Software assurance requires that developers test for vulnerabilities.
Not all
vulnerabilities are created equal. As information assurance is about risk mitigation and
not risk elimination, it is important to know which vulnerabilities to test for and to fix. In
some cases, such as navigation systems, it may be necessary to fix everything, but in
other cases it might be cost effective to fix only critical vulnerabilities.
Reducing errors before the testing phase can reduce the effort and cost involved with
testing software. This could be applicable to the security realm, depending on how you
define security testing. In order for a system to be accredited under DoD Information
Assurance Certification and Accreditation Process (DIACAP), it is necessary to run Eye
Retina scans. The use of hardening tools such as Gold Disk, to determine which
configurations and patches need to be fixed, can be considered part of the testing phase.
This is not completely correct though, because these tools are used through the total life
cycle of the system to maintain the security posture of the system. I would propose
penetration testing to be more in-line with the testing phase of the software life cycle.
Waiting for a penetration testing team to discover vulnerabilities in the system can be
expensive. This is due to the fact that any changes that have to be made at this point will
most likely require regression testing of the system, not just testing the security
configuration. Also, if the configuration item is being used for flight critical operations
there can be a safety impact as well, due to Federal Aviation Administration requirements
[134].
For example, when securing a remote system and its control station there is a need to
ensure that command and control (C2) data is highly available. An intrusion detection
system (IDS) can be used to detect malicious traffic. An intrusion prevention system
(IPS) may not be allowed in the remote system if there is the potential for it to block C2
traffic, due to safety concerns.
If system safety allows for an IPS and a configuration
change needs to be made, significant testing may be required to ensure that C2 data is not
blocked.
Write software that is easy to verify. Although this doesn’t translate exactly into
system security, the same concept can apply. As the system security architecture is being
built and mitigations are being put into the system, ensure that this information is
documented and laid out in a manner that is easy to verify.
Through the years, there have been multiple requirements in the DoD arena on how
to put together a certification and accreditation package. From the DoD Information
Technology Security Certification and Accreditation Process (DITSCAP) to DIACAP
and now to the National Institute of Standards and Technology Risk Management
Framework (NIST RMF), the layout of the information is constantly changing. It is
necessary, if a program has to change from one C&A process to another, that the
information be in a format that is usable. In a sense it does not matter which C&A
format is required. The same information will still need to exist, so that the security
posture of the system can be verified. Running a Retina scan and Gold Disk only verifies
a piece of the overall security posture. The justification of why a control is applicable or
not, and how it is applied, is a much larger piece of the puzzle.
The ability for a security
engineer to go through this information quickly, and to see what the impact is if
something changes is directly tied to how affordable the security posture is.
If a security engineer cannot synthesize this information quickly, there is a tendency
to add in extra mitigations, “just to be sure.” If the information is in a usable format,
it allows for better trade studies and allows engineers to better judge the level of risk.
Incremental development can be applied at the architecture level to a degree. As a
system is being built, it is necessary for the system security engineers to continually
perceive how each component is being added into the system, assessing its impact on the
security posture of the system and then applying the appropriate controls. This is what
the NIST SP 800-37 refers to as continuous monitoring [6]. Incremental development
with respect to security should also take into account the dependencies on other
engineering disciplines.
For example, system security engineers should work with the safety engineers early
on to establish how the security mitigations might impact safety. The choice of
cryptographic algorithm, for instance, will affect the probability of a bit flip impacting
the system (stream ciphers encode data differently than block ciphers). Due to the way
they handle the data, stream ciphers can be used to ensure that a single bit flip will not
trigger a failure.
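This stream-cipher property can be illustrated with a toy sketch (the hash-based keystream below is a stand-in for illustration only, not a real cipher): flipping one ciphertext bit corrupts exactly one plaintext bit and leaves the rest of the message intact, whereas with a block cipher the entire affected block would decrypt to garbage.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream from iterated hashing -- illustrative only, NOT a
    # real cipher. A stream cipher XORs plaintext with such a keystream.
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

msg = b"altitude hold on"
ct = xor(msg, keystream(b"key", len(msg)))

# Flip one bit of the ciphertext in transit.
corrupted = bytearray(ct)
corrupted[0] ^= 0x01
recovered = xor(bytes(corrupted), keystream(b"key", len(msg)))

# Exactly one bit of one byte is affected; the rest decrypts intact.
assert recovered[1:] == msg[1:]
assert recovered[0] == msg[0] ^ 0x01
```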
Some aspects of software development are just hard. This is also true of security
engineering. For example, when dealing with modified COTS, it is not always possible
to get an adequate level of protection accomplished at the box level. A combination of
inherited protection and minor modifications is typically used. There are some things
that seem hard, but with some changes might be easy to sort out.
For example, it is difficult to adequately look at the cost component of security
devices without access to the cost of the devices. Cost needs to be one of the components
of trade studies. Access to the cost of devices should be available in a timely manner to
ensure that the lowest cost, technically acceptable solution is achieved.
This can
sometimes be difficult if the engineering teams are not working closely enough with their
global supply chain or in discussing the overall cost of government furnished equipment
with the customer.
Culture is another aspect that can be difficult to overcome. Although engineers want
to produce the most elegant systems, these are typically not the most affordable. There
needs to be a culture change. The cost of components is not just the initial procurement
cost, but should also consider the total life cycle of the system. Factoring this in requires
a change in the acquisition life cycle. Until it is a requirement on the customer end, it
will in many cases be lacking on the contractor end.
The last principle is that software alone is not useful; it must be created in
conjunction with system dependencies. This is very true of security development as well.
Security should not be added in just for security's sake, but to reduce the total risk of the
system to an acceptable level. A security engineer cannot apply security mitigations
unless the system is understood and the context of the system within its environment and
its CONOPS are understood. The flow of information within the system and the flow
across security boundaries are pieces of information on which security engineers are
dependent.
The concept of operations is especially crucial on mobile or semi-fixed platforms. If
the system is only used within the Continental US (CONUS) there is a level of risk that is
acceptable that would not be acceptable if the system is meant to be used in areas where
access may be challenged or denied.
7.4 Principles into Metrics
Affordability can be increased in several different ways, such as being more efficient,
using less expensive materials, etc. It is difficult to understand how to make security
more affordable if there is not a way to measure expenditure both in time and cost. This
leads to the need for metrics. In looking at the principles above, I propose the following
system security metrics to determine if a program is allocating security resources as
affordably as possible. Table 6 outlines my proposed metrics. In order to show how to
apply the metrics, each one will be demonstrated on the patient monitoring system shown in
Figure 7-1. This is further explained in Appendix A.
As discussed with respect to the first principle, security requirements change. If the
posture of a system changes, the requirements can also change. Security engineers
should begin tracking points in the life cycle where requirements become volatile, and if
there is a way to reduce the volatility.
In looking at Figure 7-1, it should be understood that it was developed by someone
who has created an architecture previously and that the system is quite simplistic. I
performed a requirements decomposition of NIST SP 800-53 all the way to the
HW/SW/Supplier level in order to measure the time taken. It took roughly 100 hours.
Table 6 Security Metrics

Metric: Architecture and Requirements Development
Measures: Labor Hours based on:
- Complexity of system
- Engineer’s expertise level
- Requirement set

Metric: Cryptographic Architecture Development
Measures: Labor Hours based on:
- Complexity of system
- Engineer’s expertise level

Metric: Mitigation Cost
Measures: Cost of Equipment
- Cost of initial procurement
- Maintenance cost (such as licensing)

Metric: System Cost
Measures: Effect on System
- Size, Weight and Power (SWaP)
- Redundancy needs
- System Requirements (such as OS, memory size, etc.)

Metric: Maintenance Cost
Measures: Labor Hours (cyclical)
- Running scans (Retina, SCAP, Gold Disk)
- Documenting issues
- Implementing fixes (if this requires new equipment, calculate mitigation cost)
- Regression testing

Metric: Testing
Measures: Labor Hours for Each Test Level
- Line Replaceable Unit (LRU) level
- Subsystem Level (Sys II)
- System Level (Sys III)
- Operational Test (may include a penetration test)
- If applicable: Flight Test or equivalent
Measures: Test Type
- Type of test used for each requirement, such as test, analysis, inspection
First the system will need a security architecture.
This is one of the major tasks
associated with securing the system. The metric should include the time associated with
actually putting an architecture together with respect to size and complexity of the
system.
Figure 7-1 Patient Monitoring System
The architecture as depicted in Figure 7-1 took one week to design and put into Visio
format. This is a very simple system. In the real world, a more sophisticated architecture
can take months to years to develop.
The requirement set is decomposed at the system level directly from the regulatory
document, less customer requirements and other out-of-scope items. Then, as the
subsystems are defined, the requirements are decomposed at the SDR/FRD level. As the
architecture develops and the lower level components are determined, the lowest level
requirements are decomposed. Security requirements are interesting in that their
application is sometimes determined by the design and in other cases determines the
design. This process of allocating security requirements, allocating their mitigation and
design elements into the architecture, and preparing the sketches and baseline security
architecture, following my method as described in chapter 4, took roughly 40-50 hours
on top of the hours required for requirement decomposition.
This also captures the changes to the requirements based on design decisions.
Measuring these items is what I describe as the Architecture and Requirement
Development Metric. The number of hours required to perform this work can drastically
change depending on the experience level of those performing the work. Keeping this
metric allows for an understanding of the time associated with the creation of these items.
This is important to understand as these hours are typically directly billable to a customer
and can have significant bearing on the basis of estimate for proposals. Keeping the
proposed Architecture and Requirement Development Metric provides the ability to
accurately propose needed hours. A second metric I propose is the Cryptographic
Architecture Development Metric.
The development of a crypto architecture is not always necessary, but as cryptographic
implementation can be of significant cost, the ability to measure crypto implementation
is important. The Cryptographic Architecture Development Metric should measure the
time it takes to determine crypto design decisions, such as choosing cryptographic types
(Type I, II, III) and how the system will allow for keying, zeroization and recovery. The
development of such an architecture can take a month to initially create, but it will
evolve over time and so the expectation should be to spend multiple months creating this
type of architecture.
Similar to the Architecture and Requirement Development Metric, experience
determines how quickly the crypto requirements can be understood and a design then
created. In the toy patient monitoring system, although the wireless is encrypted, the
design shows built-in WPA encryption and not an external crypto product, so a
separate architecture was not created. Although crypto is a mitigation, and in the case of
this toy system will be considered under the next metric, Mitigation Cost, it is often
complicated enough to warrant its own metric.
My proposed Mitigation Cost Metric is meant to measure the elements associated
with the actual purchasing, implementation and maintenance associated with a particular
product. The Architecture and Requirement Development Metric measures the cost of
developing the architecture that calls out for a particular mitigation such as an anti-virus,
firewall, IDS, etc. Once a mitigation has been determined a need, the Mitigation Metric
measures the cost of implementing the chosen mitigation. The amount of time it takes to
do a trade study, cost of purchasing equipment/software, labor cost associated with
integrating the mitigation as well as any maintenance cost associated with the product
such as license fees, should be captured. These latter items should factor into the trade
study as well. For example, if the mitigation is an anti-virus/host based intrusion detection
system, then a trade would consider the cost of purchasing separate products and the cost
of purchasing a host based security system (HBSS). The HBSS may initially cost more,
but if the trade factors in the maintenance cost associated with maintaining the software
then the HBSS may win out. This is because the HBSS can auto push security updates to
the anti-virus/HIDS and doesn’t require maintenance personnel to manually perform
updates. On large systems this would be a cost savings on the recurring basis. On a small
system, such as the toy medical example, HBSS would be overkill as anti-virus/HIDS are
only required on two components: the server and workstation. The annual licensing cost
would be more expensive than the labor associated with the maintenance personnel
manually updating the software. A similar trade would need to be done for every planned
mitigation in the system. The cost of all the implemented mitigations in the end is the
Mitigation Cost Metric. This is not meant as an addition of the costs, but the outcome of
all the trades should be kept and used for reference on proposals and future work. It is
advisable to maintain a database that is up to date on commonly used products, such as
anti-virus, HIDS, firewalls, etc. The database could contain the outcomes of these trades,
so that multiple engineers across multiple projects would have access. This could
reduce rework, driving cost down. It also can be used as a conversation point for
engineers as they try to make similar decisions. These mitigations have to work within a
given system environment. The effects of the mitigations on the system should also be
measured.
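The HBSS trade described above can be sketched as a simple total-cost comparison (all dollar figures, labor rates and host counts are illustrative assumptions, not data from any program): the managed solution carries a fixed management-server cost and steeper licensing, but its auto-pushed updates nearly eliminate per-host labor, so it wins only at scale.

```python
def tco(fixed, per_host_purchase, per_host_annual, fixed_annual, hosts, years):
    """Total cost of ownership: up-front costs plus recurring annual costs."""
    return (fixed + per_host_purchase * hosts
            + years * (fixed_annual + per_host_annual * hosts))

def cheaper(hosts, years=5):
    # Separate AV/HIDS: no management server, but heavy per-host labor
    # for manual signature/patch pushes.
    separate = tco(0, 30, 110, 0, hosts, years)
    # HBSS: management server (fixed cost) and higher licensing, but
    # updates auto-push, so per-host labor nearly vanishes.
    hbss = tco(20_000, 80, 30, 5_000, hosts, years)
    return "HBSS" if hbss < separate else "separate products"
```

With these assumed figures, the two-host toy system favors separate products while a 200-host system favors HBSS, mirroring the qualitative argument above.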
It is necessary to have a metric that allows engineers to know what the impact of a
mitigation is on a system. This is particularly important when dealing with networks to
ensure systems aren’t crippled by a particular implementation. I have run into this issue
particularly with anti-virus which can hog 100% CPU utilization during a scan and
effectively reduce availability to zero. This is a critical problem on resource constrained
systems and by tracking these types of issues new programs may avoid suffering from
similar issues. Thus, I propose the System Cost Metric intended to measure the effects of
mitigation and system interaction.
For example, if in the patient monitoring system the sensor needs to send messages
to the healthcare workstation in real time, this would affect the availability needs of the
system. It also affects how the system processes the data. As availability is an IA facet,
the IA team should work with the network team to ensure that enough redundancy is in
place, as well as looking at any traffic shaping that is being done to ensure quality of
service for the real time traffic. In this example it is shown that the wireless requires
extension through the hospital network. This would mean that the team should work with
the hospital network staff to understand the integration issues associated with hooking
into an essentially outside network.
In this example, size, weight and power constraints aren't in play, but if the sensor
were to become, say, a pacemaker, then SWaP becomes more of an issue. The cost
associated with the redundancy needs, or paying for high priority QoS on an external
network should be captured. The cost of dependencies must also be accounted for.
If a mitigation can only run on a particular platform this should be part of the mitigation
trade study, but the cost associated with changing OSs to accommodate would be a
system cost. When developing the system, it should be designed with maintenance in
mind. Aside from the maintenance associated with licensing costs, there are other aspects
of maintenance of the security posture that need to be measured.
The Maintenance Metric is proposed to measure these things. As systems under
regulatory compliance are assessed on a frequent basis, this needs to be planned for. The
time it takes the security team to run scans, report issues, and make fixes should be
captured. The experience level of the engineers again affects the efficiency of these
tasks. New assessment tools are constantly cropping up. If the team is going to use new
tools, their efficiency should be measured. Does it take a day to perform an assessment
or a month? The complexity of the system also affects this. With respect to the toy, the
firewalls, wireless routers, healthcare workstation, main router and server must be
scanned, patched, reconfigured and regression tested on a monthly basis. It would take
an average engineer a full week to do the scans with current tools, and submit issue
reports. Then once the issue reports are assessed, it can take days to months to make
necessary fixes, and then depending on the complexity of the change more time will be
needed to regression test the fixes. This would be true whether applying security patches
or changing configuration settings.
All of the maintenance cost associated with these activities should be captured.
Reviewing the findings against the initial design can also provide insight into where
designers can make changes in future projects to reduce maintenance cost. As mentioned
earlier, the mitigations that are outcomes of the maintenance assessments require
regression testing. This is only one type of testing.
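The cyclical labor captured by the proposed Maintenance Metric can be sketched as a simple rollup (the task names and hour figures are illustrative assumptions for the toy patient monitoring system, assuming a monthly scan/patch/regression cycle):

```python
# Hypothetical monthly maintenance tasks and labor hours for the toy system.
MONTHLY_TASKS = {
    "run scans (Retina/SCAP)": 40,   # roughly a full week of engineer time
    "document issues": 8,
    "implement fixes": 24,
    "regression test": 16,
}

def maintenance_hours_per_year(tasks=MONTHLY_TASKS, cycles_per_year=12):
    """Annual labor hours spent keeping the security posture current."""
    return cycles_per_year * sum(tasks.values())
```

Tracking even a coarse rollup like this over several programs would give the historical cost data that, as noted earlier, is currently virtually non-existent.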
The last metric area is that of testing; I propose a separate Testing Metric to measure
these other forms of testing. Similar to software, it is necessary to know what needs to be
tested and how it needs to be tested. Many IA requirements can be tested via analysis, but
some need formal test steps and others can be verified through an inspection of documents.
There are also multiple levels of testing done throughout the development life cycle. It
may be necessary to perform analysis during initial testing and then perform a more
systematic test at a later date. Determining the most efficient way to test requirements is
useful, but a metric must also be used to measure how well the tests are being passed.
As the system is being implemented and integrated, a series of tests must occur. I
propose the Testing Metric to capture the cost associated with performing the security
tests associated with each test phase. Testing begins at the component level or Line
Replaceable Unit (LRU). In some cases the tests could be done at the subcomponent
level, but from an IA standpoint the focus begins at the component level. One of the IA
tests that needs to be accounted for is hardening. With respect to the toy system, the
firewalls, wireless routers, main router, workstation, VoIP system and server must all
undergo hardening. The IA team must ensure that operating systems, network equipment
(each having a different hardening requirement set), browsers, databases, etc. are
appropriately hardened. This includes scanning systems for security patches that must be
kept up to date.
During the component test phase (aka Sys I test), each of these items must be assessed
for compliance. The time it takes to perform the activity, as well as the percentage of
compliance, should be measured. If components are not hardened at this phase, it will be
harder to meet compliance as integration occurs.
There are different types of IA requirements: some can be shown to be compliant
through simple inspection of documents while others require a full test. Some
requirements may also require testing
at a higher integration level. This might occur at Sys II or subsystem test. For example,
availability within a subsystem may require multiple components to be integrated to test
functionality.
At the Sys III level, subsystem integration is being tested. At this phase of
testing, the cost associated with performing network security scans should be captured.
For example, in the patient monitoring system, the server is supposed to store audit logs. At
Sys III, a test should be performed to ensure that the logs from the firewalls and other
devices are adequately captured. This test requires that the subsystems be integrated.
The Test Metric should capture what type of verification is required: analysis,
demonstration, test or inspection and the amount of time taken to complete the
verification.
Once the subsystems are integrated, then the customer may require special testing.
In some cases this is where a third party audit company will review the implementation
for compliance. Penetration testing can also be done at this phase. The cost associated
with having an independent auditor look at the system should be captured. With respect
to penetration testing, this can ensure that any forgotten security items are caught before
the customer actually uses the system in production. The final testing phase can be costly if
issues are not mitigated early. Counting the cost of test items allows far better estimates
and can point to design failures. If lessons are learned, it may be possible to reduce the
cost of testing of future programs.
7.5 Comparison
Metrics have already proven useful in software development [135]. They have been
studied in academia and used in industry [129], [136]. Metrics are only as good as the
data they are measuring. The purpose of metric keeping is to know how funding and time
are allocated to specific tasks. Once it is understood what it takes to perform a specific
task, engineers can look for ways to be more efficient. If inefficiencies are found, they
can be corrected, allowing for cost savings. If inefficiencies are not found, then engineers
know what time and cost a particular task will take and it will allow them to more easily
allocate funding and schedule to these tasks. The metrics I describe are similar to known
software metrics, but with the specific purpose of looking at security. These metrics, if
they measure the correct data, provide engineers with useful information about the effort
involved in performing security tasking.
I have worked on various programs performing security tasking. It has been observed
that the same type of tasking is necessary on all of these programs, due to the structure of
the compliance assessment and the need to apply information assurance requirement sets
to each platform. Measuring these tasks allows for better decision making, critical to
protecting systems required to meet regulatory compliance.
Securing systems is crucial to maintaining viable systems. In order to ensure that
systems are adequately secured, the means of security must become more affordable.
Using principles discovered in software engineering and mimicking them in the security
world as well as looking at new metrics to measure the cost of securing a system will lead
to more affordable systems. Although it will take time to gather the metric data, over
time it will allow systems to be secured more efficiently and at lower cost.
I have provided metrics that can be used to reduce future costs of security. I have
shown how these are similar to software metrics, already proven to reduce the cost of
software development. It can be extrapolated to show that the proposed metrics have a
strong potential of reducing the overall cost of security.
Chapter 8
Cryptography Overview
Cryptographic protection is one of the best ways to protect information.
Cryptography relies on secrets. The most important secret is the key [137]. Protecting
key material is vital to effective cryptographic solutions [138], [139]. A key management
plan lays out several critical areas with regard to key protection. A key management plan
must be created in conjunction with a cryptographic architecture for a system. Some
engineers struggle with how to appropriately apply cryptographic techniques.
There are a series of questions that must be asked when determining the type of
cryptography to be used. Each section of this chapter will look at the process of
answering these questions. These questions will help in the creation of a key
management plan. In some cases, the questions have to be answered before a key
management plan can begin.
The first question is: why does the system require cryptography? It is followed closely by: what in
the system requires cryptography? The type of cryptography determines the type of keys
and therefore directly affects the key management plan. Who is using the cryptography?
Will it be used in non-United States owned systems (or accessible to non-US persons)?
These also determine the type of cryptographic algorithms that can be used. Next, where
is the cryptography being implemented? This can determine how the key material must
be loaded. What is the boundary of the cryptography, both physically and in terms of
time? Is the key material received or generated? How will the key material be protected?
When the key material is no longer needed, what is done with it (how is it destroyed)? If
keys are intercepted or there is an incident of some kind, how is it handled?
The answers to these questions are used to fill out a key management plan template.
There are several templates. A particular template is chosen based on who owns the
project [140]. An understanding of the right template is also critical. Each template has
its own peculiarities that need to be addressed.
8.1.1 Contributions
In this chapter I provide a series of questions that one can use to figure out how to
appropriately apply cryptography. The following sections will cover the questions and
how to go about answering them. Regardless of the template chosen, these questions will
have to be answered. The process of answering the questions can be a difficult one.
8.2 Purpose of Cryptography
Cryptography is only one of many protection mechanisms a system can use.
Cryptography is primarily used for confidentiality, integrity and non-repudiation within
the realm of information assurance. Certain types of algorithms are used for
confidentiality and others are used for integrity and non-repudiation.
In some cases cryptography is the only protection that can be used. For example, if it
is necessary to protect personally identifiable information, e.g. a health record, going over the
internet, the only approved method of ensuring that the information cannot be viewed is
through cryptography. For other systems it is a better return on investment. For example,
if it is necessary to remove private information off of a hard drive on a daily basis, it
would be much simpler to protect the information via cryptography. Cryptography can be
very expensive, but it can also be one of the most reliable defense mechanisms, which is
why forms of cryptography have been used since ancient times. The meaning of
cryptography is “secret writing,” from Greek origin [137]. One of the earliest known
ciphers is the Caesar cipher. This was used in ancient Rome to send war messages.
During WWII cryptography was a fundamental part of protection for the military.
The Germans created the Enigma machine, and the Americans used the Navajo Indians.
Granted, using a foreign language is not truly a cryptographic technique as we think of it in
the traditional sense, but it had the same effect. The messages, even if intercepted, could
not be understood.
This type of protection, which keeps information from being understood when
intercepted, is now known as communications security (COMSEC). Protecting the data
from being intercepted in the first place is transmission security (TRANSEC). Protecting
emanations can also be done through encryption and is known as EMSEC. Protecting
computer systems is COMPUSEC and cryptography is one of the applicable protection
elements [141].
In the modern era, the proliferation of the internet has created a need for cryptography
outside of the military arena. Cryptography is used from banking online to protecting
credit card information at the gas station. There are a plethora of uses for cryptographic
protection, but it may not be right for every system. Encrypting and decrypting add
overhead, and the key and the algorithm also have to fit in the system. This
means that in order to begin, an analysis of the system must be performed. If the system
is an extremely small embedded system, the cryptography may have to be handled
differently than that of a large financial system.
Once an analysis of the system has been completed, and it is determined that
cryptography is indeed the correct choice for certain protection elements, then it is
necessary to determine what type of cryptography should be used.
8.3 Cryptographic Type
There are three main branches of cryptography, based on the type of key being used.
These are referred to as symmetric cryptography, asymmetric cryptography and hashing.
They are used for different purposes, although in some cases they can overlap.
Symmetric cryptography refers to algorithms that use one key for both encryption and
decryption. A message (or data) can be encrypted using a key. The message can then be
sent over an insecure channel, with the knowledge that, even if it is intercepted, it cannot
be read [137]. Once the message is received at the other end, it will be decrypted using
the same key. This does not mean that the key travels with the message. The keys must
be pre-arranged. This is one of the aspects that must be examined as part of a key
management plan.
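The symmetric round trip described above can be illustrated with a toy stream cipher: the same pre-arranged key both encrypts and decrypts. This is a pedagogical sketch only; the SHA-256-based keystream below is an assumption for illustration, and a real system would use a vetted algorithm such as AES.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 of key || counter, repeated until n bytes.
    # Illustrative only -- not a vetted cipher construction.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream: the SAME function encrypts and decrypts.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"pre-arranged shared secret"        # never sent with the message
msg = b"patient vitals: HR 72, BP 120/80"  # hypothetical monitoring data
ct = xor_cipher(key, msg)                  # encrypt with the key
assert ct != msg                           # unreadable if intercepted
assert xor_cipher(key, ct) == msg          # decrypt with the same key
```

The point the code makes is the key management one: both ends must already hold the same key, which is exactly what the key management plan has to arrange.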
Asymmetric cryptography uses two keys, a public key and a private key [137]. This
is also known as public key encryption (PKE) which is the framework for public key
infrastructure (PKI). A message is encrypted using a public key. It can be sent over an
insecure channel with the knowledge that it cannot be read if intercepted. Once the
message is received, it is decrypted using the private key. Many people can have the
public key, but only the recipient with the private key can decrypt the message.
A message can also be encrypted using the private key and decrypted using the public
key. This can be used for non-repudiation. If only one person has the private key, then
there is some assurance, that no one else could have sent the message.
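Both directions of asymmetric use, confidentiality with the public key and non-repudiation with the private key, can be seen in textbook RSA with deliberately tiny primes. This is a pedagogical sketch only: real key pairs use primes hundreds of digits long plus padding schemes, and the numbers here are the standard classroom example, not a usable key.

```python
# Textbook RSA key generation with tiny primes (illustration only).
p, q = 61, 53
n = p * q                 # public modulus (3233)
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent (modular inverse of e)

m = 65                    # a small numeric "message"

# Confidentiality: encrypt with the PUBLIC key, decrypt with the PRIVATE key.
c = pow(m, e, n)
assert pow(c, d, n) == m  # only the private-key holder recovers m

# Non-repudiation: "encrypt" (sign) with the PRIVATE key,
# verify with the PUBLIC key -- only one person could have sent it.
sig = pow(m, d, n)
assert pow(sig, e, n) == m
```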
Hashing is different from the other two forms of cryptography, in that it is a one way
cryptographic function. Data is encrypted, but it is not decrypted. This is used for
integrity. A common use for hashing is for software loads. The software is hashed and
the software and hash are sent to the customer. The customer can run the software
through the same hashing algorithm. The output of that procedure is compared to the
original hash. If the hashes match, then the integrity of the software is confirmed [137].
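The software-load integrity check described above is a few lines in practice. The sketch below uses SHA-256 via Python's standard hashlib; the "software load" bytes are a placeholder.

```python
import hashlib

software = b"binary image of the delivered software load"  # placeholder

# Vendor side: hash the load and ship the hash alongside the software.
shipped_hash = hashlib.sha256(software).hexdigest()

# Customer side: re-hash what arrived and compare to the shipped hash.
received = software
assert hashlib.sha256(received).hexdigest() == shipped_hash  # integrity confirmed

# A single changed byte produces a completely different digest.
tampered = b"Binary image of the delivered software load"
assert hashlib.sha256(tampered).hexdigest() != shipped_hash
```

Note that hashing alone proves integrity, not origin; pairing the hash with a signature (as in the asymmetric case) adds non-repudiation.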
Within each of the three branches there are various algorithms, each with their own
strengths and weaknesses. There are many applications for each, although there are some
favored approaches. One example would be protecting data at rest. This is usually done
using a symmetric algorithm. In this instance there is no need for multiple people to have
access to a key, so the strength of a PKI solution would not be as useful.
In the instance of protecting email however, PKI is a much better choice. A set of
public keys for a group can be published. When it is necessary to protect data, the key
can be obtained and the message can be encrypted. A private key is held by each
individual. The private key is a pair with the public key. This means that only the
individual with the private key can decrypt the message encoded with the public key.
Once the type of cryptography is found, then the correct algorithm needs to be
identified. There are some restrictions on cryptography. In the United States, certain
algorithms are not allowed to be exported. The Department of Defense (DoD) as well as
other private and public sectors have standards that must be followed [28].
8.4 Cryptographic Application
The DoD has specific types of approved algorithms it is allowed to use in its
systems, depending on the type of data it is protecting. These have been known as Type
1, 2, 3, and 4, although there has been a move to Suite A and Suite B algorithms.
According to [27] Type 1 cryptography is defined as:
Classified or controlled cryptographic item endorsed by the NSA for securing
classified and sensitive U.S. Government information, when appropriately keyed. The
term refers only to products, and not to information, key, services, or controls. Type 1
products contain approved NSA algorithms. They are available to U.S. Government
users, their contractors, and federally sponsored non-U.S. Government activities subject
to export restrictions in accordance with International Traffic in Arms Regulation
(ITAR).
Type 2 is defined as unclassified cryptographic equipment, assembly, or component,
endorsed by the NSA, for use in national security systems as defined in Title 40 U.S.C.
Section 1452 [27].
Type 3 is defined as: cryptographic algorithm registered by the National Institute of
Standards and Technology (NIST) and published as a Federal Information Processing
Standard (FIPS) for use in protecting unclassified sensitive information or commercial
information [27].
Type 4 is defined as: unclassified cryptographic algorithm that has been registered by
the National Institute of Standards and Technology (NIST), but not published as a
Federal Information Processing Standard (FIPS) [27].
The NSA has moved towards interoperability. In that interest, the NSA has approved
several commercially available cryptographic algorithms for use in classified systems
[137]. This does not guarantee that unclassified algorithms will be allowed for use in
classified systems. These types of algorithms, known as Suite B algorithms, are used in
National Security Systems, when there is a need to communicate with US allies [142].
Some Suite A algorithms are classified; others are unclassified and can be used if there
is no need for interoperability, or if it has been determined that there is a greater need for
protection. If the algorithm is classified, it must be protected. This means that in systems
that use classified algorithms, the key management plan must also address a way to
ensure that the algorithm is protected, just like the key material. In times of distress, or at
the end of operation, this means that the key and the algorithm must be sanitized.
Classified systems and other systems owned by the DoD are not the only type of systems
that use cryptography.
As the outcry for greater privacy has become a focus, new legislation, such as HIPAA
has begun to require companies to account for their behavior and their response to the
privacy of their customers [142]. In the financial sector, Sarbanes-Oxley was passed,
demanding accountability of financial firms’ access control of data [140]. The legislation
as explained in [140] requires that access to personal data and write access to financial
records be extremely stringent. These measures do not dictate specific means of
protection, such as cryptography, but they do push companies towards the use of
cryptography as one of the layers of security. They call for protection of
confidentiality and integrity of data.
Many companies rely on NIST to provide guidance in the realm of cryptography.
NIST certifies cryptographic algorithms and has published guidance on key management
[138], [139].
The cryptographic algorithms supported by NIST can be used commercially and in
some cases by the federal government. NIST algorithms, when thoroughly tested, are
added to the FIPS standard. These particular algorithms can then
be used to protect federally owned systems.
One area that crosses over is the North American Electric Reliability Corporation
(NERC). Since the September 11th attacks, a new zeal for protecting critical
infrastructures has emerged. These critical infrastructures provide basic necessities to those
living in the United States. Critical infrastructures are not owned by the government, but
by private entities. This has led to some difficulty in creating legislation to protect them.
The government does not control them, so these systems do not fall under the Federal
requirements. After September 11th, new research and requirements were created to
protect these types of systems. NERC is one of these critical infrastructures that now
must work hand in hand with the government to ensure that its systems are properly
protected.
One of the fundamental protections is cryptography. NERC calls out for the use of
PKI to establish a trusted environment [143]. NERC also calls out for other protective
measures depending on connections [2], [144]. Cryptography is only part of the picture.
It can be used for access control as well as integrity checks, but it depends on where it is
being used in the system.
8.5 Cryptographic Placement
Understanding what needs to be protected, and which components need encryption,
leads to the placement of the cryptography. There are various ways of protecting data
in a component. For example, there are bulk encryptors and packet encryptors. Also
depending on the size and construction of a component, the crypto unit may have to sit
outside.
In classified networks, cryptography can be tricky, because it creates red/black
separation. Classified data is red and when it is encrypted it is considered black. There
are requirements known as TEMPEST which are used to protect for EMSEC [145]. One
of the protections for EMSEC is cryptography, but TEMPEST also requires certain
separation between black and red data. This can sometimes dictate where cryptographic
components can be added. This is primarily for National Security Systems and is not a
requirement for commercial systems.
The placement issues are different for non-National Security Systems.
In some non-National Security Systems, encryption can be done on multiple pieces of
equipment and then decrypted at a centralized facility. If the footprint of the processors
doing the encryption is extremely small, this will determine which algorithms can be
used, as well as potentially the size of the key that can be used. It also may not be
feasible to move the encryption algorithm to a larger processor. These restrictions must
be stated in the key management plan.
8.6 Cryptographic Boundary
The boundary concerns both time and space. If the data being encrypted is critical, it
may be that the key has to change every day, or with every process. In other cases, it may be
acceptable for a key to remain in use for a much longer period of time. How a network is distributed
may also play into how the data is encrypted.
If a system is decentralized, and does not have a single point where the data can be
encrypted, then individual nodes will have to perform encryption. Depending on the
criticality and the size of the system, a single key may suffice, or multiple keys may have
to be used.
It also may be necessary to synchronize keys throughout a system. If keys need to be
changed, then all the areas where the key is used will have to be notified, and have the
new key ready. This can logistically be quite difficult in some cases, as there may be
multiple keys that are being changed quickly and multiple sites that need access to the
new keys. It may require both a technical solution and a procedural solution to meet the
requirement. Throughout the synchronization process and throughout the lifecycle of a
key, the key must be protected.
8.7 Key Material
Acquiring key material can be done in one of two ways: it can either be received
through some means or it can be generated. In National Security Systems, the NSA
provides the key material. In other commercial systems, key material must be generated.
In order to receive key material from the NSA the system must be validated to ensure
that key material will be appropriately protected. This includes sanitization measures as
well as handshaking methods to complete key transfers [146].
Generating key material relies on the ability to create random data. There are various
methods for this, but typically an outside source is needed. Functions like rand() in C
cannot be used to create good key material. This function, like its counterparts in other
languages, will repeat after a period, and will give a similar, if not the same, random
number if the same seed is picked. The key is the most important secret. If a pattern can
be found in the key material, it is likely that a pattern can be found in the ciphertext.
If key material is being generated, the key management plan should include a section
on how it is being generated, and show the sources of true randomness that are being
provided, to ensure that the key material will not lead to a compromise.
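The seeded-PRNG pitfall described above is easy to demonstrate in Python: a deterministic generator reproduces the same "key" whenever the seed repeats, whereas the standard-library secrets module draws from the operating system's entropy source. (This contrasts a weak and a stronger source by way of illustration; it is not a key-generation procedure.)

```python
import random
import secrets

# A seeded PRNG such as random.Random yields identical bytes for the
# same seed -- an attacker who guesses the seed has the key.
k1 = random.Random(42).randbytes(16)
k2 = random.Random(42).randbytes(16)
assert k1 == k2  # same seed, same "key": unusable as key material

# secrets draws from the OS entropy source instead (requires no seed).
key_a = secrets.token_bytes(16)
key_b = secrets.token_bytes(16)
assert key_a != key_b  # overwhelmingly likely to differ
```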
After the key material is obtained or generated, it is used for a prescribed period of
time. Once all of the data associated with a key is decrypted (and either used in the
unencrypted form, or re-encrypted using a different key) and the key is no longer
necessary the key must be destroyed. Some systems may require that the key be held for
a period of time, to ensure that all data has indeed been decrypted.
The protection of the key material is essential. One method of protecting the keys is
to load the key into volatile memory. If power is lost, the key will also be lost. This
requires that a backup key be maintained. If an attack is occurring, it may be one of the
fastest ways to secure a key. There are other methods which should be considered as part
of the key management plan.
8.8 Incident Response
In the event of an attack, or compromise of a key, an incident response plan must be
in place [140]. The incident response team must be able to quickly identify the type of
attack, and how key material was compromised. The data being protected by the key
must be secured. The key must be destroyed. A calculation of loss must be performed.
In some cases it may be necessary to do an analysis to see if the cryptographic system
needs replacement. This can occur if an algorithm is nearing end of life, or if it has been
successfully cracked. Incident response is critical in protecting the remaining portions of
the system and in mitigating the loss.
Creating a key management plan is an important task. It can help in the process of
obtaining the information necessary to protect key material. Identifying all the areas that
need cryptographic protection, all the way through key destruction and the end of life of
an algorithm, is part of creating a secure cryptographic system. A robust key
management plan can lead to better security throughout a system.
These are critical in today’s world where cryptography is one of the most vital means
to protecting information. I have provided information in this chapter to aid engineers in
understanding the appropriate application of cryptographic mitigations, so that they can
choose the most suitable method of protecting information. The correct application of cryptography
can significantly increase the security of a system.
Chapter 9
Regulatory System Compliance
One of the greatest difficulties in handling security on regulated systems is the
ability to show regulatory compliance. This is due in part to the sheer number of
requirements that must be met. Compliance must be shown at all levels in the system, and
where the system cannot comply, it is necessary to show justification [147], [5], [148],
[84], [149], [88], [150], [11], [151].
Often, when requirements in a regulatory document are decomposed, at a certain
point something gets lost in translation. I have experienced decomposing a requirement
to a point that it was thought to be clear, only to see a further decomposition that made
absolutely no sense. Using the tools at hand, trying to determine where the requirement
was derived from takes manual effort. This can impact schedule. A systematic approach
to ensure that all pieces of the system are in compliance is necessary. Throughout the lifecycle it is also necessary to understand where the system is with respect to meeting
compliance. Waiting until the moment of needed accreditation or system sign off to truly
understand the probability of compliance can be risky.
With respect to non-technical requirements, these are typically captured as part of the
statement of work (SOW). The SOW contains items about how a contract is to be
conducted. For example, if there is a regulation that says that all software must be
handled in configuration management, the SOW would contain this. It isn’t a functional
or design requirement placed on the system being built. So these should be captured and
do affect regulatory compliance, but do not get decomposed into the lower layers. Both
the non-technical and the aspects of the regulatory document that are considered out of
scope have to be agreed upon by the customer and added to the contract.
On large systems, assessing the risk against the system architecture can also be
unwieldy due to complexity. For most systems needing to show regulatory compliance,
risk is assessed as the risk of not meeting a particular regulation. Understanding all the outside
influences on regulation application is also necessary as part of the justification provided
to show compliance.
9.1.1 Contributions
In this chapter I provide insights into an approach to requirements handling for
regulated systems. I also contribute the mechanics behind a way to automate the
requirements mapping through the use of a Bayesian Network. This same approach
allows for reasoning on the design and implementation of a system with respect to
regulatory compliance as well.
9.2 Requirements
Each regulatory document must be broken down into discrete requirements, so that
they are readily understandable to those that are going to have to implement them. When
regulatory systems are contracted out, there are typically pieces of the regulations that
will either not apply, or will be of a non-technical nature. These must also be taken into
account as part of the compliance mapping. As seen in Figure 9-1, the regulatory document
will be decomposed into high level requirements that are further decomposed into more
explicit requirements. When automating requirements mapping, each of these layers must
be accounted for.
With respect to the technical requirements coming from the regulatory document, at
each layer the requirement should become more refined in that the intent should become
clearer on how it should apply at the lowest level. At the system level, the requirement is
met only if the requirements are met at the lower levels. This means that the System
level requirements are satisfied when all the subsystem requirements are satisfied.
Figure 9-1 Requirements Decomposition
[Figure: the regulatory document is decomposed into out-of-scope items and the system
specification; the system specification is decomposed into design and functional
requirements, which flow down into the subsystem, hardware, and software
specifications.]
The subsystem requirements are satisfied when all the component requirements are
satisfied. The component level requirements come in the form of hardware, software and
supplier requirements. At each level a requirement may not apply uniformly. It may
apply to one area, but not another.
9.3 Counting Requirements
Let S be the system to be considered. Let SS_i be the ith subsystem of S. Let C_ij be the
jth component of SS_i. Let P(X) be the fraction of requirements met. Let R(X) be the total set
of requirements governing X, which encompasses all decomposed requirements. Let N(X) be
the set of requirements not met by X. It should be noted that a particular component
could have zero or more hardware (HW), software (SW) or supplier (SUPP)
requirements. It is understood that to get the percentage, each of these formulas would be
multiplied by 100. We can simplistically define the relationship as:
P(S) = 1 − |∪_{i=1}^{n} N(SS_i)| / |R(S)|    (1)

P(SS_i) = 1 − |∪_{j=0}^{R_ij} N(C_ij)| / |R(SS_i)|    (2)

P(C_ij) = 1 − |[∪_{k=0}^{n_ij} N(HW_(ij)k)] ∪ [∪_{l=0}^{m_ij} N(SW_(ij)l)] ∪ [∪_{q=0}^{p_ij} N(SUPP_(ij)q)]| / |R(C_ij)|    (3)
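The compliance fractions in equations (1) through (3) reduce to one computation: one minus the size of the union of unmet-requirement sets over the total number of governing requirements. The sketch below illustrates this; the requirement identifiers and counts are hypothetical placeholders, not taken from the patient monitoring example.

```python
def fraction_met(unmet_sets, total):
    """1 - |union of unmet requirement sets| / |all governing requirements|."""
    unmet = set().union(*unmet_sets) if unmet_sets else set()
    return 1 - len(unmet) / total

# Component level: unmet HW, SW, and supplier requirement sets for one C_ij.
hw_unmet = {"CM-6.2"}
sw_unmet = {"SC-7.1", "CM-6.2"}   # CM-6.2 unmet in both; the union counts it once
supp_unmet = set()
p_c = fraction_met([hw_unmet, sw_unmet, supp_unmet], total=10)
print(round(p_c * 100))  # 80 -- the component meets 80% of its requirements

# Subsystem level: union the unmet sets of its components against R(SS_i).
p_ss = fraction_met([{"SC-7.1"}, {"SC-9.1"}], total=20)
print(round(p_ss * 100))  # 90
```

Taking the union before dividing is the essential step: a requirement unmet in several places still counts only once against the governing set.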
For example, in looking at the patient monitoring system there are two subsystems
and nine components, as seen in Figure 9-2. The
goal is to map the requirements down to the lowest level.
First the engineer needs to identify the applicable requirements from the regulatory
document. For this example, it is assumed that the company contracted to build this
system is being asked to use the NIST SP 800-53 as their primary regulatory document.
The engineers work with the customer to break out applicable requirements into the
system specification based on how they believe the system will be used. The customer
typically assigns the allocation of baseline controls and enhancements that will be
required. At the system specification (SS) level, each control is broken into discrete
system level requirements. This section is going to look at how controls CM-6, CA-2,
SC-7, and SC-9 affect this monitoring system.
There are significantly more controls that would be broken out into the system
specification, but for purposes of showing the thread of automating requirements
handling, the focus will be on these four requirements. These four requirements were
chosen, as they cover several of the more difficult aspects of requirements handling. Only
the base requirements are going to be discussed; the enhancements will not be
considered. If automating this process, these would also have to be taken into account.
Figure 9-2. Patient Monitoring System
There are multiple combinations that can occur within a single requirement in the
NIST SP 800-53. Other regulatory documents have similar combinations, although the
layout is different.
It should also be noted that supplemental guidance is provided in the NIST SP 800-53
on how to implement each control. This information can be useful in deriving the
requirement language. In many settings, a requirement is only binding if it uses the term
“shall.” If “will” is used it becomes somewhat optional. The NIST SP 800-53, like almost
all regulatory documents, is not written in this format and so must be translated. This
translation is critical as it affects the probability of the requirement being correctly
interpreted, decomposed, and implemented in such a way as to ensure compliance. Table
7 shows the initial breakdown of the four requirements from the regulatory document into
the System Specification and the Statement of Work.
One of the first things that might be noticed is that CA-2 becomes a completely non-technical set of language that must be captured as part of the statement of work and not
part of the system specification. Making the determination of whether or not a control
has technical implications must be done in accordance with the customer. As there are
different interpretations, and the requirement set is seen as a binding contract, it is critical
to have agreement early on. Secondly, as can be seen from Table 7, some controls are
broken into both technical and non-technical. This has to be captured. The chore of
proving compliance with a technical requirement is different than proving compliance
with a non-technical requirement.
Most of the non-technical requirements are shown to be in compliance through a
documentation trail, such as would be necessary for CA-2.
Table 7 First Level Decomposition
(Columns: Ctrl | Control Wording | System Specification | Statement of Work)

Ctrl: CM-6
Control Wording: Establishes and documents mandatory configuration settings for
information technology products employed within the information system using
[Assignment: organization-defined security configuration checklists] that reflect the
most restrictive mode consistent with operational requirements; Implements the
configuration settings;
System Specification: The system shall be configured in accordance with the Security
Technical Implementation Guides.

Ctrl: CM-6
Control Wording: Identifies, documents, and approves exceptions from the mandatory
configuration settings for individual components within the information system based
on explicit operational requirements; and
Statement of Work: The contractor shall identify, document and seek approval for any
settings not compliant with the STIGs.

Ctrl: CM-6
Control Wording: Monitors and controls changes to the configuration settings in
accordance with organizational policies and procedures.
Statement of Work: The contractor shall only make configuration changes to the
baseline configuration after such configurations are approved by the technical review
board.

Ctrl: CA-2
Control Wording: Develops a security assessment plan that describes the scope of the
assessment including: security controls and control enhancements under assessment;
assessment procedures to be used to determine security control effectiveness;
assessment environment, assessment team, and assessment roles and responsibilities;
Assesses the security controls in the information system [Assignment: organization-defined
frequency] to determine the extent to which the controls are implemented
correctly, operating as intended, and producing the desired outcome with respect to
meeting the security requirements for the system; Produces a security assessment
report that documents the results of the assessment; Provides the results of the security
control assessment, in writing, to the authorizing official or authorizing official
designated representative.
Statement of Work: The contractor shall develop a security assessment plan that shall
include: a. Applicable controls and enhancements; b. Assessment procedures; c. Roles
and responsibilities; d. Process for assessing security compliance; e. Reporting process.

Ctrl: SC-7
Control Wording: Monitors and controls communications at the external boundary of
the system and at key internal boundaries within the system.
System Specification: The system shall provide for the monitoring and controlling of
communications at all external boundary points and key internal boundaries.

Ctrl: SC-7
Control Wording: Connects to external networks or information systems only through
managed interfaces consisting of boundary protection devices arranged in accordance
with an organizational security architecture.
System Specification: The system shall only connect to approved external networks or
information systems through managed interfaces and boundary defense.

Ctrl: SC-9
Control Wording: The information system protects the confidentiality of transmitted
information.
System Specification: The system shall protect the confidentiality of transmitted
information.
The security team need not maintain its own copy of the documentation, which should be under configuration management; it does, however, need a pointer to the document and to the paragraphs and pages that explicitly address the control language.
The System Specification is then decomposed into design and functional
requirements, as seen in Table 8. The design requirements are those that will get fully
decomposed into the lower level requirement sets. The functional requirements affect the
functional architecture design.
Table 8 Second Level Decomposition

CM-6
  System Specification: The system shall be configured in accordance with the Security Technical Implementation Guides.
  System Design Requirement: SDR_1: The System shall be configured in accordance with the Security Technical Implementation Guides.

SC-7
  System Specification: The system shall provide for the monitoring and controlling of communications at all external boundary points and key internal boundaries.
  System Design Requirements:
    SDR_2: The System shall monitor communications at all external boundary points.
    SDR_3: The System shall control communications to and from all external boundary points.
    SDR_4: The System shall monitor communications at all key internal boundaries.
    SDR_5: The System shall control communications to and from key internal boundaries.

SC-7
  System Specification: The system shall only connect to approved external networks or information systems through managed interfaces and boundary defense.
  System Design Requirements and Functional Requirement Decomposition:
    SDR_6: The System shall only connect to approved external networks through boundary defense.
      CON_2: The connection function shall ensure that external connections go through boundary defense.
    SDR_8: The System shall only connect to approved external networks through managed interfaces.
      CON_1: The connection function shall ensure that external connections go through managed interfaces.

SC-9
  System Specification: The system shall protect the confidentiality of transmitted information.
  System Design Requirement and Functional Requirement Decomposition:
    SDR_10: The Patient Monitoring Subsystem shall protect the confidentiality of transmitted information.
      CON_2: The connection function shall define confidentiality levels on all data flows in the system.
This is where the requirements mapping can begin to get complicated for systems that have to meet all of these regulations. Compliance requires that enough of these requirements be met to reduce the risk to an acceptable level. There is no set number; it is based on the auditor's judgment.

Ideally, all requirements on a contract should be met in order to show compliance, but often this is not feasible. There are cost, schedule and technical limitations that have to be factored in. The lowest-cost, technically acceptable solution is the preferred business approach, so it is preferable to meet the fewest number of requirements necessary to be in compliance. This forces engineers to identify what is most critical to comply with.
Table 9 Third Level Decomposition

SDR_1: The System shall be configured in accordance with the Security Technical Implementation Guides.
  PM_1: The Patient Monitoring Subsystem shall be configured in accordance with the Security Technical Implementation Guides.
  HC_1: The Healthcare Control Subsystem shall be configured in accordance with the Security Technical Implementation Guides.

SDR_2: The System shall monitor communications at all external boundary points.
  PM_2: The Patient Monitoring Subsystem shall monitor communications at all external boundary points.
  HC_2: The Healthcare Control Subsystem shall monitor communications at all external boundary points.

SDR_3: The System shall control communications to and from all external boundary points.
  PM_3: The Patient Monitoring Subsystem shall control communications to and from all external boundary points.
  HC_3: The Healthcare Control Subsystem shall control communications to and from all external boundary points.

SDR_4: The System shall monitor communications at all key internal boundaries.
  PM_4: The Patient Monitoring Subsystem shall monitor communications at all key internal boundaries.
  HC_4: The Healthcare Control Subsystem shall monitor communications at all key internal boundaries.

SDR_5: The System shall control communications to and from key internal boundaries.
  HC_5: The Healthcare Control Subsystem shall control communications to and from key internal boundaries.
  HC_6: The Healthcare Control Subsystem shall control communications to and from key internal boundaries.

SDR_6: The System shall only connect to approved external networks through boundary defense.
  PM_6: The Patient Monitoring Subsystem shall only connect to approved external networks through boundary defense.
  HC_7: The Healthcare Control Subsystem shall only connect to approved external networks through boundary defense.

SDR_8: The System shall only connect to approved external networks through managed interfaces.
  PM_8: The Patient Monitoring Subsystem shall only connect to approved external networks through managed interfaces.
  HC_9: The Healthcare Control Subsystem shall only connect to approved external networks through managed interfaces.

SDR_10: The System shall protect the confidentiality of transmitted information.
  PM_10: The Patient Monitoring Subsystem shall protect the confidentiality of transmitted information.
  HC_11: The Healthcare Control Subsystem shall protect the confidentiality of transmitted information.
The next step is to derive the component level requirements. It is at this level that the requirements should be as fine-tuned as possible; in complex systems the component level can be forgotten. In this example, it is assumed that the Patient Monitoring Subsystem is being outsourced to a second company, which is why its requirements are listed as supplier requirements. This results in the requirement allocation in Table 10.
Table 10 Fourth Level Decomposition

PM_1
  (Supplier) SUPP_1: The wireless router shall be configured according to the applicable STIGs
  (Supplier) SUPP_2: The firewall/IDS shall be configured according to the applicable STIGs
  (Supplier) SUPP_3: The sensor shall be configured according to the applicable STIGs
PM_2
  (Supplier) SUPP_4: The firewall/IDS shall monitor communications at the external boundary
PM_3
  (Supplier) SUPP_4: The firewall/IDS shall control communications at the external boundary
PM_4
  (Supplier) There are no internal boundaries within PM; therefore this requirement should be removed
HC_5
  (Hardware) HW_6: The router shall control communications to and from key internal boundaries
HC_1
  (Hardware) HW_1: The wireless router shall be configured according to the applicable STIGs
  (Hardware) HW_2: The firewall/IDS shall be configured according to the applicable STIGs
  (Hardware) HW_3: The server shall be configured according to the applicable STIGs; (Software) SW_1: The server shall only contain software that has been configured according to the Application Security Development STIG
  (Hardware) HW_4: The workstation shall be configured according to the applicable STIGs; (Software) SW_2: The workstation shall only contain software that has been configured according to the Application Security Development STIG
  (Hardware) HW_5: The router shall be configured according to the applicable STIGs
  (Hardware) HW_7: VoIP shall be configured according to the applicable STIGs
  (Hardware) HW_8: The storage device shall be configured according to the applicable STIGs
HC_2
  (Hardware) HW_9: The firewall/IDS shall monitor communications at the external boundary
HC_3
  (Hardware) HW_10: The firewall/IDS shall control communications at the external boundary
HC_4
  (Hardware) HW_11: The router shall monitor communications at all key internal boundaries
HC_6
  (Hardware) HW_12: The router shall control communications at all key internal boundaries
PM_6
  (Supplier) SUPP_5: The firewall/IDS shall provide all connections to approved external networks
HC_7
  (Hardware) HW_13: The firewall/IDS shall provide all connections to approved external networks
PM_8
  (Supplier) SUPP_6: The firewall/IDS shall provide the managed interface to the network
HC_9
  (Hardware) HW_14: The firewall/IDS shall provide the managed interface to the network
PM_10
  (Supplier) SUPP_7: The wireless shall provide for the appropriate encryption of transmitted information
HC_11
  (Hardware) HW_15: The wireless shall provide for the appropriate encryption of transmitted information
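A traceability structure like the one in Table 9 and Table 10 is straightforward to hold in a small database or in code. The sketch below is illustrative only (requirement texts are omitted and only a handful of hypothetical links are shown); it records the parent of each requirement and flags any subsystem requirement that has no component-level allocation, which is one way to catch a forgotten component level.

```python
# Hypothetical traceability map: child requirement -> parent requirement.
parent = {
    "PM_1": "SDR_1", "HC_1": "SDR_1",
    "SUPP_1": "PM_1", "SUPP_2": "PM_1", "SUPP_3": "PM_1",
    "HW_1": "HC_1", "HW_2": "HC_1", "HW_3": "HC_1",
    "PM_2": "SDR_2", "HC_2": "SDR_2",
    "SUPP_4": "PM_2", "HW_9": "HC_2",
}

# Subsystem requirements are the PM_*/HC_* entries; each should appear
# as the parent of at least one component-level requirement.
subsystem = [r for r in parent if r.startswith(("PM_", "HC_"))]
allocated = set(parent.values())
unallocated = [r for r in subsystem if r not in allocated]
print(unallocated)  # [] -- every subsystem requirement is decomposed
```

A real project would hold these links in the requirements database described later, but the check itself is the same: every requirement at one level must trace to at least one requirement at the level below.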
Each of these requirements then affects the correctness of the design and implementation. A security subject matter expert provides input into whether or not the design and implementation are correct. To ensure that there is agreement on interpretation, the customer/auditor/authorizing authority should be involved throughout the entire life-cycle.

As it is necessary in regulatory systems to show compliance, the percentage of requirements met is one gauge of whether the system will achieve accreditation. This does not necessarily provide a good estimate of where the system is with respect to regulatory compliance, as compliance is a combination of requirements, design and implementation. The complexity of determining regulatory compliance can be better modeled using a Bayesian network.
9.4 A Bayesian Approach to Compliance
Bayesian reasoning, and a causality approach in general, is useful since requirements allocation and regulatory compliance have many dependencies that cannot necessarily be accounted for by a rule set. Each requirement has a probability of being correctly decomposed and correctly implemented, and the correctness of the decomposition is somewhat subjective. Bayesian networks support reasoning on this type of data [152].

For example, say a regulatory requirement calls for boundary defense and a subsystem is responsible for providing it. The regulatory document may say something to the effect of, "The System shall provide boundary defense." Following this, subsystem B is allocated: "The subsystem shall provide a firewall." If the regulatory document actually meant that the system must implement both a firewall and intrusion detection, but only the firewall is allocated, then the requirement is only partially decomposed correctly, and some may say it was not decomposed correctly at all. A Bayesian network allows for such nuances. If arguing about the degree of correctness only, fuzzy logic might be more appropriate, but for the purposes of this dissertation the argument is limited to whether or not a requirement is correct, not its degree of correctness. The same logic applies to implementation: a requirement may be correctly decomposed, but not fully or correctly implemented. As these are not discrete items, they are best described by related probabilities. If a requirement is only 90% correct at the top level, its correctness typically deteriorates as it goes down the chain. The correctness is determined by an expert reviewing the requirement.
A Bayesian network provides the ability to reason about regulatory compliance either predictively or diagnostically. Each method can provide different probabilities with respect to regulatory compliance that are not captured using simple weighted averages. One of the major hurdles of regulatory compliance is understanding how far along the system is with respect to compliance: if the requirements are decomposed, are they interpreted correctly, and what is the probability that the engineers have actually understood and complied with them? The chaining rule as defined in [153] allows reasoning about a particular node given the probabilities of its parents:

P(X1 = x1, …, Xn = xn) = ∏ i=1..n P(Xi = xi | Parents(Xi))  (4)

where Parents(Xi) denotes the parents of the ith node Xi. So for each requirement set i, calculate the probability of that set being compliant and the conditional probability of the sequence of compliance. This is fundamental to working with the Bayesian network. There are several ways to use the network: one can either reason from top to bottom (predictive) or from bottom to top (diagnostic).
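To make the chaining rule concrete, the following sketch evaluates (4) for the small chain RD, SS, FRD, using CPT values like those in Figure 9-3; the variable and function names are illustrative only.

```python
# Chain rule (4): P(X1..Xn) = product over i of P(Xi | Parents(Xi)),
# evaluated on the chain RD -> SS -> FRD with Figure 9-3 CPT values.
P_RD = 0.9                                # P(RD correctly interpreted)
P_SS_given_RD = {True: 0.8, False: 0.2}   # P(SS compliant | RD state)
P_FRD_given_SS = {True: 0.9, False: 0.1}  # P(FRD compliant | SS state)

def joint(rd, ss, frd):
    """Joint probability of one full assignment, via the chain rule."""
    p = P_RD if rd else 1 - P_RD
    p *= P_SS_given_RD[rd] if ss else 1 - P_SS_given_RD[rd]
    p *= P_FRD_given_SS[ss] if frd else 1 - P_FRD_given_SS[ss]
    return p

# Sanity check: the eight joint probabilities must sum to 1.
total = sum(joint(rd, ss, frd)
            for rd in (True, False)
            for ss in (True, False)
            for frd in (True, False))
print(round(total, 10))  # 1.0
```

Any query on the network, predictive or diagnostic, can in principle be answered by summing these joint probabilities over the appropriate assignments.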
The following Bayesian networks were created based on the requirements decomposition explained above. In order to justify regulatory compliance, a systems engineer must first decompose the requirements at all appropriate levels and show compliance at each level. The decomposition starts from the regulatory document (RD), which is typically invoked in a performance-based specification (PBS) or other high-level contractual document. From here the regulatory document is assessed for applicability. Items that impact either the design or functionality of the system being built are broken into individual requirements at the system specification (SS) level.

Some regulatory documents may apply constraints on how the system is developed. For example, if a regulation requires that a programmer's privileges be assessed every three months during development, this would be placed in the statement of work (SOW) and not the system specification.
From the system specification, requirements are allocated as either system design requirements (SDR) or as functional requirements decomposition (FRD). The functional requirements affect both the subsystem specification and the functional architecture (FA). The SDR and FRD are decomposed into the subsystem specification (SSS), and from there into the component level. The component level requirements are found in the hardware (HW), software (SW) or supplier (SUPP) specifications.
It should be noted that requirements coming from the regulatory document that are considered out of scope should not be part of the chain of probability; if they are included in the Bayesian network, their values/priors should be set to 1 so that they do not change the probability.
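That a neutralized node leaves the result unchanged follows directly from the chain-rule product; this small sketch (not part of the dissertation's model, with example values) demonstrates the point numerically.

```python
# An out-of-scope requirement node contributes a factor of 1 to the
# chain-rule product, so it leaves the joint probability unchanged.
in_scope_factors = [0.9, 0.8, 0.9]  # example CPT entries
out_of_scope_prior = 1.0            # neutralized out-of-scope node

def product(factors):
    p = 1.0
    for f in factors:
        p *= f
    return p

with_node = product(in_scope_factors + [out_of_scope_prior])
without_node = product(in_scope_factors)
print(with_node == without_node)  # True
```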
The correctness of the requirements and their interpretation affects the way the engineers design and implement the system. Poor requirements lead to a design that may not meet the regulatory document, and if the design is faulty, then the implementation based on that design will be faulty.
It should be noted that in some cases engineers bypass the process. An engineer may base the design or implementation on the original regulatory document directly, which certainly affects the probability that the implementation will be compliant. This happens frequently when the system is being designed using concurrent engineering methods, because engineers may begin design before formal requirements decomposition is complete.
Figure 9-3 shows a simple network: the Regulatory Document (RD) feeds the System Specification (SS), which feeds both the Functional Requirement (FRD) and the Design Requirement (SDR), which together feed the Subsystem Specification (SSS). Its conditional probability tables, rendered here as text, are:

P(RD), the probability of correct interpretation: .9

P(SS | RD): RD = C: .8; RD = N: .2

P(FRD | SS): SS = C: .9; SS = N: .1

P(SDR | SS): SS = C: .8; SS = N: .2

P(SSS | FRD, SDR): (C, C): .95; (N, C): .5; (C, N): .5; (N, N): .1

Figure 9-3 Bayesian Network for Simple Requirement Compliance
Using Figure 9-3, an example of predictive reasoning is this: what is the probability that SS is compliant if RD is interpreted correctly? This is read directly from the data provided by the expert in the conditional probability table (CPT):

P(SS | RD) = .8  (5)

One could also ask: what is the probability that the FRD is compliant if RD is interpreted correctly and the SS is compliant? The quantity sought is

P(FRD | SS RD)  (6)

SS d-separates FRD and RD, thus:

P(FRD | SS RD) = P(FRD | SS)  (7)

P(FRD | SS) = .9  (8)

P(FRD | SS RD) = .9  (9)

In other words, given a correctly interpreted RD and a compliant SS, there is a 90% probability that the FRD is compliant.
The concept of d-separation is discussed in [154]. It allows that if two nodes are d-separated they can be treated as conditionally independent, which in this network means that the probability of FRD being compliant can be calculated using only the information from the SS node. D-separation can be used whether the inference is made using predictive or diagnostic reasoning, as explained in [152]. An example of diagnostic reasoning using the same network is: what is the probability that RD was correctly interpreted given that FRD is compliant?
P(RD | FRD) = P(FRD RD) / P(FRD)  (10)

P(FRD RD) = P(FRD RD SS) + P(FRD RD SS')  (11)

= P(FRD | SS RD) P(SS RD) + P(FRD | SS' RD) P(SS' RD)  (12)

= P(FRD | SS) P(SS | RD) P(RD) + P(FRD | SS') P(SS' | RD) P(RD)  (13)

= (.9*.8 + .1*.2)*.9 = .74*.9 = .666  (14)

With P(SS) = .8*.9 + .2*.1 = .74 and P(FRD) = .9*.74 + .1*.26 = .692, this gives P(RD | FRD) = .666/.692 ≈ .96. So if FRD is compliant, the probability that the RD was interpreted correctly rises from the prior of 90% to roughly 96%, since a compliant FRD is evidence of a correctly interpreted RD.
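The hand arithmetic can be cross-checked by summing over parent states. This sketch (illustrative only, using the Figure 9-3 CPT values) computes the marginal probabilities that SS and FRD are compliant.

```python
# Marginals for the RD -> SS -> FRD chain of Figure 9-3.
P_RD = 0.9                        # P(RD correctly interpreted)
P_SS = {True: 0.8, False: 0.2}    # P(SS compliant | RD state)
P_FRD = {True: 0.9, False: 0.1}   # P(FRD compliant | SS state)

# Marginal P(SS): sum over the two RD states.
p_ss = sum((P_RD if rd else 1 - P_RD) * P_SS[rd] for rd in (True, False))

# Marginal P(FRD): sum over the two SS states.
p_frd = sum((p_ss if ss else 1 - p_ss) * P_FRD[ss] for ss in (True, False))

print(round(p_ss, 3), round(p_frd, 3))  # 0.74 0.692
```

These marginals are the normalizing quantities that any diagnostic (bottom-up) query over this chain divides by.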
This simple example provides an understanding of how a Bayesian network can
be used to reason about simple requirement compliance. Figure 9-4 is a suggested
Bayesian Network for reasoning about total compliance.
In order to reason over a complete set of requirements, a database of all the requirements has to be built, showing all the links between specifications, as shown in Table 7 through Table 10. For each requirement, the determination of whether or not it is compliant must be annotated. The design associated with the component must also be annotated, for example with the number of the document containing the design. For each design element, the expert should determine whether or not it is compliant with the requirement, and this should be added to the database. For each design element, an implementation must also be annotated; this can be done by pointing to the test cases associated with each implementation component. For each of these points, the determination of whether or not they are compliant would then be annotated in the database.

The compliance value (compliant, not compliant) would then be evaluated by an expert, who would assign the numerical values used to populate the conditional probability tables (CPTs) for the Bayesian network. The values associated with the CPTs would change with respect to the particular system being modeled. The Bayesian network in Figure 9-4 could be used to calculate the probability of a system being in compliance with a particular requirement. If the engineer then ran all of the requirements through the Bayesian network, a second algorithm could be used to combine the outcome of each run to give a total probability for the system.
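The combining algorithm is not specified here; one simple candidate, sketched below under an assumption of independence between requirements, is to take the product of the individual compliance probabilities (an unweighted average is another option, with different semantics). The requirement names and values are examples only.

```python
# Combine per-requirement compliance probabilities into system figures.
# Assumes the requirements are independent, which is a simplification.
per_requirement = {"CM-6": 0.30, "SC-7": 0.55, "SC-9": 0.70}

def all_compliant(probs):
    """Probability that every requirement is compliant (independence)."""
    p = 1.0
    for v in probs.values():
        p *= v
    return p

def average(probs):
    """Unweighted average: an alternative summary statistic."""
    return sum(probs.values()) / len(probs)

print(round(all_compliant(per_requirement), 4))  # 0.1155
print(round(average(per_requirement), 4))        # 0.5167
```

The product answers "what is the chance everything is compliant," while the average answers "how compliant is a typical requirement"; which summary the auditor cares about should be agreed with the customer.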
In the following example, the information from the design in Figure 9-4 and the requirements breakdown for CM-6 in Table 7 through Table 10 was used to determine the values associated with the Bayesian network CPTs in Figure 9-4. Using this, it is possible to calculate the probability of the system being in compliance with CM-6.
In looking at the example, the following would be expressed in the database: SS is compliant; FRD is compliant; SDR is compliant; the FA is not compliant; the SSS is compliant; the SI is compliant; the SUPP, HW and SW are all compliant; the SD is not compliant; the CD is compliant; and all the implementations are compliant.
Figure 9-4 is a full-page network diagram that cannot be reproduced here as text. It extends the network of Figure 9-3 with nodes for the Functional Architecture (FA), System Implementation (SI), Subsystem Design (SD), Subsystem Implementation (SSI), the Hardware (HW), Software (SW) and Supplier (SUPP) specifications, Component Design (CD), Component Implementation (CI) and, finally, Regulatory Compliance (RC). Each node carries a conditional probability table; among the entries used in the calculation below are P(RD) = .9, P(SS | RD = C) = .9, P(FRD | SS = C) = .9, P(SDR | SS = C) = .9, P(FA | FRD = C) = .8, and P(HW | SSS = C) = P(SW | SSS = C) = P(SUPP | SSS = C) = .8.

Figure 9-4 Bayesian Network for System Compliance
This can occur because some designers do not follow the process. It is usually not to the benefit of the system, though in some cases it works out.
In order to calculate the compliance, we start with the joint probability:

P(RC HW SW SUPP CD CI)  (15)

Using the chain rule:

P(RC HW SW SUPP CD CI) = P(RC | HW SW SUPP CD CI) P(HW | SW SUPP CD CI) P(SW | SUPP CD CI) P(SUPP | CD CI) P(CD | CI) P(CI)  (16)

Using the rule of conditional independence this can be rewritten as:

P(RC) = P(RC | HW SW SUPP CD CI) P(HW) P(SW) P(SUPP) P(CD | CI) P(CI)
+ P(RC | HW' SW SUPP CD CI) P(HW') P(SW) P(SUPP) P(CD | CI) P(CI)
+ P(RC | HW SW' SUPP CD CI) P(HW) P(SW') P(SUPP) P(CD | CI) P(CI)
+ P(RC | HW SW SUPP' CD CI) P(HW) P(SW) P(SUPP') P(CD | CI) P(CI)
+ P(RC | HW SW SUPP CD' CI) P(HW) P(SW) P(SUPP) P(CD' | CI) P(CI)
+ P(RC | HW SW SUPP CD CI') P(HW) P(SW) P(SUPP) P(CD | CI) P(CI')  (17)
P(HW) = P(HW | SSS) P(SSS) + P(HW | SSS') P(SSS') = .8 P(SSS) + .2 P(SSS')  (18)
P(SSS) = P(SSS | FRD SDR FA) P(FRD | SDR FA) P(FA) + P(SSS | FRD SDR' FA) P(FRD | SDR' FA) P(FA) + P(SSS | FRD SDR FA') P(FRD | SDR FA') P(FA')  (19)

Using the rule of conditional independence this can be rewritten as:

P(SSS) = P(SSS | FRD SDR FA) P(FRD) P(FA) + P(SSS | FRD SDR' FA) P(FRD) P(FA) + P(SSS | FRD SDR FA') P(FRD) P(FA')  (20)

P(SSS) = 0.5 P(FRD) P(FA) + 0.7 P(FRD) P(FA) + 0.9 P(FRD) P(FA')  (21)

P(FRD) = P(FRD | SS) P(SS) + P(FRD | SS') P(SS') = 0.9 P(SS) + 0.1 P(SS')  (22)

P(SS) = P(SS | RD) P(RD) + P(SS | RD') P(RD') = 0.9*0.9 + 0.1*0.9 = 0.9  (23)
Substituting (23) back into (22):

P(FRD) = 0.9*0.9 + 0.1*0.1 = 0.82  (24)
P(FA) = P(FA | FRD) P(FRD) + P(FA | FRD') P(FRD') = 0.8*0.82 + 0.2*(1-0.82) = 0.692  (25)
Substituting back into (21):

P(SSS) = .5*.82*.692 + .7*.82*.692 + .9*.82*(1-.692) = .908  (26)

Substituting back into (18):

P(HW) = .8 P(SSS) + .2 P(SSS') = .8*.908 + .2*(1-.908) = .74  (27)
P(SW) = P(SW | SSS) P(SSS) + P(SW | SSS') P(SSS') = .8 P(SSS) + .2 P(SSS')  (28)

Using the value found in (26):

P(SW) = .8*.908 + .2*(1-.908) = .74  (29)

P(SUPP) = P(SUPP | SSS) P(SSS) + P(SUPP | SSS') P(SSS') = .8 P(SSS) + .2 P(SSS')  (30)

Using the value found in (26):

P(SUPP) = .8*.908 + .2*(1-.908) = .74  (31)
P(CD | CI) = P(CI CD) / P(CI)  (32)

P(CI CD) = P(CI SSI CD) + P(CI SSI' CD)  (33)

= P(CI | SSI CD) P(SSI CD) + P(CI | SSI' CD) P(SSI' CD)  (34)

= 0.9 P(SSI CD) + .1 P(SSI' CD)  (35)

P(SSI CD) = P(SSI) P(CD)  (36)

P(SSI) = P(SSI | SI SD) P(SI | SD) P(SD) + P(SSI | SI' SD) P(SI' | SD) P(SD) + P(SSI | SI SD') P(SI | SD') P(SD')  (37)

Removing conditional dependencies allows this to be rewritten as:
P(SSI) = P(SSI | SI SD) P(SI) P(SD) + P(SSI | SI' SD) P(SI') P(SD) + P(SSI | SI SD') P(SI) P(SD')  (38)

= 0.2 P(SI) P(SD) + 0.1 P(SI') P(SD) + 0.9 P(SI) P(SD')  (39)

P(SI) = P(SI | SS FA) P(SS | FA) P(FA) + P(SI | SS' FA) P(SS' | FA) P(FA) + P(SI | SS FA') P(SS | FA') P(FA')  (40)

Removing conditional dependencies:

P(SI) = P(SI | SS FA) P(SS) P(FA) + P(SI | SS' FA) P(SS') P(FA) + P(SI | SS FA') P(SS) P(FA')  (41)

P(SS) = P(SS | RD) P(RD) + P(SS | RD') P(RD') = .9*.9 + .1*.1 = .82  (42)

Substituting (42) back into (41) and using the value from (25):

P(SI) = .2*.82*.692 + .1*(1-.82)*.692 + .9*.82*(1-.692) = .35  (43)
P(SD) = P(SD | SSS FA) P(SSS | FA) P(FA) + P(SD | SSS' FA) P(SSS' | FA) P(FA) + P(SD | SSS FA') P(SSS | FA') P(FA')  (44)

Rewritten as:

P(SD) = P(SD | SSS FA) P(SSS) P(FA) + P(SD | SSS' FA) P(SSS') P(FA) + P(SD | SSS FA') P(SSS) P(FA')  (45)

Substituting in the values from (25) and (26):

P(SD) = .2*.908*.692 + .1*(1-.908)*.692 + .7*.908*(1-.692) = .34  (46)

Substituting (43) and (46) into (39):

P(SSI) = .2*.35*.34 + .1*(1-.35)*.34 + .9*.35*(1-.34) = .254  (47)
P(CD) = P(CD | HW SW SUPP SD) P(HW | SW SUPP SD) P(SW | SUPP SD) P(SUPP | SD) P(SD)
+ P(CD | HW' SW SUPP SD) P(HW' | SW SUPP SD) P(SW | SUPP SD) P(SUPP | SD) P(SD)
+ P(CD | HW SW' SUPP SD) P(HW | SW' SUPP SD) P(SW' | SUPP SD) P(SUPP | SD) P(SD)
+ P(CD | HW SW SUPP' SD) P(HW | SW SUPP' SD) P(SW | SUPP' SD) P(SUPP' | SD) P(SD)
+ P(CD | HW SW SUPP SD') P(HW | SW SUPP SD') P(SW | SUPP SD') P(SUPP | SD') P(SD')  (48)

Rewritten as:

P(CD) = P(CD | HW SW SUPP SD) P(HW) P(SW) P(SUPP) P(SD)
+ P(CD | HW' SW SUPP SD) P(HW') P(SW) P(SUPP) P(SD)
+ P(CD | HW SW' SUPP SD) P(HW) P(SW') P(SUPP) P(SD)
+ P(CD | HW SW SUPP' SD) P(HW) P(SW) P(SUPP') P(SD)
+ P(CD | HW SW SUPP SD') P(HW) P(SW) P(SUPP) P(SD')  (49)

P(CD) = 0.8*0.74*0.74*0.74*0.34 + 0.3*(1-0.74)*0.74*0.74*0.34 + 0.6*0.74*(1-0.74)*0.74*0.34 + 0.5*0.74*0.74*(1-0.74)*0.34 + 0.9*0.74*0.74*0.74*(1-0.34) = 0.42  (50)
Substituting back into (35) and (36):

P(CI CD) = 0.9 P(SSI) P(CD) + 0.1 P(SSI') P(CD) = 0.9*0.254*0.42 + 0.1*(1-0.254)*0.42 = 0.13  (51)
P(CI) = P(CI | CD SSI) P(CD | SSI) P(SSI) + P(CI | CD' SSI) P(CD' | SSI) P(SSI) + P(CI | CD SSI') P(CD | SSI') P(SSI')  (52)

Rewritten as:

P(CI) = P(CI | CD SSI) P(CD) P(SSI) + P(CI | CD' SSI) P(CD') P(SSI) + P(CI | CD SSI') P(CD) P(SSI')  (53)

Substituting in the values from (47) and (50):

P(CI) = 0.9*0.42*0.254 + 0.2*(1-0.42)*0.254 + 0.1*0.42*(1-0.254) = 0.16  (54)

Substituting back into (32):

P(CD | CI) = 0.13 / 0.16 = 0.813  (55)
Substituting back into (17):

P(RC) = 0.9*0.74*0.74*0.74*0.813*0.16 + 0.9*(1-0.74)*0.74*0.74*0.813*0.16 + 0.9*0.74*(1-0.74)*0.74*0.813*0.16 + 0.9*0.74*0.74*(1-0.74)*0.813*0.16 + 0.8*0.74*0.74*0.74*(1-0.813)*0.16 + 0.7*0.74*0.74*0.74*0.813*(1-0.16) = 0.30  (56)
Therefore the probability of the system being compliant with CM-6, given that everything except the Subsystem Design and the Functional Architecture is correct, is 30%.
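The final substitution can be checked mechanically. This sketch simply re-evaluates the expansion used in (56) with the values derived above; it reproduces the hand calculation rather than running a general-purpose inference algorithm.

```python
# Numerical check of the expansion in (56) for P(RC), using the values
# derived in the text: P(HW) = P(SW) = P(SUPP) = .74, P(CD|CI) = .813,
# P(CI) = .16, and the CPT entries for P(RC | ...).
hw = sw = supp = 0.74
cd_given_ci = 0.813
ci = 0.16

p_rc = (0.9 * hw * sw * supp * cd_given_ci * ci
        + 0.9 * (1 - hw) * sw * supp * cd_given_ci * ci
        + 0.9 * hw * (1 - sw) * supp * cd_given_ci * ci
        + 0.9 * hw * sw * (1 - supp) * cd_given_ci * ci
        + 0.8 * hw * sw * supp * (1 - cd_given_ci) * ci
        + 0.7 * hw * sw * supp * cd_given_ci * (1 - ci))

print(f"{p_rc:.2f}")  # 0.30
```

Scripting the substitution this way also makes it easy to see how the final figure moves when an expert revises a single CPT entry.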
9.5 Conclusion
As the previous examples have shown, a Bayesian network can be created to handle the probabilistic difficulties that regulatory compliance brings. The expert observations captured in the CPT for an implementation node (e.g., Subsystem Design, Component Design) help in reasoning about the probability of meeting compliance. This means that even with the probabilistic influence of the requirement decomposition, the overall probability that the implementation meets regulatory compliance can be calculated so as to incorporate this knowledge.
There are several tools available, such as OpenMarkov or Stan, for those interested in developing Bayesian networks [155],[156]. These tools allow the user to build a Bayesian network, such as the one modeled in Figure 9-4, and then assign values to it. They could be used to create a Bayesian network representing a system's level of regulatory compliance, with the values updated over time so that the engineer gains an understanding of the probability of compliance. The tools are also convenient in that they spare users from preparing all of the formulas shown above, which is time consuming by hand.
This information could then be shared with the customer or auditor as part of the discussion on where the system is headed and what risks might be at hand. Regulatory compliance is critical in sectors such as finance, healthcare and the Department of Defense, all of which could benefit from the use of probabilistic modeling.
Chapter 10 Lessons Learned
The following chapter offers my insights from applying security to projects in industry. The ability to take lessons from both successes and failures is essential for any security engineer. I do not claim that these are necessarily new ideas, but offer them as lessons learned, provided to aid other engineers in applying the methods described throughout this dissertation more effectively.
10.1 Cryptography
If the system being developed or modified is a National Security System and will involve cryptography, ensure that your customer has a National Security Agency (NSA) liaison. The customer in these instances should already have a relationship with NSA before a program begins. In the event that this is overlooked, and there is a good working relationship with the customer, it is prudent for the security engineers to engage the customer on obtaining NSA support. This will allow for smoother interaction when receiving key material, as well as assurance that the cryptography was implemented according to cryptographic security doctrine.
Cryptography can be one of the largest expenses incurred on a program, and involving NSA early in the process can reduce risk. There are particular problems associated with the application of cryptography in unmanned systems that must be dealt with as early as possible in the life-cycle. Done incorrectly, this can lead to disastrous problems for a program.
The development of new cryptographic algorithms or equipment is an expensive and time-consuming affair. If at all possible it should be avoided; the use of already approved equipment should be considered first.
If the system does not require NSA involvement, the security team should at a minimum follow the National Institute of Standards and Technology (NIST) standards for cryptography. NIST provides standards for federal systems that can be considered a best practice for non-government systems as well.
For foreign military sales (FMS) there are some types of cryptography that cannot be sold. The security team dealing with these types of systems must ensure that the products used are releasable to the particular country. Aside from working with the local International Traffic in Arms Regulations (ITAR) expert, for some FMS cases the NSA will be able to provide guidance on what cryptography is appropriate.
10.1.1.1 TEMPEST
It will be noted that there is a lack of research on TEMPEST in this dissertation. The reason is that TEMPEST is a complex issue in and of itself and was left out due to scope. Some programs may have a specific team to handle TEMPEST requirements. The isolation of red and black signals is a complex task that should be handled by an emissions expert, who can be included on the IA team or sit on a separate team. There are TEMPEST requirements in many of the IA requirement sets, which means that if an emissions expert is not available, it is the responsibility of the IA team to ensure that the TEMPEST requirements are satisfied.
Even when the intent is to bake in IA from the beginning of a program, with the use of COTS/GOTS items it is not always possible to fix security issues at the component level. In such cases it is critical that the security architecture account for this and provide adequate protection throughout the system to reduce the risk. The concept of defense in depth embodies this idea.
A good working relationship, although important on any team, is critical to establishing a strong security posture. The overarching security requirements tend to be written for enterprise solutions; when dealing with a tactical system, they have to be interpreted to fit. This interpretation can cause contention between the members of an IA team if they do not have a good working relationship. It can be alleviated by working with the customer to ensure that agreement is reached on requirements interpretation. Some customers will require stricter adherence than others; justification is key. It is also important to establish a hierarchy when dealing with suppliers. If this is not done, misinformation will flow back to the customer and result in a poor understanding of the security architecture on the customer's part.
Setting expectations with the customer early on is also important. Not every risk will be mitigated, so it is important to understand the expectations and potential outcomes of not mitigating a risk. Many security engineers forget that the goal is risk mitigation, not risk elimination. Although it would be ideal to remove all risks, doing so is typically neither within budget nor practical. New vulnerabilities are being discovered daily, such as those discussed in [157]. The security team should strive to reduce risk within budget and schedule; they cannot continually redesign the system to mitigate each new vulnerability. Once the risk is reduced to an acceptable level, that should be the final design.
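The budget-bounded view of risk reduction described above can be sketched as a simple prioritization exercise. The mitigation names, costs, and risk-reduction values below are purely hypothetical; a real program would derive them from its own risk assessments:

```python
# Hypothetical mitigations: (name, cost, estimated risk reduction).
mitigations = [
    ("harden OS baseline",  40, 0.30),
    ("add network IDS",    100, 0.35),
    ("code review tooling", 25, 0.20),
    ("full redesign",      500, 0.60),
]

def prioritize(mitigations, budget):
    """Greedy selection by risk reduction per unit cost, within budget."""
    ranked = sorted(mitigations, key=lambda m: m[2] / m[1], reverse=True)
    chosen, spent = [], 0
    for name, cost, reduction in ranked:
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

chosen, spent = prioritize(mitigations, budget=150)
print(chosen, spent)  # ['code review tooling', 'harden OS baseline'] 65
```

Note that the expensive "full redesign" is never selected: it buys the most absolute risk reduction but the least per dollar, which mirrors the argument that continual redesign against every new vulnerability is not a sustainable strategy.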
10.1.2 Buy-in
Security is a necessary part of system development, but it is often dismissed or ignored until too late. Getting buy-in from the chief engineer, IPT leads, and program management will significantly ease the inclusion of protection. Without top-down support, IA mitigations can be hampered, which can lead to catastrophic consequences such as a denial of approval to operate.
One of the ways that management can provide support is in ensuring that
requirements are written correctly, concisely, clearly and in a contractually binding
manner. Without clear requirements, IA will never be adequately included.
Chapter 11 Future Research and Conclusion
In this research, several topics associated with advancing the security of regulatory constrained platforms were explored. The systems engineering life-cycle was evaluated and its security elements were synthesized into a new, cohesive system security engineering life-cycle that works within the acquisition process for regulatory constrained systems. A new methodology for developing security architectures was designed. To ensure that every aspect of developing a regulatory constrained platform was addressed, several issues were covered in conjunction with the system security engineering life-cycle and architecture development, including requirements decomposition, software assurance, the cost of security, cryptography, threat motivation, and the introduction of a new way to calculate the probability of compliance through the use of Bayesian Networks.
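As a toy illustration of the Bayesian Network idea, the probability of compliance can be obtained by marginalizing over a parent node with the chain rule of probability. The two-node structure and the probabilities here are hypothetical, not those of the network in Chapter 9:

```python
# P(control implemented correctly) -- a hypothetical prior.
p_control_ok = 0.8

# Conditional probability table: P(compliant | control state).
p_compliant_given = {True: 0.95, False: 0.30}

# Marginalize over the parent: P(C) = sum_s P(C | S=s) * P(S=s)
p_compliant = (p_compliant_given[True] * p_control_ok +
               p_compliant_given[False] * (1 - p_control_ok))
print(round(p_compliant, 2))  # 0.82
```

Larger networks repeat this marginalization node by node, which is why tools such as OpenMarkov [155] or Stan [156] become useful once the requirement set grows beyond a handful of controls.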
11.1 Known Concerns and Solutions
In this section, some of the concerns and issues that were encountered before and during the research process are detailed, along with their solutions.
1. Because this work was done while I was working on specific platforms, it was necessary to ensure that no proprietary or ITAR-restricted research would be divulged in this final work. The work also needed to be applicable in a public setting. This led to many literature surveys to ensure that all work contained herein was unrestricted and useful.
2. One specific area of concern is the cost of doing security. I found that, because metrics do not exist, there is currently no way to determine the true cost of including security from the beginning. Although I went down the road of trying to perform a comprehensive cost analysis, there was no data on which to base such a study. Instead, this dissertation provides the metrics that should be kept so that a complete cost analysis can be done in future research.
3. When approaching the systems engineering life-cycle, it was understood that there is a difference between the customer acquisition life-cycle and traditional systems engineering. These two processes needed to be correlated before security processes could be introduced. In the end, I was able to correlate the processes and include security as part of this work.
4. Security architecture conjures up various meanings; depending on the engineer using the term, you will get different definitions. One of the difficulties faced in this research was coming up with a definition that fit within the realm of regulatory constrained environments and included all the various security aspects that are required. This research not only defined what a security architecture should mean, but also how to go about creating one.
11.2 Evaluation of Success Criteria
The following is a self-evaluation of the success criteria of this research work:
1. A proprietary tool was developed, allowing system security engineers to look at the security posture of an architecture as it is being created. The tool provides a way to model IA requirements, supporting the Department of Defense (DoD) Instruction (DoDI) 8500.2 and the ability to add other requirement sets, such as NIST SP 800-37. The tool allows the user to lay out IA requirements against the system, subsystems, and components, and supports continuous measurement and tracking so that the architect can see how the system is progressing in relation to the requirements. It also incorporates Security Technical Implementation Guides so that the engineer can review the configuration vulnerabilities as well. Because this tool is not publicly releasable, an unrelated concept was developed for the dissertation, resulting in the Bayesian Network discussed in Chapter 9.
2. Research into threats has resulted in new ideas on the motivation behind threats.
The idea was to create a superset of threats, from which a subset could be applied
to any program. This research is continuing in a joint effort with Dr. Chow.
3. One of the requests was to provide a relative cost survey of including security in a system from the beginning as opposed to bolting it on later. As this research proceeded, it was found to be impossible to gather information from programs that had added security at the end, anonymize it, and compare the relative costs across programs, because no security cost metrics exist for any available programs. Instead, the research focused on what the metrics need to be, based on working through security on multiple programs and the fundamentals of the DoD Information Assurance Certification and Accreditation Process (DIACAP). This resulted in a presentation at the IEEE Software Technology Conference on April 11, 2013.
4. I was asked to look at the CrossTalk DoD journal as a possible publication arena. CrossTalk has accepted two papers for publication on security requirements engineering [158].
5. I completed research on secure coding and software assurance, as well as on adding security into the systems engineering life-cycle. The total research has resulted in an end-to-end approach to doing IA on DoD programs.
11.3 Contributions
This research contributes to the advancement of securing regulatory constrained
systems in an efficient manner.
1. A new method for including security into the overall systems engineering and
regulatory constrained acquisition life-cycle.
2. A new approach to designing and developing security architectures.
3. A new set of metrics to begin the task of determining the cost of security.
4. A proprietary tool to provide a more efficient approach to developing a secure
platform. This was provided to the committee only as it contains proprietary and
ITAR restricted information.
5. In lieu of the tool, the dissertation contains a proposed Bayesian Network for
predicting the probability of regulatory compliance.
11.4 Future Research
There are a few areas of research that warrant further investigation. The first is to expand the use of the Bayesian Network to cover other aspects of regulatory compliance. Secondly, there is still much work to do in the area of threats; I would like to continue work on ways of modeling threat motivations and their impacts on systems. This work could affect multiple areas of regulatory compliance.
Lastly, I would like to continue research on how each of these areas impacts the cyber-physical relationship. Although the focus of this research has been on Ethernet and IP traffic, future research should take into consideration other types of communication, such as ARINC 429.
11.5 Conclusion
Safeguarding the security of our systems is critical to protect the users of these platforms and, in turn, to protect people around the world from harm. The methodologies introduced here ensure that security is completely ingrained into these systems in a verifiable manner. They reassure those making the final decision with respect to acceptable risk that the systems are secure, and they allow system security engineers to reduce cost by increasing efficiency while being confident that they are providing a secure system.
Works Cited
[1]
B. Pfitzmann, "Multi-layer Audit of Access Rights," SDM 2007; LNCS 4721, p. 18–
32, 2007.
[2]
North American Electric Reliability Council, "Cyber-Access Control, Version 1.0,"
in NERC, June 14, 2002.
[3]
Better Business Bureau, "A Review of Federal and State Privacy Laws," [Online].
Available: http://www.law.msu.edu/clinics/sbnp/resources/RevFedStatePrivacy.pdf.
[Accessed 2012].
[4]
K. J. Nahra, "HIPAA Security Enforcement is Here," IEEE Security and Privacy, 2008.
[5]
Department of Defense Instruction 8500.2, Information Assurance (IA)
Implementation, February 6, 2003.
[6]
National Institute of Standards and Technology Special Publication 800-37 R1, Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach, Gaithersburg, MD 20899-8930: Computer Security Division, Information Technology Laboratory, National Institute of Standards and Technology, February 2010.
[7]
National Institute of Standards and Technology, Special Publication 800-53 Revision 3, Recommended Security Controls for Federal Information Systems and Organizations (includes updates as of 05-01-2010), Gaithersburg, MD, August 2009.
[8]
A. Moscaritolo, "Report: Cyberattacks against the U.S. "rising sharply"," SC Magazine, November 2009.
[9]
S. F. Jr., "Cyber Attacks on Feds Soar 680% In 6 Years: GAO," AOL Defense: Intel
and Cyber, 24 April 2012.
[10] B. Fabian, S. Gurses, M. Heisel, T. Santen and H. Schmidt, "A comparison of
security requirements engineering methods," in Requirements Engineering, London,
Springer-Verlag, 2009, pp. 7-40.
[11] Department of Health and Human Services, "45 CFR Parts 160, 162, and 164 Health
Insurance Reform: Security Standards, Final Rule," Federal Register, 2003.
[12] Department of Defense Directive 8570.01-M, "Information Assurance Workforce
Improvement Program," Assistant Secretary of Defense for Networks and
Information Integration/Department of Defense Chief Information Officer, January
24, 2012.
[13] A. Appari, M. E. Johnson and D. Anthony, "HIPAA Compliance: An Institutional Theory Perspective,"
in Americas Conference on Information Systems, California, 2009.
[14] A. Kossiakoff and W. Sweet, Systems Engineering Principles and Practice, © John
Wiley and Sons Inc., 2003.
[15] R. Wita, N. Jiamnapanon and Y. Teng-amnuay, "An Ontology for Vulnerability
Lifecycle," in 3rd Int. Symposium on Intelligent Information Technology and
Security Informatics, 2010 © IEEE.
[16] R. Kissel and et al., Security Considerations in the System Development Life Cycle,
Gaithersburg, MD.: NIST, October 2008.
[17] S. Jin, "A Review of Classification Methods for Network Vulnerability," in Proc. Of
2009 IEEE Int. Conf. on Systems, Man and Cybernetics, San Antonio, TX, October
2009 © IEEE.
[18] J. Sherwood, SALSA: A Method for Developing the Enterprise Security Architecture
and Strategy, 1996 © Published by Elsevier Science Ltd. doi:10.1016/S0167-4048(97)83124-0.
[19] R. Shirey, Defense Data Network Security Architecture, MITRE Corporation.
[20] S. Al-Fedaghi, "System-Based Approach to Software Vulnerability," in IEEE intl.
Conf on Social Computing/ IEEE Int. Conf. on Privacy, Security, Risk and Trust,
2010 © IEEE.
[21] S. Amer and J. Hamilton, Jr., "Understanding Security Architecture," no. SpringSim,
2008.
[22] Wikipedia, " Department of Defense Architecture Framework," 14 December 2010.
[Online]. Available:
http://en.wikipedia.org/wiki/Department_of_Defense_Architecture_Framework.
[23] Department of Defense Instruction 5200.39, “Critical Program Information (CPI)
Protection, Critical Program Information (CPI) Protection Within the Department of
Defense, 2008.
[24] B. Chess and J. West, Secure Programming with Static Analysis, Addison-Wesley
Software Security Series, 2007.
[25] S. Udupa, S. Debray and M. Madou, "Deobfuscation Reverse Engineering
Obfuscated Code," in 12th Working Conference on Reverse Engineering, 2005.
[26] H. Hartig, "Security Architectures Revisited," in EW 10 Proc. 10th Workshop on
ACM SIGOPS European workshop, 2002 © ACM.
[27] National Information Assurance Glossary, "CNSS Instruction No. 4009," 26 April
2010.
[28] National Security Agency, "Suite B Implementer’s Guide to NIST SP 800-56A,"
July 28, 2009.
[29] Wikipedia, "Information Security," 10 December 2010. [Online]. Available:
http://en.wikipedia.org/wiki/Information_security.
[30] National Institute of Standards and Technology, “NIST SP 800-30: Guide for Conducting Risk Assessments,” Department of Commerce, Gaithersburg, 2012.
[31] F. Williams, “Sarbanes, Oxley and You,” 1 October 2003. [Online]. Available:
http://www.csoonline.com. [Accessed 30 April 2013].
[32] Computer Systems Laboratory Bulletin, "Computer Security Roles of NIST and
NSA," February 1991. [Online]. Available:
http://csrc.nist.gov/publications/nistbul/csl91-02.txt. [Accessed 3 March 2013].
[33] U. C. D. M. Office, "UCDMO," US Government, 2013. [Online]. Available:
http://www.ucdmo.gov/about/. [Accessed 3 March 2013].
[34] A. Elkins, J. Wilson and D. Gracanin, "Security Issues in High Level Architecture
Based Distributed Simulation," in Proceedings of the 2001 Winter Simulation
Conference, 2001 © WSC’01.
[35] M. Jones, "Corporate Governance - SOX Best Practices Paper," [Online]. Available:
http://www.iim-edu.org/corporategovernancesarbanesoxleybestpractices/. [Accessed
29 April 2013].
[36] Department of Defense Instruction No. 5200.44, "Protection of Mission Critical
Functions to Achieve Trusted Systems and Networks (TSN)," November 5, 2012.
[37] Defense Acquisition University, "Defense Acquisition Guidebook," Department of
Defense, November 2012. [Online]. Available:
https://dag.dau.mil/Pages/acqframework.aspx. [Accessed 3 March 2012].
[38] National Security Telecommunications and Information Systems Security
Committee, National Information Assurance Certification and Accreditation
Process, NSTISSI No. 1000, NSTISSI, 2000.
[39] ISACA, COBIT 5, Rolling Meadows: ISACA, 2012.
[40] B. Brenner, SaS 70 Replacement: SSAE 16, Data Protection, 2010.
[41] US Department of Health and Human Services, “HITECH Act Enforcement Interim
Final Rule,” HHS, 17 February 2009. [Online]. Available:
http://www.hhs.gov/oci/privacy/hipaa/administrative/enforcementrule. [Accessed 29
April 2013].
[42] F. Guo, Y. Yu and T. Chiueh, "Automated and Safe Vulnerability Assessment," in
Proc. 21st Annual Computer Security Applications Conference, 2005 © IEEE.
[43] A. Haidar and et al., "Vulnerability Assessment and Control of Large Scale
Interconnected Power Systems Using Neural Networks and Neuro-Fuzzy
Techniques," in Australaisan Universities Power Engineering Conference, 2008.
[44] A. Haidar et al. , "Vulnerability Assessment of Power System Using Various
Vulnerability Indices," in 4th Student Conf. on Research and Development,
Malaysia, 27-28 2006 © IEEE.
[45] C. Ten, C. Liu and M. Govindarasu, "Vulnerability Assessment of Cybersecurity for
SCADA Systems Using Attack Trees," 2007 © IEEE.
[46] L. Lowis and R. Accorsi, "Vulnerability Analysis in SOA-based Business
Processes," IEEE Transactions on Services Computing, 2010 © IEEE.
[47] C. Fu, "Research of Security Vulnerability in the Computer Network," 2010 © IEEE.
[48] R. M. Steinberg, “The High Cost of Non-Compliance Reaping the Rewards on an
Effective Compliance Program,” February 2010. [Online]. Available:
https://www.securityexecutivecouncil.com. [Accessed 30 April 2013].
[49] C. Ransburg-Brown, HIPAA: The Cost of Non-Compliance, Birmingham:
Birmingham Medical News, 2012.
[50] Department of Defense, “DoD 5220.22-M National Industrial Security Program:
Operating Manual,” Department of Defense, 2006.
[51] J. Atlee and B. Cheng, "Research Directions in Requirements Engineering," Future
of Software Engineering (FOSE'07), IEEE, 2007.
[52] K. Montry, "IA Risk Assessment Process," in Proc. of the 2005 IEEE Workshop on
Information Assurance and Security, United States Military Academy, West Point,
NY, 2005.
[53] J. Sherwood et al., Enterprise Security Architecture: A Business-Driven Approach,
San Francisco, CA: CMP Books, 2005.
[54] National Security Agency Information Assurance Solutions Technical Directors,
"Information Assurance Technical Framework, Release 3.0," September 2000.
[55] M. Oda, H. Fu and Y. Zhu, "Enterprise Information Security Architecture A Review
of Frameworks, Methodology and Case Studies," 2009 © IEEE.
[56] C. Irvine and et al., "A Security Architecture for Transient Trust," in CSAW’08
October 31, 2008 © ACM.
[57] M. Neilforoshan, Network Security Architecture, © Consortium for Computing
Sciences in Colleges, 2004.
[58] L. Zhuang, C. Li and X. Zhang, "A Novel Architecture for Trusted Computing on
Public Endpoints," in 2nd Int. Conf. Networks Security, Wireless Communications
and Trusted Computing, 2010 © IEEE.
[59] M. Reiter, K. Birman and R. V. Renesse, "A Security Architecture for Fault-Tolerant
Systems," ACM Transactions on Computer Systems, vol. 12, no. 4, pp. 340-371,
November 1994.
[60] C. Blackwell, "A Multi-layered Security Architecture for Modelling Complex
Systems," in CSIIRW’08 May 12-14, 2008 © ACM.
[61] T. Levin and et al., "Analysis of Three Multilevel Security Architectures," in
CSAW’07, 2007 © ACM.
[62] M. Hafiz, "Security Patterns and Evolution of MTA Architecture," in OOPSLA’05,
2005 © ACM.
[63] H. Susanto and F. b. Muhaya, Multimedia Information Security Architecture
Framework, 2010 © IEEE.
[64] Wikipedia, "Zachman Framework," 10 December 2010. [Online]. Available:
http://en.wikipedia.org/wiki/Zachman_Framework.
[65] SABSA Ltd. 2010 ©SABSA, "The SABSA Method," [Online]. Available:
http://www.sabsa-institute.org/.
[66] SABSA Ltd. 2010 ©SABSA, "The SABSA Matrix," [Online]. Available:
http://www.sabsa.org/the-sabsa-method/the-sabsa-matrix.aspx.
[67] B. Tulu and S. Chatterjee, "A New Security Framework for HIPAA Compliant Health
Information Systems," in Ninth Americas Conference on Information Systems, 2003.
[68] Deparment of Defense Deputy Chief Information Officer, “Department of Defense
Architecture Framework Instruction 2.0,” DoDAF Journal, 2011.
[69] Department of Defense Instruction 8510.01, DoD Information Assurance
Certification and Accreditation Process (DIACAP), November 28, 2007.
[70] Gartner, “Updates in COBIT 5 Aim for Greater Relevance to Wider Business
Audience,” Gartner, 2012.
IBM, "IBM Rational DOORS," IBM, [Online]. Available: http://www-142.ibm.com/software/products/us/en/ratidoor/. [Accessed 7 March 2013].
[72] Department of the Army, "FM 3-19.30: Physical Security," Headquarters
Department of the Army, Washington, DC, 8 January 2001.
[73] Committee on National Security Systems CNSS Instruction No. 4031,
"Cryptographic High Value Products," 16 February 2012.
[74] National Security Agency Manual 912, NSA/CSS STORAGE DEVICE
DECLASSIFICATION MANUAL, November 10, 2000.
[75] A. Steinberg, A Plan for Threat Assessment and Response, 1988 IEEE.
[76] P. Torr, "Demystifying the Threat-Modeling Process," IEEE Security & Privacy, 2005 IEEE.
[77] A. Steinberg, "An Approach to Threat Assessment," in 2005 7th International
Conference on Information Fusion (FUSION), IEEE, 2005.
[78] W. Rippon, Threat Assessment of IP Based Voice Systems, VoIP MaSe, 2006, IEEE.
[79] E. Little and G. Rogova, "An Ontological Analysis of Threat and Vulnerability," in
2006 9th Annual Conference on Information Fusion, IEEE, 2006.
[80] D. McKinney, Vulnerability Bazaar, IEEE Computer Society, 2007 © IEEE.
[81] Information Assurance Technology Analysis Center (IATAC) and Data and
Analysis Center for Software (DACS), "Software Security Assurance," State-of-the-Art Report (SOAR), July 31, 2007.
[82] J. Jarzombek, "ENHANCING THE DEVELOPMENT LIFE CYCLE TO
PRODUCE SECURE SOFTWARE, Version 2.0," Department of Defense (DoD)
Information Analysis Center (IAC), October 2008 .
[83] CMMI Institute, "CMMI Levels," Carnegie Mellon, [Online]. Available:
http://cmmiinstitute.com/cmmi-solutions/cmmi-appraisals/cmmi-levels/. [Accessed 9
March 2013].
[84] DISA, Application Security and Development STIG Version 2, Release 1, DISA
Field Security Operations, July 24, 2008.
[85] Certification Authorities Software Team, "Position Paper CAST-17: Structural
Coverage of Object Code," Federal Aviation Authority, 2003.
[86] D. Jackson and E. Kang, "Separation of Concerns for Dependable Software Design," in Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research, Association for Computing Machinery, pp. 173-176, 2010.
[87] C. Dougherty et al., "Secure Design Patterns," Software Engineering Institute,
Carnegie Mellon, October 2009.
[88] DISA, "Database STIG Version 8 Release 1," DISA Field Security Operations,
September 19, 2007.
[89] Microsoft Corporation © 2010, "Detecting and Correcting Managed Code Defects,"
[Online]. Available: http://msdn.microsoft.com/en-us/library/y8hcsad3.aspx.
[90] osalt, "Open Sources as Alternative," Airflake ApS, 2006-2012. [Online]. Available:
http://www.osalt.com/visual-studio. [Accessed 9 March 2013].
[91] E. Juhnke et al., "LCDL: An Extensible Framework for Wrapping Legacy Code," ACM iiWAS, 2009.
[92] H. M. Sneed, "Encapsulation of Legacy Software: A Technique for Reusing Legacy
Software Components," Annals of Software Engineering, vol. 9, pp. 293-313, 2000.
[93] G. Terstyanszky et al., "Security Mechanisms for Legacy Code Applications in GT3 Environment," in 13th Euromicro Conference on Parallel, Distributed and Network-Based Processing, 2005.
[94] Open Tube, "10+ Free Tools for Static Code Analysis," 31 March 2009. [Online].
Available: http://open-tube.com/10-free-tools-for-static-code-analysis/. [Accessed 10
March 2013].
[95] Apache Software Foundation, "Apache Subversion," Apache, 2001. [Online].
Available: http://subversion.apache.org/. [Accessed 10 March 2013].
[96] Bugzilla, "Bugzilla," bugzilla.org, 19 February 2013. [Online]. Available:
http://www.bugzilla.org/. [Accessed 10 March 2013].
[97] Wikipedia, "MIL-STD-498," January 13, 2013 . [Online]. Available:
http://en.wikipedia.org/wiki/MIL-STD-498.
[98] Wikipedia, "SQL Injection," February 23, 2013. [Online]. Available:
http://en.wikipedia.org/wiki/SQL_injection.
[99] Wikipedia, "Fuzz Testing," February 6, 2013. [Online]. Available:
http://en.wikipedia.org/wiki/Fuzz_testing.
[100] M. Eddington, "Peach Fuzz Platform," [Online]. Available: http://peachfuzzer.com/.
[101] E. R. Harlod, "Fuzz Testing," 26 September 2006. [Online]. Available:
http://www.ibm.com/developerworks/java/library/j-fuzztest/index.html. [Accessed
10 March 2013].
[102] E. Bounimova, P. Godefroid and D. Molnar, "Billions and Billions of Constraints: Whitebox Fuzz
Testing in Production," [Online]. Available:
http://research.microsoft.com/pubs/165861/main-may10.pdf. [Accessed 10 March
2013].
[103] National Institute of Standards and Technology, "Federal Information Processing
Standard 180-1," Department of Commerce, Gaithersburg, 2002.
[104] Microsoft Corporation © 2010, "Security (C# Programming Guide)," [Online].
Available: http://msdn.microsoft.com/en-us/library/ms173195.aspx.
[105] P. Wayner, Translucent Databases, Flyzone Press, 2002.
[106] Microsoft Corporation © 2010, "Checked (C# Reference)," [Online]. Available:
http://msdn.microsoft.com/en-us/library/74b4xzyw.aspx.
[107] R. Seacord, Secure Coding in C and C++, Addison Wesley, 2006.
[108] Microsoft Corporation © 2010, "Security and User Input," [Online]. Available:
http://msdn.microsoft.com/en-us/library/sbfk95yb.aspx.
[109] Microsoft Corporation © 2010, "Security and On-the-Fly Code Generation,"
[Online]. Available: http://msdn.microsoft.com/en-us/library/x222e4ce.aspx.
[110] Microsoft Corporation © 2010, "Security and Race Conditions," [Online]. Available:
http://msdn.microsoft.com/en-us/library/1az4z7cb.aspx.
[111] Microsoft Corporation © 2010, "Secure Coding Guidelines," [Online]. Available:
http://msdn.microsoft.com/en-us/library/8a3x2b7f.aspx.
[112] Microsoft Corporation © 2010, "Security Best Practices for C++," [Online].
Available: http://msdn.microsoft.com/en-us/library/ee480151.aspx.
[113] Microsoft Corporation © 2010, "Dangerous Permissions and Policy Administration,"
[Online]. Available: http://msdn.microsoft.com/en-us/library/wybyf7a0.aspx.
[114] Microsoft Corporation © 2010, "Unsafe Code and Pointers (C# Programming Guide)," [Online]. Available: http://msdn.microsoft.com/en-us/library/t2yzs44b.aspx.
[115] Foundstone®, Inc. and CORE Security Technologies 2000, "Security in the
Microsoft® .NET," [Online]. Available:
http://www.foundstone.com/us/resources/whitepapers/dotnet-securityframework.pdf.
[116] Microsoft Corporation © 2010, "Permission Requests," [Online]. Available:
http://msdn.microsoft.com/en-us/library/d17fa5e4.aspx.
[117] Microsoft Corporation © 2010, "Securing Wrapper Code," [Online]. Available:
http://msdn.microsoft.com/en-us/library/6f5fa4y4.aspx.
[118] Microsoft Corporation © 2010, "How to: Run Partially Trusted Code in a Sandbox,"
[Online]. Available: http://msdn.microsoft.com/en-us/library/bb763046.aspx.
[119] Microsoft Corporation © 2010, "Securing Method Access," [Online]. Available:
http://msdn.microsoft.com/en-us/library/c09d4x9t.aspx.
[120] Microsoft Corporation © 2010, "Securing State Data," [Online]. Available:
http://msdn.microsoft.com/en-us/library/39ww3547.aspx.
[121] Microsoft Corporation © 2010, "Security and Serialization," [Online]. Available:
http://msdn.microsoft.com/en-us/library/ek7af9ck.aspx.
[122] R. Hall, "Oscar Security, Release version: 1.0.5," 13 May 2005. [Online]. Available:
http://oscar.ow2.org/security.html.
[123] G. McGraw, "Twelve rules for developing more secure Java code," Javaworld, Dec.
1, 1998.
[124] Oracle, "Secure Coding Guidelines for the Java Programming Language, Version
3.0," [Online]. Available: http://java.sun.com/security/seccodeguide.html.
[125] Oracle, "C Library Functions," [Online]. Available:
http://hub.opensolaris.org/bin/view/Community+Group+security/funclist, Note: site
not available after March 2013.
[126] Wikipedia, "Scanf Format," 18 February 2013. [Online]. Available:
http://en.wikipedia.org/wiki/Scanf .
[127] H. Chen, J. Liu and S. Gu, "Cost model based on software-process and Process
Oriented Cost System," in 2008 Int’l Conf. on Computer Science and Software
Engineering, IEEE, 2008.
[128] E. Kwak, G. Kim and J. Yoo, "Network Operation Cost Model to Achieve Efficient
Operation and Improving Cost Competitiveness," ICACT 2011, Feb. 13-16, 2011.
[129] E. Orlandi, "The Cost of Security," in Procs. 25th Annual 1991 IEEE International
Carnahan Conference on Security Technology, 1991.
[130] Ross, "Uses, abuses, and alternatives to the net-present-value rule," in Financial Management, 1995, pp. 96-102.
[131] D. Frick, "Embracing Uncertainty In DoD Acquisition," pp. 355-372, July 2010.
[132] P. Amey, "Correctness by Construction: Better Can Also Be Cheaper," Cross Talk,
vol. 15, no. 3, pp. 24-28, March 2002.
[133] D. Zowghi and N. Nurmuliani, "A Study of the Impact of Requirements Volatility on
Software Project Performance," in Proceedings of the ninth Asia Pacific Software
Engineering Conference (APSEC’02), IEEE, 2002.
[134] Federal Aviation Authority, "CAST position papers," FAA, [Online]. Available:
http://www.faa.gov/aircraft/air_cert/design_approvals/air_software/cast/cast_papers/.
[Accessed 10 March 2013].
[135] W. Royce, "Software Project Management: A Unified Framework," Massachusetts,
Addison-Wesley, 1998, pp. 201-203.
[136] J. Davis, "The Affordable Application of Formal Methods to Software Engineering,"
in Proceedings of the 2005 annual ACM SIGAda international conference on Ada,
ACM, New York, NY, 2005.
[137] B. A. Forouzan, Introduction to Cryptography and Network Security, New York,
NY: McGraw-Hill, 2008.
[138] E. Barker and et al., "Recommendation for Key Management Part 2: Best Practices
for Key Management Organization," NIST Special Publication 800-57, March 2007.
[139] E. Barker and et al., "Recommendation for Key Management, Part 3: Application
Specific Key Management Guidance," NIST Special Publication 800-57, August
2008.
[140] Public Safety Wireless Networking Program, "Key Management Plan Template for
Public Safety Land Mobile Radio Systems," February 2002.
[141] Secretary of the Air Force, Air Force Instruction 33-203, Vol. 1, October 31, 2005.
[142] National Security Agency, "NSA Suite B Cryptography," 2 November 2009.
[Online]. Available:
http://www.nsa.gov/ia/programs/suiteb_cryptography/index.shtml.
[143] North American Electric Reliability Council, 2004, "NERC Cyber Security Activities," [Online]. Available: http://www.nerc.com.
[144] North American Electric Reliability Council, "Control System — Business Network
Electronic Connectivity," NERC, May 3, 2005.
[145] National Security Agency, NSTISSAM TEMPEST/1-92, Compromising Emanations
Laboratory Test Standard, Electromagnetics, 15 December 1992.
[146] E. Barker, D. Johnson and M. Smid, Recommendation for Pair-Wise Key
Establishment Schemes Using Discrete Logarithm Cryptography, NIST SP 800-56A, March 2007.
[147] Department of Defense, "Department of Defense Instruction 5200.1-R," January
1997. [Online]. Available:
http://www.dtic.mil/whs/directives/corres/pdf/520001r.pdf.
[148] Department of the Navy, Security Control Mapping, SECNAV DON CIO, 1000
Navy Pentagon, Washington, DC.
[149] DISA, Application Services STIG Version 1 Release 1, DISA Field Security
Operations, January 17 2006.
[150] DISA, DoDI 8500-2 IA Control Checklist – MAC 1- Classified, Version 1 Release
1.4, DISA Field Security Operations, March 28, 2008.
[151] Title IV of Enhanced Financial Disclosures, "Sarbanes-Oxley Act Section 404,"
Congress, 2002.
[152] J. Pearl, Probabilistic Reasoning In Intelligent Systems: Networks of Plausible
Inference, Los Angeles: Morgan Kaufmann Publishers Inc., 1988.
[153] C. Trim, "The Chain Rule of Probability," IBM developerWorks, 2012.
[154] J. Pearl, "Causal Inference in Statistics: An Overview," Statistics Surveys, vol. 3, pp. 96-146, 2009.
[155] CISIAD, "OpenMarkov," CISIAD (Research Center on Intelligent Decision-Support
Systems), [Online]. Available: http://www.openmarkov.org. [Accessed 26 April
2013].
[156] Stan Development Team, "Stan," 13 April 2013. [Online]. Available: http://mc-stan.org. [Accessed 26 April 2013].
[157] A. Arnold, B. Hyla and N. Rowe, "Automatically Building an Information-Security
Vulnerability Database," in Proceedings of the 2006 IEEE Workshop on Information
Assurance, US Military Academy, West Point, NY, 2006 © IEEE.
[158] S. Pramanik, "Affordable Security," CrossTalk: The Journal of Defense Software
Engineering, p. TBD, 2013.
[159] R. Seacord, Secure Coding in C and C++, Addison Wesley, 2006.
[160] Department of Defense Instruction 8551.1, Ports, Protocols, and Services
Management (PPSM), August 13, 2004.
Appendix A – Patient Monitoring System
This appendix aims to provide clarity on the toy patient monitoring system and
to provide a model against which to view the applied methodologies. The model for this
dissertation is a simplistic outline of a generic remote patient monitoring system. This
appendix outlines the boxes that are used in the model system for clarification. It also
provides the steps taken to create a security architecture for the model, and the thought
process behind addressing each of the security concerns.
Enterprise-style systems are not the only ones that exist; other systems, such as
embedded systems, also have security considerations. The model contains two systems:
the first is the healthcare control system and the second is the patient monitoring system.
Both are depicted in Figure A-0-1 Total Architecture. The various pieces of the
architecture are extracted and discussed throughout this appendix.
Figure A-0-1 Total Architecture
The communication system encompasses the necessary equipment to allow the
healthcare worker to communicate with the patient’s sensor. The patient monitoring
system contains private encrypted communication. The communication portion of a
system is one of the areas to which system security engineers must pay close attention.
Figure A-0-2 shows the legend for the security overlays.
Figure A-0-2 Security Overlay
At each point in the system where there is interaction between the system and the
outside world, there is a concern. In a good design, all of the links will be encrypted using
appropriate cryptography to ensure proper communication and transmission security, as
shown in the wireless routers depicted in Figure A-0-1. Transmission security endeavors
to prevent communications from being hijacked. Communication security prevents the
disclosure of the information if it is captured. The first layer of protection is ensuring
that all links on and off the system are appropriately encrypted.
Although not depicted in this system, there can be an issue when dealing with public
communication. If the system needs to communicate with an entity that does not encrypt
its communications, the remote toy system must also provide an unencrypted
communication link. This means that the architect must either provide a separate link or
must handle the issue using a public and a private channel. If an unencrypted wireless
system is used, then the data going along this path must be encrypted separately to
ensure that data is not commingled, or a separate physical line should be used.
Depending on the type of sensor, size, weight and power (SWaP) can be an issue.
Unlike a datacenter, in a system like a pacemaker there is an added constraint on what a
component can weigh, its dimensions and how much power it consumes. This can affect
the overall life-cycle cost of a system.
It is necessary to have a conversation with the customer early on to determine the
risk tolerance and the cost point. It is not just security, but the total implementation
picture that determines whether a specific mitigation is used. The wireless link allows for
the transfer of data, and the inclusion of a firewall is necessary on this line. Even though,
from an operational standpoint, the system should not connect to the outside world (only
the hospital WAN), in the event of the WAN becoming infected it is necessary to prevent
the infection from spreading. Sometimes these types of systems must go through a
physical medium, in this case a wireless link, that may be operated by a commercial
entity. This must be treated as a potential threat. Even if there is a service level
agreement, the confidentiality and integrity of the data must still be assured. This
requires the use of encryption. In this model, because the toy system is assumed to
manage private data, that data must be protected.
This then forces the system to use high assurance encryption on this line, which is
why the wireless encryption would be, at a minimum, WPA. The placement of the
boundary defense is also important. The firewall must be able to see the unencrypted
data, so the encryption must be placed as it is in the model to allow the boundary
defense to work adequately.
There is also a need to ensure that an audit trail is maintained in the event of an
attack. The audit log must contain the time and date of failed authentication attempts,
user credentials, any changes to system files, etc. The placement of the audit mitigation
is seen in the various figures and is denoted by a folder icon, as shown in Figure A-0-2.
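As a sketch of the audit fields named above, the following Python function builds one structured audit record; the field names and event types are illustrative assumptions, not a prescribed log format.

```python
import json
from datetime import datetime, timezone

def make_audit_record(event_type, user, detail):
    """Build one structured audit entry with the fields the text requires.

    Field names are illustrative; a real system would follow its audit STIG.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # time/date of the event
        "event": event_type,   # e.g. "auth_failure", "file_change"
        "user": user,          # credentials/identity of the actor
        "detail": detail,      # what happened (file touched, source address, ...)
    }

# Example: a failed authentication attempt serialized as one JSON line.
record = make_audit_record("auth_failure", "jdoe", "bad password from 10.0.0.5")
line = json.dumps(record)
```

Serializing each record as a single JSON line keeps the log easy to offload and to parse later when the trail must be reviewed.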
As there is a need to ensure that only public information is left when the toy system is
not in operation, these audit logs should be offloaded to the data recorder for storage.
This safeguards the audit logs. A backup of all critical software/firmware in the system
must also be maintained. In some cases the operating systems are burned into read-only
memory; in others the OS would be loaded before each use into volatile memory. This
ensures the integrity of the software being run, as known clean copies of the software
are loaded each time.
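Verifying that a loaded image really is the known clean copy can be done with a cryptographic hash. The sketch below, a minimal Python illustration and not the dissertation's prescribed mechanism, compares an on-disk image against a recorded known-good digest.

```python
import hashlib

def sha256_of(path):
    """Hash a software image in chunks so large images fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_image(path, known_good_digest):
    """True only if the image on disk matches the recorded clean copy."""
    return sha256_of(path) == known_good_digest
```

The known-good digest would itself be stored on protected media (e.g. the storage device in the healthcare control portion) so an attacker cannot alter both the image and its reference hash.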
The model shows a wireless link that runs over Ethernet to the sensor. In the
boundary defense layer a firewall has been added, as depicted by a grey wall with a flame
on it. Knowing all data flows in the system allows for better configuration of the firewall
and routers. For example, if the system only requires an SSH connection between the
healthcare control system and the patient monitoring system, then the firewall should
block everything aside from that one SSH connection.
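The idea of deriving the firewall configuration from the known data flows can be sketched as follows. The flow table and the iptables-like rule strings are illustrative assumptions; a real deployment would use the syntax of its actual firewall product.

```python
# Each known data flow in the system becomes one explicit allow rule;
# everything not listed is denied by default.
ALLOWED_FLOWS = [
    # (source, destination, protocol, port)
    ("healthcare_control", "patient_monitor", "tcp", 22),  # the single SSH link
]

def build_rules(flows):
    """Turn a data-flow whitelist into default-deny firewall rules."""
    rules = [
        f"-A FORWARD -s {src} -d {dst} -p {proto} --dport {port} -j ACCEPT"
        for src, dst, proto, port in flows
    ]
    rules.append("-A FORWARD -j DROP")  # default deny: block everything else
    return rules
```

Generating rules from a single flow table keeps the firewall configuration and the documented data flows from drifting apart.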
The router in the healthcare control portion of the system is an internal boundary.
ACLs and routing techniques can be used to provide assurance for availability of the
server and storage device. Although redundancy of networking equipment is ideal, it
isn’t always feasible due to SWaP issues, so the security engineer must be realistic about
what can be added and what can’t be. Although there is some movement in industry for
reducing the size of equipment, there will always be a cost offset for whether or not the
program can afford redundancy. It depends on the mission and what the particular piece
of networking equipment is needed for. If the router in the sensor subsystem were to go
down, the toy system could still function, but its purpose would effectively be over and it
would have to be returned to the operator.
Another layer of protection in the healthcare control portion aside from routing is
encryption. The storage device provides a secure place to store data, audit logs, and
backup images of software. In the event of a glitch that would require a component to
reboot, it can be set up to have the software pulled from the storage device.
The workstations, server, wireless equipment and router must be hardened using the STIGs.
The healthcare workstation could have extra requirements levied upon it by an entity
outside the customer’s control, in this case the auditors for HIPAA compliance. The entity
could require that security be included with respect to human safety. For example, an
intrusion detection system would be allowed, but an intrusion prevention system would
not be, as it could prevent critical messages from reaching the healthcare workstation.
This is where safety and security must communicate.
All software that is considered safety critical might need to meet a series of
requirements levied by the entity with respect to how software is written and tested.
In remote systems, ensuring the integrity and confidentiality of control information is
critical. If an attacker were to understand the path ahead, it would allow for an ambush.
This is one of the reasons that the protection for the control computer is inherited from
the communication system that provides the confidentiality of any changes to future
coordinates. Inheritance is the idea that although a component itself does not provide for
a certain protection, another connected area of the system may provide for that need. In
this case, the communication system provides the needed protection. This allows the
control computer to be designed for its purpose of controlling the movement of the toy.
The security engineer should request that the non-volatile memory (NVM) be read-only.
If it cannot be set up for this, then an analysis must occur to see how each box
handles the data passing through it. If it can be shown that no private data can possibly be
written to NVM, then this is typically an acceptable level of risk. If there is the ability for
private data to be written to the NVM, then a sanitization procedure must exist to remove
this data. A security remanence guide is normally on the contract to specify what must
occur for a piece of NVM to be considered sanitized. Memory is considered sanitized
when the data cannot be retrieved even if the memory is being looked at in a lab [74].
There are certain types of memory that cannot be sanitized. Although costly, it is
sometimes required to replace chips that cannot be sanitized. Again, it is crucial to have
a conversation with the customer early on as to what will be considered an acceptable
level of risk. A trade involving cost should be done. In some cases it might be cheaper
to remove the chipset than to do the memory analysis.
Although it may not be straightforward, it would be prudent to consider. In the toy
system, the weather system and mechanical controls are assumed to contain either
read-only memory or only volatile memory, and so do not have to be sanitized. They are
considered public upon shutdown.
The operator also has displays that allow them to see what is happening with the patient
sensor. These displays are connected to a backend server, which performs all access
control. The operator never has physical access to the machines that hold the data. Only
the system administrator has access to the server and the ability to make changes to the
system. Every change on the server is audited, and the audit logs are offloaded and
stored in an encrypted format.
The workstation, router, wireless routers, firewalls and server in the system must have
the correct patches and hardening in place. There are also certain restrictions on the use
of mobile code, such as ActiveX. This is discussed in the main body of this dissertation,
but it should be noted that for the patient monitoring system, the firewalls would be
configured to block all prohibited mobile code, and all browsers installed would also be
configured to only allow accepted mobile code to run.
All the software in the system, including the operating systems and applications, must
have a backup. Although in this model it is assumed that the customer is handling
disaster recovery, the security engineer must require that appropriate hardware, software
and firmware have backups and can be replaced quickly.
Appendix B - Engagement Questions
The following questions give the security engineers a good look at how the
requirements should be applied to each component. The questions can be put into an
Excel spreadsheet with the answers next to them. This will also start the conversation
with managers and responsible engineers, allowing for better communication.
1. Does the component have non-volatile memory? (If so, in which sub-component
   (if any) does it reside?)
2. For each instance of non-volatile memory:
   a. What type of non-volatile memory? (solid state, EEPROM, etc.)
   b. Is the non-volatile memory write-protected (or does it use some special
      connector or software to make changes)?
   c. How is it being proven that no classified information can be either
      accidentally or purposely placed in non-volatile memory? (write protected,
      encrypted, etc.)
3. Does the component have volatile memory?
   a. Can the manufacturer show that the memory is cleared after 30 seconds of
      loss of power?
4. What connections does the component have? (1553, Ethernet, etc.)
   a. What ports need to be opened?
   b. What services will be running?
   c. What protocols will be used to connect to the other components?
5. What other devices will this component connect to?
6. What operating system is being used?
7. Where does the device reside? (UA, ship, ground?)
   a. If on the UA or on ship, where in the UA or ship? (How is it accessed?)
   b. Is it easily accessible?
8. Does the device require the use of cryptography?
   a. Is the cryptography Type-1?
      i. If yes, look at the NSA requirements.
      ii. If no, but it is TRANSEC, then handle with NSA.
   b. Is the cryptography used for entity authentication?
   c. Is the cryptography being used for non-repudiation?
      i. Is the algorithm used FIPS certified?
9. What commercial software/firmware is running on the component (these must be
   registered)?
10. Will the component interact with components outside the enclave?
    a. In what way?
11. Does the component have the capability to run mobile code?
    a. What is blocking the use of unauthorized mobile code?
12. Does the component have a user interface?
    a. What type of interface?
    b. Is it logically or physically separated from data storage?
13. Does the component contain any binaries or executables?
    a. For each binary or executable:
       i. Is it from a free/open source? (Or is it written in house?)
       ii. Does it come with a warranty?
       iii. Is it approved by the customer’s branch (such as Department of the
            Navy) or DAA?
14. Is the component under configuration control?
    a. Is all software/firmware under configuration control?
    b. Is the hardware under configuration control?
    c. What are the processes for configuration control?
    d. Is software under version control? (ClearCase or something else?)
    e. How often does the configuration control board meet?
15. What are the peer review processes for software?
    a. What tests or tools are run to ensure minimization of software flaws (e.g.
       Fortify 360, or some other static code analysis tool)?
    b. If there is no automated tool, how is the software complying with the
       application development STIG? (e.g. testing for math vulnerabilities,
       injection vulnerabilities, string issues, etc.)
16. Is there a test to ensure the integrity of the component’s state?
    a. How does the system handle initialization with respect to integrity?
    b. How does the system handle aborts with respect to integrity?
    c. How does the system handle shutdown with respect to integrity?
17. How does the component provide for audit capabilities?
    a. How is the admin alerted?
    b. What is provided as part of the audit log?
    c. Where is the log stored?
    d. How are audit trails protected?
    e. How are audit trails viewed? (What tools are provided?)
    f. How often are audit trails put on backup?
18. Does the component have maintenance ports?
    a. Are they capped when not in use?
    b. How many maintenance ports?
    c. What type of connection? (RJ-45, RS-232, etc.)
19. Does the component have backup power available?
20. Is the component an IA or IA-enabled device?
    a. If yes, has it been approved through one of the validated processes, such
       as NIST, FIPS, CC, etc.?
    b. Is it in a separate domain or protected in some way from production/user
       traffic? (Show through diagrams or other architecture documentation.)
    c. Is the device part of boundary defense?
21. Have applicable STIGs been identified?
    a. If yes, which ones?
       i. For each one, how is it being applied? (e.g. during test, later in the
          cycle, etc.)
22. What CMMI level is the component being developed under?
23. Please provide a list of all software/firmware being used on the component.
24. What access control mechanisms are used to protect the component?
    a. Is access (or denial) logged in the audit files?
25. Does the component contain a database or other transaction-based application?
    a. If so, does it employ transaction roll-back or transaction journaling?
26. Is the component a router, switch, DNS, or does it manage networking
    equipment?
    a. If yes, has a HIDS been deployed?
    b. If it is on the edge, has a NIDS been deployed?
27. Does the component host instant messaging?
    a. If yes, is it for an authorized/official function?
    b. If no, how is it configured?
28. Does the component come with instructions for restart and recovery?
    a. How is the source code protected?
    b. How can the component be accessed? (Goes hand in hand with question 18.)
29. Can users access this system?
    a. If yes, is role-based access employed?
       i. How is the RBAC set up?
    b. If yes, is the principle of least privilege employed?
       i. For example, is there an admin and a user mode? And does the admin
          work in user mode unless root access is absolutely necessary?
30. Are there any incoming or outgoing files, such as OFPs or mission plans?
    a. If yes, what integrity mechanisms are in place?
31. Does the component have a VoIP client?
    a. What is the VoIP client used for? (Internally only, or external?)
    b. Is it only for authorized/official functions?
32. Is the component a server or workstation?
    a. If yes, has virus protection been employed?
33. Does the component contain wireless capabilities?
    a. Is the wireless capability necessary?
       i. If no, disable or remove any wireless functionality.
       ii. If yes:
          1. Is it configured according to wireless policy?
          2. Are end users prohibited from making configuration changes?
          3. Are factory defaults changed?
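The suggestion above of capturing the questions in a spreadsheet can be sketched as a small script that writes a CSV checklist openable in Excel. The column names and the abbreviated question list are illustrative assumptions.

```python
# Sketch: emit the engagement questions as a CSV checklist for Excel.
# The question list here is abbreviated for illustration.
import csv

QUESTIONS = [
    "Does the component have non-volatile memory?",
    "Does the component have volatile memory?",
    "What connections does the component have? (1553, Ethernet, etc.)",
    "What operating system is being used?",
]

def write_checklist(path, questions):
    """Write one row per question with blank answer/owner columns to fill in."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Question", "Answer", "Responsible Engineer"])
        for q in questions:
            writer.writerow([q, "", ""])
```

Keeping the answers next to the questions in one sheet gives managers and responsible engineers a single artifact to discuss, as the introduction to this appendix suggests.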
Appendix C - Acronym List

Abbreviation – Term

ACL – Access Control List
ADC – Air Data Computer
AT – Anti-Tamper
ATC – Air Traffic Control
ATO – Approval to Operate
ATT – Approval to Test
AV – All Views
C&A – Certification and Accreditation
CDR – Critical Design Review
CDRL – Contract Data Requirements List
CDS – Cross Domain Solution
CIA – Confidentiality, Integrity and Availability
CMMI – Capability Maturity Model Integration
COMSEC – Communication Security
CONOPS – Concept of Operations
COTS – Commercial Off The Shelf
CSCI – Computer Software Configuration Item
DAA – Designated Approval Authority
DATO – Denial of Approval to Operate
DIACAP – DoD Information Assurance Certification and Accreditation Process
DISA – Defense Information Systems Agency
DMZ – Demilitarized Zone
DoD – Department of Defense
DoDAF – Department of Defense Architecture Framework
DoDD – Department of Defense Directive
DoDI – Department of Defense Instruction
DSS – Defense Security Service
ECP – Engineering Change Proposal
EGPWS – Enhanced Ground Proximity Warning System
EMSEC – Emissions Security
FAA – Federal Aviation Administration
FADEC – Full Authority Digital Engine Control
FAR – Federal Acquisition Regulation
FDR – Flight Data Recorder
FIPS – Federal Information Processing Standard
FMS – Foreign Military Sales
FRP – Full Rate Production
GOTS – Government Off The Shelf
GPS – Global Positioning System
HMM – Hardening and Maintaining Metric
IA – Information Assurance
IAM – Information Assurance Manager
IAO – Information Assurance Officer
IATF – Information Assurance Technical Framework
IATO – Interim Approval to Operate
IATT – Interim Approval to Test
IAVA – Information Assurance Vulnerability Alert
IDD – Interface Design Description
IDE – Integrated Development Environment
IDS – Intrusion Detection System
IFF – Identification Friend or Foe
IOC – Initial Operational Capability
IPS – Intrusion Prevention System
IPT – Integrated Product Team
ISSE – Information System Security Engineer
ISSM – Information System Security Manager
ITAR – International Traffic in Arms Regulation
LOE – Level of Effort
LRIP – Low Rate Initial Production
MAC – Mission Assurance Category
MIL-STD – Military Standard
NERC – North American Electric Reliability Corporation
NISPOM – National Industrial Security Program Operating Manual
NIST – National Institute of Standards and Technology
NSA – National Security Agency
NSS – National Security Systems
OT – Operational Test
OV – Operational View
PDR – Preliminary Design Review
PHM – Prognostics and Health Monitoring
PIT – Platform IT
PKE – Public Key Encryption
PKI – Public Key Infrastructure
PPS – Ports, Protocols and Services
PRR – Production Readiness Review
RE – Responsible Engineer
RFP – Request for Proposal
RMF – Risk Management Framework
SAD – Software Architecture Description
SATCOM – Satellite Communication
SCAP – Security Content Automation Protocol
SFR – System Functional Review
SLOC – Source Lines of Code
SRR – System Requirements Review
STIG – Security Technical Implementation Guide
SV – System View
SVR – System Verification Review
SWaP – Size, Weight and Power
T&E – Test and Evaluation
TCAS – Traffic Alert Collision Avoidance System
TEMPEST – Not an acronym; code word for protection against compromising emanations
TRANSEC – Transmission Security
TRR – Test Readiness Review
TV – Technical View
UAV – Unmanned Air Vehicle
UCDMO – Unified Cross Domain Management Office
VHF – Very High Frequency
Appendix D
Due to proprietary restrictions, a separate appendix is being supplied for the review of
the committee.