IEEE Real-Time Embedded System Workshop, Dec. 3, 2001
A Spiral/Reverse Spiral Life Cycle Model for Information Systems Risk Assessment

Steve Liu, Member, IEEE, Hoh In, Member, IEEE, Sung-Oh Jung

Abstract—In this paper we present a spiral/reverse spiral life cycle model for analyzing the security risks of information systems. Protection of information services is modeled as the interaction between the "attackers" and the "defenders," i.e., system operators and developers. When a development team creates a software system, it produces both the desired functionality and security flaws. Assuming that security flaws are independent of each other, we model them as unexpected products of the software development process. When the software is released, users begin to learn about these flaws and develop attack schemes. When the level/number of attack incidents exceeds a threshold, the development team begins to create solutions to the flaws and fix them. Once a flaw is fixed, it is permanently removed from the system. Since each of the processes (production, detection, attack, repair, etc.) can be characterized as a software development process, we use the well-known Rayleigh distribution to characterize the profile of manpower needs for the attack and repair processes. For simplicity, we use a single "attacker" and a single development team to demonstrate the interplay among design, detection, attack, and repair of security flaws. The model produces quantitative measurements of security risks at different times and a systematic framework for the development of risk-management strategies.
Index Terms—security risk, tradeoff analysis, random process,
software process simulation
1. INTRODUCTION

Management of information security is a continual process, rather than a one-time event. By the nature
of system development, it is not practical to expect
creation of a perfect specification for a secure information
system, such that one could invest all resources up front to build such a system. Due to the critical importance of information
security, a considerable amount of modeling work on security concerns has been done in recent years. Models of
security-related attack processes are presented in [9]. They
show the steady-state probability of intrusion as a function of
intrusion coverage in a single system or network and in the
analysis of more than two interconnected systems or
networks. In [4], a security model is presented based on
an intrusion experiment done by undergraduate students
who were invited to attack a distributed computer system.
 Hoh In, Steve Liu, and Sung-Oh Jung are with the Department of
Computer Science, Texas A&M University, College Station, TX 77843-3112. Email: {hohin, jcliu, jungs}@cs.tamu.edu.
Models of security policy exist [7], but much less has been done on simulation modeling of security risk assessment. Our model focuses on the behavioral relationship
between the service provider team and the attacker team.
Based on established human behavior in software development, one can use our model to estimate the peak and off-peak times of attacks and defenses. It can be used to
estimate the most likely system down time due to attacks,
and furthermore, one can use the model to adjust security-related factors such as manpower assignment, software
release time, etc. in order to achieve the business objective.
Just like it is hard to develop a flawless information
system, it takes effort to develop non-trivial attack schemes.
We note that, although some security breaching incidents
were created by individuals, many attackers rely on
underground communities to learn and develop new
attacking schemes. The attack system for the security flaws
is nothing more than a competing product that attempts to
compromise the information system. In other words, except
for insider attacks, in which case the “attackers” would have
a much shorter learning curve, development of attacking
software systems should have characteristics similar to those of regular software development. We present a simple
model to characterize the relationship between software
development and flaw attacks.
Except for some special cases, software development is a
highly complex group activity. We propose using a
probabilistic simulation model to analyze the dynamic
interactions between the development and attack processes.
Unlike regular software, once the countermeasure of an
attack scheme is developed, the security flaw will be
removed permanently. Moreover, with the growth of the
system size, it may take more, or less, time to create attacks
and their countermeasures. We propose a spiral and reverse
spiral framework to capture this characteristic.
2. SERVICE AND ATTACKS
Management of information security is a continual
process. Except for some unusual circumstances, operators
of a typical software system usually need to interact with the
development team to protect the system from attackers. The
objective of the risk assessment is to estimate the possible
interaction patterns between stakeholders, so that one would
be able to answer certain questions, such as when, and how heavily, the software system would come under attack,
and how the development team can optimize manpower to
handle attacks and regular development, etc.
This kind of question can be modeled by a queuing
model, in which the “attackers” and developers develop
“interactive” software (i.e., functional codes, attack codes,
and countermeasure codes) following a fairly regular
pattern. This forms the baseline framework of our study. We
only consider the simplest case in which just one software
system is involved. At each release of the software, a certain
number of independent flaws are embedded into the software
system. Each flaw can be attacked by one attacking
program. Obviously, this is a trivial case, compared to a real
world situation. Nevertheless, it is not difficult to expand
the model to incorporate much more complicated situations.
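As a minimal sketch of this baseline, the release/flaw bookkeeping can be written as follows (the class, field, and function names are our own illustrative choices, not part of a particular tool; the uniform flaw-count distribution is likewise an arbitrary example):

```python
import random
from dataclasses import dataclass, field

@dataclass
class Release:
    """One release of the software system in the baseline model."""
    version: int
    flaws: set = field(default_factory=set)  # independent flaws; one attack program each

def make_release(version: int, mean_flaws: int, rng: random.Random) -> Release:
    # A certain number of independent flaws is embedded into each release;
    # here the count is drawn uniformly in [0, 2 * mean_flaws] (an arbitrary choice).
    n = rng.randint(0, 2 * mean_flaws)
    return Release(version, {f"flaw-{version}-{i}" for i in range(n)})

rng = random.Random(1)
release = make_release(version=1, mean_flaws=5, rng=rng)
print(len(release.flaws))  # some value in 0..10
```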
Next, let us consider the interaction between the
stakeholders: developers-operators vs. attackers.
After development, the software is deployed for field use. From this point on, the software product will
begin to produce useful services for regular users, but at the
same time, it will become an attack target. After potential
attackers learn about the deployed product, they will begin
to craft the attack strategy and its implementation. In
general, it takes much less time and resources to develop an
attack system than it does to build the useful software
product itself. Once an attack strategy is developed, it will
need to be deployed in a fashion similar to deployment of
the software system itself. Then, at the proper time and in
the proper situation, attacks will be executed to produce the
desired damages. The interaction between the defenders and
attackers is illustrated in Figure 1.
To reflect the level of tolerance of the software users, we
assume a simple thresholding technique, in which the
defense team issues a rework request (i.e., development in
the next cycle) to build countermeasure solutions after a certain number of attack incidents. The request is put into
the waiting queue of the development team and the team
begins to work on solutions, one by one. After the repair
patch is developed, it will be put in the field, and from that
point on the flaw is considered permanently removed from
the system, i.e., the system has one less flaw. The attack and
rework teams concurrently build and deploy their software
(attack codes and patches) to the target application until all
the flaws are eventually removed, or until a new release of
the software is brought on-line. Then, a new cycle of
interactions between the defenders and attackers begins
again.
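The thresholding rule described above can be sketched as a small queueing loop (the threshold value and the FIFO queue discipline below are illustrative assumptions, not parameters of our simulation):

```python
from collections import deque

ATTACK_THRESHOLD = 5  # incidents tolerated before a rework request is issued (assumed value)

def run_defense(incidents_per_flaw):
    """incidents_per_flaw: {flaw_id: observed attack incident count}.
    Returns the flaws repaired, processed one by one, as in the model."""
    rework_queue = deque()
    for flaw, incidents in incidents_per_flaw.items():
        if incidents >= ATTACK_THRESHOLD:      # users' tolerance exceeded
            rework_queue.append(flaw)          # rework requested for the next cycle
    repaired = []
    while rework_queue:                        # the team works on solutions one by one
        flaw = rework_queue.popleft()
        repaired.append(flaw)                  # patch deployed: flaw removed permanently
    return repaired

print(run_defense({"f1": 2, "f2": 7, "f3": 9}))  # → ['f2', 'f3']
```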
2.1 Spiral/Reverse-Spiral Life Cycles
From the previous discussion we can classify
stakeholders into defenders and attackers. On the defender side, CD = {development, deployment, detection} forms a flaw-removal cycle for each flaw. For the attackers, the attack cycle of each flaw consists of CA = {development, deployment, attack}; together they form the flaw cycle F = {CA, CD}.
Depending on the nature of the target application and the
attack codes, it may take more (or less) time to complete the
next F-cycle after the current one. It is interesting to observe
that, since software systems are deployed in batches, i.e.,
“releases,” there is a finite time for a release version to be
replaced by a new one.
[Figure 1. The interaction between defense and attack: in each cycle, the defender develops and deploys the (networked) product; the attacker detects it, learns, develops and deploys a counter product (attack program), and attacks; the defender detects the attacks, reworks (fixes security defects), and redeploys in the next cycle, until the product is stopped or unloaded.]
Figure 2 illustrates the case in which the expected cycle periods decrease with time. On the other hand, if the
expected cycle periods increase with time, then we just
reverse the arrow direction.
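The two cases can be sketched as a geometric progression of expected cycle periods (the geometric form and the ratio values are our illustrative assumptions; the framework only requires that the periods change monotonically):

```python
def cycle_periods(first_period: float, ratio: float, n_cycles: int) -> list:
    """Expected F-cycle periods over successive cycles.
    ratio < 1 gives the reverse spiral (shrinking cycles, as in Figure 2);
    ratio > 1 gives the spiral (growing cycles)."""
    return [first_period * ratio**i for i in range(n_cycles)]

shrinking = cycle_periods(6.0, 0.5, 4)  # reverse spiral: [6.0, 3.0, 1.5, 0.75]
growing = cycle_periods(6.0, 1.5, 4)    # spiral: [6.0, 9.0, 13.5, 20.25]
print(shrinking)
print(growing)
```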
[Figure 2. The reverse spiral development model (development, deployment, detection, with cycle periods shrinking over time).]
Next, let us discuss the cyclic behavior relationship
between these two groups. Here, we need to consider four
different combinations (see Figure 3), where both the
defender team and the attack team may have the spiral or
reverse-spiral model. Type I (II) represents the case when
the expected lengths of the CD and CA cycles are both increasing/non-decreasing (decreasing/non-increasing). Type III is the case when the expected lengths of CD and CA are, respectively, increasing and decreasing. Finally, Type IV is the case when CD and CA are, respectively, decreasing and increasing.

[Figure 3. The four different types of defense-attack (reverse) spiral models.]

3. SIMULATION MODELING

System dynamics has been developed and used in business modeling, social behavior systems [2], and software process simulation [5]. The benefit of a system dynamics study is that it allows us to analyze complex problems while maintaining some level of structural simplicity and intellectual control [8]. In this section, we analyze the dynamic interactions between the defenders and attackers using a software process simulator. We use the iThink software process simulation tool [3] to quantify the interaction between a single software service provider and a single attacker, on one operational site. For instance, we are interested in estimating the peak time of attacks, and the expected business losses, when certain parameters of the development and attack teams are known.

iThink [3] is a flow-based modeling tool. Each of the "valve" shapes in the diagram denotes a flow of work, controlled by the flow-control rules embedded in each valve. The rectangular boxes denote the "stock," or "level," of the measured quantity (tasks, workload, etc.). Referring to Figure 4, we use the ith cycle of service and attack, Fi, to describe the mapping of the system behavior in Figure 1 into the simulation modules: {service, flaw, manpower} for the defenders, and {attack, manpower} for the attackers. Here, service denotes the target software system, attack denotes the attack software, flaw denotes the set of security flaws, and manpower denotes the manpower needed to create services and attacks, respectively. Of course, one can use a different analysis configuration to reflect a different degree of system detail.

Consider first the service module in the service-provider subsystem. In the first row, the service/software is developed and deployed (configuration, loading DB tables, etc.); from there the system begins to provide service, and its flaws are exposed to the attackers.

Next, we consider the security flaws in the second row. Security flaws are undesirable side products of the software development process. Therefore, generation of flaws is defined as a random process associated with generation of the software modules. Some flaws can be detected by the defenders, but others are not. A flaw-inspection-efficiency parameter determines which flaws will be picked up by the development team. An additional parameter used in the software engineering process quantifies the flaw count as the ratio of flaws per module. Flaws in the attack programs are not considered in the simulation.

Last, but not least, is manpower. We adopt the well-known Rayleigh distribution [6] for manpower build-up and dispatch in development of the service and attack software systems. As mentioned above, we are only interested in the non-trivial case, in which the software system needs to be built by teams. For development, deployment, or detection, we assume that there is team-building time for team members before work starts. The teams then need to learn about the target application before starting their development. Manpower is drawn from the available manpower pool (estimated dev/dep/det manpower in Figure 4), dispatched according to work needs, and mapped into the development/deployment/detection rate on the service side.

On the attack side, attackers learn about, develop, and deploy attack programs. However, they can begin development only after the service software is deployed, so attack development is triggered by the deployment event; see the first row of the attack side. We note that the word "cum" used throughout Figure 4 denotes a "cumulative measure" of the associated parameter.
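The Rayleigh manpower build-up adopted above can be sketched in a few lines (a sketch of the standard Putnam-style Rayleigh staffing curve; the effort and peak-time values below are arbitrary examples, not our simulation inputs):

```python
import math

def rayleigh_staffing(K: float, t_peak: float, t: float) -> float:
    """Rayleigh manpower curve m(t) = 2*K*a*t*exp(-a*t**2), with a = 1/(2*t_peak**2),
    where K is the total effort (man-months) and t_peak the time of peak staffing."""
    a = 1.0 / (2.0 * t_peak**2)
    return 2.0 * K * a * t * math.exp(-a * t**2)

# Staffing rises to a single peak at t_peak, then tails off.
profile = [rayleigh_staffing(K=100.0, t_peak=6.0, t=t) for t in range(0, 25)]
peak_month = profile.index(max(profile))
print(peak_month)  # → 6
```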
4. SIMULATION RESULTS
We simulate two cycles of service development, and one
cycle of attack. First, we assign 100 units of manpower for
the development team, 10 units of manpower for
deployment, and 50 units for detection. A simulated
Rayleigh curve-fitting rule creates the dispatch shape for the
development, deployment and detection teams. Dispatching
of the development team in two cycles is illustrated in
Figure 5, where the first cycle is generated as an input, and
the second cycle (i.e., rework) is the output.
The five curves in Figure 6 denote the rates of five events
in the first cycle: “service development,” “service
deployment,” “attack learning and development,” “attack
deployment,” and “attack detection.” The deployment date
is given as an input in this study, although in common
practice the deployment date could be estimated based on
certain software economics models, such as the well-known
COCOMO [1].
Based on the aforementioned input parameters, we derive
in Figure 7 the active attack time, which is defined as the interval from the time the system first comes under attack to the time the attack is detected. Next, we examine the effect of
flaw inspection efficiency on the number of detected flaws,
which will become the input workload of the development
team in the next cycle. When the flaw inspection efficiency
is less than 0.5, the number of detected flaws increases with
the efficiency; but when it is larger than 0.5, no further
visible improvement can be observed, implying that a zero
flaw policy is not effective, in terms of the expected
benefits. If the attack-detection manpower is relatively low (e.g., 10 man-months), however, the zero-flaw policy is
effective as shown in Figure 9.
Finally, we examine the security effects of manpower
allocation approaches, and the results are shown in Figures
10 and 11. In these two examples, we assume that the
development manpower of the second cycle is 10% and
21% of that of the first cycle, respectively, implying
different investment levels on patching the flaws in the
rework phase. In that case, the 21% proportion turns out to
be a threshold, where the number of detected flaws becomes
zero in 24.25 months. On the other hand, if the ratio is
lower than 21% then the number of detected flaws never
becomes zero.
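The qualitative behavior behind this threshold can be illustrated with a toy backlog recurrence (our own simplification for illustration; the rates and values are hypothetical, not taken from the simulation): when repair capacity exceeds the rate at which attacks surface new flaws, the backlog of detected flaws eventually empties; otherwise it never does.

```python
def months_to_zero(repair_per_month: float, surface_per_month: float,
                   backlog: float, horizon: int = 120):
    """First month at which the detected-flaw backlog reaches zero,
    or None if it never empties within the horizon."""
    for month in range(1, horizon + 1):
        backlog = max(0.0, backlog + surface_per_month - repair_per_month)
        if backlog == 0.0:
            return month
    return None

# Repair capacity above the surfacing rate: the backlog drains.
print(months_to_zero(repair_per_month=3.0, surface_per_month=2.0, backlog=10.0))  # → 10
# Repair capacity below the surfacing rate: the backlog never empties.
print(months_to_zero(repair_per_month=1.0, surface_per_month=2.0, backlog=10.0))  # → None
```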
5. CONCLUSION
This paper proposes a simple risk assessment model, based
on a spiral and reverse spiral life cycle model of the
software development process. In our model we assume that
security flaws are exploited and fixed continually, and
interactively between the defenders and the attackers. This
modeling approach is highly scalable, and our current
example only represents a trivial case to get the main idea
across. In the near future, we will take full account of the dynamic relationships among flaws, functions, and development resources, in order to study the tradeoffs among different development strategies.
6. REFERENCES
[1] Boehm, B., Horowitz, E., Madachy, R., Reifer, D., Clark, B.K., Steece, B., Brown, A.W., Chulani, S., and Abts, C., Software Cost Estimation with COCOMO II, Prentice-Hall, Englewood Cliffs, NJ, 2000.
[2] Forrester, J.W., Industrial Dynamics, MIT Press, Cambridge, MA, 1961.
[3] iThink manual, High Performance Systems, Inc., 2000. Available at http://www.hps-inc.com/bus_solu/ithink/ithink.htm
[4] Jonsson, E. and Olovsson, T., "Security Intrusion Process: An Empirical Model," IEEE Aerospace and Electronic Systems Magazine, Vol. 12, No. 4, April 1997, pp. 7-17.
[5] Kellner, M.I., Madachy, R.J., and Raffo, D.M., "Software Process Simulation Modeling: Why? What? How?", The Journal of Systems and Software, Vol. 46, pp. 91-105, 1999.
[6] Putnam, L.H. and Myers, W., Measures for Excellence: Reliable Software on Time, within Budget, Prentice-Hall, Englewood Cliffs, NJ, 1992.
[7] Rees, J., "Policy Framework for Interpreting Risk in eCommerce Security," Andersen Consulting and the Center for Education and Research in Information Assurance and Security (CERIAS), Purdue University, 1999. Available at https://www.cerias.purdue.edu/techreports-ssl/public/2000-01.pdf
[8] Williford, J. and Chang, A., "Modeling the FedEx IT division: a system dynamics approach to strategic IT planning," The Journal of Systems and Software, Vol. 46, pp. 203-211, 1999.
[9] Zorkadis, V. and Karras, D.A., "Security Modeling of Electronic Commerce Infrastructures," EUROCOMM 2000: Information Systems for Enhanced Public Safety and Security, IEEE/AFCEA, 2000, pp. 340-344.
Submitted to 2nd Annual IEEE SMC Information Assurance Workshop, 2001

[Figure 4. The simulation model (service-provider modules: service (SM), flaw, manpower; attacker modules: attack (AM), manpower).]
[Figure 5. Dev rate in cycle 1 vs. dev rate in cycle 2.]
[Figure 6. Dev/dep/det rate vs. attack dev/dep rate.]
[Figure 7. Active attack time (from deployment time to detection time).]
[Figure 8. Detection rates vs. flaw inspection efficiency (detection manpower = 50 man-months; curves for efficiency 0.1, 0.15, 0.2, 0.5, and 1).]
[Figure 9. Detection rates vs. flaw inspection efficiency (detection manpower = 10 man-months; curves for efficiency 0.1, 0.15, 0.2, 0.5, and 1).]
[Figure 10. Flaw count vs. detected/undetected flaws (development manpower of the second cycle = 10%).]
[Figure 11. Flaw count vs. detected/undetected flaws (development manpower of the second cycle = 21%).]