Massachusetts Institute of Technology
Sloan School of Management
Working Paper
Modeling the Impact of Process Architecture
on Cost and Schedule Risk in Product Development
Tyson R. Browning
Steven D. Eppinger
Working Paper Number 4050
Revised April 2000
Contact Addresses:
Dr. Tyson R. Browning
Lockheed Martin TAS
Mail Zone 2222, P.O. Box 748
Ft. Worth, TX 76101
tyson@alum.mit.edu
Prof. Steven D. Eppinger
Sloan School of Management
Massachusetts Institute of Technology
Cambridge, MA 02142-1347
eppinger@mit.edu
Modeling the Impact of Process Architecture on Cost and Schedule
Risk in Product Development1
Tyson R. Browning
Lockheed Martin Aeronautics Company
P.O. Box 748, MZ 2222 · Fort Worth, TX 76101
tyson@alum.mit.edu
Steven D. Eppinger
MIT Sloan School of Management
Room E53-347 · Cambridge, MA 02139
eppinger@mit.edu
Abstract
To gain competitive leverage, firms that design and develop complex products may strive to
increase the effectiveness and predictability of their development processes. Process improvement is
facilitated by the development and use of models that account for important characteristics of the
process. Iteration is a fundamental but often overlooked feature of product development (PD) processes.
Its impact is mediated by the activity structure or architecture of a process. This paper integrates several
important characteristics of PD processes into a single model, highlighting the effects of varying process
architecture. The PD process is modeled as a network of activities that exchange deliverables. Each
activity has an improvement curve, an uncertain duration and cost, and probabilities and impacts of
rework based on changes in its inputs. A work policy governs the timing of activity execution and
deliverable exchange and the amount of activity concurrency. Most of the model analysis requires a
simple simulation, which yields distributions of sample cost and schedule outcomes. Varying the process
architecture varies the output distributions. Each distribution is used with a target and an impact function
to determine a risk factor. Alternative process architectures are compared, revealing opportunities to
trade off cost and schedule risk. Example results are shown for an industrial process-preliminary
design of an uninhabited aerial vehicle. Two project planning applications are demonstrated:
architecting a process for reduced risk, and choosing cost and schedule targets with acceptable risk. The
model yields managerial insights into how rework cascades through a PD process, trading off cost and
schedule risk, critical interfaces, and occasions for preemptive iteration.
Keywords
Product Development, Engineering Design, Design Iteration, Management of Engineering Design,
Design Process Modeling, Product Development Time, Product Development Cost, Product
Development Risk, Rework, Design Structure Matrix, Process Architecture
1 Support for this research from the Lean Aerospace Initiative <http://lean.mit.edu/> at the Massachusetts Institute of
Technology, Lockheed Martin Aeronautics Company, and a National Science Foundation graduate fellowship is gratefully
acknowledged. Daniel E. Whitney and John J. Deyst, Jr. provided helpful comments on the model and applications. This paper
draws on material from [11, Ch. 6] and constitutes a revision of an earlier working paper [14].
1. Introduction
To increase their competitiveness, firms that develop products have realized the importance of
improving the effectiveness and predictability of their design processes. Since process improvement
requires process understanding [74], researchers and practitioners put effort into observing product
design and development processes-looking for their important characteristics-and developing models
that account for those features. Most of the advances in this area have proceeded on the assumption that
the design process has an underlying structure [4, 6, 68].
An important characteristic of product
development (PD) processes is that, unlike most business and production processes, they are described by
terms like "creative," "innovative," and "iterative." At an interesting level of detail, PD processes do not
proceed in a purely sequential fashion [24, 43]. The activities in a PD process interact by exchanging
information [21]. The data that activities need to do their work effectively must be available in the right
place, at the right time, and in the right format. The structure of this information flow has a bearing on
process effectiveness and predictability [31, 70]. In particular, the structure of the PD process impacts
project cost and schedule risk [13]. Thus, PD can be described as a complex web of interactions, some of
which precipitate a cascade of rework for other activities. Models that highlight the characteristics of
this web are helpful in improving our understanding of PD processes, and ultimately their effectiveness
and predictability.
In trying to improve process effectiveness and predictability, the following issues are of
particular interest to process planners and managers:
* How does the creation of new information affect the process?
* How does rework cascade through the process?
* What is the best arrangement of activities within the process?
* Does minimizing process duration minimize process cost? Does minimizing either minimize the amount of cost and schedule uncertainty and risk?
* What are the cost, schedule, and risk tradeoffs when designing and structuring processes (i.e., when determining the process architecture)?
Figure 1 depicts the situation addressed by the last question.2 Cost and schedule outcomes for a process
are plotted on a graph. Five example outcomes lie along the curve representing the efficient frontier of
cost and schedule outcomes. 3 Each of these outcomes indicates the best possible schedule that can be
achieved for the cost, and vice-versa. Most outcomes lie in the shaded region: it is always possible to
spend more time and money than necessary.
A scatter plot of many simulated cost and schedule
outcomes for a particular process will cover a certain area and will have its own frontier of most efficient
outcomes. The questions are: what can be done to move this frontier to the lower left? That is, what can
be done to improve the process? In addition, what can be done to increase the predictability of the
process? That is, what can be done to ensure a desired outcome, at a prespecified point along the
frontier?
As we discuss below, process architecture-the structure or arrangement of the activities and
their interfaces in a process-is an important process variable [73].
Just as different product
architectures can deliver varied product capabilities and levels of effectiveness, alternative process
architectures have different cost, schedule, and risk characteristics. Much like a product can be improved
through architectural innovation [37], process improvement includes architecting an efficient and
predictable process structure. We view processes as systems, applying systems thinking and the tenets of
systems engineering to process design. Modeling and comparing alternative process architectures can
provide insights to help navigate cost, schedule, and risk tradeoffs.
To address the above questions, we present a PD process model that accounts for process
architecture, including the arrangement of the activities and the deliverables they require and produce.
The model also accounts for important attributes of each activity and deliverable. Activities may have
2 Figure inspired by [2] and [52: 5]
uncertain costs and durations. When an activity's inputs change, this may cause rework for the activity.
This rework risk is a function of both the probability and impact of the rework. Furthermore, reworking
an activity may be faster and cheaper than doing it the first time. Thus, each activity may have a
characteristic improvement curve. Deliverables may vary in their volatility (propensity to change). A
work policy determines the timing of activity execution and interaction, and if activities can work
concurrently, as a function of process architecture. Existing PD models only account for subsets of these
characteristics.
In presenting and analyzing the model, this paper contributes insights into the above questions
for both researchers and practitioners. The model enables several interesting analyses with multiple
applications. Most of the analysis occurs through simulation, which yields distributions of cost and
schedule outcomes for the modeled process. These distributions are evaluated against chosen targets to
determine the probability of an unacceptable outcome and, with an understanding of possible
consequences, the level of cost and schedule risk. This information is valuable for project planning and
process improvement. Furthermore, the basic model can be easily extended to enable additional realism,
analyses, and insights.
This paper is organized as follows. The next section presents and motivates the various
constructs incorporated into the model. Along the way, example data are given from an industrial
process, an uninhabited aerial vehicle (UAV) preliminary design process at an aerospace company.4
Section 3 discusses simulating the model. Then, section 4 provides results from application of the model
and simulation to the UAV project data. The results and the validity of the model are also discussed.
Section 5 explores the impact of process architecture on cost and schedule risk, and section 6
demonstrates two applications of the model and discusses a third. Section 7 identifies some opportunities
to extend the model. Finally, the conclusion summarizes the work and its value.
3 In reality, the efficient frontier is not a smooth, continuous, monotonic curve.
2. Modeling Constructs
2.1 PD as a Complex Web of Interactions
A powerful way to gain understanding about a process is to decompose it into an activity
network.
Decomposition is the standard approach to addressing complexity-desirable because it is
generally possible to make more accurate estimates about smaller elements of a system. Unfortunately,
decomposition can invite focusing on the elements and disregarding their relationships.
The
relationships between the elements are an important characteristic that differentiates a system from a
mere grouping of elements. As a kind of system, a process is defined not only by its activities but also by
how they work together to produce an output. With an understanding of purpose, content, and internal
relationships, many processes can be rearchitected or reengineered to produce improved outputs for less
time and cost.
In practice, most process definitions and models tend to include a minimum number of element
relationships or interfaces.
As long as each activity has an input and an output, that is considered
sufficient. However, especially in the early stages of PD, participants and the activities they execute tend
to provide and require a great deal of information to and from each other [21]. Indeed, it is information
that tends to flow in the PD process (somewhat analogous to material flow in a manufacturing process).
A large number of interfaces are necessary to document the full range of information needed and
provided by activities. Most process modeling fails to represent the actual information flow, even though
it is a major driver of process effectiveness [22, 31].
Processes are usually represented by activity network models. Activity-on-arc (PERT5), activity-on-node, and other flowchart-type representations are widely used to represent activities and their
precedence relationships. Despite the enormous amount of research on activity networks (see [28] for an
excellent review), these formats are not convenient for representing a large number of interfaces or for
4 The data set is extensively documented and discussed in [11].
comparing alternative process architectures. The visual representation is too busy, making it difficult to
discern architectural differences. And since many process flowcharts capture only a single input and
output for any given activity, in most cases the full range of activity interdependencies and information
flow is not represented.
Using a flowchart for this purpose would simply be too complicated and
cumbersome.
A design structure matrix (DSM) can also be used to represent a process [31, 69, 70]. The DSM
shows activities and interfaces in a concise format. A DSM is a square matrix where each activity is
represented by a shaded cell on the diagonal. Activity names are given to the left of the matrix. A mark
in an off-diagonal cell indicates an interface.6
For each activity, its row shows its inputs and its column
shows its outputs.7 When activities are listed in a roughly temporal order, subdiagonal marks denote a
feeding of deliverables forward in the process-from upstream activities to downstream activities-while
superdiagonal marks indicate feedback. The DSM provides a simple way to visualize the structure of an
activity network and to compare alternative process architectures. The DSM has proved to be a useful
research tool [12]. 8 For example, Morelli et al. [51] use a DSM to study the web of interactions between
PD engineers.
We use a DSM to model the network of PD activities, as shown in Figure 2 for the UAV
preliminary design process. This DSM was built by asking an expert on each activity to list the inputs
(mainly information) the activity requires and the outputs it produces. To distinguish alternative process
architectures, we define a sequencing vector, V, which is given by numbering the rows (and columns) in
the baseline DSM. By resequencing the activities in the DSM, we create new process architectures that
vary the feedback interfaces.
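To make these constructs concrete, the following sketch shows one way to represent a binary DSM and resequence it with a vector V. It is illustrative only (a hypothetical 4-activity matrix and function names of our choosing, not the UAV data or the authors' implementation), written in Python with NumPy:

```python
import numpy as np

# Hypothetical 4-activity binary DSM: dsm[j, i] = 1 means activity j
# requires a deliverable from activity i (rows = inputs, columns = outputs).
dsm = np.array([
    [0, 0, 0, 0],
    [1, 0, 1, 0],   # activity 2 needs input from 1 and feedback from 3
    [0, 1, 0, 0],
    [0, 1, 1, 0],
])

def resequence(dsm, v):
    """Reorder rows and columns by the sequencing vector V (1-based IDs)."""
    idx = np.asarray(v) - 1
    return dsm[np.ix_(idx, idx)]

v = [1, 3, 2, 4]                       # an alternative process architecture
new_dsm = resequence(dsm, v)
# Superdiagonal marks in the resequenced DSM are feedbacks (potential iterations).
feedbacks = np.argwhere(np.triu(new_dsm, k=1))
```

Comparing the superdiagonal marks of the baseline and resequenced matrices shows how a new V varies the feedback interfaces.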
5 Program Evaluation and Review Technique
6 Using the presence or absence of a simple mark in off-diagonal cells to indicate an interface is characteristic of a binary DSM.
7 Some DSMs use the opposite convention-rows for outputs and columns for inputs. The two formats convey equivalent
information.
The model requires a complete list of activities comprising a process. Thus, process boundaries
must be chosen appropriately so that the process is sufficiently defined within them. In reality, this
prerequisite is often unmet, since most processes are enmeshed in a larger process system with vague and
dynamic boundaries. Therefore, it is important to define a process in a relatively modular fashion in
order to focus on its cost and schedule outcomes as a function of its internal architecture.
2.2 Iteration
Iteration is a fundamental characteristic of product design and development [4, 11, 31, 43]. In
many ways, PD is a creative, discovery process [55]. Iteration is a key driver of cost and schedule risk in
PD projects [1, 13, 54].
Moreover, the quality of a product design is theorized to improve with
successive iterations, at least in a general sense [47, 62, 63, 74].
Iteration has become even more
important with the contemporary emphasis on concurrent engineering: activities that were once distinct
and sequential are now intermingled or overlapped, resulting in a greater need to coordinate close
interactions and feedbacks.
Most process modeling literature and software is oriented towards production or business
processes, where the goal is to repeat a process exactly a large number of times. Thus, the standard
representations, PERT and flowcharts, do not handle feedback relationships between activities very well.
However, much of the waste in PD processes stems from having to repeat activities, because the
assumptions and/or other information upon which they were initially executed changed. While some
amount of iteration may be planned in order for designers to converge to a design solution of satisfactory
quality, a lot of unplanned iteration results from a poor activity structure or process architecture [11, 54].
In recognition of the importance of iteration in PD processes, several models have been
constructed to analyze it. An extension to PERT called GERT9 [e.g., 53, 58] enables simulation-based
8 Another process modeling and representation approach we considered is the Structured Analysis and Design Technique
(SADT), particularly its well-known subset, IDEF0. However, IDEF0 diagrams were more cumbersome to represent and
manipulate than the DSM.
9 Graphical Evaluation and Review Technique
analyses of activity networks with feedbacks. Several models have been built using signal flow graphs
[5, 30, 41].
Ford and Sterman [32] use a system dynamics simulation model of the PD process to
demonstrate the impact of rework. However, none of these models is especially convenient for exploring
many alternative process architectures for a large number of distinct activities and deliverables.
Other recent efforts to model iteration utilize the DSM. Smith and Eppinger produced three
DSM-based process models: one which assumes all interdependent activities are worked concurrently
[65], another which assumes such activities are attempted sequentially [66], and a hybrid of the first two
[19, 67]. These models emphasize finding closed solutions for process duration for a limited number of
activities and identifying those activities contributing most to process duration. McDaniel [49] extends
Smith and Eppinger's concurrent model.
Others have explored methods for improving processes by
reducing feedback information [45, 61, 69, 70]. Recent work by Yassine et al. [77] more explicitly
defines dependencies in the DSM based on sensitivity to and variability of information.
However,
existing DSM-based models do not account for stochastic activity durations and costs and improvement
curves, nor do they treat rework probabilities and impacts distinctively. Moreover, they are quite limited
in accounting for partial concurrency in large activity sets.
The model developed in this paper characterizes the product design and development process as
a network of activities that exchange deliverables. If an activity does work and produces an output based
on faulty input information or assumptions, then correction of that input implies rework for the activity.
The accomplishment of that rework then changes the activity's outputs, thereby potentially affecting
other activities in the same way (second-order rework [66]). Rework is not always a certainty: the risk
of rework is a function of its probability and its consequences.
The model accounts for both rework probability and impact. Each input to each activity has a
probability of changing (volatility) and a probability of a typical change causing rework for the activity.
These probabilities are multiplied to get the probability of rework for the dependent activity:
P(rework for an activity caused by change in an input) = P(change in input) · P(change affecting the activity)    (1)
Rework probabilities for the UAV example are shown in the DSM in Figure 3, where

P(rework for activity j caused by a typical change in its input from activity i) = DSM1ji    (2)

Rework in an upstream activity, j, caused by downstream activity, i, can also cause second-order rework.
That is, when an iteration occurs, and the process must backtrack from activity i to rework some activity j
(j < i), this provides the potential for the change in output from activity j to affect interim activities (j+1,
j+2, ..., i-1) and any completed, downstream activities (i, i+1, ..., n) dependent on activity j. Thus, in
DSM1, superdiagonal numbers represent the probability of iteration (returning to previous activities),
while the subdiagonal numbers note the probability of second-order rework following an iteration. In the
basic model, these probabilities are held constant through successive iterations.
Rework can also have a variable impact on an activity. Some changes in inputs, while very
probable, can be absorbed by a robust activity with little impact. The consequences of changing other
inputs may be more severe. The model uses an impact measure-a percentage of the activity to be
reworked-for each input to each activity. Rework impacts for the UAV example are given in DSM2
(Figure 4), where
%(rework for activity j caused by a typical change in input from activity i) = DSM2ji    (3)
In both the probability and impact DSMs, in a few cases where multiple, distinct deliverables pass through
a single off-diagonal cell, the DSMs record only the most influential deliverable.
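A minimal sketch of Equations (1)-(3), assuming small hypothetical DSM1 and DSM2 arrays (the indexing convention follows Equation (2): entry [j, i] concerns rework for activity j caused by activity i):

```python
import numpy as np

n = 3
DSM1 = np.zeros((n, n))   # P(rework for j | typical change in input from i)
DSM2 = np.zeros((n, n))   # fraction of activity j reworked by such a change

# Hypothetical feedback interface: activity 1's output can rework activity 0.
DSM1[0, 1] = 0.5          # Equation (2): rework probability
DSM2[0, 1] = 0.3          # Equation (3): 30% of activity 0 is redone

def rework_probability(p_change, p_affects):
    """Equation (1): P(rework) = P(change in input) * P(change affects activity)."""
    return p_change * p_affects

# A volatile input that changes 80% of the time and matters half the time:
p = rework_probability(0.8, DSM1[0, 1])   # 0.4
```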
2.3 Activity Overlapping
One of the most intuitive approaches to decreasing cycle time in PD processes is to do more
activities concurrently.
However, doing dependent or interdependent activities concurrently is
problematic from a rework perspective, since doing work based on assumptions or preliminary data is
riskier than working with final data. Several have proposed models to explore aspects of this issue.
Eastman [27] discusses how to accelerate PD time by overlapping activities and releasing preliminary
information. AitSahlia et al. [3] model serial versus concurrent process execution strategies, noting that
increased concurrency risks more resources and may not always be appropriate. Hoedemaker et al. [38]
also discuss limits to concurrency. Ha and Porteus [36] build a model to explore the optimal timing of
reviews for concurrent activities. Carrascosa et al. [17] use work by Krishnan et al. [44] and Terwiesch
and Loch [48, 72] to build a multi-stage task model that accounts for iteration probability and impact for
a few overlapped activities. Roemer et al. [60] model how overlapping affects time-cost tradeoffs. Most
of these models and frameworks focus on overlapping just two activities (although some of the models
use these couplets as building blocks for multi-staged processes).
As discussed above, we use a DSM to represent a large network of interdependent activities.
DSM models typically separate activities in a process into concurrently executable groups [e.g., 67].
Grose [35] innovated the use of alternating light and dark bands in a DSM to highlight these groupings.
The DSM in Figure 2 includes such bands. For example, since activities four and five do not depend on
each other, they can be done concurrently and therefore share a band. Since one activity in each band sits
on the critical path, fewer bands are generally preferred, implying greater concurrency and shorter
process duration for a given set of activities. (However, the presence of superdiagonal marks in the DSM
[potential iterations] complicates determination of the critical path.) The subject of appropriate banding
and its ramifications is a rich one containing many opportunities for further research. We use a work
policy decision to determine the bands.
Determining bands a priori assumes no upstream activities will be reactivated. However, rework
can dynamically alter the activities requiring work as the process unfolds. New possibilities for and
limitations to concurrency may result. Thus, when process execution is simulated, the set of activities to
be worked during each time step is redetermined. Based on the ordering of activities in the DSM, the
most upstream activity requiring work is identified and placed in the current band. Then, successive
downstream activities are checked for dependence on the activity. If the subsequent activity also requires
work and does not depend on the first activity in the current band, then this next activity is added to the
current band. Each subsequent activity requiring work is similarly checked for dependence on activities
already in the current band, until an activity depending on an activity already in the current band is
found, at which point the current band is complete. Hence, only consecutive activities requiring work are
added to the current band-making the banding depend on the process architecture (represented by the
activity sequence, V). Activities not requiring work and superdiagonal marks in the DSM are both
ignored when the current band is redetermined. The current band is identified using the "work now"
vector, WN, a Boolean vector indicating which activities are active and passive during each time step of
the simulation.
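The banding logic described above can be sketched as follows. This is our paraphrase of the algorithm (function and variable names are ours), using a binary DSM for the dependence test:

```python
import numpy as np

def work_now(order, W, dsm):
    """Recompute the "work now" vector WN for one time step.

    order: activity indices in process sequence (the architecture V)
    W:     work remaining per activity (W[i] > 0 means unfinished)
    dsm:   binary DSM; dsm[j, i] = 1 if j depends on a deliverable from i
    """
    WN = np.zeros(len(W), dtype=bool)
    band = []
    for j in order:
        if W[j] <= 0:
            continue          # finished activities are ignored
        if any(dsm[j, i] for i in band):
            break             # j depends on the band, so the band is complete
        band.append(j)        # j joins the current band of concurrent work
        WN[j] = True
    return WN
```

Because band members always precede j in the order, only subdiagonal dependences are tested; superdiagonal marks (potential iterations) are ignored, as described above.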
Determination of the conditions under which activities will be active or passive is a matter of
work policy. The work policy represented by the banding algorithm implies that, in the case of coupled
activities, one of the activities will go first and the other will wait. It also implies that downstream
activities will stop and wait when upstream activities on which they depend begin rework. The Gantt
chart in Figure 5 shows this effect.
When rework is generated by activity four for activity three,
activities four and five wait on the new results of activity three before proceeding. (Their work could be
invalidated if they did not wait).
McDaniel [49] found that strategic waiting conserves resources.
However, causing certain individuals or groups to stop work temporarily requires clear management
policies not present in many engineering environments.
The banding algorithm in the model can be
altered to explore alternative work policies. A particularly interesting extension of the model would be
to craft a work policy and banding algorithm to account for generalized precedence relations [29],
thereby allowing activities to exchange information after their initiation and before their completion.
2.4 Activity Cost and Duration
Before execution, activity costs and durations can be uncertain. A variety of distributions or
probability density functions (PDFs), notably the beta distribution [42], have been used to represent the
random variables for activity cost and duration [e.g., 57].
An important characteristic of the beta
distribution curve is its positive skewness-i.e., the area to the right of the most likely completion time is
much greater than the area to the left. This shape stems from the tendency of work to expand to fill
available time-and of human nature to relax when ahead-thus making it less likely that activities will
finish early, even if they could.
However, the important characteristic of positive skewness can be
adequately represented by the much simpler triangle distribution. Triangular distributions are easy to
build, requiring only three data points. Thus, for each activity, three cost and duration estimates are
required: optimistic or best case value (BCV), most likely value (MLV), and pessimistic or worst case
value (WCV).
The BCV, MLV, and WCV are used to form a triangular PDF, denoted as
TriPDF(BCV, MLV, WCV). For simplicity, we normalize the area under a TriPDF to equal one:10

TriPDF Area = (base · height) / 2 = (WCV - BCV) · P(MLV) / 2 = 1    (4)
Table 1 shows activity duration and cost data for the UAV project. Where necessary, the displayed data
are rounded, and they are disguised to protect competitive information. The model assumes that time and
effort for all information transfers between activities are included in activity durations and costs.
Since activity cost depends somewhat on activity duration, the cost PDF for an activity often has
a shape similar to the duration PDF (albeit on a different scale). In fact, we expect substantial correlation
for any pair of activity duration and cost samples. The model uses a sampling correlation coefficient of
0.9 (which can be adjusted by the user) for each activity. Alternatively, the model can be extended by
giving each activity its own correlation value based on its individual cost-schedule elasticity [cf., 29].
Or, the cost and duration of each activity can be represented by a joint cost and schedule PDF similar to
the simulation model's output (discussed in section 4.2). In reality, many activities have some flexibility
to trade duration for cost (e.g., "crashing").
While the model does not explicitly consider cost and
schedule tradeoffs within individual activities, it does account for tradeoffs between activities in the
process:
sample process outcomes with identical costs will vary in duration based on the amount of
activity concurrency.
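A sketch of the activity sampling: triangular marginals with the 0.9 sampling correlation imposed through a Gaussian copula. The paper does not specify the correlation mechanism, so the copula is one plausible choice; tri_ppf is the standard inverse CDF of a triangular distribution:

```python
import numpy as np
from scipy.stats import norm

def tri_ppf(u, bcv, mlv, wcv):
    """Inverse CDF of TriPDF(BCV, MLV, WCV)."""
    c = (mlv - bcv) / (wcv - bcv)
    left = bcv + np.sqrt(u * (wcv - bcv) * (mlv - bcv))
    right = wcv - np.sqrt((1 - u) * (wcv - bcv) * (wcv - mlv))
    return np.where(u < c, left, right)

def sample_activity(dur3, cost3, rho=0.9, rng=np.random.default_rng()):
    """One (duration, cost) pair with sampling correlation rho."""
    z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]])
    u = norm.cdf(z)                       # correlated uniforms
    return tri_ppf(u[0], *dur3), tri_ppf(u[1], *cost3)

# Activity 1 in Table 1: duration (1.9, 2, 3) days, cost (8.6, 9, 13.5) $k
dur, cost = sample_activity((1.9, 2, 3), (8.6, 9, 13.5))
```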
10 Alternatively, one could normalize the area of the TriPDF to, say, 0.8 if one wished to assume a 10% margin on each end of
the stated BCV and WCV. Estimates of the 10th and 90th percentiles of duration are typically more accurate than estimates of the
absolute BCV and WCV durations [42].
                                                                  Durations (days)     Costs ($k)
ID#  Name                                                         BCV   MLV   WCV      BCV   MLV   WCV     IC
 1   Prepare UAV Preliminary DR&O                                 1.9   2     3        8.6   9     13.5    35%
 2   Create UAV Preliminary Design Architecture                   4.75  5     8.75     5.3   5.63  9.84    20%
 3   Prepare & Distribute Surfaced Models & Int. Arngmt. Drawings 2.66  2.8   4.2      3     3.15  4.73    60%
 4   Perform Aerodynamics Analyses & Evaluation                   9     10    12.5     6.8   7.5   9.38    33%
 5   Create Initial Structural Geometry                           14.3  15    26.3     128   135   236     40%
 6   Prepare Structural Geometry & Notes for FEM                  9     10    11       10    11.3  12.4    100%
 7   Develop Structural Design Conditions                         7.2   8     10       11    12    15      35%
 8   Perform Weights & Inertias Analyses                          4.75  5     8.75     8.9   9.38  16.4    100%
 9   Perform S&C Analyses & Evaluation                            18    20    22       20    22.5  24.8    25%
10   Develop Balanced Freebody Diagrams & External Applied Loads  9.5   10    17.5     21    22.5  39.4    50%
11   Establish Internal Load Distributions                        14.3  15    26.3     21    22.5  39.4    75%
12   Evaluate Structural Strength, Stiffness, & Life              13.5  15    18.8     41    45    56.3    30%
13   Preliminary Manufacturing Planning & Analyses                30    32.5  36       214   232   257     28%
14   Prepare UAV Proposal                                         4.5   5     6.25     20    22.5  28.1    70%

Table 1: Activity Data for UAV Process
2.5 Improvement Curves
Often it takes less effort to rework an activity than to do it the first time. An activity may have a
large set-up time (e.g., building a simulation of a subsystem) where, once the infrastructure is in place, it
is easy to rerun the activity with new inputs. Also, activity participants may benefit from learning and
adaptation.
Some existing models account for improvement curve effects [2, 5, 41, 76].
Studying
semiconductor projects at Intel, Osborne [54] found that reworked activities typically exhibited little
further improvement curve effect after an initial iteration. Thus, we model the improvement curve for
each activity as a step function, where an activity initially takes 100% of its duration and cost to
accomplish; second and subsequent executions of the activity take x% of the original duration and cost.
Improvement curve attributes for each activity are stored in a vector, IC. Sample data for the UAV
project are included in Table 1.
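As a short illustration of this step function (the helper name is ours):

```python
def rework_effort(original, ic, first_execution):
    """Step-function improvement curve: 100% of duration/cost on the first
    execution, IC percent on second and subsequent executions."""
    return original if first_execution else original * ic

# Activity 1 (IC = 35%): a full repeat takes 35% of the original 2-day duration.
repeat_days = rework_effort(2.0, 0.35, first_execution=False)   # 0.7
```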
2.6 Process Cost and Duration
Expected duration is the primary process metric and objective function used by most process
models, including the vast majority of the models mentioned above. Quite a bit of work has been done to
calculate and bound the completion times of activity networks [e.g., 25].
Process cost is often treated as a deterministic value or purely as a function of process duration.
While cost outcomes may correlate to an extent with duration outcomes, cost is not a simple function of
duration [33]. Few models compare a number of individual cost and schedule outcomes.
In industry, process cost estimates utilize either parametrics or "roll-up" techniques. Parametric
techniques or cost estimating relationships (CERs), such as those in the Cocomo software [8], predict
process cost as a function of several factors-e.g., duration, complexity, novelty, etc.-with coefficients
determined from regression analysis of historical data.
Roll-up techniques use a cost breakdown
structure (CBS) related to the hierarchy of activities [e.g., 10, 50]. A cost estimate is made for the
smallest activities in the breakdown, and these estimates are aggregated to arrive at a single figure for the
process. The low-level cost estimates are typically based on historical costs and can be deterministic or
random variables [34].
While providing a valuable perspective on potential process costs, parametric techniques are not
easily integrated with process models. CERs cannot evaluate processes for which inadequate historical
data exist, and they cannot be used to compare alternative process architectures. The model in this paper
uses a type of roll-up technique, with an important extension: it accounts for activity interactions.
2.7 Process Cost and Schedule Risk
Cost and schedule risk are also important project management metrics [e.g., 18]. Point estimates
of process cost and duration do not convey information about risk. PDFs provide additional information
about the probabilities of various outcomes. There is a rich literature on the use of activity network
models to evaluate schedule uncertainty in particular [e.g., 9, 23, 34, 40]. The activity network is
evaluated by using Monte Carlo simulation to produce a PDF of possible outcomes. Given a deadline,
the area under the PDF to the right of the deadline represents the probability of unacceptable process
duration outcomes. However, these models typically equate risk with uncertainty and do not account for
varied consequences. Each outcome has a consequence.
Big cost or schedule overruns can have big consequences!
The risk of an outcome is a function of both its probability and its consequence, which is often
represented as:

Risk(outcome) = P(outcome) · Consequence(outcome)    (5)

(cf. Equation (1)).
The model utilizes an impact function to weight each adverse outcome by its
consequence, yielding a risk factor, R. For example, the following is a quadratic impact function:

Is = Ks (xo - Ts)²,    xo > Ts    (6)

where Ks is a normalization constant, xo is a sample schedule outcome, and Ts is the schedule target or
deadline. This impact function indicates that the consequences of a schedule overrun increase as the
square of the size of the overrun.11
Schedule risk is the sum of the probability and impact products over all adverse outcomes:

Rs = ∫[Ts to ∞] fs(xo) · Ks (xo - Ts)² dxo    (7)

where fs(xo) is the PDF of schedule outcomes. A similar formula applies for Rc.12 PDFs can have
different means, variances, and skewnesses, and a better mean does not always imply less variance, and
vice-versa. The risk factor accounts for the mean, variance, and skewness of outcome PDFs versus a
common target. It therefore provides a measure with which to compare schedule PDFs or cost PDFs.
We use Rc and Rs as process metrics for comparing alternative process architectures.
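Equation (7) can be estimated directly from simulation samples. A minimal sketch, using the quadratic impact function and a user-chosen normalization constant k:

```python
import numpy as np

def risk_factor(samples, target, k=1.0):
    """Monte Carlo estimate of Equation (7): only overruns contribute,
    weighted by the quadratic impact function k*(x - target)^2."""
    x = np.asarray(samples, dtype=float)
    overrun = np.maximum(x - target, 0.0)
    return k * np.mean(overrun ** 2)

# e.g., schedule outcomes in days against a 130-day target (Ks = 1/day^2):
Rs = risk_factor([125, 138, 150, 141, 129], target=130.0)
```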
3. The Simulation Model
The previous section discussed several constructs incorporated in the model. We now describe
how the model is simulated. The simulation is implemented in a spreadsheet augmented by additional
software code. The spreadsheet contains all the inputs and serves as the repository for outputs. The
" Taguchi [71] highlighted the usefulness of quadratic quality loss functions. Alternatively, customer utility functions could be
used as impact functions.
12 These and related formulae are derived in [11].
additional software code implements the simulation. Simulating the model provides a sample cost and
schedule outcome for a given process architecture. Running the simulation many times with the same
input provides distributions of cost and schedule outcomes.
The simulation begins by randomly sampling a duration from each activity's TriPDF, followed
by a correlated cost sample. These samples are stored in the vectors ActS and ActC, respectively. Then,
the simulation proceeds over a series of equal time steps, Δt, the size of which is smaller than the
duration of the shortest activity.13 (Smaller time steps provide greater model resolution at the expense of
greater simulation execution time.) Each activity's sample duration is converted into time steps (rounded
off). At each new time step, the simulation chooses activities to work on, using the banding approach
discussed in section 2.3. A "work to be done" vector, W, tracks the amount of work remaining for each
activity [65]. When Wi = 1, the entire activity i remains to be done; when Wi = 0, activity i is complete.
Initially, all Wi = 1.
During each time step, working activities (for which WNi = 1) complete a fraction of their work,

φi = Δt / ActSi    (8)

As a result, the work remaining is decreased for each working activity:

Wi = Wi - φi    (9)

And equal fractions of the activities' costs are added to the cumulative cost:

C = C + φi · ActCi    (10)
13 A discrete event simulation was considered (and could provide a superior approach) but would be more difficult to implement
in a spreadsheet without substantial additional coding and/or third party software. The simple, time-advancing simulation
suffices to demonstrate the insights of the model.
For example, if activity i has a duration of two time steps (i.e., ActSi = 2Δt), then half the activity will be
finished during a single time step. As a result of simulating this time step, Wi is decreased to 0.5, and
half of the activity's cost is added to C, the cumulative cost of the process.
At the end of each time step, the simulation checks for activities that just finished (deliverables
just produced). For each such activity, a probabilistic check is made for potential iterations (rework for
upstream activities), and for second-order rework resulting from any such iterations, using the
probabilities in DSM1. If rework occurs, its amount-a percentage of the affected activity, given in
DSM2, modified by the improvement curve-is added to W.
When all activities are complete, all W entries equal zero. The number of time steps used in the
simulation is the process duration or schedule, S.
Cumulative process cost is output as C. As the
simulation is run many times, a number of C and S sample pairs are generated. The C outcomes form a
cost distribution and the S outcomes form a schedule distribution for the given process architecture.
Together, the C, S pairs form a joint cost-schedule distribution.
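A condensed sketch of one simulation run under these rules, reusing the work_now banding function sketched in section 2.3. It is a simplification (e.g., the rework check treats all dependent activities alike rather than distinguishing iteration from second-order rework), with hypothetical argument names:

```python
import numpy as np

def simulate_once(order, ActS, ActC, DSM1, DSM2, IC, dt, rng):
    """One sample (C, S) outcome for a given architecture (order = V)."""
    n = len(order)
    W = np.ones(n)                       # work remaining; W_i = 1 initially
    C, steps = 0.0, 0
    dep = DSM1 > 0                       # dependence structure for banding
    while W.max() > 1e-9:
        WN = work_now(order, W, dep)     # banding (section 2.3)
        for i in np.flatnonzero(WN):
            phi = min(dt / ActS[i], W[i])            # Equation (8), capped
            W[i] -= phi                              # Equation (9)
            C += phi * ActC[i]                       # Equation (10)
            if W[i] <= 1e-9:                         # deliverable produced:
                for j in range(n):                   # probabilistic rework
                    if rng.random() < DSM1[j, i]:    # check (DSM1 diagonal
                        amount = DSM2[j, i] * IC[j]  # is zero); discount by
                        W[j] = min(W[j] + amount, 1.0)  # improvement curve
        steps += 1
    return C, steps * dt                 # cost and schedule for this run
```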
A number of runs are necessary to get stable cost and schedule distributions. Batches of runs, b
(set to 25), are done until the means and variances of the C and S distributions stabilize to within q (set to
one percent), as checked by the following equations (shown only for C):
|E[Cr] - E[Cr-b]| / E[Cr-b] < q    (11)

|σ²C,r - σ²C,r-b| / σ²C,r-b < q    (12)
where r is the number of simulation runs. Similar equations apply to S. The user can adjust the one
percent stability threshold and the run batch size between stability checks.
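A sketch of this batch-convergence test, Equations (11) and (12), with the stated defaults b = 25 and q = 0.01:

```python
import numpy as np

def stable(samples, b=25, q=0.01):
    """True when mean and variance moved less than q over the last b runs."""
    x = np.asarray(samples, dtype=float)
    prev = x[:-b]
    if prev.size < 2:
        return False                     # not enough runs to compare yet
    ok_mean = abs(x.mean() - prev.mean()) / prev.mean() < q
    ok_var = abs(x.var() - prev.var()) / prev.var() < q
    return ok_mean and ok_var

# run batches of b simulations until both distributions stabilize:
# while not (stable(C_samples) and stable(S_samples)): run_batch(25)
```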
Table 2 summarizes model inputs, Table 3 presents the important variables used in the model,
and Table 4 provides an overview of the algorithm.
Input Data                                                    Unit of Measure
* List of activities comprising process                       (List)
* Activity inputs and outputs                                 (Binary DSM)
* Process architecture                                        (Activity sequencing in DSM)
* Duration estimates (BCV, MLV, WCV)                          Time (e.g., weeks)
* Cost estimates (BCV, MLV, WCV)                              Money (e.g., dollars)
* Improvement curve benefit when repeating activity           % of original duration and cost
* Likelihood of a typical change in an input causing rework   Probability [0,1]
* Impact of a typical change in an input                      % of activity to be reworked [0,1]
* C & S sampling correlation for each activity                Correlation [-1,1]
* Output distribution stability criteria                      % stability
* Run batch size (between stability checks)                   Number of runs
* Simulation time step size                                   Time

Table 2: Summary of Model and Simulation Inputs
4. Results
This section presents and discusses model and simulation results using the baseline architecture
of the UAV preliminary design process.
The full data set and the data gathering methodology are
discussed in [11].
4.1 Iteration Criticality
Without simulation, the model can be used to analyze the relative criticality of various process
interfaces.
While the importance of knowing the most critical activities in a process has been
documented [26], interface criticality is also important, since iteration can be a major driver of process
cost and schedule outcomes, as discussed in section 2.2. However, because rework in one activity can
trigger rework in others, which can continue to propagate, a closed form solution for interface criticality
is difficult to derive for a large, complex activity network [65, 66]. For instance, if we want to use the
model to calculate the criticality of various interfaces to the schedule outcome,

CriticalityS,ji = [DSM1ji · DSM2ji · ICj · ActSj] + [sum of second-order rework effects] + ...    (13)

Disregarding the second- and higher-order terms, and using MLVs for ActS, relative criticalities of the
UAV process interfaces are calculated and shown in Figure 6. In practice, if we assume that the extent
of rework propagation can be limited (especially for high-visibility design changes), then the
approximation obtained by disregarding the higher-order terms is still instructive. Interface criticalities
can provide guidance as a gradient for architecture optimization, as discussed in section 5.2.
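The first-order term of Equation (13) is simple to compute; a sketch, assuming DSM1 and DSM2 as in section 2.2 and MLV durations in ActS:

```python
import numpy as np

def interface_criticality(DSM1, DSM2, IC, ActS):
    """First-order schedule criticality of each interface (Equation (13)
    with second- and higher-order rework terms disregarded)."""
    # Row j is the reworked activity, column i the activity whose output
    # changes; broadcasting applies IC_j * ActS_j across row j.
    return DSM1 * DSM2 * (np.asarray(IC) * np.asarray(ActS))[:, None]
```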
Variable   Description
n          Number of process activities
V          Activity sequencing vector of length n; specifies the process architecture
Δt         Simulation time step size
r          Number of simulation runs
b          Run batch size between output distribution stability checks (set to 25)
q          Required stability of output distributions (set to 1%)
ActS       Activity duration vector of length n; contains a sampled duration for each activity
ActC       Activity cost vector of length n; contains a cost sample correlated with each duration sample
ρ          C and S sampling correlation for each activity (set to 0.9)
IC         Improvement curve vector of length n; fraction of original duration and cost required for second and subsequent executions of an activity
DSM1       n × n matrix of rework probabilities; entry (j, i) is the probability of rework for activity j caused by a typical change in its input from activity i
DSM2       n × n matrix of rework impacts; entry (j, i) is the percentage of activity j reworked by such a change
WN         "Work now" Boolean vector of length n; indicates which activities are active during the current time step
W          Work vector of length n; amount of work (or rework) remaining for each activity, initially set to one (100%)
C          Cumulative process cost for a given run
S          Process duration (schedule) for a given run

Table 3: Summary of Model and Simulation Variables
1. Set W to unit values.
2. Sample activity costs and durations from their distributions.
3. Convert durations to time steps (rounding off; minimum duration is one time step).
4. Each time step:
   Identify concurrent work for the current time step:
      Set all WN(i) = FALSE.
      Find the most upstream activity i that has unfinished work, i.e., where W(i) > 0; set WN(i) = TRUE.
      Loop through subsequent activities: if an activity has unfinished work and is not dependent on an
      activity already in the current band, set its WN = TRUE; otherwise, the current band is complete.
   Do work, check for rework:
      Subtract the appropriate amount of work from W(i) for each working activity (Equations (8)-(10)).
      When an activity finishes, check for rework in other activities using the probabilities in DSM1 and
      the impacts in DSM2; add any rework amounts, discounted by the improvement curve, to W, keeping
      each W(i) from expanding beyond its original value of one.
5. When all W(i) = 0, record the run's outputs, C and S.

Table 4: Summary of Simulation Algorithm
o u c d, uAtc 2).eThemea
The
eanduraionTE[Si 14
is also skewed right (skewness
Asowitha stndrd EatioRn,
=
4
-M',,
l-O.
w~~~~~~~~~~~~~~~~~~~~~~~~~~"
Tale4
'5
r Mpvmetcr
was:ln
'
4
ir
<'44
"
4
jute
st
<XOU0 al~Th~i#reworVT"4Dsandad
now (fbcausieatnoty
of
o
E
it
I·:~~~~~~~~~~~~UIIP
8mdayThesheueditibto
0.95). The two distributions have a correlation coefficient of 0.52,
which is substantially less than the 0.9 correlation between cost and duration samples for individual
activities. This change is accounted for by activity concurrency in many of the outcomes, since increased
concurrency reduces duration without changing cost. Figure 7 also shows targets of $670k for cost and
130 days for schedule. Figure 8 shows the joint cost and schedule PDF resulting when cost and schedule
outcomes are paired.14 The presence of two peak regions indicates two general categories of project
outcomes, one of which entails more iteration. The upper portion of Figure 9 shows the joint PDF in
contour plot format.
4.3 Risk Analysis
Using the distributions and targets shown in Figure 7, we can compute the probabilities of
unacceptable cost and schedule outcomes: Pc(unacceptable) = 0.24 and Ps(unacceptable) = 0.91. Using
Equation (7) and letting Kc = 1/($k)² and Ks = 1/(days)² (so Rc and Rs will be dimensionless), Rc = 688
and Rs = 184. If K is chosen to put Rc and Rs in comparable units such as lost dollars of revenue or lost
customer utility, Rc can be compared to Rs to see which is contributing greater risk. For example,
letting

Kc = ($1k lost profit) / ($1k of cost overrun)²  and  Ks = ($50k lost profit) / (day of schedule overrun)²    (14)

then Rc = $688k lost profit and Rs = $9200k lost profit. Given these values, schedule risk is a bigger
concern than cost risk for the current process architecture.
4.4 Validity of Results
Smith and Morrow [68] distinguish four levels of validity for PD process models:
1. "Face validity" of the model,
14 This PDF was created from the sample C, S pair data using bicubic interpolation across a 73 x 66 mesh grid over 20 x 20
histogram bins. We gratefully acknowledge the use of Matlab™ m-files written by Oyvind Breivik and Christopher Keane in
preparing the three-dimensional histogram data.
2. Application of the model to existing (retrospective) data sets gathered from industry,
3. Use of the model to guide decision making in an experimental environment, and
4. Use of the model to guide decision making in real-world practice.
After reviewing many of the models referenced above in Section 2-and others-they note that none of
them reach level three. Similarly, the model presented in this paper achieves level two. The model has
high face validity [46] because it is based on existing theory (discussed in Section 2), extensive
observations of the modeled system, and conversations with experts on the modeled system [11].
Furthermore, the model was applied to a realistic data set from an industrial setting. While extending the
validity of the model by applying it in experimental and real environments is a future goal, the current
level of validity is comparable with existing models in the established literature. Of course, increasing
the validity of the model will undoubtedly require an iterative process of enrichment and elaboration.
Section 7 discusses avenues for future enhancement.
Regarding the simulation results: when compared with the UAV project's actual outcome,15 the
simulation model's cost and schedule results overestimate the actual time and money required for the
project (while acknowledging the difficulty in validating a distribution of outcomes with a single
outcome). We believe five factors account for the discrepancy. First, the model's basic work policy was
not strictly used in practice. As witnessed by the Gantt chart in Figure 5, the model's work policy
constrains many activities to wait until others have finished completely before beginning, whereas in
reality the activities begin earlier with preliminary information. In section 7, we discuss an extension to
the model to ameliorate this shortcoming.
Second, execution of the actual process occurred under
atypical circumstances: it was accelerated and used new techniques. Since the respondents were asked
to provide data on "typical" activities, their estimates are probably high compared to the actual project.
That is, the input data do not account for new practices.
However, one familiar with the expected
15 For competitive reasons, actual project cost and schedule outcome data are not publicly available.
benefits of these practices could revise the data, with the expectation that the results would be more
accurate. Third, the input data are insufficiently refined in relation to each other and the scale of the
project. Several of the activity experts providing data participate in a number of similar projects that
vary in complexity and scale. Even though respondents were asked to consider a particular UAV project
as a point of reference, it is possible that some of the experts responded from different frames of
reference.
Moreover, the particular UAV referent required them to provide retrospective responses.
Another round of refinement of the input data (a Delphi approach) would improve their accuracy.
Fourth, unlike the activity duration data, the activity cost data were not elicited directly-instead being
derived from the respondents' resource estimates, assuming $750/labor-day.16 In fact, the "actual" cost
reported for the process most likely does not include all of the resources accounted for by the model
input data, since some of these resources, such as computers, were provided by the overhead budget.
Fifth, these results use a time step size of two days, which overvalues small bits of rework and skews
highly iterative runs to higher C and S values. Using a smaller Δt reduces this effect. Together, these
five factors explain the inflated cost and schedule outcomes provided by the simulation. Despite these
differences, we believe that the simulation model and the current data set provide several interesting and
valid insights.
4.5 Sensitivity Analyses
Some interesting analyses include the sensitivity of Rc and Rs to changes in: iteration
probabilities, rework impacts, activity durations and costs [cf., 20], and activity sequencing (i.e., process
architecture). The effects of uncertain and changing targets can also be explored. The next two sections
discuss some example applications along these lines.
16 For the sake of simplicity, each full-time person involved with an activity-and the hardware, software, and facilities he or she
uses-is assumed to cost $750 per day. I.e., assuming each person (with benefits) costs $150k/yr., associated hardware,
software, and facilities are $30k/person-yr., and 240 work days/yr. These estimates are not actual amounts for the company.
5. The Impact of Process Architecture on Rc and Rs
The simulation model is used to compare the relative levels of cost and schedule risk in five
alternative process architectures (with identical cost and schedule targets). Architecture one is the
baseline, shown in Figure 2. Architecture two was suggested by process engineers at the company.
Architecture three takes advantage of beginning manufacturing analyses (Activity 13) earlier, thereby
allowing more activities to work concurrently. Moving the manufacturing activity upstream actually
increases the instances of iteration in the process. However, the activity's robustness to certain input
changes and its large improvement curve reduce the cost and schedule impacts of any rework it
undergoes. Architectures four and five are variants of architecture three. Table 5 shows V for each of
the five architectures, with Rc, Rs, and other metrics below.
V (activity sequence):
  Architecture 1 (baseline): 1 2 3 4 5 6 7 8 9 10 11 12 13 14
  Architecture 2: … 13 14
  Architecture 3: … 12 14
  Architecture 4: … 11 14
  Architecture 5: … 11 14

Metric               Arch 1   Arch 2   Arch 3   Arch 4   Arch 5
r (runs)             475      325      275      375      350
E[C] ($k)            647      643      670      695      692
σC ($k)              45       42       59       72       68
E[S] (days)          141      146      111      99       100
σS (days)            8        8        9        12       12
Tc ($k)              670      670      670      670      670
Ts (days)            130      130      130      130      130
Pc(unacceptable)     24.4%    21.2%    40.7%    57.6%    58.6%
Ps(unacceptable)     91.4%    100%     1.7%     3.3%     0.8%
Rc                   688      575      2235     4997     4213
Rs                   184      316      2        10       11

Table 5: Comparison of Five Process Architectures
Architectures three, four, and five have more concurrency, more iteration, and higher cost risk,
but much less schedule risk. Figure 9 compares the joint C, S distributions for architectures one and
three, clearly showing that architecture three reduces schedule risk at the expense of some cost risk. (In
Figure 9, the goal is to get the distribution in the lower left region, inside the cost and schedule targets.)
Figure 10 plots Rc and Rs for the five architectures, revealing potential cost and schedule risk
tradeoffs. As in Figure 1, results for most of the possible process architectures will fall above and to the
right of the five points. Perhaps some yet unfound architecture will stretch the tradeoff frontier to the
lower left. This possibility is discussed in the next subsection.
Note that the type of impact function chosen affects the shape of the curve.
For example,
quadratic impact functions reward an x unit improvement in a high-risk architecture over the same
improvement in a low-risk architecture.
5.1 Preemptive Iteration
Some may find counterintuitive the result that more iteration can yield a faster schedule. The
result occurs because architectures three, four, and five take advantage of preemptive iteration. Activity
13 is begun earlier and allowed to work concurrently with other activities, saving a large amount of time.
Otherwise, it would be executed entirely on the critical path, as shown in Figure 5. Even though these
architectures increase the number of superdiagonal marks (and their distance from the diagonal) in the
DSM, in this case the increased iteration is worth the price, which is discounted by small rework impacts
and improvement curve effects.
Essentially, by beginning activities earlier, in parallel with other
activities, designers are able to set up, "get their feet wet," and begin learning earlier. They can then do
some amount of rework, rather than the entire activity, on the critical path. Although the cost is higher,
the schedule can be compressed.
Figure 11 illustrates the tradeoff. In case A, two activities are done sequentially. Assuming that
lengthening either activity will lengthen the overall process by the same amount, both activities are on
the project's critical path. The total time and cost spent on both activities is simply the sum of each
activity's individual time and cost. In case B, Activity 2 starts earlier. When the results arrive from
Activity 1, Activity 2 must do rework, but the time required for this rework is less than the time required
to do the entire activity. Thus, the total time on the critical path is reduced, although overall cost is
increased by the cost of the rework. The model facilitates making intelligent choices about concurrency,
providing an understanding of the implications for rework.17 DSM optimization approaches that focus
solely on the number of superdiagonal interfaces and their distance from the diagonal will miss these
types of opportunities to prescribe preemptive iteration.18
5.2 Architecture Optimization
The model in this paper could be extended with a control loop that suggests new architectures to
test (changes V) using a genetic algorithm [cf., 61]. If the change provides an improvement in the
objective(s), it would be allowed to stand; otherwise, a new V mutation would be tried. This process
would continue until an optimum was found (or until marginal improvement dropped and remained
below a certain threshold for several comparisons, etc.).
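A sketch of such a search loop. For brevity this is a single-candidate mutation search rather than a full genetic algorithm with a population and crossover; the objective would call the simulation and return, e.g., a weighted combination of Rc and Rs:

```python
import numpy as np

def mutate(v, rng):
    """Swap two activities to propose a neighboring architecture."""
    v = list(v)
    i, j = rng.choice(len(v), size=2, replace=False)
    v[i], v[j] = v[j], v[i]
    return v

def architecture_search(v0, objective, iters=200, rng=np.random.default_rng()):
    """Keep a mutation of V only if it improves the objective."""
    best, best_obj = list(v0), objective(v0)
    for _ in range(iters):
        cand = mutate(best, rng)
        obj = objective(cand)            # e.g., simulate, then Rc + Rs
        if obj < best_obj:               # improvement: let the change stand
            best, best_obj = cand, obj
    return best, best_obj
```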
The only real way to optimize the process architecture is to compare all possible sequences of V.
However, this is an O(n!) operation. Thus, several authors [45, 66, 70] offer heuristics focused on
reducing iteration. A key strength of the DSM representation is how it highlights iteration. It is often
expected that the optimal process architecture will have a minimal amount of iteration. Yet, our model
challenges that assertion. Section 4.1 discussed a new alternative for an improvement gradient.
The major challenge is determining an objective function that accounts for all significant sources
of uncertainty in the process. The traditional objective in DSM optimization is to minimize the expected
17 When knowledge of potential effects is high (i.e., the outputs are easily anticipated), it is good to "learn before doing" [56].
But when knowledge of effects is low (the outputs are hard to forecast), preemptive iteration is not recommended.
18 The time-cost tradeoff suggested by the concept of preemptive iteration is explored in a model developed by Roemer et al.
[60].
duration of the process [61, 65-67]. However, minimizing variance19 and risk are also worthwhile
objectives, and we want to do so for both cost and schedule. We suggest the risk factors as objective
functions instead of expected durations and costs.
6. Other Applications
This section presents some additional applications of the simulation model, using the UAV
process as the demonstration example.
6.1 Project Planning: Setting Cost and Schedule Targets
Project planners may face the task of choosing appropriate cost and schedule targets. As shown
in Equation (3), the risk factor is partly a function of the chosen target. When choosing or negotiating
targets, one wants to ensure an acceptable level of risk. In Figure 12, P_C(unacceptable) and the cost
risk factor R_C are plotted versus various possible targets (budgets). For example, choosing a cost
target of $620k implies about a 74% chance of an unacceptable outcome and a cost risk factor of about
2500. In comparison, letting T_C = $700k yields P_C(unacceptable) ≈ 0.09 and R_C ≈ 250. These
evaluations are useful when negotiating cost and schedule targets, choosing an acceptable level of risk
when planning projects, or analyzing what premium a customer should pay for faster delivery. The
analysis can also help anticipate the risks of uncertain or changing targets.
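The mechanics of such a target sweep can be reproduced from Monte Carlo output alone. In the sketch
below, the cost samples are random stand-ins rather than UAV results, and the quadratic impact
function is only one plausible form; Equation (3) defines the actual risk factor.

    import numpy as np

    def cost_risk(samples, target):
        """P(unacceptable) and a risk factor for a candidate cost target.

        Assumes an impact function quadratic in the overrun beyond the
        target; a sketch, not the paper's Equation (3)."""
        samples = np.asarray(samples)
        overrun = np.maximum(samples - target, 0.0)
        p_unacceptable = float(np.mean(samples > target))
        risk_factor = float(np.mean(overrun ** 2))   # expected squared overrun
        return p_unacceptable, risk_factor

    # Sweep candidate budgets, as in Figure 12 (illustrative samples only).
    costs = np.random.normal(650.0, 40.0, size=10_000)   # cost outcomes ($k)
    for target in range(560, 841, 40):
        p, r = cost_risk(costs, target)
        print(f"T_C = ${target}k: P(unacceptable) = {p:.2f}, risk = {r:.0f}")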
6.2 Project Control: Replanning
The simulation model is also useful once a process is underway. Activity cost and schedule
estimates can be updated or replaced with actual values as they become available. Then, the model can
be used to explore "what if" questions that arise. For example, if a low-probability, high-impact
iteration becomes more likely or actually occurs, affected activities can be identified quickly, and the impact on
cost and schedule risk can be ascertained. The cost and schedule risks of adding new activities to the
process can also be explored. The remaining portion of a process can then be replanned effectively.
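One lightweight way to fold actuals into the simulation is to overwrite an activity's sampled duration
once its true value is known. The sketch below assumes triangular duration estimates and uses activity
names from the UAV example only as labels; the same idea applies to costs.

    import random

    estimates = {   # (low, likely, high) duration estimates in days
        "Create Initial Structural Geometry": (8.0, 10.0, 14.0),
        "Establish Internal Load Distributions": (5.0, 7.0, 12.0),
    }
    actuals = {"Create Initial Structural Geometry": 11.0}   # completed work

    def sample_duration(activity):
        """Use the recorded actual if available; otherwise sample."""
        if activity in actuals:
            return actuals[activity]          # no remaining uncertainty
        low, likely, high = estimates[activity]
        return random.triangular(low, high, likely)

Re-running the simulation with such updated inputs regenerates the cost and schedule distributions,
and hence the risk factors, for the remaining work.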
6.3 Process Improvement
By studying the sensitivities of the risk factors to individual activity costs and durations, one can
investigate the effects on the overall process of improving one or more of its activities. Moreover, by
studying the sensitivity of R_C and R_S to changes in iteration probabilities and impacts, one can
analyze the benefits of new software or other tools intended to increase coordination and/or speed certain
activities.
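Such a sensitivity study amounts to perturbing one activity's estimates and re-running the simulation.
The sketch below uses a finite-difference elasticity; simulate() is a hypothetical stand-in for the full
model, assumed to return the pair (R_C, R_S).

    import copy

    def risk_sensitivity(inputs, simulate, activity, delta=0.05):
        """Elasticity of both risk factors with respect to one activity.

        inputs maps activity names to (duration, cost) estimates;
        simulate(inputs) stands in for a full Monte Carlo run."""
        base_rc, base_rs = simulate(inputs)
        perturbed = copy.deepcopy(inputs)
        dur, cost = perturbed[activity]
        perturbed[activity] = (dur * (1 + delta), cost * (1 + delta))
        new_rc, new_rs = simulate(perturbed)
        # Percent change in risk per percent change in the estimates
        return ((new_rc - base_rc) / (base_rc * delta),
                (new_rs - base_rs) / (base_rs * delta))

Activities with large elasticities are the best candidates for improved tools or added coordination.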
7. Next Steps and Possible Extensions
Significantly, the model provides a framework on which to base extensions that would not be
possible or tractable with other approaches. For instance, the model provides a framework in which to
apply an options-based approach [cf., 39]. All entries in the W vector, which indicate the percentage of
work remaining for each activity in the process, need not begin with unit values. (Thus, one can analyze
partial versions of full processes.) Setting any downstream W_x (and its interactions) to zero prior to its
execution enables representation of contingent activities. Furthermore, off-diagonal DSM entries, which
in the model contain information on the probability and impact of rework, can be expanded to contain
additional information. For example, a counter could be added for the number of times particular
iterations occur. A relation representing iteration probability as a function of C, S, R_C, R_S, a current
level of design performance, or the iteration count could also be included. (Each time an iteration
happens, its iteration counter is increased, and the probability of future iterations is changed
accordingly.) Thus, the modeling structure also enables the representation of dynamic interfaces among
activities. Some activity interfaces can be made to disappear later in the process by setting the
appropriate DSM_ij entry to zero as the iteration count, C, or S increases. Similarly, new interfaces can
be made to appear. The occurrence of one iteration could therefore affect the likelihood of others, as observed by
Adler et al. [1]. Yet, all such contingencies and dynamic effects must still be outlined a priori, because
the model can handle contingencies only on a case-by-case basis. This is reasonable since contingencies
are usually studied through evaluation of a finite number of predetermined scenarios. A key next step in
research on engineering project outcomes is to include some measure of evolving design performance
upon which to base the likelihood of rework.
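As a sketch of such a dynamic interface, the extended DSM entry below carries an iteration counter
and damps its own rework probability each time it fires; the decay form and cutoff are illustrative
assumptions, not part of the basic model.

    from dataclasses import dataclass

    @dataclass
    class Interface:
        """One off-diagonal DSM entry with iteration-dependent behavior."""
        probability: float    # current rework probability
        impact: float         # fraction of downstream work redone
        decay: float = 0.5    # assumed per-iteration probability decay
        count: int = 0        # times this iteration has occurred

        def fire(self):
            """Record one occurrence and damp future likelihood."""
            self.count += 1
            self.probability *= self.decay
            if self.probability < 1e-3:
                self.probability = 0.0   # interface effectively disappears

    link = Interface(probability=0.4, impact=0.3)
    link.fire()   # after one iteration, future rework is less likely

Raising a probability (or adding an entry) under stated conditions would model appearing interfaces
in the same way.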
The model could also be extended to account for resource constraints. In the basic model, the
activity sequencing accounts only for deliverable-based constraints. The availability of other inputs, such
as personnel, hardware, software, and facilities, is assumed. (The basic model only accounts for the cost
of these resources.) In reality, a lack of resource availability creates queuing effects that can increase
process duration [1, 59]. PERT and activity network models have been extended to account for resource
leveling [e.g., 64, 75].
Similar techniques could be applied to this model by considering resource
availability in the banding algorithm and even in the process architecture. Modified forms of available
algorithms [e.g., 7, 15, 28] could also be used. This extension would increase the advantages of
preemptive iteration and decrease possibilities for activity concurrency at resource bottlenecks.
Another possible extension is to explore other work policies. The basic model uses a work
policy that assumes finish-to-start information transfers between activities. In reality, some outputs may
be available before an activity finishes (or the activity may be required to release preliminary information
nevertheless), and some inputs may not be needed until after an activity has started. Thus, the model
could be adapted to allow activities to begin with a certain subset of their inputs, perhaps by using
generalized precedence relations [29]. Through such a study, interface heterogeneity (the number of
different deliverables or activities providing them) may be found to be a significant source of process
risk.
Efforts to increase activity overlapping should account for prior research on data evolution rates and
activity sensitivities to data changes [17, 44, 48, 72].
Implementing many of these enhancements will call for a discrete event simulation [e.g., 57].
Discrete event simulations are more efficient for larger problems and are not affected by the choice of Δt. A
discrete event simulation was deemed unnecessary to demonstrate the insights of the basic model and
would be more difficult to implement without additional, third-party software. However, a discrete event
simulation will become especially necessary for handling more elaborate work policies.
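The event-queue mechanics that distinguish a discrete event simulation from fixed-Δt time stepping
can be shown briefly. The fragment below handles deliverable precedence only, with no rework loops
or work-policy logic, and its inputs are illustrative.

    import heapq

    def event_driven_makespan(durations, successors):
        """Makespan of an acyclic activity network via an event queue.

        durations maps activity -> duration; successors maps activity ->
        list of downstream activities. A sketch: rework is excluded."""
        indeg = {a: 0 for a in durations}
        for succs in successors.values():
            for s in succs:
                indeg[s] += 1
        # Seed the queue with (finish_time, activity) for source activities.
        events = [(durations[a], a) for a in durations if indeg[a] == 0]
        heapq.heapify(events)
        finish = {}
        while events:
            t, a = heapq.heappop(events)   # events pop in time order
            finish[a] = t
            for s in successors.get(a, []):
                indeg[s] -= 1
                if indeg[s] == 0:          # last input arrived at time t
                    heapq.heappush(events, (t + durations[s], s))
        return max(finish.values())

    acts = {"A": 3.0, "B": 5.0, "C": 2.0}
    print(event_driven_makespan(acts, {"A": ["C"], "B": ["C"]}))   # 7.0

Time advances only at activity completions, so run time scales with the number of events rather than
with the simulated horizon divided by Δt.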
Simulation models have been shown to be an excellent tool for hypothesis generation [16]. Even
the basic simulation model provides a platform for generating hypotheses for future, empirical research.
For example, hypotheses such as the following could be studied:
H1: Paying more attention to and managing iteration leads to improved project outcomes (in
terms of C and S).
H2: Thoughtful use of preemptive iteration can effectively shorten PD cycle time.
Exploration of hypotheses such as these will support the direction, development, and validation of future
PD process models.
8. Conclusion
Product development firms are keenly interested in improving the value they provide to
customers.
In addition to product technical performance, two other aspects of that value, product
affordability and availability, are directly affected by PD cost and cycle time. The need to mitigate
uncertainty in PD processes also leads to increased time and cost [11, 13]. Thus, understanding and
improving the PD process can increase value.
Moreover, value depends on balancing technical
performance, affordability, and availability in a way that the market or customer prefers. Balancing
requires trading off not only product attributes but also process attributes such as cost and duration. This
paper provides insights on how to navigate these tradeoffs through the examination of alternative process
architectures.
We have presented a model that provides practical insight into process architecture, cost,
schedule, and risk.
The model accounts for a number of PD process characteristics, including
interdependency, iteration, uncertain activity cost and duration, rework probability and impact,
improvement curves, and work policy. The model was used to explore the effects of varying the process
architecture to engineer improved processes. It provides valuable aid in project planning and replanning.
The model is applicable at any level in a process hierarchy, and its results can be used as inputs to higher
level models in a nested fashion. Analyzing the level of cost and schedule risk presented by a process
architecture and a chosen target provides the capability to compare alternative work flows and targets.
With appropriate weighting factors, the model enables trading off cost and schedule risk. Furthermore,
the model shows the schedule advantage of preemptive iteration and accounts for the variables that
influence its applicability. Overall, the model provides a framework in which to examine the impacts of
a variety of effects on process cost, schedule, and risk, yielding important managerial capabilities and
insights.
In the future, as process attributes are more widely recognized as significant sources of
competitive advantage, process systems engineering (as opposed to product systems engineering) will
become more important. Organizations developing large, novel, complex systems will benefit especially
from being able to convince their customers that their PD process has an acceptable level of risk. All PD
organizations can benefit from low-risk process architectures, because they need not pass on to their
customers the costs of mitigating so much uncertainty.
9. References
[1] Adler, P.S., Mandelbaum, A., Nguyen, V., and Schwerer, E., "From Project to Process Management: An Empirically-based Framework for Analyzing Product Development Time," Management Science, vol. 41, pp. 458-484, 1995.
[2] Ahmadi, R.H. and Wang, H., "Rationalizing Product Design Development Processes," UCLA Anderson Graduate School of Management, Los Angeles, Working Paper, 1994.
[3] AitSahlia, F., Johnson, E., and Will, P., "Is Concurrent Engineering Always a Sensible Proposition?" IEEE Transactions on Engineering Management, vol. 42, pp. 166-170, 1995.
[4] Alexander, C., Notes on the Synthesis of Form. Cambridge, MA: Harvard University Press, 1964.
[5] Andersson, J., Pohl, J., and Eppinger, S.D., "A Design Process Modeling Approach Incorporating Nonlinear Elements," Proceedings of the ASME Tenth International Conference on Design Theory and Methodology, Atlanta, GA, Sept. 13-16, 1998.
[6] Baldwin, C.Y. and Clark, K.B., Design Rules: The Power of Modularity. Cambridge, MA: MIT Press (forthcoming), 1999.
[7] Belhe, U. and Kusiak, A., "Resource Constrained Scheduling of Hierarchically Structured Design Activity Networks," IEEE Transactions on Engineering Management, vol. 42, pp. 150-158, 1995.
[8] Boehm, B., Software Engineering Economics. Englewood Cliffs, NJ: Prentice-Hall, 1981.
[9] Boehm, B.W., Software Risk Management. Washington, D.C.: IEEE Computer Society Press, 1989.
[10] Brimson, J.A., Activity Accounting: An Activity-Based Costing Approach. New York: John Wiley & Sons, 1991.
[11] Browning, T.R., Modeling and Analyzing Cost, Schedule, and Performance in Complex System Product Development, Ph.D. Thesis (TMP), Massachusetts Institute of Technology, Cambridge, MA, 1998.
[12] Browning, T.R., "Categorization of Design Structure Matrix Applications for Project Management, Organization Design, and Systems Engineering," Lockheed Martin TAS, Fort Worth, TX, Working Paper, Nov., 1999.
[13] Browning, T.R., "Sources of Schedule Risk in Complex System Development," Systems Engineering, vol. 2, pp. 129-142, 1999.
[14] Browning, T.R. and Eppinger, S.D., "A Model for Development Project Cost and Schedule Planning," MIT Sloan School of Management, Cambridge, MA, Working Paper no. 4050, Nov., 1998.
[15] Brucker, P., et al., "Resource-Constrained Project Scheduling: Notation, Classification, Models, and Methods," European Journal of Operational Research, vol. 112, pp. 3-41, 1999.
[16] Carley, K.M., "On Generating Hypotheses Using Computer Simulations," Systems Engineering, vol. 2, pp. 69-77, 1999.
[17] Carrascosa, M., Eppinger, S.D., and Whitney, D.E., "Using the Design Structure Matrix to Estimate Product Development Time," Proceedings of the ASME Design Engineering Technical Conferences (Design Automation Conference), Atlanta, GA, Sept. 13-16, 1998.
[18] Chapman, C. and Ward, S., Project Risk Management: Processes, Techniques and Insights. New York: John Wiley & Sons, 1997.
[19] Cheung, L.C., Smith, R.P., and Zabinsky, Z.B., "Optimal Scheduling in the Engineering Design Process," University of Washington, Industrial Engineering, Seattle, WA, Working Paper, Aug., 1998.
[20] Christian, A.D., Simulation of Information Flow in Design, Ph.D. Thesis (M.E.), Massachusetts Institute of Technology, Cambridge, MA, 1995.
[21] Clark, K.B. and Fujimoto, T., Product Development Performance: Strategy, Organization, and Management in the World Auto Industry. Boston: Harvard Business School Press, 1991.
[22] Clark, K.B. and Wheelwright, S.C., Managing New Product and Process Development. New York: Free Press, 1993.
[23] Conrow, E.H. and Shishido, P.S., "Implementing Risk Management on Software Intensive Projects," IEEE Software, pp. 83-89, 1997.
[24] Cooper, K.G., "The Rework Cycle: Why Projects are Mismanaged," PM Network, 1993.
[25] Dodin, B., "Bounding the Project Completion Time Distribution in PERT Networks," Operations Research, vol. 33, pp. 862-881, 1985.
[26] Dodin, B.M. and Elmaghraby, S.E., "Approximating the Criticality Indices of the Activities in PERT Networks," Management Science, vol. 31, pp. 207-223, 1985.
[27] Eastman, R.M., "Engineering Information Release Prior to Final Design Freeze," IEEE Transactions on Engineering Management, vol. 27, pp. 37-41, 1980.
[28] Elmaghraby, S.E., "Activity Nets: A Guided Tour Through Some Recent Developments," European Journal of Operational Research, vol. 82, pp. 383-408, 1995.
[29] Elmaghraby, S.E. and Kamburowski, J., "The Analysis of Activity Networks Under Generalized Precedence Relations (GPRs)," Management Science, vol. 38, pp. 1245-1263, 1992.
[30] Eppinger, S.D., Nukala, M.V., and Whitney, D.E., "Generalised Models of Design Iteration Using Signal Flow Graphs," Research in Engineering Design, vol. 9, pp. 112-123, 1997.
[31] Eppinger, S.D., Whitney, D.E., Smith, R.P., and Gebala, D.A., "A Model-Based Method for Organizing Tasks in Product Development," Research in Engineering Design, vol. 6, pp. 1-13, 1994.
[32] Ford, D.N. and Sterman, J.D., "Dynamic Modeling of Product Development Processes," System Dynamics Review, vol. 14, pp. 31-68, 1998.
[33] Graves, S.B., "The Time-Cost Tradeoff in Research and Development: A Review," Engineering Costs and Production Economics, vol. 16, pp. 1-9, 1989.
[34] Grey, S., Practical Risk Assessment for Project Management. New York: John Wiley & Sons, 1995.
[35] Grose, D.L., "Reengineering the Aircraft Design Process," Proceedings of the Fifth AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Panama City Beach, FL, Sept. 7-9, 1994.
[36] Ha, A.Y. and Porteus, E.L., "Optimal Timing of Reviews in Concurrent Design for Manufacturability," Management Science, vol. 41, pp. 1431-1447, 1995.
[37] Henderson, R.M. and Clark, K.B., "Architectural Innovation: The Reconfiguration of Existing Product Technologies and the Failure of Established Firms," Administrative Science Quarterly, vol. 35, pp. 9-30, 1990.
[38] Hoedemaker, G.M., Blackburn, J.D., and Wassenhove, L.N.V., "Limits to Concurrency," INSEAD, Fontainebleau, France, Working Paper, Jan., 1995.
[39] Huchzermeier, A. and Loch, C.H., "Project Management Under Risk: Using the Real Options Approach to Evaluate Flexibility in R&D," Management Science, 2000.
[40] Hulett, D.T., "Schedule Risk Analysis Simplified," PM Network, pp. 23-30, 1996.
[41] Isaksson, O., Keski-Seppälä, S., and Eppinger, S.D., "Evaluation of Design Process Alternatives using Signal Flow Graphs," ENDREA, Trollhättan, Sweden, Working Paper, Aug. 31, 1998.
[42] Keefer, D.L. and Verdini, W.A., "Better Estimation of PERT Activity Time Parameters," Management Science, vol. 39, pp. 1086-1091, 1993.
[43] Kline, S.J., "Innovation is not a Linear Process," Research Management, pp. 36-45, 1985.
[44] Krishnan, V., Eppinger, S.D., and Whitney, D.E., "A Model-Based Framework to Overlap Product Development Activities," Management Science, vol. 43, pp. 437-451, 1997.
[45] Kusiak, A. and Wang, J., "Efficient Organizing of Design Activities," International Journal of Production Research, vol. 31, pp. 753-769, 1993.
[46] Law, A.M. and Kelton, W.D., Simulation Modeling and Analysis, Second ed. New York: McGraw-Hill, 1991.
[47] Leong, A. and Smith, R.P., "Is Design Quality a Function of Time?" Proceedings of the IEEE International Engineering Management Conference, Vancouver, Aug. 18-20, 1996, pp. 296-300.
[48] Loch, C.H. and Terwiesch, C., "Communication and Uncertainty in Concurrent Engineering," Management Science, vol. 44, pp. 1032-1048, 1998.
[49] McDaniel, C.D., A Linear Systems Framework for Analyzing the Automotive Appearance Design Process, Master's Thesis (Mgmt./EE), MIT, Cambridge, MA, 1996.
[50] Moder, J.J., Phillips, C.R., and Davis, E.W., Project Management with CPM, PERT and Precedence Diagramming, Third ed. New York: Van Nostrand Reinhold, 1983.
[51] Morelli, M.D., Eppinger, S.D., and Gulati, R.K., "Predicting Technical Communication in Product Development Organizations," IEEE Transactions on Engineering Management, vol. 42, pp. 215-222, 1995.
[52] NASA, NASA Systems Engineering Handbook. NASA Headquarters, Code FT, SP-6105, 1995.
[53] Neumann, K., Stochastic Project Networks: Temporal Analysis, Scheduling and Cost Minimization, vol. 344. Berlin: Springer-Verlag, 1990.
[54] Osborne, S.M., Product Development Cycle Time Characterization Through Modeling of Process Iteration, Master's Thesis (Mgmt./Eng.), Massachusetts Institute of Technology, Cambridge, MA, 1993.
[55] Petroski, H., To Engineer is Human: The Role of Failure in Successful Design. New York: St. Martin's Press, 1985.
[56] Pisano, G.P., The Development Factory: Unlocking the Potential of Process Innovation. Boston: Harvard Business School Press, 1997.
[57] Pritsker, A.A.B., O'Reilly, J.J., and LaVal, D.K., Simulation with Visual SLAM and AweSim. New York: John Wiley & Sons, 1997.
[58] Pritsker, A.A.B. and Sigal, C.E., Management Decision Making: A Network Simulation Approach. Englewood Cliffs, NJ: Prentice-Hall, 1983.
[59] Reinertsen, D.G., Managing the Design Factory: A Product Developer's Toolkit. New York: The Free Press, 1997.
[60] Roemer, T.A., Ahmadi, R., and Wang, R.H., "Time-Cost Trade-Offs in Overlapped Product Development," Operations Research (forthcoming), 1998.
[61] Rogers, J.L., "Integrating a Genetic Algorithm into a Knowledge-Based System for Ordering Complex Design Processes," NASA, Hampton, VA, Technical Manual TM-110247, April, 1996.
[62] Safoutin, M.J. and Smith, R.P., "The Iterative Component of Design," Proceedings of the IEEE International Engineering Management Conference, Vancouver, Aug. 18-20, 1996, pp. 564-569.
[63] Singh, K.J., et al., "DICE Approach for Reducing Product Development Cycle," Proceedings of the Worldwide Passenger Car Conference and Exposition, Dearborn, Mich., Sept. 28-Oct. 1, 1992, pp. 141-150.
[64] Slowinski, R. and Weglarz, J., Eds., Advances in Project Scheduling. Studies in Production and Engineering Economics. New York: Elsevier, 1989.
[65] Smith, R.P. and Eppinger, S.D., "Identifying Controlling Features of Engineering Design Iteration," Management Science, vol. 43, pp. 276-293, 1997.
[66] Smith, R.P. and Eppinger, S.D., "A Predictive Model of Sequential Iteration in Engineering Design," Management Science, vol. 43, pp. 1104-1120, 1997.
[67] Smith, R.P. and Eppinger, S.D., "Deciding Between Sequential and Parallel Tasks in Engineering Design," Concurrent Engineering: Research and Applications, vol. 6, pp. 15-25, 1998.
[68] Smith, R.P. and Morrow, J.A., "Product Development Process Modeling," Design Studies, vol. 20, pp. 237-261, 1999.
[69] Steward, D.V., "The Design Structure System: A Method for Managing the Design of Complex Systems," IEEE Transactions on Engineering Management, vol. 28, pp. 71-74, 1981.
[70] Steward, D.V., Systems Analysis and Management: Structure, Strategy, and Design. New York: PBI, 1981.
[71] Taguchi, G. and Wu, Y., Introduction to Off-Line Quality Control. Nagoya, Japan: Central Japan Quality Association, 1980.
[72] Terwiesch, C. and Loch, C.H., "Management of Overlapping Development Activities: A Framework for Exchanging Preliminary Information," INSEAD, Working Paper #97/117/TM, Nov., 1997.
[73] von Hippel, E., "Task Partitioning: An Innovation Process Variable," Research Policy, vol. 19, pp. 407-418, 1990.
[74] Whitney, D.E., "Designing the Design Process," Research in Engineering Design, vol. 2, pp. 3-13, 1990.
[75] Wiest, J.D. and Levy, F.K., A Management Guide to PERT/CPM: With GERT/PDM/DCPM and Other Networks, Second ed. Englewood Cliffs, NJ: Prentice-Hall, 1977.
[76] Wolfe, P.M., Cochran, E.B., and Thompson, W.J., "A GERTS-Based Interactive Computer System for Analyzing Project Networks Incorporating Improvement Curve Concepts," AIIE Transactions, vol. 12, pp. 70-79, 1980.
[77] Yassine, A., Falkenburg, D., and Chelst, K., "Engineering Design Management: An Information Structure Approach," International Journal of Production Research, vol. 37, 1999.
[Figure 1: Cost and Schedule Tradeoffs. Schedule is plotted against cost, with each axis running from worse to better.]
[Figure 2: DSM Representation of UAV Preliminary Design Process. The 14 activities are: (1) Prepare UAV Preliminary DR&O; (2) Create UAV Preliminary Design Architecture; (3) Prepare & Distribute Surfaced Models & Int. Arngmt. Drawings; (4) Perform Aerodynamics Analyses & Evaluation; (5) Create Initial Structural Geometry; (6) Prepare Structural Geometry & Notes for FEM; (7) Develop Structural Design Conditions; (8) Perform Weights & Inertias Analyses; (9) Perform S&C Analyses & Evaluation; (10) Develop Balanced Freebody Diagrams & Ext. Applied Loads; (11) Establish Internal Load Distributions; (12) Evaluate Structural Strength, Stiffness, & Life; (13) Preliminary Manufacturing Planning & Analyses; (14) Prepare UAV Proposal.]
[Figure 3: DSM1 Showing Rework Probabilities for UAV Process. Off-diagonal entries give rework probabilities, roughly .1 to .5.]
[Figure 4: DSM2 Showing Rework Impacts for UAV Process. Off-diagonal entries give rework impacts, roughly .1 to .9.]
[Figure 5: Example Gantt Chart of Simulated Process Execution. Horizontal axis: elapsed time, 0 to 160 days.]
[Figure 6: Interface Criticalities (to Process Schedule).]
[Figure 7: Results: Cost and Schedule PMFs and CDFs, with targets marked. Cost axis in $k (570 to 870); schedule axis in days.]
[Figure 8: Joint Cost and Schedule PDF. Axes: cost ($k) and schedule (days).]
[Figure 9: Joint Cost and Schedule PDF Contour Plots for Process Architectures One (Top) and Three (Bottom), with budget and schedule targets marked.]
[Figure 10: Cost and Schedule Risk Factors for Five Process Architectures.]
[Figure 11: Effect of Preemptive Iteration on Cost and Schedule. Case A (no preemptive iteration): S_A = t1 + t2 and C_A = c1 + c2. Case B (preemptive iteration): S_B = t1 + t2·(impact × IC) and C_B = c1 + c2 + c2·(impact × IC), so that S_A > S_B while C_A < C_B.]
[Figure 12: P_C(unacceptable) and R_C for Various Cost Targets (T_C). Horizontal axis: cost target from $560k to $840k; the curves show the probability of an unacceptable outcome and the cost risk factor.]