A Framework for Staffing Strategy Decision-Making in
Projects Involving Complex System Architectures
by
David E. Willmes, PhD
Submitted to the System Design and Management Program in Partial
Fulfillment of the Requirements for the Degree of
Master of Science in Engineering and Management
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
June 2003
© 2003 David E. Willmes.
All rights reserved.
The author hereby grants to MIT permission to reproduce and
distribute publicly paper and electronic copies of this thesis document
in whole or in part.
Signature of Author ...........
System Design and Management Program
May 24, 2003
Certified by.......
Christopher L. Magee
Professor of the Practice of Mechanical Engineering and Engineering
Systems
Thesis Supervisor
Accepted by ...................
Steven D. Eppinger
Co-Director, LFM/SDM
GM LFM Professor of Management Science and Engineering Systems
Accepted by ...............
Paul A. Lagace
Co-Director, LFM/SDM
Professor of Aeronautics & Astronautics and Engineering Systems
A Framework for Staffing Strategy Decision-Making in
Projects Involving Complex System Architectures
by
David E. Willmes, PhD
Submitted to the System Design and Management Program
on May 24, 2003, in partial fulfillment of the requirements for the Degree of
Master of Science in Engineering and Management
Abstract
The fundamental motivation for this work is the need to quantitatively assess the
technical complexity of a project at initiation to help inform the resource allocation
decision makers, specifically regarding the allocation of project staff. If the complexity of a project can be assessed, techniques can be used to improve the chances for
project success. In this thesis, I describe the problem of project staffing for systems
that involve complex architectures. The staffing requirements focus on employing
engineers and scientists with different sets of skills, specifically, those with the skill
of understanding a system at a high, abstract level, and those with the skill of doing
detailed design work. Complexity measures best suited as metrics for the system
complexity of engineering systems, which may then be used to drive the resource
allocation problem, are chosen; these draw on the notions of functional complexity, structural complexity, and ideas from Axiomatic Design. A system dynamics model
that utilizes a system complexity measure to control the staffing policy for a project
is developed in detail. The model developed here is used for the case of the Apollo
program's guidance, navigation, and control system to see what insight can be gained
into the staffing strategy that could be utilized for this project. Uncertainties that
are inherent in both the technological side of the project, associated with the system
complexity, and the management side, associated with the availability of resources,
are added to the model. System dynamics simulations are run in order to understand
the sensitivities to the modeling parameters. This work concludes with the creation of
a framework in which resources are committed to a project based on policies that take
into account the technical complexity of the system. This framework includes a
cascading of functional requirements, driven by the ambiguity that exists at each
level of design.
Thesis Supervisor: Christopher L. Magee
Title: Professor of the Practice of Mechanical Engineering and Engineering Systems
ACKNOWLEDGMENTS
I would like to thank Northrop Grumman Electronic Systems for giving me the
opportunity to study at MIT. I especially owe a debt of gratitude to my company
sponsor, Jim Armitage, and to George Reynolds for their support and confidence in
me.
I gratefully acknowledge the insight and guidance provided to me by my thesis
advisor, Christopher Magee.
The System Design and Management (SDM) faculty and staff have been very
helpful throughout the program, and my fellow SDM classmates have made my experience at MIT an enjoyable and rewarding one. I would especially like to thank
Benjamin Hsuehyung Koo and Carol Ann McDevitt for valuable discussions during
the course of this research.
Contents

1 Introduction                                                          11
  1.1 Motivation                                                        11
  1.2 Organization of Thesis                                            13

2 Staffing Needs for Complex Architectures                              16
  2.1 Control Theory Models for Hiring Decisions                        16
  2.2 Control Theory Model for Project Staffing                         20
  2.3 Characteristics of Type I and Type II Staff                       24
  2.4 The Need for Creativity in Engineering Design                     26
  2.5 Staffing in Project Management                                    28
  2.6 Modeling Staffing Policy Utilizing Type I and Type II employees   31

3 System Complexity                                                     33
  3.1 Defining Complexity                                               33
  3.2 Measures of System Complexity                                     35
      3.2.1 Number of Elements and Interfaces                           36
      3.2.2 Organized and Disorganized Complexity                       37
      3.2.3 Informational Complexity and Entropy                        38
      3.2.4 Algorithmic Complexity                                      40
      3.2.5 Logical Depth and Breadth                                   42
      3.2.6 Structural and Functional Complexity                        43
  3.3 Axiomatic Design as a Basis for Complexity Measurement            45
      3.3.1 The Two Axioms of Axiomatic Design                          45
      3.3.2 Measuring Complexity Using Axiomatic Design                 48
      3.3.3 Real and Imaginary Complexity                               49
      3.3.4 Application of Axiomatic Design to Project Staffing Problems 50
  3.4 The Choice of Complexity Measures                                 51

4 The Apollo Guidance, Navigation, and Control System                   54
  4.1 First Level Design of Apollo GN&C                                 56
      4.1.1 Preliminary Design of Apollo GN&C                           58
      4.1.2 Phase-based Design of Apollo GN&C                           59
      4.1.3 Uncoupled Design of Apollo GN&C                             62
  4.2 Second Level Design of Apollo GN&C                                68
      4.2.1 The Design of the Inertial Measurement Unit                 68
      4.2.2 Complexity and the Choice of the IMU Design                 74

5 The System Dynamics of Project Staffing                               76
  5.1 System Dynamics Model for Project Staffing                        76
      5.1.1 Modeling Complexity in System Dynamics                      78
      5.1.2 Basic Model for Type I and Type II Employees                83
  5.2 The System Dynamics Model for Apollo GN&C                         86
  5.3 Formulating Staffing Policy using System Dynamics                 89
  5.4 Effect of Staffing Policy on Cost and Schedule                    91

6 Concluding Remarks                                                    99
  6.1 Conclusions Regarding the Project Staffing Framework              99
  6.2 Future Work                                                       102

A Documentation for System Dynamics Model with Two Types of Employees   105

B Calculation of Structural Complexity for IMU Designs                  127
List of Figures

2-1  Linear control model with two production stages and three worker knowledge levels   19
2-2  Project staffing model with N production stages and two employee types              23
2-3  A Typical Waterfall Product Development Process                                     28
2-4  A Typical Spiral Product Development Process                                        29
3-1  Complexity as a non-monotonic function of disorder                                  38
3-2  The Cycle between engineering design and resource allocation                        51
4-1  The Gimballed Inertial Measurement Unit used on the Apollo project                  70
4-2  OPM of Gimballed Design for the Inertial Measurement Unit                           72
5-1  Staffing Dynamics for One Level of Employee                                         79
5-2  System Dynamics of System Complexity                                                80
5-3  The effect of complexity on the productivity of the staff                           82
5-4  Staffing Dynamics for Two Levels of Employee                                        85
5-5  Type I, Type II, and Total staff for the Apollo Simulation                          90
5-6  Historical Staff for the Apollo GN&C Project                                        90
5-7  Comparison between the baseline case of 700 Type I employees and a limited pool of 450 Type I employees   92
5-8  Comparison between the baseline case of unlimited Type II employees and a limited pool of 150 Type II employees   92
5-9  Cumulative staff with limited pool of Type I and Type II employees compared to the baseline case   93
5-10 Effort Expended (solid) and Project Duration (dashed) versus Ambiguity when level II staff are added for a project with high level I complexity (Case I)   95
5-11 Effort Expended (solid) and Project Duration (dashed) versus Ambiguity when level II staff are added for a project with high level II complexity (Case II)   95
5-12 Strategy for beginning level II tasks based on ambiguity                            96
5-13 Type II staff for different beginning strategies                                    97
5-14 Effort Expended (solid) and Project Duration (dashed) versus Ambiguity at which pre-loading begins when level II staff are added for a project with high level I complexity (Case I)   98
5-15 Effort Expended (solid) and Project Duration (dashed) versus Ambiguity at which pre-loading begins when level II staff are added for a project with high level II complexity (Case II)   98
6-1  Overall framework for staffing strategy for three levels of decomposition           100
List of Tables

4.1  Overall Functional Requirements for Apollo GN&C                          57
4.2  Preliminary Level I Design Parameters for Apollo GN&C                    58
4.3  Preliminary Level I Design Matrix for Apollo GN&C                        58
4.4  Phase-based Level I Functional Requirements for Apollo GN&C              60
4.5  Phase-based Level I Design Parameters for Apollo GN&C                    60
4.6  Phase-based Level I Design Matrix for Apollo GN&C                        61
4.7  Combining Level I Functional Requirements                                63
4.8  Level I Design Parameters for Command Module G&N                         65
4.9  Level I Design Parameters for Lunar Excursion Module G&N                 65
4.10 Level I Design Matrix for Apollo GN&C                                    65
4.11 Revised Level I Functional Requirements for Apollo GN&C                  66
4.12 Uncoupled Level I Design Matrix for Apollo GN&C                          67
4.13 Design Parameters for the Gimballed Inertial Measurement Unit, Block I   69
4.14 Design Matrix for Gimballed IMU                                          73
4.15 Design Parameters for the Gimballess Inertial Measurement Unit           73
4.16 Design Matrix for the Gimballess IMU                                     73
5.1  Breakdown of Parameters for a Two-Level Staffing Dynamics Model          87
5.2  Parameters for the baseline staffing model for Apollo GN&C               88
5.3  Parameters for the scenario where Type II staff are not added until ambiguity decreases to a set level   93
Chapter 1
Introduction
1.1
Motivation
In order to efficiently develop large-scale complex systems, project management must
consider resource allocation to be a significant issue. Managerial decisions must be
closely coupled with the technological aspects of the system being developed. Resources must be allocated with an understanding of how these resources are to be
utilized, and they must be allocated to that stage of the project where they will
provide the greatest benefit. In this thesis, I will consider staff allocation, which is
typically the most important resource for a technological enterprise. The questions
that must be answered are: What technical factors should managers take into account when assembling a project staff? What types of employees should be used to
assemble the staff? At what stage of the project should employees of an organization
become part of the project staff? How should these staff members be selected from
the pool of employees at the company?
The answers to these questions obviously depend on several factors. Each project
involving a system of any complexity is certainly unique.
But on what do they
depend? After determining the dependencies, can one come up with a quantitative
formula for deriving the answers to these questions?
In this thesis, I will elucidate a framework for deriving the answers to some of the
important questions regarding staffing policy. I propose that the complexity of the
architecture is a driving factor for making staffing policy decisions. Differing levels
and types of complexity should have an impact on how a staff is to be assembled.
Decisions involving resource allocation of course depend on other factors.
For
example, there almost certainly will be budget constraints, time constraints, and
requirements for sharing employees with other projects. All of these factors can be
considered constraints on resource availability.
The fact that resource constraints must be considered in making staffing decisions
leads to a dynamic model that involves feedback. These constraints force the system
architect to make decisions about the complexity of the system under development.
Technological solutions may be determined in part by the resources available to the
organization.
The choice of architecture will be constrained by the availability of
resources. Given the complexity of a system architecture, there is an optimal choice
of staffing policy to complete the project subject to these constraints.
The requirements for setting staffing policy are complicated further by the uncertainties that are present in any project involving design and development of a
complex system. These uncertainties include: technological uncertainty, in which
the complexity of the architecture is not fully understood; resource availability uncertainty, in which budgets may be cut, staff members may leave the project prematurely,
or project completion time may be shortened.
The fundamental motivation for this work is the need to quantitatively assess the
technical complexity of a project at initiation to help inform the resource allocation
decision makers, specifically regarding the allocation of project staff. If the complexity
of a project can be assessed, techniques can be used to improve the chances for project
success.
To derive the framework in this thesis, complexity metrics in the context of system
development will be examined and applied to an analysis of the complexity of the
Apollo guidance, navigation, and control system. In addition, an original model for
utilizing these metrics to control staffing policy is formulated. This model assesses
the need for two types of employees: employees who specialize in developing holistic
solutions to manage complexity, and subsystem specialists to undertake the design
tasks at subsystem and lower levels. I have developed a system dynamics model, which
extends previous control theory models by including the two types of employees,
as well as modeling the changing ambiguity and complexity of the system design
throughout the lifecycle of the project.
1.2
Organization of Thesis
In the second chapter, I will describe the problem of project staffing for systems that
involve complex architectures.
The staffing requirements will focus on
employing engineers with different sets of skills, specifically, those with the skill of
understanding a system at a high, abstract level, and those with the skill of doing
detailed design work.
In Chapter 3, I will discuss various complexity measures which have been developed in a wide range of engineering disciplines. The purpose here is to determine
which measure or measures are most applicable as a metric for system complexity
that can be used to drive the resource allocation problem. After describing each
measure that has been considered, I will identify a small subset as potential metrics.
The following chapter will describe the use of these complexity metrics in detail
for the particular case of the guidance, navigation, and control (GN&C) system for
project Apollo.
The design of the inertial measurement unit (IMU) will also be
described in some detail, as an example of using the system architecture description
at one level to drive the requirements of the subsystem one level below. This chapter
will conclude with an identification of the understanding that has been obtained from
this analysis for the Apollo case, and for the use of complexity measures in general
for examining the resource allocation question.
Chapter 5 will develop in detail a model that will utilize a system complexity
measure to control the staffing policy for a project. This model will first describe,
for simplicity, a project which is broken down into two levels of abstraction. After
this model is understood, it can readily be expanded to include projects that must
be modeled as a multiple-level system. The model developed in this chapter will then be used for the Apollo GN&C case to see what insight can be gained into
the staffing strategy that could be utilized for this project. Uncertainties that are
inherent in both the technological side of the project, associated with the system complexity, and uncertainties on the management side, associated with the availability of
resources, will be added to the model. System dynamics simulations will be run in
order to understand the sensitivities to the modeling parameters that are inherent in
the model.
Finally, in the last chapter I will conclude with a description of the framework
that was developed in creating the complexity measure and staffing model. After the
framework and its limitations have been described, potential opportunities for future
work will be discussed.
Chapter 2
Staffing Needs for Complex
Architectures
2.1
Control Theory Models for Hiring Decisions
In recent years, several models have been developed that attempt to address the
problem of employee staffing [EGA01, GZ02, DT92, BM01], mostly utilizing methods from linear control theory, linear optimization, and mathematical programming.
While the models that have been developed have contributed insight into the particular problems that they addressed, none of them are completely applicable to the
staffing problems of organizations that develop complex systems.
Edward Anderson [EGA01] has developed an optimal strategic-level staffing policy
for hiring and firing based on two types of employees: highly experienced employees
and low-producing (or even negative-producing) apprentice employees.
The employees that he is
considering are assemblers, engineers, and manufacturing technicians in the aerospace,
automotive, machine tool and semiconductor industries. His work utilizes dynamic
programming and control theory to model demand as a nonstationary stochastic
process, utilizing Nelson and Plosser's [NP82] suggestion that gross national demand
for employees can be characterized by a combination of a random walk with drift
and serial autocorrelation. In addition, changes in the utility and pricing for a firm's
products contribute to the nonstationarity of individual firm demand.
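As a rough illustration of the demand process described above, the following sketch combines a drifting random walk with serially autocorrelated (AR(1)) shocks. The function name and every parameter value here are hypothetical, chosen only to make the structure visible; they are not estimates from Nelson and Plosser's data.

```python
import random

def demand_path(n_steps, drift=0.5, rho=0.3, sigma=1.0, start=100.0, seed=0):
    """Toy demand series: a random walk with drift whose innovations are
    serially autocorrelated. Each period, last period's shock partially
    persists (rho) and a fresh Gaussian disturbance is added."""
    rng = random.Random(seed)
    path, level, shock = [], start, 0.0
    for _ in range(n_steps):
        shock = rho * shock + rng.gauss(0.0, sigma)  # AR(1) innovation
        level += drift + shock                       # drifting random walk
        path.append(level)
    return path
```

With a fixed seed the path is reproducible, which is convenient when such a process is used as an exogenous input to a staffing simulation.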
Anderson's model is formulated as a closed-loop optimal control problem in which
the objective function trades off the net present value of meeting demand against the
desirable goal of smoothing the organization's capacity. Capacity must not only meet
demand, but must also be smoothed to prevent the expense of employee turnover,
including the damage to morale when firing becomes necessary.
Anderson's theory, developed for organizational staffing, needs modification to be
relevant for project staffing for several reasons. First, staff for projects typically are
replenished from employees already employed within the organization. Delays due to
hiring from outside an organization are typically much larger than delays for adding
project staff from within.
Additionally, staffing projects with existing employees
constrains the resource pool more so than external hiring.
Second, unlike Anderson's model, apprentice employees whose time is consumed
by training would not necessarily be considered full-time members of a project staff.
In the project staffing models developed in this thesis, it is thus assumed that employees are already trained as necessary to adequately perform their job functions.
However, even highly-skilled employees need some time to develop an understanding
of a complex system before they can be fully utilized. Modeling this training time
in terms of two separate populations of employees adds little to a model in which the
variability of employee skill is also a big factor in productivity. Instead, this time has
been included as a hiring/training delay before employees are productive.
Third, Anderson's theory considers nonstationary and unknown demand. The
timescale at which the strategic decisions concerning hiring and firing policy are
made is industry-dependent; for example, Anderson cites an 11-year business cycle
for the aerospace industry, and 5-year cycles for the automotive, machine tool, and
semiconductor industries. Certainly these cycles are long enough that the short-term
demand for each industry's products may be modeled as a random walk.
For projects, the demand for employees with different skills is also nonstationary;
however, instead of treating demand for the employees as a single stochastic variable,
demand for certain skills depends upon the phase of the project.
Finally, in a manufacturing corporation as discussed by Anderson, hiring and
firing is a continual process. Project staffing, however, needs to take into account
the fact that projects begin and end. It is not so critical that the model adapt to
fluctuating market demand, since demand for the project deliverable has already been
determined. Instead, demand for employees with certain skills can be set within the
model using the insight of the technical manager for the project.
In Bordoloi and Matsuo's work [BM01], a linear control model with stochastic
turnover was developed. This model considered three worker knowledge levels, in
which employees progress from one level to another. Each worker is also assigned to
a production stage, as shown in Figure 2-1.
Bordoloi and Matsuo develop a linear control rule that utilizes target values for
Figure 2-1: Linear control model with two production stages and three worker knowledge levels
each level of employee as well as a restoration factor that indicates the desire to decrease the gap between the target workforce and the current workforce, as introduced
by Denardo and Tang [DT92]. This rule controls the proportion of workers at each
knowledge level in the workforce.
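The mechanics of such a linear control rule can be sketched in a few lines. The following is a schematic rendering of a Denardo-Tang style restoration rule, not Bordoloi and Matsuo's exact formulation; the function name and all parameter values are illustrative.

```python
def control_step(workforce, targets, restoration, turnover):
    """One period of a linear control rule: each knowledge level first
    loses a turnover fraction of its workers, then hiring/promotion
    closes a fixed fraction (the restoration factor) of the remaining
    gap to that level's target headcount."""
    new = []
    for w, tgt, quit_rate in zip(workforce, targets, turnover):
        w_after_turnover = w * (1.0 - quit_rate)
        hires = restoration * (tgt - w_after_turnover)
        new.append(w_after_turnover + hires)
    return new
```

For example, with levels at (100, 50, 20), targets (100, 60, 30), a restoration factor of 0.5, and 10% turnover at every level, one step yields (95, 52.5, 24): turnover opens a gap even at a level that was on target, and the rule closes half of each gap per period.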
Bordoloi and Matsuo's model implies that a fully trained worker will be capable of
performing equally well at both production stage 1 and production stage 2. However,
in projects involving complex, one-of-a-kind systems, a more realistic model should
include a type of employee that is capable of performing stage 1 work and a type
of employee that is capable of performing stage 2 work. The assumption that the
skill requirement for work performed at different stages is simply related to amount
of experience should be relaxed.
Another model of employee staffing was developed by Gans and Zhou [GZ02],
in which a linear programming heuristic is utilized to develop an optimal policy of
managing employee levels with stochastic turnover in a manufacturing environment.
While some randomness in turnover exists in project staffing as well, this model
exhibits the same weakness as the previous models when applied to development
projects, namely the assumption that employees become proficient at every task on a
short time-scale relative to the time-scale of the work to be done.
In light of the inadequacies of the above approaches to project staffing models,
a new control theory formulation for project staffing is proposed. This formulation
attempts to address the staffing requirements for complex projects; these requirements
include the necessity of staffing the project with the appropriate mix of skills to
perform both high-level and detailed tasks.
Each of the previous staffing models
considered workers who progress quickly from apprentice to experienced status, and
assumed that experienced workers are capable of any type of work available.
In projects, high-level design tasks require employees who are holistic thinkers, while
components of the system must be produced by specialists in the given field. For
the rest of this thesis, the former employees, the holistic designers, will be designated
Type I employees, and the latter type, the subsystem specialists, will be designated
as Type II.
2.2
Control Theory Model for Project Staffing
The above staffing models take into account uncertainties in the demand for an organization's product as well as uncertainties in the turnover rate of the employees.
However, there are additional factors that account for much of the uncertainty in
development projects. Due to the complexity of the system, there is uncertainty in
the large-scale design of the overall system. The form of the system architecture has
not yet taken shape at project initiation; the specific design parameters that are to
be chosen to fulfill the system's functional requirements have not been determined.
This uncertainty results in ambiguity that affects productivity at the next level of
decomposition.
Throughout the life-cycle of the project, there are opportunities to mitigate these
uncertainties. Most of the uncertainty should be dealt with at the front-end of the
project; this is the period during which the Type I employees should be dominating
the staffing level. The characterization of the staff (Type I or Type II) must be related to the complexity of the system under development. As the uncertainty in the
system architecture is reduced, Type II employees can begin to do more productive
work; variable uncertainty is reduced as design possibilities are eliminated. At some
point, staffing should begin to be dominated by the subsystem specialists; the holistic designers must be maintained at a lower level to ensure that interfaces between
components are developed in line with the overall system design.
In this formulation, the complexity of the system controls the ratio of the two
types of employees. As the project evolves, the variable complexity of the system is
expected to decrease. The proportion of employees of each type can then be updated
depending on the expected value of the complexity. This system can be formulated
as an optimization problem, with the following constraints:

    E(a_1) >= E(a_2) >= ... >= E(a_N)
    g_i >= k a_i            for all i
    g_i + s_i >= T          for all i
    a_i, g_i, s_i >= 0      for all i

where a_i is a random variable that describes the complexity at timestep i, E(a_i) is
the expectation of a_i, g_i is the staff level of Type I employees at timestep i, s_i is the
staff level of Type II employees at timestep i, and T is the minimum total workforce
required. The objective function could then be: minimize the total employee-related
costs, while satisfying the requirement to reduce risk due to complexity by employing an
optimal number of employees in each role.
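Because the constraints above do not couple the timesteps (the ordering constraint restricts the complexity data, not the decision variables), the cost minimization can be sketched as a per-timestep rule rather than a full mathematical program. The sketch below is my construction for illustration, not the thesis model: `optimal_staff`, its parameters, and the use of expected complexities in place of the random variables are all assumptions.

```python
def optimal_staff(expected_complexity, k, T, cost_I, cost_II):
    """Per-timestep cost-minimising staff mix under the constraints
    g_i >= k * E(a_i), g_i + s_i >= T, and non-negativity. Hire just
    enough Type I staff to cover the complexity requirement, then fill
    any remaining headcount up to T in the cheaper way."""
    plan = []
    for Ea in expected_complexity:
        g = k * Ea                  # complexity constraint binds Type I level
        if cost_I <= cost_II:
            g = max(g, T)           # Type I no more expensive: they fill T too
            s = 0.0
        else:
            s = max(0.0, T - g)     # top up with cheaper Type II staff
        plan.append((g, s, cost_I * g + cost_II * s))
    return plan
```

With declining expected complexities [5, 4, 2], k = 10, T = 60, and Type I staff twice as costly as Type II, the rule shifts headcount from Type I toward Type II as the project proceeds, which mirrors the qualitative policy described in the text.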
This control theory model, as depicted in Figure 2-2, differs from its predecessors
in two important ways. First, Type II and Type I employees are considered two
distinct populations; there is no evolution from one type of employee to another.
Second, the ratio of the two types is controlled by the system complexity; Type
I employees are utilized to reduce the variable component of this complexity, and
reduce the ambiguity in the system. Type II employees are controlled by the level of
ambiguity left in the system.
While the approach of implementing such a model may be of some value, it is
difficult in such a scheme to address all likely behaviors that the modeler may deem
acceptable.

Figure 2-2: Project staffing model with N production stages and two employee types

For example, is it more desirable to push back the project completion
date, or to employ more Type II employees earlier in the project when they may not
be as productive due to ambiguity? A single control theory model which attempts
to determine the optimal staffing level will not answer the more important issue of
determining a robust staffing level, that is, a staffing level which ensures that the
project will be completed on time and within budget with a high likelihood.
To
explore all potential regions of the trade space, Monte Carlo simulations may provide
better answers.
In any event, the dynamics of the staffing levels can be simulated by solving a
coupled set of differential equations. The method of system dynamics as expressed
by John Sterman [SteOO] is a convenient way to visualize the results of solving the
differential equations for the staffing levels under a wide range of initial parameters.
Therefore, a system dynamics model will be developed in Chapter 5 to improve upon
the previously existing control theory models for project staffing.
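As a preview of the kind of coupled differential equations involved, the staffing/ambiguity dynamics can be sketched as a small Euler integration. Everything here is a hypothetical illustration, not the calibrated model of Chapter 5: the function name, the first-order hiring delays, and all parameter values are assumptions.

```python
def simulate_staffing(dt=0.25, t_end=40.0, hire_delay=2.0, a0=1.0,
                      reduction_rate=0.05, g_target=50.0, s_target=150.0):
    """Euler integration of a toy two-population staffing model.
    Ambiguity a(t) decays as Type I staff do architectural work; the
    Type II target is gated by how much ambiguity remains; both staff
    levels approach their targets through a first-order hiring delay."""
    g = s = 0.0          # Type I (holistic) and Type II (specialist) staff
    a = a0               # normalised ambiguity, 1 = fully undetermined design
    history = []
    for _ in range(int(t_end / dt)):
        dg = (g_target - g) / hire_delay               # hire toward Type I target
        ds = ((1.0 - a) * s_target - s) / hire_delay   # Type II gated by ambiguity
        da = -reduction_rate * g * a                   # Type I work resolves ambiguity
        g += dg * dt
        s += ds * dt
        a = max(0.0, a + da * dt)
        history.append((g, s, a))
    return history
```

Running this produces the qualitative behavior argued for in the text: Type I staff ramp up first, ambiguity falls as they work, and only then do Type II staff grow toward their full target.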
2.3
Characteristics of Type I and Type II Staff
The staffing model utilized here considers two types of employees: Subsystem specialists, denoted by Type II, who are involved with the detailed creation of each of the
elements of the system, and holistic designers, denoted by Type I, who are involved
in the design of the overall architecture of the system.
While ideally both types of employees should be highly skilled, the types of skills
that each type of employee should possess are distinct. Utilizing an employee who
possesses Type I characteristics in a Type II role would be wasting the employee's
abilities, and the same is true for putting Type II employees into a Type I role. When
making manpower decisions, the type of employee is as important as the skill level of
the employee.
The employee responsible for a system at the highest level of design is considered
the system architect for that system. According to Ed Crawley, professor of system
architecture at MIT, the system architect must have expertise in "simplifying complexity, resolving ambiguity, and focusing creativity." [Cra02] Type I employees must
have the skills required to assist in this activity. These employees must be capable
of recognizing good architecture. Due to the complexity of the technical aspects of
the system, as well as the upstream and downstream influences involved, good
architecture must be the result of a creative solution to the difficulties associated with
providing a complex system as a solution to a customer need.
According to Crawley, the system architect must be able to operate in the conceptual world of the functional as well as in the physical domain. What this means
for Type I employees is that they must be comfortable working in an environment
in which the functional requirements of the system under development are described
in conceptual, solution-neutral terms, without references to the physical form of the
architecture. They also must be able to formulate the solutions in a physical form.
The Type II employees are then responsible for implementing that physical solution.
In order to make decisions regarding the design of a system at the highest level,
an understanding of the system at the lower levels is required. It has been suggested
that the design at each level requires decomposition of the system two levels below it.
It is thus required that Type I employees be able to understand the design to some
extent for each of the subsystems, though not necessarily at the level of detail of the
Type II staff. Type I employees should thus be multi-disciplinary.
By contrast, Type II staff are expected to specialize in their respective fields. Type
II staff may include computer programmers, machinists, and algorithm developers.
While not responsible for the design of an entire system, they can be highly creative
employees as well, with creativity focused on the individual's area of expertise.
The difference between Type I and Type II employees may be as much a matter of personality as of experience or education. An often-used measure of personality is the Myers-Briggs
Type Indicator (MBTI). The MBTI measures four characteristic aspects of an individual's personality: the "Establish Relationship" aspect may be Introverted or Extroverted; the "Generating Information" aspect may be Sensing or iNtuitive; the "Making Decisions" aspect may be Thinking or Feeling; and the "Choosing Priorities" aspect may be Judging or Perceiving. The creative aspect of a person's
personality stems from the "Generating Information" and "Making Decisions" characteristics. According to [AKS+99], creativity to an ST may mean "redesigning or
building machinery," while for an NT, it could mean "developing a new model or
concept."
It would seem natural to conclude that, since the act of building machinery is a Type II task, employees who fit the ST personality type are more likely to funnel creativity into Type II-specific tasks, and hence should be classified as Type II employees; similarly, NT personality types should be expected to be Type I. However,
pigeonholing employees based on personality tests would lead to incomplete and perhaps even inaccurate characterizations. While these tests should not be used as a sole
indicator of an employee's creative inclinations, they may be used in part to initiate
discussion as to the role an individual employee may fill on a given project.
2.4 The Need for Creativity in Engineering Design
Engineering is often considered by non-practitioners to be a rather inartistic practice.
The misconception is that engineers are simply required to take existing data, feed
them through an algorithm, program, or design procedure, and end up with a solution.
This is certainly not true for the case of engineering design. Design parameters can
occur in a potentially endless set of possible combinations. Suppose that engineering designers desire to use a computer to make all of the design decisions for them. There is a limitation to the number of design parameters that the computer would be able to analyze, based on Bremermann's limit, as follows [Bre83].
Consider the total number of bits of information, denoted N, that can be handled by making the entire earth the most efficient computer. If we were to utilize the entire mass of the earth as energy available for computation, and this energy is to be divided into increments of size ΔE, then Heisenberg's uncertainty principle gives us

N ≤ 2π(mc²Δt/h).

Assuming Δt to be the period of atomic vibration, this number is smaller than 270!. In other words, there is not enough computing power available to completely describe 270 variables that can be factorially combined. Complex engineering systems typically must include many more variables than 270.
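As an illustrative aside, this bound can be evaluated numerically. The sketch below is a rough check under assumed values (Earth's mass, an atomic-vibration period of roughly a femtosecond); the function name `bremermann_bits` is chosen here for illustration.

```python
import math

def bremermann_bits(mass_kg=5.97e24, delta_t_s=1e-15):
    """Evaluate the bound N <= 2*pi*m*c^2*dt/h on the number of bits
    processable, using assumed values for the Earth's mass and the
    atomic-vibration period."""
    c = 2.998e8        # speed of light, m/s
    h = 6.626e-34      # Planck's constant, J*s
    return 2 * math.pi * mass_kg * c**2 * delta_t_s / h

# Even with the whole Earth as a computer, the bound (on the order of
# 1e61 bits) is dwarfed by the 270! orderings of just 270 parameters.
N = bremermann_bits()
assert N < math.factorial(270)
```

The comparison makes the argument concrete: the factorial growth of design-parameter combinations overwhelms any physically realizable computation long before 270 variables.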
There is a need for a system designer to be able to formulate potential design
solutions using a more creative process. While this process requires skill and ingenuity
on the part of the designer, tools are available to assist in the procedure.
Once a creative solution to a problem has been obtained, quantitative methods
can be utilized to measure "goodness" of the design. In Chapter 3, one such method
of measuring design quality will be explored in some detail.
2.5 Staffing in Project Management
A framework for project management must be commensurate with the choice of Product Development Process (PDP) deployed by the organization. A typical waterfall,
or stage-gate, PDP is shown in Figure 2-3. This model has been dominant for over 30
years, and is known to work well for projects driven by quality requirements rather
than cost or schedule requirements [UE02].
Figure 2-3: A Typical Waterfall Product Development Process
Figure 2-4: A Typical Spiral Product Development Process

The important aspect of the waterfall PDP from the point of view of project management is the separation of the system-level design from the detail-level design; project management must decide when each design stage is complete through the use of design reviews, or stage-gates. During the system-level design stage, Type I employees should dominate the project staff, while Type II staff should dominate during the detailed design stage.
The existence of the reverse arrows, or cross-phase iterations, in the waterfall PDP framework indicates the potential for substantial rework if a design review fails to confirm that the previous stage was completed correctly. These cross-phase iterations are generally very expensive and time-consuming in a waterfall process. Mitigating
the risks associated with these iterations requires that the staff employed during the
system-level design stage have some knowledge of the detailed design tasks that lie
ahead. This necessitates the multi-disciplinary nature of the Type I staff employed
during that phase, at least at the level where they are able to ask the right questions
of Type II employees to mitigate the design risks. This suggests that some Type II staff should be available throughout the project, even at project initiation, though their role is primarily consultation rather than performing detailed design work.
Alternatively, the spiral model for product development is shown in Figure 2-4.
While the waterfall model manages technical risk well, the spiral model is best suited
for the management of market risk. Market-driven fields such as commercial software
utilize this model, in which each stage is visited more than once to mitigate risks such
as poorly understood requirements and changes in market demand.
For the high-complexity tasks with which this thesis is concerned, the stage-gate
model best represents the likely process that the organization will be utilizing. Thus,
in this thesis, it is assumed that the stage-gate model is at least roughly followed.
Note that Type I employees are employees that work on system-level tasks. However, this is not to be confused with the separation of tasks into two phases, the system design phase and the detailed design phase. A certain quantity of Type I
employees are required to remain on the project during the detailed design phase, the
determination of this quantity being one of the results of the project management
model developed in this thesis.
2.6 Modeling Staffing Policy Utilizing Type I and Type II Employees
While it may be difficult to discern which employees are Type II and which are Type
I, certain traits may distinguish them. These traits are a combination of experience,
education, and personality. The overriding factor in identifying a Type I is the employee's comfort level with uncertainty: a more experienced employee can be expected
to be better prepared to deal with a wider range of situations. Also, a staff member
with a broader education should be expected to have some level of comfort with each
element of the complex system. However, even highly experienced and educated team
members may not be comfortable in the high-risk, highly volatile world that exists at
the beginning of a project. Nevertheless, these employees are highly valuable as lead engineers who can flesh out the details of the individual elements of the system with the highest degree of quality.
It could be argued that employees do not fit neatly into these two categories. Some
employees that would make great Type I team members would also be very effective
as a Type II. One could extend the model to include a continuum of employees,
dependent upon their capability to provide services as Type II or Type I. However,
regardless of the difficulty in further classifying people in this manner, the purpose of
this work is to divide the project roles into classifications, as opposed to attempting
to pigeonhole individual employees into Type I or Type II. In fact, individuals may
be classified as Type I in one project and Type II on another.
Furthermore, doing so would dramatically increase the amount of mathematical rigor required in developing the model, whereas the amount of insight that would
be gained would be minimal. Therefore, I will restrict my attention to models that
include just the Type II and Type I designations.
The use of just two categories implies two levels of decomposition of the system
architecture. Complex systems require decomposition into more levels than this.
This implies that there are not just two types of employee; rather, we should consider
employees of Type I, Type II, Type III, and so on, in which the staff at each level is
responsible for the design one level below it. However, the model for just two levels
of employee can readily be extended to include more than just two levels.
Chapter 3

System Complexity

3.1 Defining Complexity
In order to discuss the designing of complex systems, the notion of complexity in this
setting must first be defined. Rechtin and Maier [MR02] define the term complex as
"composed of interconnected or interwoven parts."
They also define system as "a set of different elements so connected or related as to perform a unique function not performable by the elements alone." A similar definition of complexity is given by El-Haik and Yang [EHY99] as "A quality of an object with many interwoven elements, aspects, details, or attributes that makes the whole object difficult to understand in a collective sense."
Before attempting any quantifiable measure of complexity, it is advantageous to
question the purpose of measuring this quantity. It is a goal of this thesis to argue that
a meaningful description of a system's complexity is extremely useful to the manager
who must allocate resources for the system development project. Thus, complexity metrics will be analyzed according to how relevant they may be to characterizing an
engineering system in relation to resources needed in development.
While searching for an appropriate measure of complexity for systems, the following characteristics must be kept in mind:

1. It must be measurable from project initiation;

2. It must be relevant to the design of engineering systems;

3. It must agree with our intuitive notions of what complexity means for system-level projects.
It could be argued that complexity theory does not need to be addressed if one were
to model a system at a level of abstraction for which the complexity does not occur.
It has been pointed out in the introduction that system design for a system of any
reasonable complexity requires that the system be decomposed into subsystems. The
highest level of decomposition, the most abstract, should have most of the complexity
"hidden" within the levels beneath it. The power of the decomposition of a system
into subsystems lies in this form of information hiding; each subsystem should be
designed at its level of decomposition with minimal knowledge of the complexity of
the system one level above it. One of the most important skills of a system designer
is the capability to reduce the apparent complexity of a system by recognizing the
appropriate level of abstraction necessary to perform the design, and the necessary
level of detail that is required.
A meaningful definition of complexity will have to address the following questions
as well: What are the sources of the complexity? Who has the skills to be able to manage the complexity? How does the complexity of a system change over the life
cycle of the project? Is the complexity of the system independent of the choice of
representation of the system?
With these caveats in mind, the following section explores existing complexity
metrics that may be applicable to engineering systems design.
3.2 Measures of System Complexity
Complex systems are characterized by the large number of elements and interactions
that occur within the system. This is to be differentiated from the number of user
actions required to produce a specified result. Fewer actions required to produce an
effect is, from a user's standpoint, a less complicated system. A digital watch, for
example, may contain a large number of parts and interactions, making it complex,
but the user experiences a less complicated interface. The complexity of the system
must not be confused with the complicatedness, or apparent complexity, that an
end-user of the system must face.
An obstacle in attempting to properly determine system complexity is the level
of abstraction used to decompose the system into elements and interconnections.
At a high level, few elements may be used to describe an entire system. If the system is further decomposed into another layer of elements, the number of elements
and interconnections necessary to describe the system may make the system appear
more complex. At what level of abstraction is the complexity of the system to be
determined?
To answer this question, we need to clearly define the elements of a system based
on the engineering knowledge available to the system engineering organization. New
systems that have not been built before need to have their elements and interfaces well defined in some way.
3.2.1 Number of Elements and Interfaces
As mentioned in the first section of this chapter, systems are characterized by interconnected elements, and it seems natural to suppose that system complexity increases as the number of these elements and interconnections increases. However, it does not seem natural to consider complexity to be equivalent to the number of elements in a system; complexity is a more abstract notion than the simple notion of system size. For example, a brick wall contains a large number of bricks, each brick interacting with many other bricks (transferring compression loads due to gravitational effects) to satisfy the requirement of being a wall. It is obvious that a brick wall does not fit an intuitive notion of the term complexity, and therefore the number of elements and interactions alone does not meet our requirements for a complexity measure.
Another reasonable first attempt at measuring complexity is to count the number
of degrees of freedom available. When attempting to describe products, complicatedness to the user is easily attributed to the number of degrees of freedom the user has.
For example, many people are confused by attempting to record programs with their
VCR; they have to set the time to begin programming, the length of recording time,
the channel, and other parameters. The architecture of a digital recording service such as TiVo reduces the complicatedness of the system in this regard because the only parameter the user has to set is the program name or keyword, and the search for the program is performed entirely by the TiVo service. While this service reduces the number of degrees of freedom available to the end-user, the reduction of complicatedness does not tell us anything about the actual complexity of the architecture.
Counting the number of degrees of freedom, then, is not effective as a complexity
measure; it is not relevant for engineering system design because it does not relate
directly to the system architecture.
3.2.2 Organized and Disorganized Complexity
Two kinds of complexity that have been distinguished by Weaver [Wea48] are organized complexity and disorganized complexity. Systems which exhibit disorganized
complexity are characterized by such a large number of variables that the effects of
the interactions can only be described in a statistical sense. On the other extreme are
systems which exhibit what may be called organized simplicity; such systems have a
small number of variables which interact in a deterministic way.
Engineering systems fall between these two extremes, and thus exhibit organized
complexity, as illustrated in Figure 3-1. El-Haik and Yang [EHY99] argue that the
design of such systems can utilize analytic techniques in the early stages of development, and statistical techniques in later stages due to noise effects.
Figure 3-1: Complexity as a non-monotonic function of disorder

Some approaches to complexity theory are intricately tied to dynamical systems theory, and the existence of chaos in low-dimensional dynamical systems is a prime example of the delicacy of the concept of a complex system. A very simple mathematical equation, for example the one-dimensional logistic map, given by x_{n+1} = a x_n (1 - x_n), can lead to very complicated state behavior. This behavior appears random for certain ranges of the parameters involved, but completely regular for other choices of the parameters. Such complicated behavior for a system with a small number of variables indicates that the concepts described here are not adequate to describe such systems; it is difficult to determine where such systems would fall in Figure 3-1.
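The contrast between regular and seemingly random behavior can be reproduced with a few lines of code. In this sketch, the parameter values a = 3.2 and a = 4.0 are illustrative choices: the first settles onto a period-2 cycle, while the second wanders without settling.

```python
def logistic_orbit(a, x0=0.2, skip=500, keep=8):
    """Iterate x_{n+1} = a * x_n * (1 - x_n); discard a transient of
    `skip` steps, then return the next `keep` states (rounded)."""
    x = x0
    for _ in range(skip):
        x = a * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = a * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

# a = 3.2: after the transient, the orbit alternates between two values.
print(logistic_orbit(3.2))
# a = 4.0: the orbit visits many distinct values (chaotic regime).
print(logistic_orbit(4.0))
```

The same deterministic rule thus produces either a trivially describable orbit or one that is effectively unpredictable, depending only on a single parameter.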
3.2.3 Informational Complexity and Entropy
The concept of entropy was recognized by Maxwell, Boltzmann, and Gibbs as a way
to represent the amount of disorder in a system. For systems that exhibit disorganized
complexity, this concept might be valuable. The Boltzmann entropy is defined to be S = k_B ln(Ω), where k_B is Boltzmann's constant and Ω is the number of microstates for a given macrostate. In such systems, the microstate is typically completely uncertain, whereas the macrostate is almost or completely certain.
For systems that exhibit organized complexity, the statistical relations between
the microstates and macrostates may not be so well defined, making the calculation
of this quantity difficult in practice. Nevertheless, an understanding of the sheer
quantity of interactions that may occur within a system is necessary to quantify complexity, and entropy seems to be a viable candidate. However, instead of attempting
to measure the degree of disorder of a system using entropy, the problem can be
approached by measuring the amount of information contained in the system. This
notion of complexity is based on Shannon and Weaver's seminal work on information
theory [SW49]. Shannon and Weaver define the information content of the occurrence of an event in terms of the number of bits of information required to describe the event, I = -log2(p), where p is the probability of the occurrence of the event. The average of this quantity over all states i is called the Shannon entropy, H = -Σ_i p_i log2(p_i).

These two definitions of entropy can be seen to be equivalent up to a multiplicative constant by recognizing that the probability can be expressed in combinatorial terms for Ω. We can use this expression for H to quantify the complexity of the solution to an engineering design problem by subtracting the number of possible "paths" within the system that the designed system allows from the number of possibilities that exist for the "undesigned" system.
The physicist Murray Gell-Mann [GML96] proposed the measure entitled "effective complexity," the definition of which is related to the identification of a system's regularities through measuring the length of the system's schema. A very simple analogy to this concept involves wallpaper patterns. Wallpaper contains repeating patterns, and the length of the pattern could be considered the schema. These regularities are related to the information content of the system.
Gell-Mann's concept of total information is the sum of the effective complexity measure and an entropy term describing the randomness inherent in the system. The effective complexity measures the amount of information, and the entropy describes the amount of ignorance. These are useful concepts because the combination of an information term and an ignorance term gives a more complete understanding of complexity than either one separately. This allows for different sources of complexity, and this concept will be further developed in subsequent sections.
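As a toy analogue of schema length (an illustration only, not Gell-Mann's formal definition), the sketch below finds the shortest repeating unit of a "wallpaper" string: a highly regular pattern has a short schema, while an irregular one admits no compression.

```python
def schema_length(pattern):
    """Length of the shortest unit whose repetition reproduces the
    string -- a toy stand-in for the 'schema' of a wallpaper pattern."""
    n = len(pattern)
    for p in range(1, n + 1):
        # Check whether repeating the first p characters recreates the string.
        if all(pattern[i] == pattern[i % p] for i in range(n)):
            return p
    return n

# Highly regular wallpaper has a short schema...
print(schema_length("abcabcabcabc"))  # → 3
# ...while a pattern with no repetition is its own shortest description.
print(schema_length("abcdefgh"))      # → 8
```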
3.2.4 Algorithmic Complexity
The concept of algorithmic complexity has been studied extensively by a number of
people including Chaitin [Cha87]. This measure of complexity is related to the length
of a computer program required to perform the given task; the longer the program required, the greater the complexity.
This definition is not very suitable for defining complexity for large complex systems, which most likely include a combination of hardware and software subsystems that are necessary to perform the required task. Even if one is restricted to solely software systems, the length of a program depends on factors that are not associated with the complexity of the task, such as the choice of language and choice of software design, taking into account the possible requirements of software re-use, extensibility, and readability. In order for a complexity metric to be useful for engineering design, it has to be independent of the language used to measure it. However, algorithmic complexity measures lead to an interesting concept, as described in the example below.
Consider the string of digits 3383279502. This string may appear completely random; however, it is a string of digits in the number π, starting with the 25th decimal place. Thus the algorithmic complexity of this sequence of digits depends on a program's ability to recognize such a fact. In this example, algorithmic complexity is the complexity of the algorithm that would be required to recreate the string of digits 3383279502.
The complexity of such a given string of digits is related to the fundamental building blocks that were used to create such a string. If the digits of π are the only building block for our algorithm and the algorithm were limited to searching the sequences of digits in the number π, the complexity of the string would be very low. If however we are allowed to consider various operations, for example, π², cos(π), and so on, the potential space from which we can create our string increases significantly, and the computed algorithmic complexity increases. If the building blocks are the individual digits, then the complexity of the algorithm increases even further.
It is thus apparent that the choice of the fundamental building blocks used to
create a system is of significant influence on the measured complexity of the system.
This raises the important question of what exactly those building blocks are.
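Algorithmic complexity itself is uncomputable in general, but compressed length is a common practical stand-in for description length. The sketch below (using zlib compression, a choice made here rather than drawn from this chapter's sources) contrasts a string with an obvious generating rule against statistical noise.

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed form of the data -- a crude,
    practical upper bound on its description length in bytes."""
    return len(zlib.compress(data, level=9))

# A string with an obvious generating rule compresses dramatically...
regular = b"0123456789" * 100
print(len(regular), compressed_size(regular))

# ...while statistically random bytes barely compress at all.
random.seed(42)
noisy = bytes(random.randrange(256) for _ in range(1000))
print(len(noisy), compressed_size(noisy))
```

Note the caveat from the text: the measured value depends on the compressor (the "building blocks"), so such proxies are suggestive rather than definitive.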
3.2.5 Logical Depth and Breadth
In his paper The Calculus of Intimacy, Seth Lloyd [Llo90] compares the high information content of James Joyce's novel Finnegans Wake with the low information content
that would be associated with monkeys making random keystrokes. Information theory would indicate a high degree of complexity for random keystrokes due to the lack of discernible patterns. Clearly, a standard information-theoretic view of complexity, defined as an entropy-like quantity, is not a good measure in this case, nor is it a good measure for the case of engineering systems, in which degree of randomness is not the correct view of complexity.
To address the concern that information alone is not sufficient for measuring
complexity, Charles Bennett [Ben85] devised a measure called logical depth, which
he defines to be the number of logical steps in the most plausible computer program
required to find the string 3383279502, to continue our previous example using π.
In a similar fashion, Lloyd and Heinz Pagels [LP88] have devised a measure they
call thermodynamic depth, which is related to the informational capacity of a system
(Lloyd and Pagels were considering the system to be a biological species, in which the
information is held in the genetic structure of a member of the species). The measure of breadth is then the difference between this thermodynamic depth and the length of a message required to specify variations that do not add information that the species can use.
Lloyd and Pagels' measure of depth is free from the limitations of algorithmic complexity and logical depth, which come about due to the necessity for those measures to assume knowledge of a minimal sequence. These limitations are related to Gödel's Incompleteness Theorem.
Seth Lloyd argues that information alone is not sufficient to describe complexity,
which is in contrast to many of the measures that other authors have described. From
a systems standpoint, what is missing in the previous definitions of complexity is the
measuring of complexity based on the function of the system. It is true that random keystrokes by a group of monkeys are significantly less complex than a complicated novel like Finnegans Wake, but this statement only makes sense if we understand
that the function of the writing is to convey information. The functional requirement
of a novel to express a story must be known in order to determine the complexity
of the novel. Thus, information content coupled with function may be sufficient to
adequately describe what is meant by complexity. Seth Lloyd's measures suggest
these concepts, which others have drawn upon.
3.2.6 Structural and Functional Complexity
Braha and Maimon [BM98] borrow ideas from software complexity to define the notions of structural complexity and functional complexity. Their complexity measures
deal entirely with what they call artifact complexity, in which an artifact is an output
of the design process, and ignore the problem of complexity from the point of view
of the design process.
Braha and Maimon distinguish between structural complexity and functional complexity, though both of these complexity measures are a function of the information
content of the design. Structural complexity is complexity that is based on the representation of the information. The appeal of this notion of complexity is that the
valuation of complexity from this point of view is easily facilitated using decomposition diagrams that describe the elements and interfaces of the system. Also, since
the structural complexity is a function of its representation, the measure is associated
with the level of abstraction that is used to make the representation.
In the case of software design, the structural information content H can be defined as

H = (N₁ + N₂) log2(μ₁ + μ₂),

where N₁ is the total number of operators in the program, N₂ is the total number of operands, μ₁ is the number of distinct operators, and μ₂ is the number of distinct operands. This definition could well be generalized to include any system, software or hardware, by equating the number of operands with the number of elements in the architecture, and the number of operators with the number of interfaces between the elements. Braha and Maimon's structural measure for software is identical to the volume measure defined in the pioneering work on software complexity by Halstead [Hal77].
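A minimal sketch of this measure follows; the operator/operand split of the toy expression is performed by hand here purely for illustration.

```python
import math

def structural_information(operators, operands):
    """H = (N1 + N2) * log2(mu1 + mu2): the total token count times
    the log of the distinct-token vocabulary (Halstead's volume)."""
    n_total = len(operators) + len(operands)          # N1 + N2
    vocabulary = len(set(operators)) + len(set(operands))  # mu1 + mu2
    return n_total * math.log2(vocabulary)

# Tokens of the expression `d = a + b + c`, split by hand:
ops = ["=", "+", "+"]        # N1 = 3, mu1 = 2
args = ["d", "a", "b", "c"]  # N2 = 4, mu2 = 4
print(round(structural_information(ops, args), 2))  # → 18.09
```

The generalization suggested in the text amounts to feeding the system's interface list in as "operators" and its element list in as "operands."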
In contrast, functional complexity is representation-independent. Unlike structural complexity, employing a measure of functional complexity does not ignore the ultimate purpose of the design, which is to satisfy a chosen set of functional requirements. Functional complexity can be considered a function of the probability of success.
Braha and Maimon define functional complexity F as

F = log2(1/p),

where p is the probability of success in achieving the functional requirements. Even though the probability of success is not easily measured, experts in the relevant field can satisfactorily estimate this probability, making this a useful concept.
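The definition translates directly into code; the probabilities below are illustrative values, not estimates from any real project.

```python
import math

def functional_complexity(p_success):
    """F = log2(1/p): bits of uncertainty in meeting the functional
    requirements when the estimated probability of success is p."""
    return math.log2(1 / p_success)

# A coin-flip chance of meeting the requirements carries one bit of
# functional complexity; certainty carries none.
print(functional_complexity(0.5))   # → 1.0
print(functional_complexity(1.0))   # → 0.0
```

Halving the estimated probability of success adds exactly one bit, which is what makes expert estimates usable: only the order of magnitude of p matters.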
3.3 Axiomatic Design as a Basis for Complexity Measurement

3.3.1 The Two Axioms of Axiomatic Design
Axiomatic Design as developed by Nam Suh [Suh90] provides a framework for guiding
the design process. This framework has been tested in a wide variety of contexts with
examples from fields such as manufacturing, materials, software, organizations, and
systems. The premise of this design philosophy is that the design process is governed
by two fundamental axioms:
Axiom 1 (The Independence Axiom) Maintain the independence of the functional requirements.

Axiom 2 (The Information Axiom) Minimize the information content of the design.
In order to flesh out the meaning of these axioms, it is necessary to define the
terms customer attributes (CAs), functional requirements (FRs), design parameters
(DPs), and process variables (PVs). These terms have specific meanings depending
on the context, but in a systems context:
CA: the attributes desired of the overall system
FR: the functional requirements of the system
DP: the components or subsystems
PV: the available resources: employees, budget, tools and so on
These entities are connected in a deceptively simple way. In the physical domain, the vector of FRs is related to the vector of DPs via a design matrix [A]:

FRs = [A] DPs,

and in the process domain, the vector of DPs is related to the vector of PVs via a process matrix [B]:

DPs = [B] PVs.
The first of these equations, called the design equation, can be used to prove a number of important theorems. Designs are defined as coupled, uncoupled, or decoupled. A design that satisfies the Independence Axiom is called an uncoupled design, and occurs when the matrix [A] is diagonal. In terms of functional requirements and design parameters, an uncoupled design occurs when each individual FR, say FRᵢ, can be satisfied by exactly one DP, say DPᵢ.

Similarly, a design that does not satisfy the Independence Axiom is called a coupled design. Coupled designs that can be put in a form in which [A] is triangular become what are called decoupled designs.
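These three cases can be distinguished mechanically from the sparsity pattern of [A]. The sketch below (with an assumed numerical tolerance for treating entries as zero) classifies a square design matrix given as nested lists.

```python
def classify_design(A, tol=1e-12):
    """Classify a square design matrix [A] (FRs = [A] DPs):
    uncoupled -> [A] is diagonal,
    decoupled -> [A] is lower- or upper-triangular,
    coupled   -> anything else."""
    n = len(A)
    # Positions of the entries that are effectively nonzero.
    nz = [(i, j) for i in range(n) for j in range(n) if abs(A[i][j]) > tol]
    if all(i == j for i, j in nz):
        return "uncoupled"
    if all(i >= j for i, j in nz) or all(i <= j for i, j in nz):
        return "decoupled"
    return "coupled"

print(classify_design([[1, 0], [0, 2]]))      # → uncoupled
print(classify_design([[1, 0], [0.5, 2]]))    # → decoupled
print(classify_design([[1, 0.5], [0.5, 2]]))  # → coupled
```

The diagonal test is checked first, since a diagonal matrix is trivially triangular as well.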
The Independence Axiom discussed so far is related to the controllability of a design; functional coupling due to mismatches between the FRs and the DPs serves to weaken the controllability of the design solution. While there is often
confusion between the notions of coupling and complexity, system complexity is not
addressed by this axiom. However, the coupling of a design (coupled, decoupled, or
uncoupled) has implications for the relationship between DPs and the complexity of
the system, as will be seen below.
The second axiom, the Information Axiom, can serve as a basis to define complexity. Information as defined by Suh [Suh90] is "The measure of knowledge required to
satisfy a given FR at a given level of the hierarchy." Given Shannon's original definition of information and Braha and Maimon's definition of functional complexity,
it should be clear that Suh's definition of information is equivalent to the functional
complexity of Braha and Maimon.
During the design process, the DPs of a design are varied while seeking the best design. In a coupled or a decoupled design, the information content depends on
the sequence by which the DPs are varied; for an uncoupled design, the information
content is independent of this sequence, since varying a DP only affects one FR.
3.3.2 Measuring Complexity Using Axiomatic Design
Many of the complexity measures that were described in the previous section seem inappropriate for complex system development because they are representation-dependent
measures of complexity. Measures such as the number of components, length of a
program, and regularity are not useful measures of the difficulties inherent in system design if they cannot be used to define the process necessary to overcome these
difficulties.
On the other hand, complexity measures that are associated with information
content are more viable candidates for measures that can be used to describe system-level design problems.
According to axiomatic design, the relevant definition of complexity for engineering design problems is a measure of uncertainty in achieving the specified functional requirements. Information content is defined as the logarithm of one over the probability of success.
This definition captures the essential ingredients of other, more intuitive notions of complexity. For example, the simplest notion of complexity in the previous list is the number of components. A system that contains a large number of components, the performance of each being required for the system to perform successfully, will fail if any single component fails to perform. Thus, the probability of success for this system is a function of the probability of success of each component, the overall number of components, and the way these components interact.
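This reasoning can be sketched numerically. Assuming, purely for illustration, a serial system of independent components, the information content I = log2(1/p) of the system is the sum of the per-component contributions, so complexity grows with the number of components:

```python
import math

# Information content I = log2(1 / p_success). For a system whose FR is
# met only if every one of n independent components works, p_system is
# the product of the component probabilities, so the system's
# information content grows with the number of components.

def information_content(p_success):
    return math.log2(1.0 / p_success)

def system_information(component_probs):
    p_system = math.prod(component_probs)
    return information_content(p_system)

# One very reliable part contributes little information content...
print(round(information_content(0.99), 4))
# ...but 100 such parts in series add it up linearly.
print(round(system_information([0.99] * 100), 2))
```

The independence assumption is the simplification here; as the text notes, the way the components interact also enters the real calculation.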
3.3.3 Real and Imaginary Complexity
Complexity is intricately tied to uncertainty, as indicated by defining complexity in
terms of the probability of successfully achieving the functional requirements of a
system. Uncertainties can be considered the sources of complexity. In his 2000 book,
Suh [Suh00] describes these sources as real uncertainty and imaginary uncertainty.
Real uncertainty is uncertainty due to the fact that a given design is incapable of
satisfying the functional requirements of the system. Real uncertainty of a coupled
design can be reduced by decoupling the design, and uncoupled designs are always
lower in complexity than coupled designs. The only way to decrease the real uncertainty of a given design is thus to decouple it.
Imaginary uncertainty is due to the lack of knowledge or lack of understanding
of a design on the part of the designer. Since this is a statement of ignorance, this
uncertainty will be reduced as more knowledge of the system is attained.
Real complexity and imaginary complexity are measures of these uncertainties. Inasmuch as the sources of these complexities are different, they are completely orthogonal to each other, as their names imply. The absolute uncertainty can then be calculated as the vector sum of these two quantities:

C_A = sqrt(C_R^2 + C_I^2).
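Since the two uncertainties are treated as orthogonal components, the absolute complexity behaves like a vector magnitude. A minimal sketch, with purely illustrative values:

```python
import math

# Absolute complexity as the vector sum of orthogonal real and
# imaginary complexity: C_A = sqrt(C_R**2 + C_I**2).
def absolute_complexity(c_real, c_imaginary):
    return math.hypot(c_real, c_imaginary)

# Hypothetical values: reducing imaginary complexity (by learning about
# the system) drops C_A toward the floor set by the design's real
# complexity, which only decoupling can lower.
print(absolute_complexity(3.0, 4.0))  # 5.0
print(absolute_complexity(3.0, 0.0))  # 3.0
```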
3.3.4 Application of Axiomatic Design to Project Staffing Problems
Design is an evolving interplay between what we want to achieve and how we plan on
achieving it. The first of these can be stated in terms of the functional requirements of
the system, and the second can be stated in terms of the design parameters that make
up the physical solution. In the language of system architecture, this expresses the
idea that form follows function. The expression of a design concept is this mapping
between the FRs in the functional domain and the DPs in the physical domain.
Design can be defined as the mapping process from the functional space to the
physical space to satisfy the designer-specified FRs. This mapping is not unique; the
axioms are used to provide a measure of the quality of the solution.
Given that engineering design is associated with the mapping between the FRs
and the DPs of the engineering system, how is the management of the engineering
design process to be analyzed? Management of a complex system can also be framed
in the context of axiomatic design. The objective of the management organization is
to facilitate the design of the engineering system, and the physical solution for the
management function is the resource allocation required to design the engineering
system. In this way, the management design must parallel the engineering design; each
iteration of determining engineering FRs and DPs must be followed by an iteration of
determining FRs and DPs for the management function of resource allocation. This
cycle of iterating between engineering FRs and DPs and resource allocation FRs and
DPs is depicted in Figure 3-2.
Figure 3-2: The Cycle between engineering design and resource allocation
3.4 The Choice of Complexity Measures
At this point we should be able to answer the questions regarding complexity measurement that were asked at the beginning of this chapter.
Is complexity quantifiable? Complexity can be quantified in a number of ways,
based on a system's structure, evolution, function, size, and so on. While the appropriate measure of complexity depends on the application, it is often difficult to
determine which measure is best suited to describe the phenomena. The remainder
of this thesis focuses on the metrics of functional and structural complexity as the
most relevant for the discussion at hand.
What are the sources of complexity? In a functional description of complexity,
complexity is due to uncertainties about the success of a system. These uncertainties
can be due to actual limitations of a design, or due to the lack of knowledge or
understanding of a design. In a structural description, the sources of complexity are
simply related to the interplay between the elements and interfaces.
Does complexity change during the lifecycle of the project? Complexity due to
lack of knowledge should decrease as work is done on a project. Complexity due to limitations of a design may decrease if a coupled design is decoupled, or if a better
design is found with a lower information content. Therefore complexity does change
over the lifecycle of a project, and it is the project manager's role to continually
evaluate the status of the complexity and use this information to allocate the project
staff.
Are large systems always complex? Not necessarily, but of course the answer to
this question depends on the choice of complexity measure. According to Suh [Suh95],
"A large system is not necessarily complex if it has a high probability of satisfying
the FRs specified for the system." Using the wall analogy, the Great Wall of China,
though very large, would not be considered complex in terms of the FR of separating
one geographic area from another, because it has a high probability of doing so.
However, in terms of satisfying the FR of keeping opposing armies out of China, it
may not necessarily do so with a high probability. From a functional viewpoint, the
FRs must be known before a measure of complexity can be determined.
Is complexity always a bad thing? Complexity may be utilized to reduce the complicatedness presented to the user. Consider that at project initiation a certain "quantity"
of complexity can be allocated to the project. Complexity can then be "spent" to
achieve real goals, such as added functionality, efficiency, robustness, or flexibility.
Taking these answers into account, can one determine a "best" measure of complexity? A measure of complexity must agree with our intuitive notions about systems. Measures tend to fall into one of two camps: those measures that deal with
the structure of the system, and those that consider primarily the function of the
system. The structural measures are based on the representation of the system, while
the functional measurements are not. In the following chapters, we will consider
Braha and Maimon's measures of structural complexity and functional complexity as
representative of these notions of complexity, as well as Suh's definitions of real and
imaginary complexity for the functional notion.
Chapter 4
The Apollo Guidance, Navigation, and Control System
In 1961, President Kennedy announced that within the decade the United States
would place a human exploration team on the moon and return them safely to earth.
The first major contract awarded with respect to the Apollo project was the guidance,
navigation, and control system. The functional requirements of the Apollo Guidance,
Navigation, and Control system (hereafter referred to as Apollo GN&C) were to
determine the position and velocity of the spacecraft and to control its maneuvers.
In this chapter, complexity metrics for Apollo GN&C will be analyzed. This analysis requires that the system be decomposed, such that functional requirements and design parameters are determined at each level of decomposition. While the decomposition of the system is relatively straightforward owing to the fact that the system has already been developed, this case should show that the choice of functional requirements at each level is not necessarily clear, and the complexity that emerges
from the design choice based on these requirements has a significant impact on the
system design.
The information included here was obtained through a number of NASA-sponsored Technical Reports [MMS66, NJ67, Dra65, TH65, Han71, Hoa76] written between 1965 and 1976 by members of the staff of MIT's Instrumentation Laboratory (currently Draper Laboratory), which was responsible for the development of the Apollo project's guidance, navigation, and control system, hereafter called GN&C.
Apollo GN&C was chosen because it is a system of sufficient complexity to allow differentiation between complexity measures. The project was undertaken in a different environment than most current large-scale development work; the complex aerospace projects of today are typically developed through a number of subcontractors. The study of a system in the simplified project environment of the 1960s allows the technical complexity of the system to be thoroughly described without the complications of project development in a multiple-contractor environment.
Furthermore, since the development of a spacecraft to place men on the moon
was so novel and unique, issues concerning reuse, platforms, and extensibility of the
architecture are not very significant. While these factors may be heavily weighted in
the choice of architecture for systems currently under development, they may obscure
the phenomena that this case is meant to uncover.
4.1 First Level Design of Apollo GN&C
In the context of the Apollo program, the terms navigation, guidance and control
have the following meanings:
Navigation : The measurement and computation necessary:
1. To determine the present spacecraft position and velocity;
2. To determine where the present motion is sending the spacecraft if no
maneuver is made;
3. To compute what maneuver is required to continue on to the next step in
the mission.
Guidance : The continuous measurement and computation during accelerated flight to generate necessary steering signals so that the position and velocity change of the maneuver will be as required by the navigation computations.
Control : The management of the rotational aspects of spacecraft motion.
The activity referred to as navigation concerns the maintenance of the knowledge
of the position and velocity of the vehicle during free fall (see Table 4.1). Data are
used from the Manned Spaceflight Network (MSFN), optical measurements between
stars and the earth and the moon, and radar. During these free-flight phases, attitude
control is frequently necessary. Guidance refers to the measurement of vehicle velocity changes and the control of vehicle attitude to produce the required changes in course and speed. This occurs during powered-flight phases. During guidance a spatial reference is necessary, which is provided by an inertial measurement unit (IMU). The IMU also measures acceleration and delivers this information to the guidance computer for processing. The radar sensor is also used in guidance during the lunar landing phase.
Table 4.1: Overall Functional Requirements for Apollo GN&C
FR1 Navigation: To perform translational measurement and control during free fall phases
FR2 Attitude Control: To perform rotational measurement and control during free fall phases
FR3 Guidance: To perform translational measurement and control during acceleration
FR4 Thrust Vector Control: To perform rotational measurement and control during acceleration
In what follows, various level I decompositions of the Apollo GN&C system architecture will be explored. The purpose is to determine the best choice of parameters
to describe the level I design of the system; this choice is based on the ability to
differentiate level I design work from level II, or subsystem, design work.
While the design of the GN&C system already exists, the representation of the
level I design should be chosen to adequately reflect the most likely design scenario
as visualized by the original designers. The appropriate representation should be one
such that the apparent complexity of the design is low. This is the representation in which the requirements of each level II task are most completely determined, with minimal reference to the rest of the system.
4.1.1 Preliminary Design of Apollo GN&C
Very early in the development process of Apollo GN&C, it was determined that
the design of a system to perform these functions would include a general purpose
digital computer, a space sextant, an inertial guidance unit, an optical telescope for
alignment of the inertial system, and a control and display console for the astronauts. For the purpose of this thesis, these five components will be considered design parameters, as shown in Table 4.2.
Table 4.2: Preliminary Level I Design Parameters for Apollo GN&C
DP1 Guidance Computer
DP2 Sextant
DP3 Inertial Guidance Unit
DP4 Alignment Optical Telescope
DP5 Controls and Display
The design matrix [A], where A_ij = ∂FR_i/∂DP_j, has nonzero elements as shown in Table 4.3.
Table 4.3: Preliminary Level I Design Matrix for Apollo GN&C
[Matrix of X marks indicating the nonzero elements relating FR1-FR4 to DP1-DP5.]
While this design is the historical preliminary design, it is not yet adequate to
be considered the first level decomposition of Apollo GN&C. The first level decomposition must include enough information to allow the second level employees, or
subsystem specialists, to be able to design their respective subsystems without having to understand the details of other components of the system. Given this initial
design, how are the design parameters interfaced to each other? For example, the
inertial guidance subsystem must receive information from the optical navigation system; how is this information to be relayed? How does the inertial guidance system
interchange data with the guidance computer; what data is necessary, and in what
format? These are the issues that the high-level designers should be concerned with
to allow the level II staff to concentrate on the details of their components.
Additionally, the complexity of the system is masked by the lack of structural
links between each component. This information is required to compute the structural complexity measure. The functional complexity measure is similarly affected
by this omission; without knowledge of the interactions between components that are
used to satisfy a functional requirement, how can the probability of achieving that
requirement be estimated? For an adequate representation of a design, the FRs and
DPs must be sufficiently defined to allow at least an estimation of the complexity of
the system to be determined.
4.1.2 Phase-based Design of Apollo GN&C
Now that we understand that a more detailed first-level design is necessary, one
approach may be to develop a design based on the phases that the Apollo spacecraft
passes through on its journey from the earth to the moon and back.
The nature of the Apollo mission required that the spacecraft have the capability to guide the crew of the Apollo through twelve distinct phases: the launch to earth orbit, earth orbit navigation, translunar injection, translunar navigation after booster separation, Lunar Excursion Module injection, lunar landing, crew on the lunar surface, the ascent from the lunar surface, the rendezvous of the Lunar Excursion Module with the Command Module, earth atmosphere re-entry, and earth landing. Therefore, an initial attempt at determining the functional requirements of
the Apollo GN&C system could naturally be the breakdown based on mission phase
as shown in Table 4.4.
Table 4.4: Phase-based Level I Functional Requirements for Apollo GN&C
FR1 To provide guidance during the launch to Earth orbit
FR2 To provide earth orbit navigation
FR3 To provide guidance during the translunar injection phase
FR4 To provide translunar navigation after booster separation
FR5 To provide guidance during LEM injection
FR6 To provide guidance during lunar landing
FR7 To track the location of the Command Module while the Excursion Module is on the surface
FR8 To provide guidance during the ascent from the lunar surface
FR9 To provide navigation data during the rendezvous of the Excursion Module with the Command Module
FR10 To provide guidance during the rendezvous
FR11 To accurately determine the initial conditions for re-entry guidance
FR12 To provide guidance necessary to bring the Command Module accurately to the designated landing site
Table 4.5: Phase-based Level I Design Parameters for Apollo GN&C
DP1 Booster Guidance System
DP2 Command Module Guidance System
DP3 Optical Navigation Subsystem
DP4 Inertial Measurement Unit
DP5 Ground Tracking System
DP6 Lunar Excursion Module Guidance System
DP7 Computer Display
DP8 Control Unit
DP9 Rendezvous Radar System
DP10 Apollo Guidance Computer
DP11 Landing Radar System
To fulfill the requirements of the system for each phase, the Apollo GN&C system can be considered to consist of eleven primary subsystems. The subsystems chosen as design parameters in Table 4.5 are each necessary for a subset of mission phases. For example, the Command Module is the home for the astronauts throughout most of the trip; however, the Lunar Excursion Module is the vehicle that makes the descent to the lunar surface. Therefore, the guidance system on board the Command Module is only necessary during the guidance phases FR1, FR2, FR3, FR11, and FR12, while the guidance system aboard the Lunar Excursion Module is only required during the lunar landing itself. The full design matrix using this representation is shown in Table 4.6.
Table 4.6: Phase-based Level I Design Matrix for Apollo GN&C
[Matrix of X marks relating FR1-FR12 to DP1-DP11; most functional requirements require several design parameters, and the Apollo Guidance Computer (DP10) appears in every row.]
As seen in the design matrix, the phase-based representation yields a system that shows a high degree of coupling at the first level. The coupled relationship between the DPs and the FRs implies that this choice of parameters is not the ideal breakdown of functional requirements. This version of the level I design is not operational: it does not break down the subsystems in a way that allows the staff designing each subsystem to concentrate on a more focused problem. If the designers of the Apollo system were to use this matrix to develop the system, the emergent complexity would likely have yielded a low probability of success; each functional requirement would require the successful operation of multiple design parameters. The structural complexity of this system is high due to the number of interfaces required to ensure that each subsystem receives the information necessary to fulfill its obligation to perform the task of achieving each functional requirement. The functional requirements at the highest level - the fundamental building blocks used to describe the system - must be chosen appropriately to create a system design which exhibits low complexity; this choice would allow for a breakdown of level II tasks for which the requirement of each level II task is well-defined.
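The claim that coupling lowers the probability of success can be illustrated with a small sketch. Assume, hypothetically, that a functional requirement succeeds only if every design parameter marked in its row of the design matrix performs correctly, with independent per-DP success probabilities:

```python
import math

# For each FR, success requires every DP marked in its design-matrix row.
# With independent DP success probabilities, more marks per row means a
# lower probability of satisfying that FR. The probabilities here are
# illustrative, not estimates for the Apollo subsystems.

def fr_success_prob(row, dp_probs):
    return math.prod(p for uses, p in zip(row, dp_probs) if uses)

dp_probs = [0.95] * 5

uncoupled_row = [1, 0, 0, 0, 0]   # one DP per FR
coupled_row   = [1, 1, 1, 1, 1]   # phase-style coupling: five DPs per FR

print(round(fr_success_prob(uncoupled_row, dp_probs), 4))  # 0.95
print(round(fr_success_prob(coupled_row, dp_probs), 4))    # 0.7738
```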
4.1.3 Uncoupled Design of Apollo GN&C
A preferred representation of the level I design takes into account the similarity between the GN&C systems aboard the Command Module and Lunar Excursion Module. Both modules require an IMU for guidance, a guidance computer for processing,
an optical navigation system, and a computer display and controls for the astronauts
to perform their tasks. It was recognized that the IMU, guidance computer, and
display and controls subsystems could be identical for the two modules. In addition,
the electronics that make up the Power Servo Assembly and the components required for analog-to-digital and digital-to-analog conversion for communication of the computer with other subsystems, known as the Coupling Data Units, could also be identically designed. The functional requirements from Table 4.4 can now be separated into FRs for each module, as in Table 4.7.
Table 4.7: Combining Level I Functional Requirements
FR1 To provide guidance during the launch to Earth orbit (CM G&N)
FR2 To provide earth orbit navigation (CM G&N)
FR3 To provide guidance during the translunar injection phase (CM G&N)
FR4 To provide translunar navigation after booster separation (CM G&N)
FR5 To provide guidance during LEM injection (LEM G&N)
FR6 To provide guidance during lunar landing (LEM G&N)
FR7 To track the location of the Command Module while the Excursion Module is on the surface (LEM G&N)
FR8 To provide guidance during the ascent from the lunar surface (LEM G&N)
FR9 To provide navigation data during the rendezvous of the Excursion Module with the Command Module (LEM G&N)
FR10 To provide guidance during the rendezvous (LEM G&N)
FR11 To accurately determine the initial conditions for re-entry guidance (CM G&N)
FR12 To provide guidance necessary to bring the Command Module accurately to the designated landing site (CM G&N)
Differences between the Command Module and Lunar Excursion Module GN&C
systems are then limited to the optical navigation systems and the systems required
for guidance to land the lunar module and to rendezvous with the command module.
The Command Module optics requires a sextant, which is used during mid-course
navigation in a similar manner to a marine sextant, and a scanning telescope, which
is used as a finder to make a mid-course navigation alignment of the inertial system.
The Lunar Excursion Module optics consists only of an optical alignment telescope
to provide IMU orientation information to the computer.
The additional requirements of the lunar module are satisfied by the landing radar, which is used to guide the module to the lunar surface, and the rendezvous radar, which provides guidance during the rendezvous of the module with the Command Module.
Table 4.8 and Table 4.9 show the design parameters for the GN&C system for each of the two modules. Taking into account the identical components, there are ten design parameters for this component-based design. DPs 1, 2, 3, 4, and 8 are common to both modules, DPs 5 and 6 are only included in the Command Module, and DPs 7, 9, and 10 are only included in the Lunar Excursion Module.
If these design parameters were chosen to satisfy the functional requirements of
Section 4.1.2, the resulting design matrix would be as shown in Table 4.10. The
extreme amount of coupling involved in this representation would imply that these
functional requirements are still a poor choice, from the point of view of defining level
II subsystems, for the level I design.
Instead, the functional requirements of the design should be chosen such that only one design parameter, or a small subset of design parameters, is required to satisfy each requirement. This observation yields an uncoupled GN&C system design, as shown in Table 4.12, where the functional requirements are as given in Table 4.11.
The uncoupled design matrix is a suitable representation of the level I design
of the Apollo GN&C system.
The functional requirements give an indication as
to what interfaces are required between each of the design parameters, and each
design parameter is focused on a specific subset of functional requirements. This
representation allows the level II staff to concentrate on a small number of functional
requirements; the high-level complexity of the system should be irrelevant to the
lower-level detailed design work carried out by the subsystem staff.
Recall that the structural complexity measure is representation-dependent. In
Table 4.8: Level I Design Parameters for Command Module G&N
DP1 Inertial Measurement Unit
DP2 Apollo Guidance Computer
DP3 Power Servo Assembly
DP4 Coupling Data Units
DP5 Sextant
DP6 Scanning Telescope
DP8 Computer Display and Controls
Table 4.9: Level I Design Parameters for Lunar Excursion Module G&N
DP1 Inertial Measurement Unit
DP2 LM Guidance Computer
DP3 Power Servo Assembly
DP4 Coupling Data Units
DP7 Alignment Optical Telescope
DP8 Computer Display and Controls
DP9 Rendezvous Radar
DP10 Landing Radar
Table 4.10: Level I Design Matrix for Apollo GN&C
[Matrix of X marks relating FR1-FR12 to DP1-DP10; DP1 through DP4 appear in nearly every row, reflecting the extreme coupling noted below.]
Table 4.11: Revised Level I Functional Requirements for Apollo GN&C
FR1 To provide an accurate memory of spatial orientation
FR2 To provide an accurate measurement of spacecraft acceleration conditions for the required maneuver during free fall phases
FR3 To provide navigation information during mid-course navigation, earth orbit and lunar orbit phases
FR4 To measure the orientation of the inertial platform
FR5 To provide stable-member orientation information to the LEM guidance computer
FR6 To support the operation of the IMU, the Optics Units, and other parts of the system
FR7 To convert mechanical angle data into digital quantities for computer processing
FR8 To generate an analog signal from digital increments delivered for steering by the computer
FR9 To control the optics
FR10 To change spacecraft attitude by means of very small impulses of thrust
FR11 To establish the proper mode of Optics system operation
FR12 To mark the navigational sighting time to the computer
FR13 To aid in operating the inertial and optical subsystems
FR14 To provide steering signals when human reaction is too slow
FR15 To perform spacecraft attitude control with minimal fuel expenditure
FR16 To maintain timing references
FR17 To communicate guidance data with the astronauts
FR18 To communicate with ground tracking stations
FR19 To perform calculations necessary to deduce position and velocity relative to the earth and moon from the input data available during all flight phases
FR20 To supply the guidance system with information on the relative positions of the Lunar Excursion Module and Command Module
FR21 To obtain an accurate set of guidance conditions during lunar landing
Table 4.12: Uncoupled Level I Design Matrix for Apollo GN&C
[Matrix of X marks relating FR1-FR21 to DP1-DP10; each functional requirement is satisfied by one design parameter (or a small subset), so the matrix is uncoupled.]
this case, the uncoupled design would indicate a lower structural complexity than that indicated by the phase-based design. However, the functional complexity, being representation-independent, is not affected by this new representation. The probability of successfully achieving the functional requirements is not changed by this choice of decomposition; rather, the estimate of the functional complexity is more easily obtained through observation of the uncoupled design.
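The representation-dependence of the structural measure can be made concrete by counting nonzero design-matrix entries under each representation. The matrices below are illustrative stand-ins with the shapes of the phase-based and uncoupled decompositions, not the actual Tables 4.6 and 4.12:

```python
# Structural complexity here is proxied by the number of nonzero entries
# in the design matrix; it changes with the chosen representation even
# though the underlying system (and its probability of success) does not.
# These matrices are illustrative stand-ins, not the thesis's tables.

def structural_complexity(A):
    return sum(1 for row in A for entry in row if entry)

phase_based = [
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 1],
]
uncoupled = [
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

print(structural_complexity(phase_based))  # 12
print(structural_complexity(uncoupled))    # 4
```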
4.2 Second Level Design of Apollo GN&C
The existence of well-defined level I requirements is essential for the meaningful evaluation of level II designs. While some level II tasks may be completed before the level
I design is decided upon, for the most part the determination of the design parameters
requires the definition of the functional requirements for each subsystem.
The successful definition of requirements for each level also results in the hiding
of complexity at that level for the next level in the hierarchy. Type II employees that
work on each subsystem only need to understand how their subsystem interacts with
each of the other subsystems.
4.2.1 The Design of the Inertial Measurement Unit
The inertial measurement unit must satisfy the first two functional requirements of the GN&C system, as shown in Table 4.11. These two requirements are:
FR1 To provide an accurate memory of spatial orientation;
FR2 To provide an accurate measurement of spacecraft acceleration.
The types of measurements that are required in every inertial guidance system,
regardless of design, involve distinct instruments:
1. angular rate or direction using gyroscopic devices;
2. linear acceleration using restrained test masses in accelerometers.
A physical solution satisfying the functional requirements of the IMU was determined to be a three-gimbal gyroscopically stabilized platform. This design will be called the Block I design (Figure 4-1). There were other, competing designs of the IMU; two examples are a four-gimbal gyroscopically stabilized platform and a gimballess IMU. For simplicity, I will only consider comparisons of the Block I design with the gimballess design, hereafter denoted Block II.
The Block I design can be decomposed into the following components: A stable
platform; three gyroscopes mounted on the platform; three accelerometers mounted
on the platform; and a gimbal system which supports the platform and provides three
degrees of freedom between the structure of the spacecraft and the inner gimbal axis,
hereafter called the stable member. These components are the design parameters for
the Block I design, as shown in Table 4.13.
Table 4.13: Design Parameters for the Gimballed Inertial Measurement Unit, Block I
DP1 Gyroscopes (3)
DP2 Accelerometers (3)
DP3 Stable Platform
DP4 Gimbal System
The three gyroscopes are all identical, single degree-of-freedom subsystems, and
are used to provide signals for the guidance computer regarding the rotation rate of the
Figure 4-1: The Gimballed Inertial Measurement Unit used on the Apollo project
spacecraft, through the coupling data units (CDUs). Recall that since each of these subsystems is identical for both the Command Module and the Lunar Excursion
Module, the guidance computer can represent either the Apollo Guidance Computer
present in the Command Module or the guidance computer for the Lunar Excursion
Module.
The design of the gyroscopes themselves requires expertise from the specialists at
this level; these specialists could be considered designers of a level III subsystem in
the hierarchy of subsystems. The estimate of the probability of a gyroscope carrying
out its function of providing angular rate information must be included in the calculation of the probability of achieving mission success. If a single gyro wheel stopped
running or developed excessive drift, the success of the mission would be in jeopardy.
The Type I staff are not required to understand the details of each component, although a general rule of thumb has been that each member of a system design team
should be able to decompose a system two levels down from the level that they are
designing [Cra02]. It is the job of the gyroscope designers, as Type II staff, to design
their subsystem to decrease the functional complexity of their subsystem, given that
subsystem's functional requirements only.
The three accelerometers are identical, single degree of freedom instruments as
well, and they deliver signals to the computer for the three components of acceleration
resolved along the stabilized platform axes. The accelerations are determined within
each accelerometer using single degree of freedom physical inertial pendula (PIP).
The Block I design of the IMU as described can be graphically represented using an OPM diagram, as in Figure 4-2. While not a part of the IMU subsystem, the guidance
computer is represented in this diagram as an agent for the two processes, Angular
Rate Providing and Linear Acceleration Measuring. This relationship between the
two subsystems is of importance when deciding upon the choice of the IMU design.
The design matrix for the Block I IMU design can now be constructed and used to
assess the complexity metrics for this design. The functional requirements of the IMU
Figure 4-2: OPM of Gimballed Design for the Inertial Measurement Unit
are as declared in the beginning of this section; they can be further delineated by each
degree of freedom, using the nomenclature of Suh. Thus, FR11 can be considered
to provide the angular rate for degree of freedom 1, FR21 will be to measure linear
acceleration for degree of freedom 1, and so on. The design parameters are similarly
defined. Using this notation, the design matrix can be seen to be diagonalized based
on degree of freedom, as in Table 4.14.
By comparison, the Block II design of the IMU, the gimballess system, does not
utilize gimbals or a stable platform; instead, the inertial sensors are mounted directly
Table 4.14: Design Matrix for Gimballed IMU

        DP11  DP12  DP13  DP21  DP22  DP23  DP3  DP4
  FR11   X                                   X    X
  FR12         X                             X    X
  FR13               X                       X    X
  FR21                     X                 X    X
  FR22                           X           X    X
  FR23                                 X     X    X
onto the spacecraft. The design parameters and design matrix for this competing
design are shown in Table 4.15 and Table 4.16. The gimballess IMU is discussed in
detail in [MMS66].
Table 4.15: Design Parameters for the Gimballess Inertial Measurement Unit

  DP1  Gyroscopes (3)
  DP2  Accelerometers (3)

Table 4.16: Design Matrix for the Gimballess IMU

        DP11  DP12  DP13  DP21  DP22  DP23
  FR11   X
  FR12         X
  FR13               X
  FR21                     X
  FR22                           X
  FR23                                 X
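The coupling implied by these design matrices can be tallied mechanically. A minimal sketch (the encoding is mine, not the thesis's: each matrix is a list of rows with 1 marking an X, 0 a blank):

```python
# Sketch: count off-diagonal couplings in a design matrix.
# `coupling_count` is an illustrative helper, not from the thesis.
def coupling_count(matrix):
    """Number of nonzero entries off the main diagonal."""
    return sum(1
               for i, row in enumerate(matrix)
               for j, entry in enumerate(row)
               if entry and i != j)

# Gimballed (Block I): each FR depends on its own sensor plus the
# shared gimbal-system and stable-platform columns (DP3, DP4).
gimballed = [[1 if (j == i or j >= 6) else 0 for j in range(8)]
             for i in range(6)]

# Gimballess (Block II): a purely diagonal, uncoupled matrix.
gimballess = [[1 if j == i else 0 for j in range(6)] for i in range(6)]

print(coupling_count(gimballed))   # 12 off-diagonal dependencies
print(coupling_count(gimballess))  # 0
```

The shared gimbal and platform columns are what couple the Block I design; the uncoupled Block II matrix is what drives its lower structural complexity, computed next.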
Besides the obvious reduction of structural complexity for this design, the gimballess
IMU design has the advantages of reduced system weight, reduced volume, lower
power, lower cost, greater packaging flexibility, higher reliability, and easier
maintainability. The primary disadvantage of this subsystem is the substantial angular
velocity to which the instruments are subjected: because they are mounted directly on
the spacecraft, with no stabilizing platform, their performance errors tend to be
exaggerated.
These errors must be
reduced by requiring the guidance computer to determine the angular orientation of
the spacecraft by integrating the measured angular velocities at a higher precision
than is required for the Block I IMU design. The accuracy attainable by a gimballess
inertial system is limited by the maximum angular velocities to which the vehicle is
subjected.
4.2.2 Complexity and the Choice of the IMU Design
Both the gimballed and gimballess IMU designs were considered for use in the Apollo
project. Did the complexity of the two systems have a bearing on the choice of design?
A straightforward calculation of the structural complexity of the two systems yields a
smaller value for the gimballess design. Structural Complexity for the Block I design
is 33.27 bits; the structural complexity for the Block II design is 16.64 bits. The
details of this calculation are provided in Appendix B.
However, the use of the Block II design would require greater precision from the
Apollo Guidance Computer than any other subsystem would require. Utilizing the
Block II design of the IMU would make the successful achievement of the AGC's functional requirements less probable. Thus the functional complexity of the gimballess
IMU is much higher than the functional complexity of the gimballed IMU.
Is the increased functional complexity of the Block II design more important than
the decreased structural complexity, and the weight, volume, power, cost and other
advantages of the design? Who is to make the decision as to which factors are most
important? The choice of IMU design is based on the ability to satisfy the functional
requirements for the level II subsystem. If the IMU subsystem were to have been
decided upon as an isolated system and its interactions with other subsystems were
not taken into account, the gimballess design would have appeared to be the preferred
design. However, the IMU has the functional requirement that it must supply data
to the guidance computer via the CDU's, which are also level II subsystems. While
Type I employees may be unable to create the best design for the IMU on their own,
Type II employees need the input from the level I tasks to determine the overall best design.
In this case, the staff involved in the level I design made the decision to use the
Block I IMU design, since the Block II design required an increase in the required
capability of the guidance computer. Not only are these employees responsible for
making such a decision based on their overall understanding of the level I system,
but they are also the ones more capable of recognizing which resource constraints
are most relevant. While the level I design is ideally defined before the level II work
begins, a subset of employees familiar with the level I tasks are required to remain on
the project while the level II work takes place, to provide the necessary, subjective
input in deciding between level II designs.
Chapter 5

The System Dynamics of Project Staffing

5.1 System Dynamics Model for Project Staffing
The basic building blocks of system dynamics models are stocks, flows, and causal
relationships between variables. The dynamics of the staff for one level of employee
is shown in Figure 5-1, the structure of which is adapted from a model obtained from
Jim Lyneis [Lyn02]. In this model, the staff on the project is represented by a stock;
employees are added to this stock from the pool of available employees as they are
needed, and they are removed from the stock when they are no longer working on the
project.
The generic model for a single level of project staff is the same for Type I and
Type II employees; the Type I staffing model is described as an example. The number
of Type I staff on the project at any time t is determined to be equal to the integral
76
of the number of Type I employees added to the staff minus the number of Type I
employees leaving the staff, integrated over time:
Staff
=
J(Staff Added
-
Staff Leaving)dt-
The variables Staff Aed and Staff Leaving are flows into and out of the stock called
Staff.
The rate at which employees are added to the staff is affected by a delay; this
delay represents the time required to add people to the staff from other projects, as
well as the time required for new staff members to become familiar enough with the
project to be productive. Similarly, there is a transfer delay, which may exist due to
the time it may take for an employee to get involved with another project, assuming
an employees remains on their current project until they can be gainfully employed
on another.
The Potential Design Rate is simply the Type I Staff on Project multiplied by Level
I Productivity. The Actual Design Rate, which is the rate at which candidate designs
are chosen, is equal to the Potential Design Rate, unless it exceeds the Maximum
Design Rate. This rate establishes an upper limit on the speed at which design work
can be carried out, and is equal to the number of Candidate Designs divided by the
Minimum Time to Define Design.
As the design process progresses, the Cost to Complete Designs decreases; it is equal
to the number of initial potential designs divided by the Level I Productivity,
multiplied by one minus the number of chosen designs divided by the number of
initial potential designs:

Cost = (Designs_Init / Productivity) × (1 − Designs_Completed / Designs_Init).
The number of Type I employees that are required to complete the level I design work
is then determined by dividing the cost to complete by the time remaining on the
project.
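The stock-and-flow relationships just described can be collected into a simple Euler-integration sketch. This is an illustrative reconstruction, not the thesis's calibrated simulation model; the parameter values and the function `simulate` are my assumptions:

```python
# Illustrative Euler-integration sketch of the single-level staffing
# model above. All parameter values are assumptions.
def simulate(designs_init=400.0, productivity=1.0, min_define_time=2.0,
             deadline=35.0, hire_delay=3.0, transfer_delay=3.0,
             staff0=50.0, dt=0.25):
    staff, chosen, t = staff0, 0.0, 0.0
    while t < deadline and chosen < designs_init - 1e-6:
        candidates = designs_init - chosen
        potential_rate = staff * productivity        # Potential Design Rate
        max_rate = candidates / min_define_time      # Maximum Design Rate
        actual_rate = min(potential_rate, max_rate)  # Actual Design Rate
        # Cost to Complete (person-months) and the staff required to
        # finish by the scheduled completion date
        cost = (designs_init / productivity) * (1 - chosen / designs_init)
        required = cost / max(deadline - t, dt)
        # First-order hiring/transfer delays move the stock toward target
        delay = hire_delay if required > staff else transfer_delay
        staff += (required - staff) / delay * dt
        chosen += actual_rate * dt
        t += dt
    return t, staff, chosen

t_end, staff_end, chosen = simulate()
```

Running it shows the qualitative behavior of the model: the staff stock relaxes toward the level implied by the remaining cost to complete, while the design rate is capped by the minimum time to define a design.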
5.1.1 Modeling Complexity in System Dynamics
The single level system just described is a set of two coupled differential equations,
one describing the changing staff level on the project, and one describing the rate of
work done at this level, where the work done is in terms of designing the system. The
effort required by the employees at this level is determined by the system complexity.
At each level of decomposition, the complexity of the system affects the productivity
of the employees that are tasked to develop the system. Consider the top level where
the system architecture is created, as shown in Figure 5-2.
As described in Chapter 2, the complexity of a system can be thought of in either
functional or structural terms. Consider first the functional complexity metric.
Since the functional complexity of satisfying functional requirement FRi is
log2(1/Pi), the complexity of satisfying n independent functional requirements is
log2(1/∏i Pi) = ∑i log2(1/Pi). In this model, I will assume the case in which
all of the functional requirements are independent and the probabilities of achieving
each functional requirement are equal. In this case, the fixed functional complexity is
Figure 5-1: Staffing Dynamics for One Level of Employee
Figure 5-2: System Dynamics of System Complexity
equal to −F × log2(p), where F is the number of functional requirements, and p is the
probability of satisfying any one functional requirement. The variable complexity is
equal to log2(number of candidate designs), which decreases as the design work gets
done.
The total number of possible designs increases combinatorially with the number
of functional requirements and the number of design parameters; however, creative
designers only need to consider a small subset of these combinations of FRs and
DPs. Here, it is assumed that the typical number of designs that are expected to be
considered is equal to FR x DP, the number of elements in the design matrix. This
number is certainly dependent on the number of options available for each design
parameter; if one wished to consider this, the number of design parameters could
simply be increased to simulate the increased complexity, or the equation for the
number of potential designs could be modified, say, to FR × DP^i for i options for
each design parameter. Regardless, the current model allows for a reasonable number
of designs to be considered without complications that add little to the insight that
may be gained by such a model.
The Absolute Complexity of the system can then be computed as the vector sum
of the fixed and variable complexities; since these two complexities are orthogonal,

C_A = √(C_F² + C_V²).

The Relative Complexity is the complexity of the system at each timestep relative to
the complexity of the system at time 0:

C_R = C_A / √(log2(Designs_Init)² + C_F²).
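These definitions can be collected into a few functions. A sketch (the function names are mine, and the example values are illustrative):

```python
import math

# Sketch of the complexity bookkeeping above: F functional
# requirements, each satisfied with probability p.
def fixed_complexity(F, p):
    return -F * math.log2(p)                  # C_F, in bits

def variable_complexity(candidate_designs):
    return math.log2(candidate_designs)       # C_V, shrinks as work is done

def absolute_complexity(F, p, candidate_designs):
    # Vector sum of the orthogonal fixed and variable parts
    return math.hypot(fixed_complexity(F, p),
                      variable_complexity(candidate_designs))

def relative_complexity(F, p, candidate_designs, designs_init):
    # Normalized so that the complexity at time 0 equals 1
    c0 = math.hypot(fixed_complexity(F, p), math.log2(designs_init))
    return absolute_complexity(F, p, candidate_designs) / c0

# With 20 FRs, p = 0.95, and 400 initial candidate designs (FR x DP):
print(relative_complexity(20, 0.95, 400, 400))       # 1.0 at project start
print(relative_complexity(20, 0.95, 1, 400) < 1.0)   # True: work reduces it
```

As candidate designs are eliminated, the variable term shrinks toward zero and the relative complexity falls toward the floor set by the fixed term.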
The relative complexity is used in conjunction with a lookup table to determine
the effect the complexity has on productivity. While a linear relationship between
complexity and productivity could be modeled, a more realistic function would capture
a sharp drop in productivity at high levels of complexity and a much weaker effect
when the complexity is very low. Thus a function with these characteristics was
chosen for the lookup
table, Figure 5-3. While the exact form of this function could be argued in-depth, it
turns out that the relationship for low complexity has very little effect on the staffing
dynamics, whereas the slope of the function is very significant when the complexity is
81
very high. Therefore, the observation that the productivity remains low for a range
of high complexities is of paramount importance in utilizing the complexity effect on
productivity.
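Such a lookup table can be implemented as a clamped piecewise-linear interpolation. The breakpoints below are my assumptions, chosen only to have the shape just described; they are not the values used in the thesis's model:

```python
# Illustrative lookup table mapping relative complexity to a
# productivity multiplier: a mild effect at low complexity, a steep
# penalty at high complexity. Breakpoints are assumptions.
TABLE = [(0.0, 1.00), (0.2, 0.95), (0.4, 0.85),
         (0.6, 0.60), (0.8, 0.25), (1.0, 0.10)]

def productivity_effect(rel_complexity, table=TABLE):
    """Piecewise-linear interpolation, clamped at the table's ends."""
    x = min(max(rel_complexity, table[0][0]), table[-1][0])
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return table[-1][1]

print(productivity_effect(0.1))  # ~0.975: little penalty at low complexity
print(productivity_effect(0.9))  # ~0.175: strong penalty at high complexity
```

Because the staffing dynamics are most sensitive to the steep high-complexity region, the exact breakpoints at the low end matter little, consistent with the observation above.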
Figure 5-3: The effect of complexity on the productivity of the staff (productivity
multiplier versus relative functional complexity)
Alternatively, the system dynamics model could include a model for structural
complexity. Recall that our definition of structural complexity is
H = (N1 + N2) log2(n1 + n2),

where N1 and N2 are the total numbers of operators and operands, respectively, and
n1 and n2 are the numbers of unique operators and operands. In order to use this
definition, the number of unique operators and operands must be given as well as the
total number of operators and operands. Notice that the form of this complexity is
equivalent to the form of the fixed functional complexity. By replacing the total
number of functional requirements by the total number of operators
and operands and replacing 1/p by the number of unique operators and operands,
one can switch over from the functional view of complexity to the structural view;
the result is to interchange the role of the FRs with the role of the DPs. Since a
structural view of complexity can thus easily substitute for the functional view by a
change of variable names, I will only consider a model for the functional complexity
here.
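The change of variables can be made concrete in a short sketch (variable and function names are mine, not the thesis's):

```python
import math

# Sketch of the structural-complexity form above, a Halstead-style
# measure: N1, N2 are the total numbers of operators and operands,
# n1, n2 the unique counts.
def structural_complexity(N1, N2, n1, n2):
    return (N1 + N2) * math.log2(n1 + n2)

# The change of variables described in the text: replacing the total
# count by F requirements and the unique count by 1/p recovers the
# fixed functional complexity, F * log2(1/p) = -F * log2(p).
def fixed_functional_complexity(F, p):
    return F * math.log2(1.0 / p)

print(structural_complexity(5, 5, 2, 2))     # 20.0 bits
print(fixed_functional_complexity(20, 0.5))  # 20.0 bits
```

The two functions share one form, which is why the model can swap between the structural and functional views by renaming variables.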
5.1.2 Basic Model for Type I and Type II Employees
The above models for the single-level staff dynamics and the complexity dynamics
can now be used as the building blocks for a model of project staffing that utilizes
more than one type of employee. Here, we will consider extending the above models
to include both Type I and Type II employees. Type II employees are the subsystem
specialists who are required to develop the components, or the specific design
parameters, that the Type I employees have defined in their design work. When
the design is still ambiguous, these employees will not be very productive; any work
that they do may require extensive re-work if the design changes after they begin
work. Therefore the basic model for two types of employees includes an effect on the
productivity of the Type II employees due to the ambiguity that exists at level I.
The ambiguity is thus used as the parameter that connects the level II work to
the work that has been done at level I. Ambiguity is defined here simply as the
fraction of the level I work that remains to be completed. Intuitively the ambiguity of a system is
related to the system complexity; here, the ambiguity is indirectly determined by the
complexity existing in level I, since the complexity determines how quickly the design
work proceeds. The full model for the staffing dynamics with two levels of employees is
shown in Figure 5-4, with the ambiguity highlighted to show the relationship between
the two levels.
The ambiguity parameter is utilized not only to determine the productivity of
the Type II employees, but also to determine the policy of when to begin work on
the level II tasks. The Level II Begin Switch sets the policy for bringing Type II
staff onto the project. The decisions that must be made by project management are
when to begin adding Type II staff to the project, and how many Type II employees
are required to be added. In this model, the ambiguity drives these decisions; the
Level II Begin Switch determines when to add Type II staff depending on the level
of ambiguity existing on the project at that time.
It may appear ideal to bring Type II employees onto the project only after the
ambiguity in their tasks has decreased to a minimum. However, if these employees
do not begin work until almost all of the ambiguity is out of the system, there is a
chance that they will not finish their tasks by the project deadline. It thus seems
reasonable to assume that a compromise between these two preferences must be met;
increased efficiency due to performing tasks when ambiguity is low must be balanced
with meeting scheduling requirements. The Level II Begin Switch allows for different
amounts of effort on level II tasks to be added at different levels of ambiguity. In
this way any feasible policy regarding ramping up of level II work can be investigated
with this model.
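The begin-switch policy can be sketched as a simple threshold function mapping the current ambiguity to the fraction of Type II staff to bring on. The thresholds below mirror the baseline policy used later (25% of the staff at 0.80 ambiguity, 100% at the 0.20 minimum), but the function itself is an illustrative assumption:

```python
# Sketch of a Level II Begin Switch policy. Threshold values are
# the baseline ones; the function form is an assumption.
def level2_begin_switch(ambiguity, preload_at=0.80,
                        preload_fraction=0.25, full_at=0.20):
    if ambiguity <= full_at:
        return 1.0               # ambiguity at its minimum: full staff
    if ambiguity <= preload_at:
        return preload_fraction  # pre-load a portion of the staff
    return 0.0                   # still too ambiguous: hold off

print(level2_begin_switch(0.9))  # 0.0
print(level2_begin_switch(0.5))  # 0.25
print(level2_begin_switch(0.1))  # 1.0
```

Varying `preload_at` and `preload_fraction` expresses the range of ramp-up policies investigated with the model.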
Figure 5-4: Staffing Dynamics for Two Levels of Employees
5.2 The System Dynamics Model for Apollo GN&C
The model described in Section 5.1.2 can be used to investigate staffing policy for
projects involving complexity at each level of design. Before this is done, however,
the existing model will be calibrated using the Apollo GN&C case as described in
Chapter 4. The purpose of this exercise is to use the model to attempt to re-create
the staffing levels of the project, as described in [Han71], given feasible complexity
metrics that are consistent with the results of Chapter 4. With evidence of the validity
of the model for this case, the effect of varying parameters to test different staffing
policies can then be studied.
Parameters that can be adjusted in this model fall into three categories: Parameters that describe the technical system, parameters that describe the availability of
resources, and parameters that determine staffing policy. The parameters that describe the technical system include the number of functional requirements, the number
of design parameters, and the probability of achieving a functional requirement at each
level. The parameters that describe resource availability will here be limited to those
resources related to staffing. In this case, these are the initial staff level, maximum
staff level, hiring delay and transfer delay for each design level. The parameters that
determine policy are the level II begin switch and the scheduled completion date. For
the two-level dynamics model, these parameters are summarized in Table 5.1.
The breakdown of parameters into these categories is admittedly somewhat arbitrary; it would make as much sense in general to consider the scheduled completion
date to be a limitation on the availability of a resource, and the maximum staff level
Table 5.1: Breakdown of Parameters for a Two-Level Staffing Dynamics Model

  Technical Parameters            Resource Parameters      Policy Parameters
  Number of Level I FRs           Initial Type I Staff     Scheduled Completion Date
  Number of Level II FRs          Initial Type II Staff    Level II Begin Switch
  Number of Level I DPs           Maximum Type I Staff
  Number of Level II DPs          Maximum Type II Staff
  Probability for Level I FR      Type I Hiring Delay
  Probability for Level II FR     Type II Hiring Delay
                                  Type I Transfer Delay
                                  Type II Transfer Delay
to be decided by policy instead of a resource availability constraint. However, the
intent of the analysis of this model is to gain insight into policy as it relates to the
allocation of the staff, and not necessarily the allocation of other resources, including
schedule. This decision presupposes that staff is the resource that is most critical to
manage, which is assumed to be the case for systems with several levels of technical
complexity. Even so, the breakdown of parameters in this way does not impede the
interpretation of these parameters in any different manner.
Using the number of FRs and DPs that were indicated in chapter 3, the above
system dynamics model can be used to suggest staffing levels of Type I and Type
II employees for the Apollo GN&C project. An ideal test of the model would use
values for all of the parameters that are indicated from the historical record, and
independently obtain a profile of the staffing level over time that agreed with the
levels that actually occurred. Realistically, however, these parameters cannot all be
determined directly. Instead, the model can be calibrated using the
historical staffing profile to suggest what the reasonable parameters may be.
The parameters that were chosen for this experiment are given in Table 5.2. The
number of level I functional requirements was based on the 21 functional requirements
that were indicated in Chapter 4; the number of level II functional requirements was
chosen to approximate the total number of designs that exist for all of the level II
subsystems, including the IMU. Transfer delays for level I employees were chosen to
be higher than normally expected; since many of the staff members were working
on graduate degrees at MIT during their tenure on the project [Han71], it may be
expected that they will remain on the project while their theses are being written.
The 35-month schedule covers the project from February, 1965 to December, 1967, the
period in which the GN&C project staff was formulating hardware design decisions
based on functional requirements that were determined earlier in the project and
software design within the Guidance Computer subsystem had ramped up.
Table 5.2: Parameters for the baseline staffing model for Apollo GN&C

  Initial Type I Staff                              650
  Maximum Type I Staff                              700
  Type I Hiring Delay                               3 months
  Type I Transfer Delay                             3 months
  Number of Level I Functional Requirements         20
  Number of Level I Design Parameters               20
  Probability of Achieving a Level I Requirement    0.95
  Minimum Level I Ambiguity                         0.20
  Initial Type II Staff                             0
  Maximum Type II Staff                             300
  Type II Hiring Delay                              3 months
  Type II Transfer Delay                            1 month
  Number of Level II Functional Requirements        10
  Number of Level II Design Parameters              10
  Probability of Achieving a Level II Requirement   0.99
  Scheduled Completion Date                         35 months
  Begin Switch for adding 25% of level II staff     0.80 ambiguity
  Begin Switch for adding 100% of level II staff    0.20 ambiguity
Using these parameters, the result for the staffing levels is shown in Figure 5-5.
This can be compared to the actual historical staff, as given in [Han71] and shown
in Figure 5-6. The policy for adding Type II staff to the project was chosen so as to
add 25% of the potential staff when the ambiguity was reduced to 80%, and add the
entire potential staff when the ambiguity reached 20%, which was assumed in this
model to be the minimum obtainable ambiguity.
The results of this case suggest that reasonable choices for the system dynamics
parameters do indeed result in staffing profiles that are indicative of actual staff levels,
and that this model may be used to test scenarios concerning different policies and
resource constraints. The insight that may be gained by experimenting with these
scenarios is discussed in Section 5.3.
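To make the calibration concrete, the two-level structure can be reconstructed compactly with the Table 5.2 parameters. The equations here are simplified (first-order delays, a hard ambiguity floor) and the productivity of 0.10 designs per person-month is my assumption, so this sketches the model's structure rather than reproducing the thesis's results:

```python
# Compact, illustrative reconstruction of the two-level model using
# the Table 5.2 parameters; equations and productivity are simplified
# assumptions, not the calibrated model.
def simulate_two_level(months=40, dt=0.25):
    designs1, designs2 = 20 * 20, 10 * 10    # FR x DP at each level
    chosen1 = done2 = 0.0
    staff1, staff2 = 650.0, 0.0              # initial staff (Table 5.2)
    history = []
    for step in range(int(months / dt)):
        ambiguity = max(1.0 - chosen1 / designs1, 0.20)
        # Level I design work, capped by a 1-month minimum define time
        rate1 = min(staff1 * 0.10, designs1 - chosen1)
        chosen1 = min(chosen1 + rate1 * dt, designs1)
        # Begin-switch policy: 25% of 300 at 0.80 ambiguity, all at 0.20
        if ambiguity <= 0.20:
            target2 = 300.0
        elif ambiguity <= 0.80:
            target2 = 75.0
        else:
            target2 = 0.0
        staff2 += (target2 - staff2) / 3.0 * dt   # 3-month hiring delay
        # Level II productivity degraded by remaining level I ambiguity
        done2 = min(done2 + staff2 * 0.10 * (1.0 - ambiguity) * dt,
                    designs2)
        # Level I staff ramps down as its work completes
        target1 = 650.0 * (1.0 - chosen1 / designs1)
        staff1 += (target1 - staff1) / 3.0 * dt
        history.append((step * dt, staff1, staff2))
    return done2, history

done2, history = simulate_two_level()
```

Plotting `history` gives the qualitative shape of Figure 5-5: Type I staff decaying as level I work completes, and Type II staff ramping up in two steps as the ambiguity crosses the 0.80 and 0.20 thresholds.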
5.3 Formulating Staffing Policy using System Dynamics
In any complex engineering development project, managerial decisions must take into
account both the complexity of the technical system that is being developed and the
constraints on the resources that are to be used in developing the project. In this
section, I will consider the case in which the staff available for the project may
be constrained such that the most efficient policy, in terms of staffing costs, cannot
be attained.
Consider first the possibility that the amount of Type I staff available on the
project is constrained. Using the parameters given in Section 5.2 as a baseline,
assume that the number of Type I employees available was limited to 450. Figure 5-7
Figure 5-5: Type I, Type II, and Total staff for the Apollo Simulation
Figure 5-6: Historical Staff for the Apollo GN&C Project
90
shows the profile of Type I staff on the project compared to the baseline case. With
fewer staff working on the level I design, ambiguity at level II decreases at a slower
rate. While this constraint on resources indicates that Type I staff will remain on
the project for a longer duration, this does not necessarily inhibit the project from
completing by its scheduled completion date.
This constraint on staffing resources in the model may also be considered as a matter
of policy. While the overall costs for the project may increase, the constancy of the
staff profile makes staffing easier for an organization that is involved in multiple
projects. For many organizations, keeping a stable and productive workforce may be
more advantageous than the short-term reward of optimizing the staffing level for a
single project.
Additionally, the Type II staff may be limited as shown in Figure 5-8. While it
can be seen in Figure 5-9 that the completion date is extended by several months
compared to the baseline case, the staffing profile is obviously much smoother than
the policy that does not set an upper limit on staffing level.
5.4 Effect of Staffing Policy on Cost and Schedule
The cost and schedule implications of a staffing policy can be observed with this model
by simulating when to add subsystem specialists (Type II staff) to a project. Initially,
consider a modified baseline scenario in which no Type II staff are added until the
ambiguity decreases to a certain desired level. Two cases are considered: The first
case considers high complexity at level I, as with the Apollo GN&C example, and the
Figure 5-7: Comparison between the baseline case of 700 Type I employees and a
limited pool of 450 Type I employees
Figure 5-8: Comparison between the baseline case of unlimited Type II employees
and a limited pool of 150 Type II employees
Figure 5-9: Cumulative staff with limited pool of Type I and Type II employees
compared to the baseline case
second case considers high complexity at level II, the subsystem level, which may be
expected for many engineering projects. The values used for these cases are shown in
Table 5.3, in which all of the resource parameters are the same as the baseline case.
Table 5.3: Parameters for the scenario where Type II staff are not added until
ambiguity decreases to a set level

                                                    Case 1   Case 2
  Number of Level I Functional Requirements           20       10
  Number of Level I Design Parameters                 20       10
  Probability of Achieving a Level I Requirement      0.95     0.95
  Number of Level II Functional Requirements          10       20
  Number of Level II Design Parameters                10       20
  Probability of Achieving a Level II Requirement     0.99     0.99
The results for manpower cost and project completion time for these two cases
are shown in Figure 5-10 and Figure 5-11. As expected, if level II tasks begin before
ambiguity due to unfinished level I design work has been minimized, the total cost
to complete the project is increased, while the project completion time typically
is decreased.
This effect on completion time is more pronounced when the level
II complexity is higher than the level I complexity. This result suggests that, for
projects involving high level I complexity, holding off adding Type II staff may be
advantageous to reduce the amount of effort expended, while for projects involving high
level II complexity, adding Type II staff while more ambiguity exists in the design
may be preferred due to the significant effect on completion date.
As previously mentioned, decreasing cost and schedule risks requires a staffing
profile that may be unfavorable from a management perspective: Large amounts of
staff are needed at certain periods during the project, and smaller amounts at other
times. This effect can be mitigated by choosing a pre-loading approach to bringing
in Type II staff. By pre-loading, I mean adding a portion of Type II staff before the
ambiguity is decreased to its lowest level, and then adding the rest of the Type II staff
after the ambiguity reaches its lowest level. Three policies are shown in Figure 5-12,
one with no pre-loading, one in which 12.5% of the staff are added when the ambiguity
reaches 80%, and one in which 25% of the staff are added when the ambiguity reaches
80%.
In Figure 5-13, the staffing profiles for Type II employees are shown for these three
cases of pre-loading strategy. As can be seen in this figure, cost in terms of effort
expended is not significantly higher for the pre-loading cases, but the employment
profile is noticeably smoother.
The effect of pre-loading on cost and schedule can be seen in Figure 5-14 for Case
1 and Figure 5-15 for Case 2. The ambiguity shown on the x-axis represents the
ambiguity at which a 25% effort on level II tasks is to begin. The full 100%
Figure 5-10: Effort Expended (solid) and Project Duration (dashed) versus Ambiguity
when level II staff are added for a project with high level I complexity (Case I)
Figure 5-11: Effort Expended (solid) and Project Duration (dashed) versus Ambiguity
when level II staff are added for a project with high level II complexity (Case II)
Figure 5-12: Strategy for beginning level II tasks based on ambiguity
effort begins when the ambiguity reaches its minimum of 20%. The 100% ambiguity
in Figure 5-14 and Figure 5-15 represents pre-loading the project immediately, and
the 20% ambiguity represents no pre-loading at all. These policies are modeled using
the Level II Begin Switch.
For projects involving architectures with high level I complexity, Figure 5-14
indicates that effort expended (left axis) essentially does not increase unless
pre-loading occurs at an ambiguity above 60%. Furthermore, project completion time (right axis) is
minimized at a pre-loading ambiguity of 60%. This would indicate that, at least for
this example, pre-loading of 25% of the level II staff should begin when the ambiguity
decreases to 60%.
For projects involving architectures with high level II complexity, Figure 5-15
indicates that effort expended (left axis) does not significantly increase until the
ambiguity at which pre-loading begins reaches 90%. Also, completion time (right axis)
falls monotonically and dramatically as the pre-loading ambiguity increases.

Figure 5-13: Type II staff for different beginning strategies

In
this case, pre-loading should occur much sooner in the project than in the case of
high level I complexity, possibly almost as soon as project initiation.
The above results seem to make intuitive sense. High level I complexity implies
that many design iterations may be required at the system design stage before the best
design is discovered. Beginning level II tasks early is likely to result in substantial
re-work and wasted effort. Alternatively, if most of the complexity resides in the
subsystem tasks, beginning to design those subsystems early should entail less cost
and schedule risk. Just how much pre-loading and how soon to begin subsystem tasks
depends on the technical aspects of the system, as measured by a complexity metric,
and the staffing resources available for the project.
Figure 5-14: Effort Expended (solid) and Project Duration (dashed) versus Ambiguity
at which pre-loading begins when level II staff are added for a project with high level
I complexity (Case I)
Figure 5-15: Effort Expended (solid) and Project Duration (dashed) versus Ambiguity
at which pre-loading begins when level II staff are added for a project with high level
II complexity (Case II)
Chapter 6
Concluding Remarks
6.1 Conclusions Regarding the Project Staffing Framework
Interviews with project management in various industries have led to the conclusion
that subsystem specialists are often working on projects before a high-level design has
been decided upon. The ambiguity that exists in the design leads to a large amount
of re-work at the next level and a waste of manpower resources. However, schedule
constraints and the constraints on certain types of manpower resources may require
that detailed subsystem design work begin while a high level of ambiguity still exists.
The tradeoffs involved in determining the optimal strategy for beginning each level
of work differ from industry to industry, and from project to project.
With this in mind, a framework can be created in which resources are committed
to a project based on policies that take into account the technical complexity of the
[Diagram: for each of levels I, II, and III, staff availability feeds the staff allocation at that level, which drives the definition of that level's functional requirements and design parameters; completed design work at each level decreases the ambiguity that gates the start of work at the level below.]
Figure 6-1: Overall framework for staffing strategy for three levels of decomposition
system. This framework is shown in Figure 6-1 for the case of three levels of decomposition. The ambiguity existing in the design at each level drives the decision to
begin work at the next level. This cascading of functional requirements is similar to
the zig-zagging process described by Suh [Suh90]; in this case, the iterations within
each level of the cascading process include both the design of the functional requirements and design parameters of the technical system and the design of the managerial
system in terms of the resource allocations.
At each level in the hierarchy, the functional requirements for the subsystem must
first be defined. These functional requirements are typically derived from the level
above; in the case of the highest level of decomposition, the functional requirements
are determined by the needs of the customer. For each successive level, the customer
can be considered to be the design team at the preceding level.
After the functional requirements have been determined, the design parameters
for the system can be defined. The design parameters in system development are
analogous to the subsystems. The choice of design parameters may be indicative of
the resources that are available to the project; for example, organizations that have
expertise in a certain field would likely limit the potential design parameters to be
contained within that field.
It is then necessary to estimate the complexity of the system at the highest level.
This requires an estimation of the probability of achieving the functional requirements. While any a priori estimate of the probability distribution function for a
system that has yet to be built is bound to be inexact, employees with sufficient expertise can often draw on their experience to estimate this probability with enough
accuracy to be useful. Because complexity at the initial, conceptual phase of system
development is driven by the lack of knowledge rather than the information content,
this estimate can be improved later in the process as more information is obtained
concerning the design of the system.
The staffing resources that should be made available for the project must be
determined at each phase of the project. It is often the case that this activity is
carried out independently of the technical activities required to assess the complexity
of the system and determine the system design. However, as should now be clear,
staff allocation decisions should be made concurrently with the technical decisions.
The primary decision that this framework addresses is the choice of when to
begin work at the next lower level of subsystem decomposition. From a technical
standpoint, the optimal choice is to hold off the subsystem tasks until the ambiguity
in the subsystem functional requirements has been minimized. From a managerial
perspective, the optimal staffing policy would be to staff the project with a constant
number of employees that remain on the project throughout the lifecycle. These
two competing factors must be balanced to provide a robust solution to the project
staffing problem.
The difficulty in achieving this balance is compounded by the
stochastic nature of the change in apparent complexity as the design evolves and in
the availability of staff with the appropriate skills given the requirements of other
projects, employee turnover, and the like. The solution chosen should be the one most likely to satisfy the schedule, resource, and technical constraints of the project.
6.2 Future Work
The system dynamics model and the framework outlined here have been developed
to attempt to address the question of staffing a project given the complexity of the
system architecture. However, there is much more work that can be done to further
develop the concepts introduced in this thesis.
Case studies of this framework should be conducted in depth. The difficulty in
conducting such studies is the lack of data concerning Type I and Type II staff; this
difficulty is due to the designation of employees by their specialty, but not necessarily
by their skills. System engineers do not necessarily possess Type I skills, but many
highly skilled specialists have Type I capabilities. While education and experience
play a role in determining an employee's skillset, the employee's personality may play
an even bigger role in the Type I/Type II designation. Most employers do not consider
such "soft" factors in their decisions in the assignment of tasks.
Additionally, the model and framework proposed here tacitly assume that the
project is managed using a stage-gate, or waterfall, process. While this is most likely
the case for projects in which technical risk outweighs other factors, a spiral model
of project management may be more relevant for commercial software products and
other market-driven industries. It would be of interest to extend the current model
to include the management of such projects, in which there are significant iterations
between tasks performed at each level.
More theoretical work in this area could also be done. Analysis of the framework
could benefit from creating and simulating a control-theoretic model with more mathematical rigor than the system dynamics model developed here. Also, complexity
theory remains a substantial research effort in many fields; as such theories become
more highly developed and more applicable to large-scale engineering projects, the
treatment of complexity as given in this thesis could be better refined.
Further analysis regarding the choice of complexity measure may also lead to a
more sophisticated model that includes both functional and structural measures. It
may well be that functional complexity is the driving factor during the early phases
of design work, when the functional requirements of the system are being explored
at a more conceptual level. Structural complexity, then, would be important during
the detailed design phases, when Type II staff members are involved in building the
components, and are more concerned with structures. An investigation as to how
well these measures correlate with different phases of a project could be carried out.
Finally, there is no better test of a framework than to put it into practice. In many
organizations, projects are typically staffed in an ad-hoc manner, utilizing whatever
resources are available; usually little emphasis is placed on a quantitative assessment
of staff allocation. It was a goal of this thesis to argue that such assessments can
be done in a meaningful way. However, only if organizations are open to
experimenting with a more scientific approach to project management can the benefit
of the work developed in this thesis be realized.
Appendix A
Documentation for System
Dynamics Model with Two Types
of Employees
(01) Absolute Complexity=
sqrt(Real Complexity * Real Complexity + Imaginary Complexity * Imaginary Complexity)
Units: bits
Absolute complexity is the vector sum of the two orthogonal
complexities, real complexity and imaginary complexity.
(02) Absolute Level II Complexity=
sqrt(Real Level II Complexity * Real Level II Complexity + Imaginary Level II Complexity * Imaginary Level II Complexity)
Units: bits
Absolute complexity is the vector sum of the two orthogonal
complexities, real complexity and imaginary complexity.
(03) Ambiguity=
Max(Candidate Designs/Initial Potential Designs, Minimum Ambiguity)
Units: Dimensionless
Ambiguity represents the lack of knowledge of the requirements
of the level II design tasks due to the unfinished level I
design work.
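Translated out of Vensim, equation (03) together with the Minimum Ambiguity constant (39) reduces to a clamped ratio. A minimal Python sketch (the function and argument names are illustrative translations, not part of the model file):

```python
def ambiguity(candidate_designs, initial_potential_designs, minimum_ambiguity=0.2):
    """Ambiguity (03): the fraction of level I design work still open,
    floored at Minimum Ambiguity (39) because the level I design may
    change even after all assumed design work has been done."""
    return max(candidate_designs / initial_potential_designs, minimum_ambiguity)
```

With the model's 400 initial potential designs, 200 remaining candidate designs give an ambiguity of 0.5, while 10 remaining candidates sit at the 0.2 floor.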
(04) Average number of Design Parameters per Design=
20
Units: Dimensionless
This number is chosen to reflect the possible choices of design
parameters that are available to the level I design staff
(05) Average number of Functional Requirements per Design=
20
Units: Dimensionless
(06) Average number of Level II Design Parameters=
10
Units: Dimensionless
This number is chosen to reflect the possible choices of design
parameters that are available to the type II staff
(07) Average number of Level II Functional Requirements=
10
Units: Dimensionless
(08) Average Probability of Satisfying a Functional Requirement=
0.95
Units: Dimensionless
The estimate of the average probability of achieving the
functional requirements at level I
(09) Average Probability of Satisfying a Level II Functional Requirement=
0.99
Units: Dimensionless
The estimate of the average probability of achieving the
functional requirements at level II
(10) Candidate Designs= INTEG(-Design Rate, Initial Potential Designs)
Units: Designs
Candidate Designs represents the amount of design work left to
do.
(11) Change in Pink Noise = (White Noise - Pink Noise)/Noise Correlation Time
Units:
1/Week
Change in the pink noise value; Pink noise is a first order
exponential smoothing delay of the white noise input.
(12) Chosen Designs= INTEG(Design Rate, 0)
Units: Designs
Chosen designs represents the amount of design work already done.
(13) Cost to Complete Designs=
(Initial Potential Designs/Level I Productivity) * (1 - Chosen Designs/Initial Potential Designs)
Units: Month*Person
The cost to complete designs is measured in Person-Months, and
is based on the amount of design work remaining and the current
productivity of the type I staff.
(14) Cost to Complete Level II Tasks=
(Initial Level II Tasks/Level II Productivity) * (1 - Level II Tasks Done/Initial Level II Tasks)
Units: Month*Person
The cost to complete level II tasks is measured in
Person-Months, and is based on the amount of level II work
remaining and the current productivity of the type II staff.
(15) Cumulative Effort Expended= INTEG(Effort Expended, 0)
Units: **undefined**
Keeps track of total person-months of effort spent on the
project.
(16) Design Rate=
Min(Maximum Design Rate, Potential Design Rate)
Units: Designs/Month
(17) Effect of Ambiguity on Level II Productivity=
Table for Effect of Ambiguity on Productivity(Ambiguity)
Units: **undefined**
This effect simply weights the productivity of the level II
staff based on the degree of ambiguity that is present in the
tasks that they are to accomplish.
(18) Effect of Complexity on Productivity=
Table for Effect of Complexity on Productivity(Relative Complexity)
Units: **undefined**
High complexity is associated with low productivity according to
the Table for Effect of Complexity on Productivity.
(19) Effect of Level II Complexity on Productivity=
Table for Effect of Complexity on Productivity(Relative Level II Complexity)
Units: **undefined**
High complexity is associated with low productivity according to
the Table for Effect of Complexity on Productivity. The table
for this effect is the same for both level I and level II.
(20) Effort Expended=
(Type II Staff on Project + Type I Staff on Project) * Project Finished Switch
Units: People
(21) Excess Type I Staff=
Max(0, Type I Staff on Project - Type I's Required)
Units: People
When the type I staff level required is less than the type I
staff level, there are excess staff and therefore staff are
transferred to other projects.
(22) Excess Type II Staff=
Max(0, Type II Staff on Project - Type II's Required)
Units: People
When the type II staff level required is less than the type II
staff level, there are excess staff and therefore staff are
transferred to other projects.
(23) Extra Type I's Needed=
Max(0, Min(Pool of Available Type I's in Company, Type I's Required - Type I Staff on Project))
Units: People
When staff required is larger than current staff level, extra
staff are needed and may be hired or transferred to the project
from elsewhere internal to the organization. A maximum staff
level can be imposed by management.
(24) Extra Type II's Needed=
(Max(0, Min(Pool of Available Type II's in Company, Type II's Required - Type II Staff on Project))) * Level II Begin Switch
Units: People
When staff required is larger than current staff level, extra
type II staff are needed and may be hired or transferred to the
project from elsewhere internal to the organization. A maximum
staff level can be imposed by management.
(25) FINAL TIME = 40
Units: Month
The final time for the simulation.
(26) Imaginary Complexity=
Max(0, Log(Candidate Designs, 2))
Units: bits
Imaginary Complexity is an entropy-like quantity that measures
the amount of ignorance of the chosen set of designs. It is
defined here as the log of the number of candidate designs.
(27) Imaginary Level II Complexity=
Max(0, Log(Level II Tasks to Do, 2))
Units: bits
Imaginary Complexity is an entropy-like quantity that measures
the amount of ignorance of the chosen set of designs. It is
defined here as the log of the number of level II tasks
remaining to be done.
(28) Initial Level II Tasks=
Average number of Level II Functional Requirements * Average number of Level II Design Parameters
Units: Dimensionless
The number of initial level II tasks is estimated to be the
average number of FRs times the average number of DPs
(29) Initial Potential Designs=
Average number of Functional Requirements per Design * Average number of Design Parameters per Design
Units: Dimensionless
The number of initial potential designs is estimated to be the
average number of FRs times the average number of DPs
(30) INITIAL TIME = 0
Units: Month
The initial time for the simulation.
(31) Input=
1+STEP(1,Noise Start Time)*Pink Noise
Units: Dimensionless
Input is a dimensionless variable which provides a variety of
test input patterns, including a step, pulse, sine wave, and
random noise.
(32) Level I Productivity=
Effect of Complexity on Productivity
Units: Designs/(Month*Person)
Productivity represents tasks accomplished per person-month of
effort. Normal productivity is the output that would result if
the impact of complexity on productivity are 1.0; therefore,
normal productivity represents the effects of all non-modeled
factors on productivity.
(33) Level II Begin Switch=
IF THEN ELSE(Ambiguity<0.8, IF THEN ELSE(Ambiguity<0.21, 1, 0.25), 0)
Units: Dimensionless
The Level II Begin Switch sets the policy regarding how soon and
how many type II employees should be added to the project, based
on the ambiguity that exists in the level II design requirements.
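The nested IF THEN ELSE in (33) can be restated as a plain conditional; the Python sketch below is a direct transcription with the thresholds hard-coded as in the model:

```python
def level_ii_begin_switch(ambiguity):
    """Level II Begin Switch (33): request no type II staff while ambiguity
    is at or above 0.8, pre-load 25% of the required type II staff once
    ambiguity drops below 0.8, and release the full staffing request once
    ambiguity falls to its floor (tested here as < 0.21)."""
    if ambiguity < 0.21:
        return 1.0
    if ambiguity < 0.8:
        return 0.25
    return 0.0
```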
(34) Level II Productivity=
Effect of Ambiguity on Level II Productivity * Effect of Level II Complexity
on Productivity
Units: Level II Tasks/Month/Person
Productivity represents tasks accomplished per person-month of
effort. Normal productivity is the output that would result if
the impact of complexity on productivity are 1.0; therefore,
normal productivity represents the effects of all non-modeled
factors on productivity.
(35) Level II Tasks Done= INTEG(Work Accomplishment, 0)
Units: Level II Tasks
(36) Level II Tasks to Do= INTEG(-Work Accomplishment, Initial Level II Tasks)
Units: Level II Tasks
(37) Maximum Design Rate=
Candidate Designs/Minimum Time to Define Design
Units: Designs/Month
The maximum rate at which level I design work can be
accomplished given available level I design tasks to do.
(38) Maximum Work Rate=
Level II Tasks to Do/Minimum Time to Perform a Task
Units: Level II Tasks/Month
The maximum rate at which level II work can be accomplished
given available tasks to do.
(39) Minimum Ambiguity=
0.2
Units: Dimensionless
A minimum ambiguity greater than zero represents the fact that
the level I design may still change even after all assumed
design work has been done.
(40) Minimum Time to Define Design=
0.25
Units: Months
The minimum time to complete a single design task.
(41) Minimum Time to Finish Work=
1
Units: Month
For planning staffing, the minimum time over which management
desires to complete the remaining tasks. Note that this is
larger than the minimum time required to finish any one task.
(42) Minimum Time to Perform a Task=
0.25
Units: Months
This represents the minimum amount of time to complete a single
level II task.
(43) Noise Correlation Time=
3
Units: Week
The correlation time constant for Pink Noise.
(44) Noise Standard Deviation=
5
Units: Dimensionless
The standard deviation of the pink noise process.
(45) Noise Start Time = 5
Units: Week
Start time for the random input.
(46) Pink Noise = INTEG(Change in Pink Noise, 0)
Units: Dimensionless
Pink Noise is first-order autocorrelated noise. Pink noise
provides a realistic noise input to models in which the next
random shock depends in part on the previous shocks. The user
can specify the correlation time. The mean is 0 and the standard
deviation is specified by the user.
(47) Pool of Available Type I's in Company= INTEG(-Type I's added to Staff, 50)
Units: People
Pool of Available Type I's in Company sets the maximum number of
type I staff that are available to join the project.
(48) Pool of Available Type II's in Company= INTEG(-Type II's added to Staff, 300)
Units: People
Pool of Available Type II's in Company sets the maximum number of
type II staff that are available to join the project.
(49) Potential Design Rate=
Level I Productivity * Type I Staff on Project * Project Finished Switch
Units: Designs/Month
The rate at which level I design work could be accomplished if
there is enough work to be done.
(50) Potential Work Rate=
Level II Productivity * Type II Staff on Project * Project Finished Switch
Units: Level II Tasks/Month
The rate at which level II tasks could be accomplished if there
is enough work to be done.
(51) Project Finished Switch=
IF THEN ELSE(Level II Tasks Done>99, 0, 1)
Units: Dimensionless
The project is defined to be finished when 99% of the work is
done. The project finished switch shuts off the application and
accounting of labor to the project.
(52) Real Complexity=
-Average number of Functional Requirements per Design*Log(Average
Probability of Satisfying a Functional Requirement,2)
Units: bits
Real complexity is a measure of the functional complexity of a
system, which is a constant value related to the probability of
achieving each of the functional requirements
(53) Real Level II Complexity=
-Average number of Level II Functional Requirements * Log(Average
Probability of Satisfying a Level II Functional Requirement,2)
Units: bits
Real complexity is a measure of the functional complexity of a
system, which is a constant value related to the probability of
achieving each of the functional requirements
(54) Relative Complexity=
Absolute Complexity/(sqrt(Log(Initial Potential Designs, 2)^2
+ Real Complexity^2))
Units: Dimensionless
Relative Complexity measures the complexity of the system
normalized by the initial complexity.
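Equations (26), (52), (01), and (54) chain together. A Python sketch of the level I calculation (helper names are mine; defaults mirror constants (05) and (08)) makes the normalization explicit:

```python
import math

def absolute_complexity(real, imaginary):
    """Absolute Complexity (01): vector sum of the orthogonal parts, in bits."""
    return math.sqrt(real * real + imaginary * imaginary)

def relative_complexity(candidate_designs, initial_potential_designs,
                        n_fr=20, p_fr=0.95):
    """Relative Complexity (54) for level I, built from Real Complexity (52)
    and Imaginary Complexity (26). n_fr defaults to the Average number of
    Functional Requirements per Design (05), p_fr to the Average Probability
    of Satisfying a Functional Requirement (08)."""
    real = -n_fr * math.log2(p_fr)                      # (52): constant part
    imaginary = max(0.0, math.log2(candidate_designs))  # (26): ignorance part
    initial = absolute_complexity(real, math.log2(initial_potential_designs))
    return absolute_complexity(real, imaginary) / initial
```

At project start, with all 400 candidate designs open, the ratio is exactly 1; as the design converges it falls toward the constant real component alone, roughly 0.17 with the default constants.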
(55) Relative Level II Complexity=
Absolute Level II Complexity/(sqrt(Log(Initial Level II Tasks,2)^2
+ Real Level II Complexity^2))
Units: Dimensionless
Relative Complexity measures the complexity of the system
normalized by the initial complexity.
(56) SAVEPER = TIME STEP
Units: Month
The frequency with which output is stored.
(57) Scheduled Completion Date=
35
Units: Months
In this model, every attempt is made to finish the project by
the scheduled completion date, given constraints on the staff
levels and choice of beginning level II tasks.
(58) Table for Effect of Ambiguity on Productivity(
[(0,0)-(1,1)], (0,1), (0.1,0.9), (0.2,0.8), (0.3,0.7), (0.4,0.6),
(0.5,0.5), (0.6,0.4), (0.7,0.3), (0.8,0.2), (0.9,0.1), (0.99,0.01), (1,0.01))
Units: Dimensionless
The table represents the relationship between the level of
ambiguity that the type II staff must deal with and the
productivity of the type II staff at that level of ambiguity.
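Vensim lookups interpolate linearly between the listed points and clamp outside the range. A small Python equivalent (an illustrative re-implementation, not the Vensim runtime):

```python
def lookup(table, x):
    """Piecewise-linear lookup: clamp outside the table's x-range,
    interpolate linearly between adjacent (x, y) points inside it."""
    if x <= table[0][0]:
        return table[0][1]
    if x >= table[-1][0]:
        return table[-1][1]
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Points of the Table for Effect of Ambiguity on Productivity (58)
EFFECT_OF_AMBIGUITY = [
    (0, 1), (0.1, 0.9), (0.2, 0.8), (0.3, 0.7), (0.4, 0.6), (0.5, 0.5),
    (0.6, 0.4), (0.7, 0.3), (0.8, 0.2), (0.9, 0.1), (0.99, 0.01), (1, 0.01),
]
```

An ambiguity of 0.25, for example, yields a productivity weight of 0.75, halfway between the 0.2 and 0.3 entries.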
(59) Table for Effect of Complexity on Productivity(
[(0,0)-(1,1)], (0.0030581,0.986842), (0.0489297,0.837719),
(0.110092,0.692982), (0.211009,0.548246), (0.348624,0.416667),
(0.477064,0.324561), (0.584098,0.245614), (0.767584,0.118421),
(0.883792,0.0482456), (1,0.0131579))
Units: Dimensionless
This effect represents the fact that complexity decreases the
productivity of the staff. This effect may be modeled to be
nonlinear.
(60) Time Remaining=
Max(Minimum Time to Finish Work, Scheduled Completion Date - Time)
Units: Months
The months remaining before the project reaches the scheduled
completion date. Once that date is reached, the model assumes
that management tries to finish the project in a minimum time.
(61) TIME STEP = 0.125
Units: Month
The time step for the simulation.
(62) Type I Staff Leaving=
(Max(0, Excess Type I Staff)/Type I Transfer Delay + abs(Input))
Units: People/Month
(63) Type I Staff on Project= INTEG(Type I's added to Staff - Type I Staff Leaving, 350)
Units: People
(64) Type I Staffing Delay=
3
Units: Month
Reflects the average time required to perceive the need for new
type I staff and obtain them internally. Internal transfer may
be slow because of needs on other projects, and the need for
staff to come up to speed on the project before they are
productive.
(65) Type I Transfer Delay=
3
Units: Month
The delay in transferring type I workers may be high due to the
desire to keep staff on the project until they are needed on
another project.
(66) Type I's added to Staff=
Extra Type I's Needed/Type I Staffing Delay
Units: People/Month
(67) Type I's Required=
Cost to Complete Designs/Time Remaining
Units: People
Type I's required is the number of people needed to finish the
estimated level I design work remaining in the time remaining.
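Equations (13), (60), and (67) combine into the type I staffing requirement. A Python sketch (illustrative names; model equation numbers in comments):

```python
def type_i_required(initial_potential_designs, level_i_productivity,
                    chosen_designs, time_remaining):
    """Type I's Required (67): person-months of level I design work left,
    via Cost to Complete Designs (13), divided by Time Remaining (60)."""
    cost_to_complete = (
        (initial_potential_designs / level_i_productivity)
        * (1 - chosen_designs / initial_potential_designs)  # (13), Month*Person
    )
    return cost_to_complete / time_remaining  # People
```

For instance, with 400 initial potential designs, a productivity of 1 design per person-month, 300 designs already chosen, and 10 months remaining, 10 type I staff are required.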
(68) Type II Staff Leaving=
(Max(0, Excess Type II Staff)/Type II Transfer Delay)
Units: People/Month
(69) Type II Staff on Project= INTEG(Type II's added to Staff - Type II Staff Leaving, 0)
Units: People
(70) Type II Staffing Delay=
1
Units: Month
Reflects the average time required to perceive the need for new
type II staff and obtain them internally. Internal transfer may
be slow because of needs on other projects, and the need for
staff to come up to speed on the project before they are
productive.
(71) Type II Transfer Delay=
1
Units: Month
The delay in transferring type II workers may be high due to the
desire to keep staff on the project until they are needed on
another project. It is assumed that this time is shorter for
type II staff than for type I staff, since it is easier to get
them integrated into a new project more quickly.
(72) Type II's added to Staff=
Extra Type II's Needed/Type II Staffing Delay
Units: People/Month
(73) Type II's Required=
Cost to Complete Level II Tasks/Time Remaining
Units: People
Type II's required is the number of people needed to finish the
estimated level II work remaining in the time remaining.
(74) White Noise = Noise Standard Deviation*((24*Noise Correlation Time/TIME STEP)^0.5*(RANDOM 0 1() - 0.5))
Units: Dimensionless
White noise input to the pink noise process.
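Equations (11), (43)-(46), and (74) implement Sterman-style first-order pink noise. The Python sketch below integrates them with Euler steps; it is illustrative only, and it sidesteps the listing's week-versus-month unit mix by treating all times in a single unit:

```python
import random

def simulate_pink_noise(steps, dt=0.125, tau=3.0, sd=5.0, seed=42):
    """Pink Noise (46) as a first-order exponential smooth (11) of scaled
    uniform White Noise (74); the sqrt(24*tau/dt) factor sizes the white
    noise so the smoothed output's standard deviation approaches sd."""
    rng = random.Random(seed)
    pink = 0.0
    trace = []
    for _ in range(steps):
        white = sd * (24 * tau / dt) ** 0.5 * (rng.random() - 0.5)  # (74)
        pink += dt * (white - pink) / tau  # (11), Euler-integrated as in (46)
        trace.append(pink)
    return trace
```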
(75) Work Accomplishment=
Min(Maximum Work Rate,Potential Work Rate)
Units: Level II Tasks/Month
Appendix B
Calculation of Structural
Complexity for IMU Designs
Structural complexity is defined in Chapter 3 as
H = (N1 + N2) log2(p + N),
where N1 is the total number of elements in the system architecture, N2 is the total
number of interfaces between the elements, p is the number of distinct elements in the
architecture, and N is the number of distinct interfaces. Since the base-2 logarithm
is used, this representation for structural complexity is measured in bits.
For the gimballed IMU, the elements are the gyroscopes (3), the accelerometers
(3), the gimbal system (1), and the stable platform (1). Thus N1 = 8 and p = 4. Each
of the accelerometers are mounted on the stable platform, and each of the gyroscopes
are mounted to the gimbal system. Including the interface between the gimbal system
and the stable platform, there are thus 7 interfaces, 3 of which are distinct. Thus
N2 = 7 and N = 3.
The structural complexity for this design is therefore
H = (N1 + N2) log2(p + N) = (8 + 7) log2(4 + 3) = 42.11 bits.
For the gimballess IMU, the elements are the gyroscopes (3) and the accelerometers (3).
Thus N1 = 6 and p = 2. Each of the accelerometers and each of the
gyroscopes are mounted directly on the spacecraft, so there are 6 interfaces, of which
2 are distinct. Thus N2 = 6 and N = 2.
The structural complexity for this design is therefore
H = (N1 + N2) log2(p + N) = (6 + 6) log2(2 + 2) = 24 bits.
Note that this measure of complexity has the limitation that the degree of coupling
between the elements is not addressed. If the interface between the accelerometers
and the spacecraft is more complex than the interface between the accelerometers and
the stable platform, a measure of functional complexity may be needed to compare
the two designs.
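The two calculations above are easy to check numerically. A minimal Python sketch of the structural complexity measure (variable names mirror the text):

```python
import math

def structural_complexity(n1, n2, p, n):
    """H = (N1 + N2) * log2(p + N), in bits: N1 total elements, N2 total
    interfaces, p distinct elements, N distinct interfaces."""
    return (n1 + n2) * math.log2(p + n)

# Gimballed IMU:  N1 = 8, N2 = 7, p = 4, N = 3  ->  about 42.11 bits
# Gimballess IMU: N1 = 6, N2 = 6, p = 2, N = 2  ->  24 bits
```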
Bibliography
[AKS+99] Ancona, Kochan, Scully, Van Maanen, and Westney. Organizational Behavior and Processes. South-Western College Publishing, Cincinnati, 1999.

[Ben85] Charles H. Bennett. Emerging Syntheses in Science. Pines, 1985.

[BM98] Dan Braha and Oded Maimon. The measurement of a design structural and functional complexity. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 28:527-535, 1998.

[BM01] Sanjeev K. Bordoloi and Hirofumi Matsuo. Human resource planning in knowledge-intensive operations: A model for learning with stochastic turnover. European Journal of Operational Research, 130:169-189, 2001.

[Bre83] H. J. Bremmerman. Optimization through Evolution and Recombination, Self-Organizing Systems. Spartan Books, Chicago, 1983.

[Cha87] G. J. Chaitin. Algorithmic Information Theory. Cambridge University Press, Cambridge, 1987.

[Cra02] Ed Crawley. System architecture lecture notes, Fall 2002.

[Dra65] Charles Draper. Space navigation, guidance, and control. Technical report, MIT Instrumentation Laboratory Report R-500, 1965.

[DT92] Eric V. Denardo and Christopher S. Tang. Linear control of a Markov production system. Operations Research, 40:259-278, 1992.

[EGA01] Edward G. Anderson, Jr. The nonstationary staff-planning problem with business cycle and learning effects. Management Science, 2001.

[EHY99] Basem El-Haik and Kai Yang. The components of complexity in engineering design. IIE Transactions, 1999.

[GML96] Murray Gell-Mann and Seth Lloyd. Information measures, effective complexity, and total information. Complexity, pages 44-52, 1996.

[GZ02] Noah Gans and Yong-Pin Zhou. Managing learning and turnover in employee staffing. Operations Research, 2002.

[Hal77] M. H. Halstead. Elements of Software Science. Elsevier North-Holland, New York, 1977.

[Han71] James A. Hand. MIT's role in Project Apollo. Technical report, MIT Instrumentation Laboratory Report R-700, 1971.

[Hoa76] David G. Hoag. The history of Apollo on-board guidance, navigation, and control. Technical report, MIT Instrumentation Laboratory Report P-357, 1976.

[Llo90] Seth Lloyd. The calculus of intricacy. The Sciences, 30(5):38-44, 1990.

[LP88] Seth Lloyd and Heinz Pagels. Complexity as thermodynamic depth. Ann. Phys., 1988.

[Lyn02] Jim Lyneis. System and project management course notes, Fall 2002.

[MMS66] J. McNeil, J. E. Miller, and J. Sitomer. Use of body-mounted inertial sensors in an Apollo guidance, navigation and control system. Technical report, MIT Instrumentation Laboratory Report R-544, 1966.

[MR02] Mark W. Maier and Eberhardt Rechtin. The Art of Systems Architecting. CRC Press, Boca Raton, 2002.

[NJ67] J. L. Nevins and I. S. Johnson. Man-computer interface for the Apollo guidance, navigation, and control system. Technical report, MIT Instrumentation Laboratory Report, 1967.

[NP82] C. R. Nelson and C. I. Plosser. Trends and random walks in macroeconomic time series: Some evidence and implications. Journal of Monetary Economics, 1982.

[Ste00] John D. Sterman. Business Dynamics. Irwin McGraw-Hill, Boston, 2000.

[Suh90] Nam P. Suh. The Principles of Design. Oxford University Press, New York, 1990.

[Suh95] Nam P. Suh. Design and operation of large systems. Journal of Manufacturing Systems, 1995.

[Suh00] Nam P. Suh. Axiomatic Design: Advances and Applications. Oxford University Press, New York, 2000.

[SW49] C. E. Shannon and W. Weaver. The Mathematical Theory of Communication. University of Illinois Press, Urbana, 1949.

[TH65] Milton B. Trageser and David G. Hoag. Apollo spacecraft guidance system. Technical report, MIT Instrumentation Laboratory Report R-495, 1965.

[UE02] Darian Unger and Steven Eppinger. Product development process design: Planning design iterations for effective product development. Working Paper, November 2002.

[Wea48] W. Weaver. Science and complexity. American Scientist, 36:536-544, 1948.