DESIGN, COGNITION, AND COMPLEXITY:
AN INVESTIGATION USING A COMPUTER BASED DESIGN TASK SURROGATE
by
Nicholas W. Hirschi
S.B., Mechanical Engineering
The Massachusetts Institute of Technology, 1998
Submitted to the Department of Mechanical Engineering in partial fulfillment
of the requirements for the degree of
Master of Science
at the
Massachusetts Institute of Technology
June 2000
© Massachusetts Institute of Technology
All rights reserved
Signature of Author
Department of Mechanical Engineering, 19 May 2000
Certified by
Daniel D. Frey
Assistant Professor of Aeronautics and Astronautics
Thesis Supervisor
Certified by
David R. Wallace
Assistant Professor of Mechanical Engineering
Thesis Reader
Accepted by
Ain A. Sonin
Professor of Mechanical Engineering
Chairman, Committee on Graduate Students
DESIGN, COGNITION, AND COMPLEXITY:
AN INVESTIGATION USING A COMPUTER BASED DESIGN TASK SURROGATE
by
Nicholas W. Hirschi
Submitted to the Department of Mechanical Engineering
on 19 May 2000 in Partial Fulfillment of the
Requirements for the Degree of
Master of Science
ABSTRACT
This thesis shows that human cognitive and information processing capabilities place
important constraints on the design process. An empirical investigation of how such human
cognitive parameters relate to the design activity yields insight into this topic and suggestions for
structuring and developing design support tools that are sensitive to the needs and characteristics
of the designer. These tools will be better able to mediate shortcomings in human ability to cope
with complex design problems characterized by large scale, interrelated structure, and significant
internal dynamics.
A computer based interactive surrogate model of a highly generalized version of the
design process is presented and developed. The surrogate facilitates investigation of how a
human designer's ability scales with respect to system size and complexity. It also allows
exploration of the impact of learning and domain-specific knowledge on the design task. The
design task is modeled using an orthonormal n x n linear system embedded in a software
platform. Human experimental subjects, acting as "designers," interact with the task surrogate to
solve a set of design problems of varying size, structure, and complexity.
It is found that human ability to deal with complex, highly coupled and interdependent
systems scales unfavorably with system size. The difficulty of fully coupled design tasks is
shown to scale geometrically with design problem size n, as approximately O(e^n), while that of
uncoupled systems scales linearly as O(n). The degree of variable coupling in fully coupled
systems has a significant but secondary effect on problem difficulty. Domain-specific knowledge
is shown to result in a reduction in the effective size and complexity of the design problem,
allowing experienced designers to synthesize complex problem tasks that are far less tractable to
novices. Experimental results indicate that human cognitive parameters are a significant factor in
the designer's ability to work with large, highly coupled, or otherwise complex systems.
Thesis Supervisor: Daniel D. Frey
Title: Assistant Professor of Aeronautics and Astronautics
ACKNOWLEDGEMENTS
A number of people have contributed significantly to the rewarding experience I have
had at MIT. First and foremost, thanks go to my advisor, Dan Frey. I am indebted to Dan for
teaching me a great deal over the last two years and for giving me the freedom to explore my
interests. The quality of my thesis research owes much to Dan's insightful guidance, especially
his thorough comments during the final stages of the work.
Others have made significant contributions to this research. I'd like to thank Professor
Timothy Simpson, of Pennsylvania State University, and Dr. Ernst Fricke, of Technische
Universität München, for their helpful suggestions and assistance. I am also grateful to Professor
David Wallace for serving as my departmental thesis reader.
I would like to acknowledge MIT's Center for Innovation in Product Development for
providing the funding necessary to conduct this research.
Fredrik Engelhardt, Jens Haecker, and Siddhartha Sampathkumar also deserve thanks for
making the lab a particularly enjoyable place to work.
Finally, I am grateful to my parents and brother for their love and support.
TABLE OF CONTENTS
1 INTRODUCTION
1.1 THINKING ABOUT DESIGN
1.2 THE PROCESS AND THE DESIGNER
1.3 THE GOAL OF THIS THESIS
1.4 MY APPROACH
1.5 A BRIEF OUTLINE

2 COGNITION, INFORMATION PROCESSING, AND DESIGN
2.1 AN INTRODUCTION TO COGNITIVE SCIENCE
2.2 BASIC INFORMATION STORAGE AND PROCESSING STRUCTURES
2.2.1 SHORT-TERM MEMORY
2.2.1.1 A CLOSER LOOK AT SHORT-TERM MEMORY LIMITATIONS
2.2.3 LONG-TERM MEMORY
2.2.4 EXTERNAL MEMORY
2.2.5 THE CENTRAL EXECUTIVE FUNCTION
2.2.6 WORKING MEMORY: THE MIND'S INFORMATION PROCESSING STRUCTURE
2.3 HUMAN COGNITIVE PARAMETERS AND COMPLEX MENTAL TASKS
2.3.1 MENTAL CALCULATION
2.3.2 DECISION MAKING
2.3.3 INFORMATION PROCESSING MODELS OF THE PROBLEM SOLVER
2.3.3.1 MEMORY AND PROBLEM SOLVING
2.3.3.2 LEARNING, EXPERTISE, AND PROBLEM SOLVING
2.3.3.3 EXPERIENCE AND IMPROVED PROBLEM REPRESENTATION
2.3.4 INTERACTING WITH DYNAMIC SYSTEMS

3 PRODUCT DESIGN AND DEVELOPMENT
3.1 BACKGROUND
3.1.1 THE PRODUCT DEVELOPMENT PROCESS IN A NUTSHELL
3.1.2 PROBLEM STRUCTURE: AN ENGINEERING APPROACH TO DESIGN
3.1.3 MANAGING DESIGN RELATED INFORMATION
3.2 MODELING AND VISUALIZING THE DESIGN PROCESS
3.2.1 CONCEPT DEVELOPMENT
3.2.2 PARAMETER DESIGN
3.2.3 THE DESIGN STRUCTURE MATRIX
3.2.4 COUPLING
3.3 THE HUMAN FACTOR
3.3.1 COGNITION AND THE DESIGN PROCESS
3.3.2 PROBLEM SIZE AND COMPLEXITY
3.3.3 NOVICE AND EXPERT DESIGNERS
3.3.4 DESIGN AND DYNAMIC SYSTEMS
3.3.5 CAD AND OTHER EXTERNAL TOOLS

4 TOWARDS AN ANTHROPOCENTRIC APPROACH TO DESIGN

5 A DESIGN PROCESS SURROGATE
5.1 THE MOTIVATION FOR A SURROGATE
5.1.1 AN IDEAL SURROGATE TASK
5.2 DEVELOPING THE DESIGN TASK SURROGATE
5.2.1 THE BASIC DESIGN PROBLEM MODEL
5.2.2 THE SOFTWARE PLATFORM
5.2.3 HOW THE PROGRAM WORKS
5.2.4 EXPERIMENTAL PROCEDURE
5.3 PRELIMINARY EXPERIMENTS AND PROTOTYPE DESIGN MATRICES
5.3.1 WHAT WAS LEARNED FROM THE FIRST MODEL
5.4 PRIMARY EXPERIMENTS: A REFINED SYSTEM REPRESENTATION
5.4.1 DESIRABLE DESIGN SYSTEM CHARACTERISTICS
5.4.1.1 MATRIX CONDITION
5.4.1.2 NON-SINGULARITY
5.4.1.3 ORTHONORMALITY
5.4.2 CHARACTERIZATION OF SPECIFIC DESIGN TASK MATRICES
5.4.2.1 MATRIX TRACE
5.4.2.2 MATRIX BALANCE
5.4.2.3 MATRIX FULLNESS
5.4.3 DESIGN TASK MATRIX GENERATION
5.4.4 A DESCRIPTION OF THE PRIMARY EXPERIMENTS

6 RESULTS OF THE DESIGN TASK SURROGATE EXPERIMENT
6.1 PRELIMINARY DATA ANALYSIS AND NORMALIZATION OF THE DATA
6.2 DOMINANT FACTORS IN DESIGN TASK COMPLETION TIME
6.2.1 MATRIX SIZE, TRACE, AND BALANCE
6.2.2 MATRIX FULLNESS
6.3 SCALING OF PROBLEM COMPLETION TIME WITH PROBLEM SIZE
6.3.1 INVESTIGATING THE SCALING LAW
6.4 DESIGN TASK COMPLEXITY
6.4.1 NUMBER OF OPERATIONS
6.4.2 COMPUTATIONAL COMPLEXITY
6.5 OTHER ASPECTS OF DESIGN TASK PERFORMANCE
6.5.1 TIME PER OPERATION
6.5.2 GENERAL STRATEGIES FOR PROBLEM SOLUTION
6.5.2.1 GOOD PERFORMERS
6.5.2.2 AVERAGE AND POOR PERFORMERS
6.5.3 MENTAL MATHEMATICS AND THE DESIGN TASK
6.5.4 SOME GENERAL OBSERVATIONS ON TASK RELATED PERFORMANCE
6.6 LEARNING, EXPERIENCE, AND NOVICE/EXPERT EFFECTS
6.7 SUMMARY

7 CONCLUSIONS, DISCUSSION, AND FUTURE WORK

THESIS REFERENCE WORKS

APPENDIX
A.1 COMPUTATIONAL COMPLEXITY
A.2 RATE OF GROWTH FUNCTIONS
A.2.1 IMPORTANT CLASSES OF RATE OF GROWTH FUNCTIONS
A.2.2 A PRACTICAL EXAMPLE
TABLE OF FIGURES
FIGURE 2.1 - Basic structure of the human information processing system.
FIGURE 3.1 - The five stages of the product design and development process.
FIGURE 3.2 - Tool availability during various phases of the product development process.
FIGURE 3.3 - The generalized parameter design process.
FIGURE 3.4 - An example of a Design Structure Matrix representation.
FIGURE 3.5 - Types of information dependencies.
FIGURE 3.6 - DSM showing fully coupled 2 x 2 task block.
FIGURE 3.7 - Uncoupled, decoupled, and fully coupled systems.
FIGURE 5.1 - DS-emulator design task GUI for 3 x 3 system.
FIGURE 5.2 - Master GUI and Control GUI.
FIGURE 5.3 - Consent GUI.
FIGURE 5.4 - DS-emulator program functionality.
FIGURE 5.5 - Workflow of typical experimental session.
FIGURE 5.6 - Preliminary Uncoupled vs. Fully Coupled task completion time results.
FIGURE 5.6 - Experimental matrix characteristics.
FIGURE 6.1 - Completion time data for all matrix types used in the surrogate experiments.
FIGURE 6.2 - Effects of matrix fullness on normalized completion time.
FIGURE 6.3 - Scaling for fully coupled and uncoupled design matrices vs. matrix size.
FIGURE 6.4 - Full matrix completion time and best-fit exponential model.
FIGURE 6.5 - Uncoupled matrix completion time and best-fit linear model.
FIGURE 6.6 - Average number of operations for Series C fully coupled matrix data.
FIGURE 6.7 - Relative complexity of the design task for humans and computers.
FIGURE 6.8 - Average time per operation for Series C experiments.
FIGURE 6.9 - Average time per operation for Series F experiments.
FIGURE 6.10 - Average number of operations for Series F uncoupled matrix experiments.
FIGURE 6.11 - Input adjustment vs. operation for a subject who performed well.
FIGURE 6.12 - Input/output parameter plots for a subject who performed well.
FIGURE 6.13 - Delayed search of input variable ranges.
FIGURE 6.14 - Average input variable adjustment for a subject who performed poorly.
FIGURE 6.15 - Input/output parameter plots for a subject who performed poorly.
FIGURE 6.16 - Random dispersion of input variable change with respect to time per move.
FIGURE 6.17 - Design task completion time vs. task repetition for 2 x 2 full matrix.
FIGURE 7.1 - Theoretical effects of increased experience on task performance.
FIGURE A.1 - Polynomial and exponential growth rate comparison.
1 INTRODUCTION
1.1 THINKING ABOUT DESIGN
This thesis is about design. More specifically, it is concerned with the complex
interrelationship of the human mind and the design process. To discuss design and the design
process, however, it would first seem necessary to have a sound working definition of what
"design" might be. Of course, the outcome of any inquiry along these lines is heavily dependent
on who is asked to provide such a definition. An engineer might be satisfied with calling the
design process the search for an, "optimum solution to the true needs of a particular set of
circumstances" [Matchett 1968]. This definition would be unlikely to suit an architect, though
arguably the fault is more likely to lie with dry wording rather than any real lack of accuracy.
The architect, in turn, might prefer Louis Sullivan's more poetically worded definition, "form
ever follows function." Calling design a, "very complicated act of faith," however true it might
be, would probably satisfy no one [Jones 1966].
A most succinct, meaningful, and general definition can be found in Christopher
Alexander's book Notes on the Synthesis of Form. He states that design is "the process of
inventing physical things which display new physical order, organization, and form in response to
function" [Alexander 1964]. If the word "physical" is removed, this statement generalizes well
across all fields of design, from product development to software design to architecture and
industrial design. The fact that Alexander includes the words "process" and "inventing" is also
significant. In so doing he acknowledges the complexity and ambiguity of the design process, the
general lack of easy solutions and the need for iteration, and the importance of human thought
and intuition to the process.
A typical design problem has requirements that must be met and interactions between the
requirements that make them hard to meet [Alexander 1964]. This sounds straightforward at first.
But many design problems are quite complicated and some are nearly intractable. As the
complexity and size of design projects have increased, new tools of ever-increasing sophistication
have arisen to cope with the demands imposed on the designer. Economic realities also confer
significant advantages on those who can design not only well but quickly. This has necessitated
negotiating the design process ever more swiftly without sacrificing the quality of the result.
Consequently, designs no longer have the luxury of evolving gradually over time by responding to
changes in use patterns and the environment.
Moreover, the quantity of information required for design activities is often so extensive
that no individual can possibly synthesize an optimal solution to a given design problem. Thus,
the practice of design has moved away from the lone artisan, who both designs and builds his
creations, towards a new paradigm of professional design [Lawson 1997]. The increasing
complexity and cross-disciplinary nature of the design process has necessitated the development
of specialized experts in many different fields and has spurred the development of sophisticated
multi-disciplinary design teams that integrate their knowledge in an effort to solve a particular
design problem.
1.2 THE PROCESS AND THE DESIGNER
Fundamentally, the design process has a physical aim. Rather than being an end in and of
itself, a designer's thinking is directed towards the production of a tangible end product, the nature
of which must satisfy certain constraints and be effectively communicated to others who may
help design or construct it [Lawson 1997]. Nonetheless, the thought process of the designer,
however intangible, is the most crucial ingredient of any design. Often whether a design
succeeds or fails is determined very early on in the design process during the vulnerable
conceptual phase in which the fundamental nature of the proposed solution to a given design
problem is synthesized by the designer.
Though the design process itself receives a great deal of scrutiny, the agents directly
involved in the process, humans, are not as well understood in this context as they should be. To
a large extent this state of affairs is understandable. Although thinking and creativity are clearly
central to design, the history of cognitive psychology reveals many conflicting views about the
nature of thought. Models of cognition vary widely, from the near mystical to the purely
mechanistic. Indeed, cognition is one of the most contentious and problematic fields to research
because it is an investigation of something that can be neither seen, nor heard, nor touched
directly [Lawson 1997]. It is also, as a direct result, difficult to study empirically. These factors
have made it difficult for other fields to benefit as much as they might from cognitive science
research and a better understanding of human mental parameters and their implications.
This rift is particularly evident within the context of the engineering design and product
design and development processes. Due to its roots in mathematics and physics, the engineering
community tends to take an almost exclusively analytical approach to studying the design
process. Except for a few fields, such as "human factors engineering" which studies ergonomics
and the interaction of humans and machines, little attention is paid explicitly by engineers to the
human element in the design process. Generally, human factors are considered at a late stage in
this activity, almost as an afterthought and primarily for ergonomic reasons, as would be the case
for an industrial designer's contribution to the physical appearance of a consumer product, for
instance.
The inclusion of human parameters in the early stages of the design process generally
only occurs in special instances, for example in the design of airplane controls and flight deck
displays where safety is a critical issue and properly interfacing an airplane and the pilots who
control it is an important factor. However, as many such human factors examples have shown,
the impact of considering human factors during a design project is often significant and highly
beneficial to the end result of the design process.
If a consideration of human parameters can have such a positive effect on the outcome of
the design process, why not also consider their effect on the process itself? Since humans are the
most important part of this task, it would seem vital to characterize and understand their strengths
and weaknesses with respect to the design process. This is a first step towards developing design
tools and techniques that are better able to accommodate or mediate shortcomings on the part of
the designer with respect to design problem size, scale, type, complexity, dynamics, and timescale.
1.3 THE GOAL OF THIS THESIS
The goal of this thesis is to show on a basic level that the structure of the human mind
and the nature of human cognition (resulting in specific cognitive capabilities and limitations on
the part of the designer) are as important to understanding the design process as the nature and
structure of the design problem itself. In fact, the two are inextricably intertwined. A better
understanding of the strengths and weaknesses of the designer in light of the demands of the
design task and how these capabilities scale with project size, structure, and complexity, as well
as how they change over time, is critical for developing further tools to assist the designer and
better manage the design process. I also hope that others in the product design and development
field will be encouraged after reading this thesis to take a more "anthropocentric" view of the
design process. There is little to be lost from this approach, and, in my view, a great deal to be
gained.
1.4 MY APPROACH
I believe that a quantitative approach to characterizing certain aspects of the designer is
necessary and complementary to more qualitative studies. Part of the problem with many
theories of human problem solving, and other findings concerning cognitive capabilities
presented by the cognitive science field, is that researchers seldom relate this information directly
to the design process in a quantitative manner. This state of affairs has made it unlikely that such
results will gain the attention of the engineering design community due to the lack of design
context and the perceived "softness" or absence of analytical rigor of the findings.
An interest in a quantitative approach, however, does not indicate an underlying belief on
my part that design is a true science that follows fundamental laws or is quantifiable on any
fundamental level. I merely suggest that such an approach will serve as a meaningful vehicle for
eliciting and characterizing certain aspects of the designer's performance with respect to the
design task. Even a field as seemingly Platonic as mathematics has been revealed to have certain
dependencies on the structure of the minds that created it [Dehaene 1997]. This in no way
detracts from its beauty, meaningfulness, or usefulness as a tool, and the same should be true for
design.
1.5 A BRIEF OUTLINE
I have taken an integrated approach to presenting and discussing this research, and it is
my hope that the thesis will be read in its entirety as this will give the reader the most complete
picture of my argument. However, those with knowledge of engineering design and product
design and development and who are in a hurry could reasonably skip the first two sections of
Chapter 3. Likewise, cognitive scientists on a tight schedule might choose to dispense with
reading Chapter 2.
For the aid of the reader, I present a brief outline of the material contained in this
document:
My goal in Chapter 2 is to present a clear view of the capabilities of the designer and a
view of the design process from a cognitive science perspective. I will present an overview of
modern ideas from cognitive science that are relevant to design and the designer, including
information processing theories of human problem solving and cognition, and topics such as
memory types and characteristics, recall and processing times, mental calculation in exact and
approximate regimes, the role of learning and experience, and human time-response issues,
among others. Relevant literature and research studies will be presented and discussed in the
context of the design and problem-solving processes.
Chapter 3 concerns design and the designer in the context of engineering design and
product development activities. I will discuss how the design process has been framed in this
field, and how many common design methodologies and tools fit within this general structure.
Because all of these topics are covered in great detail elsewhere (in the case of Design Structure
Matrices, parameter design, etc.) I will only present details that I feel have specific relevance to
this thesis. The assumption is made that the reader either has general knowledge of these topics
or, if not, that numerous thorough and exacting treatments are available from other sources. This
chapter concludes with a presentation of some recent studies done in an engineering design or
product design and development context in which human cognitive considerations are discussed
and identified as significant.
In Chapter 4, I suggest that there is a need for a more "anthropocentric" approach to the
design process. There is little interaction between the design and cognitive science fields and it is
my contention that both disciplines have a great deal to offer one another. Of particular
importance is that the product design and development field pays more attention to the
characteristics of the designer. It is my belief that this will be significant not only in that it will
benefit the product development process as it is traditionally understood, but that new distributed
design environments being proposed or implemented will also reap rewards from a richer
understanding of the cognitive characteristics of the individual human agents involved in the
complex collaborative design process.
Chapter 5 introduces and describes the design task surrogate program developed and
tested in cooperation with human subjects as part of the research conducted for this thesis. Called
"DS-emulator," this computer program simulated a highly generalized version of the design
process using a well-characterized mathematical model and an interactive user interface. The
development of the design process model, various assumptions used in the program, and its
implementation are discussed in detail. The experimental regimen, as well as types of
experiments conducted with the help of experimental subjects, are also discussed.
Chapter 6 contains a comprehensive presentation of the results achieved with the DS-emulator
design task surrogate program. These results indicate that human cognitive parameters
do have a significant effect on the outcome of the design process.
I close the thesis proper in Chapter 7 with a discussion of the results of the design task
experiments presented earlier and how they are related to current theory and practice in both the
design and cognitive science fields. Particular consideration is given to the implications of the
results for product design and development activities.
Because a basic understanding of computational complexity and rate of growth functions
is helpful for reading certain sections of this thesis the Appendix includes a brief overview of
these topics.
2 COGNITION, INFORMATION PROCESSING, AND DESIGN
The purpose of this chapter is to familiarize the reader with topics in cognitive science
research that are significant to this thesis. Section 2.1 presents a brief overview of the modern
cognitive science field, taking note of its roots in information theory and computer science. This
thesis recognizes the information-processing paradigm as the most appropriate conceptual
framework with which to structure a discussion of the impact of human cognition on the design
process. As a direct consequence of this stance, earlier theories of human cognition such as
Behaviorism and Gestalt theory are ignored. In Section 2.2, important fundamental concepts
from cognitive science, such as various types of memory and other basic human cognitive
structures and their parameters, are defined and discussed and references to significant research
and experimental findings are presented. The final section, Section 2.3, discusses more complex topics in
cognitive science, such as theories of human problem solving, along with other associated
research that draws on the fundamental topics of Section 2.2. This material is meant to show how
human cognitive parameters might impact both the design process in general and the specific
design task experiment discussed in Chapters 5 through 7.
2.1 AN INTRODUCTION TO COGNITIVE SCIENCE
The cognitive science revolution was to a great degree encouraged by the development of
information and communications theory during the Second World War. This sophisticated
innovation led to a new era of exploration of human thought processes. Information could be
mapped and observed in a quantitative manner not previously available, and the idea of
"information" as an abstract and general concept, an intrinsic quality of everything that is
observable, allowed many old problems to be revisited within this revolutionary conceptual
framework. Significantly, information theory has also provided important insights on how
humans might encode, analyze, and use data acquired through sensory channels. More primitive
and substantially less empirical frameworks such as Behaviorism and Gestalt theory have been
largely supplanted by a cognitive science approach which views humans as information
processing machines. This cognitive approach envisions humans as more adaptable and
intelligent organisms than do earlier Behaviorist or Gestalt theories, and focuses on context and
task sensitive empirical studies of processes and operational functions as ways of attempting to
explain how the mechanisms of cognition work [Lawson 1997].
The information processing approach to cognitive psychology was also inspired by
computer science research, and the advent of dedicated information processing machines such as
computers and electronic communication devices has had significant impact on the development
of cognitive science. At the theoretical level, experiments in computer design pointed towards
the desirability of separate long-term information storage structures, with high capacity but slow
access time, and complementary short-term information storage and processing functions, with
rapid access and limited span. Modern computers, for instance, operate in this manner, with large
stores of data on the hard drive and processing activities carried out using RAM and the CPU. By
analogy, it was suggested that this architecture might also benefit the human brain and as a result
many parallels have been identified between the computer science model and contemporary
information-processing concepts of human cognition [Baddeley 1986]. In the case of the human
mind, memory, search, and processing tasks are divided between a time and capacity limited
short-term memory and working memory, and a vast long-term memory store. In most cases this
structure seems to be the most efficient compromise between storing tremendous amounts of
information, which can only be searched and retrieved at limited speeds in the case of both
humans and computers, and the rapid retrieval and processing of small amounts of information
pertinent to the situation at hand [Cowan 1995].
A cognitive science based approach is particularly attractive to those who seek to
understand the design process because it draws many useful parallels between thoughts,
perceptions, and actions. Cognitive scientists theorize that the way information is perceived,
organized, and stored has great bearing on how it is processed and synthesized into a solution to a
problem, for instance. Thus, the study of problem solving can serve as an invaluable window on
the structure and function of the human mind and, by extension, the mental processes of the
designer.
Naturally there are drawbacks to the cognitive science approach. The tools available to
directly observe the human brain are still rather crude, making it difficult to develop or verify
detailed models of human thought. Although new techniques such as fMRI (functional magnetic
resonance imaging) have provided sophisticated and dynamic data concerning the relationship
between mental processes and the brain's structure, exploration at the level of the neuron (of
which there are approximately 10^11 in the human brain) still requires highly invasive and clumsy
methods [Horowitz 1998]. Due to the general complexity of the brain the performance of
Artificial Intelligence (AI), and other computational techniques influenced by cognitive science
research, remains inferior to actual human thought in most respects. The AI approach works best
when it is applied to well-ordered problems with a well-defined structure, but generally falls far
short of the flexibility displayed by the human mind. Thus, it has become essentially a truism
that any computational theory of mind, underpinned by a cognitive science approach, would be
by definition as complex as the mind itself [Lawson 1997].
2.2 BASIC INFORMATION STORAGE AND PROCESSING STRUCTURES
From the cognitive science standpoint the human mind is an information-processing unit
that contains sensory structures for gathering information from the environment, storage
structures that retain information, and processing structures to manipulate the information in ways
useful to the information-processing unit. The nature of the memory and processing structures of
the mind, and the complex manner in which they interact, form the basis for the information
processing theories of human cognition proposed by the cognitive science community.
The five senses, sight, hearing, taste, smell, and touch, and the associated regions of the
human brain that pre-process information from these sources, are the mind's information
gathering structures. This subject, though most interesting, is beyond the scope of this research.
However, a basic understanding of the other major storage and processing structures theorized by
cognitive science researchers, and how they are linked together, is an important part of
understanding human cognition and how it affects problem-solving activities such as the design
process.
Cognitive scientists generally divide the storage capabilities of the mind into two distinct
categories, short-term memory and long term memory. Some also include a third, sensory
memory, that is specially devoted to storing information from some forms of sensory input
[Cowan 1995]. However, this is a contentious issue and not significant to this research, and so
will not be addressed here.
The basic processing structure of the mind is the working memory. This is generally
believed to be a flexible structure consisting of the activated portions of the short- and long-term
memories that are being drawn upon during a particular information processing task.
The activities of the information storage and processing structures of the human mind are
directed by the central executive function, which focuses attention, allocates resources, and
directs and controls cognitive processes. Although the evidence for some type of supervisory
structure in the mind is clear, it is poorly understood in comparison with most other basic
cognitive structures [Cowan 1995]. Figure 2.1 shows how cognitive scientists generally believe
that these basic cognitive structures fit together to form the human information processing
system.
[Figure: block diagram showing the central executive (which directs attention and controls processing), the sensory systems and sensory memory, long-term memory, and working memory, connected by information flow pathways.]
FIGURE 2.1 - Basic structure of the human information processing system. Arrows denote
information flow pathways [adapted from Cowan 1995].
2.2.1 SHORT-TERM MEMORY
The concept of short-term memory (STM), particularly the limited-capacity model in use
today, can be traced back to the philosopher and psychologist William James, active in the late
19th century. James drew a distinction between the small amount of information that can be
stored consciously at any one time and the vast store of information actually stored in the human
mind. He referred to the time-limited short-term memory as the "primary memory," and the
long-term, more stable memory as the "secondary memory" [Cowan 1995].
As it is understood today, the short-term memory is a volatile buffer that allows quick
storage and rapid access to information of immediate significance to the information-processing
unit. Although it is easy to encode information in the short-term memory, it decays rapidly and is
soon forgotten unless it is transferred to the long-term memory. For instance, a task such as
remembering a phone number for immediate dialing would employ the resources of the short-
term memory. Of course, because of the transience of this type of memory the number is likely to
be rapidly forgotten once dialed, or perhaps even before.
As mentioned earlier, the advantage of a separate short-term memory is that rapid recall
of information is facilitated by this structure. However, the drawback is that this information
storage buffer is both time limited and size limited, and this has important consequences for the
overall processing capabilities of the human mind.
2.2.1.1 A CLOSER LOOK AT SHORT-TERM MEMORY LIMITATIONS
Important topics for communications and information theory research conducted during
the Second World War were often practical problems related to communication technology such
as studies of the limitations of electronic transmission channels or the ability of humans to
receive, select, and retain information from a complex barrage of stimuli. A typical study from
that period might have tested the ability of pilots to filter and recall critical information from
concurrent streams of radio transmissions. Research such as this led naturally to the study of
more general human capacity limits [Cowan 1995]. Of particular interest was the short-term
memory because it was apparent from an early stage that its limited capacity was probably a
major factor in limiting overall human information processing capabilities.
The psychologist George A. Miller made an early and highly influential contribution to
the understanding of the limits of humans as information processing agents in a paper he
published in 1956. In "The Magical Number Seven, Plus or Minus Two," Miller suggested that
human short-term memory capacity was limited by several factors. First, he suggested that what
he called the "span of absolute judgement" was somewhere between 2.2 to 3 bits of information,
with each bit corresponding to two options in a binary scheme. This restriction was based on
tests examining the human's ability to make "one-dimensional judgments" (i.e. judgements for
which all information was relative to a single dimension like pitch, quantity, color, or saltiness)
about a set of similar stimuli presented in a controlled environment.
For instance, subjects were played a series of musical tones of varying pitches and asked
to compare them in terms of frequency. It was discovered that about six different pitches were
the largest number of tones a subject could compare accurately without performance declining
precipitously. The same held true for pattern matching exercises, judgement of taste intensities,
and other similar comparison tests involving memory. From this data Miller was able to conclude
that humans have a span of absolute judgement of 7 ± 2 pieces of information (i.e. 2^2.2 to 2^3.2,
or roughly 2.2 to 3.2 bits) that were related by having a similar dimension of comparison [Miller 1956].
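To make the conversion between items and bits concrete, the following short calculation (an illustrative sketch in Python of my own, not part of Miller's analysis) maps an item span to its information content under the assumption that each item is one of k equally likely alternatives:

    # Illustrative only: an item drawn from k equally likely alternatives
    # carries log2(k) bits of information.
    import math

    for items in (5, 7, 9):  # Miller's 7 +/- 2 span
        print(items, "items ~", round(math.log2(items), 2), "bits")
    # 5 items ~ 2.32 bits, 7 items ~ 2.81 bits, 9 items ~ 3.17 bits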
Miller was also clever enough to recognize that if this limitation were inflexible there
would be a problem. If the amount of information storable in what he called the "immediate
memory," now more commonly called short-term memory, was a constant, then the span should
be relatively short when the individual items to be remembered contained a lot of information.
Conversely, the span should be long when the items themselves are simple. How then could one
possibly remember an entire sentence if a single word has about 10 bits of information?
Miller proposed that people actively engage in the "chunking" of information as it is
encoded in their memory to get around the span limitation. In this scheme the number of bits of
information is constant for "absolute judgement" and the number of "chunks" of information is
constant for the short-term memory, and both regimes are governed by the 7 ± 2 rule. Miller
suggested that since the short-term memory span is fixed at 7 ±2 chunks of information the
amount of data that it actually contains can be increased by creating larger and larger chunks of
information, each containing more bits than before. To do this, the information is converted into
more efficient representations. Input is received by an individual in a form that consists of many
chunks with few bits per chunk, and is then re-coded such that it contains fewer chunks with more
bits per chunk [Miller 1956].
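The arithmetic of re-coding can be illustrated with a small sketch (a hypothetical example of my own, not drawn from Miller's experiments), in which the same twelve digits are held either as twelve one-digit chunks or as four three-digit chunks:

    # Hypothetical illustration of Miller-style re-coding: the digit string is
    # arbitrary, and the grouping mimics how a phone number might be chunked.
    digits = "617253100024"
    raw_chunks = list(digits)                                       # one digit per chunk
    recoded = [digits[i:i + 3] for i in range(0, len(digits), 3)]   # three digits per chunk
    print(len(raw_chunks), raw_chunks)   # 12 chunks: well beyond the 7 +/- 2 span
    print(len(recoded), recoded)         # 4 chunks: comfortably within the span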
An example of this type of behavior is often encountered when one learns a new
language. At first the language might seem like gibberish, and even repeating a sentence
phonetically may be difficult after hearing it only once. However, with some effort the individual
learns rules of pronunciation and syntax that allow him to more efficiently encode and process
information contained in a series of words. Gradually, a string of foreign words becomes a single
sentence with a single meaning that is easily remembered.
Miller closed his influential paper by stating that, "the span of absolute judgement and
the span of immediate memory impose severe restrictions on the amount of information that we
are able to receive, process, and remember" [Miller 1956]. He went on to suggest, though, that
the human capacity to re-code information is an extremely important tool for allowing
complicated synthesis of information despite a limited availability of cognitive resources.
Although Herbert Simon, a scientist influential in the study of cognitive science and
human information processing, has argued that the chunk capacity of short-term memory is
actually somewhere between 5 and 7, there is general agreement on the accuracy of
Miller's 7 ± 2 rule. Moreover, chunk size appears to have important implications for all highly
structured mental tasks such as mental arithmetic, extrapolation, and even problem solving,
particularly during the initial exploration and problem definition period in which the problem
must be characterized and a mental model produced.
More recently, short-term memory limitations have been explored through various types
of "digit span" experiments. In this type of investigation, human subjects are shown strings of
random integers of varying lengths and asked to remember them but are not given the opportunity
to rehearse the data (i.e. translate it from short-term to long-term memory). Such tests have found
that a digit span of 7 ±2 chunks for humans, the chunks being numbers in this case, is still a
relatively accurate value for the short-term memory span parameter [Reisberg 1991].
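The procedure of such an experiment can be sketched in a few lines of code (a hypothetical illustration only; the cited studies used their own controlled protocols):

    # Hypothetical digit-span trial: present a random digit string, then score recall.
    import random

    def digit_span_trial(length):
        target = [random.randint(0, 9) for _ in range(length)]
        print("Memorize:", " ".join(map(str, target)))   # the string would then be hidden
        answer = input("Recall the digits (space separated): ").split()
        return answer == [str(d) for d in target]

    # Span is typically estimated by running trials of increasing length
    # until recall becomes unreliable.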
An interesting experiment in a similar vein conducted by Chase and Ericsson showed just
how flexible and dynamic human information chunking capacity actually is. Their study of digit
span, conducted in the late 1970's, included one individual who had an extraordinary span of
around 79 integers. It turned out that the subject was a fan of track and field events and when he
heard the experimental numbers he thought of them as being finishing times for races. For
instance (3,4,9,2) became 3 minutes and 49 point 2 seconds, a near world record mile time
[Reisberg 1991]. The subject also associated some numbers with people's ages and with
historical dates, allowing him to create huge chunks in his short-term memory that stored large
amounts of numerical data very efficiently. When the subject was re-tested with sequences of
letters, rather than numbers, his memory span dropped back to the normal size of around 6
consonants. Although the 7-chunk limit seems to be meaningful it is clear that humans have
many resourceful ways for circumventing it. As one becomes skilled in a particular task the
extent and complexity of the information that can be fit into the short-term memory framework
can be surprisingly large.
The cognitive science researcher Stanislas Dehaene has presented another interesting
wrinkle on short-term memory span. In his fascinating book The Number Sense: How the Mind
Creates Mathematics, he suggests that the size of an individual's digit span and short-term
memory is actually highly dependent on the language he speaks. For instance, the Chinese
language has an extremely regular and efficient numbering scheme and words for numbers
happen to be much shorter than the English equivalents. Consequently, it is not unusual for
Chinese individuals to have a digit span of nine numbers. The Welsh language, in contrast, has a
remarkably inefficient numbering scheme with large words for the numbers. As a result the
Welsh have a shorter digit span and it takes on average 1.5 seconds more for a Welsh school child
to compute 134+88 than an American child of equivalent education and background. The
difference seems to be based solely on how long it takes for the child to juggle the representation
of the number in the short-term memory. Dehaene goes on to suggest that the "magical number
seven," often cited as a fixed parameter of human memory, is only the memory span of the
individuals tested in the psychological studies, namely white American college undergraduates.
He suggests that it is a culture and training-dependent variable that cannot necessarily be taken as
a fixed biological parameter across all cultures and languages [Dehaene 1997]. This research
does not necessarily dispute the overall size limit of the short-term memory but does suggest that
the efficiency with which information can be stored within this structure is probably influenced
by many factors.
The other major limitation of short-term memory besides its finite capacity is the amount
of time data can reside in the structure before it decays and is "forgotten." There is good
evidence, both scientific and anecdotal, that the short-term memory is quite volatile and that
information stored within it decays rapidly [Newell and Simon 1972]. Experiments conducted by
Conrad in 1957 provided some of the first hard evidence for the time-limited nature of
information in the short-term memory. Conrad had human subjects memorize short lists of
numbers and then recite them verbally at a fixed rate. The experiments showed that the recall
ability of human subjects was impaired when they were required to recite the numbers at a slow
rate (a number every 2 seconds) rather than a fast rate (a number every 0.66 seconds). Even
better results occurred when subjects were allowed to respond at a self-determined rate [Cowan
1995]. It appeared that slow recitation rates allowed subjects to forget many of the numbers
before they had the chance to recite them.
According to most recent studies the duration of information in the short-term memory
appears to be somewhat longer than a few hundred milliseconds and up to about 2 seconds
[Dehaene 1997]. This is obviously not a very long time; however, it is long enough to be useful.
Since the recall time for information stored in the short-term memory is only about 40
milliseconds per chunk, it is actually possible to scan the entire buffer multiple times before it
decays [Newell and Simon 1972].
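A back-of-the-envelope calculation using the figures quoted above makes the point (illustrative arithmetic only; the span, scan time, and decay time are the approximate values cited in this section):

    # Roughly how many full scans of a 7-chunk buffer fit within its decay time?
    scan_time_per_chunk = 0.040   # seconds per chunk [Newell and Simon 1972]
    span_chunks = 7               # Miller's 7 +/- 2
    decay_time = 2.0              # approximate STM retention limit [Dehaene 1997]

    full_scan = span_chunks * scan_time_per_chunk   # about 0.28 s per pass
    print(decay_time / full_scan)                   # roughly 7 complete passes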
Although there is much debate over the extent and nature of human short-term memory
limitations, the facts everyone can agree on are that they exist and that they impose a significant
bottleneck on the information processing capabilities of the mind. In fact, a large body of
research has supported the significance of short-term memory limitations with respect to the
performance of activities as diverse as game playing, expert system design, and personnel
selection [Chase and Simon 1973, Enkawa and Salvendy 1989]. Other experiments have
indicated that even though the retention of information in the short-term memory is time-dependent,
human performance on comprehension and problem solving tasks is likely to depend
more on the capacity limits of short-term memory than on its temporal volatility [Cowan
1995]. Moreover, when expertise is accounted for, short-term memory capacity seems not to vary
a great deal even over widely differing tasks [Newell and Simon 1972, Baddeley 1986, Cowan
1995]. As a result short-term memory limits are frequently cited as being one of the most
significant bottlenecks on human information processing capabilities, and are almost certainly a
major contributing factor to the difficulties posed to humans by complex problem-solving tasks
[Simon 1985].
2.2.3 LONG-TERM MEMORY
The long-term memory (LTM) is the mind's long-term information storage structure. In
contrast to the capacity-limited short-term memory there is no evidence that the long-term
memory ever becomes full over a human lifetime, or that irrelevant information is replaced with
more important data to conserve space or optimize other performance-related characteristics such
as recall time [Newell and Simon 1972]. Although data can certainly be forgotten, in most cases
information that is encoded in the long-term memory is retained indefinitely.
Naturally, such immense data storage capacity comes at a price. The amount of time
required to "write" information from the short-term memory, where it initially resides upon
uptake, to the long-term memory is much longer than the time required to merely encode
information in the short-term memory. It has been estimated that the time required to transfer
information from the short- to long-term memory is 2 seconds for a decimal digit and 5 seconds
for a "chunk" of information. Moreover the effort required is much greater than simply storing
information in the short-term memory [Newell and Simon 1972].
Fortunately, access time for the long-term memory is relatively quick, at approximately
100 milliseconds for a chunk of data. When information is "recalled" the process is roughly the
reverse of the encoding process, so data is shifted from the long-term memory back to the short-term memory or to the working memory, which is discussed later on in this chapter. Remarkably,
the required access time does not seem to depend on how recently the information of interest was
transferred into long-term memory [Newell and Simon 1972]. For most people childhood
memories, for instance, are as readily recalled as the plot of a recently viewed film or a recent
meal.
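As a rough illustration of this asymmetry between writing and reading (the number of digits and chunks below are hypothetical; the time estimates are those quoted above):

    # Committing a 10-digit number to LTM vs. recalling it later as a few chunks.
    write_time_per_digit = 2.0    # s per decimal digit written to LTM [Newell and Simon 1972]
    read_time_per_chunk = 0.100   # s per chunk retrieved from LTM [Newell and Simon 1972]

    digits_to_store = 10          # hypothetical phone-number-sized item
    chunks_at_recall = 4          # assume the digits were grouped into about 4 chunks
    print(digits_to_store * write_time_per_digit)   # ~20 s to memorize
    print(chunks_at_recall * read_time_per_chunk)   # ~0.4 s to retrieve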
2.2.4 EXTERNAL MEMORY
Another issue of importance, though not really a human cognitive parameter per se, is
external memory (EM). This is the term that cognitive scientists use to refer to external aids that
supplement the mind's capacity for information storage and manipulation, such as paper and
pencil or computer-based tools such as CAD software. Of course, external memory aids place
some cognitive demands on the user. For instance they may necessitate "index entries" in either
the STM or LTM to allow efficient retrieval of information from the external source [Newell and
Simon 1972].
Although storing information in an external memory format is generally easier than
encoding it in the long-term memory, access to data in the EM actually ends up being much
slower because of all the searching that must be done to collect it. In the end, the comparative ease
with which the LTM can be searched makes it superior to the external memory
for many purposes [Newell and Simon 1972]. Nonetheless, for complex tasks the EM has certain
advantages. One instance would be carrying out activities such as arithmetic calculations that are
well understood conceptually but difficult to execute in practice due to short-term and working
memory capacity limitations. In this case, the ratio of time required for mental multiplication of
two four-digit numbers to performing the same task with pen and paper is about 100:1. This is
most likely due to the small size of the short-term memory and the long "write time" to the long-term
memory which makes it difficult to store data during intermediate phases of the mathematical
operation [Newell and Simon 1972]. The utility of external memory tools, and their ability to
reduce cognitive load, has been shown in many contexts and will be discussed in more detail at
other points in this thesis.
2.2.5 THE CENTRAL EXECUTIVE FUNCTION
It is evident to most who study human cognition that some mental structure is responsible
for coordinating the complex information processing activities that the mind carries out, for
focusing attention, and for engaging in supervisory control over task and resource allocation
[Cowan 1995]. Cognitive scientists who study this poorly understood structure have dubbed it
the "central executive function." Although the central executive function clearly plays an
important part in all human information processing capabilities, the most well-characterized aspect
of this supervisory structure is its ability to direct and focus attention and select information from
the environment that is particularly relevant to the issue at hand. Apart from this, little of a
quantitative nature is understood.
The issue of a central executive function has turned out to be particularly significant for
those who research the mind's mathematical abilities. Dehaene [1997] has shown that human
ability to perform exact numerical calculations depends on the coordinated efforts of a number of
different regions of the brain including those areas responsible for language. As Dehaene has
noted, the dispersion of arithmetic functions over multiple cerebral circuits raises a central issue
for neuroscience: How are these distributed neuronal networks orchestrated?
A well-regarded current theory, for which there is not yet concrete proof, suggests that
the brain dedicates highly specific circuits to the coordination of its own networks. Brain
imaging techniques, such as Functional Magnetic Resonance Imaging (fMRI), have shown that
these circuits are probably located in the front of the brain in the prefrontal cortex and the anterior
cingulate cortex. They contribute to the supervision of novel, non-automated tasks and behaviors
such as planning, sequential ordering, decision making, and error correction, and constitute a
"brain within the brain" a central executive that regulates and manages resources and behavior
[Dehaene 1997]. Because the functions of this structure are so interwoven with other mental
processes it has proven particularly difficult to study using standard experimental methods that
have been instrumental in unlocking the nature of short- and long-term memory.
2.2.6 WORKING MEMORY: THE MIND'S INFORMATION PROCESSING STRUCTURE
The working memory is the mind's information processing structure. Most current
theories of the working memory postulate that it is a composite structure involving multiple
components and drawing on a number of cognitive resources [Baddeley 1986]. Researchers
believe that working memory consists of activated portions of the short- and long-term memory
that have been called into service by the central executive function for the purpose of performing
a specific processing task [Cowan 1995]. Evidence, beginning with studies conducted by Hitch
[1978], suggests that working memory plays an important part in mental arithmetic as well as in
counting and general problem solving activities [Baddeley 1986].
Some tests, notably Brainerd and Kingma [1985], have found that the performance of
reasoning tasks is unrelated to short-term memory. But it is unlikely that, in the tasks used for these experiments, all of the background information had to be kept in mind to solve the
problems correctly [Cowan 1995]. A much more common assumption is that working memory
and the mental systems responsible for short-term memory are broadly equivalent. So, tasks that
involve the storage and manipulation of an amount of information approaching the short-term
memory span should, and in fact do, have a severe and detrimental effect on task performance
[Baddeley 1986]. For instance, a number of studies have been conducted during which a subject
performed a mental task while trying to retain in memory a set of digits presented immediately
before the task. This "digit preload" was found to have noticeable adverse effects on the
subject's performance of the task [Baddeley 1986].
The human working memory also appears to process information in a serial manner, and
as a result it can only execute one elementary information process at a time. Studies have
estimated the "clock time" for a basic information processing task conducted in working memory
to be about 40 milliseconds [Newell and Simon 1972]. This does not imply that information is
accessed or scanned serially from short- and long-term memories, though. In fact, this is almost
certainly not the case. The long-term memory has been shown not to function in this way and
this appears true for the short-term memory as well [Newell and Simon 1972].
As a direct result of the serial data processing behavior of the working memory, there is a
fundamental attention conflict between processing information previously received and attending
to current information. Thinking about what I've just written, for instance, prevents full attention
to what I'm now writing. Unfortunately there is no way out of this problem as the conflict arises
from the limited, serial nature of human information processing capabilities [Lindsay and
Norman 1977].
2.3 HUMAN COGNITIVE PARAMETERS AND COMPLEX MENTAL TASKS
It goes without saying that the fundamental human cognitive parameters discussed above
significantly impact all higher-order information processing activities. To help provide a clearer
picture of how all these structures work together to allow the mind to cope with complex tasks I
will present a few concrete examples of this, including mental calculation and other numerical
tasks, decision-making, general problem solving, and learning and experience.
It is my contention that the design process can be viewed as a special case of a problem
solving activity so this idea features prominently in the following discussion and throughout this
thesis. This concept stems from the fact that the design process is really an information-processing task involving the identification of a need or the definition of a goal (i.e. to build a
shelter using available materials or to create a useful product, for instance), and the development
of a solution strategy that facilitates satisfying the needs or fulfilling the goal.
2.3.1 MENTAL CALCULATION
A fundamental human information processing task that has been well characterized and
found to be highly dependent on the structure of the mind itself is the mental manipulation of
numbers. Part of the explanation for this lies with the fact that simple numerical tasks, such as
memorization of strings of digits, the comparison of digit magnitude, and simple mental
manipulation of digits such as addition, subtraction, and multiplication, are tasks often used to
probe many different aspects of human cognition. The simplicity and well-defined nature of
these numerical tasks, and also the fact that they require no special domain-specific knowledge,
gives them great utility as experimental tools. As a result studies based on numerical probe tasks
have uncovered many interesting cognitive limitations concerning both the short- and long-term
memory as well as the information processing capabilities of the working memory. Many of
these are discussed in Stanislas Dehaene's book The Number Sense: How the Mind Creates
Mathematics. In his text the author explores many aspects of human mathematical ability and
how it springs from the fundamental physiological and information processing structures of the
brain.
One limitation on the human mind's numerical capabilities, the finite number of objects
humans are able to enumerate at once, has been known for some time. In a set of experiments
conducted in Leipzig, Germany, by James McKeen Cattell in 1886, human subjects were shown cards with black dots and asked how many dots there were. Cattell recorded the elapsed time
between the display of the cards and the subject's response. The experiment showed that the
subjects' response time generally grew slowly as the number of dots was increased from 1 to 3,
but that both the error rate and response time started growing sharply at 4 objects. For between 3
and 6 dots the response time grew linearly, with a slope of approximately 200-300 milliseconds per dot, corresponding roughly to the time needed to count one more object when quickly counting
out loud. These results have been verified many times using many different types of objects
standing in for the dots in the original experiment. Interestingly, the first 3 objects presented to a
subject always seem to be recognized and enumerated without any apparent effort, but the rest
must be counted one-by-one [Dehaene 1997, Dehaene et al. 1999].
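For concreteness, the response-time pattern described above can be sketched as a small numerical model. The constants below (a 500 ms baseline, a 20 ms increment within the subitizing range, and 250 ms per counted item beyond it) are illustrative assumptions chosen only to reproduce the qualitative shape of the data, not fitted values.

    # Illustrative model of the enumeration response times described above:
    # near-effortless "subitizing" up to three objects, then serial counting.
    # All timing constants here are assumed values, not fitted parameters.

    def enumeration_time_ms(n_objects, base_ms=500.0, count_ms_per_item=250.0):
        """Rough estimate of the time to report how many objects are shown."""
        if n_objects <= 3:
            # Subitizing range: response time grows only slightly with set size.
            return base_ms + 20.0 * (n_objects - 1)
        # Beyond the subitizing range, each extra object adds roughly the time
        # needed to count one more item aloud (200-300 ms in the studies cited).
        return enumeration_time_ms(3, base_ms, count_ms_per_item) + \
            count_ms_per_item * (n_objects - 3)

    if __name__ == "__main__":
        for n in range(1, 8):
            print(n, "objects:", round(enumeration_time_ms(n)), "ms")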
Human perception of quantity also involves a distance and magnitude effect. Robert
Moyer and Thomas Landauer, who published an article about their research in a 1967 issue of
Nature, first discovered this effect. In their experiment, Landauer and Moyer measured the
precise time it took for a human subject to distinguish the larger of a pair of digits flashed briefly
on a display. They found that the subject's response time always rose as the numbers being
compared got closer together and as the numbers became larger. It was easier for subjects to tell
the difference between 50 and 100 than 81 and 82, and easier still to discern between 1 and 5.
Further research on the human number line has supported Moyer and Landauer's result and also
suggested that it in fact seems to be scaled logarithmically. The mental number line allots equal
space between 1 and 2, 2 and 4, 4 and 8, and so on [Dehaene 1997].
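A similarly rough sketch can capture the distance and magnitude effects: if the compared quantities are placed on a logarithmically compressed mental number line, comparison slows as their log-scale separation shrinks. The functional form and timing constants below are assumptions used only for illustration, not a model drawn from the literature.

    import math

    # Toy model of the distance and magnitude effects in digit comparison: the
    # two numbers are placed on a logarithmically compressed mental number line,
    # and the decision slows as their positions on that line get closer.

    def comparison_time_ms(a, b, base_ms=400.0, scale_ms=150.0):
        """Rough estimate of the time to decide which of two numbers is larger."""
        separation = abs(math.log(a) - math.log(b))   # distance on the log scale
        return base_ms + scale_ms * math.exp(-separation)

    if __name__ == "__main__":
        for pair in [(1, 5), (50, 100), (81, 82)]:
            print(pair, round(comparison_time_ms(*pair)), "ms")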
Another significant numerical task that has been well characterized is mental calculation.
Calculation appears to be handled in two different ways by the brain depending on whether it is
exact calculation or estimation. Many estimation tasks appear to be evaluated by a specialized
mental organ that is essentially a primitive number-processing unit that predates the symbolic
numerical system and the arithmetic techniques learned in school [Dehaene 1997]. This innate
faculty for mathematical approximation can somehow store and compare mathematical values in
a limited way and can also transform them according to some simple rules of arithmetic. The
uniquely human ability to perform exact, symbolic calculation relies solely on a conglomeration
of non-specialized mental resources, including many of the brain's spatial and language
processing centers. This skill appears to be acquired solely through education [Dehaene 1997].
Work by a number of researchers to determine the time required to complete an exact
mental calculation has shown that calculation time does not increase linearly with the size of the
numerical values involved, as would be expected if a "counting" technique were used to sum or
multiply two numbers. In fact, calculation time appears to depend polynomially on the size of
the largest quantity involved and even disparate mathematical activities such as multiplication
and addition seem to demand approximately the same computation time for problems involving
numbers of similar magnitude. The likely explanation for this phenomenon is that such simple
numerical problems are not solved by counting up to a solution but that the answer is actually
drawn from a memorized table residing in the long-term memory. It takes longer to recall a
solution to a math problem, and it is also harder to store and manipulate the data in the short-term memory, as the operands get larger; this accounts for the growth in calculation time as operand
size increases [Dehaene 1997].
2.3.2 DECISION MAKING
A mental activity preliminary to problem solving is decision-making. Decision-making
involves selecting a course of action from a specific list of alternatives. As simple as it sounds, it
is in fact a very difficult psychological task to compare several courses of action and select one
[Lindsay and Norman 1977]. Of course the list of alternatives need not be an exhaustive list, and
the nature of this list depends very much on the mental model of the problem, the experience of
the problem solver, and his ability to increase the efficiency of a search for relevant information
stored in the long-term memory through experience-based heuristics.
Evaluating each possible choice from an array of options on its merits actually places
significant demands on short-term memory capacity by requiring that a choice be held in memory
to be compared with the other possibilities. Sometimes there is no clear way to do the
comparison or no optimal metric that can be identified and calculated for each alternative. Also,
having a solution space that is small enough to be searched exhaustively and analyzed all at once
is a rare event [Lindsay and Norman 1977]. Most problems encountered in the real world would
stretch the human short-term memory to its outer limits without the help of decision-making and
search strategies, heuristics, or other information-processing aids.
Not surprisingly, the human decision-making process has been shown to be quite fallible.
When analyzing a situation, humans often erroneously emphasize its most salient or easily
imaginable features to the exclusion of other important factors. Also, initial judgements
concerning unfamiliar phenomena tend to become filters through which the absorption and
processing of all new information is conducted. A widely noted manifestation of this bias is the
persistence of an incorrect perception despite clear and repeated evidence to the contrary [Chi and
Fan 1997]. Cognitive limitations such as these, especially with regard to decision-making under
uncertainty, fall under the general concept of the "bounded rationality" of human information
processing agents first suggested by Simon [1957].
2.3.3 INFORMATION PROCESSING MODELS OF THE PROBLEM SOLVER
In the context of cognitive science, the most sensible way to view the design process is as
a "problem solving" activity. Herbert Simon [1978] has put forth a useful information-processing
paradigm of the problem solver. In his proposed scenario, problem solving is an interaction
between a "task environment" and the problem solver. The task environment is the problem
itself. Simon's view holds that the structure of the problem considerably influences the process of problem solving by placing constraints on the problem-solving activity.
Problems can be either well- or ill-defined. A well-defined problem has a clear initial
state, an explicit set of legal operations (allowed moves) that the problem solver can take to
complete the task, and a clear goal. A good example of this might be the famous "Tower of
Hanoi" problem, which involves moving rings of varying sizes from one peg to another. Here the
problem solver is provided with all the information necessary to find a solution to the problem: an
initial state in which all rings are on one peg, the goal of moving the rings to another peg, and a
complete set of legal operators and operator restrictions (i.e. a larger ring cannot be placed on top
of a smaller ring, only one ring may be moved at a time, etc.).
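Because the Tower of Hanoi is completely specified, a full solution procedure can be written down mechanically. The short sketch below generates the optimal move sequence for the standard three-peg version, with hypothetical peg labels A, B, and C.

    # The Tower of Hanoi is fully specified - initial state, goal state, and
    # legal operators - so a complete solution procedure fits in a few lines.

    def hanoi(n, source, target, spare, moves=None):
        """Return the move sequence that transfers n rings from source to target
        without ever placing a larger ring on top of a smaller one."""
        if moves is None:
            moves = []
        if n == 1:
            moves.append((source, target))
            return moves
        hanoi(n - 1, source, spare, target, moves)   # clear the way
        moves.append((source, target))               # move the largest ring
        hanoi(n - 1, spare, target, source, moves)   # restack on top of it
        return moves

    if __name__ == "__main__":
        solution = hanoi(3, "A", "C", "B")
        print(len(solution), "moves:", solution)     # 2**3 - 1 = 7 moves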
In contrast, an ill-defined problem is one for which incomplete or little information is
provided about the initial state, goal state, operators, or some combination of these factors. The
design process, because of its open-ended nature, almost invariably falls into the ill-defined
problem category. Though the starting point and the goal of the process are likely to be clearly
spelled out, relatively speaking, the allowable operations are often thoroughly ambiguous. In
other words, the problem-solver must supply information that is not given.
However, even ill-defined problems are covered by the general mechanisms of problem
solving and are really a subset of well-defined problems, so they can be readily examined and
approached through the same overall structure [Simon 1973]. Moreover, even the fact that a
particular problem may be well defined does not make it easy to solve. For instance the game of
chess is very well defined, but it can be very difficult indeed to beat a skilled opponent.
To solve problems, an individual must construct a mental representation of all the
information describing the problem. Simon refers to this representation as the "problem space."
For instance, the problem space for a game of chess might be the layout of the board, the starting
position of the pieces, the goal of victory over the opponent, and an understanding of the allowed
moves for each type of piece. An important distinction to make is that the problem space is not
the actual problem but an individual's representation of the problem, and it may contain more or
less information than the actual problem. Taking the chess example a bit further, the problem
space for a novice would clearly be quite different from that of a grand master, who would be
able to build a more sophisticated mental model of the situation. It is also useful to remember
that many problem descriptions contain a great deal of implicit information. In the game of
chess, for example, it is usually taken for granted that the players are allowed to look at the board
and pieces.
While solving a problem, the individual progresses through various "knowledge states."
These knowledge states contain all the information an individual has available at each stage of the
problem solving process. This includes both immediately accessible data in the short-term
memory and knowledge that can be retrieved from the long-term memory by a search operation.
The short- and long-term memory structures thus hold the contents of these knowledge states throughout the problem solving process [Newell and Simon 1972].
By applying "mental operations" to knowledge states in the working memory the
problem solver can change from one knowledge state to another with the aim of reaching the final
goal state, the solution to the problem at hand. Because these operations are taking place in the
mind of the problem solver they do not necessarily refer to the objective state of the problem
[Kahney 1993]. Significantly, this leaves the door open for the general information processing
characteristics of the problem solver to have major effects on the outcome of the problem solving
exercise. Some important parameters are, not surprisingly, short-term memory capacity and
domain-specific experience, reflected by the depth of information available in the long-term
memory for use on a particular type of problem.
2.3.3.1 MEMORY AND PROBLEM SOLVING
Obviously, various capacity limits play an important part in problem solving by limiting
the complexity of the mental problem representation and restricting the complexity and size of the
search space or problem space. Much has been made of the fact that the size of the problem
space for a given problem can present a bottleneck to human problem solvers. As such, the
limitations of short-term and working memory place bounds on the amount of information that
can be searched, compared, or otherwise processed at one time.
Chess, for instance, has around 10^120 possible games and is considered difficult by most.
Obviously humans do not, and cannot, search spaces of that size due to memory and processing
limitations. However, a search-based problem solving strategy would not be much simpler or
more feasible if the problem space were only 10^30 or 10^10 games. In fact, the most sophisticated
and powerful chess computers actually restrict their exhaustive problem space searches to around
10^6 games [Simon 1989]. It has even been argued that a game as simple as tic-tac-toe would be
virtually impossible for humans if played by reasoning alone because of the large size of the
search space required if all possible options were considered for each move [Lindsay and Norman
1977]. Formulating and comparing all these alternatives would completely overwhelm the short-term and working memory buffers.
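The scale of even this "trivial" search space is easy to verify. The sketch below enumerates every legal tic-tac-toe game by brute force, treating move order as significant and stopping each game at a win or a full board, which already yields on the order of a quarter of a million distinct games.

    # Brute-force enumeration of the tic-tac-toe game tree. Even this "simple"
    # game has far too many complete games to be reasoned through exhaustively
    # in short-term and working memory.

    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
                 (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def count_games(board=None, player="X"):
        """Count every distinct legal game (move order significant)."""
        if board is None:
            board = [None] * 9
        if winner(board) or all(board):
            return 1                       # the game is over: one complete game
        total = 0
        for square in range(9):
            if board[square] is None:
                board[square] = player
                total += count_games(board, "O" if player == "X" else "X")
                board[square] = None       # undo the move and keep searching
        return total

    if __name__ == "__main__":
        print(count_games())               # 255,168 distinct games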
2.3.3.2 LEARNING, EXPERTISE, AND PROBLEM SOLVING
If the only techniques available for problem solving were search and processing the
situation would indeed be dire. However, the ingenuity of the problem solver often allows him to
restructure the problem into a manageable format or develop other tools and techniques that
reduce the load on short-term and working memory, which are generally the bottlenecks on any
information-processing task. One way this limited processing capacity is supplemented is
through use of the long-term memory, which allows the recall of heuristics, similar
situations, or useful techniques learned from other endeavors [Lindsay and Norman 1977]. It is
such collective experience and skill that allows the search space of otherwise intractable problems
to be reduced to a manageable size [Simon 1989]. Expertise, gained through rehearsal and
learning, allows patterns to be observed and remembered and successful problem solving
methods to be developed and retained for later use.
Naturally, this information also serves as a structure that can be applied to groups of
problems involving related configurations. Problems are represented in the memory in such a
way as to allow a new problem to be recognized as similar to or different from a previously
encountered task. Information retained from previously found solutions to problems can be
employed on identical problems, called isomorphic problems because they have an identical state
space, and on similar or homomorphic problems, with a like but not identical structure. Of
course, the retained solutions of earlier problems can be used and adapted to a new problem. Thus
the problem solving and problem reformulating processes go hand in hand [Kahney
1993].
Differences between experts and novices with respect to their problem solving abilities
have been identified and characterized in a number of cognitive science studies. Experts
generally have a more efficient and rich knowledge structure than novices on which to base their
search for a solution to a problem. Experts are also better at generalizing problems and
integrating, extrapolating, and obtaining information to form a solution strategy. Rather than
approaching problem solving in a sequential manner, experts take a system view of the problem.
Novices, in contrast, have a less rich information structure on which to base a solution strategy.
Thus, they tend to oversimplify problems and ignore important information, which leads to a less
integrated approach to finding a solution [Newell and Simon 1972, Chase and Simon 1973]. Not
surprisingly expertise, and the attendant increase in domain-specific knowledge, is a major factor
in the relative ability of problem solvers.
2.3.3.3 EXPERIENCE AND IMPROVED PROBLEM REPRESENTATION
A chunking model of memory would suggest that extensive practice and learning would
allow data to be more efficiently categorized and stored in a hierarchical manner, increasing the
amount of information that can be maintained in a single chunk of short-term memory space.
Likewise, experts tend to have a more efficient information chunking scheme than novices, which
allows them to store and manipulate larger amounts of data in their short-term and working
memories. The overall number of chunks retained for novices and experts alike appears to be
about the same and is approximately what would be predicted from immediate recall tests of
numbers or consonants, in other words seven chunks or so. But, in the case of the experts the
chunks contain more and richer information than those of novices.
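The arithmetic behind this account is simple, and a toy calculation makes the point: with a roughly fixed span of about seven chunks, recall capacity grows in direct proportion to how much domain content each chunk packs. The chunk sizes assigned below are assumed values used only for illustration.

    # Rough arithmetic behind the chunking account of expert memory: the number
    # of chunks held is roughly constant, but an expert's chunks pack more
    # domain items each. Chunk counts and sizes are assumed illustrative values.

    CHUNK_SPAN = 7                      # roughly constant across individuals

    def effective_span(items_per_chunk):
        """Domain items that can be held when each chunk packs this many items."""
        return CHUNK_SPAN * items_per_chunk

    if __name__ == "__main__":
        print("novice, digits:        ", effective_span(1))   # ~7 digits
        print("abacus expert, digits: ", effective_span(2))   # ~14-16 digits
        print("abacus expert, letters:", effective_span(1))   # no transfer of skill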
Interestingly, this increase in short-term memory capacity only extends to the domain in
which the experience has been acquired. An extraordinarily efficient chunking ability for one
type of data structure does not transfer over to other data types [Baddeley 1986]. For instance, a
number recall experiment performed on Japanese grand master abacus users, conducted by
Hatano and Ogata in 1983, confirmed that they had exceptional numerical digit spans of about 16
decimal digits as evidenced by their extraordinary ability to store and manipulate numbers when
performing abacus calculations. However, upon further examination the verbal short-term
memory spans of the abacus masters turned out to be of entirely normal dimensions [Baddeley
1986].
Another good example of how extensive experience affects human ability to cope with
complex tasks is the case of chess grand masters. These extraordinarily talented players are often
credited with a superior ability to see "into the future" and divine how a chess game will play out.
However, closer inspection reveals that one major reason for the apparent skill of grand masters is
the tremendous amount of experience they have acquired with the game. Although problems are
generally evaluated in the short-term and working memory, it is the more highly developed long-term memories of problem-related experiences that give expert problem solvers the edge over
amateurs. Thus, the increased experience of grand masters provides them with a large store of
useful knowledge and allows them to more efficiently search pertinent information stored in the
long-term memory through the development of heuristics that reduce problem complexity and
search space size [Newell and Simon 1972].
Another reason for the superior performance of experienced chess players is their ability
to encode information into larger, more sophisticated perceptual chunks, which permits a more
complex evaluation of the problem space [Chase and Simon 1973]. Studies have shown that
many chess grand masters are able to view configurations of chess pieces on a board as single
information chunks, enabling them to use the short-term memory and processing resources they
have more efficiently to compare and evaluate possible strategies in the working memory.
Interestingly, the ability of experienced chess players to remember configurations of chess pieces
turns out to be highly dependent on whether or not the configuration is the result of an actual
game or merely random [Chase and Simon 1973]. Random piece configurations are actually no
easier for the grand master to remember than for the novice player, most probably because the
random configurations make no sense in the context of an actual chess game and are thus not part
of the expert's domain-specific knowledge base.
2.3.4 INTERACTING WITH DYNAMIC SYSTEMS
A number of psychological experiments, as well as many disastrous real-world accidents,
have demonstrated that humans generally have a limited capacity for dealing with phenomena
that are dynamic or nonlinear in time. Research conducted by Dietrich Dörner, who explored
human subjects' abilities to control time-varying scenarios using complex interactive models of
social systems, has indicated a number of key characteristics regarding human ability to
understand dynamic systems.
Humans appear to have great difficulty evaluating the internal dynamics of systems and
often resort to generalizing about overall behavior on the basis of local experience. Thus, they
tend to rely on techniques that extrapolate from the present moment when creating a mental model of a
system's dynamics, and the behavior of a system at the present moment is often the
overwhelming dynamic effect considered when a prediction of future behavior is sought. As a
result human interaction with such systems is not guided by the differentials between each state
of the system but by the situation at each stage, and the immediate situation is regulated rather
than the overall process [Dörner 1996]. People also tend to utilize simple, linear models of time-dependent behavior, regardless of the true nature of the system, making control of dynamically
complex and non-linear systems very difficult. This lack of attention to dynamic behavior as it
unfolds also leads to a tendency to overcorrect in many such situations.
The existence of a time lag in a system, discussed in detail within an engineering design
context in Section 3.3.4 of this thesis, makes the phenomenon of human insensitivity to system
dynamics even more pronounced. In another study, for instance, Dörner [1996] researched the
implications of such a time delay. Again using the modeling and control of social systems as a
test case, he found that a time delay interferes with dynamic extrapolation techniques by making
it more difficult for humans to capture an instantaneous derivative of a system's behavior.
Time lag phenomena, extrapolation from the moment, and fixation on linear behavior can
conspire to create the familiar oscillatory behavior that humans often provoke in systems they are
trying to control but for which they are unable to form an accurate mental model [Dörner 1996].
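These effects are easy to reproduce in a toy simulation. The sketch below uses assumed gains, dynamics, and delay length rather than anything drawn from Dörner's experiments: a "controller" corrects proportionally based only on the currently observed state of a process whose response appears several steps later, so the corrections accumulate in the delay pipeline and the state overshoots and oscillates around the target instead of settling onto it.

    # Toy simulation of controlling a delayed process by reacting only to the
    # currently observed state. The target, gain, and delay are assumed values
    # chosen to make the overshoot and oscillation easy to see in the output.

    def simulate(delay_steps=3, gain=0.5, target=1.0, steps=40):
        state = 0.0
        history = [state]
        actions = []
        for t in range(steps):
            # The "controller" corrects proportionally to the error it sees now,
            # ignoring the corrections still working their way through the delay.
            actions.append(gain * (target - state))
            if t >= delay_steps:
                state += actions[t - delay_steps]   # delayed effect of an action
            history.append(state)
        return history

    if __name__ == "__main__":
        for t, x in enumerate(simulate()):
            if t % 4 == 0:
                print(f"t={t:2d}  state={x:+.2f}")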
3 PRODUCT DESIGN AND DEVELOPMENT
This chapter is organized into three basic sections. In the first section I give a broad
overview of the engineering design and product design and development field, discuss the basic
structure of the process, and present some of its fundamental concepts. The second section
focuses on a few ways of modeling the design process, or parts thereof, that are used frequently in
a product development context. I have chosen to discuss these specific techniques because they
are directly related to the manner in which the computer-based design task surrogate program,
which forms the core of the research conducted for this thesis, simulates the design process. The
third section of this chapter discusses some instances in which human cognitive limitations
receive either implicit or explicit consideration in a product design and development context.
Both specific research and practical tools that highlight the importance of human cognitive
factors are discussed.
3.1 BACKGROUND
The engineering design research field is quite active today, with both industrial
practitioners and university based researchers participating in the discourse. Although the words
engineering and design might sound incongruous to some when used in the same sentence, this
area of design is unique in the overwhelming impact it has on the world. The sale of
products from a basic sheet of paper to the most sophisticated jet aircraft or communications
satellite, all of which require careful design, is a fundamental force driving the economies of the
world. This impact of engineering design and product design and development is not only
economic, but can be appreciated on a more visceral level through the consumer products most of
us use every day: computers, automobiles, telephones, and so on. Needless to say, life without
many of these products would be very different indeed. Each of these devices was designed in a
way not unlike the manner in which an architect would design a dwelling. A fundamental need exists to be satisfied, but the designer has great latitude to choose the manner in which this will be
done.
The engineering design environment is also going through a prolonged period of rapid
and fundamental change. Technological advances, such as the microchip and innovative new
materials and production processes, have allowed for the design of ever more sophisticated and
complex products. Although the possibilities for the designer are now more wide-ranging than
ever, the challenge of guiding the design process to a successful conclusion has never been
greater.
Increased globalization, mergers, and joint ventures have introduced tremendous
opportunities for achieving economies of scale, and this global competitive environment has
caused companies to be ever more mindful of the efficient use of resources. This has necessitated
offering a wide array of products for the increasingly demanding customer all at the lowest
possible cost. Naturally, heightened competition has also placed great pressure on the designer.
The consumer's desire for product variety and low cost forces companies to offer more diverse
and carefully engineered product lines, leading to added complexity at almost every level of the
product development process. Moreover, increasing market pressure has necessitated ever
shorter product development cycle times and, at the same time, increased flexibility to meet the
dynamics of consumer demand.
The broad demands posed by the product design and development cycle require that
those involved in the process be able to synthesize information and knowledge from a wide
variety of sources. Basic science and engineering is, of course, necessary at a fundamental level
to develop technology and drive innovation. However, the product designer must also have some
understanding of marketing so that he may better understand the consumer and identify possible
market opportunities. An understanding of the product development organization and project
management is also important as resources, both human and materiel, must be deployed in the
most advantageous way possible. Finally, the economic, financial, political, and legal factors that
are also part of the product design and development process must be understood. In short, the
design process is highly complex and places intense cognitive demands on those involved
because of the scale and multiple interrelated tasks and goals that characterize such an activity.
3.1.1 THE PRODUCT DEVELOPMENT PROCESS IN A NUTSHELL
Because of its open-ended nature the product development process could easily be
divided into any number of segments in an arbitrary number of ways. What the "best"
decomposition is, of course, depends upon who is being asked. However, Ulrich and Eppinger
[1995] have made a good case for a five-part decomposition. In their view the product design and
development process can be broken down into concept development, system-level design, detail
design, testing and refinement, and production ramp-up tasks. This sensible division of the
design process, and some of the activities that take place during each segment, can be seen more
clearly in Figure 3.1.
FIGURE 3.1 - The five stages of the product design and development process [adapted from Ulrich and Eppinger 1995]. (The figure shows Phase 1, Concept Development; Phase 2, System-Level Design; Phase 3, Detail Design; Phase 4, Testing and Refinement; and Phase 5, Production Ramp-Up, together with supporting activities such as concept generation, concept selection, establishing the product architecture, industrial design, design for manufacturing, prototyping (CAD models, physical prototypes, etc.), and project economics and finances.)
Naturally, as the design process progresses and fundamental issues are resolved, the
process moves from being qualitative in nature through states in which progressively more
quantitative tools and methodologies are available to the designer. During early stages of the
product development process discovering and investigating market needs, and generating
concepts to satisfy them, are the primary activities. After a general concept design is settled
upon, and this concept can be quite abstract, more work is done to determine the fundamental
architecture of the product. This often involves input from many sources, including marketing studies, as well as information from experts in engineering, manufacturing, industrial design, and
management, who all form the core of a multi-disciplinary product development team. Later
stages of the design process involve quantitative modeling and analysis, CAD tools, and other
analytical methods.
Not surprisingly most of the quantitative design-related tools employed during product
development are only deployed in later stages of the process when rigorous analysis is facilitated
by a more concrete idea of the structure of the design. However, there is a general consensus
that developing more rigorous "upstream" methods and design tools would be highly beneficial.
It has been estimated that most of the eventual costs and other resource commitments of a product
development project (up to 60%, in fact) are actually locked-in during the conceptual phase of the
design process [Welch and Dixon 1994]. Figure 3.2 shows how tool availability varies with the
progression of the design task.
FIGURE 3.2 - Tool availability during various phases of the product development process [adapted from Whitney 1990]. (The figure plots the availability of automated tools against the five phases of the product development process, with design heuristics dominating the early phases and quantitative analysis tools becoming available in the later phases.)
The goals of product design and development research, not surprisingly, are similar to the
goals of industries that fund the labs and universities conducting such research. They include
improved product quality, reduced cost, reduced time-to-market, increased efficiency, increased
flexibility, increased robustness, and better management of the uncertainty inherent in any design
effort. The success of the product development process is generally gauged by these same
metrics, which are often traded off against one another to optimize the process [Krishnan and
Ulrich 1998].
The increasingly decentralized product development process often requires orchestrating
the collaborative efforts of many dispersed entities, including multi-disciplinary teams, sub-
contractors, and specialists. Because the scale and extent of the design process have grown rapidly, good communication is vital, both across firm boundaries and within firms.
3.1.2 PROBLEM STRUCTURE: AN ENGINEERING APPROACH TO DESIGN
So, how are design problems conceptualized and modeled by engineering designers?
Arguing for a design methodology that corresponds to the underlying structure of the design
problem has been a goal of notable design researchers such as Herbert Simon [1969], Christopher
Alexander [1964], and many others [Eppinger et al. 1989]. Naturally, the engineering design
field has also adopted a structural approach as this philosophy lends itself to the quantitative
techniques that are the currency of the scientific and engineering communities.
It follows that a broad look at the engineering design literature reveals that most work is
going on in areas for which powerful representational or computational schemes are available.
The research is certainly diverse in scope and scale but is tied together by a common reliance on
quantitative methods. Research often focuses on macro scale aspects of the product development
process or on specialized techniques applicable in specific situations. This tends to result in the
development of tools and methodologies that are most applicable to designs at a more advanced
stage of maturity when it is much easier to assign numerical models to parameters of interest.
Some research areas of particular significance, all of which have a mathematical basis, are
parametric optimization, Design Structure Matrices, Computer-aided design, and Robust Design
[Krishnan and Ulrich 1998]. All these methods depend on a well-structured analytical
representation of either the design itself or the design process that can then be somehow
manipulated by the designer until desired product or process characteristics are achieved in the
model.
3.1.3 MANAGING DESIGN RELATED INFORMATION
Concepts from information theory have also had a significant impact on the product
development field just as they have influenced many other high-technology areas and, as
previously discussed, the study of cognitive science. In a sense product development can actually
be seen as a complex information-processing activity involving hundreds or even thousands of
decisions [Clark and Fujimoto 1991]. Some parts of the process are information sources, others
are sinks, and most are both. Information flows through the network of a given product
development process from node to node. Such a network can be envisioned on many different
scales, from the information dependencies pertaining to the physical interrelationships of
mechanical parts in a product to the information flow between different parts of a large team
working together on the design itself. So, nodes in the process might represent a single
dimension of a particular piece part, an entire part, or even a sophisticated task that is part of a
much larger process. In this way product development tasks and decisions form a complex
network of information-based interdependencies [Krishnan and Ulrich 1998].
Naturally, the complexity and interrelatedness of the product development process leads
to a few key fundamental issues. First of all, the existence of many levels of interdependencies
indicates that the product design and development process is inherently iterative [Eppinger 1999].
So overall there is a challenge to optimally organize projects in terms of work and information
flow to minimize costly iteration. Also, since the product development process can be envisioned
as a web of information dependencies, it stands to reason that large-scale projects may be
advantageously decomposed into many smaller projects that are easier for designers to handle.
Coping with such considerations necessitates the development of models of specific
design activities and also of the product design process itself. Modeling can help answer
questions such as how information necessary for the product development process can be
gathered and manipulated efficiently into the best, or "optimum" design; how the design process
can be broken down into smaller, easier to handle sub-problems; and, how these problems can be
solved more efficiently, more quickly, and in a way that leads to a superior overall solution to the
design problem.
Before discussing a few specific modeling and analysis techniques that are significant to
this thesis, an important point must be mentioned. By their very nature models employed in
product development research are at best coarse approximations of the phenomena under study,
especially when considered in contrast to those used in the physical sciences where the language
of mathematics seems to map in a remarkably consistent way to the physical world [Krishnan and
Ulrich 1998]. While models are useful reflections of the design process, and many have in fact
proved useful in practice, they are unlikely to be representative of the fundamental nature of the
design process or reflect any basic laws governing such an activity.
3.2 MODELING AND VISUALIZING THE DESIGN PROCESS
3.2.1 CONCEPT DEVELOPMENT
The most fundamental stage of the product design process is the concept development
phase. At this point there are a few basic decisions to be made concerning the intended
functionality of the product and the most appropriate embodiment, physical or otherwise.
Important questions are, what will be the product concept (i.e. what should it do)? What is the
product architecture (i.e. how should it do it)? What are the target values of product attributes?
What variants of the product will be offered? And, finally, what is the overall physical form and
industrial design of the product [Krishnan and Ulrich 1998]? As has been mentioned earlier, this
early stage of the development process actually has a very significant impact on the overall
outcome of the effort. Poor design can slow the rate of production ramp up and increase the
lifetime expense of a product through elevated repair costs or other such functional inefficiencies
[Krishnan and Ulrich 1998].
3.2.2 PARAMETER DESIGN
Once the basic concept of a product has been determined, and an overall product
architecture decided upon, many of the following steps in the design process involve parameter
design activities. Essentially, this method models a product, or parts thereof, as a simple system with inputs, often called "design parameters" or "control factors," that the designer can control, and outputs, or "responses," that can be measured. In this scheme, the desired performance characteristics of the product are described as the response of the system, and the goal of the parameter design process is to adjust the design parameters until the response of the product or system reaches an intended value, represented by the system's "signal factor."
Fundamentally, the idea of parameter design is a subset of Genichi Taguchi's Robust
Design methods. Taguchi developed these tools to increase the efficiency of the product design
and development process by helping designers create products that are less sensitive to sources of
variation. More complex and complete versions of the parameter design process, such as that
presented in Phadke [1989], also incorporate "noise factors," which are statistically described
parameters that the designer cannot control explicitly but that do affect the response of a product.
An example of a noise factor might be variance in the strength of steel being used for the
manufacture of a bolt. Because the designer may not be able to directly control this variance she
may wish to account for its effect on the system by modeling it as a noise factor input. Figure 3.3
diagrams a generalized parameter design scheme.
The main point I wish to make by discussing parameter design, and the one that is most
significant in terms of understanding this thesis, is that a product can be modeled as a system of
inputs and outputs, provided that both can be quantified in a meaningful way. This process is
typically performed after the product concept and its basic architecture have been clearly
established and when the creation of more rigorous models is possible using CAD systems,
mathematical techniques, or even physical prototypes that can be used for experiments [Krishnan
and Ulrich 1998].
FIGURE 3.3 - The generalized parameter design process. (A product or system maps a controllable input parameter (control factor) and an uncontrollable input parameter (noise factor) to a measurable output parameter (response), which is compared against the intended output parameter value (signal factor).)
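As a purely numerical illustration of this input-output view, the sketch below implements a toy version of the scheme in Figure 3.3: a response that depends on one control factor plus a random noise factor, and a simple adjustment loop that moves the control factor until the measured response approaches the signal factor. The response function, noise level, target, and step size are all assumptions chosen for illustration.

    import random

    # Numerical sketch of the parameter design scheme in Figure 3.3. The
    # response function, noise level, target, and step size are assumed values.

    def response(control_factor, noise_std=0.05):
        """Toy product model: the measurable response to one control factor."""
        noise_factor = random.gauss(0.0, noise_std)    # uncontrollable input
        return 2.0 * control_factor + noise_factor

    def tune(signal_factor, steps=25, step_size=0.1):
        """Adjust the control factor until the response nears the signal factor."""
        x = 0.0
        for _ in range(steps):
            error = signal_factor - response(x)
            x += step_size * error                     # proportional adjustment
        return x

    if __name__ == "__main__":
        random.seed(0)
        target = 3.0
        x = tune(target)
        print(f"control factor ~ {x:.2f}, response ~ {response(x):.2f}, target {target}")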
A great deal of the literature takes for granted the ability to set values for product
parameters at will and essentially assumes that arbitrary combinations of design parameter and
response specifications are possible. Unfortunately, this is not necessarily the case [Krishnan and
Ulrich 1998]. For instance, while it may be possible to achieve any combination of color and
viscosity in paint, such flexibility would most certainly not be feasible for combinations of the
strength and weight of an airplane's wing or the gas mileage and power of an automobile engine.
In most complex designs a sophisticated and highly interrelated mapping exists between the
design parameters and system response, and the adjustment of a single design parameter leads to
changes in many of the system's response variables. Obviously such interdependencies are an
important issue. How then are the many subtle interrelations between various aspects of a
product captured so that they can be modeled, understood, and manipulated?
3.2.3 THE DESIGN STRUCTURE MATRIX
One valuable tool is the Design Structure Matrix (DSM), first suggested by Steward
[1981]. The DSM is a matrix-based representation that is typically used to visualize information
dependencies in the product development process, most often those between specific design tasks.
In this model, tasks and their information interrelationships are basically modeled as a matrix of
coefficients, with tasks listed once across the top of the matrix as column headings and once
down the side of the matrix as row headings.
A design project with n tasks would be represented by an n x n design structure matrix. If task i provides information to task j, then the corresponding matrix element (in row j, column i) is non-zero. The diagonal
elements of a DSM are always non-zero because tasks always depend on themselves for
information. Nonzero elements above the diagonal of the matrix indicate information that is
being fed back from tasks occurring later in the process to earlier tasks, while non-zero elements
below the diagonal indicate information dependencies of later tasks on previously completed
ones. An example of a DSM for a design process with four tasks, labeled A through D, is shown
in Figure 3.4.
FIGURE 3.4 - An example of a Design Structure Matrix representation (four tasks, A through D, with "x" marks indicating information dependencies).
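In computational terms a DSM is nothing more than a square array of coefficients, and a few lines of code suffice to represent one and read off its dependencies. The four-task example below uses assumed dependencies (it does not reproduce Figure 3.4 exactly) and follows the convention used above: the entry in row i, column j is non-zero when task i needs information from task j, so non-zero marks above the diagonal represent feedback.

    # A DSM as a square array of 0/1 coefficients. The four-task dependencies
    # below are assumed for illustration only.

    TASKS = ["A", "B", "C", "D"]
    DSM = [
        [1, 0, 1, 0],   # A depends on itself and on the later task C (feedback)
        [1, 1, 0, 0],   # B depends on A (feed-forward)
        [0, 1, 1, 1],   # C depends on B and on the later task D (feedback)
        [0, 0, 1, 1],   # D depends on C (feed-forward)
    ]

    def dependencies(dsm, tasks):
        """Print each off-diagonal information dependency and its type."""
        for i, row in enumerate(dsm):
            for j, coupled in enumerate(row):
                if coupled and i != j:
                    kind = "feedback" if j > i else "feed-forward"
                    print(f"{tasks[i]} needs information from {tasks[j]} ({kind})")

    if __name__ == "__main__":
        dependencies(DSM, TASKS)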
In a product design context the DSM was originally intended to identify dependencies
between tasks to assist with the generation of an optimum development process sequence. In this
case, information flows in the matrix were examined and used to characterize tasks as "serial,"
"parallel," or "coupled," based on their information dependencies on other tasks. Serial tasks rely
solely on "upstream" information from previously completed tasks, parallel tasks require no
information exchange, and coupled tasks are dependent on one another for information, as shown
in Figures 3.5 and 3.6.
The identification of interdependent coupled tasks is important because such information
interdependencies can significantly impact the design process. Parallel tasks can usually be
performed concurrently, and serial tasks in sequence, so that they contribute little to the overall
system-level complexity of the design process. However, coupled tasks in particular are
challenging for designers to cope with due to the complexity of information exchange involved
and because they indicate the occurrence of iteration. Iterations and coupling are not only
difficult for the designer to manage but also increase the time that the design process requires and
can add significantly to its overall expense.
FIGURE 3.5 - Types of information dependencies (parallel, serial, and coupled task pairs).
FIGURE 3.6 - DSM showing a fully coupled 2 x 2 task block (the coupled block and the individual information dependencies are marked within the matrix).
Algorithms and techniques have been developed to decompose, or "tear," the elements of
the DSM and reorganize them so that the effects of the coupled tasks or design parameters are
minimized. In this case, the key is to optimize information flow in the design process so that
better results are achieved in a shorter time. Generally, this means rearranging the DSM so that it
is as close to lower triangular in structure as possible, thus minimizing the number of fully
coupled blocks in the system. Because poor problem decomposition can lead to a sub-optimal
solution to the design problem, however, great care must be taken with this process and all
variable coupling must be thoroughly considered [Rodgers 1994]. This topic is discussed in more
detail in a DSM related context in Section 3.3.3 of this thesis.
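One simple, if inefficient, way to perform this kind of partitioning is to group tasks by mutual reachability: any set of tasks that all depend on one another, directly or through intermediaries, forms a coupled block, and the remaining tasks can be sequenced around those blocks. The sketch below implements that idea for a small assumed DSM; production partitioning algorithms are considerably more sophisticated.

    # Reachability-based grouping of a DSM into coupled blocks: tasks that are
    # mutually dependent (each can be reached from the other through the
    # dependency graph) belong to the same block. The example matrix is assumed.

    def reachable(dsm, start):
        """All tasks whose information 'start' ultimately depends on."""
        seen, stack = set(), [start]
        while stack:
            i = stack.pop()
            for j, coupled in enumerate(dsm[i]):
                if coupled and j != i and j not in seen:
                    seen.add(j)
                    stack.append(j)
        return seen

    def coupled_blocks(dsm):
        """Group mutually dependent tasks into blocks, in task order."""
        n = len(dsm)
        reach = [reachable(dsm, i) for i in range(n)]
        blocks, assigned = [], set()
        for i in range(n):
            if i in assigned:
                continue
            block = {i} | {j for j in range(n) if j in reach[i] and i in reach[j]}
            blocks.append(sorted(block))
            assigned |= block
        return blocks

    if __name__ == "__main__":
        dsm = [
            [1, 1, 0, 0],   # tasks 0 and 1 feed each other: a coupled block
            [1, 1, 0, 0],
            [0, 1, 1, 0],   # task 2 follows the coupled block
            [0, 0, 1, 1],   # task 3 follows task 2
        ]
        print(coupled_blocks(dsm))   # [[0, 1], [2], [3]]

Once the coupled blocks are known, sequencing them in dependency order (a block-level topological sort) yields a matrix that is as close to lower triangular as the couplings allow.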
An important consideration, though, is that iteration has been shown to increase the
quality of a design if properly executed [Eppinger et al. 1994]. The DSM representation
facilitates this by allowing the designer to model trade-offs between reducing the time required
for the product development effort, through reducing iteration and reordering the process, and
increasing quality by building in strategic iterations. It also permits optimizing the task structure
of the design process so that it better reflects the structure of the product development team or
organization.
Design Structure Matrices are very flexible representational tools. For instance, elements
within the matrix can convey information about the process beyond just the existence of simple
information dependencies, such as the probability of completing a task at any given time or a
task's sensitivity to downstream changes in the design. The DSM structure is also not restricted
to modeling just the design process. It can be used to represent a parameter design problem,
much like the basic parameter design scheme discussed earlier, or even the organizational
structure of a product development entity.
Design structure matrices have also been used to estimate product development cycle
time using discrete Markov models of design tasks and their information dependencies
[Carrascosa et al. 1998]. In this case, each development activity is modeled as an information
processing activity with its estimated time to completion represented by a probability density
function. If the task in question is serial or coupled, its completion time is also dependent on the completion of other tasks upon which it relies for information and which are also described by probability density functions. Other researchers have focused on minimizing the "quality loss"
of a design due to constraints that upstream decisions in the design process place on downstream
flexibility [Krishnan et al. 1991]. This topic is discussed later on in Section 3.3.3 of this thesis.
Both of these models provide a much richer understanding of how coupling affects cycle time and
overall product quality than just a basic DSM representation and show what a flexible and useful
modeling structure it can be.
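The flavor of such stochastic models can be conveyed with a very small Monte Carlo sketch: each task's duration is drawn from an assumed triangular distribution, downstream tasks wait for the tasks that feed them information, and a coupled pair occasionally triggers a rework iteration. All distributions, dependencies, and iteration probabilities below are illustrative assumptions, not calibrated values.

    import random

    # Monte Carlo sketch of stochastic cycle-time estimation: durations are
    # drawn from assumed triangular distributions, task C waits for A and B,
    # and the coupled pair C-D occasionally iterates.

    def simulate_once():
        a = random.triangular(2, 6, 3)        # task A duration, in days
        b = random.triangular(1, 4, 2)        # task B runs in parallel with A
        c_start = max(a, b)                   # C needs information from A and B
        c = random.triangular(3, 8, 5)
        d = random.triangular(2, 5, 3)
        while random.random() < 0.3:          # coupled C-D pair: possible rework
            d += random.triangular(1, 3, 2)
        return c_start + c + d

    if __name__ == "__main__":
        random.seed(1)
        runs = [simulate_once() for _ in range(10000)]
        print("mean cycle time:", round(sum(runs) / len(runs), 1), "days")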
3.2.4 COUPLING
Although coupling may at first seem rather straightforward, it is in fact a complex,
nuanced topic of critical importance to the design process. Iteration is a key driver of product
development cycle time and cost, and tight coupling of any kind increases the likelihood of
iteration and can make it far more difficult to converge on a satisfactory design [Eppinger 1994].
This is true regardless of whether the coupling occurs between the design parameters of a product
or as an information dependency between designers or divisions of a company working together
on a design project.
In its most basic form coupling is a situation that arises when a change in one design
variable (an input parameter to the design "system") leads to a change in more than one attribute
of the system's output (response) that is being measured. In the context of Axiomatic Design, a
theoretical framework in which coupling is explicitly addressed, design problem output variables
are given the name Functional Requirements (FR's) as they represent measured system states that
the designer desires to adjust to some specified level. The system inputs, which the designer can
manipulate to adjust the Functional Requirements, are called Design Parameters (DP's).
In the Axiomatic Design scheme, design parameters can be "uncoupled," "decoupled," or
"fully coupled" [Suh 1990]. This framework can be readily visualized using a linear system
mapping, with the inputs, x, mapped to the outputs, y, using a matrix of coefficients, A, as presented below in Figure 3.7. Coefficients a_ij of the design system matrix, A, represent the degree and type of coupling between input and output variables.
Uncoupled (diagonal matrix):
\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} a_{11} & 0 \\ 0 & a_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}

Decoupled (triangular matrix):
\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} a_{11} & 0 \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}

Fully coupled (full matrix):
\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}

FIGURE 3.7 - Uncoupled, decoupled, and fully coupled systems.
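Given a numerical design matrix, classifying it into these three categories is mechanical, as the short sketch below shows for 2 x 2 examples; the tolerance and the example coefficient values are arbitrary.

    # Classification of a square design matrix in the Axiomatic Design sense
    # sketched in Figure 3.7. The example matrices are arbitrary.

    def classify(A, tol=1e-12):
        n = len(A)
        upper = any(abs(A[i][j]) > tol for i in range(n) for j in range(n) if j > i)
        lower = any(abs(A[i][j]) > tol for i in range(n) for j in range(n) if j < i)
        if not upper and not lower:
            return "uncoupled (diagonal)"
        if not upper or not lower:
            return "decoupled (triangular)"
        return "fully coupled (full matrix)"

    if __name__ == "__main__":
        print(classify([[1.0, 0.0], [0.0, 2.0]]))   # uncoupled
        print(classify([[1.0, 0.0], [0.5, 2.0]]))   # decoupled
        print(classify([[1.0, 0.3], [0.5, 2.0]]))   # fully coupled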
Uncoupled Design Parameters (DP's) are fully independent and their relationship to the
system response can be described by a diagonal coefficient matrix, whereas decoupled DP's are
characterized by a triangular input-output scheme. At least one DP affects two Functional
Requirements (FR's) in a Decoupled design, so the order in which adjustments to the DP's occur
is important. In a fully coupled system each input parameter affects all output variables.
Coupled systems encountered during the design process are more difficult for the
designer to handle than uncoupled or decoupled equivalents because of the more complex
mapping of inputs to outputs. Additionally, coupled tasks complicate the overall product
development process due to the increased complexity and interdependency of the information
exchange that their presence indicates [Carrascosa et al. 1998]. The implications of highly
coupled designs or processes are often higher costs and longer development cycles due to
iteration and rework.
Some have suggested that coupling is to be avoided at all costs [Suh 1990]. Although
coupling can make things more difficult, it is simply not feasible to reduce every design, process,
or organization to an uncoupled equivalent, nor is it clear that doing so would even be
preferable. For instance, key components of a product are more likely to be specially designed,
rather than selected from available stock, if the requirements they serve are fundamental to a
product's functionality or arise in a complex way from most or all of its elements [Krishnan and
Ulrich 1998]. These components, for the same reasons, are often likely to have tightly coupled
design parameters. In addition, Frey et al. [2000] have shown that even if it is possible to find an
uncoupled set of Design Parameters and Functional Requirements for a design the resulting
outcome may under certain circumstances actually be inferior to that which can be achieved with
a fully coupled model.
3.3 THE HUMAN FACTOR
Human cognitive capacity is clearly an important factor in problem solving, and design is
nothing if not a "problem solving" activity. However, little concrete, quantitative research has
been done by the product development community on how human cognition might impact the
design process or the success thereof.
Most engineering-related studies of the significance of human cognitive parameters are
actually found in human factors and ergonomics literature [Ehret et al. 1998]. A major focus in
these fields is on discovering how best to present information regarding the state of a complex
system to a human supervisor that is monitoring and controlling it. For example, a great deal of
human factors research goes into developing effective displays and controls for aircraft cockpits.
In this case the environment is complex and fast-paced and requires pilots to rapidly gather and
synthesize large amounts of information in order to make critical decisions. Because the modern
aircraft flight deck is becoming increasingly complicated and mode rich, pilots are now presented
with information of many different types in multiple formats, making it increasingly hard to track
and prioritize appropriately.
An example of the importance of human factors is the fact that pilot error is the most
commonly cited cause of aircraft crashes, accounting for up to 70% of fatal accidents. A major
contributor to accidents of this type is believed to be a disconnect between the pilots' mental
model of what is happening in the cockpit and the actual status of the aircraft, which is often
called a loss of situational awareness or "mode confusion." This reflects a lack of understanding
of the system's internal architecture and the increasing difficulty of dealing effectively with the
complexity of the modern aircraft cockpit. The problem is exacerbated by a lack of human-oriented interface design [Butler et al. 1998]. It has therefore become increasingly important to
understand exactly how the manner in which pilots gather and process information interacts with
the control and avionics layout in the cockpit to cause such errors [Butler et al. 1998].
Although the aircraft industry is a high-profile instance of the importance of designing
systems that take human limitations into account, it is certainly not the only prominent example
that could be taken from the design field. Unfortunately, most product design and development
literature approaches this subject rather obliquely. This oversight is particularly regrettable
because it would seem that the design process is exactly the period during which these limitations
could be considered to best advantage so that they might be designed around in the most efficient
and least costly manner. A natural extension of this idea is that if the functionality of designs can
benefit so much from improved human factors engineering, then why couldn't the design process
itself benefit from such a consideration?
3.3.1 COGNITION AND THE DESIGN PROCESS
There is certainly some measure of discussion about the impact of human cognitive
limitations on the design process in product development literature. Most who study design do so
under the assumption that organizations, and designers themselves, approach product
development in a rational, structured or semi-structured manner. This goes hand in hand with a
belief in the bounded rationality of individuals and teams, as suggested by Simon [1957], and the
importance of individual behavior to the outcome of the design process [Krishnan and Ulrich
1998]. This is indicative of an implicit belief on the part of researchers that the design process
should be viewed as a set of decisions based on preferences of the designer, with the designer
seeking to maximize his utility.
Some research has carried this "economic" argument even further. For instance, a group
of designers making decisions through consensus can reach inefficient outcomes if their individual
preferences are ordered differently. Thus groups of rational individuals, each with transitive
preferences, can collectively exhibit intransitive, irrational preferences. This suggests that an
explicit group consensus of utility must be reached if the
design is to be optimal [Hazelrigg 1997].
It is also generally accepted that coupled design parameters lead to increased design
complexity and contribute substantially to the difficulty of a given design problem. This topic is
most clearly addressed by the theory of Axiomatic Design [Suh 1990]. In a similar vein, the
Design Structure Matrix is also a nod towards human limitations. Techniques such as reordering
tasks to reduce iteration and coupled information dependencies, or decomposing a large design
problem into smaller, more manageable sub-problems, are consistent with the idea of reducing
the cognitive load on the designer during the design process.
Occasionally product design researchers approach the subject of human cognitive
capabilities and their interaction with the design process in a more explicit manner. However,
many who do discuss the interaction of human cognition and product development tend not
to ground their research in cognitive science theory and frequently coin their own terms for
various phenomena, making dialogue with the cognitive science field difficult at best. Moreover,
the focus of such research is often on the conceptual parts of the design process. The temptation
in this case, particularly for those not well versed in cognitive science and its experimental
methods, is to philosophize (see, for instance, [Madanshetty 1995]) rather than to gather facts in a quantitative manner.
Some product design researchers hit closer to the mark. For instance Condoor et al.
[1992] identify the significance of the short-term memory bottleneck, suggest the importance of
the interplay between short- and long-term memory, and hint at the importance of expertise in
developing a concept, or "kernel idea," to the overall success of the project. Chandrasekaran
[1989] implicates the search space/problem space paradigm, frequently noted by cognitive
science researchers including Newell and Simon [1972], as significant for the design process.
However, a few researchers studying design have taken a closer look at the human
cognitive elements of the design process and observed some particularly interesting results. What
sets these efforts apart is that they all involve some sort of experiment that allows the capture of
rich data on the interaction between cognition and design.
3.3.2 PROBLEM SIZE AND COMPLEXITY
Based on the fundamental capacity limitations of short-term and working memory it
would appear that problem size and complexity should be critical factors in the overall difficulty
of design problems. Indeed, support for this conjecture is offered by research conducted in a
design-related context.
In a 1994 experiment, Robinson and Swink presented human subjects with a distribution
network design problem which involved defining the number, location, and market area of
distribution facilities for a hypothetical firm in order to maximize efficiency and minimize costs.
Their study was designed to elicit how human problem solving performance was related to the
problem characteristics of typical Operations Research design problems. The design problem
was presented to human subjects participating in the experiment as an interactive computer model
of a distribution system for which a set of free variables could be manipulated by subjects to
determine an optimal solution. A number of problems of varying size (i.e. number of distribution
facilities, etc.) and combinatorial complexity (i.e. number of possible network configurations)
were presented to subjects over the course of the experiment. The metric used to judge the
success of a subject's efforts was the proximity of their solution to the true optimal solution of the
particular network design problem.
The study uncovered two main findings. First, the size of the design problem appeared to
be the dominant factor in the difficulty it posed to the human subjects. Second, an increase in the
combinatorial complexity of the problem also adversely affected the ability of subjects to find a
solution. However, particularly in the case of complex problems, what was influenced by these
problem characteristics was not the quality of the solution but the amount of time needed for
subjects to find a solution as measured by the number of iterations required. The researchers also
concluded that the education, experience, and spatial reasoning abilities of the experimental
subjects were significant to their performance on the design problem [Robinson and Swink 1994].
3.3.3 NOVICE AND EXPERT DESIGNERS
Another good example of research that emphasizes the importance of human cognitive
factors to the design process is a study conducted at Delft University of Technology by
Christiaans and Dorst [1992]. The aim of their experiment was to produce a cognitive model of
the design process, which they viewed as a cognitive information-processing task, paying special
attention to the importance of domain-specific knowledge and the relationship between this prior
knowledge and the quality of the resulting design.
The Christiaans and Dorst study consisted of presenting a complex design problem, for a
railway carriage "litter disposal system," to ten novice and ten experienced industrial design
engineering students at the Delft University of Technology. The experienced students were in the
final year of their studies, while the novices were in their second year. The specific design
problem was chosen because it was relatively sophisticated and involved integrating information
from a number of disciplines (engineering, manufacturing, industrial design, etc.) in order to
synthesize a solution. All supplemental information the students might need during the design
process (such as dimensional and functional requirements, information on available materials and
processes, and so on) was printed on small cards, and the appropriate card was given to the
student upon request. The experiment was videotaped and the students were requested to "think
aloud" and offer a verbose running commentary as they grappled with the problem and created
sketches of possible designs. This type of verbal interaction is commonplace in cognitive science
research and is called a "verbal protocol." Subjects were allotted 2.5 hours to come up with a
solution to the design problem.
The Christiaans and Dorst study elicited some interesting results. First of all, the
researchers suggested that based on the results the cognitive model likely used by the students for
solving design problems bears much resemblance to models described for problem solving in
other problem domains as suggested by cognitive science theory and research. Based on analysis
of the videotaped sessions, similarities were discovered between the students' methods and the
information processing problem solving paradigm set forth in Simon [1978] and also discussed
earlier in this thesis in Section 2.3.3 [Christiaans and Dorst 1992].
Christiaans and Dorst also verified the importance of knowledge and expertise in
engineering design problem solving by evaluating the differences in task-related performance
between novice and expert designers. For the purposes of the experiment the researchers
characterized expertise as the ability to recognize that a problem is of a certain type and will yield
to certain solution methods. Of central importance to this assertion is that a suitably organized
cognitive structure plays a crucial role in the quality of the problem solving process. In the case
of experts a highly developed experience-based framework allows the problem solver to
categorize the problem, identify potentially useful knowledge, and plan a sequence of actions or
operations that leads to a solution. Prototypes for this process emerge with increased domain-related expertise [Christiaans and Dorst 1992]. This result strongly echoes findings from
cognitive science research, which also indicate that the extent of knowledge and experience has
a major bearing on problem solving abilities [Kovotsky and Simon 1990].
Building on the idea of expertise, Christiaans and Dorst also found important differences
between the problem solution approaches taken by novice and expert designers. During the
experiments novices tended to employ what Christiaans and Dorst dubbed a "working
backwards" method of problem solving. Novices typically suggested problem approaches or
solutions soon after being presented with the design problem, often failed to generate necessary
inferences or generated faulty inferences from the data available, and frequently employed
"sparse problem prototypes" (in other words over-simplified mental models of the problem)
organized by superficial features [Christiaans and Dorst 1992]. According to the researchers,
other behaviors that characterized a novice designer's problem solving approach were:
• Ignoring many complications and subtleties in the problem.
• Not asking many questions about the problem, instead generating much of the needed information themselves.
• Abstracting the problem at their own level of competence. Many novices did not recognize that they were lacking data or had erroneously generated information.
• A lack of systems thinking. Novices tended to break the problem into more sub-problems rather than focusing on an integrated solution.
• In general, treating the design problem at a simpler level than did experts.
In stark contrast, experts participating in the Christiaans and Dorst experiment tended to
use a "working forwards" approach. They often employed qualitative analysis techniques first,
which led to the generation of useful inferences and knowledge based on an overall view of the
design problem. Commonly called a "knowledge development strategy," this technique leads to
enriched problem representation, the use of domain-based inferences to guide the retrieval of
problem-solving information and techniques, and also aids with structuring and modeling the
problem in a useful manner [Christiaans and Dorst 1992]. Significantly, experts tended not to
decompose problems as the novices did, but sought more integrated solutions. As Christiaans and
Dorst noted, "one of the most striking results, when analyzing the protocols of [more
experienced] designers, is that the subjects do not split up the problem into independent sub-problems, but try to solve it as a whole." Expert designers also asked for additional information
about the design problem more often than novices did and synthesized it in such a way that it was
useful for structuring and solving the problem.
From their research, Christiaans and Dorst were able to draw a number of important
conclusions about the effect of human cognition and expertise on the design process. In the case
of novice designers, trying to see one sub-problem as independent from the rest of the design
problem often resulted in an unmanageable amount of restrictions for the rest of the design, or
else an unwanted bias in the solution [Christiaans and Dorst 1992]. For instance, novices might
fix one major set of part dimensions early in the design exercise. When this turned out to make
the design of another part more difficult, or otherwise sub-optimal, instead of changing the
dimensions of the first part they would attempt to design around the problem by altering whatever
subsystem they were currently working on.
Interestingly, a similar phenomenon has also been described at the organizational level by
Krishnan et al. [1997a] in a Design Structure Matrix context involving sequential decision
making. In this case, decisions taken early on in the product development process can lead to
sub-optimality in the design because they place preconditions on sequential downstream
decisions by freezing certain parameters or information. Naturally, coupling reduces the effects
of this problem by allowing iteration, but it also causes the design process to take longer to
converge to a solution and increases the overall cost of the process. So, even if a process is
entirely decoupled there may still be inefficiencies or poor end results due to the fixing of design
parameters. Krishnan et al. have suggested a method for measuring the quality loss due to these
rigidities imposed on the design by upstream decisions so that increased coupling and iteration
might be traded off against the benefits of a sequential design decision process, such as low
complexity, shorter process cycle time, and lower cost.
Christiaans and Dorst's research highlighted the importance of experience and knowledge
(an individual's accumulated skills, experiences, beliefs, and memories) to the design process.
They have shown that domain-specific knowledge, specialized understanding of a specific
endeavor or field of study, enables the designer to more effectively structure a problem and
synthesize pertinent information to create a superior solution strategy. This finding is consistent
with work on knowledge-based problem solving in the cognitive science field, which also
supports the view that expert problem solving depends primarily on having appropriate domain-specific knowledge and experience rather than any unusual intellectual abilities [Anderson 1987].
3.3.4 DESIGN AND DYNAMIC SYSTEMS
A further example of how human cognition might impact the design process is provided
by investigations of the temporal aspects of human information processing. In an interesting
study conducted by Goodman and Spence [1978], it was found that system response time (SRT)
appears to have a strong effect on the time required to complete a design task. In their
experiment Goodman and Spence had human subjects undertake a computer based graphical
problem solving exercise that involved adjusting input parameters to cause a curve to fall within
certain regions of a graph displayed on a computer screen. In essence this was a numerical
optimization and constraint satisfaction exercise. The researchers repeated the experiments with
various time delays (0.16, 0.72, and 3 seconds, approximately) between the user's adjustment of
the input controls and the updating of the computer monitor to display the system's response to
the new inputs. Goodman and Spence discovered that a time delay between the input and system
response caused difficulty for the human subjects as indicated by a disproportionate increase in
the overall time required for them to find a solution. As the delay became longer Goodman and
Spence found that completion time for the design problem increased linearly over the limited
delay times tested [Goodman and Spence 1978]. This finding agrees with the general
understanding of the difficulties imposed by time delays and other dynamic effects as suggested
by Dörner, for instance, and discussed in Section 2.3.4.
3.3.5 CAD AND OTHER EXTERNAL TOOLS
The importance of considering and mediating the effects of human cognitive limitations
on the design process is also supported by the existence of Computer Aided Design software and
other such computer based design task support tools. One reason that the product development
process is so complex is the need to satisfy multiple interrelated design constraints.
Juggling these interactions leads to cognitive complexity for the engineer because of the large
amounts of information that must be synthesized. This complexity also makes it difficult to
ensure that all important factors of the problem are being considered, or that significant
interactions are being modeled appropriately [Robertson et al. 1991]. Even seemingly simple tasks,
such as creating part geometry, can become mentally burdensome due to multiple interacting
design parameters.
This cognitive overload is in part due to a limited short-term memory span that prevents
the designer from developing a complete mental model of the design problem. Dependence on
finite mental resources in turn leads to a reliance on iteration and sequential techniques and
reinforces the satisficing nature of the design process by making it difficult to integrate all the
information required to solve a design problem.
Basic CAD systems, like AutoCAD, SolidWorks, or ProEngineer, help the designer
surmount such mental limitations by serving as an extension of the working memory. This
external memory, or "sketch pad," stretches the capacity of the short-term and working memory
functions of the designer, facilitating the evaluation of complex problems with many interrelated
variables [Lindsay and Norman 1977]. Although searching and processing externally stored
information can be more difficult than manipulating it in the short-term memory or retrieving it
from the long-term memory, shortcomings in this respect are made up for by the scalability of
external support to extremely complex design situations.
It has been pointed out that a major objective of CAD systems should be to minimize the
cognitive complexity posed by engineering design tasks [Robertson et al. 1991]. Although CAD
software at its most basic is still quite useful, it is clearly capable of much more than just
extending the span of short- and long-term memory by serving as a drawing tool or sketchpad for
the designer. As a result there is now much emphasis on designing CAD systems that offer even
more sophisticated support to the designer.
For instance, the information retrieval and synthesis parts of the design process can put
heavy loads on the designer's working memory, but this demand on cognitive resources might
be relieved in a number of ways. A CAD system could facilitate decomposing a
task into smaller, more manageable sub-problems that could be approached separately. Also,
design related information could be automatically selected and managed in such a way that the
designer receives relevant data on an as-needed basis in a format that facilitates comprehension
and absorption. In other words, information might be appropriately "pre-chunked" by the CAD
system before being fed to the designer [Waern 1989]. Another option is to capture knowledge
from experienced designers and use it to develop heuristics and interfaces that facilitate the work
of designers with less problem-related expertise. One obvious example of this is the possibility of
using expert knowledge to improve the effectiveness of a search among various design
alternatives [Waern 1989].
To some extent such issues are being addressed by the DOME (Distributed Object-based
Modeling and Evaluation) system being developed at MIT, which models a design in terms of its
interacting objects (modules) each representing a specific aspect of the overall design problem in
a quantitative manner. This CAD framework is an integrated design environment that allows
interrelated decisions to be made such that they satisfy competing design objectives. By its very
nature DOME assumes that decomposition of the design task into sub-problems is possible,
allowing different parts of the problem solution to be supervised by those with requisite abilities.
The overall design problem is then re-aggregated by the DOME system so that large design tasks
can be modeled as a system of interrelated sub-problems [Pahng et al. 1998].
One difficulty with this approach, however, is that there is seldom a natural objective
function that allows trade-off between multiple criteria along various different dimensions. For
example, how much additional material cost might another unit of strength be worth? While
some limits are hard and easily quantifiable, such as a minimum acceptable yield strength for a
particular part, others are "soft," like the aesthetic appeal of a design [Robertson et al. 1991]. The
lack of a normative design criteria evaluation framework may lead to difficulty in developing
meaningful objective functions and creates the potential for developing "satisficing" rather than
optimal design problem solutions [Simon 1969]. Moreover, as Whitney [1990] has aptly noted,
very few real design problems are actually suitable for numerical optimization.
The fact that DOME relies on making quantitative trade-off analyses among the design
parameters also implies that the problem decomposition must be optimal for the solution to also
be truly optimal. As Pahng et al. [1998] have noted, the distribution of design knowledge, tasks,
modules, and the design team itself may impose limitations on the structure or scale of models
and how the design problem is divided into more tractable sub-tasks. As many have noted, sub-optimal problem decomposition can undermine many of the positive benefits of such optimization-based techniques [Krishnan et al. 1997a, Rodgers 1994]. This was discussed in a DSM-related
context in Sections 3.2.3 and 3.3.3. Distributed design systems are thus at increased risk of
finding design optima that are in fact optimal solutions to sub-optimally decomposed design
problems rather than the true optima.
Several studies have highlighted the importance of external tools to the design process,
and their role in minimizing the cognitive stresses imposed on the designer. In a study of the
effect of computer-based decision aids on the problem solving process, Mackay et al. presented a
resource allocation problem to a number of human test subjects. One group of subjects had the
benefit of a computer tool to model the problem while the other group did not. The researchers found
that a computer-based decision making support tool caused the experimental subjects, both task
experts and novices, to explore many more alternative solutions in a more sophisticated manner
than those without the benefit of the tool [Mackay et al. 1992]. Also, although subjects using the
tool did not solve the problem exercise any faster, their solutions were often superior to those
developed by unaided subjects. These differences in solution strategy were quite similar to the
general differences between novice and expert design problem-solving strategies uncovered by
the Christiaans and Dorst experiment discussed in Section 3.3.3.
CAD support systems still face problems, among them a slow learning curve and
complex interfaces that are frequently counterintuitive and difficult to negotiate. However, they
are obviously an excellent tool for circumventing some of the limitations imposed on designers
by their natural cognitive parameters. The fact that CAD is so clearly a tool to circumvent such
limitations also supports the contention that human cognitive parameters interact with and affect
the design process in many complex ways and are in need of more careful and thorough
characterization.
4 TOWARDS AN ANTHROPOCENTRIC APPROACH TO DESIGN
As discussed in the preceding chapters, the way designers approach a design problem
must be related not only to the structure of the problem itself, but also to the structure and
parameters of their own minds. Studies of the impact of human cognitive and attention limits on
task performance have found them to be significant in many diverse endeavors besides design and
engineering, including finance and economics [Chi and Fan 1997], human factors engineering
[Boy 1998], AI and expert systems [Enkawa and Salvendy 1989], management decision making
[Mackay et al. 1992], and operations research [Robinson and Swink 1994].
Because of the effectively infinite capacity of the long-term memory, most cognition-related
performance deficiencies must by default be attributed to the limited resources of the
short-term memory, with its capacity limitations and temporal volatility, and the attendant
information-processing restrictions on the working memory. Although humans are resourceful
problem solvers, in the end usually able to discover ingenious ways to extend the boundaries of
their abilities through learning and experience, it is the same set of information processing
limitations that conspires to make the learning task itself such a challenge. It is indeed amazing
that a few limits could express themselves so consistently across so many different tasks, from the
most complex cognitive activities like chess and advanced mathematics to mundane probe tasks,
such as tests of verbal memory span, confined entirely to the psychologist's laboratory.
However, the available evidence suggests nothing less [Simon 1969].
Given the importance of cognitive parameters as shown by the studies I have just discussed,
it is clear that the development of more effective design tools and methodologies will require a
better description of the design process in terms of human cognitive and information processing
models. Human information processing abilities play a fundamental role in design activities and
a better understanding of how they impact the end results of the design process will have
significant implications for its outcome.
It is my contention that the cognitive science and design fields have a great deal to offer
one another. However, the product design and development profession has been particularly slow
to adopt a more anthropocentric approach to the design process and is forgoing the advantages
that a more thorough consideration of the designer's fundamental cognitive capabilities might
confer. I am not implying that this is the key to revolutionizing the design process, merely that
the designer is in principle worthy of consideration and that the results of such an investigation
should have significant implications for the practice of design. Moreover, based on evidence I
have already discussed, the designer is being faced with design problems that are increasingly
complex, forcing more reliance on sophisticated support tools to circumvent human information
processing limitations. For this reason one key to developing the next generation of design
support tools is likely to be a better understanding of the interaction between the designer and the
design process.
To some degree the general lack of interest in human cognitive factors on the part of the
product design community may have been caused by the perceived softness of much cognitive
science research and the difficulty inherent in transferring this type of knowledge in a meaningful
way to a more quantitative discipline such as engineering design. To begin correcting this
situation what is needed is an experiment that helps bridge the gap between these two fields. It
should be inspired and informed by cognitive science theories, but also quantitatively rigorous
and based somehow on the product development process so its "language" is that of engineering
design. The remainder of this thesis concerns such an experiment.
5 A DESIGN PROCESS SURROGATE
With this chapter begins a description of the research project that forms the core of this
thesis: an investigation of the interaction between the design process and the cognitive and
information processing parameters that characterize the designer using a computer based design
task surrogate. The task surrogate created for use in the experiments conducted for this thesis is a
computer program that simulates the design process using a parameter design task, similar to that
described previously in Sections 3.2.2 to 3.2.4.
This chapter discusses the development of the surrogate and is divided into four sections.
Section 1 presents the motivation for an investigation of the design process using such a
technique, both in terms of practical reasons and also precedents set forth in other research
projects. In Section 2, I outline the development of the software platform, describe the specific
model used as the design task by the program, and discuss the preliminary set of experiments
carried out as a "proof-of-concept" test. The experimental procedure used to gather data with the
design task surrogate program, and with the cooperation of human subjects, is also outlined in
this section. Results from the preliminary study and their implications are presented in Section 3.
Section 4 details how the design process model and software platform were refined based on what
was learned from the preliminary results. This section concludes with a description of the
primary set of experiments carried out with the refined model. Results from these primary
experiments serve as the empirical basis for this research and are discussed in detail in Chapter 6.
5.1 THE MOTIVATION FOR A SURROGATE
The need for an alternative to the actual product design process has been an important
motivating factor for many design researchers. Although the most thorough and meaningful way
to study the design process is most certainly by examining the process itself, the expense and time
required for learning in such a way is prohibitive. To facilitate research on the interaction
between human cognitive limitations and design what is needed is a surrogate for the design
process, something that captures enough essential features of the task to be a realistic model
useful for experimental purposes but that avoids the resource commitment, time requirements,
and other drawbacks of the actual design process. Moreover, a simulation of this kind would also
allow the experiment to be designed in such a way that it is readily quantifiable and thus more
easily controlled and characterized.
In complex experiments the environment often becomes a problem. Many research
efforts directed at the study of human cognition are thwarted by the difficulty of accurately
tracking information flow during the experimental procedure. A complicated or highly dynamic
environment can obscure relevant inputs, outputs, or other significant aspects of the probe task,
for instance. Unintended inputs or outputs can also leak into the experimental results from the
environment. This can lead to the variables of interest being obscured by noise inherent in the
experimental environment, or erroneous conclusions being drawn from experiments that have
unintended factors, of which the researcher is unaware, that outweigh the variables of interest
[Ehret 1998]. Additionally, experiments that rely on observation of the design process and
require interaction between an observer and the designer, while frequently producing valuable
and insightful results, often suffer from "uncertainty principle" effects. The presence of an
observer causes the subject to act and think differently than he ordinarily would, and the
environment, if not carefully regulated, is so different from that of an actual design task that the
inferences that can be drawn are necessarily vague.
One way of lessening such adverse effects is studying the design process through a
"scaled world." In this scenario, a scaled-down version of the actual design task is employed as
an experimental platform, allowing the researcher to more readily explore the phenomenon of
interest by stripping away non-essential aspects of the task and environment [Ehret 1998]. Of
course, this requires that the experimenter have a clear understanding of the process he is
exploring as well as its original environment. Careful decisions must be made about which
variables to include and which to exclude, and the modeling process, experimental procedure, and
environment all require much thought.
A potential problem with this type of approach, one that is also a central concern in
behavioral and cognitive science research and is probably significant to much applied research in
other fields, is that experimental control must be balanced with generalizability [Ehret 1998].
Studies that explore complicated tasks and environments must attempt to sort important
information from trivia. Moreover, experiments of this type may be permeated by unknown and
unaccounted for factors from the environment, which makes drawing clear, logical results from
the data somewhat difficult. If a simulation or surrogate task is used in place of a more complex
activity, care must be taken that the model is robust and not oversimplified. Oversimplification in
particular can lead to results that, while interesting and perhaps important in specific
circumstances, lack general significance with respect to the original activity.
5.1.1 AN IDEAL SURROGATE TASK
In order to help make appropriate choices about what to include and what to exclude in
the surrogate model some good general characteristics of such a task can be identified. First of
all, the model should have a clear, logical relation to the activity or process being studied, and
should be simple but not oversimplified. Second, all significant inputs and outputs should be
included, easily measurable, and not confounded by unwanted effects or interactions. The model
itself should have readily measurable characteristics, and changes in these characteristics should
be traceable to changes in the inputs, outputs, or inherent structure of the system. In addition, the
task model should also be configurable so that it can be tuned to exhibit exactly the desired
characteristics. The problem space of the simulation should also be well defined and finite to
reduce ambiguity. Finally, the surrogate task should avoid incorporating any aspects of the
original process or phenomenon that make it difficult, costly, or otherwise troublesome to study
in its natural state [Ehret 1998].
5.2 DEVELOPING THE DESIGN TASK SURROGATE
The primary goal of this research was to explore how cognitive capabilities affect the
ability of the designer to solve complex and highly interdependent design problems. Of particular
interest was how the designer's capacity to find a solution to a parameter design problem was
related to the scale of the design task, its complexity, and its structure. To facilitate the study of
these phenomena a tool was needed to simulate the most important aspects of the parameter
design process so that a human subject acting as a "designer" could work with the model and in
doing so provide useful data about how the nature of the task interacts with the designer's
cognitive capabilities. Different design "problems" could then be posed to the designer in the
context of the surrogate task experiment by altering the size, complexity, and type of system
being used as a model of the design process.
Since this research project was concerned primarily with studying how basic human
cognitive capabilities impact the design process, another concern was stripping away any need for
domain-specific knowledge on the part of the subject. Although domain-specific knowledge is
actually a key aspect of successful design as well as an important factor in general problem-solving, its presence in the mind of the designer can get in the way of eliciting the fundamental
cognitive parameters at play if it is neither explicitly avoided nor accounted for [Christiaans and
Dorst 1992, Kovotsky and Simon 1990]. A more advantageous approach, which I have adopted
in this research, is to filter out as much as possible the effects of domain-specific knowledge on
task performance, through careful design of the primary experiments, and instead investigate the
implications of such expertise by including a second set of tests designed specifically for this
purpose.
The first step in developing a tool to explore the relationship between cognition and the
design process was to create a suitable substitute model of the task itself, with which a human
experimental subject acting as a designer could then interact. An obvious choice was to develop
a computer based simulation rather than employ a simplified "real" design task as in the
Christiaans and Dorst experiments. This would allow both the task and environment to be
carefully generated and manipulated, providing as much control as possible over the attributes of the
design task and also facilitating the gathering of numerical performance-related data.
After some debate a linear matrix-based model was, for a number of reasons, decided
upon as the most appropriate representation of the parameter design problem. This mathematical
model was particularly attractive because it appears in many places in engineering design
literature as a tool for representing parameter relationships and information flows in the actual
design process, most obviously in Steward's Design Structure Matrices (DSM) and in the
relationship between Functional Requirements and Design Parameters in Axiomatic Design.
And, of course, it is also implied by the input-output scheme of typical parameter design
problems.
Another useful attribute of matrices is that they provide a clear, well-characterized
mapping from input to output. It is natural to think of the elements in a matrix of coefficients as
representing the degree of informational dependency of one task in a generalized design process on
another, or as representing the sensitivity of one design parameter to another. Matrices also have
a number of characteristics, such as size, condition, sparseness, and eigenstructure, that can be
easily altered by the experimenter for the purposes of exploring how these structural variations
affect the ability of human subjects to find a solution to the system.
5.2.1 THE BASIC DESIGN PROBLEM MODEL
Once a basic idea of how to model the design task had been settled upon the next step
was to develop the experimental software platform. The basic concept for the parameter design
process surrogate, dubbed the Design System Emulator or "DS-emulator," was a matrix
representation of a physical design problem to be solved graphically by human subjects acting as
designers. Subjects would control input variables that a matrix representation of the "design
system" (i.e. the design problem) would map to output variables represented by indicators on the
program's graphical user interface (GUI). The aim was to make the interface of the software
platform as simple and intuitive as possible so subjects would not be confused or otherwise
hampered in their efforts to solve the design problem by any aspects of the interface. It was also
necessary to ensure that as little training as possible would be needed for subjects to be able to
use the program successfully. Finally, the program would be designed to record pertinent data
from each subject as they worked through the design problem exercises.
DS-Emulator, the resulting software platform, is a MATLAB application that mimics the
coupled parameters and interrelated objectives that face the designer in many of the stages of
product development. Using a basic linear system model
[y] = [A][x]
the surrogate program simulates the parameter design task. Attributes of this process, including
the scale, degree, and type of coupling of the design problem, can be altered by manipulating
values in the coefficient matrix A. The relative magnitudes of coefficients in this square matrix
reflect the degree and type of coupling present in the system while n, the size of A, reflects the
system's scale. Design parameters of the surrogate model, controllable by the designer, are
represented by the vector of input variables, [x], and performance parameters, or measurable attributes of the system, by the vector of output variables, [y].
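A minimal MATLAB sketch of this mapping is given below; it is illustrative only, is not taken from the DS-emulator source, and its variable names and values are assumptions.

% Hypothetical sketch of the surrogate's input-output mapping.
% A is the n x n design system matrix; x holds the design parameter
% settings chosen by the subject; y is the resulting vector of
% performance parameters shown on the output gauges.
n = 3;                      % design problem size
A = orth(randn(n));         % an example orthonormal design matrix
x = [0.2; -0.5; 0.7];       % design parameter settings (slider positions)
y = A * x;                  % system response displayed to the subject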
A broad array of different design matrices presented to subjects by the application allows
an investigation of how various critical factors impact the time required to complete the
parameter design surrogate task. Since design process cycle time has been frequently identified
as a critical aspect of the product development process, this was chosen as the metric by which to
gauge the relative difficulty of each design system presented to experimental subjects by the
program [Krishnan and Ulrich 1991, 1998]. In addition, there are a number of experimental
precedents in other fields for using task completion time or the number of iterations required as a
meaningful metric for problem difficulty and complexity, especially in empirically based research
[Robinson and Swink 1994, Mackay et al. 1992, Goodman and Spence 1978].
In contrast to the actual design process, the set of surrogate tasks was formulated so that
it would take a relatively short time to complete (~1.5 hours). The relatively brief experiment
duration made it possible to collect data from multiple subjects in the field, something that is
much more difficult to accomplish with an actual design exercise of any complexity. An
additional benefit of the parameter design model was that domain-specific knowledge was not
required for completion of the task. So, there was no need for subjects to be familiar with any
aspects of the product design process in order to participate in the experiment.
5.2.2 THE SOFTWARE PLATFORM
The software platform for the design surrogate task was developed in the MATLAB
programming language because of the ease with which it handles matrix manipulations and other
mathematical functions that were necessary for the project. This language also permitted
developing the software tool and storing and analyzing experimental data in a single
programming environment.
The design task GUI was arranged so that the basic parameter design activity was turned
into a game-like exercise. In order to "solve" a design problem human subjects were required to
adjust the input variables (design parameters), controlled by slider bars on the design task GUI,
until the output variables (system performance parameters) indicated by gauges on the GUI fell
within specified levels. An image of the Task GUI for a 3 x 3 design matrix is shown below in
Figure 5.1.
FIGURE 5.1 - DS-emulator design task GUI for a 3 x 3 system, with key to significant features: the target ranges; the inputs controllable by the subject via slider bars (clicking an arrow allows fine adjustments to an indicator position, while clicking in the slider "trough" or moving the slider button directly allows coarser adjustments); the Refresh Plot button, which recalculates the positions of the output indicators based on the subject's inputs; and the output gauges observable by the subject.
The "target range" within which the output had to fall for the design problem to be
considered completed, or "solved," was set at 5% of the full range of the output variable display
gauge. This value was chosen as it was representative of a reasonable "tolerance" range for
an actual design problem (i.e. ± 2.5%).
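This completion criterion can be expressed compactly; the following MATLAB fragment is a hypothetical sketch (the gauge span, variable names, and perturbation are assumptions, not the DS-emulator implementation).

% Check whether every output falls within its target band of
% +/- 2.5% of the full gauge range.
n          = 3;
A          = orth(randn(n));          % example design system matrix
xSolution  = rand(n, 1);              % the (hidden) solution for this task
yTarget    = A * xSolution;           % target values marked on the gauges
gaugeRange = 2;                       % assumed full gauge span (e.g. -1 to +1)
tol        = 0.025 * gaugeRange;      % +/- 2.5% of the gauge range
x          = xSolution + 0.01*randn(n, 1);    % subject's current slider settings
solved     = all(abs(A*x - yTarget) <= tol)   % true when the task is "solved"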
In order to mimic more closely the actual design process, which is naturally iterative and
discontinuous, the output variable display gauges were not designed to update themselves
smoothly and continuously as the inputs were varied. Instead, the positions of the output variable
indicators in the displays were only recalculated and updated after the "Refresh Plot" button on
the lower right-hand side of the Task GUI was pressed by the subject. The stepwise,
discontinuous display of changes in the output variables with response to input variable
adjustments was designed to imitate the iterative nature of the design process. Task GUIs were
developed capable of handling n x n design problems of all scales required by the experiment,
and system representations ranged in size from n = 1 to n = 5. No over- or under-defined
design systems were included in the experiments so the GUIs always had equal numbers of slider
bars and gauges.
In addition to the task GUI, which served solely as the interface for the design surrogate
task itself, other interactive GUI's were developed to help manage and control the workflow of
the human subjects during the experiment. A "Master GUI" allowed the subjects to launch both
the actual experiment program and also a brief demonstration exercise to teach them how to use
the software. A "Control GUI" allowed subjects to pause between design problems during the
experiment and re-start the program when they were ready to continue, or quit the program if they
so chose. Both of these GUIs, shown in Figure 5.2, were visible at all times on the computer
screen along with the Task GUI while the experiment was underway.
Because any experiment involving human subjects conducted at MIT must be approved
by the school's Committee on the Use of Humans as Experimental Subjects (COUHES), a
proposal outlining the experiment and the planned procedure was submitted for the committee's
approval. After minor rework of the experimental protocol and the addition of a GUI that
allowed subjects to give consent to participating in the experiment, which is shown in Figure 5.3,
the committee granted its approval.
FIGURE 5.2 - Master GUI (left) and Control GUI.
FIGURE 5.3 - Consent GUI, which allowed subjects to
agree to participate in the experiment.
5.2.3 HOW THE PROGRAM WORKS
The overall functionality of the DS-emulator program, and how it acts to simulate the
parameter design process, is straightforward. This can be seen more clearly in Figure 5.4, which
depicts how the central design task GUI functions and how it interacts with the rest of the
program to gather data from human subjects acting as the designer during the experiment.
In order to adjust the input values to the parameter design system the subject must move
the sliders on the Task GUI. As mentioned earlier, in order to simulate the stepwise iterations of
the actual design process the value of the design system output variables, which is depicted by the
position of a single blue line in each display gauge, only changes after the Refresh Plot button on
the Task GUI has been pushed by the subject.
FIGURE 5.4 - DS-emulator program functionality. The figure depicts the program's data flow: 1 - the user adjusts the inputs on the Task GUI; 2 - the user's inputs are sent to the control program and system model; 3 - the system response is calculated from the design matrix and the time of the input recorded; 4 - the data are recorded in the MATLAB/Excel database; 5 - the updated system output data are sent to the GUI and the adjusted system response is displayed.
Each time the Refresh Plot button is pressed, the
input values, output values, and the time at which the Refresh Plot button was activated are
recorded in the program database and the display updated to reflect the new state of the system.
Throughout the course of the experiment the program compares the output values of the system
with the upper and lower bounds of the target specification range. When the subject has adjusted
the inputs so that all system outputs are within the specified range the parameter design task is
complete. At this point a data array containing the compiled input, output, overall elapsed time,
and time per operation for the particular experiment in question is sent to a Microsoft Excel file
as well as stored in the MATLAB database. Once a subject has completed a particular parameter
design problem he may continue on to the next task in the set of experiments by pressing the
"New System" button on the Control GUI, to launch the new Task GUI and its associated design
system, followed by the "Start" button on the Control GUI to actually begin the task. This
process is repeated until all experiments are completed.
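The logging behavior described above might be sketched as a callback of the following form; this is an illustrative MATLAB reconstruction rather than the actual DS-emulator code, and all names are hypothetical.

% Hypothetical "Refresh Plot" callback (illustrative only): evaluate the
% system, log the state, and test whether the task has been solved.
function dataLog = refreshPlot(A, x, yTarget, tol, dataLog, t0)
    y = A * x;                                    % recalculate system response
    dataLog(end+1, :) = [x(:)' y(:)' toc(t0)];    % record inputs, outputs, elapsed time
    % (in the real program the output gauge graphics would be updated here)
    if all(abs(y - yTarget) <= tol)
        disp('Design task complete - results written to the database.');
        % the compiled data array could then be written out, e.g. to an Excel file
    end
end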
5.2.4 EXPERIMENTAL PROCEDURE
In order to prevent influencing the results in any way through inconsistent experimental
procedure, a predetermined protocol was followed for all subjects. Each participant was seated at
a computer in a lab room on the MIT campus that was free from any disturbances. Following a
scripted protocol the researcher explained the rationale for the experiments to the subjects and
then helped them as they went step-by-step through a demonstration exercise that was part of the
experimental program. This consisted of practicing using the DS-emulator program by solving
two simple 2 x 2 design systems, one uncoupled and the other decoupled. After the subjects
finished the practice exercises and formally consented to participate in the study they were left
alone to complete the core set of experiments. Because of the nature of the investigation subjects
were not allowed to use pen, paper, calculator, or any other type of memory or computational aid
during the experiment.
FIGURE 5.5 - Workflow of typical experimental session: 1 - Demonstration Exercise; 2 - Subject Consent; 3 - Design task experiments.
Figure 5.5 gives a graphical overview of the workflow of a typical experimental session.
Generally the explanation and demonstration of the DS-emulator program and the practice
exercises took a total of about 10 minutes, and the actual experiment, consisting of multiple
design tasks, generally took anywhere from 45 minutes to over 2 hours in some cases. Upon
completion of the experiment the subjects were interviewed by the researcher and asked to
comment on the experiment in general, their perceptions of the difficulty and complexity of the
various tasks, and the type of solution strategy they developed and employed, if any, to help them
complete the design tasks.
Subjects were permitted to pause at any time, and functionality was placed in the
program to allow the test program to be paused between different design task problems without
affecting the overall results. In compliance with COUHES regulations, the subjects were also
allowed to cease participation in the experiment at any time, and for any reason, without penalty.
Subjects were paid US$15 for their efforts regardless of whether or not they actually finished all
the design problems in the experiment. All subjects were drawn from the MIT community.
5.3 PRELIMINARY EXPERIMENTS AND PROTOTYPE DESIGN MATRICES
To get a basic idea of what matrix parameters might be important factors governing
completion time for the surrogate design tasks, a preliminary set of design matrices was developed
and embedded in the DS-emulator platform to test some hypotheses. These matrices ranged in
size from 2 x 2 to 4 x 4 , and were populated by randomly generated coefficients between 0 and
1.
To ensure that the output variables of the system used the maximum range of the output
gauges without ever exceeding the limits of the display, the matrices were multiplied by an
appropriate scaling factor. A single solution was generated in the same random manner for each
design problem matrix.
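A minimal MATLAB sketch of how such a preliminary matrix might be generated and scaled is shown below; the gauge limit and the exact scaling rule are assumptions made for illustration, not the procedure actually coded into the program.

% Generate a random n x n design matrix with coefficients in (0,1),
% then scale it so the outputs at the random solution fit the gauges.
n          = 3;
A          = rand(n);               % random coefficients between 0 and 1
xSolution  = rand(n, 1);            % randomly generated solution point
gaugeLimit = 1;                     % assumed maximum gauge reading
scale      = gaugeLimit / max(abs(A * xSolution));
A          = scale * A;             % scaled design matrix used in the task
yTarget    = A * xSolution;         % gauge targets presented to the subject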
The intent of the first set of experiments was to get a basic idea of how the size, fullness
(i.e. the number of non-zero coefficients in a matrix, also often called the "sparseness"), and
relative strength of the diagonal and off-diagonal coefficients of the linear system affected the
amount of time subjects needed to find its solution to within the given tolerance range of ± 2.5%.
To explore these factors a number of different matrices of varying size and fullness were
generated in order to gather enough data to establish trends for task completion time as a function
of both fullness and system size n . Matrices were also created that had stronger coefficients on
the diagonal than the off diagonal, and vice versa, to explore how the relative strength of variable
coupling within a system affected the time required for subjects to solve the design problem. In all,
subjects were presented with 24 unique parameter design problems in the first set of experiments,
composed of 8 separate experiments each for 2 x 2, 3 x 3, and 4 x 4 linear systems. Two of the
2 x 2 experiments, as stated earlier, were used as training exercises. Several ill-conditioned
matrices were also included in the proof-of-concept experiments.
5.3.1 WHAT WAS LEARNED FROM THE FIRST MODEL
It was immediately evident from the preliminary experimental data that fully coupled
systems were more difficult for human subjects to cope with than diagonal systems, and that the
difficulty of fully coupled systems grew more rapidly as a function of system size than that of
diagonal systems. Though the exact relationship was unclear, uncoupled system solution time
appeared to grow linearly with system size while fully coupled system completion time seemed to
follow a more unfavorable linear, polynomial, or exponential growth rate. Task completion time
vs. design problem matrix size is shown for both uncoupled and fully coupled systems in Figure
5.6.
FIGURE 5.6 - Uncoupled vs. Fully Coupled task completion time results for the preliminary set of experiments. The plot shows average normalized completion time versus matrix size n for uncoupled and fully coupled matrices, with 95% confidence intervals.
The preliminary experiments, which were meant to determine the importance of coupling
strength, matrix sparseness, and system size to the amount of time required to find a solution to
the design problem, yielded somewhat imprecise results as to the exact trends involved.
However, it was evident that the degree of coupling, matrix fullness, and matrix size all correlated
to some extent with the time required for subjects to find the solution to the system, with matrix
size being the strongest factor. It was also clear that the scaling law governing human
performance on fully coupled matrices was less favorable than that for uncoupled design
problems.
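One way such competing scaling hypotheses can be compared is by fitting linear and exponential trends to the mean completion times, as in the hypothetical MATLAB fragment below; the data values are placeholders, not experimental results.

% Compare linear and exponential fits to (placeholder) mean completion
% times as a function of design problem size n.
n     = (2:4)';                      % matrix sizes used in the pilot study
tMean = [1.0; 2.1; 4.3];             % placeholder mean completion times
pLin  = polyfit(n, tMean, 1);        % linear model: t ~ c1*n + c0
pExp  = polyfit(n, log(tMean), 1);   % exponential model: log(t) ~ k*n + b
tLin  = polyval(pLin, n);
tExp  = exp(polyval(pExp, n));
ssLin = sum((tMean - tLin).^2);      % residual sums of squares;
ssExp = sum((tMean - tExp).^2);      % the smaller value indicates the better fit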
One apparent drawback to the initial pilot study was that the matrices used to simulate the
design systems in the parameter design task were not developed in a sufficiently rigorous manner.
Although attention was given to their basic structure, and to the extent and type of coupling
present, a mathematically precise system was not employed for developing the numerical
coefficients in the design systems. It is likely that this had some effect on the results in the
preliminary experiments, and may have been a reason for the lack of a clear completion time
trend for either the full or uncoupled matrix experiments. A more quantitative method of
generating test matrices would also have facilitated establishing how time to completion of
various matrix types varied with the degree of variable coupling, which was also an important
goal of the research project. In addition, more carefully developed test matrices would allow a
trend to be clearly associated with matrix characteristics alone and minimize the likelihood of
inadvertent effects from carelessly generated matrices confounding the experiments.
Full or diagonal matrices can, as I will explain in the following sections, be created so
that even though they vary in size they are in fact quite similar in a structural and mathematical
sense. However, this generally cannot be done with sparse matrices. Except for a few special
cases there is simply no way for a 2 x 2 sparse matrix to be identical in structure to a 5 x 5
matrix. Though sparse matrices do often occur in engineering analysis problems, like those
encountered in finite element calculations for example, they are often quite large in such
instances and the methods useful for characterizing large sparse matrices are not meaningful for
small matrices. Although the results yielded by the preliminary investigation into the effects of
matrix fullness on task completion time were quite interesting, and are discussed in detail in
Chapter 6, it was decided for this reason not to pursue the issue further in the primary set of
experiments.
In any event, many sparse systems actually encountered in real-world product
development activities, such as unordered DSM's and some parameter design problems, can be
reordered into approximately triangular form with a few relatively small fully coupled blocks
protruding above the diagonal. Also, in many cases a product's design parameters might be
uncoupled or lightly coupled, with just a few complex custom-designed parts having highly
interrelated design parameters described by fully coupled blocks in the DSM. So, while sparse
matrices may represent a unique and difficult challenge to the designer, the existence of heuristic
and mathematical methods to eliminate them in practice, by reordering them to lower triangular
matrices containing smaller fully coupled blocks, made them a less attractive subject from a
practical standpoint for study using the parameter design task simulator.
5.4 PRIMARY EXPERIMENTS: A REFINED SYSTEM REPRESENTATION
The factors described above indicated that restricting the experimental investigation to
diagonal and fully coupled matrices of various types would both permit increased rigor and
probably be more fruitful. Since the preliminary experiments indicated that a more exacting
method of developing and characterizing the test matrices was needed, a goal became the
development of such full and diagonal square matrices for use in the primary set of experiments
along with quantitative metrics describing their important attributes. These metrics would make
it possible to create matrices of different sizes that varied only in certain characteristics of
particular interest. Results from various sets of experimental matrices could then be compared to
uncover how different types of matrices affected the outcome of the experiments in terms of task
completion time and other factors.
5.4.1 DESIRABLE DESIGN SYSTEM CHARACTERISTICS
Fundamental principles of linear algebra were used to develop an improved set of test
matrices for the primary set of experiments. The goal was to create a population of matrices with
a consistent fundamental structure and having well-defined characteristics that could be usefully
manipulated without undermining this basic structure. This was achieved by ensuring that all full
matrices were orthonormal, nonsingular, well conditioned, and had a standardized Euclidean
Norm, and by using variants of the identity matrix to model all uncoupled design problems.
Mathematical descriptions of these matrix characteristics and their significance to the experiment
are given in the following subsections, along with a detailed explanation of how the test matrices
and descriptive metrics were actually generated.
5.4.1.1 MATRIX CONDITION
For a linear system, Ax = b, such as those used to model the parameter
design problem in the surrogate model, the matrix condition, c(A), reflects the sensitivity of the
system solution, x, to small perturbations in the values of A or b. The condition of a system
can be measured as the ratio of the largest to smallest eigenvalues of the coefficient matrix A,

c(A) = \frac{|\lambda_{max}|}{|\lambda_{min}|},

or, equivalently, through the relation

\frac{|\Delta x|}{|x|} \le c(A)\,\frac{|\Delta b|}{|b|}.

If a system is ill-conditioned and c(A) >> 1, then small relative errors or changes in A
or b result in disproportionately large effects on

\Delta x = A^{-1}\,\Delta b,

yielding an unstable solution to the linear system [Strang 1986].
Matrices used in the primary set of design task experiments were generated in such a way
that they all had c(A) = 1. This assured both the stability of the system and also that the
experimental GUI was able to produce a well-scaled display of the output of the system
representing the design task for all allowed numerical values of x, the input variable controlled
by the human subject.
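As a brief numerical illustration (a minimal sketch in Python using numpy, not part of the DS-emulator software; the matrices shown are hypothetical), the condition number of a candidate design matrix can be checked directly, and a rotation-based orthonormal matrix does indeed have c(A) = 1 while a nearly singular matrix does not:

```python
import numpy as np

# A 2 x 2 rotation matrix (orthonormal): its condition number is 1.
theta = 0.6
A_rot = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

# A nearly singular matrix for comparison.
A_bad = np.array([[1.0, 1.0],
                  [1.0, 1.0001]])

print(np.linalg.cond(A_rot))   # ~1.0   -> well conditioned, stable solution
print(np.linalg.cond(A_bad))   # ~4e4   -> small changes in b produce large changes in x
```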
5.4.1.2 NONSINGULARITY
By assuring that the n x n design system matrices, An, were nonsingular, a number of
desirable characteristics were enforced [Edwards, Jr. 1988]. As a direct result of this restriction,
all design problem models were linear and row-equivalent to the n x n identity matrix. In other
words, the coefficient matrix for the design task, An, could be turned into the identity matrix I
through a finite sequence of elementary row operations. More importantly, for the design task
surrogate experiments at least, nonsingularity also implied that an inverse, A^{-1}, existed for the
n x n design system matrix An, that Ax = 0 had only the trivial solution x = 0, and that for every
solution vector b, the system Ax = b had a unique solution.
5.4.1.3 ORTHONORMALITY
The design matrices, An, used in the primary experiments were also prepared so that
they were orthonormal, composed of mutually orthogonal column vectors all having a Euclidean
norm equal to 1. Two column vectors, u and v, are orthogonal if the vector dot product

u \cdot v = u_1 v_1 + u_2 v_2 + \dots + u_n v_n = 0.
This step assured that all column vectors were perpendicular, linearly independent, and
guaranteed a consistent underlying structure for all the matrices.
The Euclidean norm of a vector u is in essence the length of the vector, and is defined as

|u| = (u \cdot u)^{1/2} = (u_1^2 + u_2^2 + \dots + u_n^2)^{1/2}.
By applying the restriction that |u| = 1 for all column vectors of the design matrices, all input
variables were as a result well balanced with respect to one another, with none having a
disproportionate effect on the output of the system. Although an individual input variable might
have a large effect on one particular output variable, and a much smaller effect on the others,
orthonormality and a consistent Euclidean norm ensured that the influence each input variable
had on the overall output of the linear system was well balanced with respect to that of the other
inputs.
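These orthonormality conditions are easy to verify numerically; the short sketch below (illustrative only, using numpy rather than the actual matrix generation tool) checks the column dot products, the column norms, and the equivalent matrix statement Q^T Q = I.

```python
import numpy as np

theta = 0.8
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.dot(Q[:, 0], Q[:, 1]))          # dot product of distinct columns: ~0
print(np.linalg.norm(Q, axis=0))         # Euclidean norm of each column: [1. 1.]
print(np.allclose(Q.T @ Q, np.eye(2)))   # matrix-level check Q^T Q = I: True
```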
5.4.2 CHARACTERIZATION OF SPECIFIC DESIGN TASK MATRICES
A number of metrics were also developed that allowed important features of the test
matrices to be characterized in a quantitative way. These metrics were used as objective
functions during the generation of the matrices and also, upon completion of the experiments, to
explore how the various matrix characteristics in question correlated with the resulting empirical
data.
5.4.2.1 MATRIX TRACE
Of particular importance was formulating a metric to measure the strength of coupling in
the design system matrices. To do this, a ratio of the absolute value of the matrix trace to the
matrix 2-norm of the design matrices, An, described by the equation

t(A_n) = \frac{\left| \sum_{i=1}^{n} a_{ii} \right|}{\left( \sum_{i,j=1}^{n} a_{ij}^{2} \right)^{1/2}},

was developed and used.
This metric quantifies the coupling strength of coefficients in the An matrix by
measuring the relative magnitude of the diagonal elements of the matrix compared with that of
the off-diagonal elements. The higher the value for t(An), called simply the "trace" from now
on, the stronger the diagonal elements of the design matrix are compared to the off-diagonals.
For a given n x n orthonormal matrix with columns of Euclidean norm 1, the maximum value t(An)
can have is \sqrt{n}.
This metric was also explored in a somewhat more standard form by using the actual
matrix trace rather than the absolute value of the trace in the numerator. However, in this form
the metric turned out to be misleading since it permitted the numerator to be small, or even zero,
in cases when the coupling of coefficients was in fact quite pronounced. So, though the metric
has been called the "trace" for simplicity's sake, it is worth remembering that it is not in fact the
trace. The matrix trace merely appears in a slightly altered form in the numerator of the metric.
5.4.2.2 MATRIX BALANCE
Another metric developed to classify the linear systems was the matrix "balance." This
measured the relative number of positive coefficients and negative coefficients in the matrix,
normalized by the total number of matrix elements. The matrix balance is described by the
following equation:
b(A_n) = \frac{(\text{number of positive elements}) - (\text{number of negative elements})}{n^2}
Since the absolute value of the trace was used in the "trace" equation t(An) described
previously, the balance equation was necessary to characterize the overall sign of the matrix
coefficients. Because the preliminary experiments gave some indication that a negative
correlation between input and output variables might be more difficult for subjects to manage,
the "balance" metric was developed to quantify this characteristic so that its effect on design task
difficulty could be evaluated. This metric was only useful for full matrices.
5.4.2.3 MATRIX FULLNESS
In order to explore how the "fullness" of the design problem matrices affected the
outcome of the experiments another metric was developed to measure this important variable.
There are many ways to characterize the structure of non-full, or "sparse," matrices (e.g. "band-symmetric," "block triangular," etc.), and these classifications have a great deal of significance
for large systems of equations like those used in finite element modeling [Tewarson 1973].
Unfortunately, these classifications are not really meaningful for small matrices such as those
used in this study. Rather than attempting to develop ways to exhaustively characterize the
sparseness of the small matrices used to simulate the design task, and to help make the number of
test matrices more manageable, a single metric was allowed to stand alone as a measure of this
aspect of matrix structure.
The degree of "fullness" of a matrix can be described by the ratio of the number of
nonzero elements to the total number of elements
f(A_n) = \frac{\text{number of nonzero elements}}{n^2}.
This metric is arranged so that a "fullness" of 1 indicates that the matrix is full, and as the
number of zeros in the matrix increases the numerical value of the metric decreases. Small
fullness values, i.e. f(An ) <<1, indicate that the matrix is sparse. As noted earlier in this
chapter, the effects of matrix fullness were only investigated in the preliminary set of
experiments. However, this will be discussed along with the rest of the significant results in
Chapter 6.
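The three characterization metrics translate directly into code. The helper functions below are a sketch written for illustration (the names are my own, and the routines are not taken from the Excel tool chain actually used); they implement the "trace," "balance," and "fullness" definitions of Sections 5.4.2.1 through 5.4.2.3.

```python
import numpy as np

def trace_metric(A):
    """|trace| divided by the root-sum-of-squares of all elements (the "2-norm" of Section 5.4.2.1)."""
    return abs(np.trace(A)) / np.sqrt(np.sum(A ** 2))

def balance_metric(A):
    """(number of positive elements - number of negative elements) / n^2."""
    n = A.shape[0]
    return (np.sum(A > 0) - np.sum(A < 0)) / n ** 2

def fullness_metric(A):
    """(number of nonzero elements) / n^2; a value of 1 means the matrix is full."""
    n = A.shape[0]
    return np.count_nonzero(A) / n ** 2

I3 = np.eye(3)   # a 3 x 3 uncoupled (Series F style) design matrix
print(trace_metric(I3), balance_metric(I3), fullness_metric(I3))
# trace = sqrt(3) ~ 1.73 and balance = 0.33, as in the Series F row of Figure 5.7
```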
5.4.3 DESIGN TASK MATRIX GENERATION
The method developed and employed to generate the design matrices used for this
experiment ensured automatically that they would embody all the desired characteristics listed in
Section 5.4.1. All design system matrices began as n x n identity matrices, I, a form which has
an inherently nonsingular and orthonormal structure. Rotating the basis vectors of the identity
matrix using the rotational transform
T(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}

produced full matrices for 0 < \theta < 1. This transformation merely rotates the n-dimensional
basis vectors of the matrix about the origin. So, all the desirable characteristics of the original
identity matrix were preserved through the transformation from diagonal matrix to full matrix.
For example, basic fully coupled 3 x 3 design problem matrices were constructed using
the formula

A_3 = [I_{T_n}]
\begin{bmatrix} \cos\theta_1 & -\sin\theta_1 & 0 \\ \sin\theta_1 & \cos\theta_1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_2 & -\sin\theta_2 \\ 0 & \sin\theta_2 & \cos\theta_2 \end{bmatrix}
\begin{bmatrix} \cos\theta_3 & 0 & -\sin\theta_3 \\ 0 & 1 & 0 \\ \sin\theta_3 & 0 & \cos\theta_3 \end{bmatrix},

where I_{T_n} is some form of the n x n identity matrix for which rows and columns have been
reordered or coefficients changed from +1 to -1. For instance,

\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & -1 & 0 \end{bmatrix}
are some examples of such matrices, but many more combinations are possible. Similar formulas
were used for design matrices of other sizes.
Varying the angles of rotation, \theta_1, ..., \theta_n, and the reordering of the identity matrices, I_{T_n},
allowed systems with various coefficient characteristics to be created. An Excel spreadsheet
program was developed that generated matrices with specific traits by manipulating the free
variables described above subject to certain constraint parameters based on the descriptive
metrics discussed in Sections 5.4.1 and 5.4.2. All fully coupled design matrices used in the
primary round of experiments were generated in this manner. The identity matrix, which formed
the basis for the full matrices, was used to represent all uncoupled systems.
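A minimal sketch of this construction (a stand-in for the Excel spreadsheet tool, with freely chosen angles rather than the constraint-driven search described above; the helper function is hypothetical) shows that a product of a reordered identity and planar rotations stays orthonormal and well conditioned:

```python
import numpy as np

def plane_rotation(n, i, j, theta):
    """n x n identity with a rotation by theta embedded in the (i, j) coordinate plane."""
    R = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = c
    R[j, j] = c
    R[i, j] = -s
    R[j, i] = s
    return R

# One allowed form of I_Tn: the identity with its last two rows interchanged.
I_T = np.array([[1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0],
                [0.0, 1.0, 0.0]])

# A_3 built as I_Tn times rotations in the (1,2), (2,3), and (1,3) planes.
A3 = I_T @ plane_rotation(3, 0, 1, 0.4) \
         @ plane_rotation(3, 1, 2, 0.7) \
         @ plane_rotation(3, 0, 2, 0.3)

print(np.allclose(A3.T @ A3, np.eye(3)))   # True: orthonormality is preserved
print(np.linalg.cond(A3))                   # ~1.0: the matrix remains well conditioned
```

Varying the angles and the choice of I_Tn changes the trace and balance metrics while the orthonormal structure, and hence the conditioning, is preserved automatically.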
5.4.4 A DESCRIPTION OF THE PRIMARY EXPERIMENTS
The primary set of experiments conducted with the DS-emulator platform was designed
to explore the following two factors:
* For various types of full and diagonal matrices, how does matrix size affect the difficulty
posed by the design surrogate task to human subjects?
* For full matrices, how do coupling strength and matrix balance affect the difficulty of the
surrogate design task?
Using the metrics and the Excel spreadsheet tool described in Section 5.4.3, a set of 17
matrices was developed to explore these factors. The test matrices were carefully evaluated to
ensure that the output variables used the full range of the display gauge on the Task GUI without
going out of bounds. The characteristics of the matrices developed for this set of experiments are
tabulated in terms of the metrics previously discussed in Figure 5.7.
The set of test matrices was divided into five separate series, with Series B, C, D, and E
composed entirely of full matrices and Series F containing only diagonal matrices. Series C, D,
and E were set up to explore how the coupling strength of the matrices would affect the difficulty
of the design task. In these matrices the "trace" was varied while other factors were held
constant. Series B and C, which differed only in terms of the matrix "balance" metric, could be
compared to determine if a higher proportion of negative coefficients in the design problem
matrix had an adverse effect on the subject's performance. Series C and F formed the core of the
experiment and were designed to explore the differences in subject performance for fully coupled
and uncoupled design tasks. These two series of experimental matrices were also intended to
uncover how performance varied with matrix size, and for this reason were the only series of
matrices explored through n = 5.
Matrix Series   n   (trace)/2-norm   abs(trace)/2-norm   Balance
B               2        -1.03              1.03           -0.50
B               3        -1.00              1.00           -0.56
B               4        -1.00              1.00           -0.50
C               2         1.08              1.08            0.50
C               3         1.00              1.00            0.56
C               4         1.00              1.00            0.50
C               5         1.00              1.00            0.52
D               2         1.25              1.25            0.50
D               3         1.25              1.25            0.56
D               4         1.25              1.25            0.50
E               2         0.75              0.75            0.50
E               3         0.75              0.75            0.56
E               4         0.75              0.75            0.50
F               2         1.41              1.41            0.50
F               3         1.73              1.73            0.33
F               4         2.00              2.00            0.25
F               5         2.24              2.24            0.20

FIGURE 5.7 - Experimental matrix characteristics.
Diagonal matrices used in the experiment, since they were identity matrices with an
invariable structure, were restricted to "trace" values of t(A_n) = \sqrt{n}, while their "balance"
ranged from 0.2 \le b(A_n) \le 0.5. Thus, it was not possible to create diagonal matrices that varied
only in size and no other characteristic, as was possible with full matrices, but the range of
variation as characterized by the metrics was fairly minimal.
In order to improve the experimental procedure, several other important changes were
made in the DS-emulator experimental platform itself. First, the program was adjusted so that it
presented the design matrices to the subjects in a randomly selected order that was changed for
each participant. This was done to avoid inadvertently training the subjects by progressing from
"easy" to "hard" systems and to minimize any other possible effects a consistent design task order
might have had on the outcome of the experiments. In addition, a solution vector with an
Euclidean norm of 1 was selected at random for each design problem from a set of twenty-five
unique vectors created for each different size of design matrix. This procedure improved the
experiment in several ways. Since the solution vectors were normalized, the overall distance
between the starting point of the output variables (always at the origin of each gauge on the Task
GUI, which ranged from -20 to +20) to the solution position (i.e. the target range) would be the
same, in a least-squares sense, for each experiment involving a given design matrix. This created
a degree of consistency amongst all the solutions. Also, randomizing the selection of the solution
vector to some extent, hence randomizing the position of the target ranges in the Task GUI,
reduced the likelihood of a particularly easy or difficult solution recurring consistently and
affecting the experimental results.
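A sketch of the normalization step for the solution vectors (the twenty-five vectors actually used per matrix size were pre-generated; the sampling below is only an illustration of how a unit-norm vector can be drawn):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vector(n):
    """Draw a random direction in R^n and scale it to Euclidean norm 1."""
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v)

v = random_unit_vector(4)
print(v, np.linalg.norm(v))   # the norm is 1, so every target sits the same
                              # least-squares distance from the gauge origins
```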
With the help of twelve human subjects, the second set of experiments was conducted
with the DS-emulator platform following an experimental procedure identical to that of the
preliminary set of experiments, described earlier in Section 5.3 of this chapter. As before the
program recorded the position of the input and output variables of the design system after each
move, or "operation," the subject made during the experiment, as well as the time at which it
occurred. Subjects were also interviewed briefly upon completion of the experiment, just as in
the preliminary investigation.
Although the majority of subjects completed all or most of the test matrices presented to
them by the program, a few gave up in frustration while trying to solve the 4 x 4 and 5 x 5 fully
coupled systems. As a result, after just over half of the subjects had completed the experiment
using the entire set of 17 design matrices described in Figure 5.7, the size of the experiment was
reduced to include only the C and F series of test matrices. This was done mainly to shorten the
time required to complete the prescribed tasks and to ensure that enough subjects completed the
most significant design problems. Also, some individuals had taken well over two hours to finish
the full set of 17 experiments and it was felt that this was simply too long a time to be certain that
the subjects were giving the experiment their full attention. Results from these experiments are
presented in Chapter 6.
An additional set of tests was conducted as part of the primary experiments to explore
further how learning and experience in the problem domain might affect the time required to
complete the design task. In this study five subjects repeatedly solved the 2 x 2 fully coupled
Series C design problem, each time with a different randomly selected normalized solution
vector. Completion time was recorded for each repetition of the experiment in order to ascertain
how it varied with increased experience and familiarity with the design task. The results from
this set of experiments are also discussed in the next section of the thesis.
6 RESULTS OF THE DESIGN TASK SURROGATE EXPERIMENT
Experiments carried out with the cooperation of test subjects using the DS-emulator
platform yielded a number of interesting results pertaining to how design problem size, structure,
and complexity impacted performance of the design task surrogate. This section will present the
results from the primary set of experiments in a way that shows how they are related to the
general study of the design process and in particular how they pertain to the engineering design
and product development process. The data will also be examined and discussed with respect to
what is understood about human information processing and problem solving from a cognitive
science perspective.
Before the results are set forth it is worth reviewing briefly exactly what the design
process surrogate experiments were designed to achieve. The primary goals were to explore how
system size and other characteristics affect the difficulty of finding a solution to the design
problem and to allow comparisons to be made between the experimental results and theories from
cognitive science about human information processing and problem solving.
6.1 PRELIMINARY DATA ANALYSIS AND NORMALIZATION OF THE DATA
A quick glance at the data from the twelve subjects who participated in the primary
investigation revealed that there was a great deal of variation in how long it took subjects to
complete the set of tasks. Some were able to complete all 17 design problems administered in the
experiment within about 45 minutes, while other subjects took nearly three times longer. This
variation seemed to be due mainly to differences in the pace at which people naturally worked
and, of course, some people are just faster workers than others.
So, to reduce this variation it was decided to normalize the task completion time results
for each subject. In order to do this the completion time result from one test, the 2 x 2 fully
coupled Series C matrix, was used as a normalizing factor; the time-to-completion results for all
tests for a particular experimental subject were then divided by that subject's completion time for
the 2 x 2 Series C matrix. Since the goal of this research was to suggest general scaling laws
based on problem size, structure, and complexity, rather than to actually develop a predictive
model of task completion time, normalization of the data was the most sensible way to both
reduce spurious variation and improve the generality of the results.
Throughout this thesis I have tried to be quite precise about whether I am discussing the
normalized completion time for an experiment, which is most often the case, or the actual
completion time, which I mention less frequently.
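The normalization itself is a simple per-subject division; a sketch with hypothetical completion times (in seconds) rather than the actual experimental records:

```python
# Hypothetical raw completion times (seconds) for one subject, keyed by task.
raw_times = {"C2": 95.0, "C3": 260.0, "C4": 810.0, "F2": 40.0, "F3": 65.0}

# Divide every result by this subject's own time on the 2 x 2 Series C matrix.
reference = raw_times["C2"]
normalized = {task: t / reference for task, t in raw_times.items()}

print(normalized)   # the 2 x 2 Series C entry becomes 1.0; all others are relative to it
```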
6.2 DOMINANT FACTORS IN DESIGN TASK COMPLETION TIME
Along with the implications of design problem size, analogous to matrix size n in the
DS-emulator design task model, the effect of a number of other variables on experimental
completion time were explored with the software platform. These variables were introduced and
discussed in some detail in Section 5.4.2, but a brief reminder is presented here. In no particular
order, the complete list of metrics developed to classify the design system matrices consisted of:
* "Trace," t(An ); a ratio comparing the magnitude of diagonal elements to the magnitude of
off-diagonal elements in the design matrices. A "trace" value larger than one indicates that
the diagonal elements are "stronger" (i.e. greater in magnitude) than the off-diagonals. If
the trace is less than one, the opposite is true.
* "Balance," b(An); compares the number of positive and negative coefficients in the
matrix. A negative value for the matrix balance indicates that there are more negative than
positive coefficients present.
" "Size," n ; indicates the size of the n x n design matrix An .
"
"Fullness," f(An); measures the number of non-zero coefficients in the matrix. As matrix
fullness decreases, the matrix becomes increasingly sparse.
Variations in task completion time due to trace, balance and size were investigated in the
primary set of experiments, while the significance of matrix fullness was studied through the
preliminary set of tests briefly discussed in Section 5.3. The issue of matrix fullness will be
revisited later on in Section 6.2.2 of this chapter.
6.2.1 MATRIX SIZE, TRACE, AND BALANCE
Average normalized completion time data for all five series of test matrices (B, C, D, E,
and F) used in the primary set of experiments is shown in Figure 6.1. The significance of matrix
attributes such as size, trace, and balance on task completion time was explored by using
regression techniques to analyze this experimental data.
FIGURE 6.1 - Completion time data for all matrix types used in the design task surrogate experiments. [Plot: average normalized completion time vs. n (matrix size) for fully coupled Series B, C, D, and E and uncoupled Series F.]
The analysis process began with generating normalized completion time data for design
matrix types B, C, D, E, and F from the experimental results stored by the DS-emulator program
in its database. Next, multiple parameter linear regression was used to explore the various matrix
characterization metrics and evaluate the significance of each in terms of its effect on the ability
of the regression model's ability to predict the linearized average normalized completion time
data. A combinatorial approach was used to determine the significant independent variables that
contributed to the most accurate model of the surrogate task completion time results.
After carefully evaluating the data in this manner it was discovered that matrix size and
trace appeared to have a statistically significant impact on the time required for subjects to
complete each design problem. Matrix size was the dominant factor while the matrix trace
seemed to have a lesser, though still significant, impact on normalized task completion time. The
regression analysis uncovered a strong positive correlation between normalized completion time
and matrix size and a strong negative correlation between matrix trace and completion time.
These findings agree with the assumption that as a matrix becomes less coupled, and thus more
nearly diagonal in structure, the system it represents should be increasingly easy for the
designer to manage. Although in post-experiment interviews many subjects reported having
difficulty with systems in which many inputs and outputs were negatively correlated, the matrix
balance, which quantified this characteristic, actually appeared to show little or no correlation
with design task completion time when the results were analyzed.
The best-fit model using the entire set of matrices explored in the primary experiments,
including the uncoupled Series F, fit the linearized time to completion data to matrix size n and
matrix trace t(An) parameters and did not include matrix balance. Although some models
evaluated included the balance variable, b(An), and showed a slightly better fit in terms of
R-square and adjusted R-square values, the probability that the coefficient of the balance was
actually zero was too high for the model to be presumed accurate.
The best regression model developed had an R-square value of 0.892 and an adjusted
R-square value of 0.876. Analysis of the regression model's residuals indicated that they were
randomly distributed and equally dispersed. Normal probability plots showed that the data was
likely to have come from a population that was normally distributed.
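The regression step is ordinary multiple linear regression of the linearized completion time on matrix size n and trace t(An). The sketch below uses numpy's least-squares routine with made-up response values standing in for the experimental averages, so the printed coefficients are illustrative only:

```python
import numpy as np

# Predictors for a subset of test matrices: size n and trace t(An) (Series C, D, E style values).
n_vals = np.array([2, 3, 4, 5, 2, 3, 4, 2, 3, 4], dtype=float)
trace  = np.array([1.08, 1.0, 1.0, 1.0, 1.25, 1.25, 1.25, 0.75, 0.75, 0.75])

# Hypothetical linearized (log) average normalized completion times.
log_time = np.array([0.0, 1.1, 2.3, 3.5, -0.1, 0.9, 2.1, 0.2, 1.3, 2.6])

# Design matrix with an intercept column; solve by least squares.
X = np.column_stack([np.ones_like(n_vals), n_vals, trace])
coef, _, _, _ = np.linalg.lstsq(X, log_time, rcond=None)

print(coef)   # [intercept, coefficient on n (positive), coefficient on trace (negative)]
```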
6.2.2 MATRIX FULLNESS
The preliminary set of experiments turned up some interesting and somewhat surprising
results about how the fullness of a matrix affected the time required by subjects to complete the
design task. This "fullness" metric, described in more detail in Section 5.4.2.3, was designed to
measure the ratio of non-zero coefficients to the total number of coefficients in a given design
matrix. Although the design tasks were expected to become increasingly difficult for subjects to
manage as the design system matrices approached being entirely full, this turned out not to be the
case. In fact, the most difficult and time-consuming systems for subjects to solve were actually
those that were nearly full, rather than entirely full. This interesting trend can be seen more
clearly in Figure 6.2. As shown in this figure a maximum level of difficulty, as measured by the
amount of time required by subjects to complete the task, was encountered when matrix fullness,
f(An), was somewhere between 0.7 and 0.9, meaning that 70-90% of the coefficients in the
matrix were nonzero.
The reasons for this puzzling result are not entirely clear. Part of the explanation may lie
with the fact that the matrices developed for use in the first set of experiments were not quite as
rigorously characterized and consistent as those used for the primary set of tests. Another factor
may be that the irregular structure of nearly full matrices did actually pose a special challenge to
the subjects. While near-full matrices are almost as coupled as completely full matrices the way
in which the variables are coupled to one another is somewhat irregular. Thus, there is no
guarantee that each input variable will affect all output variables, and keeping track of this added
complexity may place an extra burden on the short-term memories of the subjects as they attempt
to solve the system, thus creating more difficulty and increasing the time necessary to complete
the task. A more thorough investigation is necessary, however, before any conclusive reasons for
this phenomenon can be established.
FIGURE 6.2 - Effects of matrix fullness on normalized completion time. [Plot: log average normalized completion time vs. matrix fullness (1 = full matrix) for 3x3 and 4x4 matrices, with 95% confidence intervals.]
6.3 SCALING OF PROBLEM COMPLETION TIME WITH PROBLEM SIZE
A particularly important goal of the design task surrogate experiments was to compare
how the difficulty of solving the parameter design problem scaled with the size of the problem
and how this scaling law changed depending on whether the design system was fully coupled or
uncoupled. Two sets of matrices included in the experiment, each containing matrices ranging in
size from n = 2 to n = 5, were used to generate data for exploring these trends. One set of four
matrices, Series C, was full and completely coupled, and the other, Series F, diagonal and
uncoupled. The fully coupled Series C matrices were designed so that diagonal and off-diagonal
coefficients had roughly the same effect on the output variables (i.e. the matrix "trace" was
approximately 1.00), and so that coefficients with a positive sign predominated.
After test results from the twelve experimental subjects were collected and analyzed, the
data clearly indicated that full-matrix completion time increased much more rapidly with matrix
size than diagonal matrix completion time. These trends can be seen more clearly in Figure 6.3,
which includes 95% confidence intervals for the data.
FIGURE 6.3 - Scaling for fully coupled and uncoupled parameter design matrices vs. matrix size. [Plot: normalized completion time vs. n (matrix size) for full Series C and uncoupled Series F matrices, with 95% confidence intervals.]
6.3.1 INVESTIGATING THE SCALING LAW
A glance at the experimental results suggested that the trend for the full matrix
experiment probably followed either a polynomial or exponential scaling law while the data from
the diagonal matrix experiments probably scaled linearly with matrix size. To explore which
scaling law was most appropriate for the fully coupled experiment the results were linearized
using the transforms for the exponential case and the polynomial case and then linear regression
was performed on the transformed data to determine a best-fit model. A linear model was
assumed for the uncoupled matrix data and a similar regression analysis procedure was followed
to create a best-fit model.
For a general exponential function,

y = a e^{bx} + c,

the transform

x' = x, \quad y' = \ln y

was used to linearize the experimental data. For a general second-order polynomial model,

y = a x^2 + b x + c,

the transformation

x' = \log x, \quad y' = \log y

provided the appropriate mapping.
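A sketch of the linearize-and-regress procedure for the exponential case, using illustrative completion-time values rather than the actual Series C averages: take logarithms, fit a straight line, then transform the intercept and slope back.

```python
import numpy as np

# Matrix sizes and hypothetical average normalized completion times with a Series C-like trend.
n = np.array([2.0, 3.0, 4.0, 5.0])
t = np.array([1.0, 3.2, 10.5, 35.0])

# Linearize: ln(t) = ln(a) + b * n for the exponential model t = a * exp(b * n).
b, ln_a = np.polyfit(n, np.log(t), 1)   # slope and intercept of the straight-line fit
a = np.exp(ln_a)

print(a, b)                  # back-transformed coefficients of t = a * exp(b * n)
print(a * np.exp(b * n))     # model predictions over n = 2..5
```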
A linear regression showed that the exponential model, which had an adjusted R-square
value of 0.994, fit the experimental results extremely well. The polynomial model fit fairly well
to the fully coupled Series C matrix data, with an adjusted R-square value of 0.945, but was
significantly inferior in fit to the exponential model.
When the best-fit linear model,

y' = \ln a + b x,

was transformed back to the exponential form, the equation describing the relationship between
matrix size, n, and normalized completion time for fully coupled Series C matrices was found to be

y = 0.081 e^{1.213 n}.
The normalized completion time for the full matrix experiments and the best-fit
exponential model can be seen in Figure 6.4.
FIGURE 6.4 - Full matrix completion time and best-fit exponential model. [Plot: average normalized completion time vs. n (matrix size) for the full Series C matrices and the fitted exponential model, with 95% confidence intervals.]
The results from the uncoupled set of matrix experiments fit a standard linear model quite
well, with an adjusted R-square of 0.904 using the average normalized completion time data.
The experimental results and best-fit linear model for the uncoupled Series F system are shown in
Figure 6.5.
FIGURE 6.5 - Uncoupled matrix completion time and best-fit linear model. [Plot: average normalized completion time vs. n (matrix size) for the uncoupled Series F matrices and the fitted linear model, with 95% confidence intervals.]
Interestingly, a nonlinear relationship similar to that just found to exist between matrix
size and design task completion time for fully coupled systems has already been identified for the
human performance of mental mathematical calculations. Many cognitive science researchers
have noted that for subjects performing mental arithmetic, calculation time seems to depend
strongly on the size of the numbers involved and, in fact, correlates well with the product or
square of the digits [Dehaene 1997, Simon 1974]. This scaling law appears to be valid whether
the operation is multiplication, division, addition, or subtraction, and is likely to be due to both
short- and long-term memory constraints and the degree and type of domain-related training
received by the subject [Ashcraft 1992]. Another relevant investigation, in this case conducted in
an operations research context, also supported the results of the design task surrogate
experiments. Here again researchers found that problem size turned out to be the dominant factor
in problem difficulty, this time governing the difficulty human subjects had with a multi-node
distribution network design task [Robinson and Swink 1994]. These topics are discussed in more
detail in Sections 2.3.1 and 3.3.2 of this thesis.
6.4 DESIGN TASK COMPLEXITY
The design task presented to subjects was in many respects the graphical equivalent of a
problem involving finding the solution to a system of linear equations. In this sense the subjects
were required to "invert" the design matrix in their head and then perform "back substitution" in
order to determine an appropriate solution vector for the problem. Because of the similarity of
this task to a common computational activity it seemed worthwhile to use a complexity-based
metric to explore how human ability to "solve" a linear system compared with that of a computer
performing an equivalent task.
Naturally, a number of methods for measuring complexity, including information
theoretic methods such as "mutual information content" and even thermodynamic metrics, are
available [Horgan 1995]. However, the most appropriate metric in this case is the computational
complexity of the problem, which refers to the intrinsic difficulty of finding a solution to a
problem as measured by the time, space, number of operations, or other quantity required for the
solution [Traub 1988]. In the case of the design task experiments a particularly good way to look
at the computational complexity of the problem is to explore the number of operations required
to solve the problem. Comparing the number of mathematical operations required for a
computational algorithm to factor and solve a linear system to the number of "operations," or
iterations, required for a human to perform the same task should give a good idea of the difficulty
of the task for humans relative to computers. This also eliminates the time-dependent aspects of
the comparison, which makes sense since computers will always have the upper hand in this
respect.
A review of computational complexity, rate-of-growth functions, and the mathematics
behind factoring and solving linear systems is provided in the Appendix. The following sections
will assume an understanding of this material, so readers unfamiliar with any of these topics
should read the appendix for a basic introduction to this information.
6.4.1 NUMBER OF OPERATIONS
The first step in evaluating the relative complexity of the design task was determining the
number of operations required for subjects to complete each experiment. An operation, or
"move," was for this purpose defined as a single click by the subject on the "Refresh Plot" button
92
of the Task GUL This action caused the output display on the GUI to be updated and the DSemulator program to record both the positions of the inputs and outputs and the time at which the
subject pressed the "refresh plot" button in the database file created for each subject. A complete
time history of each subject's performance during the experiments was thus accessible for review
and analysis. As mentioned earlier, pressing the "Refresh Plot" button can be likened to iteration
in the actual design process and is also similar to a mathematical operation in a numerical
algorithm.
Using just the records for the Series C experiments, because these experiments contained
results for tests involving full matrices up to a size of n = 5 and were well representative of the general
spectrum of complexity of the design task experiments, the average number of moves required for
each of the four matrices in both data sets was calculated. This data can be seen in Figure 6.6.
FIGURE 6.6 - Average number of operations for Series C fully coupled matrix data. [Plot: average number of operations vs. n (matrix size), with 95% confidence intervals.]
Regression analysis revealed that the number of operations required for human subjects
to solve the fully coupled Series C matrix problems clearly scaled exponentially with problem
size, with an adjusted R-square of 0.972 for the data, much like the full matrix task completion
time data discussed earlier. After transforming the results of the regression equation back into the
exponential domain, the equation describing the relationship between matrix size and operation
count, as measured by the average number of presses of the "Refresh Plot" button by subjects during
each Series C design task, turned out to be

y = 1.898 e^{1.008 n}.
6.4.2 COMPUTATIONAL COMPLEXITY
To evaluate the relative complexity of the parameter design task, the number of
operations required by human subjects to complete the Series C fully coupled matrix tasks was
compared with the theoretical number of mathematical operations required to solve similar linear
systems on a computer. This comparison was based on the assumption that the computer would
use an algorithm employing A = L U factorization of the system followed by Gaussian
elimination to find the solution to the problem.
This efficient, polynomially bounded algorithm requires

\frac{2}{3} n^3 + \frac{5}{2} n^2 - \frac{7}{6} n

operations (multiplications and additions), which means that the rate of growth of the
computational complexity of the problem scales as the cube of the size of the n x n linear system
being analyzed, and is thus O(n^3). If the matrix has already been factored, and only Gaussian
elimination is necessary in order to find a solution, then

2 n^2 - n

operations are required and the complexity is O(n^2).
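To make the comparison concrete, the three operation counts can be tabulated over the experimental range of problem sizes. The sketch below uses the fitted exponential from Section 6.4.1 for the human data and the operation counts quoted above for the numerical algorithm:

```python
import numpy as np

n = np.arange(2, 6)

human      = 1.898 * np.exp(1.008 * n)                   # fitted Series C operation counts
full_solve = (2/3) * n**3 + (5/2) * n**2 - (7/6) * n      # factorization plus elimination
elim_only  = 2 * n**2 - n                                 # elimination on a factored system

for size, h, f, e in zip(n, human.round(1), full_solve, elim_only):
    print(size, h, f, e)
# At n = 5 the human count (~293 moves) already exceeds the ~140 operations of the
# full numerical solution, and the exponential term grows far faster as n increases.
```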
The analysis of the number of operations required by human subjects to complete the
design task problems, discussed in Section 6.4.1, revealed that an exponential scaling law was
likely to govern this behavior. Based on the scaling law calculated from the experimental data
the rate of growth function describing the number of operations required to solve fully coupled
systems as a function of problem size is O(e^{1.008 n}). As discussed in Section 6.4, the time
required to solve a problem can also be used as a measure of its computational complexity [Traub
1988]. In this case the results describing the completion time for the Series C matrix
experiments, analyzed in Section 6.3, would indicate a scaling law of O(e^{1.213 n}) for solving
fully coupled systems. These two rates of growth are quite different, but both indicate an
exponential increase in complexity with problem size.
A comparison of some significant operation count based rates of growth is shown in
Figure 6.7. This plot contrasts the number of operations required for human subjects to complete
the design task with the theoretical number of operations required by a full numerical solution to
the linear system and for just the Gaussian elimination portion of the algorithm. It is clear that
even for systems of a small size, human performance is far inferior to that of an efficient
algorithm that can be run on a computer. Moreover, the complexity of the problem from the
human standpoint mounts rapidly, as evidenced by the exponential rate of growth function that
characterizes the number of operations required to complete the task. As discussed in more detail
in the Appendix, such geometric growth is highly unfavorable, and indicates that from a
computational standpoint the problem will approach intractability as it becomes large.
FIGURE 6.7 - A comparison of the relative complexity of the design task for humans and computers using an efficient algorithm. [Plot: number of operations vs. n (matrix size) for human subjects, O(e^n); the full numerical solution, O(n^3); and back substitution alone, O(n^2); with 95% confidence intervals on the human data.]
It is worthwhile to note the interesting relationship between the complexity of the fully
coupled design task from the standpoint of the human subjects and that of the numerical
algorithm for solving linear systems. For human subjects the scaling law was O(e^{1.008 n}), if
measured by the number of operations needed to complete the design task as a function of
problem size, or O(e^{1.213 n}) if the time required to solve the problem was used as the complexity
metric. Looking at these scaling laws a little more closely, it can be seen that e^{1.008} \approx 2.74 and
e^{1.213} \approx 3.36, so one might reasonably claim that the scaling law for humans solving such fully
coupled systems is roughly O(3^n) while the numerical algorithm scales with problem size as
O(n^3).
6.5 OTHER ASPECTS OF DESIGN TASK PERFORMANCE
This section examines in detail some other significant findings concerning how human
subjects approached solving the problems posed to them by the design task surrogate experiment.
6.5.1 TIME PER OPERATION
Interestingly, analysis of experimental data from Series C and F design tasks indicated
that the average time subjects spent on each operation seemed to have no dependence whatsoever
on the size or complexity of the matrix system the subjects were attempting to solve. For
instance, the overall average time per operation for the fully coupled Series C matrix data was
4.18 seconds per move, with a 95% confidence interval of ± 1.32 seconds, and there was no
apparent correlation with matrix size. For the uncoupled Series F matrices the result was similar,
with an overall average time per move of 3.74 ± 1.15 seconds at a 95% confidence level. The
average amount of time required for each operation as a function of matrix size is shown in
Figures 6.8 and 6.9. Examination of the t-statistic for the slope, \beta, of the best-fit line for these
data sets supported the hypothesis that the slope was actually zero. For the full matrix time per
operation data shown in Figure 6.8, the t-statistic was 1.67, and for the uncoupled matrix data, in
Figure 6.9, the t-statistic was 2.24. In order for the slope of the best-fit line to be assumed to be
non-zero with any degree of confidence the value of the t-statistic should be well over two [John
1990]. As such, it is evident that time per operation was essentially invariant with respect to
matrix size for both the fully coupled and uncoupled design task matrices examined.
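The slope test referred to above is the standard t-statistic for the slope of a simple linear regression; a sketch with hypothetical per-size average time-per-operation values (not the experimental data):

```python
import numpy as np

# Hypothetical average time per operation (seconds) at each matrix size.
n = np.array([2.0, 3.0, 4.0, 5.0])
t_per_op = np.array([4.0, 4.3, 4.1, 4.5])

slope, intercept = np.polyfit(n, t_per_op, 1)
residuals = t_per_op - (slope * n + intercept)
s2 = np.sum(residuals ** 2) / (len(n) - 2)               # residual variance
se_slope = np.sqrt(s2 / np.sum((n - n.mean()) ** 2))     # standard error of the slope

print(slope / se_slope)   # t-statistic; a value well below two cannot rule out a zero slope
```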
FIGURE 6.8 - Average time per operation for Series C experiments. [Plot: average time per operation (seconds) vs. n (matrix size), with 95% confidence intervals.]
FIGURE 6.9 - Average time per operation for Series F experiments. [Plot: average time per operation (seconds) vs. n (matrix size), with 95% confidence intervals.]
As discussed earlier in Section 6.4.1, the number of operations required for
subjects to complete the fully coupled Series C matrix experiments scaled exponentially with
system size. Similarly, regression analysis showed that the number of operations required for
subjects to solve the uncoupled Series F matrices followed a linear trend with respect to matrix
size, with an adjusted R-square of 0.855 for the best-fit line. This trend can be seen in Figure
6.10.
FIGURE 6.10 - Average number of operations for Series F uncoupled matrix experiments. [Plot: average number of operations vs. n (matrix size), with 95% confidence intervals.]
In essence, this means that the increase in the time required for subjects to complete the
design tasks as they became larger seems to be driven almost entirely by the number of operations
required, rather than the subject's need to think more carefully (i.e. for a longer period of time)
about more complex or larger problems.
At first glance this seems to indicate that the subjects might have been trying to complete
the tasks by a random search method. However, for a number of reasons I'd like to argue that
this is not the case. First of all, three or four seconds is in fact a great deal of time to think about
a problem when the information processing speed of the human mind is considered. As discussed
in Chapter 2, a basic information processing task can be completed by the mind in about 40
milliseconds, and even something as complex as a comparison of digit magnitude or a simple
mathematical calculation takes only a few hundred milliseconds [Dehaene 1997].
Another reason for the invariance of operation time probably lies with the volatility of the
short-term memory. Information retained in the STM only lasts for a few seconds before it is
forgotten, so if a subject wanted to store information about the design task's current state, change
the value of the input variables, and then refresh the display to capture a "derivative" of the
system's performance, it would have to be done rather quickly regardless of the design system's
size in order to avoid forgetting the information.
A final reason for my contention that random search was not exclusively or even
primarily employed as a solution method by the experimental subjects lies buried in the data files
that recorded every single move each subject made as he or she worked with the task surrogate
program. As I'll demonstrate in the next few sections, most subjects who participated in the
experiments, and all of those who performed well, did not use random search strategy but in fact
carefully evaluated the system's behavior in order to converge more rapidly on a solution.
6.5.2 GENERAL STRATEGIES FOR PROBLEM SOLUTION
During each experimental session the DS-emulator program monitored and recorded the
position of the input variable sliders, as well as the new position of the output variables, each time
the subject pressed the refresh plot button on the Task GUI. This compiled a detailed roadmap of
how each subject interacted with the tasks presented by the design surrogate experiment, creating
a clear picture of the task's state as a function of time. Database records of this type were
compiled for all matrix systems presented to subjects during the primary set of design task
surrogate experiments.
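From records of this kind, the quantity plotted in the figures that follow, the average magnitude of the input-variable change at each operation, is straightforward to compute. The sketch assumes a simple array layout (one row per "Refresh Plot" press, one column per input slider), which is an illustration rather than the actual DS-emulator database schema:

```python
import numpy as np

# Hypothetical move log: rows are successive "Refresh Plot" presses,
# columns are the input slider positions recorded at each press.
rng = np.random.default_rng(1)
input_history = np.cumsum(rng.normal(0.0, 1.0, size=(60, 5)), axis=0)

# Change in each input between consecutive operations, then the mean magnitude per move.
deltas = np.diff(input_history, axis=0)
avg_change = np.abs(deltas).mean(axis=1)

print(avg_change[:5])   # one value per operation, as plotted against operation number
```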
Rather than examining all this data I chose to carefully analyze a subset consisting solely
of the data from the 4 x 4 and 5 x 5 fully coupled matrix experiments from Series C. This was
done for several reasons. First of all, the smaller design problems, like those represented by
matrices of size n = 3 and lower, for instance, tended to be rather easy for the subjects to
complete and could be solved in a few minutes with a small number of operations. So, the data
set describing the system's state over time was not rich with information about the interaction of
the subject and the design task and meaningful trends were not often evident. I also ignored the
uncoupled Series F data sets for this same reason. Even though subjects completed this task
through the 5 x 5 matrix level, its simplicity, and the speed with which such problems were
typically solved, made the detailed results far less informative than those of the more complex
fully coupled systems.
After ranking the subjects' results for the 4 x 4 and 5 x 5 fully coupled Series C
experiments in order of completion time I attempted to identify any traits that correlated with
overall performance on the task, as measured by the time required for the completion of each
design problem. A closer look at this data, consisting of 19 separate tests in total, yielded some very
interesting results and highlighted some clear differences between those who performed
comparatively well on the design task and completed it rapidly and those who did not.
6.5.2.1 GOOD PERFORMERS
Plots of the magnitude of the average change in the input variables vs. operation showed
that subjects who performed well on the design tasks tended to make large changes in the input
variables early on in the experiment. This was followed by a period characterized by much
smaller adjustments to the input variables that continued until the subject converged on a solution
to the design problem. Subjects with the shortest times to completion, for either the 4 x 4 or
5 x 5 fully coupled Series C matrix tests, or both, all followed this pattern. Such behavior can be
seen in Figure 6.11.
FIGURE 6.11 - Average magnitude of input adjustment vs. operation for a subject who performed well. [Plot: average magnitude of input variable change vs. operation number for a 5x5 Series C fully coupled matrix.]
FIGURE 6.12 - Input/output value plots for a single variable. This subject performed well compared to others and solved the problem rapidly. [Two plots: output variable value vs. operation and input variable value vs. operation for a 5x5 Series C fully coupled matrix.]
If the data for single input/output variable pairs for tasks that were completed in a relatively short
time are examined the picture becomes even clearer. An example is presented in Figure 6.12, in
which the lower plot shows the value of one input variable for a 5 x 5 matrix plotted with respect
to operation number and the upper graph plots the value of its associated output variable, also
with respect to operation number. Here, the subject spends some time at the very beginning of
the experiment moving the input variable in question over a wide range of values, possibly to
examine how the system would respond to changes in the input parameter. The flat response of
the input variable from just before operation 10 to just after operation 30 is due to the subject
examining the other 4 input variables one by one in a similar manner. During this time the output
variable in the upper plot still moves because the system is fully coupled. Shortly before
operation 40 the subject moves the specific input variable plotted in Figure 6.12 to an
approximately correct solution position, that places the output variable near its target range, and
then changes its value gradually while converging on an overall solution to the design problem.
The behavior shown in Figures 6.11 and 6.12 was typical of subjects who performed well
on the design task experiments and rapidly solved the problems presented to them.
6.5.2.2 AVERAGE AND POOR PERFORMERS
In contrast, subjects who took longer to complete the tests exhibited very different overall
performance characteristics. Often they would make small or medium-sized adjustments to the
value of the system's input variables at the outset, and would continue on for some time in this
manner before investigating the outer ranges of the input variables. This type of behavior can be
seen clearly in Figure 6.13, and was most characteristic of those subjects whose performance on
the design task experiments was about average as measured by completion time.
The system input variable adjustments made by subjects who performed the worst on the
design task experiments typically displayed non-converging oscillatory characteristics. In these
instances, subjects made continual, large changes to the system input variables, as though they
were hoping to find a solution to the design problem either by sheer luck or by randomly
searching the entire design space. Such behavior caused the system to respond in a like manner,
setting up an oscillatory cycle that did not converge toward a solution rapidly. This trend can be
seen clearly in Figures 6.14 and 6.15.
FIGURE 6.13 - Delayed search of input variable ranges. [Plot: average magnitude of input variable change vs. operation number for a 5x5 Series C fully coupled matrix.]
FIGURE 6.14 - Average input variable adjustment for a subject who performed poorly. [Plot: average magnitude of input variable change vs. operation number for a Series C 5x5 fully coupled matrix.]
Figure 6.14 shows the average magnitude of input variable adjustment plotted vs.
operation number for a subject who performed poorly. The behavior of such poor performers was
characterized, as seen in this figure, by continual large adjustments of the input variables in an
attempt to find a solution to the system.
FIGURE 6.15 - Input/output value plots for a single parameter from a subject who had difficulty with the design task. [Two plots: output variable value vs. operation and input variable value vs. operation for a 5x5 Series C fully coupled matrix.]
Oscillatory behavior can be seen even more clearly in Figure 6.15, which plots the values
for an input variable to the 5 x 5 fully coupled Series C design system and its associated output
variable with respect to operation number, just as in Figure 6.12, except this time for a subject
who required a relatively long time to complete the experimental task. The upper plot of the
figure shows the output variable and the lower plot the input variable.
Unfortunately, this tactic did not seem to pay off, as subjects who attempted this solution
strategy consistently required the most time to solve the design problems. All of the 4 x 4 and
5 x 5 Series C matrix experiments in the slowest 50% by completion time exhibited oscillatory or
near-oscillatory trends.
6.5.3 MENTAL MATHEMATICS AND THE DESIGN TASK
Along with the other input-output dynamics discussed above, I was curious to see if there
was any correlation between the amounts by which subjects changed the value of the system
input variables and the length of time they spent considering their action. Also of interest was
exploring any possible correlations between the magnitude of the subject's input variable changes
and the proximity of the output variables to a solution to the design task. There were several
reasons for investigating these factors.
Experiments conducted by Dehaene et al. [1999] have suggested that the human mind has
two possible regimes under which it carries out quantitative operations. These are "exact"
calculation, which Dehaene et al. contend is language based and employs parts of the mind
generally used for language processing to perform symbolic calculations, and "approximation,"
which is likely to be performed by a separate and specialized part of the brain. Approximate
calculation is, of course, carried out much more rapidly than exact calculation. I was curious to
find out whether the subjects were employing a rapid approximation regime to get close to a solution
followed by a more time consuming exact process to actually converge on the solution. This
would be indicated by an increase, perhaps marked, in the time required per operation as the
subjects moved the output indicators closer to the solution target ranges.
Experiments have also shown that the human ability to compare numerical magnitude is a
function of both the size of the numbers being compared and their proximity on the number line
[Dehaene 1997]. As explained in Section 2.3.1, it takes longer for the human mind to accurately
compare numbers that are large and also numbers that are similar in magnitude. These
phenomena also suggested that the time per operation for the design tasks might increase as the
subjects approached a solution. In this case the values of the output variables would be nearing
that of the target ranges, so the necessary comparison might become more time consuming.
Careful statistical analysis of the results for the 4 x 4 and 5 x 5 Series C matrix tests
indicated, however, that there was in fact no correlation whatsoever either between time per
operation and the magnitude by which the input variables were adjusted, or between time per
operation and the proximity of the output variables to the target ranges. Figure 6.16, which plots the average
magnitude of a subject's changes to the system input variables vs. the time taken by the subject
for each operation, is a representative example of the random dispersion of these values. It may
be that the graphical nature of the DS-emulator interface somehow mediated or dampened the
effects of the cognitive traits discussed in the previous paragraphs.
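As an illustration of the kind of check described above (a sketch only, not the statistical analysis actually performed for the thesis), a simple Pearson correlation between elapsed time per operation and the average magnitude of the corresponding input change would quantify any linear relationship; the arrays below are hypothetical stand-ins for the logged values.

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical per-operation measurements for one subject.
    elapsed_time = np.array([4.1, 2.7, 6.3, 3.0, 5.5, 7.2, 2.2, 4.8])  # seconds
    avg_change = np.array([1.2, 0.4, 2.0, 0.9, 0.3, 1.7, 0.6, 1.1])    # input units

    r, p_value = pearsonr(elapsed_time, avg_change)
    print(f"r = {r:.3f}, p = {p_value:.3f}")
    # A small |r| with a large p-value is consistent with the random
    # dispersion of points seen in Figure 6.16.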
[Figure: scatter plot of average change in input variable vs. elapsed time per operation for the 5 x 5 Series C fully coupled matrix.]
FIGURE 6.16 - Random dispersion of change in input variables with respect to time taken per move.
6.5.4 SOME GENERAL OBSERVATIONS ON TASK RELATED PERFORMANCE
The brief interviews conducted with the subjects after they had completed the
experimental tasks were generally quite revealing. Almost all subjects indicated that they had
difficulty with 4 x 4 and 5 x 5 fully coupled systems while smaller full systems and uncoupled
systems seemed, in their view, to pose comparatively little challenge. Some subjects also
indicated that they found systems that had a predominance of negative coefficients particularly
difficult. However, there was no indication in the data that these systems actually took any longer
to solve than comparable full systems with mostly positive coefficients.
Subjects who performed well in the experiment also had several traits in common. Most
were able to discover that a linear system was being used to model the design process, a fact
that had not been disclosed to them during the training session in which the use of the DS-emulator program was explained prior to the actual experiments. Also, most good performers
indicated that they had developed a method or strategy for solving the design problems. Upon
closer investigation this often turned out to be a method for simplifying their internal
representation of the design problem, or more efficiently "chunking" the information presented
by the experimental GUI so that it could be more readily manipulated. For instance, one subject
described mentally fitting a least squares line to the output variables, observing how this
imaginary line changed as each input variable was manipulated, and then using this information
to converge rapidly to a solution.
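That reported strategy can be made concrete with a short sketch: fitting a least squares line to the current output values collapses all of the output indicators into two numbers, a slope and an intercept, which is one plausible way of "chunking" the display. The values below are hypothetical, and the code illustrates the subject's reported mental model rather than anything computed by the DS-emulator.

    import numpy as np

    # Hypothetical current values of the five output indicators.
    outputs = np.array([3.2, 1.1, -0.5, -2.4, -4.0])
    positions = np.arange(1, len(outputs) + 1)

    # Two "chunks" (slope and intercept) summarize the whole output state;
    # watching how they shift as one input is moved reveals that input's effect.
    slope, intercept = np.polyfit(positions, outputs, 1)
    print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")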
Many of the differences between "good" and "bad" performances on the design task
surrogate seemed to echo what is generally understood about the interaction of humans and
complex dynamic systems. Dörner has noted that in his experiments on human/system
interaction "good" participants usually spent more time up front gathering information (i.e.
exploring system behavior) and less time taking action while "bad" participants were eager to act
and did little information gathering. He also found a strong inverse relationship between
information gathering and readiness to act [Dörner 1996]. Interestingly, these differences also
reflect the distinct characteristics of "novice" and "expert" designers found by Christiaans and
Dorst [1992] in their study, which I discussed in detail in Chapter 3. As Figures 6.12 and 6.15
will attest, subjects who performed well on the design task also spent time at the beginning of the
experiment evaluating the system, whereas those who did not simply forged ahead in an
unproductive manner. The oscillatory performance and frequent overshooting of the target values
by some of the subjects on the more complex design problems also fit with Dörner's observations
about the general difficulty humans have with generating accurate mental models of dynamic
systems, as discussed in Section 2.3.4. It seemed that the subjects who performed the worst on
the complex design tasks were generally unwilling to change their ineffective strategy even
though it was clearly not working (see Figure 6.14), and those that were able to reevaluate the
system and their approach when their current strategy was failing (i.e. the subject depicted in
Figure 6.13) were in the minority.
6.6 LEARNING, EXPERIENCE, AND NOVICE/EXPERT EFFECTS
An investigation into learning effects using the design task surrogate confirmed findings
on this topic by cognitive science and design investigations conducted by Chase and Simon
[1973], Simon [1989], and Christiaans and Dorst [1992]. As discussed in Sections 2.3.3.2 and
2.3.3.3, increased domain-specific experience should allow a more sophisticated mental
framework with which to structure a problem solving task. Moreover, expertise also allows more
efficient search of the problem space, through the use of experience-based heuristics, and for
relevant problem information to be more easily stored and manipulated in the short-term memory
because it can be "chunked" more efficiently.
As part of the primary set of experiments, five additional subjects were presented with
the Series C fully coupled 2 x 2 design task, and instructed to solve the problem eight times.
Each time the position of the target ranges, in other words the solution to the design problem, was
changed but the matrix describing the design task was left unaltered. The time it took for subjects
to complete each of the eight repetitions of the experiment was recorded, and the overall results
are shown in Figure 6.17, which plots average completion time vs. number of experiment
repetitions.
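The averages and confidence intervals summarized in Figure 6.17 can be computed from the raw completion times in a few lines. The sketch below assumes a hypothetical matrix of times with one row per subject and one column per repetition; it is illustrative only and is not the analysis script used for the thesis.

    import numpy as np
    from scipy import stats

    # Hypothetical completion times (seconds): 5 subjects x 8 repetitions.
    times = np.array([
        [70, 55, 31, 28, 33, 27, 30, 29],
        [85, 60, 35, 32, 30, 31, 28, 27],
        [65, 48, 29, 30, 27, 26, 29, 28],
        [90, 72, 40, 33, 31, 30, 32, 29],
        [75, 58, 34, 29, 28, 27, 26, 30]], dtype=float)

    n_subjects = times.shape[0]
    means = times.mean(axis=0)
    sems = times.std(axis=0, ddof=1) / np.sqrt(n_subjects)
    half_width = stats.t.ppf(0.975, df=n_subjects - 1) * sems   # 95% CI half-width

    for rep, (m, h) in enumerate(zip(means, half_width), start=1):
        print(f"repetition {rep}: {m:5.1f} s +/- {h:4.1f} s")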
[Figure: plot of average completion time for the 2 x 2 Series C full matrix experiment vs. number of repetitions of the experiment, with 95% confidence intervals.]
FIGURE 6.17 - Average design task completion time vs. task repetition for 2 x 2 fully coupled Series C matrix.
If the number of times the design task is repeated is equated to the acquisition of domain-specific experience, then, as can be seen in Figure 6.17, domain-related experience indeed
translates into significantly improved performance on the design task. On average, it appeared to
take about two iterations of the exercise for the subjects to develop a good mental model of the
design problem. After this point, average completion time for the task dropped by about a factor
of two and appeared to stabilize at around 30 seconds. It is evident that once the subjects
developed a clear understanding of the system with which they were interacting, the design task
became much easier.
6.7 SUMMARY
A brief summary of what was learned through the design task experiments is in order
before moving on to discuss the results and their implications in more detail in the next section of
the thesis. From the research conducted using the surrogate program I concluded that:
* For a given type of system, as the problem size increased the time required to solve the
system increased as well, regardless of the characteristics of the matrix representing the
system.
" Task completion time scaled geometrically for fully coupled matrices and linearly for
diagonal, uncoupled design matrices.
* Matrix size was the dominant factor governing the amount of time required to complete the
design task, with the degree of coupling in full matrices having a secondary but noticeable
effect on problem difficulty.
" The most difficult matrices for subjects to deal with were actually not full but almost full.
These matrices combined the complexity of a full matrix with some of the structural
randomness generally associated with certain types of small sparse matrices.
" Other matrix characteristics examined, such as the sign of coefficients, for instance, did not
appear to affect task completion time.
" The number of operations required for subjects to complete each problem increased
exponentially with matrix size for fully coupled matrices and linearly for uncoupled
matrices, but the average time per operation remained roughly constant regardless of
problem size or structure.
* The computational complexity of the design task was governed by a geometric rate of growth function, O(3^n), for human subjects, while for a computer using an efficient numerical algorithm to solve a similar problem the task complexity scaled as O(n^3).
* The time per operation and the magnitude by which the subjects changed the input variables
did not appear to be correlated. Nor did time per operation and the proximity of the output
variables to the solution target ranges.
" A "two regime" approach to problem solving, as discussed in Section 6.5.3 with regard to
research conducted by Dehaene et al. [1999], was not indicated by the results. However, it
is unlikely that the DS-emulator program was set up to properly explore such behavior.
" By practicing a specific problem subjects were able to reduce the amount of time they
needed to solve a given system. This learning effect can be equated with the acquisition of
domain-specific knowledge, an improved mental representation of the problem, and the
superior information "chunking" ability that develops with experience and allows a more
efficient and rich problem representation to be stored in the short-term memory and
manipulated by the working memory.
* Likewise, subjects who required the least amount of time to complete the set of
experimental tasks were also the ones who came up with a clear mental problem
representation and an explicit method for finding a solution. There were obvious
differences between the ways "good" and "bad" performers approached finding a solution
to the design problem.
7 CONCLUSIONS, DISCUSSION, AND FUTURE WORK
As I have shown, fully coupled design problems become difficult quite rapidly as their
size, degree of variable coupling, and complexity increase. It is likely that a major cause for this
is the limited nature of human cognitive capabilities. Capacity and time limitations in the short-term memory and working memory place limits on the amount of information that can be
processed by the human mind at any given time. These and other cognitive characteristics
conspire to cause problem-solving ability to degrade rapidly as the amount of information
necessary to accurately model the problem exceeds the mind's information processing
capabilities and place clear limits on the ability of designers to perform design tasks.
Despite the difficulty posed by large, coupled systems, an important factor to keep in
mind is that small coupled systems are not really all that problematic for the designer, even when
compared to uncoupled systems. This was clearly demonstrated through the design task surrogate
experiments and can be seen to best effect in Figure 6.3. What I believe is the important factor,
however, is not the actual size of the coupled system per se, but its "effective size" or "effective
complexity." By "effective complexity," of course, I refer to Gell-Mann's definition of this term
as, "the length of a concise description of a set of [an] entity's regularities," or the degree of
regularity displayed by a system [Gell-Mann 1995, Horgan 1995]. As a result the comparative
difficulty of a design problem can depend many factors, such as the designer's experience and
skill in the domain or the presence of external problem solving aids (i.e. CAD and other tools).
Naturally, experience and the benefits it confers, like improved information chunking and problem
structuring abilities, are the key to circumventing limitations related to human information
processing capacity. Such expertise allows the problem, if large, to be expressed in more
sophisticated and complex "chunks" and facilitates analysis of the problem space or execution of
the familiar task. Just as the experienced driver releases the gas pedal, engages the clutch,
upshifts, and accelerates without giving much thought to executing the individual actions, a
skilled designer is able to shift many of the basic trade-offs inherent in the design process to a
lower, more subconscious level, thus utilizing the limited mental information processing
resources that are available more efficiently.
Experience with a given task, as investigated and discussed in Sections 2.3.3.2, 2.3.3.3 and
6.6, causes it to become easier, requiring less time and mental effort. This can be seen more
clearly in Figure 7.1, which graphically depicts some of the theoretical effects of increased
experience on task performance. As experience with a particular activity of certain complexity
and scale is gained, its apparent difficulty should decline asymptotically towards some lower
bound, as shown in the left-hand plot of the figure. I still contend that there is likely to be a
geometric relationship between problem difficulty and problem size, but what is a large problem
and what is small depends largely on the designer's level of expertise and the presence of tools
that confer experience-like advantages. The acquisition of expertise or the availability of
problem-solving tools shifts the exponential difficulty vs. problem size curve, presented in
Section 6.3.1, to the right, making previously intractable problems manageable.
[Figure: two schematic plots showing changes in task difficulty and apparent task size and complexity with increasing experience; the left plot is drawn against experience of designer, the right against apparent task size and complexity.]
FIGURE 7.1 - Theoretical effects of increased experience on task performance.
These factors all point to a clear need for a more anthropocentric approach to the design
process. There is presently little interaction between the design and cognitive science fields, and
it is my contention that these fields have a great deal to offer one another. It is particularly
important that the product design and development field pays more attention to the characteristics
of the designer. Human ability to evaluate parameters along even one axis of comparison is
limited and design of any type involves constant trade-off along many such axes. Thus, one
major focus of design research should be on the development of tools that facilitate this type of
activity in order to relieve cognitive stresses imposed on the designer. More effort should be
expended on understanding how human limitations affect the design process at all phases, from conceptual
design to process optimization and other more organization-related aspects of the product
development task.
It is my belief that a better understanding of the designer's ability to deal with coupled
systems will also allow more informed decisions about when to decompose a design problem and
when not to do so. As Krishnan et al. [1997b] have noted, sequential decision making activity in
the design process can lead to hidden inefficiencies that degrade the performance characteristics
of the design. This is due to the fact that when design decisions are taken in sequence, decisions
early in the process constrain subsequent choices and can cause a hidden quality loss in the design
process. Because such phenomena are becoming more frequent, due to the increasing complexity
of designs and the design process and the resulting increase in conflicting design parameters, it is
now more important than ever to address them effectively.
An understanding of the "price" of coupling (i.e. that task difficulty and/or cycle time are
likely to increase exponentially with the size of the coupled block) is a first step towards putting
this issue into sharper focus. The findings I have put forth in this thesis might be used to develop
a quality loss penalty that allows trade-off between problem size, coupling, and other project
variables to achieve optimal coupling such that the time and cost of the project are reduced and quality
maximized. For instance, the physical integration of a design could be traded off against the
development process cycle time, or time to completion against various arrangements of the
project Design Structure Matrix.
As stated earlier, systems of a small size, even when fully coupled, seem to pose no
problems for the designer. However, the general difficulty that subjects had with coupled
systems of larger sizes indicates that this will be a fruitful area for further research. There is likely
to be a high gain from added experimentation and modeling of these systems, and tools developed
to assist designers with such coupled blocks should significantly improve both the efficiency and
the end results of the design process.
As pressure mounts to reduce project cycle time and improve design quality and
robustness, viewing the product development process from an anthropocentric point of view will
confer other advantages. Although sophisticated and useful models of cycle time have been
proposed (see, for instance, [Carrascosa et al. 1998]) the uncertainty that is introduced by the
cognitive limitations of the human agents involved in the process has not been adequately
included in these models. Incorporating such uncertainty will benefit upstream product development activities, such as
project planning and resource allocation, where most of the resources expended in the effort are
actually committed.
A richer understanding of the characteristics of the individual human agents involved in
complex collaborative design processes will also be significant for the software-based distributed
design environments currently being proposed or implemented. The facility of these distributed
environments is in part based on the decomposition of a design project into tractable subproblems, so the need for optimal decomposition of design tasks is clearly critical for the success
of such tools [Pahng et al. 1998]. Moreover, as the true purpose of computer-aided design
systems is to minimize the overall cognitive complexity facing the designer or engineer, many
other critical activities such as information management and human interaction will require the
development and integration of additional support tools sensitive to human information
processing parameters. Again, a clear view of the designer and the design process from a
cognitive science perspective is necessary to achieve these goals. However, further investigation
will be needed to determine how exactly the cognitive limitations of individual designers interact
with and impact design projects conducted within a collaborative decision-making and analysis
framework.
THESIS REFERENCE WORKS
Alexander, C. (1964). Notes on the Synthesis of Form, Harvard University Press, Cambridge, MA.
Altshuller, G. S. (1984). Creativity as an Exact Science, Gordon & Breach Science Publishers, New York, NY.
Anderson, J. R. (1987). "Skill acquisition: compilation of weak-method problem solutions," Psychological Review, vol. 94, pp. 192-210.
Ashcraft, M. H. (1992). "Cognitive arithmetic: A review of data and theory," Cognition, vol. 44,
no. 1-2, pp. 75-106.
Baddeley, A. (1986). Working Memory, Oxford University Press, New York, NY.
Box, G., Hunter, W., and Hunter, J. (1978). Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons, New York, NY.
Brainerd, C. J., and Kingma, J. (1985). "On the independence of short-term memory and working memory in cognitive development," Cognitive Psychology, vol. 17, pp. 210-247.
Butler, R., Miller, S., Potts, J., and Carreño, V. (1998). "A Formal Methods Approach to the Analysis of Mode Confusion," Proceedings of the 17th AIAA/IEEE Digital Avionics Systems Conference, Oct. 31-Nov. 6, 1998.
Carrascosa, M., Eppinger, S. D., and Whitney, D. E. (1998). "Using the Design Structure Matrix
to Estimate Product Development Time," Proceedings of the DETC '98, Sept. 13-16, 1998,
Atlanta, GA.
Chandrasekaran, B. (1989). "A Framework for Design Problem Solving," Research in Engineering Design, vol. 1, no. 2, pp. 75-86.
Chase, W. R., and Simon, H. A. (1973). "Perception in Chess," Cognitive Psychology, vol. 4, pp.
55-81.
Chi, T., Dashan, F. (1997). "Cognitive Limitations and Investment 'Myopia'," Decision
Sciences, vol. 28, no. 1, pp. 27-45.
Christiaans, H. H. C. M., and Dorst, K. H. (1992). "Cognitive Models in Industrial Design Engineering: A Protocol Study," ASME Design Theory and Methodology, DE-Vol. 42, pp. 131-137.
Clark, K. B., and Fujimoto, T. (1991). Product Development Performance, Harvard Business School Press, Boston, MA.
Condoor, S., Shankar, S., Brock, H., Burger, P., and Jansson, D. (1992). "A Cognitive
Framework for the Design Process," ASME Design Theory and Methodology, DE-Vol. 42,
pp. 277-281.
Cowan, N. (1995). Attention and Memory: An Integrated Framework, Oxford University Press, New York, NY.
Dehaene, S. (1997). The Number Sense: How the Mind Creates Mathematics, Oxford University Press, New York, NY.
Dehaene, S., Spelke, E., Pinel, P., Stanescu, R., and Tsivkin, S. (1999). "Sources of
Mathematical Thinking: Behavioral and Brain Imaging Evidence," Science, vol. 284, pp.
970-974.
Dörner, D. (1996). The Logic of Failure, Addison-Wesley, Reading, MA. [First published as Die Logik des Misslingens, Rowohlt Verlag GmbH, 1989.]
Edwards, C. H. Jr., Penney, D. E. (1988). Elementary Linear Algebra, Prentice Hall, Englewood Cliffs, NJ.
Ehret, B. D., Kirschenbaum, S. S., and Gray, W. D. (1998). "Contending with Complexity: The Development and Use of Scaled Worlds as Research Tools," Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, 1998.
Enkawa, T., Salvendy, G. (1989). "Underlying Dimensions of Human Problem Solving and Learning: Implications for Personnel Selection, Training, Task Design, and Expert Systems," International Journal of Man-Machine Studies, vol. 30, no. 3, pp. 235-254.
Eppinger, S. D., Whitney, D. E., Smith, R. P., and Gebala, D. A. (1989). "Organizing the Tasks in Complex Design Projects," Computer-Aided Cooperative Product Development, in Goos, G., and Hartmanis, J. (eds.), Lecture Notes in Computer Science, v. 492, pp. 229-252. Springer-Verlag, New York, NY.
Eppinger, S. D., Whitney, D. E., Smith, R. P., and Gebala, D. A. (1994). "A Model-Based Method for Organizing Tasks in Product Development," Research in Engineering Design, vol. 6, no. 1, pp. 1-13.
Frey, D., Jahangir, E., and Engelhardt, F. (2000). "Computing the Information Content of Decoupled Designs," Proceedings of the 1st International Conference on Axiomatic Design, the Massachusetts Institute of Technology, Cambridge, MA, June 21-23, 2000. [to appear]
Gebala, D. R., Eppinger, S. D. (1991). "Methods for Analyzing the Design Process," ASME Design Theory and Methodology, DE-Vol. 31, pp. 227-233.
Gell-Mann, M. (1995). "What is Complexity?" Complexity, vol. 1, no. 1.
Goodman, T., and Spence, R. (1978). "The Effect of System Response Time on Interactive Computer Aided Problem Solving," Computer Graphics, vol. 12, no. 3, pp. 100-104.
Hair, J., Anderson, R., Tatham, R., and Black, W. (1998). Multivariate Data Analysis, Prentice-Hall, Upper Saddle River, NJ.
Hazelrigg, G. A. (1997). "On Irrationality in Engineering Design," Journal of Mechanical Design, vol. 119, pp. 194-196.
Hitch, G. J. (1978). "The role of short-term working memory in mental arithmetic," Cognitive
Psychology, vol. 10, pp. 302-323.
Horgan, J. (1995). "From Complexity to Perplexity," Scientific American, vol. 272, no. 6, pp.
104-109.
Horowitz, B. (1998). "Using Functional Brain Imaging to Understand Human Cognition,"
Complexity, vol. 3, no. 6, pp. 39-52.
John, P. W. M. (1990). Statistical Methods in Engineering and Quality Assurance, John Wiley & Sons, New York, NY.
Jones, J. C. (1966). "Design methods reviewed," The Design Method, Butterworths, London,
England.
Kahney, H. (1993). Problem Solving, Open University Press, Buckingham, England.
Kotovsky, K. and Simon, H. A. (1990). "What makes some problems really hard: Explorations in the problem space of difficulty," Cognitive Psychology, vol. 22, pp. 143-183.
Krishnan, V. and Ulrich, K. (1998). "Product Development Decisions: A Review of the
Literature," Working Paper, Department of Operations and Information Management, The
Wharton School, Philadelphia, PA.
Krishnan, V., Eppinger, S. D., and Whitney, D. E. (1991). "Towards a Cooperative Design
Methodology: Analysis of Sequential Design Strategies," ASME Design Theory and
Methodology, DE-Vol. 31, pp. 165-172.
Krishnan, V., Eppinger, S. D., and Whitney, D. E. (1997a). "A Model-Based Framework to
Overlap Product Development Activities," Management Science, 43, pp. 437-451.
Krishnan, V., Eppinger, S. D., and Whitney, D. E. (1997b). "Simplifying Iterations in Cross-functional Design Decision Making," Journal of Mechanical Design, vol. 119, pp. 485-493.
Lawson, B. (1997). How Designers Think: The Design Process Demystified, Architectural Press, Oxford, England.
Lewis, H. R., Papadimitriou, C. H. (1981). Elements of the Theory of Computation, Prentice-Hall, Englewood Cliffs, NJ.
Lindsay, P. H., Norman, D. A. (1977). Human Information Processing: An Introduction to Psychology, Academic Press, New York, NY.
Mackay, J. M., Barr, S. H., and Kletke, M. G. (1992). "An Empirical Investigation of the Effects
of Decision Aids on Problem-Solving Processes," Decision Sciences, vol. 23, no. 3, pp. 648-
667.
Madanshetty, S. I. (1995). "Cognitive Basis for Conceptual Design," Research in Engineering
Design, vol. 7, no. 4, pp. 232-240.
Martin, J. C. (1991). Introduction to Languages and the Theory of Computation, McGraw-Hill, New York, NY.
Matchett, E. (1968). "Control of Thought in Creative Work," Chartered Mechanical Engineer, vol. 14, no. 4.
Mathews, J. H. (1987). Numerical Methods for Computer Science, Engineering, and Mathematics, Prentice-Hall PTR, Englewood Cliffs, NJ.
Miller, G. A. (1956). "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information," The Psychological Review, vol. 63, no. 2.
Newell, A., and Simon, H. A. (1972). Human Problem Solving, Prentice Hall, Englewood Cliffs,
NJ.
Pahl, G. and Beitz, W. (1988). Engineering Design, Springer-Verlag, New York, NY.
Pahng, F., Senin, N., and Wallace, D. (1998). "Distribution modeling and evaluation of product design problems," Computer-Aided Design, vol. 30, no. 6, pp. 411-423.
Phadke, M. S. (1989). Quality Engineering Using Robust Design, Prentice-Hall PTR, Englewood
Cliffs, NJ.
Reisberg, D., Schwartz, B. (1991). Learning and Memory, W. W. Norton and Company, New York, NY.
Robertson, D., Ulrich, K., and Filerman, M. (1991). "CAD Systems and Cognitive Complexity:
Beyond the Drafting Board Metaphor," ASME Design Theory and Methodology, DE-Vol. 31,
pp. 77-83.
Robinson, E. P., and Swink, M. (1994). "Reason based solutions and the complexity of distributed network design problems," European Journal of Operations Research, vol. 76, no. 3, pp. 393-409.
Rodgers, J. L., and Bloebaum, C. L. (1994). "Ordering Tasks Based on Coupling Strengths,"
Fifth AIAA/NASA/USAF/ISSMO Symposium on Multidisciplinary Analysis and
Optimization, Panama City, FL, September 7-9, 1994.
Shannon, C. E., and Weaver, W. (1963). The Mathematical Theory of Communication,
University of Illinois Press, Chicago, IL.
Simon, H. A. (1957). Models of Man, John Wiley & Sons, New York, NY.
Simon, H. A. (1969). The Sciences of the Artificial, The MIT Press, Cambridge, MA.
Simon, H. A. (1974). "How Big is a Chunk," Science, vol. 183, pp. 482-488.
Simon, H. A. (1978). "Information Processing Theories of Human Problem Solving," in Estes, W. K. (ed.), Handbook of Learning and Cognition Processes. Lawrence Erlbaum Associates, Mahwah, NJ.
Simon, H. A. (1989). Models of Thought, Volume II, Yale University Press, New Haven, CT.
Simon, H. A., Kotovsky, K., and Hayes, J. (1985). "Why are some problems hard?" Cognitive
Psychology, vol. 17, pp. 248-294.
Steward, D. V. (1981). Systems Analysis and Management: Structure, Strategy, and Design,
Petrocelli Books, New York, NY.
Strang, G. (1986). Introduction to Applied Mathematics, Wellesley-Cambridge Press, Wellesley, MA.
Suh, N. P. (1990). The Principles of Design, Oxford University Press, New York, NY.
Tewarson, R. P. (1973). Sparse Matrices, Academic Press, New York, NY.
Traub, J. F. (1988). "Introduction to Information-Based Complexity," in Abu-Mostafa, Y. S.
(ed.), Complexity in Information Theory, pp. 62-76. Springer-Verlag, New York, NY.
Ulrich, K., and Eppinger, S. D. (1995). Product Design and Development, McGraw-Hill, New York, NY.
Waern, Y. (1989). Cognitive Aspects of Computer Supported Tasks, John Wiley & Sons, New
York, NY.
Watson, A. (1998). "The Universe Shows its Age," Science, vol. 279, pp. 981-983.
Welch, R. V., Dixon, J. R. (1994). "Guiding Conceptual Design Through Behavioral Reasoning," Research in Engineering Design, vol. 6, no. 3, pp. 169-188.
Whitney, D. E. (1990). "Designing the Design Process," Research in Engineering Design, vol. 2, no. 1, pp. 3-13.
APPENDIX
A.1 COMPUTATIONAL COMPLEXITY
The computational complexity of a problem refers to the intrinsic difficulty of finding a
solution to the problem, as measured by the time, space, number of operations, or other quantity
required for the solution [Traub 1988]. All computational problems can be divided into two
categories: those that can be solved by algorithms and those that cannot. Unfortunately, the
existence of an algorithm for a particular problem is not a guarantee that the problem is tractable in practice.
The time requirements for computation may be so extreme that the problem, while theoretically
solvable, is effectively intractable [Lewis 1981].
A good example is the classic traveling salesman problem: A salesman must visit nine cities and is interested in calculating a route that minimizes the distance traversed. If there are n cities to be visited, then the number of possible itineraries that must be evaluated is

(n - 1)! = 1 x 2 x 3 x ... x (n - 1).

This is a large number of routes to evaluate (40,320 to be exact), but by using a computer it is probably a feasible task. But what if n were larger? Thirty cities? Fifty cities? In this case the number of routes would be 49!, or 6.08 x 10^62 possible itineraries. Unfortunately, even if it were possible to evaluate a billion routes per second, it would take approximately 1.9 x 10^46 years to find the optimum solution to this problem! The fact that the universe is around 12 billion years old puts this amount of time into proper perspective [Watson 1998].
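The arithmetic behind these figures is easy to reproduce. The short sketch below recomputes the itinerary counts and the time estimate under the same assumption of one billion route evaluations per second.

    import math

    def itineraries(n_cities):
        """Number of possible routes in the traveling salesman problem."""
        return math.factorial(n_cities - 1)

    print(itineraries(9))                      # 40320
    routes_50 = itineraries(50)                # 49! is about 6.08e62
    print(f"{routes_50:.2e}")

    rate = 1e9                                 # routes evaluated per second
    seconds_per_year = 60 * 60 * 24 * 365
    print(f"{routes_50 / rate / seconds_per_year:.2e} years")   # about 1.9e46 years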
A.2 RATE OF GROWTH FUNCTIONS
So, what to do? How does one know if a problem can be realistically solved? A good
way is to look at the time complexity or computational complexity of a problem, in other words the amount of time or number of operations required to solve a problem. A good upper bound on this complexity can be determined by developing rate of growth functions for specific classes of problems. To do this, the number of operations required to find a solution to a problem using a certain algorithm is calculated [Lewis 1981]. Usually operation counts can then be expressed as some type of polynomial or other similar function of problem size.
As with limits, the most significant factor governing how problem size affects the number of operations required to find a solution turns out to be the highest order term in the function. Rates of growth are notated by

f = O(g),

where f is the rate of growth function and g is the highest order term, minus its coefficients, in the function describing the number of operations or the time required to solve a certain problem. For example, if

f = 3n^2 + n

operations were required to calculate a solution to a certain class of problem, then the rate of growth would be of order n^2, and could be expressed as O(n^2).
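For completeness, the standard formal definition behind this notation, which the text uses informally, can be stated as follows (this statement is supplied here for reference rather than quoted from the thesis).

    f = O(g) if and only if there exist constants c > 0 and N0 such that
    f(n) <= c * g(n) for all n >= N0.

For the example above, f(n) = 3n^2 + n <= 4n^2 whenever n >= 1, so the constants c = 4 and N0 = 1 confirm that f = O(n^2).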
A.2.1 IMPORTANT CLASSES OF RATE OF GROWTH FUNCTIONS
As it turns out, the idea of rates of growth allows computationally complex problems to
be separated into a number of distinct and significant categories. Comparing rates of growth
makes it possible to evaluate how efficient an algorithm is for a particular task, and also provides
information about how the time complexity will change as problem size increases.
Two of the most important classes of problems, and certainly the most interesting to
compare in light of the focus of this thesis, are those bounded by polynomial and exponential
growth rates. Problems for which the operation count scales as a polynomial are said to be
solvable in polynomial time, while problems with exponential rates of growth are solvable in exponential time. Problems solvable in polynomial time, or whose operation counts are bounded by a polynomial function, such as f = n log(n + 1), are classified as polynomial-time decidable and denoted by the symbol P.
As exponentially scaling problems become large they often become essentially
intractable from a computational standpoint. Thus, one test for whether or not a problem is
solvable in real-world terms is if a polynomial-time decidable, rather than exponentially bounded,
solution algorithm actually exists. In all cases, the rate of growth of exponentially bounded
computations is much faster than that of polynomial growth rates [Lewis 1981]. This can be seen
in the following demonstration, which compares the growth rates for generalized polynomial and
exponential functions.
Assuming a general exponential function

q(n) = a^n, with a > 1,

and a general polynomial

p(n) = a_k n^k + ... + a_1 n + a_0,

we wish to show that p = O(q) but that q is not O(p), which is equivalent to showing that n^k = O(a^n) and that a^n is not O(n^k).

So, let

z(n) = e^(k ln n / n)

so that

n^k = (z(n))^n.

Since the ln n / n factor in the exponent approaches zero as n becomes large, and since e^0 = 1 < a, a value N_0 may be chosen so that z(n) < a for all n > N_0. So, it follows that if n > N_0, then

n^k = (z(n))^n < a^n,

and thus n^k = O(a^n).

To show that a^n is not O(n^k), the ratio a^n / n^k must be shown to be unbounded. If we write a^n / n^k as

a^n / n^k = n (a^n / n^(k+1)),

then the argument above, applied with k + 1 in place of k, shows that n^(k+1) < a^n is true for large n. This is equivalent to

a^n / n^(k+1) > 1,

and from this it can be seen that a^n / n^k > n for large n, so the ratio is in fact unbounded [Martin 1991]. Thus, the value of any exponential function eventually surpasses that of any function bounded by a polynomial rate of growth function.
A.2.2 A PRACTICAL EXAMPLE
Factoring and solving a system of N equations in N unknowns is a well-characterized
problem. Because this process is so similar to what human subjects were trying to do when
solving design problems having fully coupled parameters during the experiments that were
conducted as part of this research, it is worth comparing the two approaches in terms of their
computational complexity.
An efficient algorithm for numerically solving an n x n linear system, Ax = b, first involves triangular factorization of the matrix of coefficients, A, into the product of a lower and an upper triangular factor,

A = LU.

This factorization requires

(N^3 - N)/3

multiplications and divisions (the sum of (N - p)(N - p + 1) over p = 1, ..., N - 1), and

(2N^3 - 3N^2 + N)/6

subtractions (the sum of (N - p)^2 over the same range). After the A = LU factorization is complete, the solution to the lower-triangular system Ly = b requires

0 + 1 + ... + (N - 1) = (N^2 - N)/2

multiplications and the same number of subtractions. Then, the solution of the upper-triangular system Ux = y requires

1 + 2 + ... + N = (N^2 + N)/2

multiplications and divisions, and

0 + 1 + ... + (N - 1) = (N^2 - N)/2

subtractions. Thus, a total of

(2/3)N^3 + (3/2)N^2 - (7/6)N

operations are required to factor and solve the system [Mathews 1987]. Hence the growth rate of the computational complexity of evaluating an n x n linear system is O(n^3).
It is clear from the previous equations that the bulk of the effort involved in solving a
linear system in this manner comes from factoring the coefficient matrix A.
One advantage of
the A = LU decomposition is that if a system with the same coefficient matrix is to be solved
over and over again, with only the column vector b changing, it is unnecessary to recalculate the
triangularization of the coefficient matrix each time [Mathews 1987]. If such is the case, then the
number of operations required to evaluate the system drops to N^2 multiplication and division operations and N^2 - N subtractions, and the order of the rate of growth function is reduced to O(n^2).
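The saving from reusing a factorization can be illustrated with standard library routines. The sketch below (an illustration only, not code from the thesis) factors A once with SciPy's LU routines and then solves for two different right-hand sides without refactoring.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[4.0, 1.0, 2.0],
                  [1.0, 3.0, 0.0],
                  [2.0, 0.0, 5.0]])
    b1 = np.array([1.0, 2.0, 3.0])
    b2 = np.array([0.0, 1.0, 0.0])

    lu, piv = lu_factor(A)           # O(N^3) factorization, done once
    x1 = lu_solve((lu, piv), b1)     # each additional solve is only O(N^2)
    x2 = lu_solve((lu, piv), b2)

    print(np.allclose(A @ x1, b1), np.allclose(A @ x2, b2))   # True True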
Examining the relative growth rates of problems of order O(n^3), O(n^2), and O(e^n) provides an interesting comparison. In the end the exponential growth rate is always the most unfavorable.
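A quick tabulation makes the same point numerically (this is a hedged sketch, not the data behind Figure A.1): the exponential column overtakes both polynomial columns within a handful of steps.

    import math

    print(f"{'n':>3} {'n^2':>8} {'n^3':>8} {'e^n':>12}")
    for n in range(1, 11):
        print(f"{n:>3} {n**2:>8} {n**3:>8} {math.exp(n):>12.1f}")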
[Figure: comparison of polynomial growth rates, f = O(n^2) and f = O(n^3), with the exponential growth rate f = O(e^n).]
FIGURE A.1 - Polynomial and exponential growth rate comparison.