Chapter 1
Method for Measuring the Impact of Design Process Improvement Research
R.R. Senescu and J.R. Haymaker
1.1 Introduction: Design Process Improvement Field Requires New Validation Methods
Design processes are often inefficient and ineffective (Flager and Haymaker, 2007;
Gallaher et al., 2004; Navarro, 2009; Scofield, 2002; Young et al., 2007). Research
attempts to improve the industry’s processes by describing, predicting, and then
developing improved design methods.
The Design Process Improvement research field frequently validates
descriptive and predictive research with industry observation and case studies
(Clarkson and Eckert, 2005). Once validated, descriptive and predictive design process research lays an important foundation for normative research that proposes new design process methods aimed at directly improving industry processes.
Validating this normative research is critical for technology transfer to industry.
Much of the normative Design Process Improvement research focuses on
reducing coordination and rework (e.g. Design Structure Matrix (Eppinger, 1991;
Steward, 1981), Virtual Design Team (Jin and Levitt, 1996), Process Integration
Design Optimization (Flager et al., 2009), and Lean Design and Construction
(Koskela, 1992)). Researchers frequently distinguish this coordination and rework
(non-value add tasks) from production work (value add tasks) (Ballard, 2000;
Flager et al., 2009; Jin and Levitt, 1996). It is difficult to isolate the impact of new methods on the efficiency and effectiveness of coordination and rework. Case study validation is difficult because every project is different and project durations are long, so acquiring statistically significant data is usually not possible; quantitative and objective measurements are also difficult to obtain. In past experimental validation methods, the level of designer expertise greatly influences results, and the value functions that assess the effectiveness of design processes are frequently subjective and qualitative. The proposed Mock-Simulation Charrette for Efficiency and Effectiveness (MSCEE) evaluates and compares Design Process Improvement research.
Adopting the CIFE Horseshoe research framework (Fischer, 2006), MSCEE is
a specific Research Method used in the Testing task to validate Results. In terms of
the Design Research Methodology (DRM), MSCEE uses the information
processing view of design to validate research that aims to Support design. The
authors intend MSCEE to be used as part of a Comprehensive Study in the
“Descriptive Study II stage to investigate the impact of the support and its ability to
realise the desired situation” (Blessing and Chakrabarti, 2009). MSCEE permits
quick iterations on process improvement research, allowing for continued research
development, statistically significant results, and economically viable
implementation. MSCEE is also insensitive to test participant expertise, because it
uses mock-simulation tools created in Microsoft Excel to emulate production
design activities. This paper presents MSCEE and then applies it to the validation
of the Design Process Communication Methodology.
1.2 Points of Departure
1.2.1 Design Process Modeling Research
The design process is the act of “changing existing situations into preferred ones”
(Simon, 1988). A design process model is an abstract representation of the actual
design process. Organizations create information to represent the Product through
the actual Process (Garcia et al., 2004). To improve the design process, researchers
may develop process models from three different lenses: conversion, flow, and
value generation (Ballard and Koskela, 1998). Using these lenses, researchers
develop process models to support new working methods, identify gaps in product
information models, and inform new information models (Wix, 2007). Process
models may also aim to facilitate collaboration, share better practice, or
communicate decisions. The validation method used to measure the impact of this
process model research generally does not provide quantitative, statistically
significant evidence of a particular method’s impact on efficiency and
effectiveness.
The Geometric Narrator process model improves the efficiency of designing
deck attachments by creating automated geometric tool modules that can be linked together to form a process (Haymaker et al., 2004). Haymaker et al.
validate the method with a single retrospective project case study that demonstrates
the power of the method to design deck attachments more efficiently and
effectively than was achieved in practice. They demonstrate generality by applying
the method retrospectively to discover ceiling connection details more efficiently
and effectively than in practice. However, the validity of this evidence is
questionable because of the differences in the control (a live project) and
experimental case (a PhD researcher using the method in the laboratory).
Narrator lacks the automation capability of Geometric Narrator but aims to apply more generally to planning and managing processes and describing them post facto (Haymaker, 2006). Lacking a method for measuring their impact, researchers only demonstrate Narratives in the classroom, with no quantitative comparisons to other
means of describing and managing processes. This shortcoming is common among
design process improvement research. Frequently, the definitions of effectiveness
are not sufficient for determining criteria for research success (Blessing and
Chakrabarti, 2009).
The Design Structure Matrix (DSM) similarly plans the design process through
task dependencies but also identifies iteration and includes methods for scheduling
activities to minimize rework (Eppinger, 1991; Steward, 1981). Huovila et al. (1995) apply DSM to a project ex post and find that the problems predicted by DSM correlate with the actual problems on the project. While identifying problems is
necessary for process improvement (Senescu and Haymaker, 2008), this DSM
research generally falls short of demonstrating actual Design Process
Improvement. For example, even without DSM, the design team may be able to
predict problems. Also, identifying problems and then developing a better process
plan on paper does not necessarily correlate with an improved outcome. DSM
researchers should address the goals for design, so they have criteria for judging
the influences on design success. By understanding these influences, researchers
can improve design processes (Blessing and Chakrabarti, 2009).
Using the structured analysis technique of data flow diagrams, Baldwin et al. (1998) demonstrate the feasibility of modeling the building design process with the “discrete-event simulation technique.” They validate the impact via surveys, presenting design scenarios and asking how the proposed process solution would affect each scenario. Asking
designers to predict the impact of different processes is not as convincing as
quantitatively measuring the impact in a controlled laboratory setting.
The Virtual Design Team (VDT) relates organization and process models to
predict coordination work and rework (Jin and Levitt, 1996). Jin and Levitt test
VDT for the accuracy of its prediction by comparing simulation results with
theoretical predictions and real engineering projects. While validated for its predictive power, the authors are not aware of quantitative evidence that VDT
simulation actually improves the design process.
Process model research aims to improve design processes. Yet, most of the
research is either descriptive or prescriptive (O'Donovan et al., 2005). Validation
techniques usually demonstrate that the model describes or predicts the actual
process accurately. Alternatively, the research demonstrates a method for planning
an improved process. The field lacks a quick, objective, and quantitative method
for measuring the impact of the process modeling research on process efficiency
and effectiveness and comparing it to traditional methods. Research frequently
lacks a link between “the stated goals [of the design research] and the actual focus
of the research project, e.g., improving communication between project
members … and as a consequence little evidence exists that the goal has indeed been achieved.” (Blessing and Chakrabarti, 2009). By explicitly defining efficiency and effectiveness and linking them to actual experimental results, the MSCEE
provides criteria for measuring research success.
1.2.2 Validation Methods
Once researchers select a design process model, Design Computing researchers
study technologies to assist or automate the design process. However, “The
evaluation of new design computing technologies is difficult to achieve with user
studies or theoretical proof of improved efficiency or quality of the solution”
(Maher, 2007).
Many researchers perform experiments in the laboratory. For example, Heiser and Tversky (2006) perform A-B experiments with students, showing that students shown arrows in diagrams describe equipment with functional verbs rather than describing structure, while students given text descriptions containing functional verbs are more likely to draw arrows. This research describes a
cognitive phenomenon but falls short of measuring the impact on design process
efficiency and effectiveness. Should teams use more arrows when collaborating
with each other? Should they use functional verbs? Existing methods do not
adequately address these normative questions.
Also validated in the laboratory, GISMO aims to improve decision making by
graphically displaying information dependencies (Pracht, 1986). Pracht
demonstrates that business students made decisions leading to higher net income for their mock companies when presented with dependencies in graphical form.
Though not applied to a design problem, the validation method for GISMO
presents quick, quantitative results demonstrating that a new computer-aided
process results in more effective decision making.
Clayton et al. (1998) provide an overview of other validation methods applied
to design computing research (e.g. Worked Example, Demonstration, Trial, and
Protocol) and software development (e.g. Software Development Productivity,
Software Effectiveness, Empirical Artificial Intelligence, and Software Usability).
Addressing the shortcomings of these methods and the research discussed above,
the Charrette Test Method compares the efficiency and effectiveness of using
different tools to perform a process. The term charrette means “cart” in French; originating at the École des Beaux-Arts, it has come to denote a short, intense design exercise among architects. The Charrette Test Method combines this architectural notion of the charrette with the software usability testing common in the Human-Computer Interaction research field. Clayton et al. developed the charrette method
“to provide empirical evidence for effectiveness of a design process to complement
evidence derived from theory…” The Charrette Test Method permits:
• multiple trials, which increase reliability;
• a repeatable experimental protocol;
• objective measurements;
• comparison of two processes to provide evidence for an innovative process.
Researchers can widely apply this method to design computing research
questions, but they must spend much time customizing the method to their
particular question. In design improvement research, test customization prevents
comparisons from being made across research projects, and “few attempts are
made to bring results together” (Blessing and Chakrabarti, 2009). Also, in
Clayton’s application of the method, skewed results occurred due to variability in
participant expertise and software problems.
In the Design Exploration Assessment Methodology (DEAM), the Energy
Explorer (a Microsoft Excel spreadsheet) allows test subjects to quickly generate
and record design alternatives to provide quantitative measurements of different
design strategies (Clevenger and Haymaker, 2009). However, this implementation
of the Charrette Test Method is very customized and cannot be generalized to other
Design Process Improvement research. The next section describes how MSCEE
further develops the Charrette Test Method without prohibitively narrowing its
application.
1.3 The MSCEE Method
1.3.1 Test Setup
Participants in the charrette work on project teams consisting of five members,
each assigned one or two of the following roles: Project Architect, Design
Architect, Daylighting Consultant, Mechanical Engineer, Structural Engineer, Cost
Estimator, CAD Manager.
The researchers present the team members with the goal of maximizing total
MACDADI value (Haymaker et al., 2009) for their relocatable classroom design.
The teams begin the charrette with a five-minute kickoff meeting to plan their
design process. Simulating the typical non-collocated, asynchronous project team,
the team members then disperse to sit at different computers and communicate
only via e-mail and/or any other research technologies being tested. The team tries
to maximize the MACDADI value by assigning values to the following
independent variables: Building Width, Building Height, Window Length,
Orientation, Equipment, and Structural Materials.
Each member inputs their role’s independent variables into one or more of the
mock-simulation tools assigned to their role. The mock-simulation tools (created in
Microsoft Excel) then analyze the input values to output performance values
(Figure 1.1). The conversion of inputs to outputs is not scientific; the simulation does not correlate with actual physical building performance. This lack of correlation is acceptable because the intent of MSCEE is to model the coordination design work: the work performed between simulations. That the actual input and output values have no physical significance is preferable to using real simulation tools, because it nullifies the domain-specific skills and experience of the test participants and focuses attention instead on the coordination design work that is impacted by the Design Process Improvement research.
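As an illustration of how such a tool behaves, the following sketch mimics a mock-simulation tool in Python rather than Excel. The variable names, coefficients, and outputs are hypothetical; only a deterministic input-to-output mapping is needed, since MSCEE deliberately avoids physically meaningful simulation.

# Illustrative sketch only: a mock-simulation "tool" with hypothetical
# variables and arbitrary, non-physical arithmetic, mirroring the idea that
# MSCEE tools convert inputs to outputs without modeling real performance.

def mock_energy_analysis(building_width: float,
                         window_length: float,
                         orientation_deg: float) -> dict:
    """Convert independent design variables into arbitrary performance outputs."""
    # The formulas are deliberately meaningless; they only need to be
    # deterministic so downstream tools and the value function can use them.
    energy_use = 100 + 0.8 * building_width - 0.5 * window_length
    daylight_score = (window_length * 2.0 + orientation_deg / 90.0) % 10
    return {"energy_use": round(energy_use, 2),
            "daylight_score": round(daylight_score, 2)}

if __name__ == "__main__":
    # A participant "clicks Analyze"; these outputs become another tool's inputs.
    print(mock_energy_analysis(building_width=12.0,
                               window_length=4.5,
                               orientation_deg=180.0))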
The project architect collects the performance values from the other team
members and enters the values into the MACDADI value function. The value
function contains five product performance goals and one design process goal. The
design process goal (Maximize Design Iterations) incentivizes fast iteration by the
design team. The teams must deliver a minimum of one milestone design every 20
minutes and a final design after 80 minutes.
Each design team takes both the control and the experimental test (a within-subject design). The test order is varied to verify that learning does not impact the results, and carryover learning is mitigated by changing the names of the variables and the design tools while keeping the topology of the information relationships the same.
Figure 1.1. Each Mock-Simulation tool is created in Microsoft Excel and resembles this
Energy Analysis tool. The participant finds dependent input values from the output values of
other tools. He then chooses a design by selecting input independent variables. Clicking the
“Analyze” button produces the output values, which become input to a subsequent tool.
1.3.2 Metrics Comparing Efficiency and Effectiveness
Each participant’s interactions with the design process improvement technology and with the mock simulations in Excel, as well as all e-mails sent to their team, are logged. Tracking the time spent on various tasks permits efficiency measurements, and the recorded MACDADI values permit effectiveness measurements.
1.3.2.1 Process Efficiency
Do designers with the experimental method perform more efficiently than with the
control method? Design processes consist of “production work that directly adds
value to final products, and coordination work that facilitates the production work”
(Jin and Levitt, 1996). An efficient process will minimize Total Work (the sum of
production and coordination work). MSCEE is only appropriate for validating
Design Process Improvement research aimed at coordination work. Narrator,
Geometric Narrator, DSM, and VDT all focus on this coordination work – they
concern themselves only with information flow, not how individual design tasks
are carried out nor how decisions are made (Baldwin et al., 1998).
By tracking the amount of time spent on each task, the percentage change in
design process efficiency due to introduction of the experimental method can be
calculated. The percentage change in efficiency is defined as:
∆ Efficiency = ∆ (Value Added Time / Total Work Time)    (1.1)
Note that MSCEE condenses value added time to near-zero through the use of the
mock-simulation tools. MSCEE cannot measure the impact of research intended to
affect production work. Also, MSCEE allows the teams only a fixed time period
with which to work. Consequently, the efficiency equation is modified to:
∆ Efficiency_MSCEE = ∆ (Time / Iteration)    (1.2)

The MSCEE efficiency metric is simply the percentage change in time per iteration between the control and experimental conditions.
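A minimal sketch of how Equation (1.2) could be computed from the charrette logs follows, assuming the logs yield total working time and iteration counts per condition; the numbers are illustrative placeholders, not experimental data.

# Sketch of the MSCEE efficiency metric (Equation 1.2). All numbers are
# illustrative placeholders, not measured data.

def time_per_iteration(total_minutes: float, iterations: int) -> float:
    return total_minutes / iterations

def delta_efficiency(control_tpi: float, experimental_tpi: float) -> float:
    """Percentage change in time per iteration (negative means faster iteration)."""
    return (experimental_tpi - control_tpi) / control_tpi * 100.0

control_tpi = time_per_iteration(80.0, 4)        # e.g. 4 iterations in 80 minutes
experimental_tpi = time_per_iteration(80.0, 6)   # e.g. 6 iterations in 80 minutes
print(f"Delta Efficiency_MSCEE: {delta_efficiency(control_tpi, experimental_tpi):.1f}%")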
1.3.2.2 Process Effectiveness
Effective design processes are more likely to lead to effective products. For each iteration, the Project Architect collects the Mock-Simulation tool performance output and enters the output into the MACDADI tool. MACDADI measures each goal on a scale of -3 to +3 and aggregates the weighted goals into a single score.
The researchers calculate the percentage increase in the MACDADI score due to
the experimental method’s implementation:
∆ Effectiveness_MSCEE = ∆ MACDADI    (1.3)
This measurement indicates the impact of the experimental method on design
process effectiveness.
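The following sketch illustrates one way this effectiveness metric could be computed. The goal names, weights, and scores are hypothetical placeholders; the actual MACDADI value function defines its own goals and weights (Haymaker et al., 2009).

# Hedged sketch of the effectiveness metric (Equation 1.3). Goal names,
# weights, and scores are hypothetical; MACDADI aggregates weighted goal
# scores (each on a -3 to +3 scale) into a single value.

GOAL_WEIGHTS = {                 # hypothetical weights summing to 1.0
    "energy": 0.25,
    "daylight": 0.20,
    "cost": 0.25,
    "structure": 0.15,
    "design_iterations": 0.15,   # the added design process goal
}

def macdadi_score(goal_scores: dict) -> float:
    """Aggregate goal scores (-3..+3) into a single weighted value."""
    return sum(GOAL_WEIGHTS[goal] * score for goal, score in goal_scores.items())

def delta_effectiveness(control: float, experimental: float) -> float:
    """Percentage change in MACDADI score due to the experimental method."""
    return (experimental - control) / abs(control) * 100.0

control = macdadi_score({"energy": 1, "daylight": 0, "cost": 2,
                         "structure": -1, "design_iterations": 0})
experimental = macdadi_score({"energy": 2, "daylight": 1, "cost": 2,
                              "structure": 0, "design_iterations": 2})
print(f"{delta_effectiveness(control, experimental):.0f}% change in MACDADI value")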
1.4 Example Application of MSCEE to Validate Design Process Communication Methodology
1.4.1 Description of Design Process Communication Methodology and the Process Integration Platform
The Design Process Communication Methodology (DPCM) specifies a social,
technical, and representational environment for design process communication that
is Computable, Distributed, Embedded, Modular, Personalized, Scalable, Shared,
Social, Transparent, and Usable (Senescu and Haymaker, 2009). To test DPCM,
the research maps the specifications to software features in the Process Integration
Platform (PIP). PIP is a process-based information communication web tool. The
authors used MSCEE to measure the impact of PIP (a proxy for DPCM) on design
process efficiency and effectiveness.
1.4.2 Testing PIP Using the MSCEE
The researchers assigned each role only one mock-simulation tool. When the
analysis is complete, each participant uploads the mock-simulation tool to PIP, so
that other participants can view the mock-simulation results. The control group
does not have arrow-drawing capability in PIP, so the control team cannot see how
the various mock-simulation tools are related. The experimental group is instructed
to draw arrows to show information dependencies.
The research hypothesizes that after information relationships are made
transparent during the first iteration, the experimental group will better
comprehend the process. This process awareness will allow the experimental group
to collaborate more efficiently and effectively.
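Conceptually, the arrows drawn in PIP form a directed graph of information dependencies between mock-simulation tools. The sketch below shows one simple way such a graph could be represented; the tool names and edges are hypothetical examples, not the actual charrette configuration or PIP's internal data model.

# Sketch of information dependencies as a directed graph. Tool names and
# edges are hypothetical; this is not PIP's actual implementation.

from collections import defaultdict

dependencies = defaultdict(list)   # arrow: upstream tool -> downstream tools

def add_arrow(upstream: str, downstream: str) -> None:
    """Record that `downstream` consumes output from `upstream`."""
    dependencies[upstream].append(downstream)

add_arrow("Massing Model",       "Energy Analysis")
add_arrow("Energy Analysis",     "Mechanical Sizing")
add_arrow("Structural Analysis", "Cost Estimate")
add_arrow("Mechanical Sizing",   "Cost Estimate")

def downstream_of(tool: str) -> list:
    """Which tools consume this tool's outputs (i.e., whose owners to notify)?"""
    return dependencies[tool]

print(downstream_of("Energy Analysis"))   # -> ['Mechanical Sizing']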
1.4.3 Preliminary Results
The experimental group accurately communicated the information relationships between the mock-simulation tools, whereas the control group simply uploaded the tools in a list (Figure 1.2).
The control group achieved the best design in Iteration 2, but subsequently
designed options of increasingly lower value (Figure 1.3). This sporadic design
iteration suggests that they selected design variables randomly as opposed to
collaborating effectively to progress toward increasingly higher valued designs.
On the other hand, the experimental group systematically increased their
design’s value with each iteration, suggesting PIP allowed them to better
comprehend the process and consequently, collaborate to achieve a better design.
A post-experiment survey questioned the participants about their design
experience. For the Experimental Group, 40% Strongly and 40% Moderately
Agreed that after designing the first iteration they learned the design process and
designed subsequent iterations more quickly. For the Control Group, 0% Strongly
Agreed and 75% Moderately Agreed that after designing the first iteration they
learned the process and designed subsequent iterations more quickly. This result
suggests an efficiency increase. A similar question about effectiveness did not
suggest any perceived difference between control and experiment.
Figure 1.2. Using the MSCEE, the Control Group exchanged mock-simulation results
(Excel files) in PIP without defining the information relationships (left). The Experimental
Group drew arrows to define the information dependencies as they shared the mock-simulation results (right). Only design iteration two is shown.
Also, 100% of the participants who drew arrows claimed that they decided on a design based on someone in their group asking for a certain design value, as opposed to only 25% of the control group. This result suggests that the transparent information relationships enabled designers to better comprehend who impacted their designs and thus to more easily request particular design values. These
requests explain the experimental group’s steady increase in MACDADI value
(Figure 1.3).
When validating DPCM for the first time, both the Control and Experimental
groups only iterated once per milestone, resulting in four iterations and no
difference in efficiency. This result prompted the authors to include the process
goal (discussed above) in the MACDADI value function to incentivize more
iteration.
The efficiency and effectiveness metrics were inconclusive for the first PIP
charrette. A future, more rigorous and statistically significant implementation of
the MSCEE is planned.
Figure 1.3. The Control Group did not continuously increase MACDADI value,
demonstrating an inability to meet design goals. Able to view the team’s design process, the
Experimental Group systematically increased MACDADI value.
1.5 Conclusion
This paper extends the Charrette Test Method and generalizes DEAM to develop
the MSCEE – a quick, quantitative, general experimental method for evaluating
Design Process Improvement research focused on coordination and rework. By
testing the DPCM using the MSCEE, this paper demonstrates that the MSCEE yields insightful results that can provide evidence for the efficiency and effectiveness of new design process research. As the researchers only applied the
method to a small group of students, drawing conclusions about the impact of
DPCM on design processes is difficult. Also, since the researchers did not apply
MSCEE to other research projects nor test DPCM with other research methods,
this paper does not provide quantitative evidence comparing MSCEE to other
methods. However, unlike previous Charrette Methods, the MSCEE allows DPCM
to be compared with other Design Process Improvement research that also uses
MSCEE; it is highly repeatable. MSCEE also offers quick iteration, quantifiable efficiency and effectiveness metrics, and insensitivity to participant expertise.
Because MSCEE only focuses on Coordination Work, its application is
generalizable to any design domain where such work significantly impacts the
design process. Widespread application of the method may provide sufficient
validation for technology transfer of process improvement research through
investment in commercializing process improvement technology.
Application of MSCEE revealed an unexpected benefit to design education.
Students reported surprise at the difficulty of the task, given the simplicity of the
mock-simulation tools. The method taught the significance of coordination and
rework and the difficulty in making multi-disciplinary decisions.
Future work will better calibrate the mock-simulation tools and MACDADI
goal metrics to permit variations in efficiency measurements. The authors also
want to further investigate the external validity of the method to confirm that the coordination and rework adequately resemble processes in industry.
1.6 References
Baldwin AN, Austin SA, Hassan TM, Thorpe A (1998) Planning building design by
simulating information flow. Automation in Construction 8: 149-163
Ballard G (2000) Positive vs negative iteration in design. 8th Annual Conference of the
International Group for Lean Construction. July 17-19, 2000. Brighton, UK
Ballard G, Koskela L (1998) On the agenda of design management research. 6th Annual
Conference of the International Group for Lean Construction. August 13-15, 1998.
Guarujá, Brazil
Blessing LTM, Chakrabarti A (2009) DRM, a design research methodology. Springer-Verlag
Clarkson J, Eckert C (2005) Design process improvement: A review of current practice. Springer, London
Clayton M, Kunz J, Fischer M (1998) The charrette test method. Center For Integrated
Facility Engineering, Stanford University TR-120.
Clevenger C, Haymaker J (2009) Framework and metrics for assessing the guidance of
design processes. 17th International Conference on Engineering Design. August 24-27,
2009. Stanford, CA
Eppinger SD (1991) Model-based approaches to managing concurrent engineering. Journal
of Engineering Design 2(4): 283-290
Fischer M (2006) Formalizing construction knowledge for concurrent performance-based
design. In: Intelligent computing in engineering and architecture. Springer, Berlin,
Germany, Vol.4200, pp 186-205
Flager F, Haymaker J (2007) A comparison of multidisciplinary design, analysis and
optimization processes in the building construction and aerospace industries. 24th
International Conference on Information Technology in Construction. June 27-29, 2007.
Maribor, Slovenia, pp 625-630
Flager F, Welle B, Bansal P, Soremekun G, Haymaker J (2009) Multidisciplinary process
integration and design optimization of a classroom building. Journal of Information
Technology in Construction 14: 595-612
Gallaher MP, O’Connor AC, Dettbarn JL Jr, Gilday LT (2004) Cost analysis of
inadequate interoperability in the U.S. capital facilities industry. National Institute of
Standards and Technology
Garcia ACB, Kunz J, Ekstrom M, Kiviniemi A (2004) Building a project ontology with
extreme collaboration and virtual design and construction. Advanced Engineering
Informatics 18: 71-83
Haymaker J (2006) Communicating, integrating and improving multidisciplinary design
narratives. In: Gero JS (ed.) Second International Conference on Design Computing and
Cognition. Technical University of Eindhoven, The Netherlands, Springer, pp 635-653
Haymaker J, Chachere J, Senescu R (2008) Measuring and improving rationale clarity in a
university office building design process. Center for Integrated Facility Engineering,
Stanford University, TR-178
Haymaker J, Kunz J, Suter B, Fischer M (2004) Perspectors: Composable, reusable
reasoning modules to construct an engineering view from other engineering views.
Advanced Engineering Informatics 18: 49-67
Heiser J, Tversky B (2006) Arrows in comprehending and producing mechanical diagrams.
Cognitive Science 30: 581-592
Huovila P, Koskela L, Lautanala M, Tanhuanpaa VP (1995) Use of the design structure
matrix in construction. In: Alarcon L (ed.) 3rd Workshop on Lean Construction.
Albuquerque, NM, USA, A.A.Balkema, pp 429-437
Jin Y, Levitt RE (1996) The virtual design team: A computational model of project
organizations. Computational & Mathematical Organization Theory 2: 171-195
Koskela L (1992) Application of the new production philosophy to construction. Center for
Integrated Facility Engineering, Stanford University TR-72
Maher ML (2007) The synergies between design computing and design cognition. ASCE
Conference Proceedings. Pittsburgh, PA, USA, pp 374-382
Navarro M (2009) Some buildings not living up to green label. New York Times, New
York, NY, USA, August 30, 2009, pp A8
O'Donovan B, Eckert C, Clarkson J, Browning TR (2005) Design planning and modelling.
In: Clarkson J, Eckert C (eds.) Design process improvement: A review of current practice. Springer, London, pp 60-87
Pracht WE (1986) GISMO: A visual problem-structuring and knowledge-organization tool.
IEEE Transactions on Systems, Man and Cybernetics 16: 265-270
Scofield JH (2002) Early performance of a green academic building. ASHRAE
Transactions. Atlanta, GA, pp 1214-1230
Senescu R, Haymaker J (2008) Requirements for a process integration platform. Social
Intelligence Design Workshop, December 3-5, 2008, San Juan, Puerto Rico
Senescu RR, Haymaker JR (2009) Specifications for a social and technical environment for
improving design process communication. In: Dikbas A, Giritli FH (eds.) 26th
International Conference, Managing IT in Construction. October 1-3, 2009, Istanbul,
Turkey
Simon HA (1988) The science of design: Creating the artificial. Design Issues 4: 67-82
Steward D (1981) The design structure matrix: A method for managing the design of
complex systems. IEEE Transactions on Engineering Management 28: 74–87
Wix J (2007) Information delivery manual: Guide to components and development methods.
buildingSMART, Norway
Young N, Jr., Jones SA, Bernstein HM (2007) Interoperability in the construction industry.
SmartMarket Report: Design & Construction Intelligence, McGraw Hill Construction