
Fast Adaptable Next-Generation Ground Vehicle
Challenge, Phase 1 (FANG – 1)
System Dynamics Model Development
Final Report
Dr. Eun Suk Suh
Junior Consultant
Olivier L. de Weck
Chief Scientist
Intelligent Action Inc.
Prime Contract: DARPA # HR0011-13-C-0041
Subcontract: VU # 1723-S8
December 31, 2013
Executive Summary
The first phase of the FANG Design Challenge, sponsored by DARPA from January 15 to April
15, 2013, marked DARPA's first attempt to assess the feasibility and effectiveness of
crowd sourcing for its military vehicle design, a departure from the traditional design process,
in which only a handful of experienced companies bid for a system development contract.
In the first phase of the design competition, over 200 teams participated and competed for
the $1 million prize. The competition had a fixed end date for the design effort, and the
concepts generated had to meet a minimum utility score with all requirements satisfied.
The winner, claiming the prize money, was a three-person team called "Ground Systems",
with members located in Ohio, Texas, and California.
Subsequent post-processing analysis of the FANG-1 competition (see separate report dated
September 21, 2013) was conducted and presented on July 9, 2013 at the AVM PI meeting,
which was held at Camp Pendleton, CA. The post-processing analysis revealed several
interesting findings regarding the model library, the META tool chain and productivity, scoring
and test benches, and, finally, competing team size and effort distribution.
Based on the post-processing analysis results and benchmarking against the already validated
META process system dynamics simulation, a FANG-1 system dynamics simulation was
constructed to describe the dynamics of individual teams participating in the design
competition in the context of the entire competition environment.
In this report, details of the FANG-1 competition system dynamics simulation model are
presented. The purpose of the model is to mimic the behavior of a design team during the
competition and, through sensitivity analysis of key identified variables, to determine the
conditions under which design teams are most productive and yield the highest quality results.
The background, a description of the model, and some preliminary simulation validation
results are presented.
Table of Contents
Executive Summary
1. Background
2. FANG-1 System Dynamic Model
2.1 General Model Overview
2.2 General Simulation Flow
2.3 Competition level factors
2.4 Team level factors
2.5 Resulting Factors
2.6 Section Summary
3. Current Status of SD Model and Sample Results
4. Next Steps
References
1. Background
Typically, the concept development of complex systems, such as military vehicles, is done by
major corporations staffed with subject matter experts in the particular system and subsystem
designs. However, with the rise of crowd sourcing for many different purposes, it was
suggested that crowd sourcing could also be utilized for the concept generation of complex
system architectures.
In early 2013, the Defense Advanced Research Projects Agency (DARPA) initiated a design
competition to assess the feasibility of crowd sourcing in complex system concept
development. The competition was named the Fast Adaptable Next-Generation Ground Vehicle
Design Challenge, Phase 1 (also known as FANG-1), and interested participants were invited
to form design teams to create concepts for the next-generation ground vehicle. The winning
team was to be awarded $1 million. The competition ran from January 15 through April 15,
2013, with over 200 teams participating, and on April 22, "Ground Systems", a three-member
design team, was announced as the winner.
In order to assess the pros and cons of the FANG-1 competition, a post-processing analysis
was performed [de Weck 2013]. The analysis was based on vehicle architecture concepts
submitted by competing teams on VehicleForge and survey responses received from 29
finalists. The analysis yielded results in four key areas of the competition: 1) model library
and architecture exploration, 2) META tool chain and productivity, 3) team size and effort
distribution, and 4) scoring and test benches.
Based on the post-processing and survey results, a simulation model to simulate a design
team’s internal working dynamics during the FANG-1 competition was created. The
simulation model is based on system dynamics [Sterman 2000], which can quantitatively
describe the dynamics and feedback loops of architecture concept generation and assessment
processes. In the next section, a general overview of the simulation model is presented.
Subsequent sections describe the current status of the simulation and the future work
intended for the model.
2. FANG-1 System Dynamic Model
2.1 General Model Overview
The main objectives for constructing this simulation model are:
1. To gain an improved understanding of the inner dynamics of a typical system design
team participating in the FANG-1 competition, within the context of the overall
design competition environment.
2. To identify key drivers that can enhance the performance of participating design teams.
3. To implement the findings to improve the second and third FANG challenges.
With these objectives in mind, the system dynamics based simulation model for
FANG-1 design teams was constructed. Figure 1 shows the current model.
Figure 1: FANG-1 System Dynamics Model
(The diagram links competition-level variables (Number of Competing Teams, DARPA Purse
Size, Competition Top Score, Number of Servers, Server Load, Average Testing Run Duration
for All Teams) with team-level variables (FANG Team Size, Team Efficiency, Work Rate,
Fatigue Factor, Team Morale, Productivity, Architecture Synthesis Rate, Level of Abstraction,
Components Used in Architecture, Expected Compile Time, Test Execution Rate) through the
architecture generation and evaluation feedback loops.)
2.2 General Simulation Flow
The simulation was created within the context and assumptions of the FANG-1 design
competition. It covers 70 days, equivalent to the number of working days from January 15
through April 15, 2013. It describes an individual design team's architecture generation and
evaluation activities, with appropriate feedback loops to reflect the team's status as the
competition progresses. The team activities modeled are as follows.
1. Architecture Generation: At this stage, the team creates vehicle architecture
concepts, using the C2M2L component library provided by the competition sponsor
(www.vehicleforge.org). A created vehicle architecture contains anywhere from
40 components up to ~340 components for the more complex designs.
2. Architecture Assessment: Completed vehicle architectures are then uploaded onto a
server, to be evaluated for performance and lead-time for manufacturing. The
evaluation process on the server can take time, depending on the complexity of the
vehicle architecture and the number of evaluation jobs submitted by competing teams
that are ahead in the queue.
3. Concept Scoring: The assessed vehicle architecture is given a score, based on its
performance level, manufacturing lead-time, and unit cost. The score is then provided
to the design team, which is impacted by the score (in terms of morale and
productivity) and subsequently iterates the process to improve the score and the
overall ranking.
As individual teams follow the outlined design process, several factors influence a team's
performance. To accurately represent both the competition environment and the individual
team environment, the simulation takes into account both competition-level and team-level
factors.
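The three-stage process above can be sketched as a daily time-step loop. The following Python fragment is an illustrative sketch only; the function name, rates, and the placeholder scoring curve are assumptions, not the calibrated values used in the actual system dynamics model.

```python
# Minimal sketch of the daily simulation loop: generate architectures,
# evaluate them on an increasingly loaded server, and keep the best
# score to date.  All rates and curves below are illustrative assumptions.

DAYS = 70  # working days from January 15 through April 15, 2013

def run_team_simulation(synthesis_rate=10 / 70, base_eval_hours=1.0):
    """Step one design team through the competition day by day."""
    concepts_ready = 0.0   # architectures generated, awaiting evaluation
    team_top_score = 0.0
    history = []
    for day in range(DAYS):
        # 1. Architecture generation: concepts accumulate at the synthesis
        #    rate (about ten concepts over 70 days at abstraction level one).
        concepts_ready += synthesis_rate
        # 2. Architecture assessment: server evaluation slows as the
        #    competition-wide submission load grows toward the deadline.
        eval_hours = base_eval_hours * (1 + day / DAYS)
        evaluated = min(concepts_ready, 24 / eval_hours)
        concepts_ready -= evaluated
        # 3. Concept scoring: each evaluated concept yields a score, and
        #    the team keeps its best score so far, iterating to improve it.
        if evaluated > 0:
            score = 100 * (1 - 0.5 ** (day / 20))  # placeholder scoring curve
            team_top_score = max(team_top_score, score)
        history.append(team_top_score)
    return history

scores = run_team_simulation()
```

Feedback effects such as morale and fatigue, which would modify the synthesis rate from day to day, are discussed in the following subsections.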
2.3 Competition level factors
Design teams participating in the competition are inevitably influenced by the overall
competition environment to which they are subjected. The dashboard shows the current
leaders in the competition along with their current best scores and ranks. In FANG-1, only
the winning team received a monetary reward; all other teams received no compensation. The
simulation attempts to capture several competition-level factors that may influence a design
team's performance.
1. Architecture component library: Competition participants were provided with a
library of vehicle components (C2M2L-1) for their architecture concept generation.
According to FANG-1 post-processing analysis, 196 component types were provided.
On average, a typical architecture concept created by the design teams had 190
components, including "repeated" components that were instantiated more than
once (e.g., wheels).
2. Component level of abstraction: This represents the fidelity of component
description. In the post-processing analysis of FANG-1, the level of abstraction for
the component library used was one. The level of component abstraction and the
number of components available in the library have a great influence on the system
architecture synthesis rate. In the simulation, it was assumed that, with one level of
abstraction, a three-person team can generate approximately ten concepts within the
70-day period. This is close to the empirically observed average during the FANG-1
competition. If the level of abstraction is enhanced to three, then the number of
concepts generated will increase to thirty, reflecting the benefit of increased
abstraction levels.
3. DARPA prize: For this competition, the prize money for the winning team was $1
million. The assumption built into the model is that if the prize money is increased or
decreased, the number of participating teams changes accordingly, making the contest
more or less attractive and competitive.
4. Concepts compiling server: Participating design teams were required to upload and
compile their architecture concepts on VehicleForge servers provided by the
competition host. However, as the competition progressed, participating teams were
uploading more concepts with increasing frequency, thus increasing computational
load on the online server. This queuing effect was incorporated into the simulation.
5. Competition top score: As the competition progresses, design teams will submit
architecture concepts that are increasingly improved in terms of key metrics, thus
driving the competition top score upwards. This trend is incorporated in the simulation
using a simple linear equation. If the top score time series becomes available for the
FANG-1 competition, it could be substituted for the current equation.
6. Competition team ranking profile: Currently, it is assumed that the score
distribution for all competing teams is triangular, with the competition top score as
the maximum, 10% of the competition top score as the minimum, and 40% of the
competition top score as the mean. The simulation takes an individual team score’s
position within the distribution into account, and then assigns the team rank based on
it.
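The rank assignment in item 6 can be made concrete with a short sketch. With minimum 0.1T, mean 0.4T, and maximum T, the implied mode of the triangular distribution is 3(0.4T) - 0.1T - T = 0.1T, so the mode coincides with the minimum and the density decreases linearly from the minimum score to the top score. Only the distribution parameters below come from the text; the function name and the rounding rule are assumptions.

```python
def team_rank(team_score, top_score, n_teams):
    """Assign a rank from a team's position in the assumed triangular
    score distribution (min = 10% of the top score, mean = 40%,
    max = the top score; the mode therefore equals the minimum)."""
    a = 0.1 * top_score                      # distribution minimum
    b = top_score                            # distribution maximum
    s = min(max(team_score, a), b)           # clamp into the support
    # CDF of a triangular distribution whose mode equals its minimum.
    cdf = 1.0 - (b - s) ** 2 / (b - a) ** 2
    # The fraction of teams scoring above this team sets its rank.
    return 1 + round((n_teams - 1) * (1.0 - cdf))
```

For example, a team at the top score is ranked 1, while a team at the distribution minimum is ranked last among the competing teams.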
2.4 Team level factors
With the competition environment established in the simulation model, the next step is to
incorporate individual team behavior into the simulation model. The following factors are
considered and implemented in the model.
1. FANG-1 team size: In the FANG-1 competition, individual team size ranged from
one person to three or more people. In the simulation, the “FANG Team Size” affects
“Team Efficiency.” It was assumed that the team efficiency follows Taguchi’s
Nominal Is Best (NIB) utility curve, with a four-member team being the most
efficient; any team with more or fewer members is less efficient. This, in turn,
impacts the team's productivity.
2. Work hour fraction per team member: The survey results indicated that the
average finalist team spent about 1,200 hours of total time on the competition, or
about 0.6 person-years. The simulation was calibrated to the total work hours spent by
a three-person finalist team over the entire competition period of 70 days. If the team
works overtime, this increases the team fatigue factor, which in turn negatively
impacts team productivity.
3. Number of components used in architecture: One can set the number of
components the design team uses for its vehicle architecture concept. Currently, the
number is set to 190, which is the "average" number of components used in a typical
vehicle architecture created by FANG-1 competition teams. As the number of
components in the architecture concept increases, the compiling time for architecture
concept evaluation increases.
4. Programmatic Weight: This is the design team's preference with respect to the
weight given to each architecture evaluation metric, namely vehicle manufacturing
lead time and unit manufacturing cost. For FANG-1, the score for a vehicle
architecture concept depended heavily on the manufacturing lead-time.
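The team-efficiency assumption in item 1 can be expressed as a simple Nominal-Is-Best curve. The quadratic-loss form follows Taguchi's formulation, but the curvature constant k below is an illustrative assumption, not a calibrated value.

```python
def team_efficiency(team_size, optimal_size=4, k=0.08):
    """Taguchi Nominal-Is-Best style efficiency: a quadratic loss
    penalizes deviation from the optimal team size of four members
    in either direction (k is an assumed curvature constant)."""
    return max(0.0, 1.0 - k * (team_size - optimal_size) ** 2)
```

With this curve, a four-member team has an efficiency of 1.0, while three- and five-member teams are equally, and slightly less, efficient.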
2.5 Resulting Factors
With the simulation incorporating all of the key factors outlined above, several resulting
factors are obtained and fed back into various aspects of the simulation. The key resulting
factors are the following.
1. Team top score: Based on the design team's configuration and its programmatic
weight choices, a team's architecture receives a certain score within a range. The top
score to date is kept.
2. Team rank in the competition: Once the team obtains the score for the architecture
it created, the score is compared against the current overall competition score profile.
A ranking is then assigned to the team based on its top score.
3. Team morale: This variable was added to reflect a design team's motivation for the
competition as a function of the team's ranking among participating teams and the
score gap between the team's score and the top score. The underlying assumption is
that team morale is greatest when the team is ranked around 2 or 3 and the score
gap is small enough that the team has a good chance of becoming the top team. The
top team will have great motivation, but it may behave defensively, lacking the
aggressive edge that other teams possess. This impacts team productivity, as more
motivated teams tend to be more productive. Teams with very low scores and a
large score gap may also be less motivated, as their deficit may seem
insurmountable. The assumptions about team morale are based on best guesses from
the post-FANG-1 survey and could be adjusted in the future.
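The morale assumptions in item 3 can be captured by a simple two-factor curve. Everything below is a sketch of the stated best guesses; the functional form and all constants are assumptions.

```python
def team_morale(rank, score_gap_fraction):
    """Illustrative morale curve: morale peaks for teams ranked around
    2-3 that trail the leader by a small margin, the leader is slightly
    dampened (defensive behavior), and far-behind teams with a large
    score gap lose motivation."""
    if rank == 1:
        rank_factor = 0.9                      # defensive top team
    else:
        # Full motivation at ranks 2-3, decaying for lower ranks.
        rank_factor = 1.0 / (1.0 + 0.1 * max(0, rank - 3))
    # Morale falls as the deficit to the competition top score grows.
    gap_factor = max(0.0, 1.0 - score_gap_fraction)
    return rank_factor * gap_factor
```

Under these assumptions, a close second-place team has higher morale than either the leader or a distant twentieth-place team with a large gap, matching the behavior described above.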
2.6 Section Summary
In this section, the FANG-1 design team system dynamics model was presented in detail. The
model takes into account the overall architecture generation and assessment process,
competition-level factors, individual team-level factors, and resulting factors to capture
all relevant dynamics that influence a generic design team going through the architecture
concept creation and evaluation process within the competition environment.
3. Current Status of SD Model and Sample Results
The FANG-1 system dynamics model shown in Figure 1 was constructed with the
assumptions presented in Section 2, which were subsequently implemented within the
model. Several key parts of the model were tested and validated against actual FANG-1
results. Some of those results are presented below.
Level of Component Abstraction
Figure 2 shows the simulated number of architecture concepts generated by a participating
design team as a function of the available component abstraction level.
Figure 2: Architecture Concepts Generated by Design Team
(Blue: Abstraction Level 1, Red: Abstraction Level 3)
Note that as the abstraction level increases, the rate of architecture concept creation by the
design team increases (in this case by a factor of ~ 3).
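The factor-of-three increase is consistent with the assumption, stated in Section 2.3, that the synthesis rate scales linearly with the abstraction level (roughly ten concepts over 70 days at level one and thirty at level three). A minimal sketch of that assumed relationship:

```python
def synthesis_rate(abstraction_level, base_concepts=10.0, period_days=70):
    """Architecture concepts generated per day, assuming the rate scales
    linearly with the component abstraction level: ~10 concepts over the
    70-day competition at level 1 and ~30 at level 3 (a linear fit
    through the two calibration points given in the text)."""
    return base_concepts * abstraction_level / period_days
```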
Architecture Concept Evaluation Time
Figure 3 shows the time taken for a single architecture concept to be evaluated on the server.
Figure 3: Architecture concept evaluation time as a function of time
(Left: Simulated Time for Individual Team, Right: Actual FANG-1 Daily Computation Load)
On the left is the simulated concept evaluation time (in hours) for a single architecture
submitted by a design team. Evaluation time increases as the competition progresses, due to
the increasing frequency of architecture concept submissions by all competing teams, which
creates additional load on the server, as shown on the right graph of Figure 3. The total
number of servers available is held constant in the simulation.
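This server-load effect can be sketched as follows. The growth profile, the number of servers, and the base compile time below are all assumptions for illustration; only the qualitative behavior (evaluation time rising toward the deadline with a fixed server pool) comes from the text.

```python
def evaluation_time_hours(day, n_teams=200, n_servers=10,
                          base_compile_hours=0.5, peak_fraction=0.3):
    """Illustrative evaluation-time model: the number of teams submitting
    concepts grows linearly toward the deadline, and each pending job per
    server adds one compile slot of queueing delay.  All parameter values
    here are assumptions, not FANG-1 data."""
    submitting = n_teams * peak_fraction * (day / 70)  # jobs arriving today
    queue_per_server = submitting / n_servers          # queue depth per server
    return base_compile_hours * (1 + queue_per_server)
```

With these assumed parameters, evaluation near the end of the competition takes several times longer than at the start, reproducing the qualitative trend on the left of Figure 3.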
Team Score, Rank and Morale
Figure 4 shows the score for the vehicle architecture generated by the design team over the
course of the competition. The score depends on the team's preference weight for the
programmatic performance metric, which was one of the major factors in the actual FANG-1
score. In this case, the team's preference weight was set to 0.9. In this particular run, the
team was not able to significantly improve its score over the course of the simulation.
Figure 4: Scores for Vehicle Architecture Generated by a Design Team
Figure 5 shows the team's rank with respect to the other teams in the competition, given the
top scores obtained in Figure 4. Note that in the beginning of the competition, the team's
rank was "high" (staying within the top ten), but as the competition progressed, the team's
design was overtaken by competing teams' improvements, resulting in a "lower" rank.
Figure 6 shows the team's morale as the competition progressed and the team's rank
fluctuated. Morale was high in the beginning and increased slightly while the team was in a
top contender position (~day 50). However, with the subsequent decline in rank, morale
decreased accordingly.
Finally, Figure 7 shows the team’s productivity as the competition progresses. As with team
morale, the productivity of the team fluctuates, depending on the team’s rank, fatigue factors,
and other relevant factors that are modeled in the simulation.
In this section, the current status of the simulation model was presented through
demonstration of some key graphs pertaining to various aspects of the competition and team
dynamics. In the final section, the next step for the project is discussed.
Figure 5: Team's Rank within the Competition
(With team top scores from Figure 4; team rank from 0 to 20 plotted against Time (Day) from 0 to 70)
Figure 6: Plot of Design Team Morale
Figure 7: Plot of Design Team Productivity
4. Next Steps
In previous sections, the background of the FANG-1 competition was presented. A system
dynamics model of FANG-1 design teams was created to better understand the dynamics of
design teams in the competition environment. Preliminary models have been constructed, and
the initial demonstration of key performance metrics showed promise. To bring this project to
a successful conclusion, and to improve the execution of FANG challenges two and three, the
following activities need to be carried out as next steps:
1. Validation of the model with actual FANG-1 results: This needs to be done in order
to bring the simulation model closer to the reality of the competition.
2. Sensitivity analysis: Sensitivity analysis of the key independent variables (e.g., level
of component abstraction, DARPA purse size, number of servers available for concept
assessment) that can be controlled by the competition sponsor and the individual
design teams is critical for identifying the influential factors that impact the overall
vehicle architecture concept creation process.
3. Optimization and recommendation: The simulation can then be used to find the
optimal conditions for vehicle architecture concept generation. Based on the results, a
set of recommendations will be made for FANG challenges two and three, potentially
improving the design experience for participating teams as well as yielding superior
vehicle architecture designs.
Overall, the project shows great promise for the implementation of crowd sourcing based
architecture concept generation, opening many doors for future complex system development
challenges.
References
DARPA FANG-1 Press Release, April 22, 2013
http://www.darpa.mil/NewsEvents/Releases/2013/04/22.aspx
de Weck O.L., “Feasibility of a 5x Speedup in System Development due to META Design”,
Paper DETC2012-70791, ASME 2012 International Design Engineering Technical
Conferences (IDETC) and Computers and Information in Engineering Conference (CIE)
Chicago, Illinois, August 12-15, 2012
Sterman J.D., Business Dynamics: Systems Thinking and Modeling for a Complex World,
McGraw-Hill, ISBN 0-07-231135-5, 2000
Vehicle Forge Platform, URL accessed on December 31, 2013: www.vehicleforge.org