Multi-Attribute Tradespace Exploration as Front End for Effective Space System Design
9 October 2009
2LT. John Richmond
Greg O’Neill
Jorge Cañizales Diaz
MATE-CON
“Multi-Attribute Tradespace Exploration with Concurrent Design”
• What does it mean?
• Applying a set of decision metrics (attributes) that integrate all stakeholder requirements to generate a framework incorporating all qualified designs and indicating the most viable candidates.
Taxonomy
• MATE-CON buzzwords
Decision Maker - Person who makes decisions that impact a system at any stage of its lifecycle
Design Variable - Designer-controlled quantitative parameter that reflects an aspect of a concept
Design Vector - Set of design variables that, taken together, uniquely define a design or architecture
Attribute - Decision-maker-perceived metric measuring how well a defined objective is met
Utility - Perceived value under uncertainty of an attribute
Tradespace - Space spanned by the completely enumerated design variables
Pareto Frontier - Set of efficient allocations of resources forming a surface in metric space (see the sketch after this list)
Exploration - Utility-guided search for better solutions within a tradespace
Concurrent Design - Techniques of design that utilize information technology for real-time interaction among specialists
Architecture - Level of segmentation for analysis that represents overall project form and function
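The tradespace and Pareto frontier terms above can be made concrete with a short sketch. This is a minimal illustration, not from the original slides: it assumes each enumerated design has already been scored with a cost metric and an aggregate utility, and it keeps only the non-dominated designs. All names and numbers are hypothetical.

```python
# Minimal sketch (not from the slides): extract the Pareto-efficient subset of an
# enumerated tradespace in which each design is scored by (cost, utility).
# Lower cost and higher utility are better; a design is dominated if some other
# design is at least as good on both metrics and strictly better on one.

def pareto_frontier(designs):
    """Return the non-dominated designs from a list of dicts with 'cost' and 'utility' keys."""
    frontier = []
    for d in designs:
        dominated = any(
            o["cost"] <= d["cost"] and o["utility"] >= d["utility"]
            and (o["cost"] < d["cost"] or o["utility"] > d["utility"])
            for o in designs
        )
        if not dominated:
            frontier.append(d)
    return frontier

# Hypothetical tradespace entries (illustrative numbers only).
tradespace = [
    {"name": "A", "cost": 120.0, "utility": 0.55},
    {"name": "B", "cost": 150.0, "utility": 0.80},
    {"name": "C", "cost": 150.0, "utility": 0.60},  # dominated by B
    {"name": "D", "cost": 200.0, "utility": 0.78},  # dominated by B
]

print([d["name"] for d in pareto_frontier(tradespace)])  # ['A', 'B']
```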
Index
1. Context of MATE-CON
2. Implementing MATE-CON
   1. MATE
   2. CON
3. Alternatives
4. Benefits
5. Limitations
6. Discussion
Context of MATE­CON
• Process for Tradespace Exploration and Concept Selection (MATE).
• Includes aid for Requirements Definition.
• Iterates back and forth with Design (CON) to gain accuracy.

[Figure: process flow diagram - Requirements Definition, System Architecture, Concept Generation, Tradespace Exploration, Concept Selection, Design Definition, and Multidisciplinary Optimization, with Human Factors alongside.]
Context of MATE­CON
• Inputs:
• Important Stakeholders.
• Set of different Concepts.
• Outputs:
• System requirements for the Detailed Design phase.
• Knowledge of the design tradespace.

[Figure: the same process flow diagram as on the previous slide.]
Implementing MATE­CON
[Figure: MATE-CON process, Images by MIT OpenCourseWare. Architecture-level analysis panel: the key decision makers (user and customer) expose their true preference spaces through MAUT interviews which, together with engineering judgment and verification, define the preference space; designers and analysts use concept generation and modeling to span the tradespace; simulation (e.g., X-TOS) maps designs into the solution space, from which the firm extracts a Pareto subset; validation and sensitivity analysis reduce it to a reduced solution space that supports the proposal and further discussions with the user and customer. Design-level analysis panel: an ICE session in which the MATE-CON chair, subsystem chairs, systems engineer, and analysts iterate on a baseline design with real-time utility tracking and fidelity feedback to the architecture-level analysis.]
“MATE” Overview
[Figure: MATE process flow, Image by MIT OpenCourseWare. For each stakeholder i: a System Preference Interview defines the system attributes for that stakeholder; a Single Attribute Utility Interview feeds the single attribute utility formulation; a Corner Point Interview feeds the multi-attribute utility formulation, generating a multi-attribute utility function per stakeholder. One set of system attributes is then defined, design vectors are created, and a physics-based system modeling tool together with the MAUF evaluates each design, populating a tradespace plot of utility versus metric i (e.g., cost).]
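A minimal sketch of the enumeration and modeling step at the bottom of this flow is below. The design variables, levels, and attribute/cost models are hypothetical stand-ins, not values from the slides:

```python
# Minimal sketch (hypothetical design variables and stand-in models): fully enumerate
# a tradespace as the cross product of design-variable levels, then score every
# design vector with attribute and cost models.
from itertools import product

design_variables = {
    "altitude_km": [400, 600, 800],        # designer-controlled levels (illustrative)
    "num_satellites": [1, 2, 4],
    "antenna_gain_db": [20, 30],
}

def attributes_model(design):
    """Placeholder physics model mapping a design vector to stakeholder attributes."""
    revisit_hr = 24.0 / (design["num_satellites"] * 800.0 / design["altitude_km"])
    availability_pct = min(100.0, 60.0 + 5.0 * design["num_satellites"])
    return {"revisit_hr": revisit_hr, "availability_pct": availability_pct}

def cost_model(design):
    """Placeholder cost metric (arbitrary units)."""
    return 50.0 * design["num_satellites"] + 0.05 * design["altitude_km"] + design["antenna_gain_db"]

names, levels = zip(*design_variables.items())
tradespace = [dict(zip(names, combo)) for combo in product(*levels)]   # complete enumeration
scored = [{"design": d, "attributes": attributes_model(d), "cost": cost_model(d)} for d in tradespace]

print(len(scored), "designs enumerated")   # 3 x 3 x 2 = 18
```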
System Preference Interview & Single Attribute Utility
1. Outcome of System Preference Interview

Remote Sensing Mission Attributes:

Attribute                               Units    Range        Utility Form
Revisit Rate                            hour     [24, 1.5]    decreasing
Mission Duration                        year     [5, 15]      increasing
Data Continuity (System Availability)   %        [30, 100]    increasing
LTAN Timing                             minute   [240, 5]     decreasing

Range: [least acceptable value : most realistic, desirable value]
Rank: each attribute is also ranked, with 1 = most important attribute and 4 = least important attribute

2. Outcome of Single Attribute Utility Interview

[Figure: single-attribute utility functions, e.g., Revisit Rate (hours) with utility decreasing across the attribute range and System Availability (%) with utility increasing from 0.0 at 30% to 1.0 at 100%, each plotted on a 0.0 to 1.0 utility axis.]
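A single attribute utility curve like the ones in the figure can be represented as a piecewise-linear interpolation over the interview-derived points. A minimal sketch, with made-up revisit-rate points that only follow the decreasing shape implied by the table above:

```python
# Minimal sketch: represent a single attribute utility function (SAUF) as a
# piecewise-linear interpolation over interview-derived (attribute value, utility)
# points. The revisit-rate points below are illustrative, not interview data.
def make_sauf(points):
    """Build a piecewise-linear SAUF from (attribute value, utility) points."""
    pts = sorted(points)
    def sauf(x):
        if x <= pts[0][0]:
            return pts[0][1]
        if x >= pts[-1][0]:
            return pts[-1][1]
        for (x0, u0), (x1, u1) in zip(pts[:-1], pts[1:]):
            if x0 <= x <= x1:
                return u0 + (u1 - u0) * (x - x0) / (x1 - x0)
    return sauf

# Illustrative interview points: revisit rate in hours -> utility (decreasing).
revisit_rate_utility = make_sauf(
    [(1.5, 1.0), (3, 0.9), (6, 0.7), (9, 0.55), (12, 0.4), (18, 0.2), (24, 0.0)]
)
print(round(revisit_rate_utility(4.5), 2))   # 0.8, interpolated between the 3 h and 6 h points
```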
Generating Single Attribute Utility Curves: The Lottery Equivalent Probability Method (LEP)
• LEP Process (for one specific attribute value)
1. Present the interviewee with the attribute utility interview scenario for Attribute i at the value in question.
2. Bracket the indifference point: select a probability P* for the scenario, set up the bracket, and have the interviewee select the preferred situation; then select another probability value and repeat until the indifference point is reached.
3. Calculate the utility point.
4. Select another attribute value and repeat (for at least 7 attribute values).

• LEP Situation Setup (0 ≤ P* ≤ 0.5)
Situation A: Attribute i at the value in question with probability 0.5, OR Attribute i at its worst value with probability 0.5.
Situation B: Attribute i at its best value with probability P*, OR Attribute i at its worst value with probability (1 - P*).
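The bracketing loop above behaves like a bisection search on P*. The sketch below is illustrative only: it stands in a known "true" utility for the stakeholder so the bracket can be shown converging; in a real interview every preference answer comes from the decision maker, not from a formula.

```python
# Minimal sketch: simulate the LEP bracketing of the indifference probability P*
# for one attribute value, using a stand-in "true" utility in place of a live
# stakeholder answer.
def prefers_situation_a(p_star, true_utility):
    # Expected utilities with U(best) = 1 and U(worst) = 0:
    #   Situation A: 0.5 * U(x) + 0.5 * 0     Situation B: p_star * 1 + (1 - p_star) * 0
    return 0.5 * true_utility > p_star

def bracket_indifference(true_utility, tol=1e-3):
    lo, hi = 0.0, 0.5                    # P* is restricted to [0, 0.5]
    while hi - lo > tol:
        p_star = 0.5 * (lo + hi)
        if prefers_situation_a(p_star, true_utility):
            lo = p_star                  # A still preferred: raise B's chance of the best value
        else:
            hi = p_star
    return 0.5 * (lo + hi)

print(round(bracket_indifference(true_utility=0.7), 3))   # ~0.35, the indifference point P'
```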
A Lottery Equivalent Probability Method (LEP) Scenario
• Purpose: to provide context for the interviewee when choosing between the outcomes of Situation A and Situation B in the LEP Situation Setup.
• Example Interview Scenario
• Attribute: resolution; Attribute Value: 4 Megapixels; Attribute Range: 1-7 Megapixels
“A new optical system has been developed for a satellite that provides higher image resolution. If this optical system is used, there is a chance that it could provide 7 Megapixel images versus only 4 Megapixel images when using a traditional optical system. However, the new optical system employs state-of-the-art glass manufacturing, so there is a chance that it could lead to reduced image resolution compared to a traditional optical system. A team of engineers has studied the issue and determined that the new optical system has a P* chance of providing images with a 7 Megapixel resolution and a (1 - P*) chance of providing images with a 1 Megapixel resolution, while the traditional optical system will provide images with a 1 Megapixel resolution with a probability of 50% and images with a 4 Megapixel resolution with a probability of 50%. Which optical system would you prefer to use?”

Situation A (Traditional Optical System): 4 Megapixels with probability 0.5, OR 1 Megapixel with probability 0.5.
Situation B (New Optical System): 7 Megapixels with probability P*, OR 1 Megapixel with probability (1 - P*).
Utility Point Calculation (from LEP Method Results)
• Process (for one specific attribute value)
• Known: the indifference point for the attribute value (i.e., the probability P' that renders Situations A and B equally desirable to the stakeholder).
• Calculating the utility point for the specific attribute value is then done using Eqn. 1:

0.5 ⋅ U(Xi) + 0.5 ⋅ U(Xmin) = P' ⋅ U(Xmax) + (1 - P') ⋅ U(Xmin)     (Eqn. 1)

The utility is calculated on an ordinal scale, where the maximum and minimum utility equal 1.0 and 0.0, respectively. Hence, with U(Xmax) = 1 and U(Xmin) = 0, Eqn. 1 becomes:

0.5 ⋅ U(Xi) = P'
U(Xi) = 2 ⋅ P'
Image by MIT OpenCourseWare.
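With the end points anchored at U(Xmax) = 1 and U(Xmin) = 0, Eqn. 1 collapses to U(Xi) = 2 ⋅ P'. A minimal sketch applying it to hypothetical indifference probabilities (these pairs are illustrative, not interview results):

```python
# Minimal sketch: turn LEP indifference probabilities P' into utility points using
# Eqn. 1 with U(Xmax) = 1 and U(Xmin) = 0, i.e. U(Xi) = 2 * P'.
def utility_from_indifference(p_prime):
    if not 0.0 <= p_prime <= 0.5:
        raise ValueError("indifference probability must lie in [0, 0.5]")
    return 2.0 * p_prime

interview_results = [(1.5, 0.50), (6, 0.35), (12, 0.20), (24, 0.00)]  # (revisit rate in h, P')
utility_points = [(x, utility_from_indifference(p)) for x, p in interview_results]
print(utility_points)   # [(1.5, 1.0), (6, 0.7), (12, 0.4), (24, 0.0)]
```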
Generating the Multi-Attribute Utility Function
• Process (for one stakeholder)
• Known: the SAUF's for the stakeholder.
• Terms:
U(X)i ≡ the i-th SAUF
ki ≡ the i-th Corner Point (SAUF weighting factor)
K ≡ the MAUF normalization coefficient
U(X) ≡ the MAUF
• Constructing the MAUF:

U(X) = (1/K) ⋅ [ ∏ i=1..n (K ⋅ ki ⋅ U(X)i + 1) - 1 ]

• Capabilities of the MAUF
• Determine the stakeholder aggregate utility value for a given set of single attribute utility values.
• Implications
• Must have the MAUF in an explicit function form.
• Assumptions (in addition to the 4 single attribute utility theory assumptions)
• Preferential Independence: the ranking of preferences over any pair of attributes is independent of all the other attributes.
• Utility Independence: the utility curve for one attribute is unique, and independent of all the other attribute utility functions.
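A minimal sketch of the multiplicative MAUF above, assuming the corner-point weights ki and the normalization constant K are already known; the weights, K, and single attribute utilities below are illustrative only, not interview results:

```python
# Minimal sketch: multiplicative multi-attribute utility aggregation,
#   U(X) = (1/K) * ( prod_i (K * ki * U(X)i + 1) - 1 ),
# assuming the corner-point weights ki and the normalization constant K are known.
from math import prod

def multi_attribute_utility(single_utilities, weights, K):
    if abs(K) < 1e-9:                    # K -> 0 limit (weights sum to 1): additive form
        return sum(k * u for k, u in zip(weights, single_utilities))
    return (prod(K * k * u + 1.0 for k, u in zip(weights, single_utilities)) - 1.0) / K

weights = [0.3, 0.25, 0.2, 0.1]          # illustrative corner-point values ki (sum < 1 here)
K = 0.5386                               # approximately solves the normalization equation (Eqn. 5, next slide)
print(round(multi_attribute_utility([1.0, 1.0, 1.0, 1.0], weights, K), 3))  # ~1.0 at all-best
print(round(multi_attribute_utility([0.0, 0.0, 0.0, 0.0], weights, K), 3))  # 0.0 at all-worst
```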
Multi-Attribute Utility Function Normalization Constant
• Purpose: to ensure consistency between the MAUF and the SAUF's, i.e., that the MAUF is defined over the same range as the SAUF's ([0, 1]).
• Process for Determining the MAUF Normalization Constant
• Known: all the SAUF weighting factors (ki), i.e., the corner point values.
• Solve Eqn. 5 for K (can be done via an iterative procedure):

K = -1 + ∏ i=1..n (K ⋅ ki + 1)     (Eqn. 5)

• Normalization Constant Ranges
if ∑ i=1..n ki < 1.0, then K > 0
if ∑ i=1..n ki > 1.0, then -1 < K < 0
if ∑ i=1..n ki = 1.0, then K = 0
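A minimal sketch of the iterative solve for K, here done by bisection on the equivalent fixed-point form 1 + K = ∏(1 + K ⋅ ki); the weights are the same illustrative values used above, not interview data:

```python
# Minimal sketch: solve the normalization equation, written as the fixed point
#   1 + K = prod_i (1 + K * ki),
# for K by bisection. For sum(ki) < 1 the root lies in K > 0; for sum(ki) > 1 it
# lies in -1 < K < 0; for sum(ki) = 1, K = 0.
from math import prod

def solve_normalization_constant(weights, tol=1e-10):
    s = sum(weights)
    if abs(s - 1.0) < 1e-12:
        return 0.0

    def residual(K):
        return prod(1.0 + K * k for k in weights) - (1.0 + K)

    lo, hi = (1e-9, 100.0) if s < 1.0 else (-1.0 + 1e-9, -1e-9)
    for _ in range(200):                 # bisection keeps the sign change inside [lo, hi]
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

weights = [0.3, 0.25, 0.2, 0.1]          # illustrative corner-point values, sum = 0.85 < 1
print(round(solve_normalization_constant(weights), 3))   # ~0.539, so K > 0 as expected
```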
(Integrated) Concurrent Engineering
Objective: to enable engineering design, trade studies, and subsequent decisions to occur in real time, with all design team members and critical stakeholders co-located and an emphasis placed on stakeholder feedback.

[Figure: layout of the Mission Design Laboratory (MDL) at NASA Goddard Space Flight Center, showing co-located discipline stations (e.g., avionics, propulsion, comm, thermal, reliability, mechanical, power, attitude control, flight dynamics, flight ops, LVs and software, radiation, orbital debris, cost), the team lead and systems engineering seats, a stakeholder area, conference room, and A/V and administrative support. Courtesy of Integrated Design Center, NASA Goddard Space Flight Center. Used with permission. Courtesy of Mark S. Avnet. Used with permission.]

Concurrent engineering session example:
• System: satellite
• Stakeholder: external program manager
• Model Fidelity: conceptual (Phase A)
• Session Length: 1 week, 5 days
• Daily Schedule: design time 8 AM-noon and 1-5:30 PM; lunch noon-1 PM
Alternatives to MATE for Tradespace Exploration (TE)
Tradespace Exploration Intent: enumerate candidate design concepts and ultimately select a small number of designs (called point designs), on the basis of stakeholder-influenced criteria, to be assessed at a higher level of fidelity.

Benefit-Centric TE - Multiple Attribute Tradespace Exploration (MATE):
1. MATE-CON
2. Dynamic MATE
3. System of Systems (SoS) Tradespace Exploration
4. MATE for Survivability
5. Responsive Systems Comparison (RSC)

Value-Centric TE - Value Quantification:
1. Value function
2. Multi-attribute value function theory (in progress)
Also: Technique for Order Preference by Similarity to Ideal Solution (TOPSIS)

“Traditional” TE - Quantification of “traditional” Figures of Merit (FoM)

[Figure: tradespace plot of utility versus metric, Image by MIT OpenCourseWare.]
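For contrast with the utility-based approach, a minimal TOPSIS sketch is shown below. The decision matrix, weights, and criteria are illustrative placeholders; the method ranks alternatives by relative closeness to an ideal solution rather than by a multi-attribute utility:

```python
# Minimal TOPSIS sketch (illustrative data): rank design alternatives by their
# relative closeness to an ideal solution across several figures of merit.
from math import sqrt

def topsis(matrix, weights, benefit):
    """matrix[i][j]: score of alternative i on criterion j; benefit[j]: True if larger is better."""
    n_alt, n_crit = len(matrix), len(matrix[0])
    # 1. Vector-normalize each criterion column, then apply the weights.
    norms = [sqrt(sum(matrix[i][j] ** 2 for i in range(n_alt))) for j in range(n_crit)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)] for i in range(n_alt)]
    # 2. Ideal and anti-ideal values per criterion.
    cols = list(zip(*v))
    ideal = [max(c) if benefit[j] else min(c) for j, c in enumerate(cols)]
    worst = [min(c) if benefit[j] else max(c) for j, c in enumerate(cols)]
    # 3. Closeness coefficient: distance to the anti-ideal over total distance.
    scores = []
    for row in v:
        d_best = sqrt(sum((x - b) ** 2 for x, b in zip(row, ideal)))
        d_worst = sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Columns: revisit rate (h, smaller better), availability (%, larger better), cost (smaller better).
alternatives = [[6, 90, 150], [12, 95, 120], [3, 70, 200]]
print(topsis(alternatives, weights=[0.4, 0.3, 0.3], benefit=[False, True, False]))
```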
Benefits
• Changes forced on design decisions during the design phase can be guided by knowledge of the larger tradespace, reducing their impact.
• Calculating utility gradients reveals counterintuitive design decisions (see the sketch below).
• Near-full automation reduces the impact of changing stakeholder expectations.
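A minimal sketch of the utility-gradient idea: estimate how aggregate utility changes when one design variable is perturbed while the others are held fixed. The utility model and design vector here are arbitrary stand-ins, not outputs of a MATE study:

```python
# Minimal sketch: finite-difference "utility gradient" with respect to one design
# variable, holding the others fixed.
def utility_model(design):
    """Placeholder aggregate utility as a function of the design vector."""
    return min(1.0, 0.001 * design["altitude_km"] + 0.1 * design["num_satellites"])

def utility_gradient(design, variable, step):
    """Approximate d(utility)/d(variable) by a forward finite difference."""
    perturbed = dict(design, **{variable: design[variable] + step})
    return (utility_model(perturbed) - utility_model(design)) / step

baseline = {"altitude_km": 600, "num_satellites": 2}
print(utility_gradient(baseline, "altitude_km", step=50.0))   # utility change per km of altitude
```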
Benefits
• Propagating the utility metric down through the design levels prevents pursuing a detailed design without understanding its global effects.
• Shown to take less time and effort for a given project, among other benefits.
• But the supporting reference is a conference paper by the method's author.
Limitations
• Very different concepts are a challenge to model with a single design vector.
Computer generated images of space vehicles removed due to copyright restrictions.
Limitations
• What do you do if the tradespace is so big that you can generate only a very small fraction of it?
• No human factors (HF) concerns are explicitly addressed.

[Figure: the same process flow diagram shown earlier - Requirements Definition, System Architecture, Concept Generation, Tradespace Exploration, Concept Selection, Design Definition, and Multidisciplinary Optimization, with Human Factors alongside.]
Limitations
• Real-time design (CON) is hard to achieve for logistical and schedule reasons.
• The process “doesn't scale up”.
• Not used much anymore.
• Even though it targets early design, which needs to be done quickly, the class needed 12 sessions.
Limitations
• Doesn't consider any “-ility”.
• They all change from concept to concept, and even within each one.
• Their utility is usually better assessed by the engineers than by the stakeholders.
• Pushing toward the frontier normally increases design cost, which isn't considered (and can be significant compared to manufacturing and operations cost).
• Consider isoperformance frontiers.
Discussion Questions
• Considering the architecture­level analysis and
the design­level analysis that incorporate MAUT
and ICEMaker, at what point do you freeze the
design and move forward?
• For tradespace exploration, do you think
employing the metric of utility is a viable
alternative to “more traditional” metrics, given
the inherent advantages (e.g., aggregation of
benefit) and disadvantages (e.g., ordinal nature) of
utility?
Discussion Questions
• Stakeholder networks (utility flow) can be easily incorporated into the methodology.
• What is MIT’s Generalized Information Network
Analysis (GINA) method (that provided advances
in modeling tradespaces)?
• What is Quality Function Deployment (QFD),
which is used to organize and prioritize suggested
variables?
• What is SMAD’s Small Satellite Cost Model?
MIT OpenCourseWare
http://ocw.mit.edu
16.842 Fundamentals of Systems Engineering
Fall 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.