
Computer-Aided Design 45 (2013) 4–25
Key computational modeling issues in Integrated Computational
Materials Engineering
Jitesh H. Panchal a , Surya R. Kalidindi b , David L. McDowell c,∗
a School of Mechanical and Materials Engineering, Washington State University, Pullman, WA 99164-2920, USA
b Department of Materials Science and Engineering, Department of Mechanical Engineering and Mechanics, Drexel University, Philadelphia, PA 19104, USA
c School of Materials Science and Engineering, Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0405, USA
Keywords: Materials design; Multiscale modeling; ICME; Databases; Uncertainty

Abstract
Designing materials for targeted performance requirements as required in Integrated Computational
Materials Engineering (ICME) demands a combined strategy of bottom–up and top–down modeling and
simulation which treats various levels of hierarchical material structure as a mathematical representation,
with infusion of systems engineering and informatics to deal with differing model degrees of freedom
and uncertainty. Moreover, with time, the classical materials selection approach is becoming generalized
to address concurrent design of microstructure or mesostructure to satisfy product-level performance
requirements. Computational materials science and multiscale mechanics models play key roles in
evaluating performance metrics necessary to support materials design. The interplay of systems-based
design of materials with multiscale modeling methodologies is at the core of materials design. In high
performance alloys and composite materials, maximum performance is often achieved within a relatively
narrow window of process path and resulting microstructures.
Much of the attention to ICME in the materials community has focused on the role of generating and
representing data, including methods for characterization and digital representation of microstructure, as
well as databases and model integration. On the other hand, the computational mechanics of materials and
multidisciplinary design optimization communities are grappling with many fundamental issues related
to stochasticity of processes and uncertainty of data, models, and multiscale modeling chains in decision-based design. This paper explores computational and information aspects of design of materials with
hierarchical microstructures and identifies key underdeveloped elements essential to supporting ICME.
One of the messages of this overview paper is that ICME is not simply an assemblage of existing tools, for
such tools do not have natural interfaces to material structure nor are they framed in a way that quantifies
sources of uncertainty and manages uncertainty in representing physical phenomena to support decision-based design.
© 2012 Elsevier Ltd. All rights reserved.
1. The path to Integrated Computational Materials Engineering
(ICME)
Within the past decade, several prominent streams of research
and development have emerged regarding integrated design of
materials and products. One involves selection of materials, emphasizing population of databases and efficient search algorithms
for properties or responses that best suit a set of specified performance indices [1], often using combinatorial search methods [2],
pattern recognition, and so on. In such materials informatics approaches, attention is focused on data mining, visualization, and
providing convenient and powerful interfaces for the designer to
support materials selection. Another class of approaches advocates
simulation-based design to exploit computational materials science and physics in accelerating the discovery of new materials,
computing structure and properties using a bottom–up approach.
This can be combined with materials informatics. For example,
the vision for Materials Cyber-Models for Engineering is a computational materials physics and chemistry perspective [3] on using
quantum and molecular modeling tools to explore for new materials and compounds, making the link to properties. Examples
include the computational estimate of stable structure and properties of multicomponent phases using first principles approaches
(e.g., Refs. [4–6]). These approaches have been developed and popularized largely within the context of the materials chemistry,
physics and science communities.
A third approach, involving more the integration of databases
with tools for modeling and simulation, as well as design
Fig. 1. Hierarchy of levels from atomic scale to system level in concurrent materials and product design (multi-level design). Existing systems design methods focus on
levels to the right of the vertical bar, treating ‘materials design’ as a materials selection problem. Levels to the left require computational materials science and mechanics
models of microstructure–property relations. Olson’s linear mapping of process–structure–property–performance relations in Materials by Design™ is shown on the upper
left [10].
optimization and concurrent design for manufacture, is that
of Integrated Computational Materials Engineering (ICME). It is
concerned with multiple levels of structure hierarchy typical of
materials with microstructure. ICME aims to reduce the time to
market of innovative products by exploiting concurrent design
and development of material, product, and process/manufacturing
path, as described in the National Research Council report by a
National Materials Advisory Board (NMAB) committee [7]. It is ‘‘an
approach to design products, the materials that comprise them,
and their associated materials processing methods by linking
materials models at multiple length scales’’.
ICME was preceded by another major Defense Advanced
Research Projects Agency (DARPA)-funded initiative, Accelerated
Insertion of Materials (AIM), from 2001–2003, which sought to
build systems approaches to accelerate the insertion of new and/or
improved materials into products. An NMAB study [8] discussed
the DARPA-AIM program, highlighting the role of small technology
startup companies in enabling this technology, and summarizing
the currently available commercial computational tools and supporting databases. The impact of AIM on framing of ICME is
discussed further in Ref. [9].
The significant distinction of ICME from the first two approaches (materials informatics and combinatorial searches for
atomic/molecular structures) is its embrace of the engineering perspective of a top–down, goal–means strategy elucidated by Olson
[10]. This perspective was embraced and refined for the academic
and industry/government research communities at a 1997 National Science Foundation (NSF)-sponsored workshop hosted by
the Georgia Institute of Technology and Morehouse College [11]
on Materials Design Science and Engineering (MDS&E). Effectively,
ICME incorporates informatics and combinatorics, as well as multiscale modeling and experiments. As mentioned, ICME is fully
cognizant of the important role that microstructure plays in tailoring material properties/responses in most engineering applications,
well beyond the atomic or molecular scale. This is a key concept
in materials science, as typically distinct from materials physics
or chemistry. For example, mechanical properties depend not
only on atomic bonding and atomic/molecular structure, but are
strongly influenced by morphology of multiple phases, phase and
interface strengthening, etc. Fig. 1 shows a hierarchy of material
length scales of atomic structure and microstructure that can be
considered in design of advanced metallic alloys for aircraft gas
turbine engine components in commercial aircraft. The diagram inset in the upper-left in Fig. 1 shows the Venn diagram from Olson
[10] of overlapping process–structure–property–performance relations that are foundational to design of materials to the left of the
vertical bar; conventional engineering design methods and multidisciplinary design optimization (MDO) are rather well established. Extending systems design methods down to address the
concurrent design of material structure at various levels of hierarchy is a substantial challenge [12–15]. The process of tailoring the
material to deliver application-dependent performance requirements is quite distinct from that of traditional materials selection
(relating properties to performance requirements) or from combinatorial searching for atomic/molecular structure–property relations at the level of individual phases; such combinatorial searches
typically do not consider the role of microstructure, which is typically central to meeting multifunctional performance requirements. The conventional approach in science-based modeling of
hierarchical systems is a bottom–up, deductive methodology of
modeling the material’s process path, microstructure, and resulting properties, with properties then related to performance requirements, as shown in Fig. 1. Design is a top–down process in
which systems level requirements drive properties, structure and
process routes.
The pervasive difficulty in inverting process–structure and
structure–property relations in modeling and simulation greatly
complicates this top–down enterprise, and owes to certain unavoidable realities of material systems:
• Nonlinear, nonequilibrium path dependent behavior that requires intensive simulation effort, limiting parametric study
and imparting dependence upon initial conditions.
• Transitions from dynamic (lower length and time scales) to
thermodynamic (higher length and time scales) models for
cooperative phenomena in multiscale modeling, with nonuniqueness due to reduction of model degrees of freedom
during scale-up.
• Potential for a wide range of suboptimal solutions that complicate design search and optimization.
Fig. 2. Typical separation of materials development and materials selection activities in the process of integrated design of materials and products.
• Approximations made in digital microstructure representation of material structure.
• Dependence of certain properties such as true fracture ductility, fracture toughness, fatigue strength, etc., on extreme value distributions of microstructure attributes that affect damage/degradation.
• Microstructure metastability and long term evolution in service.
• Lack and variability of experimental data for validation purposes or to support methods based on pattern recognition.
• Uncertainty of microstructure, models, and model parameters.
Since uncertainty is endemic in most facets of ICME, managing
uncertainty and its propagation through model/experiment chains
is of paramount importance. Materials design is complicated by the
complexity of nonlinear phenomena and pervasive uncertainty in
materials processing, related models and model parameters, for
example. This motivates the notion of robust design and other
means to address the role of uncertainty. The materials design
challenge is to develop methods that employ bottom–up modeling
and simulation, calibrated and validated by characterization and
measurement to the extent possible, combined with top–down,
requirements-driven exploration of the hierarchy of material
length scales shown in Fig. 1. It is a multi-level engineering systems
design problem, characterized by top–down design of systems
with successive embedded sub-systems, such as the aircraft shown
in Fig. 1. Multiscale modeling pertains more to the bottom–up
consideration of scales of materials hierarchy.
Another critical aspect of materials design identified in the
NMAB ICME report [7] is the key role played by quantitative representation of microstructure. The notion of microstructure-mediated
design is predicated on the use of mathematical/digital representation of microstructure, rather than properties, as the taxonomic
basis for describing material structure hierarchy in materials design. This is not merely semantics, but has quite practical implications in terms of actual pathways of material and product
development. Specifically, process–structure relations tend to be
quite dissociated from structure–property relations, with different
model developers, organizations involved, degree of empiricism,
etc. The timeframes for process–structure relations in materials
development often do not match those of structure–property
relations. Process–structure relations tend to be more highly
empirical, complex, and less developed than structure–property
relations. The light dashed line on the left in Fig. 2 effectively
represents a ‘‘fence’’ between organizations/groups pursuing process–structure and structure–property relations, reflecting the
common decomposition of activities between material suppliers
and OEMs. One goal of an integrated systems strategy for materials design and development is to more closely couple these activities. Clearly, microstructure is solidly at this interface, the result
of process–structure relations. Hence, framing ICME in terms of
microstructure-mediated design facilitates enhanced concurrency
of material suppliers and OEM activities. Another important goal
of ICME should be to effectively remove the bold dashed line to
the right in Fig. 2 that serves as a ‘‘fence’’ between materials development and materials selection. Conventionally, design engineers
have worked with databases of existing materials to select, rather
than design, materials in the process of designing components
(systems, sub-systems). Tools for materials selection [1] are reasonably well-developed, supporting the materials selection problem. However, as discussed by McDowell et al. [12–15], materials
selection supports the property-performance linkage to the right in
Fig. 2 and is just one component of the overall integration of materials and product design. Integration of design engineering into
the overall process requires a coupling back to material suppliers,
for example. This is one of the great challenges to realizing ICME in
terms of concurrent computational materials design and manufacturing. As stated in the report of the 1998 MDS&E workshop [11],
‘‘The field of materials design is entrepreneurial in nature, similar
to such areas as microelectronic devices or software. MDS&E may
very well spawn a ‘‘cottage industry’’ specializing in tailoring materials for function, depending on how responsive large supplier
industries can be to this demand. In fact, this is already underway’’.
The ICME report [7] focused mainly on advances in modeling
and simulation, the role of experiments in validation, databases
and microstructure representation, and design integration tools.
In addition, cultural barriers to ICME in industry, government and
academia were discussed. Elements of ICME broadly encompass:
• Modeling and simulation across multiple levels of hierarchy of structure, including both process–structure and structure–property relations
• Experiments and prototyping
• Distributed collaborative networks
• Materials databases
• Microstructure representation as the focus of the materials design problem
• Knowledge framework that integrates databases with experiments and modeling and simulation to pursue inverse design
• Uncertainty quantification and management
• Integration with OEMs and materials suppliers.
The Materials Genome Initiative aspires to build on the preceding AIM and ICME initiatives to develop an infrastructure to accelerate advanced materials discovery and deployment, including all
phases of development, property optimization, systems design and
integration, certification and manufacturing [16]. As outlined in
Refs. [7,15], this goal and the premise of ICME are more comprehensive in addressing the connection with product design and manufacturing than combinatorial search algorithms applied using
first principles calculations to discover new materials or multicomponent phases, for example. The notion of genomics as predictive
exploration of crystal structures, single phases, or molecules for
desirable phase properties is a valuable part of the overall concept of ICME to more broadly scope during design exploration,
but the ‘‘connective tissue’’ of multiscale modeling and multi-level
decision-based design in ICME are essential aspects of its integrative character that links microstructure to performance and considers integrated systems design, certification, manufacturing, and
product scale-up. The analogy of ICME to bioinformatics, DNA sequencing, and so forth is therefore limited and not fully
descriptive of scope and challenges of ICME. In our view, the notion of integrating manufacturing processes with concurrent design of materials and products is at the core of ICME, considering
the inescapable role of the hierarchy of material structure (i.e., microstructure) on properties and performance. Of course, one can
conceive of important application domains in which discovery of
new material phases perhaps addresses the majority of the accelerated deployment problem; it may be entirely sufficient for purposes of searching for specific molecular or atomic structures for
nanoscale application such as quantum dots, semiconductors, etc.
But even in many applications where certain key performance indicators are dictated by atomic structure (e.g., piezoelectrics, battery
and fuel cell electrodes, etc.), material durability and utility in service are often dictated by mesoscopic heterogeneity of structure and
its lack of reversibility under cycling or in extreme environments.
Therefore, materials informatics is a key technology for nanoscale
design problems but only one toolset in the solution strategy for
multi-level materials design that must consider scale-up and long
term product performance.
This paper will focus on several critical path issues in realizing
ICME, namely materials databases, the role of multiscale modeling,
materials knowledge systems and strategies that support inverse
design (top–down in Fig. 1), and uncertainty quantification and
management considerations that undergird plausible approaches.
2. Materials databases and the need for a structure-based
taxonomy
The NMAB report [7] has emphasized the key role of materials
databases in capturing, curating, and archiving the critical
information needed to facilitate ICME. While the report does
not elaborate on how these databases should be structured or
what specifically they should include, it discusses how these
new databases will produce substantial savings by accelerating
the development of advanced high performance materials, while
eliminating the unintended repetition of effort from multiple
groups of researchers possibly over multiple generations.
Before we contemplate the ideal structure and content of such novel databases, it is beneficial to take a historical perspective on the successes and failures of prior efforts to build materials databases. In a recent summary, Freiman et al. [17]
discuss the lessons learned from building and maintaining the
National Materials Property Data Network that was operated commercially until 1995. Although this network contained numerous databases on many different materials, it proved too costly
to maintain because of (i) a lack of a single common standard for
information on the very broad range of materials included in the
database, (ii) enhancements/modifications to testing procedures
used for evaluating properties of interest that sometimes invalidated previous measurements, and (iii) the placement of onus on a
single entity as opposed to being jointly developed by multiple entities that possess the broad range of requisite sophisticated skill
sets. Notwithstanding these shortcomings, a major deficiency in
these, as well as other similar, databases is that they did not aim
to capture the details of the microstructure. It is well-known in the
materials science and engineering discipline that the microstructure–property connections are not unique, and that composition
alone is inadequate to correlate to the many physical properties of
the material. Indeed, the spatial arrangement of local features or
‘‘states’’ in the internal structure (e.g., morphological attributes) at
various constituent length scales plays an influential role in determining the overall properties of the material [18–20].
The key materials databases needed to facilitate ICME have to be
built around microstructure information. There is therefore a critical need to establish a comprehensive framework for a rigorous
quantification of microstructure in a large number of disparate material systems. Furthermore, the framework has to be sufficiently
versatile to allow compact representation and visualization of the
salient microstructure–property–processing correlations, which
populate the core knowledge bases used by materials designers. It
is important to recognize here that only the microstructure space
can serve as the common higher dimensional space in which the
much needed knowledge of the process–microstructure–property
correlations can be stored and communicated reliably without
appreciable loss of information. Any attempt to represent this
core knowledge by circumventing microstructure using lower dimensional spaces (e.g., processing–property correlations or composition–property correlations) should be expected to result in
substantial loss of fidelity and utility in the generated knowledge
bases.
In arriving at a rigorous framework for microstructure quantification, it is extremely important to recognize and understand
the difference between a micrograph (or a microstructure dataset)
and the microstructure function [21]. Microstructure function is
best interpreted as a stochastic process, in essence a response to
applied stimuli, whose experimental outcome is a micrograph (a
two or three dimensional microstructure dataset). As such, the microstructure function cannot be observed directly and is expected
to capture the ensemble-averaged structure information from a
multitude of micrographs extracted from multiple samples processed by nominally the same manufacturing history. Since individual micrographs have no natural origin, the only meaningful
information one can extract from each micrograph has to be concerning the relative spatial arrangement or spatial correlations of
local material states (e.g., phases, orientations) in the structure. A
number of different measures of the spatial correlations in the microstructure have been proposed and utilized in the literature [19,20]. Of these, the n-point spatial correlations (or n-point statistics) provide the most complete set of measures that are naturally
organized according to increasing degree of microstructure information. For example, the most basic of the n-point statistics are
the 1-point statistics, and they reflect the probability density associated with finding a specific local state of interest at any randomly selected single point (or voxel) in the microstructure. In
other words, they essentially capture the information on volume
fractions of the various distinct local states present in the material system. The next higher level of microstructure information is contained in the 2-point statistics, denoted f_r^{hh'}, which capture the
probability density associated with finding local states h and h′ at
the tail and head, respectively, of a prescribed vector, r, randomly
placed into the microstructure. There is a tremendous increase in
the amount of microstructure information contained in the 2-point
statistics compared to the 1-point statistics. Higher-order correlations (3-point and higher) are defined in a completely analogous
manner. Another benefit of treating the microstructure function as
a stochastic process is the associated rigorous quantification of the
variance associated with the microstructure [21]. The variance in
structure can then be combined appropriately with the other uncertainties in the process (for example, those associated with the
measurements and those associated with the models used to predict overall properties or performance characteristics of interest in
an engineering application) to arrive at the overall variance in the
performance characteristics or responses of interest.
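To make these definitions concrete, the following minimal Python sketch computes the 1-point statistics (volume fractions) and the 2-point autocorrelation of one local state for a synthetic two-phase voxel dataset using FFTs. The array size, volume fraction, and random test structure are illustrative assumptions, not data for any real material.

```python
# Minimal sketch: 1-point and 2-point spatial statistics for a synthetic
# two-phase (H = 2) voxelized microstructure. The random test structure and
# array size are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
micro = (rng.random((64, 64, 64)) < 0.35).astype(float)  # 1 = local state h, 0 = local state h'

# 1-point statistics: probability of finding each local state at a randomly
# selected voxel, i.e., the phase volume fractions.
f1_phase1 = micro.mean()
f1_phase0 = 1.0 - f1_phase1

# 2-point statistics f_r^{hh}: probability that both the tail and the head of a
# vector r (applied with periodic boundaries) land in phase 1, computed for all
# vectors at once as a periodic autocorrelation via FFT.
M = np.fft.fftn(micro)
f2_11 = np.fft.ifftn(M * np.conj(M)).real / micro.size

print("volume fractions:", f1_phase0, f1_phase1)
print("f2 at r = 0 (equals the phase-1 volume fraction):", f2_11[0, 0, 0])
print("f2 at large |r| (tends toward the volume fraction squared):", f2_11[32, 0, 0])
```

For an ensemble of micrographs drawn from nominally the same processing history, the same calculation would be repeated per dataset and ensemble-averaged, which also yields the variance of the statistics discussed above.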
Materials Informatics has recently emerged as a new field of
study that focuses on the development of new toolkits for efficient
synthesis of the core knowledge in large materials datasets (generated by both experiments and models). The NMAB report [7] has
specifically highlighted the important role to be played by Materials Informatics in the realization of the potential of ICME. The
standardization and adoption of a rigorous framework, such as the
n-point statistics, in enabling the quantification of the microstructure is a critical and essential step in the emergence of Materials
Informatics. In recent work, it was demonstrated that the use of
concepts borrowed from informatics can produce a new toolkit
[22] with the following unprecedented capabilities: (i) archival,
real-time searches, and direct quantitative comparisons of different microstructure datasets within a large microstructure
database, (ii) automatic identification and extraction of microstructure features or other metrics of interest from very large
datasets, (iii) establishment of reliable microstructure–property
correlations using objective measures of the microstructure,
and (iv) reliable extraction of precise quantitative insights
on how the local neighborhood influences the localization of
macroscale loading and/or the local evolution of microstructure
leading to development of robust, scale-bridging, microstructure–property–processing linkages. The new field of Materials
Informatics is still very much in its embryonic stage of development. The customization and adoption of modern information
technology tools such as image-based search engines, data-mining,
machine learning and crowd-sourcing can potentially identify
completely new research directions for addressing successfully
some of the most difficult computational challenges in the implementation of the ICME paradigm.
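As a simple illustration of the search and comparison capability in item (i) above, and not a depiction of the toolkit of Ref. [22], the sketch below treats the 2-point statistics of a dataset as a fingerprint and measures distances between fingerprints; the synthetic microstructures and function names are assumptions introduced only for illustration.

```python
# Illustrative sketch: comparing microstructure datasets by the Euclidean
# distance between their 2-point statistics, used as a quantitative
# "fingerprint" for search and clustering.
import numpy as np

def two_point_stats(micro):
    """Periodic autocorrelation of the phase-1 indicator field."""
    M = np.fft.fftn(micro)
    return np.fft.ifftn(M * np.conj(M)).real / micro.size

def fingerprint_distance(micro_a, micro_b):
    """Distance between two microstructures in 2-point statistics space."""
    return np.linalg.norm(two_point_stats(micro_a) - two_point_stats(micro_b))

rng = np.random.default_rng(1)
base = (rng.random((32, 32, 32)) < 0.3).astype(float)
similar = np.roll(base, shift=5, axis=0)                  # same structure, translated
different = (rng.random((32, 32, 32)) < 0.6).astype(float)

print("distance to translated copy :", fingerprint_distance(base, similar))
print("distance to unrelated sample:", fingerprint_distance(base, different))
```

Because the 2-point statistics are translation invariant, a shifted copy of the same structure registers as essentially identical, whereas a structurally different sample does not.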
3. Multiscale modeling and materials design
Multiscale modeling involves seeking solutions to physical
problems that have important features at multiple spatial and/or
temporal scales. Multiscale models are necessary to address
the relative contributions of various levels of material structure
hierarchy shown in Fig. 1 to responses at higher scales [12–15,
23–25]. However, multiscale models are subservient to the overall
design integration strategy in materials design. In other words,
they offer utility in various specific ways to materials design,
including, but not limited to:
• relating polycrystalline and/or polyphase microstructures to properties involving cooperative behavior at higher scales (e.g., yield strength, ductility, ductile fracture, etc.),
• quantifying sensitivity of higher scale responses to different mechanisms and levels of structure/microstructure hierarchy,
• supporting microstructure-sensitive prediction of material failure or degradation,
• quantifying the influence of environment or complicating contributions of impurities, manufacturing induced variability or defects,
• considering competition of distinct mechanical, chemical and transport phenomena without relying too heavily on intuition to guide solutions in the case of highly nonlinear interactions, and
• bridging domains of quantum physics and chemistry with engineering for problems that span vast ranges of length and time scales.
Olson’s Venn diagram structure in Fig. 1 should not be confused
with multiscale modeling strategies; the former is much more
comprehensive in scope, embedding multiscale modeling as part
of the suite of modeling and simulation tools that provide decisionsupport in design. For example, structure–property relations
can involve the full gamut of length and time scales, as can
process–structure relations. In other words, the levels in Olson’s
linear design strategy of Fig. 1 do not map uniquely to levels
of material hierarchy. Even concurrent multiscale models which
attempt to simultaneously execute models at different levels of
resolution or fidelity do not serve the full purpose of top–down
materials design. Materials design and multiscale modeling are
not equivalent pursuits. This distinction is important because
it means that notions of cyberdiscovery of new or improved
materials must emphasize not only modeling and simulation tools
but also systems design strategies for using simulations to support
decision-based concurrent design of materials and products.
Materials with long range order, such as crystalline materials
or oriented fibrous composites or thermoplastic polymer chains,
offer a particular challenge and motivate sophisticated multiscale
modeling methods. Characteristic scale transitions achieved by
multiscale modeling do not follow a single mathematical or
algorithm structure, owing both to the consideration of long range
order and to the fact that as models are coarse grained, the models
shift from those concerning nonequilibrium, dynamic particle
systems to continuously distributed mass systems that often
invoke statistical equilibrium arguments and radically reduce
degrees of freedom. A distinction is made between so-called
hierarchical and concurrent multiscale modeling schemes. The
former apply scale-appropriate models to different levels of
hierarchy to generate responses or properties at that scale. In
contrast, the latter attempt to cast the problem over common
spatial and temporal domains by virtue of boundary conditions
and introduction of consistent variable order descriptions of
kinematics to achieve a variable resolution/fidelity description.
Clearly, concurrent schemes are more costly but are argued to
provide high value in cases in which localization of responses
within a microstructure cannot be located or anticipated a priori,
as well as the propagation of such localization events across a
broad range of scales shown in Fig. 1. Information is lost either
way — in the case of hierarchical multiscale models, information is
lost (i.e., increase of information entropy) by disposing of model
degrees of freedom, while in the case of concurrent multiscale
modeling information is lost by virtue of the idealization necessary
to frame the models with consistent structure to pass information
back and forth across length and time scales.
It is a daunting task to briefly summarize a range of multiscale modeling approaches for all relevant process–structure or
structure–property relations for all material classes, beyond the
scope of any single work. We accordingly limit discussion here to
multiscale dislocation plasticity of metals to illustrate the range
of considerations involved. Dislocation plasticity is a multiscale,
multi-mechanism process [24–27] that focuses on evolving irreversible microstructure rearrangement [28] associated with line
defects in crystals. Existing approaches address mechanisms over a
wide range of length and time scales, as summarized below.
(a) Discrete to continuous modeling approaches based on contiguous (domain decomposition) or overlapping (coarse graining)
domains:
• Domain decomposition, with an atomistic domain abutted to
an overlapping continuum domain to atomistically resolve
crack tips and interfaces [29–35] to model fracture and
defect interactions.
• Quasi-continuum approaches with the continuum elastic
energy function informed by nonlocal atomistic interatomic
potentials, suitable for long range fields without defects
[36–39], with adaptivity to selectively model regions with
full atomic resolution [40].
• Dynamically equivalent continuum or quasi-continuum,
consistent with fully dynamic atomistic simulations over
same overlapping domain [41–46].
• Coarse-grained molecular dynamics [47–51] for somewhat
longer ranged spatial interactions of microstructure and
defects such as dislocation pile-ups.
• Continuum phase field modeling (PFM) [52,53] to simulate microstructure morphology and evolution, including
microscopic PFM [54] informed by ab initio or atomistic
simulations in terms of energy functions, with kinetics
(mobilities) established by matching overlapping MD or
experiments [55].
Table 1
Examples of dominant material length scales (in each row) for design with regard to bulk mechanical properties of metallic systems. The properties considered are strength; inelastic flow and hardening, uniform ductility (bulk polycrystals); brittle fracture resistance; ductile fracture resistance; high cycle fatigue; and low cycle fatigue. Representative dominant length scales and associated mechanisms include:
• 2 nm — unit processes of nucleation (low defect density single crystals, nanoscale multilayers); bond breaking (cleavage).
• 20 nm — source activation (nanocrystalline metals with prior cold work); dislocation reactions with grain, phase or twin boundaries; cooperative effects of impurities, segregants, oxides, etc. (intergranular); valve effect dislocation nucleation.
• 200 nm — source controlled (NEMS, MEMS); dislocation multiplication, junctions; source activation.
• 2 µm — free surfaces and starvation (micropillars, unconstrained layers); statistical field of junctions, forests; damage process zone; fracture process zone; localized slip structures.
• 20 µm — individual activated grains or phases, barrier spacing; damage process zone, barrier spacing; intergranular interactions; boundary interactions, roughness and crack path bifurcation.
• 200 µm — effect of neighbors.
(b) Concurrent multiscale continuum modeling schemes for
plastic deformation and failure processes based on imposition
of consistent multiscale boundary conditions and/or higher
order continua accounting for heterogeneous deformation at
fine scales over the same physical domain [56–61].
(c) Multiscale modeling schemes of hierarchical nature for inelastic deformation and damage in the presence of microstructure:
• Multilevel models for plasticity and fracture based on
handshaking [62–65].
• Self-consistent micromechanics schemes [66–68].
• Internal state variable models informed by atomistic/continuum simulations and coupled with micromechanics or FE
methods [69,70].
• Weak form variational principle of virtual velocities (PVV)
with addition of enhanced kinematics based on microstructure degrees of freedom [71–73].
• Extended weak form variational approaches of generalized
continua or second gradient theory with conjugate microstresses to represent gradient effects [74–79].
(d) Statistical continuum theories that directly incorporate discrete dislocation dynamics [80–89] and dislocation substructure evolution [24,25].
(e) Field dislocation mechanics based on incompatibility with
dislocation flux and Nye transport [90–93].
(f) Dislocation substructure formation and evolution models
based on fragmentation and blocky single slip or laminated
flow characteristics, including non-convex energy functions
[94–98].
(g) Transition state theory, based on use of atomistic unit process
models [99–101] to inform event probabilities in kinetic Monte
Carlo models of mesoscopic many body behavior [102] and
other coarse graining approaches [103].
Clearly, even in the focused application area of multiscale dislocation plasticity, no single multiscale modeling strategy exists
or suffices to wrap together the various scale transitions. Each
material class and response of interest is influenced by a finite, characteristic set of length and time scales associated with
key controlling microstructure attributes. For this reason, materials design can be (and often is) pursued using sets of models
that are scale appropriate to the length scales and microstructure attributes deemed most influential on the target response(s)
at the scale of the application. So if the scale of the application is on the order of tens of microns, as in MEMS devices,
both the atomic scale structure and mesostructure of defect distributions (dislocations, grain boundaries, etc.) can be important. For nanostructures, however, atomic scale arrangement and
its evolution is dominant. In many systems, the hierarchy includes strong heterogeneity or phase contrast (e.g., weldments in
structures, fiber-reinforced polymer composites) and these scales
may dominate responses of interest. Even interfacial-dominated
phenomena, such as fracture of heterogeneous materials (cf.
[104–107]), are complicated according to whether the fracture occurs via brittle or ductile separation, with the latter involving interface structure only as a gating mechanism for inelastic deformation
modes [108].
For illustrative purposes, at the risk of oversimplifying the
wide range of phenomena and scales over a wide range of
metallic systems, Table 1 attempts to identify dominant material
length scales to address in design for various target mechanical
properties of metallic systems. Although phenomena occur over
a full spectrum of scales, the role of defects and microstructure
is largely manifested at various discrete levels of structure. One
important point is that the dominant mechanisms and scales change
with the scale and nature of the application. Another point is
that certain properties are sensitive to material structure at only
one or two levels of hierarchy. These observations suggest that
distinct models or scale transition methods that bridge several
scales can supply first order information for materials design.
Clearly, no single multiscale modeling strategy exists or suffices to
link the various scale transitions. Table 2 offers a partial glimpse
into the complexities of models at various length and time scales
for metallic systems, distinguishing scale transition methods from
models appropriate to each scale.
It is emphasized that there is considerable uncertainty in any
kind of multiscale modeling scheme, including the selection of
specific length scales to model, approximations made in separating
length and time scales in models used, model structure and
parameter uncertainty at each scale, approximations made in
various scale transition methods as well as boundary conditions,
and lack of complete characterization of initial conditions.
Furthermore, microstructure itself has random character, such as
sizes and distributions of grains, phases, and so forth. These sources
of uncertainty give rise to the need to consider sensitivity of
properties and responses of interest to variation of microstructure
at various scales, as well as propagation of model uncertainty
through multiscale model chains. Uncertainty will be addressed in
a later section of this paper.
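A simple, generic way to begin quantifying such propagation is Monte Carlo sampling through the model chain, as sketched below for a notional process–structure model feeding a structure–property model. The linear grain-growth relation, the Hall–Petch-type strength relation, and all numerical values are hypothetical placeholders, used only to show how process scatter and parameter uncertainty accumulate in the predicted property distribution.

```python
# Hedged sketch: Monte Carlo propagation of uncertainty through a notional
# two-model chain (process -> structure -> property). The model forms and
# numbers are hypothetical placeholders, not models from this paper.
import numpy as np

rng = np.random.default_rng(42)
n_samples = 20_000

# Process -> structure: grain size (um) from an annealing temperature, with
# process variability and model parameter uncertainty.
temperature = rng.normal(800.0, 10.0, n_samples)      # process scatter
k_growth = rng.normal(0.02, 0.002, n_samples)         # parameter uncertainty
grain_size = np.maximum(k_growth * (temperature - 600.0), 1.0)

# Structure -> property: Hall-Petch-type strength with its own parameter
# uncertainty; grain-size uncertainty from the first model is passed through.
sigma0 = rng.normal(70.0, 5.0, n_samples)              # MPa
k_hp = rng.normal(600.0, 40.0, n_samples)              # MPa * um^0.5
strength = sigma0 + k_hp / np.sqrt(grain_size)

print(f"grain size: mean {grain_size.mean():.1f} um, std {grain_size.std():.1f} um")
print(f"strength  : mean {strength.mean():.0f} MPa, std {strength.std():.0f} MPa")
print("5th-95th percentile strength (MPa):", np.percentile(strength, [5, 95]).round(0))
```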
Table 2
Models and scale bridging methods for multiscale dislocation plasticity in metallic crystals and polycrystals. Rows labeled "Scale bridging" correspond to the scale bridging methodologies (shaded boxes in the original layout). Each row lists length scale (time scale), models, and primary sources of uncertainty.
• 2 nm (NA/ground state) — First principles, e.g., Density Functional Theory (DFT). Primary sources of uncertainty: assumptions in DFT method, placement of atoms.
• Scale bridging — Quantum MD.
• 200 nm (10 ns) — Molecular dynamics (MD). Primary sources of uncertainty: interatomic potential, cutoff, thermostat and ensemble.
• Scale bridging — Domain decomposition, coupled atomistics discrete dislocation (CADD), coarse grained MD, kinetic Monte Carlo. Primary sources of uncertainty: attenuation due to abrupt interfaces of models, passing defects, coarse graining defects.
• 2 µm (s) — Discrete dislocation dynamics. Primary sources of uncertainty: discretization of dislocation lines, cores, reactions and junctions, grain boundaries.
• Scale bridging — Multiscale crystal plasticity. Primary sources of uncertainty: averaging methods for defect kinetics and lattice rotation.
• 20 µm (1000 s) — Crystal plasticity, including generalized continuum models (gradient, micropolar, micromorphic). Primary sources of uncertainty: kinetics, slip system hardening (self and latent) relations, cross slip, obstacle interactions, increasing number of adjustable parameters.
• Scale bridging — RVE simulations, polycrystal/composite homogenization. Primary sources of uncertainty: RVE size, initial and boundary conditions, eigenstrain fields, self-consistency.
• 200 µm (days) — Heterogeneous FE with simplified constitutive relations. Primary sources of uncertainty: mesh refinement, convergence, model reduction.
• Scale bridging — Substructuring, variable fidelity, adaptive remeshing, multigrid. Primary sources of uncertainty: loss of information, remeshing error, boundary conditions.
• >2 mm (years) — Structural FE. Primary sources of uncertainty: low order models, meshing, geometric modeling.
4. Methods for requirements-driven, top–down design
As mentioned earlier, many of the currently available materials
databases are better suited for materials selection [109] rather than
rigorous materials design. This is because materials design requires
knowledge of a validated set of process–microstructure–property
linkages that are amenable to information flow in both forward and
reverse directions, as essential to facilitate solutions to top–down,
requirements-driven (inverse) design problems.
For example, sophisticated analytical theories and computational models already exist for predicting several macroscale
(effective or homogenized) properties associated with a given microstructure. However, most of these theories are not well suited
for identifying the complete set of microstructures that are theoretically predicted to meet or exceed a set of designer-specified
macroscale properties or performance criteria. Topology optimization is one of the early approaches developed for materials design.
In this method, the region to be occupied by the material is tessellated into spatial bins and numerical approaches such as finite
element methods are used to solve the governing field equations.
The design problem is set up as an optimization problem, where
the objective is the maximization of a selected macroscale property (or a performance criterion) and the design space enumerates
all of the possible assignments of one of the available choices of
material phases (e.g., a solid or a pore phase) to each of the spatial
bins in the numerical model [110,111]. Although the method has
been successfully applied to a limited class of problems [111–116],
its main drawback is that it is likely to explore only a very small region of the vast design space. As an example, consider a relatively
small problem with about 100 elements covering the space and
assume that there are two choices for the selection of a material
phase in each element. For this simple problem, the design space
comprises 2^100 discrete choices of microstructures. It is therefore
very unlikely that an optimization process can explore the complete design space unless constraints (perhaps ad hoc or limited
in nature) are imposed. Moreover, explicit simulation of a large
parametric range of microstructures via finite element or finite
difference methods is extremely time-consuming, particularly if
nonlinear and time-dependent processes are considered.
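The scale of this combinatorial explosion, and how small a fraction of the space any direct search can visit, is easy to illustrate; in the sketch below the surrogate objective is an invented placeholder rather than a physical property model.

```python
# Sketch of the combinatorial explosion in binary topology optimization:
# 100 spatial bins with 2 phase choices each give 2**100 candidate designs.
# The surrogate "property" is a made-up placeholder used only to show how
# little of that space a direct search can examine.
import numpy as np

n_bins = 100
design_space_size = 2 ** n_bins
print(f"design space size: 2^{n_bins} = {design_space_size:.3e}")

def surrogate_property(design):
    # Hypothetical objective: reward a phase-1 fraction near 0.4 and
    # penalize phase changes between adjacent bins (a crude connectivity proxy).
    frac = design.mean()
    interfaces = np.sum(np.abs(np.diff(design)))
    return -abs(frac - 0.4) - 0.01 * interfaces

rng = np.random.default_rng(0)
n_trials = 100_000
best = -np.inf
for _ in range(n_trials):
    candidate = rng.integers(0, 2, n_bins)
    best = max(best, surrogate_property(candidate))

print(f"best objective after {n_trials} random designs: {best:.3f}")
print(f"fraction of design space examined: {n_trials / design_space_size:.1e}")
```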
A number of modern approaches to material design start by
dramatically reducing the design space by focusing on the most
important measures of the microstructure deemed relevant to
the problem. However, this reduction of the design space is
attained by rational consideration of the governing physics of
the problem and/or taking into account the constraints on the
availability of microstructures (i.e., recognizing that only a very
small fraction of all theoretically possible microstructures are
accessible by currently employed manufacturing options). In other
words, in these approaches, the design space in the simple problem
illustrated above will be drastically reduced first by consideration
of a suitably selected subset of microstructure measures deemed
relevant to the selected physical phenomenon of interest, and
then optimal solutions will be explored in this reduced space.
It is anticipated that the approach described above will produce
better solutions compared to conventional approaches that rely
exclusively on the optimization algorithms. We next discuss
examples of approaches that attempt to reduce complexity of
top–down design as an illustration.
Microstructure Sensitive Design (MSD) was formulated using
spectral representations across the process, microstructure, and
property design spaces in order to drastically reduce the computational requirements for bi-directional linkages between the
spaces that can enable inverse design [20,117–125]. Much of the
focus in MSD was placed on anisotropic, polycrystalline materials, where the local material state is defined using a proper orthogonal tensor that captures the local orientation of the crystal
lattice in each spatial bin in the microstructure with respect to a
fixed sample (or global) reference frame. The corresponding local state space is identified as the fundamental zone of SO(3) (the
set of all distinct proper orthogonal tensors in 3-D) that takes
into account the inherent symmetries of the crystal lattice. The
distinguishing feature of MSD is its use of spectral representations (e.g., generalized spherical harmonics [126]) of functions
defined over the orientation space. This leads to compact representations of process–structure–property relationships that can be
executed at minimal computational cost. Most of the MSD success stories have focused on 1-point statistics of microstructure
(also called Orientation Distribution Function [126]). At its core, the
MSD effort involved the development of very large and highly efficient databases organized in the form of microstructure hulls and
anisotropic property closures. Microstructure hulls delineate the
complete set of theoretically feasible statistical distributions that
J.H. Panchal et al. / Computer-Aided Design 45 (2013) 4–25
quantitatively capture the important details of the microstructure.
Property closures identify the complete set of theoretically feasible
combinations of macroscale effective properties for a selected homogenization (composite) theory. The primary advantages of the
MSD approach are its (a) consideration of anisotropy of the properties at the local length scales, (b) exploration of the complete set of
relevant microstructures (captured in the microstructure hull and
property closures) leading to global optima, and (c) invertibility of
the process–microstructure–property relations (due to the use of
spectral methods in representing these linkages).
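The computational advantage of spectral representation can be illustrated with a one-dimensional analogue, in which an ordinary Fourier series over a single orientation angle stands in for generalized spherical harmonics over SO(3); the texture and the single-crystal property used below are invented for illustration. Because a macroscale property expressed as an orientation-weighted average is linear in the distribution, it collapses to a short inner product of spectral coefficients.

```python
# 1-D analogue (not the MSD implementation): represent an orientation
# distribution f(theta) by a truncated Fourier series. A macroscale property
# defined as the orientation-weighted average of a single-crystal property
# p(theta) then reduces to a dot product of Fourier coefficients.
import numpy as np

n_grid = 360
theta = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)

# Synthetic textured orientation distribution (unit mean) and synthetic
# single-crystal property, both invented for illustration.
f = 1.0 + 0.6 * np.cos(2 * theta) + 0.2 * np.cos(4 * theta)
p = 100.0 + 15.0 * np.cos(2 * theta)

# Direct numerical average of the property over the textured aggregate.
P_direct = np.mean(p * f)

# Spectral route: keep only a few Fourier coefficients of f and p.
F = np.fft.rfft(f) / n_grid
Pc = np.fft.rfft(p) / n_grid
n_terms = 5
P_spectral = np.real(Pc[0] * np.conj(F[0])
                     + 2.0 * np.sum(Pc[1:n_terms] * np.conj(F[1:n_terms])))

print(f"direct average of p over the texture : {P_direct:.3f}")
print(f"spectral evaluation with {n_terms} terms: {P_spectral:.3f}")
```

In MSD the analogous compaction over SO(3) is provided by generalized spherical harmonics, which is what makes enumeration of microstructure hulls and property closures computationally tractable.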
Extension of the MSD framework to include 2-point statistics of
orientation has proved to be computationally expensive because
of the very large number of statistical variables and terms
involved. It is important to recognize that 1-point statistics
are defined exclusively over the local state space, whereas the
higher-order statistics are defined over the product spaces of the
orientation space (for the local state variables) and the real three-dimensional physical space (for the vectors defining the spatial
correlations). While the use of generalized spherical harmonics
provides substantial compaction for functions over the orientation
space, the analogous compact representations for functions over
the real three-dimensional physical space are yet to be developed.
In an effort to address the need to establish high fidelity microstructure–property–process relations that include higher-order
descriptors of the microstructure, a new mathematically rigorous
approach for scale-bridging called Microstructure Knowledge Systems (MKS) was recently developed. MKS facilitates bi-directional
exchange of information between multiple length (and possibly
time) scales. In the MKS approach [127–131], the focus is on the
localization (the opposite of homogenization), which describes the
spatial distribution of the response field of interest (e.g., stress or
strain fields) at the microscale for an imposed loading condition at
the macroscale. Localization is critically important in correlating
various failure-related properties of the material with appropriate
microstructure conformations that produce the hot spots associated with damage initiation in the material. Moreover, if one can
successfully establish an accurate expression for localization (referred
to as localization linkages), it automatically leads to highly accurate
homogenization and computationally efficient multiscale modeling and simulation.
The MKS approach is built on the statistical continuum theories
developed by Kröner [132,133]. However, the MKS approach dramatically improves the accuracy of these expressions by calibrating the convolution kernels in these expressions to results from
previously validated physics-based models. In recent work [131],
the MKS approach was demonstrated to successfully capture the
tails of the microscale stress and strain distributions in composite material systems with relatively high phase contrast, using the
higher-order terms in the localization relationships. The approach
has its analogue in the use of Volterra kernels [134,135] for modeling the dynamic response of nonlinear systems.
In the MKS framework, the localization relationships of interest
are expressed as calibrated meta-models that take the form of
a simple algebraic series whose terms capture the individual
contributions from a hierarchy of microstructure descriptors. Each
term in this series expansion is expressed as a convolution of the
appropriate local microstructure descriptor and its corresponding
local influence at the microscale. The microstructure is defined
as a digital signal using the discrete approximation of the
microstructure function, m_s^h, reflecting the volume fraction of local state h in the spatial bin (or voxel) s. Let p_s be the averaged
local response in spatial cell s defined on the same spatial grid
as the microstructure function; it reflects a thoughtfully selected
local physical response central to the phenomenon being studied,
and may include stress, strain or microstructure evolution rate
variables [127–130]. In the MKS framework, the localization
relationship captures the local response field using a set of kernels
and their convolution with higher-order descriptions of the local
microstructure [132,133,136–142].
The localization relationship can be expressed as
p_s = \sum_{h_1=1}^{H} \sum_{t_1=0}^{S-1} a_{t_1}^{h_1} m_{s+t_1}^{h_1}
    + \sum_{h_1=1}^{H} \sum_{h_2=1}^{H} \sum_{t_1=0}^{S-1} \sum_{t_2=0}^{S-1} a_{t_1 t_2}^{h_1 h_2} m_{s+t_1}^{h_1} m_{s+t_1+t_2}^{h_2}
    + \sum_{h_1=1}^{H} \sum_{h_2=1}^{H} \sum_{h_3=1}^{H} \sum_{t_1=0}^{S-1} \sum_{t_2=0}^{S-1} \sum_{t_3=0}^{S-1} a_{t_1 t_2 t_3}^{h_1 h_2 h_3} m_{s+t_1}^{h_1} m_{s+t_1+t_2}^{h_2} m_{s+t_1+t_2+t_3}^{h_3} + \cdots,    (1)

where H and S denote the number of distinct local states and the number of spatial bins used to define the microstructure signal (m_s^h), and the t_i enumerate the different vectors used to define the spatial correlations. The kernels, or influence coefficients, are defined by the set {a_{t_1}^{h_1}, a_{t_1 t_2}^{h_1 h_2}, a_{t_1 t_2 t_3}^{h_1 h_2 h_3}, . . .}; a_{t_1}^{h_1}, a_{t_1 t_2}^{h_1 h_2}, and a_{t_1 t_2 t_3}^{h_1 h_2 h_3} are the first, second, and third-order influence coefficients, respectively. The higher-order descriptions of the local microstructure are expressed by the products m_{s+t_1}^{h_1} m_{s+t_1+t_2}^{h_2} \cdots m_{s+t_1+t_2+\cdots+t_N}^{h_N}, where N denotes the order.
The higher-order localization relationship shown in Eq. (1)
is a particular form of Volterra series [134,135,143–146] and
implicitly assumes that the system is spatially-invariant, causal,
and has finite memory. The influence coefficients in Eq. (1) are
analogous to Volterra kernels. The higher-order terms in the
series are designed to capture systematically the nonlinearity
in the system. The influence coefficients are expected to be
independent of the microstructure, since the system is assumed
to be spatially invariant. Application of the concepts of Volterra
series directly to materials phenomena poses major challenges.
Materials phenomena require an extremely large number of
input channels (because of the complexities associated with
microstructure representation) and an extremely large number of
output channels (because of the broad range of response fields of
interest). However, the microstructure representation described
above allows a reformulation of the localization relationship into
a form that is amenable for calibration to results from previously
validated numerical models (e.g., finite element models, phase-field models) by linear regression in the discrete Fourier transform
space [127–131], which is one of the distinctive features of the MKS
framework.
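A minimal one-dimensional sketch of this workflow is given below for the first-order terms of Eq. (1): an ensemble of synthetic two-phase microstructures is paired with responses from a toy reference model (standing in for a validated finite element solver), the influence coefficients are obtained by independent least-squares regressions at each discrete frequency, and the calibrated linkage is then evaluated for a new microstructure by FFTs. All model details and numerical values are illustrative assumptions, not the implementations of Refs. [127–131].

```python
# Minimal 1-D sketch of the first-order MKS localization relationship,
# p_s = sum_h sum_t a_t^h m_{s+t}^h, calibrated by linear regression in
# discrete Fourier space. The "reference" solver is a toy smoothing rule
# standing in for an expensive, validated micromechanics model.
import numpy as np

rng = np.random.default_rng(0)
S, H = 128, 2                      # spatial bins, local states (phases)

def microstructure(volume_fraction):
    """Random two-phase microstructure signal m[h, s] (one-hot per bin)."""
    phase1 = (rng.random(S) < volume_fraction).astype(float)
    return np.stack([1.0 - phase1, phase1])

def reference_response(m):
    """Toy 'physics': a smoothed, phase-weighted local response field."""
    local = 1.0 * m[0] + 10.0 * m[1]            # contrast between phases
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(np.tile(local, 3), kernel, mode="same")[S:2 * S]

# Calibration ensemble spanning a range of volume fractions.
cal_m = [microstructure(v) for v in np.linspace(0.2, 0.8, 25)]
cal_p = [reference_response(m) for m in cal_m]

# Per-frequency linear regression: P_k = sum_h beta_k^h M_k^h.
M = np.array([np.fft.fft(m, axis=1) for m in cal_m])     # (n_cal, H, S)
P = np.array([np.fft.fft(p) for p in cal_p])             # (n_cal, S)
beta = np.zeros((H, S), dtype=complex)
for k in range(S):
    beta[:, k], *_ = np.linalg.lstsq(M[:, :, k], P[:, k], rcond=None)

# Prediction for a new, unseen microstructure via FFTs only.
m_new = microstructure(0.5)
p_pred = np.fft.ifft(np.sum(beta * np.fft.fft(m_new, axis=1), axis=0)).real
p_true = reference_response(m_new)
print("max prediction error:", np.max(np.abs(p_pred - p_true)))
```

The same structure extends to the higher-order terms of Eq. (1) at the cost of a much larger regression problem, which is the calibration challenge discussed next.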
A major challenge in the MKS framework is the calibration
of the selected higher-order influence coefficients via linear
regression. Although other approaches have been used in the literature to approximate the higher-order Volterra kernels (e.g., cross-correlation methods [145,146], artificial neural networks [144],
and Bayesian methods [147,148]), the solutions obtained using
ordinary least squares fit [131] generally provide the best
solutions for the MKS case studies conducted to date. Fig. 3
demonstrates the accuracy of the MKS approach for predicting
the local elastic response in an example microstructure with a
contrast of 10 in the values of the Young’s moduli of the local
states. The MKS methodology has thus far been successfully
applied to capturing thermo-elastic stress (or strain) distributions
in composite materials, rigid-viscoplastic deformation fields in
composite materials, and the evolution of the composition
fields in spinodal decomposition of binary alloys [127–131].
The latter two phenomena (rigid-viscoplastic deformations and
[Fig. 3 panels: strain contour maps from the Finite Element Model and the Materials Knowledge System, with the corresponding frequency distributions of strain (×10−4) in each phase.]
Fig. 3. Demonstration of the accuracy of the MKS approach in predicting the local elastic fields in a two-phase composite system with a contrast of 10 in the elastic moduli
of the constituent phases. The strain contours on a mid-section through the 3-D composite microstructure are shown along with the strain distributions in each phase in
the entire 3-D volume element. It is seen that the MKS approach produced excellent predictions at a fraction of the computational cost associated with the finite element
method (FEM).
spinodal decomposition) involve highly nonlinear governing
field equations. However, the MKS framework has not yet been
applied to highly complex microstructures such as those seen
in polycrystalline material systems. For the compact inclusion of
lattice orientation in the higher-order localization relationships, it
would be necessary to introduce the spectral representations over
the orientation space (used in the MSD framework) into the MKS
framework described above.
The MSD and MKS approaches described in the preceding sections address the compact representation of high fidelity microstructure–property–processing linkages at a selected length scale. In
order to obtain viable materials design solutions, one needs to
link relevant models from different length and time scales, quantify uncertainty in each model, and communicate the uncertainty
throughout the model chain in a robust multi-level design framework.
In a later section, the Inductive Design Exploration Method
(IDEM) will be introduced as a means of mitigating or managing
uncertainty while facilitating top–down searches for candidate
microstructures based on mappings compiled from bottom–up
simulations. In general, the utility of IDEM might be greatest
for applications involving highly heterogeneous datasets and/or
material models for which the inverse design strategy afforded
by MSD or MKS approaches might be less clearly related to
microstructure representation. Such cases may be typical of multiphysics phenomena with weak coupling to each other or to
radically different length scales of microstructure, or to problems
for which the microstructure–property/response relations run
through a sequence of hierarchical models, with each transition
characterized by a reduction of model order.
5. Characterization for model validation and process–structure
models
Verification and validation (V&V) is a core component of
ICME [7]. Model verification and validation is an underdeveloped
aspect of simulation-supported design of materials and products
within the ICME vision. Within systems engineering, verification
refers to the process of ensuring that the ‘‘system is built right’’
(i.e., the system meets its requirements) whereas validation refers
to building the ‘‘right system’’ (i.e., the system satisfies the
users’ needs) [149]. These are general definitions that apply to
verification and validation of individual and multiscale models,
design processes and design outcomes. Within the context of
simulation model development [150], verification and validation
have specific definitions, which are consistent with the systems
engineering definitions above:
• Verification: The process of determining that a model implementation accurately represents the developer’s conceptual description of the model and the solution to the model.
• Validation: The process of determining the degree to which a
model is an accurate representation of the real world from the
perspective of the intended uses of the model.
One of the most difficult tasks in validating models is the
documentation of the material microstructure and its evolution
in any selected process. As noted earlier, it is essential to
address stochasticity of microstructure. Consequently, there is a
critical need to establish rigorous protocols for documenting in
a statistically meaningful way the rich three-dimensional (3-D)
topological details of the microstructure that might span multiple
length scales. These protocols need to establish reliable statistical
measures to assess if a sufficient number of datasets have been
acquired, and if the spatial extent and spatial resolution in the
acquired datasets are adequate. The protocols should also address
reconstructions of representative volume elements (RVEs) from
the measured datasets. Since the microstructure has a stochastic
character, it is unlikely that a single sample or statistical volume
element of microstructure [24,25] can accurately represent the
microstructure. Therefore, one aim is to establish protocols for
extracting a set of such samples that accurately reflect the
natural variance in the microstructure such that the statistics of
the ensemble can be statistically representative (i.e., statistically
homogeneous).
Obtaining 3-D microstructure datasets from samples continues to be a major challenge for most advanced material systems.
Impressive advances have been made in generating 3-D microstructure datasets [151–162]. Despite these advances, the acquisition of such datasets remains effort intensive, highly expensive,
very slow, and inaccessible to the broader scientific community. Moreover, the volumes of materials studied using these
approaches are often limited, which raises critical questions regarding their adequacy for reliably extracting desired microstructure measures and their variances.
If we restrict attention to 2-point and 3-point statistics of
the microstructure (these constitute the most important spatial
correlations), it should be possible to acquire and assemble 3-D
descriptions of these statistics from large 2-D microstructure scans
on multiple oblique sections into the sample [138,163]. This is
because all of the vectors used to define the 2-point and 3-point
statistics can be populated on carefully selected oblique sections
into the sample. Since it is significantly less expensive to obtain
large 2-D microstructure scans in most materials, this approach is
much more practical for statistically meaningful characterization
of the microstructure in a given sample set.
It is important to emphasize that the complete set of 2-point
statistics is an extremely large set. Compact and meaningful
grouping of the dominant 2-point statistics in a given material
system is an area of active current research. It has long been
known that the complete set of 2-point statistics exhibits many
interdependencies. Gokhale et al. [164] showed that at most
N(N − 1)/2 of the 2-point correlations are independent for a material
system comprised of N distinct local states. Niezgoda et al. [165]
recently showed that only (N − 1) of the 2-point correlations
are actually independent by exploiting known properties of the
Discrete Fourier transform representation of the correlations.
However, for a material system comprising a very large number of
local states (e.g. a polycrystalline material system), even tracking
(N − 1) spatial correlations is an arduous task. It was recently
demonstrated that techniques such as principal component
analysis (PCA), commonly used in face recognition and pattern
recognition, can be used to obtain objective low dimensional
representations of the 2-point statistics [21,22], as shown in Fig. 4.
Since the n-point statistics are smooth analytic functions with
a well-defined origin, PCA decomposition is quite effective in
establishing an uncorrelated orthogonal basis for any ensemble
of microstructure datasets. The PCA decomposition allows one
to represent each microstructure as a linear combination of the
eigenvectors of a correlation matrix, with the weights interpreted
in the same manner as Fourier coefficients. The weights now form
an objective low dimensional representation of the much larger set
of independent 2-point statistics. The PCA representations allow
simple visualization of an ensemble of microstructures, and a
rigorous quantification of the microstructural variance within the
ensemble [120] (see Fig. 4). The same concepts can also be used
effectively to define weighted sets of statistical volume elements
(WSVEs) as an alternative to identifying RVEs for a given ensemble
of microstructure datasets [22,166,167].
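As a concrete illustration of this workflow, the sketch below computes the 2-point autocorrelation of a two-phase microstructure with FFTs and reduces an ensemble of such statistics to a few PCA weights. The synthetic microstructures, array sizes, and the use of scikit-learn's PCA are illustrative assumptions rather than the procedure of Refs. [21,22].

```python
# A minimal sketch: 2-point statistics via FFT, then PCA of an ensemble of such statistics.
import numpy as np
from sklearn.decomposition import PCA

def two_point_statistics(ms):
    """Autocorrelation of the phase-1 indicator for a periodic 2-D microstructure.

    ms : 2-D array of 0/1 phase labels (local states).
    Returns the 2-point correlation f_11(r) for all discrete vectors r.
    """
    h = (ms == 1).astype(float)                          # indicator of local state 1
    H = np.fft.fftn(h)
    corr = np.fft.ifftn(H * np.conj(H)).real / h.size    # periodic autocorrelation
    return np.fft.fftshift(corr)                         # center the zero vector

# Ensemble of synthetic microstructures (stand-in for measured datasets)
rng = np.random.default_rng(0)
ensemble = [(rng.random((64, 64)) < 0.4).astype(int) for _ in range(50)]

# Assemble the (flattened) 2-point statistics into a data matrix
F = np.array([two_point_statistics(m).ravel() for m in ensemble])

# Objective low-dimensional representation: PCA weights alpha_i per microstructure
pca = PCA(n_components=3)
alphas = pca.fit_transform(F)                            # rows are (alpha_1, alpha_2, alpha_3)
print(alphas[:5])
```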
Fig. 4. Visualization of ensembles of microstructure datasets and their variance using PCA representations of the 2-point statistics. Three ensembles of microstructure
datasets are shown with High-, Mid-, and Low-Variance. Each point in the figure
corresponds to a complete set of 2-point statistics, projected in the first three dimensions of the PCA space [21]. The αi values in the figure are the corresponding
PCA weights. The points corresponding to each ensemble are enclosed in convex
hulls to help visualize the space spanned by each ensemble. Note that the range of
the PCA weights decreases substantially with each new PCA dimension.
Successful pursuit of the ICME vision requires integration of
all of the different components described earlier. The NMAB report on ICME [7] specifically identifies three main tasks for integration tools: (i) linking information from different sources and
different knowledge domains, (ii) networking and collaborative
development, and (iii) optimization. Certainly, collaborative multidisciplinary design optimization presents challenges due to the
geographically distributed nature of the materials supply and
product design enterprises, often undertaken in different corporations and in different divisions, as well as in consulting and
university laboratories [15]. Major emphasis to date on design integration has been placed on use of commercial software [13,14]
that includes capabilities to manage databases and pursue various optimization strategies. With regard to optimization, materials design differs substantially from conventional problems in
design optimization owing to the complexity, nonlinearity, and
uncertainty [168,169] of material process–structure and structure–property models. It is not merely a matter of identifying and deploying existing tools; rather, basic research issues at the intersection of computational materials science and engineering, experimental mechanics and processing, and multidisciplinary design optimization must be identified and addressed.
As mentioned earlier, one of the most difficult challenges in the
integration is the visualization of microstructure–property–processing linkages in a common space. This is exacerbated by the fact
that a comprehensive description of the microstructure requires
a very large number of statistics or spatial correlations. Therefore,
reduced-order representations of the microstructure are essential.
The PCA decompositions of the n-point statistics discussed earlier
offer exciting opportunities to build new integration tools.
As an example, consider the process design problem where the
objective is to identify hybrid processing routes comprising an
arbitrary sequence of available manufacturing routes aimed at
transforming a given initial microstructure into a desired one. Fig. 5
shows schematically how one might address the process design
problem by representing the microstructure–property–processing
linkages in a common (microstructure) space. The green volume in
Fig. 5 depicts a microstructure hull corresponding to an ensemble
of digitally created porous solids [22]. Each point in this space
represents a microstructure (an example is depicted in the top
right in Fig. 5) through the PCA weights of its 2-point correlations.
The elastic moduli for all microstructure members in the selected
(Fig. 5 panels: the microstructure hull and processing pathlines plotted against the first three PCA weights (α1, α2, α3); an example microstructure; and a map of the predicted modulus (MPa) and its relative error against the second principal component.)
Fig. 5. Illustration of the compact representation of microstructure–property–processing linkages using the reduced-order representations of the microstructure obtained
from principal component analyses of higher-order statistics (2-point statistics). The αi values in the figure represent the PCA weights in the reduced-order representation
of the n-point spatial correlations. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
ensemble were evaluated using finite element models (FEM),
and were correlated to the first two PCA weights using artificial
neural networks. The correlation and its validation are depicted
in the contour plot on the bottom right in Fig. 5. It is seen
that the microstructure–modulus correlation provides excellent
predictions for various members of the ensemble (with less than 5%
error) [22]. It is noted that the PCA weights of the microstructure
can also be related to properties of interest using various regression
methods.
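A minimal sketch of the kind of structure–property surrogate described above is given below: the first two PCA weights of each microstructure are regressed against an effective modulus with a small neural network. The synthetic weights and "FEM" responses, and the choice of scikit-learn's MLPRegressor, are placeholder assumptions standing in for the data and networks of Ref. [22].

```python
# A minimal sketch of a PCA-weight-to-modulus surrogate (illustrative data only).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
alpha = rng.normal(size=(200, 2))                         # (alpha_1, alpha_2) per microstructure
modulus = 6.5 + 0.3 * alpha[:, 0] - 0.1 * alpha[:, 1]**2  # placeholder "FEM" response (MPa)

X_train, X_test, y_train, y_test = train_test_split(alpha, modulus, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
surrogate.fit(X_train, y_train)

# Held-out error on the synthetic data (the study cited above reports <5% for its ensemble)
rel_err = np.abs(surrogate.predict(X_test) - y_test) / np.abs(y_test)
print(f"max relative error: {rel_err.max():.3f}")
```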
Fig. 5 illustrates schematically the superposition of processing
information on the microstructure hull in the form of process pathlines. In any selected processing option, the salient features of the
microstructure are expected to change, which can be visualized as
a pathline in the multi-dimensional microstructure space. Different processing operations are expected to change the microstructure in different ways, and therefore there are likely to be different
processing pathlines intersecting at each microstructure. Identification (either experimentally or by modeling) of all the processing
pathlines in the microstructure hull corresponding to all available
processing options that may be exercised results in a large database
(or a process network). Once such a database is established, it
should be possible to identify the ideal hybrid process route (based
on cost or other criterion such as sustainability) that will transform
any selected initial microstructure into one that has been predicted
to exhibit superior properties or performance characteristics. The
basic concept was evaluated recently in a simple case study that
aimed at transforming the given initial crystallographic texture in
a sample into a desired one [123]. In all examples considered, the
process network identified solutions that could not be arrived at by
pure intuition or by brute-force methods (including constant performance maximization or repeated trials). There is a critical need
to further develop this framework to include higher-order descriptions of microstructure (possibly based on n-point statistics) and
higher-order homogenization theories (e.g. MKS framework). This
linkage of models that explicitly consider microstructure to process path design is a key underdeveloped technology within the
ICME paradigm, as discussed earlier in reference to Fig. 2.
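The sketch below illustrates, under assumed node names and step costs, how such a process network might be searched for a low-cost hybrid route once the pathlines have been compiled: microstructures become nodes, individual processing steps become weighted edges, and a shortest-path query returns the hybrid route. None of the nodes, costs, or the use of networkx come from Ref. [123].

```python
# A minimal sketch of searching a process network for a low-cost hybrid route.
# Nodes stand for microstructures (e.g., identified by their PCA weights); edges are
# individual processing steps with hypothetical costs.
import networkx as nx

G = nx.DiGraph()
# edge = (from_microstructure, to_microstructure, cost of that processing step)
G.add_weighted_edges_from([
    ("initial", "rolled",   2.0),
    ("initial", "annealed", 1.5),
    ("rolled",  "annealed", 0.8),
    ("rolled",  "target",   3.5),
    ("annealed", "target",  1.2),
], weight="cost")

route = nx.shortest_path(G, source="initial", target="target", weight="cost")
cost = nx.shortest_path_length(G, source="initial", target="target", weight="cost")
print(route, cost)   # e.g. ['initial', 'annealed', 'target'] 2.7
```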
6. Uncertainty in ICME
One of the key enablers of the ICME paradigm is the ability
to account for different aspects of uncertainty within models,
experiments, and the design process. Uncertainty is a first
order concern in the process of simulation-supported, decision-based materials design. It is often overlooked in discussion
of ICME and materials design, possibly because it is assumed
that approaches exist within systems design and uncertainty
management methods that are suitable. This is generally not
the case in addressing nuances and complexity of multi-level
materials design. Only two paragraphs were devoted to the
uncertainty section in the NMAB ICME report [7], and little
discussion was devoted to MDO aspects and the richness of
required verification and validation approaches; however, these
are perhaps among the most critical scientific obstacles to
realizing ICME owing to its nascent stages of formulation. The
ICME report focused more on model integration/communication,
i.e., assembling and networking modeling and simulation tools
along with accompanying supportive experimental methods.
There are various sources of uncertainty in addressing ICME,
inexhaustively listed without classification as [15]:
• Variability induced by processing, operating conditions, etc. and
due to randomness of microstructure.
• Incomplete knowledge of model parameters due to insufficient
or inaccurate data, including initial and boundary conditions.
• Uncertain structure of a model due to insufficient knowledge
(necessitating approximations and simplifications) about a
physical process such as process–structure or structure–property relations.
J.H. Panchal et al. / Computer-Aided Design 45 (2013) 4–25
• Uncertainty in meshing and numerical solution methods.
• Propagation of uncertainty through a chain of models (amplification or attenuation).
• Experimental idealizations and measurement error.
• Lack of direct connection of measured quantities to model
degrees of freedom.
Accounting for uncertainty involves four aspects: uncertainty
quantification, uncertainty propagation, uncertainty mitigation,
and uncertainty management. Uncertainty quantification is the
process of identifying different sources of uncertainty and
developing corresponding mathematical representations. There
are perhaps several unique aspects of materials with regard to
uncertainty quantification [168]:
• The long-range character of defect/structure interactions in materials necessitates complex nonlocal formulations of continuum field equations or corresponding coarse-grained internal variables. An economics analogy might be consumer reaction to market instabilities abroad.
• Use of 2-D images to reconstruct 3-D microstructures to estimate properties/responses is often necessitated by the lack of 3-D quantitative microstructure information, which is expensive and time-consuming to acquire; moreover, measured microstructures of new materials of interest to materials design do not yet exist, and predicted microstructures may not be thermodynamically feasible or metastable.
• Extreme value (rare event) problems are common in failure of
materials, such as localization of damage within the microstructure in fracture or high cycle fatigue. An analogy might be insurance company estimation of probabilities of rare events such as
1000 year floods, tsunamis, and so forth.
• The large range of equations used in solving field equations
of nonlinear, time-dependent, multi-physics behavior of materials, including constraints, side conditions, jump conditions,
thresholds, etc., which may vary from group to group (i.e., little
standardization).
Uncertainty propagation involves determining the effect of input uncertainties on the output uncertainties, both for individual models and chains of models. The combination of uncertainty
representation and propagation is sometimes referred to as uncertainty analysis. Uncertainty mitigation utilizes techniques for reducing the impact of input uncertainties on the performance of the
material system. Finally, uncertainty management involves deciding the levels of uncertainty that are appropriate for making design decisions and prioritizing uncertainty reduction alternatives
based on the tradeoff between efforts and benefits. While uncertainty quantification and propagation activities are carried out by
analysts or model developers, uncertainty mitigation and management are activities performed by system designers during the design process. These four aspects and corresponding techniques are
discussed in a fairly general way in Sections 6.1 through 6.4, followed by a brief discussion of some of the unique aspects associated with multiscale modeling and design of materials.
6.1. Uncertainty quantification
During the past few decades, a number of different classifications of uncertainty have been proposed. Different classifications have emerged within diverse fields such as economics and
finance, systems engineering, management science, product development, and computational modeling and simulation. Uncertainty classification has also evolved over time, e.g., within economics the classification has evolved from (a) risk and uncertainty
15
in early 20th century to (b) environmental and behavioral uncertainty in the mid 20th century, to the recent classification of fundamental uncertainty and ambiguity [169]. Uncertainty is also related to the concept of risk which can be defined as a triplet of scenarios, likelihood of scenarios, and possible consequences [170].
Uncertainty gives rise to risk. While risk is always associated with
undesirable consequences, uncertainty may have desirable consequences [171]. An understanding of these classifications is necessary to properly position the methods for design under uncertainty.
Within computational modeling and simulation, uncertainty
has been classified from different standpoints. Chen et al. [172]
classify uncertainty into three types — Type 1 uncertainty associated with inherent variation in the physical system, Type 2 uncertainty associated with lack of knowledge, and Type 3 uncertainty
associated with known limitations of simulation models. With the specific goal of materials design, Choi et al. [173] classify uncertainty
associated with different aspects of simulation models into model
parameter uncertainty, model structural uncertainty, and propagated uncertainty.
The most widely adopted classification splits uncertainty into
aleatory uncertainty and epistemic uncertainty. Aleatory uncertainty is due to the inherent randomness in material systems, such
as nearest neighbor distributions of grain size, orientation, and
shape in metallic polycrystals. This uncertainty is also referred to
as irreducible uncertainty, statistical variability, inherent uncertainty, or stochastic uncertainty. Aleatory uncertainty associated
with parameters is commonly represented in terms of probability
distributions. Probability theory is very well developed and hence
has served as a language and mathematics for uncertainty quantification. This is one of the reasons that aleatory uncertainty has
received significant attention both in engineering design and computational materials science.
Epistemic uncertainty, on the other hand, refers to the lack of
knowledge which may be due to the use of idealizations (simplified relationships between inputs and outputs), approximations,
unanticipated interactions, or numerical errors. Examples include
size of the representative volume element (RVE) of microstructure used to predict properties or responses, simplified material constitutive models, including the form and/or structure of
such models, and approximate model parameters. Epistemic uncertainty is also referred to as reducible uncertainty, subjective
uncertainty, and/or model-form uncertainty. Attempts have been
made to use a Bayesian approach to quantify epistemic uncertainty using probability functions. Under the Bayesian framework
for epistemic uncertainty, the subjective interpretation of probability functions is used [174]. Subjective interpretation of probability is different from the frequentist interpretation in the sense
that subjective probabilities refer to degree of belief rather than
the frequency of occurrence. The subjective interpretation allows
for modeling all types of uncertainty (epistemic and aleatory) into
a common framework, which is convenient for uncertainty analysis. The Bayesian approach involves determining prior probability distributions of uncertain parameters that represent the modeler's
initial belief about uncertainty and using Bayes’ theorem to update the probability functions based on new pieces of information
(e.g., from experiments).
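A minimal sketch of such a Bayesian update is shown below, assuming a hypothetical uncertain parameter (a yield strength), a Gaussian prior expressing the modeler's initial belief, and a single noisy experimental observation; the grid-based normalization is a simple stand-in for the more general machinery used in practice.

```python
# A minimal sketch of a Bayesian update of an uncertain model parameter (illustrative values).
import numpy as np

theta = np.linspace(200.0, 400.0, 1001)                  # candidate parameter values (MPa)
prior = np.exp(-0.5 * ((theta - 300.0) / 40.0) ** 2)     # Gaussian prior belief
prior /= np.trapz(prior, theta)

y_obs, sigma_meas = 330.0, 15.0                          # experiment and its noise level (assumed)
likelihood = np.exp(-0.5 * ((y_obs - theta) / sigma_meas) ** 2)

posterior = prior * likelihood                           # Bayes' theorem (unnormalized)
posterior /= np.trapz(posterior, theta)

mean_post = np.trapz(theta * posterior, theta)
print(f"posterior mean: {mean_post:.1f} MPa")            # pulled from 300 toward the observation
```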
However, by the very nature of epistemic uncertainty, it
is inherently difficult or impossible to characterize probability
functions associated with limited knowledge such as model
idealization. An incorrect representation of uncertainty may
result in wrong decisions. Hence, different types of mathematical
techniques are needed for representing epistemic uncertainty.
Alternative approaches for representing epistemic uncertainty
include fuzzy set theory, evidence theory, possibility theory,
and interval analysis [175]. Different representations are suitable
under different situations.
While there has been significant progress in accounting for
aleatory uncertainty using probability theory, little progress has
been made on (a) holistic quantification of epistemic uncertainty
within multiscale material models, and (b) simultaneous consideration of different mathematical constructs for representing both
aleatory and epistemic uncertainty in an integrated manner. We
regard this as one of the primary challenges facing ICME.
6.2. Uncertainty propagation
Within statistics, uncertainty propagation refers to the effect of
uncertainties in independent variables on the dependent variables.
In the context of simulation-supported design (decisions based
at least in part on modeling and simulation), this translates to
the effect of input uncertainty on the output uncertainty. Of
course, the same general logic applies to input–output relations in
experiments, which assert certain models in their own right and
have error and approximation. In addition to the uncertainty at
individual scales, materials design is associated with propagation
of uncertainties in networks of interconnected models, which
may either cause amplification or reduction of uncertainties in
serial application of models. Hence, for the realization of ICME,
techniques are required for (a) propagating uncertainty from the
inputs of one model to its outputs, (b) propagating uncertainty
from lower level models to upper level models within a model
chain, and (c) propagating uncertainty from material composition to structure through the process route, from material microstructure to material properties or responses, and from material properties to product performance.
A number of techniques such as Monte Carlo techniques and
global sensitivity analysis are available for propagating uncertainty
in the inputs (represented as random variables) to the outputs.
Probability-based techniques have also been developed for propagating uncertainty across models. Lee and Chen [176] discuss five
categories of methods for (typically aleatory) uncertainty propagation: Monte-Carlo methods, local expansion based methods
(e.g., Taylor’s series expansion), most probable point (MPP) based
methods (e.g., first order reliability method and second order reliability method), functional expansion based methods (e.g., polynomial chaos expansion), and numerical integration based methods.
A comparison of full factorial numerical integration, univariate dimensional reduction method, and polynomial chaos expansion is
presented in Ref. [176].
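The simplest of these, Monte Carlo propagation through a chain of models, is sketched below for a toy process-to-structure model feeding a toy structure-to-property model; the model forms, distributions, and parameter values are placeholders chosen only to show how input scatter is pushed through serially linked models.

```python
# A minimal sketch of Monte Carlo uncertainty propagation through a two-model chain.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Uncertain process input: e.g., a cooling rate with lognormal scatter (assumed)
cooling_rate = rng.lognormal(mean=2.0, sigma=0.25, size=n)

# Model 1 (process -> structure): grain size decreases with cooling rate (toy form)
grain_size = 50.0 / np.sqrt(cooling_rate) + rng.normal(0.0, 0.5, size=n)

# Model 2 (structure -> property): Hall-Petch-like strength relation (toy form)
strength = 150.0 + 600.0 / np.sqrt(np.clip(grain_size, 1.0, None))

print(f"strength mean = {strength.mean():.1f}, std = {strength.std():.1f}")
print(f"5th-95th percentile: {np.percentile(strength, [5, 95])}")
```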
The primary challenge with Monte Carlo techniques is that they
are computationally intensive in nature. Repeated execution of
simulation models may be computationally expensive and generally infeasible when using multiscale models with a number
of sources of uncertainty. Hence, efficient techniques for uncertainty propagation are necessary. Chen and co-authors [172] have
developed techniques for efficient uncertainty propagation using
metamodels, but the use of metamodels in multiscale modeling
where a sound physical basis must exist for models at fine scales
has not been fully acknowledged or pursued. There is strong suspicion that metamodels are too limited for this task, in general. Yin
et al. [177] use random fields to model uncertainty within material
microstructure and use dimension reduction techniques for efficient uncertainty propagation. Methods for propagation of uncertainties represented as intervals are also available [178]. However,
approaches for propagating different types of uncertainty in a single framework are in early stages.
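A sketch of the metamodel-based propagation idea mentioned above follows: a handful of "expensive" model evaluations train a Gaussian-process surrogate, and Monte Carlo sampling is then done on the cheap surrogate. The one-dimensional test function and the use of scikit-learn's Gaussian process are illustrative assumptions, not the techniques of Refs. [172,177].

```python
# A minimal sketch of metamodel (surrogate) based uncertainty propagation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):
    return np.sin(3.0 * x) + 0.5 * x                     # placeholder for a costly simulation

rng = np.random.default_rng(3)
X_train = np.linspace(-2.0, 2.0, 12).reshape(-1, 1)      # only 12 full-model runs
y_train = expensive_model(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

# Monte Carlo on the surrogate instead of the expensive model
x_mc = rng.normal(0.0, 0.7, size=(50_000, 1))
y_mc = gp.predict(x_mc)
print(f"output mean = {y_mc.mean():.3f}, std = {y_mc.std():.3f}")
```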
A related issue is whether to describe stochasticity at each level of simulation in terms of parameter distributions (e.g., variation of microstructure, model parameters) or to
use many deterministic realizations of microstructure and corresponding models/parameters extracted from assigned probabilistic distributions at some ‘‘primal’’ scale to propagate uncertainty
to higher levels of predicted response(s). The latter approach is
straightforward and tractable, but requires identification of some
scale as a starting point at which finer scale response can be
treated as deterministic, which of course does not fully accord with
physics. From a practical perspective, Table 1 suggests that each
property or response may have a dominant minimal scale which
can serve as a starting point for applying deterministic models with
probability distributions of inputs to propagate through a multiscale modeling framework. On the other hand, the temptation with
a fully stochastic approach is to avoid the hard but fruitful work of
assessing the viability of such a scale, perhaps leading to an early
concession to Bayesian approaches to provide answers where the
physics is complex or microstructure-sensitive models are limited
or avoided. In this regard, there are some recent works that merge
the two approaches in multiscale modeling in a manner that may
provide promising avenues for future applications [179].
6.3. Uncertainty mitigation
Uncertainty mitigation refers to reducing the effect of uncertainty on systems design. Techniques such as robust design and
reliability based design and optimization (RBDO) have been developed in the systems design community to achieve desired system
performance in the presence of uncertainty. Robust design techniques are based on the principle of minimizing the effects of uncertainty on system performance [180] whereas reliability-based
design optimization techniques are based on finding optimized designs characterized by low probability of failure [181]. Note that reducing the effect of uncertainty on design using these approaches is
different from reducing the (epistemic) uncertainty itself. In robust
design and RBDO approaches, the levels of uncertainty are generally assumed to be fixed (or given). The goal is to determine the
best design that is insensitive to the given uncertainty or has low
probability of failure.
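The distinction between nominal and robust optima can be illustrated with the toy sketch below, in which a design variable is chosen to trade mean performance against its sensitivity to an aleatory noise parameter; the performance function, noise level, and penalty weight are placeholder assumptions rather than any published RCEM or RBDO formulation.

```python
# A minimal sketch contrasting nominal optimization with a robust (mean - k*std) criterion.
import numpy as np

rng = np.random.default_rng(4)
noise = rng.normal(0.0, 0.05, size=5000)        # aleatory scatter on the realized design value

def performance(design, noise):
    x = design + noise
    tall_narrow = 1.0 * np.exp(-(x - 0.3) ** 2 / (2 * 0.02 ** 2))   # high but noise-sensitive peak
    low_broad = 0.7 * np.exp(-(x - 0.7) ** 2 / (2 * 0.15 ** 2))     # lower but insensitive peak
    return tall_narrow + low_broad

designs = np.linspace(0.0, 1.0, 201)
means = np.array([performance(d, noise).mean() for d in designs])
stds = np.array([performance(d, noise).std() for d in designs])

nominal_best = designs[np.argmax(performance(designs, 0.0))]   # ignores variability
robust_best = designs[np.argmax(means - 3.0 * stds)]           # penalizes sensitivity
print(f"nominal optimum: {nominal_best:.2f}, robust optimum: {robust_best:.2f}")
# The nominal optimum sits on the tall, narrow (sensitive) peak near 0.3, while the
# robust formulation prefers the broader, less sensitive peak near 0.7.
```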
Robust design techniques are rooted in Taguchi’s quality engineering methodology [182]. One of the earliest robust design
approaches for systems design is the Robust Concept Exploration Method (RCEM) [183], which combines statistical techniques of surrogate modeling and compromise decision support
problems [184] to perform robust design. RCEM has been extended
to account for other sources of uncertainty (e.g., uncertainty in input variables) [185], and to develop product platforms [186]. The
multi-disciplinary design optimization (MDO) community has developed techniques for optimizing systems in the presence of uncertainty [187–190].
Existing techniques for multi-level deterministic optimization
have also been extended to account for uncertainty. An example is
the extension of the Analytical Target Cascading (ATC) approach
to Probabilistic Analytical Target Cascading (PATC) [191]. ATC
is an approach for cascading overall system design targets to
sub-system specifications based on a hierarchical multi-level
decomposition [192], similar to the concurrent material and
product design problem shown in Fig. 1, when layered with the
understanding of Olson’s Venn diagram. The subsystem design
problems are formulated as minimum deviation optimization
problems and the outputs of the subsystem problems are used as
inputs to the system-level problem. Within PATC, uncertainty is
modeled in terms of random variables and uncertainty propagation
techniques are used to determine the impact of uncertainties at the
bottom level on parameters at the top level.
While the analytical target cascading approach is based on choosing a design point during each iteration and repeating the optimization loops until convergence is achieved, approaches such as
the Inductive Design Exploration Method (IDEM) use ranges of parameters and design variables [15,193,194]. The primary advantage of working with ranged sets of design points is the possibility
Fig. 6. Inductive Design Exploration Method [15,193–195]. Step 1 involves bottom–up simulations or experiments, typically conducted in parallel fashion, to map
composition into structure and then into properties, with regions in blue showing the feasible ranged set of points from these mappings. Step 2 involves top–down evaluation
of points from the ranged set of specified performance requirements that overlap feasible regions (red) established as subsets of bottom–up simulations in Step 1. (For
interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
of parallelization of bottom–up simulations that project composition onto structure via process route and structure onto properties.
IDEM facilitates design while accounting for uncertainty in models and its propagation through chains of models. IDEM [193,194]
has two major objectives: (i) to explore top–down, requirementsdriven design, guiding bottom–up modeling and simulation, and
(ii) to manage uncertainty in model chains. Fig. 6 illustrates the
concept of IDEM. We consider multiple spaces of different dimension (DOF), between which models, simulations or experiments
serve as mappings that project points in a given space (e.g., composition) to the next (e.g., microstructure) and to the final (e.g., properties/responses). First we must configure the design system,
including information flow between specific databases and modeling and simulation tools, and specify ranged set of performance
requirements. Then, the steps in IDEM, shown in Fig. 6, are as
follows:
1. Perform parallel discrete-point evaluations at each level of the design process (bottom–up simulations and experiments).
2. Perform inductive (top–down) feasible space exploration based on metamodels built from the mappings of Step 1.
3. Obtain a robust solution with respect to model uncertainty.
Projected points need not comprise simply connected regions
in each space, but we plot as such for simplicity. There is also
provision for reconfiguring the design system in case there is no
overlap between performance requirements and feasible ranges
of the bottom–up projections and/or to mitigate uncertainty.
Mitigation of uncertainty is a second primary feature of IDEM.
If two candidate solutions exist, say Designs 1 and 2, for which
final performance is identical, then which design should we select?
In general, we select solutions that lie the furthest from the
constraint boundaries in each space, including the boundaries
of feasible regions established in Step 1 of IDEM (blue regions
in Fig. 6). Current robust design methods cannot address this
issue since these methods only focus on the final performance
range. The fundamental difference between IDEM and other multilevel design methods is that IDEM involves finding ranged sets of
design specifications by passing feasible solution spaces from final
performance requirements by way of an interdependent response
space to the design space while preserving the feasible solution
space to the greatest extent possible.
Choi and co-authors [195] use the IDEM framework to
design a multi-functional energetic structural material (MESM)
using a multiscale model consisting of a microscale discrete
particle mixture (DPM) model and a continuum non-equilibrium
thermodynamic mixture (NTM) model. The inputs of the DPM
model are volume fractions and mean radii of the MESM
constituents and the output is the weighted average temperature
of local hotspots. The inputs of the NTM model include volume
fractions and critical temperature for reaction initiation and the
output is the accumulated reaction product. For each input–output
mapping, the input space is discretized using experimental design
techniques. The model is executed at discrete points considering
various types of uncertainty (discussed in Section 6.1). Due to the
uncertainties, each point maps to a subspace in the output space.
Specific techniques have been developed to account for inputs
shared across multiple levels. After generating the mappings
between different levels, measures such as Hyperdimensional
Error-Margin Index (HD-EMI) are used to determine the robustness
of the design to various forms of uncertainty while satisfying
the problem constraints. Using the IDEM approach, Choi and co-authors [195] determine robust combinations of design variables
(volume fractions, and radii of constituents) to achieve the desired
accumulation of reaction products. Other examples of the IDEM
implementation are discussed in [15,193,194].
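The bottom–up/top–down logic of IDEM can be caricatured for a single mapping as in the sketch below: discrete inputs are evaluated with an uncertainty band (Step 1), only inputs whose entire output range meets the top–down requirement are retained (Step 2), and the input farthest inside the feasible boundary is preferred (Step 3). The toy model, band width, and requirement are assumptions; the actual method uses HD-EMI over chains of mappings [193–195].

```python
# A minimal sketch of the bottom-up / top-down logic of IDEM on one mapping.
import numpy as np

def property_model(x):
    return 40.0 + 30.0 * x - 25.0 * x ** 2          # toy structure-property relation

x_grid = np.linspace(0.0, 1.0, 51)                  # Step 1: discrete bottom-up evaluations
y_nom = property_model(x_grid)
band = 2.0 + 3.0 * x_grid                           # assumed model uncertainty, growing with x
y_low, y_high = y_nom - band, y_nom + band          # each point maps to an output interval

requirement = 45.0                                  # Step 2: top-down requirement (y >= 45)
feasible = y_low >= requirement                     # keep inputs whose whole interval satisfies it

# Step 3: prefer the solution farthest inside the feasible/constraint boundary
margin = y_low - requirement
best = x_grid[np.argmax(np.where(feasible, margin, -np.inf))]
print(f"feasible x range: [{x_grid[feasible].min():.2f}, {x_grid[feasible].max():.2f}]")
print(f"most robust x: {best:.2f}")
```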
The forward mappings in Step 1 of IDEM pertain either to
multiscale models, in which information from a model at one
scale is projected onto inputs into higher scale models, or multilevel models, in which composition is mapped into structure
and structure into properties. By remaining as far as possible
from constraints, IDEM attempts to pursue solutions that are
less sensitive to uncertainty, particularly in terms of solutions
that avoid infeasible designs. The IDEM process accounts for
the effect of aleatory uncertainty in the input parameters (such
as randomness in particle sizes and randomness in simulated
microstructure) on the outputs of the multiscale model, along
with the propagation of aleatory uncertainty along a design chain.
Instead of using a concurrent multiscale modeling approach,
an information-passing approach is used in IDEM. The primary
distinction is that instead of passing information about single
design points throughout the model chain, ranged sets of solutions
are passed across different levels. It allows for independent
execution of models at different scales, which simplifies the
uncertainty propagation within models. Since the process of
uncertainty analysis is decoupled from design exploration, IDEM
can be flexibly extended to other forms of uncertainty using
existing uncertainty propagation techniques. Recently, IDEM has
been extended to account for model parameter uncertainty [196].
Further work needs to be carried out to account for other
representations of epistemic uncertainty such as intervals and
fuzzy sets.
6.4. Uncertainty management
Uncertainty management is the process of deciding the levels
of uncertainty that are appropriate for design decisions and prioritizing uncertainty reduction alternatives based on the tradeoff between efforts and benefits. The focus in uncertainty management is
on deciding when to reduce epistemic uncertainty by gaining additional knowledge and when to reduce the complexity of the design
process by using simpler models and design processes. The notion
of introducing uncertainty by using simplified models during the
early conceptual design phases has received little attention from
the engineering design and the materials science communities.
When designing a material system, the focus is on achieving
properties such as strength and ductility that address performance
requirements. In contrast, when managing uncertainty in the
design process, focus is placed on design-process goals such as
reducing the complexity of the design process and achieving the
best design in the shortest possible time. The notion of designing the
design process is also important because of (a) evolving simulation
models, resulting in multiple fidelities of models at different
stages of a design process, and (b) significant model development
and execution costs, necessitating judicious use of computational
resources. Moreover, the needs for accurate information depend
on whether the goal is to narrow down the alternatives to a
specific class of materials (i.e., during early design phase) or to
optimize the composition and structure of a specific material
(i.e., during the later stages of design). In the early phases of
design, models and accurate representations of uncertainty may
not be available — they may also not be necessary. Uncertainty
management can be applied to problems such as refinement
of simulation models [197], selection of models from a set of
available models [198], and reducing the complexity of the design
process [199].
A potential approach to uncertainty management is to utilize concepts and mathematical constructs from information economics [200,201] such as value of information to quantify the
cost/benefit tradeoff to make the best design decisions in the presence of uncertainty. Information economics is the study of costs
and benefits of information in the context of decision-making.
Metrics from information economics, specifically the expected value
of information, are extended to account for both statistical variability and imprecision in models. The value of this added information
is referred to as the improvement in a designer’s decision-making
ability. Mitigation of uncertainty in simulation models is effectively
analogous to the acquisition of a new source of information, and
the set of results from the execution of this refined model is analogous to additional information for making a decision.
Different measures are developed for quantifying the benefit of
uncertainty mitigation in individual models [197,202]. These include (a) the improvement potential, and (b) the process performance
indicator (PPI). The improvement potential quantifies the maximum possible improvement in a designer’s decision that can be
achieved by refining a simulation model [197] (e.g., through increasing fidelity, improving the physical basis of the model, etc.).
If the improvement potential is high, the design decision can possibly be improved by refining the simulation model. However, if
the improvement potential is low, then the possibility of improving the design solution is low. Hence the level of refinement of the
model is appropriate for decision-making. The main assumption is
that the bounds on the outputs of models, which translate to the
bounds on payoff, are available throughout the design space. The
improvement potential measure is applied to problems involving
simulation model refinement [197] and the coupling between simulation models at different scales [15,199].
The PPI is developed for scenarios where information about
upper and lower bounds on the outputs of models are not available
but it is possible to generate accurate information at a few points in
the design space [198,203]. This is generally the case when physical
experiments are expensive and can only be executed at a few
points in the design space. Another application of the PPI is when
there are multiple models (or refinements of the same model) that
a designer can choose from, and one of the models is known to be
the most truthful. The PPI is calculated as the difference in payoff
between the outcomes of decisions made using the most truthful
model and the outcomes achievable using less complex models.
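A toy calculation of the PPI idea is sketched below: a design is selected with a simple model, and the payoff of that choice under the most truthful model is compared with the payoff of the choice the truthful model itself would recommend. Both model forms are hypothetical placeholders.

```python
# A minimal sketch of the process performance indicator (PPI) for model selection.
import numpy as np

designs = np.linspace(0.0, 1.0, 101)

def simple_model_payoff(x):
    return 1.0 - (x - 0.5) ** 2                     # cheap, idealized model (assumed form)

def truthful_model_payoff(x):
    return 1.0 - (x - 0.7) ** 2 - 0.2 * x           # most truthful available model (assumed form)

x_simple = designs[np.argmax(simple_model_payoff(designs))]     # decision using the cheap model
x_truth = designs[np.argmax(truthful_model_payoff(designs))]    # decision using the truthful model

ppi = truthful_model_payoff(x_truth) - truthful_model_payoff(x_simple)
print(f"payoff lost by deciding with the simple model (PPI): {ppi:.3f}")
# A small PPI suggests the simple model is adequate for this decision;
# a large PPI argues for investing in the more truthful (refined) model.
```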
The improvement potential and the process performance
indicator account for the benefits of gaining additional information
by reducing epistemic uncertainty but do not directly capture the
effort involved in reducing uncertainty. This is due to a lack of
holistic models that predict the effort involved in developing and
executing simulation models. The effort models must be predictive
in nature because they need to be employed before the simulation
is developed or refined. An effort model for ICME would be based
on the cost models from software development and systems
engineering [204]. The effort model accounts for three dimensions
of effort — person hours, monetary cost, and computational time.
Using this effort model, measures that account for both value and
effort are under development.
Constructs based on information economics can also be used to
choose between a concurrent multiscale modeling approach and a
hierarchical multiscale modeling approach. In the case of concurrent multiscale modeling, models at various scales are tightly coupled with each other. In contrast, hierarchical multiscale modeling
approaches typically involve passing information between models
in the form of ranges of values or surrogate models. Due to the tight
coupling in concurrent multiscale models, it is more challenging to
identify the impact of specific sources of uncertainties and specific
models on the overall output uncertainty. Additionally, the impact
of uncertainty propagation through individual models is difficult to
evaluate. Hence, it is more challenging to determine which models
within a multiscale framework should be refined to reduce the impact of uncertainty or which models can be simplified to reduce the
complexity of the process for initial design exploration purposes.
Information economics can be used to strike the balance between
the complexity of concurrent multiscale models and the loss of information in hierarchical multiscale models. These are very important practical considerations for tools for addressing uncertainty in
materials design.
Efficient and effective means for uncertainty management must
be integrated with the materials design strategy, and can be leveraged tremendously by incorporation of parallelized computational
evaluations of process–structure and structure–property relations.
Clearly, the parametric nature of the underlying modeling and simulation necessary to quantify sensitivity to variation of material
structure must rely on increasing computational power to enhance
feasibility and utility.
7. Verification and validation
Verification and validation were defined at the beginning of
Section 5. In the context of ICME, pursuit of V&V consists of
the following activities:
1. Individual model V&V — single model focusing on single length
and/or time scales.
2. Multiscale model V&V — single model or coupled set of models
comprising a simulation operating over multiple length/time
scales in concurrent or hierarchical manner.
3. Multi-physics model V&V — ensuring that the implementation
of a modeling framework spanning multiple phenomena is
consistent mathematically and physically.
4. Design process validation — ensuring that the configured design
process will yield a solution that satisfies design requirements.
5. Design (outcome) validation — comparing design outcomes to
system-level requirements.
It is noted that this decomposition is much broader than typical
notions of V&V applied either to models or design processes.
7.1. Individual model V&V
Development of a computational model involves various steps
such as conceptual modeling, mathematical modeling, discretization and algorithm selection, computer programming, numerical
solution, and solution representation [205]. Uncertainties and errors
may be introduced during each of these steps. The process of verification and validation of simulation models is illustrated in Fig. 7.
It consists of the following tasks [206]:
• Conceptual model validation (also called theoretical validation):
process of substantiating that the theories and assumptions
underlying a model are correct and that the representation of
the system is reasonable for the intended simulation study.
• Model verification: process of assuring that a computerized
model is a sufficiently accurate representation of the corresponding conceptual model.
• Operational validation: process of determining whether the
computerized model is sufficiently accurate for the needs of the
simulation study.
• Data validation: process of ensuring that all numerical data used
to support the models are accurate and consistent.
Verification and validation of models has recently received
significant attention in the simulation-based design community.
The Bayesian framework is one of the commonly adopted
frameworks for model validation [207]. Bayesian techniques for
model validation have also been extended to include epistemic
uncertainty regarding statistical quantities [208]. Another widely
adopted technique involves concurrently validating models and
exploring the design space [209]. The traditional view of validation
of simulation models is that if the outputs of the models are close
enough to the experimental data, the models are valid. However,
this view is much too limiting in ICME, which is characterized
by a hierarchy of material length scales and focuses on designing
novel materials that may not have been previously realized
or characterized. Furthermore, it is essential to recognize that
experimental data also have uncertainty. This uncertainty typically
increases at lower scales owing to an interplay between spatial
resolution of the measurement device/method and duration of
measurement signal accumulation. In fact at the molecular level,
the uncertainty in experiments due to signal resolution and
accumulation may exceed that of models. It may be infeasible
or too expensive to run the experiments for complete model
validation before using them for design. Hence, validation activities
and design activities need to be carried out in an iterative manner.
(Fig. 7 diagram elements: Reality of Interest, Mathematical Model, Computer Model, and Simulation Outcomes, connected by Modeling, Implementation, and Simulation activities; Confirmation, Verification, and Validation are the assessment activities distinguished from the modeling and simulation activities.)
Fig. 7. Model verification and validation process [206].
Models are used to guide design, and the design process guides the
validation experiments.
There is another validation-related challenge in ICME due to the
focus on design. When models are developed and employed to design materials that do not exist, the models cannot be experimentally validated before the system is designed. Experiments are also
characterized by epistemic uncertainty such as the lack of knowledge regarding the system configuration. Hence, validation using
experiments must account for both aleatory and epistemic uncertainty associated with the experiments. In that sense, the distinction between simulation models and experiments is blurred.
Due to the specialized nature of models, their reuse is currently
severely limited in ICME. It is generally assumed that someone who
is familiar with the complete implementation details of the model
is always available to customize or adapt the models for different
scenarios. The notion of using models as black boxes is generally
discouraged because their validity outside the ranges for which
they are developed is unknown and generally undocumented.
Currently, significant attention is not given to the reuse of models
within ICME. While the models capture a significant amount
of domain-specific knowledge, they are fragile in nature. Their
fragility is not due to a non-robust implementation but due to a
lack of documentation of their capabilities, scope of application,
and limitations. Hence, the models cannot be used by system
designers for integration into the design process as reusable, plug-and-play, ‘‘black box’’ units as might be desired for a robust
methodology. As ICME develops we maintain that significant
efforts are needed to embed knowledge and methods regarding
the validity of models [210] so that they can be used by designers
who may be generally familiar with the underlying physics but
not familiar with all the internal details of the model and its
implementation.
Finally, the validity of a model depends on its purpose. Model
validation is effectively the process of determining the degree to
which a model is an accurate representation of the real world from
the perspective of the intended uses of the model. As discussed in the
uncertainty management section above, the designers may prefer
to use simpler models during the early design phases to narrow
down the design space. Hence, there is a need for different levels
of scope, maturity and fidelity of models within the design process.
There is a need for a classification scheme to classify models based
on their scope and levels of fidelity [211], perhaps relating to a
combination of model scope, readiness, and material length scale
level of input/output structure. The systems designers can use this
classification scheme in association with information economics
constructs to choose appropriate models that are valid within a
given application context.
7.2. Multiscale model V&V
Valid models for given characteristic length and time scales
may not result in valid multiscale models across scales. Hence,
multiscale model V&V is an important aspect of the overall V&V
strategy, and very little attention has been devoted to it. Multiscale
model V&V consists of two aspects:
(a) ensuring compatibility between the domains of validity for
individual models, and
(b) estimating the propagation of uncertainty through the chain of
models.
Compatibility assessment is the process of determining whether
the domain of inputs of an upper-level model is consistent with
the domain of outputs of the lower-level model. Individual models
are generally valid within a domain of inputs. The input domain
can be defined as a hypercube in terms of the lower and upper
bounds on the input variables, [x_lb, x_ub]. The set of inputs within
this hypercube generates an output domain, which can be nonconvex with disjoint sub-domains. This output domain serves
as the input domain to a higher-level model. Multiscale model
V&V involves ensuring that the output domain of the lower-level model is a subset of the valid input domain of the upper-level model. Otherwise, the multilevel model would result in
erroneous system-level predictions. Techniques such as support
vector machines have been used for mathematically quantifying
the validity domains of individual models [212].
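A minimal sketch of such a compatibility check is given below: the lower-level model is sampled over its validated input hypercube and the outputs are tested against the validated input hypercube of the upper-level model. The two toy models and their validity bounds are assumed for illustration; support vector machine descriptions of non-convex validity domains [212] would replace the simple box test.

```python
# A minimal sketch of a domain-compatibility check between chained models.
import numpy as np

rng = np.random.default_rng(5)

def lower_level_model(x):
    # e.g., maps (composition, temperature) to (phase fraction, grain size) -- toy forms
    return np.column_stack([0.2 + 0.6 * x[:, 0], 5.0 + 40.0 * x[:, 1] ** 2])

x_lb, x_ub = np.array([0.0, 0.0]), np.array([1.0, 1.0])     # lower model: validated inputs
y_lb, y_ub = np.array([0.1, 0.0]), np.array([0.9, 30.0])    # upper model: validated inputs

x_samples = x_lb + (x_ub - x_lb) * rng.random((20_000, 2))  # cover the lower hypercube
y_samples = lower_level_model(x_samples)

inside = np.all((y_samples >= y_lb) & (y_samples <= y_ub), axis=1)
print(f"fraction of lower-level outputs inside the upper-level validity domain: {inside.mean():.3f}")
# A fraction below 1.0 flags inputs for which the chained (multiscale) prediction
# would rely on the upper-level model outside its validated domain.
```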
The second aspect of multiscale modeling V&V is related to
the effect of uncertainty propagation across a chain of models.
Uncertainty in models at lower material length scales and/or in
the process–structure–property mappings may be amplified as
information flows from lower-level to upper-level models. Both
types of chains are relevant to materials design, the first in
terms of multiscale modeling and the latter in terms of multilevel design. The goal of V&V is to ensure that the effects of
uncertainty at the lower-level(s) do not amplify beyond the
desired uncertainty levels at which design decisions must be made.
Uncertainty propagation can be viewed from both bottom–up
or top–down perspectives. From the bottom–up perspective, the
effect of lower-level uncertainties on system-level prediction is
determined. From a top–down view, the allowable uncertainty
in system-level predictions is used to determine the allowable
uncertainty in lower-scale/level models.
7.3. Design process V&V
Design processes involve decisions regarding how to search the
space of alternatives so that the design targets can be achieved at
a minimum cost. It involves decisions such as how to decompose
the design problem, how to reduce the design space, how to sequence the fixation of design parameters across iterations, which models to
develop, how to invest the resources in achieving the design
objectives, etc. These decisions affect not only the final design
outcome but also affect the amount of effort required.
Analogous to model validation, where the goal is to ensure
that models accurately predict the desired response, the goal
of validating design processes is to ensure that a given design
process will yield a solution that satisfies the design requirements.
Design process validation is important to guide the selection
of appropriate design methods and to support the development
and evaluation of new methods. In simulation-based multilevel
design, the design processes represent the manner in which the
networks of decisions and simulation models are configured. One
of the approaches for validating design methods is the validation
square [213,214]. The validation square, shown in Fig. 8, consists
of four phases:
(Fig. 8 quadrants: Theoretical Structural Validity, Empirical Structural Validity, Empirical Performance Validity, and Theoretical Performance Validity, numbered (1)–(4) as in the list below.)
Fig. 8. Four phases in the validation square [213,214].
1. Theoretical structural validation: Internal consistency of design
method, i.e., the logical soundness of its constructs both
individually and integrated.
2. Empirical structural validation: The appropriateness of the
chosen example problem(s) intended to test design method.
3. Empirical performance validation: The ability to produce useful
results for the chosen example problem(s).
4. Theoretical performance validation: The ability to produce useful
results beyond the chosen example problem(s). This requires a
‘leap of faith’ which is eased by the process (1)–(3) of building
confidence in the general usefulness of the design method.
7.4. Design (outcome) validation
Li et al. [215] argue that validation should be done on the design
rather than on the computational models used to obtain it. The goal of
design validation is to gain confidence in the resulting design (of
the material) rather than the simulation model(s) used for design.
Validation of the design involves determining whether the system
as designed will perform as expected. The design outcomes are
evaluated against the system-level design requirements. This is
generally carried out using experiments.
8. Closure: advancing computational modeling and simulation
to pursue the ICME vision
The vision of Integrated Computational Materials Engineering
is compelling in terms of value-added technology that reduces
time to market for new products that exploit advanced, tailored
materials. This article addresses underdeveloped concepts/tools
within the portfolio, as well as some important points regarding
the contribution of multiscale modeling to the overall materials
design process. Effectively, the role of computational modeling and
simulation is so pervasive that the probability of success of ICME
is inextricably intertwined with addressing certain scientific and
technological ‘‘pressure points’’ as follows:
• ICME extends beyond informatics and combinatorics pursued
at the level of individual phases, addressing hierarchical microstructures/structures of materials and embracing the ‘‘systems’’ character of integrating materials design with product
design and development. An engineering systems approach is
essential.
• Recognition of material structure hierarchy is of first order importance in multiscale modeling and multi-level materials design. Multiscale modeling and multi-level materials design are
complementary but distinct enterprises, with the former serving the needs of the latter in the context of ICME.
Fig. 9. Expanded elements of the ‘materials innovation infrastructure’.
• Novel methods for inverse design including knowledge systems
and process path design are underdeveloped and require attention.
• Uncertainty is a pervasive and complex issue to address, with
uncertainty quantification, propagation, mitigation and management carefully distinguished. Improvement of physically-based models provides a promising foundational technology.
Approaches to uncertainty that recognize and embrace the full
complexity of ICME are in their infancy.
• Integration of multiscale modeling and multi-level design of
materials with manufacturing and systems optimization is
essential to pursuing the vision of ICME.
In advancing methodologies and tools to address ICME, it is
essential to maintain a consistent long term vision among industry,
academia, and government. A critical assessment of prospects for
progress towards the ICME vision in this regard was recently
offered by McDowell [216]. ICME is an engineering-directed
activity, requiring distributed, collaborative, systems-based design.
Clearly, one consistent thread throughout the past two decades
of developing this field of integrated design of materials and
products is the trend towards increasing emphasis on the role
of computational materials science and mechanics across a broad
range of length and time scales, depending on application. Much is
to be gained by extending beyond the conventional computational
materials science community to incorporate important advances in
multiscale modeling and design optimization in the computational
mechanics of materials and structures communities, as well as the
MDO and manufacturing communities. The tools and methods that
are foundational to ICME effectively constitute the elements of
an expanded ‘materials innovation infrastructure’ in the Materials
Genome Initiative [16], as shown in Fig. 9.
Acknowledgments
JHP acknowledges financial support from National Science
Foundation awards CMMI-0954447 and CMMI-1042350. SRK
acknowledges support from Office of Naval Research (ONR) award
N00014-11-1-0759. DLM acknowledges the support of the Carter
N. Paden, Jr. Distinguished Chair in Metals Processing, as well
as the NSF Industry/University Cooperative Research Center for
Computational Materials Design (CCMD), supported by CCMD
members, through grants NSF IIP-0541674 (Penn State) and IIP-0541678 (Georgia Tech). Any opinions, findings and conclusions or
recommendations expressed in this publication are those of the
author(s) and do not necessarily reflect the views of the NSF or
ONR.
References
[1] Ashby MF. Materials selection in mechanical design. 2nd ed. Oxford, UK:
Butterworth-Heinemann; 1999.
[2] Broderick S, Suh C, Nowers J, Vogel B, Mallapragada S, Narasimhan B, et al.
Informatics for combinatorial materials science. JOM Journal of the Minerals,
Metals and Materials Society 2008;60:56–9.
[3] Billinge SJE, Rajan K, Sinnot SB. From cyberinfrastructure to cyberdiscovery in materials science: enhancing outcomes in materials research,
Report of NSF-Sponsored workshop held in Arlington, VA, August 3–5.
http://www.mcc.uiuc.edu/nsf/ciw_2006/, accessed October 19, 2011. See
also http://www.nsf.gov/crssprgm/cdi/, accessed October 19, 2011, 2006.
[4] Liu ZK. First principles calculations and Calphad modeling of thermodynamics. Journal of Phase Equilibria and Diffusion 2009;30:517–34.
[5] Hautier G, Fischer C, Ehrlacher V, Jain A, Ceder G. Data mined ionic
substitutions for the discovery of new compounds. Inorganic Chemistry
2011;50:656–63.
[6] Dudiy SV, Zunger A. Searching for alloy configurations with target physical
properties: impurity design via a genetic algorithm inverse band structure
approach. Physical Review Letters 2006;97:046401. (4 pages).
[7] Pollock TM, Allison JE, Backman DG, Boyce MC, Gersh M, Holm EA,
et al. Integrated computational materials engineering: A transformational
discipline for improved competitiveness and national security. National
Materials Advisory Board, NAE, National Academies Press; 2008. Report
Number: ISBN-10: 0-309-11999-5.
[8] Apelian D, Alleyne A, Handwerker CA, Hopkins D, Isaacs JA, Olson GB, et al.
Accelerating technology transition: Bridging the valley of death for materials
and processes in defense systems. National Materials Advisory Board, NAE,
National Academies Press; 2004. Report Number: ISBN-10: 0-309-09317-1.
[9] Allison J, Backman D, Christodoulou L. Integrated computational materials
engineering: a new paradigm for the global materials profession. JOM Journal
of the Minerals, Metals and Materials Society 2006;58:25–7.
[10] Olson GB. Computational design of hierarchically structured materials.
Science 1997;277:1237–42.
[11] McDowell DL, Story TL. New directions in materials design science and
engineering (MDS&E), Report of a NSF DMR-sponsored workshop, October
19-21, 1998. http://www.me.gatech.edu/paden/material-design/md_se.pdf,
accessed May 5, 2012.
[12] McDowell DL. Simulation-assisted materials design for the concurrent design
of materials and products. JOM Journal of the Minerals, Metals and Materials
Society 2007;59:21–5.
[13] McDowell DL, Olson GB. Concurrent design of hierarchical materials and
structures. Scientific Modeling and Simulation 2008;15:207–40.
[14] McDowell DL, Backman D. Simulation-assisted design and accelerated
insertion of materials. In: Ghosh S, Dimiduk D, editors. Computational
methods for microstructure-property relationships. Springer; 2010.
[15] McDowell DL, Panchal JH, Choi H-J, Seepersad CC, Allen JK, Mistree F.
Integrated design of multiscale, multifunctional materials and products.
Elsevier; 2009.
[16] Materials genome initiative for global competitiveness, National Science
and Technology Council, Committee on Technology, June 2011. http://www.
whitehouse.gov/sites/default/files/microsites/ostp/materials_genome_
initiative-final.pdf, accessed May 5, 2012.
[17] Freiman S, Madsen LD, Rumble J. A perspective on materials databases.
American Ceramic Society Bulletin 2011;90:28–32.
[18] Milton GW. The theory of composites. Cambridge monographs on applied
and computational mathematics, 2001.
[19] Torquato S. Random heterogeneous materials. New York: Springer-Verlag;
2002.
[20] Fullwood DT, Niezgoda SR, Adams BL, Kalidindi SR. Microstructure sensitive
design for performance optimization. Progress in Materials Science 2010;55:
477–562.
[21] Niezgoda SR, Yabansu YC, Kalidindi SR. Understanding and visualizing
microstructure and microstructure variance as a stochastic process. Acta
Materialia 2011;59:6387–400.
[22] Kalidindi SR, Niezgoda SR, Salem AA. Microstructure informatics using
higher-order statistics and efficient data-mining protocols. JOM Journal of the
Minerals, Metals and Materials Society 2011;63:34–41.
[23] McDowell DL. Materials design: a useful research focus for inelastic behavior
of structural metals. Theoretical and Applied Fracture Mechanics 2001;37:
245–59. Special issue on the prospects of mesomechanics in the 21st century:
current thinking on multiscale mechanics problems.
[24] McDowell DL. Viscoplasticity of heterogeneous metallic materials. Materials
Science and Engineering R 2008;62:67–123.
[25] McDowell DL. A perspective on trends in multiscale plasticity. International
Journal of Plasticity 2010;26:1280–309.
[26] Ziegler H. In: Becker E, Budiansky B, Koiter WT, Lauwerier HA, editors. An
introduction to thermomechanics. 2nd ed. North Holland series in applied
mathematics and mechanics, vol. 21. Amsterdam–New York: North Holland;
1983.
[27] Drucker DC. Material response and continuum relations: or from microscales
to macroscales. ASME Journal of Engineering Materials and Technology 1984;
106:286–9.
[28] McDowell DL. Internal state variable theory. In: Yip S, Horstemeyer MF,
editors. Handbook of materials modeling, Part A: methods. the Netherlands:
Springer; 2005. p. 1151–70.
[29] Gumbsch P. An atomistic study of brittle fracture: toward explicit failure
criteria from atomistic modeling. Journal of Materials Research 1995;10:
2897–907.
[30] Weinan E, Huang Z. Matching conditions in atomistic-continuum modeling
of materials. Physical Review Letters 2001;87:135501.
[31] Shilkrot LE, Curtin WA, Miller RE. A coupled atomistic/continuum model of
defects in solids. Journal of the Mechanics and Physics of Solids 2002;50:
2085–106.
[32] Shilkrot LE, Miller RE, Curtin WA. Multiscale plasticity modeling: coupled
atomistics and discrete dislocation mechanics. Journal of the Mechanics and
Physics of Solids 2004;52:755–87.
[33] Shiari B, Miller RE, Curtin WA. Coupled atomistic/discrete dislocation
simulations of nanoindentation at finite temperature. ASME Journal of
Engineering Materials and Technology 2005;127:358–68.
[34] Qu S, Shastry V, Curtin WA, Miller RE. A finite-temperature dynamic
coupled atomistic/discrete dislocation method. Modelling and Simulation in
Materials Science and Engineering 2005;13:1101–18.
[35] Nair AK, Warner DH, Hennig RG, Curtin WA. Coupling quantum and
continuum scales to predict crack tip dislocation nucleation. Scripta
Materialia 2010;63:1212–5.
[36] Tadmor EB, Ortiz M, Phillips R. Quasicontinuum analysis of defects in solids.
Philosophical Magazine A 1996;73:1529–63.
[37] Tadmor EB, Phillips R, Ortiz M. Mixed atomistic and continuum models of
deformation in solids. Langmuir 1996;12:4529–34.
[38] Shenoy VB, Miller R, Tadmor EB, Phillips R, Ortiz M. Quasicontinuum models
of interfacial structure and deformation. Physical Review Letters 1998;80:
742–5.
[39] Miller R, Tadmor EB, Phillips R, Ortiz M. Quasicontinuum simulation of
fracture at the atomic scale. Modelling and Simulation in Materials Science
and Engineering 1998;6:607–38.
[40] Shenoy VB, Miller R, Tadmor EB, Rodney D, Phillips R, Ortiz M. An adaptive
finite element approach to atomic-scale mechanics — the quasicontinuum
method. Journal of the Mechanics and Physics of Solids 1999;47:611–42.
[41] Curtarolo S, Ceder G. Dynamics of an inhomogeneously coarse grained
multiscale system. Physical Review Letters 2002;88:255504.
[42] Zhou M, McDowell DL. Equivalent continuum for dynamically deforming
atomistic particle systems. Philosophical Magazine A 2002;82:2547–74.
[43] Chung PW, Namburu RR. On a formulation for a multiscale atomistic-continuum homogenization method. International Journal of Solids and
Structures 2003;40:2563–88.
[44] Zhou M. Thermomechanical continuum representation of atomistic deformation at arbitrary size scales. Proceedings of the Royal Society A 2005;461:
3437–72.
[45] Dupuy LM, Tadmor EB, Miller RE, Phillips R. Finite-temperature quasicontinuum: molecular dynamics without all the atoms. Physical Review Letters
2005;95:060202.
[46] Ramasubramaniam A, Carter EA. Coupled quantum-atomistic and quantum-continuum mechanics methods in materials research. MRS Bulletin 2007;32:
913–8.
[47] Rudd RE, Broughton JQ. Coarse-grained molecular dynamics and the atomic
limit of finite elements. Physical Review B 1998;58:R5893–6.
[48] Rudd RE, Broughton JQ. Concurrent coupling of length scales in solid state
systems. Physica Status Solidi B-Basic Research 2000;217:251–91.
[49] Rudd RE, Broughton JQ. Coarse-grained molecular dynamics: nonlinear finite
elements and finite temperature. Physical Review B 2005;72:144104.
[50] Kulkarni Y, Knap J, Ortiz M. A variational approach to coarse-graining of
equilibrium and non-equilibrium atomistic description at finite temperature.
Journal of the Mechanics and Physics of Solids 2008;56:1417–49.
[51] Chen Y. Reformulation of microscopic balance equations for multiscale
materials modeling. Journal of Chemical Physics 2009;130:134706.
[52] Wang YU, Jin YM, Cuitino AM, Khachaturyan AG. Nanoscale phase field
microelasticity theory of dislocations: model and 3-D simulations. Acta
Materialia 2001;49:1847–57.
[53] Chen L-Q. Phase-field models for microstructure evolution. Annual Review of
Materials Research 2002;32:113–40.
[54] Shen C, Wang Y. Modeling dislocation network and dislocation–precipitate
interaction at mesoscopic scale using phase field method. International
Journal of Multiscale Computational Engineering 2003;1:91–104.
[55] Belak J, Turchi PEA, Dorr MR, Richards DF, Fattebert J-L, Wickett ME. et
al. Coupling of atomistic and meso-scale phase-field modeling of rapid
solidification, in 2009 APS March Meeting, March 16–20, Pittsburgh, PA,
2009.
[56] Ghosh S, Lee K, Raghavan P. A multi-level computational model for multiscale damage analysis in composite and porous materials. International
Journal of Solids and Structures 2001;38:2335–85.
[57] Liu WK, Karpov EG, Zhang S, Park HS. An introduction to computational
nanomechanics and materials. Computer Methods in Applied Mechanics and
Engineering 2004;193:1529–78.
[58] Liu WK, Park HS, Qian D, Karpov EG, Kadowaki H, Wagner GJ. Bridging scale
methods for nanomechanics and materials. Computer Methods in Applied
Mechanics and Engineering 2006;195:1407–21.
[59] Oden JT, Prudhomme S, Romkes A, Bauman PT. Multiscale modeling of
physical phenomena: adaptive control of models. SIAM Journal on Scientific
Computing 2006;29:2359–89.
[60] Ghosh S, Bai J, Raghavan P. Concurrent multi-level model for damage
evolution in microstructurally debonding composites. Mechanics of
Materials 2007;39:241–66.
[61] Nuggehally MA, Shephard MS, Picu CR, Fish J. Adaptive model selection
procedure for concurrent multiscale problems. Journal of Multiscale
Computational Engineering 2007;5:369–86.
[62] Hao S, Moran B, Liu WK, Olson GB. A hierarchical multi-physics model for
design of high toughness steels. Journal of Computer-Aided Materials Design
2003;10:99–142.
[63] Hao S, Liu WK, Moran B, Vernerey F, Olson GB. Multi-scale constitutive model
and computational framework for the design of ultra-high strength, high
toughness steels. Computer Methods in Applied Mechanics and Engineering
2004;193:1865–908.
[64] Shenoy M, Tjiptowidjojo Y, McDowell DL. Microstructure-sensitive modeling
of polycrystalline IN 100. International Journal of Plasticity 2008;24:
1694–730.
[65] Groh S, Marin EB, Horstemeyer MF, Zbib HM. Multiscale modeling of the
plasticity in an aluminum single crystal. International Journal of Plasticity
2009;25:1456–73.
[66] Suquet PM. Homogenization techniques for composite media. Lecture notes
in physics, Berlin: Springer; 1987.
[67] Mura T. Micromechanics of defects in solids. 2nd ed. The Netherlands: Kluwer
Academic Publishers; 1987.
[68] Lebensohn RA, Liu Y, Ponte Castañeda P. Macroscopic properties and field
fluctuations in model power-law polycrystals: full-field solutions versus self-consistent estimates. Proceedings of the Royal Society of London A 2004;460:
1381–405.
[69] Warner DH, Sansoz F, Molinari JF. Atomistic based continuum investigation
of plastic deformation in nanocrystalline copper. International Journal of
Plasticity 2006;22:754–74.
[70] Capolungo L, Spearot DE, Cherkaoui M, McDowell DL, Qu J, Jacob K.
Dislocation nucleation from bicrystal interfaces and grain boundary ledges:
relationship to nanocrystalline deformation. Journal of the Mechanics and
Physics of Solids 2007;55:2300–27.
[71] Needleman A, Rice JR. Plastic creep flow effects in the diffusive cavitation of
grain-boundaries. Acta Metallurgica 1980;28:1315–32.
[72] Moldovan D, Wolf D, Phillpot SR, Haslam AJ. Role of grain rotation during
grain growth in a columnar microstructure by mesoscale simulation. Acta
Materialia 2002;50:3397–414.
[73] Cleri F, D’Agostino G, Satta A, Colombo L. Microstructure evolution from the
atomic scale up. Computational Materials Science 2002;24:21–7.
[74] Kouznetsova V, Geers MGD, Brekelmans WAM. Multi-scale constitutive modelling of heterogeneous materials with a gradient-enhanced computational
homogenization scheme. International Journal for Numerical Methods in
Engineering 2002;54:1235–60.
[75] Kouznetsova VG, Geers MGD, Brekelmans WAM. Multi-scale second-order
computational homogenization of multi-phase materials: a nested finite
element solution strategy. Computer Methods in Applied Mechanics and
Engineering 2004;193:5525–50.
[76] Larsson R, Diebels S. A second-order homogenization procedure for multiscale analysis based on micropolar kinematics. International Journal of
Numerical Methods in Engineering 2007;69:2485–512.
[77] Vernerey F, Liu WK, Moran B. Multi-scale micromorphic theory for
hierarchical materials. Journal of the Mechanics and Physics of Solids 2007;
55:2603–51.
[78] Luscher DJ, McDowell DL, Bronkhorst CA. A second gradient theoretical
framework for hierarchical multiscale modeling of materials. International
Journal of Plasticity 2010;26:1248–75.
[79] Luscher DJ, McDowell DL, Bronkhorst CA. Essential features of fine scale
boundary conditions for second gradient multiscale homogenization of statistical volume elements, Journal of Multiscale Computational Engineering
(in press).
[80] Amodeo RJ, Ghoniem NM. A review of experimental-observations and
theoretical-models of dislocation cells and subgrains. Res Mechanica 1988;
23:137–60.
[81] Kubin LP, Canova G. The modeling of dislocation patterns. Scripta Metallurgica 1992;27:957–62.
[82] Groma I. Link between the microscopic and mesoscopic length-scale
description of the collective behavior of dislocations. Physical Review B 1997;
56:5807–13.
[83] Zaiser M. Statistical modeling of dislocation systems. Material Science and
Engineering A 2001;309–310:304–15.
[84] Arsenlis A, Parks DM. Crystallographic aspects of geometrically-necessary
and statistically-stored dislocation density. Acta Materialia 1999;47:
1597–611.
[85] Zbib HM, de la Rubia TD, Bulatov V. A multiscale model of plasticity based
on discrete dislocation dynamics. ASME Journal of Engineering Materials and
Technology 2002;124:78–87.
[86] Zbib HM, de la Rubia TD. A multiscale model of plasticity. International
Journal of Plasticity 2002;18:1133–63.
[87] Zbib HM, Khaleel MA. Recent advances in multiscale modeling of plasticity.
International Journal of Plasticity 2004;20(6):979–1182.
[88] Rickman JM, LeSar R. Issues in the coarse-graining of dislocation energetics
and dynamics. Scripta Materialia 2006;54:735–9.
[89] Ohashi T, Kawamukai M, Zbib H. A multiscale approach for modeling scale-dependent yield stress in polycrystalline metals. International Journal of
Plasticity 2007;23:897–914.
[90] Acharya A. A model of crystal plasticity based on the theory of continuously
distributed dislocations. Journal of the Mechanics and Physics of Solids 2001;
49:761–84.
[91] Acharya A. Driving forces and boundary conditions in continuum dislocation
mechanics. Proceedings of the Royal Society of London A 2003;459:1343–63.
[92] Acharya A, Beaudoin AJ, Miller R. New perspectives in plasticity theory:
dislocation nucleation, waves and partial continuity of the plastic strain rate.
Mathematics and Mechanics of Solids 2008;13:292–315.
[93] Acharya A, Roy A, Sawant A. Continuum theory and methods for coarse-grained, mesoscopic plasticity. Scripta Materialia 2006;54:705–10.
[94] Kuhlmann-Wilsdorf D. Theory of plastic deformation: properties of low
energy dislocation structures. Materials Science and Engineering A 1989;113:
1–41.
[95] Leffers T. Lattice rotations during plastic deformation with grain subdivision.
Materials Science Forum 1994;157–162:1815–20.
[96] Hughes DA, Liu Q, Chrzan DC, Hansen N. Scaling of microstructural
parameters: misorientations of deformation induced boundaries. Acta
Materialia 1997;45:105–12.
[97] Ortiz M, Repetto EA, Stainier L. A theory of subgrain dislocation structures.
Journal of the Mechanics and Physics of Solids 2000;48:2077–114.
[98] Carstensen C, Hackl K, Mielke A. Non-convex potentials and microstructures
in finite-strain plasticity. Proceedings of the Royal Society of London A 2002;
458:299–317.
[99] Li J. The mechanics and physics of defect nucleation. MRS Bulletin 2007;32:
151–9.
[100] Zhu T, Li L, Samanta A, Kim HG, Suresh S. Interfacial plasticity governs strain
rate sensitivity and ductility in nanostructured metals. Proceedings of the
National Academy of Sciences of the United States of America 2007;104:
3031–6.
[101] Zhu T, Li L, Samanta A, Leach A, Gall K. Temperature and strain-rate
dependence of surface dislocation nucleation. Physical Review Letters 2008;
100:025502.
[102] Deo CS, Srolovitz DJ. First passage time Markov chain analysis of rare events
for kinetic Monte Carlo: double kink nucleation during dislocation glide.
Modeling and Simulation in Materials Science and Engineering 2002;10:
581–96.
[103] Li J, Kevrekidis PG, Gear CW, Kevrekidis IG. Deciding the nature of the coarse
equation through microscopic simulations: the baby-bathwater scheme.
SIAM Review 2007;49:469–87.
[104] Needleman A. Micromechanical modelling of interfacial decohesion. Ultramicroscopy 1992;40(3):203–14.
[105] Camacho G, Ortiz M. Computational modeling of impact damage in brittle
materials. International Journal of Solids Structures 1996;33:2899–938.
[106] Zhai J, Tomar V, Zhou M. Micromechanical simulation of dynamic fracture
using the cohesive finite element method. Journal of Engineering Materials
and Technology, ASME 2004;126:179–91.
[107] Tomar V, Zhou M. Deterministic and stochastic analyses of fracture processes
in a brittle microstructure system. Engineering Fracture Mechanics 2005;72:
1920–41.
[108] Hao S, Moran B, Liu W-K, Olson GB. A hierarchical multi-physics model for
design of high toughness steels. Journal of Computer-Aided Materials Design
2003;10:99–142.
[109] CES selector, Granta Material Intelligence, 2008.
[110] Bendsoe MP, Sigmund O. Topology optimization theory, methods and
applications. New York: Springer-Verlag; 2003.
[111] Sigmund O, Torquato S. Design of materials with extreme thermal expansion
using a three-phase topology optimization method. Journal of the Mechanics
and Physics of Solids 1997;45:1037–67.
[112] Sigmund O. Materials with prescribed constitutive parameters: an inverse
homogenization problem. International Journal of Solids and Structures
1994;31:2313–29.
[113] Sigmund O. Tailoring materials with prescribed elastic properties. Mechanics
of Materials 1995;20:351–68.
[114] Gibiansky LV, Sigmund O. Multiphase composites with extremal bulk
modulus. Journal of the Mechanics and Physics of Solids 2000;48:461–98.
[115] Neves MM, Rodrigues H, Guedes JM. Optimal design of periodic linear elastic
microstructures. Computers & Structures 2000;76:421–9.
[116] Gu S, Lu TJ, Evans AG. On the design of two-dimensional cellular metals for
combined heat dissipation and structural load capacity. International Journal
of Heat and Mass Transfer 2001;44:2163–75.
[117] Knezevic M, Kalidindi SR. Fast computation of first-order elastic-plastic
closures for polycrystalline cubic-orthorhombic microstructures. Computational Materials Science 2007;39:643–8.
[118] Adams BL, Henrie A, Henrie B, Lyon M, Kalidindi SR, Garmestani H.
Microstructure-sensitive design of a compliant beam. Journal of the
Mechanics and Physics of Solids 2001;49:1639–63.
[119] Proust G, Kalidindi SR. Procedures for construction of anisotropic elastic-plastic property closures for face-centered cubic polycrystals using first-order bounding relations. Journal of the Mechanics and Physics of Solids
2006;54:1744–62.
[120] Houskamp JR, Proust G, Kalidindi SR. Integration of microstructure-sensitive
design with finite element methods: elastic-plastic case studies in FCC
polycrystals. International Journal for Multiscale Computational Engineering
2007;5:261–72.
[121] Knezevic M, Kalidindi SR, Mishra RK. Delineation of first-order closures for
plastic properties requiring explicit consideration of strain hardening and
crystallographic texture evolution. International Journal of Plasticity 2008;
24:327–42.
[122] Wu X, Proust G, Knezevic M, Kalidindi SR. Elastic-plastic property closures
for hexagonal close-packed polycrystalline metals using first-order bounding
theories. Acta Materialia 2007;55:2729–37.
[123] Shaffer JB, Knezevic M, Kalidindi SR. Building texture evolution networks
for deformation processing of polycrystalline FCC metals using spectral
approaches: applications to process design for targeted performance.
International Journal of Plasticity 2010;26:1183–94.
[124] Fullwood DT, Adams BL, Kalidindi SR. Generalized Pareto front methods
applied to second-order material property closures. Computational Materials
Science 2007;38:788–99.
[125] Adams BL, Xiang G, Kalidindi SR. Finite approximations to the second-order
properties closure in single phase polycrystals. Acta Materialia 2005;53:
3563–77.
[126] Bunge H-J. Texture analysis in materials science. Mathematical methods.
Göttingen: Cuvillier Verlag; 1993.
[127] Fast T, Niezgoda SR, Kalidindi SR. A new framework for computationally
efficient structure-structure evolution linkages to facilitate high-fidelity
scale bridging in multi-scale materials models. Acta Materialia 2011;59:
699–707.
[128] Kalidindi SR, Niezgoda SR, Landi G, Vachhani S, Fast A. A novel framework
for building materials knowledge systems. Computers Materials & Continua
2009;17:103–26.
[129] Landi G, Kalidindi SR. Thermo-elastic localization relationships for multiphase composites. Computers, Materials & Continua 2010;16:273–93.
[130] Landi G, Niezgoda SR, Kalidindi SR. Multi-scale modeling of elastic response
of three-dimensional voxel-based microstructure datasets using novel
DFT-based knowledge systems. Acta Materialia 2009;58:2716–25.
[131] Fast T, Kalidindi SR. Formulation and calibration of higher-order elastic
localization relationships using the MKS approach. Acta Materialia 2011;59:
4595–605.
[132] Kröner E. Statistical modelling. In: Gittus J, Zarka J, editors. Modelling small
deformations of polycrystals. London: Elsevier Science Publishers; 1986.
p. 229–91.
[133] Kröner E. Bounds for effective elastic moduli of disordered materials. Journal
of the Mechanics and Physics of Solids 1977;25:137–55.
[134] Volterra V. Theory of functionals and integral and integro-differential
equations. New York: Dover; 1959.
[135] Wiener N. Nonlinear problems in random theory. Cambridge, MA: MIT Press;
1958.
[136] Garmestani H, Lin S, Adams BL, Ahzi S. Statistical continuum theory for large
plastic deformation of polycrystalline materials. Journal of the Mechanics and
Physics of Solids 2001;49:589–607.
[137] Lin S, Garmestani H. Statistical continuum mechanics analysis of an elastic
two-isotropic-phase composite material. Composites Part B: Engineering
2000;31:39–46.
[138] Mason TA, Adams BL. Use of microstructural statistics in predicting
polycrystalline material properties. Metallurgical and Materials Transactions
A: Physical Metallurgy and Materials Science 1999;30:969–79.
[139] Garmestani H, Lin S, Adams BL. Statistical continuum theory for inelastic
behavior of a two-phase medium. International Journal of Plasticity 1998;
14:719–31.
[140] Beran MJ, Mason TA, Adams BL, Olsen T. Bounding elastic constants of an
orthotropic polycrystal using measurements of the microstructure. Journal
of the Mechanics and Physics of Solids 1996;44:1543–63.
[141] Beran MJ. Statistical continuum theories. New York: Interscience Publishers;
1968.
[142] Milton GW. In: Ciarlet PG, Iserles A, Khon RV, Wright MH, editors. The
theory of composites. Cambridge monographs on applied and computational
mathematics, Cambridge: Cambridge University Press; 2002.
[143] Cherry JA. Distortion analysis of weakly nonlinear filters using Volterra series.
Ottawa: National Library of Canada; 1995.
[144] Wray J, Green G. Calculation of the Volterra kernels of non-linear dynamic
systems using an artificial neural network. Biological Cybernetics 1994;71:
187–95.
[145] Korenberg M, Hunter I. The identification of nonlinear biological systems:
volterra kernel approaches. Annals of Biomedical Engineering 1996;24:
250–68.
[146] Lee YW, Schetzen M. Measurement of the wiener kernels of a non-linear
system by cross-correlation. International Journal of Control 1965;2:237–54.
[147] Cockayne E, van de Walle A. Building effective models from sparse but precise
data: application to an alloy cluster expansion model. Physical Review B
2009;81:012104.
[148] Kerschen G, Golinval J-C, Hemez FM. Bayesian model screening for the
identification of nonlinear mechanical structures. Journal of Vibration and
Acoustics 2003;125:389–97.
[149] Buede DM. The engineering design of systems: models and methods. New
York, N.Y: John Wiley & Sons, Inc; 2000.
[150] AIAA, Guidelines for the verification and validation of computational fluid
dynamics simulations, 1998, Report Number: AIAA-G-077-1998.
[151] Sivel VG, Van den Brand J, Wang WR, Mohdadi H, Tichelaar FD, Alkemade P,
et al. Application of the dual-beam fib/sem to metals research. Journal of
Microscopy 2004;21:237–45.
[152] Giannuzzi LA, Stevie FA. Introduction to focused ion beams : instrumentation,
theory, techniques, and practice. New York: Springer; 2005.
[153] Spowart J, Mullens H, Puchala B. Collecting and analyzing microstructures in
three dimensions: a fully automated approach. JOM Journal of the Minerals,
Metals and Materials Society 2003;55:35–7.
[154] Spowart JE. Automated serial sectioning for 3-d analysis of microstructures.
Scripta Materialia 2006;55:5–10.
[155] Salvo L, Cloetens P, Maire E, Zabler S, Blandin JJ, Buffière JY, et al. X-ray micro-tomography an attractive characterisation technique in materials science.
Nuclear Instruments and Methods in Physics Research Section B: Beam
Interactions with Materials and Atoms 2003;200:273–86.
[156] Midgley PA, Weyland M. 3-D electron microscopy in the physical sciences:
the development of Z-contrast and EFTEM tomography. Ultramicroscopy
2003;96:413–31.
[157] Stiénon A, Fazekas A, Buffière JY, Vincent A, Daguier P, Merchi F. A
new methodology based on X-ray micro-tomography to estimate stress
concentrations around inclusions in high strength steels. Materials Science
and Engineering: A 2009;513–514:376–83.
[158] Proudhon H, Buffière JY, Fouvry S. Three-dimensional study of a fretting
crack using synchrotron X-ray micro-tomography. Engineering Fracture
Mechanics 2007;74(5):782–93.
[159] Tiseanu I, Craciunescu T, Petrisor T, Corte AD. 3-D X-ray micro-tomography
for modeling of NB3SN multifilamentary superconducting wires. Fusion
Engineering and Design 2007;82(5–14):1447–53.
[160] Miller MK, Forbes RG. Atom probe tomography. Materials Characterization
2009;60(6):461–9.
[161] Arslan I, Marquis EA, Homer M, Hekmaty MA, Bartelt NC. Towards better
3-D reconstructions by combining electron tomography and atom-probe
tomography. Ultramicroscopy 2008;108(12):1579–85.
[162] Rowenhorst DJ, Lewis AC, Spanos G. Three-dimensional analysis of grain
topology and interface curvature in a β-titanium alloy. Acta Materialia 2010;
58(16):5511–9.
[163] Gao X, Przybyla CP, Adams BL. Methodology for recovering and analyzing
two-point pair correlation functions in polycrystalline materials. Metallurgical and Materials Transactions A 2006;37:2379–87.
[164] Gokhale AM, Tewari A, Garmestani H. Constraints on microstructural two-point correlation functions. Scripta Materialia 2005;53:989–93.
[165] Niezgoda SR, Fullwood DT, Kalidindi SR. Delineation of the space of 2-point correlations in a composite material system. Acta Materialia 2008;56:
5285–92.
[166] Niezgoda SR, Turner DM, Fullwood DT, Kalidindi SR. Optimized structure
based representative volume element sets reflecting the ensemble averaged
2-point statistics. Acta Materialia 2010;58:4432–45.
[167] McDowell DL, Ghosh S, Kalidindi SR. Representation and computational
structure-property relations of random media. JOM Journal of the Minerals,
Metals and Materials Society 2011;63:45–51.
[168] SAMSI workshop on uncertainty quantification in engineering and renewable
energy, http://www.samsi.info/programs/2011-12-program-uncertaintyquantification-engineering-and-renewable-energy, accessed May 5, 2012.
[169] Thunnissen DP. Propagating and mitigating uncertainty in the design of
complex multidisciplinary systems. Pasadena, CA: California Institute of
Technology; 2005.
[170] Kaplan S, Garrick BJ. On the quantitative definition of risk. Risk Analysis 1981;
1:11–26.
[171] McManus H, Hastings D. A framework for understanding uncertainty
and its mitigation and exploitation in complex systems. IEEE Engineering
Management Review 2006;34(3):81–94.
[172] Chen W, Baghdasaryan L, Buranathiti T, Cao J. Model validation via
uncertainty propagation and data transformations. AIAA Journal 2004;42:
1406–15.
[173] Choi H-J, McDowell DL, Rosen D, Allen JK, Mistree F. An inductive design
exploration method for robust multiscale materials design. ASME Journal of
Mechanical Design 2008;130: (031402)1–13.
[174] Kennedy MC, O’Hagan A. Bayesian calibration of computer models. Journal
of the Royal Statistical Society: Series B (Statistical Methodology) 2001;63:
425–64.
[175] Helton JC, Oberkampf WL. Alternative representations of epistemic uncertainty. Reliability Engineering & System Safety 2004;85:1–10.
[176] Lee SH, Chen W. A comparative study of uncertainty propagation methods
for black-box-type problems. Structural and Multidisciplinary Optimization
2009;37:239–53.
[177] Yin X, Lee S, Chen W, Liu WK, Horstemeyer MF. Efficient random field
uncertainty propagation in design using multiscale analysis. Journal of
Mechanical Design 2009;131:021006(1–10).
[178] Ferson S, Kreinovich V, Hajagos J, Oberkampf W, Ginzburg L. Experimental
uncertainty estimation and statistics for data having interval uncertainty,
Sandia National Laboratories, 2007, Report Number: SAND2007-0939.
[179] Zabaras N. Advanced computational techniques for materials-by-design, in
Joint Contractors Meeting of the AFOSR Applied Analysis and Computational
Mathematics Programs, Long Beach, CA, 2006 (August 7–11).
[180] Allen JK, Seepersad CC, Choi H-J, Mistree F. Robust design for multidisciplinary and multiscale applications. ASME Journal of Mechanical Design
2006;128:832–43.
[181] Gano SE, Agarwal H, Renaud JE, Tovar A. Reliability-based design using
variable fidelity optimization. Structure and Infrastructure Engineering:
Maintenance, Management, Lifecycle 2006;2:247–60.
[182] Taguchi G. Introduction to quality engineering. New York: UNIPUB; 1986.
[183] Chen W, Allen JK, Mistree F. A robust concept exploration method
for enhancing productivity in concurrent systems design. Concurrent
Engineering: Research and Applications 1997;5:203–17.
[184] Mistree F, Hughes O, Bras B. The compromise decision support problem and
the adaptive linear programming algorithm. In: Kamat M, editor. Structural
optimization: status and promise. Washington, DC: AIAA; 1992. p. 251–90.
[185] Chen W, Allen JK, Tsui K-L, Mistree F. A procedure for robust design:
minimizing variations caused by noise factors and control factors. ASME
Journal of Mechanical Design 1996;118:478–85.
[186] Simpson T, Maier J, Mistree F. Product platform design: method and
application. Research in Engineering Design 2001;13:2–22.
[187] Sues RH, Oakley DR, Rhodes GS. Multidisciplinary stochastic optimization, in:
10th conference on engineering mechanics, Boulder, CO, 1995, 934–7.
[188] Oakley DR, Sues RH, Rhodes GS. Performance optimization of multidisciplinary mechanical systems subject to uncertainties. Probabilistic Engineering Mechanics 1998;13:15–26.
[189] Du X, Chen W. Efficient uncertainty analysis methods for multidisciplinary
robust design. AIAA Journal 2002;40:545–52.
[190] Gu X, Renaud JE, Batill SM, Brach RM, Budhiraja AS. Worst case propagated
uncertainty of multidisciplinary systems in robust design optimization.
Structural and Multidisciplinary Optimization 2000;20:190–213.
[191] Liu H, Chen W, Kokkolaras M, Papalambros PY, Kim HM. Probabilistic
analytical target cascading: a moment matching formulation for multilevel
optimization under uncertainty. Journal of Mechanical Design 2006;128:
991–1000.
[192] Kim HM, Michelena NF, Papalambros PY, Jiang T. Target cascading in optimal
system design. Journal of Mechanical Design 2003;125:481–9.
[193] Choi H-J, Allen JK, Rosen D, McDowell DL, Mistree F. An inductive design
exploration method for the integrated design of multi-scale materials and
products, in: Design engineering technical conferences — design automation
conference, Long Beach, CA, 2005, Paper Number DETC2005-85335.
[194] Choi H-J. A robust design method for model and propagated uncertainty,
Ph.D. Dissertation, The GW Woodruff School of Mechanical Engineering,
Georgia Institute of Technology, Atlanta, GA, 2005.
[195] Choi H-J, McDowell DL, Allen JK, Mistree F. An inductive design exploration
method for hierarchical systems design under uncertainty. Engineering
Optimization 2008;40:287–307.
[196] Sinha A, Panchal JH, Allen JK, Mistree F. Managing uncertainty in multiscale
systems via simulation model refinement, in: ASME 2011 international
design engineering technical conferences and computers and information
in engineering conference, August 28–31, Washington, DC, 2011. Paper
Number: DETC2011/47655.
[197] Panchal JH, Paredis CJJ, Allen JK, Mistree F. A value-of-information based
approach to simulation model refinement. Engineering Optimization 2008;
40:223–51.
[198] Messer M, Panchal JH, Krishnamurthy V, Klein B, Yoder PD, Allen JK, et al.
Model selection under limited information using a value-of-information-based indicator. Journal of Mechanical Design 2010;132:121008(1–13).
[199] Panchal JH, Choi H-J, Allen JK, McDowell DL, Mistree F. A systems based
approach for integrated design of materials, products, and design process
chains. Journal of Computer-Aided Materials Design 2007;14:265–93.
[200] Howard R. Information value theory. IEEE Transactions on Systems Science
and Cybernetics 1966;SSC-2:779–83.
[201] Lawrence DB. The economic value of information. New York: Springer; 1999.
[202] Panchal JH, Allen JK, Paredis CJJ, Mistree F. Simulation model refinement for decision making via a value-of-information based metric, in: Design automation conference, Philadelphia, PA, 2006. Paper Number: DETC2006-99433.
[203] Messer M, Panchal JH, Allen JK, Mistree F. Model refinement decisions using the process performance indicator. Engineering Optimization 2011;43(7):741–62.
[204] Valerdi R. The constructive systems engineering cost model (Cosysmo), Ph.D. Dissertation, Industrial and Systems Engineering, University of Southern California, Los Angeles, 2005.
[205] Oberkampf WL, DeLand SM, Rutherford BM, Diegert KV, Alvin KF. Estimation of total uncertainty in modeling and simulation, Sandia National Laboratories, 2000, Report Number: SAND2000-0824.
[206] Sargent R. Verification and validation of simulation models, in: Winter simulation conference, New Orleans, LA, 2003, 37–48.
[207] Bayarri MJ, Berger JO, Paulo R, Sacks J, Cafeo JA, Cavendish J, et al. A framework for validation of computer models. Technometrics 2007;49:138–54.
[208] Sankararaman S, Mahadevan S. Model validation under epistemic uncertainty. Reliability Engineering and System Safety 2011;96:1232–41.
[209] Chen W, Xiong Y, Tsui K-L, Wang S. A design-driven validation approach using Bayesian prediction models. ASME Journal of Mechanical Design 2008;130:021101(1–12).
[210] Malak RJ, Paredis CJJ. Validating behavioral models for reuse. Research in Engineering Design 2007;18:111–28.
[211] Mocko GM, Panchal JH, Fernández MG, Peak R, Mistree F. Towards reusable knowledge-based idealizations for rapid design and analysis, in: 45th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics & materials conference, Palm Springs, CA, 2004. Paper Number: AIAA-2004-2009.
[212] Malak RJ, Paredis CJJ. Using support vector machines to formalize the valid input domain in predictive models in systems design problems. Journal of Mechanical Design 2010;132:101001(1–14).
[213] Pedersen K, Emblemsvag J, Bailey R, Allen J, Mistree F. The 'validation square' – validating design methods, in: ASME design theory and methodology conference, New York, 2000. Paper Number: ASME DETC2000/DTM-14579.
[214] Seepersad CC, Pedersen K, Emblemsvag J, Bailey RR, Allen JK, Mistree F. The validation square: how does one verify and validate a design method? In: Chen W, Lewis K, Schmidt L, editors. Decision-based design: making effective decisions in product and systems design. New York, NY: ASME Press; 2005.
[215] Li J, Mourelatos ZP, Kokkolaras M, Papalambros PY. Validating designs through sequential simulation-based optimization, in: ASME 2010 design engineering technical conferences & computers in engineering conference, Montreal, Quebec, 2010. Paper Number: DETC2010-28431.
[216] McDowell DL. Critical path issues in ICME, models, databases, and simulation tools needed for the realization of integrated computational materials engineering. In: Arnold SM, Wong TT, editors. Proc. symposium held at materials science and technology 2010. Houston, TX: ASM International; 2011. p. 31–7.