
Workshop:
Sensitivity, Error and Uncertainty Quantification for Atomic,
Plasma, and Material Data
REPORT
Basic Meeting Information
Location and dates: The meeting was held at Stony Brook University, 7-9 November 2015, using the
new conference facilities at the Institute for Advanced Computational Science. Seventeen invited
speakers and as many additional attendees participated in the meeting. Each invited speaker had 45
minutes for his/her presentation, with 15 minutes of discussion after the talk. A special poster session was
organized on November 8. Meeting conclusions and further plans were discussed with all participants.
Workshop web pages: http://www.iacs.stonybrook.edu/uq/pages/workshop
IACS website: http://www.iacs.stonybrook.edu/
Meeting Chair: Predrag Krstic
Meeting Co-Chairs: Robert Harrison and Richard Archibald
Workshop Administrators: Sarena Romano and Lynn Allopenna
Scientific Committee:
Richard K. Archibald, Oak Ridge National Laboratory
Bastiaan J. Braams, International Atomic Energy Agency
Gordon W. F. Drake, University of Windsor
Predrag Krstic, Stony Brook University (Committee chair)
Robert J. Harrison, Stony Brook University/Brookhaven National Laboratory
Petr Plechac, University of Delaware
Daren Stotler, Princeton Plasma Physics Laboratory
Contacts:
P. Krstic (krsticps@gmail.com, 865-603-2970)
R. Harrison (rjharrison@gmail.com, 865-274-8544)
S. Romano (sarena.romano@stonybrook.edu)
L. Allopenna (lynn.allopenna@stonybrook.edu)
Audio-visual equipment:
Javier Dominguez (dominijavier@gmail.com)
Financial Assistance for this conference was provided by the National Science Foundation, Grant
Number PHY-1560572, and by the Institute for Advanced Computational Science of Stony Brook
University.
MOTIVATION
The motivation to hold the workshop arose from the increasing need to move predictive simulation
methodologies from investigations in basic science to their robust application within design and
engineering processes, including consistent integration with data from observation or experiment.
Anticipated to be the first in a series, the workshop focused on theoretical and computational data relevant
to fusion and astrophysical plasmas and material design, where modeling codes mostly use theoretical
atomic, molecular, optical and material interface data for which critical assessments of their accuracy are
necessary. Attendees included researchers in computational, plasma, material, atomic, molecular and
optical physics and chemistry, along with mathematicians and computational scientists whose focus was
the mathematical and computational foundations of both epistemic and aleatoric uncertainty
quantification (UQ). A primary goal and vision behind the workshop was to initiate multidisciplinary
activity to extend established (and novel) techniques of UQ to new or so far nonstandard areas of
application by stimulating interaction of the data users and producers with the already well developed
mathematical and computational formalisms of UQ.
The problem of uncertainty of all types, including the sensitivity of plasma and material modeling to data quality, is strongly compounded by the multiscale character of the underlying physics and chemistry, which spans from nanometers and femtoseconds to meters and years. The evaluation, selection and recommendation of theoretically and computationally obtained atomic, molecular and material data for use in modeling of nuclear fusion and astrophysical plasmas, as well as in the synthesis and design of new materials, depends critically on data validation, verification, and uncertainty analysis and quantification. Conversely, the choice of theoretical method, and the accuracy and time needed to calculate the desired data, are strongly conditioned by the sensitivity of the plasma or material model to the data uncertainty.
Extending the UQ developed in the mathematical/computational sciences to theoretical and computer simulation research in materials, atomic, and plasma physics has been a long-awaited development in the respective scientific communities. Supporting interactions between the communities is crucial and will
increase the quality of the data and other scientific information disseminated by the relevant physics
communities. Building collaboration across these disciplines will ensure that the advantages of the recent
mathematical/computational development in UQ can be effectively utilized in atomic, plasma and
material data science.
The science of UQ has undergone significant development in mathematical and computational sciences
within the last decade, as seen in the publications and the number of scientific meetings devoted to this subject (for example, the series of SIAM conferences on UQ, the latest being UQ14, Savannah, GA, 2014, https://www.siam.org/meetings/uq14). Although this science has been applied successfully in climate research as well as in material characterization, the analysis of uncertainties in theoretical physics and chemistry, in particular the analysis of sensitivity and errors of theoretical atomic, material and interfacial data for astrophysical and nuclear fusion plasma applications, has so far had minimal impact on the development of UQ science in these areas. This is visible in the conclusions of the recent Joint IAEA-ITAMP
Technical Meeting on Uncertainty Assessment for Theoretical Atomic and Molecular Scattering Data,
Cambridge, MA, July 2014, https://www-amdis.iaea.org/meetings/ITAMP.
Expected workshop outcomes included establishing a research agenda in the space and the germination of
a multidisciplinary research community. This report summarizes the workshop (the agenda, conference
organization, participants, activities, summary of survey data, presentations and discussions, conclusions
and future plans, and the budget).
CONFERENCE PRESENTATIONS
First Day:
Roger Ghanem (Polynomial Chaos as a Comprehensive Modeling Tool for Complex Systems)
presented a comprehensive framework, based on polynomial chaos, to quantify uncertainty in complex
systems. The framework was built upon high-dimensional polynomial approximations specifically chosen
to provide optimal approximations of the particular probability distribution of the complex system. The
tools of adapted basis approaches and reduction of the null space between input and output of complex
systems provided tractability of very large dimensional problems. He demonstrated this framework on the
complex problem of designing composite cars, focusing on the material properties of various composites
along with the simulation of manufacturing. In addition, he applied the framework to modeling subsurface landscapes of the Gulf Coast given the limited information available from boreholes.
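To make the idea concrete for readers outside the UQ community, the following is a minimal, illustrative Python sketch of a one-dimensional Hermite polynomial chaos expansion; the toy model f, the expansion order and the sample sizes are assumptions made only for this example, and the sketch does not reproduce Ghanem's adapted-basis framework.

```python
# Minimal 1D polynomial chaos sketch: propagate a standard-normal input xi
# through a toy model f and recover the output mean/variance from
# Hermite-chaos coefficients obtained by non-intrusive projection.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def f(x):
    """Toy 'simulator': any smooth scalar response of one uncertain input."""
    return np.exp(0.3 * x) + 0.1 * x**2

order = 8                                   # highest Hermite polynomial retained
nodes, weights = hermegauss(order + 1)      # Gauss-Hermite rule, weight exp(-x^2/2)
weights = weights / np.sqrt(2.0 * np.pi)    # normalize to the standard normal density

# Projection: c_k = E[f(xi) He_k(xi)] / E[He_k(xi)^2], with E[He_k^2] = k!
coeffs = np.array([
    np.sum(weights * f(nodes) * hermeval(nodes, np.eye(order + 1)[k])) / factorial(k)
    for k in range(order + 1)
])

mean_pce = coeffs[0]
var_pce = sum(coeffs[k]**2 * factorial(k) for k in range(1, order + 1))

# Monte Carlo check of the surrogate statistics
xi = np.random.default_rng(0).standard_normal(200_000)
print("PCE mean/var:", mean_pce, var_pce)
print("MC  mean/var:", f(xi).mean(), f(xi).var())
```

The point of the expansion is that once the coefficients are computed from a modest number of model evaluations, the output mean and variance follow directly from the coefficients rather than from brute-force sampling of the model.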
Petr Plechac (Information-Theoretic Tools for Uncertainty Quantification of High Dimensional
Stochastic Models) considered complicated, high-dimensional stochastic systems of equations that could arise in biology, reaction kinetics and materials science. Representing rare events in these systems using the naïve approach of direct simulation, under all possible uncertain inputs and parameters, is not possible even on the largest computing platforms. However, using path-dependent and risk-sensitive functions, along with multi-resolution techniques, error estimation and UQ can be obtained.
Tight bounds on rare events were demonstrated by using non-equilibrium statistics, Fisher information
and goal-oriented quantities in order to characterize the statistics of defined observables. Demonstration
of this was done on reaction networks for ethanol production.
Robert Moser (Reliability and Uncertainty in the Simulation of Tokamak Plasmas) focused on the
challenging problem of developing robust UQ for the computational modeling of tokamak plasmas. The
key difficulty with tokamak plasmas is that only a handful of high-fidelity simulations can be performed each year at the leading supercomputing centers. Such a restriction on high-fidelity simulations requires advanced mathematical treatment of the up-scaling of micro-physics in order to provide the most information for the design, control and operation of current and future tokamak facilities. The march to create
bigger and bigger tokamak facilities is spurred by the promise of greater energy output but creates
problems that push the potential limits of even future exascale computing systems. This presentation
showed that effective and reliable extrapolative predictions are possible for tokamak systems, providing
rigorous methods to validate and build predictive assessments of these systems.
Udo von Toussaint (The Vlasov-Poisson Plasma Model with Uncertain Inputs: A Bayesian
Modeling Approach) applied a Galerkin framework with spectral expansion to represent the random processes of noise in the Vlasov-Poisson model describing an electrostatic plasma. While systematic UQ in plasma physics was historically limited to parameter scans, this is now considered inadequate. For both intrusive and nonintrusive spectral methods, the expansion coefficients can be computed using collocation, but collocation does not tolerate much noise or very high dimension. For Gaussian-distributed noise, Hermite polynomials are the appropriate choice for an orthonormal basis system, and in a nonintrusive approach the coefficients are derived from collocation points. Design of the sampling space is very challenging: it must fill the space to explore correlations and variation but stay within physically valid regions, and automation can push codes beyond the range for which they were tested and designed. Simulators model physical processes and can yield new insights; an emulator, a statistical surrogate for a simulator, often assumes an underlying Gaussian process, whose training cost scales roughly as O(N³) in the number of samples.
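As an illustration of the emulator idea and of the cubic cost mentioned above, the following is a minimal Gaussian-process regression sketch in plain numpy; the toy "simulator", kernel, lengthscale and training design are assumptions for the example, not the setup used in the talk.

```python
# Minimal Gaussian-process emulator sketch: fit a GP surrogate to a handful of
# expensive "simulator" runs. The O(N^3) scaling noted above comes from
# factorizing the N x N kernel matrix built on the training runs.
import numpy as np

def simulator(x):
    """Stand-in for an expensive simulation (toy response)."""
    return np.sin(3.0 * x) + 0.5 * x

def rbf(a, b, lengthscale=0.4, variance=1.0):
    """Squared-exponential covariance kernel."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# A few training runs of the simulator (the expensive part)
x_train = np.linspace(0.0, 2.0, 8)
y_train = simulator(x_train)

noise = 1e-6                              # jitter / observation-noise variance
K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
L = np.linalg.cholesky(K)                 # the O(N^3) step
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

# Cheap emulator predictions with uncertainty at new inputs
x_new = np.linspace(0.0, 2.0, 101)
K_s = rbf(x_train, x_new)
mean = K_s.T @ alpha
v = np.linalg.solve(L, K_s)
var = np.diag(rbf(x_new, x_new)) - np.sum(v * v, axis=0)

print("max predictive std of the emulator:", np.sqrt(var.max()))
```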
Panagiotis Angelikopoulos (Bayesian Uncertainty Quantification for Molecular Dynamics (MD)
Simulations) focused on the challenging MD problem of liquid water simulations using the high-performance computing framework known as Π4U. Within the MD community there is a several-orders-of-magnitude discrepancy between predictions and measurements of water flow through a carbon nanotube membrane, which makes this problem an ideal candidate for UQ methods in MD simulations. The talk demonstrated that the hierarchical Bayesian framework developed and implemented in Π4U can calibrate and produce weighted model selection across a collection of MD simulations, and showed that with this hierarchical Bayesian model the uncertainties in MD properties, such as water contact angle and water transport, can be quantified and calibrated to experimental results.
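The general calibration idea can be illustrated with a deliberately simple sketch: a random-walk Metropolis sampler that calibrates one uncertain model parameter against a single experimental observable. This is not the Π4U framework or its hierarchical model; the toy observable, noise level and prior below are assumptions made only for illustration.

```python
# Minimal Bayesian calibration sketch (random-walk Metropolis), illustrating the
# general idea of calibrating an uncertain model parameter to experimental data.
# This is NOT the Pi4U framework; the toy observable and noise level are assumed.
import numpy as np

rng = np.random.default_rng(1)

def model_observable(theta):
    """Toy stand-in for an MD-predicted observable (e.g. a contact angle)."""
    return 80.0 + 15.0 * np.tanh(theta)

y_obs, sigma_obs = 95.0, 2.0        # assumed experimental value and uncertainty

def log_posterior(theta):
    log_prior = -0.5 * (theta / 2.0) ** 2          # N(0, 2^2) prior on theta
    resid = (y_obs - model_observable(theta)) / sigma_obs
    return log_prior - 0.5 * resid ** 2            # Gaussian likelihood

theta, logp = 0.0, log_posterior(0.0)
samples = []
for _ in range(20_000):
    prop = theta + 0.3 * rng.standard_normal()     # random-walk proposal
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:   # Metropolis accept/reject
        theta, logp = prop, logp_prop
    samples.append(theta)

post = np.array(samples[5_000:])                   # discard burn-in
print("posterior mean and std of the parameter:", post.mean(), post.std())
```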
Daniel Savin (Astrochemistry from the First Stars to the Origins of Organic Chemistry) talked
about the importance of interstellar chemistry in the evolution of the universe from the formation of the
first stars to the origins of life. He discussed the chain of chemical reactions leading to H2 formation, with
uncertainties relevant to the formation of the first stars, as well as reactions responsible for the origin of life (such as gas-phase reactions of C with H3+, which initiated the synthesis of complex organic molecules, and reactions of O with H3+ leading to the formation of water). Uncertainties in these data have hindered our
understanding of the universe’s evolution, in particular of the pathway toward life. He stressed the
importance of chemical reaction networks which would benefit from more systematic UQ and
experimental data.
Francesco Rizzi (Uncertainty Quantification in Molecular Dynamics Simulations: Forward and
Inverse Problem) outlined that one of the major factors in UQ for molecular dynamics (MD) simulations
is the particular potential function used to compute atomic forces. First-principles calculations in MD are cost prohibitive and, therefore, model reduction in the approximation of the potential function can make complex, large biomolecule simulations tractable. This talk presented a Bayesian regression using polynomial
chaos expansions to develop a framework to isolate the impact of parametric uncertainty and molecular
noise models in MD simulations. This talk demonstrated the suitability of this framework in predicting
the major target observables on a variety of MD simulations.
Sophie Blondel (Uncertainty Quantification Effort within PSI-SciDAC) talked about uncertainties in
plasma-surface interactions in fusion tokamaks, in particular their multiscale character in space and time,
which extends over 10 orders of magnitude. Since classical molecular dynamics cannot handle long time scales, especially at experimentally relevant low particle fluxes, models appropriate for the various time scales, and for the hand-shaking between them, can be introduced to span the range from atomistic to continuum descriptions. Sophie described the development and benchmarking of Xolotl, a new continuum advection-reaction-diffusion cluster dynamics code for simulating the divertor surface response to fusion-relevant plasma exposure, with an initial focus on tungsten exposed to 100 eV helium plasma. Big questions remained unanswered: how do we robustly couple atomistic and continuum models, and how do we construct such models with some rigor and connect them with existing UQ mathematical tools (like those of Plechac)? The quantification of uncertainties currently concentrates on the diffusion factors obtained with different choices of interatomic potential, which will be propagated through Xolotl in a later phase.
Dimitrios Giannakis (Data-Driven Spectral Decomposition and Forecasting of Ergodic Dynamical
Systems) talked about a method particularly valuable for weather and atmospheric modeling: given a set of time-ordered observations assumed to derive from a vector-valued function, provide a nonparametric forecasting model whose computational cost is reduced by dimension reduction of the observation dataset. The talk approached this problem using operator theory and demonstrated that the underlying physics of weather systems can be represented by a smooth, orthonormal basis of simple harmonic oscillators determined directly from observations using diffusion map algorithms. Using
the time evolution of the derived basis, nonparametric forecasting models can be constructed for arbitrary
probability densities and observations. The end result is a data-driven forecasting model for weather
without modeling or even knowing the underlying equation of motion.
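A minimal illustration of the diffusion-map step is sketched below on a toy time series; the kernel bandwidth and data are assumptions for the example, and the time-lagged embedding and the forecasting machinery of the actual method are omitted.

```python
# Minimal diffusion-map sketch: extract a smooth data-driven basis from a toy
# time series of observations. Bandwidth and toy data are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 20.0, 400)
# Toy "observations" of an oscillatory system, slightly noisy
X = np.column_stack([np.cos(t), np.sin(t)]) + 0.05 * rng.standard_normal((t.size, 2))

# Pairwise squared distances and Gaussian kernel
d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
eps = np.median(d2)                     # simple bandwidth heuristic
K = np.exp(-d2 / eps)

# Row-normalize to a Markov transition matrix and diagonalize
P = K / K.sum(axis=1, keepdims=True)
eigvals, eigvecs = np.linalg.eig(P)
order = np.argsort(-eigvals.real)
basis = eigvecs[:, order].real          # leading columns: smooth basis functions

# basis[:, 1] and basis[:, 2] recover harmonic-oscillator-like coordinates
print("leading eigenvalues:", eigvals.real[order][:4])
```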
Second Day:
Choong-Seock Chang (Telescoping UQ Method for Extreme Scale Kinetic Simulation of Magnetic
Fusion Plasma) talked about a new trial method, "telescoping" UQ, applied in the gyrokinetic code XGC1. In this approach, conventional UQ is performed on a reduced-size problem, the UQ scaling is calibrated against experimental results, and the calibrated UQ is then used to make predictions for future problems (ITER in the fusion case). However, there are limits: reduced-fidelity simulations can lose entire physical phenomena. Current ITER simulations take several months (about 5 per year), so the traditional statistical approach relying on a large number of simulations cannot be used. So how do we research UQ if we cannot get statistics? Fortunately, the so-called "first principles" equations tend not to have a large number of input parameters, which could provide a means of simplifying this otherwise insurmountable task. Uncertain input data include atomic cross sections and material interaction data (recycling and sputtering). Reducing the 6D plasma simulation to 5D by averaging over the gyromotion, a justified first-principles approximation, enables a 100x increase in the time step. In addition, this is a highly multiscale problem: gyromotion occurs on ~10⁻⁹ s, turbulence on ~10⁻³ s, and machine operation on ~100 s timescales.
Jean-Paul Allain (Challenges and Strategies to Experimental Validation of Multi-Scale Nuclear
Fusion PMI Computational Modeling) described the plasma-material interface (PMI) as a key region of the device, since material can be emitted both atomistically (evaporation, sputtering, etc.) and macroscopically (i.e., during disruptions or edge-localized modes). There are critical knowledge gaps concerning the strong coupling between plasma and materials, which have to be bridged by computational models. Fusion core performance has a sensitive, and not yet fully understood, dependence on the wall surface. For example, surprisingly, recent JET-ILW experiments show much worse confinement than
with carbon. Physical and chemical sputtering are very important - surface chemistry and composition are
important including self-organization of structures. Due to the complexity of the processes, it is necessary
to isolate phenomena and understand the effect of each without coupling to others. However, models
alone cannot reach the level of maturity needed for predictive understanding without multiple validation
steps. Understanding the morphological and topographical evolution of the plasma-material interface is
nascent. A multiscale approach to validation - controlled micro-experiments - would provide more control and instrumentation than is possible in a full device and is in the spirit of isolating phenomena. The
limiting step in this approach to a large degree depends on the sophistication and fidelity of surface
response codes. Another limiting step is the large uncertainty inherent in many of the experimental
measurements involved in PMI.
Kody J.H. Law (Multilevel Sequential Monte Carlo Samplers) talked about the prevalent use of the
Monte Carlo (MC) method in UQ. Due to the slow convergence of the MC method, multilevel MC (MLMC) has provided a reduction of computational cost for problems which admit a hierarchy of approximation levels. This talk presented the development of multilevel sequential Monte Carlo samplers (MLSMC), which provide a significant reduction in cost as compared to MLMC or sequential MC. Specifically, the talk demonstrated that by using MLSMC, the cost-to-error ratio for the inverse problem associated with subsurface modeling can be made asymptotically the same as for a scalar random variable. Under specific circumstances, the optimal cost of MLSMC is of the same order as the cost of a single simulation at the finest level.
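The multilevel idea on which MLSMC builds can be illustrated with a minimal forward-problem sketch of plain multilevel Monte Carlo (not the sequential sampler itself); the toy quantity of interest, level hierarchy and sample allocation are assumptions made only for the example.

```python
# Minimal multilevel Monte Carlo sketch for a forward UQ problem: estimate E[Q]
# via the telescoping sum E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}], using
# coupled samples on each level and fewer samples on the costlier fine levels.
import numpy as np

rng = np.random.default_rng(3)

def Q(theta, level):
    """Toy level-l approximation of a quantity of interest: a midpoint-rule
    integral of exp(theta * x) on [0, 1] with 2**level cells (finer = costlier)."""
    n = 2 ** level
    x = (np.arange(n) + 0.5) / n
    return np.exp(theta * x).mean()

L = 5
samples_per_level = [4000, 2000, 1000, 500, 250, 125]
estimate = 0.0
for level in range(L + 1):
    theta = rng.normal(1.0, 0.2, samples_per_level[level])    # uncertain input
    fine = np.array([Q(t, level) for t in theta])
    if level == 0:
        estimate += fine.mean()
    else:
        coarse = np.array([Q(t, level - 1) for t in theta])   # same theta: coupled
        estimate += (fine - coarse).mean()

print("MLMC estimate of E[Q]:", estimate)
```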
Jonathan Tennyson (Uncertainty Quantification for Theoretical Atomic and Molecular Data)
discussed the need to end the prevailing practice and culture in atomic/molecular physics and chemistry theory and simulation of not providing error bars or performing uncertainty analysis. It is expected that
benchmark atomic and molecular calculations should follow accepted experimental practice and include
an uncertainty estimate alongside any numerical values presented. This is particularly important when the
computational data are used as the primary source of data, such as cross sections, for input into modeling
codes for, for example, plasma and radiative transport studies. It is imperative that these data should be
accompanied by estimated uncertainties. Since 2013, Phys. Rev. A has had an editorial policy requiring uncertainty estimates in publications of theoretical data. Tennyson stressed that it is not only an issue of the mathematical tools for UQ; it equally requires a change in culture. Among the successful examples are the roughly 20 published contributions to the dissociation energy of water, including a theoretical prediction with an error bar (4 cm⁻¹) that was subsequently confirmed by an experiment falling within that error bar. A significant example of very high activity concerns sources and sinks of CO2 in the atmosphere, where 0.5% accuracy in line intensities is needed. Ab initio theory for solving the many-body Schrödinger equation provides routes to the right answer with quantifiable errors; the challenge is that DFT does not have a systematic path for improvement. While static, spectroscopic-state-oriented calculations are reaching acceptable UQ, this is not yet the case for dynamic, scattering calculations. There, epistemic uncertainty due to model selection
dominates scattering calculations. There are a lot of different processes, and there is no code that deals
with all of these at the same time (mostly due to treating the nuclear motion, resonances, etc.).
Alexander Kramida (Critical Evaluation and Estimation of Uncertainties of Atomic Spectral Data
at NIST) described the procedures used in the Atomic Spectroscopy Group at NIST for critical evaluation
of experimental and theoretical data for atomic energy levels, wavelengths and radiative rates.
Interestingly, some of the data date back as far as 1802. A need is recognized for more statistical and UQ analysis in evaluating the data: how do we estimate uncertainties in spectroscopic data, and how do we compare with experiments? Convergence can be reached with the addition of more configurations. He stressed the need to be particular about which data are validated or correlated. Some observables have a
large energy dependence that can rescale errors by factors of thousands.
Matthew Dunlop (Bayesian Level Set Inversion) outlined a new level set inversion method using a
Bayesian framework. Level set methods embed boundary information of images and data into a higher
dimensional function, which is the signed distance function to the boundaries. Within the context of
geometrically defined inverse problems, such as electrical impedance tomography, the goal is to
determine an unknown, piecewise constant function from a finite set of indirect measurements. Using a
Bayesian approach to build in probability distributions for the determination of the boundaries of these
piecewise constant functions provides a more complete reconstruction for geometrically defined inverse
problems given a limited set of measurements. This talk demonstrated the improvement of the level set
method and outlined faster computational approaches using a hierarchical Bayesian level set inversion
method.
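A minimal sketch of the level-set idea in a Bayesian setting is given below: a piecewise-constant 1D field is obtained by thresholding a smooth latent function with a Gaussian prior, and a preconditioned Crank-Nicolson random walk explores the posterior given a few noisy point observations. The prior, observation setup and parameter values are assumptions made for illustration and do not reproduce the hierarchical method of the talk.

```python
# Minimal Bayesian level-set sketch: the unknown piecewise-constant field is the
# thresholded value of a smooth latent function with a Gaussian prior; a pCN
# random walk samples the posterior given noisy point observations.
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 200)
modes = np.arange(1, 9)
decay = 1.0 / modes**2                               # prior smoothness
Phi = np.sin(np.pi * np.outer(x, modes)) * decay     # basis scaled by prior std

def field(c, lo=1.0, hi=3.0):
    """Piecewise-constant field: threshold the latent function at zero."""
    return np.where(Phi @ c > 0.0, hi, lo)

# Synthetic truth and noisy point observations at a few locations
c_true = rng.standard_normal(modes.size)
obs_idx = np.array([20, 60, 100, 140, 180])
y = field(c_true)[obs_idx] + 0.1 * rng.standard_normal(obs_idx.size)

def log_like(c):
    r = (y - field(c)[obs_idx]) / 0.1
    return -0.5 * np.sum(r**2)

# Preconditioned Crank-Nicolson MCMC (prior-preserving proposal)
beta = 0.2
c = rng.standard_normal(modes.size)
ll = log_like(c)
keep = []
for _ in range(10_000):
    prop = np.sqrt(1 - beta**2) * c + beta * rng.standard_normal(modes.size)
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        c, ll = prop, ll_prop
    keep.append(field(c))

posterior_mean_field = np.mean(keep[2_000:], axis=0)
print("posterior-mean field at observation points:", posterior_mean_field[obs_idx])
```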
Alan Calder (Verification, Validation and Uncertainty Quantification in Astrophysics) talked about
how the verification, validation, and UQ in astrophysics present challenges since stellar interiors cannot
be fully reproduced in a terrestrial laboratory. Still, in order to make confident predictions, UQ is highly
needed. To validate the calculated data we need experiments that can be performed on Earth, which
stresses the importance of talking to the experimentalists when trying to validate a code. Alan stressed
that agreement of a simulation with an experiment is not necessarily the end of validation and UQ: For
example, if there are known missing physics and components, how do we interpret and use the agreement? Is this "agreement for the right reasons"? Therefore, one cannot use UQ as a "black box" in
this area of physics.
Richard Archibald (Sparse Sampling Methods for Experimental Data from the DOE Facilities)
discussed that designing computational simulations to best capture uncertainty for the extreme scale
requires scalable algorithms to estimate error for a broad range of situations. In order to take full
advantage of high performance computing, scalable error estimation of stochastic simulations needs to be
developed for unstructured data, enabling uncertainty quantification methods to operate on either measured data or on computations that are guided by, or come from, a legacy database. The talk described how these fast methods are being adapted to high-performance computing, outlined the connection between polynomial approximation (polynomial chaos methods) and Gaussian processes, and provided specific examples of these UQ methods applied to climate modeling and neutron tomography.
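One way to see the connection between polynomial approximation and sampling on unstructured data is a least-squares (regression) estimate of polynomial-chaos-style coefficients from scattered samples, in contrast to the quadrature-based projection sketched earlier; the toy response and sample design below are assumptions made only for the example.

```python
# Minimal sketch of fitting polynomial-chaos-style coefficients by least squares
# on scattered (unstructured) samples, rather than on quadrature nodes, i.e. the
# kind of setting where measured or legacy data must be used directly.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(5)
xi = rng.standard_normal(60)                    # scattered samples of the uncertain input
y = np.exp(0.3 * xi) + 0.05 * rng.standard_normal(xi.size)   # noisy "measurements"

degree = 6
A = hermevander(xi, degree)                     # Hermite design matrix on scattered points
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares coefficient estimate

# Mean and variance of the fitted surrogate (He_k orthogonal with E[He_k^2] = k!)
mean = coeffs[0]
var = sum(coeffs[k]**2 * factorial(k) for k in range(1, degree + 1))
print("regression-PCE mean/var:", mean, var)
```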
Third Day:
Hyun-Kyung Chung (Internationally Coordinated Activities of Uncertainty Quantification and
Assessment of Atomic, Molecular and Plasma-Surface Interaction Data for Fusion Applications)
addressed the progress toward evaluating A+M collision data. The internationally coordinated activities at
IAEA toward the UQ science of A+M/PMI data were reviewed. The IAEA A+M data unit is encouraging
work to develop guidelines for critically assessing theoretical A+M structure and collision data, taking
into account the processes and quantities of interest as well as specific theoretical methods employed in
calculations. A joint ITAMP-IAEA workshop was organized in July 2014 to discuss sources of
uncertainty in the physical models of interest: Electron/atom/molecule collisions as well as electronic
structure calculations and PMI processes. It is important to connect theory and experiment – a good
example is the failure of NIF.
CONFERENCE CONCLUSIONS & FUTURE PLANS
There is a profound lack of a culture of uncertainty quantification in computer simulations for Atomic and Molecular Physics (AMO), Plasma Physics (PP) and the Plasma-Material Interface (PMI). Quantifying
uncertainties involves sensitivity analysis (for which parameters are most important), variability analysis
(intrinsic variation associated with inherent randomness in a physical system), and epistemic uncertainty
analysis (degree of confidence in data and models). While calculations without uncertainties were
acceptable at the initial stages of development of the theoretical sciences, further progress now dictates a
change in the paradigm: Calculated values without uncertainty estimates at best have diminished value for
both fundamental science and certainly for engineering applications, and at worst are meaningless and
even harmful and misleading. UQ has become a key for credible predictions and “decision making,” as
well as a necessary ingredient for validating theory with experiment. Thus, all published simulation
results should be accompanied by an analysis of uncertainties in much the same manner as already being
done by the best experimental papers. Still, for the sensitivity and variability analyses the need for
estimating uncertainties requires educating a theorist, as well as an experimentalist, in the field of applied
statistics. Thus, closer interaction between AMO+PP+PMI scientists and applied statisticians, which was
provided by this workshop, is highly desirable. Such interactions reveal to AMO+PP+PMI scientists, on
the one hand, the plethora of well-developed methods for treating uncertainties and other statistical
properties of quantities of interest and, on the other hand, communicate to applied statisticians new types
of problems that could potentially lead to developing new methods and further advances in applied
statistics.
The immaturity of UQ application in AMO physics is in part a multifaceted cultural issue arising from,
for instance, the relationship of theorists to the errors and uncertainty of their own work; the lack of
routine exchange of detailed information between different, mutually coupled branches of physics; and
the lack of routine collaboration among mathematicians and statisticians and computer scientists who are
capable of developing and specializing UQ techniques for application to particular fields of physics. A terminology barrier also exists, and this too is a cultural issue. Superficially, this is also in part an issue of different scientific interests. While most of the current development and application of UQ is oriented toward complex physical and engineering systems governed by the 3D PDEs of fluid flow and structural mechanics, for example UQ of long-range forecasting of the coupled atmosphere-ocean system,
the AMO, materials and plasma communities have equations in 3D, 6D, ND and even infinite dimensions
with roots in quantum mechanics, statistical mechanics, magnetohydrodynamics, and so forth. Thus, from
the point of view of computational complexity, AMO physics deals with problems unfamiliar to the UQ
field, though it appears that the UQ field (in its present status) has the potential to be more immediately
relevant to the work on PMI and PP than it is to work on AMO data. Still, in PMI the handling of multiple timescales and the propagation of uncertainty are central concerns. In a complex system like a tokamak, reduced PP-scale models are augmented with experimental validation. But there are limits: reduced-fidelity simulations, forced by computational limitations, can lose entire physical phenomena. Thus current divertor-plasma simulations take several months - about 5 per year. So how do we study UQ if one cannot
get statistics?
Of course, the answer lies in improving the treatment of epistemic uncertainty. But for epistemic uncertainty there is no systematic approach – it is still treated on a case-by-case basis. Moreover, for ab initio quantum theory approaches, such as in AMO theory, UQ is almost always associated with model choices and solution methods rather than with the statistics or numerics of the solution. With multiscale exploration, the distinction between aleatoric (system sensitivity and variability) and epistemic uncertainties is blurred: what appears aleatoric at one scale becomes epistemic at another, and uncertainty carried up from the finer scale appears as aleatoric at the coarser one. In the context of polynomial chaos expansions, the two types are handled
simultaneously using polynomial expansions with random coefficients.
This discussion gave rise to an important question at the meeting: How do we integrate UQ into a
particular field of physics? While the UQ field is mature for developing general theory of parametric
uncertainty, its application to particular physical models is in its infancy. There is a difference between
math research and the application-driven development of techniques and the maturing of those techniques
into practical tools. Both are needed, but their impact and timeline of impact are different. UQ is presently
most applicable for reasonably well understood and modeled systems. However, all fields of science grow
as they encounter new applications. In that sense there was no conceptual obstacle in any of the problems
considered at this workshop. While many UQ approaches can be computationally challenging out of the
box, experience suggests they can be adapted to become more efficient after being tailored to the structure
of a given physical problem. For instance, perfectly parallel sampling algorithms for Bayesian posterior
inference do exist and work in practice.
While workshops and events like this can make personal and technical connections between UQ and
physics scientists, a real step forward would be building integrated and commonly funded collaborative
teams, which would provide physics knowledge to mathematicians and mathematical skills to the
physicists. Otherwise, incorrect interpretations and terminology barriers can lead to misapplications, thus
producing false conclusions. While the math side has more formal definitions and language, these terms
must be translated into each new application space – this all requires interdisciplinary communication and
engagement. A plausible model for a collaborative approach across disciplines is the DOE SciDAC
program including its UQ Institute, which is presently primarily focused on engineering applications.
There are a number of successful applications of UQ in climate, engineering, in the DOE ASC program
and in the IAEA nuclear data program. Plasma edge simulations provide success stories, which are based
on merging multiple data sources and diagnostics. As a result of this effort, this application has the best-developed UQ model. Still, running the model is a big problem, usually addressed by
running with reduced resolution. For the AMO systems, computation is not the main barrier. Designing
an adequate test for a particular model’s limits and expanding the model toward large systems presents
real computational challenges. Thus, the Monte Carlo approach is probably the only black-box method that is now ready to go – everything else seems to require expert collaboration between the application side and UQ mathematics. Still, there are success examples in AMO and plasma physics: atomic structure calculations (Gordon Drake, Klaus Bartschat, Igor Bray, Dmitry Fursa); examples that close the experiment-models-prediction cycle in employing AMO data in astrophysics (Daniel Savin); reduced-order modeling in plasma – making an intractable problem tractable (C.S. Chang); and emulators and surrogate models – physics-based reduction ensuring overall features are correct, with subsequent validation of assumptions (U. von Toussaint).
Different classes of challenges for applying UQ exist in various physics areas. For example, in some cases
the boundary conditions are variable and unknown (e.g., in the PMI and PP of fusion), which could have severe implications for tokamak plasma physics predictions. In some other tokamak simulation cases
the cost of a single simulation is exorbitant. In the latter case one must determine a compromise between
a very detailed but barely feasible single simulation and multiple, less-resolved simulations that include
UQ. The choice is not obvious: It seems that for the tokamak problems, some transformative algorithms
(both physics and data-driven) are required. Another class of systems is one that is multiscale in space and
time. Molecular dynamics cannot handle the low flux or long time scales (spanning 10+ orders of
magnitude). How then do we robustly couple atomistic and continuum models in this case and propagate
uncertainties in particular?
While it is in principle possible to combine models to span the length scales, how do we construct such
models with some rigor? Can the approach of Plechác be applied in the real world where heuristics are
used to “pass parameters” between scales? Models alone cannot reach the level of maturity needed for
predictive understanding without multiple validation steps. PMI models are still too limited, and understanding of the morphological and topographical evolution of the plasma-material interface is nascent. Multiscale
approaches to validation – controlled micro-experiments - provide more control and instrumentation than
is possible in a full device and are in the spirit of isolating phenomena.
The participants of the workshop concluded that the “dynamics and energy in the room” and the need to
establish collaboration between UQ and physics scientists suggested that the community should have
regular meetings, probably once every one or two years. It was proposed to co-locate this workshop with
SIAM-UQ meetings and to include classes and tutorials. It was also suggested that future meetings be co-located with APS-DPP, APS-DAMOP, APS-MATERIALS, and/or biannual PSI meetings, as well as coordinated with the IAEA technical meetings devoted to the UQ theme. Annual meetings seem to better
suit developments in applied mathematics and current needs for UQ in physics. Co-locating the meeting
in 2016 with either the SIAM annual meeting in Boston, MA (July 11-14) or with the APS-DPP annual
meeting in San Jose, CA (October 31-November 4, 2016), and then alternating in 2017, is one possibility.
The final decision on future workshops will be made by the Workshop Scientific Committee after a wide
community discussion.
Appendix 1
Agenda
Saturday, November 7

08:00 am – 09:00 am  Continental Breakfast/Registration
08:55 am – 09:00 am  Opening (R. Harrison, P. Krstic)

Morning Session (Chair: J. Tennyson, U.C. London)
09:00 am – 09:45 am  Roger Ghanem: Polynomial Chaos as a Comprehensive Modeling Tool for Complex Systems
09:45 am – 10:30 am  Petr Plechac: Information-Theoretic Tools for Uncertainty Quantification of High Dimensional Stochastic Models
10:30 am – 11:00 am  Coffee Break
11:00 am – 11:45 am  Robert Moser: Reliability and Uncertainty in the Simulation of Tokamak Plasmas
11:45 am – 12:30 pm  Udo von Toussaint: The Vlasov-Poisson Plasma Model with Uncertain Inputs: A Bayesian Modeling Approach
12:30 pm – 02:00 pm  Sandwich Lunch

Afternoon Session (Chair: P. Plechac, U. of Delaware)
02:00 pm – 02:45 pm  Panagiotis Angelikopoulos: Bayesian Uncertainty Quantification for Molecular Dynamics Simulations
02:45 pm – 03:30 pm  Daniel Savin: Astrochemistry from the First Stars to the Origins of Organic Chemistry
03:30 pm – 04:15 pm  Francesco Rizzi: Uncertainty Quantification in Molecular Dynamics Simulations: Forward and Inverse Problem
04:15 pm – 04:45 pm  Coffee Break
04:45 pm – 05:30 pm  Sophie Blondel: Uncertainty Quantification Effort within PSI-SciDAC
05:30 pm – 06:15 pm  Dimitrios Giannakis: Data-Driven Spectral Decomposition and Forecasting of Ergodic Dynamical Systems
Sunday, November 8

08:00 am – 09:00 am  Continental Breakfast/Registration

Morning Session (Chair: D. Stotler, PPPL)
09:00 am – 09:45 am  Choong-Seock Chang: Telescoping UQ Method for Extreme Scale Kinetic Simulation of Magnetic Fusion Plasma
09:45 am – 10:30 am  Jean-Paul Allain: Challenges and Strategies to Experimental Validation of Multi-Scale Nuclear Fusion PMI Computational Modeling
10:30 am – 11:00 am  Coffee Break
11:00 am – 11:45 am  Kody J.H. Law: Multilevel Sequential Monte Carlo Samplers
11:45 am – 02:00 pm  Sandwich Lunch and Poster Session

Afternoon Session (Chair: U. von Toussaint, IPP Garching, Germany)
02:00 pm – 02:45 pm  Jonathan Tennyson: Uncertainty Quantification for Theoretical Atomic and Molecular Data
02:45 pm – 03:30 pm  Alexander Kramida: Critical Evaluation and Estimation of Uncertainties of Atomic Spectral Data at NIST
03:30 pm – 04:15 pm  Matthew Dunlop: Bayesian Level Set Inversion
04:15 pm – 04:45 pm  Coffee Break
04:45 pm – 05:30 pm  Alan Calder: Verification, Validation and Uncertainty Quantification in Astrophysics
05:30 pm – 06:15 pm  Richard Archibald: Sparse Sampling Methods for Experimental Data from the DOE Facilities
POSTER SESSION Sunday, 11:45 – 12:30 (Chair: Sophie Blondel, ORNL)
Varis Carey: Telescoping Methods for Uncertainty Quantification
Ozgur Cekmer: Uncertainty Quantification Analysis for Xolotl
Scott Ferson: Computing with Confidence
Scott Ferson: Sensitivity Analysis of Probabilistic Models
Michael Probst: Data Accuracy in Modeling and Computation: Some Examples
Monday, November 9

08:00 am – 09:00 am  Continental Breakfast

Morning Session I (Chair: R. Archibald, ORNL)
09:00 am – 09:45 am  Hyun-Kyung Chung: Internationally Coordinated Activities of Uncertainty Quantification and Assessment of Atomic, Molecular and Plasma-Surface Interaction Data for Fusion Applications

Morning Session II (Chairs: TBD)
09:45 am – 10:30 am  Panel Discussion
10:30 am – 12:00 pm  Conference Conclusions & Future Plans
Appendix 2
Budget
2015 UQ ACCOUNT (account 1128723 / 73191)
MASTER LIST OF ALL EXPENDITURES

Beginning Balance: $15,000.00

Date    | Category | Vendor            | Description                                             | Charges   | Balance
11/6/15 | Travel   | Hilton            | Lodging for UQ speakers                                 | $1,008.00 | $13,992.00
11/6/15 | Travel   | Hilton            | Lodging for UQ speakers                                 | $3,528.00 | $10,464.00
11/6/15 | Media    | Chris Rosaschi    | Graphic Design                                          | $1,210.00 | $9,254.00
11/6/15 | Media    | Eco printing      | Brochure                                                | $838.30   | $8,415.70
11/6/15 | Travel   | Spartan           | Ground transportation for UQ speakers                   | $1,974.40 | $6,441.30
11/6/15 | Food     | Bliss             | Catering                                                | $2,152.00 | $4,289.30
11/6/15 | Travel   | Hilton            | Lodging for UQ speakers                                 | $1,512.00 | $2,777.30
11/7/15 | Travel   | Enterprise        | Rental cars for UQ speakers                             | $126.96   | $2,650.34
11/7/15 | Travel   | Jonathan Tennyson | For UQ speakers - round trip airfare reimbursement      | $476.86   | $2,173.48
11/7/15 | Travel   | Matthew Dunlop    | For UQ speakers - round trip LIRR ticket reimbursement  | $30.50    | $2,142.98

Total charges: $12,857.02
Revenue (Conference Participants): Participant Registration Revenue, $910.00
Ending Balance: $3,052.98
Appendix 3
List of Participants
Panagiotis Angelikopoulos – CSE Lab, Institute of Computational Science, Zurich, Switzerland
Richard Archibald – ORNL, Oak Ridge, TN, USA
Johan Bengtsson – JB Optima, LLC
Sophie Blondel – ORNL, Oak Ridge, TN, USA
Alan Calder – Stony Brook University, Stony Brook, NY, USA
Varis Carey – CU-Denver, Denver, CO, USA
Ozgur Cekmer – ORNL, Oak Ridge, TN, USA
Choong-Seock Chang – PPPL, Princeton, USA
Hyun-Kyung Chung – IAEA, AT
Matthew Dunlop – University of Warwick, Coventry, GB
Scott Ferson – Applied Biomathematics, Setauket, NY, USA
Roger Ghanem – University of Southern California, CA, USA
Dimitrios Giannakis – New York University, NY, USA
Javier Dominguez-Gutierrez – Stony Brook University, Stony Brook, NY, USA
Longtao Han – Stony Brook University, Stony Brook, NY, USA
Robert Harrison – Stony Brook University, Stony Brook, NY, USA
Alexander Kramida – NIST, MD, USA
Predrag Krstic – Stony Brook University, Stony Brook, NY, USA
Kody J.H. Law – KAUST, Saudi Arabia
Craig Michoski – University of Texas, Austin, TX, USA
Robert Moser – University of Texas, Austin, TX, USA
Petr Plechac – University of Delaware, DE, USA
Michael Probst – University of Innsbruck, Innsbruck, AT
Maksim Rakitin – Stony Brook University, Stony Brook, NY, USA
Francesco Rizzi – Sandia NL, Albuquerque, NM, USA
Daniel Savin – Columbia University, NY, USA
Daren Stotler – PPPL, Princeton, NJ, USA
Jonathan Tennyson – University College of London, GB
Udo von Toussaint – Max Planck Institute for Plasma Physics, Germany
Shinjae Yoo – BNL, Upton, NY, USA
Yang Zhang – EMNL, Stony Brook, NY, USA