ASME 2013 Verification and Validation Symposium: Final Program

WELCOME LETTER
Welcome to ASME’s 2013 Verification and Validation Symposium!
Dear Colleagues,
Thank you for participating in ASME's second annual Verification and Validation Symposium, dedicated entirely to verification, validation, and uncertainty quantification of computer simulations. The outstanding response to the call for abstracts has inspired us to create a program that brings together engineers and scientists from around the world and from diverse disciplines, all of whom use computational modeling and simulation.
Our goal is to provide you and your fellow engineers and scientists, who might normally never cross paths, with a unique opportunity to interact by exchanging ideas and methods for verification of codes and solutions, validation of simulations, and assessment of uncertainties in mathematical models, computational solutions, and experimental data.
The presentations have been organized both by application field and by technical goal and approach. We are pleased that you are here with us and your colleagues to share verification and validation methods, approaches, successes and failures, and ideas for the future.
Thanks again for attending. We look forward to your valued participation.
Sincerely,
Symposium Co-Chairs
Ryan Crane
ASME
New York, NY, United States
Scott Doebling
Los Alamos National Laboratory
Los Alamos, NM, United States
Christopher Freitas
Southwest Research Institute
San Antonio, TX, United States
TABLE OF CONTENTS
General Information
Schedule-at-a-Glance
Keynote and Plenary Speakers
Technical Program
Abstracts
Author Index
Session Organizers
Exhibitors and Sponsors
About ASME
ASME Standards and Certification
ASME Officers and Staff
Hotel Floor Plan (inside back cover)
GENERAL INFORMATION

ACKNOWLEDGEMENT
The Verification and Validation Symposium is sponsored by ASME. All technical sessions and conference events will take place at Planet Hollywood Resort & Casino. Please check the schedule for event times and locations.

FREE ASME MEMBERSHIP
Non-ASME Members who pay the non-Member conference registration fee, including students who pay the non-Member student fee, will receive a one-year FREE ASME Membership. ASME will automatically activate this complimentary membership for qualified attendees. Please allow approximately four weeks after the conclusion of the conference for your membership to become active. Visit asme.org/membership for more information about the benefits of ASME Membership.

REGISTRATION HOURS AND LOCATION
Registration is in the Mezzanine Foyer on the Mezzanine Level.
Registration Hours:
Tuesday, May 21: 5:00pm–7:00pm
Wednesday, May 22: 7:00am–6:00pm
Thursday, May 23: 7:00am–6:00pm
Friday, May 24: 7:00am–12:30pm

REGISTRATION FEES
Registration Fee includes:
• Admission to all technical sessions
• All scheduled meals
• Symposium program with abstracts
One-day Registration Fee includes admission to the events above for that day.
Guest Registration Fee includes all scheduled meals but does NOT include admission to technical sessions.

INTERNET ACCESS IN THE HOTEL
ASME has negotiated complimentary internet in the sleeping rooms of Planet Hollywood. Wi-Fi is available in the Starbucks on the Casino Level.

HOTEL BUSINESS SERVICES
The hotel business center is open Monday–Friday from 7:00am to 6:00pm and is located on the Mezzanine Level.

EMERGENCY
In case of an emergency in the hotel, dial "5911" for Security or "0" for the hotel operator.

ACCESSIBILITY AND GENERAL QUESTIONS
Whenever possible, we are pleased to accommodate attendees with special needs. Advance notice may be required for certain requests. For on-site assistance related directly to the conference events and for general conference questions, please visit the ASME registration desk. For special needs related to your hotel stay, please visit the Planet Hollywood concierge or front desk.

NAME BADGES
Please wear your name badge at all times; you will need it for admission to all conference functions unless otherwise noted. Your badge also provides a helpful introduction to other attendees.

TAX DEDUCTIBILITY
Expenses of attending a professional meeting, such as registration fees and costs of technical publications, are tax deductible as ordinary and necessary business expenses for U.S. citizens. Please note that tax code changes in recent years have affected the level of deductibility.
SCHEDULE-AT-A-GLANCE

TUESDAY, MAY 21, 2013
Time | Session # | Session | Track | Room
5:00pm–7:00pm | | Registration | | Mezzanine Foyer

WEDNESDAY, MAY 22, 2013
Time | Session # | Session | Track | Room
7:00am–6:00pm | | Registration | | Mezzanine Foyer
7:00am–8:00am | | Continental Breakfast | | Mezzanine Foyer
8:00am–10:15am | | Opening Address w/ Plenaries 1 and 2 | | Celebrity Ballroom 1 & 2
10:15am–10:30am | | Coffee Break | | Mezzanine Foyer
10:30am–12:30pm | 1-1 | General Topics in Validation | General Topics in Validation | Wilshire B
10:30am–12:30pm | 2-1 | Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 1 | Uncertainty Quantification, Sensitivity Analysis, and Prediction | Wilshire A
10:30am–12:30pm | 9-1 | Verification and Validation for Energy, Power, Building, and Environmental Systems | Verification and Validation for Energy, Power, Building, and Environmental Systems and Verification for Fluid Dynamics and Heat Transfer | Sunset 5&6
10:30am–12:30pm | 12-1 | Validation Methods | Validation Methods | Sunset 3&4
12:30pm–1:30pm | | Lunch | | Celebrity Ballroom 4
1:30pm–3:30pm | 1-2 | General Topics in Validation and Validation Methods for Materials Engineering | General Topics in Validation | Sunset 3&4
1:30pm–3:30pm | 2-2 | Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 2 | Uncertainty Quantification, Sensitivity Analysis, and Prediction | Wilshire A
1:30pm–3:30pm | 4-1 | Verification and Validation for Fluid Dynamics and Heat Transfer: Part 1 | Verification and Validation for Fluid Dynamics and Heat Transfer | Wilshire B
1:30pm–3:30pm | 10-1 | Validation Methods for Bioengineering and Case Studies for Medical Devices | Validation Methods for Bioengineering | Sunset 5&6
3:30pm–4:00pm | | Coffee Break | | Mezzanine Foyer
4:00pm–6:00pm | 2-3 | Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 3 | Uncertainty Quantification, Sensitivity Analysis, and Prediction | Wilshire A
4:00pm–6:00pm | 4-2 | Verification and Validation for Fluid Dynamics and Heat Transfer: Part 2 | Verification and Validation for Fluid Dynamics and Heat Transfer | Wilshire B
4:00pm–6:00pm | 5-1 | Validation Methods for Impact and Blast | Validation Methods for Impact and Blast | Sunset 5&6
4:00pm–6:00pm | 11-4 | Panel Session: ASME Committee on Verification and Validation in Computational Modeling of Medical Devices | Standards Development Activities for Verification and Validation | Sunset 3&4

THURSDAY, MAY 23, 2013
Time | Session # | Session | Track | Room
7:00am–6:00pm | | Registration | | Mezzanine Foyer
7:00am–8:00am | | Continental Breakfast | | Mezzanine Foyer
8:00am–10:00am | | Plenary Sessions 3 and 4 | | Celebrity Ballroom 1&2
10:00am–10:30am | | Coffee Break | | Mezzanine Foyer
10:30am–12:30pm | 2-4 | Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 4 | Uncertainty Quantification, Sensitivity Analysis, and Prediction | Wilshire B
10:30am–12:30pm | 3-1 | Validation Methods for Solid Mechanics and Structures: Part 1 | Validation Methods for Solid Mechanics and Structures | Wilshire A
10:30am–12:30pm | 7-1 | Verification for Fluid Dynamics and Heat Transfer | Verification for Fluid Dynamics and Heat Transfer | Sunset 5&6
10:30am–12:30pm | 13-1 | Verification Methods: Part 1 | Verification Methods | Sunset 3&4
12:30pm–1:30pm | | Lunch | | Celebrity Ballroom 4
1:30pm–3:30pm | 2-5 | Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 5 | Uncertainty Quantification, Sensitivity Analysis, and Prediction | Wilshire A
1:30pm–3:30pm | 4-3 | Verification and Validation for Fluid Dynamics and Heat Transfer: Part 3 | Verification and Validation for Fluid Dynamics and Heat Transfer | Wilshire B
1:30pm–3:30pm | 7-2 | Verification for Fluid Dynamics and Heat Transfer: Part 2 | Verification for Fluid Dynamics and Heat Transfer | Sunset 5&6
1:30pm–3:30pm | 13-2 | Verification Methods: Part 2 | Verification Methods | Sunset 3&4
3:30pm–4:00pm | | Coffee Break | | Mezzanine Foyer
4:00pm–6:00pm | 4-4 | Verification and Validation for Fluid Dynamics and Heat Transfer: Part 4 | Verification and Validation for Fluid Dynamics and Heat Transfer | Wilshire B
4:00pm–6:00pm | 6-1 | Verification and Validation for Simulation of Nuclear Applications: Part 1 | Verification and Validation for Simulation of Nuclear Applications | Sunset 3&4
4:00pm–6:00pm | 9-2 | Verification and Validation for Energy, Power, Building, and Environmental Systems and Verification for Fluid Dynamics and Heat Transfer | Verification and Validation for Energy, Power, Building, and Environmental Systems and Verification for Fluid Dynamics and Heat Transfer | Wilshire A
4:00pm–6:00pm | 10-2 | Validation Methods for Bioengineering | Validation Methods for Bioengineering | Sunset 5&6

FRIDAY, MAY 24, 2013
Time | Session # | Session | Track | Room
7:00am–12:30pm | | Registration | | Mezzanine Foyer
7:00am–8:00am | | Continental Breakfast | | Mezzanine Foyer
8:00am–10:00am | 3-2 | Validation Methods for Solid Mechanics and Structures: Part 2 | Validation Methods for Solid Mechanics and Structures | Wilshire B
8:00am–10:00am | 11-1 | Government Related Activities | Standards Development Activities for Verification and Validation | Sunset 3&4
8:00am–10:00am | 11-2 | Development of V&V Community Resources: Part 1 | Standards Development Activities for Verification and Validation | Wilshire A
10:00am–10:30am | | Coffee Break | | Mezzanine Foyer
10:30am–12:30pm | 6-2 | Verification and Validation for Simulation of Nuclear Applications: Part 2 | Verification and Validation for Simulation of Nuclear Applications | Sunset 3&4
10:30am–12:30pm | 11-3 | Standards Development Activities | Standards Development Activities for Verification and Validation | Sunset 5&6
10:30am–12:30pm | 11-5 | Development of V&V Community Resources: Part 2 | Standards Development Activities for Verification and Validation | Wilshire A
KEYNOTE AND PLENARY SESSIONS

Wednesday, May 22
8:00am–10:15am
Celebrity Ballroom, Mezzanine Level

Opening Address: The Need for Global Verification and Validation Standards – Emerging Trends and a Personal Perspective

Kenneth R. Balkey, P.E.
Consulting Engineer, Westinghouse Electric Company, University of Pittsburgh

Kenneth R. Balkey is senior vice president, ASME Standards and Certification (2011–2014) and chair, ASME Council on Standards and Certification (2011–2014). He is a Consulting Engineer at Westinghouse Electric Company in Pittsburgh, PA, with 41 years of service in the nuclear power industry. Mr. Balkey advises on technology developments related to Codes and Standards and provides consultation across a range of technical fields for current and new reactors worldwide. Over his lengthy career he has performed and directed reliability and risk evaluations for nuclear and non-nuclear structures, systems, and components. He has authored 150 publications and documents related to risk evaluations of the integrity of piping, vessels, and structures, and the performance of components using state-of-the-art probabilistic assessment techniques. He holds two patents related to reactor pressure vessel integrity and risk-informed inspection of heat exchangers. His honors include the ASME Melvin R. Green Codes and Standards Medal (2008) and several other awards from ASME, Westinghouse, and other institutions. He was elevated to the grade of ASME Fellow in 2010. Mr. Balkey earned B.S. and M.S. degrees in Mechanical Engineering at the University of Pittsburgh. Since January 2010 he has served as an adjunct faculty lecturer in the University of Pittsburgh Nuclear Engineering Program, co-leading the delivery of a graduate course, "Case Studies in Nuclear Codes and Standards."

Plenary 1: Enabling Faster, Better Medical Device Development and Evaluation with Modeling and Simulation

Tina Morrison, Ph.D.
Food and Drug Administration

Dr. Tina Morrison is a mechanical engineer who studied cardiovascular biomechanics for two years as a postdoctoral fellow at Stanford University. Her research focused on quantifying the in vivo deformations of the thoracic aorta from CT imaging. She continues her research efforts at the Food and Drug Administration's Center for Devices and Radiological Health (CDRH) in the Office of Device Evaluation (ODE). Currently she is the Chief Advisor of Computational Modeling for the ODE and is leading the Regulatory Review of Computational Modeling working group at CDRH. She is energetic about advancing regulatory science through modeling and simulation because she believes the future of medical device design and evaluation, and thus enhanced patient care, lies with computation and enhanced visualization.

She has been a scientific reviewer, principal investigator on two projects, and a technical expert on another since 2008. She is a co-principal investigator on a Critical Path Initiative (CPI)-funded project titled "Leveraging the Simulation-Based Engineering and Medical Imaging Technology Revolutions for Medical Devices", where she continues to interact with industry and academic experts through workshops on Computer Methods for Medical Devices. Additionally, she is the principal investigator on a CPI project titled "Characterization of Human Aortic Anatomy Project (CHAP)", a large multicenter national study examining the anatomical parameters of more than 10,000 diseased aortas. She also provides technical expertise regarding finite element analysis and medical imaging for another CPI project titled "Assessment of plaque composition, dynamic biomechanics, and therapeutic outcomes in subjects implanted with endovascular devices (ASPECT)". She received her Ph.D. in Theoretical and Applied Mechanics from Cornell University in 2006 and her Master's in Mechanical Engineering in 2002 from the University of Connecticut.

Plenary 2: Implementation of QMU for Model-Based Spacecraft Qualification

Lee Peterson, Ph.D.
Principal Engineer, Jet Propulsion Laboratory

Dr. Lee D. Peterson is a Principal Engineer in the Mechanical Systems Engineering, Fabrication and Test Division at the Jet Propulsion Laboratory. He received his SB, SM, and PhD in Aeronautics and Astronautics from the Massachusetts Institute of Technology. From 1987 to 1989, he was a Member of the Technical Staff at Sandia National Laboratories in Albuquerque. He was an Assistant Professor of Aeronautics and Astronautics at Purdue University from 1989 to 1991. From 1991 to 2008, he was on the faculty of the Aerospace Engineering Sciences department at the University of Colorado at Boulder, where he held the Gary L. Roubos Endowed Professorship and served as Director of the Center for Aerospace Structures and as Department Chair. He has spent most of the past three decades learning to not trust models of aerospace mechanical systems. As an academic, he puzzled over exotic problems like nonlinear low-gravity fuel slosh and the microdynamic stability of nanometer-precise deployable telescope structures. Since going to JPL in 2008, he has been working to develop and transition technology for model-based flight qualification to project practice.

Thursday, May 23
8:00am–10:00am
Celebrity Ballroom, Mezzanine Level

Plenary 3

Christopher Deeney, Ph.D.
National Nuclear Security Administration (NNSA)

Christopher Deeney is the Assistant Deputy Administrator for Stockpile Stewardship at the U.S. Department of Energy's National Nuclear Security Administration (NNSA). He is responsible for a $1.7B portfolio that encompasses three offices within NNSA's Office of Defense Programs: Defense Science; Inertial Confinement Fusion; and Advanced Simulation and Computing and Institutional R&D Programs.

Dr. Deeney was born and lived in Hamilton, Scotland, graduating in June 1984 with a First Class Honours B.Sc. in Physics from the University of Strathclyde, Glasgow. From October 1984 to October 1987, he completed his Ph.D. research on the formation of hotspots and electron beams in gas puff Z pinches and plasma foci at Imperial College, London. Dr. Deeney was a postdoctoral researcher at the University of Stuttgart, Germany, until May 1988, when he joined Physics International Co. in California. At Physics International he became the Program Manager for Z-pinch-based plasma radiation source development, for x-ray laser research, and for the application of pulsed corona technologies to pollution control. In 1991, he was promoted to Department Manager of the Plasma Physics Group. In February 1995, Dr. Deeney joined Sandia National Laboratories (SNL) as an experimenter on the 8-MA Saturn and 20-MA Z pulsed-power generators, where he did pioneering research on high wire number arrays and nested wire arrays. In 2000, he became a Department Manager at SNL with responsibilities in areas including Z-pinch development and applications of pulsed power to material dynamics studies, and was the lead on developing experimental plutonium capabilities on the Z Facility. In 2005, he became a Senior Manager at Sandia responsible for the Pulsed Power Technology Group, managing five departments and a $25M program. In August 2006, he joined the NNSA as a member of the Senior Executive Service in the position of Director, Office of Defense Science and, since 2010, as Assistant Deputy Administrator for Stockpile Stewardship. To date, Dr. Deeney has published over 120 journal papers on high energy density physics, x-ray lasers, spectroscopy, x-ray diagnostics, and dynamic material properties.

Plenary 4: Perspectives for V&V in the Licensing Process for NPP

Alessandro Petruzzi, Ph.D.
Deputy Manager, Nuclear Research Group of San Piero a Grado

Dr. Petruzzi's main area of interest is in methods for uncertainty and sensitivity analysis applied to best-estimate thermal-hydraulics system codes. He received his PhD at the University of Pisa in 2008, preparing his thesis on the development and application of methodologies for sensitivity analysis and uncertainty evaluation of the results of best-estimate system codes applied in nuclear technology. He was a visiting scholar at Penn State University from 2004 to 2005, working under the U.S. Department of Energy NEER Grant for the "Development of Methodology for Early Detection of BWR Instabilities". At the same time he worked under the US NRC Purdue Sub-Contract for the "Assessment of the Coupled Code TRACE/PARCS for BWR Applications".

His current position is Deputy Manager of the Nuclear Research Group of San Piero a Grado (GRNSPG). He also serves as president of the nuclear and industrial engineering company N.IN.E. He is the founder and director of the seminar-course 3D S.UN.COP (Scaling, Uncertainty and 3D COuPled Code Calculation), organized in nine editions in different countries.

His activity has included the application and analysis of best-estimate computer codes for licensing applications of Nuclear Power Plants; recently, he has served as project manager of GRNSPG for the preparation of the FSAR Chapter 15 of the Argentinian Atucha-2 NPP. Additional fields of work include thermal-hydraulic system codes and 3D TH-NK coupled codes, uncertainty evaluation, and safety analysis.
TECHNICAL SESSIONS

WEDNESDAY, MAY 22

TRACK 1 General Topics in Validation

1-1 General Topics in Validation
Wilshire B
10:30am–12:30pm
Session Chair: Christopher J. Roy, Virginia Tech, Blacksburg, VA, United States
Session Co-Chair: Richard Schultz, Texas A&M University, College Station, TX, United States

TRACK 2 Uncertainty Quantification, Sensitivity Analysis, and Prediction

2-1 Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 1
Wilshire A
10:30am–12:30pm
Session Chair: David Moorcroft, Federal Aviation Administration, Oklahoma City, OK, United States
Session Co-Chair: Ben Thacker, Southwest Research Institute, San Antonio, TX, United States
Model-Based Verification and Real-Time Validation of
Automotive Controller Software
Technical Presentation. V&V2013-2160
Mahdi Shahbakhti, Michigan Tech. University, Houghton, MI,
United States, Kyle Edelberg, Selina Pan, Karl Hedrick,
University of California Berkeley, Berkeley, CA, United States,
Andreas Hansen, Hamburg University of Technology, Hamburg,
Germany, Jimmy Li, University of California Los Angeles, Los
Angeles, CA, United States
On Stochastic Model Interpolation and Extrapolation Methods
for Robust Design
Technical Presentation. V&V2013-2276
Zhenfei Zhan, University of Michigan, Livonia, MI, United
States, Yan Fu, Ford, Canton, MI, United States, Ren-jye Yang,
Ford Research & Advanced Engineering, Dearborn, MI, United
States
Uncertainty Quantification in Reduced-order Model of a
Resonant Nonlinear Structure
Technical Presentation. V&V2013-2357
Rajat Goyal, Anil K. Bajaj, Purdue University, West Lafayette, IN,
United States
Planar Mechanisms Analysis with MATLAB and SolidWorks
Technical Presentation. V&V2013-2198
Dan Marghitu, Hamid Ghaednia, Auburn University, Auburn,
United States
Calculation of the Error Transport Equation Source Term using
a Weighted Spline Curve Fit
Technical Presentation. V&V2013-2358
Don Parsons, West Virginia University, Ridgeley, United States,
Ismail Celik, West Virginia University, Morgantown, WV, United
States
Validation Issues in Advanced Numerical Modeling: Pitting
Corrosion
Technical Presentation. V&V2013-2211
Nithyanand Kota, SAIC, Arlington, VA, United States, Virginia
Degiorgi, Siddiq Qidwai, Alexis Lewis, Naval Research
Laboratory, Washington, DC, United States
Quantification of Fragmentation Energy from Cased Munitions
using Sensitivity Analysis
Technical Presentation. V&V2013-2408
John Crews, Evan McCorkle, Applied Research Associates,
Raleigh, United States
On the Software User Action Effect to the Predictive Capability
of Computational Models
Technical Presentation. V&V2013-2231
Yuanzhang Zhang, X.E. Liu, Institute of Structural Mechanics,
CAEP, Mianyang, Sichuan, China
A Stochastic Paradigm for Validation of Patient-Specific
Biomechanical Simulations
Technical Presentation. V&V2013-2422
Sethuraman Sankaran, Dawn Bardot, Heartflow Inc., Redwood
City, United States
Verification of Test-Based Identified Simulation Models
Technical Presentation. V&V2013-2269
Alexander Schwarz, Matthias Behrendt, KIT-Karlsruhe Institute
of Technology, Karlsruhe, Baden-Württemberg, Germany, Albert
Albers, IPEK at KIT Karlsruher Institute of Technology,
Karlsruhe, Baden-Württemberg, Germany
Uncertainty Analysis in HTGR Neutronics, Thermalhydraulics
and Depletion Modelling
Technical Presentation. V&V2013-2423
Bismark Tyobeka, International Atomic Energy Agency, Vienna,
Austria, Kostadin Ivanov, Pennsylvania State University,
University Park, PA, United States, Frederik Reitsma,
Steenkampskraal Thorium Limited, Centurion, South Africa
CFD and Test Results Comparison of a High Head Centrifugal
Stage at Various Locations
Technical Presentation. V&V2013-2352
Syed Fakhri, Dresser-Rand Company, Olean, United States
TRACK 9 Verification and Validation for Energy, Power, Building, and Environmental Systems and Verification for Fluid Dynamics and Heat Transfer

9-1 Verification and Validation for Energy, Power, Building, and Environmental Systems
Sunset 5&6
10:30am–12:30pm
Session Chair: Dimitri Tselepidakis, ANSYS, Inc., Lebanon, NH, United States

TRACK 12 Validation Methods

12-1 Validation Methods
Sunset 3&4
10:30am–12:30pm
Session Chair: David Quinn, Veryst Engineering, Cambridge, MA, United States
Session Co-Chair: Koji Okamoto, The University of Tokyo, Tokyo, Japan

Entropy Generation as a Validation Criterion for Flow and Heat Transfer Models
Technical Presentation. V&V2013-2230
Heinz Herwig, Yan Jin, TU Hamburg-Harburg, Hamburg, Germany
An Effective Power Factor Billing Method
Technical Presentation. V&V2013-2118
Charles N. Ndungu, Kenya Power, Nairobi, Nairobi, Kenya
Validation Based on a Probabilistic Measure of Response
Technical Presentation. V&V2013-2282
Daniel Segalman, Lara Bauman, Sandia National Laboratories,
Livermore, CA, United States, Thomas Paez, Thomas Paez
Consulting, Durango, CO, United States
Numerical Modeling of 3D Unsteady Flow around a Rotor Blade
of a Horizontal Axis Wind Turbine -Rutland 503
Technical Presentation. V&V2013-2139
Belkheir Noura, University of Khemis-Miliana, Khemis-Miliana,
Algeria
Assessment of Model Validation Approaches using the Method
of Manufactured Universes
Technical Presentation. V&V2013-2329
Christopher J. Roy, Ian Voyles, Virginia Tech, Blacksburg, VA,
United States
The Validation of BuildingEnergy and the Performance
Evaluation of Thermochromic Window using Energy Saving
Index
Technical Presentation. V&V2013-2182
Hong Ye, Linshuang Long, Haitao Zhang, University of Science
and Technology of China, Hefei, Anhui, China, Yanfeng Gao,
Shanghai Institute of Ceramics, CAS, Shanghai, Shanghai,
China
Validation of Vehicle Drive Systems with Real-Time Simulation
on High-Dynamic Test Benches
Technical Presentation. V&V2013-2391
Albert Albers, Martin Geier, Steffen Jaeger, Christian Stier,
Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
Application of a Multivariate Approach to Quantify Validation
Uncertainty and Model Comparison Error for Industrial CFD
Applications: Multi-Phase Liquid-Solids Mixing in Pulse-Jet
Mixing Vessels
Technical Presentation. V&V2013-2266
Michael Kinzel, Leonard Peltier, Andri Rizhakov, Jonathan
Berkoe, Bechtel National, Inc., Reston, VA, United States,
Brigette Rosendall, Mallory Elbert, Kellie Evon, Bechtel
National Advanced Simulation & Analysis, Reston, VA, United
States, Kelly Knight, Bechtel National Inc, Frederick, MD,
United States
Reliability-Based Approach to Model Validation under Uncertainty
Technical Presentation. V&V2013-2399
Sankaran Mahadevan, You Ling, Chenzhao Li, Joshua G.
Mullins, Vanderbilt University, Nashville, TN, United States,
Shankar Sankararaman, NASA Ames Research Center, Moffett
Field, CA, United States
Descriptive Management Method of Engineering Analysis
Modeling and its Operation Process toward Verification and
Validation in Multi-Disciplinary System Design
Technical Presentation. V&V2013-2402
Yutaka Nomaguchi, Haruki Inoue, Kikuo Fujita, Osaka
University, Osaka, Japan
Examining the Dampening Effect of Pipe-Soil Interaction at the
Ends of a Subsea Pipeline Free Span Using Coupled Eulerian-Lagrangian Analysis and Computational Design of Experiments
Technical Presentation. V&V2013-2346
Marcus Gamino, Samuel Abankwa, Ricardo Silva, Leonardo
Chica, Paul Yambao, Raresh Pascali, Egidio Marotta, University
of Houston, Houston, United States, Carlos Silva, Juan-Alberto
Rivas-Cardona, FMC Technologies, Houston, TX, United States
Verification and Validation of Building Energy Models
Technical Presentation. V&V2013-2418
Yuming Sun, Godfried Augenbroe, Georgia Institute of
Technology, Atlanta, United States
TRACK 1 General Topics in Validation

1-2 General Topics in Validation and Validation Methods for Materials Engineering
Sunset 3&4
1:30pm–3:30pm
Session Chair: Ryan Crane, ASME, New York, NY, United States

TRACK 2 Uncertainty Quantification, Sensitivity Analysis, and Prediction

2-2 Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 2
Wilshire A
1:30pm–3:30pm
Session Chair: Donald Simons, TASC, Inc., El Segundo, CA, United States
Session Co-Chair: Dimitri Tselepidakis, ANSYS, Inc., Lebanon, NH, United States
Uncertainties in Acoustical Measurement and Estimation and
their Effects in Community Noise
Technical Presentation. V&V2013-2120
Robert A. Putnam, RAP Acoustical Engineers, Winter Springs,
FL, United States, Richard J. Peppin, Engineers for Change,
Inc., Rockville, MD, United States
Analytical Modeling of the Forces and Surface Roughness in
Grinding Process with Microstructured Wheels
Technical Presentation. V&V2013-2360
Taghi Tawakoli, Hochschule Furtwangen University, Villingen-Schwenningen, Baden-Württemberg, Germany, Javad Akbari, Sharif University of Technology, Tehran, Iran, Ali Zahedi, Sharif University of Technology, Villingen-Schwenningen, Baden-Württemberg, Germany
Experiment Design: A Key Ingredient in Defining a Software
Validation Matrix
Technical Presentation. V&V2013-2428
Richard Schultz, Yassin Hassan, Texas A&M Univeristy, College
Station, TX, United States, Edwin Harvego, Consulting
Engineer, Idaho Falls, ID, United States, Glenn McCreery, Idaho
State University, Idaho Falls, ID, United States, Ryan Crane,
ASME, New York, NY, United States
Numerical Estimation of Strain Gauge Measurement
Uncertainties
Technical Presentation. V&V2013-2283
Jérémy Arpin-Pont, S. Antoine Tahan, École de Technologie
Supérieure, Montreal, QC, Canada, Martin Gagnon, Denis
Thibault, Research Center of Hydro-Québec (IREQ), Varennes,
QC, Canada, Andre Coutu, Andritz Hydro Ltd, Pointe-Claire,
QC, Canada
Verification and Validation of a Penetration Model for Ballistic
Threats in Vehicle Design
Technical Presentation. V&V2013-2429
David Riha, John McFarland, James Walker, Sidney Chocron,
Southwest Research Institute, San Antonio, TX, United States
Multi-fidelity Training Data in Uncertainty Analysis of Advanced
Simulation Models
Technical Presentation. V&V2013-2366
Oleg Roderick, Mihai Anitescu, Argonne National Laboratory,
Argonne, IL, United States
Dynamic Stability of Wings for Drone Aircraft Subjected to
Parametric Excitation
Technical Presentation. V&V2013-2158
Iyd Maree, TU-Freiberg, Erbil, Iraq, Bast Jürgen Lothar, TU Bergakademie Freiberg, Freiberg, Germany, Nazhad Ahmad Hussein, University of Salahaddine, Erbil, Iraq
Verification, Validation and Uncertainty Quantification of
Circulating Fluidized Bed Riser Simulation
Technical Presentation. V&V2013-2385
Aytekin Gel, ALPEMI Consulting, LLC, Phoenix, AZ, United
States, Tingwen Li, URS Corporation, Morgantown, WV, United
States, Balaji Gopalan, West Virginia University Research
Corporation / NETL, Morgantown, WV, United States, Mehrdad
Shahnam, Madhava Syamlal, DOE-NETL, Morgantown, WV,
United States
Dynamic Modeling and Validation of Angle Sensor using
Artificial Muscle Material
Technical Presentation. V&V2013-2206
Yonghong Tan, Ruili Dong, College of Mechanical and Electronic
Engineering, Shanghai Normal University, Shanghai, China
Single-Grid Discretization Error Estimation for Computational
Fluid Dynamics
Technical Presentation. V&V2013-2327
Tyrone Phillips, Christopher J. Roy, Virginia Tech, Blacksburg,
VA, United States
Verification and Validation of Material Models using a New
Aggregation Rule of Evidence
Technical Presentation. V&V2013-2370
Shahab Salehghaffari, Northwestern University, Evanston, IL,
United States
Certification of Simulated FEA Behavior
Technical Presentation. V&V2013-2412
A Van Der Velden, Dassault Systemes, Atlanta, GA, United
States, George Haduch, Dassault Systemes SIMULIA Corp.,
Pawtucket, RI, United States
TRACK 4 Verification and Validation for Fluid Dynamics and Heat Transfer

4-1 Verification and Validation for Fluid Dynamics and Heat Transfer: Part 1
Wilshire B
1:30pm–3:30pm
Session Chair: Hyung Lee, Bettis Laboratory, West Mifflin, PA, United States
Session Co-Chair: Koji Okamoto, The University of Tokyo, Tokyo, Japan

TRACK 10 Validation Methods for Bioengineering

10-1 Validation Methods for Bioengineering and Case Studies for Medical Devices
Sunset 5&6
1:30pm–3:30pm
Session Chair: Carl Popelar, Southwest Research Institute, San Antonio, TX, United States
Session Co-Chair: Tina Morrison, Food and Drug Administration, Silver Spring, MD, United States
Validation Data and Model Development for Nuclear Fuel
Assembly Response to Seismic Loads
Technical Presentation. V&V2013-2334
Noah Weichselbaum, Philippe Bardet, Morteza Abkenar, Oren
Breslouer, Majid Manzari, Elias Balaras, George Washington
University, Washington, DC, United States, W. David Pointer,
National Laboratory, Washington, DC, United States, Elia
Merzari, Argonne National Laboratory, Chicago, IL, United
States
Designing a New ACL Graft Fixation Applicable for Low Density
Bones
Technical Presentation. V&V2013-2156
Mahmoud Chizari, Brunel University, London, United Kingdom,
Mohammad Alrashidi, College of Technological Studies, Kuwait
The Effect of Quadriceps Muscles Forces over Anterior
Cruciate Ligament: A Validation Study
Technical Presentation. V&V2013-2175
Ariful Bhuiytan, Stephen Ekwaro-Osire, Texas Tech University,
Lubbock, TX, United States, Javad Hashemi, Florida Atlantic
University, Boca Raton, FL, United States
An Overview of the First Workshop on Verification and
Validation of CFD for Offshore Flows
Technical Presentation. V&V2013-2306
Luis Eca, Instituto Superior Tecnico, Lisboa, Portugal,
Guilherme Vaz, MARIN, Wageningen, Netherlands, Owen
Oakley, Consultant, Alamo, CA, United States
Adapting NASA-STD-7009 to Assess the Credibility of
Biomedical Models and Simulations
Technical Presentation. V&V2013-2209
Lealem Mulugeta, Universities Space Research Association,
Division of Space Life Sciences, Houston, United States, Marlei
Walton, Wyle Science, Technology & Engineering Group,
Houston, TX, United States, Emily Nelson, Jerry Myers, NASA
Glenn Research Center, Cleveland, OH, United States
Impact of a Gaseous Jet on the Capillary Free Surface of a
Liquid: Experimental and Numerical Study
Technical Presentation. V&V2013-2311
Stephane Gounand, Benjamin Cariteau, Olivier Asserin, Xiaofei
Kong, Commissariat A l’Energie Atomique, Gif-sur-Yvette,
France, Vincent Mons, Ecole Polytechnique, Palaiseau, France,
Philippe Gilles, Areva NP, Paris La Defense, France
Development and Experimental Validation of Advanced Non-Linear, Rate-Dependent Constitutive Models for Polymeric
Materials
Technical Presentation. V&V2013-2427
David Quinn, Jorgen Bergstrom, Nagi Elabassi, Veryst
Engineering, Needham, MA, United States
CFD Validation vs PIV Data Using Explicit PIV Filtering of CFD
Results
Technical Presentation. V&V2013-2340
Laura Campo, Ivan Bermejo-Moreno, John Eaton, Stanford
University, Stanford, CA, United States
Validation of Fatigue Failure Results of a Newly Designed
Cementless Femoral Stem Based on Anthropometric Data of
Indians
Technical Presentation. V&V2013-2219
B.R. Rawal, Shri G.S. Institute of Technology and Science, Indore, India, Rahul Ribeiro, Naresh Bhatnagar, Department of
Indore, India, Rahul Ribeiro, Naresh Bhatnagar, Department of
Mechanical Engineering, New Delhi, India
Validation and Uncertainty Quantification of Large scale CFD
Models for Post Combustion Carbon Capture
Technical Presentation. V&V2013-2361
Emily Ryan, William Lane, Boston University, Boston, United
States, Curt Storlie, Joanne Wendelberger, Los Alamos
National Laboratory, Los Alamos, NM, United States,
Christopher Montgomery, URS Corporation, Morgantown, WV,
United States
Expansion and Application of Structure Function Technology for
Verifying and Exploring Hypervelocity Simulation Calculations
Technical Presentation. V&V2013-2393
Scott Runnels, Len Margolin, Los Alamos National Laboratory,
Los Alamos, United States
TRACK 2 Uncertainty Quantification, Sensitivity Analysis, and Prediction

2-3 Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 3
Wilshire A
4:00pm–6:00pm
Session Chair: Scott Doebling, Los Alamos National Laboratory, Los Alamos, NM, United States
Session Co-Chair: Steve Weinman, ASME, New York, NY, United States

TRACK 4 Verification and Validation for Fluid Dynamics and Heat Transfer

4-2 Verification and Validation for Fluid Dynamics and Heat Transfer: Part 2
Wilshire B
4:00pm–6:00pm
Session Chair: Christopher Freitas, Southwest Research Institute, San Antonio, TX, United States
Verification and Validation Studies on In-house Finite Element
Simulations of Streamfunction, Heatfunction and Heat Flux for
Natural Convection within Irregular Domains
Technical Presentation. V&V2013-2207
Tanmay Basak, Pratibha Biswal, Chandu Babu, Indian Institute of Technology Madras, Chennai, India
Variation Studies on Analyst and Solver Settings Contribution
to CFD Analysis Uncertainty
Technical Presentation. V&V2013-2265
Daniel Eilers, Fisher Controls International LLC, Marshalltown,
IA, United States
Calibration and Validation of Computer Models for Radiative
Shock Experiments
Technical Presentation. V&V2013-2213
Jean Giorla, Commissariat A l’Energie Atomique, Arpajon,
France, Josselin Garnier, Laboratoire J-L Lions, Universite Paris
VII, Paris, France, Emeric Falize, Berenice Loupias, Clotilde
Busschaert, CEA/DAM/DIF, Arpajon, France, Michel Koenig,
Alessandra Ravasio, LULI, Palaiseau, France, Claire Michaut,
LUTH, Université Paris-Diderot, Meudon, France
Formalizing and Codifying Uncertainty Quantification Based
Model Validation Procedures for Naval Shock Applications
Technical Presentation. V&V2013-2268
Kirubel Teferra, Michael Shields, Adam Hapij, Raymond
Daddazio, Weidlinger Associates, Inc., New York, NY, United
States
Non-intrusive Parametric Uncertainty Quantification for
Reacting Multiphase Flows in Coal Gasifiers
Technical Presentation. V&V2013-2303
Aytekin Gel, ALPEMI Consulting, LLC, Phoenix, AZ, United
States, Tingwen Li, URS Corporation, Morgantown, WV, United
States, Mehrdad Shahnam, DOE-NETL, Morgantown, WV,
United States, Chris Guenther, National Energy Technology
Laboratory (NETL), Morgantown, WV, United States
High Fidelity Numerical Methods using Large Eddy Simulation
Compared to Reynolds Average Navier-Stokes
Technical Presentation. V&V2013-2316
Ayoub Glia, Masdar Institute of Technology, Abu Dhabi, United
Arab Emirates
Validation and Uncertainty Quantification of an Ethanol Spray
Evaporation Model
Technical Presentation. V&V2013-2319
Bastian Rauch, Patrick Le Clercq, DLR - Institute of
Combustion Technology, Stuttgart, Germany, Raffaela Calabria,
Patrizio Massoli, CNR - Istituto Motori, Naples, Italy
Use of Information-Theoretic Criteria in Smoothing Full-Field
Displacement Data
Technical Presentation. V&V2013-2305
Sankara Subramanian, Indian Institute of Technology Madras,
Chennai, Tamil Nadu, India
Effect of Interacting Uncertainties on Validation
Technical Presentation. V&V2013-2321
Arun Karthi Subramaniyan, GE Global Research, Niskayuna,
NY, United States, Liping Wang, GE Corporate Res & Develop,
Niskayuna, NY, United States, Gene Wiggs, GE Aviation,
Cincinnati, OH, United States
Validation of Transient Bifurcation Structures in Physical Vapor
Transport Modeling
Technical Presentation. V&V2013-2325
Patrick Tebbe, Joseph Dobmeier, Minnesota State University,
Mankato, MN, United States
Experimental Data to Validate Multiphase Flow Code Interface
Tracking Methods
Technical Presentation. V&V2013-2330
Matthieu A. Andre, Philippe Bardet, The George Washington
University, Washington, DC, United States
Sensitivity and Uncertainty of Time-Domain Boiling Water
Reactor Stability
Technical Presentation. V&V2013-2323
Tomasz Kozlowski, University of Illinois, Urbana, IL, United
States, Ivan Gajev, Royal Institute of Technology,
Stockholm, Sweden
TRACK 5 Validation Methods for Impact and Blast

5-1 Validation Methods for Impact and Blast
Sunset 5&6
4:00pm–6:00pm
Session Chair: Richard Lucas, ASME, New York, NY, United States

TRACK 11 Standards Development Activities for Verification and Validation

11-4 Panel Session: ASME Committee on Verification and Validation in Computational Modeling of Medical Devices
Sunset 3&4
4:00pm–6:00pm
Session Chair: Carl Popelar, Southwest Research Institute, San Antonio, TX, United States
Session Co-Chairs: Tina Morrison, Food and Drug Administration, Silver Spring, MD, United States; Andrew Rau, Exponent, Inc., Philadelphia, PA, United States; Ryan Crane, ASME, New York, NY, United States
A Wavelet-Based Time-Frequency Metric for Validation of
Computational Models for Shock Problems
Technical Presentation. V&V2013-2232
Michael Shields, Kirubel Teferra, Adam Hapij, Raymond
Daddazio, Weidlinger Associates, Inc., New York, NY, United
States
V&V40 Verification & Validation in Computational Modeling of
Medical Devices Panel Session
Technical Presentation. V&V2013-2401
Carl Popelar, Southwest Research Institute, San Antonio, TX,
United States
High Fidelity Modeling of Human Head under Blast Loading:
Development and Validation
Technical Presentation. V&V2013-2256
Nithyanand Kota, Science Applications International
Corporation (SAIC), La Plata, MD, United States, Alan
Leung, Amit Bagchi, Siddiq Qidwai, Naval Research Lab,
Washington, DC, United States
Application of a Standard Quantitative Comparison Method to
Assess a Full Body Finite Element Model in Regional Impacts
Technical Presentation. V&V2013-2345
Nicholas A. Vavalle, Joel D. Stitzel, F. Scott Gayzik, Wake
Forest University School of Medicine, Winston-Salem, NC,
United States, Daniel P. Moreno, Wake Forest University School
of Medicine, Vienna, VA, United States
An Independent Verification and Validation Plan for DTRA
Application Codes
Technical Presentation. V&V2013-2353
David Littlefield, University of Alabama At Birmingham,
Birmingham, AL, United States, Photios Papados, US Army
Engineer Research and Development Center, Vicksburg, MS,
United States, Jacqueline Bell, Defense Threat Reduction
Agency, Fort Belvoir, VA, United States
Analytical and Numerical Analysis of Energy Absorber
Hexagonal Honeycomb Core with Cut-Away Sections
Technical Presentation. V&V2013-2390
Shabnam Sadeghi-Esfahlani, Hassan Shirvani, Anglia Ruskin
University, Chelmsford, United Kingdom
Computational Simulation and Experimental Study of Plastic
Deformation In A36 Steel During High Velocity Impact
Technical Presentation. V&V2013-2426
Brendan O’Toole, Mohamed Trabia, Jagadeep Thota, Deepak
Somasundaram, Richard Jennings, Shawoon Roy, University of
Nevada Las Vegas, Las Vegas, United States, Steven Becker,
Edward Daykin, Robert Hixson, Timothy Meehan, Eric
Machorro, National Security Technologies, LLC, North Las
Vegas, NV, United States
THURSDAY, MAY 23

TRACK 2 Uncertainty Quantification, Sensitivity Analysis, and Prediction

2-4 Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 4
Wilshire B
10:30am–12:30pm
Session Chair: Tina Morrison, Food and Drug Administration, Silver Spring, MD, United States
Session Co-Chair: Laura McNamara, Sandia National Laboratories, Albuquerque, NM, United States

TRACK 3 Validation Methods for Solid Mechanics and Structures

3-1 Validation Methods for Solid Mechanics and Structures
Wilshire A
10:30am–12:30pm
Session Chair: David Moorcroft, Federal Aviation Administration, Oklahoma City, OK, United States
Session Co-Chair: Danny Levine, Zimmer, Inc., Warsaw, IN, United States
A Study into the Predictability of Thermal Expansion for Interior
Plastic Trim Components using Different Non-Linear Virtual
(CAE) Tools and Their Accuracy
Technical Presentation. V&V2013-2341
Jacob Cheriyan, Jeff Webb, Ford Motor Company, Dearborn,
MI, United States, Meraj Ahmed, Ram Iyer, Eicher Engineering
Solutions Inc., Farmington Hills, MI, United States
A Simple Validation Metric for the Comparison of Uncertain
Model and Test Results
Technical Presentation. V&V2013-2194
Ben Thacker, Southwest Research Institute, San Antonio, TX,
United States, Thomas Paez, Thomas Paez Consulting,
Durango, CO, United States
Validation of Experimental Modal Analysis of Steering Test
Fixture
Technical Presentation. V&V2013-2378
Basem Alzahabi, Kettering University, Flint, MI, United States
Efficient Uncertainty Estimation using Latin Hypercube
Sampling
Technical Presentation. V&V2013-2222
Chengcheng Deng, Tsinghua University, Beijing, China, Weili
Liu, China Institute of Atomic Energy, Beijing, China, Brian
Hallee, Qiao Wu, Oregon State University, Corvallis, OR, United
States
Verification and Validation of Load Reconstruction Technique
Augmented by Finite Element Analysis
Technical Presentation. V&V2013-2417
Deepak Gupta, Konecranes, Inc. (Nuclear Division), New Berlin,
WI, United States, Anoop Dhingra, University of Wisconsin Milwaukee, Milwaukee, WI, United States
Study on the Approaches for Interval-Bound Estimation Based
on Data Sample of Small Size
Technical Presentation. V&V2013-2223
Z.P. Shen, S.F. Xiao, X.E. Liu, S.Q. Hu, Institute of Structural
Mechanics, CAEP, Mianyang, China
Verification, Calibration and Validation of Crystal Plasticity
Constitutive Model for Modeling Spall Damage
Technical Presentation. V&V2013-2421
Kapil Krishnan, Pedro Peralta, Arizona State University, Tempe,
AZ, United States
QUICKER: Quantifying Uncertainty in Computational Knowledge Engineering Rapidly: A Faster Sampling Algorithm for Uncertainty Quantification
Technical Presentation. V&V2013-2235
Adam Donato, Rangarajan Pitchumani, Virginia Tech,
Blacksburg, VA, United States
Uncertainties in Structural Collapse Mitigation
Technical Presentation. V&V2013-2196
Shalva Marjanishvili, Brian Katz, Hinman Consulting Engineers,
Inc., San Francisco, CA, United States
Dynamic Sensitivity Analysis and Uncertainty Propagation
Supporting Verification and Validation
Technical Presentation. V&V2013-2254
John Doty, University of Dayton, Dayton, OH, United States
Effects of Statistical Variability of Strength and Scale Effects on
the Convergence of Unnotched Charpy and Taylor Anvil Impact
Simulations
Technical Presentation. V&V2013-2369
Krishna Kamojjala, Rebecca Brannon, University of Utah, Salt
Lake City, UT, United States, Jeffrey Lacy, Idaho National
Laboratory, Idaho Falls, ID, United States
Uncertainty Quantification for CFD: Development and Use of
the 5/95 Method
Technical Presentation. V&V2013-2262
Arnaud Barthet, Philippe Freydier, EDF, Villeurbanne, France
TRACK 7 Verification for Fluid Dynamics and Heat Transfer

7-1 Verification for Fluid Dynamics and Heat Transfer
Sunset 5&6
10:30am–12:30pm
Session Chair: Dimitri Tselepidakis, ANSYS, Inc., Lebanon, NH, United States
Session Co-Chair: Carlos Corrales, Baxter Health Care, Chicago, IL, United States

TRACK 13 Verification Methods

13-1 Verification Methods: Part 1
Sunset 3&4
10:30am–12:30pm
Session Chair: Dawn Bardot, Heartflow, Inc., Redwood City, CA, United States
Simple Convergence Checks and Error Estimates for Finite
Element Stresses at Stress Concentrations
Technical Presentation. V&V2013-2208
Glenn Sinclair, Louisiana State University, Baton Rouge, LA,
United States, Jeff R. Beishem, ANSYS, Canonsburg, PA,
United States
Optimisation of Pin Fin Heat Sink using Taguchi Method
Technical Presentation. V&V2013-2183
Nagesh Chougule, Gajanan Parishwad, College of Engineering
Pune, Pune, India
Verifying LANL Physics Codes: Benchmark Analytic Solution of
the Dynamic Response of a Small Deformation Elastic or
Viscoplastic Sphere
Technical Presentation. V&V2013-2237
Brandon Chabaud, Jerry S. Brock, Todd O. Williams, Brandon
M. Smith, Los Alamos National Laboratory, Los Alamos, NM,
United States
Thermal Design Optimization for Two-Phase Loop Cooling
System Applications
Technical Presentation. V&V2013-2191
Jeehoon Choi, Zalman Tech. Co., Ltd. / Georgia Institute of
Technology, Atlanta, GA, United States, Minjoong Jeong, Kyung
Seo Park, Korea Institute of Science and Technology
Information, Daejeon, Korea (Republic), Yunkeun Lee, Zalman
Tech. Co., Ltd., Seoul, Korea (Republic)
Robust Numerical Discretization Error Estimation without
Arbitrary Constants Using Optimization Techniques
Technical Presentation. V&V2013-2239
William Rider, Jim Kamm, Walt Witkowski, Sandia National
Laboratories, Albuquerque, NM, United States
Design of a New Thermal Transit-time Flow Meter for High
Temperature, Corrosive and Irradiation Environment
Technical Presentation. V&V2013-2233
Elaheh Alidoosti, Jian Ma, Yingtao Jiang, Taleb Moazzeni,
University of Nevada, Las Vegas, United States
Multi-Level Code Verification of Reduced Order Modeling for
Nonlinear Mechanics
Technical Presentation. V&V2013-2250
William Olson, Ethicon Endo-surgery, Blue Ash, OH, United
States, Amit Shukla, Miami University, Oxford, FL, United States
Free Jet Entrainment in Steady Fluid
Technical Presentation. V&V2013-2236
Mike Dahl Giversen, Aalborg University, Department of Energy
Technology, Aalborg, Denmark, Mads Smed, Peter Thomassen,
Katrine Juhl, Morten Lind, Aalborg University, Aalborg,
Denmark
Applying Sensitivity Analysis to Model and Simulation
Verification
Technical Presentation. V&V2013-2333
David Hall, SURVICE Engineering Company, Carlsbad, CA,
United States, James Elele, Naval Air Systems Command,
Patuxent River, MD, United States, Allie Farid, SURVICE
Engineering Company, Lexington Park, MD, United States
Development of a Microorganism Incubator using CFD
Simulations
Technical Presentation. V&V2013-2243
Ilyoup Sohn, Kyung Seo Park, Sang-Min Lee, Korea Institute of
Science and Technology Information, Daejeon, Korea
(Republic), Hae-Yoon Jeong, EcoBiznet, Chuncheon, Korea
(Republic)
Collecting and Characterizing Validation Data to Support
Advanced Simulation of Nuclear Reactor Thermal Hydraulics
Technical Presentation. V&V2013-2320
Nam Dinh, Anh Bui, Idaho National Laboratory, Idaho Falls, ID,
United States, Hyung Lee, Bettis Laboratory, West Mifflin, PA,
United States
Verification of Modelling Electrovortex Flows in DC EAF
Technical Presentation. V&V2013-2248
Oleg Kazak, Donetsk National University, Donetsk, Ukraine
TRACK 2 Uncertainty Quantification, Sensitivity Analysis, and Prediction

2-5 Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 5
Wilshire A
1:30pm–3:30pm
Session Chair: Steve Weinman, ASME, New York, NY, United States

TRACK 4 Verification and Validation for Fluid Dynamics and Heat Transfer

4-3 Verification and Validation for Fluid Dynamics and Heat Transfer: Part 3
Wilshire B
1:30pm–3:30pm
Session Chair: Richard Lucas, ASME, New York, NY, United States
Session Co-Chair: William Olson, Ethicon Endo-surgery, Blue Ash, OH, United States
A Novel Approach for Input and Output Uncertainty
Quantification in Nuclear System-Thermal-Hydraulics
Technical Presentation. V&V2013-2112
Alessandro Petruzzi, Andriy Kovtonyuk, GRNSPG, Pisa, Italy
Validation of a Flame Acceleration Simulator Computational
Fluid Dynamics Model of Helium Distribution in an H-type
Chamber
Technical Presentation. V&V2013-2133
Wen Sheng Hsu, Hung Pei Chen, Chang Lin Zhong, Hui Chen
Lin, National Tsing Hua University, Hsinchu, Taiwan
Uncertainty Modeling in the Full-Spectrum Correlated-k
Distribution Method for Thermal Radiation
Technical Presentation. V&V2013-2149
John Tencer, University of Texas, Austin, Pflugerville, TX, United
States, John Howell, University of Texas, Austin, TX, United
States
CFD Investigation of Carbon Monoxide Concentration within a
Ship Vehicle Garage
Technical Presentation. V&V2013-2178
Ahmed F. Elsafty, College of Engineering and Technology,
Kuwait, Kuwait, Nasr A. Nasr, Marine Engineering Upgrading
Studies Dept. - Institute of Upgrading Studies- AASTMT,
Alexandria, Egypt, Mosaad Mosleh, Marine Engineering and
Naval Arch. Dept., Alexandria, Egypt
Modeling and Incorporation of Software Implementation
Uncertainty to Realize Easily Verifiable Automotive Controllers
Technical Presentation. V&V2013-2161
Kyle Edelberg, Karl Hedrick, University of California Berkeley,
Berkeley, CA, United States, Mahdi Shahbakhti, Michigan Tech.
University, Houghton, MI, United States
Validation of an In-house 3D Free-Surface Flow Solver in Inkjet
Printing
Technical Presentation. V&V2013-2186
Hua Tan, Erik Tornianinen, David P. Markel, Robert N K
Browning, Hewlett-Packard Company, Corvallis, OR, United
States
Using Hybrid Techniques to Perform Sensitivity Analysis During
the Calibration Process
Technical Presentation. V&V2013-2187
Genetha Gray, Sandia National Labs, Livermore, CA, United
States, John Siirola, Sandia National Labs, Albuquerque, NM,
United States
On the Uncertainty of CFD in Sail Aerodynamics
Technical Presentation. V&V2013-2226
Ignazio Maria Viola, Matthieu Riotte, Newcastle University,
Newcastle upon Tyne, United Kingdom, Patrick Bot, Institut de
Recherche de l’Ecole Navale, Brest, France
Integration of Verification, Validation, and Calibration Results
for Overall Uncertainty Quantification in Multi-Level Systems
Technical Presentation. V&V2013-2400
Sankaran Mahadevan, Chenzhao Li, Joshua G. Mullins,
Vanderbilt University, Nashville, TN, United States, Shankar
Sankararaman, NASA Ames Research Center, Moffett Field,
CA, United States
The RoBuT Wind Tunnel for CFD Validation of Natural, Mixed,
and Forced Convection
Technical Presentation. V&V2013-2242
Barton Smith, Blake Lance, Jeff Harris, Utah State University,
Logan, UT, United States
Improving Flexibility and Applicability of Modeling and
Simulation Credibility Assessments
Technical Presentation. V&V2013-2240
William Rider, Timothy Trucano, Walt Witkowski, Angel Urbina,
Sandia National Laboratories, Albuquerque, NM, United States,
Richard Hills, Sandia National Laboratories, Fairacres, NM,
United States
Shrinkage Porosity Prediction Simulation using Sierra
Technical Presentation. V&V2013-2263
Eric Sawyer, Tod Neidt, Jordan Lee, Terry Sheridan, Wesley
Everhart, Honeywell FM&T, Kansas City, MO, United States
TRACK 7 Verification for Fluid Dynamics and Heat Transfer

7-2 Verification for Fluid Dynamics and Heat Transfer: Part 2
Sunset 5&6
1:30pm–3:30pm
Session Chair: Ryan Crane, ASME, New York, NY, United States

TRACK 13 Verification Methods

13-2 Verification Methods: Part 2
Sunset 3&4
1:30pm–3:30pm
Session Chair: Deepak Gupta, Konecranes, Inc. (Nuclear Division), New Berlin, WI, United States
Techniques for using the Method of Manufactured Solutions for
Verification and Uncertainty Quantification of CFD Simulations
having Discontinuities
Technical Presentation. V&V2013-2288
Benjamin Grier, Richard Figliola, Clemson University, Clemson,
United States, Edward Alyanak, US Air Force Research Lab,
WPAFB, OH, United States, Jose Camberos, US Air Force
Research Lab, Dayton, OH, United States
Spherical Particles in a Low Reynolds Number Flow: A V&V
Exercise
Technical Presentation. V&V2013-2281
Nima Fathi, Peter Vorobieff, The University of New Mexico,
Albuquerque, NM, United States
Radiation Hydrodynamics Test Problems with Linear Velocity
Profiles
Technical Presentation. V&V2013-2309
Raymond Hendon, Middle Tennessee State University,
Murfreesboro, United States, Scott Ramsey, Los Alamos
National Laboratory, Los Alamos, NM, United States
Code Verification of ReFRESCO using the Method of
Manufactured Solutions
Technical Presentation. V&V2013-2307
Luis Eca, Filipe Pereira, Instituto Superior Tecnico, Lisboa,
Portugal, Guilherme Vaz, MARIN, Wageningen, Netherlands
A Manufactured Solution for a Steady Capillary Free Surface
Flow Governed by the Navier-Stokes Equations
Technical Presentation. V&V2013-2310
Stephane Gounand, Commissariat A l'Energie Atomique, Gif-sur-Yvette, France, Vincent Mons, Ecole Polytechnique,
Palaiseau, France
Solution Verification Based on Grid Refinement Studies and
Power Series Expansions
Technical Presentation. V&V2013-2308
Luis Eca, Instituto Superior Tecnico, Lisboa, Portugal, Martin
Hoekstra, MARIN, Wageningen, Netherlands
Compact Finite Difference Scheme Comparison with
Automatic Differentiation Method on Different Complex
Domains In Solid and Fluidic Problems
Technical Presentation. V&V2013-2313
Kamran Moradi, Siavash Rastkar, Florida International
University, Miami, United States
On the Convergence of the Discrete Ordinates and Finite
Volume Methods for the Solution of the Radiative Transfer
Equation
Technical Presentation. V&V2013-2326
Pedro J. Coelho, Instituto Superior Tecnico, Technical
University of Lisbon, Lisbon, Portugal
Coping with Extreme Discretization Sensitivity and Computational
Expense in a Solid Mechanics Pipe Bomb Application
Technical Presentation. V&V2013-2425
James Dempsey, Vicente Romero, Sandia National
Laboratories, Albuquerque, NM, United States
Code Verification of Multiphase Flows with MFIX
Technical Presentation. V&V2013-2332
Aniruddha Choudhary, Christopher J. Roy, Virginia Tech,
Blacksburg, VA, United States
The Dyson Problem Revisited
Technical Presentation. V&V2013-2362
Scott Ramsey, Los Alamos National Laboratory, Los Alamos,
NM, United States
TRACK 4 Verification and Validation for Fluid
Dynamics and Heat Transfer
TRACK 6 Verification and Validation for
Simulation of Nuclear Applications
4-4 Verification and Validation for Fluid Dynamics and
Heat Transfer: Part 4
6-1 Verification and Validation for Simulation of
Nuclear Applications: Part 1
Wilshire B
Sunset 3&4
4:00pm–6:00pm
Session Chair: Richard Lucas, ASME, New York, NY, United States
Session Co-Chair: Steve Weinman, ASME, New York, NY, United
States
4:00pm–6:00pm
Session Chair: Christopher Freitas, Southwest Research
Institute, San Antonio, TX, United States
Experiment-Reconstruction-Calculation in Benchmark Study
for Validation of Neutronic Calculations
Technical Presentation. V&V2013-2245
Elena Mitenkova, Nuclear Safety Institute of Russian Academy
of Sciences, Moscow, Russia
External Rate of Vaporization Control for Self-dislocating Liquids
Technical Presentation. V&V2013-2411
Radu Rugescu, Universitatea Politehnica Bucuresti, Bucuresti,
Bucuresti - Sector 6, Romania, Ciprian Dumitrache, University
of Colorado, Fort Collins, CO, United States
Validation Basis Testing for the Westinghouse Small Modular
Reactor
Technical Presentation. V&V2013-2249
Richard Wright, Westinghouse Electric Co, LLC, Greenock, PA,
United States, Cesare Frepoli, Westinghouse Electric Co, LLC,
Cranberry Township, PA, United States
Verification and Validation of CFD Model of Dilution Process in
a GT Combustor
Technical Presentation. V&V2013-2416
Alka Gupta, University of Wisconsin Milwaukee, Milwaukee, WI,
United States, Ryoichi Amano, University of Wisconsin,
Glendale, WI, United States
Validation and Verification Methods for Nuclear Fuel Assembly
Mechanical Modelling and Structure Analyses
Technical Presentation. V&V2013-2253
Xiaoyan Jiang, Westinghouse, Columbia, SC, United States
Validation of CFD Solutions to Oil and Gas Problems
Technical Presentation. V&V2013-2419
Alistair Gill, Prospect Flow Solutions, LLC, Katy, TX, United States
Quantification of Numerical Error for a Laminar Round Jet
Simulation
Technical Presentation. V&V2013-2420
Sagnik Mazumdar, Mark L. Kimber, University of Pittsburgh,
Pittsburgh, PA, United States, Anirban Jana, Pittsburgh
Supercomputing Center, Carnegie Mellon University, Pittsburgh,
PA, United States
Thermal Stratification by Direct Contact Condensation of
Steam in a Pressure Suppression Pool
Technical Presentation. V&V2013-2257
Daehun Song, Nejdet Erkan, Koji Okamoto, The University of
Tokyo, Tokyo, Japan
Verification and Validation of Advanced Neutronics Design
Code System AEGIS/SCOPE2
Technical Presentation. V&V2013-2259
Satoshi Takeda, Nuclear Fuel Industries, Ltd., Osaka-fu, Japan,
Tadashi Ushio, Yasunori Ohoka, Yasuhiro Kodama, Kento
Yamamoto, Nuclear Fuel Industries, Ltd., Kumatori, Japan,
Masahiro Tatsumi, Masato Tabuchi, Kotaro Sato, Nuclear
Engineering, Ltd., Osaka, Japan
The Validation Hierarchy and Challenges in Rolling up
Validation Results to a Target Application
Technical Presentation. V&V2013-2124
Richard Hills, Sandia National Laboratories, Albuquerque, NM,
United States
TRACK 9 Verification and Validation for
Energy, Power, Building, and Environmental
Systems and Verification for Fluid Dynamics
and Heat Transfer
TRACK 10 Validation Methods for
Bioengineering
10-2 Validation Methods for Bioengineering
Sunset 5&6
9-2 Verification and Validation for Energy, Power,
Building, and Environmental Systems and
Verification for Fluid Dynamics and Heat Transfer
Wilshire A
4:00pm–6:00pm
Session Chair: Richard Swift, MED Institute, West Lafayette, IN,
United States
Session Co-Chair: Dawn Bardot, Heartflow, Inc. Redwood City,
CA, United States
4:00pm–6:00pm
Model-Based Surgical Outcome Prediction, Verification and
Validation of Rhinoplasty
Technical Presentation. V&V2013-2286
Mohammad Rahman, Ehren Biglari, Yusheng Feng, University
of Texas at San Antonio, San Antonio, TX, United States,
Christian Stallworth, University of Texas Health Science Center
at San Antonio, San Antonio, TX, United States
Session Chair: Ryan Crane, ASME, New York, NY, United States
Determining What Vortex-Induced Vibration Variables have the
Maximum Effect on a Pipeline Free Span’s Amplitude of
Displacement with Computational Fluid-Structure Interaction
Technical Presentation. V&V2013-2315
Marcus Gamino, Ricardo Silva, Samuel Abankwa, Edwin
Johnson, Michael Fischer, Raresh Pascali, Egidio Marotta,
University of Houston, Houston, TX, United States, Carlos Silva,
Juan-Alberto Rivas-Cardona, FMC Technologies, Houston, TX,
United States
The Role of Uncertainty Quantification in the Model Validation
of a Mouse Ulna
Technical Presentation. V&V2013-2414
David Riha, Southwest Research Institute, San Antonio, United
States, Stephen Harris, University of Texas Health Science
Center at San Antonio, San Antonio, TX, United States
The Effect of Welding Self-Equilibrating Stresses on the Natural
Frequencies of Thin Rectangular Plate with Edges Rotation and
Translation Flexibility
Technical Presentation. V&V2013-2331
Kareem Salloomi, A. Salam Al-Ammri, University of
Baghdad/Al-Khwarizmi College of Engineering, Baghdad, Iraq,
Shakir Al-Samarrai, University of Baghdad/ College of Eng.,
Baghdad, Iraq
Case Study: Credibility Assessment of Biomedical Models and
Simulations with NASA-STD-7009
Technical Presentation. V&V2013-2430
Lealem Mulugeta, Universities Space Research Association,
Division of Space Life Sciences, Houston, TX, United States
3-D Cell Modeling and Investigating Its Movement on Non-Rigid Substrates
Technical Presentation. V&V2013-2115
Mohammad Hadi Mahmoudi Nejad, Hormozgan University of
Medical Science, Bandar Abbas, Iran
High Pressure Gas Property Testing and Uncertainty Analysis
Technical Presentation. V&V2013-2432
Brandon Ridens, Southwest Research Institute, San Antonio,
TX, United States
Manufactured Solutions for the Three-dimensional Euler
Equations with Relevance to ICF
Technical Presentation. V&V2013-2284
Thomas R Canfield, Nathaniel R Morgan, L Dean Risinger,
Jacob Waltz, John G Wohlbier, Los Alamos National
Laboratory, Los Alamos, NM, United States
Verification and Validation of Finite Element Models of Human
Knee Joint Using an Evidence-Based Uncertainty
Quantification Approach
Technical Presentation. V&V2013-2200
Shahab Salehghaffari, Yasin Dhaher, Northwestern University,
Evanston, IL, United States
Impact of Numerical Algorithms on Verification Assessment of
a Lagrangian Hydrodynamics Code
Technical Presentation. V&V2013-2285
Scott Doebling, Scott Ramsey, Los Alamos National Laboratory,
Los Alamos, NM, United States
Dual Phase Flow Modeling and Experiment into Self-Pressurized, Storable Liquids
Technical Presentation. V&V2013-2415
Radu Rugescu, Universitatea Politehnica Bucuresti, Bucuresti,
Bucuresti - Sector 6, Romania
FRIDAY, MAY 24
V&V and UQ In U.S. Energy Policy Modeling: History, Current
Practice, and Future Directions
Technical Presentation. V&V2013-2397
Alan Sanstad, Lawrence Berkeley National Laboratory, Berkeley,
CA, United States
TRACK 3 Validation Methods for Solid
Mechanics and Structures
Evaluation of the Effect of Subject Matter Expert Estimated
Mean and Standard Deviation for a Deterministic Testing
Environment
Technical Presentation. V&V2013-2272
David Moorcroft, Federal Aviation Administration, Oklahoma
City, OK, United States
3-2 Validation Methods for Solid Mechanics and
Structures: Part 2
Wilshire B
8:00am–10:00am
Session Chair: Danny Levine, Zimmer, Inc., Warsaw, IN, United
States
Session Co-Chair: Richard Swift, MED Institute, West Lafayette,
IN, United States
11-2 Development of V&V Community Resources:
Part 1
Wilshire A
Elasto-plastic Stress-strain Model for Notched Components
Technical Presentation. V&V2013-2123
Ayhan Ince, General Dynamics Land Systems-Canada, London,
ON, Canada
Session Chair: Christopher Freitas, Southwest Research
Institute, San Antonio, TX, United States
Session Co-Chair: Richard Lucas, ASME, New York, NY, United
States
Uncertainty Research on the Stiffness of Some Rubber Isolator
Technical Presentation. V&V2013-2214
Chen Xueqian, S.F. Xiao, Shen Zhanpeng, X.E. Liu, Institute of
Structural Mechanics, CAEP, Mianyang, China
A Validation Suite of Full-Scale, Fracture Mechanics Pipe Tests
Technical Presentation. V&V2013-2215
Bruce Young, Richard Olson, Paul Scott, Andrew Cox, Battelle
Memorial Institute, Columbus, OH, United States
Structural Mechanics Validation Application with Variable
Loading and High Strain Gradient Fields
Technical Presentation. V&V2013-2275
Nathan Spencer, Myles Lam, Brian Damkroger, Sandia National
Laboratory, Livermore, CA, United States, Sharlotte Kramer,
Sandia National Laboratory, Albuquerque, NM, United States
Application of a Multivariate Approach and Static Validation
Library to Quantify Validation Uncertainty and Model
Comparison Error for Industrial CFD Applications: Single-Phase Liquid Mixing in PJMs
Technical Presentation. V&V2013-2255
Andri Rizhakov, Leonard Peltier, Jonathan Berkoe, Bechtel
National, Inc., Reston, VA, United States, Michael Kinzel,
Brigette Rosendall, Mallory Elbert, Kellie Evon, Bechtel
National Advanced Simulation & Analysis, Reston, VA, United
States, Kelly Knight, Bechtel National Inc, Frederick, MD,
United States
An Ensemble Approach for Model Bias Prediction
Technical Presentation. V&V2013-2364
Zhimin Xi, University of Michigan - Dearborn, Dearborn, MI,
United States, Yan Fu, Ford, Canton, MI, United States, Ren-jye
Yang, Ford Research & Advanced Engineering, Dearborn, MI,
United States
Knowledge Base Development for Data Quality Assessment,
Uncertainty Quantification, and Simulation Validation
Technical Presentation. V&V2013-2238
Weiju Ren, Oak Ridge National Laboratory, Oak Ridge, TN,
United States, Hyung Lee, Bettis Laboratory, West Mifflin, PA,
United States, Kimberlyn C. Mousseau, Sandia National
Laboratories, Albuquerque, NM, United States
TRACK 11 Standards Development Activities
for Verification and Validation
11-1 Government Related Activities
Sunset 3&4
8:00am–10:00am
8:00am–10:00am
Development of Standardized and Consolidated Reference
Experimental Database (SCRED)
Technical Presentation. V&V2013-2111
Alessandro Petruzzi, Andriy Kovtonyuk, GRNSPG, Pisa, Italy
Session Chair: Alan Sanstad, Lawrence Berkeley National
Laboratory, Berkeley, CA, United States
Session Co-Chair: William Oberkampf, W. L. Oberkampf
Consulting, Georgetown, United States
The Application of Fast Fourier Transforms on Frequency-Modulated Continuous-Wave Radars for Meteorological
Application
Technical Presentation. V&V2013-2252
Akbar Eslami, Elizabeth City State University, Elizabeth City, NC,
United States
Challenges for Computational Social Science Modeling and
Simulation in Policy and Decision-Making
Technical Presentation. V&V2013-2336
Laura McNamara, Timothy Trucano, Sandia National
Laboratories, Albuquerque, NM, United States
Challenges in Implementing Verification, Validation, and
Uncertainty Quantification Practices in Governmental
Organizations
Technical Presentation. V&V2013-2368
William Oberkampf, W. L. Oberkampf Consulting, Georgetown,
DC, United States
The Language of Uncertainty and its Use in the ASME B89.7
Standards
Technical Presentation. V&V2013-2371
Edward Morse, UNC Charlotte, Charlotte, NC, United States
TRACK 6 Verification and Validation for
Simulation of Nuclear Applications
6-2 Verification and Validation for Simulation of
Nuclear Applications: Part 2
Sunset 3&4
A New Solution Paradigm for the Discrete Ordinates Equation of
1D Neutron Transport Theory
Technical Presentation. V&V2013-2433
B.D. Ganapol, University of Arizona, Tucson, AZ, United States
10:30am–12:30pm
Session Chair: Richard Lucas, ASME, New York, NY, United
States
11-5 Development of V&V Community Resources:
Part 2
Verification and Validation Processes Applied with the ISERIT
SW Development
Technical Presentation. V&V2013-2229
Dalibor Frydrych, Milan Hokr, Technical University of Liberec,
Liberec, Czech Republic
Wilshire A
Session Chair: Scott Doebling, Los Alamos National Laboratory,
Los Alamos, NM, United States
Proposal for the 2014 Verification and Validation Challenge
Workshop
Technical Presentation. V&V2013-2273
V. Gregory Weirs, Kenneth Hu, Sandia National Laboratories,
Albuquerque, NM, United States
Investigation of Uncertainty Quantification Procedure for V&V
of Fluid-Structure Thermal Interaction Estimation Code for
Thermal Fatigue Issue in a Sodium-cooled Fast Reactor
Technical Presentation. V&V2013-2350
Masaaki Tanaka, Japan Atomic Energy Agency, O-arai, Ibaraki,
Japan, Shuji Ohno, Oarai R&D Center, JAEA, Ibaraki-Ken,
Japan
Classification Systems for V&V Benchmark Data
Technical Presentation. V&V2013-2274
V. Gregory Weirs, Sandia National Laboratories, Albuquerque,
NM, United States
Simulation and Validation of Response Function of Plastic
Scintillator Gamma Monitors
Technical Presentation. V&V2013-2392
Muhammad Ali, Vitali Kovaltchouk, Rachid Machrafi, University
of Ontario Institute of Technology, Oshawa, ON, Canada
The Validation Journal Article
Technical Presentation. V&V2013-2328
Barton Smith, Utah State University, Logan, UT, United States
Validation of Nondestructive Evaluation Simulation Software for
Nuclear Power Plant Applications
Technical Presentation. V&V2013-2431
Feng Yu, Thiago Seuaciuc-Osorio, George Connolly, Mark
Dennis, Electric Power Research Institute (EPRI), Charlotte, NC,
United States
Verification Testing for the Sierra Mechanics Codes
Technical Presentation. V&V2013-2344
Brian Carnes, Sandia National Laboratories, Albuquerque, NM,
United States
Assessment of a Multivariate Approach to Enable Quantification
of Validation Uncertainty and Model Comparison Error for
Industrial CFD Applications using a Static Validation Library
Technical Presentation. V&V2013-2363
Leonard Peltier, Andri Rizhakov, Jonathan Berkoe, Bechtel
National, Inc., Reston, VA, United States, Michael Kinzel,
Brigette Rosendall, Mallory Elbert, Kellie Evon, Bechtel
National Advanced Simulation & Analysis, Reston, VA, United
States, Kelly Knight, Bechtel National Inc, Frederick, MD,
United States
TRACK 11 Standards Development Activities
for Verification and Validation
11-3 Standards Development Activities
Sunset 5&6
10:30am–12:30pm
10:30am–12:30pm
Session Chair: Steve Weinman, ASME, New York, NY, United
States
Session Co-Chair: Ryan Crane, ASME, New York, NY, United
States
The Generic Methodology for Verification and Validation (GM-VV):
An Overview and Applications of this New SISO/NATO Standard
Technical Presentation. V&V2013-2270
Manfred Roza, National Aerospace Laboratory NLR,
Amsterdam, Noord-Holland, Netherlands, Jeroen Voogd, TNO,
Den Haag, Netherlands
Evaluation of Material Surveillance Measurements used in
Fitness-For-Service Assessments of Zr-2.5%Nb Pressure
Tubes In CANDU Nuclear Reactors
Technical Presentation. V&V2013-2271
Leonid Gutkin, Kinectrics Inc., Toronto, ON, Canada, Douglas
Scarth, Kinectrics Inc., Mississauga, ON, Canada
ABSTRACTS
Development of Standardized and Consolidated Reference
Experimental Database (SCRED)
V&V2013-2111
Alessandro Petruzzi, Andriy Kovtonyuk, GRNSPG
Uncertainty evaluation constitutes a key feature of BEPU (Best
Estimate Plus Uncertainty) process. The uncertainty can be the result
of a Monte Carlo type analysis involving input uncertainty parameters
or the outcome of a process involving the use of experimental data
and connected code calculations. Those uncertainty methods are
discussed in several papers and guidelines (IAEA-SRS-52,
OECD/NEA BEMUSE reports). The present paper discusses the role and
the depth of the analysis required to merge suitable experimental data
on one side with qualified code calculation results on the other. This
aspect is mostly connected with the second approach to uncertainty
mentioned above, but it can also be used in the framework of the first
approach. Specifically, the paper discusses the features and structure
of the database, which includes the following kinds of documents:
1. The “RDS-facility” (Reference Data Set for the
selected facility): this includes the description of the facility, the
geometrical characterization of any component of the facility, the
instrumentations, the data acquisition system, the evaluation of
pressure losses, the physical properties of the material and the
characterization of pumps, valves and heat losses; 2. The “RDS-test”
(Reference Data Set for the selected test of the facility): this includes
the description of the main phenomena investigated during the test,
the configuration of the facility for the selected test (possible new
evaluation of pressure and heat losses if needed) and the specific
boundary and initial conditions; 3. The “QP” (Qualification Report) of
the code calculation results: this includes the description of the
nodalization developed following a set of homogeneous techniques,
the achievement of the steady state conditions and the qualitative
and quantitative analysis of the transient with the characterization of
the Relevant Thermal-Hydraulics Aspects (RTA); 4. The EH
(Engineering Handbook) of the input nodalization: this includes the
rationale adopted for each part of the nodalization, the user choices,
and the systematic derivation and justification of any value present in
the code input with respect to the values indicated in the RDS-facility
and in the RDS-test.
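The four kinds of documents enumerated above map naturally onto a structured record per experiment. As a minimal illustrative sketch only, not the SCRED schema itself (all class and field names below are hypothetical), one database entry might be organized as:

# Illustrative sketch only: class and field names are hypothetical,
# not the SCRED schema.
from dataclasses import dataclass

@dataclass
class RDSFacility:            # 1. Reference Data Set for the facility
    description: str
    component_geometry: dict
    instrumentation: list
    pressure_losses: dict
    material_properties: dict

@dataclass
class RDSTest:                # 2. Reference Data Set for one test
    phenomena: list
    facility_configuration: str
    boundary_initial_conditions: dict

@dataclass
class QualificationReport:    # 3. "QP" for the code calculation results
    nodalization_description: str
    steady_state_achieved: bool
    relevant_thermal_hydraulic_aspects: list

@dataclass
class EngineeringHandbook:    # 4. "EH" of the input nodalization
    nodalization_rationale: dict
    user_choices: dict
    input_value_justifications: dict

@dataclass
class DatabaseEntry:          # one qualified experiment + calculation
    facility: RDSFacility
    test: RDSTest
    qualification: QualificationReport
    handbook: EngineeringHandbook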
3-D Cell Modeling and Investigating its Movement on Non-Rigid Substrates
V&V2013-2115
Mohammad Hadi Mahmoudi Nejad, Hormozgan University of
Medical Science
The attachment of blood and tissue cells to the extracellular matrix
(ECM) is done with the help of specific joints, which are made
between cells’ receptors and matrixes’ ligands. This interaction is
vital for many biological activities inside the body such as cell
migration, cancer development and wound healing. The purpose of
this study is to model a 3-dimensional cell by ABAQUS commercial
software package and investigate effects of factors, including
mechanical properties of focal adhesion, substrate stiffness and cell
active stress on cell motility over a smooth non-rigid substrate. In the
present work, the cell is assumed to be a continuum and continuum
mechanics equations are used for the cell; due to the symmetry in the
geometry of the problem, a symmetric approach has been applied to
save computation time. Simulation results demonstrate that the
mechanical properties of the focal adhesion between cell and
substrate, as well as the substrate stiffness, have a considerable
effect on cell motility. An increase in substrate stiffness or in focal
adhesion strength causes an increase in the magnitude of the
cell-substrate traction stress, while causing the cell motility velocity
to continue its decreasing trend. Furthermore, the magnitude of the
traction stress and the cell movement speed depend strongly on the
active stress in the cell: a perceptible increase can be observed in
both values when the active stress increases. It should be noted that
all data obtained in this study were compared and validated against
experimental studies. Finally, this model is able to show the effect of
different mechanical factors on the velocities and stresses in the cell
and can be a valuable model in the analysis of tumor growth and
wound healing. Keywords: cell motility, 3-D finite element model,
non-rigid substrate, traction, focal adhesion
A Novel Approach for Input and Output Uncertainty
Quantification in Nuclear System-Thermal-Hydraulics
V&V2013-2112
Alessandro Petruzzi, Andriy Kovtonyuk, GRNSPG
A large experimental database including SETFs and ITFs is available
in the LWR safety area. Analysts may select different sets of
experimental tests for assessing the uncertainty in similar accident
analyses. This process, together with the absence of a uniform
systematic method for identifying and reducing the number of
important parameters and characterizing their uncertainty, is at the
origin of the large variability in the uncertainty evaluations resulting
from recent international projects (like OECD BEMUSE). A
comprehensive approach for utilizing quantified uncertainties arising
from SETFs and ITFs in the process of calibrating complex computer
models is discussed in the paper. Although a similar approach has
already been adopted in other scientific fields like meteorology, the
application to the nuclear safety area constitutes a real challenge
due to the complexity (in terms of the number of relevant phenomena
and related parameters involved) of the system under consideration
and its strong non-linearity. The proposed method has been
developed as a fully deterministic method based on advanced
mathematical tools for performing the sensitivity and uncertainty
analyses internally in the thermal-hydraulic system code, by
implementing an adjoint sensitivity procedure and data adjustment
and assimilation, respectively. At the same time the method is
capable of accommodating several SETFs and ITFs to learn as much
as possible about uncertain parameters, allowing for the improvement
and assessment of the computer model predictions based on the
available experimental evidence. Thus the availability of a suitable
database of experiments (SETF and/or ITF) and related qualified code
calculations constitutes a prerequisite for the development and the
application of the proposed methodology. Similarly to the CIAU
method, the process to develop the database of improved estimations
of input values and the related covariance matrix is distinguished from
the application process, where the database is used to produce the
improved uncertainty scenario. Thus the information embedded in the
set of experiments and related qualified code calculations is
processed and transformed into suitable information for performing
an improved (in the sense that it is based on the experimental
evidence) uncertainty evaluation. The pioneering approach can also
be considered an attempt to replace the empiricism of the CIAU
approach, regarding the statistical treatment of the accuracy for
deriving the uncertainty values, with a rigorous mathematical
deterministic method.
Uncertainties in Acoustical Measurement and Estimation and
Their Effects in Community Noise
V&V2013-2120
Robert A. Putnam, RAP Acoustical Engineers, Richard J.
Peppin, Engineers for Change, Inc.
INTRODUCTION: We know that noise measurement is based on
imprecise instruments and that noise estimation is based on
algorithms that are also subject to experimental error. Errors in
measurement can easily be in the two-digit decibel range, and errors
in estimation and propagation can also result in large discrepancies.
Two uses are very much affected by uncertainty in measurement:
a) the measurement of noise for comparison with property-line
standards, and b) the determination of sound power (by measurement
or by algorithm) to estimate sound immission based on algorithms. In
the first case, uncertainties may result in wrongful citation; in the
second case, the uncertainties are compounded and can result in
unhappy citizens. When we measure noise sources and predict
effects at some distance, we often do not account for uncertainties in
measurement and in prediction. There are many issues that may give
us pause about the “accuracy” of our analysis. This paper discusses
some of the potential uncertainties that arise and their effects.
UNCERTAINTY: We estimate sound levels at a distance with many
implicit assumptions, and we confirm, or measure, the results of
estimations with other assumptions. These are two different, but
often related, issues: both involve instruments, environment, etc. In
this paper I will discuss the measurement problem but concentrate on
the estimation problem. In measurements, especially in the field, the
sources of error can be summarized as: true value of sound
level = measured value + error in measurement + deviation in
operating conditions + deviation in weather + error in receiver
location + error due to random ambient noise. This is distinct from,
and must be added to, the errors arising from estimation.
SO WHAT ARE THE ISSUES? The end result is to attempt an
accurate estimate of the sound pressure at some location due to a
source operating at another location. How is this done? By measuring
the sound pressure level of a source at a location and comparing the
results with a way of estimating the same result. The uncertainties
originate from two main areas, estimation and measurement. For
estimation: measuring sound pressure; converting sound pressure to
sound pressure level; determining sound power; and uncertainty in
propagation effects. For verification and estimation: duration and
sampling of the measurement; measuring sound pressure; uncertainty
issues with metrics; and microphone location.
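The additive error budget above can be turned into a single number. A minimal sketch, assuming the components are independent and are therefore combined in quadrature (root-sum-square), as is customary in uncertainty practice; the decibel values are invented for illustration:

import math

# Invented example standard uncertainties for the error sources
# listed above, in decibels.
components_dB = {
    "instrument": 0.5,            # sound level meter tolerance
    "operating_conditions": 1.0,  # deviation in source operating conditions
    "weather": 2.0,               # wind and temperature-gradient effects
    "receiver_location": 1.0,     # microphone placement error
    "ambient_noise": 1.5,         # random background contamination
}

# Root-sum-square combination of independent components.
u_combined = math.sqrt(sum(u ** 2 for u in components_dB.values()))
print(f"combined standard uncertainty ~ {u_combined:.1f} dB")
print(f"expanded uncertainty (k=2)    ~ {2 * u_combined:.1f} dB")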
The Validation Hierarchy and Challenges in Rolling up
Validation Results to a Target Application
V&V2013-2124
Richard Hills, Sandia National Laboratories
A validation hierarchy designed for complex applications represents
a suite of validation experiments that are often heterogeneous in
physics content as well as measurement type. Experiments within
the hierarchy can be performed at different conditions than those for
an intended application, with each experiment designed to test only
part of the physics relevant for the application. Validation experiments
are generally highly controlled, utilizing idealized representations of
component geometries, with each experiment returning
measurement types (e.g., temperature, pressure, flux, first arrival times)
that may be different from the response quantity of interest for the
application. The validation hierarchy often does not provide full
coverage of the physics of interest for an application. Despite these
limitations, validation experiments, along with the associated
computational model predictions, often provide the clearest evidence
of model form error. Because computational models for complex
applications usually contain models for sub-physics that are based
on effective model parameters, or other non-physics-based
calibration (e.g., models for constitutive properties), evidence of model
form error is to be expected. What is the impact of model form error
(i.e., trending error) observed during validation exercises on the
uncertainty in a prediction for a target application? The focus of the
present presentation is to 1) briefly discuss the role that the validation
hierarchy plays in computational simulation for complex applications,
2) discuss the major issues and challenges we face in relating
validation results to a prediction for the target application,
3) overview several approaches that are currently being developed to
roll up validation results across a hierarchy to a target application,
and 4) discuss the desired features and path forward in developing
such methodology. * Sandia National Laboratories is a multi-program
laboratory managed and operated by Sandia Corporation, a wholly
owned subsidiary of Lockheed Martin Corporation, for the U.S.
Department of Energy’s National Nuclear Security Administration
under contract DE-AC04-94AL85000.
Elasto-plastic Stress-strain Model for Notched Components
V&V2013-2123
Ayhan Ince, General Dynamics Land Systems-Canada
Most driveline and suspension components for military wheeled
vehicles are subjected to complex multiaxial loadings in service. In
addition to the multiaxial loadings, many of those components
contain notches and geometrical irregularities where the fatigue
failure often occurs due to stress concentration. The fatigue strength
and durability estimations of notched components, subjected to
multiaxial loadings paths, require detailed knowledge of stresses and
strains in such regions. In these cases an elastic-plastic Finite
Element Analysis (FEA), using commercial software tools, can be
used to determine stress histories at critical notch areas in the elastic
and elastic-plastic state induced by short loading histories. When
faced with long load histories, such methods are impractical due to
long computation times and excessive data storage needs. Durability
evaluation of vehicle suspension and driveline components based
on experimental assessments is too expensive and time-consuming.
For these reasons, more efficient and simpler methods of elastic-plastic stress-strain analysis are necessary for notched bodies
subjected to lengthy complex load histories. A computational elasto-plastic stress-strain model has been developed for stress-strain
analysis for notched components using linear elastic FE stress
results. The method was based on the multiaxial Neuber notch
correction, which relates the incremental elastic and elastic-plastic
strain energy densities at the notch root and the material stress-strain
behaviour, simulated by the Mroz/Garud cyclic plasticity model. The
numerical implementation of the developed model was validated
against the experimental results of SAE 1070 steel notched shaft
subjected to non-proportional loadings. Based on the comparison
between the experimental and computed strain histories for the
several non-proportional load paths, the model calculated notch root
strains with reasonable accuracy using the FE linear elastic stress
histories. Coupling the multiaxial notch correction and the cyclic
plasticity to compute the local stresses and strains at critical
locations in notched components provides a great advantage over
experiments and elastic-plastic finite element analysis due to its
simplicity, computational efficiency, low cost, and reasonable
accuracy.
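The paper’s model is incremental and multiaxial, with Mroz/Garud plasticity; as a much-simplified, purely illustrative sketch of the underlying idea, the classical uniaxial Neuber rule (local stress times local strain equals the elastic stress squared over E) can be closed with a Ramberg-Osgood curve and solved for the local notch stress. The material constants below are invented:

# Simplified uniaxial illustration only; constants are invented.
from scipy.optimize import brentq

E = 206_000.0    # Young's modulus [MPa]
K_p = 1_220.0    # cyclic strength coefficient K' [MPa]
n_p = 0.20       # cyclic strain hardening exponent n'

def total_strain(sigma):
    """Cyclic stress-strain (Ramberg-Osgood) curve."""
    return sigma / E + (sigma / K_p) ** (1.0 / n_p)

def neuber_notch_stress(sigma_elastic):
    """Local elastic-plastic notch stress from a linear-elastic FE stress."""
    target = sigma_elastic ** 2 / E  # Neuber's constant
    return brentq(lambda s: s * total_strain(s) - target, 1e-6, sigma_elastic)

sigma_e = 600.0                      # linear-elastic FE stress [MPa]
s = neuber_notch_stress(sigma_e)
print(f"notch stress ~ {s:.0f} MPa, notch strain ~ {total_strain(s):.4f}")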
Validation of a Flame Acceleration Simulator Computational
Fluid Dynamics Model of Helium Distribution in an H-type
Chamber
V&V2013-2133
Wen Sheng Hsu, Hung Pei Chen, Chang Lin Zhong, Hui
Chen Lin, National Tsing Hua University
During a severe accident at a nuclear power plant, large amounts of
hydrogen are typically generated because of core degradation.
Historically, both the Three Mile Island (TMI) accident in 1979 and the
Fukushima Daiichi nuclear disaster in 2011 exhibited this
phenomenon. Hydrogen can be produced from the following
sources: a zirconium-steam reaction, radiolysis of water, corrosion of
metals, and degassing of the primary loop coolant water. Such
generated hydrogen is invariably transported in the containment area
and has the potential to threaten the integrity of the containment by
over-pressurization due to explosion, leading to substantial releases
of radioactive materials. The ability to predict hydrogen behavior
under severe accident conditions may allow development of effective
accident management procedures. To develop improved strategies,
an accurate understanding of the distribution of hydrogen and the
pressure distribution of a potential hydrogen explosion are required.
The Flame Acceleration Simulator (FLACS) tool is a computational
fluid dynamics (CFD) model that solves compressible Navier–Stokes
equations on a three-dimensional Cartesian grid using the finite
volume method. To date, the FLACS software has been
demonstrated to be highly accurate and is most commonly used to
assess explosion pressures. Therefore, in this study, FLACS was
used to simulate hydrogen gas transport. The experimental facility
was an H-type chamber with an internal volume of 0.2 m3 with
length, width, and height of about 0.8 m, 0.2 m, and 2.52 m,
respectively. Each side of the chamber was divided into three
sections by two layers at 0.92 and 1.72 m in height. Helium was
substituted for hydrogen in the experiment because the former is
stable and has a molecular weight similar to that of the latter. Helium
concentration measurements were conducted using oxygen sensors
and a gas analyzer. In Test 1, the oxygen sensors were used as
indirect measurement instruments for helium, assuming uniform
dilution of both oxygen and nitrogen with helium. Using this
assumption, the concentration of helium could be derived from the
measured oxygen concentration. In Test 2, direct measurement of
helium was conducted using a thermal conductivity detector (TCD).
FLACS was used to simulate the helium concentration distribution
and the results were compared with the experimental data. The
FLACS simulation results were consistent with the shape and trends
of the experimental data. Follow-up experiments will be conducted
using the TCD to determine the reason for the discrepancy between
the peak values of the experimental and modeled results.
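A minimal sketch of the indirect measurement used in Test 1: if helium uniformly dilutes the air (oxygen and nitrogen in a fixed ratio), the helium mole fraction follows from the measured oxygen fraction. The 20.9 percent reference is standard dry air, and the sensor readings are invented:

O2_IN_AIR = 0.209  # oxygen mole fraction of standard dry air

def helium_fraction(measured_o2):
    """Helium mole fraction under the uniform-dilution assumption."""
    return 1.0 - measured_o2 / O2_IN_AIR

for o2 in (0.209, 0.18, 0.15):  # invented oxygen sensor readings
    print(f"O2 = {o2:.3f} -> He ~ {helium_fraction(o2):.1%}")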
Designing a New ACL Graft Fixation Applicable for Low
Density Bones
V&V2013-2156
Mahmoud Chizari, Brunel University, Mohammad Alrashidi,
College of Technological Studies
Anterior cruciate ligament (ACL) injuries affect many people, and
studies on ACL reconstruction are growing rapidly. Several devices
have been designed for surgical fixation of the graft tendon; however,
there are problems associated with the proposed mechanisms and
techniques. In this study, the development stages of a newly
improved implant are discussed. The new design was analyzed
numerically using finite element analysis and supported by
experimental tests. This paper focuses on both improving
performance and obtaining greater fixation strength for the proposed
design than for commonly used ACL graft fixation devices. The
progress of the development is presented in three stages: early
design, secondary design, and final design. The analysis results are
compared for each step of progress. Numerical analysis and
experimental data showed that the developed implant provides
better mechanical performance than its counterparts.
Numerical Modeling of 3D Unsteady Flow around a Rotor
Blade of a Horizontal Axis Wind Turbine -Rutland 503
V&V2013-2139
Belkheir Noura, University of Khemis-Miliana
Wind turbines offer the promise of clean and cheap energy, but the
prediction of their aerodynamic properties is difficult. An accurate
estimate of the aerodynamic loads and an improved understanding of
the unsteady environment of the horizontal-axis wind turbine
effectively help engineers to design a rotor structure capable of
improving energy capture and increasing the lifetime of the wind
turbine. The three-dimensional flow properties around the rotating
blade are essential for any simulation of the aerodynamics of a wind
turbine. The three-dimensional flow around the blades is very
different from the flow over a wing: rotating blades can exhibit
significant radial flow, and the blade speed varies linearly from the
root to the tip. The interactions of the blade and the near wake with
the flow can lead to impulsive changes of loads on parts of the
rotating blades. This article presents an analysis of a simulation of
the unsteady flow around a wind turbine blade. Results of a
numerical simulation of the three-dimensional unsteady flow field
around the blade of a three-bladed wind turbine rotor, the Rutland
503, are presented. A DES (detached eddy simulation) approach
based on the Navier-Stokes equations with the SST turbulence
model and a multi-block structured mesh is used in the modeling of
the flow. The solutions are obtained using the Fluent solver, which
uses the finite volume method. The simulations are made for
moderate and strong wind speeds, varying from 9.3 m/s to 15 m/s.
The simulation results obtained are compared with experimental data
[1] for a speed of 9.3 m/s (low speed) for validation. The purpose of
these simulations is to obtain information about the velocity field
around the blade in order to calculate its aerodynamic properties
when the wind is strong.
Dynamic Stability of Wings for Drone Aircraft Subjected to
Parametric Excitation
V&V2013-2158
Iyd Maree, TU-Freiberg, Bast Jürgen Lothar, TU
Bergakademie Freiberg, Nazhad Ahmad Hussein, University
of Salahaddine
The purpose of this study is to predict damping effects using the
method of passive viscoelastic constrained-layer damping. The
Ross, Kerwin and Ungar (RKU) model for passive viscoelastic
damping has been used to predict damping effects in a constrained-layer
sandwich cantilever beam. This passive damping treatment is widely
used for structural applications in many industries, such as
automotive and aerospace. The RKU method has been applied to a
cantilever beam because the beam is a major structural element, and
the prediction may further be utilized for different kinds of structural
applications according to design requirements. In this damping
method, a simple cantilever beam is made into a sandwich structure,
usually with a viscoelastic material as the core, to ensure the
damping effect. In recent years viscoelastic materials, which are
usually polymers, have been widely recognized as the best damping
materials. Here, several viscoelastic materials have been used as the
core of the sandwich beam. Because of the inherently complex
properties of viscoelastic materials, their modeling has been a
subject of discussion; in this work, the modeling of viscoelastic
materials is presented and the damping treatment is carried out using
the RKU model. The experimental results show how the amplitude
decreases with time for the damped system compared to the
undamped system, and the prediction is further extended to finite
element analysis with various damping materials to compare the
harmonic responses of the damped and undamped systems. This
article deals with the experimental and numerical assessment of the
vibration suppression of smart structures using passive viscoelastic
materials. Experimental
results are presented for an adaptive sandwich cantilever beam that
consists of aluminum facings and a core composed of viscoelastic
3M 467. The work comprises two parts, theoretical and experimental
calculation. The theoretical part uses two software packages:
ANSYS, to calculate the vibration modes of the wing, and MATLAB,
to calculate the loss factor of the wing for each mode. Vibration
control of machines and structures incorporating viscoelastic
materials in suitable arrangements is an important aspect of
investigation. The use of viscoelastic layers constrained between
elastic layers is known to be effective for damping flexural vibrations
of structures over a wide range of frequencies. The energy dissipated
in these arrangements is due to shear deformation in the viscoelastic
layers, which occurs during flexural vibration of the structures. The
theory of dynamic stability of elastic systems deals with the study of
vibrations induced by pulsating loads that are parametric with
respect to certain forms of deformation. There is very good
agreement between the experimental results and the theoretical
findings.
Uncertainty Modeling in the Full-Spectrum Correlated-k
Distribution Method for Thermal Radiation
V&V2013-2149
John Tencer, John Howell, University of Texas, Austin
The Full-Spectrum Correlated-k Distribution Method has been shown
to be exact for thermal radiation in molecular gas-particulate mixtures
which obey the “scaling approximation.” Unfortunately, this
assumption breaks down for molecular gases with strong
temperature or concentration gradients. There has previously been
no way to effectively quantify the model-form uncertainty introduced
by the failure of the scaling approximation. This paper examines the
effect of temperature gradients in CO2 and proposes a technique to
generate probability density functions for the gray-gas absorption
coefficients used in the Full-Spectrum Correlated-k Distribution
Method. The uncertainty in the absorption coefficients may then be
propagated forward to ascertain confidence intervals for important
quantities such as heat flux or intensity distributions.
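A hedged sketch of the forward-propagation step described above: sample the gray-gas absorption coefficient from an assumed probability density function and push the samples through a deliberately simple one-dimensional isothermal-slab emission model to obtain a confidence interval on intensity. The lognormal PDF and all numbers are invented stand-ins for the paper’s FSCK-derived distributions:

import numpy as np

rng = np.random.default_rng(0)
SIGMA_SB = 5.670e-8              # Stefan-Boltzmann constant [W m^-2 K^-4]
T, L = 1500.0, 1.0               # slab temperature [K] and thickness [m]
I_b = SIGMA_SB * T ** 4 / np.pi  # blackbody intensity [W m^-2 sr^-1]

# Assumed (invented) PDF for the gray-gas absorption coefficient [1/m].
k = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=100_000)

# Toy isothermal-slab emission model: I = I_b * (1 - exp(-k * L)).
I = I_b * (1.0 - np.exp(-k * L))

lo, hi = np.percentile(I, [2.5, 97.5])
print(f"intensity 95% interval: [{lo:.0f}, {hi:.0f}] W/(m^2 sr)")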
Model-Based Verification and Real-Time Validation of
Automotive Controller Software
V&V2013-2160
Mahdi Shahbakhti, Michigan Tech. University, Kyle Edelberg,
Selina Pan, Karl Hedrick, University of California Berkeley,
Andreas Hansen, Hamburg University of Technology, Jimmy
Li, University of California Los Angeles
Verification and Validation (V&V) are essential stages in the design
cycle of automotive controllers to avoid mismatch between the
designed and implemented controller. In this paper, an early model-based methodology is proposed to reduce the V&V time and
improve the robustness of the designed controllers. The application
of the proposed methodology is demonstrated on a “Cold Start
Emission” control problem in passenger cars. A non-linear model-based controller is designed to reduce cold start hydrocarbon
emissions from a mid-size passenger car. A model-based simulation
platform is created to verify the controller robustness against
sampling, quantization and fixed-point arithmetic imprecision. The
verification results are used to specify requirements for the controller
implementation for passing current North American ULEV emission
standards. In addition, the results from early model-based
verification are used to identify and remove sources causing
propagation of numerical imprecision in the controller structure. The
structure of the original controller is changed to a discrete-time
quasi-sliding mode controller, using the feedback from early model-based verification. The performance of the final controller is
validated in real-time by testing the control algorithm on a real
engine control unit. The validation results indicate the new controller
is substantially more robust to implementation imprecision while it
requires lower implementation cost. The proposed methodology
from this work is expected to reduce typical V&V efforts in the
development of automotive controllers.
The Effect of Quadriceps Muscles Forces over Anterior
Cruciate Ligament: A Validation Study
V&V2013-2175
Ariful Bhuiytan, Stephen Ekwaro-Osire, Texas Tech
University, Javad Hashemi, Florida Atlantic University
Background: There are an estimated 80,000 to 250,000 cases
annually in which the stability of the knee is compromised and the
excessive relative movement in the knee (between the femur and the
tibia) causes ACL failure [1]. These cases impose a significant
economic burden on the USA. The reasons (or mechanisms) why
these injuries occur are not yet clear. Our study aims to extract
information from inside the knee during athletic activities to address
a research question regarding an ACL injury mechanism: “Can
unopposed quadriceps forces at a low knee flexion angle increase
ACL strain during landing?” It is crucial to know the answer to this
question, as mixed opinions about the role of co-contraction of the
hamstring muscles exist in the literature [2].
Methods: A total of 240 tests were conducted on 20 fresh cadaveric
knees using our 2nd-generation jump-landing simulator. We
measured the pre-strain and landing strain using a DVRT sensor. The
FE results were validated with these experimental data.
Findings: The findings from one of our in vitro simulations indicate
that elevated quadriceps pre-activation forces (QPFs) increase
pre-activation strain but reduce landing strain and are, therefore,
protective post-landing. Our experiments investigated the kinematics
and loading of the ACL at only a 20° knee flexion angle. The current
investigation, which is predictive and most cost-effective, presents
the results of a finite element (FE) analysis of the loading of the ACL.
Interpretation: We interpret that the conformity between the cartilages
may play an important role in creating a lock inside the knee. It
retards the relative movement between the femur and the tibia and
therefore prevents further loading of the ACL. However, we have
been unable to experimentally quantify this conformation during the
impact time. It is important to know whether this conformation or any
other mechanism is involved in this retardation process. Fortunately,
the finite element model facilitated obtaining these data along with
the calculation of the loading of the ACL for various QPFs.
References: [1] Letha Y. Griffin, Marjorie J. Albohm, et al.
Understanding and Preventing Noncontact Anterior Cruciate
Ligament Injuries: A Review of the Hunt Valley II Meeting, January
2005. Am. J. Sports Med., Sep 2006; 34: 1512-1532. [2] Beynnon
BD, Fleming BC, et al. Anterior Cruciate Ligament strain behavior
during rehabilitation exercises in vivo. Am J Sports Med 1995; 23:24-34.
Modeling and Incorporation of Software Implementation
Uncertainty to Realize Easily Verifiable Automotive Controllers
V&V2013-2161
Kyle Edelberg, Karl Hedrick, University of California Berkeley,
Mahdi Shahbakhti, Michigan Tech. University
Controller Software Verification (CSV) is the critical process used to
avoid mismatch between a designed and implemented controller.
Imprecision caused by approximations in the process of
implementing a controller’s software is the main source of uncertainty
in the CSV of an automotive Electronic Control Unit (ECU). If the
amount of implementation uncertainty is known, the controller can
perform more robustly by incorporating the predictive information
from an uncertainty model. In this work, a methodology is presented
to estimate the uncertainty introduced during the implementation of
control software on an ECU. In particular, the use of an analog-to-digital
converter (ADC) introduces uncertainties in measured signals
due to sampling and quantization effects. These imprecisions can
have a significant effect on the tracking performance of the controller,
to the point that it no longer meets the desired targets. This leads to
costly design and verification iterations. In this paper, a generic
methodology for modeling the uncertainty introduced by sampling
and quantization is developed. The resulting values are then
propagated through the system’s state equations to determine the
overall uncertainty bounds on the system model. This methodology
is illustrated on the cold-start emission control problem. Verification is
performed to ensure that the bounds have been accurately
estimated. Finally, a sliding-mode controller developed for the
cold-start phase of an engine is modified to incorporate these bounds.
Simulation on a real ECU validates that the modified controller is
more robust to implementation imprecision. The performance
improvement observed after modifying the existing cold-start
controller demonstrates the benefit of incorporating implementation
imprecision knowledge into the design. This type of approach has
significant potential to reduce the number of V&V iterations typically
required during automotive controller design.
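A minimal sketch of the standard quantization model alluded to above, assuming an ideal N-bit converter whose quantization error is uniformly distributed over one step; the ADC parameters are generic examples, not taken from the paper:

import math

def quantization_uncertainty(full_scale, n_bits):
    """Worst-case and standard uncertainty of an ideal ADC.

    Step size q = full_scale / 2**n_bits; the worst-case error is q/2
    and, for a uniformly distributed error, the standard uncertainty
    is q / sqrt(12).
    """
    q = full_scale / 2 ** n_bits
    return q / 2.0, q / math.sqrt(12.0)

worst, std = quantization_uncertainty(full_scale=5.0, n_bits=10)  # 5 V, 10 bits
print(f"LSB/2 = {worst * 1e3:.2f} mV, u_q = {std * 1e3:.2f} mV")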
CFD Investigation of Carbon Monoxide Concentration within a
Ship Vehicle Garage
V&V2013-2178
Ahmed F. Elsafty, College of Engineering and Technology,
Nasr A. Nasr, Marine Engineering Upgrading Studies Dept., Institute of Upgrading Studies, AASTMT, Mosaad Mosleh,
Marine Engineering and Naval Arch. Dept.
The ventilation system for an enclosed vehicle garage onboard ships
has traditionally been designed to meet IMO fire-fighting standards
while disregarding air quality for workers. Yet carbon monoxide (CO)
concentration is believed to be the most important chemical species
to consider in designing ventilation systems for car parks, because of
its harmful and potentially deadly effect on human health. Thus, the
purpose of this study is to investigate carbon monoxide diffusion in
an enclosed vehicle garage onboard ships. A real vehicle garage
onboard an existing fast ferry, working as a liner between Egyptian
and Saudi Arabian ports, is selected as a case study. Field
measurements were taken to survey the CO diffusion throughout the
space for the evaluation of the problem. Because of the difficulty of
studying different design factors for improving CO levels onboard
working vessels, a mathematical model implemented in a general
computer code is presented that can provide detailed information on
CO concentration as well as the airflow field prevailing in
three-dimensional enclosed spaces of any geometrical complexity.
This model involves the partial differential equations governing CO
levels and airflow transfer in large enclosures. For the validation of
the mathematical model, experimental CO concentration
measurements were used as input data. The results showed good
agreement between the experimental and numerical data. It was also
found that the CO concentration, for the existing operating
conditions, is at a hazardous level.
Optimisation of Pin Fin Heat Sink Using Taguchi Method
V&V2013-2183
Nagesh Chougule, Gajanan Parishwad, College of
Engineering Pune
The pin fin heat sink is a novel heat transfer device that transfers
large amounts of heat with very small temperature differences, and it
also possesses highly uniform cooling characteristics. Pin fins are
widely used as elements that provide increased cooling for electronic
devices. Increasing demands on the performance of such devices
can be observed due to the increasing heat production density of
electronic components. For this reason, extensive work is being
carried out to select and optimize pin fin elements for increased heat
transfer. In this paper, the effects of the design parameters and the
optimum design parameters for a pin-fin heat sink (PFHS) under a
multi-jet impingement case have been investigated with respect to
thermal performance by using the Taguchi methodology based on the
L9 orthogonal array. Various design parameters, such as pin-fin array
size, the gap between nozzle exit and impingement target surface
(Z/d), and air velocity, are explored by numerical experiment. The
average convective heat transfer coefficient is considered as the
thermal performance characteristic. The analysis of variance (ANOVA)
is applied to find the effect of each design parameter on the thermal
performance. The results of a confirmation test with the optimal level
combination of design parameters show that this approach is
effective in optimizing the PFHS for thermal performance. The
analysis of the Taguchi method reveals that all the parameters
mentioned above contribute equally to the performance of the heat
sink. Experimental results are provided to validate the suitability of
the proposed approach.
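A hedged sketch of the workflow: three columns of the standard L9 orthogonal array cover the three 3-level factors, and each run is scored with a larger-the-better signal-to-noise ratio on the heat transfer coefficient. The response values are invented; in the paper they come from the numerical experiments:

import numpy as np

# Three columns of the standard L9 orthogonal array, levels coded 0, 1, 2.
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])

# Invented average heat transfer coefficients [W/(m^2 K)] for the 9 runs.
h = np.array([41.0, 48.0, 52.0, 44.0, 55.0, 47.0, 50.0, 46.0, 53.0])

# Larger-the-better S/N ratio for a single response per run.
sn = 10.0 * np.log10(h ** 2)

for col, name in enumerate(["pin-fin array size", "Z/d", "air velocity"]):
    level_means = [sn[L9[:, col] == lev].mean() for lev in range(3)]
    best = int(np.argmax(level_means))
    print(f"{name}: best level {best}, S/N means {np.round(level_means, 2)}")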
The Validation of BuildingEnergy and the Performance
Evaluation of Thermochromic Window Using Energy Saving
Index
V&V2013-2182
Hong Ye, Linshuang Long, Haitao Zhang, University of
Science and Technology of China, Yanfeng Gao, Shanghai
Institute of Ceramics, CAS
The concepts of the energy saving equivalent (ESE) and energy
saving index (ESI) are presented in this paper to evaluate the
performance of new materials and components in passive buildings.
The ESE represents the hypothetical energy that should be input to
maintain a passive room at the same thermal state as that when a
particular material or component is adopted. The ESI is the ratio of a
particular material or component’s energy saving equivalent to the
corresponding value of the ideal material or component that can
maintain the room at an ideal thermal state in passive mode. The
former can be used to estimate the effect of using a certain building
component or material on the building’s thermal state from an energy
standpoint, while the latter can be used to characterize the
performance of the actual building component or material from a
common standpoint and be used to evaluate the performance of
components or materials in different climatic regions or under
different operating situations. The ESE and ESI are obtained by
BuildingEnergy, a building energy analysis computer program
developed by the authors. It was compiled based on a non-steady
state heat transfer model. The program is validated through the
ASHRAE Standard 140 (Standard Method of Test for the Evaluation
of Building Energy Analysis Computer Programs) and a series of
experiments. A Testing and Demonstration Platform for Building
Energy Research, which contains two identical testing rooms with
inner dimensions of 2.9 m×1.8 m×1.8 m (length × width × height), has
been established for the validation experiments. The results show
that the program is of high credibility. With ESI, the performance of
the thermochromic window, represented by the single vanadium
dioxide (VO2) glazing, in passive residential buildings in three climatic
regions of China (cold zone; hot summer and cold winter zone; and
hot summer and warm winter zone) was discussed. It was found
that: 1) the three types of VO2 glazing discussed were applicable
only in the summer, and their performance was no better than that of
ordinary glazing in the winter; and 2) due to the marked increase of
solar absorptivity when transformed into the metallic state, the
assumed smart regulation capacity of the VO2 glazing could not be
demonstrated.
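As a minimal numerical sketch of the two indices defined above (all values invented): the ESE is the hypothetical input energy needed to hold the passive room at the thermal state the component achieves, and the ESI normalizes it by the ESE of the ideal component:

def energy_saving_index(ese_component_kwh, ese_ideal_kwh):
    """ESI: ratio of a component's energy saving equivalent to the ideal one."""
    return ese_component_kwh / ese_ideal_kwh

ese_glazing = 120.0  # invented seasonal ESE of a glazing [kWh]
ese_ideal = 300.0    # invented ESE of the ideal component [kWh]
print(f"ESI = {energy_saving_index(ese_glazing, ese_ideal):.2f}")  # 0.40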
Validation of an In-house 3D Free-Surface Flow Solver in
Inkjet Printing
V&V2013-2186
Hua Tan, Erik Tornianinen, David P. Markel, Robert N K
Browning, Hewlett-Packard Company
Inkjet printing technology has advanced significantly over the past
decades due to the huge demand for cost- and time-effective digital
printing. Because of its ability to precisely deliver pico-amounts of
liquid at low manufacturing cost, inkjet printing technology also
attracts great attention from many other emerging applications
including lab-on-a-chip devices, fuel-injection, electronic fabrication,
spray cooling of electronics, and solid freeforming. In order to reduce
the design cost and design period of inkjet devices, modeling
engineers at Hewlett-Packard internally developed a dedicated
thermal inkjet modeling tool (CFD3) that is integrated with a
user-friendly graphical user interface and connected to a web-based
application system. This makes the application available to printhead
architects across the entire global organization, even those without
any background in CFD, so they can easily evaluate their printhead
designs on a 24/7 basis. CFD3 uses a volume-of-fluid
(VOF) based finite-volume method to solve the 3D flow equations
and to track the ink-air interface on a fixed Cartesian mesh. A
Lagrangian advection method based on piecewise-linear interface
calculation (PLIC) is used to compute the mass flux and
advance the interface. A robust surface tension model and contact
24
ABSTRACTS
angle model are proposed and implemented in CFD3 to accurately
handle the surface tension effect and the wall adhesion. In this
validation study, we first present three numerical tests to evaluate our
code CFD3 in solving the surface-tension dominant free-surface
flows. The first example is a sessile droplet with different contact
angles placed on a flat solid surface, which is to test our treatment of
contact angle. The second example is a droplet flying straight under
high surface-tension force, which is to test the conservation of mass
and momentum when surface tension plays a significant role. The
third example is a water droplet impacting an impermeable substrate.
The numerical results are compared with the experimental data
available from the literature. Finally, we compare a droplet ejection
from an HP thermal inkjet device to CFD3 simulation results.
Stroboscopic images of an HP printhead from an OfficeJet-Pro
printer ejecting droplets were compared to CFD3 simulations of the
same printhead. Images of droplet ejection as well as drop weight,
drop velocity and refill were reproduced well by the simulation.
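One textbook check of the kind used in the first (sessile-droplet) test is to compare the simulated pressure jump across the static interface against the Young-Laplace value; a purely illustrative sketch under a spherical-cap assumption (the values below are not CFD3 results):

import math

sigma = 0.072  # surface tension of water [N/m]
a = 50e-6      # droplet contact-line radius [m]

for theta_deg in (30.0, 60.0, 90.0, 120.0):
    R = a / math.sin(math.radians(theta_deg))  # spherical-cap radius
    dP = 2.0 * sigma / R                       # Young-Laplace pressure jump
    print(f"theta = {theta_deg:5.1f} deg -> dP ~ {dP:7.1f} Pa")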
Thermal Design Optimization for Two-phase Loop Cooling
System Applications
V&V2013-2191
Jeehoon Choi, Zalman Tech. Co., Ltd. / Georgia Institute of
Technology, Minjoong Jeong, Kyung Seo Park, Korea Institute
of Science and Technology Information, Yunkeun Lee, Zalman
Tech. Co., Ltd.
The significant advances of mobile internet devices have stimulated
the increased usage of cloud computing infrastructures. These
infrastructures consist of massive servers including multi-core
semiconductor processors. Thermal management of most servers
depends mainly upon active air cooling electronic devices with
running fans to dissipate heat to the ambient. However, this
conventional technique is reaching the limits of its capability to
accommodate the increased heat fluxes in confined server
enclosures, and it imposes significant increases in noise, vibration,
and power consumption. One of the next-generation cooling
technologies is likely to involve a two-phase flow loop system with
capillary driving forces. However, these systems have not been
adopted at commercial scale because of insufficient cooling capacity
and complex design constraints in the absence of server
configuration standards. This study presents a thermal design
optimization to compensate for the increased heat dissipation in the
space-constrained high density power servers. A design optimization
utilizing genetic algorithms was implemented to search optimized
design parameters under design constraints. The design optimization
program with 3D synchronous visualization was adopted to identify
the optimal geometrical characteristics of the cooling systems.
Physical experiments with an actual application were conducted to
validate the optimized design parameters. The optimization result
shows sufficient cooling capacity under design constraints of server
configurations.
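The design-optimization step can be pictured with a minimal genetic-algorithm sketch in Python; the objective function, design variables and bounds below are hypothetical stand-ins, not the authors' server cooling model or their 3D visualization tool.

    import numpy as np

    rng = np.random.default_rng(6)

    def thermal_resistance(pop):
        # Hypothetical objective (K/W) in two design variables, to be minimized.
        w, hgt = pop[:, 0], pop[:, 1]      # e.g. wick width, channel height (mm)
        return 0.8 / (w * hgt) + 0.05 * (w + hgt)

    lo, hi = np.array([0.5, 0.5]), np.array([5.0, 5.0])   # design constraints
    pop = rng.uniform(lo, hi, size=(40, 2))               # initial population
    for _ in range(100):                                  # generational loop
        fit = thermal_resistance(pop)
        parents = pop[np.argsort(fit)[:20]]               # keep the best half
        kids = (parents[rng.choice(20, 20)] + parents[rng.choice(20, 20)]) / 2.0
        kids += rng.normal(0.0, 0.05, kids.shape)         # mutation
        pop = np.clip(np.vstack([parents, kids]), lo, hi) # enforce constraints
    best = pop[np.argmin(thermal_resistance(pop))]
    print(best)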
Using Hybrid Techniques to Perform Sensitivity Analysis
During the Calibration Process
V&V2013-2187
Genetha Gray, John Siirola, Sandia National Labs
Optimization is often used to perform model calibration, the process
of inferring the values of model parameters so that the results of the
simulations best match observed behavior. It can both improve the
predictive capability of the model and curtail the loss of information
caused by using a numerical model instead of the actual system. The
calibration process involves comparing experimental data and
computational results. This comparison must take into account that
both data sets contain uncertainties that must be quantified.
Therefore, uncertainty quantification (UQ) techniques must be applied
to identify, characterize, reduce, and, if possible, eliminate
uncertainties. Quantifying uncertainties in both the experimental data
and computational results usually requires one or more sensitivity
studies. This presentation will discuss how to incorporate sensitivity
studies into an optimization-based calibration process so that
computational work is minimized and UQ information is better
utilized. In its purest form, calibration produces only a single "best"
set of inputs for the simulator, and does not take into account or
report any uncertainty in the problem. Current approaches are serial
approaches in that first, the calibration parameters are identified and
then, a series of runs dedicated to UQ analysis is completed.
Although this approach can be effective, it can be computationally
expensive or produce incomplete results. Model analysis that takes
advantage of intermediate optimization iterates can reduce the
expense, but the sampling done by the optimization algorithms is not
ideal. Therefore, this work takes a simultaneous approach and
develops the SQUAC (Simultaneous Quantification of Uncertainty
and Calibration) algorithm that hybridizes Bayesian statistical models
and derivative-free optimization. Bayesian models are a natural
method for quantifying uncertainty, because they provide a
mathematically coherent mechanism for propagating and combining
uncertainty from all sources into models and predictions. Specifically,
treed Gaussian processes are considered as a flexible yet
computationally tractable fully Bayesian statistical model. For the
optimization methods, derivative-free approaches are included as
they are a natural choice given the characteristics of complex
computational models. Such models often lack derivatives, and
approximations are unreliable. This presentation will describe the
important details of the SQUAC algorithm and its implementation. It
will also give some numerical results for both a test problem and a
real application from electrical circuit design.
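A minimal Python sketch of the general idea of reusing calibration iterates for uncertainty analysis is given below; the simulator, data and surrogate choice are illustrative assumptions and do not reproduce the SQUAC algorithm or its treed Gaussian processes.

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 2.0, 20)
    y_obs = 2.0 * np.exp(-1.3 * t) + rng.normal(0.0, 0.05, t.size)

    def simulator(theta):                     # hypothetical stand-in model
        return theta[0] * np.exp(-theta[1] * t)

    iterates, misfits = [], []
    def objective(theta):
        m = float(np.sum((simulator(theta) - y_obs) ** 2))
        iterates.append(np.array(theta))      # keep every optimization iterate
        misfits.append(m)
        return m

    best = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
    # Reuse the stored iterates as free training data for a surrogate of the
    # misfit surface, from which approximate uncertainty can be read off.
    gp = GaussianProcessRegressor().fit(np.array(iterates), np.array(misfits))
    mu, sd = gp.predict(np.array([best.x]), return_std=True)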
A Simple Validation Metric for the Comparison of Uncertain
Model and Test Results
V&V2013-2194
Ben Thacker, Southwest Research Institute, Thomas Paez,
Thomas Paez Consulting
Model verification and validation (V&V) are the primary processes for
quantifying and building credibility in numerical models. Verification is
the process of determining that a model implementation accurately
represents the developer’s conceptual description of the model and
its solution. Validation is the process of determining the degree to
which a model is an accurate representation of the real world from
the perspective of the intended uses of the model. Uncertainties in
model system response quantities (SRQs) arise from uncertainties in
the model form and model inputs. Likewise, the test data that are
used to validate these model outputs also contain uncertainties. The
goal of validation is to quantify the level of agreement between the
model and test, in full consideration of the uncertainties in each.
Comparing model and test results requires the selection of one or
more SRQs and a metric for comparison. Graphical overlays of
model and test features do not quantify the measure of agreement
and therefore are insufficient for validation. Desirable characteristics
of metrics are that they should fully incorporate both model and test
uncertainty, quantify the agreement between the model and test
outputs, and reflect the level of confidence in the comparison. This
work presents a simple metric that quantifies the absolute error
between uncertain model and test results. Uncertainties in both the
model and test results are incorporated such that the confidence
level (probability that the computed error is acceptable) is quantified.
The metric has the desirable characteristic that the model cannot be
validated with increasing uncertainty in either (or both) the model or
test. Finally, because the uncertainties in both the model and test
outputs contribute to the error measure, the situation where the
model and test uncertainty are equal will still produce a non-zero
error. This somewhat counterintuitive aspect of the metric will be
explored and compared to other metrics that incorporate uncertainty.
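A sketch of how such a probabilistic error metric might be evaluated by Monte Carlo sampling follows; the distributions and tolerance are hypothetical, not the authors' formulation.

    import numpy as np

    rng = np.random.default_rng(1)
    model = rng.normal(10.0, 0.8, 100000)  # samples of the uncertain model SRQ
    test = rng.normal(10.5, 0.6, 100000)   # samples of the uncertain test SRQ
    err = np.abs(model - test)             # absolute error, sample by sample
    tol = 1.5                              # acceptable error for the intended use
    confidence = np.mean(err <= tol)       # P(computed error is acceptable)
    print(err.mean(), confidence)

Note that even with equal model and test spread the expected error stays non-zero, mirroring the counterintuitive property discussed above.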
Uncertainties in Structural Collapse Mitigation
V&V2013-2196
Shalva Marjanishvili, Hinman Consulting Engineers, Inc.,
Brian Katz, Hinman Consulting Engineers, Inc.
The possibility of a local structural failure to propagate into a global
collapse of the structural system has fueled the continued
development of improved computational methods to model building
behavior, as well as “best practices” engineering standards. In spite
of these efforts, recent events are bringing the issue of collapse
mitigation to the forefront and highlighting the shortcomings of
existing design practices. The catastrophic nature of structural
collapse dictates the need for more reliable methodologies to
quantify the likelihood of structural failures and strategies to minimize
potential consequences. This paper presents the results of a
stochastic nonlinear dynamic analysis study of a simple structural
model to predict catastrophic failure. The performed analysis
indicates that, at the point of incipient failure, uncertainties associated
with the analysis become increasingly large. Consequently, it may not
be possible to accurately predict when (and if) failure may occur.
Recognizing the need to understand uncertainties associated with
risk and probabilities of unlikely events (low probability and high
consequence events), this paper sets the stage to better understand
the limitations of current numerical analysis methods and discuss
innovative alternatives.
Verification and Validation of Finite Element Models of Human
Knee Joint using an Evidence-Based Uncertainty
Quantification Approach
V&V2013-2200
Shahab Salehghaffari, Yasin Dhaher, Northwestern University
Computational finite element (FE) models of articular joints are very
efficient in investigating the biomechanics of joint functions. However,
their reliability in prediction is burdened by various sources of
uncertainty. One of the most important constituents of FE models is
realistic material modeling of soft and connective tissues. This
substantially depends on accurate experimental quantification of the
tissue’s material properties. However, differences in the donor’s
activity level, age, gender, species as well as differences in
experimental procedures such as storage methods, biases due to
imprecise calibration/gripping, and inaccuracies in assumption of
specimen orientation may lead to variations in the quantified material
properties. Such variations lead to uncertainty in material constants
which serve as important input parameters of FE models. Commonly,
biomechanical outcomes of a deterministic FE model are compared
to that of in vitro experiments to check the accuracy of the
computational model. However, in vitro experiments provide different
outcomes due to the measurement errors, method of
experimentation and different specimens. Therefore, a
nondeterministic V&V approach should be followed. This paper
explains the V&V process of an available FE model of human knee
joint using a nondeterministic approach based on mathematical tools
of evidence theory. Generally speaking, the employed evidence-based V&V approach compares the representation of uncertainty for
outcomes of in vitro experiments with representation of uncertainty
for outcomes of FE simulations derived from propagation of
uncertainties. Available experimental data is used to construct belief
structures with the aim of representing the uncertainty in in vitro
experiments for different methods of experimentation. Belief
structures include a number of intervals determining the range of
variation of a biological outcome of interest with an assigned degree
of belief based on the number of experimental data supporting the
interval. The joint belief structure of the represented uncertainty for
each method of experimentation is constructed and is considered as
the target joint belief structure. Belief structures for material
parameters of soft and connective tissues are constructed using the
corresponding available data. FE simulations of the knee joint under
different boundary and loading conditions corresponding to each
considered method of experimentation are performed to propagate
the represented uncertainty in FE models. With uncertainty
propagation, the simulation-based representation of uncertainty for
biological outcome of interest is obtained. Uncertainty measures of
evidence theory (belief and plausibility) for validity of FE models are
estimated through comparing the simulation-based joint belief
structure with target joint belief structure. Such estimated uncertainty
measures are indicative of validity for FE models of human knee joint.
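The belief and plausibility computation described above can be illustrated with a minimal sketch; the intervals, masses and target range are hypothetical.

    import numpy as np

    # Belief structure: focal intervals with basic belief masses (sum to 1),
    # e.g. built from counts of experimental data supporting each interval.
    intervals = np.array([[4.0, 6.0], [5.0, 8.0], [7.0, 9.0]])
    masses = np.array([0.5, 0.3, 0.2])
    target = (5.5, 8.5)      # claim: the biological outcome lies in this range
    inside = (intervals[:, 0] >= target[0]) & (intervals[:, 1] <= target[1])
    overlap = (intervals[:, 1] >= target[0]) & (intervals[:, 0] <= target[1])
    belief = masses[inside].sum()        # evidence certainly supporting the claim
    plausibility = masses[overlap].sum() # evidence not contradicting the claim
    print(belief, plausibility)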
Planar Mechanisms Analysis with MATLAB and SolidWorks
V&V2013-2198
Dan Marghitu, Hamid Ghaednia, Auburn University
The present study uses MATLAB as a tool to calculate the positions,
velocities, accelerations, and forces for planar mechanisms. With the
symbolic and numerical MATLAB the exact solutions for the
kinematics and the inverse dynamics of closed kinematic chains are
obtained. For the MATLAB analysis two different methods, the
classical and the contour methods, lead to the same numerical
values. The classical method is based on the fundamental relations
for velocity and accelerations of two points on the same rigid body
and the equations for one point that is moving relative to the rigid
body. The contour method is based on the structural diagram of the
mechanism. This method is suitable for mechanisms with driver links
and dyads. A dyad is a planar open kinematic chain with three one
degree of freedom joints and two links. An example is given for a
mechanism with one driver link and two dyads. The joint reaction
forces and the equilibrium moment on the driver link for the
mechanism can be determined using Newton-Euler equations of
motion. For the contour methods the D’Alembert principle is applied.
The algebraic equations are solved with MATLAB “solve” command.
The solution obtained with MATLAB represents the exact solution.
Next, SolidWorks is used to calculate the positions, the velocities,
the accelerations, and the forces for the mechanism with a driver link
and two dyads. In SolidWorks the “block” feature is used. The blocks
are created from 2D sketches using a minimum of dimensions and
relations. The motion of the mechanisms defined as an assembly of
blocks is developed using the relations between blocks. The
kinematics of the mechanism for the important points are obtained
and the results are exported in an Excel file. The driver link has a
complete rotation. The SolidWorks kinematics results are compared
with the MATLAB results for a given position of the driver link. Only
the equilibrium moment on the driver link is obtained with
SolidWorks for the 2D block feature. For the equilibrium moment, the
error between SolidWorks and MATLAB is calculated. The
disadvantage of using 2D blocks in SolidWorks is that the joint
reaction forces between the links cannot be computed. To calculate
the joint reaction forces, the mechanism has to be represented in 3D.
In this way the contact forces between the links are determined. For
the spatial representation of the mechanism, the joint reaction forces
are computed with SolidWorks and compared with the results from
the MATLAB simulation. The SolidWorks results are exported into an
Excel file. The relative error between the forces calculated with
SolidWorks and MATLAB is discussed for different positions.
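A minimal sketch of the position analysis for one loop, using Python's SymPy in place of the symbolic MATLAB workflow described above; the link lengths and driver angle are hypothetical.

    import sympy as sp

    # Loop closure r2 + r3 = r1 + r4 for a four-bar linkage, projected on x and y.
    r1, r2, r3, r4 = 0.30, 0.10, 0.25, 0.20    # link lengths, m (hypothetical)
    th2 = sp.pi / 4                            # given driver link angle
    th3, th4 = sp.symbols("th3 th4", real=True)
    eqs = [r2*sp.cos(th2) + r3*sp.cos(th3) - r1 - r4*sp.cos(th4),
           r2*sp.sin(th2) + r3*sp.sin(th3) - r4*sp.sin(th4)]
    sol = sp.nsolve(eqs, (th3, th4), (0.5, 1.6))   # link angles for this position
    print(sol)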
Dynamic Modeling and Validation of Angle Sensor Using
Artificial Muscle Material
V&V2013-2206
Yonghong Tan, Ruili Dong, College of Mechanical and
Electronic Engineering, Shanghai Normal University
Ionic polymer-metal composite (IPMC) is a promising material called
artificial muscle, which can be used as actuators and sensors for
displacement and angle variations of soft-mechanical systems or
biomechanical systems. However, IPMC material also contains
severe nonlinear factors. The use of it for actuators or sensors should
consider the compensation of nonlinear properties in IPMC so as to
obtain satisfactory performance to meet the requirements in
engineering. It is known that the model based compensation
schemes are very popular techniques used for nonlinear
compensation. Therefore, the construction of models to describe the
nonlinear behavior of IPMC is very meaningful in practical
engineering. In this presentation, focusing on the IPMC sensor of
angle measurement of soft mechanical systems, the nonlinear
behavior of IPMC will be investigated and analyzed. It can be found
out that hysteresis is inherent in this sensor, which deteriorates the
performance of sensor. As hysteresis is a non-smooth nonlinearity
with multi-valued mapping, conventional identification strategies
cannot be applied directly to its modeling. Therefore, a so-called
expanded input space consisting of the coordinates of the input and
output of the introduced hysteretic operator is constructed, which
can transform the multi-valued mapping of hysteresis into a one-to-one
mapping. Based on the constructed expanded input space, a
neural-network-based model is developed to describe the behavior of
the IPMC sensor. In order to obtain the proper model, the
parameters of the neural model are determined based on the
Levenberg-Marquardt algorithm, which tunes the parameters of the
model to minimize the quadratic cost function involved with modeling
residual. Then the obtained model is tested in the validation
procedure to see whether the architecture and the corresponding
parameters are acceptable or not. In the validation procedure, the
mean square error between the model output and the real data of the
IPMC sensor is used to evaluate whether the obtained model is
proper. If the mean square error between the model output data and
the real data of the IPMC sensor is less than the previously specified
number, then, the modeling procedure is finished. In order to validate
the generalization performance of the constructed model, different
types of input signals within the operation region are implemented to
excite the derived model to obtain the corresponding model outputs.
Then, they are compared with the measured response data of the
IPMC sensor excited by the same input signals. Only when all the
mean squared errors of modeling with respect to all the types of test
input signals are less than the pre-specified threshold is the built
model considered satisfactory in generalization. The organization of
this presentation is as follows: in Section 2, the analysis of the
nonlinear and dynamic behavior of the IPMC sensor is presented.
Section 3 presents the
architecture of experimental setup of the sensor measurement
system. Then, in section 4, the expanded input space which can
transform the multi-valued mapping of hysteresis in IPMC sensor to a
one-to-one mapping is constructed. In section 5, the neural modeling
procedure on this constructed input space using neural network is
presented. The parameters of the model are determined by the
Levenberg-Marquardt algorithm. After that, in Section 6, the model
validation procedure of the IPMC sensor is illustrated.
Acknowledgement: This research is partially supported by the Natural
Science Foundation of China (NSFC) (Grant Nos. 60971004,
61203108 and 61171088).
Verification and Validation Studies on In-house Finite Element
Simulations of Streamfunction, Heatfunction and Heat Flux for
Natural Convection within Irregular Domains
V&V2013-2207
Tanmay Basak, Pratibha Biswal, Chandu Babu, Indian
Institute of Technology
Natural convection within various enclosures or cavities is important
for material and food processing. The current work is based on
numerical simulations of convective heat transfer with flow fields
within various enclosures (square, trapezoidal, cavities with curved
side walls, etc.) subject to various wall heating or cooling. Governing
equations for the flow and heat transfer characteristics are the
Navier-Stokes and energy balance equations, respectively. A material-invariant
approach for generalized studies of natural convection leads
to the dimensionless forms of the governing equations, which are
characterized by the Rayleigh number (Ra) and Prandtl number (Pr).
It may be noted that Ra denotes the intensity of natural convection
and Pr typically characterizes various fluids [Pr = 0.025 (molten
metals); 7.2 (salt water); 1000 (oils)]. An in-house finite element code
with a Galerkin weak form involving a penalty parameter to eliminate
pressure has been developed to solve the governing equations with
various Dirichlet and Neumann boundary conditions. The meshing
has been carried out with bi-quadratic basis sets. Post-processing of
the solutions is based on fluid and heat flow visualization. Numerical
visualization of the fluid flow and heat flow is done via the
streamfunction and heatfunction, respectively. Finite element basis
sets are used to compute the streamfunction and heatfunction by
solving Poisson-type differential equations. The accuracy of the
current numerical strategy is assessed via validation of simulation
results against previous works based on natural convection within
square cavities. The sensitivity of the simulation results is examined
by varying the number of elements until the wall heat flux and flow
intensities converge. Earlier results show that finite difference or
control volume based approaches give converged solutions only with
a much larger number of mesh points compared to finite element
bi-quadratic grids. Computations of heat flux with finite element basis
sets are found to be quite robust compared to methods based on
finite difference or control volume, where the flux computation is
based on approximate calculations of derivatives. It is found that the
grid spacing and number of elements for a converged solution are
strongly dependent on the boundary conditions and the shape of the
enclosure. The number of elements for a converged solution within
enclosures with curved side walls is a strong function of the
curvature (concave or convex). The required number of elements for
converged solutions also varies in the convection-dominant (high Ra)
mode for high Pr fluids.
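As an illustration of the post-processing step described above, the sketch below recovers a streamfunction from a velocity field by solving a Poisson equation. It is a finite-difference analogue of the finite element procedure in the abstract, with a hypothetical velocity field; the grid, field and names are illustrative assumptions, not the authors' code.

    import numpy as np

    n = 41; h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    u = np.sin(np.pi * X) * np.cos(np.pi * Y)   # hypothetical velocity field
    v = -np.cos(np.pi * X) * np.sin(np.pi * Y)
    # Vorticity omega = dv/dx - du/dy, then solve the Poisson equation
    # del^2 psi = -omega with psi = 0 on the walls (Jacobi iteration).
    om = np.gradient(v, h, axis=0) - np.gradient(u, h, axis=1)
    psi = np.zeros((n, n))
    for _ in range(5000):
        psi[1:-1, 1:-1] = 0.25 * (psi[2:, 1:-1] + psi[:-2, 1:-1]
                                  + psi[1:-1, 2:] + psi[1:-1, :-2]
                                  + h * h * om[1:-1, 1:-1])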
Simple Convergence Checks and Error Estimates for Finite
Element Stresses at Stress Concentrations
V&V2013-2208
Glenn Sinclair, Louisiana State University, Jeff R. Beishem,
ANSYS
Stress raisers often occur at failure locations in engineering practice.
To understand such failures, it is helpful to have an accurate
determination of the stresses present. Finite elements are an
attractive approach for meeting this objective because of their
applicability to the complex configurations encountered in practice.
However, finite elements are only really successful in such endeavors
when the discretization errors inherent in their use are sufficiently
controlled. The intent of this work is to offer some convergence
checks and companion error estimates that can achieve control of
discretization errors. In the first instance, the checks furnish a means
for deciding if the finite element analysis (FEA) is converging (i.e., is
yielding FEA stresses that are approaching the true stress with mesh
refinement). In the second instance, the checks provide a means for
judging if the FEA has converged (i.e., has yielded an FEA stress that
is sufficiently close to the true stress). This latter assessment uses an
error estimate that initially is developed along the lines of earlier
research in computational fluid dynamics by Roache et al. This error
estimate is then modified to try and ensure it remains conservative in
the presence of nonmonotonic convergence, something that occurs
quite frequently in the three-dimensional FEA of stress
concentrations. To implement the checks, usually a baseline mesh is
first assembled with what is deemed to be sufficient resolution to
capture the stress of interest with the desired level of accuracy. Then
two coarser meshes are formed with element extents that are
successively doubled. With finite element values for the stress of
interest from the three meshes, the checks are readily applied with a
spreadsheet, or simply with a hand calculator. Additional
computational solution times to run the checks are modest. To
validate the approach, the checks are applied to a series of 2D and
3D, elastic, stress-concentration, test problems. These problems are
so constructed on finite regions that they have known exact
solutions, thereby avoiding any ambiguity as to what discretization
errors are incurred in their FEA. All told, 30 such test problems with
stress concentration factors ranging from 2.7 to 570 are subjected to
FEA, and 248 three-mesh simulations of the checks are performed.
When the FEA complies with the converging checks, the ensuing
error estimates are at the same level as the true errors for 69.3% of
the simulations, larger than the true error levels for 29.1%, and lower
for 1.6%. The error estimation is thus largely accurate and otherwise
typically conservative.
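The three-mesh check can be illustrated with a short sketch following the Roache-style error estimate mentioned above; the stress values are hypothetical, and the sketch omits the authors' modification for nonmonotonic convergence.

    import numpy as np

    # Peak FEA stresses from fine, medium, coarse meshes; element extents
    # are doubled between meshes (hypothetical values).
    s1, s2, s3 = 302.1, 298.4, 287.0
    r = 2.0                                        # grid refinement ratio
    converging = (s3 - s2) * (s2 - s1) > 0         # monotone approach to a limit?
    p = np.log((s3 - s2) / (s2 - s1)) / np.log(r)  # observed convergence order
    Fs = 1.25                                      # factor of safety (Roache)
    gci_fine = Fs * abs(s2 - s1) / (r**p - 1.0) / abs(s1)
    print(converging, p, gci_fine)                 # relative error estimate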
Adapting NASA-STD-7009 to Assess the Credibility of
Biomedical Models and Simulations
V&V2013-2209
Lealem Mulugeta, Universities Space Research Association,
Division of Space Life Sciences, Marlei Walton, Wyle
Science, Technology & Engineering Group, Emily Nelson,
Jerry Myers, NASA Glenn Research Center
Human missions beyond low earth orbit to destinations such as Mars
and asteroids will expose astronauts to novel operational conditions
that may pose health risks that are currently not well understood and
potentially unanticipated. In addition, there are limited clinical and
research data to inform development and implementation of health
risk countermeasures for these missions. Consequently, NASA’s
Human Research Program (HRP) is leveraging biomedical
computational models and simulations (M&S) to help inform, predict,
assess and mitigate spaceflight health and performance risks, and
enhance countermeasure development. To ensure that these M&S
can be applied with confidence to the space environment, it is
imperative to incorporate a rigorous verification, validation and
credibility assessment process to ensure that the computational tools
are sufficiently reliable to answer questions within their intended use
domain for space biomedical research and operations. NASA’s
Integrated Medical Model (IMM) and Digital Astronaut Project (DAP)
have been working to adapt NASA’s Standard for Models and
Simulations, NASA-STD-7009 (7009), to achieve this goal. As part of
this process, IMM and DAP work closely with end-users, such as
space life science researchers, NASA flight surgeons, and space
program risk managers, to establish appropriate M&S credibility
thresholds and weighting for the different credibility factors, as well as
the technical review process. Incorporating 7009 processes early into
the M&S development and implementation lifecycle has also helped
enhance the confidence of clinicians and life science researchers in
computational modeling to augment their work. Moreover,
application of 7009 by IMM and DAP to quantifying the credibility of
biomedical models has received substantial attention from the
medical modeling community. Consequently, the Food and Drug
Administration (FDA) and the National Institutes of Health (NIH) are
adapting the methods developed by the IMM and DAP as the basis
for new standards and guidelines for vetting computational models
that are intended to be applied in individualized health care.
Calibration and Validation of Computer Models for Radiative
Shock Experiments
V&V2013-2213
Jean Giorla, Commissariat a l’energie Atomique, Josselin
Garnier, Laboratoire J-L Lions, Universite Paris VII, Emeric
Falize, Berenice Loupias, Clotilde Busschaert,
CEA/DAM/DIF, Michel Koenig, Alessandra Ravasio, LULI,
Claire Michaut, LUTH, Université Paris-Diderot
Magnetic cataclysmic variables are binary systems containing a
magnetic white dwarf which accretes matter from a secondary star.
The radiation collected from these objects mainly comes from an
area near the white dwarf surface, named the accreted column,
which is difficult to observe directly. POLAR experiments aimed to
mimic this astrophysical accretion shock formation in laboratory
using high-power laser facilities. The dynamics and the main physical
properties of the radiative shock produced by the collision of the
heated plasma with a solid obstacle have been characterised in the
first experiments performed on the LULI2000 facility in 2009-2010.
Numerical simulations of these experiments are performed with the
laser-plasma interaction hydrodynamic code FCI2. In this
presentation, we will describe the statistical method used to calibrate
the code and to quantify the uncertainty on the prediction of the
collision time of the heated plasma on the obstacle. In a first step, the
numerical uncertainty is estimated using grid-refinement studies.
Then a global sensitivity analysis aims to determine the uncertain
inputs which are the most influential among the unknown model
parameters and the experimental inputs. Finally, a statistical method
based on Bayesian inference, and taking the monotonicity of the
response into account, is used to calibrate the main unknown
parameters of the simulation and to quantify the model uncertainty.
The predictive uncertainties obtained are mainly due to large input
uncertainties in these preliminary experiments.
Validation Issues in Advanced Numerical Modeling: Pitting
Corrosion
V&V2013-2211
Nithyanand Kota, SAIC, Virginia Degiorgi, Siddiq Qidwai,
Alexis Lewis, Naval Research Laboratory
The development of advanced numerical methods to be used in a
predictive capacity offers unique challenges to the validation process.
While ideally there are experiments running concurrently to be used
for validation, this is not always the case. The lack of experiments can
be due to several factors: lagging of experimental techniques,
complexity of real systems while numerical methods are developed
for simplified model systems, and in today’s research environment,
insufficient funding for companion experimental work. The researcher
is therefore left with ad hoc validation that uses published
experimental and computational work. This often leads to
comparison of numerical results with published computational
studies. This cross-verification is not true validation, but it is a step
towards that goal. Published experimental work often lacks key data
required for true validation of the numerical processes being
developed. The end result is that the researcher builds a case for
validation rather than demonstrates the validation process. In this
work the authors present such a case for numerical methods recently
developed for the prediction of pit growth. Pitting involves localized
corrosion of metals that can result in catastrophic events through
crack initiation and failure. Besides the magnitude and mode of
loading, the transition from pit to crack is also influenced by the pit
shape, which in turn is affected by the microstructure of the
corroding material. Thus, predicting the influence of microstructure
on pit shape can assist in developing microstructures more resistant
to stress corrosion cracking. The authors have developed techniques
for the numerical modeling of stable pit growth that directly
incorporate the actual microstructure into the simulations. While
many microstructural attributes can contribute to corrosion, the
current development effort focuses on the effects of the
crystallographic orientation of the grains on pit growth through the
variation in corrosion potential with orientation. Two-dimensional
simulation results show the strong influence of crystallographic
orientation on pit shapes and growth under the assumed 5% and
10% variation in corrosion potential, delineating the critical place of
crystallographic orientation in pit growth to failure. The procedures
used to build a case for validation for this method, including
comparisons to existing data, discussions of available experimental
techniques, design of validation experiments and planned validation
efforts, will be presented.
Uncertainty Research on the Stiffness of Some Rubber
Isolator
V&V2013-2214
Chen Xueqian, S.F. Xiao, Shen Zhanpeng, X.E. Liu, Institute
of Structural Mechanics, CAEP
In order to protect electronic equipment from damage, all kinds of
isolators are used in engineering structures. The rubber isolator in
particular is widely used because of its merits such as simple
structure and low cost. However, the performance of rubber isolators
varies with their composition and workmanship, which makes the
stiffness within the same batch of products uncertain. To study the
uncertain stiffness of a certain type of rubber isolator, two kinds of
modal tests are designed to separate out the stiffness of the isolator.
One is for identifying the assembly and experimental uncertainty; the
other is for identifying the whole uncertainty in the system. Multiple
test cases are conducted repeatedly for each kind of test. A finite
element (FE) model of the test structure is built. For each test case,
the uncertain parameter of the model is identified based on
traditional FE model updating. The probability distributions of the
uncertain parameters are established by combining the Bayesian
method with Markov chain Monte Carlo (MCMC). Moreover,
uncertainty quantification of the stiffness of the isolator is performed
by the operation of random variables. Finally, the quantification
model is validated against other validation test results. The quantified
result shows that the uncertainty of the test system and assembly is
only a tenth of that of the isolator itself.
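A minimal sketch of Bayesian updating of a stiffness parameter with a random-walk Metropolis sampler, in the spirit of the MCMC step above; the mass, measured frequencies and proposal width are hypothetical stand-ins for the isolator test data.

    import numpy as np

    rng = np.random.default_rng(2)
    m = 0.5                               # modal mass, kg (hypothetical)
    f_meas = np.array([9.8, 10.1, 9.9])   # repeated measured frequencies, Hz
    sigma = 0.15                          # measurement scatter, Hz

    def log_post(k):                      # flat prior on k > 0
        if k <= 0.0:
            return -np.inf
        f_model = np.sqrt(k / m) / (2.0 * np.pi)
        return -0.5 * np.sum(((f_meas - f_model) / sigma) ** 2)

    k, chain = 2000.0, []
    for _ in range(20000):                # random-walk Metropolis
        prop = k + rng.normal(0.0, 25.0)
        if np.log(rng.uniform()) < log_post(prop) - log_post(k):
            k = prop
        chain.append(k)
    post = np.array(chain[5000:])         # discard burn-in
    print(post.mean(), post.std())        # stiffness estimate and uncertainty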
A Validation Suite of Full-Scale, Fracture Mechanics Pipe
Tests
V&V2013-2215
Bruce Young, Richard Olson, Paul Scott, Andrew Cox,
Battelle Memorial Institute
As part of structural integrity evaluations for safety analyses of
pressurized piping systems, fracture mechanics based analyses,
such as the American Society of Mechanical Engineers (ASME) Code
in-service flaw evaluation criteria and the United States Nuclear
Regulatory Commission’s (NRC) Leak-Before-Break (LBB)
procedures, have been extensively validated through comparisons
with full-scale cracked pipe experiments. Over the past three
decades Battelle has conducted a number of pipe fracture test
programs aimed at developing the necessary experimental data for
the validation of these structural integrity evaluations. An overview is
given of the test methods employed, the results obtained, and the
software codes that have used this database of pipe fracture
experiments for validation. Over the past three decades (1984-2012), Battelle
conducted a number of full-scale, structural mechanics test
programs on cracked nuclear grade piping materials/systems. As
part of these programs, approximately 140 full-scale pipe fracture
experiments were conducted which ranged in size from 4 to 42
inches in diameter. The materials evaluated included nuclear grade
stainless and carbon steels and their associated weldments, along
with nickel-based alloy weldments. Loading conditions have ranged
from 4-point bending to complex full-scale pipe-system experiments
under combined pressure and simulated seismic loading. The
experimental parameters and results of these tests have been
consolidated into a Pipe Fracture Database. The following list
provides a brief description of each of the major test programs
conducted at Battelle: * Degraded Piping Program – The Degraded
Piping program was a program sponsored by the US Nuclear
Regulatory Commission (NRC) which examined the fracture behavior
of circumferentially oriented cracks in nuclear grade piping materials
(and their associated weldments) subjected to simple quasi-static,
monotonic loading conditions. * International Piping Integrity
Research Group (IPIRG) programs – The two IPIRG programs (IPIRG-1 and IPIRG-2) were international multi-client cooperative research
programs which examined the fracture behavior of circumferentially
oriented cracks subjected to more complex dynamic, cyclic loading
conditions, such as those that may occur in a seismic event. * Short
Cracks in Piping and Piping Welds program – The Short Cracks
program was another NRC-sponsored program, which examined the
fracture behavior of shorter, smaller cracks more typical of the
cracks considered for in-service flaw evaluations and LBB
assessments. * Battelle Integrity of Nuclear Piping (BINP) program –
The BINP program was another international cooperative program
whose goal was to address some of the remaining major technical
issues still outstanding at the conclusion of the Short Cracks and
IPIRG-2 programs. One such issue was the effect of secondary
stresses on the fracture behavior of a cracked pipe system. *
Dissimilar Metal Weld (DMW) Pipe Fracture program – The recently
completed DMW Pipe Fracture program was another NRC-sponsored
program aimed at evaluating the fracture behavior of
cracks in DM welds. Over the past 10 years, cracks in DM welds
have been occurring in the pressurized water reactor (PWR) fleet as a
result of primary water stress corrosion cracking. In addition to
conducting the full-scale pipe experiments, each of these programs
included the development of related material properties (tensile and
fracture toughness data) for the materials evaluated in the pipe
experiments, as well as the development of a number of software
products. In conclusion, Battelle has developed a full-scale,
structural-mechanics Pipe Fracture Database that has been used to
validate a number of structural integrity assessment tools for cracked
piping. The database covers a wide range of materials, geometries,
and loading conditions and can be used to support any industry
(e.g., nuclear, oil and gas, petrochemical) that requires fracture
mechanics evaluations for cracked pipe under various loading
conditions.
Validation of Fatigue Failure Results of a Newly Designed
Cementless Femoral Stem Based on Anthropometric Data of
Indians
V&V2013-2219
B.R. Rawal, Shri G.S. Institute of Technology and Science,
Rahul Ribeiro, Naresh Bhatnagar, Department of Mechanical
Engineering
A good implant design should provide maximum, ideally infinite, fatigue
life. Loads applied to the implant due to human activity generate
dynamic stresses varying in time and resulting in the fatigue failure of
implant material. Therefore, it is important to ensure the hip
prostheses against fatigue failure. In recent years, different national
and international standards have been defined, aimed to determine
in-vitro the fatigue endurance of femoral hip components. Finite
Element Analysis (FEA) combined with mechanical testing of
orthopaedic components is beginning to be accepted in Food and
Drug Administration submissions for pre-market approval in
developing countries like India. In this study, fatigue behavior of
newly designed cementless femoral stem based on anthropometric
data of Indians was predicted using ANSYS Workbench software.
Performance of the stem was investigated for Ti–6Al–4V material.
The three-dimensional solid model assembly of femur bone and
implant was transferred to ANSYS Workbench. Pro/Engineer was
used for Computer Aided Design (CAD) modeling. This can be further
validated by physical testing. Accurate FEA models and validated
fatigue test prediction methods will lead to general acceptance of
these logical methods and ultimately reduce the risk of clinical fatigue
failure of THA components.
Efficient Uncertainty Estimation using Latin Hypercube
Sampling
V&V2013-2222
Chengcheng Deng, Tsinghua University, Weili Liu, China
Institute of Atomic Energy, Brian Hallee, Qiao Wu, Oregon
State University
In best-estimate plus uncertainty nuclear safety analysis, the
technique of crude Monte Carlo using order statistics (CMC-OS),
based on Wilks' work, has been widely used for code validation, but
issues remain regarding the accuracy and stability of the 95/95
results with a relatively low number of code runs. Recently, Latin
hypercube sampling (LHS), a variance reduction technique, has been
proposed to establish the confidence intervals on the output
distribution. This paper presents a comparison of the mean and
variance of the 95/95 results between the CMC-OS method and the
LHS method, indicating that the LHS method can provide a more
accurate and stable result under certain conditions, especially at low
numbers of code runs. Discussions are provided about the number
of code runs necessary to achieve the high probability level in safety
analysis, considering the tradeoff between the size of the LHS design
and the accuracy and convergence of the final results. Simple
statistical examples are presented to demonstrate the concepts with
comparisons of the two methods. This work could provide insights
for effective sample designs and appropriate numbers of code runs.
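The variance-reduction argument can be illustrated with a small sketch comparing crude Monte Carlo and LHS estimates of a Wilks-style 95/95 bound (59 runs, first order); the stand-in "code" is a hypothetical function, not a safety analysis code.

    import numpy as np
    from scipy.stats import norm, qmc

    rng = np.random.default_rng(3)

    def code(x):                           # stand-in for an expensive code run
        return x[..., 0] + 0.5 * x[..., 1] ** 2

    n, reps = 59, 200                      # 59 runs: first-order Wilks 95/95
    q_mc, q_lhs = [], []
    for _ in range(reps):
        u_mc = rng.uniform(size=(n, 2))            # crude Monte Carlo design
        u_lh = qmc.LatinHypercube(d=2).random(n)   # Latin hypercube design
        for u, out in ((u_mc, q_mc), (u_lh, q_lhs)):
            y = code(norm.ppf(u))          # map uniforms to normal inputs
            out.append(np.max(y))          # order-statistic 95/95 bound
    print(np.std(q_mc), np.std(q_lhs))     # LHS typically shows lower scatter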
Verification and Validation Processes Applied with the ISERIT
SW Development
V&V2013-2229
Dalibor Frydrych, Milan Hokr, Technical University of Liberec
This contribution is focused on the development of software (SW) to
be used in the design of a deep geological repository (DGR) for spent
nuclear fuel. The ISERIT SW is focused on solving issues related to
the coupled transport of heat and humidity in the forms of water
vapor and immobile water. The physical model consists of three
coupled partial differential equations. The solution of these equations
is based on the primary formulation of the finite element method. The
ISERIT SW is implemented in JAVA. An object-oriented approach is
kept strictly during the SW design and development. Great emphasis
is placed on the elimination of circular associations among individual
classes. A verification process is designed to verify the numerical
model, i.e., whether the numerical model solves the equations of the
physical model correctly. The verification process is divided into
three levels. The first level consists of unit tests of individual classes,
run with the JUnit framework. These tests are triggered "on the fly"
by individual developers during their work on individual classes.
Their aim is to verify the robustness and functionality of newly
created classes. The second level consists of integration tests of
classes that together provide more complex functionality, e.g., the
construction of approximation functions or the solution of systems of
linear equations. JUnit is also used on this level of testing; the tests
are triggered before a new class is committed to the repository.
Their aim is to verify the functionality of already created composite
classes. Using JUnit for integration testing is not entirely "clean", but
the merits of this solution, its speed and its test coverage, outweigh
the violation of unit-test rules. The third level consists of integration
tests of the whole model. Only tasks with known solutions are used
in third-level tests. These are mostly simple tasks that can be solved
with an analytical model, and they are run before the launch of a new
ISERIT SW version. Their aim is to verify the functionality of the
whole ISERIT SW. Validation is used to check the physical model
implemented in the ISERIT SW; it verifies the accuracy of the
equations used to describe the studied reality. For this we use data
obtained experimentally in tests.
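For illustration, an analogous two-level test structure is sketched below in Python's unittest module (ISERIT itself is written in JAVA and tested with JUnit); the solver stub and test tasks are hypothetical.

    import unittest
    import numpy as np

    def assemble_and_solve(k, f):         # stand-in for cooperating FE classes
        return np.linalg.solve(k, f)

    class TestLinearSolver(unittest.TestCase):      # level 2: integration test
        def test_known_system(self):
            k = np.array([[2.0, -1.0], [-1.0, 2.0]])
            f = np.array([1.0, 1.0])
            np.testing.assert_allclose(assemble_and_solve(k, f), [1.0, 1.0])

    class TestWholeModel(unittest.TestCase):        # level 3: resolvable task
        def test_linear_temperature_profile(self):
            # 1D steady conduction between fixed temperatures is linear in x.
            x = np.linspace(0.0, 1.0, 11)
            t_num = 20.0 + 60.0 * x       # stand-in for a whole-model run
            np.testing.assert_allclose(t_num, 20.0 + (80.0 - 20.0) * x)

    if __name__ == "__main__":
        unittest.main()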
Study on the Approaches for Interval-Bound Estimation
Based on Data Sample of Small Size
V&V2013-2223
Z.P. Shen, S.F. Xiao, X.E. Liu, S.Q. Hu, Institute of Structural
Mechanics, CAEP
As an effective approach applied in engineering, interval analysis is
generally used in the process of uncertainty quantification,
propagation and prediction. Since the estimation of the interval
bounds is an important precondition of interval analysis, it is
important to obtain an accurate interval-bound estimation of the
uncertain variable when using interval analysis. However, if the data
sample size is small (i.e., insufficient data are available), the epistemic
uncertainty will be significant, and it would dramatically decrease the
accuracy of the interval bounds estimated for the uncertain
parameters if a traditional approach were adopted. In order to
reduce the epistemic uncertainty, a two-stage estimation is
performed to improve the traditional method in this paper: hyper-parameters
related to the distribution function of the uncertain variable
are re-estimated in the probabilistic methods, and grey prediction
theory is applied to extend the bounds of the intervals obtained from
the bootstrap in the non-probabilistic methods. Furthermore, these
approaches are verified by numerical experiments considering
several kinds of distributions of the uncertain variable. The results
indicate that all of them improve the accuracy of the estimation; in
particular, the kernel density estimation with variable bandwidth,
which is a probabilistic method, is the best.
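A minimal sketch of the bootstrap stage of the non-probabilistic approach is given below; the data are hypothetical, and the grey-prediction extension of the bounds is only indicated in a comment.

    import numpy as np

    rng = np.random.default_rng(4)
    data = np.array([9.2, 10.1, 9.7, 10.4, 9.9])   # small sample (n = 5)
    lows, highs = [], []
    for _ in range(10000):                         # nonparametric bootstrap
        resample = rng.choice(data, size=data.size, replace=True)
        lows.append(resample.min())
        highs.append(resample.max())
    lo, hi = np.percentile(lows, 5), np.percentile(highs, 95)
    # The grey-prediction stage described above would then extend [lo, hi]
    # outward to allow for values not seen in the small sample.
    print(lo, hi)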
Entropy Generation as a Validation Criterion for Flow and
Heat Transfer Models
V&V2013-2230
Heinz Herwig, Yan Jin, TU Hamburg-Harburg
Problems involving turbulent flow and heat transfer are correctly
described by solving the full underlying differential equations
numerically, without further approximations and with grid
independence ensured. For a Newtonian fluid this corresponds to a
direct numerical simulation (DNS) of the Navier-Stokes equations
and, when Fourier heat conduction is involved, of the energy
equation. These solutions, however, are so computationally
expensive, requiring months of CPU time even on high-performance
computers with parallel CFD codes, that they are no option when
real problems have to be solved. Instead, the time-averaged
equations of the RANS approach (Reynolds Averaged Navier Stokes)
are solved, which now require turbulence modeling. While the DNS
approach (without modeling assumptions) implies the need for
verification of the CFD code ("Do we solve the equations right?",
ASME standard), the RANS approach also needs a validation with
respect to the modeling
assumptions ("Do we solve the right equations?", ASME standard).
Such a validation can either be performed by comparing RANS
results with corresponding experiments or with adequate DNS
solutions; both are assumed to represent the "real physics". In a
standard validation approach, global features like velocity and
temperature profiles or momentum and heat transfer rates are taken
as target quantities in the validation process. As an alternative, we
suggest choosing the local entropy generation rates in the flow as
well as in the temperature field. Entropy generation physically is a
measure of the exergy loss in the flow and in the temperature field.
As such, it has a clear physical interpretation and needs to be
predicted correctly if the modeling is to capture the physics of the
problem under consideration. Entropy generation is also a very
sensitive criterion, since it is proportional to the square of the local
velocity and temperature gradients. As a test case we analyse the
turbulent flow over rough walls of different types (known as k- and
d-type wall roughness). Turbulence models that are validated for this
application are the k-epsilon (standard, RNG and Realizable),
k-omega (standard and SST) and Reynolds stress (linear pressure-strain,
quadratic and stress-omega) models, as well as an LES
approach with an eddy-viscosity subgrid model. For all these cases,
validation with the entropy generation as target quantity is performed
and compared to a validation based on standard quantities like
mean profiles of velocity and temperature.
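The target quantity can be illustrated with a short sketch that evaluates local entropy generation rates from resolved velocity and temperature fields; the fields and properties below are hypothetical, and the expressions are the standard incompressible 2D forms rather than the authors' implementation.

    import numpy as np

    n = 65; h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    T = 300.0 + 10.0 * Y                    # hypothetical temperature field, K
    u = Y * (1.0 - Y)                       # hypothetical shear flow, m/s
    v = np.zeros_like(u)
    k, mu = 0.026, 1.8e-5                   # air-like conductivity, viscosity
    dTdx, dTdy = np.gradient(T, h, axis=0), np.gradient(T, h, axis=1)
    dudx, dudy = np.gradient(u, h, axis=0), np.gradient(u, h, axis=1)
    dvdx, dvdy = np.gradient(v, h, axis=0), np.gradient(v, h, axis=1)
    s_heat = k * (dTdx**2 + dTdy**2) / T**2             # conduction part
    phi = 2.0 * (dudx**2 + dvdy**2) + (dudy + dvdx)**2  # viscous dissipation
    s_visc = mu * phi / T
    s_total = s_heat + s_visc               # candidate validation target field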
On the Uncertainty of CFD in Sail Aerodynamics
V&V2013-2226
Ignazio Maria Viola, Matthieu Riotte, Newcastle University,
Patrick Bot, Institut de Recherche de l’Ecole Navale
A verification and validation procedure for yacht sail aerodynamics is
presented. Guidelines and an example of application are provided.
The grid uncertainty for the aerodynamic lift, drag and pressure
distributions for the sails is computed. The pressures are validated
against experimental measurements, showing that the validation
procedure may allow the identification of modelling errors. Lift, drag
and the L2 norm of the pressures were computed with uncertainties
of the order of 1%. Convergence uncertainty and round-off
uncertainty are several orders of magnitude smaller than the grid
uncertainty. The uncertainty due to the dimension of the
computational domain is computed for a flat plate at incidence and is
found to be significant compared with the other uncertainties. Finally,
it is shown how the probability that the ranking between different
geometries is correct can be estimated knowing the uncertainty in
the computation of the value used to rank.
A Wavelet-based Time-frequency Metric for Validation of
Computational Models for Shock Problems
V&V2013-2232
Michael Shields, Kirubel Teferra, Adam Hapij, Raymond
Daddazio, Weidlinger Associates, Inc.
Numerous metrics have been developed over the years for the
comparison of computational and experimental response histories.
No universally accepted metric exists which captures all of the
desired features of the response histories and many of these metrics
fall short against the “eyeball test.” That is, a seasoned analyst
viewing cross-plots of the response histories is often a better gauge
of the accuracy of the computational model than quantitative metrics.
However, some of these metrics have demonstrated success in
comparing certain characteristics of the response histories. One set
of metrics relies on the comparison of the frequency content of the
response histories. These include Fourier transform based metrics as
well as shock response spectrum based metrics. These metrics are
useful for many records but can be unreliable when the frequency
content of the history is changing in time – i.e. the response is non-stationary. This type of response may be observed, for instance,
when different resonant frequencies are excited at different times.
With this in mind, there is a desire to develop metrics which capture
the variation of the frequency content in time. In the past, this has
been done through the estimation of time-frequency spectra via
windowed short-time Fourier transforms (STFT). For records which
are relatively long compared to their lowest frequency (such as
structural response to ground motion records) this approach can
provide a great deal of insight. However, for records which are very
short relative to their lowest frequency (such as the structural
response to shock) STFT cannot adequately resolve the low
frequency content. Wavelets, on the other hand, are much more
adept at capturing this low frequency content in short signals. Here, a
wavelet-based time-frequency spectrum is computed for both
the computed and the experimental records and a time-varying index
of agreement is used to assess the validity of the model based on the
time-varying frequency content of the records. This metric is then
placed in the formalized UQ context of the QUIPS (Quantification of
Uncertainty and Informed Probabilistic Simulation) software as one
measure in a statistical validation effort for Naval shock modeling.
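A minimal sketch of a wavelet-based time-frequency comparison using PyWavelets is given below; the records and the column-wise agreement index are hypothetical stand-ins for the authors' metric and the QUIPS software.

    import numpy as np
    import pywt

    fs = 1000.0                                       # sampling rate, Hz
    t = np.arange(0.0, 1.0, 1.0 / fs)
    sim_rec = np.sin(2*np.pi*25*t) * np.exp(-3.0*t)   # hypothetical computed record
    exp_rec = np.sin(2*np.pi*27*t) * np.exp(-3.5*t)   # hypothetical test record
    scales = np.arange(1, 128)
    c_sim, _ = pywt.cwt(sim_rec, scales, "morl", sampling_period=1.0/fs)
    c_exp, _ = pywt.cwt(exp_rec, scales, "morl", sampling_period=1.0/fs)
    p_sim, p_exp = np.abs(c_sim)**2, np.abs(c_exp)**2  # time-frequency spectra
    # Time-varying index of agreement: normalized inner product per time column.
    num = (p_sim * p_exp).sum(axis=0)
    den = np.sqrt((p_sim**2).sum(axis=0) * (p_exp**2).sum(axis=0))
    agreement = num / np.maximum(den, 1e-30)  # 1 = identical content at time t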
On the Software User Action Effect to the Predictive
Capability of Computational Models
V&V2013-2231
Yuanzhang Zhang, X.E. Liu, Institute of Structural Mechanics,
CAEP
Verification activity nowadays focuses mainly on ensuring software
quality, while validation activity tries to quantify the predictive
capability of a certain computational model. However, the
computational model that has been validated is commonly different
from the one finally used for the intended prediction. Considering
engineering practice, there is a gap between the verified code and
the computational model. The user action distance is defined to
describe the level of difference between the validated model and the
computational model that will be used for prediction. If the actions
are exactly the same and only the parameters change, this can be
defined as the first class of difference, D1. If the actions are drawn
from the same command set but in a different sequence, this can be
defined as the second class of difference, D2. If there is a new
command that has not been used in the validation model, then the
validation activity is not accomplished. It is reasonable to suppose
that the predictive capability of the computational model will
decrease as the user action distance increases, because of the
increase of unknown epistemic uncertainty, although this is still hard
to quantify. For applied software development, it is clear that keeping
user actions to a minimum could be helpful. Based on applied
software with some sort of guarantees, the users of the software
need to specify the geometry, the ICs and the BCs, etc. After all this
work has been done, there is a computational model, and the
computed system response can be compared with experiments.
Then comes the intended usage: the computational model used for
prediction is not exactly the same as the one that has been validated.
Normally there is no mathematical difference between these models,
but it is subtle and important to ensure that the final model building
process based on the software is reliable. The user actions taken to
build computational models can be considered as software
development, and the commercial software or any other FEM
software can be considered as a kind of computational language.
Since these user actions are similar to software development, it
could be helpful to use a standard software engineering process.
However, considering the changes that normally occur, the agile
programming concept is more helpful in engineering practice.
Design of a New Thermal Transit-time Flow Meter for High
Temperature, Corrosive and Irradiation Environment
V&V2013-2233
Elaheh Alidoosti, Jian Ma, Yingtao Jiang, Taleb Moazzeni,
University of Nevada, Las Vegas
This paper introduces a new bypass flow measurement system using
correlated thermal signals. In harsh environmental applications such
as advanced nuclear reactors, strong irradiation and high pressure
and temperature (above 300 °C and up to 1000 °C) impose great
challenges on flow rate measurement. The transit-time flow meter
which uses
thermocouples with grounded stainless steel shielding is by far the
most robust and reliable solution to measure the flow rate in such a
harsh environment. We have already developed a flow rate
measurement technique using correlated thermal signals and
calibrated it by standard data in the lab. The transit time of inherent
random temperature fluctuations in processes, like the coolant flow in
a nuclear reactor, can be obtained by the cross correlation calculation
of flow temperatures recorded by two separate temperature sensors
placed a certain distance apart. Specifically, the first thermocouple
placed at an upstream position on the flow senses a signature of
temperature fluctuation, while the second downstream thermocouple
ideally grabs the same signature with a delay that is inversely
proportional to the flow rate. Determination of this delay through the
cross correlation calculation of these two temperature signals will
thus reveal the flow rate of interest. However, this flow meter is able
to measure velocity only up to 0.8 m/s. Furthermore, since its
functionality is strongly dependent on the strength of the thermal
signal generated by the heater, it is not economical for larger pipes. These
two limitations motivated us to design a sophisticated layout of
thermal transit-time flow meter. We designed and tested a by-pass
flow measurement system such that the measurement is carried out
through the by-pass route with accurate and repeatable results. The
hypothesis of a linear ratio between the bypass and the main flow
was successfully examined by both simulations and experiments.
The computer simulations were performed using ANSYS FLUENT,
and the experiments were conducted using LabVIEW.
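The transit-time principle described above can be illustrated with a short sketch; the sampling rate, sensor spacing and signals are hypothetical.

    import numpy as np

    rng = np.random.default_rng(5)
    fs, spacing = 200.0, 0.05           # sample rate (Hz), sensor spacing (m)
    sig = rng.normal(size=4000)         # inherent temperature fluctuations
    sig = np.convolve(sig, np.ones(25) / 25.0, mode="same")   # smooth them
    delay = 40                          # true transit time: 40 samples = 0.2 s
    up = sig[delay:]                    # upstream thermocouple record
    down = sig[:-delay] + 0.05 * rng.normal(size=sig.size - delay)  # downstream
    lags = np.arange(1, 200)
    xc = np.array([np.dot(up[:-k], down[k:]) for k in lags])  # cross-correlation
    k_star = lags[np.argmax(xc)]        # estimated delay in samples
    velocity = spacing / (k_star / fs)  # estimated flow velocity, m/s
    print(k_star, velocity)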
Free Jet Entrainment in Steady Fluid
V&V2013-2236
Mike Dahl Giversen, Mads Smed, Peter Thomassen,
Katrine Juhl, Morten Lind, Aalborg University
the centreline velocity is expected to behave and the expansion
angle of the jet. A numerical solution is investigated using different
schemes and turbulence models. Grid independence is checked for
meshes of different designs that are based on the analytical solution.
An evaluation of different velocity profiles working as inlet boundary
conditions is conducted. The centreline velocity has a longer near-field
region when using a uniform velocity profile as inlet boundary
condition than seen in the experiment. Modelling of the settling
chamber provided an inlet velocity profile which was more developed
than seen in the experiment. In the intermediate-field region the
settling-chamber and experimental velocity profiles used in the
numerical model yield similar results for the centreline. The
experimental velocity profile is used as inlet boundary condition,
since it follows the experimental centreline more accurately when
moving further away from the orifice. The experiment was carried out
using a uni-directional hot-wire to collect measurements. The
measurements of velocity magnitude at the centreline and cross
section are consistent in the near- and intermediate-field regions with
the numerical model. The same trend applies to the turbulence
intensity, where the best match occurs close to the orifice. Similar
jets have been described by H. Fellouah, Frank M. White and H.
Fiedler, whose results are used for comparison.
QUICKER: Quantifying Uncertainty In Computational
Knowledge Engineering Rapidly—A Faster Sampling
Algorithm For Uncertainty Quantification
V&V2013-2235
Adam Donato, Rangarajan Pitchumani, Virginia Tech
Uncertainty quantification seeks to address either the stochastic
nature of input parameters—known as aleatory uncertainty—or a
lack of complete information about the inputs—known as epistemic
uncertainty. A complete uncertainty quantification analysis would
include every combination of inputs over the full range of each input.
In order to address the enormity of simulations necessary via the
complete analysis, sampling methodologies, such as Latin
Hypercube, Monte Carlo, and Hammersley/Halton Sampling, have
been developed to drastically reduce the number of simulations
necessary, while still providing an accurate representation of the
complete analysis. Unfortunately, conventional sampling methods
still require a rather large number of simulations, making uncertainty
quantification very costly and time-consuming for many practical
scenarios. Towards alleviating this issue, this paper presents a new
sampling methodology known as QUICKER: Quantifying Uncertainty
In Computational Knowledge Engineering Rapidly. The QUICKER
methodology relies first on the mathematical simplicity and
predictability of sampling from a Gaussian distribution. Then, by
using a technique of combining Gaussian distributions into arbitrary
functions, it is possible to represent arbitrary uncertain input
distributions. This way, even systems that are not normally
distributed can be analyzed with the simplicity and predictability of
Gaussian distributions. The QUICKER methodology has been
demonstrated over a range of nonlinear simulations, including
multiphase and turbulent flows, with as many as eleven uncertain
input parameters. Applications to engineering problems demonstrate
that the QUICKER methodology requires approximately only 5% of
the samples of Latin Hypercube Sampling while maintaining a
comparable accuracy—leading to tremendous speedup in
uncertainty quantification. The QUICKER methodology opens the way to using uncertainty quantification for a far wider range of scenarios than was previously possible with conventional sampling methodologies, without sacrificing accuracy.
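For reference, the snippet below sketches one of the conventional baselines named in the abstract, a minimal Latin Hypercube sampler; the QUICKER algorithm itself is not reproduced here, and the dimension and sample counts are illustrative only.

    import numpy as np

    def latin_hypercube(n: int, d: int, seed=None) -> np.ndarray:
        """Draw n samples in [0, 1)^d, one per equal-probability stratum."""
        rng = np.random.default_rng(seed)
        samples = np.empty((n, d))
        for j in range(d):
            # one uniform draw inside each of the n strata, then shuffle
            strata = (np.arange(n) + rng.random(n)) / n
            samples[:, j] = rng.permutation(strata)
        return samples

    # e.g. 50 samples over 11 uncertain inputs, the abstract's largest case
    x = latin_hypercube(50, 11, seed=1)
    print(x.shape, x.min(), x.max())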
Verifying LANL Physics Codes: Benchmark Analytic Solution
of the Dynamic Response of a Small Deformation Elastic or
Viscoplastic Sphere
V&V2013-2237
Brandon Chabaud, Jerry S. Brock, Todd O. Williams,
Brandon M. Smith, Los Alamos National Laboratory
We present an analytic solution to the small deformation dynamic
sphere problem, which models the dynamic behavior of a solid
sphere undergoing small deformations. Our solution takes the form of
an infinite series for the displacement and quantities derived from it,
such as velocity and stress. We show that the solution applies to a
variety of boundary conditions (for displacement, strain, and stress). It
also applies to isotropic and transverse isotropic elastic spherical
shells and to isotropic viscoplastic shells. We demonstrate self-convergence of the series solution under spatial, temporal, and
eigenmode refinement. Our primary objective in developing this
solution is to have a benchmark against which physics codes may be
tested for accuracy. We demonstrate the convergence of numerical
results from one particular physics code package being developed
by Los Alamos National Laboratory.
Knowledge Base Development for Data Quality Assessment,
Uncertainty Quantification, and Simulation Validation
V&V2013-2238
Weiju Ren, Oak Ridge National Laboratory, Hyung Lee, Bettis
Laboratory, Kimberlyn C. Mousseau, Sandia National
Laboratories
Despite its universally recognized importance to nuclear research and
development, the practice of simulation verification and validation
(V&V), data quality assessment (QA), and uncertainty quantification
(UQ) is still largely conducted in a hodgepodge manner across the
modeling and simulation community. Further, reliable data and
information sources for V&V are often found scarce, incomplete,
scattered, or inaccessible. More and more researchers have come to recognize this in recent years, raising concerns about how to develop trusted modeling and simulation (M&S) tools, which are highly desired for improved cost efficiency and critical decision making. To address this issue, a centralized knowledge base dubbed
“Nuclear Energy Knowledge base for Advanced Modeling and
Simulation” (NE-KAMS) was initiated for development in 2012
through collaborations of several Department of Energy laboratories.
NE-KAMS is designed as a web-based, interactive, digital, and multifaceted knowledge management system that supports QA, UQ, and
V&V of modeling and simulation for nuclear energy development. The
development is intended to provide a comprehensive foundation of
resources for establishing confidence in the use of M&S for nuclear
reactor design, analysis, and licensing. It is also intended to offer
support to development and implementation of standards,
requirements and best practice for QA, UQ, and V&V that enable
scientists and engineers to assess the quality of their M&S while
facilitating access to computational and experimental data for
physics exploration. Furthermore, it is envisioned to serve as a
platform for technical exchange and collaboration, enabling credible
computational models and simulations for nuclear energy
applications. Currently NE-KAMS is designed with two major parts:
the NE-KAMS Data File Warehouse and the NE-KAMS Digital
Database. The NE-KAMS Data File Warehouse provides a virtual
dock-and-storage point for collecting data in various types of
electronic documents. Uploading data files into the Warehouse
requires very limited data preparation so that business operation can
start immediately. A data quality assessment is also conducted in the
Warehouse on the uploaded data files. The NE-KAMS Digital
Database is a relational database that manages digital data that are
processed from the warehouse data sources and provides powerful
functionalities for managing and processing digital data as well as
relations between data. It allows users to:
• easily identify and extract desired data;
• conveniently reorganize data in various formats for study and analysis;
• distribute data in various formats to facilitate data assimilation for various V&V activities;
• export data to external software for M&S;
• import data from external computational results for analysis and preservation; and
• generate plots and reports for presentations.
All data in the NE-KAMS Digital Database
can be connected through hypertext links to their original data source
documents in the NE-KAMS Data File Warehouse to keep track of
their pedigree so that users can easily trace back to the data’s origins
whenever necessary.
Free Jet Entrainment in Steady Fluid
V&V2013-2236
Mike Dahl Giversen, Mads Smed, Peter Thomassen, Katrine Juhl, Morten Lind, Aalborg University
Jet entrainment is a phenomenon which has many practical applications. In combustion chambers, knowledge about jet behaviour can be used to predict flow entrainment, expansion, mixing, fluctuations and velocities at different points. Optimisation is possible by customising the jet with the desired features, such as ensuring that the jet reaches sufficiently deep into a combustion chamber. By knowing the boundary conditions of a jet, e.g. in a power plant, considerable computational effort in the numerical solution of such a problem can be saved. Flow behaviour is analysed in the near- and intermediate-field regions of a turbulent free jet with a Reynolds number of approximately 35,000. An analytical solution shows how the centreline velocity is expected to behave and gives the expansion angle of the jet. A numerical solution is investigated using different schemes and turbulence models. Grid independence is checked for meshes of different designs that are based on the analytical solution. An evaluation of different velocity profiles serving as the inlet boundary condition is conducted. The centreline velocity has a longer near-field region when using a uniform velocity profile as the inlet boundary condition than seen in the experiment. Modelling of the settling chamber provided an inlet velocity profile which was more developed than seen in the experiment. In the intermediate-field region, the settling-chamber and experimental velocity profiles used in the numerical model yield similar results for the centreline. The experimental velocity profile is used as the inlet boundary condition, since it follows the experimental centreline more accurately when moving further away from the orifice. The experiment was carried out using a uni-directional hot-wire to collect measurements. The measurements of velocity magnitude at the centreline and cross section are consistent with the numerical model in the near- and intermediate-field regions. The same trend applies to the turbulence intensity, where the best match occurs close to the orifice. Similar jets have been described by H. Fellouah, Frank M. White and H. Fiedler, whose results are used for comparison.
Robust Numerical Discretization Error Estimation without
Arbitrary Constants using Optimization Techniques
V&V2013-2239
William Rider, Jim Kamm, Walt Witkowski, Sandia National
Laboratories
Quantification of the uncertainty associated with the truncation error
implicit in the numerical solution of discretized differential equations
remains an outstanding and important problem in computational
science and engineering. Such studies comprise one element in a
complete evaluation of the development and use of simulation
codes, entailing verification, validation, and uncertainty quantification,
as described, e.g., by Oberkampf et al. [2] or Roy and Oberkampf [3].
Richardson extrapolation is a common approach to characterizing
the uncertainty associated with discretization error effects for
numerical simulations. Various approaches based on Richardson
extrapolation have been broadly applied to cases where computed
solutions on sufficiently many grids are available to completely
evaluate an extrapolated solution. Implicit in such analyses is that
there is numerical evidence to justify the assumption that the
computed solutions are in the domain of asymptotic convergence,
i.e., that the mesh discretization is sufficiently refined for the
assumption of some power law dependence of the discretization
error as a function of the mesh length scale to be valid. We develop a
set of methods for estimating the discretization error utilizing an
optimization framework. This results in a multitude of error estimates
that are refined into the final estimate using robust statistical methods
(e.g., median statistics). One aspect of the methodology lessens, and gives structure to, the role of expert judgment in the estimation. The practical reality commonly faced in many engineering-scale simulations of complex, multi-physics phenomena is that high-consequence decisions must be made from limited information; to these decisions we apply info-gap methods to provide a level of confidence in the error estimate. Because of the limited amount of information available in this information-poor case, the usual
Richardson extrapolation analysis cannot be rigorously justified, much less directly utilized. Here, we demonstrate several novel approaches to this problem: (1) using an optimization framework to define the problem and provide a family of estimates based on rationally chosen objective functions and (2) the application of info-gap theory [1], which provides a method for supporting model-based decisions with a confidence statement. We describe how these analyses work on a sequence of problems beginning with idealized test cases and scaling up to an engineering-scale problem involving metrics derived from the output of a multi-dimensional, multi-physics simulation code on a variety of physical models (fluid, solid mechanics, and neutron transport). 1. Y. Ben-Haim, Info-Gap Decision Theory: Decisions under Severe Uncertainty, Elsevier, Oxford, UK, 2006. 2. W. L. Oberkampf, T. G. Trucano, and C. Hirsch, “Verification, validation, and predictive capability in computational engineering and physics,” Applied Mechanics Reviews, 57:345–384, 2004. 3. C. J. Roy and W. L. Oberkampf, “A comprehensive framework for verification, validation and uncertainty quantification in scientific computing,” Computer Methods in Applied Mechanics and Engineering, 200:2131–2144, 2011.
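For background, the sketch below applies the standard Richardson-extrapolation procedure that the abstract generalizes: from solutions on three systematically refined grids it recovers an observed order of accuracy, an extrapolated value, and a Roache-style grid convergence index. The numerical values are illustrative, not taken from the paper.

    import numpy as np

    # Solutions on fine, medium, and coarse grids (assumed example values)
    f1, f2, f3 = 0.970, 0.976, 1.000
    r = 2.0                                   # constant grid refinement ratio

    p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)         # observed order
    f_extrap = f1 + (f1 - f2) / (r**p - 1.0)              # extrapolated estimate
    gci_fine = 1.25 * abs((f2 - f1) / f1) / (r**p - 1.0)  # fine-grid error band

    print(f"observed order p = {p:.2f}")                  # -> 2.00
    print(f"extrapolated value = {f_extrap:.4f}")         # -> 0.9680
    print(f"fine-grid GCI = {100 * gci_fine:.2f} %")      # -> 0.26 %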
Improving Flexibility and Applicability of Modeling and Simulation Credibility Assessments
V&V2013-2240
William Rider, Richard Hills, Timothy Trucano, Walt Witkowski, Angel Urbina, Sandia National Laboratories
The importance of assuring quality in modeling and simulation has spawned a number of frameworks for structuring the examination of credibility. The format and content of these frameworks provide emphasis, completeness and flow to credibility assessment activities. Perhaps the first of these frameworks was the CSAU (Code Scaling, Applicability and Uncertainty) methodology [Boyack] used for nuclear reactor safety studies and endorsed by the United States Nuclear Regulatory Commission (USNRC). This framework was shaped by nuclear safety regulatory practice, and the regulatory structure demanded after the Three Mile Island accident. CSAU incorporated the dominant experimental program (scaling), the dominant analysis approach (assessment), and concerns about the quality of modeling (uncertainty). After the cessation of nuclear weapons testing, the United States began a program of examining the reliability of these weapons without testing. This program is science-based, and integrates theory, modeling, simulation and smaller-scale experimentation to replace the underground testing. The emphasis on modeling and simulation necessitated attention on the quality of these predictions. As computational simulation becomes an increasingly large component of applied work, the overall credibility of the simulations is a required focus. Sandia National Laboratories scientists [Oberkampf et al.] introduced the Predictive Capability Maturity Model (PCMM) as a conceptual and operational framework to guide assessment of the credibility of computational simulations. Presently, PCMM provides an idealized structure for systematically addressing computational simulation credibility. NASA has built yet another computational simulation credibility assessment framework in response to the tragedy of the space shuttle accidents [NASA]. Finally, Ben-Haim and Hemez focus upon modeling robustness and predictive fidelity in an independent conceptual approach. Credibility evaluation frameworks such as the PCMM provide comprehensive examination of modeling and simulation elements that are essential for achieving computational results of high quality. However, we find that existing frameworks are incomplete and should be extended, for example by incorporating elements from each other, as well as new elements related to how models are mathematically and numerically approximated, and how the model will be applied. Rather than engage in an adversarial role, here we discuss why these kinds of evaluation frameworks are appropriately viewed as collaborative tools. As mechanisms for structuring collaborations, for guiding information collection, and for assisting communication, such frameworks enable modeling and
simulation efforts to achieve high quality. References: W. L. Oberkampf, M. Pilch, and T. G. Trucano, Predictive Capability Maturity Model for Computational Modeling and Simulation, SAND2007-5948, 2007. B. E. Boyack, Quantifying Reactor Safety Margins Part 1: An Overview of the Code Scaling, Applicability, and Uncertainty Evaluation Methodology, Nucl. Eng. Design, 119, pp. 1–15, 1990. National Aeronautics and Space Administration, Standard for Models and Simulations, NASA-STD-7009, 2008. Y. Ben-Haim and F. Hemez, Robustness, fidelity and prediction-looseness of models, Proc. R. Soc. A, 468:227–244, 2012.
The RoBuT Wind Tunnel for CFD Validation of Natural, Mixed,
and Forced Convection
V&V2013-2242
Barton Smith, Blake Lance, Jeff Harris, Utah State University
An experimental wind tunnel facility for velocity, temperature, and heat flux measurements of flow over a heated flat plate will be described. The facility is specifically designed for validation experiments. As such, all inflow and boundary conditions are directly measured. This facility has the ability to produce natural, mixed, and forced convection. Furthermore, due to its unique “Ferris Wheel” design, the entire tunnel can be rotated so that buoyancy aids or opposes the blower-driven flow. Steady-state conditions or programmed transient flow conditions can be prescribed. The temperature instrumentation system and locations of the measurements will be discussed. Multiple planes of two-component particle image velocimetry (PIV) data are used for the inflow conditions. Two-component PIV data are also used for system response quantities (SRQs). Both the time-averaged flow and turbulent statistics are obtained. Heat-flux sensors provide an additional SRQ. Rigorous uncertainty analysis has been performed for all boundary-condition, inflow, and SRQ measurements. The first two validation studies performed in this facility, forced and mixed convection, will be presented. A parallel, 3-D computational fluid dynamics (CFD) campaign using the as-built and boundary-condition data is also described.
Experiment-Reconstruction-Calculation in Benchmark Study for Validation of Neutronic Calculations
V&V2013-2245
Elena Mitenkova, Nuclear Safety Institute of Russian Academy of Sciences
When simulating nuclear power systems, one of the major problems is increasing the predictive accuracy of nuclear and radiation safety analyses for hazardous systems, especially for newly developed systems. Benchmarks are positioned as a basis for substantiating calculated results in reactor physics simulation. The neutronic benchmark studies aim to develop benchmark models for validating neutronic calculations using different codes and nuclear data libraries. High expectations of the benchmark studies impose additional conditions on the experiments and on the reliability of data obtained at critical facilities and research reactors. Some general factors in the simulation of physical processes at a critical facility are:
• The reliability of the data used varies widely, and the uncertainty for “ill-defined” data is quite different from the uncertainty for “well-defined” data.
• The input data are large in volume, comprising both dependent and independent data, with pros and cons for covariance analysis.
• In the experiment and the calculation there are noticeable distinctions in the sensitivity of the analyzed neutronic characteristics to the same varying geometrical parameters and isotopic compositions of individual materials.
The final experimental results after data reconstruction are
the basis for code and data validation. The proven standard procedures and the established nuclear data used in reconstructing the measured data are inherently rather conservative. The reconstruction adds errors on top of the conventional measurement errors. Thus, in the analysis and interpretation of experimental results, several issues are implied:
• The correlation of discrepancies and agreement between the calculated and experimental results is mainly due to the incomplete description of the experiment.
• The clarification of the neutron physics tasks for which experiments at a critical facility are justified and for which the experimental results are in demand in calculations.
• The refinement of the neutronic characteristics for which measurements at a critical facility can be regarded as reliable, with results represented as a range of values accounting for measurement and reconstruction errors and for uncertainty in the experiment parameters.
In recent years, various (U-Pu) fuel compositions have been considered for the next generation of sodium fast reactors. Nuclear professionals worldwide pay special
attention to the benchmark study, based on experiments with (U-Pu)
fuel and accurate calculations with a detailed analysis of
experimental and calculated values. Some provisions regarding the
quality of performed neutronic calculations and the issues of
benchmark model efficiency in terms of analyzed characteristics are
considered by the example of BFS-62-3.
Development of a Microorganism Incubator using CFD
Simulations
V&V2013-2243
Ilyoup Sohn, Kyung Seo Park, Sang-Min Lee, Korea Institute
of Science and Technology Information, Hae-Yoon Jeong,
EcoBiznet
A microorganism incubator has been developed through a prototype experiment and computational fluid dynamics (CFD) simulations. The microorganism incubator is useful for supplying high-quality fertilizer to agricultural industries. In particular, a small, movable incubator is desired for individual agriculturalists. The incubator is basically designed as a mixing tank that mixes water and microorganism powder; hence the determination of the impeller shape and size for mixing must be carefully considered in the development process. To produce the turbulence needed for easier mixing, the location and size of baffles on the tank wall are a significant aspect of the mixing tank design process as well. CFD simulations considering two-phase flows are performed to describe the mixing flows inside the tank, and the flow physics and patterns are studied in order to find the optimal conditions for the microorganism. Since the tank is only partially filled with water and microorganism powder, two-phase flows must be considered in the CFD simulations. ANSYS FLUENT 13.0 is utilized to simulate the flows, and the volume of fluid (VOF) scheme is used to describe the two-phase flows. Transient simulations are adopted to calculate the time-dependent interactions between air and water, and the multiple-reference-frame (MRF) approach is chosen to represent the rotating motion of the impeller, which is located on the central-bottom side of the tank. The standard k-epsilon turbulence model is used to
describe the complex turbulent flow phenomena around the rotating impeller and the baffles on the tank wall. Prior to the transient two-phase flow simulations, a steady-state single-phase flow simulation is carried out, and the general features of the flow with the tank fully filled are examined for a given low impeller rotating speed. It is clearly observed that the flow speed away from the impeller is negligibly small, because neither the impeller size nor the rotating speed is sufficient to impart momentum to all of the water in the tank. Based on the time-dependent analysis of the flow pattern obtained from the CFD simulations, several prototypes were produced, and their performance was verified by experiments and by comparison with the numerical results.
Verification of Modelling Electrovortex Flows in DC EAF
V&V2013-2248
Oleg Kazak, Donetsk National University
The work is devoted to the numerical modelling of electrovortex and convection flows in DC EAF with different construction
features. The numerical modelling of the processes involved was treated as a multiphysics problem. ANSYS software packages were chosen for solving the stated problem. The reliability and accuracy of the calculated results were estimated by comparing these results with: theoretical research; experimental data obtained with the laboratory installation; calculations done by other methods and software packages; and experimental data based on the increased wear of the bottom electrode for industrial DC EAF. The
verification of the obtained results has shown a good correlation with
the general theoretical data concerning electrovortex movement. The
turbulent electrovortex movement of liquid metal in hemispherical
volume is calculated on the basis of the developed methods. This
movement was studied experimentally in the laboratory installation.
Results that correlate with the experimental data within 15% were obtained for the turbulence model that was then used in the further calculations. To verify the obtained results, similar
calculations were done in COMSOL. The comparison of the obtained
results from the different software packages and methods showed an insignificant difference of approximately 3%. The
similarity of the calculations done by different methods and software
packages proves the reliability of the methods and significance of the
results. According to general theoretical conceptions and recent research in metallurgy, there is a direct connection between the shear stress exerted on the fettle area by the liquid velocity and the increased fettle wear. These data have been obtained since 1996 by
checking the actual wear periodically throughout the life of several
DC EAF. The increased wear of the bottom electrode and fettle right
near the bottom electrode confirms that the flows with higher velocity
and maximum value of the friction force are located near the bottom
electrode at a distance of about the radius of the electrode. The conic
shape of the fettle border near the bottom electrode shows that the
liquid vortex flow is of conic character. The washout of the fettle right near the bottom electrode indicates that the streamlines of liquid movement under the Lorentz force originate and terminate on the bottom electrode. This characteristic of the fettle wear confirms that the major cause of the increased wear is liquid vortex movement under the Lorentz force.
Multi-Level Code Verification of Reduced Order Modeling for
Nonlinear Mechanics
V&V2013-2250
William Olson, Ethicon Endo-surgery, Amit Shukla, Miami
University
Reduced order modeling is computationally and intuitively
advantageous in many nonlinear systems. This is an enabling
technology for design space exploration and mapping. This work
describes a Code Verification exercise for a Nonlinear Reduced Order
Model (NL ROM) developed to simulate response of sinusoidally
excited cutting tools. Originally, the simulation of these tools requires
detailed finite element models. These models are computationally
burdensome for design space exploration using traditional nonlinear
time domain, dynamic simulations. To alleviate this burden, an NL
ROM is proposed and developed. The code developed for estimating
this NL ROM utilizes the full finite element model for the associated
sinusoidally excited devices. It is assumed that the finite element
model completely describes the physics of interest. Computational
advantage is derived as the NL ROM process uses only static solutions
to the nonlinear problem for identification of nonlinear model
parameters. This derived model can then be exercised for
understanding the dynamic nonlinear response. The result obtained
is a time domain simulation of the response of the model to a
sinusoidal excitation. To achieve this result, the intermediate steps
require (a) calculation of mode shapes and associated frequencies,
(b) estimation of NL ROM coefficients and (c) simulation of the
estimated NL ROM using a Runge-Kutta type numerical integration
of the response to a harmonic input. Verification proposed for this
code development can take varying degrees of effort. This effort
depends upon how the results of the code are used and what
influence they may have on the design decision process. If the expectation is that the result of this analysis process has minimal design implications, then verification at one level may be satisfactory. An example of this level of verification might include verifying only the frequency content of the time response derived from the NL ROM. On the other end of the spectrum, if the result of this analysis process has great influence on the design decisions, then a more rigorous verification process, including multi-level verification of intermediate results, is necessary. This presentation
describes the verification process alternatives guided by the usage of the NL ROM code.
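As an illustration of step (c), the sketch below integrates a hypothetical single-mode nonlinear ROM (a modal equation with cubic stiffness, standing in for an identified model) under sinusoidal excitation using a classical fourth-order Runge-Kutta scheme. All coefficients are assumed for illustration; they are not the paper's identified parameters.

    import numpy as np

    wn, zeta, alpha = 2 * np.pi * 50.0, 0.02, 1.0e6  # modal freq, damping, cubic term (assumed)
    F, w = 10.0, 2 * np.pi * 45.0                    # forcing amplitude and frequency (assumed)

    def rhs(t, y):
        # single-mode equation of motion with cubic stiffness nonlinearity
        q, qdot = y
        qddot = F * np.sin(w * t) - 2 * zeta * wn * qdot - wn**2 * q - alpha * q**3
        return np.array([qdot, qddot])

    def rk4_step(t, y, h):
        # one classical Runge-Kutta (RK4) step
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, y + h / 2 * k1)
        k3 = rhs(t + h / 2, y + h / 2 * k2)
        k4 = rhs(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    h, y = 1.0e-4, np.array([0.0, 0.0])              # time step and initial state
    for i in range(int(1.0 / h)):                    # simulate 1 s of response
        y = rk4_step(i * h, y, h)
    print("displacement and velocity at t = 1 s:", y)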
Validation Basis Testing for the Westinghouse Small Modular
Reactor
V&V2013-2249
Richard Wright, Cesare Frepoli, Westinghouse Electric Co,
LLC
The Westinghouse Small Modular Reactor (SMR) is an 800 MWt
(>225 MWe) integral pressurized water reactor. This paper is part of a
series of four describing the design and safety features of the
Westinghouse SMR. This paper focuses in particular upon the
passive safety features and the safety system response of the
Westinghouse SMR to design basis accidents. The Westinghouse
SMR design incorporates many features to minimize the effects of,
and in some cases eliminates the possibility of, postulated accidents.
The small size of the reactor and the low power density limit the potential consequences of an accident relative to a large plant. The
integral design eliminates large loop piping, which significantly
reduces the flow area of postulated loss of coolant accidents
(LOCAs). The Westinghouse SMR containment is a high-pressure,
compact design that normally operates at a partial vacuum. This
facilitates heat removal from the containment during LOCA events.
The containment is submerged in water, which also aids heat removal and provides an additional radionuclide filter. The
Westinghouse SMR safety systems are shown in Figure 1. The
Westinghouse SMR safety system design is passive, is based largely
on the passive safety systems used in the AP1000® reactor, and
provides mitigation of all design basis accidents without the need for
ac electrical power for a period of seven days. Frequent faults, such
as reactivity insertion events and loss of power events, are protected
by first shutting down the nuclear reaction by inserting control rods,
then providing cold, borated water through a passive, buoyancy-
driven flow. Decay heat removal is provided using a layered approach that includes the passive removal of heat by the steam drum and an independent passive heat removal system that transfers heat from the primary system to the environment. The limiting design basis accident for the Westinghouse SMR is a LOCA resulting from a break in the piping connected to the reactor pressure vessel. The size of the break is limited to three inches in diameter, and all penetrations are at elevations above the top of the reactor core. The phenomena most important to the safety of the plant were identified as part of a Phenomena Identification and Ranking Table (PIRT) exercise. The important phenomena were evaluated to determine the state of knowledge (SoK) for each, and the high-importance / low-SoK phenomena were used to determine the testing needs for the SMR. A test program has been identified to address the high-importance / low-SoK phenomena, consisting of an integral effects test and a separate effects test. The integral effects test is a scaled, full-pressure, full-height test that models the reactor coolant system, passive safety systems and compact, high-pressure containment. The separate effects test is a full-scale, low-pressure test that studies the three-dimensional two-phase flow phenomena in the reactor during the latter stages of the LOCA. Both tests will be used to develop test data which will be used to verify and validate the computer codes used to perform the safety analysis of the Westinghouse SMR. Models of the test facilities are being developed using two-phase LOCA simulation codes. These models will be used to predict the tests, and to provide the validation basis after the tests are completed. The validation basis will be submitted to the Nuclear Regulatory Commission for approval of these codes.
The Application of Fast Fourier Transforms on Frequency-Modulated Continuous-Wave Radars for Meteorological Application
V&V2013-2252
Akbar Eslami, Elizabeth City State University
The state of development of a solid-state X-Band (e.g., 10 GHz)
frequency-modulated continuous-wave (FM-CW) radar currently
being evaluated by NASA/Goddard is described. This radar is an
inexpensive, operational prototype, simultaneously determining the
range and range-resolved radial velocity component of precipitation
in real-time. The advantages of FM-CW radar are: (1) low cost, (2)
good sensitivity, (3) high spatial resolution, (4) high reliability, (5)
portability, (6) relative simplicity and (7) safety. This type of radar
reduces blind spots (or gaps) found in conventional pulsed-Doppler
radars. It is useful for providing input observations to evaluate
meteorological models, either directly or as a component of a profiler
system to retrieve information on precipitation in the liquid or ice
phase as well as providing input to numerical weather prediction
models. Simulations, signal analyses and visualizations are
performed using Matlab software. Preliminary measurements from
field tests and numerical calculations of various aspects of the radar,
such as range accuracy and reflectivity linearity, are compared,
showing excellent agreement. Fundamental to the operation of this
radar, the manner in which range resolved velocities are calculated, is
fully described and illustrated by the results of simulations. An
additional application is implementing FM-CW theory in a classroom
setting. Students are introduced to signals and signal processing,
covering topics such as discretizing and sampling of signals, Fourier
transforms, Z-transforms, and filters. Matlab software, or similar
computational software, is utilized to perform the signal operations.
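The ranging calculation the abstract refers to can be sketched in a few lines of Python rather than Matlab: for a sawtooth sweep of bandwidth B over duration T, a target at range R produces a de-chirped beat frequency f_b = 2RB/(cT), recovered here from an FFT peak. All radar parameters are assumed illustrative values, not those of the NASA/Goddard prototype.

    import numpy as np

    c = 3.0e8          # speed of light, m/s
    B = 50.0e6         # sweep bandwidth, Hz (assumed)
    T = 1.0e-3         # sweep duration, s (assumed)
    fs = 2.0e6         # sampling rate, Hz (assumed)
    R = 1500.0         # true target range, m (assumed)

    fb = 2 * R * B / (c * T)                 # expected beat frequency (500 kHz here)
    t = np.arange(0, T, 1 / fs)
    signal = np.cos(2 * np.pi * fb * t)      # idealized noise-free de-chirped return

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    f_peak = freqs[np.argmax(spectrum)]
    R_est = f_peak * c * T / (2 * B)         # invert the beat-frequency relation

    print(f"beat frequency {f_peak:.0f} Hz -> estimated range {R_est:.0f} m")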
Application of a Multivariate Approach and Static Validation
Library to Quantify Validation Uncertainty and Model
Comparison Error for Industrial CFD Applications: Single-Phase Liquid Mixing in PJMs
V&V2013-2255
Andri Rizhakov, Leonard Peltier, Jonathan Berkoe, Bechtel National, Inc.; Kelly Knight, Michael Kinzel, Brigette Rosendall, Mallory Elbert, Kellie Evon, Bechtel National Advanced Simulation & Analysis
To qualify a single-phase computational fluid dynamics (CFD) model
as an industrial design tool, a quantification of the goodness of that
model for its intended purpose is required. If experimental data exist
for a representative configuration, the goodness of the model may be
quantified via the verification and validation (V&V) techniques outlined
in the ASME V&V20-2009 Standard. The V&V metrics are the model
comparison error and the validation uncertainty for a variable of
interest. If representative experimental data are not available, as is
often the case, a method is needed that can extend validation
metrics from a validation database to the conditions of the
application. This work considers use of a Sandia multivariate
approach to extend validation metrics from a static validation library
to estimate model comparison error and uncertainty for single-phase
liquid mixing in a pulse-jet vessel. Experimental data exist for the
application allowing model comparison error and validation
uncertainty to be computed directly. A comparison of the results
establishes proof of concept. By limiting the validation cases to a
subset of the validation test suite, the multivariate method is shown
to objectively indicate when the validation cases are insufficient to
resolve the physics of the application.
Validation and Verification Methods for Nuclear Fuel
Assembly Mechanical Modelling and Structure Analyses
V&V2013-2253
Xiaoyan Jiang, Westinghouse
Nuclear plant safety is always the number one priority for the nuclear industry and the nuclear authorities. The validation and verification of fuel assembly modeling and analyses are very important steps in fuel assembly mechanical design and development. This paper will briefly discuss the general Westinghouse fuel design procedure requirements. Static and dynamic finite element analyses (FEA) are widely used in the nuclear fuel design process. The validation method for the current fuel assembly static and dynamic models is comparison with mechanical tests (stress, vibration, drop, etc.), for example:
• dynamic parameter validation of fuel assembly components, such as grid dynamic impact tests;
• stress modeling and analysis validation for fuel assembly components, such as nozzles.
To better apply the validation and verification requirements based on the company’s design procedure requirements, the verification and validation guidance flow chart is applied to the fuel assembly modeling and analysis. The different validation methods are discussed. The 3-pass verification method is widely used in all Westinghouse analysis reports and technical documents. The validation and verification of fuel assembly modeling and analyses are very important. The quality control and design review process keeps the fuel assembly design compliant with the highest safety standards.
High Fidelity Modeling of Human Head under Blast Loading:
Development and Validation
V&V2013-2256
Nithyanand Kota, Science Applications International
Corporation (SAIC), Alan Leung, Amit Bagchi, Siddiq Qidwai,
Naval Research Lab
The mechanical loading during blast wave propagation is the primary
cause of traumatic brain injury (TBI) to the warfighter in battlefields.
Modeling such events is essential for the improved understanding of
TBI. A typical human head modeling effort comprises of developing
the: (1) geometric model, (2) material constitutive models, and (3)
verification and validation procedures. The internal structure of the
human head with many underlying features (e.g., brain fissures,
sinuses, and ventricles) necessitates the use of a high fidelity
geometric model. Incorporation of as many features and
components as possible is also essential for capturing the
complicated nature of wave propagation (e.g., internal reflections due
to impedance mismatch between components). In addition, it is
essential to implement accurate constitutive models for the
constituent materials. This includes choosing the appropriate
functional form and its calibration with a broad range of strain-rate dependent experimental data. Lastly, the developed computational model requires validation through experimental comparison. The ongoing modeling work presented here includes: a) development of a high fidelity geometric model that contains all the major macro-scale geometric features and constituents, b) assignment of appropriately calibrated advanced constitutive material property descriptions, and c) testing of the model with simplified blast loading and validation of the developed model against four benchmark studies. The high fidelity model developed through this study, combined with validation against four experimental studies, will significantly enhance injury predictive capability compared to existing models.
Dynamic Sensitivity Analysis and Uncertainty Propagation
Supporting Verification and Validation
V&V2013-2254
John Doty, University of Dayton
Validation experiments were developed for a second-
order dynamic system in order to demonstrate the computational-to-experimental verification and validation process. The process is applicable to any system, but a simple example was chosen to illustrate the implementation. A numerical example is presented along with the relevant data in sufficient detail to demonstrate how the analysis was performed. By applying this process, the system is studied from a statistical perspective, with an emphasis on dynamic uncertainty analysis, dynamic sensitivity analysis, and dynamic uncertainty propagation. The sensitivity analysis indicates that the behavior of each component varies notably in time, and several critical parameters were identified that influence integrated system performance. By identifying and quantifying the uncertainty in each parameter as well as in the process and measurements, the quality of the computational model can be improved, and decisions can be made with quantifiable confidence.
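A minimal sketch of the kind of dynamic uncertainty propagation described is given below: uncertain natural frequency and damping are sampled, step responses of a second-order system are simulated, and the time-varying spread of the output is examined. The distributions and system constants are assumed for illustration; the abstract's actual example system and data are not reproduced.

    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 2.0, 400)
    runs = []
    for _ in range(500):
        wn = rng.normal(10.0, 0.5)      # uncertain natural frequency, rad/s (assumed)
        zeta = rng.normal(0.2, 0.02)    # uncertain damping ratio (assumed)
        wd = wn * np.sqrt(1 - zeta**2)  # damped natural frequency
        phi = np.arccos(zeta)
        # analytical step response of an underdamped second-order system
        y = 1 - np.exp(-zeta * wn * t) * np.sin(wd * t + phi) / np.sqrt(1 - zeta**2)
        runs.append(y)

    runs = np.array(runs)
    std_t = runs.std(axis=0)            # time-varying output uncertainty
    print("max output std-dev:", std_t.max(), "at t =", t[np.argmax(std_t)])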
Thermal Stratification by Direct Contact Condensation of
Steam in a Pressure Suppression Pool
V&V2013-2257
Daehun Song, Nejdet Erkan, Koji Okamoto, The University
of Tokyo
The loss of coolant accident (LOCA) in light water reactors (LWRs) is one of the main design-basis accident scenarios. Pressure suppression systems in BWRs are utilized to control the pressure and temperature of the containment in accident or abnormal operating conditions in which the reactor pressure must be relieved in a short time. This pressure relief system operates by directing the steam produced in the drywell or pressure vessel of the reactor into a suppression pool (SP) through a vent line whose open end is submerged in the water. Steam condenses in the SP by the direct contact condensation mechanism. At lower steam flow rates, thermal stratification occurs due to the weak momentum of the condensate, which cannot generate large advection in the SP. Undesired accumulation of the hot condensate plume at elevated locations of the pool, due to its lower density, may degrade the condensation and heat absorption capability of the SP. A hot water-gas interface in the SP can cause a pressure increase in the drywell that may augment the risk of containment damage. Model experiments are performed in a scaled-down SP setup with a steam generator for the validation and testing of our numerical calculations. Time-resolved temperature and pressure data are acquired with thermocouples, pressure transducers and PLIF (Planar Laser Induced Fluorescence). During the experiments, a well-established stationary thermal stratification is detected after a particular time following the start of steam injection. The critical parameters needed to resolve the phenomena accurately are explored, in order to extrapolate the findings to robust simulation at real dimensions.
Uncertainty Quantification for CFD: Development and Use of
the 5/95 Method
V&V2013-2262
Arnaud Barthet, Philippe Freydier, EDF
In the framework of the increasing use of CFD in Nuclear Reactor
Safety analyses, EDF developed an approach to Uncertainty
Quantification in CFD. In this context, the main purpose is to develop
a method which takes into account an industrial environment with a
limited number of computations. The developed method is robust,
conservative and includes prediction computation from a mock-up
scale to a reactor scale. The purpose of the method is to provide, for
a certain quantity of interest S (S being a scalar output of the CFD
calculation performed at reactor scale), a value S5/95 that is smaller (or greater, depending on which is penalizing) than 95% of the possible values of S, with 95% confidence. The method makes use of
experimental results, and the confidence level is relative to the finite
number of experimental tests that are used. The method is based on
a comparison between experimental results and calculation results at
mock-up scale, in order to estimate a “code bias” which is then used
at reactor scale. The method also considers a propagation of the
random uncertainty on input data parameters (lack of information on
the initial and boundary conditions). The numerical parameters of the
CFD calculation are fixed by verification and validation on elementary
and integral test cases, and by expert judgment. This method which
follows all the Nuclear Reactor Safety analyses criteria will be
illustrated by a test case.
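For context only: the "95% of possible values with 95% confidence" target is the same coverage/confidence pair that, in the classical order-statistics (Wilks) approach widely used in nuclear safety analyses, dictates the familiar 59-run minimum when the bound is taken as the sample maximum. The EDF method described here is different, being built on a code bias estimated at mock-up scale, but the small sketch below shows where the generic 95/95 run count comes from.

    # Wilks' formula for a one-sided bound taken as the maximum of n runs:
    # the confidence that the max exceeds the `coverage` quantile is 1 - coverage**n.
    def wilks_confidence(n: int, coverage: float = 0.95) -> float:
        return 1.0 - coverage**n

    n = 1
    while wilks_confidence(n) < 0.95:
        n += 1
    print(n, wilks_confidence(n))   # -> 59 runs (confidence ~ 0.9515)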
Verification and Validation of Advanced Neutronics Design
Code System AEGIS/SCOPE2
V&V2013-2259
Satoshi Takeda, Tadashi Ushio, Yasunori Ohoka, Yasuhiro
Kodama, Kento Yamamoto, Masahiro Tatsumi, Masato
Tabuchi, Kotaro Sato, Nuclear Engineering, Ltd.
Recently, from the viewpoint of enhancing safety, verification and validation of code systems have become more important in the area of nuclear reactor physics. Nuclear Fuel Industries, Ltd. and Nuclear Engineering, Ltd. have developed AEGIS/SCOPE2, a state-of-the-art in-core nuclear fuel management system for PWRs, and verification and validation of AEGIS/SCOPE2 have been performed. AEGIS is an advanced lattice calculation code based on a 172-energy-group MOC calculation. Nuclear fuel assembly data files generated by AEGIS calculations are used for the SCOPE2 calculation. SCOPE2 is an advanced nuclear reactor core calculation code based on 9-energy-group SP3 transport theory in pin-by-pin fine-mesh geometry. Verification is conducted in order to confirm that each component treats the neutronics physics models correctly by comparison with reference solutions. Regarding the verification of both AEGIS and
SCOPE2, all of the functions have been verified step-by-step and point-by-point. In the verification of AEGIS, a continuous-energy Monte Carlo code was used to provide the reference solutions, and AEGIS results were found to be in good agreement with the Monte Carlo results. In the verification of SCOPE2, AEGIS results, which had been confirmed by comparison with Monte Carlo results, were used as the reference solutions. It was found that SCOPE2 was able to reproduce the AEGIS results. On the whole, verification of AEGIS/SCOPE2 was properly conducted by a stepwise process. Validation is conducted in order to confirm that the system represents the physical phenomena appropriately. Regarding the validation of AEGIS/SCOPE2, values predicted by AEGIS/SCOPE2 have been compared with measured values such as operating data of commercial PWRs, post-irradiation experiment data and critical experiments data. In the validation of AEGIS/SCOPE2, it was found that the predicted values were in good agreement with measured values such as criticalities, power distributions, inventories of irradiated fuels, reactivity coefficients and so on. Using the results of the verification and validation of AEGIS/SCOPE2, the application range of AEGIS/SCOPE2 is considered, and it covers the full operating range of commercial PWRs. We plan to use these verification and validation results in licensing, to apply AEGIS/SCOPE2 to advanced nuclear fuel design and reload core design of PWRs.
Shrinkage Porosity Prediction Simulation using Sierra
V&V2013-2263
Eric Sawyer, Tod Neidt, Jordan Lee, Terry Sheridan, Wesley Everhart, Honeywell FM&T, NNSA’s Kansas City Plant*
The investment casting process is often required to produce complex geometric shapes. These shapes usually require multiple iterations of gating geometry to achieve repeatable, defect-free parts for production. Considerable time and resources are spent during this trial-and-error period, which results in increased costs. A simulation is developed using Sierra to simulate the cooling of a part in a ceramic mold. The results of that
simulation are then processed to calculate a Niyama Criterion and
then finally porosity. Test castings are then poured to validate the
model. Once the model is validated, simulation results will aid in the
design of both the complex geometric shapes and the required
gating to help reduce defects and costs. *Operated for the United
States Department of Energy under Contract No. DE-NA0000622
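The post-processing step the abstract describes can be illustrated briefly: the Niyama criterion Ny = G / sqrt(dT/dt) is evaluated from the local thermal gradient G and cooling rate dT/dt near the end of solidification, and cells falling below a critical value are flagged as likely shrinkage-porosity sites. The field values and threshold below are assumed for illustration, not taken from the Sierra model.

    import numpy as np

    # Local thermal gradient (K/m) and cooling rate (K/s) from a thermal solve
    G = np.array([1200.0, 800.0, 300.0, 150.0])       # example cell values (assumed)
    cooling_rate = np.array([2.0, 1.5, 3.0, 4.0])     # example cell values (assumed)

    niyama = G / np.sqrt(cooling_rate)                # Niyama criterion per cell
    ny_crit = 300.0                                   # critical value (assumed)
    porous = niyama < ny_crit                         # flag likely porosity sites

    print(niyama.round(1))   # [848.5 653.2 173.2  75. ]
    print(porous)            # [False False  True  True]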
Formalizing and Codifying Uncertainty Quantification Based
Model Validation Procedures for Naval Shock Applications
V&V2013-2268
Kirubel Teferra, Michael Shields, Adam Hapij, Raymond
Daddazio, Weidlinger Associates, Inc.
There is a strong initiative to employ computational simulations to
optimize experimental testing plans as well as to serve as a surrogate
for physical testing. This is particularly the case in industries where
experimental testing is arduous and costly, such as shock
qualification requirements of on-board electronic equipment for naval
structures. For such problems, computational models involve
transient, nonlinear analysis of large-scale fluid-structure interaction
scenarios. The nature of the loading and the complicated multiphysics relations require high fidelity finite element models to
accurately capture the response quantities of interest (i.e. there is no
universally applicable reduced order modeling technique capable of
producing accurate results for such problems). Further, it is
recognized that significant uncertainties exist, such as model
parameter realizations (e.g. damping, stiffness, etc.), experimental
parameter realizations (e.g. position and orientation of loading with
respect to the ship), gage recording errors, and modeling errors (e.g.
constitutive laws, kinematic presumptions, discretization errors, etc),
which thus produce uncertainty in the computed response. The
extremely large computational costs as well as extremely limited test
data associated with this problem preclude the use of traditional
brute-force Monte Carlo simulation to propagate uncertainty. Instead
propagation techniques tailored to this specific problem have been
developed. Model validation procedures involve the following
phases: calibration, design of experiment, uncertainty propagation,
and assessment. This work mainly focuses on uncertainty
propagation and assessment. Uncertainty propagation is executed
via two different optimal Monte Carlo techniques: the Polynomial
Chaos Expansion (PCE) and a novel refined stratified sampling
technique coupled with the bootstrap Monte Carlo method. These
two competing techniques each offer unique advantages and
disadvantages. Model assessment is conducted by evaluating the
statistics of an ensemble of validation metrics, which are quantitative
comparisons of computed and experimental response quantities.
One of the key aims of this work is to automate the model validation
process by developing a ‘black box’ approach so that users (i.e.
analysts) rather than developers (i.e. researchers) are able to perform
the model validation process. Weidlinger is developing a suite of
packages for model validation under its in-house software QUIPS
(Quantification of Uncertainty using Informed Probabilistic
Simulation). In this presentation, the uncertainty propagation and model validity assessment techniques developed for large-scale simulations are described. Then an overview of the vision for QUIPS, along with a description of specific features, is given.
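One ingredient of the described workflow, bootstrap resampling of a validation metric, can be sketched compactly; the snippet estimates a confidence interval on the difference between mean computed and mean measured peak responses. The data are synthetic stand-ins, and the abstract's PCE and refined stratified sampling machinery is not reproduced.

    import numpy as np

    rng = np.random.default_rng(7)
    measured = rng.normal(100.0, 8.0, size=12)    # sparse test data (synthetic)
    computed = rng.normal(104.0, 6.0, size=200)   # model realizations (synthetic)

    # Bootstrap the validation metric: difference of means
    boot = np.empty(5000)
    for b in range(boot.size):
        m = rng.choice(measured, size=measured.size, replace=True)
        c = rng.choice(computed, size=computed.size, replace=True)
        boot[b] = c.mean() - m.mean()

    lo, hi = np.percentile(boot, [2.5, 97.5])
    metric = computed.mean() - measured.mean()
    print(f"metric = {metric:.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")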
Variation Studies on Analyst and Solver Settings Contribution
to CFD Analysis Uncertainty
V&V2013-2265
Daniel Eilers, Fisher Controls International LLC
Validation of a simulation procedure is often performed on a subset
of examples and by a subset of the analysts who perform the
simulation. Using a subset of the analysts can create an artificial
control over variation caused by analysts. A study has been
performed where a group of analysts have been asked to perform
the same simulation with defined inputs. Thus the uncertainty in the
geometry and boundary conditions has been removed. Without
clearly established procedures, a group of competent analysts is able to produce a large set of variations in the generated mesh and solver settings. A subset of the range of solver settings chosen by individual analysts in the group was applied to each mesh and solved. The variation in the results has been quantified to provide a guideline on the output uncertainty caused by analyst choices, an uncertainty that
is in addition to any uncertainty in the simulations and the modeling
of the physical system. The results have been used to define the
specifications for simulation setup, training, and expected range of
variation. This presentation will describe the variability studies
performed on CFD analysis regarding the flow capacity of a control
valve.
Application of a Multivariate Approach to Quantify Validation
Uncertainty and Model Comparison Error for Industrial CFD
Applications: Multi-Phase Liquid-Solids Mixing in Pulse-Jet
Mixing Vessels
V&V2013-2266
Michael Kinzel, Leonard Peltier, Andri Rizhakov, Jonathan
Berkoe, Kelly Knight, Bechtel National Inc., Brigette
Rosendall, Mallory Elbert, Kellie Evon, Bechtel National
Advanced Simulation & Analysis
To qualify a multiphase computational fluid dynamics (CFD) model as
an industrial design tool, a quantification of the goodness of that
model for its intended purpose is required. If experimental data exist
for a representative configuration, the goodness of the model may be
quantified via the verification and validation (V&V) techniques outlined
in the ASME V&V20-2009 Standard. The V&V metrics are the model
comparison error and the validation uncertainty for a variable of
interest. If representative experimental data are not available, as is
often the case, a method is needed that can extend validation
metrics from a validation database to the conditions of the
application. This work considers use of a Sandia multivariate
approach to extend validation metrics from a static validation library
to estimate model comparison error and uncertainty for multiphase
liquid-solids mixing in a pulse-jet mixing vessel. Experimental data
exist for the application allowing model comparison error and
validation uncertainty to be computed directly. A comparison of the
results establishes proof of concept. By limiting the validation cases
to a subset of the validation test suite, the multivariate method is
shown to objectively indicate when the validation cases are
insufficient to resolve the physics of the application.
Verification of Test-based Identified Simulation Models
V&V2013-2269
Alexander Schwarz, Matthias Behrendt, Karlsruhe Institute of Technology (KIT); Albert Albers, IPEK at Karlsruhe Institute of Technology (KIT)
Technical systems must be continuously improved so that they can remain competitive in the market. Time-to-market is also an important factor for the success of a product, so new methods and processes are needed to achieve this aim. Test-based optimization, in particular, is an important phase in the development process. In [1], a method is shown which helps to reduce the time required for test-based optimization. The basic idea is to use measured test data to parameterize a physical (or mostly physical) model structure, creating adequate models for the optimization. The main advantage of the method is the reduction of test effort, because the number of variations of the design parameters is one, or greatly reduced (depending on the system). In the example of the
gearshift quality evaluation of a vehicle with a 7-speed double clutch gearbox, this method saves up to 80% of the time on the testbed. One important task within the method is to ensure sufficient model quality; this contribution therefore shows methods for doing so. Furthermore, this contribution shows a method to ensure that the model behaves physically correctly, using the example of gearshift quality [2]. Advanced methods for rating drivability and for automatic identification of NVH phenomena [3] are used to optimize multi-criteria objective functions. The measurement is done in the IPEK X-in-the-Loop Framework (XiL) [4], which enables an assessment at the complete vehicle level. References: [1] Albers, A., Schwarz, A., Behrendt, M., & Hettel, R. (2012). Time-efficient method for test-based optimization of technical systems using physical models. In ASME 2012 International Mechanical Engineering Congress & Exposition. [2] Albers, A., Schwarz, A., Hettel, R., & Behrendt, M. (2012). An operating system for the optimization of technical systems using the example of transmission calibration. FISITA 2012. [3] Albers, A., & Schwarz, A. (2011). Automated identification of NVH phenomena in vehicles. SAE International. [4] Albers, A., & Düser, T. (2009). Integration von Simulation und Test am Beispiel Vehicle-in-the-Loop auf dem Rollenprüfstand und im Fahrversuch.
Evaluation of Material Surveillance Measurements used In
Fitness-For-Service Assessments of Zr-2.5%Nb Pressure
Tubes in CANDU Nuclear Reactors
V&V2013-2271
Leonid Gutkin, Douglas Scarth, Kinectrics Inc.
The CANDU reactor core consists of a lattice of several hundred
horizontal fuel channels passing through a calandria vessel
containing heavy water moderator. Each fuel channel includes a
pressurized vessel – a cold worked Zr-2.5%Nb pressure tube – with
nuclear fuel in contact with heavy water coolant. Due to continued
neutron irradiation and hydrogen ingress in the course of reactor
operation, the pressure tube material undergoes degradation with
time in service. The evaluation of such degradation is an integral part
of CANDU fitness-for-service assessments, stipulated by and
performed according to Canadian nuclear standards. In such
assessments, the degradation in pressure tube material properties is
usually captured by means of representative predictive models with
probabilistic capabilities, which are validated using material
surveillance measurements periodically obtained from ex-service
pressure tubes. A new approach to the evaluation of material
surveillance measurements and validation of probabilistic predictive
models for a number of key pressure tube material properties has
recently been developed and implemented in the Canadian nuclear
standard CSA N285.8-10. This approach utilizes probabilistic
metrics, which are defined as the estimated probabilities of obtaining
surveillance values of a given magnitude for the material properties of
interest outside their relevant reference bounds. If the calculated
values of these metrics do not exceed the specified probabilistic
acceptance criteria, the representative predictive models are
considered validated. Alternatively, the representative predictive
models require revision. This presentation describes the most
important aspects of this methodology and illustrates how it has
been applied to the evaluation of the surveillance measurements for
the growth rate of postulated delayed hydride cracks in the irradiated
Zr-2.5%Nb. Two application examples are provided. One example
represents the case when the existing predictive model could not be
validated and had to be revised to include additional predictor
variables. Another example refers to the case when the
representative predictive model was successfully validated.
The Generic Methodology for Verification and Validation
(GM-VV): An Overview and Applications of this New
SISO/NATO standard
V&V2013-2270
Manfred Roza, National Aerospace Laboratory NLR, Jeroen
Voogd, TNO
With the increasing reliance on modeling and simulation (M&S) in system analysis, design, development, test and evaluation, and training, it becomes imperative to perform systematic and robust verification and validation (V&V). However, the V&V method that
works best in a given situation depends on the individual needs and
constraints of an M&S organization, project, application domain or
technology. Therefore, there exist many different approaches to V&V
that rely on a wide variety of concepts, methods, products,
processes, tools or techniques. In many cases the resulting
proliferation restricts or even impedes the transition of V&V results
from one M&S organization, project, and technology or application
domain to another. This context was the key driver behind the
development of the Generic Methodology for Verification and
Validation (GM-VV) as a standard recommended practice for V&V
under the umbrella of the Simulation Interoperability Standards
Organization (SISO) and NATO-STO Modeling and Simulation Group
(NMSG). The GM-VV provides a generic and comprehensive
methodology for structuring, organizing and managing the V&V to
support the acceptance of models, simulations and data. Its
technical framework design is based on a reference architecture
approach for V&V that builds upon existing V&V methods and other
related practices. The starting point for its design is that V&V should
effectively and efficiently provide insight in the M&S technology
development and utilization correctness, its replicative and
predictive capabilities with respect to the real world (i.e., fidelity), and
the associated uncertainties; knowledge necessary to support the
(acceptance) decision-making throughout the whole M&S life-cycle.
Therefore the GM-VV centers around an evidence-based
argumentation network and (use) risk-driven approach to develop
V&V plans, experiments and cases (e.g. evidence-argument-claim).
Moreover, the GM-VV technical framework provides a generic
architectural template and building blocks for the development of
structured V&V processes tailored to the individual M&S organization,
project, and technology or application domain needs. As such the
GM-VV also allows V&V practitioners to integrate their domain
specific V&V methods and techniques within its generic architectural
template. The first of the three volumes that comprise the GM-VV
standard recommended practice has been approved by the SISO Standards Activity Committee, and the second volume is currently being balloted. The recently established Dutch V&V expertise center ‘Qtility’ has conducted successful case studies with the GM-VV for the Dutch Royal Navy, Air Force, and Military Police. Within this presentation we give an overview of the GM-VV, practical illustrations, and lessons learned from our three case studies.
Evaluation of the Effect of Subject Matter Expert Estimated
Mean and Standard Deviation for a Deterministic Testing
Environment
V&V2013-2272
David Moorcroft, Federal Aviation Administration
One of the principal tenets of Verification and Validation (V&V) is the
quantification of the uncertainty in the system. Uncertainty is present
in both the experimental data, in the form of random and systematic
errors, and simulation data, in the form of irreducible and reducible
uncertainty. In fields that require testing to determine compliance, the
requirements are often deterministic. As such, data that are mined to
evaluate top-level system models do not provide uncertainty
quantification. American Society of Mechanical Engineers standard
V&V 10.1-2012 illustrates an example to bridge the gap between a
deterministic testing environment and the need for uncertainty
quantification. Subject matter experts (SMEs) are used to estimate
the range of system response quantities that include all practical
results. The measured system response quantity is assumed to be
the mean, a normal distribution is assumed, and the standard
deviation is set to one-sixth the SME estimated range. When an
aircraft crashes, lumbar spine injuries can occur due to the vertical
forces transmitted to the occupant. Aircraft seats are designed to
mitigate these forces and minimize injury. For a seat to be approved
for use, the load measured in the lower spine of an Anthropomorphic Test Device (ATD) must be below 1500 lb during a dynamic crash test.
Repeated testing of seat cushions shows a typical variance of about
250 lb when testing parameters are tightly controlled. Using this
information, three metrics are calculated for various test-simulation
pairs: the relative error on the mean, the area between the estimated
cumulative distribution functions (called the area metric), and the
error between the experiment and model that has a 90% probability
of not being exceeded. The value of the sample mean, based on the
experimental result and estimated interval, is varied to explore the
range of calculated error. Once a model has been deemed
acceptable for the intended use, it is used to predict the performance
of the system versus the defined pass/fail criteria. Because of the
deterministic nature of aircraft seat testing, it is useful to use a factor
of safety on the predictions. Using the SME estimated uncertainty, a
factor of safety is defined such that there is a 95% probability the
injury criteria will not be exceeded. The factor of safety for the lumbar
load case with a 250 lb uncertainty range yields a maximum
allowable simulated lumbar load of 1430 lb.
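The arithmetic behind the quoted factor of safety can be sketched as follows (assuming a one-sided 95% normal bound; only the 1500 lb limit and the 250 lb range come from the abstract):

    from scipy.stats import norm

    sme_range = 250.0            # SME-estimated uncertainty range, lb
    sigma = sme_range / 6.0      # standard deviation = one-sixth of the range
    limit = 1500.0               # lumbar load injury criterion, lb

    # maximum allowable simulated load so that the criterion is
    # exceeded with at most 5% probability
    allowable = limit - norm.ppf(0.95) * sigma
    print(allowable)             # ~1431 lb, consistent with the quoted 1430 lb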
Classification Systems for V&V Benchmark Data
V&V2013-2274
V. Gregory Weirs, Sandia National Laboratories
While many engineers and scientists are familiar with the ideas of
code verification and validation, there are many practical obstacles to
successful V&V activities. By successful, we mean that a conclusive
statement is obtained as a result of the code verification study or
validation effort. The practical obstacle that is the focus of this talk is
the lack of sufficient information about the reference solution, i.e., the
exact solution for code verification and the experimental data for
validation. For example, validation experiments are usually
documented in journal articles and technical reports. This format has
worked well for discovery experiments, which emphasize a novel
behavior and may argue a plausible explanation. But a journal article,
for good reasons, includes few of the details about the experiment
that a computational scientist needs to set up and run a simulation
for validation purposes. A number of assumptions must be made
about the experimental setup, boundary conditions, materials, etc.,
that ultimately undermine a quantitative comparison. In this talk, we
present a classification and ranking system to describe the suitability
of an experiment for use in validating a model. The system consists
of levels in various categories, or attributes; providing more complete
information about a particular attribute achieves a higher rating or
level. The intent of these attributes and levels is to provide a tool for
communication and discussion between experimentalists and
computational analysts, and to stimulate much more extensive
capture, retention, and management of information generated by
experiments. A second system is defined with attributes appropriate
for code verification test problems. We will discuss the motivation for
the different attributes and levels and initial experiences in trying to
apply them, as well as avenues for improvement. Sandia National
Laboratories is a multi-program laboratory managed and operated by
Sandia Corporation, a wholly owned subsidiary of Lockheed Martin
Corporation, for the U.S. Department of Energy’s National Nuclear
Security Administration under Contract DE-AC04-94AL85000.
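A toy sketch of how such a classification might be represented; the attribute names and scoring rule are invented for illustration and are not the ones proposed in the talk:

    # levels achieved by one hypothetical validation experiment
    experiment = {
        "geometry_documentation": 3,
        "boundary_conditions": 2,
        "material_characterization": 1,
        "measurement_uncertainty": 2,
    }

    # one possible overall suitability score: the weakest attribute
    # dominates, since a single undocumented aspect can undermine
    # a quantitative comparison
    overall_level = min(experiment.values())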
Proposal for the 2014 Verification and Validation Challenge
Workshop
V&V2013-2273
V. Gregory Weirs, Kenneth Hu, Sandia National Laboratories
In recent years, Verification and Validation (V&V) has been attracting
more interest and resources, as evidenced by the ASME V&V
Symposium itself. In reviewing the submissions from the 2013
symposium, we were encouraged by the wide range of methods and
ideas being applied. V&V is a complex process because so many
different skills and ideas are required to develop a comprehensive
understanding of simulation credibility, including: modeling, data
processing, calibration, UQ, verification, probability & statistics,
validation, decision theory, etc. A great deal of progress has been
made in each of these areas individually, however, it is not obvious
how to combine the results these different areas in a systematic and
useful way. It is difficult to compare integrated V&V processes since
the relative merits of alternative processes are dependent on the
specific context of the simulations. In this situation, case studies
provide an avenue to develop intuition and experience that may
eventually lead to more general conclusions. To this end, we propose
a V&V Challenge Workshop, in the same vein as the workshops in
2002 [1] and 2006 [2]. The workshop will be formally announced in
the fall of 2013 and will be held in the summer of 2014. In this talk, we
will present the rationale for such a workshop, introduce a draft
version of the challenge problem, and explain the V&V concepts that
the problem is intended to cover. We hope to stimulate discussion of
the problem, identify issues that concern V&V practitioners, and seek
feedback on the problem and workshop format. We also hope to
generate enthusiasm for the workshop and attract participants. References [1]
Ferson, S., Joslyn, C. A., Helton, J. C., Oberkampf, W. L., & Sentz, K.
(2004). Summary from the epistemic uncertainty workshop:
consensus amid diversity. Reliability Engineering & System Safety,
85(1-3), 355-369. [2] Dowding, K. J., Red-Horse, J. R., Paez, T. L.,
Babuska, I. M., Hills, R. G., & Tempone, R. (2008). Validation
challenge workshop summary. Computer Methods in Applied
Mechanics and Engineering, 197(29-32), 2381 - 2384. Sandia
National Laboratories is a multi-program laboratory managed and
operated by Sandia Corporation, a wholly owned subsidiary of
Lockheed Martin Corporation, for the U.S. Department of Energy’s
National Nuclear Security Administration under Contract DE-AC04-94AL85000.
Structural Mechanics Validation Application with Variable
Loading and High Strain Gradient Fields
V&V2013-2275
Nathan Spencer, Myles Lam, Sharlotte Kramer, Brian
Damkroger, Sandia National Laboratories
Finite element analysis is used to demonstrate that interface control
document specifications are being met in the design of new tooling
for a specialized disassembly procedure involving existing hardware
and non-replaceable assets. Validation tests are conducted to build
credibility in the model results, reduce the risk of relying solely on
numerical simulations, and develop operational procedures for the
new hardware. The validation experiments are guided by insight from
the models which help locate key areas for strain gage placement.
Challenges in the validation effort include comparing spatially
equivalent values in the presence of a high strain gradient field and
accounting for variations due to the lack of precision in applying
loads at multiple points during the disassembly procedure. The
implications of both of these issues are explored through an
uncertainty quantification study before the validation testing to
provide a band of anticipated strain values. Post test results are
compared to the model predictions for an initial assessment of the
model’s accuracy for the intended application. The model is then
refined based on the actual observed strain gage placement and the
calculations are repeated with the measured loading conditions for a
more accurate comparison between the physical and numerical
experiments.
On Stochastic Model Interpolation and Extrapolation Methods
for Robust Design
V&V2013-2276
Zhenfei Zhan, University of Michigan, Yan Fu, Ford, Ren-jye
Yang, Ford Research & Advanced Engineering
Computer-Aided Engineering (CAE) has become a vital tool in industry. In the automotive industry, Finite Element (FE) models are widely used in vehicle design. Even with the increasing speed of computers, the simulation of high-fidelity FE models is still too time-consuming to perform direct design optimization. As a result, response surface models (RSMs) are commonly used as surrogates of the FE models to reduce the turn-around time. However, RSMs may introduce additional sources of uncertainty, such as model bias. The uncertainty and model bias will affect the trustworthiness of design decisions in design processes. This calls for the development of stochastic model interpolation and extrapolation methods that can address the discrepancy between the RSM and the FE results, and provide prediction intervals (PIs) of model responses under uncertainty. This research proposes a Bayesian inference-based model interpolation and extrapolation method for robust design. The method takes advantage of Bayesian inference and RSMs. The process starts with a design of experiments (DOE) matrix for validation in the design space, followed by repeated physical tests and CAE simulations. The differences between test and CAE data are then calculated as evidence for the Bayesian updating of the hyper-parameters of the bias distribution. After obtaining the prior information, the posterior distributions of the prediction bias hyper-parameters are calculated, and the bias distribution at each design point is obtained. Two RSMs are built for the mean and standard deviation of the prediction bias. The PIs of the output at new designs are calculated using the CAE results. Confirmation tests at new designs are used to validate the prediction results. The prediction is considered successful if the test results of new designs are within the corresponding PIs. Based on the results, the decision maker then decides to accept or reject the stochastic RSM models. If accepted, they are passed to the downstream engineers for design optimization or robust design. If the predictions are rejected, additional DOE points are added and the process is repeated until good-quality stochastic RSM models are obtained. In order to investigate the behavior of the proposed stochastic interpolation and extrapolation method for robust design, three validation metrics, including the area metric, reliability metric, and Bayesian confidence metric, are used to evaluate the prediction results at different discrete design points. The area metric quantitatively measures the discrepancy between two distributions, and it has two important characteristics. First, it measures the differences between the entire distributions from the experiments and model predictions. Second, it can be used when only limited data points from experiments or model predictions are available. The reliability metric selects the observed difference between physical test and model simulation results as the validation feature. It is defined as whether the probability of the difference distribution falling within a predefined interval achieves the confidence requirement. Note that the physical test and model simulation results can be either univariate or multivariate. The Bayesian confidence metric uses an interval-based Bayesian hypothesis testing method to quantitatively assess the CAE model quality. The proposed method is applied to a vehicle design example. A vehicle model from the National Crash Analysis Center (NCAC) was used for this study. Eight members of the front-end structure are chosen as the design variables, and two performance measures in a full frontal impact, chest G and crush distance, are used. The design objective is to find the robust design that minimizes the mean of weight through gauge optimization while satisfying the design requirements on chest G and crush distance.
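A minimal sketch of the area metric mentioned above, computed between two empirical CDFs (the sample data are hypothetical):

    import numpy as np

    def area_metric(model, test):
        """Area between the empirical CDFs of two samples."""
        xs = np.sort(np.concatenate([model, test]))
        Fm = np.searchsorted(np.sort(model), xs, side="right") / len(model)
        Ft = np.searchsorted(np.sort(test), xs, side="right") / len(test)
        # exact integral of |Fm - Ft|, since both CDFs are step functions
        return np.sum(np.abs(Fm[:-1] - Ft[:-1]) * np.diff(xs))

    rng = np.random.default_rng(0)
    print(area_metric(rng.normal(50.0, 4.0, 30), rng.normal(52.0, 5.0, 12)))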
Spherical Particles in a Low Reynolds Number Flow: A V&V Exercise
V&V2013-2281
Nima Fathi, Peter Vorobieff, University of New Mexico
The aim of this investigation is the validation and verification of solid particle interaction in low Reynolds number fluid flows. We evaluate the accuracy and reliability of the discrete phase element method (DPM) for one- and two-solid-particle interaction in a highly viscous flow in both cylindrical and rectangular Couette flow. The computational results based on ANSYS/FLUENT 14.0 are validated against experimental data. Unsteady DPM CFD simulations are performed to match the experimental setups. Particle trajectories in cylindrical and rectangular domains are studied. In the experiment, spherical particles are suspended in a gravity-stratified Newtonian fluid, thus limiting their motion to a single plane. Their predominantly two-dimensional motion is driven by moving belts (or a rotating cylinder) that produce shear in the fluid. Several parameters of solid particle motion at a low Reynolds number are validated and verified. The uncertainty analysis for both numerical and experimental data is performed.
Validation Based on a Probabilistic Measure of Response
V&V2013-2282
Daniel Segalman, Lara Bauman, Sandia National
Laboratories, Thomas Paez, Thomas Paez Consulting
Validation is defined as “the degree to which a model is an accurate
representation of the real world from the perspective of the intended
uses of the model.” Its purpose is to establish whether or not a
system model satisfactorily simulates the behavior of a physical
system within a pre-established region of input/response/physical
characteristics space. In order to make a judgment regarding
adequacy of a model to simulate the behavior of a physical system, a
measure of behavior or measure of response must be defined and
computed; this is the quantity of interest (QoI). During a validation,
the QoI is computed using experimental results obtained during
validation experiments involving the physical system, and it is
computed using the results of model predictions obtained during
simulations of the validation experiments. When the model-predicted
QoI is satisfactorily close to the experimentally-obtained QoI, then
the model is judged adequate (or accurate) and valid. Otherwise, the
model is not judged valid. There are few restrictions on what the QoI
can be. For example, when a finite element model of a mechanical
structure is meant to predict multiple peak accelerations on a
physical structure excited by a dynamic load, multiple QoIs might be
defined as those peak accelerations. When a finite element model of
a mechanical structure is meant to predict the static deformation
along a component of a structure, the QoI might be a functional
approximation to the deformation function. However, a system model
is sometimes meant to predict structural reliability; in that case the
QoI may be defined as probability of failure. This presentation defines
a reliability-like measure of system behavior – the probability of
exceeding margin (PEM). We show that the PEM can serve as the
QoI for a model validation. We form a validation metric using the
experimental QoI and the model-predicted QoI, and we present an
example validation involving experimental, nonlinear structural
response, a linear model for the experimental system, and the PEMs
of their responses.
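A minimal Monte Carlo sketch of the probability-of-exceeding-margin idea (the distributions and margin are hypothetical):

    import numpy as np

    def pem(peak_responses, margin):
        """Probability of exceeding margin, estimated from samples."""
        return np.mean(peak_responses > margin)

    rng = np.random.default_rng(1)
    experiment = rng.normal(9.0, 1.5, 2000)   # measured peak responses
    model = rng.normal(8.6, 1.2, 2000)        # predicted peak responses
    margin = 12.0

    # a simple validation metric built from the two PEMs
    metric = abs(pem(experiment, margin) - pem(model, margin))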
Numerical Estimation of Strain Gauge Measurement
Uncertainties
V&V2013-2283
Jérémy Arpin-Pont, S. Antoine Tahan, École de Technologie
Supérieure, Martin Gagnon, Denis Thibault, Research Center
of Hydro-Québec (IREQ), Andre Coutu, Andritz Hydro Ltd
Errors and uncertainties in experimental measurements are
unavoidable. Such factors significantly influence the data obtained
and have to be estimated before attempting a comparison with
numerical or analytical models. We worked specifically on the errors
and uncertainties of strain gauge measurements made on large
rotating structures such as hydroelectric turbine runners. We
developed a numerical model which evaluates the strains that should
be measured by simulating virtual gauges in the Finite Element (FE)
models of the structures, accounting for specific measurement
uncertainties such as gauge dimensions, integration effect,
transverse sensitivity and gauge positioning. Indeed, when a user installs a gauge at a target location, errors and uncertainties are
introduced along the longitudinal and transversal axes of the gauge,
and on the alignment angle. In our methodology, these uncertainties
are introduced using Monte Carlo simulations in which a large number of virtual gauges are randomly simulated on the FE model
over a representative area around the target location. Then, the
numerical model estimates the strain measured by each of the
generated gauges. From these estimated strains, a strain distribution
is derived and allows the measurement biases and uncertainties to
be characterized. We validated our model using experimental
measurements on rectangular specimens for which strains are
analytically and numerically well known. Our aim was to verify that
the model was able to correctly estimate the strains measured given
accurate FE models verified using analytical methods. The analysis of
the correlation between the predicted strains and the experimental
measurements confirmed that the model is able to produce
acceptable estimates. The model was able to predict the biases
observed between measurements and analytical/FE strains. Next,
the model was applied to a more complex structure. Using our
model, the differences between FE analyses and strain gauge
measurements on a Francis hydraulic turbine runner blade were
analysed. In most of the cases, the biases and uncertainties resulting
from the simulation results can explain the observed differences and
enable cross-validations of the FE models with the measurements.
Our model enables the comparison of analytical, numerical and
experimental data in order to do cross-validations or calibrations in
both laboratory experiments and actual field measurements. Our
model can also be used to optimize the measurements in order to
reduce the biases and uncertainties. Please note that even though this method was initially developed specifically for strain gauges, the methodology could be used for other types of sensors.
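A much-simplified sketch of the virtual-gauge Monte Carlo idea: perturb the gauge position and alignment, average the strain field over the gauge area to mimic the integration effect, and collect the resulting distribution (the strain field and uncertainty magnitudes are invented; transverse sensitivity is omitted for brevity):

    import numpy as np

    rng = np.random.default_rng(42)

    def strain_field(x, y):
        # stand-in for strains interpolated from a verified FE model
        return 1e-3 * (1.0 + 0.05 * x + 0.002 * y**2)

    def virtual_gauge(x0, y0, angle, length=5.0, width=2.0, n=20):
        """Average the strain field over the gauge grid to mimic
        the integration effect of a finite gauge area."""
        s = np.linspace(-length / 2, length / 2, n)
        t = np.linspace(-width / 2, width / 2, n)
        S, T = np.meshgrid(s, t)
        X = x0 + S * np.cos(angle) - T * np.sin(angle)
        Y = y0 + S * np.sin(angle) + T * np.cos(angle)
        return strain_field(X, Y).mean()

    # Monte Carlo over positioning and alignment uncertainty
    x0, y0 = 10.0, 5.0                     # target location
    samples = [virtual_gauge(x0 + rng.normal(0.0, 1.0),
                             y0 + rng.normal(0.0, 1.0),
                             rng.normal(0.0, np.radians(2.0)))
               for _ in range(5000)]
    bias = np.mean(samples) - strain_field(x0, y0)
    uncertainty = np.std(samples)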
Impact of Numerical Algorithms on Verification Assessment of a Lagrangian Hydrodynamics Code
V&V2013-2285
Scott Doebling, Scott Ramsey, Los Alamos National Laboratory
This research studies the impact of numerical model and parameter
choices on the solution accuracy of a Lagrangian hydrodynamics
code. The numerical models studied include artificial viscosity, the
arbitrary Lagrangian-Eulerian (ALE) method, mesh stabilizers and
smoothers, and mesh resolution. To study the impact of these
numerical model choices, a set of classical hydrodynamics
verification test problems will be used. These problems are chosen
because they capture much of the physics of interest in the
application of the hydrocode, and they have well accepted and
readily available analytical solutions. The availability of the analytical
solutions is a key aspect of this work, as it enables the analyst to
quantitatively confirm the impact of the numerical models on solution
accuracy, rather than merely observing the sensitivity of the solution
as the numerical models are changed. The following test problems may be included in this research: the Sedov problem (idealized point explosion in an ideal gas); the Guderley problem (idealized implosion/shock reflection in an ideal gas); the pressurized gas ball problem (uniform, stationary gas ball with a constant-pressure boundary); and the Kidder problem (shock-free contraction and expansion of an isothermal ideal gas). The impact of the choices of numerical models and parameter values will be assessed by first selecting a set of outputs from the test problems that are of interest for verification assessment (e.g., for a shock problem, shock arrival time and peak shock magnitude). The variation of these features and their
convergence rates will be observed as the numerical models and
parameters are varied, and will be compared to the exact feature
values known from the analytical solution. The results will
demonstrate which numerical models and parameters have the
largest influence on the accuracy of the problem solution given the
other settings and model choices in the hydrocode. This work
supports assessments of errors arising from numerical model and
parameter choices, for the prediction of experimental shock physics
results (such as a plate impact experiment). The desire is to use the
hydrodynamic test problems to gain insight into the influence of the
numerical choices and then to extend that insight to the predictions
of experimental results.
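For instance, with an analytical solution in hand, the observed convergence rate of any such output follows directly from three grids (the numbers below are hypothetical):

    import numpy as np

    def observed_order(f_coarse, f_medium, f_fine, r):
        """Observed order of accuracy from three solutions on grids
        with a constant refinement ratio r."""
        return np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)

    exact = 1.000                       # shock arrival time from the
    f = [1.032, 1.008, 1.002]           # analytical solution; computed
    p_hat = observed_order(*f, r=2.0)   # values on coarse/medium/fine grids
    errors = [abs(v - exact) for v in f]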
Manufactured Solutions for the Three-dimensional Euler
Equations with Relevance to ICF
V&V2013-2284
Thomas R. Canfield, Nathaniel R. Morgan, L. Dean Risinger, Jacob Waltz, John G. Wohlbier, Los Alamos National Laboratory
National Laboratory
We present a set of manufactured solutions for the three-dimensional
(3D) Euler equations. The purpose of these solutions is to allow for
code verification against true 3D flows with physical relevance, as
opposed to 3D simulations of lower-dimensional problems or
manufactured solutions that lack physical relevance. Of particular
interest are solutions with relevance to Inertial Confinement Fusion
(ICF) capsules. While ICF capsules are designed for spherical
symmetry, they are hypothesized to become highly 3D at late time
due to phenomena such as Rayleigh-Taylor instability, drive
asymmetry, and vortex decay. ICF capsules also involve highly
nonlinear coupling between the fluid dynamics and other physics, such as radiation transport and thermonuclear fusion. The manufactured solutions we present are specifically designed to test the terms and couplings in the Euler equations that are relevant to these phenomena. Example numerical results generated with a 3D Finite Element hydrodynamics code are presented, including mesh convergence studies.
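The mechanics of the method can be sketched in one dimension (the manufactured state below is hypothetical and much simpler than the 3D solutions presented):

    import sympy as sp

    x, t = sp.symbols("x t")
    gamma = sp.Rational(7, 5)

    # a smooth manufactured state (invented for illustration)
    rho = 2 + sp.sin(x - t)
    u = sp.exp(-t) * sp.cos(x)
    p = 3 + sp.cos(2 * x - t)
    E = p / (gamma - 1) + rho * u**2 / 2     # total energy density

    # residuals of the 1D Euler equations become the source terms
    # that force the code to reproduce the manufactured state
    S_mass = sp.diff(rho, t) + sp.diff(rho * u, x)
    S_mom = sp.diff(rho * u, t) + sp.diff(rho * u**2 + p, x)
    S_energy = sp.diff(E, t) + sp.diff((E + p) * u, x)

    print(sp.simplify(S_mass))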
Model-Based Surgical Outcome Prediction, Verification and Validation of Rhinoplasty
V&V2013-2286
Mohammad Rahman, Ehren Biglari, Yusheng Feng, University of Texas at San Antonio, Christian Stallworth, University of Texas Health Science Center at San Antonio
Prior to rhinoplasty, a physician’s discussion with the patient must reconcile the patient’s wishes with the surgical reality. Rhinoplasty simulation can
serve as an important tool in this communication and allows both
surgeons and patients to conduct collaborative treatment planning
and predict possible outcome from the surgical procedure. The
structure of the nasal skeleton, including the bone and cartilage,
plays a significant role in determining the shape of the nose. The
objective in this work is to create mathematical and computational
models that can be used to simulate actual surgery and predict the
results based on the theories from elasto-mechanics, as well as the
well-established tripod theory in plastic surgery. Based on the
imaging from medical handbooks and surgeons’ feedback, we have
developed a full geometric model using SolidWorks that consists of
nasal cartilages and part of the nasal bone. A finite element model is
generated by extracting some parts of this full geometric model and
incorporating ligaments into the model. To predict the final outcome
of the surgery, structural analysis is performed using an open source
C++ library (OFELI). The results are verified using a commercial multiphysics solver and visualization program (COMSOL) and validated
using the data obtained from the literature and surgical experiments.
In this presentation, we will demonstrate that the simulation results
agree with typical surgery outcomes where material is removed from
the ends of lower lateral cartilage or the bottom of the medial crura.
Using OFELI, the simulation results can be obtained in real time. The results are being verified using COMSOL within a specified numerical error tolerance. The results are also in accordance with those found in the literature and surgical experiments. Full-scale validation will follow once patient-specific images become available. The advances in real-time simulation hold great promise for further improvements in the way this technology is used in clinics.
The future development of a customizable rhinoplasty simulation
model could benefit the patients greatly by predicting the surgical
outcome in real time. In addition, a mathematical model based on growth mechanics is under development, which will permit quantitative prognosis for both surgeons and patients over a timescale of months.
Use of Information-Theoretic Criteria in Smoothing Full-Field
Displacement Data
V&V2013-2305
Sankara Subramanian, Indian Institute of Technology Madras
In recent works by the author and co-workers, Principal
Components Analysis (PCA) has been used to achieve reduction of
noise and redundancy in full-field displacement data. It is well
understood that dominant principal components (PCs) or
eigenfunctions represent most of the variance in the measured data
and that the other eigenfunctions contain mostly noise. By
reconstructing the original measurements from just p dominant PCs,
one achieves the twin objectives of noise and dimensionality
reduction. A key issue in the success of this approach is the
identification of p, the number of principal components to be
retained. In the PCA literature, this problem is well-studied but most
approaches proposed are ad hoc rules of thumb rather than exact
formulae. The ‘scree’ plot technique, based on the spectrum of
singular values, works very well for some synthetic displacement
data, but is not so successful on real experimental data. Another
commonly used technique is to fix the reconstruction error a priori
and retain the smallest number of dominant PCs that yield this
desired accuracy. While this method is easy to implement, it is not
clear as to what the error threshold must be for DIC data, since often
it is not just displacement data that are of interest, but also strains,
which are functions of displacement gradients. In this work, an
objective information-theoretic approach is proposed to identify p.
The corrected Akaike Information Criterion (AICc) and Bayesian
Information Criterion (BIC) are simple formulae widely used in
various fields of engineering to strike a balance between the number
of parameters used in a model and the likelihood of the parameters.
As the number of parameters increases, the quality of the fit
increases, and so does the likelihood. However, so does the danger
of overfitting and both AICc and BIC penalize overfitting, with the
difference being that BIC penalizes larger number of parameters
more severely than does AICc. In the present work, both criteria are
used to fit Legendre polynomial bases to the PCs obtained from true
displacement fields measured during fatigue of solder joints.
Dominant PCs are objectively identified as those for which a finite
number of Legendre polynomials result in the best fit, whereas PCs
corresponding to noise are best fit by a zeroth-order Legendre polynomial, i.e., a constant function. It is shown that BIC works much better than AICc in separating the dominant eigenfunctions from noise, and the resulting fits are considered the true signals to be used in reconstruction and differentiation of the displacement data.
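A condensed sketch of the selection rule (synthetic PCs stand in for those extracted from measured displacement fields):

    import numpy as np
    from numpy.polynomial import legendre

    def best_degree_by_bic(z, pc, max_degree=10):
        """Legendre-fit order minimizing BIC; a best fit of degree
        zero flags the component as noise."""
        n = len(pc)
        scores = []
        for d in range(max_degree + 1):
            coef = legendre.legfit(z, pc, d)
            rss = np.sum((legendre.legval(z, coef) - pc) ** 2)
            scores.append(n * np.log(rss / n) + (d + 1) * np.log(n))
        return int(np.argmin(scores))

    rng = np.random.default_rng(3)
    z = np.linspace(-1.0, 1.0, 200)
    signal_pc = 0.3 * z + 0.5 * (1.5 * z**2 - 0.5) + 0.01 * rng.normal(size=200)
    noise_pc = 0.05 * rng.normal(size=200)
    print(best_degree_by_bic(z, signal_pc), best_degree_by_bic(z, noise_pc))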
Techniques for using the Method of Manufactured Solutions for Verification and Uncertainty Quantification of CFD Simulations Having Discontinuities
V&V2013-2288
Benjamin Grier, Richard Figliola, Clemson University, Edward Alyanak, Jose Camberos, US Air Force Research Lab
Verification and validation of CFD solutions using the method of
manufactured solutions (MMS) has been widely used since its
introduction by Steinberg and Roache (1985). MMS is a method
traditionally used to verify the correctness of a computational code,
but can also quantify the discretization, round-off, and iterative error
of the numerical approach and boundary conditions. Current
restrictions on MMS require that solutions be smooth and
continuous, rendering the use of solutions with discontinuities
impossible. Here we investigate generic but fundamental activities
and methodologies for applying MMS to solvers modeling shock
discontinuities. We develop a method to use arbitrary pre-shock
conditions as well as arbitrary shock boundaries to test these code
capabilities. The outcome of this is a novel demonstration of using
MMS to assess uncertainty quantification of numerical errors. Tests
were completed using steady-state Euler solvers within the military
code AVUS (Air Vehicles Unstructured Solver). While CFD shocks are
the experimental platforms of this research, this work could also be
of interest when dealing with fluid-solid interfaces, code-to-code
couplings, and any numerical solution that has a discontinuous
nature.
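One ingredient of such an approach can be sketched with the normal-shock (Rankine-Hugoniot) relations for an ideal gas, which tie an arbitrary pre-shock state to a consistent post-shock state (this sketch is ours, not necessarily the construction used in the talk):

    def post_shock_state(M1, p1, rho1, gamma=1.4):
        """Post-shock pressure, density, and Mach number for a
        normal shock in an ideal gas with a given pre-shock state."""
        p2 = p1 * (1.0 + 2.0 * gamma / (gamma + 1.0) * (M1**2 - 1.0))
        rho2 = rho1 * (gamma + 1.0) * M1**2 / ((gamma - 1.0) * M1**2 + 2.0)
        M2 = (((gamma - 1.0) * M1**2 + 2.0)
              / (2.0 * gamma * M1**2 - (gamma - 1.0))) ** 0.5
        return p2, rho2, M2

    print(post_shock_state(2.0, 101325.0, 1.225))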
Non-intrusive Parametric Uncertainty Quantification for
Reacting Multiphase Flows in Coal Gasifiers
V&V2013-2303
Aytekin Gel, ALPEMI Consulting, LLC, Tingwen Li, URS
Corporation, Mehrdad Shahnam, DOE-NETL, Chris
Guenther, National Energy Technology Laboratory (NETL)
Significant improvements in computational power have substantially
increased the role of simulation-based engineering in complex
systems, such as coal gasifiers. However, credibility of simulation
results needs to be established for usability of these studies. For this
purpose, uncertainty quantification has become a critical component
of modeling and simulation studies, in particular for computational
fluid dynamics (CFD) applications. Different sources of uncertainty can affect the prediction capability of computational fluid dynamics simulations for coal gasifiers, which are governed by challenging physics due to varying scales in reacting multiphase flows. In this study we investigate the effect of two particular sources of uncertainty: (1) model input parameters, and (2) discretization error. For this purpose a simplified two-dimensional transport gasifier configuration was simulated using a deterministic open source CFD software, MFIX (https://www.mfix.org). Non-intrusive uncertainty quantification methods were employed by interfacing MFIX with an open source uncertainty quantification toolbox, PSUADE (https://computation.llnl.gov/casc/uncertainty_quantification). This approach has enabled the treatment of the CFD code as a black box without any structural changes. At this phase, the use of two-dimensional simulations instead of three-dimensional ones facilitated acceptable turnaround time to perform simulation of an adequate number of samples. A number of sampling simulations were performed to construct a data-fitted surrogate model or a response surface for quantities of interest such as species mass fractions at the exit. For this purpose, the traditional response surface approach was extended to include mesh spacing as an additional factor. In prior studies, this approach has been demonstrated to be an effective alternative to Richardson extrapolation for modeling the effect of discretization errors. Furthermore, it possesses some favorable features such as a noise-filtering capability when noise error is present, as opposed to Richardson extrapolation, which tends to amplify the noise error. We present some preliminary results from this study and how they can be extended to a wider range of gasifier simulations.
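A skeletal version of a response surface that treats mesh spacing as an additional factor (all numbers invented; a second-order error term is assumed for illustration):

    import numpy as np

    # hypothetical samples: input parameter theta, mesh spacing h,
    # and a quantity of interest y (e.g., an exit species mass fraction)
    theta = np.array([0.1, 0.5, 0.9, 0.1, 0.5, 0.9, 0.1, 0.5, 0.9])
    h = np.array([1.0, 1.0, 1.0, 0.5, 0.5, 0.5, 0.25, 0.25, 0.25])
    y = np.array([2.11, 2.52, 2.93, 2.05, 2.44, 2.86, 2.02, 2.41, 2.83])

    # least-squares surface y ~ a + b*theta + c*h**2
    A = np.column_stack([np.ones_like(theta), theta, h**2])
    (a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)

    # evaluating at h = 0 extrapolates the surface to zero
    # discretization error, as an alternative to Richardson extrapolation
    y_h0 = a + b * theta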
Solution Verification based on Grid Refinement Studies and
Power Series Expansions
V&V2013-2308
Luis Eca, Instituto Superior Tecnico, Martin Hoekstra, MARIN
In this presentation we focus on Solution Verification using
systematic grid refinement, i.e. we will illustrate a practical procedure
for the estimation of the numerical error/uncertainty of a numerical
solution for which the exact solution is unknown. Most of the existing
methods for uncertainty estimation [1] require data in the so-called
“asymptotic range”, i.e. data on grids fine enough to give a single
dominant term in a power series expansion of the error. This often
means levels of grid refinement beyond those normally used in
practical applications. In order not to let this be a reason to abandon
uncertainty estimation at all, we have established an uncertainty
estimation procedure that takes into account the practical limitations
in grid density. The discretization error is estimated with power series
expansions as a function of the typical cell size. These expansions, of
which four types are used, are fitted to the data in the least squares
sense. The selection of the best error estimate is based on the
standard deviation of the fits. The error estimate is converted into an
uncertainty with a safety factor that depends on the observed order
of accuracy and on the standard deviation of the fit. For well-behaved data sets, i.e. monotonic convergence with the expected
observed order of accuracy and no scatter in the data, the method
reduces to the well known Grid Convergence Index [1]. References
[1] Roache P.J., “Fundamentals of Verification and Validation”,
Hermosa Publishers, Albuquerque, New Mexico 2009.
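A compressed sketch of one such least-squares fit (grid data hypothetical; the actual procedure selects among four expansion types and bases its safety factor on the observed order and the fit’s standard deviation):

    import numpy as np
    from scipy.optimize import curve_fit

    h = np.array([1.0, 0.707, 0.5, 0.354, 0.25])         # typical cell sizes
    phi = np.array([1.480, 1.449, 1.432, 1.423, 1.418])  # computed QoI values

    def expansion(h, phi0, alpha, p):
        # single-term power series: phi(h) = phi0 + alpha * h**p
        return phi0 + alpha * h**p

    (phi0, alpha, p), _ = curve_fit(expansion, h, phi, p0=[phi[-1], 0.1, 2.0])

    error = phi[-1] - phi0                   # error estimate on the finest grid
    safety = 1.25 if 0.5 < p < 2.1 else 3.0  # illustrative safety factor
    uncertainty = safety * abs(error)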
An Overview of the First Workshop on Verification and Validation of CFD for Offshore Flows
V&V2013-2306
Luis Eca, Instituto Superior Tecnico
Guilherme Vaz, MARIN
Owen Oakley, Consultant
This presentation gives an overview of the first Workshop on
Verification and Validation of CFD for Offshore Flows [1] that was held
in July 2012 at the OMAE 2012 Conference. The Workshop included
three unsteady, incompressible flow test cases: 3-D manufactured
solution for unsteady turbulent flow (including turbulence quantities
of eddy-viscosity models); the flow around a smooth circular cylinder
and the flow around a 3-D straked riser. The goal of the first exercise
is to perform Code Verification, i.e. to check that the discretization
error tends to zero with grid and time-step refinement and (if
possible) demonstrate that the observed order of grid convergence
matches the “theoretical order” of the code. The objectives of the
other two cases were twofold: check the consistency between error
bars obtained from different numerical solutions of the same
mathematical model, i.e. perform Solution Verification; apply the V&V
20 Validation procedure [1] to assess the modeling error. This means
that experiments and simulations require the estimation of their
respective uncertainties. Therefore, the first test case is focused on
Code Verification, whereas the remaining test cases include Solution
Verification and Validation. There were 12 groups submitting results
for the Workshop covering all the proposed test cases. The
submitted calculations were obtained with 10 different flow solvers
including in-house and commercial codes. References [1] Eça L., Vaz
G., “Workshop on Verification and Validation of CFD for Offshore
Flows”, OMAE 2012-84215, Rio de Janeiro, Brazil, July 2012.
Radiation Hydrodynamics Test Problems with Linear Velocity
Profiles
V&V2013-2309
Raymond Hendon, Middle Tennessee State University, Scott
Ramsey, Los Alamos National Laboratory
As an extension of the works of Coggeshall and Ramsey, a class of
analytic solutions to the radiation hydrodynamics equations is
derived. These solutions are valid under assumptions including
diffusive radiation transport, a polytropic gas equation of state,
constant conductivity, separable flow velocity proportional to the
curvilinear radial coordinate, and divergence-free heat flux. In
accordance with these assumptions, the derived solution class is
mathematically invariant with respect to the presence of radiative
heat conduction, and thus represents a solution to the compressible
flow (Euler) equations with or without conduction terms included.
With this solution class, a quantitative code verification study (using
spatial convergence rates) is performed for the cell-centered, finite
volume, Eulerian compressible flow code xRAGE developed at Los
Alamos National Laboratory. Simulation results show near second
order spatial convergence in all physical variables when using the
hydrodynamics solver only, consistent with that solver’s underlying
order of accuracy. However, contrary to the mathematical properties
of the solution class, when heat conduction algorithms are enabled
the calculation does not converge to the analytic solution.
Code Verification of ReFRESCO using the Method of
Manufactured Solutions
V&V2013-2307
Luis Eca, Filipe Pereira, Instituto Superior Tecnico,
Guilherme Vaz, MARIN
ReFRESCO is a flexible Unsteady Reynolds-Averaged Navier-Stokes
(URANS) equations solver based on a fully-collocated (all unknowns
are defined at the cell centres) face-based finite-volume discretization
in the physical space. The code is able to handle volumes of arbitrary
shape and so it is highly suitable for applications to turbulent flows
around complex geometries. However, this flexibility is accompanied
by several challenges to obtain second-order grid convergence in
typical grids of practical calculations. In this presentation we discuss
the discretization of the diffusion terms of the momentum equations
and the corrections required in non-orthogonal grids, i.e. in grids
where the line that connects two neighboring cell centers is not
parallel to the face normal. In order to determine the discretization
error, the exercise is performed with Manufactured Solutions
specifically developed to mimic statistically steady, incompressible
near-wall turbulent flows. The results show that it is possible to
achieve second-order grid convergence in grids with deviations from
orthogonality up to 80°. Furthermore, it is also demonstrated that
ignoring non-orthogonality leads to an inconsistent discretization, i.e.
to a scheme that does not converge to the exact solution with grid
refinement.
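The kind of correction at issue can be sketched with the common over-relaxed decomposition of the face-area vector (a generic textbook form, not necessarily ReFRESCO’s exact scheme):

    import numpy as np

    def diffusive_flux(phi_P, phi_N, x_P, x_N, S_f, grad_f, gamma=1.0):
        """Face diffusive flux split into an implicit orthogonal part
        and an explicit non-orthogonality correction."""
        d = x_N - x_P                          # vector joining cell centres
        Delta = d * (S_f @ S_f) / (S_f @ d)    # part of S_f aligned with d
        k = S_f - Delta                        # non-orthogonal remainder
        orthogonal = gamma * np.linalg.norm(Delta) \
            * (phi_N - phi_P) / np.linalg.norm(d)
        correction = gamma * (k @ grad_f)      # uses the interpolated gradient
        return orthogonal + correction

Dropping the explicit correction term on a non-orthogonal grid illustrates the kind of inconsistency discussed above.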
A Manufactured Solution for a Steady Capillary Free Surface
Flow Governed by the Navier-Stokes Equations
V&V2013-2310
Stephane Gounand, Commissariat À L’Energie Atomique,
Vincent Mons, Ecole Polytechnique
The context of our study is the thermohydraulic numerical simulation
of weld pools with a particular emphasis on the coupling between
surface and bulk phenomena. The liquid metal flow in the weld has a
free surface. It has been shown that in the numerical simulation of
some welding processes, such as the widely-used TIG (Tungsten
Inert Gas) arc welding process, precise determination of the free
surface shape is of paramount importance because it has a great
44
ABSTRACTS
impact on the outcome of the numerical model: geometry of the weld
and heat distribution. The free surface shape results from the balance
of three main forces: external arc aerodynamic pressure loading,
surface tension and internal liquid metal flow loading. In the case of
welding, a main driving force of the liquid metal flow is also located
on the surface: the Marangoni shear force which is due to the
variation of surface tension with temperature. The purpose of this
particular study is to verify the numerical code used for the
computation of a steady capillary free surface flow, such as those
arising in weld pool simulations. We chose here a steady-state
approach for the following reasons: - in the context of welding,
steady situations are sometimes the ones that are sought in order to
get good weld quality; - on the one hand, it simplifies the numerical
problem because there is no need to handle the time dimension; - on
the other hand, the difficulty lies on the fact that the numerical
procedure must be able to converge to the equilibrium state which is
supposed to exist. In order to verify our numerical code, we first
looked for reliable numerical solutions for steady capillary free surface
flow described by Navier-Stokes equations in the literature. We found
surprisingly few results: most of the literature is devoted to unsteady
situations such as drop oscillations and sloshing. Moreover, we
couldn’t reproduce the numerical results given in the main reference
article we found despite the fact that the considered test case
appeared to be very simple at first glance: steady isothermal flow,
simple container geometry and boundary conditions and
incompressible fluid. At this point, we decided to construct our own
manufactured solution for a situation similar to the previously
considered test case. In our presentation, we will outline the
numerical method used, the difficulties that were met and the insights
that were gained in devising and computing the manufactured
solution. The obtained convergence results are also presented and
analysed.
Compact Finite Difference Scheme Comparison with
Automatic Differentiation Method on Different Complex
Domains in Solid and Fluidic Problems
V&V2013-2313
Kamran Moradi, Siavash Rastkar, Florida International
University
There are three general methods to differentiate a function: numeric, symbolic, and automatic. Symbolic methods can be very accurate in computing the values of derivatives, but as functions grow in complexity and size this approach becomes more and more expensive, which leads to using one of the other methods. Automatic differentiation methods have been developed since the 1990s to overcome such problems. In these methods the same rules as in symbolic differentiation are used, but at each step of the process numeric values of the derivatives are calculated and substituted into the differentiation function instead of their symbolic values. In general, finite difference schemes provide an improved representation of a range of scales (spectral-like resolution); as one of the advanced finite difference methods, the compact finite difference scheme is evaluated in this article on different complex functions. The first and second derivatives in multidimensional problems, inside and at the boundaries of the calculation domains, are investigated. An error analysis is also presented, which shows the high precision of this numerical method. The accuracy and cost efficiency of these approaches are compared and presented in this paper. The applications of these methods are comprehensive, spanning thermal and fluidic domain analysis as well as solid mechanics.
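A minimal dual-number sketch of forward-mode automatic differentiation, compared against a central difference (illustrative only):

    import math

    class Dual:
        """Carries a value and a derivative; arithmetic applies the
        same rules as symbolic differentiation, but numerically."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val * other.val,
                        self.val * other.dot + self.dot * other.val)
        __rmul__ = __mul__

    def sin(u):
        return Dual(math.sin(u.val), math.cos(u.val) * u.dot)

    def f(x):   # f(x) = x^2 sin(x), in either arithmetic
        return x * x * sin(x) if isinstance(x, Dual) else x * x * math.sin(x)

    ad = f(Dual(1.3, 1.0)).dot                    # exact to machine precision
    h = 1e-5
    fd = (f(1.3 + h) - f(1.3 - h)) / (2.0 * h)    # central difference
    print(ad, fd)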
Impact of a Gaseous Jet on the Capillary Free Surface of a
Liquid: Experimental and Numerical Study
V&V2013-2311
Stephane Gounand, Benjamin Cariteau, Olivier Asserin,
Xiaofei Kong, Commissariat A l’Energie Atomique, Vincent
Mons, Ecole Polytechnique, Philippe Gilles, Areva NP
The context of our study is the thermohydraulic numerical simulation
of weld pools that are present during industrial welding processes
such as the TIG (Tungsten Inert Gas) arc welding process. Although
these weld pools have generally quite small extent compared to the
parts being assembled, they have a big influence during the welding
process both on the local heat distribution and on the subsequent
quality of the weld and the weld penetration. The shape of the liquid
metal weld pool is determined by three main contributions: - external
arc aerodynamic pressure loading; - surface tension; - internal liquid
metal flow loading. Unfortunately, it is very difficult to make precise
in-situ measurements in the weld pool because of the high
temperature environment and the arc radiation. This is why we focus
in this study on a simplified situation that is representative of the main
hydrodynamic phenomena arising in the weld pool: a gaseous jet
impacting the capillary free surface of a liquid. The axisymmetric
hypothesis was also retained in a first approach. We first used
dimensional analysis to construct the characteristic non-dimensional
numbers related to both the welding problem and the jet problem.
Dimensional analysis shows in our case that the gas and fluid flow
are laminar and that gravity effects are negligible with respect to
surface tension effects near the jet impact. A bibliographical study
showed that jets have been widely studied, theoretically and
experimentally, especially in the turbulent regime both in open space
and impacting on a solid surface. However, not so many theoretical
or experimental studies have been conducted in the laminar regime
and with a capillary free surface. Our particular study involves an
experimental setup where an air jet impacts the free surface of a
water pool and a numerical code, previously verified with a
manufactured solution, that computes the flow in both the gas and liquid regions separated by a free surface. The input parameters in the experiment are the nozzle height with respect to the unperturbed liquid surface, the nozzle diameter, and the gas flow rate. The quantity that was measured is the shape of the free surface, using an optical method. In the non-dimensional number range of interest, we conducted a sensitivity analysis using a regression model for both the experimental and numerical results before comparing them for validation purposes.
Determining What Vortex-Induced Vibration Variables Have the Maximum Effect on a Pipeline Free Span’s Amplitude of Displacement with Computational Fluid-Structure Interaction
V&V2013-2315
Marcus Gamino, Ricardo Silva, Samuel Abankwa, Edwin Johnson, Michael Fischer, Raresh Pascali, Egidio Marotta, University of Houston, Carlos Silva, Juan-Alberto Rivas-Cardona, FMC Technologies
The objective of this investigation is to determine the effects of
different Reynolds number, Re, variables (i.e. flow velocities, change
in pipe diameter, and fluid densities) on the maximum amplitude of
displacement of a pipeline free span due to vortex-induced vibration
(VIV). According to Section 4.1.14 of DNV RP-F105, the Reynolds
number “is not explicit in the evaluation of response amplitudes.” A
higher Reynolds number does not guarantee higher response
amplitude since the response amplitude depends upon the reduced
velocity, VR, the turbulence intensity, Ic, the flow angle relative to the pipe, θrel, and the stability parameter, KS, which represents damping for a given modal shape (DNV RP-F105). This research
will verify that there is no correlation between the Reynolds number
and its variables with a free span’s response amplitude by testing
each input variable with computational design of experiments by
means of three different design models, which include a space-filling
design, a full factorial design, and a surface response design. This
investigation focuses on the Re range between 79,500 and 680,000,
where the vortex street is fully turbulent and the boundary layer
around the cylinder has begun to undergo a turbulent transition at Re
at 300,000 (Lienhard). This investigation utilizes state of the art
computational fluid-structure interaction (FSI) by implicitly coupling
ABAQUS, a FEA software, and STAR-CCM+, a CFD software to
determine the response amplitude over a range of each design
variable. During the FSI simulations, pressures are calculated in
STAR-CCM+ and transferred to ABAQUS, where displacements are
calculated at every iteration. Of the three DOE design models, the
surface response design proved to be the most accurate and the
least time consuming due to its strategic design point locations. Of
the three main inputs, the current velocity had the greatest effect on
the free span’s amplitude of displacement when it was subjected to
VIV due to current, which validates the response amplitude’s
dependence on the reduced velocity. The results of this investigation
prove there is no linear relationship between the response amplitude
and the increase in the Reynolds number, so no explicit relation can
be defined. Future work could incorporate a wider range of Reynolds
number to test the relationship between flow regime and response
amplitude.
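For reference, the two governing quantities can be computed as follows (values hypothetical; definitions per DNV-RP-F105):

    rho, mu = 1025.0, 1.3e-3   # seawater density (kg/m^3), viscosity (Pa*s)
    U = 1.2                    # current velocity (m/s)
    D = 0.60                   # pipe outer diameter (m)
    f_n = 0.85                 # natural frequency of the free span (Hz)

    Re = rho * U * D / mu      # ~5.7e5, inside the range studied here
    VR = U / (f_n * D)         # reduced velocity, which does govern response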
High Fidelity Numerical Methods using Large Eddy Simulation
Compared to Reynolds Average Navier-Stokes
V&V2013-2316
Ayoub Glia, Masdar Institute of Technology
The aim of this work is to clearly demonstrate the compromise being
made while using the high fidelity turbulence model of Large Eddy
Simulation (LES). A comparison of this model is made with the
averaging method, Reynolds-averaged Navier-Stokes (RANS), in an attempt
to portray the compromise of computational time over accuracy
being made, in using the high fidelity model. To this effect, time
histories of pressure coefficients at three taps on the roof of a 1/200
model of a 200 x 20 ft low-rise building will be modeled on the
commercial CFD package FLUENT, using LES and RANS (k-epsilon)
operations, and compared to the time histories of pressure
coefficients obtained by wind tunnel data at the same locations. One
of the noted constraints of the LES model is the dramatically large computational time needed to achieve a record length equivalent to that of a wind tunnel [2]. This results from the large number of nodes that are brought into the calculation in order to accommodate the principal operation in large eddy simulation: low-pass filtering [2]. Yet, this model yields more rewarding accuracy compared to Reynolds-averaged Navier-Stokes for specific cases [3][4]. Therefore, there is a
need for researchers to clearly understand the consequences of
using either method when performing CFD analyses. The aim of this
paper is to compare the performance of both models while
simulating the flow over a low rise building. To this effect, a 2D model
of a low rise building is created on the commercial code Fluent.
When constructing the grid, near wall region is highly considered for
both models. A flow with a Reynolds number of 10^5 is applied to study
the performances of these models within a range of relatively high
turbulence. Consequently, velocity profiles (stagnation, vortex,
separation, reattachment, recirculation zone), and pressure
coefficients for peak and mean pressure of both models are
compared. Data collected from the flow over a low rise building
simulated in a wind tunnel by Isam et al [2] are used as benchmark
for the comparisons in this study. Furthermore, a general discussion
and comparison of their grids, results and computational time is
made.
Collecting and Characterizing Validation Data to Support
Advanced Simulation of Nuclear Reactor Thermal Hydraulics
V&V2013-2320
Nam Dinh, Anh Bui, Idaho National Laboratory, Hyung Lee,
Bettis Laboratory
This presentation is concerned with collection, characterization, and
management of experimental data (including separate-effect tests,
integral-effect tests, and plant measurements) to support model
calibration and validation and prediction uncertainty quantification in
advanced simulation of nuclear reactor thermal-hydraulics.
Formidable challenges in verification and validation (V&V) and
uncertainty quantification (UQ) come from the multi-scale and multi-physics nature of advanced nuclear reactor simulations, while data
scarcity, heterogeneity and lack of information to characterize them
add to the difficulty in implementing V&V-UQ processes in nuclear
engineering applications. This presentation will introduce a “total
data-model integration” approach that is designed to meet these
challenges. While the concept is appealing, it puts a burden on (i)
formulating, demonstrating and implementing a systematic approach
to data collection, a consistent framework for data characterization,
storage, and management, and making the data available for V&V-UQ processing; and (ii) development of V&V-UQ techniques that are
capable of handling and integrating heterogeneous and multi-scale
data in model calibration and validation. Via an example of a so-called “challenge problem” application, and a case study, this
presentation will outline status of developments, insights and lessons
learned in efforts to address (i) and (ii), and recommendations for
future work. This work is performed for and under support of the
Department of Energy’s Consortium for Advanced Simulation of Light
Water Reactors (CASL). Keywords: advanced modeling and
simulation, validation, uncertainty quantification, data, data collection,
data characterization, data management, multi-phase flow, thermal-hydraulics, nuclear reactor, Consortium for Advanced Simulations of
Light Water Reactors (CASL).
Validation and Uncertainty Quantification of an Ethanol Spray
Evaporation Model
V&V2013-2319
Bastian Rauch, Patrick Le Clercq, DLR - Institute of
Combustion Technology, Raffaela Calabria, Patrizio Massoli,
CNR - Istituto Motori
The need for alternative fuels brings new challenges for the transportation industry. The introduction of new fuels challenges existing systems and might require the development of fuel-flexible engines in the long term. To ensure operational safety a detailed
Effect of Interacting Uncertainties on Validation
V&V2013-2321
Arun Karthi Subramaniyan, GE Global Research, Liping
Wang, GE Corporate Research & Development, Gene Wiggs,
GE Aviation
Validating complex large-scale simulations is hard even when the simulations and test data have independent sources of uncertainty. In
most real engineering applications, both simulation results as well as
test data are fraught with correlated and interacting uncertainties.
System-level tests are notoriously difficult to control, and measurement system uncertainties combined with test conditions can contribute significant uncertainties to the measurements (both aleatory and epistemic). Uncertainty in simulations can be caused by the model form, simulation parameters, and other epistemic uncertainties. In order to estimate the validity of the simulations for predicting the observed test data, we need to estimate the probabilistic mismatch between predictions and the observed data, taking all the sources of uncertainty and their interactions into account. In most cases it is straightforward to suspect a large error in the validation process if one ignores some sources of uncertainty. However, it may not be intuitive or straightforward to assess the impact of interactions between the sources of uncertainty and the effect of missing some key interactions. Uncertainties (and/or random variables) can interact in many ways. One of the most straightforward cases is when the interactions are known a priori (at least partially), and thus can be accounted for using covariance structures. However, in many cases the interactions are not known, making the inference problem rather difficult.
considered in addition to the above complexities, the UQ and
validation exercise can quickly become intractable. This talk will
highlight the interaction effects of uncertainties in simulations and
test data and their impact on validation. Several uncertainty
quantification techniques including Bayesian techniques will be
contrasted under varying degrees of interacting uncertainties.
Quantifying random field uncertainty using Bayesian techniques will
be discussed along with the difficulties of including various sources
of uncertainties in the validation exercise. Examples of validating
computationally expensive engineering models of aircraft engines
and gas turbines using Bayesian techniques will be showcased.
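As a toy illustration of why ignoring interactions distorts a validation assessment, the sketch below compares the probabilistic mismatch between a prediction and test data with and without an assumed off-diagonal covariance term. It is not the authors' method, and all numbers are hypothetical.

import numpy as np

# Hypothetical discrepancy vector between prediction and test data
# for two system response quantities.
d = np.array([1.5, 1.2])

# Marginal standard deviations of the combined (test + simulation)
# uncertainty for each response quantity.
sigma = np.array([1.0, 0.8])

def mahalanobis_sq(d, cov):
    """Squared probabilistic mismatch under a Gaussian assumption."""
    return float(d @ np.linalg.solve(cov, d))

# Case 1: interactions ignored (diagonal covariance).
cov_indep = np.diag(sigma**2)

# Case 2: a strong positive correlation (rho assumed here, e.g. a
# shared boundary-condition error affecting both responses).
rho = 0.9
cov_corr = cov_indep.copy()
cov_corr[0, 1] = cov_corr[1, 0] = rho * sigma[0] * sigma[1]

print("mismatch, independent:", mahalanobis_sq(d, cov_indep))
print("mismatch, correlated :", mahalanobis_sq(d, cov_corr))

The same discrepancy vector looks substantially more or less probable once the correlation is included, consistent with the abstract's point that missing key interactions can change the validation conclusion.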
Validation of Transient Bifurcation Structures in Physical
Vapor Transport Modeling
V&V2013-2325
Patrick Tebbe, Joseph Dobmeier, Minnesota State University,
Mankato
Physical Vapor Transport (PVT) is one processing method used to
produce specialty semiconductor materials and thin films. During the
process a solute material sublimes (evaporates) from a heated
source, is transported across the enclosure and deposits
(condenses) on a cooled growth (crystal) substrate. Often the
materials produced with PVT are difficult or impossible to produce
with other methods. Modeling PVT involves solving the coupled nonlinear Navier-Stokes equations with specific boundary conditions.
While the majority of numerical studies have been carried out for steady-state conditions, evidence clearly demonstrates that the PVT process possesses inherent temporal variations and asymmetries. In fact, the double-diffusive (thermal and solutal) nature of PVT can lead to steady and oscillatory flows with periodic and aperiodic behaviors. Over the course of several years, simulations have also been run with the same computational fluid dynamics package on a number of different hardware architectures. During the course of this research, subtle yet significant differences between identical cases run with different hardware systems were noticed. In general, the same
large-scale bifurcation and flowfield structures are reproduced but
some rotations are reversed and the corner vortices tend to be
different. Most striking are the temporal differences that develop. The
times of transition and lifetimes of flow structures vary between the
cases. The bifurcation development and transitions are in fact
probabilistic events and it can be concluded that differences in
software and hardware implementation of the codes are inducing
these variations. It is very difficult to compare unsteady solutions on a
time-by-time basis. Experimental data is also difficult to obtain for the
transient cases. Therefore, a validation of the transient PVT
simulation is problematic. Comparisons can be made of the time
histories for velocity, species concentration, and temperature at
specific points. Using a Fast Fourier Transform (FFT) technique, the frequency spectra can also be analyzed to determine whether the same frequency components are generated by different models. In addition, phase-space diagrams can be created to analyze the dynamics of the flowfield, allowing a determination of what type of flow is present (steady-state, periodic, aperiodic, etc.). In this
presentation, results from the described simulations will be discussed
from the perspective of validation. Given the sensitivity to aspects
such as system architecture and compilation differences, validation
for transient systems such as these poses unique challenges.
The implications for modeling similar processes on multi-core systems will also be addressed.
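Since the abstract proposes FFT-based comparison of time histories, the following minimal sketch shows one way to extract the dominant frequency components from two records sampled at the same monitoring point. The signals and sampling rate are invented for illustration.

import numpy as np

def dominant_frequencies(signal, dt, n_peaks=3):
    """Return the n_peaks strongest frequencies (Hz) in a record.
    (Adjacent bins of one physical peak may appear; a peak-finding
    step would refine this.)"""
    signal = signal - signal.mean()        # drop the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=dt)
    order = np.argsort(spectrum)[::-1]     # strongest bins first
    return freqs[order[:n_peaks]]

# Two hypothetical velocity records from runs on different hardware:
# same base oscillation, slightly shifted secondary frequency.
dt = 0.01
t = np.arange(0.0, 100.0, dt)
run_a = np.sin(2 * np.pi * 0.50 * t) + 0.3 * np.sin(2 * np.pi * 1.30 * t)
run_b = np.sin(2 * np.pi * 0.50 * t) + 0.3 * np.sin(2 * np.pi * 1.35 * t)

print("run A dominant freqs [Hz]:", dominant_frequencies(run_a, dt))
print("run B dominant freqs [Hz]:", dominant_frequencies(run_b, dt))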
Sensitivity and Uncertainty of Time-Domain Boiling Water
Reactor Stability
V&V2013-2323
Tomasz Kozlowski, University of Illinois, Ivan Gajev, Royal
Institute of Technology
The unstable behavior of Boiling Water Reactors (BWR), which is
known to occur at certain power and flow conditions, could cause
SCRAM and decrease the economic performance of the plant. In
general, BWRs may be subject to coupled thermal-hydraulics/neutronics instabilities while operating at a relatively low flow rate
relative to power (e.g. 30% of nominal flow and 50% of nominal
power). The instability originates in the core and leads to oscillations
in flow rate, pressure and void fraction. A properly designed core
should ensure that core oscillations are efficiently suppressed during
nominal reactor operation. However, oscillations of local, regional or
global nature may occur under a complex, and often unexpected
combination of accident transient conditions. In a number of
situations, SCRAM and consequent reactor power shutdown are the
only tools available to stop the uncontrolled power excursions. The
measure for stability is usually the Decay Ratio (DR) and the Natural Frequency (NF) of the reactor power. The decay ratio is a measure of the growth or damping of the reactor power signal; the natural frequency is the frequency of oscillation of the reactor power signal.
For better prediction of BWR stability and understanding of influential
parameters, two time-domain TRACE/PARCS models of Ringhals-1
and Oskarshamn-2 BWRs were employed to perform a sensitivity
and uncertainty study. The aim of the present study is to quantify the
effect of different input parameters on stability, and to identify the
most influential parameters on the stability parameters (decay ratio
and frequency). The propagation-of-input-errors uncertainty method, sensitivity calculations, and the Spearman rank correlation coefficient have been used to fulfill this aim.
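The decay ratio defined above can be estimated from successive oscillation peaks of the power signal. The sketch below is a minimal illustration of that definition on a synthetic damped oscillation; it is not the TRACE/PARCS methodology, and the damping constant is assumed.

import numpy as np

def decay_ratio(signal):
    """Estimate DR as the mean ratio of successive local maxima."""
    peaks = np.flatnonzero((signal[1:-1] > signal[:-2]) &
                           (signal[1:-1] > signal[2:])) + 1
    amps = signal[peaks]
    return float(np.mean(amps[1:] / amps[:-1]))

# Synthetic reactor-power perturbation: damped oscillation with
# natural frequency f0 and an assumed damping time constant tau.
dt, f0, tau = 0.01, 0.5, 4.0
t = np.arange(0.0, 30.0, dt)
power = np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t)

print("decay ratio ~", round(decay_ratio(power), 3))
print("expected    ~", round(np.exp(-(1.0 / f0) / tau), 3))  # e^(-T/tau)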
On the Convergence of the Discrete Ordinates and Finite
Volume Methods for the Solution of the Radiative Transfer
Equation
V&V2013-2326
Pedro J. Coelho, Instituto Superior Tecnico, Technical
University of Lisbon
Radiative heat transfer in participating media is governed by an
integro-differential equation known as the radiative transfer equation
(RTE). Among the numerical methods used to solve this equation, the
discrete ordinates method (DOM) and the finite volume method
(FVM) are probably the most widely used ones. In contrast with fluid
flow problems governed by the Navier-Stokes equations, where only
the spatial discretization is needed, the RTE requires both a spatial
and an angular discretization. Errors arising from the spatial and
angular discretization influence the accuracy of the numerical
solution, and these errors interact with each other in such a way that
they tend to compensate each other. These aspects need to be
taken into account when using the DOM or FVM to solve the RTE.
The purpose of the present work is to investigate the convergence of
these two methods for two test problems where an analytical solution
of the RTE is available. These tests correspond to radiative transfer in
two-dimensional rectangular enclosures with black walls and an
emitting-absorbing grey medium maintained at uniform temperature.
The walls are cold in the first test case, and one wall is hot while the
others are cold in the second test case. In the first test case the
solution is smooth, while in the second one there are ray effects
caused by the discontinuity in the boundary conditions at the corners
of the enclosure where a cold wall meets a hot wall. In both
problems, the DOM and FVM numerical solutions are compared with
the analytical solution of the discrete DOM and FVM equations, as
well as with the analytical solution of the RTE. Systematic spatial and
angular grid refinements are carried out to quantify the solution error
and to determine the order of convergence of the numerical methods
as the refinement is carried out. It is shown that if only spatial
refinement is carried out, the numerical solution approaches the
analytical solution of the discrete equations, but does not converge
to the analytical solution of the RTE. This means that the solution
accuracy is not improved and may even get worse. Both spatial and
angular refinements are needed to ensure that the numerical solution
converges to the analytical solution of the RTE. Finally, the accuracy
of the angular discretization is examined by investigating the
convergence of the analytical solution of the spatially discretized
DOM and FVM equations when the angular discretization is refined.
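The systematic grid refinement described here boils down to computing an observed order of convergence from solution errors on successive grids. A minimal sketch of that calculation follows, with made-up error values; the actual study evaluates errors against both the discrete and the exact RTE solutions.

import math

def observed_order(e_coarse, e_fine, r):
    """Observed order of convergence from errors on two grids
    related by refinement ratio r (> 1)."""
    return math.log(e_coarse / e_fine) / math.log(r)

# Hypothetical solution errors under combined spatial + angular
# refinement with a refinement ratio of 2.
errors = [4.0e-3, 1.1e-3, 2.9e-4]
for e1, e2 in zip(errors, errors[1:]):
    print("observed order ~", round(observed_order(e1, e2, 2.0), 2))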
The “Validation” Journal Article
V&V2013-2328
Barton Smith, Utah State University
Before one can validate a computer model, an accessible dataset of
sufficient quality and completeness must exist. Unfortunately, this is
rarely the case. There are two primary reasons for the dearth of useful validation datasets: 1) a lack of understanding in the experimental community about how to design and perform a validation experiment, and 2) the lack of a venue to disseminate and archive validation datasets. Excellent information on the design and execution of validation experiments is available, but it tends to reside inside the domain of modelers. However, the second point is an even larger concern. Even
if a validation study is performed perfectly, its impact is typically
limited to the term of the project by which it was funded and to the
individuals involved. We will present a proposal that solves both
issues. We propose that society journals begin to publish “Validation
Articles”, which consist of a brief article describing the study coupled
to online data describing the six elements of a validation dataset: 1)
Experimental Facility, 2) Analog Instrumentation and Signal
Processing, 3) Boundary and Initial Conditions, 4) Materials, 5) Test
Conditions, 6) System Response Quantities. The quality of the
dataset will be assessed by peer review. The completeness of the
dataset will be scored by an expert panel appointed by the ASME
V&V committees, independent of the publication process, and this
score will be available to subscribers of the journal. Published scoring
criteria will educate the experimental community on how to properly
perform and report validation experiments. Users of the data sets
can shop for a dataset of sufficient completeness for their purposes.
The end result of this course of action would be 1) increased
subscriptions for the journal, 2) journal citations for the
experimentalist, 3) improved value and archival visibility of a
validation experiment, and, most importantly, 4) improved quality of
validation. All of this can be done without the need for a new journal,
editors, or review process. The proposed activities will be very similar
to the current operations of the journal. As part of this presentation,
we will demonstrate how a “Validation Article” would appear and
operate if published in a well-known fluids journal.
Single-Grid Discretization Error Estimation for Computational
Fluid Dynamics
V&V2013-2327
Tyrone Phillips, Christopher J. Roy, Virginia Tech
Discretization error is often the largest and most difficult to estimate of the numerical errors in a simulation; it is defined as the difference between the exact solution to the discrete equations and the exact solution to the PDEs. This work deals with single-grid discretization error estimation
PDEs. This work deals with single-grid discretization error estimation
methods, specifically, defect correction and error transport equations.
The fundamental concepts behind these error estimators are that the
local source of discretization error is the truncation error and that the
discretization error is convected, diffused, and otherwise propagated
through the domain in a similar manner as the primal flow properties.
Both single-grid discretization error estimation methods require the
computation of two solutions on a given grid. For defect correction,
the truncation error is estimated and added to the right-hand side of
the governing equations and a second solution is computed. For
error transport equations, the governing equations are linearized
about the numerical solution, and the discretization error is directly
computed from the estimated truncation error. An important factor in
computing an accurate discretization error estimate has been
identified to be an accurate estimate of the truncation error. Several
different methods for estimating the truncation error have been
developed. One method requires analytically deriving the truncation
error but becomes infeasible for more complex discretization
schemes and grid topologies. While simplifications can be made,
they can have a direct effect on the results and have been shown to
fail in certain cases. A second method uses a higher-order solution
computed from either extrapolating several solutions to approximate
the exact solution or using a higher-order discretization scheme. The
higher-order solution is interpolated (if necessary) onto the
computational grid resulting in an estimate of the truncation error. A
third method computes a polynomial fit to the solution and applies the original (continuous) governing equations to the fit. Significant
work has been done to apply error transport equations to Euler and
Navier-Stokes solutions for both steady and unsteady computations
(Williams and Shih, 2010), and defect correction has seen limited
work, mainly for obtaining higher-order solutions from lower-order
simulations. The proper implementation of boundary conditions is still
a significant challenge to the effectiveness of these methods. There
are two key areas of research needed for boundary conditions. The
first is in the proper estimation of the residual due to the boundary conditions. The second is in the proper implementation of the boundary conditions. This work will focus on 1) defining the mathematical model for common boundary conditions for estimating the residual and 2) implementing Neumann (i.e., derivative) boundary conditions for the error transport equations and defect correction for Burgers’ equation and the 2D/3D Euler equations.
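To make the defect-correction idea concrete, here is a minimal 1D sketch: a second-order Poisson solve whose truncation error is estimated by re-applying a higher-order stencil to the computed solution, then added to the right-hand side for a second solve. It is only an illustration on a model problem, not the authors' Burgers/Euler implementation; the grid size and manufactured solution are assumptions.

import numpy as np

n = 64                                    # interior points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = -np.pi**2 * np.sin(np.pi * x)         # manufactured forcing, u'' = f
u_exact = np.sin(np.pi * x)

# Second-order discrete Laplacian with homogeneous Dirichlet BCs.
A = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
u_h = np.linalg.solve(A, f)               # baseline numerical solution

# Estimate the truncation error by comparing the 2nd-order stencil
# with a 4th-order stencil applied to the same computed solution.
tau = np.zeros(n)
for i in range(2, n - 2):
    d2 = (u_h[i-1] - 2*u_h[i] + u_h[i+1]) / h**2
    d4 = (-u_h[i-2] + 16*u_h[i-1] - 30*u_h[i] + 16*u_h[i+1]
          - u_h[i+2]) / (12 * h**2)
    tau[i] = d2 - d4                      # ~ (h^2/12) u''''; zero near walls

# Defect correction: re-solve with the estimated truncation error on
# the right-hand side; the difference estimates the discretization error.
u_c = np.linalg.solve(A, f + tau)
print("max true error      :", np.abs(u_h - u_exact).max())
print("max estimated error :", np.abs(u_h - u_c).max())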
Assessment of Model Validation Approaches using the
Method of Manufactured Universes
V&V2013-2329
Christopher J. Roy, Ian Voyles, Virginia Tech
Model validation is the assessment of a model relative to
experimental data (Oberkampf and Roy, 2010; Roy and Oberkampf,
2011). Recently, Stripling et al. (2011) proposed the Method of
Manufactured Universes (MMU) for assessing uncertainty
quantification methods. Similar to the method of manufactured
solutions used for code verification, MMU involves the generation of
a manufactured reality, or “universe”, from which experimental
observations can be made (possibly including experimental
measurement errors). An imperfect model is then formed with
uncertain inputs and possibly including numerical solution errors.
Since the true behavior of “reality” is known in this manufactured
universe, the exact evaluation of modeling errors can be performed.
Stripling et al. suggest that the manufactured reality be based on a
high-fidelity model and the imperfect model be based on a lower-fidelity model. This approach can be used to compare and contrast
different approaches for estimating model form uncertainty in the
presence of random experimental measurement error, experimental
bias errors, modeling errors, and uncertainties (both random and
those due to lack of knowledge) in the model inputs. In this talk, we
examine the flow over a NACA 0012 airfoil at different Mach numbers
spanning the subsonic and transonic regimes and different angles of
attack including those that involve flow separation. The manufactured
reality is based on a high-fidelity CFD simulation employing the
Reynolds-Averaged Navier-Stokes equations. The imperfect model is
based on both thin airfoil theory and a combined vortex
panel/boundary layer analysis method. Lift, moment, and drag
coefficients are the system response quantities of interest. Varying
types and levels of uncertainty are considered in the Mach number
and angle of attack. Two different approaches for estimating model
form uncertainty are examined: the area validation metric (Ferson et
al., 2008; Oberkampf and Roy, 2010) and the validation uncertainty
approach of the ASME V&V 20-2009 standard (ASME, 2009). Issues
related to the extrapolation of model form uncertainty to conditions
where no experimental data are available are also discussed. ASME,
Standard for Verification and Validation in Computational Fluid
Dynamics and Heat Transfer, ASME Standard V&V 20-2009. Ferson,
S., Oberkampf, W. L., and Ginzburg, L., Computer Methods in
Applied Mechanics and Engineering, Vol. 197, 2008, pp. 2408–2430.
Oberkampf, W. L. and Roy, C. J., Verification and Validation in
Scientific Computing, Cambridge University Press, 2010. Roy, C. J.
and Oberkampf, W. L., Computer Methods in Applied Mechanics
and Engineering, Vol. 200, 2011, pp. 2131–2144. Stripling, H. F.,
Adams, M. L., McClarren, R. G., and Mallick, B. K., Reliability
Engineering and System Safety, Vol. 96, 2011, pp. 1242-1256.
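One of the two approaches compared, the area validation metric, measures the area between the empirical CDF of the experimental observations and the model's CDF of the predicted response. A minimal numeric sketch follows, with invented lift-coefficient samples; see Ferson et al. (2008) for the full definition.

import numpy as np

def area_validation_metric(model_samples, exp_samples, n_grid=20000):
    """Area between the empirical CDFs of model and experiment.
    Approximates the integral of |F_model(q) - F_exp(q)| dq on a
    fine grid spanning both sample sets (sketch-level accuracy)."""
    lo = min(model_samples.min(), exp_samples.min())
    hi = max(model_samples.max(), exp_samples.max())
    q = np.linspace(lo, hi, n_grid)
    f_mod = np.searchsorted(np.sort(model_samples), q, side="right") / model_samples.size
    f_exp = np.searchsorted(np.sort(exp_samples), q, side="right") / exp_samples.size
    return np.trapz(np.abs(f_mod - f_exp), q)

# Hypothetical lift-coefficient samples: model predictions under input
# uncertainty vs. a handful of experimental observations.
rng = np.random.default_rng(1)
cl_model = rng.normal(0.60, 0.03, size=2000)
cl_exp = np.array([0.55, 0.57, 0.58, 0.61])

print("area metric d ~", round(area_validation_metric(cl_model, cl_exp), 4))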
The Effect of Welding Self-Equilibrating Stresses on the
Natural Frequencies of Thin Rectangular Plate with Edges
Rotation and Translation Flexibility
V&V2013-2331
Kareem Salloomi, A. Salam Al-Ammri, Shakir Al-Samarrai,
University of Baghdad/ College of Engineering
An investigation has been made into the effect of residual stresses on
the vibration characteristics of a thin rectangular plate elastically
restrained against both translation and rotation along all edges using
an energy method. General frequency equations with and without the
effect of residual stresses have been obtained. Exact frequency
equations with and without the effect of residual stresses for all the
boundary conditions between the extremes of free and clamped can
be obtained from the derived equations if the dimensionless stiffness
is taken between zero and infinity. Exact equations were derived
including the effect of the position of welding along the width of the
plate for all cases considered. General closed form solutions are
given for the vibration of a rectangular plate with various standard
boundary conditions on each of the four edges. The validity of the equations obtained was checked against available special solutions, with good agreement.
Code Verification of Multiphase Flows with MFIX
V&V2013-2332
Aniruddha Choudhary, Christopher J. Roy, Virginia Tech
In order to estimate the numerical uncertainties in a computational
fluid dynamics (CFD) simulation, code and solution verification
activities can be used to assess the accuracy of numerical solutions.
Code verification deals with identifying algorithm inconsistencies and
coding mistakes while solution verification deals with assessing
numerical errors in the simulation. The order of accuracy test is
considered the most rigorous of the various code verification tests
and requires calculation of discretization error in the numerical
solution at multiple grid levels. However, since the exact solution is
often not known, the method of manufactured solution (MMS) can be
employed, where an assumed or “manufactured” solution is used to perform code verification (Roache and Steinberg, 1984; Roache, 2009; Oberkampf and Roy, 2010). MFIX (Multiphase Flow with Interphase eXchanges) (https://mfix.netl.doe.gov/) is a general-purpose hydrodynamic code that simulates chemical reactions and heat transfer for fluids with multiple solid-phase components that are
typical of energy conversion and chemical reactor processes. MFIX is
a finite-volume code which employs a staggered-grid approach
where numerical discretization of the equations is implemented with
first-order and second-order spatial and temporal discretization
schemes. MFIX employs a modified SIMPLE-based algorithm using
a pressure projection method which imposes a divergence-free
velocity field. Most multiphase code verification studies deal with
specific problems using analytical solutions that are representative of a physically realistic flow field. Even for single-phase flows, general MMS-based code verification is complicated due to: 1) divergence-free constraints on the velocity field, 2) the SIMPLE-based algorithm, and 3) the use of a staggered grid. In this work, code verification is performed for the incompressible, steady and unsteady, single-phase equations, for which the multi-species system of equations in MFIX reduces to the typical incompressible Navier-Stokes equations. The single-phase, incompressible governing equations in MFIX are verified to be
spatially second order accurate on uniform Cartesian meshes for (1)
2D, steady-state, laminar, Poiseuille flow through a rectangular
channel, (2) 2D, steady-state flow with general sinusoidal
manufactured solutions, (3) 3D, steady-state flow with a simplified
sinusoidal and a general curl-based manufactured solution, and (4)
3D, unsteady flow with a time-dependent manufactured solution. For the 3D study, a general curl-based, divergence-free manufactured solution is introduced. The temporal order is verified by performing a combined spatial-temporal refinement study, and the first-order Euler implicit time-stepping algorithm in MFIX is verified. In the presentation, code verification of baseline two-phase governing equations will be addressed either with manufactured solutions or unit-testing of the multiphase modeling terms.
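The MMS workflow sketched in this abstract, choosing a manufactured solution and deriving the source term that forces the governing equations to admit it, can be illustrated symbolically. The toy example below does this for a 1D steady advection-diffusion equation using sympy; the actual MFIX studies use the full multiphase Navier-Stokes system.

import sympy as sp

x = sp.symbols("x")
a, nu = sp.symbols("a nu", positive=True)   # advection speed, viscosity

# Chosen manufactured solution (smooth, with nontrivial derivatives).
u_m = sp.sin(sp.pi * x) + sp.Rational(1, 2) * sp.cos(2 * sp.pi * x)

# Governing operator L(u) = a*u_x - nu*u_xx; the MMS source term is
# S = L(u_m), added to the code's right-hand side so u_m is exact.
source = a * sp.diff(u_m, x) - nu * sp.diff(u_m, x, 2)
print("S(x) =", sp.simplify(source))

# lambdify gives a callable to evaluate S on the code's grid points.
S = sp.lambdify((x, a, nu), source, "numpy")
print("S(0.25, a=1, nu=0.01) =", S(0.25, 1.0, 0.01))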
Experimental Data to Validate Multiphase Flow Code Interface
Tracking Methods
V&V2013-2330
Matthieu A. Andre, Philippe Bardet, The George Washington
University
Multiphase flows are very complex to model accurately. They involve
up to five different flow regimes that still elude fundamental
understanding. To develop high-fidelity computational fluid dynamics
(CFD) codes, it is necessary to develop and validate robust interface
tracking methods (ITM). The balance of stresses must be considered
at all scales for a physics-based representation. Of particular interest
are the small scale disturbances and breakup of the interface
continuity which lead to the formation of liquid droplets and gas
bubbles. This has important implications in mixing, spraying, heat
and mass transfers. The current experimental study offers high
spatio-temporal resolution data at Re = 2.5x10^4 to 1.2x10^5 to
validate ITM. A uniform rectangular water jet is created by a
contoured nozzle and the relaxation of the laminar boundary layer
exiting the nozzle triggers instabilities that quickly deform the free
surface and create very steep millimeter scale waves, which can
eventually entrap air bubbles. This canonical geometry allows a low level of disturbance and well-controlled boundary conditions in both
phases. A dual camera system with high magnification optics is
employed to simultaneously measure the free surface profile using
Planar Laser Induced Fluorescence (PLIF) and the liquid phase
velocity field using Particle Image Velocimetry (PIV). Both methods
are performed in a time-resolved manner giving information on the
evolution of the surface profile and how it correlates to the dynamics
of the flow beneath it. These results shed new light on the dynamics of forced waves generated by the shear layer and reveal several
mechanisms of air entrainment in the trough of the waves, providing
the velocity field surrounding this event. The high curvature of the free
surface generates vorticity which plays an important role in the
entrapment of air. The high repeatability of these events in our facility
offers good statistical convergence and an easy delimitation of the
various flow regimes. Finally, preliminary results on gas transfer
between the two phases are reported. This is accomplished with
PLIF: quenching of the fluorescence of Pyrene Butyric Acid (PBA) by
dissolved O2 allows visualizing the amount of oxygen transferred into
the liquid, which can be related to the surface renewal rate. These results are a first step toward validating high-fidelity multi-physics codes coupling multiphase flow and phase exchange.
Applying Sensitivity Analysis to Model and Simulation
Verification
V&V2013-2333
David Hall, Allie Farid, SURVICE Engineering Company,
James Elele, Naval Air Systems Command
Common code verification techniques are designed primarily for use
during code development. It is generally assumed that verification
activities are conducted while models and simulations (M&S) are
being constructed, and that they are part of a well-designed M&S
development process. However, our experience within the U.S.
Department of Defense (DOD) has been that many M&S utilized by
various system acquisition programs throughout DOD have a long and cloudy history: they have been developed over many years in many and varied iterations, their coding language has evolved during use, they have been used for diverse purposes, and the history of their verification is equally murky, or at least undocumented. The
result of this verification history, or lack thereof, is that many of these
codes have unknown and certainly undocumented credibility. It
would be very expensive to “reverse-engineer” documents that
would be required to conduct a detailed and comprehensive
verification program, particularly for those codes which simulate
complex processes or widely integrated systems. As a consequence,
those who would use the results of those M&S are presented with a
dilemma: to spend a lot of money and time on verification, at the risk
of other program priorities; or to find another method to demonstrate
adequate code credibility that will not break the bank. This paper will
discuss the relative merits of conducting structured sensitivity
analyses as a “substitute” for detailed code verification activities. The
relative accuracy of the code can be inferred from an analysis of the
responses of the code to variations in the input data, as long as
those sensitivities are well structured to support exercise of the paths
through the code that are of most importance to the intended use.
This process also relies on subject matter expert review of the code’s
responses to varying stimuli as compared with their expert judgment
of how the code should respond. Some examples will be provided of
our experiences with this technique.
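The structured sensitivity analysis advocated here can be as simple as a one-at-a-time sweep over the inputs that drive the paths of most importance, with the responses reviewed against expert expectation. Below is a minimal, generic sketch of such a sweep around a nominal input set; the model function and parameter names are placeholders, not any DOD code.

import numpy as np

def model(inputs):
    """Placeholder for the legacy simulation being exercised."""
    velocity, mass, drag = inputs
    return mass * velocity**2 / (1.0 + drag * velocity)

nominal = np.array([100.0, 10.0, 0.05])
names = ["velocity", "mass", "drag"]
base = model(nominal)

# One-at-a-time sweep: +/-10% on each input, others held at nominal.
for i, name in enumerate(names):
    for frac in (-0.10, +0.10):
        perturbed = nominal.copy()
        perturbed[i] *= 1.0 + frac
        rel_change = (model(perturbed) - base) / base
        print(f"{name} {frac:+.0%} -> response change {rel_change:+.2%}")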
Challenges for Computational Social Science Modeling and
Simulation in Policy and Decision-Making
V&V2013-2336
Laura McNamara, Timothy Trucano, Sandia National
Laboratories
Rapid growth in computational social modeling and simulation techniques has generated buzz about an emerging revolution in
social science research. Scientists from a range of disciplines are
using computational methodologies in ways that challenge traditional
social and behavioral science approaches to complex social, cultural,
and behavioral phenomena. As this field has grown, its novel (and to
some exotic) methods have captured the attention of policy and
decision makers. Particularly since 9/11/2001, national security
institutions have invested in computational models for a range of
problems, from counterterrorism to religious radicalization to stability
and reconstruction operations. However, whether new computational
social modeling and simulation approaches effectively leverage
existing social science knowledge for decision-making is a topic of
considerable debate. Advocates for the use of modeling and
simulation in decision-making are optimistic about its potential to revolutionize how government institutions make decisions, from
epidemiology to military training. At the same time, it is widely
recognized that the theories, frameworks and techniques used in
computational social science are immature, and that evaluation
principles are lacking. In this paper, we scrutinize computational
social science models and simulations from three perspectives: as
modeling and simulation technologies; as a form of social science;
and as analytical tools for human users. Our paper draws on several
decades’ worth of work in computational science in two Department
of Energy National Laboratories; a 900-source, interdisciplinary
literature review focused on computational social science modeling
and simulation projects; and an interdisciplinary workshop we
convened in 2011, in which participants discussed how
computational social modeling and simulation can benefit from
adoption of verification, validation, and uncertainty quantification
frameworks used in physics and engineering. Recognizing that social
and behavioral science bring particular epistemological challenges,
we assert that computational social science can benefit from closer
study and adoption of established methodologies for model
evaluation. For example, both the Department of Energy and the
Department of Defense require that modeling and simulation efforts derive their V&V requirements from a specified application domain.
principles include employing sound software engineering practice;
conducting verification studies ahead of validation experiments;
identifying appropriate validation referents; gathering validation
quality data; and addressing uncertainty throughout the entire
modeling process. Lastly, evaluating the impact of modeling and simulation technology on human analytic work remains an underappreciated problem for many areas of modeling and simulation, beyond computational social science.
Validation Data and Model Development for Nuclear Fuel
Assembly Response to Seismic Loads
V&V2013-2334
Noah Weichselbaum, Philippe Bardet, Morteza Abkenar,
Oren Breslouer, Majid Manzari, Elias Balaras, The George
Washington University, W. David Pointer, Oak Ridge National
Laboratory, Elia Merzari, Argonne National Laboratory
High-fidelity computational fluid dynamics (CFD) models of fluid
structure interactions (FSI) are now mature enough to tackle complex
transient problems such as the seismic response of nuclear fuel subassemblies. If adequately validated, such codes hold great potential
for improving designs and assuring the safe operation of nuclear
reactors. A joint effort is on-going in collaboration with
experimentalists and numericists at the George Washington
University, Oak Ridge National Laboratory, and Argonne National
Laboratory to validate FSI codes for the response of nuclear fuel
assemblies during seismic events. The experimental test facility at
the George Washington University is an index matched facility that
allows measuring simultaneously the time-resolved velocity field
between the rods and their three-dimensional motion using non-intrusive optical diagnostics. The fuel sub-assembly is a 6x6 bundle with 6 spacer grids. The test section with circulating fluid is mounted on a six-degree-of-freedom shake table where a range of ground motions is simulated. In this scaled experiment, velocity fields are measured at the center of the fuel bundle within a two-by-two tube area by means of particle image velocimetry (PIV). The surrogate fuel rod vibrations during the simulated ground motion are measured with laser-induced fluorescence tomography (LIFT). The materials and working fluid were selected to match the natural frequency and Reynolds number of a typical pressurized water reactor. In this validation facility, the initial and boundary conditions are highly controlled to provide a uniform flow at the entrance of the test section. Turbulence fluctuations are minimized by a series of honeycombs and meshes followed by a contoured nozzle. Uniform inlet flow in the test section is obtained by cutting the boundary layers that develop along the nozzle walls. The facility geometries were optimized with the help of preliminary simulations. Data garnered from the experiment will be used to validate two high-fidelity fully coupled FSI codes. They are based on simulations of increasing refinement, culminating in large eddy simulation combined with a finite element analysis model.
CFD Validation vs PIV Data Using Explicit PIV Filtering of CFD
Results
V&V2013-2340
Laura Campo, Ivan Bermejo-Moreno, John Eaton, Stanford
University
Experimental datasets acquired using particle image velocimetry
(PIV) are often used as validation cases for computational fluid
dynamics (CFD) codes. It is important to have accurate estimates of
the PIV uncertainties and biases to reach meaningful conclusions.
Usually uncertainties from PIV experiments are quoted as fixed
percentages of the measured values. These estimates do not
account for biases or spatial variation of the PIV measurement
uncertainty throughout the flow field. For flows with large spatial
gradients (for example boundary layers and shock waves), these
effects can be significant. This talk describes a novel framework to
perform CFD/PIV comparisons and in the process to quantify biases
in PIV datasets. These biases originate from spatial averaging across
interrogation regions, particle travel, seeding density and sampling,
and non-zero particle Stokes number. It is generally not possible to
deconvolve or remove these sources of biases and uncertainties
from a PIV dataset. However, the effects of interrogation region size,
particle travel, seeding bias, and non-zero Stokes number can be
simulated numerically and superimposed onto a CFD dataset. This
allows for estimation of experimental biases and uncertainties as well
as direct comparison between the transformed CFD results and the
PIV experiment to assess the simulation’s validity. The simplest
approach involves spatially averaging the CFD velocity field to match
the resolution of the interrogation regions in the PIV experiment. This
is accomplished by averaging the CFD data at all spatial locations
which fall within a given interrogation region and then assigning the
resulting velocity vector to the center of that region. For a first order
approximation of the effects of particle travel, the averaging is not performed only over the CFD points in the interrogation region, but instead over a spatial stencil determined by the local CFD velocity
field. A more accurate method which accounts for particle travel,
sampling, and seeding density effects involves sampling the CFD
velocity field at N points within a given interrogation region and
integrating the particle paths which emanate from each sample point
over the PIV inter-frame time. This method can be adapted to include
the Stokes drag on the particles to account for non-zero particle
inertia. Sample results and PIV/CFD comparisons will be discussed
in the presentation.
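The simplest transformation described above, box-averaging the CFD field to the PIV interrogation-window resolution before comparing, is easy to sketch. The snippet below assumes a 2D velocity component on a uniform grid whose dimensions divide evenly by the window size; real PIV processing adds window overlap and weighting.

import numpy as np

def piv_filter(u, window):
    """Average a 2D CFD field over non-overlapping PIV-style
    interrogation windows of size (window x window)."""
    ny, nx = u.shape
    assert ny % window == 0 and nx % window == 0, "grid must tile evenly"
    return u.reshape(ny // window, window,
                     nx // window, window).mean(axis=(1, 3))

# Hypothetical CFD result: a tanh shear layer on a 128x128 grid.
y = np.linspace(-1.0, 1.0, 128)
u_cfd = np.tile(np.tanh(5.0 * y)[:, None], (1, 128))

u_piv = piv_filter(u_cfd, window=16)          # 8x8 PIV-like vectors
y_piv = y.reshape(-1, 16).mean(axis=1)        # window-center coordinates

# The window average smears the steep gradient, mimicking the
# spatial-averaging bias PIV introduces in high-gradient regions.
g_cfd = np.gradient(u_cfd[:, 0], y).max()
g_piv = np.gradient(u_piv[:, 0], y_piv).max()
print(f"peak du/dy: CFD {g_cfd:.2f} -> PIV-filtered {g_piv:.2f}")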
Verification Testing for the Sierra Mechanics Codes
V&V2013-2344
Brian Carnes, Sandia National Laboratories
Large computational mechanics codes are burdened with ensuring that the code has been properly tested, including verification testing.
Because of the large investment in these codes, and the high
consequence events which they simulate, work on verification test
suites and supporting tools is well worth the effort. In this talk, we
present an overview of current verification testing methods and tools
for the Sierra Mechanics codes, which include thermal, solid, and
fluid mechanics modeling capabilities. Details on the overall
methodology, testing infrastructure, specialized tools for verification,
and examples from the test suite are included in the presentation.
Our methodology of code verification is based on coverage of
features actually exercised in user input files. This is done by running
a feature coverage tool (FCT), which compares the syntax in an input
file with a given test suite, producing coverage tables for one-way
(single use of a feature) and two-way (combined use of two features)
feature coverage. The results include references to the actual test
input files and documentation. In addition to regression,
performance, and unit tests, we maintain extensive verification test
suites for each code. The methodology is flexible, but generally the
tests compare the computed solution to a known analytic
benchmark, and perform mesh refinement to assess the order of
convergence. A number of the tests are equipped with self-documenting features that enable a verification test manual to be
generated automatically. For our high Mach number gas dynamics
code, we developed a specialized library to enable rapid deployment
of manufactured solutions. Rather than generate the source terms
needed for the Euler or Navier-Stokes equations in an external tool
(such as Mathematica), we generate them implicitly in the code using
Automatic Differentiation (AD) tools. This frees the developer to focus
only on the form of the manufactured solution primitive variables. We
are also developing scripts to enable automated convergence rate
analysis, both for verification tests with known analytic solution, and
for applications where the exact quantity of interest must be
extrapolated. These are written in Python and will be extensively
tested and verified. Finally we present some representative results
from our verification test suites. These include thermal analyses with
non-standard features such as enclosure radiation, thermal contact,
and chemistry; high speed gas dynamics with manufactured
solutions; and solid mechanics tests including contact, large
deformation and multiple element types.
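For the case mentioned above, where the exact quantity of interest must be extrapolated, the standard tool is Richardson extrapolation from a grid-refinement sequence. The short sketch below, with invented values, shows the observed order and extrapolated value such an automated Python script would report.

import math

# Quantity of interest from three systematically refined grids
# (coarse -> fine), refinement ratio r = 2. Values are hypothetical.
f3, f2, f1 = 1.0520, 1.0310, 1.0255   # coarse, medium, fine
r = 2.0

# Observed order from the three-grid ratio of solution changes.
p = math.log((f3 - f2) / (f2 - f1)) / math.log(r)

# Richardson-extrapolated estimate of the exact value.
f_exact = f1 + (f1 - f2) / (r**p - 1.0)

print(f"observed order p ~ {p:.2f}")
print(f"extrapolated QoI ~ {f_exact:.5f}")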
A Study into the Predictability of Thermal Expansion for
Interior Plastic Trim Components using Different Non-linear
Virtual (CAE) Tools and Their Accuracy
V&V2013-2341
Jacob Cheriyan, Jeff Webb, Ford Motor Company
Meraj Ahmed, Ram Iyer, Eicher Engineering Solutions Inc.
To ensure high quality and facilitate the design of plastic trim components in vehicles, understanding failure under high thermal loads is critical. In addition to physical test methods, computational methods and technologies such as CAE (Computer Aided Engineering) tools are being investigated and implemented in the auto industry to validate designs. This paper covers the different CAE tools used and their accuracy in predicting thermal expansion and warping of vehicle interior trim under prolonged heat exposure. CAE studies are performed in Abaqus and LS-DYNA to predict thermal warping. The Abaqus simulation is based on an implicit method, whereas LS-DYNA is run with an explicit method and a coupled implicit/explicit method. Initial runs are based on a multi-component-level system, with part contact and material creep properties. Non-convergence is observed in LS-DYNA due to contact limitations and dynamic effects of the explicit nature of the code. Abaqus shows more favorable results, and its results converge. Later studies show favorable results in LS-DYNA with a simplified model approach having a single component, no contacts, and no material creep properties. The LS-DYNA results are comparable to the Abaqus results and converge easily. The initial conclusion drawn is that this simulation should be run primarily as an implicit simulation and not an explicit one. Further studies need to be conducted to investigate dynamic effects in a thermal explicit simulation and achieve better convergence for complex systems based on contacts.
Application of a Standard Quantitative Comparison Method to
Assess a Full Body Finite Element Model in Regional Impacts
V&V2013-2345
Nicholas A. Vavalle, Daniel P. Moreno, Joel D. Stitzel, F.
Scott Gayzik, Wake Forest University School of Medicine
The use of human body finite element models (FEMs) is expanding in
the study of injury biomechanics. One such model, of the 50th
percentile male (M50), has been developed as part of the Global
Human Body Models Consortium (GHBMC) project. While FEMs can
provide additional insight into injury mechanisms, they must first be
validated against experimental data. It is common for researchers to conduct qualitative comparisons; however, a survey of recent publications on human body FEMs shows that quantitative comparisons (QCs) are infrequently reported. Many methods for QCs
have been presented in the literature. In the current study, the
validation approach used in the development of the GHBMC M50
model is discussed, including an overview of the experiments
simulated and the QC method used. The GHBMC M50 was created
from medical images of a prospectively recruited and living subject. A
multi-modality imaging protocol employing MRI, upright MRI, CT, and
laser surface scanning was used as the basis for the model’s
geometry. Models of the Head, Neck, Thorax, Abdomen, and Pelvis
and Lower Extremity (Plex) were meshed and validated by centers of
expertise focused on these specific body regions. The regional
models were integrated into the full body model by the study authors,
using a combination of nodal connections, tied contacts, and sliding
contacts. The current model (v3.5) contains 1.3 million nodes, 1.95
million elements, 961 parts and represents a weight of 75.5 kg.
Model development is being tracked in two abdominal impact
validation cases using a method for model comparisons to
experimental data supported by ISO (ISO/TC 22/SC 10/WG 4). This
standard uses a combination of the Correlation Analysis (CORA)
corridor method and the enhanced error assessment of response
time histories (EEARTH). The EEARTH method parses the
contributions of phase, magnitude, and slope (or shape) to the error.
Two abdominal impacts, a bar in a frontal impact and a hub in an
oblique impact, were simulated. Both force curves were within or
near the corridors over the course of the simulated time. The QC
results for the bar impact were 0.86, 0.75, 0.52, and 0.93 for phase,
magnitude, shape and corridor, respectively. Similarly, for the hub
impact the ratings were 0.78, 0.91, 0.44, and 0.96. Both impacts
were well-matched in phase, magnitude, and corridor ratings and
had weaker matches in shape ratings. The GHBMC M50 aims to be
widely used among injury biomechanics researchers and will help to
advance the safety of vehicles in coming years.
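As a rough illustration of the phase/magnitude decomposition such ratings build on (CORA and EEARTH themselves are considerably more elaborate), the sketch below estimates the phase shift between a simulated and an experimental force history by cross-correlation, and the magnitude mismatch by norm ratio. The signals are synthetic.

import numpy as np

def phase_and_magnitude(sim, exp, dt):
    """Crude phase (cross-correlation lag) and magnitude (norm ratio)
    comparison of two time histories."""
    corr = np.correlate(sim - sim.mean(), exp - exp.mean(), mode="full")
    lag = (np.argmax(corr) - (exp.size - 1)) * dt       # sim delay [s]
    sim_aligned = np.roll(sim, -int(round(lag / dt)))   # undo the delay
    mag = np.linalg.norm(sim_aligned) / np.linalg.norm(exp)
    return lag, mag

def pulse(t, t0):
    return np.exp(-((t - t0) / 0.005) ** 2)

dt = 1e-4
t = np.arange(0.0, 0.05, dt)
f_exp = 1000.0 * pulse(t, 0.020)      # "experimental" force [N]
f_sim = 900.0 * pulse(t, 0.022)       # model response: late and weak

lag, mag = phase_and_magnitude(f_sim, f_exp, dt)
print(f"phase lag ~ {lag * 1e3:.1f} ms, magnitude ratio ~ {mag:.2f}")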
Investigation of Uncertainty Quantification Procedure for V&V
of Fluid-Structure Thermal Interaction Estimation Code for
Thermal Fatigue Issue in a Sodium-cooled Fast Reactor
V&V2013-2350
Masaaki Tanaka, Japan Atomic Energy Agency, Shuji Ohno,
Oarai R&D Center, JAEA
High-cycle thermal fatigue caused by thermal mixing phenomena has been one of the most important issues in the design and safety of an advanced sodium-cooled fast reactor, the Japan Sodium-cooled Fast Reactor (JSFR). In the upper plenum of the reactor vessel, a perforated plate called the Core Instruments Plate (CIP) is installed to support thermocouples and other sensors for operating and safety measures. Below the CIP, hot sodium comes from fuel subassemblies, and cold sodium flows out from control rod channels and blanket fuel subassemblies located in the outer region of the core. Accordingly, significant temperature fluctuation may be caused by the mixing of the high- and low-temperature fluids in the upper region of the core. When fluid temperature fluctuation is transmitted to the CIP surface, the upper guide tubes, and the driving system of the control rods, cyclic thermal stress may be induced in the structures. Such a cyclic stress may cause crack initiation and crack growth in the structures, depending on the frequency characteristics and the amplitude of the temperature fluctuation. Since it is difficult to conduct large-scale experiments covering the JSFR operating conditions, establishment of a numerical estimation method for the high-cycle thermal fatigue issue is desired. A numerical simulation code, MUGTHES, has therefore been developed to predict thermal mixing phenomena and the thermal response of structures in fields with thermal interaction between fluid and structure, and verification and validation studies are required in the development of the code and the numerical estimation method. In order to specify an actual V&V procedure in the development process, the calculation module for the thermal-hydraulics was targeted first, and uncertainty quantification was examined for several simple laminar flow problems using the Grid Convergence Index (GCI) based on the ASME V&V 20 guideline, the modified version of Roache’s GCI, and the Eça-Hoekstra least-squares version of the GCI. Couette-Poiseuille flows with zero, adverse, and favorable pressure gradient conditions, for which theoretical Navier-Stokes solutions exist, were chosen as reference problems, and the backward-facing step flow experiment was also employed as a practical problem. Uncertainty quantification was conducted using MUGTHES as the reference, with the in-house finite difference code AQUA and the commercial computational fluid dynamics code STAR-CD used for code-to-code comparison. Through these examinations, the applicability of these approaches and the appropriateness of the quantification results for the MUGTHES code were confirmed.
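For reference, the Roache-style GCI evaluation named above reduces, in its basic form, to a few lines once solutions from systematically refined grids are available. The sketch below uses hypothetical point values and the customary safety factor of 1.25; the ASME V&V 20 and least-squares variants add refinements beyond this.

def gci_fine(f_fine, f_coarse, r, p, fs=1.25):
    """Basic Grid Convergence Index on the fine grid (fractional)."""
    e21 = abs((f_coarse - f_fine) / f_fine)   # relative solution change
    return fs * e21 / (r**p - 1.0)

# Hypothetical centerline-velocity values from two grids, refinement
# ratio r = 2, with a formally second-order scheme (p = 2).
f_fine, f_coarse = 0.1472, 0.1499
print("GCI_fine ~ {:.2%}".format(gci_fine(f_fine, f_coarse, r=2.0, p=2.0)))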
Examining the Dampening Effect of Pipe-Soil Interaction at
the Ends of a Subsea Pipeline Free Span using Coupled
Eulerian-Lagrangian Analysis and Computational Design of
Experiments
V&V2013-2346
Marcus Gamino, Samuel Abankwa, Ricardo Silva,
Leonardo Chica, Paul Yambao, Raresh Pascali, Egidio
Marotta, University of Houston, Carlos Silva, Juan-Alberto
Rivas-Cardona, FMC Technologies
Vortex-induced vibration (VIV) arises when free vibration of the structure occurs due to the vortices that develop behind the structure. This free vibration is a major fatigue concern in subsea
piping components including free spans of pipelines, which are
subsea pipeline sections not supported by the seabed. The
amplitude of displacement of a pipeline free span is affected by
various factors including dampening. The types of dampening that
affect this amplitude include structural dampening, hydrodynamic
dampening, and dampening at the ends of the free span. The
purpose of this research is to examine the dampening effects from
the pipe-soil interaction at the ends of the free span undergoing
oscillation due to vortex-induced vibration (VIV). DNV Recommended
Practice F105 states that free span boundary conditions must
adequately represent the pipe-soil interaction and the continuity of
the pipeline. To mimic pipe-soil interaction at the ends of the free
span in a finite element (FE) model, coupled Eulerian-Lagrangian (CEL)
analysis within ABAQUS is used to accurately model the soil
response to lateral cyclic motion of the free span’s ends.
Computational design of experiments (DOE) is utilized to examine how different input variables affect the maximum amplitude of
displacement of the free span by analyzing a range of different soil
densities, lengths of pipe contact with the soil, and pipe embedment depths within the soil. A Box-Behnken surface-response design was used to capture the non-linear responses throughout the design space. Maximum stress at the ends of the effective length of the pipe within the finite element (FE) model increased as the pipe’s embedment within the soil changed, the soil density increased, and the pipe’s contact with the soil decreased.
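A Box-Behnken design for the three factors named here (soil density, contact length, embedment depth) consists of the midpoints of the cube edges plus center points. The sketch below builds that 15-run design in coded units and maps it to hypothetical factor ranges; the study's actual ranges are not given in the abstract.

import itertools
import numpy as np

# Coded Box-Behnken design for 3 factors: all (+/-1, +/-1) pairs on
# two factors with the third held at 0, plus 3 center runs.
runs = []
for i, j in itertools.combinations(range(3), 2):
    for a, b in itertools.product((-1.0, 1.0), repeat=2):
        row = [0.0, 0.0, 0.0]
        row[i], row[j] = a, b
        runs.append(row)
runs += [[0.0, 0.0, 0.0]] * 3
design = np.array(runs)                       # 15 x 3 coded matrix

# Hypothetical physical ranges (low, high) for each factor.
ranges = {"soil_density": (1600.0, 2000.0),   # kg/m^3
          "contact_len":  (1.0, 5.0),         # m
          "embedment":    (0.1, 0.5)}         # m
lows = np.array([lo for lo, hi in ranges.values()])
highs = np.array([hi for lo, hi in ranges.values()])
physical = lows + (design + 1.0) / 2.0 * (highs - lows)

print(design.shape[0], "runs; first three:")
print(physical[:3])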
CFD and Test Results Comparison of a High Head Centrifugal
Stage at Various Locations
V&V2013-2352
Syed Fakhri, Dresser-Rand Company
A high-head centrifugal stage that had been previously tested
showed lower surge margin than targeted. It was theorized that the
stage range could be improved by an overall re-design of the
stationaries. Key stationary components of the centrifugal stage
(diffuser, return bend, return channel, return channel vane) were
improved using analytical tools. Each component was methodically
optimized using CFD. The resulting stage was built, instrumented
and tested. Comparisons of the trends observed in the experimental
and analytical work will be offered and the agreement between the
analytical and empirical results will be assessed. A qualitative
analysis of the flow field in the previous geometry showed the
presence of a low momentum region at the shroud side that shifted
to the hub side just prior to the onset of diffuser stall and subsequent
stage surge. It was postulated that stall could be delayed by
improving the flow-field in the diffuser and delaying the shift of the
low momentum region. CFD analyses on several iterations of diffuser
geometry were run until the shift was significantly delayed. The final
diffuser was pinched and tapered on both hub and shroud sides. The
return bend radius was increased, the return channel was pinched, and a
new return channel vane was designed to match the improved flow
field. To verify improvement, the analytical results for the previous
geometry and the final new geometry were compared. A noticeable
improvement in flow quality was observed between the two designs.
It was observed that the size of the low-momentum region had been
significantly reduced (figures will be shown). Figures will be provided
to show the absolute velocity flow angle at the diffuser exit, where
tangential flow has been significantly reduced for the new design.
The delay of low momentum shift predicted a 13% improvement in
stall margin. A loss coefficient comparison for the diffuser and return
channel also showed marked improvement. The analytical results for
the new build were then validated against the tested results. The
surge margin was significantly improved and compared well with
CFD stall prediction based on low momentum shift. Head coefficient
as predicted by CFD also compared well with tested results for the
overall range. A comparison of pressure data at various locations in
the stationaries was also done to validate the CFD prediction. In
conclusion, it was determined that analytical CFD can be used
effectively as a low-cost alternative to testing to improve
performance of the stationaries of a centrifugal compressor stage.
Uncertainty Quantification in Reduced-order Model of a
Resonant Nonlinear Structure
V&V2013-2357
Rajat Goyal, Anil K. Bajaj, Purdue University
This work is focused on Uncertainty Quantification (UQ) for a
nonlinear MEMS T-beam structure exhibiting 1:2 autoparametric
internal resonance. The sources of uncertainty in fabrication and
operation of the device and their quantification techniques are
investigated. The nonlinear response prediction for the system is
formulated by using a two-mode model constructed with the lowest
two linear modes of the structure in conjunction with the nonlinear
Lagrangian representing the dynamics of the beam structure as well
as the excitation mechanism. Both the linear structural modal dynamics and the nonlinear reduced-order system’s dynamics contribute to the uncertainty in response prediction. Thus, UQ for the nonlinear resonant MEMS system is formulated in two steps. First, uncertainty propagation and quantification in the linear analysis outputs, namely the mode shapes, natural frequencies, and tuning, is investigated. Secondly, uncertainty in the nonlinear response prediction, as obtained from the averaged equations for the system, is evaluated.
Application of UQ techniques to the linear modal characteristics of the structure determines the effects of various parametric uncertainties on the model output. Sensitivity analysis is used to reduce the number of parameters by filtering out the non-critical ones. Response surface
analysis using generalized polynomial chaos (gPC) is used to
generate a mathematically equivalent Response Surface model. A
detailed description of the gPC collocation method is also presented
and it is shown that there are five important uncertain parameters.
Subsequently, the results of uncertainty propagation through the linear system are used to evaluate the uncertainty in the nonlinear response prediction through the reduced-order model. From the nonlinear system’s UQ analysis, the criticality of internal mistuning relative to the other parameters is established. Finally, the reliability of the MEMS structure for a given set of parameter uncertainties is also presented.
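The gPC collocation idea referenced above can be shown in one dimension: project a response function of a standard-normal parameter onto Hermite polynomials using Gauss-Hermite quadrature, then read statistics off the coefficients. This is a generic sketch, not the authors' five-parameter MEMS surrogate; the response function is a placeholder.

import math
import numpy as np
from numpy.polynomial import hermite_e as He

def response(xi):
    """Placeholder response (e.g., a natural frequency) vs. a
    standard-normal uncertain parameter xi."""
    return 1.0 + 0.1 * xi + 0.05 * xi**2

# Gauss-Hermite quadrature for the probabilists' weight exp(-xi^2/2).
nodes, weights = He.hermegauss(20)
weights = weights / math.sqrt(2.0 * math.pi)   # normalize to a PDF

# Project onto He_k; E[He_j He_k] = k! delta_jk under N(0,1).
order = 4
coeffs = []
for k in range(order + 1):
    ek = np.zeros(k + 1)
    ek[k] = 1.0                                # coefficients of He_k
    proj = np.sum(weights * response(nodes) * He.hermeval(nodes, ek))
    coeffs.append(proj / math.factorial(k))

mean = coeffs[0]
var = sum(math.factorial(k) * coeffs[k]**2 for k in range(1, order + 1))
print("gPC coefficients:", np.round(coeffs, 4))
print("mean ~", round(mean, 4), " variance ~", round(var, 5))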
An Independent Verification and Validation Plan for DTRA
Application Codes
V&V2013-2353
David Littlefield, University of Alabama at Birmingham,
Photios Papados, US Army Engineer Research and
Development Center, Jacqueline Bell, Defense Threat
Reduction Agency
Many first-principles codes used by the Defense Threat Reduction
Agency (DTRA) have been well accepted within the DTRA
community. However, because of the specific problems of interest to
DTRA, often the results cannot be published in peer reviewed
journals, and the verification and validation regime has not been
socialized in the broader scientific community. Likewise, the use of
specific DTRA application codes is often questioned with respect to
independent verification and validation (IV&V) of the codes. IV&V is
also an important step in transitioning aspects of these first-principles codes into validated DTRA modeling and simulation tools to
enable rapid access for target planning, emergency response and
consequence assessment capabilities. The purpose of this effort is to
give DTRA greater confidence in the validity of the results for specific
classes of problems by having an external review of the verification
and validation regime currently used for each code and suggestions
for improvements as required. Specific uncertainty quantification
(UQ) and sensitivity studies are outlined for two application codes: (1)
the SHAMRC code and (2) the coupled FEFLO/SAICSD code. In developing the IV&V
plan we have reviewed the methods, measures, and standards
necessary to assess the credibility of codes and models and to quantify confidence in calculations. This review is the entry point for advanced
technologies in support of establishing credibility in code predictions.
The plan also includes sensitivity analysis and UQ techniques and
methodology.
Calculation of the Error Transport Equation Source Term
using a Weighted Spline Curve Fit
V&V2013-2358
Don Parsons, Ismail Celik, West Virginia University
The error transport equation has been used to solve for the
discretization error in both one-dimensional and two-dimensional
solutions. The primary difficulty in the solution of the error transport
equation is the determination of the error source term. If the analytic
solution is known beforehand the error source term can be solved for
exactly by substituting the analytic solution into the discretized
equation. Given a real problem the analytic solution is not known and
thus the error source term must be developed in a different manner.
In this study, the error source term is calculated by fitting the
numerical solution with a series of polynomials using a weighted
spline method. The fitted solution is then substituted into the original
differential equation to develop the error source term for use in the
error transport equation. This method was first applied to a steady
one-dimensional convection diffusion problem with a known solution.
Solutions were obtained at grid spacing (h) varying from 0.05 down
to 0.008. The exact error source term was compared to the
calculated error source term at each grid density to determine the
grid dependence of the bias. The bias between the exact and
calculated error source term shows a first order dependence on grid
density. The method was then applied to the two-dimensional
solution of a square driven cavity problem. The results are
encouraging and indicate that the error transport equation can be
used to solve for the discretization error using at most two separate
grids.
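As a schematic illustration of the spline-fit idea (a minimal sketch, not the authors' implementation), the following Python fragment fits a smoothing spline to a sampled 1-D convection-diffusion solution and substitutes it into the continuous operator to estimate the error source term; the equation, coefficients, and noise level are illustrative assumptions only.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    # Steady 1-D convection-diffusion: u*dphi/dx - Gamma*d2phi/dx2 = 0
    u, Gamma = 1.0, 0.1

    # Suppose phi_num is a numerical solution sampled on grid x (here we
    # mimic one by perturbing the exact solution with small noise).
    x = np.linspace(0.0, 1.0, 101)
    Pe = u / Gamma
    phi_exact = (np.exp(Pe * x) - 1.0) / (np.exp(Pe) - 1.0)
    phi_num = phi_exact + 1e-4 * np.sin(40 * np.pi * x)

    # Fit a smoothing spline to the numerical solution.
    spl = UnivariateSpline(x, phi_num, k=4, s=1e-7)

    # Substitute the spline into the continuous operator; the residual
    # approximates the error source term for the error transport equation.
    source = u * spl.derivative(1)(x) - Gamma * spl.derivative(2)(x)
    print("max |estimated error source| =", np.abs(source).max())

Repeating this on a refined grid sequence would expose the grid dependence of the bias between the exact and estimated source terms discussed above.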
Analytical Modeling of the Forces and Surface Roughness in
Grinding Process with Microstructured Wheels
V&V2013-2360
Taghi Tawakoli, Hochschule Furtwangen University, Javad
Akbari, Ali Zahedi, Sharif University of Technology
Machining with undefined cutting edges, and specifically grinding, is a promising machining technique, especially for treating difficult-to-cut materials. Grinding, the most developed and well-known process of this category, is complicated and continuously subject to investigation and modification. The complexity of the process, which is due to its random nature, makes optimization and prediction cumbersome, and this is intensified further for new techniques applied to the conventional grinding process. Structured grinding wheels are novel tools that have introduced advantages in the grinding of challenging materials, among them reduced forces, better surface integrity, lower tool wear and higher precision. Before these novel tools can be used extensively and reliably, however, better understanding and predictive capabilities are required. Most current grinding models are based on statistical surface data and are not able to model arbitrarily treated and structured wheel surfaces. In this work a kinematic model is proposed for predicting the action of arbitrarily structured grinding wheels on the workpiece surface. The contact conditions of individual cutting grains are interpreted to yield the grinding forces and the produced surface characteristics. Active grain recognition and the different grinding phases (rubbing, ploughing and cutting) are also considered. Experimental results are used to calibrate and verify the proposed method.
The Dyson Problem Revisited
V&V2013-2362
Scott Ramsey, Los Alamos National Laboratory
The purpose of this work is to derive shock-free, self-similar solutions
of the two-dimensional (2D) axi-symmetric (r-z) compressible flow
equations, subject to the polytropic gas equation of state. Though
self-similar solutions of this system have previously been investigated
by (among others) Dyson [Ref. 1], the goal of this work is to present a
modern, generalized derivation and several particular solution
archetypes, for use in code verification studies of multi-dimensional
compressible flow solvers. Dyson-like self-similar solutions to the 2D
axi-symmetric compressible flow equations follow from space-time
separability assumptions wherein the r-velocity is proportional to r,
and the z-velocity is proportional to z. Under these conditions, the
governing system is reduced to two coupled nonlinear ordinary
differential equations (ODEs) in the time-dependent functions that
characterize the r and z-velocity fields; the otherwise arbitrary density,
pressure, and specific internal energy associated with the reduced
system are subject to constraints arising from the physical necessity
of single-valued flow variables. Despite this constraint, solutions
corresponding to isothermal, isentropic, and shell-like flows can be
derived, the purely 2D features of which include elliptical contours of
constant density with time-varying eccentricity. The Dyson-like
solutions developed as part of this work can be used for code
verification studies of 2D axi-symmetric compressible flow solvers; in
this context they will test the numerical integration of multi-dimensional conservation laws in converging-diverging smooth
flows. Since closed-form solutions are available, rigorous quantitative
metrics such as convergence rates can be used in conjunction with
these solutions. As an example of such an application, a quantitative
code verification study using a Los Alamos National Laboratory
multi-physics production software package (the xRAGE [Ref. 2]
compressible flow solver) is provided using the isothermal results
derived as part of the aforementioned theoretical work. References:
1. F.J. Dyson, “Free Expansion of a Gas (II) – Gaussian Model” (1958).
2. M. Gittings, et al., “The RAGE Radiation-Hydrodynamics Code,”
Comput. Sci. Disc. 1 015005 (2008).
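Since closed-form solutions permit rigorous convergence metrics, a minimal sketch of the observed-order-of-accuracy computation used in such code verification studies is given below (the error values are illustrative, not from xRAGE):

    import numpy as np

    # Discretization errors (e.g., L2 norm vs. the closed-form solution)
    # measured on three systematically refined grids; values illustrative.
    h = np.array([0.04, 0.02, 0.01])        # grid spacings (ratio 2)
    e = np.array([3.2e-3, 8.3e-4, 2.1e-4])  # corresponding error norms

    # Observed order of accuracy between successive grid pairs:
    # p = log(e_coarse/e_fine) / log(h_coarse/h_fine)
    p = np.log(e[:-1] / e[1:]) / np.log(h[:-1] / h[1:])
    print("observed orders:", p)  # should approach the formal order (~2 here)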
Validation and Uncertainty Quantification of Large scale CFD
Models for Post Combustion Carbon Capture
V&V2013-2361
Emily Ryan, William Lane, Boston University, Curt Storlie,
Joanne Wendelberger, Los Alamos National Laboratory,
Christopher Montgomery, URS Corporation
Large scale computational fluid dynamics (CFD) simulations are
being developed to accelerate the design of post-combustion
carbon capture technology as part of the U.S. Department of
Energy’s Carbon Capture Simulation Initiative (CCSI). The CFD
models simulate the multi-phase physics of a large scale solid
sorbent carbon capture system. Due to the geometric size of these
reactors full scale simulations of the entire system are not
computationally feasible. To overcome this challenge the system has
been divided into two separate simulations: one of the adsorber
section and one of the regenerator section. These simulations allow
the detailed physics within the reactors to be investigated; however,
they still require a long simulation time (up to one month per
simulation). As part of the development of these models validation
and uncertainty quantification (UQ) analyses are being done to
investigate the accuracy of the model and to develop a quantitative
confidence in the predicted carbon capture efficiencies. Because of
the infeasibility of designing and conducting validation experiments
on the full-scale carbon capture systems, we use a systematic and
hierarchical approach in designing physical experiments (and/or
utilizing available measurement data sets) for comparison with
predictive computational models. This approach divides the complex
utility scale solid sorbent carbon capture system into several
progressively simpler tiers. This layered strategy incorporates
uncertainty in both parameters and models to assess how accurately
the computational results compare with the experimental data for
different physical phenomena coupling and at different geometrical
scales. The objective of the validation and UQ work is to quantify the
accuracy of the CFD modeling of the carbon capture system and to
quantify our confidence in the model results using a hierarchical
validation approach. In this approach we use smaller scale models
and simplified physics to investigate the accuracy of the various
components of the CFD models and upscaling approaches. The outcome of this validation process will be estimates of the confidence range for system performance metrics. These estimates will be based on the confidence levels developed in the various decoupled unit problems as well as the upscaling procedure of the validation hierarchy. By considering smaller unit problems that isolate particular aspects of the multi-physics of the full-scale system, we will be able to evaluate the contributions of different aspects of the model to the predictive errors, such as the effects of coupling different physics or the geometric upscaling of the coupled physics.
Assessment of a Multivariate Approach to Enable
Quantification of Validation Uncertainty and Model
Comparison Error for Industrial CFD Applications using a
Static Validation Library
V&V2013-2363
Leonard Peltier, Andri Rizhakov, Jonathan Berkoe, Kelly
Knight, Bechtel National Inc., Michael Kinzel, Brigette
Rosendall, Mallory Elbert, Kellie Evon, Bechtel National
Advanced Simulation & Analysis
To qualify computational fluid dynamics (CFD) models as industrial
design tools, a quantification of the goodness of the model for the
intended purpose is required. This step is accomplished through
verification and validation (V&V) of the code using similar validation
experiments which span the physics and parameter ranges of the
application. Following the ASME V&V20-2009 Standard, the
verification and validation (V&V) metrics which characterize the
goodness of the CFD model for a given validation experiment are the
model comparison error and the validation uncertainty. When
validation experiments similar to an intended application do not exist,
there will be a need for an approach that can synthesize estimates of
model comparison error and uncertainty for an application from
validation experiments that are different from the application.
Multivariate approaches have been formulated at Sandia National
Laboratories to extend validation metrics from validation experiments
to other points in application space. They offer a potential to meet
this need. This abstract details a preliminary assessment of the use
of a Sandia multivariate approach to extend validation data from a
validation library to industrial CFD applications. The focus is on defining a suitable set of cases for a usable V&V library and on demonstrating the principle for a number of relatively simple test
cases. The CFD model is the Ansys/Fluent CFD solver. The validation
library is derived from a suite of test cases provided by Ansys/Fluent
for benchmarking of Ansys/Fluent CFD models. Methods outlined in
the ASME V&V20-2009 Standard are followed to estimate model
comparison error and validation uncertainty for each of the test cases
in the validation library. The assessment approach is to divide the
validation library into a subset to be used as a source for validation
data and a subset to be used as a surrogate application space.
Extensions of the validation metrics to the surrogate application
space are done using the Sandia multivariate approach.
Comparisons of the assessed application metrics to the
corresponding multivariate extensions show viability of the approach.
The ability of the Sandia method to relate the validation experiments
to the application is an objective measure for the completeness of
the validation library relative to the physics of the intended
application.
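For readers unfamiliar with the two metrics named above, the following minimal Python sketch evaluates them in the manner of ASME V&V20-2009 for a single validation point (all numbers are illustrative):

    import math

    def vv20_metrics(S, D, u_num, u_input, u_D):
        """ASME V&V20-2009 style metrics for one validation point.
        S: simulation result, D: experimental value; u_num / u_input / u_D:
        standard uncertainties from numerics, model inputs, and data."""
        E = S - D                                          # model comparison error
        u_val = math.sqrt(u_num**2 + u_input**2 + u_D**2)  # validation uncertainty
        return E, u_val

    # Illustrative numbers only: if |E| >> u_val, the unexplained model-form
    # error is significant; if |E| <~ u_val, it is hidden within the noise.
    E, u_val = vv20_metrics(S=101.3, D=98.7, u_num=0.8, u_input=1.1, u_D=1.5)
    print(f"E = {E:+.2f}, u_val = {u_val:.2f}")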
An Ensemble Approach for Model Bias Prediction
V&V2013-2364
Zhimin Xi, University of Michigan - Dearborn, Yan Fu, Ford,
Ren-jye Yang, Ford Research & Advanced Engineering
Model validation is a process of determining the degree to which a
model is an accurate representation of the real world from the
perspective of the intended uses of the model. In reliability based
design, the intended use of the model is to identify an optimal design
with the minimum cost function while satisfying all reliability
constraints. It is pivotal that computational models be validated before conducting the reliability-based design. This paper
presents an ensemble approach for model bias prediction in order to
correct predictions of computational models. The basic idea is to first
characterize the model bias of computational models, then correct
the model prediction by adding the characterized model bias. The
ensemble approach is composed of two prediction mechanisms: (1) a response surface of the model bias, and (2) copula modeling of the relationships between the design variables and the model bias and between the model prediction and the model bias. Advantages of both mechanisms are utilized with an accuracy-based weighting
approach. A vehicle front-impact design example is used to demonstrate the effectiveness of the proposed methodology.
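A minimal sketch of the accuracy-based weighting idea (not the authors' copula machinery) is given below; the predictor values and cross-validation errors are hypothetical:

    # Accuracy-based weighting of two hypothetical bias predictors at a
    # query design point (all numbers illustrative).
    bias_rs  = 4.2    # bias predicted by a response-surface mechanism
    bias_cop = 3.1    # bias predicted by a dependence (copula) mechanism
    rmse_rs, rmse_cop = 1.5, 0.9   # cross-validation errors of each mechanism

    # Weight each mechanism by its inverse error, then normalize.
    w_rs, w_cop = 1.0 / rmse_rs, 1.0 / rmse_cop
    total = w_rs + w_cop
    w_rs, w_cop = w_rs / total, w_cop / total

    bias = w_rs * bias_rs + w_cop * bias_cop
    prediction = 120.0                  # raw computational model output
    print(f"corrected prediction = {prediction + bias:.2f} (bias {bias:.2f})")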
Challenges in Implementing Verification, Validation, and Uncertainty Quantification Practices in Governmental Organizations
V&V2013-2368
William Oberkampf, W. L. Oberkampf Consulting
The implementation of verification, validation, and uncertainty
quantification (VVUQ) practices in private organizations, research
institutes, universities, and governmental organizations is difficult for
a wide range of reasons. Some reasons are technical, but many are
practical, cultural, or behavioral-change issues associated with the
particular type of organization involved. This presentation will discuss
a spectrum of difficulties and underlying issues commonly
associated with implementing VVUQ practices in large governmental
organizations. The goal is to improve our understanding of the
diverse and complex factors that influence and motivate these
organizations and their stakeholders. By viewing VVUQ practices at a
high level, instead of focusing on the technical details of individual
practices, one can regard VVUQ practices as the means to assess
and improve the credibility, accuracy, and trustworthiness of
modeling and simulation (M&S) results. While no one would argue
against these goals, they are often overlooked or ignored in
governmental organizations. These organizations’ responses typically
reframe the topic to: (a) the practicality of implementation issues, (b)
the negative impact on the cost or schedule of an analysis or
deployment of a system, (c) the questionable value-added for the
cost incurred by the governmental body, regulatory body, or contracted
organization, or (d) the lack of a governmental authority to dictate
technical practices on regulated or contracted organizations. While
these are reasonable responses, underlying issues may also exist,
including: (a) a bureaucratic resistance to change, (b) the
stakeholders’ conservatism and/or risk aversion, (c) political
interference, or (d) special interest groups’ agendas. This
presentation will briefly review a few large-scale M&S efforts in the
Department of Defense, commercial nuclear power, and
transportation safety with regard to existing VVUQ practices. The
presentation will also explore how impediments to VVUQ progress
may be overcome in constructive and positive ways. By possessing
an improved understanding of some of the overt, as well as some of the underlying issues, individuals both within and outside of the government can more effectively promote the implementation of improved VVUQ practices.
Multi-fidelity Training Data in Uncertainty Analysis of
Advanced Simulation Models
V&V2013-2366
Oleg Roderick, Mihai Anitescu, Argonne National Laboratory
Uncertainty analysis of computational models plays an important role
in many fields of modern science and engineering that rely on
simulation codes to augment the rare and expensive experimental
data. Progress in computing enables models of increasingly high
fidelity and geometric resolution. The analysis tasks for such models
require increasingly more computational resources. The data
obtained by running the highest-quality codes has in effect become
expensive and scarce given the large dimension of problems. But a
high-fidelity simulation model does not exist alone. In the process of
development, there will be simplified versions of the code that are consistent in the input-output format but can run much faster due to lower resolution, lower precision requirements, or fewer enabled routines. If such simplified versions are no longer available, they can be constructed at a modest additional development cost. The need for abundant simulation data and the high cost of setting up and running each simulation can be reconciled by querying the simulation code at multiple levels of fidelity, relating such levels by statistical models, and combining the resulting multi-fidelity training data to make global assessments of the effect of uncertainty. Our suggested approach consists of: (1) simplification of the model by Proper Orthogonal Decomposition based order reduction, where the reduced model produces lower-quality training data; (2) a correction procedure that characterizes the imperfection in the output introduced by the reduction, for which we use Gaussian-process-based automatic learning; and (3) representation of the model's outputs in terms of sources of uncertainty, trained on a weighted combination of high-fidelity data and corrected lower-fidelity data. We use nek5000, a high-fidelity fluid mechanics solver, as a demonstration example. We investigate the effects of geometric uncertainty on drag in channel flows (irregularities introduced into the shape of the channel; the dimension of the uncertainty is 4-20). In each case, we use a training set of full model evaluations of size equal to, or slightly less than, the dimension of the space. Our numerical experiments show that it is possible to describe the effect of high-dimensional uncertainty on the model output, complete with error bars, at a cost less than that of a linear interpolation. In our current work, we are interested in extending the approach to more complex simulation models, and in providing additional theoretical support for our empirically chosen training schemes.
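The following minimal Python sketch illustrates the multi-fidelity correction idea with stand-in model functions and a least-squares correction in place of the Gaussian-process learning described above; all functions and values are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)

    def hi(x):   # stand-in for an expensive high-fidelity code output
        return np.sin(3 * x) + 0.3 * x

    def lo(x):   # stand-in for a cheap reduced-order model of the same quantity
        return np.sin(3 * x)

    # Many cheap low-fidelity samples, few expensive high-fidelity samples.
    x_lo = rng.uniform(0, 1, 200)
    x_hi = np.linspace(0, 1, 6)

    # Learn an additive correction d(x) ~ hi(x) - lo(x) from the paired
    # high-fidelity points (here a least-squares line; a Gaussian process
    # would play this role in the automatic-learning step described above).
    A = np.vstack([np.ones_like(x_hi), x_hi]).T
    coef, *_ = np.linalg.lstsq(A, hi(x_hi) - lo(x_hi), rcond=None)

    # Corrected low-fidelity data can now stand in for high-fidelity data.
    y_corrected = lo(x_lo) + coef[0] + coef[1] * x_lo
    print("mean corrected output:", y_corrected.mean().round(4))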
Effects of Statistical Variability of Strength and Scale Effects
on the Convergence of Unnotched Charpy and Taylor Anvil
Impact Simulations
V&V2013-2369
Krishna Kamojjala, Rebecca Brannon, University of Utah,
Jeffrey Lacy, Idaho National Laboratory
Predictive numerical simulations with large deformations and damage are important to the automotive, defense, safety and aerospace industries, among others. Simulations using conventional deterministic damage models are highly sensitive to the mesh configuration, and it has become a challenge for model developers to achieve mesh-independent
results. Previous research (Brannon et al. [1]) demonstrated a
considerable reduction in mesh sensitivity for a dynamic sphere
indentation problem using statistical variability in strength, and scale
effects. This research presents solution verification (mesh
convergence study) and validation of a single constitutive model with
both strain-to-failure and time-to-failure softening mechanisms on a
focused class of problems that involve impact and fracture of metals.
This class of problems includes unnotched Charpy impact and Taylor
anvil impact tests. To address verification, mesh refinement studies
are presented for these problems, and the effects of statistical
variability of strength and scale effects on the damage patterns are
analyzed. Sensitivity analysis to softening models, failure strains, and
variability in strength is presented. To address validation, focus is placed
on comparing the predicted fields with the data provided through
external laboratory tests. These quantities include the amount of
energy absorbed by the specimen, the bend angle of the specimen,
etc. REFERENCES [1] R. Brian Leavy, Rebecca M. Brannon, and O.
Erik Strack, “The use of sphere indentation experiments to
characterize ceramic damage models”. International Journal of
Applied Ceramic Technology, 7:606-615, 2010.
The Language of Uncertainty and its Use in the ASME B89.7
Standards
V&V2013-2371
Edward Morse, UNC Charlotte
Dimensional metrologists utilize standardized terms for the expression
of uncertainty in the evaluation of physical measurements. The ASME
B89 standards committee (Dimensional Metrology) employs its
Division 7 (Measurement Uncertainty) to produce standards for the
expression, use, and analysis of measurement uncertainty. This
presentation will summarize the main ideas and vocabulary of
uncertainty as used in this setting, and examine the relationship to
the uncertainty expressed in model validation. A particular standard
B89.7.3.2-2007 (Guidelines for the Evaluation of Dimensional
Measurement Uncertainty) will be discussed in detail, with
applications to measurements taken with coordinate measuring
machines (CMMs).
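As background, a minimal GUM-style uncertainty budget of the kind formalized in the B89.7 documents might be computed as follows (component values are illustrative only):

    import math

    # Minimal GUM-style uncertainty budget for a CMM length measurement
    # (component values are illustrative only, in micrometres).
    components = {
        "repeatability (Type A)": 0.8,
        "scale calibration (Type B)": 1.2,
        "thermal expansion (Type B)": 0.5,
    }

    u_c = math.sqrt(sum(u**2 for u in components.values()))  # combined std. unc.
    k = 2.0                  # coverage factor, ~95 % under a normal assumption
    U = k * u_c              # expanded uncertainty
    print(f"u_c = {u_c:.2f} um, U (k=2) = {U:.2f} um")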
Verification and Validation of Material Models Using a New
Aggregation Rule of Evidence
V&V2013-2370
Shahab Salehghaffari, Northwestern University
In the context of evidence theory, an aggregation rule is used to
obtain a consolidated belief structure for uncertainty modeling when
evidence is based on information from different sources or expert
opinions. The aggregation rule is critical when a high degree of
conflict between different sources of data or expert opinions exists. In
the area of uncertainty modeling of large deformation process, the
issue of combining evidence from different sources can appear in
both uncertainty representation and propagation stages. As for
uncertainty representation aspect, different sources of experimental
data, different approaches to generate stress strain data at high
temperature and strain rates and the method used for fitting the
material constants of plasticity models provide wide variation in
material parameters. Such embedded uncertainties in material
parameters create considerable uncertainty in the results of design
analysis and optimization of structural systems based on these
models. In the case of uncertainty propagation, the choice of
available plasticity models that are quite diverse in the way a
particular physical effect such as temperature or strain rate is
mathematically modeled results in different propagated belief
structures. This relates to the area of model-selection uncertainty,
and many researchers have relied on the Bayesian approach to
address it in the recent years. The presented research aims to
develop a new aggregation rule of evidence for more accurate
evidential uncertainty modeling of large deformation processes. The
suggested evidence aggregation rule makes use of Ground Probability Assignment (GPA) expressed by Yager, along with a defined Credibility Factor of Evidence (CFE) for information from different sources, with the aim of better dealing with conflict between them. Observed experimental evidence on simulation responses and commonly used values for uncertain material parameters are adopted for calculation of the CFE used for combination of different belief structures. The efficiency of the proposed aggregation rule is checked through uncertainty representation of the material constants of the Johnson-Cook and Zerilli-Armstrong plasticity models with consideration of two different fitting approaches. The represented uncertainty is propagated through Taylor impact simulation of AISI 4340 steel using both the Johnson-Cook and Zerilli-Armstrong plasticity models, resulting in two different propagated belief structures for the responses of interest. Again, the proposed rule is adopted to generate a consolidated propagated belief structure for multi-model analysis. Finally, through comparison of this consolidated belief structure with target sets, the uncertainty of large deformation processes is quantified. Target sets are considered as intervals defined around experimental values of the corresponding Taylor impact responses.
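For context, the classical Dempster rule of combination, which credibility-weighted rules of this kind generalize, can be sketched in a few lines of Python (the frame and mass values are illustrative):

    from itertools import product

    # Classical Dempster combination of two basic probability assignments
    # (the baseline that credibility-weighted rules generalize). Focal
    # elements are frozensets over a small frame of discernment.
    def dempster(m1, m2):
        combined, conflict = {}, 0.0
        for (A, a), (B, b) in product(m1.items(), m2.items()):
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:
                conflict += a * b                   # mass lost to conflict K
        return {A: v / (1.0 - conflict) for A, v in combined.items()}, conflict

    # Two illustrative belief structures on ranges of a material constant:
    low, both, high = frozenset({"low"}), frozenset({"low", "high"}), frozenset({"high"})
    m1 = {low: 0.6, both: 0.4}
    m2 = {high: 0.5, both: 0.5}
    m12, K = dempster(m1, m2)
    print("conflict K =", round(K, 3))
    for A, v in sorted(m12.items(), key=lambda kv: -kv[1]):
        print(sorted(A), round(v, 3))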
Validation of Experimental Modal Analysis of Steering Test Fixture
V&V2013-2378
Basem Alzahabi, Kettering University
Automotive companies are facing an increasing pressure to satisfy
customer demand to design vehicles with improved characteristics in
vehicle dynamics, quietness, fuel economy, crashworthiness and
occupant safety just to name a few. Advances in numerical
simulation techniques and computer capabilities have made
computer simulations an important tool for investigating vehicle
behavior. Faster, cheaper and more readily interpreted computer
simulations allow the manufacturer to understand the behavior of
their vehicle, and its components, before it is even produced for the
first time. Component testing has long been a valuable tool for the
design of automotive components in the automotive industry. The
use of real time system testing has allowed the design engineer to
test more samples thus improving the reliability and confidence of the
product. However, the demands on the test fixture have grown with
the complexity of the tests themselves. The main energy seen in a
proving ground or field collected Power Spectral Density (PSD) lies
typically in frequencies around the un-sprung suspension resonance
and the low frequency of the rigid body modes. For heavy vehicles
the un-sprung mass natural frequency is around 11 Hz and bounce
and pitch modes are typically around 1 Hz. These tests do not
involve the inertial forces experienced in the real vehicle, and a typical
servo-hydraulic test that reproduces road-induced vibration will have
vibration content up to 50 Hz. The new design guidelines must
address fixture weight and fixture resonance. Therefore, the test
fixture design engineer must minimize weight and push the fixture resonance above the typical 50 Hz excitation experienced on the road.
In this paper, a finite element model is used to predict the natural
frequency of a multi-link steering system test fixture. Then the results
of analytical model are correlated to those of an experimental modal
analysis. The benefits and limitations of this approach will be
discussed.
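One common way to correlate analytical and experimental mode shapes, shown here only as a generic sketch rather than the authors' procedure, is the Modal Assurance Criterion (MAC):

    import numpy as np

    def mac(phi_a, phi_e):
        """Modal Assurance Criterion between an analytical (FE) mode shape
        phi_a and an experimental mode shape phi_e; 1.0 means perfectly
        correlated, values near 0 mean uncorrelated."""
        num = np.abs(phi_a @ phi_e) ** 2
        return num / ((phi_a @ phi_a) * (phi_e @ phi_e))

    # Illustrative mode shapes sampled at the accelerometer locations:
    phi_fe   = np.array([0.0, 0.31, 0.59, 0.81, 0.95, 1.0])
    phi_test = np.array([0.0, 0.29, 0.61, 0.80, 0.97, 1.0])
    print(f"MAC = {mac(phi_fe, phi_test):.3f}")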
Validation of Vehicle Drive Systems with Real-Time Simulation
on High-Dynamic Test Benches
V&V2013-2391
Albert Albers, Martin Geier, Steffen Jaeger, Christian Stier,
Karlsruhe Institute of Technology (KIT)
Virtual methods are essential for the development of current vehicle
powertrain systems, especially in the validation of product properties
such as functionality or the survey of comfort characteristics.
In particular, the prediction and rating of comfort-affecting phenomena in the powertrain system turns out to be a major challenge. Numerous
measures for efficiency improvement in powertrains, such as
downsizing of combustion engines or efficiency-optimized
transmission systems, lead to increased vibration excitation and
vibration sensitivity, regarding for example gear rattle or clutch judder.
At the same time a challenge exists in the assessment of such
interdependencies in the powertrain during the early phases of
product development, i.e., without having real prototypes of the whole vehicle. Against this background, two new test benches for
powertrain testing, suited with high dynamic electric engines and
battery simulation, are introduced. These test benches allow for
example the evaluation of the potential of state-of-the-art vibration
damping systems for future combustion engines with higher
excitation levels. In this regard the real-time simulation of powertrain
components such as the combustion engine is introduced as well as
the resulting requirements for the mechanical and electrical test
bench equipment.
Verification, Validation and Uncertainty Quantification of Circulating Fluidized Bed Riser Simulation
V&V2013-2385
Aytekin Gel, ALPEMI Consulting, LLC, Tingwen Li, URS Corporation, Balaji Gopalan, West Virginia University Research Corporation / NETL, Mehrdad Shahnam, Madhava Syamlal, DOE-NETL
In this study, we applied the comprehensive framework for
verification, validation and uncertainty analysis outlined by Roy and
Oberkampf (2011) to a multiphase computational fluid dynamics
(CFD) model by simulating a pilot-scale circulating fluidized bed
(CFB) riser. The open-source CFD code, Multiphase Flow with
Interphase eXchange (MFIX), developed at National Energy
Technology Laboratory (NETL), was employed to carry out a series of
three-dimensional transient simulations of the gas-solids flow in a
CFB riser. For this study, the overall pressure drop was selected as
the system response quantity (SRQ) of interest; the solids circulation
rate and the superficial gas velocity, the two most important operating
parameters of CFB, were identified as the most important uncertain
input quantities. Numerical uncertainties associated with spatial
discretization and time-averaging were evaluated first. The input
uncertainty was propagated by using a data-fitted surrogate model
constructed from simulations based on appropriate sampling of the
allowable input parameter space determined from the experimental
measurement. During this step, additional uncertainty introduced by
the surrogate model was also taken into account as direct Monte
Carlo based sampling of the input parameter space was not possible
for the computationally expensive 3D transient simulations. Finally,
the model form uncertainty was estimated by comparing the
empirical cumulative distribution function plots of simulation results
with the experimental data after careful examination of the
experimental uncertainties. The identified sources of uncertainties are
quantified with respect to the SRQ, and it is found that the spatial
discretization error is the dominant uncertainty in the current study. In
this presentation, we will be talking about the difficulties encountered
in applying Verification, Validation and Uncertainty Quantification to
multiphase CFD simulation, and sharing the lessons learnt from this
exercise. Reference: Roy, C.J.; Oberkampf, W.L. A comprehensive
framework for verification, validation, and uncertainty quantification in
scientific computing. Computer Methods in Applied Mechanics and
Engineering. 2011; 200: 2131-2144
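A minimal sketch of the surrogate-based propagation step, with a linear response surface standing in for the data-fitted surrogate and purely illustrative inputs and outputs, is:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical training data: a handful of expensive transient CFD runs.
    # Inputs = (solids circulation rate, superficial gas velocity); output =
    # overall pressure drop (all values illustrative).
    X = np.array([[10., 4.5], [14., 4.5], [10., 5.5], [14., 5.5], [12., 5.0]])
    y = np.array([2.10, 2.55, 1.95, 2.40, 2.25])   # kPa

    # Data-fitted surrogate: linear response surface via least squares.
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Propagate input uncertainty by cheap Monte Carlo on the surrogate.
    n = 100_000
    samples = np.column_stack([rng.normal(12.0, 1.0, n),   # circulation rate
                               rng.normal(5.0, 0.2, n)])   # gas velocity
    drop = beta[0] + samples @ beta[1:]
    print(f"pressure drop: mean={drop.mean():.3f}, std={drop.std():.3f} kPa")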
Simulation and Validation of Response Function of Plastic
Scintillator Gamma Monitors
V&V2013-2392
Muhammad Ali, Vitali Kovaltchouk, Rachid Machrafi,
University of Ontario Institute of Technology
Radiation monitoring of personnel and the work environment in nuclear power plants is a cornerstone of any radiation protection program. Tools used in routine maintenance might become contaminated in areas where the intensity of radiation fields is controlled. As Monte Carlo simulation is a very important tool for the pre-design and optimization of radiation detectors, the purpose of this paper is to demonstrate an
MCNP simulation for a commercial product that is widely used in
nuclear reactors as a gamma radiation contamination monitor. The
MCNP results have been validated with experimental measurements
using different gamma sources usually encountered in nuclear power
plants.
Analytical and Numerical Analysis of Energy Absorber
Hexagonal Honeycomb Core with Cut-Away Sections
V&V2013-2390
Shabnam Sadeghi-Esfahlani, Hassan Shirvani, Anglia
Ruskin University
Cellular structures are commonly used in engineering applications due to their excellent load-carrying and energy absorption characteristics. In this research a regular hexagonal honeycomb core has been modified by means of cut-away sections in its cell walls. This innovative approach has enabled the crushing strength rate and folding mechanism to be controlled by these cut-away sections. An analytical and numerical investigation of hexagonal honeycomb with cut-away sections is developed. The analytical study is based on the energy absorption mechanism of Wierzbicki's theory. The parametric analysis is carried out using the ANSYS/LS-DYNA and LS-DYNA codes considering circular, hexagonal and oval shaped cuts. The results of the base models are compared with and validated against experimental results in the literature, and excellent correlation has been achieved. It has been shown that the circular cut-away method has the best effect on the crushing performance of cellular structures when the diameter of the cut is 90% (or more) of the cell's side length.
Expansion and Application of Structure Function Technology for Verifying and Exploring Hypervelocity Simulation Calculations
V&V2013-2393
Scott Runnels, Len Margolin, Los Alamos National Laboratory
We introduce a new method for analyzing the accuracy of shock
profiles produced by hypervelocity continuum mechanics simulators,
often called “hydrocodes”. In particular, we describe how structure
functions can be used to estimate the error of three different
hydrocode methodologies: Lagrangian, where the finite volume mesh
moves with the material, Eulerian, where the finite volume mesh
remains stationary and the material flows through it, and Arbitrary
Lagrangian-Eulerian (ALE) where the mesh may move relative to the
fluid. Structure functions are generated by superimposing snapshots
of the simulators’ discrete shock profiles taken at multiple time steps.
These snapshots capture in one smooth curve all of the dynamics
regarding how the numerical shocks interact with the mesh, for
example, how they might change when crossing into a new cell. The
structure function can then be compared against an ideal shock
profile, thereby suggesting various error metrics for quantified
verification. In addition to introducing the structure function and
associated error metrics, we describe several applications wherein
we not only verify hydrocode accuracy, but optimize it. First, an error
metric such as those alluded to above is used to reveal a correlation
between optimal numerical parameters in the hydrocode, specifically
the artificial viscosity parameters. We also compare the relative
accuracy of the Lagrangian, Eulerian, and ALE approaches with
respect to shock profile, with specific attention paid to the mesh
repair algorithms used by the ALE approach. Because the structure
function is composed of multiple snapshots of the shock, all
translated to the origin and superimposed there, its formation
requires an accurate way to detect the shock location; how to do that
is not immediately evident because the hydrocodes all smear the
shock over multiple cells. A relatively new shock locator method
introduced a few years ago for Lagrangian calculations will be
described in this presentation and generalized for application to
Eulerian and ALE calculations.
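A minimal numerical sketch of the superposition idea, with a tanh-smeared profile standing in for hydrocode output and a gradient-based shock locator, is given below (all parameters are illustrative):

    import numpy as np

    # Superimpose snapshots of a smeared numerical shock by shifting each
    # profile so its detected shock location (steepest gradient) sits at
    # x = 0; the pooled points trace one smooth "structure function".
    x = np.linspace(0.0, 10.0, 401)

    def snapshot(t, width=0.15, speed=1.0):
        # stand-in for a hydrocode profile: tanh-smeared shock moving right
        return 0.5 * (1.0 - np.tanh((x - speed * t - 2.0) / width))

    pooled_x, pooled_u = [], []
    for t in np.linspace(1.0, 6.0, 25):
        u = snapshot(t)
        shock = x[np.argmax(np.abs(np.gradient(u, x)))]  # shock locator
        pooled_x.append(x - shock)                       # translate to origin
        pooled_u.append(u)

    px = np.concatenate(pooled_x)
    pu = np.concatenate(pooled_u)
    ideal = 0.5 * (1.0 - np.sign(px))                    # ideal sharp shock
    err = np.sqrt(np.mean((pu - ideal) ** 2))            # one possible metric
    print(f"structure-function RMS deviation from ideal: {err:.4f}")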
Reliability-Based Approach to Model Validation under
Uncertainty
V&V2013-2399
Sankaran Mahadevan, Chenzhao Li, Joshua G. Mullins,
Vanderbilt University, Shankar Sankararaman, NASA Ames
Research Center, You Ling, Vanderbilt University
This talk proposes the use of a reliability-based approach for model
validation under uncertainty, by including the different sources of
aleatory uncertainty and epistemic uncertainty (data uncertainty and
model errors) and considering different types of data situations. The
basic concept of the model reliability approach is to compute the
probability that the difference between the model prediction and the
observed data is less than a desired tolerance limit. This is a rigorous
mathematical approach that computes the model reliability based on
the probability distributions of the model prediction and the
experimental data. It provides a systematic procedure to include
different types of uncertainty, and address different types of data
situations. Different types of data situations (fully characterized, or
partially characterized) can be addressed with this metric. In the case
of fully characterized data, ordered input-output data is available for
validation. The uncertainty in the model parameters (described in
terms of probability distributions) and the model solution
approximation errors are systematically included while computing the
model reliability metric. The model reliability metric can be computed
for each ordered data pair, and two approaches have been
investigated to compute an aggregated model reliability metric with
multiple data: (1) integration using the principle of conditional
probability, and (2) estimate distribution parameters of model
reliability using a beta distribution approach. In the case of partially
characterized data, a particular input may not be measured during
the experiment or a particular input may be measured independent of
the corresponding output. Therefore, the input value corresponding
to a particular output measurement is unknown, and needs to be
treated as an uncertain quantity, whose probability distribution can
be constructed based on point and interval data. This distribution is
used along with the probability distributions of model parameters and
solution errors in order to compute the model reliability metric. The
model reliability metric is an objective metric for model validation, and
it can compute the reliability at particular input values or over a region
of input values. The choice of the tolerance limit is an important
aspect of the proposed methodology, and this choice significantly
affects the model reliability metric. The tolerance limit can be
quantitatively decided based on the acceptable risk of the user, using
sensitivity analysis results. The proposed methodology is illustrated
using a structural dynamics challenge problem developed at Sandia
National Laboratories.
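Under the simplest assumption that both the prediction and the observation are independent normal variables, the model reliability metric described above reduces to a short probability computation (all values illustrative):

    from scipy.stats import norm

    def model_reliability(mu_m, sig_m, mu_d, sig_d, tol):
        """P(|model - data| < tol) when both the model prediction and the
        observation are treated as independent normal random variables."""
        mu = mu_m - mu_d
        sig = (sig_m**2 + sig_d**2) ** 0.5
        return norm.cdf(tol, mu, sig) - norm.cdf(-tol, mu, sig)

    # Prediction 10.2 +/- 0.4, measurement 10.0 +/- 0.3, tolerance chosen
    # from the user's acceptable risk (all numbers notional).
    r = model_reliability(10.2, 0.4, 10.0, 0.3, 0.8)
    print(f"model reliability = {r:.3f}")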
V&V and UQ in U.S. Energy Policy Modeling: History, Current
Practice, and Future Directions
V&V2013-2397
Alan Sanstad, Lawrence Berkeley National Laboratory
Over the past four decades, computational models of the energy
system based upon economic principles have become the dominant
analytical tool for U. S. energy policy analysis. In recent years, such
models have also been extended and applied to policy and
regulatory analysis of environmental problems that are grounded in
the production and consumption of energy, with the most important
example being mitigation of greenhouse gas emissions. As this type
of energy policy modeling emerged and became established in the
1960s and 1970s, model validation and uncertainty quantification
were not merely recognized as fundamentally important issues, but
were supported with significant resources and drew the engagement
of researchers in a range of disciplines. Subsequently, however –
starting in the 1980s – energy model V&V and UQ attenuated
substantially, both as a research topic and as a priority for federal
agencies developing and using these models. But the models’ role,
not just in analysis but in defining the paradigms guiding U. S. energy
and environmental policy and regulation, simultaneously increased.
Today we see the outcome of these joint trends: A policy and
regulatory infrastructure wholly dependent upon a class of
computational model for which formal and rigorous V&V and UQ
activities are negligible. One contributory factor to this state of affairs
is the set of official guidelines for computational model “quality”
promulgated by the Executive Branch in the early 2000s. These
guidelines are based on a criterion of “replicability” derived from
experimental science that is arguably inappropriate for most forms of
computational modeling. This talk will review the history of energy
policy modeling in U. S. federal government from the 1970s to the
present, emphasizing the rise and fall of V&V and UQ in this domain.
It will describe and critique the aforementioned federal guidelines,
from both conceptual and practical perspectives. These topics will
be related to trends in the science and practice of V&V/UQ in the
sciences and engineering, and potential synergies identified. Finally,
an initial framing of an improved paradigm for the federal
government’s oversight and V&V/UQ requirements in the domain of
energy policy modeling will be presented.
Integration of Verification, Validation, and Calibration Results
for Overall Uncertainty Quantification in Multi-Level Systems
V&V2013-2400
Sankaran Mahadevan, Chenzhao Li, Joshua G. Mullins,
Vanderbilt University, Shankar Sankararaman, NASA Ames
Research Center
This talk presents a Bayesian methodology to integrate information
from model calibration, verification, and validation activities for the
purpose of overall uncertainty quantification in model-based
prediction. The methodology is first developed for single-level
models (simple components, isolated physics), and then extended to
practical systems which are composed of multi-level (multi-component, multi-physics) models. The first step is to perform
verification, validation, and calibration independently for each of the
models that constitute the overall system. Solution errors such as
numerical errors and surrogate model errors are computed during
model verification, and the estimated errors are used to correct the
model output; the corrected model output is used for calibration
and/or validation, thereby integrating the results of verification while
performing calibration and/or validation. The Kennedy-O'Hagan
framework for calibration is used to simultaneously estimate the
model parameters, and the model form error (also referred to as
model discrepancy or inadequacy function). A reliability-based
approach is pursued for model validation, using which it is possible
to directly quantify the extent to which the validation data supports
the model. The final step is to perform integration using a Bayesian
network which connects the various models, their inputs,
parameters, and outputs, experimental data, and various sources of
model error (numerical errors and model form error). The results of
calibration, verification, and validation in each individual model are
propagated through this Bayesian network so as to quantify their
effect on the overall system-level prediction uncertainty. In this
procedure, the emphasis is on computing the unconditional
probability distributions (based on the results of verification,
validation, and calibration) of the variables which link the different
models of the system. Two options are considered: the output of one
model becomes the input to the next model, or multiple models
share common parameters. The roll-up methodology also includes
quantitative consideration of relevance of calibration and validation
data at different levels of the system hierarchy with respect to the
system-level prediction quantity of interest, measured through
sensitivity analysis. The Bayesian network approach is also used to
demonstrate benefits for the inverse problem of uncertainty
management, by helping to allocate resources for data collection at
different levels of the system to meet a target uncertainty level with
respect to the prediction quantity of interest. The proposed
methodology of integration of uncertainty quantification activities is
demonstrated for multi-level thermal-electrical and structural
dynamics systems.
V&V40 Verification & Validation in Computational Modeling of
Medical Devices Panel Session
V&V2013-2401
Carl Popelar, Southwest Research Institute
Formalized in 2010, the ASME V&V40 subcommittee on
Computational Modeling of Medical Devices has been developing a
guide that addresses some of the unique challenges associated with
verification and validation of medical device simulations. The need for
such a standard is driven by the fact that computational modeling is
often used to support a decision about the safety and/or efficacy of a
medical device. The risk associated with using a computational
model for these purposes is a combination of the consequence of
the model being incorrect/misleading and the degree of its influence
on the decision being made. Thus, the core of the developing guide
centers around the notion that the needed rigor in the V&V activities
are guided by the model risk. This panel session will consist of a
series of brief presentations that summarize the V&V40
subcommittee, review the contents of the developing guide, describe
quantification of the model risk and its relation to the V&V activities,
present and explain the use of a V&V credibility assessment matrix,
and provide a relevant and illustrative example. Individual presenters
from the V&V40 subcommittee will form a panel for open discussion.
Quantification of Fragmentation Energy from Cased
Munitions using Sensitivity Analysis
V&V2013-2408
John Crews, Evan McCorkle, Applied Research Associates
In this presentation, we demonstrate how sensitivity analysis is used
to quantify uncertainty in the fragmentation environment of cased
munitions. Cased munitions produce two major effects: airblast
shock propagation from the high explosive and debris from the
exploding case. Pressure and impulse from shock propagation are
the dominant effects; however, the secondary effect of case
fragmentation often is an important consideration for functional
defeat objectives. The focus in this presentation is the probabilistic
nature of fragment trajectories. Experimental tests typically quantify
measured fragment velocities and quantities using probability
distribution functions. Current modeling and simulation approaches
determine the fragmentation environment using Monte Carlo
methods to sample the experimentally determined distributions and
solve the resulting ordinary differential equations. Here, we show how
the forward and adjoint sensitivity procedures can be utilized to
determine bounds on the fragment trajectories. Results and
computational run-time are compared for the original Monte Carlo
approach, the forward sensitivity procedure, and the adjoint
sensitivity procedure. Furthermore, a method is developed to
quantify the fragmentation energy at specific locations. Weapons
effects studies can leverage these sensitivity procedures to assess
vulnerability or lethality from the fragmentation energy. The probability
of impact for a given location also can be determined, an important
consideration for functional objectives in the weapons effects
community.
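A minimal sketch of the forward sensitivity idea on a toy fragment deceleration model, with an assumed quadratic-drag equation and illustrative parameters, is:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy fragment deceleration model dv/dt = -c v^2 with drag parameter c.
    # The forward sensitivity s = dv/dc is integrated alongside the state,
    # so a distribution on c maps directly to bounds on the trajectory.
    def rhs(t, y, c):
        v, s = y
        return [-c * v**2,                 # state equation
                -v**2 - 2.0 * c * v * s]   # sensitivity equation ds/dt

    c0, v0 = 0.02, 1500.0                  # illustrative drag and m/s
    sol = solve_ivp(rhs, [0.0, 0.05], [v0, 0.0], args=(c0,), max_step=1e-4)
    v_end, s_end = sol.y[0, -1], sol.y[1, -1]

    dc = 0.2 * c0                          # e.g., +/-20% uncertainty in c
    print(f"v(t_end) = {v_end:.1f} m/s, first-order bounds: "
          f"[{v_end + s_end * dc:.1f}, {v_end - s_end * dc:.1f}] m/s")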
Descriptive Management Method of Engineering Analysis
Modeling and its Operation Process toward Verification and
Validation in Multi-Disciplinary System Design
V&V2013-2402
Yutaka Nomaguchi, Haruki Inoue, Kikuo Fujita, Osaka
University
An appropriate engineering analysis model is indispensable in system
design to allow rational evaluation and optimization. A designer
idealizes the system’s physical features, e.g., dimensional reduction,
geometric symmetry, feature removal, domain alterations and so on,
for modeling the system’s physical behavior which consists of
complicated multidisciplinary phenomena. This process is the heart
of all engineering analysis models. Capturing this process is an important issue for verifying, validating and reusing the engineering analysis; however, it usually remains in the realm of the designer's tacit knowledge, and a framework to systematically capture the modeling knowledge of a working designer has not yet been established. This paper proposes a comprehensive descriptive management method for capturing the modeling process and its operation process. Any knowledge description framework should be concise and comprehensive so as to encourage working designers to describe knowledge. In order to describe the system, a framework to describe a hierarchical and interdependent structure is needed. The design process can also be described in a hierarchical and interdependent structure corresponding to a system structure. The overall problem can be divided into sub-problems according to the level of detail. A descriptive management method should focus on those features of system design. First, this research defines four phases of the system design so as to comprehensively formalize the design process, i.e., (1) the phase of physical modeling, (2) the phase of V&V of physical modeling, (3) the phase of generating design alternatives, and (4) the phase of V&V of design alternatives. The contents of each phase's process constitute a hierarchical structure and are associated with each other. To describe those structures, this research introduces a descriptive method based on the format of IDEF0 [1], Bond graphs [2] and EAMM (Engineering Analysis Modeling Matrix) [3], a knowledge representation framework which the authors have been developing. A description example of a micro hydropower system design is illustrated. This leads to the conclusion that the proposed descriptive management method can explicitly capture the engineering analysis modeling process and its operation process such that a designer can manage knowledge and appropriately perform engineering analysis. [1] NIST, 1993, Integration Definition for Function Modeling. [2] Paynter, H. A., 1960, Analysis and Design of Engineering Systems, The MIT Press. [3] Nomaguchi, Y., et al., 2010, A Framework of Describing and Managing Engineering Analysis Modeling Knowledge for Design Validation, Concurrent Engineering: Research and Applications, 18(3), pp. 185-197.
External Rate of Vaporization Control for Self-dislocating
Liquids
V&V2013-2411
Radu Rugescu, Universitatea Politehnica Bucuresti, Ciprian
Dumitrache, University of Colorado
The thrust augmentation and clean combustion of solid fuels for
space propulsion and green energy production by means of gas-phase afterburning was delivered as a significant result of the study
within the NERVA project, supported by the grant of the Romanian
National Authority for Scientific Research, CNDI– UEFISCDI, project
number 82076/2008. This presentation regards the extension of the
mentioned work for covering the area of external control of the rate of
vaporization of the liquid component by two different means and the
progress of the research along the new ORVEAL project, granted by
the same CNDI-UEFISCDI authority by national contest in a new 3-year round for experimental verification. The new PROVIDE project
proposal is focused on the subject of controlling the rate of vaporization.
One of the envisaged substances considered as a combustion
augmenter is the nitrous oxide (N2O or NOx), often recommended as
a chemical oxidizer for various space propulsion systems, terrestrial
automotive motors included. The median result that stems from the
previous NERVA research is to triple the thrust of the existing solid
fuel motor and to augment by 15% its specific impulse for
application to the first stage of the Romanian private space launcher
NERVA by means of NOx afterburning in specific circumstances. The
extent of NOx usage in other combustion systems is also promising.
This prospect is accompanied by several challenges however, due to
the high energy content and relative instability of the NOx product
and works are provided that prove the area of relative safe
exploitation of NOx in high temperature and pressure conditions
required by the propulsion system. The experimental test stand built
at the ADDA proving grounds in Fagaras is described and the first
numerical and experimental results of the feasibility of the design are
presented, that show that the control of the vaporization rate of the
liquid is the essential element of the project. Several methods to keep
under control the rate of vaporization are presented and the choice
for an optimal approach is discussed. The hard point regarding the
practical manipulation of the nitrous oxide is detailed, as being one of
the major difficulties encountered by experimenters dealing with NOx
manipulation. The tragic accident encountered by Virgin Galactic in 2007, while manipulating liquid NOx for SpaceShipTwo fuel supply
tests, is analyzed and the subsequent safety measures are
considered.
The Role of Uncertainty Quantification in the Model Validation
of a Mouse Ulna
V&V2013-2414
David Riha, Southwest Research Institute, Stephen Harris,
University of Texas Health Science Center at San Antonio
A formal model verification and validation (V&V) program is underway
to predict strains in the mouse ulna during in vivo forearm
compression loading. This loading protocol is widely used to study
the role of genetics in bone quality and formation. A hierarchical
approach is used to calibrate and validate each part of the model.
First, tissue level material properties are determined using a Bayesian
model parameter calibration process. Parameters for several
candidate material models (e.g., elastic, orthotropic, viscoelastic) are
calibrated using force-displacement measurements from a micro-indentation probe. Micro-CT images have been obtained for
unloaded and loaded forearms using a custom load frame. Geometry
of the ulna was determined based on the unloaded micro-CT images
and displacements were obtained from images of the loaded arm to
calibrate a boundary condition model for the ulna. Finally
independent experiments for the loaded forearm have been
performed and will be used for validation. Images from the loaded
and unloaded forearm will be converted to strains to compare to
model predictions for validation. One challenge in model validation is
to account for the errors and uncertainties in the experiments and
models. For example, the C57BL/6 4-month-old female mouse
population will have natural variability in the ulna geometry and
material properties. This uncertainty in the materials and geometry
can be reduced by building models for each arm that is tested. The
strategy for the validation is to build the finite element models for
each ulna tested and obtain micro-indentation measurements for the
same mouse (left and right arm). Thus, each ulna finite element
model will have the specimen specific geometry and material
properties for comparison to the validation experiment. With these
uncertainties minimized, boundary conditions and material model
form (e.g., linear elastic or viscoelastic) will be the main uncertainties
to quantify in the model. This presentation will outline the overall V&V
approach and provide details about the Bayesian material model
calibration and its role in the uncertainty quantification and the hierarchical element of the overall model validation.
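A minimal grid-based sketch of Bayesian parameter calibration from force-displacement data, with a notional linear indentation model in place of the actual tissue model, is:

    import numpy as np

    # Minimal grid-based Bayesian calibration of a single stiffness-like
    # parameter E from noisy force-displacement data (all values notional).
    rng = np.random.default_rng(2)

    def model(E, d):          # stand-in for the indentation model: F = E * d
        return E * d

    E_true, sigma = 25.0, 0.05
    d = np.linspace(0.01, 0.10, 10)                  # imposed displacements
    F_obs = model(E_true, d) + rng.normal(0, sigma, d.size)

    E_grid = np.linspace(15.0, 35.0, 2001)           # uniform prior support
    # Gaussian likelihood of the observations for each candidate E:
    resid = F_obs[None, :] - model(E_grid[:, None], d[None, :])
    log_like = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
    post = np.exp(log_like - log_like.max())
    post /= np.trapz(post, E_grid)                   # normalized posterior

    mean = np.trapz(E_grid * post, E_grid)
    print(f"posterior mean E = {mean:.2f} (true {E_true})")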
Certification of Simulated FEA Behavior
V&V2013-2412
A Van Der Velden, Dassault Systemes, George Haduch,
Dassault Systemes SIMULIA Corp
The increased acceptance of simulated results has led to a
tremendous growth in new simulation users, and a greater
dependency by manufacturers on simulation results. This, in turn,
increases the importance of producing high quality FEA software that
can be used to confidently predict if a manufactured product meets
its requirements. In 1996 Dassault Systemes SIMULIA was one of the
first finite element companies to achieve ISO 9001 certification for its
quality management system. Included within that system, the
development process involves a continuous effort to fix critical bugs
promptly and to reduce the uncertainty of simulated results. The
Abaqus Verification and Benchmark Manuals contain sets of tests
that can be used to determine if product features are functioning as
documented. Specifically, in the case of code verification, we
compare the correctness of the solvers with known exact solutions
for the same abstraction assumptions (e.g. linear FEA). In the case of
solution validation we compare the simulated model behavior to
known experimental results. The Abaqus Benchmark manual
includes performance data for more than 250 NAFEMS benchmarks.
These exhaustive benchmarks do provide end users with the confidence that the solvers can be used over a wide range of practical problems. However, the key challenge is to find out how confident we can be in the simulated behavior when the exact solution or the experimental solution is not known a priori. This is done through the process of model and solution verification. If the model and its simulated behavior are in the domain of the tested benchmarks and meet certain conditions, we can quantify the uncertainty in the solution. For this purpose we include model checking (e.g. verify mesh quality), solution verification (e.g. verify residuals) and parametric sensitivity (e.g. verify slope sign change ratio). With this approach we can estimate the uncertainty in virtual prototype behavior and its impact on the probability that the actual physical prototype will meet its requirements. This is a key component of the Probabilistic Certificate of Correctness metric that accounts for all sources of error and uncertainty in a simulation in a scalable manner, including model verification and behavior simulation accuracy, manufacturing tolerances, context uncertainty, human factors and confidence in the stochastic sampling itself.
Dual Phase Flow Modeling and Experiment into Self-Pressurized, Storable Liquids
V&V2013-2415
Radu Rugescu, Universitatea Politehnica Bucuresti
Investigation performed within the research projects NERVA and
ORVEAL, supported by grants of the Romanian National Authority for
Scientific Research, CNDI– UEFISCDI, project numbers 82076/2008
and 201/2012, by the author team at University Politehnica of
Bucharest, Romania, has shown that a great performance gain is
given by the augmented combustion of solid propellants in space
propulsion systems. Results are also showing that this gain can only
be sustained when a convenient self-vaporizing feed system is
provided, that greatly depends on the behavior of the liquid involved.
The same conclusion is forwarded by other authors, like the team of
Stanford University for space propulsion. In order to determine the
availability of the self-vaporizing feed system, new research was recently undertaken, and the results are presented here on the self-vaporizing behavior of the notorious nitrous oxide for the
enhancement of solid phase fuels involved in hybrid or compound
rocket engines. The flow of such liquids is investigated and results
are presented on computer modeling of dual phase motion along
slender ducts that easily allow for an unsteady, one-dimensional
computational scheme to be implemented. The TRANSIT code,
initially developed for single-phase, unsteady, non-isentropic flows
was further improved to perform computations on the dual phase
motion of liquids under intense vaporization. Not only are the state of the fluid and its motion unsteady; the quality ("title") of the bubbly liquid and the equation-of-state and thermal transfer properties of the dual-phase fluid in the model also change rapidly. The
agreement of the model with experimental data is checked for the
liquid carbon dioxide under fast vaporization, a replacement used to
perform experimental investigation without the risk that accompanies
the nitrous oxide, known for its instability above 500 Celsius. Details
of the simulation results stand for the basis of a new research project
initiated by the Romanian Research Company ADDA ltd, with
application in the development of the small, low cost orbital launcher
NERVA, promoted for nano-satellite launches with a payload of up to
2 kg in LEO. This work is carried out in cooperation with the Aerospace
Department of TAMU regarding satellite control in orbit, and with other
international partners in Europe, including IPSA Paris and ITU Istanbul.
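For reference, the quality (vapor mass fraction) x of the bubbly liquid mentioned above is commonly tied to the mixture properties through a homogeneous two-phase closure; one standard relation of this type (not necessarily the one implemented in TRANSIT) is

    1/\rho_m = x/\rho_g + (1 - x)/\rho_l

where \rho_m, \rho_g and \rho_l are the mixture, vapor and liquid densities. As x grows during vaporization, the mixture density and sound speed change rapidly, which is why the equation of state must be updated along with the unsteady flow solution.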
Verification and Validation of Load Reconstruction Technique
Augmented by Finite Element Analysis
V&V2013-2417
Deepak Gupta, Konecranes, Inc. (Nuclear Division), Anoop
Dhingra, University of Wisconsin - Milwaukee
Accurate characterization of input forces acting on a structure is
often desired based upon experimentally measured structural
response. The instrumented structure essentially acts as its own
transducer. This presentation summarizes the methodology of
verification and validation of a time domain technique for estimating
dynamic loads acting on a structure from strain time response
measured at a finite number of optimally placed strain gages on the
structure. The optimal placement of gages is necessary to counter
the inherent limitation of such inverse problems, ill-conditioning. The
validation problem entails a cantilevered aluminum beam clamped at
the base and attached to a shaker head. Focus is to reconstruct the
input forces acting on the structure through the shaker head. A
computational model (code) was developed in MATLAB that worked
in close conjunction with the finite element tool ANSYS. The code
broadly consisted of two parts - (i) determining optimum gage
locations based on experiments of unit loads and (ii) estimating the
applied loads on the structure from strain response measured at the
determined locations. The first part of the code was verified with the
help of a benchmark problem of an axially loaded thin plate with a
hole. The second part of the code was verified numerically where a
known sinusoidal load acting on a cantilevered beam was
reconstructed successfully with the help of strain response obtained
from finite element simulation. In the validation step, experiments
were conducted where two independent base excitations were input
to the aluminum beam through the shaker head to excite it. The
strain response at predetermined optimum gage locations on the
beam was measured and analyzed to arrive at the mode participation
factors (MPF), which were then compared to MPF obtained through
finite element analysis. An additional check on the recovery
procedure included validating the measured strain with the predicted
strain in various gages mounted on the beam. A numerical sensitivity
analysis was further performed to study the effect of uncertainties in
experimental data. Numerical strain data from finite element analysis
was corrupted with normally distributed random errors which was
then used to reconstruct input loads. The technique is validated to be
robust in the sense that even with the presence of uncertainties
(noise) in experimental data (both numerically and experimentally),
the applied loads are recovered fairly accurately.
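The presentation's MATLAB/ANSYS implementation is not reproduced here; the sketch below is a minimal Python illustration of the inverse-problem structure described above, with a randomly generated stand-in for the strain-to-load transfer matrix and Tikhonov regularization as one common way of countering ill-conditioning:

    import numpy as np

    # Hypothetical strain-to-load transfer matrix: column j holds the strain
    # response at each gage to a unit load in channel j (random stand-in here).
    rng = np.random.default_rng(0)
    H = rng.normal(size=(8, 2))          # 8 gages, 2 load channels
    f_true = np.array([50.0, -20.0])     # "applied" loads (N), hypothetical
    eps = H @ f_true + rng.normal(scale=0.05, size=8)  # noisy measured strains

    # Tikhonov-regularized least squares:
    # f = argmin ||H f - eps||^2 + lam ||f||^2
    lam = 1e-3
    f_hat = np.linalg.solve(H.T @ H + lam * np.eye(2), H.T @ eps)
    print(f_hat)                         # recovers roughly [50, -20]

In a gage-placement study of the kind described, the columns of H would come from unit-load finite element analyses, and the gage rows would be chosen so that H stays well conditioned before any loads are reconstructed.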
Verification and Validation of CFD Model of Dilution Process in
a GT Combustor
V&V2013-2416
Alka Gupta, Ryoichi Amano, University of Wisconsin,
Milwaukee
Good mixing of cool air and combustion products in the primary
zone of a GT combustor is necessary for high burning rates and
complete combustion. Uniform temperature distribution in the
exhaust gases is dependent on the extent of mixing between the
cool air and combustion products in the dilution zone. The present
numerical and experimental work is an attempt to study the
combustor exit temperature uniformity achieved with increasing
momentum flux ratio. Here, it should be noted that in order to focus
only on the dilution process, non-reacting flow conditions were
simulated and the complete system was reduced to mixing of a
primary (hot) stream and dilution (cold) stream of air. For this study,
CFD simulations were done using the commercial code ANSYS
Fluent. It is a widely accepted and verified code for problems
involving heat transfer and fluid flow. From the wide range of turbulence
models available in Fluent, the k-ω SST model was selected: it behaves
as the standard k-ω model in the near-wall region, where the dilution
jet enters the mixing section, and as the standard k-ε model in the
far-wall region. The numerical method used in the solution was the
pressure-based segregated algorithm. This pressure-based solver uses a
solution algorithm where the governing equations (flow, energy and
turbulence equations) were solved sequentially (i.e., segregated from
one another). Because the governing equations are non-linear and
coupled, the solution loop was carried out iteratively in order to
obtain a converged numerical solution. A grid independence study
was performed to achieve a solution which conformed to the
expected physical conditions. To validate this CFD model, the
numerical predictions for the exit temperature and velocity were
compared with the experimental observations. The average
percentage errors between the experimental and numerical values for the
exit temperature and velocity were estimated to be approximately
3.0% and 8.5%, respectively. A possible reason for the
temperature discrepancy is the adiabatic wall condition
used in the simulation, which overestimated the temperature
values; heat loss to the surroundings in the experiments, including
radiation and natural convection, was not accounted for in the
simulations. Similarly, the lower experimental velocities may
arise from flow losses at the joints in the
experimental set-up, whereas there was no flow loss in the
simulations. Additionally, the uncertainty associated with
measurements during the experiment is a plausible reason for
the differences between the experimental data and simulation predictions.
Later, the validated CFD model was used to study the effect of
increasing momentum flux ratio on the temperature uniformity. Based
on the mixture fraction of the exit flow, it was found that the
temperature uniformity improved with increasing flux ratio. Better
temperature uniformity would lead to lower thermal stresses on the
turbine blade and enhanced life expectancy for the blade material.
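The momentum flux ratio is not defined in the abstract; for dilution jets entering a main stream it is commonly defined as

    J = (\rho_j u_j^2) / (\rho_\infty u_\infty^2)

where subscript j denotes the dilution jet and \infty the primary (hot) stream. Higher J drives the jet deeper into the crossflow, which is consistent with the improved mixing reported above.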
Verification and Validation of Building Energy Models
V&V2013-2418
Yuming Sun, Godfried Augenbroe, Georgia Institute of
Technology
Building energy models play an unprecedented role in guiding new
designs and planning retrofits of buildings for low energy use. Current
methods of verification and validation (V&V) in building simulation
domain do not reflect the recent development of the methodologies
for V&V and uncertainty quantification (UQ) of simulations in other
engineering domains. Consequently, building energy models may be
used with insufficient model V&V, which may partially explain the
significant discrepancies between model-predicted and actual
metered building energy use. This paper targets the methods that
enhance the effectiveness of building energy models, which is
achieved by understanding the models’ limitations and the
uncertainties inherent in the predictions. It first investigates the recent
development of V&V in other disciplines, based on which a tailored
V&V methodology for building simulations is developed. We discuss
how a rigorous verification method is applied to detect coding
errors and quantify the accuracy of model solutions. It also shows
how a validation method tests the effectiveness of a conceptual
model by comparing to physical measurements with comprehensive
uncertainty quantification undertaken. This paper demonstrates the
V&V method with a widely used “sky model” developed by Perez
(1990), which computes solar diffuse irradiation on inclined
surfaces in building energy simulations. Validation results show
measurable model form uncertainty, which is then quantified with
regression models for predictions. The significance of this alternative
solar diffuse irradiation model is evaluated in the context of
uncertainty analyses of building energy predictions through a case
study, in which the model parameter uncertainties are included. The
results show distinct uncertainty distributions of building energy
predictions because of the model form uncertainty revealed in the
validation process.
Quantification of Numerical Error for a Laminar Round Jet Simulation
V&V2013-2420
Sagnik Mazumdar, Mark L. Kimber, University of Pittsburgh, Anirban Jana, Pittsburgh Supercomputing Center, Carnegie Mellon University
Quantification of numerical errors is a big part of the verification
process for a computer simulation. The field of Computational Fluid
Dynamics (CFD) has made significant contributions to understanding
how to properly quantify and reduce numerical errors in a simulation.
However, truly making numerical errors negligibly small for realistic
CFD cases is especially hard, and often prohibitively expensive. For
example, not only have truly mesh-independent CFD results been
mostly out of reach, but even the so-called asymptotic range, where
the spatial discretization error is proportional to h^p (h being a
measure of the mesh fineness and p a real number called the
order of accuracy), has been difficult to achieve. More often, the
meshes employed are coarse enough that higher order terms
significantly affect the error, which complicates the quantification of
the error via the well-known Richardson extrapolation method.
However, with modern supercomputers, the goal of achieving CFD
results truly free of numerical errors may be closer to reality. In this
study, we propose to perform systematic and rigorous quantification
of numerical error for a confined, laminar, steady, round jet simulation.
We choose to study a laminar jet to keep the physics simple for now;
however, the long-term goal of our research is to systematically
quantify numerical errors for simulated turbulent jets such as those in
the lower plenum of the Very High Temperature Reactor (VHTR). The
numerical errors we expect to encounter in our laminar jet simulation
are: 1) iteration errors, 2) finite domain size errors and 3) spatial
discretization errors. First, we will ensure that iteration errors are
negligibly small for each of our simulations by allowing a sufficient
number of iterations. Next, we will perform a series of simulations of
progressively increasing domain lengths until finite domain size errors
have also been sufficiently reduced. Note that the reason we expect
finite domain size errors is because we intend to apply either a
constant pressure or constant velocity gradient boundary condition
at the outlet, which is achieved physically only if the outlet is far
enough from the jet inlet such that a fully developed Hagen-Poiseuille
flow has been established there. This will be ensured in our
simulations. Finally, we will employ progressively finer grids to assess
the spatial discretization errors. We hope to rigorously demonstrate
the attainment of asymptotic convergence, and perhaps even true
mesh independence. Our simulations will be performed in
OpenFOAM, an open-source parallel CFD code.
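Since the abstract leans on the asymptotic range and Richardson extrapolation, here is a minimal sketch of the standard three-grid procedure (illustrative values only, not results from this study):

    import numpy as np

    # Hypothetical QOI values from three systematically refined grids
    # (refinement ratio r = 2), e.g., a jet centerline velocity at a probe.
    f1, f2, f3 = 1.0105, 1.0402, 1.1610   # fine, medium, coarse
    r = 2.0

    # Observed order of accuracy and Richardson-extrapolated estimate
    p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)
    print(f"observed order p = {p:.2f}, extrapolated value = {f_exact:.4f}")

When the computed p lands near the scheme's formal order, the grids are in the asymptotic range; otherwise higher-order terms are polluting the error, exactly the complication the abstract describes.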
Validation of CFD Solutions to Oil and Gas Problems
V&V2013-2419
Alistair Gill, Prospect Flow Solutions, LLC
Engineers are increasingly turning to Computational Fluid Dynamics
to solve a variety of fluid dynamic and heat transfer problems in the
oil and gas industry. Opportunities to validate the output from such
analyses are often limited by the environments in question, such
as the deep-water ocean floor or the depths from which oil and gas are
now being extracted, and by the fact that the fluid in question is usually a
hydrocarbon whose use in testing is severely restricted. Two
examples of how Prospect has validated the output from its CFD and
supporting numerical analysis will be highlighted. The first covers
deep water thermal analyses while the second examines a fluid
dynamic analysis associated with a down-hole fracture initiation.
Prospect has successfully designed and supported full scale cool
down testing of numerous deep-water subsea components. The
tests have proven the adaptability of the designs, validated the
analysis methodology, and proven the value of CFD as a subsea
engineering tool. The presentation will also show that by employing a
multi-physics approach, Prospect was able to accurately predict the
time taken for seven small balls to be pumped over 18,000 feet through a
narrow pipe, in very close agreement with third-party empirical
data compiled from over 7,000 hydraulic fractures.
Verification, Calibration and Validation of Crystal Plasticity
Constitutive Model for Modeling Spall Damage
V&V2013-2421
Kapil Krishnan, Pedro Peralta, Arizona State University
Shock loading is a complex phenomenon and computational
modeling of spall damage due to shock loading can be equally
challenging. Studying spall damage on a microstructural level helps
in understanding the intrinsic material characteristics contributing to
the formation of damage sites. Flyer-plate impact experiments were
carried out at low pressures to study the spall phenomenon at its
incipient stage and attribute the damage sites to the surrounding
microstructural features. Experimental observations indicate that
damage tends to localize at grain boundaries and triple junctions.
The research work presented here focuses on developing a reliable
computational model to predict spall damage on a microstructural
level. The development of such a constitutive model to predict
damage initiation and evolution during such high impact events
requires proper understanding of the physics driving them. The
different physical mechanisms incorporated into the constitutive
model are: 1) Equation of state (EOS) to model shock wave
propagation, 2) Crystal plasticity model for anisotropic plasticity, 3)
Modified Gurson-Tvergaard-Needleman (GTN) model for void
nucleation and growth and 4) Isotropic and kinematic hardening
models. The equations pertaining to these models were coded into a
user-defined material behavior subroutine for ABAQUS/Explicit. Prior
to testing the predictive capabilities of the constitutive model, the
developed model is put through a series of verification and validation
(V&V) tests. An intermediate step of calibration was added to the V&V
procedure to obtain the optimal parameters that can model the
aforementioned phenomena accurately. The calibration of the
material parameters was done using an optimization technique
based on design of experiments (DOE). First, the material parameters
influencing the response were chosen and their lower and upper
bounds were determined. A set of design candidate points were
obtained from the Design-Expert® software and finite element
simulations were carried out for these candidate points. The
responses for each of these simulations were measured and the D-Optimal design criterion was used to generate the response surface.
An optimization technique was used to determine the optimal
material constants that predict the experimental test results with the
least error. Single crystal impact experiments were used for
calibration and different sets of single crystal and multicrystal impact
test cases were used for validation purposes. The results obtained
from the finite element simulations give an R-squared fit value of 90%
or above in comparison to the experimental observations. The
confidence level in the computational model was tested by
performing flyer-plate impact simulations at different strain rates.
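The Design-Expert® workflow itself is not shown here; the following sketch mimics the calibration loop described above under stated assumptions (a toy stand-in simulator and hypothetical parameter bounds): sample candidate points, fit a quadratic response surface, and optimize on the surrogate.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy stand-in for an expensive FE simulation: response as a function of
    # two material parameters (all names and values hypothetical).
    def simulate(a, b):
        return (a - 1.3)**2 + 2.0 * (b - 0.7)**2 + 0.01 * rng.normal()

    # DOE: candidate points within assumed parameter bounds
    A, B = np.meshgrid(np.linspace(0.5, 2.0, 5), np.linspace(0.2, 1.2, 5))
    a, b = A.ravel(), B.ravel()
    y = np.array([simulate(ai, bi) for ai, bi in zip(a, b)])

    # Quadratic response surface: y ~ c0 + c1 a + c2 b + c3 a^2 + c4 b^2 + c5 ab
    X = np.column_stack([np.ones_like(a), a, b, a*a, b*b, a*b])
    c, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Calibrate by minimizing the surrogate over a dense grid
    Ag, Bg = np.meshgrid(np.linspace(0.5, 2.0, 201), np.linspace(0.2, 1.2, 201))
    Xg = np.column_stack([np.ones(Ag.size), Ag.ravel(), Bg.ravel(),
                          Ag.ravel()**2, Bg.ravel()**2, Ag.ravel()*Bg.ravel()])
    i = np.argmin(Xg @ c)
    print("calibrated parameters:", Ag.ravel()[i], Bg.ravel()[i])  # ~ (1.3, 0.7)

In the actual study the responses came from crystal-plasticity finite element runs and the fit quality was judged against impact experiments; the surrogate loop above only illustrates the mechanics of the DOE-based optimization.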
Coping with Extreme Discretization Sensitivity and Computational Expense in a Solid Mechanics Pipe Bomb Application
V&V2013-2425
James Dempsey, Vicente Romero, Sandia National Laboratories
This talk will present the approach taken to get rough solution error
estimates for a numerically difficult and computationally expensive
solid mechanics problem. The problem involves predicting failure
initiation for a stainless-steel pipe subjected to high temperatures
and pressures. Failure pressure is the quantity of interest in the
analysis. Pressure-failure initiation is signified by failure of the
calculations to numerically converge. Failure to converge indicates
for these “load-controlled” applications that applied pressure forces
exceed the material’s resistance strength. A mathematical instability
occurs where infinite displacement exists for the incremental
pressure increase at the next time step; the quasi-static
solid-mechanics equations have no inertial terms to balance the pressure
force via acceleration of the material when it can no longer resist the
applied pressure, so the solver cannot converge. Because the
quantity of interest (QOI) in the analysis is defined by calculation
instability, the QOI is very sensitive to mesh and solver settings, as
will be shown. Because the QOI is determined by calculation
instability and is not a nodal quantity or integrated nodal quantity
calculated “on the grid”, the QOI does not appear to have a
theoretical underpinning for establishing grid convergence
properties like convergence rates with respect to grid cell size.
Nonetheless, a reasonable empirical rate of convergence was
provisionally obtained upon application of generalized Richardson
Extrapolation (GRE). The GRE discretization study was performed
on a “nearby problem” that featured ¼ symmetry boundary
conditions fairly similar to the non-symmetric boundary conditions of
the real problem being modeled (a validation experiment). The ¼
symmetry model was affordable enough to allow an extensive grid
study and solver study. The first study involved refining from one
finite-element through the pipe-wall thickness, to two elements, four
elements, and finally six elements. Elements in the axial and
circumferential directions of the pipe were also refined
commensurately to retain 1:1:1 aspect ratios. At a thickness
transition in the axial Z direction along the pipe wall, element widths
increase as wall thickness increases over the transition zone. To
maintain near 1:1 aspect ratios between element widths and heights
over this region, a stretching function was applied such that element
heights in the axial direction increase with Z. This matches the
increasingly wide elements as one travels up the thickening wall.
Then doubling the number of elements in the axial direction over this
transition region does not mean halving the elements, but instead
splitting them at slightly different locations to preserve the mesh
stretching function in the Z direction (over the thickness transition
region). This gave the required “geometrically similar” successively
refined grids that GRE requires. A solver discretization study was
also conducted to confirm that the pseudo-time adaptive integrator
did not significantly affect the mesh study results. Highlights from
the solver discretization study will be presented. In going from the ¼
symmetry model to the full-geometry model to solve the real
problem, not even the 1-element-thru-thickness mesh could be
afforded. Runs with this smallest mesh could not be completed
within 1 month on several hundred processors. Scale-up to more
processors showed fast-declining parallel efficiency, so was not
pursued. The ¼ model was revisited for a tradeoff study of solution
accuracy (relative to the GRE-projected asymptotic solution) vs.
different solver error tolerances and meshing schemes. The most
cost-effective mesh and solver combination from this study was
identified. The discretization bias for this mesh/solver was quantified
relative to the GRE-projected asymptotic solution. Highlights of the
tradeoff study will be presented. The identified most-cost-effective
mesh/solver settings were then applied to the full-geometry model
for the production validation/UQ calculations. The discretization bias
from the ¼ symmetry study was scaled to the production calculation
results to provide an approximate bias correction and solution-error
uncertainty estimate for the model validation assessment. The
solution-error bias correction and uncertainty estimation approach
will be described. (Sandia is a multi-program laboratory managed
and operated by Sandia Corporation, a wholly owned subsidiary of
Lockheed Martin Corporation, for the U.S. Department of Energy’s
National Nuclear Security Administration under contract
DE-AC04-94AL85000.)
A Stochastic Paradigm for Validation of Patient-Specific Biomechanical Simulations
V&V2013-2422
Sethuraman Sankaran, Dawn Bardot, Heartflow Inc.
Validation of biomechanical simulations should include a holistic
evaluation of the underlying sources of error, including (a) reconstruction
of patient-specific geometry, (b) boundary conditions, (c) reduced
order models and modeling assumptions, (d) clinical parameters, and (e)
uncertainty in clinical measurement. We introduce a generic adaptive
stochastic collocation method for uncertainty quantification and
validation of biomechanical simulations. This technique, which is
based on the Smolyak quadrature method for integrating and
interpolating functions in high-dimensional space, is used because
conventional quadrature techniques such as Gauss quadrature
cannot be used for the task of validation beyond a few parameters.
Validation is performed by splitting the task into (i) studying the effect of
uncertainty in various clinical parameters and boundary conditions,
(ii) studying the impact of uncertainty in geometry, (iii) assessing the impact
of modeling assumptions, and (iv) assessing the impact of measurement
variability. The scalability and efficiency of the adaptive collocation
method is demonstrated. Sensitivity impact factors for the quantities
of interest can be computed and parameters ranked according to
their influence on outcome. A combination of an off-line uncertainty
quantification study coupled with accelerated on-line sensitivity
analysis will be used.
Uncertainty Analysis in HTGR Neutronics, Thermal-Hydraulics and Depletion Modelling
V&V2013-2423
Bismark Tyobeka, International Atomic Energy Agency, Kostadin Ivanov, Pennsylvania State University, Frederik Reitsma, Steenkampskraal Thorium Limited
The predictive capability of coupled neutronics/thermal-hydraulics
and depletion simulations for reactor design and safety analysis can
be assessed with sensitivity analysis and uncertainty analysis
methods. In order to benefit from recent advances in modeling and
simulation and the availability of new covariance data (nuclear data
uncertainties), extensive sensitivity and uncertainty studies are
needed to quantify the impact of different sources of
uncertainty on the design and safety parameters of HTGRs.
Uncertainty and sensitivity studies are an essential component of any
significant effort in data and simulation improvement. In order to
address this challenge, the IAEA has started a Coordinated Research
Project (CRP) in which Member States will work together to develop
tools and methods in computer codes to analyse the uncertainties in
neutronics, thermal hydraulics and depletion models for HTGRs. It is
expected that the CRP will also benefit from interactions with the
currently ongoing OECD/NEA Light Water Reactor (LWR) UAM
benchmark activity by taking into consideration the peculiarities of
HTGR designs and simulation. In the paper, an update on the
current status of the CRP will be presented, including further plans
and ongoing work.
Development and Experimental Validation of Advanced Non-Linear, Rate-Dependent Constitutive Models for Polymeric Materials
V&V2013-2427
David Quinn, Jorgen Bergstrom, Nagi Elabassi, Veryst Engineering
Engineering thermoplastics and elastomers continue to find
widespread application across virtually all industries, including
consumer products and medical devices. In many applications,
polymeric materials are subjected to complex load histories involving
several decades of imposed strain rates or time. For example, some
consumer product enclosures are frequently required to survive drop
events with strain rates exceeding 1000/s. Medical devices such as
bone screws are required to maintain fixation forces over the course
of months to years. Accurate simulations of such time-dependent
events require the use of advanced non-linear constitutive models, as
linear elastic and linear viscoelastic models generally do not suffice.
This presentation first discusses the development of an advanced
viscoplastic constitutive model, such as the Three Network Model, for
an engineering thermoplastic such as polycarbonate (PC) or
polyether ether ketone (PEEK). The model is calibrated to
experimental data from mechanical testing at low and high strain
rates. We then present an approach toward the validation of this
calibrated model, comparing simulations to a series of independent
experiments performed on simple material form factors (i.e., test
coupons or sheets, cylinders) aimed at duplicating some of the
conditions the material would be expected to see in practice. This
approach includes the use of drop testing and creep or
stress-relaxation testing to validate the model over short and long
time-scales, as well as custom experiments to validate the material
response under multiaxial loading. Through the course of this
discussion, important aspects of the validation process, such as
uncertainty assessment, risk assessment and setting appropriate
validation criteria, are discussed.
Computational Simulation and Experimental Study of Plastic
Deformation in A36 Steel during High Velocity Impact
V&V2013-2426
Brendan O’Toole, Mohamed Trabia, Jagadeep Thota,
Deepak Somasundaram, Richard Jennings, Shawoon Roy,
University of Nevada Las Vegas, Steven Becker, Edward
Daykin, Robert Hixson, Timothy Meehan, Eric Machorro,
National Security Technologies, LLC
Plastic deformation of metallic plates during ballistic impact is a
complex computational problem as well as a difficult experimental
measurement challenge. Target plates experience large localized
deformation during impact. The panel bulges outward and the
dynamic tensile loads lead to fracture and eventually the spallation of
material off the back face. The objective of this work is to
experimentally measure the target plate back face velocity during
impact and compare this to computationally simulated velocity using
multiple methods. Depth of penetration and other post-impact
geometric parameters are also compared. A two-stage light gas gun
is used to launch a 5-mm diameter polycarbonate cylindrical
projectile. Projectile impact velocity varies from about 5 km/s to 7
km/s. The initial target material is an A36 steel plate with dimensions
of 152-mm x 152-mm x 12.7-mm. The target plate is bolted to a steel
backstop with a 102-mm diameter hole in the center. Photonic
Doppler Velocimetry (PDV) is an interferometric fiber optic technique
ideally suited for measuring the back face velocity, which can rise
from zero to peak value in less than 2 microseconds. Velocity is
determined by measuring the Doppler shift of light reflected from the
moving back surface. A multi-channel PDV system is used to
measure velocity at different locations. Multiple probes mounted at
different angles focused on the same off-axis point are used to
determine the surface jump-off angle. Back face target velocity and
final deformation patterns are simulated with the LS-DYNA smooth
particle hydrodynamics (SPH) solver and Sandia National
Laboratories CTH Eulerian solid mechanics code. Both codes require
the use of complex material models including equations of state
(EOS), damage, and strength parameters. All material model
parameters are taken from published data found in the open literature.
Modeling techniques and simulation results are compared and
contrasted to the experimental data. In particular, velocity versus time
curves are compared at PDV probe locations as well as the deformed
shape of the target plates. This work was done by National Security
Technologies, LLC, under Contract No. DE-AC52-06NA25946 with
the U.S. Department of Energy. DOE/NV/25946—1694
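For readers unfamiliar with PDV, the velocity falls directly out of the measured beat frequency between the reference and Doppler-shifted light; a minimal sketch with hypothetical numbers (the standard homodyne relation, not data from these experiments):

    import numpy as np

    lam0 = 1550e-9            # typical PDV laser wavelength (m); an assumption
    f_beat = 6.45e9           # measured beat frequency (Hz); hypothetical value

    v = 0.5 * lam0 * f_beat   # homodyne PDV: v = (lambda0 / 2) * f_beat
    print(f"surface velocity ≈ {v:.0f} m/s")   # ≈ 5000 m/s

In practice the beat frequency is extracted as a function of time from a spectrogram of the digitized signal, which is how sub-microsecond velocity rises like those described above are resolved.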
Experiment Design—A Key Ingredient in Defining a Software
Validation Matrix
V&V2013-2428
Richard Schultz, Yassin Hassan, Texas A&M University,
College Station, TX, United States, Edwin Harvego,
Consulting Engineer, Glenn McCreery, Idaho State University,
Ryan Crane, ASME
Software designed to analyze the behavior of nuclear systems, not
only at steady-state operational conditions but also under anticipated
operational occurrences and design basis accidents, must be
carefully validated to quantify the capabilities of the software to
calculate the governing phenomena. The capabilities of analysis tools
intended to calculate the thermal-hydraulic behavior of nuclear
systems are ultimately reviewed by the regulatory authorities to
ensure the key ingredients in defining the nuclear system steady-state and transient trajectories can be calculated with a known and
acceptable calculational uncertainty. In this way, the software tools
used to perform nuclear plant licensing-related calculations are
deemed trustworthy for their intended purpose. To achieve this goal,
regulatory authorities (e.g., the U.S. Nuclear Regulatory Commission)
and related organizations (e.g., the International Atomic Energy
Agency) have published guidelines which outline and summarize
(and in some cases detail) the path to gain acceptance of software
tools for licensing purposes. Such guidelines are given in the NRC’s
regulatory guides and the IAEA’s repertoire of technical reports. A key
ingredient in the process for gaining acceptance are the data
recorded in selected experiments for achieving the objectives
outlined above. Experiments used to produce data for software
validation for nuclear system analysis must be designed to generate
these data for the governing phenomena and related figures-of-merit
at phenomena ranges which can be directly related to the nuclear
system scenarios of interest. Consequently, the experiments must be
designed using acceptable scaling methodologies that relate the
experiment to the prototype at multiple levels, ranging from the
system design level to the phenomena level, including the various
phenomena interactions. With respect to the Generation IV very high
temperature gas-cooled reactor (VHTR) prototype, a scaled, integral
test facility was designed by Oregon State University (OSU) using an
acceptable, multi-level scaling methodology (the hierarchical two-tiered scaling approach) to relate the integral facility to the VHTR.
Additional work has subsequently been done to scale separate-effect
experiments to both the OSU integral test facility and the prototype.
A summary of the efforts focused on separate effects experiment
design and tests will be given that outlines the approach and the
complementary nature of these experiments. The discussion also
relates this work to the objectives and focus of the ASME V&V30
Committee.
Verification and Validation of a Penetration Model for Ballistic Threats in Vehicle Design
V&V2013-2429
David Riha, John McFarland, James Walker, Sidney Chocron, Southwest Research Institute
The goal of the Defense Advanced Research Projects Agency’s
(DARPA) Adaptive Vehicle Make (AVM) program is to reduce the time
it takes a military ground vehicle to go from concept to production by
a factor of five. One part of the AVM design analysis tool chain is to
assess the vehicle performance in various operational environments
(contexts) such as ballistic threats. A model verification and validation
(V&V) approach is being used to meet the AVM program
requirements to assess model accuracy and uncertainty. V&V
guidelines have been developed to provide a prescriptive and
consistent approach for assessing the AVM context model
capabilities. These guidelines are based on elements from V&V
frameworks in the fluids and solids mechanics communities as well
as NASA and the Department of Defense. The guidelines were used
to assess the predictive capability of the Walker-Anderson
penetration model to assess ballistic threats for vehicle design. The
intended use of this model is to predict the projectile residual velocity
and size to determine if a projectile penetrates vehicle structures to
impact crew or critical vehicle components. A design space domain
was defined for the V&V assessment to cover different projectiles,
targets, and striking velocities needed to assess vehicle performance.
Uncertainty quantification analyses were performed to determine the
importance of uncertain model parameters and to quantify their
impact on the predicted residual velocity. Existing experiments were
used to assess model accuracy based on a discrepancy function
approach. A model maturity level classification was defined as part of
the guidelines to communicate the capability of each context model. The
maturity is based on model basis and definition, complexity and
documentation, supporting data, model verification, range of
applicability and UQ, and validation. The UQ analyses and accuracy
assessments are used to develop an approach to compute the
uncertainties in the model predictions during the AVM design
analysis. The V&V process provides documentation of the methods,
assumptions, and experiments used to determine model capability. Finally,
a model pedigree concept is used to provide a concise description of
the model capabilities, input, output, assumptions, model accuracy,
model maturity level, and design space domain validity.
Case Study: Credibility Assessment of Biomedical Models and Simulations with NASA-STD-7009
V&V2013-2430
Lealem Mulugeta, Universities Space Research Association, Division of Space Life Sciences
We will present the step-by-step process describing how NASA’s
Digital Astronaut Project (DAP) and Integrated Medical Model (IMM)
use the NASA Standard for Models and Simulations
(NASA-STD-7009) method of credibility assessment for biomedical models and
simulations (M&S). To achieve this goal, we will examine one of DAP’s
exercise physiology/biomechanics models employed by NASA’s
exercise physiologists to inform spaceflight exercise
countermeasures research and operations. The presentation will
describe the following processes DAP and IMM use to: (1) define the
intended use of the model (e.g., research versus clinical operations);
(2) perform risk assessment to determine how the M&S results
influence decisions and the consequence of those decisions;
(3) determine the validation domain for the intended use; (4) define
credibility thresholds based on the intended use and the available data
for verification and validation; (5) assess the credibility of the model
within the validation domain; and (6) apply the M&S within the validation
domain when (i) credibility thresholds have been met and (ii)
credibility thresholds have not been met. We will also discuss the
limitations and specific assumptions used when applying the
Standard to situations where validation data are lacking, as well as
communicating model assumptions and results with the user.
Validation of Nondestructive Evaluation Simulation Software for Nuclear Power Plant Applications
V&V2013-2431
Feng Yu, Thiago Seuaciuc-Osorio, George Connolly, Mark Dennis, Electric Power Research Institute (EPRI)
Nondestructive evaluation (NDE) is an interdisciplinary field that uses
various sensing and measurement technologies to noninvasively
characterize the properties of materials and determine the integrity of
structures and components. Various NDE methods such as
ultrasonic, eddy current and radiography have been widely
implemented in the nuclear power industry to provide the
engineering data needed to assess component and structure condition
and remaining life. An inspectability assessment of components
and structures, along with the development and optimization of
related inspection techniques and procedures, is usually an iterative
process that typically involves the design and fabrication of a series
of mockups and transducers as well as extensive experiments and
measurements. It has long been thought that modeling and
simulation is a cost-effective means of maintaining and improving
quality and reliability, since it can provide a more scientific and
technical basis for developing and demonstrating NDE techniques
prior to field use. However, this goal has remained largely unfulfilled for one
reason or another. It was not until recently that the industry
witnessed significant effort and progress in developing NDE
models and software that simulate more realistic inspections with more
accurate results. In an ongoing effort to exploit NDE modeling and
simulation for industry applications, EPRI has been performing a
series of independent validations of the available simulation software
with commonly used transducers, mockups, instrumentation and
techniques. In this paper, we report the status of an independent
validation of one of the most commonly used NDE simulation
programs, namely the CIVA software developed by the Commissariat à
l’Energie Atomique (CEA). A general overview of NDE and the CIVA
program’s simulation capability with respect to ultrasonic, eddy current
and radiographic inspection methods is presented, along with
comparisons between experimental and CIVA-simulated results for
these three inspection methods. Overall, the CIVA simulations are in
good agreement with experimental results. This study also leads to
an improved understanding of the physics-based models and numerical
methods used in CIVA, and identifies potential gaps between the
current CIVA program and inspection needs.
High Pressure Gas Property Testing and Uncertainty Analysis
V&V2013-2432
Brandon Ridens, Southwest Research Institute
Southwest Research Institute (SwRI) has developed high-accuracy
test methods for speed of sound, density, and specific heat
determination to enable compressor and turbomachinery designs in
the high-pressure, supercritical regime (atmospheric to 10,000 psi).
The Institute performs high-pressure gas property testing using
specialized anti-corrosion, high pressure, SwRI-designed test fixtures
to meet low uncertainty requirements of gas property testing
campaigns. Use of a high precision scale, calibrated high accuracy
pressure transducers, specialized RTDs, high frequency oscilloscope,
and accurate power input helps reduce the individual systematic
uncertainties. Onsite SwRI testing can provide gas species analysis
at particular temperature and pressure points to verify physical
properties, equation-of-state models, and uncertainty limits of the
models. The test uncertainties are calculated for the primary direct
measurements and reference condition (pressure, temperature,
equation of state model prediction, and gas mixture uncertainty) at
each test condition. The test uncertainty is a function of the sensor
measurement uncertainty, additional uncertainties in the test
geometry (length and internal volume), molecular weight uncertainty,
and the precision uncertainty of the test based on repeat
measurements. The uncertainty calculation is based on the
perturbation method implemented in a SwRI uncertainty model
(spreadsheet based) for use on various tests requiring independent
measurements.
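The SwRI spreadsheet model itself is not described in detail in the abstract; the sketch below illustrates the perturbation method it names on a toy data-reduction equation (ideal-gas speed of sound; all values and uncertainties are hypothetical): perturb each input by its standard uncertainty and combine the responses by root-sum-square.

    import numpy as np

    # Ideal-gas speed of sound as a stand-in data-reduction equation:
    # a = sqrt(gamma * R * T / M). Values and uncertainties are hypothetical.
    def speed_of_sound(gamma, T, M, R=8.314462618):
        return np.sqrt(gamma * R * T / M)

    x = {"gamma": 1.30, "T": 320.0, "M": 0.01604}   # nominal inputs
    u = {"gamma": 0.005, "T": 0.15, "M": 1e-5}      # standard uncertainties

    # Perturbation method: one-at-a-time finite-difference sensitivity of
    # each input, combined by root-sum-square.
    a0, var = speed_of_sound(**x), 0.0
    for name, ux in u.items():
        xp = dict(x)
        xp[name] += ux                              # perturb one input
        var += (speed_of_sound(**xp) - a0) ** 2     # (sensitivity * u)^2
    print(f"a = {a0:.2f} m/s, u_a = {np.sqrt(var):.2f} m/s")

A real test-uncertainty model of the kind described would add terms for geometry, molecular weight, equation-of-state prediction, and repeatability in the same root-sum-square fashion.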
A New Solution Paradigm for the Discrete Ordinates Equation of 1D Neutron Transport Theory
V&V2013-2433
B.D. Ganapol, University of Arizona
A novel numerical method to solve the neutron transport equation in
a slab is presented. The compelling numerical feature of the method
is its simplicity and therefore accessibility to those not familiar with
solving the transport equation and to students in the classroom. The
primary motivation is to establish highly accurate results through
straightforward neutron accounting and well-known numerical
procedures and, in so doing, define a new solution paradigm. There
are numerous numerical solution techniques for solving the 1D
neutron transport equation, but none more fundamental than the
discrete ordinates (DO) method. With the advent of the computer age
and the pressing needs of the time, the DO method, reformulated by
Carlson [1] in a discrete form and called the Sn (Segment-n) method,
performed remarkably well, especially in two and three dimensions.
Since the method involves finite differences and numerical
quadrature, one commonly assumes that truncation error limits
accuracy. For this reason, the Wick-Chandrasekhar approach [2] is
often preferred for high accuracy, since it treats the spatial
variable analytically and therefore is presumably more accurate for a
given level of effort. Most recently, C.E. Siewert and coworkers [3]
have rediscovered the Wick-Chandrasekhar method, called the
analytical discrete ordinates (ADO) method, resulting in numerous
papers on the subject. Nevertheless, the primary purpose of this
presentation is to demonstrate that the DO method is every bit as
accurate as the analytically oriented ADO and, in addition, more
easily applied.
An alternative, more fundamental way of numerically solving the
neutron transport equation to high accuracy is to be presented. The
main numerical tool is convergence acceleration leading to a new
definition of “a solution”. We call the method the converged Sn
method (CSn) rather than the converged discrete ordinates (CDO)
method to emphasize its Sn roots in neutron transport theory. In
effect, the CSn method is nothing more than the extrapolation of the
conventional, fully discretized Sn solution to near zero angular and
spatial discretization.
To demonstrate the new method, we will consider neutron transport
in a homogeneous slab with general incoming boundary conditions
and volume sources. Not only will the method provide exceptionally
accurate numerical results for benchmarking purposes, but it will also
represent a new definition of a numerical solution. No longer should
we consider one discretization as the solution in lieu of a sequence of
solutions tending to a limit.
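To make the DO baseline concrete, here is a minimal diamond-difference Sn solver for a homogeneous slab with an isotropic volume source and vacuum boundaries (an illustrative sketch, not Ganapol's CSn code); the converged-Sn idea would rerun it with increasing N and nx and extrapolate the resulting sequence of solutions:

    import numpy as np

    def sn_slab(L=1.0, nx=100, N=8, sigma_t=1.0, sigma_s=0.5, q=1.0, tol=1e-10):
        # Gauss-Legendre ordinates mu and weights w on [-1, 1]
        mu, w = np.polynomial.legendre.leggauss(N)
        dx = L / nx
        phi = np.zeros(nx)                       # cell-average scalar flux
        for _ in range(10000):                   # source iteration
            S = 0.5 * (sigma_s * phi + q)        # isotropic emission density
            phi_new = np.zeros(nx)
            for m in range(N):
                psi = 0.0                        # vacuum incoming boundary
                cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
                for i in cells:
                    # diamond-difference balance: |mu| dpsi/dx + sigma_t psi = S
                    psi_c = (S[i] * dx + 2.0 * abs(mu[m]) * psi) \
                            / (2.0 * abs(mu[m]) + sigma_t * dx)
                    psi = 2.0 * psi_c - psi      # outgoing edge flux
                    phi_new[i] += w[m] * psi_c   # quadrature: phi = sum w_m psi_m
            if np.max(np.abs(phi_new - phi)) < tol:
                return phi_new
            phi = phi_new
        return phi

    phi = sn_slab()
    # mid-slab flux; bounded above by the infinite-medium value
    # q / (sigma_t - sigma_s) = 2 because of leakage through the boundaries
    print(phi[50])

Repeating the run with, say, N = 4, 8, 16 and nx = 50, 100, 200 produces the sequence of fully discretized Sn solutions whose extrapolated limit the CSn method takes as "the solution."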
AUTHOR INDEX
Abankwa, Samuel . . . 9-1, 9-2
Abkenar, Morteza . . . 4-1
Ahmed, Meraj . . . 3-1
Akbari, Javad . . . 1-2
Al-Ammri, A. Salam . . . 9-2
Albers, Albert . . . 1-1, 12-1
Ali, Muhammad . . . 6-2
Alidoosti, Elaheh . . . 7-1
Alrashidi, Mohammad . . . 10-1
Al-Samarrai, Shakir . . . 9-2
Alyanak, Edward . . . 13-2
Alzahabi, Basem . . . 3-1
Amano, Ryoichi . . . 4-4
Andre, Matthieu A. . . . 4-4
Anitescu, Mihai . . . 2-2
Arpin-Pont, Jérémy . . . 2-2
Asserin, Olivier . . . 4-1
Augenbroe, Godfried . . . 9-1
Babu, Chandu . . . 4-2
Bagchi, Amit . . . 5-1
Bajaj, Anil K. . . . 2-1
Balara, Elias . . . 4-1
Bardet, Philippe . . . 4-1, 4-2
Bardot, Dawn . . . 2-1
Barthet, Arnaud . . . 2-4
Basak, Tanmay . . . 4-2
Bauman, Lara . . . 12-1
Becker, Steven . . . 5-1
Behrendt, Matthias . . . 1-1
Beishem, Jeff R. . . . 13-1
Bell, Jacqueline . . . 5-1
Bergstrom, Jorgen . . . 10-1
Berkoe, Jonathan . . . 9-1, 11-2, 11-5
Bermejo-Moreno, Ivan . . . 4-1
Bhatnagar, Naresh . . . 10-1
Bhuiytan, Ariful . . . 10-1
Biglari, Ehren . . . 10-2
Biswal, Pratibha . . . 4-2
Bot, Patrick . . . 4-3
Brannon, Rebecca . . . 3-1
Breslouer, Oren . . . 4-1
Brock, Jerry S. . . . 13-1
Browning, Robert N.K. . . . 4-3
Bui, Anh . . . 6-1
Busschaert, Clotilde . . . 4-2
Calabria, Raffaela . . . 4-2
Camberos, Jose . . . 13-2
Campo, Laura . . . 4-1
Canfield, Thomas R. . . . 9-2
Cariteau, Benjamin . . . 4-1
Carnes, Brian . . . 11-5
Celik, Ismail . . . 2-1
Chabaud, Brandon . . . 13-1
Chen, Hung Pei . . . 4-3
Cheriyan, Jacob . . . 3-1
Chica, Leonardo . . . 9-1
Chizari, Mahmoud . . . 10-1
Chocron, Sidney . . . 1-2
Choi, Jeehoon . . . 7-1
Choudhary, Aniruddha . . . 7-2
Chougule, Nagesh . . . 7-1
Coelho, Pedro J. . . . 7-2
Connolly, George . . . 6-2
Coutu, Andre . . . 2-2
Cox, Andrew . . . 11-2
Crane, Ryan . . . 1-2
Crews, John . . . 2-1
Daddazio, Raymond . . . 2-3, 5-1
Damkroger, Brian . . . 3-2
Daykin, Edward . . . 5-1
Degiorgi, Virginia . . . 1-1
Dempsey, James . . . 13-2
Deng, Chengcheng . . . 2-4
Dennis, Mark . . . 6-2
Dhaher, Yasin . . . 10-2
Dhingra, Anoop . . . 3-1
Dinh, Nam . . . 13-1
Dobmeier, Joseph . . . 4-2
Doebling, Scott . . . 9-2
Donato, Adam . . . 2-4
Dong, Ruili . . . 1-2
Doty, John . . . 2-4
Dumitrache, Ciprian . . . 4-4
Eaton, John . . . 4-1
Eca, Luis . . . 4-1, 13-2
Edelberg, Kyle . . . 1-1, 2-5
Eilers, Daniel . . . 2-3
Ekwaro-Osire, Stephen . . . 10-1
Elabassi, Nagi . . . 10-1
Elbert, Mallory . . . 9-1, 11-2, 11-5
Elele, James . . . 13-1
Elsafty, Ahmed F. . . . 4-3
Erkan, Nejdet . . . 6-1
Eslami, Akbar . . . 11-2
Everhart, Wesley . . . 4-3
Evon, Kellie . . . 9-1, 11-2, 11-5
Fakhri, Syed . . . 1-1
Falize, Emeric . . . 4-2
Farid, Allie . . . 13-1
Fathi, Nima . . . 7-2
Feng, Yusheng . . . 10-2
Figliola, Richard . . . 13-2
Fischer, Michael . . . 9-2
Frepoli, Cesare . . . 6-1
Freydier, Philippe . . . 2-4
Frydrych, Dalibor . . . 6-2
Fu, Yan . . . 2-1, 3-2
Fujita, Kikuo . . . 12-1
Gagnon, Martin . . . 2-2
Gajev, Ivan . . . 2-3
Gamino, Marcus . . . 9-1, 9-2
Ganapol, B.D. . . . 11-3
Gao, Yanfeng . . . 9-1
Garnier, Josselin . . . 4-2
Gayzik, F. Scott . . . 5-1
Geier, Martin . . . 12-1
Gel, Aytekin . . . 2-2, 2-3
Ghaednia, Hamid . . . 1-1
Gill, Alistair . . . 4-4
Gilles, Philippe . . . 4-1
Giorla, Jean . . . 4-2
Giversen, Mike Dahl . . . 7-1
Glia, Ayoub . . . 4-2
Gopalan, Balaji . . . 2-2
Gounand, Stephane . . . 4-1, 7-2
Goyal, Rajat . . . 2-1
Gray, Genetha . . . 2-5
Grier, Benjamin . . . 13-2
Guenther, Chris . . . 2-3
Gupta, Alka . . . 4-4
Gupta, Deepak . . . 3-1
Gutkin, Leonid . . . 11-3
Haduch, George . . . 2-2
Hall, David . . . 13-1
Hallee, Brian . . . 2-4
Hansen, Andreas . . . 1-1
Hapij, Adam . . . 2-3, 5-1
Harris, Jeff . . . 4-3
Harris, Stephen . . . 10-2
Harvego, Edwin . . . 1-2
Hashemi, Javad . . . 10-1
Hassan, Yassin . . . 1-2
Hedrick, Karl . . . 1-1, 2-5
Hendon, Raymond . . . 7-2
Herwig, Heinz . . . 12-1
Hills, Richard . . . 4-4, 2-5
Hixson, Robert . . . 5-1
Hoekstra, Martin . . . 13-2
Hokr, Milan . . . 6-2
Howell, John . . . 2-5
Hsu, Wen Sheng . . . 4-3
Hu, Kenneth . . . 11-5
Hu, S.Q. . . . 2-4
Hussein, Nazhad Ahmad . . . 1-2
Ince, Ayhan . . . 3-2
Inoue, Haruki . . . 12-1
Ivanov, Kostadin . . . 2-1
Iyer, Ram . . . 3-1
Jaeger, Steffen . . . 12-1
Jana, Anirban . . . 4-4
Jennings, Richard . . . 5-1
Jeong, Hae-Yoon . . . 7-1
Jeong, Minjoong . . . 7-1
Jiang, Xiaoyan . . . 6-1
Jiang, Yingtao . . . 7-1
Jin, Yan . . . 12-1
Johnson, Edwin . . . 9-2
Juhl, Katrine . . . 7-1
Kamm, Jim . . . 13-1
Kamojjala, Krishna . . . 3-2
Katz, Brian . . . 3-1
Kazak, Oleg . . . 7-1
Kimber, Mark L. . . . 4-2
Kinzel, Michael . . . 9-1, 11-2, 11-5
Knight, Kelly . . . 9-1, 11-2, 11-5
Kodama, Yasuhiro . . . 6-1
Koenig, Michel . . . 4-2
Kong, Xiaofei . . . 4-1
Kota, Nithyanand . . . 1-1, 5-1
Kovaltchouk, Vitali . . . 6-2
Kovtonyuk, Andriy . . . 2-5, 11-2
Kozlowski, Tomasz . . . 2-3
Kramer, Sharlotte . . . 3-2
Krishnan, Kapil . . . 3-1
Lacy, Jeffrey . . . 3-1
Lam, Myles . . . 3-2
Lance, Blake . . . 4-3
Lane, William . . . 4-1
Le Clercq, Patrick . . . 4-2
Lee, Hyung . . . 6-1, 11-2
Lee, Jordan . . . 4-3
Lee, Sang-Min . . . 7-1
Lee, Yunkeun . . . 7-1
Leung, Alan . . . 5-1
Lewis, Alexis . . . 1-1
Li, Chenzhao . . . 2-5, 12-1
Li, Jimmy . . . 1-1
Li, Tingwen . . . 2-2, 2-3
Lin, Hui Chen . . . 4-3
Lind, Morten . . . 7-1
Ling, You . . . 12-1
Littlefield, David . . . 5-1
Liu, Weili . . . 2-4
Liu, X.E. . . . 1-1, 2-4, 3-2
Long, Linshuang . . . 9-1
Lothar, Bast Jürgen . . . 1-2
Loupias, Berenice . . . 4-2
Ma, Jian . . . 7-1
Machorro, Eric . . . 5-1
Machrafi, Rachid . . . 6-2
Mahadevan, Sankaran . . . 2-5, 12-1
Mahmoudi Nejad, Mohammad Hadi . . . 10-2
Manzari, Majid . . . 4-1
Maree, Iyd . . . 1-2
Marghitu, Dan . . . 1-1
Margolin, Len . . . 4-1
Marjanishvili, Shalva . . . 3-1
Markel, David P. . . . 4-3
Marotta, Egidio . . . 9-1, 9-2
Massoli, Patrizio . . . 4-2
Mazumdar, Sagnik . . . 4-4
McCorkle, Evan . . . 2-1
McCreery, Glenn . . . 1-2
McFarland, John . . . 1-2
McNamara, Laura . . . 11-1
Meehan, Timothy . . . 5-1
Merzari, Elia . . . 4-1
Michaut, Claire . . . 4-2
Mitenkova, Elena . . . 6-1
Moazzeni, Taleb . . . 7-1
Mons, Vincent . . . 4-4, 7-2
Montgomery, Christopher . . . 4-1
Moorcroft, David . . . 11-1
Moradi, Kamran . . . 13-2
Moreno, Daniel P. . . . 5-1
Morgan, Nathaniel R. . . . 9-2
Morse, Edward . . . 11-3
Mosleh, Mosaad . . . 4-3
Mousseau, Kimberlyn C. . . . 11-2
Mullins, Joshua G. . . . 2-5, 12-1
Mulugeta, Lealem . . . 10-1, 10-2
Myers, Jerry . . . 10-1
Nasr, Nasr A. . . . 4-3
Ndungu, Charles N. . . . 9-1
Neidt, Tod . . . 4-3
Nelson, Emily . . . 10-1
Nomaguchi, Yutaka . . . 12-1
Noura, Belkheir . . . 9-1
Oakley, Owen . . . 4-1
Oberkampf, William . . . 11-1
Ohno, Shuji . . . 6-2
Ohoka, Yasunori . . . 6-1
Okamoto, Koji . . . 6-1
Olson, Richard . . . 11-2
Olson, William . . . 13-1
O’Toole, Brendan . . . 5-1
Paez, Thomas . . . 2-4, 12-1
Pan, Selina . . . 1-1
Papados, Photios . . . 5-1
Parishwad, Gajanan . . . 7-1
Park, Kyung Seo . . . 7-1
Parsons, Don . . . 2-1
Pascali, Raresh . . . 9-1, 9-2
Peltier, Leonard . . . 9-1, 11-2, 11-5
Peppin, Richard J. . . . 2-2
Peralta, Pedro . . . 3-1
Pereira, Filipe . . . 13-2
Petruzzi, Alessandro . . . 2-5, 11-2
Phillips, Tyrone . . . 2-2
Pitchumani, Rangarajan . . . 2-4
Pointer, W. David . . . 4-1
Popelar, Carl . . . 11-4
Putnam, Robert A. . . . 2-2
Qidwai, Siddiq . . . 1-1, 5-1
Quinn, David . . . 10-1
Rahman, Mohammad . . . 10-2
Ramsey, Scott . . . 7-2, 9-2
Rastkar, Siavash . . . 13-2
Rauch, Bastian . . . 4-2
Ravasio, Alessandra . . . 4-2
Rawal, B.R. . . . 10-1
Reitsma, Frederik . . . 2-1
Ren, Weiju . . . 11-2
Ribeiro, Rahul . . . 10-1
Ridens, Brandon . . . 9-2
Rider, William . . . 2-5, 13-1
Riha, David . . . 1-2, 10-2
Riotte, Matthieu . . . 4-6
Risinger, L. Dean . . . 9-2
Rivas-Cardona, Juan-Alberto . . . 9-1, 9-2
Rizhakov, Andri . . . 9-1, 11-2, 11-5
Roderick, Oleg . . . 2-2
Romero, Vicente . . . 13-2
Rosendall, Brigette . . . 9-1, 11-2, 11-5
Roy, Christopher J. . . . 2-2, 7-2, 12-1
Roy, Shawoon . . . 5-1
Roza, Manfred . . . 11-3
Rugescu, Radu . . . 4-4, 9-2
Runnels, Scott . . . 4-1
Ryan, Emily . . . 4-1
Sadeghi-Esfahlani, Shabnam . . . 5-1
Salehghaffari, Shahab . . . 1-2, 10-2
Salloomi, Kareem . . . 9-2
Sankaran, Sethuraman . . . 2-1
Sankararaman, Shankar . . . 2-5, 12-1
Sanstad, Alan . . . 11-1
Sato, Kotaro . . . 6-1
Sawyer, Eric . . . 4-3
Scarth, Douglas . . . 11-3
Schultz, Richard . . . 1-2
Schwarz, Alexander . . . 1-1
Scott, Paul . . . 11-2
Segalman, Daniel . . . 12-1
Seuaciuc-Osorio, Thiago . . . 6-2
Shahbakhti, Mahdi . . . 1-1, 2-5
Shahnam, Mehrdad . . . 2-2, 2-3
Shen, Z.P. . . . 2-4
Sheridan, Terry . . . 4-3
Shields, Michael . . . 2-3, 5-1
Shirvani, Hassan . . . 5-1
Shukla, Amit . . . 13-1
Siirola, John . . . 2-5
Silva, Carlos . . . 9-1, 9-2
Silva, Ricardo . . . 9-1, 9-2
Sinclair, Glenn . . . 13-1
Smed, Mads . . . 7-1
Smith, Barton . . . 4-3, 11-5
Smith, Brandon M. . . . 13-1
Sohn, Ilyoup . . . 7-1
Somasundaram, Deepak . . . 5-1
Song, Daehun . . . 6-1
Spencer, Nathan . . . 3-2
Stallworth, Christian . . . 10-2
Stier, Christian . . . 12-1
Stitzel, Joel D. . . . 5-1
Storlie, Curt . . . 4-1
Subramanian, Sankara . . . 2-3
Subramaniyan, Arun Karthi . . . 2-3
Sun, Yuming . . . 9-1
Syamlal, Madhava . . . 2-2
Tabuchi, Masato . . . 6-1
Tahan, S. Antoine . . . 2-2
Takeda, Satoshi . . . 6-1
Tan, Hua . . . 4-3
Tan, Yonghong . . . 1-2
Tanaka, Masaaki . . . 6-2
Tatsumi, Masahiro . . . 6-1
Tawakoli, Taghi . . . 1-2
Tebbe, Patrick . . . 4-2
Teferra, Kirubel . . . 2-3, 5-1
Tencer, John . . . 2-5
Thacker, Ben . . . 2-4
Thibault, Denis . . . 2-2
Thomassen, Peter . . . 7-1
Thota, Jagadeep . . . 5-1
Tornianinen, Erik . . . 4-3
Trabia, Mohamed . . . 5-1
Trucano, Timothy . . . 2-5, 11-1
Tyobeka, Bismark . . . 2-1
Urbina, Angel . . . 2-5
Ushio, Tadashi . . . 6-1
Van Der Velden, A. . . . 2-2
Vavalle, Nicholas A. . . . 5-1
Vaz, Guilherme . . . 4-1, 13-2
Viola, Ignazio Maria . . . 4-3
Voogd, Jeroen . . . 11-3
Vorobieff, Peter . . . 7-2
Voyles, Ian . . . 12-1
Walker, James . . . 1-2
Walton, Marlei . . . 10-1
Waltz, Jacob . . . 9-2
Wang, Liping . . . 2-3
Webb, Jeff . . . 3-1
Weichselbaum, Noah . . . 4-1
Weirs, V. Gregory . . . 11-5
Wendelberger, Joanne . . . 4-1
Wiggs, Gene . . . 2-3
Williams, Todd O. . . . 13-1
Witkowski, Walt . . . 2-5, 13-1
Wohlbier, John G . . . 9-2
Wright, Richard . . . 6-1
Wu, Qiao . . . 2-4
Xi, Zhimin . . . 3-2
Xiao, S.F. . . . 2-4, 3-2
Xueqian, Chen . . . 3-2
Yamamoto, Kento . . . 6-1
Yambao, Paul . . . 9-1
Yang, Ren-jye . . . 2-1, 3-2
Ye, Hong . . . 9-1
Young, Bruce . . . 11-2
Yu, Feng . . . 6-2
Zahedi, Ali . . . 1-2
Zhan, Zhenfei . . . 2-1
Zhang, Haitao . . . 9-1
Zhang, Yuanzhang . . . 1-1
Zhanpeng, Shen . . . 3-2
Zhong, Chang Lin
SESSION ORGANIZERS
General
Session 1-1: General Topics in Validation
Session Chair: Christopher Roy
Session Co-Chair: Richard Schultz
Session 1-2: General Topics in Validation Methods for Materials Engineering
Session Chair: Ryan Crane
Uncertainty Quantification, Sensitivity Analysis, and Prediction
Session 2-1: Uncertainty Quantification, Sensitivity Analysis and Prediction, Part 1
Session Chair: David Moorcroft
Session Co-Chair: Ben Thacker
Session 2-2: Uncertainty Quantification, Sensitivity Analysis and Prediction, Part 2
Session Chair: Donald Simons
Session Co-Chair: Dimitri Tselepidakis
Session 2-3: Uncertainty Quantification, Sensitivity Analysis and Prediction, Part 3
Session Chair: Scott Doebling
Session Co-Chair: Steve Weinman
Session 2-4: Uncertainty Quantification, Sensitivity Analysis and Prediction, Part 4
Session Chair: Tina Morrison
Session Co-Chair: Laura McNamara
Session 2-5: Uncertainty Quantification, Sensitivity Analysis and Prediction, Part 5
Session Chair: Steve Weinman
Validation Methods for Solid Mechanics and Structures
Session 3-1: Validation Methods for Solid Mechanics and Structures, Part 1
Session Chair: David Moorcroft
Session Co-Chair: Danny Levine
Session 3-2: Validation Methods for Solid Mechanics and Structures, Part 2
Session Chair: Danny Levine
Session Co-Chair: Richard Swift
Verification and Validation for Fluid Dynamics and Heat Transfer
Session 4-1: Verification and Validation for Fluid Dynamic and Heat Transfer, Part 1
Session Chair: Hyung Lee
Session Co-Chair: Koji Okamoto
Session 4-2: Verification and Validation for Fluid Dynamic and Heat Transfer, Part 2
Session Chair: Christopher Freitas
Session 4-3: Verification and Validation for Fluid Dynamic and Heat Transfer, Part 3
Session Chair: Richard Lucas
Session Co-Chair: William Olson
Session 4-4: Verification and Validation for Fluid Dynamic and Heat Transfer, Part 4
Session Chair: Richard Lucas
Session Co-Chair: Steve Weinman
Validation Methods for Impact and Blast
Session 5-1: Validation Methods for Impact and Blast
Session Chair: Richard Lucas
Verification and Validation for Simulation of Nuclear Applications
Session 6-1: Verification and Validation for Simulation of Nuclear Applications, Part 1
Session Chair: Christopher Freitas
Session 6-2: Verification and Validation for Simulation of Nuclear Applications, Part 2
Session Chair: Richard Lucas
Verification for Fluid Dynamics and Heat Transfer
Session 7-1: Verification for Fluid Dynamics and Heat Transfer
Session Chair: Dimitri Tselepidakis
Session Co-Chair: Carlos Corrales
Session 7-2: Verification for Fluid Dynamics and Heat Transfer, Part 2
Session Chair: Ryan Crane
Verification and Validation for Energy, Power, Building, and Environmental Systems
Session 9-1: Verification and Validation for Energy, Power, Building and Environment Systems
Session Chair: Dimitri Tselepidakis
Session 9-2: Verification and Validation for Energy, Power, Building and Environment Systems and Verification for Fluid Dynamics and Heat Transfer
Session Chair: Ryan Crane
Validation Methods for Bioengineering
Session 10-1: Validation Methods for Bioengineering and Case Studies for Medical Devices
Session Chair: Carl Popelar
Session Co-Chair: Tina Morrison
Session 10-2: Validation Methods for Bioengineering
Session Chair: Richard Swift
Session Co-Chair: Dawn Bardot
Standards Development Activities for Verification and Validation
Session 11-1: Government Relations Activity
Session Chair: Alan Sanstad
Session Co-Chair: William Oberkampf
Session 11-2: Development of V&V Community Resources, Part 1
Session Chair: Christopher Freitas
Session Co-Chair: Richard Lucas
Session 11-3: Standards Development Activities
Session Chair: Steve Weinman
Session Co-Chair: Ryan Crane
Session 11-4: Panel Session – ASME Committee on Verification and Validation in Computational Modeling of Medical Devices
Session Chair: Carl Popelar
Session Co-Chairs: Andrew Rau, Ryan Crane, Tina Morrison
Session 11-5: Development of V&V Community Resources, Part 2
Session Chair: Scott Doebling
Validation Methods
Session 12-1: Validation Methods
Session Chair: David Quinn
Session Co-Chair: Koji Okamoto
Verification Methods
Session 13-1: Verification Methods, Part 1
Session Chair: Dawn Bardot
Session 13-2: Verification Methods, Part 2
Session Chair: Deepak Gupta
EXHIBITORS AND SPONSORS
WE GRATEFULLY ACKNOWLEDGE THE FOLLOWING COMPANIES FOR THEIR SUPPORT:
Virginia Tech
Bronze Sponsor/Exhibitor
Dedicated to its motto, Ut Prosim (That I May Serve), Virginia Tech
takes a hands-on, engaging approach to education, preparing
scholars to be leaders in their fields and communities. As the
commonwealth's most comprehensive university and its leading
research institution, Virginia Tech offers 215 undergraduate and
graduate degree programs to more than 30,000 students and
manages a research portfolio of more than $450 million. Beyond the
classroom, Continuing and Professional Education at Virginia Tech
offers a Verification and Validation in Scientific Computing short course
for professionals, either in-house or at any other location of your choice.
For more information please visit
http://www.cpe.vt.edu/vavsc/index.html
NAFEMS
Exhibitor
Engineers rely on computer modeling and simulation methods and
tools as vital components of the product development process. As
these methods develop at an ever-increasing pace, the need for an
independent, international authority on the use of this technology has
never been more apparent. NAFEMS is the only worldwide
independent association dedicated to this technology.
NAFEMS is the one independent association dedicated to the
engineering analysis community worldwide, providing vital “best
practice” information for those involved in FEA, CFD and CAE,
ensuring safe and efficient analysis methods. With over 1000 member
companies ranging from major manufacturers across all industries, to
software developers, consultancies and academic institutions, all of
our members share a common interest in design and simulation.
By becoming a member of NAFEMS, you and your company can
access our industry-recognized training programs, independent events
and acclaimed publications, all of which are designed with the needs
of the analysis community in mind.
For more information please visit www.nafems.org
ABOUT ASME
ASME helps the global engineering community develop solutions to
real world challenges facing all people and our planet. We actively
enable inspired collaboration, knowledge sharing and skill
development across all engineering disciplines all around the world
while promoting the vital role of the engineer in society.
ASME products and services include our renowned codes and
standards, certification and accreditation programs, professional
publications, technical conferences, risk management tools,
government regulatory/advisory, continuing education and professional
development programs. These efforts, guided by ASME leadership
and powered by our volunteer networks and staff, help make the
world a safer and better place today, and for future generations.
Visit www.asme.org
ASME STANDARDS AND CERTIFICATIONS
ASME is the leading international developer of codes and standards
associated with the art, science and practice of mechanical
engineering. Starting with the first issuance of its legendary Boiler &
Pressure Vessel Code in 1914, ASME’s codes and standards have
grown to nearly 600 offerings currently in print. These offerings cover a
breadth of topics, including pressure technology, nuclear plants,
elevators/escalators, construction, engineering design,
standardization, performance testing, and computer simulation and
modeling.
Developing and revising ASME codes and standards occurs
year-round. More than 4,000 dedicated volunteers, including engineers,
scientists, government officials, and others, but not necessarily
members of the Society, contribute their technical expertise to protect
public safety, while reflecting best practices of industry. The results of
their efforts are used in over 100 nations, thus setting the standard for
code development worldwide.
Visit www.asme.org/kb/standards
ASME V&V STANDARDS DEVELOPMENT COMMITTEES
As part of this effort, the following ASME committees coordinate,
promote and foster the development of standards that provide
procedures for assessing and quantifying the accuracy and credibility
of computational models and simulations.
ASME V&V Standards Committee – Verification and Validation in
Computational Modeling and Simulation
SUBCOMMITTEES
ASME V&V 10 – Verification and Validation in Computational Solid
Mechanics
ASME V&V 20 – Verification and Validation in Computational Fluid
Dynamics and Heat Transfer
ASME V&V 30 – Verification and Validation in Computational
Simulation of Nuclear System Thermal Fluids Behavior
ASME V&V 40 – Verification and Validation in Computational
Modeling of Medical Devices
2012-2013 ASME OFFICERS
Marc W. Goldsmith, President
Madiha Kotb, President-Elect
Victoria Rockwell, Past President
Thomas G. Loughlin, Executive Director
ASME STAFF
Ryan Crane, P.E., Project Engineering Manager, Standards &
Certifications
Leslie DiLeo, CMP, Meetings Manager, Events Management
Stacey Cooper, Coordinator, Web Tool/Electronic Proceedings,
Publishing, Conference Publications
Jason Kaplan, Manager, Marketing
HOTEL FLOOR PLAN