
Adaptive Multimodel Simulation Infrastructure (AMSI)

Proposal No. 13-1208

Investment Area: Computing and Information Sciences

Project Intent: Create

Duration: 3 years

Classified: N

Principal Investigator: Glass, Michael W. Org: 01545

Project/Portfolio Manager: Galpin, Terri L. Org: 01545

1. Overview/Abstract (310 word limit for sections 1.1 and 1.2 combined)

1.1 Problem Statement - This work is in collaboration with Rensselaer Polytechnic Institute.

At present, implementing a massively parallel multi-scale/multi-physics simulation requires expertise not only in the physical domain of interest, but also in parallel programming and software engineering. While there have been attempts to construct software frameworks to ease the construction of such simulations, these typically require that component codes adhere to specific interfaces and data structures. Incorporating the software components used in these simulations can therefore demand a great deal of expertise; the effort required to integrate a component into such a framework can conceivably exceed the effort required to encode and initialize the physical problem of interest in the framework itself.

A simulation infrastructure will be developed to facilitate the implementation of multimodel adaptive simulations while allowing the incorporation of proven legacy components. This infrastructure will leverage high-level programming techniques and a variation on typical component-based architectures to ease the integration of legacy software components into the infrastructure. By removing the requirement of low-level programming expertise to construct these simulations, domain experts and industry-level users will be able to re-use proven legacy software components to simulate the physical phenomena of importance to them.

1.2 Creative and Innovative Nature of R&D

The design and implementation of a system to combine low-level code components with arbitrary functional interfaces and data structures poses a number of mathematical, logical, and computational challenges, especially in conjunction with the usability goals stated above. At the same time, the primary concerns in the construction of physical simulations are efficiency, performance, and accuracy, so the low-level systems underlying the high-level computation stages pose an entirely different set of implementation challenges.

Ultimately, because these areas of challenge are motivated by entirely different requirements on the infrastructure, there is the further challenge of balancing those requirements against one another in the inevitable cases where they clash.

2. Proposed R&D

2.1 Technical Approach and Leading Edge Nature of Work:

The implementation of parallel adaptive multi-model simulations using the Adaptive Multimodel Simulation Infrastructure (AMSI) will rely heavily on a mechanism to adequately describe the hierarchy of information transfers and transformations. Multi-scale designs must allow for the understanding and quantification of the coupling between simulation scales, including determination of the sensitivities of the coupled variables [Choi08]. An effective set of abstractions will be developed for the mathematical and computational formalisms required not only to develop a single-scale physical simulation, but also to construct bridging mechanisms between multiple simulation scales/models. These abstract descriptive mechanisms must have corresponding infrastructure systems/facilities, ranging from simple control functionalities to high-performance data tunnels, all of which must provide implementations suitable for use in an exascale environment.
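As a rough illustration of what such a descriptive mechanism might look like, the sketch below models scales as nodes of a directed graph and information transfers as edges carrying named quantities. All class and method names here are hypothetical illustrations, not part of AMSI:

```python
# Hypothetical sketch of a multiscale coupling description: scales are
# nodes, and directed "transfers" carry named quantities between them.
# Names are illustrative only, not AMSI's actual API.

class ScaleCoupling:
    """A directed graph of simulation scales and the data passed between them."""

    def __init__(self):
        self.scales = set()
        self.transfers = []  # list of (source, target, quantity, transform)

    def add_scale(self, name):
        self.scales.add(name)

    def add_transfer(self, source, target, quantity, transform=lambda x: x):
        assert source in self.scales and target in self.scales
        self.transfers.append((source, target, quantity, transform))

    def push(self, source, quantity, value):
        """Apply every transfer of `quantity` leaving `source`; return results."""
        return {target: transform(value)
                for (src, target, q, transform) in self.transfers
                if src == source and q == quantity}

# Usage: a macro continuum scale receives stress values from a micro scale.
coupling = ScaleCoupling()
coupling.add_scale("macro")
coupling.add_scale("micro")
coupling.add_transfer("micro", "macro", "cauchy_stress",
                      transform=lambda s: s)  # identity; could average, etc.
```

A real infrastructure would attach parallel data-movement machinery to each edge; the point of the abstraction is that the transfer topology is declared separately from how any one scale stores its data.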

A key challenge in the development of the parallel control of AMSI is that different components of a multi-scale/multi-physics simulation will be executed using different underlying parallel data structures. AMSI implementation on massively parallel computers will be supported by comprehensive component interfaces which allow simulation components to interact without regard to the underlying data structures in use by a component [Jansen10]. Components interacting through purely functional interfaces have been used to support the development of preliminary adaptive multi-scale simulations [Shephard09].
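The idea of components interacting without regard to one another's underlying data structures can be sketched as follows; the interface and class names are illustrative assumptions, not actual AMSI or [Jansen10] interfaces:

```python
# Illustrative sketch: two components store a field in different layouts,
# but a consumer interacts with either only through a functional interface,
# so it never sees (or depends on) the underlying data structure.

from abc import ABC, abstractmethod

class FieldProvider(ABC):
    @abstractmethod
    def value_at(self, point_id):
        """Return the field value at a discrete point."""

class DictBackedField(FieldProvider):
    def __init__(self, values):
        self._values = dict(values)      # sparse id -> value layout
    def value_at(self, point_id):
        return self._values[point_id]

class ListBackedField(FieldProvider):
    def __init__(self, values):
        self._values = list(values)      # contiguous array layout
    def value_at(self, point_id):
        return self._values[point_id]

def integrate(field, point_ids):
    # The consumer touches only the interface, never the storage.
    return sum(field.value_at(p) for p in point_ids)
```

Either backing store can be swapped in without changing the consumer, which is the property the comprehensive functional interfaces are meant to guarantee at scale.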

While component interaction will use the comprehensive functional interfaces of components to provide parallel interaction techniques, AMSI will not subscribe solely to the semantics of blocking function calls for component interaction. General component interaction semantics will be developed to allow both synchronous and asynchronous component interactions, along with any other interface interaction semantics necessary to facilitate different parallel component interaction methodologies.
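A minimal sketch of such dual synchronous/asynchronous invocation semantics follows; the names are hypothetical, and AMSI itself would coordinate parallel components rather than Python threads:

```python
# Sketch of dual interaction semantics: the same component service can be
# invoked synchronously (blocking) or asynchronously (returning a future
# resolved later). Hypothetical names, for illustration only.

from concurrent.futures import ThreadPoolExecutor

class ComponentService:
    def __init__(self, fn):
        self._fn = fn
        self._pool = ThreadPoolExecutor(max_workers=1)

    def call(self, *args):
        """Blocking (synchronous) invocation."""
        return self._fn(*args)

    def call_async(self, *args):
        """Non-blocking invocation; returns a future the caller resolves later."""
        return self._pool.submit(self._fn, *args)

solver = ComponentService(lambda a, b: a + b)
sync_result = solver.call(2, 3)       # blocks until the service completes
future = solver.call_async(2, 3)      # returns immediately; caller may overlap work
async_result = future.result()        # rendezvous point
```

The asynchronous path is what allows, for example, many micro-scale evaluations to be launched and overlapped rather than serialized behind blocking calls.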

Since the precise combination of models, methods, and discretizations that will produce optimal results is not typically known before a multi-scale/multi-physics simulation is implemented, the abstractions developed to describe the required mathematical and computational formalisms must also allow for the inclusion of adaptive control processes and discretization control schemes; only then can truly versatile and useful multi-model simulations be constructed.
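The kind of adaptive control process the abstractions must admit can be reduced to a solve/estimate/refine loop. The following toy sketch (hypothetical names; the "simulation" is a midpoint-rule quadrature refined by doubling the sample count) shows the shape such a control process takes:

```python
# A minimal solve-estimate-adapt loop, sketching the adaptive control
# processes the simulation abstractions would need to accommodate.

def adaptive_solve(solve, estimate_error, refine, state, tol, max_iters=20):
    """Repeat solve/estimate/refine until the error estimate meets `tol`."""
    for _ in range(max_iters):
        solution = solve(state)
        err = estimate_error(solution)
        if err <= tol:
            break
        state = refine(state)          # e.g. refine the discretization
    return solution, err

# Toy problem: approximate the integral of x^2 on [0,1] with n midpoint
# samples; "refinement" doubles n. The exact value 1/3 serves as the
# error estimator for this toy only.
def midpoint_integral(n, f=lambda x: x * x, a=0.0, b=1.0):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

solution, err = adaptive_solve(
    solve=midpoint_integral,
    estimate_error=lambda s: abs(s - 1.0 / 3.0),
    refine=lambda n: 2 * n,
    state=4, tol=1e-6)
```

In a real multi-model setting, `solve` would be a coupled simulation step, the estimator an a posteriori error indicator, and `refine` a mesh or model adaptation; the control loop itself is what the abstraction must be able to express.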

Current work at RPI's Scientific Computation Research Center (SCOREC) with the FASTMath SciDAC institute involves collaborative efforts with Sandia personnel to introduce adaptive methods developed at SCOREC into the Albany code, which makes use of the Trilinos framework. Additionally, SCOREC anticipates upcoming collaborations with Zoltan developers located at Sandia working toward better dynamic load balancing schemes for multi-model simulations. These collaborations will provide key insights into the real-world requirements to which adaptive processes must adhere to be of use, as well as information about cutting-edge simulation system abstraction practices and concepts in HPC environments.

The standard approaches taken to facilitate component interaction in commercial frameworks – with the accepted practice of implementing standard functional interfaces – are likely insufficient to provide the flexible communication patterns required by the abstracted formal interactions that will be developed and used in AMSI.

Typical component frameworks as developed and deployed in industry – such as EJB [Sun], COM/DCOM [Microsoft08], and CORBA [OMG11] – focus primarily on interoperability of components, as opposed to the efficiency of the resulting application. A focus purely on interoperability, however, is non-optimal for high-performance computing – particularly in a massively parallel (exascale) setting – where both the performance and the cooperation of disparate components are essential, in terms of time-to-solution as well as the parallel scalability and structure of the resulting implementation.

Standards-based functional interface requirements defining component interactions are, through direct method invocation and low-level code interactions, the most efficient means of component interaction. Unfortunately, this approach also requires a high degree of software engineering and parallel programming knowledge: directly implementing 'glue' code to define the interactions between components is the most development-heavy method.
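The development burden of hand-written glue code can be seen in even a trivial example. In the sketch below, two entirely hypothetical components disagree on data layout, so a translation layer must be written by hand; every pairing of components needs its own such layer:

```python
# Hand-written "glue" between two hypothetical components with mismatched
# conventions: a mesher producing (node_id, (x, y)) pairs, and a solver
# expecting a flat coordinate array plus a separate id->row map.

def mesher_output():
    # component A: list of (node_id, (x, y)) pairs
    return [(10, (0.0, 0.0)), (11, (1.0, 0.0)), (12, (0.0, 1.0))]

def solver_input(coords_flat, id_to_index):
    # component B: stand-in check that the flat layout is consistent
    return len(coords_flat) // 2 == len(id_to_index)

def glue(mesh_nodes):
    """Translate component A's layout into component B's layout."""
    id_to_index, coords_flat = {}, []
    for row, (node_id, (x, y)) in enumerate(mesh_nodes):
        id_to_index[node_id] = row
        coords_flat.extend((x, y))
    return coords_flat, id_to_index

coords, index_map = glue(mesher_output())
ok = solver_input(coords, index_map)
```

Multiplied across many components, data structures, and parallel distributions, this per-pair translation work is precisely the low-level effort AMSI aims to lift out of the domain expert's hands.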

AMSI will provide mechanisms for component interaction that are related to high-level abstract formalisms of a simulation, allowing construction of simulation codes by domain experts rather than software engineers, while prioritizing the performance capabilities of these interactions.

Attempts at abstracting component interface specifications using Interface Definition Languages (IDLs) have allowed some liberation from language-specific interface interoperability and have allowed for more flexible component structuring. The Common Component Architecture (CCA) Forum has used such languages to construct the SIDL/Babel technologies [Epperly11], providing the possibility of the more flexible interactions noted above.

Unfortunately, while this approach has some positive features and is more powerful than many other approaches – such as straightforward standards-based functional interface adaptation – these tools have proven to introduce undesirable levels of inefficiency into component interactions. AMSI intends to provide a set of component interoperability tools of equal or greater flexibility and power, but with a focus on performance and scalability.

Even components adapted into a simulation framework using the SIDL/Babel technologies should still promote certain methods of access to and manipulation of the underlying component, if not an exactly standardized interface (such as the ITAPS meshing interfaces).

Initial component interaction developments stemming from earlier work conducted at SCOREC have been used to introduce the PETSc linear system solver – configured with the SuperLU_DIST package rather than the built-in Krylov solution methods – into a multi-scale biotissue simulation model currently in development. The simulation couples a macro-scale continuum simulation to a number of micro-scale simulations used to provide Cauchy stress values at numerical integration points. Introduction of this solver, along with additional development and modification of the simulation system, has moved the micro-scale simulation associated with the overall multi-scale simulation into the parallel execution space. Current work is geared toward increasing the overall parallelization of the underlying simulation systems coordinating component interactions (primarily between the unstructured mesh database components provided by the Simmetrix meshing libraries and the PETSc algebraic system solver).

Lessons taken from this development will be used to further the development of AMSI concepts.
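The macro/micro coupling pattern described above can be sketched in miniature. The following is purely illustrative: the stand-in `micro_stress` function and all names are hypothetical, not the biotissue code's API:

```python
# Sketch of the macro/micro coupling pattern: at each macro-scale numerical
# integration point, a micro-scale computation returns a Cauchy stress
# value for the continuum simulation. The micro "solve" here is a toy
# linear response standing in for a micro-scale simulation.

def micro_stress(strain):
    # stand-in for a micro-scale solve returning Cauchy stress
    return 2.0 * strain

def macro_step(strains_at_integration_points, micro_solve):
    """Gather a micro-scale stress for every macro integration point."""
    return [micro_solve(e) for e in strains_at_integration_points]

stresses = macro_step([0.1, 0.2, 0.3], micro_stress)
```

Because each integration point's micro solve is independent, the per-point calls are exactly where the parallel execution described above is gained: each can be dispatched to its own processor group rather than evaluated in sequence.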

[Choi08] Choi, H.-J., McDowell, D.L., Allen, J.K., Rosen, D. and Mistree, F. "An inductive design exploration method for robust multiscale materials design", J. Mechanical Design, Vol. 130, 031402 (13 pages), March 2008.

[Epperly11] Epperly, T.G.W., Kumfert, G., Dahlgren, T., Ebner, D., Leek, J., Prantl, A. and Kohn, S. "High-performance language interoperability for scientific computing through Babel", International Journal of High Performance Computing Applications (IJHPCA), LLNL-JRNL-465223, DOI: 10.1177/1094342011414036, 2011.

[Jansen10] Jansen, K.E., Sahni, O., Ovcharenko, A., Shephard, M.S. and Zhou, M. "Adaptive computational fluid dynamics: Petascale and beyond", Proc. SciDAC 10, 2010.

[Microsoft08] "Distributed Component Object Model (DCOM) Remote Protocol Specification", Microsoft, October 2008.

[OMG11] "Common Object Request Broker Architecture (CORBA/IIOP)", Object Management Group, November 2011. http://www.omg.org/spec/CORBA/3.2/

[Shephard09] Shephard, M.S., Nuggehally, M.A., FranzDale, B., Picu, C.R., Fish, J., Klaas, O. and Beall, M.W. "Component Software for Multiscale Simulation", Bridging the Scales in Science and Engineering, Oxford University Press, pp. 393-421, 2009.

[Sun] DeMichiel, L. and Michael, K. "EJB Core Contracts and Requirements", Sun Microsystems.

2.2 Key R&D Goals and Project Milestones:

Component Interoperability Tools:
  Code metadata requirements – August, 2012
  Component metadata requirements – September, 2012
  Code-coupling mechanisms – December, 2012
  Simple component coupling mechanisms – January, 2013

Core Simulation Abstraction:
  Simple simulation abstraction development – January, 2013
  Mathematical and computational formalism abstraction – February, 2013
  Initial parallel interaction models/abstractions – April, 2013

Annual Status Report:
  Develop and submit report – May, 2013

Multi-model Simulation Abstraction:
  Standard adaptive semantics abstraction specification – June, 2013
  Standard control systems abstraction specification – July, 2013
  Complex simulation abstraction development – September, 2013

Simulation Construction:
  Component instantiation and configuration systems – September, 2013
  Formalism to component service mapping mechanisms – November, 2013
  Abstraction to component mapping mechanisms – January, 2014

Annual Status Report:
  Develop and submit report – May, 2014

AMSI Interface Development:
  Develop interface specification – June, 2014
  Implement initial user interfaces – July, 2014

Automated Component Coupling:
  Simple transformational inference system – July, 2014
  Metadata generation and inference system – August, 2014
  Complex component coupling mechanisms – September, 2014

Automated Simulation Construction/Configuration:
  Component service inference coupling – October, 2014
  Component requirement satisfaction inference – November, 2014

Final Report / Thesis:
  Initial writing and compilation of materials – October, 2014
  Revisions and additional development work – Oct-May
  Final contributions and refinements – May, 2015
  Final compilation and presentation – July, 2015

2.3 Technical Risk and Likelihood of Success:

N/A for Campus Executive Project

3. Resources

3.1 Key Research Team Members:

N/A for Campus Executive Project

3.2 Qualifications of the Team to Perform This Work:

N/A for Campus Executive Project

3.3 Budget

3.3.1 Budget Breakdown:

N/A for Campus Executive Project

3.3.2 Capital Purchases and Subcontracts:

N/A for Campus Executive Project

4. Strategic Alignment and Potential Benefit

4.1 Relevance to DOE and National Security Missions (100 word limit):

This research plan builds on, and links to, the DOE's Scientific Discovery Through Advanced Computing (SciDAC) FASTMath Institute, of which both Sandia and Rensselaer are integral parts. In particular, the aim of this research is to take the FASTMath work on the development of next-generation computational mathematics libraries targeted for exascale computers and have them execute effectively in a multimodel computing infrastructure that can address key multiscale problems related to this nation's energy and security needs.

4.2 Anticipated Outputs and Outcomes:

N/A for Campus Executive Project

4.3 Programmatic Benefit to Investment Area, if Successful:

N/A for Campus Executive Project

4.4 Communication of Results:

N/A for Campus Executive Project
