Overview of ESMF and Component-based Modeling

Session 1:
Component-based Modeling with ESMF
Instructors: Rocky Dunlap and Fei Liu
NOAA Cooperative Institute for Research in Environmental Sciences
University of Colorado, Boulder
Training at NRL Monterey
August 5-6, 2015
Two Day Overview
• Day 1 morning
– Overview of ESMF and component-based modeling
– Coding exercise: Run a single-component ESMF application
• Day 1 afternoon
– ESMF distributed data classes and wrapping your model data types
with ESMF data types
– Regridding, LocStream
– Coding exercise: Grid/Mesh construction
• Day 2 morning
– Overview of NUOPC interoperability layer
– Coding exercise: Run a NUOPC prototype application
• Day 2 afternoon
– Questions and time for discussion of ESMF/NUOPC in COAMPS and
other NRL modeling applications
Takeaways for You
• What are the reasons and advantages of building
coupled modeling systems from components with
standardized interfaces and data types?
• What steps are required to augment an existing
model to make it compliant with ESMF/NUOPC
conventions?
• How can ESMF and the NUOPC interoperability layer
help you to achieve your scientific goals?
This Session…
• Quick overview of ESMF and NUOPC
• Learn about ESMF Components and their interfaces
• Run a basic ESMF application
A key goal is to provide basic prerequisites for
understanding the NUOPC interoperability layer as it is
now in use at most major modeling centers in the U.S.
Federal Coupled Modeling System Examples

NEMS: NOAA Environmental Modeling System
• Next-generation operational prediction for weather through seasonal time scales
• Some applications in operations now, some in development

NASA GEOS-5 Global Circulation Model and ModelE
• Research in data assimilation techniques and utilization of satellite measurements
• Seasonal forecasting, climate forecasting, creation of reanalysis data sets

CESM: Community Earth System Model
• Research into all aspects of the climate system
• National and international assessments, including participation in the Intergovernmental Panel on Climate Change assessment reports

Navy Forecast Systems (COAMPS, NavGEM)
• Research and operational weather forecasting in support of military operations and national security

[Figure: surface winds from the COAMPS Navy model]
Earth System Prediction Suite

• The Earth System Prediction Suite (ESPS) is a collection of federal and community models and components that use the Earth System Modeling Framework (ESMF) with interoperability conventions called the National Unified Operational Prediction Capability (NUOPC) Layer.
• ESMF standard component interfaces enable major U.S. centers to assemble systems with components from different organizations, and to test a variety of components more easily.
• The multi-agency Earth System Prediction Capability (ESPC) supports adoption efforts.

[Table: ESPS coupled modeling systems and components.
Coupled modeling systems (drivers): NEMS and CFS, COAMPS, NavGEM, GEOS-5, ModelE, CESM.
Atmosphere models: GSM, NMMB, CAM, FIM, GEOS-5 Atmosphere, ModelE Atmosphere, COAMPS Atmosphere, NavGEM, NEPTUNE.
Ocean models: MOM5, HYCOM, NCOM, POP, POM.
Sea ice models: CICE, KISS.
Ocean wave models: WW3, SWAN.
Legend: some components are NUOPC compliant with the technical correctness of data transfers in a coupled system validated; others are partially NUOPC compliant.]

From Theurich et al. 2015, in submission.
Earth System Modeling Framework

The Earth System Modeling Framework (ESMF) was initiated in 2002 as a multi-agency response to calls for common modeling infrastructure.

ESMF provides:
• high performance utilities, including grid remapping, data communications, and model time management
• a component-based architecture for model construction

ESMF has become a standard for federal research and operational models in climate, weather, and space weather.

Metrics:
• ~6000 downloads
• ~100 components in use
• ~3000 individuals on the info mailing list
• ~40 platform/compiler combinations regression tested nightly
• ~6500 regression tests
• ~1M SLOC

https://www.earthsystemcog.org/projects/esmf/
The National Unified Operational Prediction Capability

NUOPC increases model component interoperability by introducing a set of generic component templates for building coupled systems.

NUOPC Generic Components: Driver, Model, Mediator, and Connectors.

[Figure: (a) a simple driver and (b) a schematic of one configuration of the Navy regional model COAMPS]

NUOPC wrappers or “caps” contain translations of native data structures (e.g., grids, field data, time quantities) into ESMF data structures.
Standard Component Interfaces

All ESMF Components have the same three standard methods with the same parameters (these can have multiple phases):
– Initialize
– Run
– Finalize

subroutine MyInit(comp, importState, exportState, clock, rc)
  type(ESMF_GridComp)  :: comp
  type(ESMF_State)     :: importState
  type(ESMF_State)     :: exportState
  type(ESMF_Clock)     :: clock
  integer, intent(out) :: rc

  ! This is where the model specific setup code goes.
  rc = ESMF_SUCCESS
end subroutine MyInit

Interfaces are wrappers and can often be introduced in a non-intrusive and high-performance way. ESMF is designed to coexist with native model infrastructure.
ESMF Grid Remapping

• Uniquely fast, reliable, and general – interpolation weights computed in parallel in 3D space
• Supported grids:
  – Logically rectangular and unstructured grids in 2D and 3D
  – Global and regional grids
• Supported interpolation methods:
  – Nearest neighbor, bilinear, higher order patch recovery, and 1st order conservative methods
  – Options for straight or great circle lines, masking, and a variety of pole treatments
• Multiple ways to call ESMF grid remapping:
  – Generate and apply weights using the ESMF API, within a model (see the sketch after this list)
  – Generate and apply weights using ESMPy, through a Python interface
  – Generate weights from grid files using ESMF_RegridWeightGen, a command-line utility
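A minimal sketch of the first option, calling the regridding API from within a model. The choice of bilinear interpolation is illustrative, and srcField and dstField are assumed to be ESMF_Field objects already built on their respective grids; error checking is omitted.

type(ESMF_RouteHandle) :: routeHandle
integer :: rc

! compute interpolation weights once (the expensive step)
call ESMF_FieldRegridStore(srcField=srcField, dstField=dstField, &
  routehandle=routeHandle, &
  regridmethod=ESMF_REGRIDMETHOD_BILINEAR, rc=rc)

! apply the precomputed weights, typically every coupling step
call ESMF_FieldRegrid(srcField, dstField, routehandle=routeHandle, rc=rc)

! release resources held by the route handle when done
call ESMF_FieldRegridRelease(routehandle=routeHandle, rc=rc)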
[Figures: HOMME cubed sphere grid with pentagons (courtesy Mark Taylor of Sandia); FIM unstructured grid; a regional grid]
Acquiring and Installing ESMF
• Download: https://www.earthsystemcog.org/projects/esmf/download/
• Installation instructions in user guide:
http://www.earthsystemmodeling.org/esmf_releases/public/last/ESMF_usrdoc/
• Requirements: Fortran90/C++ compilers, MPI, GNU cpp, make, Perl
• Optional: LAPACK, NetCDF, parallel-NetCDF, Xerces
$ export ESMF_DIR=/home/user/esmf
$ export ESMF_COMM=openmpi
$ export ESMF_COMPILER=intel
$ export ESMF_INSTALL_PREFIX=/path/to/install
…
$ make -j8
$ make check
$ make install
Getting Help
• ESMF Home Page: https://www.earthsystemcog.org/projects/esmf/
• User Guide:
http://www.earthsystemmodeling.org/esmf_releases/public/last/ESMF_usrdoc/
• Reference Manual:
http://www.earthsystemmodeling.org/esmf_releases/public/last/ESMF_refdoc/
• Code Examples:
https://www.earthsystemcog.org/projects/esmf/code_examples/
• NUOPC Home page: https://www.earthsystemcog.org/projects/nuopc/
• NUOPC Reference Manual:
http://www.earthsystemmodeling.org/esmf_releases/last_built/NUOPC_refdoc/
• NUOPC Prototype Codes:
https://www.earthsystemcog.org/projects/nuopc/proto_codes
Support email is very active: esmf_support@list.woc.noaa.gov
This Session…
• Quick overview of ESMF and NUOPC
• Learn about ESMF Components and their interfaces
• Run a basic ESMF application
A key goal is to provide basic prerequisites for
understanding the NUOPC interoperability layer as it is
now in use at most major modeling centers in the U.S.
“Sandwich” Architecture

The Superstructure layer includes Components with standard interfaces that wrap user code.

The Infrastructure layer includes data classes (Fields built on a Grid, Mesh, or LocStream) that interface with model fields, and utilities for regridding, time management, data I/O, logging, handling metadata, etc.

It is possible to use each layer independently.

[Diagram: user code sandwiched between the Superstructure above and the Infrastructure below]
A Closer Look at ESMF Component Interfaces

All ESMF Component methods have the same parameter list.

subroutine MyInit(comp, importState, exportState, clock, rc)
  type(ESMF_GridComp)  :: comp
  type(ESMF_State)     :: importState
  type(ESMF_State)     :: exportState
  type(ESMF_Clock)     :: clock
  integer, intent(out) :: rc
  …
end subroutine MyInit

comp:        a reference to the component itself (like self in Python)
importState: contains a list of fields coming in
exportState: contains a list of fields going out
clock:       tracks current model time, timestep, stop time
rc:          a return code to indicate success/failure
Components Share Data via Import and Export States

• The only way data moves in or out of a Component is via instances of the ESMF State class (ESMF_State).
• States do NOT prescribe any specific set of model exchange fields. This is determined by the model developer.
• A State is a container for ESMF data types that wrap native model data.
• Model data can be referenced, avoiding duplicates and copies.
• Metadata (e.g., name, coordinates, decomposition) travels with data objects.

Import/Export States enhance the modularity of a model so it can be used in multiple contexts.

[Diagram: data flows from an Import State through a Component to an Export State]
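As an illustration, a component might publish one of its fields through the export State during Initialize. This is a minimal sketch: the grid object grid2d and the field name "sst" are hypothetical, and error checking is omitted.

type(ESMF_Field) :: sst

! wrap native model data in an ESMF Field built on an existing grid
sst = ESMF_FieldCreate(grid2d, typekind=ESMF_TYPEKIND_R8, &
  name="sst", rc=rc)

! make the field visible to other components via the export State
call ESMF_StateAdd(exportState, fieldList=(/sst/), rc=rc)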
Initialize, Run, Finalize
The init, run, and finalize interfaces are designed to be
called from a higher-level component. Most models
already use this basic sequence.
• Initialize
– take any pre-run steps necessary before entering the main time
stepping loop, e.g., read configuration files, allocate memory for
model fields, open data files, set up initial conditions, etc.
– this is also a good place to populate import and export State
objects
• Run
  – where the bulk of model computation takes place
  – the passed-in Clock can be used to determine run length (see the sketch after this list)
• Finalize
  – clean up, deallocate memory, close files, etc.
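A sketch of a Run method that queries the passed-in Clock, consistent with the MyRun entry point registered in the SetServices example that follows. The print statement is illustrative.

subroutine MyRun(comp, importState, exportState, clock, rc)
  type(ESMF_GridComp)  :: comp
  type(ESMF_State)     :: importState
  type(ESMF_State)     :: exportState
  type(ESMF_Clock)     :: clock
  integer, intent(out) :: rc

  type(ESMF_Time)   :: currTime
  character(len=64) :: timeStr

  rc = ESMF_SUCCESS

  ! query the driver-provided Clock for the current model time
  call ESMF_ClockGet(clock, currTime=currTime, rc=rc)
  call ESMF_TimeGet(currTime, timeString=timeStr, rc=rc)
  print *, "Model advancing from ", trim(timeStr)

  ! ... bulk of the model computation for this interval goes here ...
end subroutine MyRun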
SetServices Registers User-implemented Subroutines

Every ESMF Component must define a public SetServices method that registers the initialize, run, and finalize methods for the Component.

subroutine SetServices(comp, rc)
  type(ESMF_GridComp)  :: comp  ! must not be optional
  integer, intent(out) :: rc    ! must not be optional

  ! Set the entry points for standard ESMF Component methods
  call ESMF_GridCompSetEntryPoint(comp, ESMF_METHOD_INITIALIZE, &
    userRoutine=MyInit, rc=rc)
  call ESMF_GridCompSetEntryPoint(comp, ESMF_METHOD_RUN, &
    userRoutine=MyRun, rc=rc)
  call ESMF_GridCompSetEntryPoint(comp, ESMF_METHOD_FINALIZE, &
    userRoutine=MyFinal, rc=rc)

  rc = ESMF_SUCCESS
end subroutine
ESMF Drivers
Because ESMF Components are designed to be called, drivers are
necessary to implement the main thread of control for the application.
• The basic control sequence is:
1. initialize ESMF
2. create the top level Component
3. call its SetServices method
4. call its Initialize method
5. call its Run method
6. call its Finalize method
7. destroy the top level Component
8. finalize ESMF
• It is possible to write a generic Driver, since the sequence above is boilerplate in nature. A minimal concrete driver is sketched below.
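This sketch shows the eight-step control sequence for a single-component application. The component module name ATM, the dates, and the run interval are assumptions; error checking is omitted for brevity.

program AppDriver
  use ESMF
  use ATM, only: SetServices   ! hypothetical user component module
  implicit none

  type(ESMF_GridComp)     :: atmComp
  type(ESMF_Clock)        :: clock
  type(ESMF_Time)         :: startTime, stopTime
  type(ESMF_TimeInterval) :: timeStep
  integer :: rc

  ! 1. initialize ESMF
  call ESMF_Initialize(defaultCalKind=ESMF_CALKIND_GREGORIAN, rc=rc)

  ! build a Clock that the driver passes to the component
  call ESMF_TimeSet(startTime, yy=2015, mm=8, dd=5, rc=rc)
  call ESMF_TimeSet(stopTime,  yy=2015, mm=8, dd=6, rc=rc)
  call ESMF_TimeIntervalSet(timeStep, h=1, rc=rc)
  clock = ESMF_ClockCreate(timeStep, startTime, stopTime=stopTime, rc=rc)

  ! 2. create the top level Component
  atmComp = ESMF_GridCompCreate(name="atm", rc=rc)

  ! 3. call its SetServices method
  call ESMF_GridCompSetServices(atmComp, userRoutine=SetServices, rc=rc)

  ! 4.-6. call its Initialize, Run, and Finalize methods
  call ESMF_GridCompInitialize(atmComp, clock=clock, rc=rc)
  call ESMF_GridCompRun(atmComp, clock=clock, rc=rc)
  call ESMF_GridCompFinalize(atmComp, clock=clock, rc=rc)

  ! 7.-8. destroy the top level Component and finalize ESMF
  call ESMF_GridCompDestroy(atmComp, rc=rc)
  call ESMF_Finalize(rc=rc)
end program AppDriver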
Hierarchical Creation and Invocation of Components

The main program creates and invokes the highest level component. Parent components are responsible for creating child components and invoking their control methods (see the sketch below).

[Diagram: a hierarchy in which the main program drives a top-level component, which in turn drives its child components]
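A hedged sketch of this pattern, as it might appear inside a parent component's Initialize method. The child module name OCN, the petList, and the clock are illustrative assumptions.

use OCN, only: OcnSetServices => SetServices  ! hypothetical child component module
type(ESMF_GridComp) :: ocnComp

! the parent creates the child on a subset (PETs 0-3) of its own VM
ocnComp = ESMF_GridCompCreate(name="ocn", petList=(/0,1,2,3/), rc=rc)
call ESMF_GridCompSetServices(ocnComp, userRoutine=OcnSetServices, rc=rc)

! the parent invokes the child's control methods
call ESMF_GridCompInitialize(ocnComp, clock=clock, rc=rc)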
A Component’s
Computational Resources
• Every ESMF Component is assigned an instance of the Virtual
Machine (VM) class which informs the Component of its available
computational resources.
• The basic elements contained in a VM are Persistent Execution
Threads (PETs).
• A PET is an abstraction that typically maps to an MPI process
(task), but may also map to a Pthread.
[Diagram: an ESMF application in which a Component's VM contains PETs 0-3, each mapped at the OS level to MPI tasks 0-3]
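Inside a component method, the VM can be queried for the resources the component has been given. A minimal sketch, assuming comp is the ESMF_GridComp passed into the method; error checking is omitted.

type(ESMF_VM) :: vm
integer :: localPet, petCount

! retrieve the VM assigned to this component
call ESMF_GridCompGet(comp, vm=vm, rc=rc)

! number of PETs available, and this PET's index
call ESMF_VMGet(vm, localPet=localPet, petCount=petCount, rc=rc)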
Sequential Execution of Components

In this application, there are 9 total PETs available. All Components share the same set of PETs and are executed sequentially. Drawing a line down from any particular PET shows the same execution sequence as the other PETs.

[Diagram: over time, the AppDriver (“Main”) calls Run on GridComp “Hurricane Model”, which in a loop calls Run on GridComp “Atmosphere”, GridComp “Ocean”, and CplComp “Atm-Ocean Coupler”; each Component spans all 9 PETs]
Concurrent Execution of Components

In this application, there are 9 total PETs available. “Atmosphere” and “Ocean” are assigned disjoint PETs and execute concurrently. Drawing a line down from PET 3 shows a different execution sequence than PET 4.

[Diagram: over time, the AppDriver (“Main”) calls Run on GridComp “Hurricane Model”; within its loop, GridComp “Atmosphere” and GridComp “Ocean” run at the same time on disjoint PET sets, followed by CplComp “Atm-Ocean Coupler”]
Distributed Objects Span a Component’s VM

• The scope of distributed objects in ESMF is the VM of the currently executing Component.
• All PETs in the Component VM make the same distributed object creation calls.
• PETs 1-4 each have a copy of OcnField1 and OcnField2 metadata.
• PETs 5-9 each have a copy of AtmField1, AtmField2, and AtmField3 metadata.

[Diagram: the Ocn Component runs on PETs 1-4 of its VM and the Atm Component on PETs 5-9 of its VM; OcnField1 and OcnField2 live in one Component’s States and AtmField1, AtmField2, and AtmField3 in the other’s, with the remaining Import/Export States empty]
Communication Occurs within Components

Here, the coupler Component “Atm-Ocean Coupler” transfers data, in parallel, between “Atmosphere” and “Ocean.” To achieve this, the “Atm-Ocean Coupler” runs on the union of the “Atmosphere” and “Ocean” PETs. Therefore, the “Atm-Ocean Coupler” has a view over the distributed data structures in both Components.

[Diagram: over time, the AppDriver (“Main”) calls Run on GridComp “Hurricane Model”, whose loop runs GridComp “Atmosphere”, GridComp “Ocean”, and CplComp “Atm-Ocean Coupler”; the coupler spans the PETs of both model Components]
New Work: VM and
Accelerator Devices
• ESMF is now accelerator aware.
• Supported accelerator
frameworks:
– OpenCL
– OpenACC
– Intel MIC
• The list of accelerator devices
available to a PET can be
retrieved from the VM.
• Components capable of using
the accelerator device can be
assigned PETs that can access
the device.
[Figure: Comp1 is an accelerated component and Comp2 is a non-accelerated component. Image courtesy Jayesh Krishna/ANL]

Funded by Navy/ESPC.
Before running any code…

There are a few basic things to know before running an ESMF application:
• initializing and finalizing the ESMF framework
• log files
• debugging
• deep and shallow classes
• the ESMF makefile fragment
Initializing and Finalizing ESMF
! must be called once on each PET before any other ESMF methods
! by default, this method calls MPI_Init()
! optionally, an MPI Communicator may be provided
call ESMF_Initialize( &
defaultCalKind=ESMF_CALKIND_GREGORIAN, &
logkindflag=ESMF_LOGKIND_MULTI, rc=rc)
! application driver code…
! all PETs must call this once before exiting
call ESMF_Finalize(endflag=ESMF_END_NORMAL, rc=rc)
ESMF Log Files
By default, ESMF will generate a log file per PET with the name
PET<X>.ESMF_LogFile where X is the PET number.
$ ls
ESMF_MeshTest
ESMF_MeshTest.F90
ESMF_MeshTest.o
Makefile
PET0.ESMF_LogFile
PET1.ESMF_LogFile
PET2.ESMF_LogFile
PET3.ESMF_LogFile
$ cat PET0.ESMF_LogFile
20150802 203237.398 INFO  PET0 Running with ESMF Version 7.0.0 beta snapshot
20150802 203533.159 INFO  PET0 Running with ESMF Version 7.0.0 beta snapshot
20150802 204835.635 INFO  PET0 Running with ESMF Version 7.0.0 beta snapshot
20150802 204835.644 ERROR PET0 ESMCI_DistGrid_F.C:457 c_esmc_distgridget() Argument sizes do not match - - 2nd dim of indexCountPDimPDe array must be of size 'deCount'
20150802 204835.644 ERROR PET0 ESMF_DistGrid.F90:2783 ESMF_DistGridGetDefault() Argument sizes do not match - Internal subroutine call returned Error
Debugging
Consistent checking of return codes and providing log
messages with filenames and line numbers helps with
debugging.
field = ESMF_FieldCreate(grid2d, typekind=ESMF_TYPEKIND_R8, &
name="temperature", &
totalLWidth=(/1,1/), &
totalUWidth=(/1,1/), &
rc=rc)
! check return code, write to log if not ESMF_SUCCESS
if (ESMF_LogFoundError(rc, msg="Error creating field", &
file=__FILE__, line=__LINE__)) &
return
Shallow and Deep Classes
• Deep classes involve internal memory allocations and
can take time to set up. Explicit calls to deallocate
resources are required.
ESMF_GridCreate()
ESMF_GridDestroy()
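ESMF_GridCreate() has several variants; a minimal sketch of the create/use/destroy pattern using one of them (the grid dimensions are illustrative):

type(ESMF_Grid) :: grid
grid = ESMF_GridCreateNoPeriDim(maxIndex=(/100,50/), rc=rc)  ! deep: allocates internal memory
! ... use the grid ...
call ESMF_GridDestroy(grid, rc=rc)  ! resources must be explicitly released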
• Shallow classes are simply declared and their values set. Their memory is deallocated automatically.

type(ESMF_TimeInterval) :: timeStep
call ESMF_TimeIntervalSet(timeStep, s=3600, rc=rc)
ESMF Makefile Fragment
ESMF applications are typically built using make.
It is common for ESMF application makefiles to import the
makefile fragment generated when the ESMF library was
built.
The convention is to set the location of the ESMF makefile
fragment using the environment variable ESMFMKFILE.
$ export ESMFMKFILE=/install/location/lib/libO/Linux.gfortran.64.openmpi.default/esmf.mk
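As an illustration, an application makefile might use the fragment like this. The ESMF_F90* variables are defined in esmf.mk by the ESMF build; the target and source names are assumptions.

# import compiler and linker settings from the ESMF installation
include $(ESMFMKFILE)

app: AppDriver.F90
	$(ESMF_F90COMPILER) $(ESMF_F90COMPILEPATHS) -o $@ $< \
	  $(ESMF_F90LINKPATHS) $(ESMF_F90LINKRPATHS) $(ESMF_F90ESMFLINKLIBS)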
This Session…
• Quick overview of ESMF and NUOPC
• Learn about ESMF Components and their interfaces
• Run a basic ESMF application
A key goal is to provide basic prerequisites for
understanding the NUOPC interoperability layer as it is
now in use at most major modeling centers in the U.S.
Obtaining Test Code
$ git clone https://github.com/rsdunlapiv/esmf_training.git
or download a Zip archive from:
https://github.com/rsdunlapiv/esmf_training
Running Eclipse

[Screenshot: the Eclipse development environment]
SingleCompWithDriver

Application Driver (AppDriver.F90):
1. initialize ESMF
2. create a Component
3. call init, run (in a loop), finalize
4. finalize ESMF

Component (atm.F90):
1. on init, print a message
2. on run, print model time
3. on final, print a message

[Diagram: the driver invokes the Component’s init, run (looped), and final methods]
Extra Slides
Session 1 Objectives
• Understand what is meant by ESMF Superstructure and
Infrastructure and their relationship to “user code”
• Understand what it means to be an ESMF Component
• Understand the three basic user-defined methods used
in all ESMF Components: initialize, run, and finalize.
• Understand how ESMF represents computational
resources as Persistent Execution Threads (PETs)
• Understand the purpose of a Component’s SetServices
method
• Understand how ESMF Components share data through
State objects
“Don’t call us,
we’ll call you!”
• An ESMF Component’s initialize, run, and
finalize methods are designed to be called by a
higher-level Component or program.
• This is so that the same Component can participate in
multiple applications (with different drivers).
• ESMF Infrastructure classes (e.g., Field, Grid) are still called
within user code.
[Diagram: a higher-level driver calls down into the Component’s init, run, and final methods]
Component Design Strategy:
Modularity
ESMF Components don’t have direct access to the internals of other
Components, and don’t store any coupling information. Components
pass their States to other Components through the standard
subroutine arguments importState and exportState.
Since components are not hard-wired into particular configurations and
do not carry coupling information, components can be used more
easily in multiple contexts.
[Diagram: the same ATM Component reused in an NWP application, in seasonal prediction, and standalone for basic research]
Component Design Strategy:
Flexible Composition
Components can be assembled in a variety of
coupling configurations. In this way, the
application architecture becomes flexible.
Users decide on their own control flow.
[Diagrams: a pairwise coupling configuration, in which components such as Atm and Lnd are linked directly through couplers like Atm2Lnd and exchange data with data components, and a hub-and-spoke configuration, in which Atm, Lnd, Ocn, and SeaIce connect through a central Coupler]
Component Design Strategy:
Hierarchical Applications
Since each ESMF application is also a Gridded Component, entire ESMF
applications can be nested within larger applications. This strategy can be
used to systematically compose very large, multi-component codes.
Hierarchical composition, therefore, is a way to deal with complexity as the
number of components grows.