CWO-002 FY03: Infrastructure Development for Regional Coupled Modeling Environments
Final Report
September 30, 2003

Table of Contents

SECTION I
1 Title
2 Authors
3 Keywords
4 Abstract

SECTION II
1 Introduction
2 Model Coupling
3 WRF/NCOM/SWAN Coupling through MCEL Implementation of API
  3.1 MCEL overview
  3.2 MCEL implementation of I/O and Coupling API
  3.3 Coupling results
4 WRF/ROMS Coupling through (MCT) Implementation of API
  4.1 Concurrent Coupling on a Single Machine
  4.2 Concurrent Coupling using Two Machines over a Computational Grid
  4.3 Sequential Coupling on a Single Machine
  4.4 WRF/ROMS Simulation of Idealized Hurricane

ACKNOWLEDGEMENTS

APPENDIX A: WRF I/O AND MODEL COUPLING API SPECIFICATION
1 Specification Overview
  1.1 I/O API subroutines
  1.2 Optional begin/commit semantics for open
  1.3 Random access
  1.4 Sequential access
  1.5 Model coupling through the WRF I/O API
  1.6 Multi-threading
  1.7 Parallelism
  1.8 Arguments to WRF I/O API routines
    1.8.1 Data Handles
    1.8.2 System Dependent Information
    1.8.3 Data names and time stamps
    1.8.4 Field Types
    1.8.5 Communicators
    1.8.6 Staggering
    1.8.7 Memory order
    1.8.8 Dimension names
    1.8.9 Domain Start/End
    1.8.10 Patch Start/End
    1.8.11 Memory Start/End
    1.8.12 Status codes
2 WRF I/O and Model Coupling API Subroutines
  2.1 ext_pkg_ioinit
  2.2 ext_pkg_ioexit
  2.3 ext_pkg_inquiry
  2.4 ext_pkg_open_for_read
  2.5 ext_pkg_open_for_read_begin
  2.6 ext_pkg_open_for_read_commit
  2.7 ext_pkg_open_for_write
  2.8 ext_pkg_open_for_write_begin
  2.9 ext_pkg_open_for_write_commit
  2.10 ext_pkg_inquire_opened
  2.11 ext_pkg_open_for_update
  2.12 ext_pkg_ioclose
  2.13 ext_pkg_read_field
  2.14 ext_pkg_write_field
  2.15 ext_pkg_get_next_var
  2.16 ext_pkg_end_of_frame
  2.17 ext_pkg_iosync
  2.18 ext_pkg_inquire_filename
  2.19 ext_pkg_get_var_info
  2.20 ext_pkg_set_time
  2.21 ext_pkg_get_next_time
  2.22 ext_pkg_get_var_ti_type set of routines
  2.23 ext_pkg_put_var_ti_type set of routines
  2.24 ext_pkg_get_var_td_type set of routines
  2.25 ext_pkg_put_var_td_type set of routines
  2.26 ext_pkg_get_dom_ti_type set of routines
  2.27 ext_pkg_put_dom_ti_type set of routines
  2.28 ext_pkg_get_dom_td_type set of routines
  2.29 ext_pkg_put_dom_td_type set of routines
  2.30 ext_pkg_warning_string
  2.31 ext_pkg_error_string

APPENDIX B: MCT REFERENCE IMPLEMENTATION
1 ext_mct_ioinit
2 ext_mct_ioexit
3 ext_mct_inquiry
4 ext_mct_open_for_read_begin
5 ext_mct_open_for_read_commit
6 ext_mct_open_for_write_begin
7 ext_mct_open_for_write_commit
8 ext_mct_read_field
9 ext_mct_write_field

APPENDIX C: MCEL REFERENCE IMPLEMENTATION

SECTION I

1 Title: Infrastructure Development for Regional Coupled Modeling Environments

2 Authors:
John Michalakes, Senior Software Engineer, National Center for Atmospheric Research, 3450 Mitchell Lane, Boulder, Colorado 80301, (303) 497-8199, Fax: (303) 497-8181, michalak@ucar.edu
Matthew Bettencourt, Center for Higher Learning / U. Southern Miss.
Daniel Schaffer, Forecast Systems Laboratory / CIRA
Joseph Klemp, National Center for Atmospheric Research
Robert Jacob, Argonne National Laboratory
Jerry Wegiel, Air Force Weather Agency

3 Keywords: model coupling, software frameworks, CWO, CDLT, CE

4 Abstract:
Understanding and prediction of geophysical systems have moved beyond the capabilities of single-model simulation systems into the realm of multi-model, multi-scale, interdisciplinary systems of interacting coupled models. Terascale computing systems and high-speed grid-enabled networks enable high-resolution multi-model simulation at regional and smaller scales. Exploiting such capabilities is contingent, however, on the ability of users to easily and effectively employ this power in research and prediction of hurricane intensification, ecosystem and environmental modeling, simulation of air quality and chemical dispersion, and other problems of vital concern. This project, funded under the DoD High Performance Computing Modernization Program PET, has developed and demonstrated flexible, reusable software infrastructure for high-resolution regional coupled modeling systems that abstracts the details and mechanics of inter-model coupling behind an application program interface (API). As discussed in our original proposal, effective software infrastructure for high-performance simulation addresses three levels of the problem: managing shared- and distributed-memory parallelism within individual component models (L1), efficiently translating and transferring forcing data between coupled component grids each running in parallel (L2), and presenting the simulation system as a problem solving environment (L3). Intra-model (L1) software infrastructure is provided by the Advanced Software Framework (ASF) already developed for the Weather Research and Forecast (WRF) model. Under this project we have developed L2 infrastructure: a detailed specification for an I/O and coupling API and two reference implementations of the API.
Using these reference implementations, we have assembled, run, and benchmarked proof-of-concept coupled Climate-Weather-Ocean (CWO) applications. The results demonstrate that the regional coupling infrastructure developed under this project (1) is flexible and interoperable over the range of target component applications, API implementations, and computing platforms, including computational grids; (2) is efficient, introducing little additional run-time overhead from the coupling mechanism itself; and (3) can therefore provide significant benefits to the DoD CWO research community.

SECTION II

1 Introduction

The requirements for understanding and prediction of geophysical systems have moved beyond the capabilities of single-model simulation systems into the realm of multi-model, multi-scale, interdisciplinary systems of interacting coupled models:

In the realm of geophysical modeling the current state-of-the-art models have the capability to run at very high spatial resolutions. This capability has led to a drastic increase in the accuracy of the physics being predicted. Due to this increased numerical accuracy, once neglected effects, such as non-linear feedback between different physical processes, can no longer be ignored. The ocean's deep water circulation, surface gravity waves, and the atmosphere above can no longer be treated as independent entities and must be considered a single coupled system.[1]

Terascale computing systems and high-speed grid-enabled networks are enabling high-resolution multi-model simulation at regional and smaller scales. Exploiting such capabilities is contingent, however, on the ability of users to easily and effectively employ this power in research and prediction of hurricane intensification, ecosystem and environmental modeling, simulation of air quality and chemical dispersion, and other problems of vital concern.
This project, funded under the DoD High Performance Computing Modernization Program PET, has developed and demonstrated flexible, reusable software infrastructure for high-resolution regional coupled modeling systems that abstracts the details and mechanics of inter-model coupling behind an application program interface (API) that also serves as the API to I/O and data format functionality. The application need not know or care about the entity on the other side, whether a self-describing dataset on disk or another coupled application. Moreover, the underlying mechanisms for parallel transfer, remapping, masking, aggregation, and caching of inter-model coupling data are implemented behind the API and not within the application code itself. Thus, a computer/network-specific or application-specific library implementation of the API may be substituted at link time without modifying the application source code, and developers of API implementations may be assured that their coupling software will be usable by any application that calls the API. Lastly, component models are better positioned to make use of other packages and frameworks for model coupling, such as the NOAA/GFDL Flexible Modeling System (FMS) and, when it becomes available, the NASA Earth System Modeling Framework (ESMF).

Under this project we have developed a detailed specification for an I/O and Coupling API; the specification is provided as Appendix A. We have also developed two reference implementations of the API: one using the Model Coupling Toolkit (MCT), a coupling library developed for the Community Climate System Model (CCSM) project that supports tightly coupled, a priori scheduled interactions between components; and one using the Model Coupling Environment Library (MCEL), developed as part of the High Fidelity Simulation of Littoral Environments (HFSoLE) project to support loosely coupled, peer-to-peer, data-driven interactions between components within coupled ecosystem models.
The MCT implementation of the I/O and Coupling API is described in Appendix B; the MCEL implementation in Appendix C. Finally, we have assembled, run, and benchmarked proof-of-concept coupled applications using the MCT and MCEL implementations. These are described in the sections that follow. In the next section, an overview of model coupling is provided. Section 3 describes an atmosphere/coastal ocean/wave model coupling that uses the MCEL implementation of the I/O and Coupling API to exchange surface fields between WRF, NCOM, and SWAN components. Section 4 describes an atmosphere/ocean coupled system that uses the MCT implementation of the I/O and Coupling API to exchange surface fields between the WRF model and the Rutgers Ocean Modeling System (ROMS).

2 Model Coupling

There are two modes for coupling between models of different geophysical processes: subroutinization and componentization. In subroutinized coupling, the coupled model -- e.g., a physical parameterization such as a microphysics scheme, a land surface model, or a chemistry model -- appears as an integral part of the driving model, such as a model of atmospheric dynamics. Synchronization and communication are through calls and argument lists from the driver model to the components.

[1] "A Distributed Model Coupling Environment for Geophysical Processes," Bettencourt, Sajjadi, Fitzpatrick, Navigator Online.
Computationally, subroutinized coupling is often the most efficient mode of coupling because the cost is that of a subroutine call. However, it requires that one model be in control, it severely limits opportunities for concurrency between components, and it requires that components be highly code compatible, so that components developed independently by other groups may be difficult to integrate without significant recoding that breaks the heritage of the originally developed code.

[Figure XX: taxonomy of coupling approaches -- subroutine coupled, sequential coupled, concurrent coupled, scheduled, and peer-to-peer -- with examples including model physics, model nesting, WRF LSM, MM5-Chem, CCSM, PCM, COAMPS, HFSoLE, and Models-3.]

Componentized coupling, which the infrastructure developed under this project addresses, maintains the component codes in close to their original forms, providing considerably more flexibility and opportunities for ad hoc switching between different coupled components, at the potential cost of some execution efficiency. However, atmospheric, ocean, and other component models typically exchange a relatively small amount of data (two-dimensional fields at their boundaries) at coupling intervals that are long relative to the amount of computation the components perform between exchanges. Componentized coupling presents opportunities for concurrent execution of components. Finally, since the mechanics of coupling can be placed outside the components, considerable optimization can occur behind the I/O and Coupling API without disturbing the component codes themselves.

Componentized coupling encompasses two modes of execution, sequential and concurrent, both of which are supported by the coupling infrastructure developed under this project. Sequentially coupled components (left side of Figure XX) execute in sequence over the span of the coupling interval. Interpolation and movement of forcing data between coupled components occurs between executions of the components.
On parallel systems, all the components are decomposed over the complete set of processors, and the interpolation and movement of data involve interprocessor communication to map data from the forcing component's grid to the forced component's grid. Since only one component executes at a time, sequential coupling allows components to execute in exact phase with each other, but efficient execution on parallel computers requires that all components be parallelized; scalability is limited by the least scalable component. Software infrastructure that supports sequential coupling must be able to receive data from a component and store it for receipt by the destination component without deadlock.

Concurrently coupled components execute concurrently on separate sets of processors, sending and receiving coupling data between the processor sets (right side of Figure XX). Components with different execution speeds can be paced by assigning varying numbers of processors to each component, addressing load imbalance. Non-parallel or modestly parallel components are able to interact with fully parallelized components. Two-way coupled components -- that is, pairs of components that both provide data to and depend on data from their peer -- cannot execute at the same time over the same interval of the simulation. Either a phase shift must be introduced (for example, an ocean model would run one coupling interval behind an atmospheric model -- generally acceptable in climate simulations) or the components must be serialized.

[Figure XX: schematic of sequential (left) and concurrent (right) coupling, showing Atmos, Land, Ice, and Ocean components and a coupler laid out over processors and run time.]
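To make the phase-shift option concrete, the following minimal Python sketch (function names and values are hypothetical, not part of the API) serializes what would in practice run concurrently: in each coupling interval the atmosphere consumes the SST produced in the previous interval, and the ocean consumes the forcing produced in the previous interval, so neither component waits on the other's current step.

```python
def run_lagged(n_intervals, atmos_step, ocean_step, initial_forcing):
    """Sketch of phase-shifted two-way concurrent coupling.

    atmos_step(sst) -> forcing : advance atmosphere one coupling interval
    ocean_step(forcing) -> sst : advance ocean one coupling interval

    The ocean runs one coupling interval behind the atmosphere, so in a
    real system both calls could execute concurrently on separate
    processor sets; here they are serialized for illustration.
    """
    forcing = initial_forcing
    sst = None
    history = []
    for n in range(n_intervals):
        # Interval n: each component uses only the peer's data from
        # interval n-1, never from the interval being computed.
        new_forcing = atmos_step(sst)
        new_sst = ocean_step(forcing)
        forcing, sst = new_forcing, new_sst
        history.append((n, forcing, sst))
    return history
```

The one-interval lag is what makes concurrent execution possible; the price, as noted above, is that the ocean's forcing is always one coupling interval old.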
3 WRF/NCOM/SWAN Coupling through MCEL Implementation of API

Loosely coupled, peer-to-peer modeling systems of interacting components, referred to as L2-B style coupling in our original proposal, involve ad hoc (not a priori scheduled), data-driven interactions among a relatively large number of components, each of which produces data for its peers, sending it over the network when it is ready, and consumes data from other components when it is available. The Model Coupling Environment Library (MCEL) was developed to support such interactions using a client-server process model. The I/O and Model Coupling API developed under this project has been implemented as a common interface to MCEL and has been used to implement a subset of applications from the High Fidelity Simulation of Littoral Environments (HFSoLE) CHSSI project, involving "effective, relocatable, high-resolution, dynamically linked mesoscale, meteorological, coastal oceanographic, surface water/groundwater, riverine/estuarine and sediment transport models."[2] The MCEL-based I/O and Model Coupling API is also applicable to a range of multi-scale, multi-model simulations that involve component interactions of this type, such as air-quality modeling and other ecosystem simulations. This section provides an overview of the MCEL system, a description of the MCEL implementation of the I/O and Model Coupling API developed under this project, and results of proof-of-concept simulations using the API to couple the WRF model with a subset of components in the HFSoLE system.

3.1 MCEL overview

The Model Coupling Environment Library (MCEL) was developed to simplify the coupling process for models that exchange data as frequently as every time step. Traditionally, model coupling is performed in one of three ways: file input/output (I/O), subroutinization, or message passing through the Message Passing Interface (MPI).
It was felt that these mechanisms yielded solutions that were too costly or cumbersome to be viable in the long term. The MCEL infrastructure consists of three core pieces: a centralized server, filters, and numerical models. Utilizing a data-flow approach, coupling information is stored in a single server or in multiple centralized servers. Upon request, this data flows through processing routines, called filters, to the numerical models, which act as the clients. The filters represent a level of abstraction for the physical or numerical processes that join different numerical models together. Extracting the processes unique to model coupling into independent filters allows code reuse across many different models. Communication between these objects is handled by the Common Object Request Broker Architecture (CORBA). In this paradigm the flow of information is fully controlled by the clients. Figure XX represents a hypothetical example of how such a system might be used.

Figure XX: A hypothetical example of a three-model MCEL system.

Figure XX shows three numerical models: an ocean circulation model, an atmospheric model, and a surface gravity wave model. Each model provides information to the centralized server: sea surface temperature (SST), wave height and direction, and wind velocities at ten meters above the surface. The SST is used by the atmospheric model; however, it must first be interpolated onto the atmospheric model's grid, so the data passes through an interpolation filter prior to delivery. The wave information is transformed into stresses by the RadStress filter using the algorithm of Longuet-Higgins and Stewart.[3] The final transformation uses the algorithm of Sajjadi and Bettencourt,[4] which calculates energy transfer from wind and wave information.

[2] "High Fidelity Simulation of Littoral Environments," Rick Allard, NRL, UGC 2002.
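The client-driven data-flow idea can be sketched as a chain of filters between a value stored on the server and the requesting client. This is an illustrative Python sketch only: the real system uses CORBA objects, and the class and function names here are hypothetical.

```python
class Server:
    """Toy stand-in for MCEL's centralized data server."""
    def __init__(self):
        self._store = {}

    def put(self, name, data):
        self._store[name] = data

    def get(self, name):
        return self._store[name]

def interp_filter(data):
    # Stand-in for grid-to-grid interpolation: midpoint averaging
    # between adjacent points of a 1-D field.
    return [(a + b) / 2 for a, b in zip(data, data[1:])]

def request(server, name, filters):
    """Client-driven pull: data flows from the server through filters."""
    data = server.get(name)
    for f in filters:
        data = f(data)
    return data

# An ocean model has published SST; the atmospheric model pulls it
# through an interpolation filter onto its own (hypothetical) grid.
server = Server()
server.put("SST", [290.0, 292.0, 294.0])
sst_on_atmos_grid = request(server, "SST", [interp_filter])
# sst_on_atmos_grid == [291.0, 293.0]
```

Because the client supplies the filter chain at request time, the same stored field can feed different consumers through different transformations, which is the code-reuse property the filters provide.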
The filters represent application-independent processes and can be used to process inputs for any wave or circulation application. With the proper combination of filters and models, arbitrarily complex modeling suites can be developed.

3.2 MCEL implementation of I/O and Coupling API

The MCEL implementation of the I/O and Coupling API allows applications to open connections to the MCEL server and filter processes for the purpose of exchanging forcing data between applications as if they were performing I/O. The MCEL implementation allows components to operate on different extents of the simulated physical domains, using different grids and coordinate systems, and with masking for, for example, land-ocean boundaries. The coupling mode supported by the MCEL implementation is concurrent -- that is, components run concurrently as separate sets of processes. The communication connections between processes are UNIX IPC sockets within the underlying CORBA system that implements MCEL, and each connection is a single, non-parallel link. Components running in parallel over multiple processes are expected to collect/distribute decomposed coupling data onto a single process when calling the MCEL implementation of the I/O and Model Coupling API.

The computational domains of the components in a coupled simulation may have different extents. For example, an atmospheric model may simulate over a region that is significantly larger than the domain of a coupled coastal ocean model (see Figure XX) within a coupled ecosystem model for a littoral area; or a limited-area atmospheric domain covering a tropical storm may be smaller than the domain of a whole-ocean model.

[3] Longuet-Higgins, M., and Stewart, R., "Radiation Stress in Water Waves," Deep Sea Research, II:529-562, 1964.
[4] Sajjadi, S.G., and Bettencourt, M.T., "An Improved Parameterization for Energy Exchange from Wind to Stokes Waves," to appear in Proc. 2nd Wind-Over-Waves Conference, Ellis Horwood Publishing Co., 2002.
Furthermore, the coordinate systems of the coupled components are likely to be different: a limited-area atmospheric model such as WRF may represent its domain on an equal-area Cartesian grid while the underlying ocean is represented on a latitude-longitude coordinate system. Finally, the coupling must account for the fact that an ocean model will only provide data back to the atmosphere over the water, and land areas must be masked. The I/O and Model Coupling API includes semantics for a component to provide the necessary georegistration and masking information to the MCEL library. In the MCEL implementation, a component provides this information when a connection is opened through the API. As described in the API specification, the open-for-write of a connection occurs in two phases: an open-for-write begin and an open-for-write commit. Between the calls to the begin and the commit, the application performs the sequence of writes for all the fields that constitute one frame of output data from the model. This is the training phase, and it allows the API implementation to gather information from the application about the data that will be written. The MCEL implementation uses this phase to capture the information that is then used to georegister and, when necessary, mask the data that will be exchanged with other components. This information is provided to the API as a string of comma-separated variable-value pairs in the SysDepInfo argument to the OPEN_FOR_WRITE_BEGIN subroutine. For example, the WRF model calls OPEN_FOR_WRITE_BEGIN and passes it the string "LAT_R=XLAT,LON_R=XLONG,LANDMASK_I=LU_MASK", which informs the interface that it should expect writes of arrays of type real named XLAT and XLONG that contain the latitude and longitude of each grid point in the WRF domain, and an array of type integer named LU_MASK that contains a one or a zero indicating whether a point is over land.
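A minimal sketch in Python (the API itself is Fortran) of how an implementation might take apart such a SysDepInfo string. The parsing rules shown are inferred from the one example string above, and the function name is hypothetical.

```python
def parse_sysdepinfo(sysdepinfo):
    """Parse a SysDepInfo string of comma-separated variable=value pairs.

    Keys such as LAT_R, LON_R, and LANDMASK_I name the roles of arrays
    the implementation should capture during the training writes; the
    _R/_I suffix indicates the type (real or integer) of the named array.
    """
    roles = {}
    for pair in sysdepinfo.split(","):
        if not pair.strip():
            continue
        key, _, value = pair.partition("=")
        role, _, type_code = key.strip().rpartition("_")
        roles[role] = {"array": value.strip(), "type": type_code}
    return roles

# The string WRF passes to OPEN_FOR_WRITE_BEGIN:
info = parse_sysdepinfo("LAT_R=XLAT,LON_R=XLONG,LANDMASK_I=LU_MASK")
# info["LAT"] == {"array": "XLAT", "type": "R"}
# info["LANDMASK"] == {"array": "LU_MASK", "type": "I"}
```

During the training writes, an implementation would then match incoming field names (XLAT, XLONG, LU_MASK) against this table and cache those arrays for georegistration and masking.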
During the training series of writes that follows OPEN_FOR_WRITE_BEGIN, the interface captures and stores these arrays for later use, after OPEN_FOR_WRITE_COMMIT, when actual writes to the coupling interface commence.

Reading through the MCEL implementation of the API differs in one important respect from a normal I/O read. On a read from the coupling interface, the MCEL implementation automatically interpolates data provided by another application into the region of the reading component's domain that corresponds to the coupled component. For example, in the case where the WRF model expects to receive sea-surface temperatures from a coastal ocean model on a smaller domain, the MCEL implementation will interpolate the ocean model-provided SST data onto the WRF SST field where the domains overlap and then smooth (using extrapolation) the data out over a specified radius of influence onto the rest of the WRF SST field; WRF SST data outside this radius is unaffected. Thus, the read operation under the MCEL implementation of the I/O and Model Coupling API is more refined than a typical I/O read, before which an application might provide a buffer of uninitialized data that is entirely overwritten upon completion of the read. The MCEL implementation requires that the buffer contain valid field data prior to the read, and this data is then updated with interpolated data from the coupled application. This also means that for components running on multiple processes and collecting/distributing the decomposed coupling data on a single process for calling the API, the decomposed field to be read must be collected onto one process prior to calling the API and distributed afterwards. A future version of the MCEL implementation of the API will subsume this collection and distribution into the API itself.
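The update-rather-than-overwrite semantics of this read can be sketched as follows. This is an illustrative Python sketch, not the MCEL algorithm: the linear taper inside the radius of influence is an assumption standing in for whatever smoothing/extrapolation scheme the implementation actually uses, and the function name is hypothetical.

```python
import math

def blended_read(buffer, incoming, radius):
    """Update an existing field with coupled data, MCEL-read style.

    buffer   -- dict {(i, j): value} of the reading component's field;
                it must already hold valid data, unlike a normal I/O read.
    incoming -- dict {(i, j): value} of interpolated values from the
                coupled component, covering only the overlap region.
    radius   -- radius of influence (grid cells) over which incoming
                data is relaxed into the background field.
    """
    updated = dict(buffer)
    for point, old in buffer.items():
        if point in incoming:
            updated[point] = incoming[point]  # overlap: take coupled data
            continue
        # Distance to the nearest point of the coupled region.
        d = min(math.dist(point, q) for q in incoming)
        if d <= radius:
            w = 1.0 - d / radius              # linear taper (an assumption)
            nearest = min(incoming, key=lambda q: math.dist(point, q))
            updated[point] = w * incoming[nearest] + (1.0 - w) * old
        # Beyond the radius of influence the background value is untouched.
    return updated
```

The key contract is visible in the signature: the caller's buffer supplies the background values that survive outside the overlap and radius of influence, which is why the buffer must contain valid data before the read.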
Only those subroutines of the I/O and Model Coupling API that are needed for coupling through MCEL are implemented: OPEN_FOR_READ, OPEN_FOR_WRITE_BEGIN, OPEN_FOR_WRITE_COMMIT, READ_FIELD, and WRITE_FIELD. The others are treated as null operations. This allows an application to switch seamlessly between an implementation of the API that targets I/O to a dataset and the coupling implementation that uses MCEL; the other calls that pertain to I/O to a dataset -- for example, getting and putting metadata for a self-describing model output file -- are simply ignored by the MCEL implementation. In other respects, the behavior and calling semantics of the MCEL implementation of the API are standard with respect to the API specification.

3.3 Coupling results

This section describes the results of coupling the WRF model with two components of the HFSoLE simulation, the Navy Coastal Ocean Model (NCOM) and the SWAN wave model, for a coupled domain in the Gulf of Mexico off the Louisiana and Mississippi shorelines. Figure XX shows the atmospheric domain with the area of the coupling to NCOM and SWAN indicated by the box. The 24-hour simulation period was November 7-8, 2002. The WRF model provided ten-meter winds to the MCEL server at hourly intervals. An MCEL filter transformed this data into wind stresses that were read by NCOM. NCOM, in turn, produced sea surface temperatures that were provided to WRF through a read from an MCEL filter that interpolated the NCOM surface temperatures back onto the WRF skin-temperature field. The results from WRF and NCOM at the end of the 24-hour simulation are shown in Figure XX. Running WRF on 16 processes, NCOM on one process, the MCEL server on one process, and a filter on one process of an IBM Power3 SP system (blackforest.ucar.edu), the total run time for the simulation was 1713 seconds, of which approximately 7 seconds were used for inter-model coupling through the API.
Thus, the overhead associated with coupling through the MCEL implementation of the API is negligible.

4 WRF/ROMS Coupling through the MCT Implementation of the API

A second reference implementation of the WRF I/O and Model Coupling API was developed using MCT [5] to support regular, scheduled exchanges of boundary conditions for tightly to moderately coupled interactions (L2-A style coupling in our original proposal). It was used to couple versions of WRF and the Regional Ocean Modeling System (ROMS). WRF wind stress and heat fluxes were sent to the ocean model, and sea-surface temperatures were received from ROMS. Three performance benchmarks of the WRF/ROMS coupling were conducted. In addition, the L2-A coupled WRF/ROMS system is being used in follow-on scientific studies involving an idealized hurricane vortex described in [6].

.4.1 Concurrent Coupling on a Single Machine

WRF was run on a 150x150x20 domain with a time step of 40 seconds; ROMS on a 482x482x15 domain with a time step of 240 seconds. WRF ran for 24 time steps and ROMS for four time steps on an Intel Linux cluster. The models exchanged boundary conditions every ocean time step. Table 1 shows the total run times of the main loop for each model and the times to send and receive data through the I/O and Model Coupling API. Time spent in the MCT implementation was less than 1% of the total run time for each model. These results are worst-case, since such coupling is usually over intervals greater than every ocean time step.

PEs/model  WRF main loop  WRF receive  WRF send  ROMS main loop  ROMS receive  ROMS send
    2          107.5          0.73       0.09        158.5           0.63         0.12
    4           61.0          0.32       0.02         57.7           0.22         0.07
    8           32.9          0.18       0.01         29.5           0.11         0.05
   16           19.9          0.13       0.01         13.4           0.05         0.07

Table 1. Computational and concurrent coupling costs on a single parallel computer.

.4.2 Concurrent Coupling using Two Machines over a Computational Grid

WRF and ROMS were coupled over a rudimentary computational grid.
ROMS ran on one Intel Linux node located at the NOAA Pacific Marine Environmental Laboratory (PMEL) in Seattle, Washington; WRF on four Linux nodes located at the NOAA Forecast Systems Laboratory (FSL) in Boulder, Colorado. The Globus Toolkit [7] provided the grid middleware. Table 2 indicates that communication times were less than 2%.

WRF main loop  WRF receive  WRF send  ROMS main loop  ROMS receive  ROMS send
    248.0          4.74       0.04        171.0           2.69         1.49

Table 2. Computational and concurrent coupling costs over the Grid.

.4.3 Sequential Coupling on a Single Machine

WRF and ROMS were coupled sequentially on four Linux nodes at FSL. Again, as indicated in Table 3, the costs of the MCT implementation's communication and interpolation were small.

PEs/model  WRF main loop  WRF receive  WRF send  ROMS main loop  ROMS receive  ROMS send
    4           82.0          0.10       0.02         59.8           0.27         0.11
    8           47.8          0.10       0.05         33.1           0.16         0.09

Table 3. Computational and coupling costs for sequentially coupled WRF/ROMS.

.4.4 WRF/ROMS Simulation of Idealized Hurricane

This coupling scenario is part of a project studying the physics of hurricane development by investigators at the University of Miami, the University of Washington [6], and the Model Environment for Atmospheric Discovery (MEAD) project [9], a National Science Foundation (NSF)-funded project at the National Center for Supercomputing Applications (NCSA). The WRF atmospheric domain was 161 by 161 horizontal points and 30 levels spanning a 1288 km grid, initialized to a "background" sounding representing the mean hurricane season in the Caribbean. The ocean domain was 200 by 200 horizontal points by 15 in the vertical, initialized with temperature stratification typical of the tropical Atlantic. WRF provided wind stress to ROMS, and the resulting ROMS sea-surface temperature was fed back to WRF. WRF and ROMS were time-stepped at one-minute and five-minute intervals, respectively, and coupled every 10 minutes. Output was at 30-minute intervals.
The load balancing for this concurrent run required eight processors modeling the atmosphere for every oceanic processor. Figure 2a shows divergent wind-driven surface currents producing an upwelling of cold water centered at the vortex core, with marked asymmetrical structure. This anisotropy affects the convective perturbations in the atmosphere that drive the vortex generation. Figure 2b shows the temperature stratification in the ocean, as well as the vertical velocity field.

Figures 2a, b: Output from the WRF/ROMS coupled simulation of an idealized hurricane vortex using the MCT implementation of the I/O and Model Coupling API, courtesy of Chris Moore, NOAA Pacific Marine Environmental Laboratory.

ACKNOWLEDGEMENTS

This publication was made possible through support provided by DoD High Performance Computing Modernization Program (HPCMP) Programming Environment and Training (PET) activities through Mississippi State University under the terms of Contract No. N62306-01-D-7110. The original WRF I/O API specification was produced by John Michalakes (NCAR), Leslie Hart, Jacques Middlecoff, and Dan Schaffer (NOAA/FSL) in the Weather Research and Forecast model development effort, Working Group 2: Software Architecture, Standards, and Implementation. We also acknowledge the contributions of Chris Moore (NOAA/PMEL) for technical assistance with the WRF/ROMS coupling. The idea of using an I/O API for model coupling was demonstrated successfully in the early-to-mid 1990s by Carlie Coats and others at the University of North Carolina in the U.S. EPA Models-3 I/O API.5

APPENDIX A: WRF I/O AND MODEL COUPLING API SPECIFICATION

The WRF I/O and Model Coupling Application Program Interface (WRF I/O API, for short) presents a package-independent interface to routines for moving data between a running application and some external producer or consumer of the application’s data: for example, a file system or another running program.
The WRF I/O API is part of the WRF Software Architecture described in the WRF Software Design and Implementation document.6 A schematic of the I/O and coupling architecture is shown in the following diagram. The I/O API sits between the application and the package-specific implementation of the I/O or coupling mechanisms and provides a common, package-neutral interface to services for model data and metadata input and output. The use of a package-neutral interface allows WRF applications to be easily adapted to a variety of implementation libraries for I/O, coupling, and data formatting.

[Schematic: each application calls the package-independent I/O API; beneath the API sit package-specific and installation-specific format and communication packages that move data to and from the data medium.]

For normal model I/O, the API routines may be implemented using any of a number of packages in use by the atmospheric modeling community, including GRIB, NetCDF, and HDF. For model coupling, the API may be implemented using packages such as the Model Coupling Toolkit (MCT), the Model Coupling Environment Library (MCEL), the Earth System Modeling Framework (ESMF), and others that provide input from and output to other actively running applications. Packages implementing parallel I/O, such as MPI-IO or vendor-specific implementations of parallel scalable I/O, are also implemented beneath the I/O API. In all cases, this is transparent to the application calling the WRF I/O API.

5 Coats, C. J., Jr., A. Hanna, D. Hwang, and D. W. Byun, 1993: Model Engineering Concepts for Air Quality Models in an Integrated Environmental Modeling System. Transactions, Regional Photochemical Measurement and Modeling Studies, Air and Waste Management Association, San Diego, CA, pp. 213-223.
6 http://www.mmm.ucar.edu/wrf/users/WRF_arch_03.doc
This document provides a specification for the WRF I/O API and is intended both for application programmers who intend to use the API and for implementers who wish to adapt an I/O or model coupling package to be callable through the API.

1 Specification Overview

Implementations of the WRF I/O API are provided as libraries in the external directory of the WRF distribution and are documented separately.7 Names of WRF I/O API routines include a package-specific prefix of the form ext_pkg. This allows multiple I/O API packages to be linked into the application and selected at run time, individually for each of the I/O data streams.

.1.1 I/O API subroutines

The calling and functional specifications of the WRF I/O API routines are listed in the subsections that follow. In brief, the key routines are:

ext_pkg_ioinit
ext_pkg_open_for_read
ext_pkg_open_for_read_begin
ext_pkg_open_for_read_commit
ext_pkg_read_field
ext_pkg_open_for_write
ext_pkg_open_for_write_begin
ext_pkg_open_for_write_commit
ext_pkg_write_field
ext_pkg_inquiry

In addition, the WRF I/O API provides routines for getting and putting metadata. These routine names have the form ext_pkg_[get|put]_[dom|var]_[ti|td]_type, where type is one of real, double, integer, logical, or char.

7 A convention for documentation of external implementations of the WRF I/O API is being developed. For the time being, implementers should, at a minimum, provide a README file documenting any special features, limitations, behaviors, calling semantics, etc. relating to the specific implementation.

.1.2 Optional begin/commit semantics for open

The WRF I/O API provides both simple opens for reading and writing datasets and two-stage begin/commit opens, which enable performance improvements for certain implementations of the I/O API. For example, some self-describing formats such as NetCDF are able to write more efficiently if the contents of the dataset are defined up front.
The phase of program execution between the call to an open_begin and an open_commit allows for “training” writes or reads of the dataset that otherwise have no effect on the dataset itself. For interface implementations that do not take advantage of a “training” period, read and write field calls that occur prior to the open “commit” call simply do nothing. An interface may also implement the simple open_for_read or open_for_write interfaces. Implementations that require (as opposed to simply allowing) begin/commit will return an error status if the simple form is called. Applications can determine the behavior of an implementation using ext_pkg_inquiry.

As mentioned above, the WRF I/O API may employ a begin/commit technique for training the interface on the contents of an output frame. It is important to note, however, that the writing of metadata through the WRF I/O API is not included in this training period. Metadata is written only after the training is concluded and the open operation committed. In addition, and primarily for output efficiency, the metadata should be written to a dataset only once.

.1.3 Random access

By default, an implementation of the WRF I/O API provides random access to a dataset based on the name and timestamp pair specified in the argument list. If an implementation does not provide random access, this must be indicated through the ext_pkg_inquiry routine and in the documentation included with the implementation. For random access, routines that perform input or output on time-varying data or metadata accept two string arguments: the name of the variable/array and a character-string index. The variable name provides the name of the variable being read or written; the character string is the index into the series of values of the variable/array in the dataset.
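For example, an application might first ask the implementation whether begin/commit is required for writing and then run the training sequence. The following is a sketch only: it assumes dh, comm1, comm2, SysDepInfo, and the write arguments are already set up, the dataset name is illustrative, and the elided write_field arguments follow the full signature given later in this appendix.

```fortran
! Sketch: choosing between simple and begin/commit opens for writing.
character(len=16) :: Result
call ext_pkg_inquiry( 'OPEN_COMMIT_WRITE', Result, Status )
if ( trim(Result) == 'REQUIRE' .or. trim(Result) == 'ALLOW' ) then
   call ext_pkg_open_for_write_begin( 'wrfout_d01', comm1, comm2, &
                                      SysDepInfo, dh, Status )
   call ext_pkg_write_field( dh, DateStr, 'U', u, ... )  ! training write: no data moved
   call ext_pkg_open_for_write_commit( dh, Status )      ! training ends here
   ! metadata, if any, is written only after the commit
else
   call ext_pkg_open_for_write( 'wrfout_d01', comm1, comm2, &
                                SysDepInfo, dh, Status )
end if
call ext_pkg_write_field( dh, DateStr, 'U', u, ... )     ! actual write
```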
The WRF I/O API attaches no significance to the value of an index string, nor does it assume any relationship other than equality or inequality between two strings (that is, no sorting order is assumed); such relationships are entirely the responsibility of the application. The WRF model and other WRF applications treat the index string as a date string of the form “0000-01-01_00:00:00.0000” that is used as a temporal index. The API subroutines that deal with order in a dataset (ext_pkg_get_next_time, ext_pkg_set_time, ext_pkg_get_next_var) have no meaning or effect in the case of an API implementation that supports only random access. Typically, model coupling implementations of the WRF I/O API rely on ordered (first-in, first-out) production and consumption of data and do not support random access; however, random access is not precluded (MCEL, for example, stores the data on the central server with timestamp information, allowing random access for applications that require it).

.1.4 Sequential access

A particular implementation of the WRF I/O API may allow or require sequential access through a dataset or model coupling; this behavior must be specified in the package documentation, and the package must return this information through the ext_pkg_inquiry routine.

When a package allows sequential access, the package supports random access of the data, but it also remembers the order in which sets of fields having the same date string were written (though not necessarily the order of the fields within each set) and allows the dataset to be read-accessed in that order. Under this form of sequential access, an application program can traverse the dataset by sequencing through time-frames, the sets of all fields in the dataset having the same date string. For example:

1. call ext_pkg_get_next_time(handle, DateStr, Status_next_time)
2. DO WHILE ( Status_next_time .eq. 0 )
3.   call ext_pkg_get_next_var (dh1, VarName, Status_next_var)
4.   DO WHILE ( Status_next_var .eq. 0 )
5.     Operations on successive fields
6.     call ext_pkg_get_next_var (dh1, VarName, Status_next_var)
7.   END DO
8.   call ext_pkg_get_next_time(handle, DateStr, Status_next_time)
9. END DO

The call to ext_pkg_get_next_time in line 1 returns the value of the string DateStr for the next set of fields in the dataset. The section of the example code from lines 3 through 7 applies to those fields having the same DateStr. Within this section, the code calls ext_pkg_get_next_var to get the first of those fields and then iterates over the fields until there are no more fields having that DateStr.

When a package requires sequential access, the package does not support random access of the data at all. There are two modes for sequential access. The first is strict: fields and metadata must be read in exactly the order they were written. A call to an API subroutine will skip ahead through the data until the operation can be satisfied or until the end of the dataset is reached; any skipped data is thus inaccessible without a close and reopen. Strict sequential access implies that sequential access is required and that random access is not supported. The second mode is framed: access to the data is sequential over DateStr sets of fields but random within a set. The outer loop in the example above represents framed sequential access. Note that the allowed form of random access always implies framed sequential access.

One important reason for allowing sequential access (either framed or strict) in the API specification is to provide a way for implementations to perform buffering to or from datasets or coupled models. On writing, all fields in the same DateStr set can be cached as a frame within a package and then written out or sent to a peer model after the writing application has finished writing all the fields of the frame, as indicated by the ext_pkg_end_of_frame call.
On reading, the package can read ahead or receive an entire new frame when the ext_pkg_get_next_time or ext_pkg_set_time routines are called.

.1.5 Model coupling through the WRF I/O API

In addition to explicit reading and writing of files, the WRF I/O API provides support for coupling of model components. In the I/O context, the open subroutine calls open a file for reading or writing; in the coupling case, they open data streams through which model boundary conditions can be exchanged. The read and write routines receive and send boundary data to and from components. In the process, the necessary re-gridding operations are performed within the coupling mechanism that implements the interface. Coupling interfaces may support exchange of only 2-D fields or of full 3-D fields (although performance may suffer). This and other behavior of the coupling implementation are available through the ext_pkg_inquiry routine. Model coupling implementations of the WRF I/O API may be parallel (and thus collective; the MCT implementation is an example) or single-processor (MCEL is an example).

Coupling implementations of the API can also make use of the begin/commit functionality, which can be used to implement “coupling by frames”. During the “training” phase, the read/write calls provide the API with a list of variables that are to be exchanged between components during each “frame”. Once the commit call has been made, the set of fields that make up a frame has been defined, and the API can use that information to aggregate communication or otherwise improve efficiency. On writing, the frame is not considered complete and sent until write_field for the last field in the frame, or get_next_time, has been called. On reading, the read_field call for the first field in the frame, or get_next_time, initiates a new frame, and the frame is considered active until read_field has been called for the last field in the frame or get_next_time has been called for the stream.
.1.6 Multi-threading

The WRF I/O and Model Coupling API is not thread-safe and must not be called by more than one concurrently executing thread.

.1.7 Parallelism

An implementation of the WRF I/O and Model Coupling API may support parallel I/O, in which case calls to I/O API routines are required to be collective – that is, if an API routine is called, it must be called by every process on the communicator. An implementation is also allowed to specify that it is not parallel and can be called only on a single processor (this assumes that the application collects the data onto a single processor before calling the I/O API). The implementation’s behavior must be specified in its documentation, and it must indicate whether or not it is parallel in response to an application query using the ext_pkg_inquiry routine.

.1.8 Arguments to WRF I/O API routines

This section describes subroutine arguments common to various routines in the I/O API.

.1.8.1 Data Handles

These are descriptors of datasets or data streams. Created by the “open” routines, they are subsequently passed to the read and write field routines.

.1.8.2 System Dependent Information

Many of the API routines accept a string-valued argument that allows an application to pass additional control information to the I/O interface. The format of the string is a comma-separated list of key=value pairs. Combined with the WRF I/O system’s ability to open different packages for distinct data streams, this feature provides flexibility to customize I/O and coupling for particular implementations. The package implementer may, but is not required to, do anything special with this information.

.1.8.3 Data names and time stamps

From the point of view of a package implementing the I/O API, the timestamp string has no meaning or relationship with any other timestamp beyond equality: two strings are either the same or they are different, and no temporal meaning is associated with the timestamp string.
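As a sketch, a system-dependent-information string might be assembled as follows. The keys shown are hypothetical, since the set of recognized key=value pairs is defined by each package, not by the API specification.

```fortran
! Hypothetical package-specific control information passed at open time.
! Neither key below is defined by the API specification itself; each
! package documents (or ignores) the keys it understands.
character(len=80) :: SysDepInfo
SysDepInfo = 'IOBUFFER=64,TRANSPOSE=NO'
call ext_pkg_open_for_write_begin( 'wrfout_d01', comm1, comm2, &
                                   SysDepInfo, dh, Status )
```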
The interface does not do time comparisons, but it is aware of control breaks (changes in the sequence of timestamps in the interface). The interface may also store a record of the sequence in which fields were written to a dataset and provide this information in a way that allows sequential traversal through the dataset, but datasets are fundamentally random access by variable name and timestamp string. The interface may assume and expect that all the fields in a frame of data have the same date string and that, on writing, the calls to the interface to write them will have been made consecutively.

.1.8.4 Field Types

The subroutines accept integer data types as specified below. These data types are defined in the wrf_io_flags.h file that comes with each reference implementation. This file must be included by the application.

WRF_REAL           – Single precision real number
WRF_DOUBLE         – Double precision real number
WRF_INTEGER        – Integer number
WRF_COMPLEX        – Single precision complex number
WRF_DOUBLE_COMPLEX – Double precision complex number
WRF_LOGICAL        – Logical (boolean)

.1.8.5 Communicators

The “open” subroutines of the interface allow callers to pass up to two communicators with different purposes. For example, one could be a global communicator and the second a communicator for a subset of the processes.

.1.8.6 Staggering

Staggered means staggered with respect to mass-point coordinates. For example, in WRF the Eulerian height solver uses an Arakawa C-grid, so that the U-velocity is staggered in X, the V-velocity is staggered in Y, and the vertical velocity is staggered in Z. The stagger string is specified to read_field and write_field as a string consisting of the characters X, Y, and/or Z indicating the dimensions in which the field is staggered (or the empty string if the variable is not staggered in any dimension).

.1.8.7 Memory order

Fields have the property of memory order while in model memory; the order in the dataset is package dependent.
This information is needed by write_field and read_field in order to move data between memory and the dataset, possibly transposing the data in the process. The memory order string specifies the order of the dimensions (minor to major, from left to right) of the field as it sits in program memory. The direction of a dimension depends on whether X, Y, or Z is capitalized: uppercase means ‘west to east’ in X, ‘south to north’ in Y, and ‘bottom to top’ in Z with ascending indices. The number of dimensions is implied by the string as well.

Full 3-D arrays:      ‘XYZ’, ‘XZY’, ‘ZXY’
Full 2-D arrays:      ‘XY’, ‘YX’
1-D (column) arrays:  ‘Z’
Non-decomposed:       ‘C’
0-D (scalars):        ‘0’ (zero)
Boundary 3-D:         ‘XSZ’ (west), ‘XEZ’ (east), ‘YSZ’ (south), ‘YEZ’ (north)
Boundary 2-D:         ‘XS’ (west), ‘XE’ (east), ‘YS’ (south), ‘YE’ (north)

.1.8.8 Dimension names

These are the names of the dimensions of the grid on which the variable being read or written lies (for example, “latitude”).

.1.8.9 Domain Start/End

These are the global starting and ending indices for each variable dimension.

.1.8.10 Patch Start/End

These are the process-local starting and ending indices (excluding halo points) for each variable dimension.

.1.8.11 Memory Start/End

These are the process-local starting and ending indices (including halo points) for each variable dimension.

.1.8.12 Status codes

All routines return an integer status argument, which is zero for success. Non-zero integer constant status codes are listed below; they are defined by the package in an include file named wrf_status_codes.h and made available by the package build mechanism. Status codes pertaining to coupler implementations of the I/O API will be listed in a future release of this document.
WRF_NO_ERR                     No error (equivalent to zero)
WRF_WARN_NOOP                  Package implements this routine as a no-op
WRF_WARN_FILE_NF               File not found (or incomplete)
WRF_WARN_MD_NF                 Metadata not found
WRF_WARN_TIME_NF               Timestamp not found
WRF_WARN_TIME_EOF              No more time stamps
WRF_WARN_VAR_NF                Variable not found
WRF_WARN_VAR_END               No more variables for the current time
WRF_WARN_TOO_MANY_FILES        Too many open files
WRF_WARN_TYPE_MISMATCH         Data type mismatch
WRF_WARN_WRITE_RONLY_FILE      Attempt to write a read-only file
WRF_WARN_READ_WONLY_FILE       Attempt to read a write-only file
WRF_WARN_FILE_NOT_OPENED       Attempt to access an unopened file
WRF_WARN_2DRYRUNS_1VARIABLE    Attempt to do two trainings for one variable
WRF_WARN_READ_PAST_EOF         Attempt to read past EOF
WRF_WARN_BAD_DATA_HANDLE       Bad data handle
WRF_WARN_WRTLEN_NE_DRRUNLEN    Write length not equal to training length
WRF_WARN_TOO_MANY_DIMS         More dimensions requested than training
WRF_WARN_COUNT_TOO_LONG        Attempt to read more data than exists
WRF_WARN_DIMENSION_ERROR       Input dimensions inconsistent
WRF_WARN_BAD_MEMORYORDER       Input MemoryOrder not recognized
WRF_WARN_DIMNAME_REDEFINED     A dimension name with two different lengths
WRF_WARN_CHARSTR_GT_LENDATA    String longer than provided storage
WRF_WARN_PACKAGE_SPECIFIC      The warning code is specific to the package

Fatal Errors:

WRF_ERR_FATAL_ALLOCATION_ERROR   Allocation error
WRF_ERR_FATAL_DEALLOCATION_ERR   Deallocation error
WRF_ERR_FATAL_BAD_FILE_STATUS    Bad file status
WRF_ERR_PACKAGE_SPECIFIC         The error code is specific to the package

2 WRF I/O and Model Coupling API Subroutines

This section comprises the calling semantics and functional descriptions of the subroutines in the WRF I/O and Model Coupling API.

.2.1 ext_pkg_ioinit

SUBROUTINE ext_pkg_ioinit( SysDepInfo, Status )
  CHARACTER*(*), INTENT(IN)    :: SysDepInfo
  INTEGER,       INTENT(INOUT) :: Status

Synopsis: Initialize the WRF I/O system.
Arguments:
  SysDepInfo: System dependent information.

.2.2 ext_pkg_ioexit

SUBROUTINE ext_pkg_ioexit( Status )
  INTEGER, INTENT(INOUT) :: Status

Synopsis: Shut down the WRF I/O system.

.2.3 ext_pkg_inquiry

SUBROUTINE ext_pkg_inquiry ( Inquiry, Result, Status )
  CHARACTER*(*), INTENT(IN)  :: Inquiry
  CHARACTER*(*), INTENT(OUT) :: Result
  INTEGER,       INTENT(OUT) :: Status

Synopsis: Return information about the implementation of the WRF I/O API.

Arguments:
  Inquiry: The attribute of the package being queried.
  Result:  The result of the query.

Description: This routine provides a way for an application to determine any package-specific behaviors or limitations of an implementation of the WRF I/O API. All implementations of the API are required to support the following list of inquiries and responses. In addition, packages may specify additional string-valued inquiries and responses, provided these are documented.

  ‘RANDOM_WRITE’        ‘REQUIRE’ or ‘ALLOW’ or ‘NO’
  ‘RANDOM_READ’         ‘REQUIRE’ or ‘ALLOW’ or ‘NO’
  ‘SEQUENTIAL_WRITE’    ‘STRICT’ or ‘FRAMED’ or ‘ALLOW’ or ‘NO’
  ‘SEQUENTIAL_READ’     ‘STRICT’ or ‘FRAMED’ or ‘ALLOW’ or ‘NO’
  ‘OPEN_READ’           ‘REQUIRE’ or ‘ALLOW’ or ‘NO’
  ‘OPEN_WRITE’          ‘REQUIRE’ or ‘ALLOW’ or ‘NO’
  ‘OPEN_COMMIT_READ’    ‘REQUIRE’ or ‘ALLOW’ or ‘NO’
  ‘OPEN_COMMIT_WRITE’   ‘REQUIRE’ or ‘ALLOW’ or ‘NO’
  ‘MEDIUM’              ‘FILE’ or ‘COUPLED’ or ‘UNKNOWN’
  ‘PARALLEL_IO’         ‘YES’ or ‘NO’
  ‘SELF_DESCRIBING’     ‘YES’ or ‘NO’
  ‘SUPPORT_METADATA’    ‘YES’ or ‘NO’
  ‘SUPPORT_3D_FIELDS’   ‘YES’ or ‘NO’

.2.4 ext_pkg_open_for_read

SUBROUTINE ext_pkg_open_for_read ( DatasetName , Comm1 , Comm2, &
                                   SysDepInfo , DataHandle , Status )
  CHARACTER*(*), INTENT(IN)  :: DatasetName
  INTEGER,       INTENT(IN)  :: Comm1 , Comm2
  CHARACTER*(*), INTENT(IN)  :: SysDepInfo
  INTEGER,       INTENT(OUT) :: DataHandle
  INTEGER,       INTENT(OUT) :: Status

Synopsis: Opens a WRF dataset for reading using the I/O package pkg.

Arguments:
  DatasetName: The name of the dataset to be opened.
  Comm1:       First communicator.
  Comm2:       Second communicator.
  SysDepInfo:  System dependent information.
  DataHandle:  Returned to the calling program to serve as a handle to the open dataset for subsequent I/O API operations.

Description: This routine opens a file for reading or a coupler data stream for receiving messages. There is no “training” phase for this version of the open statement.

.2.5 ext_pkg_open_for_read_begin

SUBROUTINE ext_pkg_open_for_read_begin ( DatasetName , Comm1 , Comm2, &
                                         SysDepInfo , DataHandle , Status )
  CHARACTER*(*), INTENT(IN)  :: DatasetName
  INTEGER,       INTENT(IN)  :: Comm1 , Comm2
  CHARACTER*(*), INTENT(IN)  :: SysDepInfo
  INTEGER,       INTENT(OUT) :: DataHandle
  INTEGER,       INTENT(OUT) :: Status

Synopsis: Begin the “training” phase for the WRF dataset DatasetName using the I/O package pkg.

Arguments:
  DatasetName: The name of the dataset to be opened.
  Comm1:       First communicator.
  Comm2:       Second communicator.
  SysDepInfo:  System dependent information.
  DataHandle:  Returned to the calling program to serve as a handle to the open dataset for subsequent I/O API operations.

Description: This routine opens a file for reading or a coupler data stream for receiving messages. The call to this routine marks the beginning of the “training” phase described in the introduction of this section. The call to ext_pkg_open_for_read_begin must be paired with a call to ext_pkg_open_for_read_commit.

.2.6 ext_pkg_open_for_read_commit

SUBROUTINE ext_pkg_open_for_read_commit( DataHandle , Status )
  INTEGER, INTENT(IN)  :: DataHandle
  INTEGER, INTENT(OUT) :: Status

Synopsis: End the “training” phase.

Arguments:
  DataHandle: Descriptor for an open dataset.

Description: This routine switches an internal flag to enable input. The call to ext_pkg_open_for_read_commit must be paired with a call to ext_pkg_open_for_read_begin.
.2.7 ext_pkg_open_for_write

SUBROUTINE ext_pkg_open_for_write ( DatasetName , Comm1 , Comm2, &
                                    SysDepInfo , DataHandle , Status )
  CHARACTER*(*), INTENT(IN)  :: DatasetName
  INTEGER,       INTENT(IN)  :: Comm1 , Comm2
  CHARACTER*(*), INTENT(IN)  :: SysDepInfo
  INTEGER,       INTENT(OUT) :: DataHandle
  INTEGER,       INTENT(OUT) :: Status

Synopsis: Opens a WRF dataset for writing using the I/O package pkg.

Arguments:
  DatasetName: The name of the dataset to be opened.
  Comm1:       First communicator.
  Comm2:       Second communicator.
  SysDepInfo:  System dependent information.
  DataHandle:  Returned to the calling program to serve as a handle to the open dataset for subsequent I/O API operations.

Description: This routine opens a file for writing or a coupler data stream for sending messages. There is no “training” phase for this version of the open statement.

.2.8 ext_pkg_open_for_write_begin

SUBROUTINE ext_pkg_open_for_write_begin( DatasetName , Comm1, Comm2, &
                                         SysDepInfo, DataHandle , Status )
  CHARACTER*(*), INTENT(IN)  :: DatasetName
  INTEGER,       INTENT(IN)  :: Comm1, Comm2
  CHARACTER*(*), INTENT(IN)  :: SysDepInfo
  INTEGER,       INTENT(OUT) :: DataHandle
  INTEGER,       INTENT(OUT) :: Status

Synopsis: Begin the data definition phase for the WRF dataset DatasetName using the I/O package pkg.

Arguments:
  DatasetName: The name of the dataset to be opened.
  Comm1:       First communicator.
  Comm2:       Second communicator.
  SysDepInfo:  System dependent information.
  DataHandle:  Returned to the calling program to serve as a handle to the open dataset for subsequent I/O API operations.

Description: This routine opens a file for writing or a coupler data stream for sending messages. The call to this routine marks the beginning of the “training” phase described in the introduction of this section. The call to ext_pkg_open_for_write_begin must be matched with a call to ext_pkg_open_for_write_commit.
.2.9 ext_pkg_open_for_write_commit SUBROUTINE ext_pkg_open_for_write_commit( DataHandle , Status ) INTEGER , INTENT(IN ) :: DataHandle INTEGER , INTENT(OUT) :: Status Synopsis: End “training” phase. Arguments: DataHandle: Descriptor for an open dataset. Description: This routine switches an internal flag to enable output. The call to ext_pkg_open_for_write_commit must be paired with a call to ext_pkg_open_for_write_begin . .2.10 ext_pkg_inquire_opened SUBROUTINE ext_pkg_inquire_opened( DataHandle, DatasetName , DatasetStatus, & Status ) INTEGER ,INTENT(IN) :: DataHandle INTEGER ,INTENT(IN) :: DatasetName CHARACTER*(*) ,INTENT(OUT) :: DatasetStatus INTEGER ,INTENT(OUT) :: Status Synopsis: Inquire if the dataset referenced by DataHandle is open. Arguments: DataHandle: Descriptor for an open dataset. DatasetName: The name of the dataset. DatasetStatus: Returned status of the dataset. Zero = success. Non-zero status codes are listed below and are in the module wrf_data and made available by the package build mechanism in the inc directory. WRF_FILE_NOT_OPENED WRF_FILE_OPENED_NOT_COMMITTED WRF_FILE_OPENED_AND_COMMITTED WRF_FILE_OPENED_FOR_READ Description: This routine returns one of the four dataset status codes shown above. The status codes WRF_FILE_OPENED_NOT_COMMITTED and WRF_FILE_OPENED_AND_COMMITTED refer to opening for write. .2.11 ext_pkg_open_for_update SUBROUTINE ext_pkg_open_for_read ( DatasetName , Comm1 , Comm2, & SysDepInfo , DataHandle , Status ) CHARACTER *(*), INTENT(IN) :: DatasetName INTEGER , INTENT(IN) :: Comm1 , Comm2 CHARACTER *(*), INTENT(IN) :: SysDepInfo INTEGER , INTENT(OUT) :: DataHandle INTEGER , INTENT(OUT) :: Status Synopsis: Opens a WRF dataset for reading and writing using the I/O package pkg. Arguments: DatasetName: Comm1: Comm2: SysDepInfo: DataHandle: The name of the dataset to be opened. First communicator. Second communicator. System dependent information. 
Returned to the calling program to serve as a handle to the open dataset for subsequent I/O API operations.
Description: This routine opens a dataset for reading and writing and is intended for updating individual fields in place. There is no “training” phase for this version of the open statement. The routine takes the same calling arguments as ext_pkg_open_for_read but allows both ext_pkg_read_field and ext_pkg_write_field calls on the returned handle. It is incumbent upon the application to ensure that ext_pkg_write_field is called only for fields that already exist in the file, and that the dimension, ordering, and related arguments correspond to the field as it exists in the file. The application must close the file (using ext_pkg_ioclose) to ensure that the changes are saved to the dataset.

.2.12 ext_pkg_ioclose

SUBROUTINE ext_pkg_ioclose ( DataHandle , Status )
INTEGER , INTENT(IN)  :: DataHandle
INTEGER , INTENT(OUT) :: Status

Synopsis: Close the dataset referenced by DataHandle.
Arguments:
DataHandle: Descriptor for an open dataset.
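The in-place update contract above (update only fields that already exist; close to flush the changes) can be sketched with a small Python mock. None of these names are WRF code; this is an illustration of the constraint, not the implementation.

```python
# Sketch of the ext_pkg_open_for_update contract, assuming a toy
# in-memory "dataset" in place of a real file.

class Dataset:
    def __init__(self, existing):
        self.fields = dict(existing)   # fields that already exist on disk
        self.pending = {}              # in-place updates awaiting close

def open_for_update(existing):
    return Dataset(existing)

def write_field(ds, name, data):
    # Per the spec, updating is legal only for fields already in the file.
    if name not in ds.fields:
        raise KeyError("open_for_update: field must already exist")
    ds.pending[name] = data

def ioclose(ds):
    # Closing flushes the changes to the dataset.
    ds.fields.update(ds.pending)
    ds.pending = {}

ds = open_for_update({"T2": [280.0]})
write_field(ds, "T2", [281.5])   # legal: T2 exists
ioclose(ds)                      # changes now saved
```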
.2.13 ext_pkg_read_field

SUBROUTINE ext_pkg_read_field ( DataHandle , DateStr , VarName , &
                                Field , FieldType , Comm1 , Comm2 , &
                                DomainDesc , MemoryOrder , Stagger , &
                                DimNames , &
                                DomainStart , DomainEnd , &
                                MemoryStart , MemoryEnd , &
                                PatchStart , PatchEnd , &
                                Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(IN) :: DateStr
CHARACTER*(*) ,INTENT(IN) :: VarName
type ,INTENT(IN) :: Field(*)
INTEGER ,INTENT(IN) :: FieldType
INTEGER ,INTENT(IN) :: Comm1
INTEGER ,INTENT(IN) :: Comm2
INTEGER ,INTENT(IN) :: DomainDesc
CHARACTER*(*) ,INTENT(IN) :: MemoryOrder
CHARACTER*(*) ,INTENT(IN) :: Stagger
CHARACTER*(*) , DIMENSION(*) ,INTENT(IN) :: DimNames
INTEGER ,DIMENSION(*) ,INTENT(IN) :: DomainStart , DomainEnd
INTEGER ,DIMENSION(*) ,INTENT(IN) :: MemoryStart , MemoryEnd
INTEGER ,DIMENSION(*) ,INTENT(IN) :: PatchStart , PatchEnd
INTEGER ,INTENT(OUT) :: Status

Synopsis: Read the variable named VarName from the dataset pointed to by DataHandle.
Arguments:
DataHandle: Descriptor for an open dataset.
DateStr: A string providing the timestamp for the variable to be read.
VarName: A string containing the name of the variable to be read.
Field: A pointer to the variable to be read.
FieldType: Field attribute.
Comm1: First communicator.
Comm2: Second communicator.
DomainDesc: Additional argument that may be used to pass a communication package descriptor for a specific domain.
MemoryOrder: Field attribute.
Stagger: Field attribute.
DimNames: Field attribute.
DomainStart, DomainEnd: Field attributes.
MemoryStart, MemoryEnd: Field attributes.
PatchStart, PatchEnd: Field attributes.
Description: This routine reads the variable named VarName into location Field from the dataset pointed to by DataHandle at time DateStr. No data are read if this routine is called during “training”.
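The DomainStart/DomainEnd, MemoryStart/MemoryEnd, and PatchStart/PatchEnd pairs describe, respectively, the full logical domain, the locally allocated array, and the sub-rectangle this process owns within it. A one-dimensional sketch of how the three ranges relate (Python for illustration; all sizes are invented, and the one-cell halo is an assumed example of why the memory range can exceed the patch):

```python
# Hypothetical 1-D illustration of the three index ranges used by
# ext_pkg_read_field / ext_pkg_write_field (all sizes invented):
domain_start, domain_end = 1, 100   # whole logical domain
patch_start,  patch_end  = 26, 50   # portion owned by this process
memory_start, memory_end = 25, 51   # local allocation: patch plus a 1-cell halo

# The local buffer is dimensioned over the memory range...
local = [0.0] * (memory_end - memory_start + 1)

# ...and the I/O layer moves only the patch portion in or out,
# offset into the memory buffer:
lo = patch_start - memory_start           # first patch index in the buffer
hi = lo + (patch_end - patch_start + 1)   # one past the last patch index
for i in range(lo, hi):
    local[i] = 7.0                        # stands in for data read from disk

# The patch must always lie inside both the memory and domain ranges.
assert memory_start <= patch_start and patch_end <= memory_end
assert domain_start <= patch_start and patch_end <= domain_end
```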
.2.14 ext_pkg_write_field

SUBROUTINE ext_pkg_write_field ( DataHandle , DateStr , VarName , &
                                 Field , FieldType , Comm1 , Comm2 , &
                                 DomainDesc , MemoryOrder , Stagger , &
                                 DimNames , &
                                 DomainStart , DomainEnd , &
                                 MemoryStart , MemoryEnd , &
                                 PatchStart , PatchEnd , &
                                 Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(IN) :: DateStr
CHARACTER*(*) ,INTENT(IN) :: VarName
type ,INTENT(IN) :: Field(*)
INTEGER ,INTENT(IN) :: FieldType
INTEGER ,INTENT(IN) :: Comm1
INTEGER ,INTENT(IN) :: Comm2
INTEGER ,INTENT(IN) :: DomainDesc
CHARACTER*(*) ,INTENT(IN) :: MemoryOrder
CHARACTER*(*) ,INTENT(IN) :: Stagger
CHARACTER*(*) , DIMENSION(*) ,INTENT(IN) :: DimNames
INTEGER ,DIMENSION(*) ,INTENT(IN) :: DomainStart , DomainEnd
INTEGER ,DIMENSION(*) ,INTENT(IN) :: MemoryStart , MemoryEnd
INTEGER ,DIMENSION(*) ,INTENT(IN) :: PatchStart , PatchEnd
INTEGER ,INTENT(OUT) :: Status

Synopsis: Write the variable named VarName to the dataset pointed to by DataHandle.
Arguments:
DataHandle: Descriptor for an open dataset.
DateStr: A string providing the timestamp for the variable to be written.
VarName: A string containing the name of the variable to be written.
Field: A pointer to the variable to be written.
FieldType: Field attribute.
Comm1: First communicator.
Comm2: Second communicator.
DomainDesc: Additional argument that may be used to pass a communication package descriptor for a specific domain.
MemoryOrder: Field attribute.
Stagger: Field attribute.
DimNames: Field attribute.
DomainStart, DomainEnd: Field attributes.
MemoryStart, MemoryEnd: Field attributes.
PatchStart, PatchEnd: Field attributes.
Description: This routine writes the variable named VarName in location Field to the dataset pointed to by DataHandle at time DateStr. No data are written if this routine is called during “training”.
.2.15 ext_pkg_get_next_var

SUBROUTINE ext_pkg_get_next_var ( DataHandle , VarName , Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(OUT) :: VarName
INTEGER ,INTENT(OUT) :: Status

Synopsis: On reading, this routine returns the name of the next variable in the current time frame.
Arguments:
DataHandle: Descriptor for a dataset that is open for read.
VarName: A string returning the name of the next variable.
Description: This routine applies only to a dataset that is open for read in a package that supports sequential framed access. Otherwise, the package sets Status to WRF_WARN_NOOP and returns with no effect. For the active DateStr (set by ext_pkg_set_time or ext_pkg_get_next_time), the name of the next unaccessed variable in the frame is returned in VarName. The variable name returned in VarName may then be used as the VarName argument to ext_pkg_read_field. If there are no more variables in the frame for the currently active DateStr, a non-zero Status, WRF_WARN_VAR_END, is returned.

.2.16 ext_pkg_end_of_frame

SUBROUTINE ext_pkg_end_of_frame ( DataHandle , Status )
INTEGER ,INTENT(IN) :: DataHandle
INTEGER ,INTENT(OUT) :: Status

Synopsis: Write an end-of-frame marker onto the dataset.
Arguments:
DataHandle: Descriptor for a dataset that is open.
Description: This routine applies only to datasets opened for write and to implementations that support sequential framed access. Otherwise the routine sets Status to WRF_WARN_NOOP and returns with no effect. The WRF I/O API is optimized (but not required) to write all the output variables in order at each output time step. All the output variables at a time step are called a frame. After a frame is read or written, ext_pkg_end_of_frame must be called; the implementation can (but is not required to) write an end-of-frame mark. End-of-frame marks allow for buffering of the output.
Recommended use: It is recommended that applications call ext_pkg_end_of_frame when writing data as sets of fields with the same DateStr, even if the particular package supports it only as a NOOP. Writing application output code this way makes the code easily adaptable to implementations of the WRF I/O API that allow or require an end-of-frame marker.

.2.17 ext_pkg_iosync

SUBROUTINE ext_pkg_iosync ( DataHandle , Status )
INTEGER ,INTENT(IN) :: DataHandle
INTEGER ,INTENT(OUT) :: Status

Synopsis: Synchronize the disk copy of a dataset with memory buffers.
Arguments:
DataHandle: Descriptor for a dataset that is open.
Description: Synchronizes any I/O implementation buffers with respect to the dataset.

.2.18 ext_pkg_inquire_filename

SUBROUTINE ext_pkg_inquire_filename ( DataHandle , Filename , FileStatus , Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(OUT) :: Filename
INTEGER ,INTENT(OUT) :: FileStatus
INTEGER ,INTENT(OUT) :: Status

Synopsis: Returns the Filename and FileStatus associated with DataHandle.
Arguments:
DataHandle: Descriptor for a dataset that is open.
Filename: The file name associated with DataHandle.
FileStatus: The status of the file associated with DataHandle.

.2.19 ext_pkg_get_var_info

SUBROUTINE ext_pkg_get_var_info ( DataHandle , VarName , NDim , MemoryOrder , &
                                  DomainStart , DomainEnd , Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(IN) :: VarName
INTEGER ,INTENT(OUT) :: NDim
CHARACTER*(*) ,INTENT(OUT) :: MemoryOrder
INTEGER ,DIMENSION(*) ,INTENT(OUT) :: DomainStart , DomainEnd
INTEGER ,INTENT(OUT) :: Status

Synopsis: Get information about a variable.
Arguments:
DataHandle: Descriptor for a dataset that is open for read.
VarName: A string containing the name of the variable to be read.
NDim: The number of dimensions of the field data.
MemoryOrder: Field attribute.
DomainStart, DomainEnd: Field attributes.
Description: This routine applies only to a dataset that is open for read.
This routine returns the number of dimensions, the memory order, and the start and stop indices of the data.

.2.20 ext_pkg_set_time

SUBROUTINE ext_pkg_set_time ( DataHandle , DateStr , Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(IN) :: DateStr
INTEGER ,INTENT(OUT) :: Status

Synopsis: Sets the time stamp.
Arguments:
DataHandle: Descriptor for a dataset that is open for read.
DateStr: The time stamp; a 23 character date string of the form “0000-01-01_00:00:00.0000”.
Description: This routine applies only to a dataset that is open for read and only if the implementation supports sequential framed access. Otherwise the routine sets Status to WRF_WARN_NOOP and returns with no effect. This routine sets the current frame to the set of fields whose date string equals the argument DateStr.

.2.21 ext_pkg_get_next_time

SUBROUTINE ext_pkg_get_next_time ( DataHandle , DateStr , Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(OUT) :: DateStr
INTEGER ,INTENT(OUT) :: Status

Synopsis: Returns the next time stamp.
Arguments:
DataHandle: Descriptor for a dataset that is open for read.
DateStr: Returned time stamp; a 23 character date string of the form “0000-01-01_00:00:00.0000”.
Description: This routine applies only to a dataset that is open for read. It returns the next time stamp in DateStr. The time stamp character string is the index into the series of values of the variable/array in the dataset. In WRF this is a 23 character date string of the form “0000-01-01_00:00:00.0000” and is treated as a temporal index.
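Taken together, ext_pkg_get_next_time, ext_pkg_get_next_var, and ext_pkg_read_field support a simple frame-by-frame read loop. A sketch of that calling pattern over a toy in-memory dataset (Python stand-ins; every name below is illustrative, not part of the API, and the date strings are made-up examples):

```python
# Toy "framed" dataset: an ordered mapping DateStr -> {VarName: data}.
frames = {
    "2003-09-30_00:00:00.0000": {"U": [1.0], "V": [2.0]},
    "2003-09-30_06:00:00.0000": {"U": [1.5], "V": [2.5]},
}

def read_all(frames):
    # Mirrors the sequential framed-access loop: advance the time index,
    # then walk the frame's variables until the point where the real API
    # would return WRF_WARN_VAR_END.
    result = []
    for datestr in frames:                   # ext_pkg_get_next_time
        for varname in frames[datestr]:      # ext_pkg_get_next_var
            data = frames[datestr][varname]  # ext_pkg_read_field
            result.append((datestr, varname, data))
    return result

records = read_all(frames)
```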
.2.22 ext_pkg_get_var_ti_type set of routines

This family of routines gets a time independent attribute of a variable, where the attribute type type can be: real, double, integer, logical, character.
If type is character, the routine has the form:
SUBROUTINE ext_pkg_get_var_ti_char ( DataHandle , Element , Var , Data , Status )
Otherwise it has the form:
SUBROUTINE ext_pkg_get_var_ti_type ( DataHandle , Element , Var , Data , Count , OutCount , Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(IN) :: Element
CHARACTER*(*) ,INTENT(IN) :: Var
type ,INTENT(OUT) :: Data(*)
INTEGER ,INTENT(IN) :: Count
INTEGER ,INTENT(OUT) :: OutCount
INTEGER ,INTENT(OUT) :: Status

Synopsis: Read attribute “Element” of the variable “Var” from the dataset and store it in the array Data.
Arguments:
DataHandle: Descriptor for a dataset that is open for read.
Element: The name of the attribute.
Var: Name of the variable to which the attribute applies.
Data: Array in which to return the data.
Count: The number of words requested.
OutCount: The number of words returned.
Description: These routines apply only to a dataset that is open for read. They attempt to read Count words of the time independent attribute “Element” of the variable “Var” from the dataset and store them in the array Data. OutCount is the number of words returned. If the number of words available is greater than or equal to Count, then OutCount is equal to Count; otherwise OutCount is the number of words available.
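The Count/OutCount contract used by this routine (and by the other get_*_ti/get_*_td families below) is simple truncation. A sketch in Python, with invented names, just to restate the rule:

```python
def get_attribute(stored, count):
    # stored: the attribute's words as kept in the dataset.
    # Return at most `count` words plus OutCount, per the API contract:
    # OutCount == count when enough words are available,
    # otherwise OutCount == the number of words available.
    out_count = min(count, len(stored))
    return stored[:out_count], out_count

data, out = get_attribute([10, 20, 30], count=2)    # truncated request
data2, out2 = get_attribute([10, 20, 30], count=8)  # over-long request
```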
.2.23 ext_pkg_put_var_ti_type set of routines

This family of routines writes a time independent attribute of a variable, where the attribute type type can be: real, double, integer, logical, character.
If type is character, the routine has the form:
SUBROUTINE ext_pkg_put_var_ti_char ( DataHandle , Element , Var , Data , Status )
Otherwise it has the form:
SUBROUTINE ext_pkg_put_var_ti_type ( DataHandle , Element , Var , Data , Count , Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(IN) :: Element
CHARACTER*(*) ,INTENT(IN) :: Var
type ,INTENT(IN) :: Data(*)
INTEGER ,INTENT(IN) :: Count
INTEGER ,INTENT(OUT) :: Status

Synopsis: Write attribute “Element” of variable “Var” from array Data to the dataset.
Arguments:
DataHandle: Descriptor for an open dataset.
Element: The name of the attribute.
Var: Name of the variable to which the attribute applies.
Data: Array holding the data to be written.
Count: The number of words to be written.
Description: These routines apply only to a dataset that is opened and committed. They write Count words of the time independent attribute “Element” of the variable “Var” stored in the array Data to the dataset pointed to by DataHandle.
.2.24 ext_pkg_get_var_td_type set of routines

This family of routines gets a variable attribute at a specified time, where the attribute is of type type, which can be: real, double, integer, logical, character.
If type is character, the routine has the form:
SUBROUTINE ext_pkg_get_var_td_char ( DataHandle , Element , DateStr , Var , Data , Status )
Otherwise it has the form:
SUBROUTINE ext_pkg_get_var_td_type ( DataHandle , Element , DateStr , Var , Data , Count , OutCount , Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(IN) :: Element
CHARACTER*(DateStrLen) ,INTENT(IN) :: DateStr
CHARACTER*(*) ,INTENT(IN) :: Var
type ,INTENT(OUT) :: Data(*)
INTEGER ,INTENT(IN) :: Count
INTEGER ,INTENT(OUT) :: OutCount
INTEGER ,INTENT(OUT) :: Status

Synopsis: Read attribute “Element” of the variable “Var” at time “DateStr” from the dataset and store it in the array Data.
Arguments:
DataHandle: Descriptor for a dataset that is open for read.
Element: The name of the attribute.
DateStr: Time stamp; a 23 character date string of the form “0000-01-01_00:00:00.0000”.
Var: Name of the variable to which the attribute applies.
Data: Array in which to return the data.
Count: The number of words requested.
OutCount: The number of words returned.
Description: These routines apply only to a dataset that is open for read. They attempt to read Count words of the attribute “Element” of the variable “Var” at time “DateStr” from the dataset and store them in the array Data. OutCount is the number of words returned. If the number of words available is greater than or equal to Count, then OutCount is equal to Count; otherwise OutCount is the number of words available.
.2.25 ext_pkg_put_var_td_type set of routines

This family of routines writes a variable attribute at a specified time, where the attribute is of type type, which can be: real, double, integer, logical, character.
If type is character, the routine has the form:
SUBROUTINE ext_pkg_put_var_td_char ( DataHandle , Element , DateStr , Var , Data , Status )
Otherwise it has the form:
SUBROUTINE ext_pkg_put_var_td_type ( DataHandle , Element , DateStr , Var , Data , Count , Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(IN) :: Element
CHARACTER*(DateStrLen) ,INTENT(IN) :: DateStr
CHARACTER*(*) ,INTENT(IN) :: Var
type ,INTENT(IN) :: Data(*)
INTEGER ,INTENT(IN) :: Count
INTEGER ,INTENT(OUT) :: Status

Synopsis: Write attribute “Element” of the variable “Var” at time “DateStr” from the array Data to the dataset.
Arguments:
DataHandle: Descriptor for an open dataset.
Element: The name of the attribute.
DateStr: Time stamp; a 23 character date string of the form “0000-01-01_00:00:00.0000”.
Var: Name of the variable to which the attribute applies.
Data: Array holding the data to be written.
Count: The number of words to be written.
Description: These routines apply only to a dataset that is either open and not committed, or open and committed. If the dataset is open and not committed, the storage area in the dataset is set up but no data are written. If the dataset is open and committed, data are written to the dataset. These routines write Count words of the attribute “Element” of the variable “Var” at time “DateStr” from the array Data to the dataset.
.2.26 ext_pkg_get_dom_ti_type set of routines

This family of routines gets time independent domain metadata of type type, where type can be: real, double, integer, logical, character.
If type is character, the routine has the form:
SUBROUTINE ext_pkg_get_dom_ti_char ( DataHandle , Element , Data , Status )
Otherwise it has the form:
SUBROUTINE ext_pkg_get_dom_ti_type ( DataHandle , Element , Data , Count , OutCount , Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(IN) :: Element
type ,INTENT(OUT) :: Data(*)
INTEGER ,INTENT(IN) :: Count
INTEGER ,INTENT(OUT) :: OutCount
INTEGER ,INTENT(OUT) :: Status

Synopsis: Reads time independent domain metadata named “Element” from the dataset into the array Data.
Arguments:
DataHandle: Descriptor for a dataset that is open for read.
Element: The name of the metadata item.
Data: Array in which to return the data.
Count: The number of words requested.
OutCount: The number of words returned.
Description: These routines apply only to a dataset that is open for read. They attempt to read Count words of time independent domain metadata named “Element” into the array Data from the dataset pointed to by DataHandle. OutCount is the number of words returned. If the number of words available is greater than or equal to Count, then OutCount is equal to Count; otherwise OutCount is the number of words available. Note that a result is stored in the array Data only if these routines are called after any “training” phase has completed. If called during a “training” phase, the routines do not modify the array Data but behave normally in all other ways. It is the responsibility of the caller to avoid using the array Data after a call made during a “training” phase.
.2.27 ext_pkg_put_dom_ti_type set of routines

This family of routines writes time independent domain metadata of type type, where type can be: real, double, integer, logical, character.
If type is character, the routine has the form:
SUBROUTINE ext_pkg_put_dom_ti_char ( DataHandle , Element , Data , Status )
Otherwise it has the form:
SUBROUTINE ext_pkg_put_dom_ti_type ( DataHandle , Element , Data , Count , Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(IN) :: Element
type ,INTENT(IN) :: Data(*)
INTEGER ,INTENT(IN) :: Count
INTEGER ,INTENT(OUT) :: Status

Synopsis: Write time independent domain metadata named “Element” from the array Data to the dataset.
Arguments:
DataHandle: Descriptor for an open dataset.
Element: The name of the metadata item.
Data: Array holding the data.
Count: The number of words to write.
Description: These routines apply only to a dataset that is open and committed. They write Count words of time independent domain metadata named “Element” from the array Data to the dataset pointed to by DataHandle. Note that these routines can be called during a training phase, in which case no words are written to the dataset.

.2.28 ext_pkg_get_dom_td_type set of routines

This family of routines gets time dependent domain metadata of type type, where type can be: real, double, integer, logical, character.
If type is character, the routine has the form:
SUBROUTINE ext_pkg_get_dom_td_char ( DataHandle , Element , DateStr , Data , Status )
Otherwise it has the form:
SUBROUTINE ext_pkg_get_dom_td_type ( DataHandle , Element , DateStr , Data , Count , OutCount , Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(IN) :: Element
CHARACTER*(*) ,INTENT(IN) :: DateStr
type ,INTENT(OUT) :: Data(*)
INTEGER ,INTENT(IN) :: Count
INTEGER ,INTENT(OUT) :: OutCount
INTEGER ,INTENT(OUT) :: Status

Synopsis: Reads domain metadata named “Element” at time DateStr from the dataset into the array Data.
Arguments:
DataHandle: Descriptor for a dataset that is open for read.
Element: The name of the metadata item.
DateStr: Time stamp; a 23 character date string of the form “0000-01-01_00:00:00.0000”.
Data: Array in which to return the data.
Count: The number of words requested.
OutCount: The number of words returned.
Description: These routines apply only to a dataset that is open for read. They attempt to read Count words of domain metadata named “Element”, at time DateStr, into the array Data from the dataset pointed to by DataHandle. OutCount is the number of words returned. If the number of words available is greater than or equal to Count, then OutCount is equal to Count; otherwise OutCount is the number of words available.

.2.29 ext_pkg_put_dom_td_type set of routines

This family of routines puts time dependent domain metadata of type type, where type can be: real, double, integer, logical, character.
If type is character, the routine has the form:
SUBROUTINE ext_pkg_put_dom_td_char ( DataHandle , Element , DateStr , Data , Status )
Otherwise it has the form:
SUBROUTINE ext_pkg_put_dom_td_type ( DataHandle , Element , DateStr , Data , Count , Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(IN) :: Element
CHARACTER*(*) ,INTENT(IN) :: DateStr
type ,INTENT(IN) :: Data(*)
INTEGER ,INTENT(IN) :: Count
INTEGER ,INTENT(OUT) :: Status

Synopsis: Writes domain metadata named “Element” at time DateStr from the array Data to the dataset.
Arguments:
DataHandle: Descriptor for a dataset that is open for write.
Element: The name of the metadata item.
DateStr: Time stamp; a 23 character date string of the form “0000-01-01_00:00:00.0000”.
Data: Array holding the data.
Count: The number of words to be written.
Description: These routines apply only to a dataset that is open for write. They attempt to write Count words of domain metadata named “Element”, at time DateStr, from the array Data to the dataset pointed to by DataHandle.
.2.30 ext_pkg_warning_str

SUBROUTINE ext_pkg_warning_str ( Code , ReturnString , Status )
INTEGER ,INTENT(IN) :: Code
CHARACTER*(*) ,INTENT(OUT) :: ReturnString
INTEGER ,INTENT(OUT) :: Status

Synopsis: Given a status value returned by an API routine, set ReturnString to a descriptive message. The string passed to this routine must be 256 characters or longer.
Arguments:
Code: Integer status from an I/O API call.
ReturnString: String containing a message describing the warning.

.2.31 ext_pkg_error_str

SUBROUTINE ext_pkg_error_str ( Code , ReturnString , Status )
INTEGER ,INTENT(IN) :: Code
CHARACTER*(*) ,INTENT(OUT) :: ReturnString
INTEGER ,INTENT(OUT) :: Status

Synopsis: Given a status value returned by an API routine, set ReturnString to a descriptive message. The string passed to this routine must be 256 characters or longer.
Arguments:
Code: Integer status from an I/O API call.
ReturnString: String containing a message describing the error.

APPENDIX B: MCT REFERENCE IMPLEMENTATION

Interpolation is implemented using the sparse matrix multiplication capability provided by MCT. Currently, the matrix coefficients are stored in an ASCII file. The user provides the base name of this file to the OPEN routines described below. Because the coefficients vary with grid staggering, the full name of the file is constructed by combining the base name with the staggering characters passed to the read/write routines.

The behavior of the implementation depends on whether it is called by model components that execute sequentially or concurrently. If they execute concurrently, then every processor initializes this package with its component name, since the MCT initialization routine requires this information. If the components execute sequentially, the component names are not unique to each processor; in this case, the package is initialized with a list of all components. The API allows for both kinds of initialization within a coupled model.
The sequential form overrides the concurrent form. This allows the modeler to change modes relatively easily: the concurrent initialization calls are made from within each model, and if sequential coupling is desired, the model leaves the concurrent calls in place but adds a sequential call to the coupler driver.

Each time EXT_MCT_OPEN_FOR_WRITE_BEGIN is called, the user is required to specify the component names of both the sender and the receiver, since MCT needs this information to set up a data exchange between the two components. Technically, in the concurrent case only one is required, because the name of the sender was obtained when the package was initialized; for symmetry, however, the package always requires both. The same logic applies to EXT_MCT_OPEN_FOR_READ_BEGIN. Some of the arguments passed to the API routines are unused, as discussed below.

1. ext_mct_ioinit

SUBROUTINE ext_mct_ioinit ( SysDepInfo , Status )
CHARACTER *(*), INTENT(IN) :: SysDepInfo
INTEGER, INTENT(INOUT) :: Status

Synopsis: Initialize the MCT coupler implementation of the WRF I/O API.
Arguments:
SysDepInfo: System dependent information.
If coupling is being done concurrently, SysDepInfo must consist of the following key/value pair:
SysDepInfo=”component_name=model_comp_name”
If coupling is being done sequentially, SysDepInfo must consist of a list of keys and values for all model components, as follows:
SysDepInfo=”component1=model1_comp_name, component2=model2_comp_name, …”

2. ext_mct_ioexit

SUBROUTINE ext_mct_ioexit ( Status )
INTEGER, INTENT(INOUT) :: Status

Synopsis: Shut down the WRF I/O system.

3. ext_mct_inquiry

SUBROUTINE ext_mct_inquiry ( Inquiry , Result , Status )
CHARACTER *(*), INTENT(IN) :: Inquiry
CHARACTER *(*), INTENT(OUT) :: Result
INTEGER , INTENT(OUT) :: Status

Synopsis: Return information about the MCT implementation of the WRF I/O API.
Arguments:
Inquiry: The attribute of the package being queried.
Result: The result of the query.
Description: Below are the MCT implementation specific queries and their results.
‘RANDOM_WRITE’ ‘NO’
‘RANDOM_READ’ ‘NO’
‘SEQUENTIAL_WRITE’ ‘FRAMED’
‘SEQUENTIAL_READ’ ‘FRAMED’
‘OPEN_READ’ ‘NO’
‘OPEN_WRITE’ ‘NO’
‘OPEN_COMMIT_READ’ ‘REQUIRED’
‘OPEN_COMMIT_WRITE’ ‘REQUIRED’
‘MEDIUM’ ‘COUPLED’
‘PARALLEL_IO’ ‘YES’
‘SELF_DESCRIBING’ ‘NO’
‘SUPPORT_METADATA’ ‘NO’
‘SUPPORT_3D_FIELDS’ ‘NO’
‘EXECUTION_MODE’ ‘CONCURRENT’ or ‘SEQUENTIAL’

4. ext_mct_open_for_read_begin

SUBROUTINE ext_mct_open_for_read_begin ( ComponentName , GlobalComm , CompComm , &
                                         SysDepInfo , DataHandle , Status )
CHARACTER *(*), INTENT(IN)  :: ComponentName
INTEGER ,       INTENT(IN)  :: GlobalComm , CompComm
CHARACTER *(*), INTENT(IN)  :: SysDepInfo
INTEGER ,       INTENT(OUT) :: DataHandle
INTEGER ,       INTENT(OUT) :: Status

Synopsis: Begin the “training” phase for reading from the component ComponentName using the I/O package MCT.
Arguments:
ComponentName: The name of the model component from which data will be read.
GlobalComm: Communicator for all processors.
CompComm: Communicator for the processors specific to the calling component.
SysDepInfo: System dependent information described below.
DataHandle: Returned to the calling program to serve as a handle to the open coupler stream for subsequent I/O API operations.
Description: This routine opens a coupler data stream for receiving messages. The call to this routine marks the beginning of the “training” phase described in the introduction of this section. A call to ext_mct_open_for_read_begin must be paired with a call to ext_mct_open_for_read_commit.
SysDepInfo must contain two key/value pairs, as follows:
SysDepInfo=”sparse_matrix_base_name=base_filename, component_name=comp_name”
As described earlier, base_filename is the base name of the file containing the sparse matrix coefficients, and comp_name is the name of the component originating the read operation. In the concurrent case, the communicator CompComm is associated with this component.
In the sequential case, both GlobalComm and CompComm must be the global communicator.

5. ext_mct_open_for_read_commit

SUBROUTINE ext_mct_open_for_read_commit ( DataHandle , Status )
INTEGER , INTENT(IN)  :: DataHandle
INTEGER , INTENT(OUT) :: Status

Synopsis: End the “training” phase.
Arguments:
DataHandle: Descriptor for an open coupler stream.
Description: This routine indicates to the MCT implementation that the “training” phase is over and that all subsequent read operations should actually receive data from the other component.

6. ext_mct_open_for_write_begin

SUBROUTINE ext_mct_open_for_write_begin ( ComponentName , GlobalComm , CompComm , &
                                          SysDepInfo , DataHandle , Status )
CHARACTER *(*), INTENT(IN)  :: ComponentName
INTEGER ,       INTENT(IN)  :: GlobalComm , CompComm
CHARACTER *(*), INTENT(IN)  :: SysDepInfo
INTEGER ,       INTENT(OUT) :: DataHandle
INTEGER ,       INTENT(OUT) :: Status

Synopsis: Begin the “training” phase for writing to the component ComponentName using the I/O package MCT.
Arguments:
ComponentName: The name of the model component to which data will be written.
GlobalComm: Communicator for all processors.
CompComm: Communicator for the processors specific to the calling component.
SysDepInfo: System dependent information described below.
DataHandle: Returned to the calling program to serve as a handle to the open coupler stream for subsequent I/O API operations.
Description: This routine opens a coupler data stream for sending messages. The call to this routine marks the beginning of the “training” phase described in the introduction of this section. A call to ext_mct_open_for_write_begin must be paired with a call to ext_mct_open_for_write_commit.
SysDepInfo must contain two key/value pairs, as follows:
SysDepInfo=”sparse_matrix_base_name=base_filename, component_name=comp_name”
As described earlier, base_filename is the base name of the file containing the sparse matrix coefficients, and comp_name is the name of the component originating the write operation.
In the concurrent case, the communicator CompComm is associated with this component. In the sequential case, both GlobalComm and CompComm must be the global communicator.

7. ext_mct_open_for_write_commit

SUBROUTINE ext_mct_open_for_write_commit ( DataHandle , Status )
INTEGER , INTENT(IN)  :: DataHandle
INTEGER , INTENT(OUT) :: Status

Synopsis: End the “training” phase.
Arguments:
DataHandle: Descriptor for an open coupler stream.
Description: This routine indicates to the MCT implementation that the “training” phase is over and that all subsequent write operations should actually send data to the other component.

8. ext_mct_read_field

SUBROUTINE ext_mct_read_field ( DataHandle , DateStr , VarName , &
                                Field , FieldType , GlobalComm , &
                                CompComm , &
                                DomainDesc , MemoryOrder , Stagger , &
                                DimNames , &
                                DomainStart , DomainEnd , &
                                MemoryStart , MemoryEnd , &
                                PatchStart , PatchEnd , &
                                Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(IN) :: DateStr
CHARACTER*(*) ,INTENT(IN) :: VarName
type ,INTENT(IN) :: Field(*)
INTEGER ,INTENT(IN) :: FieldType
INTEGER ,INTENT(IN) :: GlobalComm
INTEGER ,INTENT(IN) :: CompComm
INTEGER ,INTENT(IN) :: DomainDesc
CHARACTER*(*) ,INTENT(IN) :: MemoryOrder
CHARACTER*(*) ,INTENT(IN) :: Stagger
CHARACTER*(*) , DIMENSION(*) ,INTENT(IN) :: DimNames
INTEGER ,DIMENSION(*) ,INTENT(IN) :: DomainStart , DomainEnd
INTEGER ,DIMENSION(*) ,INTENT(IN) :: MemoryStart , MemoryEnd
INTEGER ,DIMENSION(*) ,INTENT(IN) :: PatchStart , PatchEnd
INTEGER ,INTENT(OUT) :: Status

Synopsis: Receive the variable named VarName from the component pointed to by DataHandle.
Arguments:
DataHandle: Descriptor for an open coupler stream.
DateStr: Not used.
VarName: A string containing the name of the variable to be received.
Field: A pointer to the variable to be received.
FieldType: Data type of the variable.
GlobalComm: Communicator for all processors.
CompComm: Communicator for the processors specific to the calling component.
DomainDesc: Not used.
MemoryOrder: Not used.
Stagger: Staggering of the variable on the grid.
DimNames: Not used.
DomainStart: Field attribute (see Section 1.8).
DomainEnd: Field attribute.
MemoryStart, MemoryEnd: Field attributes.
PatchStart, PatchEnd: Field attributes.
Description: This routine receives the variable named VarName into location Field from the component pointed to by DataHandle. No data are received if this routine is called during “training”. Currently, only fields of type WRF_REAL are supported. The Stagger argument is combined with the sparse matrix base name specified in the Open call to form the complete sparse matrix file name.

9. ext_mct_write_field

SUBROUTINE ext_mct_write_field ( DataHandle , DateStr , VarName , &
                                 Field , FieldType , GlobalComm , &
                                 CompComm , &
                                 DomainDesc , MemoryOrder , Stagger , &
                                 DimNames , &
                                 DomainStart , DomainEnd , &
                                 MemoryStart , MemoryEnd , &
                                 PatchStart , PatchEnd , &
                                 Status )
INTEGER ,INTENT(IN) :: DataHandle
CHARACTER*(*) ,INTENT(IN) :: DateStr
CHARACTER*(*) ,INTENT(IN) :: VarName
type ,INTENT(IN) :: Field(*)
INTEGER ,INTENT(IN) :: FieldType
INTEGER ,INTENT(IN) :: GlobalComm
INTEGER ,INTENT(IN) :: CompComm
INTEGER ,INTENT(IN) :: DomainDesc
CHARACTER*(*) ,INTENT(IN) :: MemoryOrder
CHARACTER*(*) ,INTENT(IN) :: Stagger
CHARACTER*(*) , DIMENSION(*) ,INTENT(IN) :: DimNames
INTEGER ,DIMENSION(*) ,INTENT(IN) :: DomainStart , DomainEnd
INTEGER ,DIMENSION(*) ,INTENT(IN) :: MemoryStart , MemoryEnd
INTEGER ,DIMENSION(*) ,INTENT(IN) :: PatchStart , PatchEnd
INTEGER ,INTENT(OUT) :: Status

Synopsis: Send the variable named VarName to the component pointed to by DataHandle.
Arguments:
DataHandle: Descriptor for an open coupler stream.
DateStr: Not used.
VarName: A string containing the name of the variable to be sent.
Field: A pointer to the variable to be sent.
FieldType: Data type of the variable.
GlobalComm: Communicator for all processors.
CompComm: Communicator for the processors specific to the calling component.
DomainDesc: Not used.
MemoryOrder: Not used.
Stagger: Staggering of the variable on the grid.
DimNames: Not used.
DomainStart, DomainEnd: Field attributes.
MemoryStart, MemoryEnd: Field attributes.
PatchStart, PatchEnd: Field attributes.
Description: This routine sends the variable named VarName in location Field to the component pointed to by DataHandle. No data are sent if this routine is called during “training”. Currently, only fields of type WRF_REAL are supported. The Stagger argument is combined with the sparse matrix base name specified in the Open call to form the complete sparse matrix file name.

APPENDIX C: MCEL Reference Implementation