Development of a Plume-in-Grid Version
of Global-through-Urban WRF/Chem
Prakash Karamchandani, Krish Vijayaraghavan, Shu-Yun Chen
ENVIRON International Corporation, Novato, CA
Yang Zhang, North Carolina State University, Raleigh, NC
9th Annual CMAS Conference, October 11-13, 2010
Chapel Hill, North Carolina
Outline
• Objectives
• Global-through-Urban WRF/Chem
• Modeling Domains
• Challenges in Coupling Plume Model with WRF/Chem
• Solutions
• Current Status
Objectives
• Develop unified Global-through-Urban WRF/Chem (GU-WRF/Chem) for integrated climate change-air quality modeling at all scales
• Apply over multiple nested grids with domains covering the globe, the Trans-Pacific region, the continental U.S. (CONUS), and the eastern U.S.
• This work: incorporate Plume-in-Grid (PinG) treatment to resolve point source plumes in the eastern U.S., based on the treatment used in CMAQ-APT (CMAQ with Advanced Plume Treatment)
GU-WRF/Chem
• Extension of mesoscale WRF/Chem to larger scales
• Includes CB05 chemistry mechanism with global extensions (CB05GE) to represent urban, regional and remote tropospheric chemistry, and the chemistry of the lower stratosphere
• Multiple choices for aerosol treatment based on MADE/SORGAM (modal), MADRID (sectional) and MOSAIC (sectional)
• Embedded SCICHEM reactive plume model for PinG treatment (ongoing work)
Modeling Domains
Plume-in-Grid Implementation Approach
• Use plume model in CMAQ-APT as a starting point
• Update model to make chemistry (gas, aerosols, aqueous) consistent with that in GU-WRF/Chem
• Write new interfaces to couple GU-WRF/Chem with plume model
• Conduct initial testing with tracer version of model running in serial mode to verify that plumes are transported correctly within WRF
• Test chemistry version of the model in serial mode
• Implement parallelization
• Conduct summer episode simulation, evaluate model, and assess PinG impacts
Challenges
• Architectures of CMAQ and WRF/Chem are significantly different
• WRF employs a layered software architecture: driver layer, mediation layer, model layer, and external libraries for time-keeping, parallelization interfaces, etc.
• WRF architecture promotes modularity, portability, and software reuse; however, the software design is not very intuitive for physical scientists
• Science code developers are generally restricted to the model layer and the lower levels of the mediation layer
• Design works well when you don't need to go beyond these layers, e.g., for "column physics" or "column chemistry"; for fully 3-D packages, such as PinG, science code developers need to venture into the higher layers of the system
Key Features of WRF Relevant to New Science Code Development
• Makes extensive use of modern programming features of Fortran
90, such as modules, derived data types, dynamic memory
allocation, pointers, and recursion
• The most important derived data type is “domain”. Instances of this
data type are used to represent the modeling grids in a WRF or
WRF/Chem simulation
• This data type contains all the “state” variables associated with a
grid, such as meteorology, chemical species concentrations,
emissions, photolysis rates, as well as other grid characteristics, such
as dimensions and pointers to nested, sibling and parent grids
• The subroutine that performs integration of the physical and chemical processes for a single time step is a recursive subroutine, i.e., it is first called for the mother grid and then calls itself for each subsequent nested grid and their siblings until the last grid is integrated (sketched below)
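
To make this concrete, here is a minimal Fortran 90 sketch of the pattern (an illustration, not the actual WRF source; the type components, the solve_one_step routine, and the simplified control flow are assumptions):

  MODULE module_domain_sketch
     ! Sketch of a "domain"-style derived type holding a grid's state
     ! and links to related grids, plus the recursive integration call.
     TYPE domain
        INTEGER :: id                          ! grid identifier
        REAL, POINTER :: u(:,:,:)              ! example state field (IKJ order)
        TYPE(domain), POINTER :: nests         ! first nested (child) grid
        TYPE(domain), POINTER :: sibling       ! next grid at the same nest level
        TYPE(domain), POINTER :: parent        ! parent grid
     END TYPE domain
  CONTAINS
     RECURSIVE SUBROUTINE integrate ( grid )
        TYPE(domain), POINTER :: grid, kid
        CALL solve_one_step( grid )            ! advance this grid one step,
        kid => grid%nests                      ! then recurse into each nest
        DO WHILE ( ASSOCIATED( kid ) )         ! and each nest's siblings,
           CALL integrate( kid )               ! until the last grid is done
           kid => kid%sibling
        END DO
     END SUBROUTINE integrate
     SUBROUTINE solve_one_step ( grid )
        TYPE(domain), POINTER :: grid
        ! dynamics, physics, and chemistry for one time step would go here
     END SUBROUTINE solve_one_step
  END MODULE module_domain_sketch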
WRF Registry
• Text file that specifies model state arrays, I/O, coupling, inter-process communication, physics and chemistry packages, and model configuration options
• Used at model build time to auto-generate several thousand lines of code, e.g.:
  – Declaration, allocation, and initialization of state data
  – Generation of dummy argument lists and their declarations
  – Generation of actual argument lists used when passing state data between subroutines at the interface between layers in the WRF software architecture
• If new science code requires access to WRF variables that are not currently saved as state variables, the Registry needs to be modified by the code developer; the same applies to model configuration options relevant to the new code (see the example entries below)
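
For concreteness, hypothetical Registry entries of the kind this project would add might look as follows (symbol names, dimensions, and option values are illustrative assumptions, not the project's actual Registry lines):

  # hypothetical state entry for a new 2-D chemistry variable
  # (entry  type  symbol    dims  use   ntl  stag  io   dname       descrip                    units)
  state    real  ddep_vel   ij    misc  1    -     h    "DDEP_VEL"  "dry deposition velocity"  "m s-1"

  # hypothetical namelist option enabling the PinG treatment
  rconfig  integer  ping_opt  namelist,chem  max_domains  0  rh  "ping_opt"  "plume-in-grid option"  ""

The matching run-time change would then be a line such as ping_opt = 1 in the &chem block of the model namelist file.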
Other Considerations
• Array conventions (IKJ instead of the traditional IJK) for state variables and automatic arrays (illustrated in the sketch below)
• Grid staggering, e.g., horizontal wind components are specified at dot points in CMAQ and at cell faces in WRF
• Domain dimensions for all state variables are staggered: for variables defined at cross or mass points, such as chemical concentrations, this means there is an extra grid cell at the end of each dimension
• All time-keeping information (such as current time, process time step, etc.) is stored in the "domain clock" associated with each grid and must be obtained from the external time-keeping library (ESMF) using "wrapper" accessor functions
• The model namelist file (which provides run control information for a WRF/Chem simulation) must be updated to select new model configuration options specified in the Registry
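
A minimal model-layer sketch of these conventions, assuming illustrative names (the domain/memory/tile dimension triplet is the standard WRF argument pattern):

  ! Sketch only: a model-layer routine using the IKJ (i: west-east,
  ! k: vertical, j: south-north) array convention; names are illustrative.
  SUBROUTINE update_tracer ( tracer, tend, dt,               &
                             ids,ide, kds,kde, jds,jde,      &  ! domain dims
                             ims,ime, kms,kme, jms,jme,      &  ! memory (patch) dims
                             its,ite, kts,kte, jts,jte )        ! tile dims
     IMPLICIT NONE
     INTEGER, INTENT(IN) :: ids,ide, kds,kde, jds,jde,       &
                            ims,ime, kms,kme, jms,jme,       &
                            its,ite, kts,kte, jts,jte
     REAL,    INTENT(IN) :: dt
     REAL, INTENT(INOUT) :: tracer(ims:ime, kms:kme, jms:jme)   ! IKJ, not IJK
     REAL,    INTENT(IN) :: tend  (ims:ime, kms:kme, jms:jme)
     INTEGER :: i, k, j
     ! the current time, if needed, would be obtained upstream from the
     ! grid's domain clock via a wrapper accessor around ESMF
     DO j = jts, jte
        DO k = kts, kte
           DO i = its, ite
              tracer(i,k,j) = tracer(i,k,j) + dt * tend(i,k,j)
           END DO
        END DO
     END DO
  END SUBROUTINE update_tracer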
Parallelization in WRF
• 2-level domain decomposition: "patches", assigned to processes, and "tiles", assigned to threads
• Model layer routines (science code) generally operate at the tile level (see the loop sketch below)
• Driver layer allocates, stores, and decomposes model domains and provides top-level access to communications and I/O using external libraries
• Mediation layer provides the interface between the driver layer and the model layer; performs many important functions, one of which is to dereference driver layer data objects (e.g., grids) and pass the individual fields (state variables) to the model layer routines
[Figure: a logical domain decomposed into patches (one per process, with inter-processor communication at patch boundaries), each patch divided into multiple tiles; from a WRF presentation by John Michalakes, NCAR]
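
A sketch, with illustrative names, of the corresponding mediation-layer pattern: dereference fields from the grid object and call the model-layer routine (such as the update_tracer sketch above) once per tile:

  ! Sketch of a mediation-layer driver fragment: fields are dereferenced
  ! from the "grid" object and passed as plain arrays to the model-layer
  ! routine, once per tile, inside a threaded loop.
  !$OMP PARALLEL DO PRIVATE ( ij )
  DO ij = 1, grid%num_tiles
     CALL update_tracer( grid%tracer, grid%tracer_tend, dt,            &
                         ids,ide, kds,kde, jds,jde,                    &
                         ims,ime, kms,kme, jms,jme,                    &
                         grid%i_start(ij), MIN(grid%i_end(ij),ide-1),  &
                         kts, kte,                                     &
                         grid%j_start(ij), MIN(grid%j_end(ij),jde-1) )
     ! MIN(...) trims the extra staggered cell at the domain end for
     ! variables defined at mass points
  END DO
  !$OMP END PARALLEL DO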
Implications for PinG Implementation
• Domain decomposition paradigm not compatible with SCICHEM
• SCICHEM uses puff decomposition for the following reasons:
– SCICHEM state variables are puff variables, not grid variables
– Load balancing (with domain decomposition, a non-uniform distribution of point sources across the model domain would increase the load on some processors while others remain idle or perform less work)
– Difficulties with treating puffs crossing sub-domains during a
time step if domain decomposition is used
• Grid state variables are used in SCICHEM to provide 3-D meteorology and chemistry inputs; this information is required for the whole domain
• Model interface therefore needs to collect sub-domain state variable information (inter-processor communication required)
PinG in GU-WRF/Chem
• Main interface in the mediation layer
• All WRF grid description and meteorology state variables dereferenced in interface and passed as simple arrays to SCICHEM driver (in CMAQ-APT these variables were read from I/O API data files)
• WRF/Chem grid chemistry variables dereferenced in interface and passed as simple arrays to SCICHEM driver (similar to treatment in CMAQ-APT)
• New state variables specified in Registry (e.g., dry deposition
velocities), as well as new configuration options for PinG treatment
• For parallel implementation, write code to gather full-domain state arrays at the beginning of the SCICHEM time step and scatter chemistry variables at the end of the time step (see the sketch below)
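
A hedged sketch of that gather/scatter step, built around WRF's patch-to-global utilities wrf_patch_to_global_real and wrf_global_to_patch_real (the scichem_step driver name and the exact argument usage shown are assumptions for illustration):

  ! Sketch only: gather full-domain chemistry for SCICHEM, advance the
  ! puffs, then scatter the result back to the per-processor patches.
  REAL, ALLOCATABLE :: chem_glob(:,:,:)
  ALLOCATE( chem_glob(ids:ide, kds:kde, jds:jde) )

  ! 1. collect each processor's patch into the full-domain array
  CALL wrf_patch_to_global_real( chem_patch, chem_glob, grid%domdesc, &
                                 ' ', 'xzy',                          &
                                 ids,ide, kds,kde, jds,jde,           &  ! domain
                                 ims,ime, kms,kme, jms,jme,           &  ! memory
                                 ips,ipe, kps,kpe, jps,jpe )             ! patch

  ! 2. advance the SCICHEM puffs one synchronization interval
  !    (scichem_step is a hypothetical driver name)
  CALL scichem_step( chem_glob, dt )

  ! 3. scatter the updated full-domain chemistry back to the patches
  CALL wrf_global_to_patch_real( chem_glob, chem_patch, grid%domdesc, &
                                 ' ', 'xzy',                          &
                                 ids,ide, kds,kde, jds,jde,           &
                                 ims,ime, kms,kme, jms,jme,           &
                                 ips,ipe, kps,kpe, jps,jpe )

  DEALLOCATE( chem_glob )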
Current Status
• Meteorology and grid interfaces completed
• Tracer version of model currently being tested
• Remaining work: updates to SCICHEM chemistry, development of chemistry interfaces, and parallelization
Acknowledgments
This work is being conducted under subcontract to North
Carolina State University for EPA STAR Grant #
RD833376
Questions?