WRF code

Some Coding Structure in WRF

Software Architecture Features
• F90 with structures and dynamic memory allocation
• Modules
• Run-time configurable
• Hierarchical software design
• Multi-level parallel decomposition: shared-, distributed-, and hybrid-memory
Multi-level parallel decomposition

[Figure: a logical domain divided into patches; 1 patch divided into multiple tiles]

Single version of code for efficient execution on:
• Distributed-memory
• Shared-memory
• Hybrid-memory

Model domains are decomposed for parallelism on two levels:
• Patch: section of model domain allocated to a distributed-memory node
• Tile: section of a patch allocated to a shared-memory processor within a node; this is also the scope of a model-layer subroutine
Distributed-memory parallelism is over patches; shared-memory parallelism is over tiles within patches (a sketch of the tile loop follows).
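To make the two decomposition levels concrete, here is a minimal sketch of a driver-layer tile loop: OpenMP threads take tiles of this node's patch and pass the corresponding tile dimensions to a model-layer subroutine. The names grid%num_tiles, grid%i_start, grid%i_end, grid%j_start, and grid%j_end follow WRF conventions, but the fragment is illustrative rather than verbatim WRF source.

 ! Shared-memory (tile-level) parallelism inside one patch; distributed-
 ! memory parallelism over patches is handled outside this loop.
 !$OMP PARALLEL DO PRIVATE ( ij )
 DO ij = 1, grid%num_tiles
    CALL model ( arg1, arg2,                          & ! memory-dimensioned state
                 ids, ide, jds, jde, kds, kde,        & ! domain dims
                 ims, ime, jms, jme, kms, kme,        & ! memory dims (patch+halo)
                 grid%i_start(ij), grid%i_end(ij),    & ! tile dims for tile ij
                 grid%j_start(ij), grid%j_end(ij),    &
                 kts, kte )
 END DO
 !$OMP END PARALLEL DO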
Three Sets of Dimensions
• Domain size: ids, ide, jds, jde, kds, kde
• Memory size: ims, ime, jms, jme, kms, kme
• Tile size:   its, ite, jts, jte, kts, kte
Template for a model-layer subroutine

 SUBROUTINE model (                              &
            arg1, arg2, arg3, … , argn,          &
            ids, ide, jds, jde, kds, kde,        & ! Domain dims
            ims, ime, jms, jme, kms, kme,        & ! Memory dims
            its, ite, jts, jte, kts, kte )         ! Tile dims

 IMPLICIT NONE

 ! Define Arguments (State and I1) data
 REAL, DIMENSION (ims:ime,kms:kme,jms:jme) :: arg1, . . .
 REAL, DIMENSION (ims:ime,jms:jme)         :: arg7, . . .
 . . .
 ! Define Local Data (I2)
 REAL, DIMENSION (its:ite,kts:kte,jts:jte) :: loc1, . . .
 . . .
 ! Executable code; loops run over tile dimensions
 DO j = jts, jte
   DO k = kts, kte
     DO i = MAX(its,ids), MIN(ite,ide)
       loc1(i,k,j) = arg1(i,k,j) + …
     END DO
   END DO
 END DO

Domain dimensions
• Size of logical domain
• Used for bdy tests, etc.

Memory dimensions
• Used to dimension dummy arguments
• Do not use for local arrays

Tile dimensions
• Local loop ranges
• Local array dimensions

[Figure: one node's logical patch with its halo; memory dimensions (ims:ime, jms:jme) span the patch plus halo, tile dimensions (its:ite, jts:jte) span a single tile]
Distributed Memory Communications

Example code fragment that requires communication between patches
(from dyn_eh/module_diffusion.F).

Note the tell-tale +1 and -1 expressions in the indices of the rr, H1, and H2 arrays on the right-hand side of the assignment. These are horizontal data dependencies: the indexed operands may lie in the patch of a neighboring processor, and that neighbor's updates to those elements of the array won't be seen on this processor. We have to communicate.
 SUBROUTINE horizontal_diffusion_s (tendency, rr, var, . . .
 . . .
 DO j = jts,jte
 DO k = kts,ktf
 DO i = its,ite
    mrdx=msft(i,j)*rdx
    mrdy=msft(i,j)*rdy
    tendency(i,k,j)=tendency(i,k,j)-                        &
         (mrdx*0.5*((rr(i+1,k,j)+rr(i,k,j))*H1(i+1,k,j)-    &
                    (rr(i-1,k,j)+rr(i,k,j))*H1(i  ,k,j))+   &
          mrdy*0.5*((rr(i,k,j+1)+rr(i,k,j))*H2(i,k,j+1)-    &
                    (rr(i,k,j-1)+rr(i,k,j))*H2(i,k,j  ))-   &
          msft(i,j)*(H1avg(i,k+1,j)-H1avg(i,k,j)+           &
                     H2avg(i,k+1,j)-H2avg(i,k,j)            &
                    )/dzetaw(k)                             &
         )
 ENDDO
 ENDDO
 ENDDO
 . . .
Data Structure

WRF Data Taxonomy
• State data
• Intermediate data type 1 (L1)
• Intermediate data type 2 (L2)
Data Structure: State Data
• Persist for the duration of a domain
• Represented as fields in the domain data structure
• Arrays are represented as dynamically allocated pointer arrays in the domain data structure
• Declared in the Registry using the state keyword
• Always memory-dimensioned; always thread-shared
• Only state arrays can be subject to I/O and interprocessor communication
A minimal sketch of this representation follows.
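To illustrate "fields in the domain data structure", here is a small, self-contained sketch (not WRF source) of a state array as a dynamically allocated pointer field of a domain derived type, allocated with memory (patch plus halo) dimensions; the field name ru is borrowed from the Registry example later in these notes.

 ! Minimal sketch: state arrays as pointer fields of a domain type.
 MODULE demo_domain
    TYPE domain
       REAL, POINTER :: ru(:,:,:)    ! a Registry "state" array
    END TYPE domain
 END MODULE demo_domain

 PROGRAM demo
    USE demo_domain
    TYPE(domain) :: grid
    INTEGER :: ims, ime, kms, kme, jms, jme
    ims = 0; ime = 11; kms = 1; kme = 35; jms = 0; jme = 11   ! memory dims
    ALLOCATE ( grid%ru(ims:ime,kms:kme,jms:jme) )             ! thread-shared
    grid%ru = 0.0
 END PROGRAM demo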
Data Structure: L1 Data
• Data that persist for the duration of one time step on a domain and are then released
• Declared in the Registry using the i1 keyword
• Typically automatic storage (program stack) in the solve routine
• Typical usage is for tendency or temporary arrays in the solver
• Always memory-dimensioned and thread-shared
• Typically not communicated or subject to I/O (sketch below)
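For instance, an i1 array in the solve routine is, in effect, a memory-dimensioned automatic array. A minimal sketch; the name ww1 is taken from the Registry example later in these notes, and the rest is illustrative rather than Registry-generated code.

 ! Illustrative I1 (L1) data: automatic storage in the solve routine,
 ! memory-dimensioned, released when the routine returns.
 SUBROUTINE solve_demo ( ims, ime, kms, kme, jms, jme )
    IMPLICIT NONE
    INTEGER, INTENT(IN) :: ims, ime, kms, kme, jms, jme
    ! automatic array on the program stack; in WRF this declaration
    ! is generated by the Registry from an "i1" entry such as ww1
    REAL, DIMENSION(ims:ime,kms:kme,jms:jme) :: ww1
    ww1 = 0.0       ! e.g. initialize a tendency/temporary array
 END SUBROUTINE solve_demo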
Data Structure: L2 Data
• L2 data are local arrays that exist only in model-layer subroutines, and only for the duration of the call to the subroutine
• L2 data are not declared in the Registry, never communicated, and never input or output
• L2 data are tile-dimensioned and thread-local; over-dimensioning within the routine for redundant computation is allowed, but is the responsibility of the model-layer programmer
• Should always be limited to thread-local data (sketch below)
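A sketch of the over-dimensioning point, with illustrative names: a model-layer routine may extend an L2 array a point beyond the tile for redundant computation, as long as it stays thread-local.

 SUBROUTINE model_layer_demo ( its, ite, jts, jte, kts, kte )
    IMPLICIT NONE
    INTEGER, INTENT(IN) :: its, ite, jts, jte, kts, kte
    ! tile-dimensioned L2 local:
    REAL, DIMENSION(its:ite,kts:kte,jts:jte) :: loc1
    ! over-dimensioned by one point in i and j for redundant
    ! computation; still thread-local, never communicated or output
    REAL, DIMENSION(its-1:ite+1,kts:kte,jts-1:jte+1) :: loc2
    loc1 = 0.0 ; loc2 = 0.0
 END SUBROUTINE model_layer_demo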
The Registry

"Active data dictionary" for managing WRF data structures:
• Database describing attributes of model state, intermediate, and configuration data:
  - Dimensionality, number of time levels, staggering
  - Association with physics
  - I/O classification (history, initial, restart, boundary)
  - Communication points and patterns
  - Configuration lists (e.g. namelists)
• Program for auto-generating sections of WRF from the database
  (570 Registry entries → 30-thousand lines of automatically generated WRF code):
  - Allocation statements for state data and I1 data
  - Argument lists for driver-layer/mediation-layer interfaces
  - Interprocessor communications: halo and periodic boundary updates, transposes
  - Code for defining and managing run-time configuration information
  - Code for forcing, feedback, and interpolation of nest data
• Automates time-consuming, repetitive, error-prone programming
• Insulates programmers and code from package dependencies
• Allows rapid development
• Documents the data
Registry Data Base
• Currently implemented as a text file: Registry/Registry
• Types of entry:
  - State – describes state variables and arrays in the domain structure
  - Dimspec – describes dimensions that are used to define arrays in the model
  - L1 – describes local variables and arrays in solve
  - Typedef – describes derived types that are subtypes of the domain structure
  - Rconfig – describes a configuration (e.g. namelist) variable or array
  - Package – describes attributes of a package (e.g. physics)
  - Halo – describes halo update interprocessor communications
  - Period – describes communications for periodic boundary updates
  - Xpose – describes communications for parallel matrix transposes
State/L1 Entry (Registry)

Elements:
• Entry: the keyword "state"
• Type: the type of the state variable or array (real, double, integer, logical, character, or derived)
• Sym: the symbolic name of the variable or array
• Dims: a string denoting the dimensionality of the array, or a hyphen (-)
• Use: a string denoting association with a solver or 4D scalar array, or a hyphen
• NumTLev: an integer indicating the number of time levels (for arrays) or a hyphen (for variables)
• Stagger: string indicating the staggered dimensions of the variable (X, Y, Z, or hyphen)
• IO: string indicating whether and how the variable is subject to I/O and nesting
• DName: metadata name for the variable
• Descrip: metadata description of the variable
Example

 #      Type  Sym  Dims  Use     Tlev  Stag  IO   Dname    Descrip
 # definition of a 3D, two-time-level, staggered state array
 state  real  ru   ikj   dyn_em  2     X     irh  "RHO_U"  "X WIND COMPONENT"
 i1     real  ww1  ikj   dyn_em  1     Z
State Entry – different output times

Example

In Registry:

 state real ru ikj dyn_em 2 X irh01 "RHO_U"

In namelist.input:

 auxhist1_outname    = 'pm_output_d<domain>_<date>'
 auxhist1_interval   = 10000, 10000, 5
 frames_per_auxhist1 = 30, 30, 24
 auxhist1_begin_y    = 0
 auxhist1_begin_mo   = 0
 auxhist1_begin_d    = 1
 auxhist1_begin_h    = 0
 auxhist1_begin_m    = 0
 auxhist1_begin_s    = 0
 io_form_auxhist1    = 2,

This will give you a five-minute output interval on domain 3, starting after 1 day of simulation.
Dimspec Entry

Elements:
• Entry: the keyword "dimspec"
• DimName: the name of the dimension (single character)
• Order: the order of the dimension in the WRF framework (1, 2, 3, or '-')
• HowDefined: specification of how the range of the dimension is defined
• CoordAxis: which axis the dimension corresponds to, if any (X, Y, Z, or C)
• DatName: metadata name of the dimension

Example

 #<Table>  <Dim>  <Order>  <How defined>             <Coord-axis>  <DatName>
 dimspec    i      1        standard_domain           x             west_east
 dimspec    j      3        standard_domain           y             south_north
 dimspec    k      2        standard_domain           z             bottom_top
 dimspec    l      2        namelist=num_soil_layers  z             soil_layers
Package Entry (Registry)

Elements:
• Entry: the keyword "package"
• Package name: the name of the package, e.g. "kesslerscheme"
• Associated rconfig choice: the name of an rconfig variable and the value of that variable that chooses this package
• Package state vars: unused at present; specify a hyphen (-)
• Associated 4D scalars: the names of 4D scalar arrays and the fields within those arrays that this package uses

Example

 # specification of microphysics options
 package   passiveqv      mp_physics==0  -  moist:qv
 package   kesslerscheme  mp_physics==1  -  moist:qv,qc,qr
 package   linscheme      mp_physics==2  -  moist:qv,qc,qr,qi,qs,qg
 package   ncepcloud3     mp_physics==3  -  moist:qv,qc,qr
 package   ncepcloud5     mp_physics==4  -  moist:qv,qc,qr,qi,qs

 # namelist entry that controls microphysics option
 rconfig   integer  mp_physics  namelist,namelist_04  max_domains  0
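Once declared with rconfig, the option can be read at run time. In WRF this is typically done through the config_flags structure passed into drivers, or through Registry-generated accessor routines; the accessor name below follows the usual nl_get_<option> pattern but should be treated as illustrative rather than verified against a particular version.

 ! Illustrative fragments, assuming WRF's config_flags conventions:
 ! 1) inside a driver, via the configuration structure
 IF ( config_flags%mp_physics == 2 ) THEN
    ! linscheme is selected for this domain
 ENDIF
 ! 2) via a Registry-generated accessor (nl_get_<option> naming pattern)
 CALL nl_get_mp_physics ( grid%id, mp_physics_setting )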
Comm Entries: halo and period

Elements:
• Entry: keyword "halo" or "period"
• Commname: name of the comm operation
• Description: defines the halo or period operation
  - For halo: npts:f1,f2,...[;npts:f1,f2,...]*
  - For period: width:f1,f2,...[;width:f1,f2,...]*

Example

 # first exchange in eh solver
 halo   HALO_EH_A   dyn_em  24:u_2,v_2,ru_1,ru_2,rv_1,rv_2,w_2,t_2;4:pp,pip
 # a periodic boundary update
 period PERIOD_EH_A dyn_em  2:u_1,u_2,ru_1,ru_2,v_1,v_2,rv_1,rv_2,rw_1,rw_2
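The Registry turns each comm entry into generated communication code, which the solver invokes by name. A sketch of the usual idiom, assuming the generated include file takes its name from the entry above (the exact file name and guard macro are assumptions):

 ! Execute the Registry-defined halo exchange before running code with
 ! +1/-1 horizontal dependencies (illustrative, not verified source):
 #ifdef DM_PARALLEL
 #  include "HALO_EH_A.inc"
 #endif
 CALL horizontal_diffusion_s ( tendency, rr, var, ... )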
4D Tracer Arrays
• State arrays, used to store arrays of 3D fields such as moisture tracers, chemical species, ensemble members, etc.
• First 3 indices are over grid dimensions; the last dimension is the tracer index
• Each tracer is declared in the Registry as a separate state array, but with f and optionally also t modifiers in the dimension field of the entry
• The field is then added to the 4D array whose name is given by the use field of the Registry entry
4D Tracer Array Entries (Registry)

 state real qv ikjft moist 2 - \
       i01rhusdf=(bdy_interp:dt,rqv_b,rqv_bt) "QVAPOR" "Water vapor mixing ratio" "kg kg-1"
 state real qc ikjft moist 2 - \
       i01rhusdf=(bdy_interp:dt,rqc_b,rqc_bt) "QCLOUD" "Cloud water mixing ratio" "kg kg-1"
 state real qr ikjft moist 2 - \
       i01rhusdf=(bdy_interp:dt,rqr_b,rqr_bt) "QRAIN"  "Rain water mixing ratio"  "kg kg-1"
 state real qi ikjft moist 2 - \
       i01rhusdf=(bdy_interp:dt,rqi_b,rqi_bt) "QICE"   "Ice mixing ratio"         "kg kg-1"
 state real qs ikjft moist 2 - \
       i01rhusdf=(bdy_interp:dt,rqs_b,rqs_bt) "QSNOW"  "Snow mixing ratio"        "kg kg-1"
 state real qg ikjft moist 2 - \
       i01rhusdf=(bdy_interp:dt,rqg_b,rqg_bt) "QGRAUP" "Graupel mixing ratio"     "kg kg-1"
4D Tracer Arrays
• The extent of the last dimension of a tracer array is from PARAM_FIRST_SCALAR to num_tracername
  - Both defined in Registry-generated frame/module_state_description.F
  - PARAM_FIRST_SCALAR is a defined constant (2)
  - num_tracername is computed at run time in set_scalar_indices_from_config (module_configure)
  - The calculation is based on which of the tracer arrays are associated with which specific packages in the Registry, and on which of those packages is active at run time (namelist.input)
4D Tracer Arrays
• Each tracer index (e.g. P_QV) into the 4D array is also defined in module_state_description and set in set_scalar_indices_from_config
• Code should always test that a tracer index is greater than or equal to PARAM_FIRST_SCALAR before referencing the tracer (inactive tracers have an index of 1)
• Loops over tracer indices should always run from PARAM_FIRST_SCALAR to num_tracername (example below)
4D Tracer Array Example
• 4D moisture field, moist_1(i,k,j,?)
  ? = P_QV (water vapor)
      P_QC (cloud water)
      P_QI (cloud ice)
      P_QR (rain)
      P_QS (snow)
      P_QG (graupel)

 IF ( qi_flag ) THEN     ! the memory for cloud ice is allocated
 ...
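A hedged sketch of the two rules above, using the names PARAM_FIRST_SCALAR, P_QI, and num_moist from the Registry-generated module_state_description; the array names and the copy itself are illustrative.

 ! Test an individual tracer before use (inactive tracers have index 1):
 IF ( P_QI .GE. PARAM_FIRST_SCALAR ) THEN
    ! cloud ice is active; moist_1(:,:,:,P_QI) is safe to reference
 ENDIF

 ! Loop over active tracers only, skipping the dummy index 1:
 DO im = PARAM_FIRST_SCALAR, num_moist
    DO j = jts, jte
       DO k = kts, kte
          DO i = its, ite
             moist_2(i,k,j,im) = moist_1(i,k,j,im)   ! illustrative copy
          END DO
       END DO
    END DO
 END DO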
Directory Structure

[Figure: WRF directory tree, including the Registry/ directory]
WRF Mass-Coordinate Model Integration Procedure
(WRFV3/dyn_em/solve_em.F)

Begin time step
  Runge-Kutta loop (steps 1, 2, and 3)
    (i)   advection, p-grad, buoyancy using time-level-t fields
    (ii)  if step 1 (first_rk_step_part1/part2): physics, save for steps 2 and 3
    (iii) assemble dynamics tendencies
    Acoustic step loop
      (i)  advance U, V, then μ, Θ, then w, φ
      (ii) time-average U, V, Ω
    End acoustic loop
    Advance scalars using time-averaged U, V, Ω
  End Runge-Kutta loop
  Other physics (currently microphysics)
End time step

phy_prep
…
phy_init
radiation_driver
surface_driver
pbl_driver
WRF
…
solve_em
part1
cumulus_driver
DYNAMICS
.
moist_physics_prep
microphysics_driver
Physics
Calculate decoupled variable tendencies
• Cumulus parameterization
• Boundary layer parameterization
• Radiation parameterization
Update decoupled variables directly
• Microphysics
Physics three-level structure

solve_em
   ↓
Physics_driver

 SELECT CASE (CHOICE)
    CASE ( NOPHY )
    CASE ( SCHEME1 )
       CALL XXX
    CASE ( SCHEME2 )
       CALL YYY
    .
    CASE DEFAULT
 END SELECT
   ↓
Individual physics scheme ( XXX )
Rules for WRF physics

Naming rules: module_yy_xxx.F (module)
  yy = ra  is for radiation
       bl  is for PBL
       sf  is for surface and surface layer
       cu  is for cumulus
       mp  is for microphysics
  xxx = individual scheme
  e.g., module_cu_grell.F
Rules for WRF physics

Naming rules: RXXYYTEN (tendencies)
  XX = variable (th, u, v, qv, qc, …)
  YY = ra  is for radiation
       bl  is for PBL
       cu  is for cumulus
  e.g., RTHBLTEN
Rules for WRF physics
• Naming rules
• One scheme, one module (a skeleton sketch follows)
• Coding rules (later)
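Putting the naming rules and the one-scheme-one-module rule together, a new cumulus scheme module might be laid out as below. The module and routine names follow the expscheme example used later in these notes; everything inside is an illustrative skeleton, not actual WRF source.

 ! module_cu_exp.F — skeleton for a hypothetical "exp" cumulus scheme
 ! (yy = cu for cumulus; the tendency name follows the RXXYYTEN rule).
 MODULE module_cu_exp
 CONTAINS
    SUBROUTINE expcps ( rthcuten,                      &
                        ims, ime, jms, jme, kms, kme,  &
                        its, ite, jts, jte, kts, kte )
       IMPLICIT NONE
       INTEGER, INTENT(IN) :: ims, ime, jms, jme, kms, kme
       INTEGER, INTENT(IN) :: its, ite, jts, jte, kts, kte
       REAL, INTENT(INOUT), DIMENSION(ims:ime,kms:kme,jms:jme) :: rthcuten
       INTEGER :: i, j, k
       DO j = jts, jte            ! loops run over tile dimensions
          DO k = kts, kte
             DO i = its, ite
                rthcuten(i,k,j) = 0.0    ! scheme computations go here
             END DO
          END DO
       END DO
    END SUBROUTINE expcps
 END MODULE module_cu_exp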
WRF Physics Features
• Unified global constants (module_model_constants.F)

 REAL, PARAMETER :: r_d = 287.
 REAL, PARAMETER :: r_v = 461.6
 REAL, PARAMETER :: cp  = 7.*r_d/2.
 REAL, PARAMETER :: cv  = cp-r_d
 .
 .
WRF Physics Features
• Unified global constants (module_model_constants.F)
• Unified common calculations (e.g. saturation mixing ratio)
• Vertical index (kms is at the bottom)
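A scheme picks these constants up with a USE of the shared module rather than redeclaring local copies; a minimal sketch (the routine and argument names are illustrative, r_d is from the slide above):

 SUBROUTINE scheme_demo ( t, p, n )
    USE module_model_constants, ONLY : r_d   ! unified constants module
    IMPLICIT NONE
    INTEGER, INTENT(IN) :: n
    REAL,    INTENT(IN) :: t(n), p(n)
    REAL :: rho(n)
    rho = p / ( r_d * t )   ! e.g. dry-air density from the shared gas constant
 END SUBROUTINE scheme_demo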
Implement a new physics scheme
• Prepare your code
• Create a new module
• Declare new variables and a new package in the Registry
• Modify namelist
• Do initialization
• Modify solve_em.F
• Modify phy_prep
Implement a new physics scheme (continued)
• Modify cumulus_driver.F (using a cumulus parameterization as the example)
• Modify calculate_phy_ten
• Modify phy_cu_ten (module_physics_addtendc.F)
• Modify Makefile
• Compile and test
Prepare your code

1. F90
   a) Replace continuation characters in the 6th column with the F90 continuation '&' at the end of the previous line:

      F77:
            Subroutine kessler(QV, T,
           +                   its,ite,jts,jte,kts,kte,
           +                   ims,ime,jms,jme,kms,kme,
           +                   ids,ide,jds,jde,kds,kde)

      F90:
            Subroutine kessler(QV, T, . . . ,           &
                               its,ite,jts,jte,kts,kte, &
                               ims,ime,jms,jme,kms,kme, &
                               ids,ide,jds,jde,kds,kde )

   b) Replace the 1st-column 'C' for comments with '!':

      F77:   c This is a test
      F90:   ! This is a test

2. No common blocks — pass data through argument lists:

      common/var1/T,q,p, …

   becomes, in WRF:

      Subroutine sub(T,q,p, ….)
      real,intent(out), &
           dimension(ims:ime,kms:kme,jms:jme):: T,q,p

3. Use "implicit none"
4. Use "intent":

      Subroutine sub(T,q,p, ….)
      implicit none
      real,intent(out),   &
           dimension(ims:ime,kms:kme,jms:jme):: T
      real,intent(in),    &
           dimension(ims:ime,kms:kme,jms:jme):: q
      real,intent(inout), &
           dimension(ims:ime,kms:kme,jms:jme):: p

5. Variable dimensions — memory dimensions for dummy arguments, tile dimensions for locals:

      Subroutine sub(global,….)
      implicit none
      real,intent(out), &
           dimension(ims:ime,kms:kme,jms:jme):: global
      real,dimension(its:ite,kts:kte,jts:jte):: local

6. Do loops run over tile dimensions:

      do j = jts, jte
        do k = kts, kte
          do i = its, ite
            ...
          enddo
        enddo
      enddo
Implement a new physics scheme
• Create a new module, e.g. module_cu_exp.F (plug in all your code)
• Go to the Registry and declare a new package (and new variables) (WRFV1/Registry):

 package  kfscheme   cu_physics==1  -  -
 package  bmjscheme  cu_physics==2  -  -
 package  expscheme  cu_physics==3  -  -
For cloud microphysics, the package entries also list the 4D scalar fields each scheme uses:

 package  kesslerscheme  mp_physics==1  -  moist:qv,qc,qr
 package  linscheme      mp_physics==2  -  moist:qv,qc,qr,qi,qs,qg
 package  wsm3           mp_physics==3  -  moist:qv,qc,qr
 package  wsm5           mp_physics==4  -  moist:qv,qc,qr,qi,qs
• Modify namelist.input and assign cu_physics = 3
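In namelist.input this goes in the &physics block, one value per domain; a minimal sketch assuming a three-domain run:

 &physics
  cu_physics = 3, 3, 3,
 /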
[Figure: initialization call tree — WRF → solve_em (dyn_em); at startup, start_domain_em (dyn_em/start_em.F) → phy_init → cu_init (phys/module_physics_init.F)]
In phys/module_physics_init.F:
• Pass new variables down to cu_init
• Go to subroutine cu_init: include the new module and create a new SELECT CASE
phys/module_physics_init.F:

 Subroutine cu_init(…)
 .
 USE module_cu_kf
 USE module_cu_bmj
 USE module_cu_exp
 .
 cps_select: SELECT CASE(config_flags%cu_physics)
    CASE (KFSCHEME)
       CALL kfinit(...)
    CASE (BMJSCHEME)
       CALL bmjinit(...)
    CASE (EXPSCHEME)        ! Match the package name in the Registry
       CALL expinit(...)
    CASE DEFAULT
 END SELECT cps_select
[Figure: call tree — WRF → solve_em part1 → phy_prep; moist_physics_prep → microphysics_driver]

phy_prep / moist_physics_prep:
• Calculate required variables
• Convert variables from C grid to A grid
[Figure: the same call tree, now with the new expcps scheme called from cumulus_driver]
cumulus_driver.F
• Go to the physics driver (cumulus_driver.F): include the new module and create a new SELECT CASE in the driver
• Check the available variables in the driver (variables are explained inside the drivers)
module_cumulus_driver.F:

 MODULE module_cumulus_driver
 CONTAINS
    Subroutine cumulus_driver (….)
 .
 .
 !-- RQICUTEN   Qi tendency due to cumulus scheme precipitation (kg/kg/s)
 !-- RAINC      accumulated total cumulus scheme precipitation (mm)
 !-- RAINCV     cumulus scheme precipitation (mm)
 !-- NCA        counter of the cloud relaxation time in KF cumulus scheme (integer)
 !-- u_phy      u-velocity interpolated to theta points (m/s)
 !-- v_phy      v-velocity interpolated to theta points (m/s)
 !-- th_phy     potential temperature (K)
 !-- t_phy      temperature (K)
 !-- w          vertical velocity (m/s)
 !-- moist      moisture array (4D - last index is species) (kg/kg)
 !-- dz8w       dz between full levels (m)
 !-- p8w        pressure at full levels (Pa)
module_cumulus_driver.F:

 MODULE module_cumulus_driver
 CONTAINS
    Subroutine cumulus_driver
 .
    USE module_cu_kf
    USE module_cu_bmj
    USE module_cu_exp

    cps_select: SELECT CASE(config_flags%cu_physics)
       CASE (KFSCHEME)
          CALL KFCPS(...)
       CASE (BMJSCHEME)
          CALL BMJCPS(...)
       CASE (EXPSCHEME)        ! Match the package name in the Registry
          CALL EXPCPS(...)
       CASE DEFAULT
    END SELECT cps_select
[Figure: solve_em flow — phy_prep, part1 (cumulus_driver → expcps), calculate_phy_tend, part2 (message passing?), update_phy_ten → phy_cu_ten, then dynamics]
phys/module_physics_addtendc.F:

 Subroutine phy_cu_ten (…)
 .
    CASE (BMJSCHEME)
 .
    CASE (EXPSCHEME)
       CALL add_a2a  (rt_tendf, RTHCUTEN, … )
       CALL add_a2c_u(ru_tendf, RUBLTEN,  … )
       CALL add_a2c_v(rv_tendf, RVBLTEN,  … )
 .
       if ( QI_FLAG ) &
          CALL add_a2a(moist_tendf(ims,kms,jms,P_QV), RQVCUTEN, ..  &
                       ids,ide, jds,jde, kds,kde,                   &
                       ims,ime, jms,jme, kms,kme,                   &
                       its,ite, jts,jte, kts,kte                    )
 .