Joint Ensemble Forecast System (JEFS)
NCAR, Sep 2005
Overview
• Motivation/Goal
• Requirements
• Resources
• System Design
• Roadmap
• Products/Applications
JEFS’ Goal
Prove the value, utility, and operational feasibility of ensemble forecasting to DoD operations.

Deterministic Forecasting
• Ignores forecast uncertainty
• Potentially very misleading
• Oversells forecast capability

Ensemble Forecasting
• Reveals forecast uncertainty
• Yields probabilistic information
• Enables optimal decision making
Ensemble Forecast Requirements
Air Force (and Army)
AFW Strategic Plan and Vision, FY2008-2032
Issue #3/4-3: Use of multi-scale (kilometer to meter resolution), ensemble, and consensus model forecasts,
combined with automation of local techniques, to support planning and execution of military operations.
“Ensembles have the potential to help quantify the certainty of a prediction, which is something that users have been
interested in for years. The military applications of ensemble forecasting are only at their beginnings; there are years’
worth of research waiting to be done.”
Operational Requirements Document, USAF 003-94-I/II/III-D, Centralized Aerospace Weather Capability (CAWC ORD)
…will support ensemble forecasting with the following capabilities:
1) The creation of sets of perturbed initial conditions of the fine-scale model initialized fields in selected regional windows.
2) Assembly of ensemble forecasts either from model output sets derived from the multiple sets of perturbed initial
conditions or from sets assembled from the output from different models.
3) Evaluation of forecasting skill of ensemble forecasts compared to single forecast model outputs.
Air Force Weather, FY 06-30, Mission Area Plan (AFW MAP)
Deficiency: Mesoscale Ensemble Forecasting
“The key to successful ensemble forecasting is many different realizations of the same forecast events. Studies using
different models - or the same model with different configurations - consistently yield better overall forecasts.
This demonstrates a definite need for multiple model runs.”
R&D Portfolio
MSA Shortfall D-08-07K: Insufficient ensemble forecasting capability for AFWA’s theater scale model
Ensemble Forecast Requirements
Navy
No documented requirement or supporting Fleet request for ensemble prediction.
Navy ‘requirements’ are written in terms of warfighting capabilities. The current (draft) METOC
ICD (old MNS) only specifies parameters required for support. However, ensembles present a
solution for the following specified warfighter requirements:
• Long-range prediction for mission planning, optimum track ship routing, severe weather avoidance
• Tropical cyclone prediction for safety of operations, personnel safety
• Winds, turbulence, boundary layer structure for chem/bio/nuclear dispersion (WMD support)
• Cloud base, fog, aerosol for slant range visibility (aerial recon, flight operations, targeting)
• Boundary layer structure/atmospheric refractivity (T, q) for EM propagation (detection, tracking,
communications)
• Surface winds (ASW, mine drift, SAR, flight operations in enclosed/narrow waterways)
• Surf and sea heights (SOF, small boat ops, logistics)
• Turbulence, cloud base/tops (OPARS, safety of flight)
Whenever the uncertainty of a weather phenomenon exceeds operational sensitivity,
either a reliable probabilistic prediction or a range-of-variability prediction is required.
JEFS Team

Organization: Contribution
• AFWA: JEFS integration; FY05-FY07 funding
• FNMOC: JEFS integration; NOGAPS members for JGE
• HPCMP: primary hardware funding; Programming Environment and Training (PET) on-site at AFWA
• NRL: JGE and JME initial conditions; COAMPS model perturbations
• ARL: uncertainty visualization tool, Weather Risk Analysis and Portrayal (WRAP)
• DTRA: FY05-FY09 funding
• NCAR: WRF model perturbations
• UW: calibration (bias correction and BMA); product design/development
• 20 OWS: JEFS operational testing and evaluation
• 17 OWS: JEFS operational testing and evaluation
• Yokosuka NPMOC: JEFS operational testing and evaluation
• NPS & AFIT: JEFS operational testing and evaluation
• ONR: consultation; research project(s)
Players
Maj Tony Eckel
Dr. Jerry Wegiel
Mr. Norm Mandy
Dr. Mike Sestack
Mr. John Boisseau
Dr. Steve Klotz (at AFWA)
Dr. Craig Bishop
Dr. Jim Doyle
Dr. Carolyn Reynolds
Ms. Sue Chen
Mr. Justin McLay
Mr. Dave Knapp
Ms. Barb Sauter
Mr. Hyam Singer (Next Century)
Mr. Allen Hill (Next Century)
CDR Stephanie Hamilton
Mr. Pat Hayes
Dr. Jordan Powers
Dr. Chris Snyder
Dr. Cliff Mass
Dr. Eric Grimit
Lt Col Mike Farrar
Maj David Andrus
Maj Christopher Finta
1Lt Perry Sweat
?
Dr. Russ Elsberry
Maj Bob Stenger
Dr. Steve Tracton
FY04 HPCMP Distributed Center (DC) Award
• Apr 03: FNMOC and AFWA proposed a split distributed center to the DoD High
Performance Computing Modernization Program (HPCMP) as a DoD Joint Operational
Test Bed for the Weather Research and Forecasting (WRF) modeling framework
• Apr 04: Installation of $4.2M in IBM HPC hardware began,
split equally between FNMOC and AFWA
(two 96-processor IBM Cluster 1600 p655+ systems)
• Fosters significant Navy/Air Force collaboration in NWP for
1) Testing and optimizing of WRF configurations to meet
unique Navy and Air Force NWP requirements
2) Developing and testing mesoscale ensembles based on
multiple WRF configurations to meet DoD needs
3) Testing of Grid Computing concepts and tools for NWP
• Apr 08: Project Completion
Joint Global Ensemble (JGE)
• Description: Combination of current GFS and NOGAPS global, medium-range
ensemble data. Possible expansion to include ensembles from CMC,
UKMET, JMA, etc.
• Initial Conditions: Breeding of Growing Modes 1
• Model Variations/Perturbations: Two unique models, but no model perturbations
• Model Window: Global
• Grid Spacing: 1.0° × 1.0° (~80 km)
• Number of Members: 40 at 00Z, 30 at 12Z
• Forecast Length/Interval: 10 days/12 hours
• Timing
• Cycle Times: 00Z and 12Z
• Products by: 07Z and 19Z
1 Toth, Z., and E. Kalnay, 1997: Ensemble Forecasting at NCEP and the Breeding Method. Mon. Wea. Rev., 125, 3297–3319.
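For readers who want the mechanics of the bred-mode technique cited above: it repeatedly rescales the difference between a perturbed run and a control run, so the perturbation comes to project onto growing errors. A minimal sketch, assuming a toy Lorenz-63 model in place of the operational NWP system and an arbitrary fixed rescaling amplitude:

```python
import numpy as np

def step(x, dt=0.01):
    """One forward-Euler step of the Lorenz-63 system (toy stand-in for the NWP model)."""
    s, r, b = 10.0, 28.0, 8.0 / 3.0
    dx = np.array([s * (x[1] - x[0]),
                   x[0] * (r - x[2]) - x[1],
                   x[0] * x[1] - b * x[2]])
    return x + dt * dx

def breed(x0, n_cycles=50, steps_per_cycle=100, amplitude=0.5, seed=0):
    """Breeding cycle: run control and perturbed forecasts, then rescale the difference."""
    rng = np.random.default_rng(seed)
    pert = amplitude * rng.standard_normal(x0.shape)    # arbitrary initial seed perturbation
    control = x0.astype(float).copy()
    for _ in range(n_cycles):
        perturbed = control + pert
        for _ in range(steps_per_cycle):                # one forecast segment
            control = step(control)
            perturbed = step(perturbed)
        diff = perturbed - control                      # grown perturbation
        pert = amplitude * diff / np.linalg.norm(diff)  # rescale to fixed amplitude
    return pert                                         # bred vector

print("bred vector:", breed(np.array([1.0, 1.0, 20.0])))
```

In practice bred vectors are added to (and subtracted from) the operational analysis to initialize the perturbed members.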
Joint Mesoscale Ensemble (JME)
• Description: Multiple high resolution, mesoscale model runs generated at FNMOC
and AFWA
• Initial Conditions: Ensemble Transform filter 2 run on a short-range (6-h)
mesoscale data assimilation cycle driven by GFS and NOGAPS
ensemble members
• Model variations/perturbations:
• Multimodel: WRF-ARW, COAMPS
• Varied-model: various configurations of physics packages
• Perturbed-model: randomly perturbed sfc boundary conditions (e.g., SST)
• Model Window: East Asia (COPC directive, Apr ’04)
• Grid Spacing: 15 km for baseline JME (summer ’06)
5 km nest later in project
• Number of Members: 30 (15 run at each DC site)
• Forecast Length/Interval: 60 hours/3 hours
• Timing
• Cycle Times: 06Z and 18Z
• Products by: 14Z and 02Z (~7 h production per cycle)
[Figure: East Asia model window, 15 km domain with 5 km nest]
2 Wang, X., and C. H. Bishop, 2003: A Comparison of Breeding and Ensemble Transform Kalman Filter Ensemble Forecast Schemes. J. Atmos. Sci., 60, 1140–1158.
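The ensemble transform of Wang and Bishop (2003) turns forecast perturbations into analysis perturbations by post-multiplying them with a transform matrix built from their eigendecomposition in observation space. A rough linear-algebra sketch, with synthetic perturbations, a random toy observation operator, and identity observation-error covariance, none of which reflect the actual JEFS DA configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs, n_mem = 40, 12, 15                     # toy dimensions

Xf = rng.standard_normal((n_state, n_mem))             # synthetic forecast perturbations
Xf -= Xf.mean(axis=1, keepdims=True)                   # center on the ensemble mean
H = rng.standard_normal((n_obs, n_state)) / n_state    # toy linear observation operator
R_inv_sqrt = np.eye(n_obs)                             # obs error covariance R = I here

# Perturbations projected into normalized observation space.
Y = R_inv_sqrt @ H @ Xf / np.sqrt(n_mem - 1)

# Eigendecomposition Y^T Y = C Gamma C^T gives the transform T = C (Gamma + I)^(-1/2) C^T.
gamma, C = np.linalg.eigh(Y.T @ Y)
T = C @ np.diag((gamma + 1.0) ** -0.5) @ C.T

Xa = Xf @ T   # analysis perturbations: variance reduced where observations constrain the flow
print("forecast spread:", Xf.std(), " analysis spread:", Xa.std())
```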
Joint Ensemble Forecast System: Design (FNMOC / AFWA)

Global inputs (providing lateral boundary conditions and multiple first guesses):
• NCEP Medium Range Ensemble: 44 staggered GFS runs, T126, 15 d; analysis perturbations: bred modes; model perturbations: in design
• FNMOC Medium Range Ensemble: 18 NOGAPS runs at 00Z, 8 at 12Z, T119, 10 d; analysis perturbations: bred modes; model perturbations: none

Mesoscale cycle:
• Data assimilation (3DVAR / NAVDAS) of observations, "warm start" from the global first guesses
• Ensemble Transform: generate initial condition perturbations
• Joint Mesoscale Ensemble (JME): 30 members, 15/5 km, 60 h, 2/day; one "demonstration" theater; multi-model (WRF, COAMPS); perturbed model: varied physics and surface boundary conditions

Postprocessing (both streams store principal fields, then calibrate against observations and analyses):
• Joint Global Ensemble (JGE) products: apply postprocessing calibration; long-range products tailored to support warfighter planning
• JME products: apply postprocessing calibration; short-range products tailored to support warfighter operations
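The "calibrate" boxes in the schematic cover statistical postprocessing of the stored fields. A minimal sketch of the simplest such step, a decaying-average bias correction against verifying analyses (the project also planned BMA, which is not shown); the `BiasCorrector` class, the `decay` weight, and the array shapes are illustrative assumptions:

```python
import numpy as np

class BiasCorrector:
    """Running mean-error (bias) correction, updated once per verification cycle."""
    def __init__(self, shape, decay=0.95):
        self.bias = np.zeros(shape)
        self.decay = decay             # weight on past cycles (assumed tuning value)

    def update(self, forecast, verifying_analysis):
        err = forecast - verifying_analysis
        self.bias = self.decay * self.bias + (1.0 - self.decay) * err

    def correct(self, forecast):
        return forecast - self.bias

# Usage: update with yesterday's forecast and its analysis, then correct today's.
bc = BiasCorrector(shape=(5, 5))
bc.update(forecast=np.full((5, 5), 2.0), verifying_analysis=np.zeros((5, 5)))
print(bc.correct(np.full((5, 5), 2.0)))
```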
JEFS Production Schedule (tasks laid out across a 00-24Z timeline)
• Cycle data arrive at 00Z, 06Z, 12Z, and 18Z
• JGE stream: GFS ensemble grids to AFWA and FNMOC; NOGAPS ensemble grids to AFWA; interpolate and calibrate JGE; make/distribute JGE products; obtain global analysis; update JGE calibration
• JME stream (06Z and 18Z production cycles): data assimilation; run 6-h forecasts and do ET; run JME models; exchange output; make/distribute JME products; update JME calibration
Notional Roadmap for JEFS and Beyond (FY04-FY11)

Funding sources and awards:
1. AFWA/FNMOC awarded HPCMPO Distributed Center hardware (Nov 03)
2. AFWA awarded Programming Environment and Training - Climate Weather Ocean (PET-CWO) on-site
3. NRL awarded mesoscale ensemble research (Probabilistic Prediction of High Impact Weather)
4. DTRA-AFWA ensemble investment and support
5. ARL SBIR Phase I and Phase II, with AFWA UFR
6. NCAR & UW contract, funded by AFWA Wx Fcst 3600

Milestones (Phase I, then Phase II): JEFS design; JGE RDT&E, then JGE IOC; JME RDT&E; 1st, 2nd, and 3rd mesoscale EPS hardware procurements*; mesoscale EPS IOC, then mesoscale EPS FOC.
* Note: Funded via PEC 35111F Weather Forecasting (3080M)
Product Strategy
Tailor products to customers’ needs and weather sensitivities
Forecaster Products/Applications
• Design to help transition from deterministic to stochastic thinking
Warfighter Products/Applications
• Design to aid critical decision making (Operational Risk Management)
Operational Testing & Evaluation
Forecasters:
• Pacific Air Forces: 20th Operational Weather Squadron, 17th Operational Weather Squadron, 607th Weather Squadron
• Naval Pacific Meteorological and Oceanographic Center, Yokosuka Navy Base
Warfighters:
• 7th Fleet
• PACAF, 5th Air Force
Forecaster Products/Applications
Consensus & Confidence Plot
[Figure: consensus isopleths with confidence shading; shading scale = maximum potential error (mb, +/-), from <1 to 6]
• Consensus (isopleths): shows "best guess" forecast (ensemble mean or median)
• Model confidence (shaded): increased spread in the multiple forecasts means less predictability and decreased confidence in the forecast
Probability Plot
[Figure: probability (%) contours]
• Probability of occurrence of any weather phenomenon/threshold (e.g., sfc winds > 25 kt)
• Clearly shows where uncertainty can be exploited in decision making
• Can be tailored to critical sensitivities, or interactive (as in IGRADS on JAAWIN)
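Both products above are simple reductions over the member dimension: the consensus is the ensemble mean (or median), confidence follows from spread, and the probability field is the fraction of members beyond a threshold. A sketch over a synthetic member array (member × lat × lon) with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
members = 20.0 + 8.0 * rng.standard_normal((30, 50, 50))  # 30 members, 50x50 grid of sfc wind (kt)

consensus = members.mean(axis=0)           # "best guess" field (np.median works too)
spread = members.std(axis=0)               # larger spread -> less predictability, less confidence
p_gt_25 = (members > 25.0).mean(axis=0)    # probability of sfc winds > 25 kt at each point

print(consensus.mean(), spread.mean(), p_gt_25.max())
```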
Multimeteogram
[Figure: 1000/500 hPa geopotential thickness (m) at Yokosuka for each member, initial DTG 00Z 28 Jan 1999, forecast days 0-10]
• Shows the range of possibilities for all meteogram-type variables
• A box & whisker or confidence-interval plot is more appropriate for large ensembles
• Excellent tool for point forecasting (deterministic or stochastic)
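A box-and-whisker multimeteogram of the kind suggested above can be drawn directly from a (member × lead time) array. A sketch with synthetic thickness values whose spread grows with lead time; the numbers are invented, not the 28 Jan 1999 case:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
days = np.arange(11)                                    # forecast days 0..10
# Synthetic 1000/500 hPa thickness (m): ensemble spread grows with lead time.
members = 5250 + 10 * days + (15 + 8 * days) * rng.standard_normal((40, days.size))

fig, ax = plt.subplots()
ax.boxplot(members, positions=days, whis=(5, 95))       # one box & whisker per lead time
ax.plot(days, np.median(members, axis=0), lw=2)         # consensus (median) trace
ax.set_xlabel("Forecast Day")
ax.set_ylabel("1000/500 hPa thickness (m)")
ax.set_title("Multimeteogram (synthetic data)")
plt.show()
```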
Sample JME Products
[Figure: Probability of warning criteria at McGuire AFB, based on the 15/06Z MM5 ensemble: thunderstorm, winds > 35 kt, winds > 50 kt, snow > 0.5"/hr, and freezing rain vs. valid time (Z)]
[Figure: Probability of warning criteria at Osan AB]
[Figure: Surface wind speed (kt) at Misawa AB: extreme max, mean, 90% confidence interval, and extreme min vs. valid time (Z)]
Sample JGE Product (Forecaster)
[Figure: Probability of severe turbulence at FL300, contoured from 10% to 90%]
Sample JGE Product? (Warfighter)
[Figure: Upper-level turbulence chart, flight levels 280-350]

Sample JGE Product (Warfighter)
[Figure: Chance of upper-level turbulence, intensity severe; regions annotated with base/top flight levels (250/370, 280/370, 300/330); legend: negligible chance, low, med, high]
Warfighter Products/Applications
Bridging the gap: from a stochastic forecast to binary decisions/actions

Integrated Weather Effects Decision Aid (IWEDA)
• Deterministic: a single drop zone surface wind forecast (e.g., 6 kt) is checked against weapon system weather thresholds*
• Stochastic: the ensemble distribution of drop zone surface winds is mapped to threshold probabilities, e.g., 0-9 kt: 70%; 10-13 kt: 20%; > 13 kt: 10%
*AFI 13-217
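The stochastic IWEDA view amounts to binning the ensemble by the operating thresholds. A sketch, assuming 30 synthetic member winds and bin edges approximating the drop-zone categories above; `threshold_probs` and the gamma-distributed samples are illustrative, not the IWEDA implementation:

```python
import numpy as np

def threshold_probs(samples, edges):
    """Fraction of ensemble members falling in each threshold bin."""
    counts, _ = np.histogram(samples, bins=edges)
    return counts / samples.size

winds = np.random.default_rng(3).gamma(6.0, 1.2, size=30)  # synthetic member winds (kt)
probs = threshold_probs(winds, edges=[0.0, 9.0, 13.0, np.inf])
for label, p in zip(["0-9 kt", "9-13 kt", "> 13 kt"], probs):
    print(f"{label}: {p:.0%}")
```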
Method #1: Decision Theory
• Minimize operating cost (or maximize effectiveness) in the long run by taking action based on an optimal threshold of probability, rather than an event threshold.
• What is the cost of taking action?
• What is the loss if...
  - the event occurs without protection?
  - an opportunity was missed because action was not taken?
• Good for well-defined, commonly occurring events

Example (hypothetical)
Event: damage to parked aircraft
Threshold: sfc wind > 50 kt
Cost (of protecting): $150K
Loss (if damaged): $1M

Forecast YES, Observed YES: Hit, $150K
Forecast YES, Observed NO: False alarm, $150K
Forecast NO, Observed YES: Miss, $1000K
Forecast NO, Observed NO: Correct rejection, $0K
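The optimal probability threshold falls out of the cost-loss ratio: protecting costs C regardless, while gambling costs L times the event probability, so action pays whenever p > C/L (0.15 for the parked-aircraft numbers above). A minimal sketch:

```python
def expected_costs(p, cost, loss):
    """Expected cost of protecting vs. gambling, given event probability p."""
    protect = cost              # pay the protection cost regardless of outcome
    gamble = p * loss           # pay the loss only if the event occurs
    return protect, gamble

COST, LOSS = 150_000, 1_000_000        # numbers from the parked-aircraft example
threshold = COST / LOSS                # act whenever p exceeds C/L = 0.15

for p in (0.05, 0.15, 0.40):
    protect, gamble = expected_costs(p, COST, LOSS)
    action = "protect" if p > threshold else "do nothing"
    print(f"p={p:.2f}: protect=${protect:,}, gamble=${gamble:,.0f} -> {action}")
```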
Method #2: Weather Risk Analysis and Portrayal (WRAP)
• Army Research Lab’s stochastic decision aid, in development by Next Century Corporation
• Stoplight color based on:
  1) Ensemble forecast probability distribution
  2) Weapon systems’ operating thresholds
  3) Warfighter-determined level of acceptable risk
[Figure: probability and cumulative probability distributions of drop zone surface winds (kt), showing the 9 kt and 13 kt thresholds, the median forecast value, and 70/80/90% confidence levels]
Method #2: Weather Risk Analysis and Portrayal (WRAP)
Acceptable risk sets the percentile compared against each threshold: low = 90th percentile, med = 60th percentile, high = 30th percentile.
[Figure: surface wind (kt) distributions for three drop zones against the 9 kt and 13 kt thresholds (an 18 kt threshold is queried)]
• Drop Zone #1: 99% / 1% / 0% (probabilities in the three threshold bins)
• Drop Zone #2: 1% / 31% / 68%
• Drop Zone #3: 37% / 52% / 11%
Decision input: a stoplight status for each drop zone at each acceptable-risk level.
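The percentile logic above translates directly into a go/no-go check: pick the percentile matching the acceptable risk and compare it to the operating threshold. A sketch with synthetic drop-zone winds; the gamma-distributed samples and GO/NO-GO labels are illustrative assumptions, not WRAP’s actual implementation:

```python
import numpy as np

RISK_PERCENTILE = {"low": 90, "med": 60, "high": 30}   # percentiles from the slide

def stoplight(samples, threshold_kt, acceptable_risk):
    """GO if the risk-appropriate percentile of the wind distribution
    stays below the operating threshold, else NO-GO."""
    q = np.percentile(samples, RISK_PERCENTILE[acceptable_risk])
    return "GO" if q < threshold_kt else "NO-GO"

winds = np.random.default_rng(4).gamma(7.0, 1.3, size=30)  # synthetic drop-zone winds (kt)
for risk in ("low", "med", "high"):
    print(risk, stoplight(winds, threshold_kt=13.0, acceptable_risk=risk))
```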
Method #2: Weather Risk Analysis and Portrayal (WRAP)
[Graphic: "ENSEMBLES AHEAD" road sign]
Backup Slides
The Atmosphere is a Chaotic, Dynamic System
• Describable state: the system is specified by a set of variables that evolve in "phase space"
• Deterministic: the system appears random, but the process is governed by rules
• Sensitive to initial conditions: nearby solutions diverge, so predictability is primarily limited by errors in the analysis. Analogy: two adjacent drops in a waterfall end up very far apart.
• Solution attractor: limited region in phase space where solutions occur
• Aperiodic: solutions never repeat exactly, but may appear similar
To account for this effect, we can make an ensemble of predictions (each forecast being a likely outcome) to encompass the truth.
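The sensitivity described above is easy to demonstrate on the classic Lorenz (1963) system: members started within a tiny analysis cloud drift apart until they span the attractor. A minimal sketch; the ensemble size, perturbation amplitude, and forward-Euler stepping are all illustrative choices:

```python
import numpy as np

def lorenz_step(x, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    dx = np.array([s * (x[1] - x[0]),
                   x[0] * (r - x[2]) - x[1],
                   x[0] * x[1] - b * x[2]])
    return x + dt * dx

rng = np.random.default_rng(5)
truth = np.array([1.0, 1.0, 20.0])
ensemble = truth + 1e-3 * rng.standard_normal((20, 3))  # tiny "analysis" errors

for n in range(1500):
    truth = lorenz_step(truth)
    ensemble = np.array([lorenz_step(m) for m in ensemble])
    if n % 500 == 499:
        spread = np.linalg.norm(ensemble - truth, axis=1).mean()
        print(f"step {n + 1}: mean distance from truth = {spread:.3f}")
```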
Encompassing Forecast Uncertainty
• A point in phase space completely describes an instantaneous state of the atmosphere (pressure, temperature, etc. at all points at one time).
• The true state of the atmosphere exists as a single point in phase space that we never know exactly.
• An analysis produced to run a model is somewhere in a cloud of likely states. Any point in the cloud is equally likely to be the truth.
• Nonlinearities drive apart the forecast trajectory and the true trajectory (i.e., chaos theory).
[Figure: phase-space schematic showing 12-h, 24-h, 36-h, and 48-h forecasts diverging from the corresponding verifications]
Encompassing Forecast Uncertainty
An ensemble of likely analyses leads to an ensemble of likely forecasts.
[Figure: phase-space schematic of ensemble forecast trajectories spreading around the truth]
Ensemble forecasting:
• Encompasses truth
• Reveals uncertainty
• Yields probabilistic information
The Wind Storm That Wasn’t (Thanksgiving Day 2001)
[Figure: Eta-MM5 forecast of mean sea level pressure (mb) and shaded surface wind speed (m/s) vs. verification]

The Wind Storm That Wasn’t (Thanksgiving Day 2001)
[Figure: multi-analysis MM5 ensemble panels (eta-, cent-, avn-, ngps-, cmcg-, tcwb-, and ukmo-MM5 forecasts) vs. verification]
Deterministic vs. Ensemble Forecasting

Deterministic Forecasting:
• Single solution
• Variable and unknown risk
• Attempt to minimize uncertainty
• Utility reliant on: 1) accuracy of analysis; 2) accuracy of model; 3) flow of the day; 4) forecaster experience; 5) random chance
• Cost / Return: Mod / Mod

Ensemble Forecasting:
• Multiple solutions
• Variable and known risk
• Attempt to define uncertainty
• Utility reliant on: 1) accounting of analysis error; 2) accounting of model error; 3) flow of the day; 4) machine-to-machine; 5) random sampling (# of model runs)
• Cost / Return: High / High+
The Deterministic Pitfall

Notion: The deterministic atmosphere should be modeled deterministically.
Reality: The need for stochastic forecasting is a result of the sensitivity to initial conditions.

Notion: A high resolution forecast is better.
Reality: A better looking simulation is not necessarily a better forecast (precision ≠ accuracy).

Notion: A single solution is easier for interpretation and forecasting.
Reality: It gives a misleading and incomplete view of the future state of the atmosphere.

Notion: The customer needs a single forecast to make a decision.
Reality: That is poor support to the customer since, in many cases, a reliable yes/no forecast is not possible.

Notion: A single solution is more affordable to process.
Reality: A good argument in the past, but not anymore. How can you afford not to do ensembles?

Notion: NWP was designed deterministically.
Reality: Yes and no. The NWP founders designed models for deterministic use, but knew the limitation.

Notion: There are many spectacular success stories of deterministic forecasting.
Reality: These result from forecast situations with low uncertainty, or the dumb luck of random sampling.
Method #1: Decision Theory
• Minimize operating cost (or maximize effectiveness) in the long run by taking action based on an optimal threshold of probability, rather than an event threshold.
• What is the cost of taking action?
• What is the loss if...
  - the event occurs without protection?
  - an opportunity was missed because action was not taken?
• Good for well-defined, commonly occurring events

Example (hypothetical)
Event: satellite drag alters LEO orbits
Threshold: Ap > 100
Cost (of preparing): $4.5K
Loss (of reacting): $10K

Forecast YES, Observed YES: Hit, $4.5K
Forecast YES, Observed NO: False alarm, $4.5K
Forecast NO, Observed YES: Miss, $10K
Forecast NO, Observed NO: Correct rejection, $0K
EF Vision 2020
• AFWA: global mesoscale ensemble (runs/cycle: O(10); resolution: O(10 km); length: 10 days) plus microscale ensembles (runs/cycle: O(10); resolution: O(100 m); length: 24 hours)
• FNMOC: global mesoscale ensemble (runs/cycle: O(10); resolution: O(10 km); length: 10 days) plus microscale ensembles (runs/cycle: O(10); resolution: O(100 m); length: 24 hours)
• United global mesoscale ensemble: runs/cycle: O(100); resolution: O(10 km); length: 10 days
• Coalition weather centers (MSC, JMA, ABM, etc.): global mesoscale ensembles, e.g., a global mesoscale ensemble (runs/cycle: O(10); resolution: O(10 km); length: 15 days) with a microscale ensemble (runs/cycle: O(10); resolution: O(100 m); length: 2 days)