452 NWP 2015

Major Steps in the Forecast Process
• Data Collection
• Quality Control
• Data Assimilation
• Model Integration
• Post-Processing of Model Forecasts
• Human Interpretation (sometimes)
• Product and graphics generation
Data Collection
• Weather is observed throughout the world and the
data is distributed in real time.
• Many types of data and networks, including:
  – Surface observations from many sources
  – Radiosondes and radar profilers
  – Fixed and drifting buoys
  – Ship observations
  – Aircraft observations
  – Satellite soundings
  – Cloud and water vapor track winds
  – Radar and satellite imagery
Observation and Data Collection
Weather Satellites Are Now 99%
of the Data Assets Used for NWP
• Geostationary Satellites: Imagery,
soundings, cloud and water vapor winds
• Polar Orbiter Satellites: Imagery,
soundings, many wavelengths
• RO (GPS) satellites
• Scatterometers
• Active radars in space (GPM)
Quality Control
• Automated algorithms and manual intervention to
detect, correct, and remove errors in observed
data.
• Examples:
– Range check
– Buddy check
– Comparison to first guess fields from previous
model run
– Hydrostatic and vertical consistency checks for
soundings.
• A very important issue for a forecaster--sometimes good data are rejected, and sometimes bad data are accepted.
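To make the range and buddy checks concrete, here is a minimal sketch in Python; the temperature limits, the 10 degC buddy tolerance, and the station layout are illustrative assumptions rather than operational settings.

```python
import numpy as np

def range_check(temps_c, lo=-90.0, hi=60.0):
    """Flag temperatures outside a physically plausible range (illustrative limits)."""
    return (temps_c < lo) | (temps_c > hi)

def buddy_check(temps_c, neighbor_lists, max_diff=10.0):
    """Flag observations differing from the mean of their neighbors by more
    than max_diff (an illustrative tolerance)."""
    flags = np.zeros(len(temps_c), dtype=bool)
    for i, neighbors in enumerate(neighbor_lists):
        if neighbors:
            buddy_mean = np.mean([temps_c[j] for j in neighbors])
            flags[i] = abs(temps_c[i] - buddy_mean) > max_diff
    return flags

# Example: five stations; station 3 reports an unphysical value and
# station 4 disagrees sharply with its neighbors.
temps = np.array([12.1, 11.8, 12.4, 250.0, 25.0])
neighbors = [[1, 2], [0, 2], [0, 1], [2, 4], [2, 3]]
print(range_check(temps))             # flags station 3
print(buddy_check(temps, neighbors))  # flags stations 3 and 4
```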
3 March 1999: Forecast a snowstorm … got a windstorm instead
(Figure: Eta 48-hr SLP forecast valid 00 UTC 3 March 1999)
(Figure: Pacific analysis at 4 PM, 18 November 2003, with a bad observation highlighted)
Forecaster Involvement
• A good forecaster is on the lookout for quality control going wrong in NWP systems--good data rejected or bad data accepted--particularly in data-sparse areas.
• Quality-control failures can allow models to go off to never-never land.
• This is less of a problem today because satellite data are available nearly everywhere.
Objective Analysis/Data
Assimilation
• Numerical weather models
are generally solved on a
three-dimensional grid
• Observations are scattered in
three dimensions
Need to interpolate observations to grid points and to ensure that the various fields are consistent and physically plausible (e.g., most of the atmosphere is in hydrostatic and gradient wind balance).
Objective Analysis
• Interpolation of observational data to either
a grid (most often!) or some basis function
(e.g., spectral components)
• Typically iterative (done in several passes)
• Typically starts with first guess (short-term
forecast)
Objective Analysis/Data Assimilation
• Often starts with a “first guess”, often the gridded
forecast from an earlier run (frequently a run
starting 6 hr earlier)
• This first guess is then modified by the
observations.
• Adjustments are made to ensure proper balance.
• Objective Analysis/Data Assimilation produces
what is known as the model initialization, the
starting point of the numerical simulation.
An early objective analysis
scheme is the Cressman scheme
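As a minimal sketch of what a Cressman-style pass looks like (a 1-D Python example; the grid, observations, and radius of influence R are illustrative assumptions): each grid point is nudged toward nearby observation increments with weight w = (R² − r²)/(R² + r²).

```python
import numpy as np

def cressman_pass(grid_x, first_guess, obs_x, obs_val, R=200.0):
    """One Cressman correction pass: adjust first-guess grid values toward
    nearby observations with weight w = (R^2 - r^2)/(R^2 + r^2)."""
    analysis = first_guess.copy()
    for i, xg in enumerate(grid_x):
        r = np.abs(obs_x - xg)
        inside = r < R
        if not np.any(inside):
            continue
        w = (R**2 - r[inside]**2) / (R**2 + r[inside]**2)
        increments = obs_val[inside] - np.interp(obs_x[inside], grid_x, first_guess)
        analysis[i] += np.sum(w * increments) / np.sum(w)
    return analysis

# Example: a flat 10 degC first guess nudged by two warmer observations.
grid_x = np.arange(0.0, 1000.0, 50.0)       # grid points (km)
first_guess = np.full(grid_x.shape, 10.0)   # degC
obs_x = np.array([220.0, 640.0])
obs_val = np.array([14.0, 12.0])
print(cressman_pass(grid_x, first_guess, obs_x, obs_val))
```

In practice the scheme is iterated over several passes with a shrinking radius of influence.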
3DVAR: 3D Variational Data
Assimilation
• Used by the National Weather Service
today for the GFS and NAM
• Tries to create an analysis that minimizes a
cost function dependent on the difference
between the analysis and (1) first guess and
(2) observations
• Does this at a single time.
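Schematically, the 3DVAR cost function is J(x) = ½(x − x_b)ᵀB⁻¹(x − x_b) + ½(y − Hx)ᵀR⁻¹(y − Hx), where x_b is the first guess, B and R are the background- and observation-error covariances, and H maps the model state to observation space. Below is a minimal sketch with tiny illustrative matrices (the numbers are placeholders, not operational values).

```python
import numpy as np

def threedvar_cost(x, x_b, B, y, H, R):
    """3DVAR cost: background term plus observation term, at a single time."""
    db = x - x_b
    do = y - H @ x
    return 0.5 * db @ np.linalg.solve(B, db) + 0.5 * do @ np.linalg.solve(R, do)

# Tiny 2-variable example: first guess x_b, one observation of variable 1 only.
x_b = np.array([285.0, 290.0])              # background (first guess), e.g. temperatures (K)
B = np.array([[1.0, 0.5], [0.5, 1.0]])      # background-error covariance (illustrative)
H = np.array([[1.0, 0.0]])                  # observation operator: observe variable 1 only
y = np.array([287.0])                       # observation
R = np.array([[0.5]])                       # observation-error covariance (illustrative)

# Crude minimization by scanning candidate analyses around the first guess.
candidates = [x_b + np.array([dx, dy]) for dx in np.linspace(-3, 3, 61)
                                        for dy in np.linspace(-3, 3, 61)]
best = min(candidates, key=lambda xc: threedvar_cost(xc, x_b, B, y, H, R))
print(best)  # analysis pulled toward the observation; variable 2 adjusted via B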
3DVAR Covariances
4DVAR: Four-Dimensional Variational Data Assimilation
• Tries to optimize analyses at MULTIPLE TIMES
• Uses the model itself as a data assimilation tool.
Many of the next generation data
assimilation approaches are
ensemble based
• Example: the Ensemble Kalman Filter
(EnKF)
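As a minimal sketch of the idea (a perturbed-observation EnKF update for one scalar observation; the ensemble size, error values, and synthetic background are illustrative assumptions): the gain is built from the ensemble's own sample covariances, so variables correlated with the observed one get adjusted even where no observation exists.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, obs_err_var, obs_index):
    """Perturbed-observation EnKF update for one scalar observation.
    ensemble: (n_members, n_vars) array of background states."""
    hx = ensemble[:, obs_index]                       # ensemble mapped to observation space
    anomalies = ensemble - ensemble.mean(axis=0)
    cov_x_hx = anomalies.T @ (hx - hx.mean()) / (len(ensemble) - 1)
    gain = cov_x_hx / (hx.var(ddof=1) + obs_err_var)  # Kalman gain from sample covariances
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_err_var), size=len(ensemble))
    return ensemble + np.outer(perturbed_obs - hx, gain)

# 20-member, 3-variable example: observe variable 0 only. Variable 1 is
# correlated with variable 0 in the ensemble, so it is nudged too;
# variable 2 is uncorrelated and is left nearly untouched.
base = rng.normal(0.0, 1.0, size=20)
background = np.column_stack([285.0 + base,
                              290.0 + 0.8 * base + rng.normal(0.0, 0.5, size=20),
                              1000.0 + rng.normal(0.0, 2.0, size=20)])
analysis = enkf_update(background, obs=287.0, obs_err_var=0.25, obs_index=0)
print(background.mean(axis=0))
print(analysis.mean(axis=0))
```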
An Attractive Option: EnKF
(Figure: a single temperature observation analyzed with 3DVAR versus the EnKF)
Mesoscale Covariances
(Figure: ensemble-based covariances at 12 UTC 24 January 2004: the |V950|-qr covariance near the Camano Island radar, and surface pressure covariances over land and ocean)
Hybrid Data Assimilation: Now
Used in GFS
• Uses both 3DVAR and the EnKF
• Uses EnKF covariances from the GFS ensemble in 3DVAR.
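The essence of the hybrid approach is to blend the static background-error covariance used in 3DVAR with a flow-dependent covariance estimated from the ensemble. Below is a minimal sketch; the weights and matrices are illustrative, not the operational GFS settings, and real systems also apply covariance localization.

```python
import numpy as np

def hybrid_covariance(B_static, ensemble, beta_static=0.25, beta_ens=0.75):
    """Blend a static background-error covariance with an ensemble-estimated one.
    Weights are illustrative; operational systems also localize the ensemble term."""
    B_ens = np.cov(ensemble, rowvar=False)   # sample covariance from (n_members x n_vars) ensemble
    return beta_static * B_static + beta_ens * B_ens

# Example with a 2-variable state and a 30-member ensemble.
rng = np.random.default_rng(1)
B_static = np.array([[1.0, 0.2], [0.2, 1.0]])
ensemble = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=30)
print(hybrid_covariance(B_static, ensemble))
```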
Next Advance: EnVar
• Use temporal covariances to spread impact
of observations over TIME.
Model Integration: Numerical
Weather Prediction
• The initialization is used as the starting
point for the atmospheric simulation.
• Numerical models consist of the basic
dynamical equations (“primitive equations”)
and physical parameterizations.
“Primitive” Equations
• 3 Equations of Motion: Newton’s Second Law
• First Law of Thermodynamics
• Conservation of mass
• Perfect Gas Law
• Conservation of water
With sufficient data for initialization and a means to integrate these equations, numerical weather prediction is possible.
Example: Newton’s Second Law: F = ma
One Form
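The equation image from this slide is not reproduced here. As an illustration (not necessarily the exact form shown), Newton's second law per unit mass for the east-west wind component can be written

$$\frac{du}{dt} = -\frac{1}{\rho}\frac{\partial p}{\partial x} + f v + F_x,$$

i.e., acceleration = pressure gradient force + Coriolis term + friction, where u and v are the wind components, ρ the density, p the pressure, and f the Coriolis parameter.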
Physics Parameterizations
• We need physics parameterizations to
include key physical processes.
• Examples include radiation, cumulus
convection, cloud microphysics, boundary
layer physics, etc.
• Why? The primitive equations either lack the necessary physics or lack sufficient resolution to resolve key processes.
Parameterization
• Example: Cumulus Parameterization
• Most numerical models (12-km grid spacing is the best available operationally) cannot resolve convection (scales of a few km or less).
• In parameterization, represent the effects of
sub-grid scale cumulus on the larger scales.
Numerical Weather Prediction
• A numerical model includes the primitive
equations, physics parameterization, and a way to
solve the equations (usually using finite
differences on a grid)
• Make use of powerful computers
• Keep in mind that a model with a given horizontal grid spacing is barely simulating phenomena with a scale of four times that grid spacing. So a 12-km model is barely getting 50-km-scale features correct.
Numerical Weather Prediction
• Most modeling systems are run four times a day
(00, 06, 12, 18 UTC), although some run twice a
day (00 and 12 UTC)
• The main numerical modeling centers in the U.S.
are:
– Environmental Modeling Center (EMC) at the National
Centers for Environmental Prediction (NCEP)--part of
the NWS. Located near Washington, DC.
– Fleet Numerical Meteorology and Oceanography
Center (FNMOC)-Monterey, CA
– Air Force Weather Agency (AFWA)-Offutt AFB,
Nebraska
Major U.S. Models
• Global Forecast System Model (GFS). Uses a spectral representation rather than grids in the horizontal. Global, with resolution equivalent to a 35-km grid model. Run out to 384 hr, four times per day.
• Weather Research and Forecasting Model (WRF). Two versions: WRF-NMM and WRF-ARW (different ways of representing the dynamics). WRF is a new mesoscale modeling system that is used by the NWS and the university/research community. AFWA is also moving to WRF. The NWS runs WRF-NMM at 12-km grid spacing, four times a day out to 84 h.
Major U.S. Models
• MM5 (Penn. State/NCAR Mesoscale Model
Version 5). Has been the dominant model in the
research community. Run here at the UW (36, 12
and 4 km resolution).
• COAMPS (Navy). The Navy mesoscale model, similar to MM5.
• There are many others--you will hear more about
this in 452.
• Forecasters often have 6-10 different models to
look at. Such diversity can provide valuable
information.
Major International NWP Centers
• ECMWF: European Centre for Medium-Range Weather Forecasts. The gold standard. Their global model is considered the best.
• UK Met Office: An excellent global model
similar to GFS
• Canadian Meteorological Center
• Other lesser centers
Accessing NWP Models
• The department web site (go to weather loops or
weather discussion) provides easy access to many
model forecasts.
• The NCEP web site is a good place to start for NWS models.
http://www.nco.ncep.noaa.gov/pmb/nwprod/analysis/
• The Department Regional Prediction Page provides access to the department's regional modeling output.
http://www.atmos.washington.edu/mm5rt/
A Palette of Models
• Forecasters thus have a palette of model forecasts.
• They vary by:
  – Region simulated
  – Resolution
  – Model physics
  – Data used in the assimilation/initialization process
• The diversity of models can be a very useful tool
to a forecaster.
Post-Processing
• Numerical model output sometimes has systematic biases (e.g., too warm or too cold in certain situations). Why not remove them?
• Numerical models may not have the resolution or physics to deal with certain problems (e.g., low-level fog in a valley). Some information can be derived from historical model performance.
• The solution: post-processing of model forecasts.
MOS
• In the 1960s and 1970s, the NWS developed and
began using statistical post-processing of model
output…known as Model Output Statistics…MOS
• Based on linear regression: Y = a0 + a1X1 + a2X2 + a3X3 + … (a minimal sketch follows below)
• MOS is available for many parameters and greatly
improves the quality of most model predictions.
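Here is a minimal sketch of a MOS-style regression in Python, fitting observed maximum temperature to a few model-forecast predictors; the predictor choices and the tiny synthetic training set are illustrative assumptions, not an operational MOS equation.

```python
import numpy as np

# Illustrative training data: rows are past forecast cases, columns are
# model-forecast predictors (e.g., 850-hPa temperature, surface temperature, cloud cover).
X = np.array([[12.0, 15.0, 0.2],
              [ 8.0, 11.0, 0.8],
              [14.0, 18.0, 0.1],
              [10.0, 13.0, 0.6],
              [ 6.0,  9.0, 0.9]])
y = np.array([17.0, 11.5, 20.0, 13.5, 9.0])   # observed max temperature for each case

# Fit Y = a0 + a1*X1 + a2*X2 + a3*X3 by least squares.
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Apply the regression equation to a new model forecast.
new_forecast = np.array([1.0, 11.0, 14.0, 0.4])
print(coeffs, new_forecast @ coeffs)
```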
Post-Processing
• There are other types of post-processing.
• Here at the UW we have developed a way of
removing systematic bias.
• Others have used “neural nets” as an approach.
• Another approach is to combine several models, weighting them by previous performance (called Bayesian Model Averaging).
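As one hedged illustration of systematic-bias removal (a generic running-mean error correction, not the specific UW scheme): subtract the average of the model's recent forecast errors at a station from the new forecast.

```python
import numpy as np

def bias_corrected_forecast(new_forecast, past_forecasts, past_observations, window=14):
    """Subtract the mean forecast error over the last `window` cases.
    A generic sketch of bias removal, not any center's operational method."""
    errors = np.asarray(past_forecasts[-window:]) - np.asarray(past_observations[-window:])
    return new_forecast - errors.mean()

# Example: a model that has been running about 1.5 degC too warm at a station.
past_fcst = [15.2, 14.8, 16.1, 13.9, 15.5]
past_obs  = [13.8, 13.2, 14.7, 12.5, 13.9]
print(bias_corrected_forecast(16.0, past_fcst, past_obs))   # roughly 14.5
```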
Ensemble Forecasting
• All of the model forecasts I have talked about
reflect a deterministic approach.
• This means that we do the best job we can for a
single forecast and do not consider uncertainties in
the model, initial conditions, or the very nature of
the atmosphere. These uncertainties are often very
significant.
• Traditionally, this has been the way forecasting
has been done, but that is changing now.
A More Fundamental Issue
• The work of Lorenz (1963, 1965, 1968)
demonstrated that the atmosphere is a
chaotic system, in which small
differences in the initialization…well
within observational error… can have
large impacts on the forecasts,
particularly for longer forecasts.
• Similarly, uncertainty in model physics can result in large forecast differences and errors.
• Not unlike a pinball game….
• Often referred to as the “butterfly
effect”
Probabilistic-Ensemble NWP
• Instead of running one forecast, run a collection
(ensemble) of forecasts, each starting from a
different initial state or with different physics.
• The variations in the resulting forecasts could be
used to estimate the uncertainty of the prediction.
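A minimal sketch of how those forecast variations translate into a probability product: count the fraction of members at or above a threshold (the member values and the 40-kt threshold are illustrative).

```python
import numpy as np

def exceedance_probability(member_forecasts, threshold):
    """Fraction of ensemble members forecasting a value at or above the threshold."""
    members = np.asarray(member_forecasts)
    return np.mean(members >= threshold)

# Example: 13 members' forecast wind gusts (kt) at a station; probability of >= 40 kt.
gusts = [28, 35, 42, 31, 45, 38, 50, 33, 41, 29, 36, 44, 39]
print(exceedance_probability(gusts, 40.0))   # 5 of 13 members, about 0.38
```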
Ensemble Prediction
• Can use ensembles to provide a new generation of products that give the probabilities that some weather feature will occur.
• Can also predict forecast skill!
• It appears that when forecasts are similar, forecast skill is higher.
• When forecasts differ greatly, forecast skill is less.
Ensemble Prediction
• During the past decade the size and sophistication of the
NCEP and ECMWF ensemble systems have grown
considerably, with the medium-range, global ensemble
system becoming an integral tool for many forecasters.
• Also during this period, NCEP has constructed a higher
resolution, short-range ensemble system (SREF) that uses
breeding to create initial condition variations.
The Thanksgiving Forecast 2001
(Figure: 42-h SLP and wind forecasts valid 10 AM Thursday from 13 ensemble members--cent, eta, ngps, ukmo, tcwb, cmcg, avn, and their perturbed counterparts--plus the verification.)
• Reveals high uncertainty in storm track and intensity
• Indicates low probability of a Puget Sound wind event
Human Interpretation
• Once all the numerical simulations and
post-processing are done, humans still play
an important role:
– Evaluating the model output
– Making adjustments if needed
– Attempting to consider features the model can’t
handle
– Communicating to the public and other users.
Product Generation
• Some products are completely objective and automated.
• Others depend on human intervention.
• Example: the National Weather Service
IFPS system
Interactive Forecast Preparation System (IFPS) and
National Digital Forecast Database (NDFD)
The Forecast Process
• Step 1: What is climatology for the
location in question?
What are the record and average maxima and minima? You
always need very good reasons to equal or break records.
• Step 2: Acquaint yourself with the
weather evolution of the past several days.
How has the circulation evolved? Why did
past forecasts go wrong or right?
• Step 3: The Forecast Funnel.
Start with the synoptic scale and then downscale
to the meso and local scales. Major steps:
I. Synoptic Model Evaluation
Which synoptic models have been the most skillful during
the past season and last few days?
Has there been a trend in model solutions?
Have they been stable?
Are all the model solutions on the same page? If so, you can have more confidence in your forecast.
Evaluate synoptic ensemble forecasts. Is the spread of the solutions large or small?
Which model appears to be most skillful today, based on initializations and short-term (6-12 h) forecasts?
Satellite imagery and surface data are crucial for this latter step.
II. Decide on the synoptic evolution you believe to be most
probable. Attempt to compensate for apparent flaws in the best
model.
III: Downscaling to the mesoscale. What mesoscale evolution will
accompany the most probable synoptic evolution?
This is done in a variety of ways:
a. Subjective rules and experience: e.g., the PSCZ occurs when the winds on the WA coast are from the W to NW; an onshore push occurs when the HQM-SEA pressure difference gets to 3.5 mb. Knowledge of these rules is a major component of forecast experience.
(Figure: typical diurnal wind fields in the summer)
b. High-resolution mesoscale modeling: e.g., MM5. Clearly becoming more and more important.
c. Model Output Statistics (MOS, for some fields)
IV. Downscaling to the microscale for point
forecasts.
Subjective approach using knowledge of terrain and
other local characteristics.
For subjective forecasts, remember the DT approach: it is nearly impossible to forecast a parameter value from first principles--so consider what has changed.
STEP 4. The Homestretch
• Combine the most probable synoptic, mesoscale, and
microscale evolution in your mind to produce a predicted
scenario
• Attempt to qualify the uncertainty in the forecast. Synoptic
and mesoscale (SREF) ensemble systems are becoming
increasingly important for this task.
• Ask yourself: am I missing something? Am I being objective? Overcompensating for a previous error? Check forecast discussions from other forecasters to ensure you are not missing something.
Today’s Forecast