Model Output Statistics (MOS) - Objective Interpretation of NWP Model Output

University of Maryland – March 9, 2016
Mark S. Antolik
Meteorological Development Laboratory
Statistical Modeling Branch
NOAA/National Weather Service
Silver Spring, MD
(301) 427-9480
email: mark.antolik@noaa.gov
MOS Operational System “Fun Facts”
Did you know?
The NWS MOS system has…
● 12 M regression equations
● 100 M forecasts per day
● 1600 products sent daily
● 400,000 lines of code – mostly FORTRAN
● 300 min. supercomputer time daily
● All developed and maintained by ~17 MDL / SMB meteorologists!
OUTLINE
1. Why objective statistical guidance?
2. What is MOS?
Definition and characteristics
The “traditional” MOS product suite (GFS, NAM)
Other additions to the lineup
3. Simple regression examples / REEP
4. Development strategy – MOS in the “real world”
5. MOS on a grid
6. Verification
7. Dealing with NWP model changes
8. Where we’re going – The future of MOS
WHY STATISTICAL GUIDANCE?
● Add value to direct NWP model output
Objectively interpret model
- remove systematic biases
- quantify uncertainty
Predict what the model does not
Produce site-specific forecasts
(i.e. a “downscaling” technique)
● Assist forecasters
“First Guess” for expected local conditions
“Built-in” model/climo memory for new staff
A SIMPLE STATISTICAL MODEL
[Chart: observed relative frequency of precipitation (0.0-1.0) as a function of 12-24 hour model-forecast mean RH (0-100%); 3-yr sample, 200 stations, 1987-1990 cool seasons. Relative frequency rises with model mean RH; one highlighted bin shows 47%.]
MOS Max Temp vs. Direct Model Output
[Chart: MRF MOS max temp MAE (°F), 1999 warm season, CONUS/AK, projections 24-192 hours, with curves for CLIMO, DMO, and MOS]
What is MOS?
MODEL OUTPUT STATISTICS (MOS)
Relates observed weather elements (PREDICTANDS)
to appropriate variables (PREDICTORS)
via a statistical approach
Predictors are obtained from:
1. Numerical Weather Prediction (NWP) Model
Forecasts
2. Prior Surface Weather Observations
3. Geoclimatic Information
Current Statistical Method:
MULTIPLE LINEAR REGRESSION
(Forward Selection)
MODEL OUTPUT STATISTICS (MOS)
Properties
● Mathematically simple, yet powerful
● Need historical record of observations
at forecast points
(Hopefully a long, stable one!)
● Equations are applied to future run of
similar forecast model
MODEL OUTPUT STATISTICS (MOS)
Properties (cont.)
● Non-linearity can be modeled by using
NWP variables and transformations
● Probability forecasts possible from a
single run of NWP model
● Other statistical methods can be used
e.g. Polynomial or logistic regression;
Neural networks
MODEL OUTPUT STATISTICS (MOS)
● ADVANTAGES
Removal of some systematic model bias
Specific element and site forecasts
Optimal predictor selection
Recognition of model predictability
Reliable probabilities
● DISADVANTAGES
Short samples
Changing NWP models
Availability & quality of observations
MAJOR CHALLENGE TO MOS DEVELOPMENT:
RAPIDLY EVOLVING NWP MODELS AND
OBSERVATION PLATFORMS
Can make for:
1. SHORT, UNREPRESENTATIVE DATA SAMPLES
2. DIFFICULT COLLECTION OF APPROPRIATE PREDICTAND DATA
New observing systems (ASOS, WSR-88D, Satellite);
same “old” predictands (Co-Op, Mesonets):
The elements don’t change!
“Traditional” MOS
text products
GFS MOS GUIDANCE MESSAGE
FOUS21-26 (MAV)
[Sample MAV bulletin: KCGS (College Park Airport) GFS MOS guidance, 1200 UTC 3/03/2015, with rows N/X, TMP, DPT, CLD, WDR, WSP, P06, P12, Q06, Q12, T06, T12, POZ, POS, TYP, SNW, CIG, VIS, and OBV at 3-h projections through 84 hours]
NAM MOS (MET) v3.1 UPDATE (1/13/2016)
Content of NAM MOS bulletins now matches that of the GFS MOS
(sample bulletin annotated: some rows NEW for AK sites, others NEW for all sites)
Short-range (GFS / NAM) MOS
● STATIONS:
Now at 2563 Forecast Sites (GFS; 1990 NAM)
(CONUS, AK, HI, PR, CA)
[Map: approx. 2,560 MOS stations]
Short-range (GFS / NAM) MOS
● STATIONS:
Now at 2563 Forecast Sites (GFS; 1990 NAM)
(CONUS, AK, HI, PR, CA)
● FORECASTS:
Available at projections of 6-84 hours
GFS available for 0600 and 1800 UTC cycles
● RESOLUTION:
GFS predictors on 47.63 km grid; NAM on 33.81 km
Predictor fields available at 3-h timesteps
● DEPENDENT SAMPLE NOT “IDEAL”:
Non-static underlying NWP model
Limited data from current model versions
Fewer seasons than older MOS systems
GFSX MOS GUIDANCE MESSAGE
FEUS21-26 (MEX)
[Sample MEX bulletin: KBWI GFSX MOS guidance, 1200 UTC 3/03/2015, with rows N/X, TMP, DPT, CLD, WND, P12, P24, Q12, Q24, T12, T24, PZP, PSN, PRS, TYP, and SNW at 12-h projections from 24 to 192 hours, plus CLIMO values]
MOS station-oriented products:
Other additions
Marine MOS
[Sample bulletin: buoy 44004 GFS MOS guidance, 1200 UTC 11/22/2005, with rows TMP, WD, WS, and WS10 at 3-h projections]
[Map: marine MOS sites and standard MOS sites]
Max/Min Guidance for Co-op Sites
GFS-BASED MOS COOP MAX/MIN GUIDANCE
[Sample bulletin, 1200 UTC 2/20/15: min/max temperature pairs for SAT 21 / SUN 22 / MON 23 at Maryland co-op sites such as BTVM2 (Beltsville), LRLM2 (Laurel 3 W), and OXNM2 (Oxon Hill); 999 entries denote missing guidance]
Western Pacific MOS Guidance
[Sample bulletin: NSTU GFS MOS guidance, 1200 UTC 11/19/2013, with rows TMP, DPT, CLD, WDR, WSP, P06, P12, and CIG; the accompanying map highlights sites including Saipan, Midway, and Wake]
MDL’s Experimental ECMWF MOS
(NOAA Eyes Only: Text and Meteogram)
[Meteograms: 7/23/13 0000 UTC KDCA ECMWF / GFS comparison — MaxT/MinT and PoP12]
Application of Linear Regression
to MOS Development
MOS LINEAR REGRESSION
JANUARY 1 - JANUARY 30, 1994 0000 UTC, KCMH
[Scatterplot: today's max temp (°F, roughly -10 to 60) vs. 18-h NWP model 850-1000 mb thickness (1150-1350 m)]
MOS LINEAR REGRESSION
JANUARY 1 - JANUARY 30, 1994 0000 UTC, KCMH
[Same scatterplot with the fitted regression line]
MAX T = -352 + (0.3 x 850-1000 mb THK)
RV = 93.1%
REDUCTION OF VARIANCE
A measure of the “goodness” of fit and
predictor / predictand correlation:

RV = (Total Variance − Unexplained Variance) / Total Variance

i.e., the fraction of the predictand's variance about its mean that the regression explains (equivalent to R²).
[Diagram: predictand vs. predictor scatter showing the predictand mean, the fitted line, and the unexplained variance about the line]
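To make this concrete, here is a minimal sketch (Python/NumPy, with synthetic data standing in for the KCMH January 1994 sample) of fitting a one-predictor equation like the one on the previous slide and computing its reduction of variance:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the KCMH January 1994 sample:
# 18-h forecast 850-1000 mb thickness (m) and observed max temp (F).
thickness = rng.uniform(1150.0, 1350.0, size=30)
max_temp = -352.0 + 0.3 * thickness + rng.normal(0.0, 3.0, size=30)

# Least-squares fit of MAX T = a + b * THICKNESS.
b, a = np.polyfit(thickness, max_temp, deg=1)

# Reduction of variance: fraction of predictand variance
# explained by the regression (equivalent to R-squared).
fitted = a + b * thickness
sse = np.sum((max_temp - fitted) ** 2)            # unexplained variance
sst = np.sum((max_temp - max_temp.mean()) ** 2)   # total variance
rv = 1.0 - sse / sst

print(f"MAX T = {a:.1f} + ({b:.3f} x THICKNESS),  RV = {100 * rv:.1f}%")
```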
MOS LINEAR REGRESSION
JANUARY 1 - JANUARY 30, 1994 0000 UTC, KUIL
[Scatterplot: today's max temp (°F) vs. 18-h NWP model 850-1000 mb thickness (1250-1400 m), RV = 26.8%]
Same predictor,
Different site,
Different relationship!
MOS LINEAR REGRESSION
DECEMBER 1 1993 - MARCH 5 1994 0000 UTC, KCMH
12-24 H PRECIPITATION ≥ .01"
[Scatterplot: binary predictand (0/1) vs. avg. 12-24 h NWP model ~1000-500 mb RH (0-100%)]
MOS LINEAR REGRESSION
DECEMBER 1 1993 - MARCH 5 1994 0000 UTC, KCMH
12-24 H PRECIPITATION ≥ .01"
[Same scatterplot with a single linear fit, RV = 36.5%]
MOS LINEAR REGRESSION
DECEMBER 1 1993 - MARCH 5 1994 0000 UTC, KCMH
12-24 H PRECIPITATION ≥ .01"
[Same scatterplot with two fits: RV = 36.5% and RV = 42.4%]
MOS LINEAR REGRESSION
DECEMBER 1 1993 - MARCH 5 1994 0000 UTC, KCMH
12-24 H PRECIPITATION ≥ .01"
[Same scatterplot with three fits: RV = 36.5%, RV = 42.4%, RV = 44.9%; the best fit is the two-term equation]
POP = -0.234 + (0.007 X MRH) +
(0.478 X BINARY MRH (70%))
EXAMPLE REGRESSION EQUATIONS
Y = a + bX
CMH MAX TEMPERATURE EQUATION
MAX T = -352 + (0.3 x 850-1000 mb THICKNESS)
CMH PROBABILITY OF PRECIPITATION EQUATION
POP = -0.234 + (0.007 x MEAN RH)
+ (0.478 x BINARY MEAN RH CUTOFF AT 70%)*
* (IF MRH ≥ 70%, BINARY MRH = 1; else BINARY MRH = 0)
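The binary transform is easy to apply in code. A small sketch (Python/NumPy; coefficients taken from the CMH PoP equation on this slide):

```python
import numpy as np

def pop_cmh(mean_rh):
    """Evaluate the example KCMH PoP equation from the slide:
    POP = -0.234 + 0.007 * MRH + 0.478 * BINARY(MRH >= 70%)."""
    mean_rh = np.asarray(mean_rh, dtype=float)
    binary = (mean_rh >= 70.0).astype(float)   # 1 if MRH >= 70%, else 0
    pop = -0.234 + 0.007 * mean_rh + 0.478 * binary
    # Regression estimates can stray outside [0, 1]; clip for display.
    return np.clip(pop, 0.0, 1.0)

# PoP jumps from ~0.25 at 69% RH to ~0.73 at 70% RH.
print(pop_cmh([30, 60, 69, 70, 90]))
```

The discontinuity at exactly 70% is one motivation for the grid-binary transform shown later, which yields a smoother predictor.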
If the predictand is BINARY,
MOS regression equations produce
estimates of event PROBABILITIES...
12-24 H PRECIPITATION ≥ .01", KCMH
[Scatterplot: at one mean-RH bin, 3 events and 7 non-events; the regression line gives P = 30%, matching the observed relative frequency RF = 30%]
Making a PROBABILISTIC
statement...
Quantifies the uncertainty!
DEFINITION of PROBABILITY
(Wilks, 2006)
● The degree of belief, or quantified judgment,
about the occurrence of an uncertain event.
OR
● The long-term relative frequency of an event.
PROBABILITY FORECASTS
Some things to keep in mind
Assessment of probability is EXTREMELY dependent
upon how predictand “event” is defined:
-Time period of consideration
-Area of occurrence
-Dependent upon another event?
MOS forecasts can be:
● POINT PROBABILITIES
● AREAL PROBABILITIES
● CONDITIONAL PROBABILITIES
AREAL PROBABILITIES
3-h MOS thunderstorm probability forecasts
valid 0000 UTC 8/27/2002 (21-24 h proj)
[Maps: 40-km gridbox and 20-km gridbox probabilities, 10% contour interval]
What if these were 6-h forecasts?
PROPERTIES OF
MOS PROBABILITY FORECASTS
● Unbiased
Average forecast probability equals
long-term relative frequency of event
● Reliable
Conditionally or “Piecewise” unbiased
over entire range of forecast probabilities
● Reflect predictability of event
Range narrows and approaches event RF
as NWP model skill declines
- extreme forecast projection
- rare events
Reliable Probabilities…
Even for rare events
Reliability of 12-h PQPF > 0.25", 48-h Forecasts
Cool Seasons 05-06 and 06-07, 335 sites
[Reliability diagram: observed relative frequency vs. forecast probability (0-100%), lying close to the diagonal; inset histogram of forecast counts per probability bin falls from 61,929 cases at 0% to 65 at the highest bin; event: 12-h precip > 0.25"; sample mean: 4.7%]
Designing an
Operational MOS System:
Putting theory into practice…
DEVELOPMENTAL CONSIDERATIONS
MOS in the real world
● Selection (and QC!) of Suitable
Observational Datasets
ASOS? Remote sensor? Which mesonet?
Suitable observations? Appropriate sensor? Good siting?
Real or Memorex?
[Photo courtesy W. Shaffer]
MOS Snowfall Guidance
Uses observations from the Cooperative Observer Network
[Maps: 36-h forecast, 00Z 02/13/14 – 00Z 02/14/14, with categories < 4”, 4”-6”, 6”-8”, 8”-10”, 12”+, and a companion verification map]
DEVELOPMENTAL CONSIDERATIONS
MOS in the real world
● Selection (and QC!) of Suitable
Observational Datasets
ASOS? Remote sensor? Which mesonet?
● Predictand Definition
Must be precise !!
PREDICTAND DEFINITION
Max/Min and PoP
Daytime Maximum Temperature
“Daytime” is 7:00 AM - 7:00 PM LST *
Nighttime Minimum Temperature
“Nighttime” is 7:00 PM - 8:00 AM LST *
* CONUS – differs in AK
Probability of Precipitation
Precipitation occurrence is accumulation
of ≥ 0.01 inches of liquid-equivalent at a
gauge location within a specified period
PREDICTAND DEFINITION
GFSX 12-h Average Cloud Amount
● Determined from 13 consecutive hourly
ASOS observations, satellite augmented
● Assign a value to each METAR report:
CLR = 0; FEW = 0.15; SCT = 0.38; BKN = 0.69; OVC = 1
● Take a weighted average of the above (see the sketch below)
● Categorize:
CL < .3125 ≤ PC ≤ .6875 < OV
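A sketch of this predictand computation (Python; equal weights across the 13 hourly reports are assumed here — the operational weighting may differ):

```python
# Sky-cover values assigned to each METAR report category.
SKY_VALUE = {"CLR": 0.0, "FEW": 0.15, "SCT": 0.38, "BKN": 0.69, "OVC": 1.0}

def cloud_category(reports):
    """Average 13 hourly METAR sky covers, then categorize CL/PC/OV."""
    avg = sum(SKY_VALUE[r] for r in reports) / len(reports)
    if avg < 0.3125:
        return "CL", avg
    elif avg <= 0.6875:
        return "PC", avg
    return "OV", avg

reports = ["SCT", "BKN", "BKN", "OVC", "OVC", "OVC", "BKN",
           "SCT", "FEW", "SCT", "BKN", "BKN", "OVC"]
print(cloud_category(reports))   # -> ('PC', 0.672...)
```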
Creating a Gridded Predictand
Lightning strikes are summed over the “appropriate” time
period and assigned to the center of “appropriate” grid boxes.
[Diagram: lightning strike locations (×) scattered across a grid]
A thunderstorm is deemed to have occurred when one or
more lightning strikes are observed within a given gridbox;
all other gridboxes are counted as “no thunderstorm.”
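A minimal sketch of the gridding step (Python/NumPy; grid spacing and coordinates are illustrative, not the operational ones):

```python
import numpy as np

def thunderstorm_grid(strike_x, strike_y, x_edges, y_edges):
    """Count strikes per grid box; a box is a 'thunderstorm' box
    if it contains one or more strikes."""
    counts, _, _ = np.histogram2d(strike_x, strike_y,
                                  bins=[x_edges, y_edges])
    return counts >= 1   # boolean predictand grid

x_edges = np.arange(0.0, 201.0, 40.0)   # e.g. 40-km boxes
y_edges = np.arange(0.0, 201.0, 40.0)
rng = np.random.default_rng(1)
occurred = thunderstorm_grid(rng.uniform(0, 200, 30),
                             rng.uniform(0, 200, 30),
                             x_edges, y_edges)
print(occurred.astype(int))
```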
DEVELOPMENTAL CONSIDERATIONS
MOS in the real world
● Selection (and QC!) of Suitable
Observational Datasets
ASOS? Remote sensor? Which mesonet?
● Predictand Definition
Must be precise !!
● Choice of Predictors
“Appropriate” formulation
Binary or other transform?
“APPROPRIATE” PREDICTORS
● DESCRIBE PHYSICAL PROCESSES ASSOCIATED
WITH OCCURRENCE OF PREDICTAND
i.e. for POP:
PRECIPITABLE WATER
VERTICAL VELOCITY
MOISTURE DIVERGENCE
MODEL PRECIPITATION
(but not, e.g., 1000-500 MB THK or TROPOPAUSE HGT)
● “MIMIC” FORECASTER THOUGHT PROCESS
(VERTICAL VELOCITY) X (MEAN RH)
POINT BINARY PREDICTOR
24-H MEAN RH, CUTOFF = 70%
INTERPOLATE TO THE STATION FIRST; THEN IF STATION RH ≥ 70%, BINARY = 1; BINARY = 0 OTHERWISE
[Grid of model mean RH values (54-96%) surrounding KCMH; the interpolated station value is 71%]
RH ≥ 70%, so BINARY AT KCMH = 1
GRID BINARY PREDICTOR
24-H MEAN RH, CUTOFF = 70%
SET GRIDPOINT = 1 WHERE RH ≥ 70% (0 OTHERWISE); THEN INTERPOLATE
[Grid of 0/1 values surrounding KCMH; the interpolated station value is 0.21]
0 ≤ VALUE AT KCMH ≤ 1
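The contrast between the two transforms is easiest to see in code. A sketch (Python/NumPy; the RH grid and station location are illustrative, and a simple bilinear interpolator stands in for the operational one):

```python
import numpy as np

def bilinear(field, x, y):
    """Bilinear interpolation on a unit-spaced grid (illustrative)."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * field[y0, x0] +
            dx * (1 - dy) * field[y0, x0 + 1] +
            (1 - dx) * dy * field[y0 + 1, x0] +
            dx * dy * field[y0 + 1, x0 + 1])

rh = np.array([[96., 86., 89., 94.],
               [87., 73., 76., 90.],
               [76., 60., 69., 92.],
               [64., 54., 68., 93.]])   # illustrative gridded mean RH (%)
x, y = 1.4, 1.3                          # station location in grid units

# Point binary: interpolate RH to the station, then apply the cutoff.
point_binary = 1.0 if bilinear(rh, x, y) >= 70.0 else 0.0

# Grid binary: apply the cutoff at gridpoints first, then interpolate
# the 0/1 field, giving a continuous value between 0 and 1.
grid_binary = bilinear((rh >= 70.0).astype(float), x, y)

print(point_binary, round(grid_binary, 2))   # -> 1.0 0.7
```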
Logit Transform Example
KPIA (Peoria, IL) 0000 UTC; 18-h projection
[Scatterplot: probability of frozen precipitation (POZ) vs. 850-mb temperature (250-290 K), with two fits:
Linear: POZ = 12.5 − 0.0446 (850 T), RV = 0.6136
Logit-transformed predictor: POZ = −0.01 + 0.9444 (LOGIT TRAN (850 T)), where LOGIT TRAN = 1 / (1 + e^−(a + bx)); RV = 0.7209]
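A sketch of the transform (Python/NumPy; the inner transform coefficients a and b are illustrative placeholders — operationally they are fitted to the dependent sample — while the outer coefficients are the KPIA values from the slide):

```python
import numpy as np

def logistic(t850, a, b):
    """Logit-transformed predictor: 1 / (1 + exp(-(a + b * T850)))."""
    return 1.0 / (1.0 + np.exp(-(a + b * t850)))

# Illustrative transform coefficients, chosen only to center the
# S-curve near the 850-mb freezing zone (~270 K).
a, b = 135.0, -0.5

def poz(t850):
    """Linear regression on the transformed predictor (KPIA slide)."""
    return -0.01 + 0.9444 * logistic(t850, a, b)

for t in [255.0, 265.0, 270.0, 275.0, 285.0]:
    print(t, round(float(poz(t)), 3))
```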
DEVELOPMENTAL CONSIDERATIONS
(cont.)
● Terms in Equations; Selection Criteria
“REAL” REGRESSION EQUATIONS
MOS regression equations are MULTIVARIATE, of the form:

Y = a0 + a1X1 + a2X2 + ... + aNXN

where the “a’s” represent COEFFICIENTS
and the “X’s” represent PREDICTOR variables.
The maximum number of terms, N, can be QUITE large:
For GFS QPF, N = 15
For GFS VIS, N = 20
The FORWARD SELECTION procedure determines the
predictors and the order in which they appear.
FORWARD SELECTION
● METHOD OF PREDICTOR SELECTION ACCORDING TO CORRELATION WITH PREDICTAND
● “BEST” OR STATISTICALLY MOST IMPORTANT PREDICTORS CHOSEN FIRST
● FIRST predictor selected accounts for greatest reduction of variance (RV)
● Subsequent predictors chosen that give greatest RV in conjunction with predictors already selected
● STOP selection when desired maximum number of terms is reached or new predictors provide less than a user-specified minimum RV
A minimal sketch of this procedure appears below.
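The sketch (Python/NumPy; the stopping thresholds and toy dataset are illustrative):

```python
import numpy as np

def forward_selection(X, y, max_terms=15, min_rv_gain=0.005):
    """Greedy forward selection: repeatedly add the predictor giving
    the largest gain in reduction of variance (RV)."""
    n, p = X.shape
    chosen, best_rv = [], 0.0
    sst = np.sum((y - y.mean()) ** 2)
    while len(chosen) < max_terms:
        gains = []
        for j in range(p):
            if j in chosen:
                continue
            A = np.column_stack([np.ones(n), X[:, chosen + [j]]])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rv = 1.0 - np.sum((y - A @ coef) ** 2) / sst
            gains.append((rv, j))
        if not gains:
            break
        rv, j = max(gains)
        if rv - best_rv < min_rv_gain:   # stop: too little added RV
            break
        chosen.append(j)
        best_rv = rv
    return chosen, best_rv

# Tiny demonstration with 3 candidate predictors, one of them pure noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200)
print(forward_selection(X, y))   # expect predictors [0, 1] selected
```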
DEVELOPMENTAL CONSIDERATIONS
(cont.)
● Terms in Equations; Selection Criteria
● Dependent Data
Sample Size, Stability, Representativeness
AVOID OVERFIT !!
Stratification - Seasons
Pooling – Regions
MOS LINEAR REGRESSION
OCTOBER 1 1993 - MARCH 31 1994 0000 UTC, KUIL
12-24 H PRECIPITATION ≥ 1.0"
[Scatterplot: binary predictand vs. 12-24 h NWP model precipitation amount (0.00-1.00 in.), RV = 14.2%]
Short sample,
Few observed cases,
Limited skill!
GFS MOS Cool Season PoP/QPF Regions
[Map: pooled development regions, with GFS MOS forecast sites (1720) + PRISM]
DEVELOPMENTAL CONSIDERATIONS
(cont.)
● Terms in Equations; Selection Criteria
● Dependent Data
Sample Size, Stability, Representativeness
AVOID OVERFIT !!
Stratification - Seasons
Pooling – Regions
● Categorical Forecasts?
Grids?
MOS BEST CATEGORY SELECTION
KDCA 12-Hour QPF Probabilities
48-Hour Projection valid 1200 UTC 10/31/93
[Bar chart: probability (%) that precipitation amount equals or exceeds 0.01", 0.10", 0.25", 0.50", 1.00", 2.00"; each bar is checked against its forecast threshold (“THRESHOLD EXCEEDED?”) and the resulting best category goes to the MOS guidance messages — see the sketch below]
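A sketch of the selection logic (Python; the thresholds here are placeholders — the operational values are tuned by element and projection):

```python
# QPF exceedance categories (inches) and illustrative forecast
# thresholds for each category's probability.
QPF_CATEGORIES = [0.01, 0.10, 0.25, 0.50, 1.00, 2.00]
THRESHOLDS = [0.40, 0.40, 0.40, 0.30, 0.20, 0.10]   # illustrative

def best_category(exceed_probs):
    """Return the largest amount whose P(exceed) clears its threshold."""
    best = None
    for amount, prob, thresh in zip(QPF_CATEGORIES, exceed_probs,
                                    THRESHOLDS):
        if prob >= thresh:
            best = amount
    return best   # None means no measurable-precipitation category

# Probabilities of >= 0.01", 0.10", 0.25", 0.50", 1.00", 2.00":
print(best_category([0.80, 0.62, 0.45, 0.22, 0.08, 0.01]))  # -> 0.25
```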
Gridded MOS
[Sample bulletin: KDCA GFS MOS guidance, 1200 UTC 11/27/2006, in the same MAV format shown earlier]
High-resolution, gridded guidance for NDFD
Gridded MOS – Supporting NDFD
GFS-based, CONUS-wide @ 2.5 km
Elements: Max / Min, Temp / Td, PoP, QPF, RH, Tstm, Winds, Gusts, Snowfall, Sky Cover, Weather
http://www.mdl.nws.noaa.gov/~mos/gmos/conus25/index.php
Alaska / Hawaii Gridded MOS
AK: GFS-based, 3-km grid – all CONUS elements
HI: GFS-based, 2.5-km grid – Max / Min, Temp / Td, RH, PoP, Winds, Gusts
The Gridding of MOS
“Enhanced-Resolution” Gridded MOS Systems
● “MOS at any point” (e.g. GMOS)
Support NWS digital forecast database (NDFD)
2.5 km - 5 km resolution
Equations valid away from observing sites
Emphasis on high-density surface networks
Use high-resolution geophysical (terrain) data
Surface observation systems used in GMOS
• METAR
• Buoys/C-MAN
• Mesonet (RAWS/SNOTEL/Other)
• NOAA cooperative observer network
• RFC-supplied sites
[Maps: MOS station density grows from approx. 2,560 sites, to approx. 7,490 sites, to approx. 20,470 sites as networks are added]
Gridded MOS – Central CA
[Map: 2.5-km grid detail]
Gridded MOS Concept - Step 1
“Blending” first guess and high-density station forecasts
[Maps: Day 1 max temp, 00 UTC 03/03/05 — first-guess field from Generalized Operator Equation or other source; first guess + guidance at all available sites]
Developing the “First Guess” Field
Some options
• Generalized operator equation (GOE)
Pool observations regionally
Develop equations for all elements, projections
Apply equations at all grid points within region
• Use average field value at all stations
• Use other user-specified constant
• Use NWP model forecast
Gridded MOS Concept - Step 2
Add further detail to analysis with high-resolution
geophysical data and “smart” interpolation
[Maps: Day 1 max temp, 00 UTC 03/03/05 — first guess + guidance at all available sites; first guess + station forecasts + terrain]
GMOS Analysis
Basic Methodology (Glahn et al. 2009, WaF)
• Method of successive corrections (“BCDG”)
Bergthórsson and Döös (1955); Cressman (1959);
Glahn (1985, LAMP vertical adjustment)
• Elevation (“lapse rate”) adjustment
Inferred from forecasts at different elevations
Calculations done “on the fly” from station data
Can vary by specific element, synoptic situation
• Land/water gridpoints treated differently
A bare-bones sketch of one correction pass follows.
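The sketch (Python/NumPy) shows a single Cressman-style correction pass — omitting the elevation adjustment, land/water handling, and smoothing that the operational BCDG analysis adds:

```python
import numpy as np

def cressman_pass(grid, gx, gy, obs_x, obs_y, obs_val, roi):
    """One successive-correction pass: spread station increments
    (obs minus first guess at the station) onto the grid with
    Cressman distance weights inside a radius of influence."""
    # First-guess value at each station (nearest gridpoint, for brevity).
    ix = np.searchsorted(gx, obs_x).clip(0, len(gx) - 1)
    iy = np.searchsorted(gy, obs_y).clip(0, len(gy) - 1)
    increments = obs_val - grid[iy, ix]

    new = grid.copy()
    for j in range(grid.shape[0]):
        for i in range(grid.shape[1]):
            d2 = (obs_x - gx[i]) ** 2 + (obs_y - gy[j]) ** 2
            w = np.where(d2 < roi ** 2,
                         (roi ** 2 - d2) / (roi ** 2 + d2), 0.0)
            if w.sum() > 0:
                new[j, i] += np.sum(w * increments) / w.sum()
    return new

gx = gy = np.arange(10.0)
first_guess = np.full((10, 10), 50.0)        # e.g. a GOE first guess (F)
obs_x, obs_y = np.array([2.0, 7.0]), np.array([3.0, 6.0])
station_fcst = np.array([54.0, 46.0])        # station MOS forecasts
print(cressman_pass(first_guess, gx, gy, obs_x, obs_y,
                    station_fcst, roi=4.0).round(1))
```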
GMOS Analysis
Other Features
• Special, terrain-following smoother
• Radius of influence (ROI) can be adjusted to account for variations in density of observed data
• Nudging can be performed to help preserve
nearby station data
• Parameters can be adjusted individually for
each weather element
GMOS Analysis
Some Issues
• Not optimized for all weather elements and
synoptic situations
Need situation specific, dynamic models?
• May not capture localized variations in vertical
structure
Vertical adjustment uses several station “neighbors”
• May have problems in data-sparse regions
over flat terrain
Defaults to pure Cressman analysis with small ROI
Can result in some “bulls-eye” features
NDGD vs. NDFD
Which is “better”?
[Maps: max temperature, 00 UTC 03/11/06 — NDGD Max T vs. NDFD Max T]
NDGD vs. NDFD
Which is “better”?
[Maps: relative humidity, 21 UTC 03/10/06 — NDGD RH vs. NDFD RH]
Fewer obs available to analysis = less detail in GMOS
Forecasters adding detail: Which is “better”? More accurate?
AK GMOS Temps & Observing Sites
Even fewer obs available – Yikes!
[Map: 3-km grid]
The Gridding of MOS
“Enhanced-Resolution”, Gridded MOS Systems
● “MOS at any point” (e.g. GMOS)
Support NWS digital forecast database (NDFD)
2.5 km - 5 km resolution
Equations valid away from observing sites
Emphasis on high-density surface networks
Use high-resolution geophysical (terrain) data
● “True” gridded MOS
Observations and forecasts valid on fine grid
Use remotely-sensed predictand data
e.g. WSR-88D QPE, Satellite clouds, NLDN
[Image: remotely-sensed precipitation data]
How well do we do?
MOS Verification
Temperature Verification - 0000 UTC
GFS MOS vs. GFS DMO (10/2013 - 3/2014)
[Line chart: 2-m temperature MAE (°F) at 1315 CONUS stations, GFS DMO vs. GFS MOS, projections 6-186 hours]
Temperature Verification - 0000 UTC
GFS MOS vs. GFS DMO (10/2014 - 3/2015)
[Line chart: 2-m temperature MAE (°F) at 1315 CONUS stations, GFS DMO vs. GFS MOS, projections 6-186 hours]
MOS Temperature Verification - 0000 UTC
2010 Warm Season (4/2010 – 9/2010)
[Line chart: MAE (°F) of 00Z temperatures, CONUS (300 stations), April 1 - September 30, 2010, GFS vs. NAM MOS, projections 6-84 hours]
MOS Temperature Verification - 0000 UTC
2012 Warm Season (4/2012 – 9/2012)
[Chart: corresponding MAE comparison for the 2012 warm season]
MOS Temperature Verification - 0000 UTC
2014 Warm Season (4/2014 – 9/2014)
[Line chart: MAE (°F) of 0000 UTC temperature, CONUS (300 stations), NAM MOS vs. GFS MOS, projections 6-84 hours]
MOS Temperature Verification - 0000 UTC
2015 Warm Season (4/2015 – 9/2015)
[Line chart: MAE (°F) of 0000 UTC temperature, CONUS (300 stations), NAM MOS vs. GFS MOS, projections 6-84 hours]
MOS Temperature Bias - 0000 UTC
2015 warm season (4/2015 – 9/2015)
[Line chart: mean algebraic error (°F) of 0000 UTC temperature, CONUS (300 stations), NAM MOS vs. GFS MOS, projections 6-84 hours, plotted between -2 and +2°F]
Unbiased? Why or why not?
MOS Temperature Bias - 0000 UTC
2012 warm season (4/2012 – 9/2012)
[Line chart: mean algebraic error (°F) of 0000 UTC temperature, CONUS (300 stations), NAM MOS vs. GFS MOS, projections 6-84 hours]
Unbiased? Why not?
MOS Temperature Bias - 0000 UTC
10/06; 01/07; 03/08
[Line chart: mean algebraic error (°F) of 00Z temperature, CONUS (300 stations), Oct. 1-31, 2006; Jan. 1-31, 2007; Mar. 1-31, 2008; NAM vs. GFS, projections 6-84 hours]
Having a representative verification sample is important, too!
6h PoP Verification - 0000 UTC
2014 Warm Season (4/14 – 9/14)
[Line chart: 6-h PoP Brier score, CONUS (300 stations), NAM MOS vs. GFS MOS, projections 12-84 hours]
6h PoP Verification - 0000 UTC
2015 Warm Season (4/15 – 9/15)
[Line chart: 6-h PoP Brier score, CONUS (300 stations), NAM MOS vs. GFS MOS, projections 12-84 hours]
6h PoP Verification - 0000 UTC
2014-15 Cool Season (10/14 – 03/15)
[Line chart: 6-h PoP Brier score, CONUS (300 stations), NAM MOS vs. GFS MOS, projections 12-84 hours]
48-yr Max Temperature Verification
Guidance / WFO; Cool Season 1966 - 2013
[Line chart: mean absolute error (°F) for 24-h and 48-h guidance and WFO forecasts, 1970-2010; MAE falls steadily over the decades, with eras labeled Perf. Pg. / LFM, PE MOS, Day/Nite, NGM, EDAS, AVN, GFS, NAM]
Dealing with
NWP model changes
Mitigating the effects on development
To help reduce the impact of model changes and
small sample size, we rely upon...
1. Improved model realism
better model = better statistical system
2. Coarse, consistent archive grid
smoothing of fine-scale detail
constant mesh length for grid-sensitive calculations
3. Enlarged geographic regions
larger data pools help to stabilize equations
4. Use of “robust” predictor variables
fewer boundary layer variables
variables likely immune to known model changes
(e.g. combinations of state variables only)
Responding to NWP Model Changes
• Parallel evaluation
Run MOS…new vs. old NWP model
Assess impacts on MOS skill
Responding to NWP Model Changes
GFS: Hybrid EnKF parallel evaluation
[Line chart: Nov-Dec 2011 temperature bias, GFS MOS operational vs. parallel (overall, 344 stations), projections 0-192 hours, biases between roughly -1.5 and +0.5]
Responding to NWP Model Changes
• Parallel evaluation
Run MOS…new vs. old NWP model
Assess impacts on MOS skill
• Do nothing?
OK if impacts are minimal
But, often they aren’t! (GFS wind / temps)
2009 - 2011 GFS MOS Wind Bias
[Maps: GFS MOS wind speed bias for Jan-Apr 2009, 2010, 2011 and May-Jul 2011]
Wind Speed Bias for KABQ
July - Sept. 2010 (00Z Cycle)
[Line chart: bias (knots) vs. forecast projection 0-84 hours, comparing OPER and PARA runs and the GFSMOSoper vs. GFSMOS“fix” equations]
Responding to NWP Model Changes
OK, so now what?
• Parallel evaluation
Run MOS…new vs. old NWP model
Assess impacts on MOS skill
• Do nothing?
OK if impacts are minimal
But, often they aren’t! (GFS wind / temps)
• Doing nothing not an option?
• Model error characteristics significantly different
• Undesirable effects on MOS performance
• Model changes may be recent
i.e. limited sample available from newest version
• Unhappy forecasters!!
Responding to NWP Model Changes
• Bias Correction for MOS?
Daily bias correction based on past N-day forecast errors
(typically N = 7, 10, 20, or 30)
[Timeline diagram: today's forecast corrected using forecast/observation pairs from the past N days]

Bias = (1/N) Σ_{t=1}^{N} [ F(t) − O(t) ]

Bias correction: F′ = F(t) − Bias

F = forecasts; O = observations; N = days in training sample
Daily biases can be treated equally or
weighted to favor most recent days, etc.
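A sketch of the running correction (Python/NumPy; the sample numbers are illustrative):

```python
import numpy as np

def running_bias(forecasts, observations, n=10, weights=None):
    """Mean signed error over the past N days:
    Bias = (1/N) * sum(F(t) - O(t)); corrected forecast F' = F - Bias.
    Optional weights can favor the most recent days."""
    f = np.asarray(forecasts[-n:], dtype=float)
    o = np.asarray(observations[-n:], dtype=float)
    w = np.ones_like(f) if weights is None else np.asarray(weights,
                                                           dtype=float)
    return np.sum(w * (f - o)) / w.sum()

past_fcst = [62, 60, 65, 61, 59, 63, 64, 66, 60, 62]
past_obs  = [60, 59, 62, 60, 58, 60, 62, 63, 59, 60]
bias = running_bias(past_fcst, past_obs, n=10)
today_fcst = 64.0
print(f"bias = {bias:+.1f}F, corrected = {today_fcst - bias:.1f}F")
```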
Raw / Corrected GFS MOS Wind MAE
KABQ – 00 UTC, 96-h Projection
[Time series F(t): raw vs. bias-corrected MAE]
Raw / Corrected GFS MOS Temp MAE
Southwest U.S. – 00 UTC, 48-h Projection
[Time series F(t): raw vs. bias-corrected MAE]
Responding to NWP Model Changes
• Bias Correction for MOS?
Apply to Temps? Winds?
Run continuously in background?
Satisfactory in rapidly-varying conditions?
• Redevelop?
Short sample from new model or “mixed”?
Full System, selected elements?
Biggest impacts on single-station equations (Temp, Wind)
GFS MOS Wind Verification Results*
5/10/2011 – 9/30/2011
[Line chart: MOS wind speed MAE, Southwest CONUS, 5/10/2011 - 9/30/2011, OPER vs. NEW GFS equations, projections 6-84 hours]
[Line chart: MOS wind speed bias (knots), same region and period; the NEW GFS equations bring the bias down toward zero]
Using even just a little data from the new NWP model version can be helpful!
* 2-season dependent sample (4/2010 – 9/2011)
Responding to NWP Model Changes
• Bias Correction for MOS?
Apply to Temps? Winds?
Run continuously in background?
Satisfactory in rapidly-varying conditions?
• Redevelop?
Short sample from new model or “mixed”?
Full System, selected elements?
Biggest impacts on single-station equations (Temp, Wind)
• Reforecasts?
2-3 year sample probably sufficient for T, Wind
Representativeness important!
Rare elements need longer or “mixed” sample?
Requires additional supercomputer resources
Responding to NWP Model Changes
Sample size sensitivity
[Line chart: warm seasons 2000-2013, MAE for wind speed ≥ 10 kt, 00Z GEFS MOS, overall (334 stations), projections 6-192 hours, with curves for 1-, 2-, 3-, 5-, 10-, and 15-year development samples]
Responding to NWP Model Changes
Sample size sensitivity
[Line chart: as above, comparing 15-year, 15-year (3-day skip), and 5-year development samples]
Responding to NWP Model Changes
Four recent examples
• GFS MOS update for v.12 (T1534) (3/2015)
Mitigate impacts of resolution change
Most elements, except Sky, Cig/Vis, PoP/QPF
(Sample: 2 cool seasons, 4 warm; mostly reforecasts)
• GFS/GFSX MOS Wind replacement (6/2012)
Fix errors introduced by 5/2011 GFS roughness
length change (2-season sample)
• GFS MOS full-system update (3/2010)
Correct accumulated drift from several
minor model changes
• NAM MOS (12/2008)
Respond to Eta / NMM transition
“Mixed” samples except for sky, snow (Eta-based)
MOS: Today and Beyond
The Future of MOS
“Traditional” Station-oriented Products
● GFS / GFSX MOS
Update Temp, Wind, PoP/QPF with v13.0 data
Add new sites to GFS COOP MOS bulletins
Update climate normals (1981-2010 NCDC)
Bias-corrected T, Td, Max/Min?
● NAM MOS
Add precipitation type suite (TYP, POZ, POS) – added Jan 13, 2016
Update remaining Eta-based elements (e.g. VIS)
Update temp, winds with NMM-b data
The Future of MOS
“Traditional” Station-oriented Products (contd.)
● ECMWF MOS
Complete “GFS-like” product suite
Add snow amount forecasts
Refresh other elements as needed
NOAA internal distribution only
● General
Evaluate impacts of NWP model changes
Utilize new “routine” EMC reforecast data
Periodic addition of new observation sites
New products utilizing station probabilities
GFS/NAM MOS 24-h snow amount probabilities
KBWI 00 UTC, 1/24/16
[Bar charts: GFS and NAM probabilities of 24-h snow amount ≥ Tr, 2”, 4”, 6”, 8”]
GFS/NAM MOS 24-h snow amount probabilities
KBWI 00 UTC, 2/13/14
[Bar charts: GFS and NAM probabilities for the same thresholds]
The Future of MOS
Gridded MOS: Where do we go from here?
• Additions to current CONUS GMOS system
ECMWF, NAM-based companion systems?
Expand grid extent to include NWRFC domain
Probabilistic and/or ensemble-based products
[Maps: NAM gridded snow amount probability — probability of snow > 4”]
Sample Forecast as Quantile Function (CDF)
(72-h Temp KBWI 12/14/2004)
[Chart: temperature (°F, 25-50) as a function of cumulative probability (0-1)]
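A sketch of how such a quantile-function forecast can be used (Python/NumPy; the quantile values are illustrative, loosely shaped like the KBWI curve above):

```python
import numpy as np

# A forecast expressed as a quantile function: temperature values at
# selected cumulative probabilities (illustrative numbers).
probs = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
temps = np.array([29.0, 34.0, 37.0, 41.0, 46.0])   # degrees F

# Read off any quantile by linear interpolation...
median = np.interp(0.5, probs, temps)
# ...or invert the curve to get an event probability, e.g. P(T <= 32F).
p_below_freezing = np.interp(32.0, temps, probs)

print(median, round(float(p_below_freezing), 2))   # -> 37.0 0.17
```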
ECMWF MOS Ensemble Box Plots
The Future of MOS
Gridded MOS: Where do we go from here?
• Additions to current CONUS GMOS system
ECMWF, NAM-based companion systems?
Expand grid extent to include NWRFC domain
Probabilistic and/or ensemble-based products
• Expand GMOS for AK / HI
AK: Increase grid extent (match RTMA/URMA)
AK: Improve marine winds
Hawaii: add QPF, Sky Cover
• Increase utilization of mesonet data
Expand use of MADIS archive (NCO/TOC/ESRL)
Soon: ~17,000 total MOS sites; later up to ~20,000?
The Future of MOS
Gridded MOS: Where do we go from here?
• Incorporate remotely-sensed data where possible
SCP augmented clouds / WSR-88D QPF (in use)
NSSL MRMS (Multi-radar, Multi-sensor) dataset?
New lightning datasets: Global, “Total” (CC & CG)
• Expand consensus / blended guidance products
Blend GFS, CMC, global ensembles, GMOS
Weights based on recent performance
Use MOS, BMA, MAE, other statistical techniques
Investigate downscaling from analyses (RTMA, etc.)
Debias! Calibrate! Downscale! Blend! Verify!
National Blend of Models (NBM)
• Combines forecasts from multiple NWP models to
create calibrated, blended guidance on the 2.5 km
NDFD grid.
• The 2.5 km Un-Restricted Mesoscale Analysis
(URMA) is used for calibration.
• Components are blended using a MAE-based
weighting technique.
• NBM v1.0:
Temperature-related variables only
NBM v2.0: Add PoP/QPF
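A sketch of the MAE-based weighting idea (Python/NumPy; the exact NBM weighting formula may differ — this only illustrates down-weighting components with larger recent MAE against the analysis):

```python
import numpy as np

def blend(forecast_grids, maes):
    """Inverse-MAE weighted consensus of bias-corrected components."""
    weights = 1.0 / np.asarray(maes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(forecast_grids)        # (n_components, ny, nx)
    return np.tensordot(weights, stacked, axes=1)

gmos   = np.full((3, 3), 41.0)   # gridded MOS component (F)
ekdmos = np.full((3, 3), 43.0)   # ensemble-based MOS component
dmo    = np.full((3, 3), 45.0)   # direct model output component

# Recent MAEs vs. the URMA analysis (illustrative values).
consensus = blend([gmos, ekdmos, dmo], maes=[2.0, 2.5, 4.0])
print(consensus.round(2))
```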
National Blend of Models Dataflow
[Diagram:
Inputs — Gridded MOS guidance: GFS (GMOS), NAEFS (EKDMOS); Direct model output: GFS, GEFS mean, CMCE mean
Daily training — URMA 2.5 km CONUS analyses update the component biases and MAEs
Blended guidance generation — latest gridded MOS and DMO inputs, corrected with the stored biases and MAEs, yield bias-corrected grids and calibrated consensus forecasts]
National Blend of Models v1.0
[Maps: NDFD, Blend, GMOS, Blend Para]
So You Wanna Be a MOS
Developer?
Consider the “Pathways” Program!
For more info:
http://www.usajobs.gov/StudentsAndGrads
NWS contact:
Ms. Hope Hasberry
1325 East-West Highway, Rm. 9158
Silver Spring, MD 20910-3283
(301) 427-9072
Hope.Hasberry@noaa.gov
REFERENCES…the “classics”
Wilks, D., 2006: Statistical Methods in the Atmospheric Sciences, 2nd Ed., Chap. 6, pp. 179-254.
Draper, N. R., and H. Smith: Applied Regression Analysis, Chap. 6, pp. 307-308.
Glahn, H. R., and D. A. Lowry, 1972: The use of model output statistics (MOS) in objective weather forecasting. J. Appl. Meteor., 11, 1203-1211.
Carter, G. M., et al., 1989: Statistical forecasts based on the NMC’s NWP system. Wea. Forecasting, 4, 401-412.
REFERENCES (GMOS)
Glahn, H. R., et al., 2009: The gridding of MOS. Wea. Forecasting, 24, 520-529.