The Status of the Multi-satellite Precipitation
Analysis and Insights Gained from Adding
New Data Sources
G.J. Huffman1,2, R.F. Adler1, D.T. Bolvin1,2, E.J. Nelkin1,2
1: NASA/GSFC Laboratory for Atmospheres
2: Science Systems and Applications, Inc.
Outline
1. MPA Status
2. Satellite Observation Noise
3. Estimating Error
4. (Validation Data Issues)
5. Summary
1. MPA STATUS
• The MPA has been upgraded to produce both an improved real-time (3B42RT) and a new post-real-time (3B42) data set
• Code to include AMSR-E and AMSU-B precip estimates in the MPA is in operational testing
• The “old” real-time is available for February 2002 – present
• The post-real-time is available for January 1998 – December 1998, and reprocessing continues at 5x real time
Processing flow (inputs: instantaneous SSM/I, TRMM, AMSR, AMSU; hourly IR Tb; monthly gauges):
• Calibrate High-Quality (HQ) estimates to “best” → 30-day HQ coefficients
• Merge HQ estimates → 3-hourly merged HQ
• Match IR and HQ, generate coefficients → 30-day IR coefficients
• Apply IR coefficients → hourly HQ-calibrated IR precip
• Merge IR and merged HQ estimates → 3-hourly multi-satellite (MS)
• Compute monthly satellite-gauge combination (SG) → monthly SG
• Rescale 3-hourly MS to monthly SG → rescaled 3-hourly MS
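The final rescaling step can be sketched in a few lines. This is an illustrative sketch only, not the operational TMPA code; the `max_ratio` cap and the array layout are assumptions for the example.

```python
import numpy as np

def rescale_to_monthly(ms_3hr, sg_monthly, max_ratio=10.0):
    """Scale each 3-hourly multi-satellite (MS) field so that its monthly
    mean matches the satellite-gauge (SG) combination, grid box by grid box."""
    ms_monthly = ms_3hr.mean(axis=0)                # monthly mean of MS
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(ms_monthly > 0.0, sg_monthly / ms_monthly, 1.0)
    ratio = np.clip(ratio, 0.0, max_ratio)          # guard against wild factors
    return ms_3hr * ratio                           # broadcast over time

# synthetic month: 248 3-hourly fields on a 4x4 grid, SG 10% wetter than MS
ms = np.random.default_rng(0).gamma(0.5, 2.0, size=(248, 4, 4))
sg = ms.mean(axis=0) * 1.1
scaled = rescale_to_monthly(ms, sg)
```

The single multiplicative factor per grid box preserves the 3-hourly pattern while forcing the monthly total to agree with the gauge-adjusted value.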
2. SATELLITE
OBSERVATION NOISE
Different sensors “see”
different physical scenes
Microwave “sees”
hydrometeors
along front
IR “sees”
clouds ahead of
front
The inferred precip falls in different places that are each synoptically consistent, but the microwave is better
2. SAT. OBS. NOISE (cont.)
IR vs. microwave for full
resolution: 3-hr 0.25°x0.25°;
00, 03, …, 21Z 15 Feb 2002
Latitude band 30°N-S
Errors are equitably
distributed on either side of
the 1:1 line by design of the
IR calibration.
But, details of IR and
microwave patterns differ.
Scene classification might be
helpful (Sorooshian et al.).
2. SAT. OBS. NOISE (cont.)
So, we try to get as many microwave sensors as possible (i.e., do GPM):
• TRMM PR (red)
• TRMM TMI (cyan)
• SSM/I (3 sat.; yellow)
• AMSR-E (blue)
• AMSU-B (3 sat.; green)
• IR (black)
But details in the microwave observations can cause noise in the precip estimates if they’re not properly handled.
2. SAT. OBS. NOISE (cont.)
Coincident 0.25°-gridbox GPROF-AMSR and -TMI estimates for February 2004
[Scatterplots of TMI precip vs. AMSR precip (0–30+ mm/h) for ±15-, ±30-, and ±60-minute coincidence windows]
The “standard” 3-hr time window for coincidence introduces error
• same grid box for spatial coincidence
• ±15-, ±30-, ±60-minute windows of time coincidence
• points near axes at ±60 result from advection into/out of box,
and/or growth/decay
• limiting the window decreases the microwave data in each period
• time interpolation, such as in morphing, helps avoid this error
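A minimal match-up sketch shows the trade-off in the bullets above: a wider window keeps more microwave data but admits pairs that are not truly coincident. The function name, the synthetic times, and the assumption that spatial matching is already done are all illustrative.

```python
import numpy as np

def coincident_pairs(t_a, rate_a, t_b, rate_b, window_min):
    """Collect pairs of estimates from two sensors in the same grid box
    (assumed already matched spatially) whose observation times differ by
    at most window_min minutes. Times are minutes since an arbitrary epoch."""
    pairs = []
    for ta, ra in zip(t_a, rate_a):
        close = np.abs(t_b - ta) <= window_min
        pairs.extend((ra, rb) for rb in rate_b[close])
    return pairs

# synthetic overpass times (minutes) and rates for two sensors
t_a = np.array([0.0, 90.0, 200.0])
t_b = np.array([10.0, 170.0, 260.0])
r_a = np.array([1.0, 2.0, 0.5])
r_b = np.array([1.2, 1.8, 0.4])

# shrinking the window from ±60 to ±15 minutes discards most of the pairs
print(len(coincident_pairs(t_a, r_a, t_b, r_b, 60)),
      len(coincident_pairs(t_a, r_a, t_b, r_b, 15)))
```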
2. SAT. OBS. NOISE (cont.)
Conic scanners
(SSM/I, TMI, AMSR-E)
Scan lines are segments of a
cycloidal pattern. The along-track
separation is the same everywhere,
but the curvature causes oversampling at the edges.
Pixels at scan edges uniquely
represent an area about 40%
smaller than at scan center.
2. SAT. OBS. NOISE (cont.)
Cross-track scanners (IR, AMSU)
Pixels grow as viewing angle grows
away from nadir. Also, oversampling
in the along-track direction occurs at
the edges.
Changing pixel size changes
the observed precipitation rates
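A rough flat-Earth calculation illustrates the pixel growth for a cross-track scanner. This ignores Earth curvature (which matters for the real AMSU geometry), so treat the numbers as order-of-magnitude only.

```python
import math

def footprint_growth(view_angle_deg):
    """Flat-Earth approximation of how a cross-track scanner's footprint
    grows relative to nadir: the slant range scales as 1/cos(theta), and
    the oblique projection onto the ground adds another 1/cos(theta) in
    the scan direction."""
    c = math.cos(math.radians(view_angle_deg))
    along_scan = 1.0 / c**2    # dimension in the scan direction
    along_track = 1.0 / c      # dimension perpendicular to the scan
    return along_scan, along_track

# AMSU-B scans out to roughly +/-48 degrees from nadir
for ang in (0, 20, 40, 48):
    a, b = footprint_growth(ang)
    print(f"{ang:2d} deg: footprint area x{a * b:.1f} relative to nadir")
```

Even in this simplified geometry the edge-of-scan footprint covers roughly three times the nadir area, which is one reason the observed rates change along the scan.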
3. ESTIMATING ERROR
Bowman, Phillips, North (2003, GRL) validation by TOGA TAO gauges
[Scatterplots: slope = 0.96 and slope = 0.68]
4-year average of Version 5 TRMM TMI and PR
• 1°x1° satellite, 12-hr gauge, each centered on the other
• each point is a buoy
• wind bias in the gauges is not corrected
• the behavior seems nearly linear over the entire range
3. ESTIMATING ERROR
(cont.)
Monthly accumulations of
GPCP Version 2 versus
Pacific atolls for 2.5°x2.5°
boxes
• more spread than 4-year
average
• part of the spread is due to gauge uncertainty (Gebremichael et al., 2003; Steiner et al., 2003)
• basis of the bias is still uncertain
3. ESTIMATING ERROR
(cont.)
Daily accumulations of MPA
(3B42RT) versus CPC
analysis for 0.25°x0.25°
boxes
• 13Z 30 July – 12Z 31 July
2004 from CPC validation
site
• correlation continues to
go down, as expected
Mean = 3.2 mm/d
Bias Ratio = 1.04
MAE = 5.3 mm/d
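The scores quoted on this slide are straightforward to compute from paired daily fields. The definitions below are assumptions about what the slide's numbers mean (in particular, bias ratio taken as mean satellite over mean observed); they are not taken from the MPA documentation.

```python
import numpy as np

def daily_scores(sat, obs):
    """Summary scores for paired daily accumulations (mm/d): mean of the
    validation field, bias ratio (mean sat / mean obs), and mean absolute
    error."""
    return {
        "mean_obs": obs.mean(),
        "bias_ratio": sat.mean() / obs.mean(),
        "mae": np.abs(sat - obs).mean(),
    }

# toy paired daily values (mm/d) for four grid boxes
sat = np.array([0.0, 4.0, 12.0, 1.0])
obs = np.array([0.0, 5.0, 10.0, 2.0])
print(daily_scores(sat, obs))
```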
3. ESTIMATING ERROR (cont.)
[Schematic: precip amount vs. time for “obs.”, “sat.1”, and “sat.2”]
Which “satellite” estimate matches the “observations” better?
The uncertainties are multi-scale
• sat.1 is better than sat.2
• the usual 2 = (sat – obs)2 yields the same bad score for both
• the improvement can be revealed with “some” averaging, but
how much? The answer depends on the averaging.
• what does the user want to know?
• fine-scale forecasts have the same problem
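The multi-scale point can be made with a toy series: an estimate with the right amount but a small timing error scores as badly pointwise as a structureless one, while averaging over the window reveals which is better. The series below are invented for illustration.

```python
import numpy as np

obs  = np.array([0., 0., 4., 0., 0., 0.])   # a single rain pulse
sat1 = np.array([0., 0., 0., 4., 0., 0.])   # right amount, one step late
sat2 = np.full(6, 1.0)                      # flat field, wrong amount

def mse(x):
    """Pointwise mean squared error against obs."""
    return np.mean((x - obs) ** 2)

# pointwise scoring penalizes the shifted-but-correct estimate heavily
print("pointwise MSE:", mse(sat1), mse(sat2))

# averaged over the window, sat1's timing error vanishes and sat2's
# amount error is exposed
print("window-mean error:", abs(sat1.mean() - obs.mean()),
      abs(sat2.mean() - obs.mean()))
```

How much averaging is "enough" depends on the displacement scale, which is exactly the open question on this slide.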
3. ESTIMATING ERROR (cont.)
At the monthly scale there are a few bulk formulae for estimating
random error (Huffman; Gebremichael and Krajewski)
• even these need information not all data sets provide
• better schemes are needed that separately represent sampling
and algorithmic error
An estimator is needed for bias on coarse scales
• Tom Smith is working on this
• sticking point is possible dependence on weather regime
• Implication: regime-dependent bias would look like extra
random error when the regimes aren’t represented
At “fine” time/space scales we have a lot to do
• the cleanest possible match-ups are critical
3. ESTIMATING ERROR (cont.)
There is no practical approach for averaging up the fine-scale
errors to provide a consistent estimate of the coarser-scale errors.
• should there be separate estimates of correlated and
uncorrelated errors on the fine scale?
Speculation: accounting for weather regime and underlying
surface type will turn out to be important for getting clean answers.
Validating combination estimates has the additional challenge that
• the relative weight given different inputs fluctuates, and
• the different inputs usually have different statistical properties
3. ESTIMATING ERROR (cont.)
The precip error in no-rain areas needs to be explicitly estimated
[Schematic: rain estimate and a possible estimate of its error along a transect]
• error is certainly not zero for every zero-rain estimate
• some locations are very certain not to contain rain, while the
no-rain estimate is much less certain in others
• error estimates in zero-rain areas might be helpful in merging
different rain estimates
• what does the user want to know?
• this is likely an algorithm-dependent calculation – GPROF is
heading towards this in Version 7
4. VALIDATION DATA ISSUES
Validation data are lacking even at the 2.5° monthly scale.
A standard monthly gauge analysis provides ≥5
gauges only in some land areas. We can’t assume
correct monthly validation in the rainforests!
4. VALIDATION DATA ISSUES (cont.)
We need to pursue the best in situ technologies
• redundant gauge siting (Krajewski, TRMM Office)
• dual-polarization radar
• revisit optical rain gauges? (Weller, Bradley, Lukas [2004
J.Tech.] think they’ve figured out TOGA COARE data)
• acoustic rain gauges (Nystuen)
• solid precipitation in general
– solid precipitation is the next frontier for satellites; validation is
a substantial issue
We need to develop more surface validation sites
• ensure that the data get shared
• sample additional climate regimes
– mid-latitude ocean
– snowy land
• develop long-term strategies without breaking the bank
– IPWG working with continental-scale validation efforts (Ebert - Australia, Janowiak - U.S., Kidd - Europe)
5. SUMMARY
The MPA is ready to include “all” the standard microwave data.
The original satellite data have features that can cause “noise” if
they’re not properly handled.
• IR doesn’t respond to hydrometeors per se
• wide time windows mix non-coincident data
• different pixels along a scan represent different things
Error estimation remains a substantial problem.
• finer-scale match-ups are intrinsically more noisy
• we need concepts and methodology for making and inter-relating
quantitative estimates of error across the range of scales
• in particular, we need to develop bias estimates and estimates of
error in non-raining areas
Surface observations can help us understand the behavior of the
satellite estimates. We need to:
• develop more data sites, including areas with snow
• emphasize clean match-ups of surface and global data
3. ESTIMATING ERROR
Precipitation is
• non-negative
• intermittent
• highly variable over the known range of time scales
• loosely coupled to larger-scale controls
The usual notion of error is
Pest(x,y,t) = [ Ptrue(x,y,t) + r(x,y,t) ] • B(x,y)
where
• Pest is the estimated precipitation (what we actually see)
• Ptrue is the true precipitation (what validation is supposed to tell us)
• r is the random error (a zero-mean random parameter)
• B is the bias error (it persists when time averaging should have damped out the random error); it results from algorithmic error or sampling error
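A quick simulation of this error model shows the distinction in action: time averaging damps the zero-mean random term r, but the multiplicative bias B survives. The distributions and the bias value are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000                       # many observation times at one grid box
p_true = rng.gamma(0.3, 5.0, n)  # skewed "true" precip (mm/h), illustrative
r = rng.normal(0.0, 1.0, n)      # zero-mean random error
B = 1.2                          # multiplicative bias for this location

p_est = (p_true + r) * B         # the error model above (negative values
                                 # would be clipped in a real algorithm)

# averaging over many samples damps r but leaves B intact
ratio = p_est.mean() / p_true.mean()
print(f"mean(est)/mean(true) = {ratio:.3f}  (bias B = {B})")
```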
2. SAT. OBS. NOISE (cont.)
Both TMI and AMSU-B have a
problem detecting light precipitation
over ocean; AMSU-B is worse
AMSU-B compensates for low
occurrence of precip by having
more high rates
Probability matching can control rates, but can’t invent rain in zero-rain areas
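A minimal quantile-mapping sketch makes the limitation concrete: remapping adjusts the distribution of the rates that were detected, but a zero stays a zero, so missed light rain cannot be recovered. The function and the toy data are assumptions for illustration, not the MPA's calibration code.

```python
import numpy as np

def probability_match(source, target):
    """Remap positive `source` rates so their distribution matches the
    positive part of `target` (quantile mapping). Zeros stay zero: the
    matching controls the rate histogram, not the rain occurrence."""
    out = np.zeros_like(source, dtype=float)
    wet = source > 0.0
    if wet.any():
        # rank of each wet value, scaled to [0, 1]
        ranks = source[wet].argsort().argsort() / max(wet.sum() - 1, 1)
        out[wet] = np.quantile(target[target > 0.0], ranks)
    return out

src = np.array([0., 0., 5., 9., 0., 2.])            # too few, too-heavy rates
tgt = np.array([0., 1., 2., 0., 3., 1.5, 0.5, 0.])  # reference distribution
matched = probability_match(src, tgt)
print(matched)   # zeros unchanged; wet rates drawn into the target's range
```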
3. ESTIMATING ERROR (cont.)
[Schematic: precip amount vs. time for “sat.1” and “sat.2”]
How are these two “satellite” estimates best merged?
• Any linear weighting scheme will damage the statistics:
- fractional coverage will be too high
- maximum and conditional rainrates will be too low
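Both effects can be demonstrated with two synthetic intermittent fields: an equal-weight average rains wherever either input rains (coverage too high) and halves the rates at most wet points (maxima and conditional rates too low). The fields below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
# two "satellite" fields: mostly dry, with heavy rain at different times
wet1 = rng.random(n) < 0.10
wet2 = rng.random(n) < 0.10
sat1 = np.where(wet1, rng.gamma(2.0, 3.0, n), 0.0)
sat2 = np.where(wet2, rng.gamma(2.0, 3.0, n), 0.0)

merged = 0.5 * (sat1 + sat2)     # simple equal-weight linear merge

def frac_cov(x):                 # fraction of points with rain
    return (x > 0.0).mean()

def cond_rate(x):                # mean rate where it is raining
    return x[x > 0.0].mean()

print("coverage:", frac_cov(sat1), frac_cov(merged))
print("max rate:", sat1.max(), sat2.max(), merged.max())
print("conditional rate:", cond_rate(sat1), cond_rate(merged))
```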
3. ESTIMATING
ERROR (cont.)
Real rain patterns are
messy!
Rainfall for DC area,
July 1994
Convective rain has
very short correlation
distances – even for a
month
The original D.C. is
50% of a 0.25° grid
box at latitude 40°
3. ESTIMATING ERROR (cont.) Satellite-buoy validation
[Time series comparing Buoy and TMI precip]
4. VALIDATION DATA ISSUES (cont.)
The primary difficulties are
• lighter precipitation rates
• snowy/icy/frozen surface defeats current microwave schemes
- prevents direct estimates and calibration for IR
• IR tends to be decoupled from precipitation processes
• surface calibration/validation data are sparse
“Complex terrain” can induce variations the satellites miss
• strong variations in short distances
• “warm rain enhancement” on windward slopes not retrievable
Sounding channels – TOVS, AIRS – are the current best choice
• GPCP SG and 1DD both use TOVS at high lat./alt.
• group funded to put sounder data in the MPA globally
GPM (and others) have driven recent work on additional channels
• evaluating deployment of sounder channels that don’t see the sfc.