Development of a benchmark network for measuring climate in the mountainous Western US
Tom Pagano, Curtis, Doggett, Daly, Pasteris
Tom.Pagano@por.usda.gov
503 414 3010
NRCS data networks
Why “Gold standard” sites?
What is a “good” site?
How do we do this?
[Photos: snow surveys, 1906 and 2005]
Manual Snow Surveys
Metal tube inserted into snow and weighed to measure water content
Depth of snow water equivalent helps forecast summer runoff
Primary measurement technology, 1910 to ~1985
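The tube measurement reduces to a simple relation: the core's melted-water depth (SWE) is its mass divided by water density times the tube's cross-sectional area. A minimal sketch (the tube area and core weights below are illustrative, not the Federal sampler's actual calibration):

```python
WATER_DENSITY_G_PER_CM3 = 1.0  # density of liquid water

def swe_cm(core_mass_g, tube_area_cm2):
    """Snow water equivalent (cm): depth of water the weighed core
    would melt to, SWE = mass / (water density * tube area)."""
    return core_mass_g / (WATER_DENSITY_G_PER_CM3 * tube_area_cm2)

def snow_density(core_mass_g, tube_area_cm2, snow_depth_cm):
    """Bulk density of the core as a fraction of water density."""
    return swe_cm(core_mass_g, tube_area_cm2) / snow_depth_cm

# A 300 g core from 100 cm of snow, taken with a 10 cm^2 tube:
# 30 cm of water content, i.e. 30% snow density.
print(swe_cm(300, 10), snow_density(300, 10, 100))
```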
Snotel (SNOw TELemetry) network
Automated, remote stations
Primary variables:
Snow water
Precipitation
Temperature
Also:
Snow depth
Soil moisture
Manual snow-course
SNOTEL
SCAN
Characteristics of NRCS networks
Primary purpose is real-time water supply forecasting
Secondary climate applications
Rural/wilderness/alpine: high elevation, below timberline
Long period of record (many sites 1936 to present)
[Charts: number of snowcourse (solid) and SNOTEL (hashed) sites by active station installation date; time series of cumulative years of data by active year]
Quartz Mountain, Oregon (20G06S)
[Chart: March 1 snow water equivalent (inches); annotated "devastating" forest fire and salvage logging, 1992]
Data signals and noises
Maximize (climate-based):
- Macroscale: climate change, seasonal water supply
- Microscale: transient/extreme events, inversions, snow line
Minimize for climate studies (non-climate-based):
- Macroscale: land use change, cloud seeding, pollution
- Microscale: vandalism, sensor failure, sensor placement
- Other: transmission error, keying, database gremlins
[Map: Montana April 1 snow water equivalent, % of normal]
Major land use change at sites
[Photos: Copper Bottom site, before and after]
Shifts in snowcourse measurement dates
[Chart: % of stations reporting on each day of each year, relative to April 1 (from 10 days early to 10 days late), 1910-2000; weekend patterns are visible]
More subtle changes abound, even network-wide changes
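The date-shift analysis can be sketched as: convert each measurement date to a day offset from April 1, then tabulate what fraction of stations reported at each offset in each year. The station records below are hypothetical:

```python
from datetime import date
from collections import Counter

def offset_from_april1(d):
    """Days late (positive) or early (negative) relative to April 1."""
    return (d - date(d.year, 4, 1)).days

def offset_distribution(measurements):
    """measurements: list of (station, date) pairs.
    Returns {year: {offset_days: % of that year's stations}}."""
    by_year = {}
    for _, d in measurements:
        by_year.setdefault(d.year, []).append(offset_from_april1(d))
    return {yr: {off: 100.0 * n / len(offs)
                 for off, n in Counter(offs).items()}
            for yr, offs in by_year.items()}

# Three hypothetical 1950 measurements: two on time, one two days early.
dist = offset_distribution([
    ("A", date(1950, 4, 1)),
    ("B", date(1950, 4, 1)),
    ("C", date(1950, 3, 30)),
])
```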
Establishing a benchmark network for monitoring mountain climate
Top-down vs. bottom-up approaches (data-based, climate-based, or both?)
For what?
What does it mean?
Network building approaches
Top down:
- Define criteria for sites to be in the network
- Examine the existing network for matching sites
- Supplement with new sites if necessary
Bottom up:
- Examine the existing network
- Pick out the "best" sites
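The top-down approach lends itself to a mechanical screen: state the criteria, then filter the station inventory against them. A sketch in which every field name, threshold, and site is hypothetical:

```python
def meets_criteria(site, min_record_years=30, min_elev_ft=5000):
    """Top-down screen: keep sites with a long record, high elevation,
    and no documented land use change (all thresholds hypothetical)."""
    return (site["record_years"] >= min_record_years
            and site["elev_ft"] >= min_elev_ft
            and not site["land_use_change"])

# Hypothetical station inventory.
inventory = [
    {"name": "Site A", "record_years": 70, "elev_ft": 8000, "land_use_change": False},
    {"name": "Site B", "record_years": 15, "elev_ft": 9000, "land_use_change": False},
    {"name": "Site C", "record_years": 60, "elev_ft": 7500, "land_use_change": True},
]
benchmark = [s["name"] for s in inventory if meets_criteria(s)]
```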
Possible qualities of "Not-Goodness":
- Highly variable
- Easily proven wrong
- Mysterious
Also need to consider:
- "good" data from a station
- a "good" station within a network
High variability:
If we want to detect subtle absolute trends, sites with low interannual variability are "good".
What are the low-variability regions?
Results from PRISM SNOTEL temperature quality control
[Maps: average predicted standard deviation, avg(PSD), of January and July maximum temperature (Tmax); bins <3, 3-4, 4-5, 5-7, >7; high- and low-variability regions highlighted]
[Maps: April 1 SWE standard deviation (norm > 1"; bins <2", 2-4", 4-6", 6-10", >10") and SWE standard deviation as % of normal (bins <25%, 25-35%, 35-50%, 50-80%, >80%)]
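The two variability measures mapped above are an interannual standard deviation and a stdev-to-normal ratio per site. A sketch of computing both for one site (the 5-year SWE record is hypothetical; the bin edges follow the % of normal legend):

```python
from statistics import mean, stdev

def variability_stats(april1_swe_inches):
    """Interannual normal, standard deviation, and stdev as % of
    normal for one site's April 1 SWE record."""
    norm = mean(april1_swe_inches)
    sd = stdev(april1_swe_inches)  # sample standard deviation
    return {"norm": norm, "stdev": sd, "stdev_pct_norm": 100.0 * sd / norm}

def pct_norm_bin(pct):
    """Map stdev as % of normal onto the legend's bins."""
    for hi, label in [(25, "<25%"), (35, "25-35%"), (50, "35-50%"), (80, "50-80%")]:
        if pct < hi:
            return label
    return ">80%"

# Hypothetical 5-year April 1 SWE record (inches):
stats = variability_stats([10.0, 12.0, 8.0, 11.0, 9.0])
```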
Easily proven wrong:
We'd like sites with "good" data, in the sense that values are not often missing, spiky, absurd, or impossible.
This is usually site-specific.
…or is it?
Results from PRISM SNOTEL temperature quality control
[Map: percent of PRISM max temperature values (Pnull, P0) missing or unquestionably bad; red (bad): over 36 values/year; blue (good): fewer than 2 values/year; bins 0.0-0.5%, 0.5-1.0%, 1.0-2.5%, 2.5-5.0%, 5.0-10%, >10%]
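The red/blue classification can be sketched as a threshold on the average count of flagged daily values per station-year. The data layout below is an assumption; the 36 and 2 values/year thresholds come from the map legend:

```python
def classify_station(flag_counts_per_year):
    """flag_counts_per_year: yearly counts of daily values that are
    missing or unquestionably bad at one station. Red if the station
    averages more than 36 flagged values/year, blue if fewer than 2,
    otherwise intermediate."""
    avg = sum(flag_counts_per_year) / len(flag_counts_per_year)
    if avg > 36:
        return "red (bad)"
    if avg < 2:
        return "blue (good)"
    return "intermediate"

# A chronically gappy station vs. a nearly complete one:
print(classify_station([40, 55, 38]))
print(classify_station([0, 1, 1]))
```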
Mysteriousness
What role for quality control (QC) in identifying sites?
Does QC mean "quality" or "predictability"?
High "predictability" is both a good thing and a bad thing.
If the data is very knowable, is it redundant?
Fewer "surprises" mean fewer learning opportunities.
But there's a difference between "adventure" and "danger".
Are "learning opportunities" what we're really after with this network?
Would a site be “gold standard” for all variables?
Data quality efforts for swe > precip > temperature
Is edited/estimated data valid?
What role for the data editing process (past and future)?
Well-known snow "undercatch" in precipitation gages
[Chart: precipitation and SWE traces; annotations: "Is this edited?" and "Did unedited precip == swe?"]
Does being one of these sites mean that the data is treated differently within our program?
If we agree to "prioritize" a site, is access for repairs a factor?
Metadata
What metadata do we need?
What is the state of the metadata?
Is it too late to start collecting it?
What will people do with it?
Metadata status
Spotty… some places excellent, other places poor, non-digitized, non-uniform.
Big Flat, UT
[Figures: site sketch and solar window diagram]
Julander 2005 (Utah):
Examined repeat photography, cloud seeding, pollution, etc.
Found 48% of snow site records “contaminated”
8 sites (6%) suitable for climate studies.
[Photos: GBRC Meadows]
Where do we go from here?
Many people are already using NRCS snow data for research
Non-climate signals need to be documented
How to identify the right sites?
What do we do once the sites are identified?
END