Methodology - Graphs Presentation - Watts Up

Original Graph Set
This is the first set of studies we made after re-rating the stations using the updated Leroy (2010) system. We compare compliant (Class 1&2) stations with non-compliant (Class 3, 4, 5) stations, using different classes as baselines for comparison. We also ran the data without baselines, both for comparison and to ensure that the results were not a methodological artifact.
Part 1
Baseline Comparisons
How do stations compare with
nearby stations of a different class?
This uses a different approach from our (more recent) “nine regions” method, but we wanted to be certain that, however we addressed the problem, the basic results would be the same.
Methodology
• We cut the US up into 26 grid boxes.
• We then compare well sited and poorly sited
stations within each grid, using each Class as
a separate touchstone.
• This gives us four baselines of comparison: Class 1/2, Class 3, Class 4, and Class 5.
Note: We combine Classes 1 and 2 and treat them as a single class, both to ensure a robust sample and because neither Class 1 nor Class 2 stations are temperature-biased according to Leroy (2010); they are therefore equivalent for our purposes.
Grid Boxes
1.) Comparisons are made within each box to establish a baseline.
2.) All boxes are averaged for gridded results. (A code sketch of this procedure follows the map below.)
[Map: the contiguous US divided into 26 grid boxes, labeled A1–A2, B1–B3, C1–C3, D1–D3, E1–E4, F1–F3, G1–G3, H2–H4, I2, and J1.]
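To make the procedure concrete, here is a minimal Python sketch of the grid-box baseline comparison described above. It assumes a pandas DataFrame of stations with hypothetical columns 'grid_box', 'leroy_class', and 'trend' (°C/decade); the names are illustrative, not the code actually used to produce these graphs.

# A minimal sketch of the grid-box baseline comparison (hypothetical columns).
import pandas as pd

COMPLIANT = [1, 2]         # Leroy (2010) Class 1 and 2
NON_COMPLIANT = [3, 4, 5]  # Leroy (2010) Class 3, 4, 5

def gridded_comparison(stations: pd.DataFrame, baseline_class: int):
    """Average, over grid boxes, of compliant and non-compliant mean trends,
    each expressed relative to the baseline class within the same box."""
    compliant_rel, noncompliant_rel = [], []
    for _, box in stations.groupby("grid_box"):
        base = box.loc[box["leroy_class"] == baseline_class, "trend"].mean()
        comp = box.loc[box["leroy_class"].isin(COMPLIANT), "trend"].mean()
        nonc = box.loc[box["leroy_class"].isin(NON_COMPLIANT), "trend"].mean()
        if pd.isna(base) or pd.isna(comp) or pd.isna(nonc):
            continue  # skip boxes lacking stations of a required class
        compliant_rel.append(comp - base)       # step 1: within-box comparison
        noncompliant_rel.append(nonc - base)
    n = len(compliant_rel)
    return sum(compliant_rel) / n, sum(noncompliant_rel) / n  # step 2: average the boxes

Running this four times, once per baseline class, would yield the four comparisons shown below.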
Class 4 stations are the most numerous (36%), so this comparison is the most robust.
Compliant (Class 1/2) stations show a trend 0.095°C/decade lower than non-compliant (Class 3/4/5) stations.
Class 3 stations are the second most numerous (33%).
Compliant (Class 1/2) stations show a trend 0.102°C/decade lower than non-compliant (Class 3/4/5) stations.
Class 1/2 stations comprise only 20% of the total.
Compliant (Class 1/2) stations show a trend 0.082°C/decade lower than non-compliant (Class 3/4/5) stations.
Class 5 stations comprise only 12% of the total, so the results cannot be considered robust. Yet the same pattern emerges:
compliant (Class 1/2) stations show a trend 0.076°C/decade lower than non-compliant (Class 3/4/5) stations.
Part 2
Equipment
This is a look at how the different types of equipment affect the data.
• CRS: Cotton Region Shelters (a/k/a “Stevenson Screens”)
• MMTS (Maximum-Minimum Temperature
Sensors)
• ASOS (Automated Surface Observing
Systems)
This compares different equipment. Note that the modern MMTS shows a
significantly lower trend than the obsolete CRS and the notoriously unreliable
ASOS. Yet rather than adjusting CRS and ASOS trends downward to match
MMTS, MMTS trends are adjusted upwards to conform with the older, less
reliable equipment.
CRS equipment shows a higher overall trend than MMTS and
somewhat less difference between compliant and non-compliant
stations (0.64). Part of this is due to poor distribution of stations and is
addressed by gridding (see next slide).
After gridding and baselining to Class 4, CRS equipment shows a
difference between compliant and non-compliant stations of 0.73.
Modern MMTS equipment shows a much larger difference between compliant and non-compliant stations (0.173).
(ASOS comparisons cannot be made, as there are too few for a robust internal comparison. They tend to be better sited, almost exclusively in airports, yet their trends are higher owing to an equipment (HO-83) failure issue and other factors pertaining to the unique situations in airports.)
After gridding and baselining to Class 4, MMTS equipment shows a
slightly smaller, yet still very large difference between compliant and
non-compliant stations (0.164).
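As a usage note (not the actual processing code), the gridded_comparison() sketch from Part 1 could be run separately per equipment type, assuming a hypothetical 'equipment' column holding 'CRS', 'MMTS', or 'ASOS':

# Hypothetical per-equipment reuse of the Part 1 sketch, baselined to Class 4.
for equip in ("CRS", "MMTS"):          # ASOS omitted: too few for a robust comparison
    subset = stations[stations["equipment"] == equip]
    comp, nonc = gridded_comparison(subset, baseline_class=4)
    print(equip, round(nonc - comp, 3))  # non-compliant minus compliant difference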
Part 3
Urban vs. Rural
This section confirms that urbanization not only increases the readings, but also the trends.
In addition, urbanization is found to dampen, though not eliminate, the differences between compliant (Class 1&2) and non-compliant (Class 3,4,5) stations. Rural stations show the greatest disparity.
This is significant because 10% of the rated sites are urban and 25% are semi-urban, a far greater proportion than urban and semi-urban land represents of the actual CONUS surface area. Therefore, to that extent, the trends are exaggerated.
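A hedged sketch of how such a breakdown might be tabulated, assuming an additional hypothetical 'urbanization' column with values 'rural', 'semi-urban', and 'urban' alongside the columns used earlier:

# Mean trend by urbanization category and siting compliance (hypothetical columns).
stations["compliant"] = stations["leroy_class"].isin([1, 2])
summary = stations.groupby(["urbanization", "compliant"])["trend"].mean().unstack()
print(summary)  # rows: rural / semi-urban / urban; columns: False = non-compliant, True = compliant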
We now turn our attention to urban vs. rural trends. Urban trends are
much higher than rural (0.99) with semi-urban trends squarely in
between, at 0.56 higher than rural.
The difference between compliant and non-compliant rural stations is much greater (0.095) than for urban stations.
The difference between compliant and non-compliant semi-urban
stations is also much greater (0.114) than for urban stations.
The difference between compliant and non-compliant urban stations is much smaller (0.037), as urban waste heat overwhelms nominally compliant and non-compliant stations alike. Class 4 stations in urban areas show the same tendencies as Class 5 stations in rural areas.
This chart demonstrates the large effect of urban areas on (otherwise) compliant stations. It also tells us how NOAA deals with this by way of adjustment: namely, non-urban trends appear to be adjusted upward to match urban trends rather than urban trends being adjusted downward to match rural trends.
Non-compliant (Class 3,4,5) stations show somewhat less
urban-rural difference than the compliant (Class 1,2)
stations.
Part 4
Gridded, but with no Baseline
We now examine the data without any
baseline.
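A minimal sketch of this no-baseline variant, under the same hypothetical column names as before: per-box mean trends for each group, then a plain average over boxes with no reference class subtracted.

# Gridded comparison with no baseline class (hypothetical columns).
import pandas as pd

def gridded_no_baseline(stations: pd.DataFrame):
    comp_boxes, nonc_boxes = [], []
    for _, box in stations.groupby("grid_box"):
        c = box.loc[box["leroy_class"].isin([1, 2]), "trend"].mean()
        n = box.loc[box["leroy_class"].isin([3, 4, 5]), "trend"].mean()
        if pd.isna(c) or pd.isna(n):
            continue  # box lacks one of the two groups
        comp_boxes.append(c)
        nonc_boxes.append(n)
    return sum(comp_boxes) / len(comp_boxes), sum(nonc_boxes) / len(nonc_boxes)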
Without a baseline, the data shows a trend 0.077 lower for compliant (Class 1&2) stations than for non-compliant (Class 3,4,5) stations. This is consistent with our overall findings.
And this shows how NOAA adjusts for the differences: not by adjusting the non-compliant stations downward to match the compliant stations, but by adjusting the compliant stations upward to match those stations that are out of compliance.
Finally, we showcase the best equipment with urban and semi-urban stations excluded (a handful of rural airports are included, however). This data is not gridded or baselined, but is a simple national average. The warming effects of poor siting are obvious, as are the effects of the NOAA adjustment procedure. Compliant trends are fully 0.190 higher after NOAA adjustment.
Baseline Comparisons
It is, of course, important to provide a simple, ungridded nationwide average of all well sited stations and all poorly sited stations, and indeed we provide those figures.
But a nationwide average can be skewed by poor station distribution, if well (or poorly) sited stations are concentrated in some areas but not in others. Furthermore, it is not very revealing to compare a well sited station in Northern Virginia with a poorly sited station in Arizona. One would want to compare well sited (Class 1&2) stations with nearby poorly sited (Class 3,4,5) stations, and vice versa.
Therefore, gridding and baselining are desirable.
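For contrast, the simple ungridded nationwide figures referred to above reduce to two class-filtered means (same hypothetical columns as in the sketches earlier); this is the quantity that uneven station distribution can skew.

# Ungridded national averages; vulnerable to uneven geographic distribution.
compliant_avg = stations.loc[stations["leroy_class"].isin([1, 2]), "trend"].mean()
noncompliant_avg = stations.loc[stations["leroy_class"].isin([3, 4, 5]), "trend"].mean()
print(noncompliant_avg - compliant_avg)  # nationwide compliant vs non-compliant gap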