AN ABSTRACT OF THE DISSERTATION OF
Ofer Heyman for the degree of Doctor of Philosophy in Geography presented on
December 1, 2003.
Title: A Per-Segment Approach to Improving Aspen Mapping from Remote Sensing
Imagery and Its Implications at Different Scales.
Abstract approved:
A. Jon Kimerling
A per-segment classification system was developed to map aspen (Populus
tremuloides) stands on Winter Ridge in central Oregon from remote sensing imagery.
A 1-meter color infrared (CIR) image was segmented based on its hue and saturation
values to generate aspen "candidates", which were then classified to show aspen
coverage according to the mean values of spectral reflectance and multi-resolution
texture within the segments. For a three-category mapping, an 88 percent overall
accuracy with a K-hat statistic of 0.82 was achieved, while for a two-category
mapping, a 90 percent overall accuracy with a K-hat statistic of 0.78 was obtained.
In order to compare these results to traditional per-pixel classifications, an
unsupervised classification procedure based on the ISODATA algorithm was applied
to both pixel-based and segment-based seven-layer images. While differences among
various per-pixel classifications were found to be insignificant, the results from the
per-segment system were consistently more than 20 percent better than those from
per-pixel classifications.
Both the per-segment and per-pixel classifications were applied at various spatial
resolutions in order to study the effect of spatial resolution on the relative performance
of the two methods. The per-segment classifier outperformed the per-pixel classifier at
the 1-4-m resolution, performed equally well at the 8-16-m resolution and showed no
ability to classify accurately at the 32-m resolution due to the segmentation process
used. Overall, the per-segment method was found to be more scale-sensitive than the
per-pixel method and required some tuning to the segmentation algorithm at lower
resolutions. These results illustrate the advantages of per-segment methods at high
spatial resolutions but also suggest that segmentation algorithms should be applied
carefully at different spatial resolutions.
©Copyright by Ofer Heyman
December 1, 2003
All Rights Reserved
A Per-Segment Approach to Improving Aspen Mapping from Remote Sensing
Imagery and Its Implications at Different Scales
by
Ofer Heyman
A DISSERTATION
Submitted to
Oregon State University
in partial fulfillment of
the requirements for the
degree of
Doctor of Philosophy
Presented December 1, 2003
Commencement June 2004
Doctor of Philosophy dissertation of Ofer Heyman presented on December 1, 2003.
APPROVED:
Major Professor, representing Geography
Chair of the Department of Geosciences
Dean of the Graduate School
I understand that my dissertation will become part of the permanent collection of
Oregon State University libraries. My signature below authorizes release of my
dissertation to any reader upon request.
Ofer Heyman, Author
CONTRIBUTION OF AUTHORS
Dr. Gregory G. Gaston assisted with data collection and with the design of Chapter 2.
Mr. Jeffery T. Campbell assisted in preliminary field work and with the writing of the
Chapter 2. Dr. A. Jon Kimerling assisted with the design and writing of Chapters 2, 3
and 4.
TABLE OF CONTENTS
Page
CHAPTER 1. INTRODUCTION ............................................................................................ 1
CHAPTER 2. A PER-SEGMENT APPROACH TO IMPROVING ASPEN MAPPING FROM
HIGH-RESOLUTION REMOTE SENSING IMAGERY ............................................................. 8
Abstract ....................................................................................................................... 9
Introduction .............................................................................................................. 10
Study Area ................................................................................................................ 13
Data........................................................................................................................... 15
Methods .................................................................................................................... 15
Segmentation ......................................................................................................... 17
Classification ......................................................................................................... 19
Results ...................................................................................................................... 20
Discussion................................................................................................................. 23
Literature Cited ......................................................................................................... 24
CHAPTER 3. PER-SEGMENT VS. PER-PIXEL CLASSIFICATION OF ASPEN STANDS
FROM HIGH-RESOLUTION REMOTE SENSING DATA ...................................................... 27
Abstract ..................................................................................................................... 28
Introduction .............................................................................................................. 29
Per-Pixel Classification ......................................................................................... 30
Per-Segment Classification ................................................................................... 31
Data and Study Area ................................................................................................. 33
Methods .................................................................................................................... 36
Data Preparation .................................................................................................... 36
Segmentation ......................................................................................................... 36
Classification ......................................................................................................... 37
Accuracy Assessment ........................................................................................... 39
Results and Discussion ............................................................................................. 40
Conclusions .............................................................................................................. 47
References ................................................................................................................ 48
CHAPTER 4. THE EFFECT OF IMAGERY SPATIAL-RESOLUTION ON THE ACCURACY
OF PER-SEGMENT AND PER-PIXEL ASPEN MAPPING ..................................................... 51
Abstract..................................................................................................................... 52
Introduction .............................................................................................................. 53
Data and Study Area ................................................................................................. 56
Methods .................................................................................................................... 59
Data preparation .................................................................................................... 59
Classifications ....................................................................................................... 59
Accuracy assessment ............................................................................................ 63
Comparisons ......................................................................................................... 63
Results and Discussion ............................................................................................. 64
Per-segment classification .................................................................................... 70
Per-pixel classification .......................................................................................... 72
Inter-resolution comparisons ................................................................................ 80
Three-category per-segment classifier .............................................................. 80
Two-category per-segment classifier ................................................................ 80
Two-category per-pixel classifier ...................................................................... 81
Inter-method comparisons .................................................................................... 81
Conclusion ................................................................................................................ 82
References ................................................................................................................ 84
CHAPTER 5. CONCLUSIONS ........................................................................................... 87
BIBLIOGRAPHY .............................................................................................................. 90
LIST OF FIGURES
Figure                                                                                                Page
2.1  Winter Ridge, Oregon study area ..................................................................... 14
2.2  Per-segment classification model for aspen stand mapping ............................ 16
2.3  Histograms of hue (left) and min saturation (right). The ranges of values used for the segmentation lie between bold vertical lines .................... 18
2.4  Results of aspen mapping on Winter Ridge, Oregon, based on per-segment classification model ........................................................................... 21
3.1  Per-segment classification concept .................................................................. 32
3.2  NHAP color infrared image used for aspen mapping ...................................... 34
3.3  Winter Ridge, Oregon study area ..................................................................... 35
3.4  Per-segment classification model for aspen stand mapping ............................ 38
3.5  Per-segment classification results for aspen mapping ..................................... 41
3.6  Per-pixel classification results for aspen mapping ........................................... 45
4.1  Color infrared image used for aspen mapping ................................................. 57
4.2  Winter Ridge, Oregon study area ..................................................................... 58
4.3  Per-segment classification model for aspen stand mapping ............................ 61
4.4  Overall accuracy of per-segment and per-pixel aspen mapping ...................... 65
4.5  Per-segment and per-pixel classification results for aspen mapping at varying spatial resolutions: (a) per-segment at 1-m resolution, (b) per-pixel at 1-m resolution, (c) per-segment at 2-m resolution, (d) per-pixel at 2-m resolution, (e) per-segment at 4-m resolution, (f) per-pixel at 4-m resolution, (g) per-segment at 8-m resolution, (h) per-pixel at 8-m resolution, (i) per-segment at 16-m resolution, (j) per-pixel at 16-m resolution, (k) per-segment at 32-m resolution, (l) per-pixel at 32-m resolution .................................................. 67
LIST OF TABLES
Table                                                                                                 Page
2.1  Error matrices for accuracy assessment of per-segment aspen mapping. Top to bottom: original system settings; thresholding level change from 0.125 to 0.160; disabling of morphological opening operation ....................... 22
3.1  Error matrix for accuracy assessment of per-segment classification for three-level aspen mapping ............................................................................... 42
3.2  Error matrix for accuracy assessment of per-segment classification for two-level aspen mapping ................................................................................. 42
3.3  Error matrix for accuracy assessment of per-pixel classification for aspen mapping using ISODATA with 20 classes ............................................ 44
3.4  Error matrix for accuracy assessment of per-pixel classification for aspen mapping using ISODATA with 50 classes ............................................ 44
3.5  Error matrix for accuracy assessment of per-pixel classification for aspen mapping using ISODATA with 20 classes masked for vegetation only by an initial 50-class ISODATA .............................................................. 46
4.1  A summary of accuracy assessment results of all combinations of classification method, category level and spatial resolution tested in this study of aspen mapping in Winter Ridge, Oregon ........................................... 66
4.2  Error matrices for accuracy assessment of aspen mapping using imagery at 1-m ground resolution .................................................................... 74
4.3  Error matrices for accuracy assessment of aspen mapping using imagery at 2-m ground resolution .................................................................... 75
4.4  Error matrices for accuracy assessment of aspen mapping using imagery at 4-m ground resolution .................................................................... 76
4.5  Error matrices for accuracy assessment of aspen mapping using imagery at 8-m ground resolution .................................................................... 77
4.6  Error matrices for accuracy assessment of aspen mapping using imagery at 16-m ground resolution .................................................................. 78
4.7  Error matrix for accuracy assessment of aspen mapping using imagery at 32-m ground resolution ................................................................................ 79
A Per-Segment Approach to Improving Aspen Mapping from Remote Sensing
Imagery and Its Implications at Different Scales
Chapter 1. INTRODUCTION
Numerous automatic methods for vegetation mapping using remote sensing data have
been developed by researchers in many countries during the last three decades. In the
intermountain West, however, most forest mapping is still done in a non-automatic
fashion using ground and airborne visual surveys, as well as manual interpretation of
aerial photographs. Bolstad and Lillesand (1992) argued that the main reason why
forestland managers had been very slow in adopting digital remote sensing data was
the unacceptably low (<80 percent) classification accuracy. For example, Kalkhan et
al. (1998) obtained a 60 percent accuracy utilizing double sampling compared to a 50
percent accuracy with traditional single sampling of the reference points in Rocky
Mountain National Park using Thematic Mapper (TM) and Digital Elevation Model
(DEM) data. Aspens, which covered one percent of the study area, were mapped at
less than 15 percent accuracy. Laba et al. (2002) checked the New York Gap Analysis
Project land cover map and found 42-74 percent overall accuracy (class level
dependent) using conventional accuracy assessment, which was improved by up to 20
percent using fuzzy accuracy assessment. Joy et al. (2003) combined 30-m TM data
with 10-m field samples and used decision tree classifications for vegetation mapping
in Northern Arizona to obtain overall accuracy of 75 percent with a K-hat statistic of
0.50.
One way to improve the accuracy of vegetation land-cover mapping utilizing per-pixel
methods is by using higher spectral resolution data. Too often, however, the results are
not sufficiently better in terms of mapping accuracy. For example, Ustin and Xiao
(2001) mapped boreal forests in interior Alaska and achieved 74 percent accuracy at a
species level using 20-m ground resolution Advanced Visible/InfraRed Imaging
Spectrometer (AVIRIS) imagery with 224 10-nm bands, compared to 43 percent
accuracy using Satellite Pour l'Observation de la Terre (SPOT) data. Kokaly et al.
(2003) mapped vegetation in Yellowstone National Park and obtained 74 percent
overall accuracy with a K-hat statistic of 0.62 using 15-m AVIRIS hyperspectral data.
Franklin et al. (2001) obtained 80 percent accuracy at a species dominance/co-dominance level incorporating spatial co-occurrence texture with one-meter resolution
Compact Airborne Spectrographic Imager (CASI) imagery. These examples
demonstrate the weakness of per-pixel methods in exploiting the information
contained in multi- and hyper-spectral remote sensing data, and the need for
alternative ways to obtain higher accuracy vegetation mapping.
Another drawback of per-pixel classification methods is that although the information
content of the imagery increases with increased spatial resolution, the accuracy of land
cover classification may decrease due to an increase in variability within each class
(Irons et al., 1985; Cushnie, 1987). Hsieh et al. (2001) illustrated the inverse effect of
spatial resolution on the classification errors associated with pure pixels and mixed
pixels. They concluded that the typical per-pixel classifier may not take advantage of
the information available in high-resolution imagery. Chen and Stow (2002) showed a
consistent increase in the K-hat statistic as spatial resolution decreases from 2-m to
16-m through 4-m, 8-m and 12-m. Mumby and Edwards (2002) noticed better
delineations of habitat patches with higher resolution IKONOS data, but did not obtain
higher accuracy using these data compared to their results using TM data. As more
and more high spatial-resolution data become available (e.g. IKONOS, QuickBird),
there is a growing need to develop innovative methods to overcome the drawbacks of
current methods and to take advantage of the additional information embodied in the
data in order to improve mapping accuracy.
Per-segment, as opposed to per-pixel, classification provides a tool in which the
texture and spatial variability inherent in high spatial resolution imagery can be
exploited. With a per-segment approach, segments or objects, rather than arbitrary
pixels, are classified as independent units. Segmentation algorithms have been used in
land cover mapping to partition images into elements that were then classified by a
maximum likelihood or other allocation rule (e.g., Johnsson, 1994; Ryherd and
Woodcock, 1996; Lobo et al., 1996; Lobo, 1997; Aplin et al., 1999; Geneletti and
Gorte, 2003). The per-segment method is particularly effective for mapping specific
types of vegetation. In this study, the target species was quaking aspen (Populus
tremuloides), which has been identified as a key habitat for wildlife, including many
bird species (DeByle, 1985; Dieni and Anderson, 1997). Aspen mapping is crucial to
many ecological studies and is required for successful land management, particularly
in areas like Central Oregon where aspens are a minor component of the landscape.
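As a toy illustration of the per-segment idea described above (the segment labels, feature values, and threshold below are hypothetical, not the system developed in this dissertation), each segment can be summarized by its mean feature value and then classified as a whole unit:

```python
import numpy as np

def classify_segments(labels, feature, threshold):
    """Classify whole segments by their mean feature value.

    labels    : 2-D int array, one segment id per pixel (0..n-1)
    feature   : 2-D float array, e.g. NIR reflectance or a texture measure
    threshold : segments with mean feature >= threshold map to class 1
    Returns a 2-D array with one class label per pixel.
    """
    n = labels.max() + 1
    # Per-segment sums and pixel counts in a single pass.
    sums = np.bincount(labels.ravel(), weights=feature.ravel(), minlength=n)
    counts = np.bincount(labels.ravel(), minlength=n)
    means = sums / counts
    seg_class = (means >= threshold).astype(int)  # one decision per segment
    return seg_class[labels]  # broadcast the decision back to every pixel

# Toy 4x4 image with two segments: left half (id 0) dark, right half (id 1) bright.
labels = np.array([[0, 0, 1, 1]] * 4)
feature = np.array([[0.2, 0.3, 0.8, 0.7]] * 4)
print(classify_segments(labels, feature, threshold=0.5))
```

Because every pixel in a segment receives the same decision, within-class spectral variability that would produce salt-and-pepper noise in a per-pixel map is averaged out before classification.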
In order to address the issue of improved-quality aspen mapping in the intermountain
West, a per-segment system was developed through this research using color infra-red
imagery scanned at 1-m ground resolution. The algorithm used the image itself for the
segmentation based on its hue and saturation values, and the segments were then
classified according to their spectral and multi-resolution textural characteristics.
Utilizing this method, an 88 percent overall accuracy was obtained with a K-hat
statistic of 0.82 for three categories of aspen coverage and a 90 percent overall
accuracy with a K-hat value of 0.78 for a two-category coverage scheme. Then,
rigorous comparisons of the results to those obtained by per-pixel classifications using
the same data were made, which showed a significant difference in accuracy between
the methods, with the per-pixel mapping not exceeding 70 percent overall accuracy.
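The overall accuracy and K-hat (Cohen's kappa) figures quoted throughout are standard error-matrix statistics. As a minimal sketch with a made-up two-category error matrix (not data from this study):

```python
import numpy as np

def overall_accuracy_and_khat(error_matrix):
    """Overall accuracy and K-hat (Cohen's kappa) from an error matrix.

    error_matrix[i, j] = number of samples of reference class j
    assigned to map class i.
    """
    m = np.asarray(error_matrix, dtype=float)
    n = m.sum()
    observed = np.trace(m) / n  # overall accuracy: diagonal / total
    # Chance agreement from the row and column marginals.
    expected = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2
    khat = (observed - expected) / (1 - expected)
    return observed, khat

# Hypothetical matrix: 45 correct per class, 5 confusions each way.
matrix = [[45, 5],
          [5, 45]]
oa, khat = overall_accuracy_and_khat(matrix)
print(f"overall accuracy = {oa:.2f}, K-hat = {khat:.2f}")
# -> overall accuracy = 0.90, K-hat = 0.80
```

K-hat discounts the agreement expected by chance, which is why a 90 percent overall accuracy can correspond to a K-hat well below 0.90, as in the two-category results above.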
Finally, this research studied the effect of spatial resolution of the source imagery data
on the relative performance of per-segment and per-pixel classifiers. For this purpose,
both per-segment and per-pixel classifications were implemented using the same data
from the same study area at various resolutions. This procedure allowed the
examination of the effect of spatial resolution on each method, and the comparison of
the methods at each resolution. The per-segment classification method was found to be
more scale-sensitive and to significantly outperform per-pixel classification of aspen
stands.
This study was carried out in three phases, each of which was summarized into a
manuscript and submitted to a peer-reviewed journal with the author of this
dissertation as the primary contributor. In the first phase, which is presented in the
second chapter, the per-segment classification system for aspen mapping was
developed and applied to the study area in Central Oregon (HEYMAN, O., G. G.
GASTON, A. J. KIMERLING, AND J. T. CAMPBELL, 2003. Journal of Forestry 101 (4):
29-33). In the second phase, which is presented in the third chapter, the results
obtained by the per-segment classification method were compared to those from
traditional per-pixel classifications. Rigorous comparisons were made using various
classification methods and schemes, utilizing error matrices and Z test statistics. In the
third phase, which is presented in the fourth chapter, both per-segment and per-pixel
classification methods were applied at various spatial resolutions in order to study
the effect of spatial resolution on the relative performance of the two methods.
References
APLIN, P., P. M. ATKINSON, and P. J. CURRAN, 1999. Per-field classification of land use
using the forthcoming very fine spatial resolution satellite sensors: problems and
potential solutions. In Advances in Remote Sensing and GIS Analysis (ATKINSON,
P. M., and N. J. TATE, Eds.), John Wiley & Sons: 219-239.
BOLSTAD, P. V., AND T. M. LILLESAND, 1992. Improved classification of forest
vegetation in northern Wisconsin through a rule-based combination of soils,
terrain, and Landsat Thematic Mapper data. Forest Science 38 (1): 5-20.
CHEN, D. M., AND D. STOW, 2002. The effect of training strategies on supervised
classification at different spatial resolutions. Photogrammetric Engineering &
Remote Sensing 68 (11): 1155-1161.
CUSHNIE, J. L., 1987. The interactive effect of spatial resolution and degree of internal
variability within land-cover types on classification accuracies. International
Journal of Remote Sensing 8 (1): 15-29.
DEBYLE, N. V., 1985. Wildlife. In Aspen: Ecology and Management in the Western
United States (DEBYLE, N. V., and R. P. WINOKUR, Eds.), USDA Forest Service
General Technical Report RM-119: 135-152.
DIENI, J. S., and S. H. ANDERSON, 1997. Ecology and management of Aspen forests in
Wyoming, literature review and bibliography. Wyoming Cooperative Fish and
Wildlife Research Unit, University of Wyoming, 118 pp.
FRANKLIN, S. E., A. J. MAUDIE, AND M. B. LAVIGNE, 2001. Using spatial co-occurrence texture to increase forest structure and species composition
classification accuracy. Photogrammetric Engineering & Remote Sensing 67 (7):
849-855.
GENELETTI, D., AND B. G. H. GORTE, 2003. A method for object-oriented land cover
classification combining Landsat TM data and aerial photographs. International
Journal of Remote Sensing 24 (6): 1273-1286.
HSIEH, P. F., L. C. LEE, AND N. Y. CHEN, 2001. Effect of spatial resolution on
classification errors of pure and mixed pixels in remote sensing. IEEE
Transactions on Geoscience and Remote Sensing 39 (12): 2657-2663.
IRONS, J. R., B. L. MARKHAM, R. F. NELSON, D. L. TOLL, D. L. WILLIAMS, R. S. LATTY,
and M. L. STAUFFER, 1985. The effects of spatial resolution on the classification
of Thematic Mapper data. International Journal of Remote Sensing 6 (8): 1385-1403.
JOHNSSON, K., 1994. Segment-based land-use classification from SPOT satellite data.
Photogrammetric Engineering & Remote Sensing 60 (1): 47-53.
Joy, S. M., R. M. REICH, AND R. T. REYNOLDS, 2003. A non-parametric, supervised
classification of vegetation types on Kaibab National Forest using decision trees.
International Journal of Remote Sensing 24 (9): 1835-1852.
KALKHAN, M. A., R. M. REICH, AND T. J. STOHLGREN, 1998. Assessing the accuracy of
Landsat Thematic Mapper classification using double sampling. International
Journal of Remote Sensing 19 (11): 2049-2060.
KOKALY, R. F., D. G. DESPAIN, R. N. CLARK, AND K. E. LIVO, 2003. Mapping
vegetation in Yellowstone National Park using spectral feature analysis of
AVIRIS data. Remote Sensing of Environment 84: 437-456.
LABA, M., S. K. GREGORY, J. BRADEN, D. OGURCAK, E. HILL, E. FEGRAUS, J. FIORE,
AND S. D. DEGLORIA, 2002. Conventional and fuzzy accuracy assessment of the
New York Gap Analysis Project land cover map. Remote Sensing of Environment
81 (2-3): 443-455.
LOBO, A., 1997. Image segmentation and discriminant analysis for the identification of
land cover units in ecology. IEEE Transactions on Geoscience and Remote
Sensing 35 (5): 1136-1145.
LOBO, A., O. CHIC, and A. CASTERAD, 1996. Classification of Mediterranean crops
with multisensor data: per-pixel versus per-object statistics and image
segmentation. International Journal of Remote Sensing 17 (12): 2385-2400.
MUMBY, P. J., AND A. J. EDWARDS, 2002. Mapping marine environments with
IKONOS imagery: enhanced spatial resolution can deliver greater thematic
accuracy. Remote Sensing of Environment 82: 248-257.
RYHERD, S., and C. WOODCOCK, 1996. Combining spectral and texture data in the
segmentation of remotely sensed images. Photogrammetric Engineering &
Remote Sensing 62 (2): 181-194.
USTIN, S. L., AND Q. F. XIAO, 2001. Mapping successional boreal forests in interior
central Alaska. International Journal of Remote Sensing 22 (9): 1779-1797.
Chapter 2. A PER-SEGMENT APPROACH TO IMPROVING ASPEN MAPPING FROM
HIGH-RESOLUTION REMOTE SENSING IMAGERY
HEYMAN, O., G. G. GASTON, A. J. KIMERLING, AND J. T. CAMPBELL.
Journal of Forestry
5400 Grosvenor Lane, Bethesda, MD 20814-2198
June 2003, Volume 101, Number 4, p. 29-33.
Abstract
Aspen (Populus tremuloides) stands on Winter Ridge in central Oregon were mapped
from remote sensing imagery utilizing a per-segment approach. A 1-meter color
infrared (CIR) image was segmented based on its hue and saturation values to generate
aspen "candidates", which were then classified to show aspen coverage according to
the mean values of multiresolution texture and spectral reflectance within the
segments. With three broad categories for aspen distribution, overall accuracy was 88
percent, with a K-hat statistic of 0.82. The classification method holds promise
for more detailed mapping of aspen from fine-resolution satellite imagery.
Introduction
Quaking aspen (Populus tremuloides) has been identified as a critical habitat for
wildlife, including many bird species (Dieni and Anderson, 1997). DeByle (1985)
argued that, in many instances, aspen forests provide the only available nesting
microhabitat for ground- and shrub-nesting species, as well as opportunities for cavity-
nesting species. Aspen mapping is therefore crucial to many ecological studies and is
required for successful land management.
Aspen populations are declining throughout the western United States. All states in
the West have suffered from this decline, but as yet there is no consensus on the cause
(FRL, 1998). To assist in studying these processes, an efficient and reliable method of
aspen mapping at a high level of detail is required.
Numerous automatic methods for forest classification using remote sensing data have
been developed by researchers in many countries during the last three decades. In the
intermountain West, however, identifying aspen stands is still done in a traditional
fashion using ground and airborne visual surveys, as well as manual reviews of aerial
photographs. Bolstad and Lillesand (1992) argued that the main reason why forestland
managers were very slow to adopt remote sensing data was the unacceptably low (<80
percent) classification accuracy. Landsat Thematic Mapper (TM), Satellite Pour
l'Observation de la Terre (SPOT), and other remote sensing data with 10- to 30-meter
ground resolution have been used extensively for land cover mapping despite overall
accuracies as low as 40-70 percent and even lower for individual classes. Laba et al.
(2002) checked the New York Gap Analysis Project land cover map and found 42-74
percent (class-level dependent) overall accuracy using conventional accuracy
assessment, which could be improved by using fuzzy accuracy assessment (for more
details on fuzzy accuracy assessment, see Gopal and Woodcock, 1994).
Kalkhan et al. (1998) evaluated land cover classification in Rocky Mountain National
Park and showed accuracies on the order of 50 percent, which rose to 60 percent by
using double sampling. Ustin and Xiao (2001) noted that in the mapping of
successional boreal forests in interior central Alaska, they achieved 74 percent
accuracy using Advanced Visible/InfraRed Imaging Spectrometer (AVIRIS) data
compared with 43 percent using SPOT data. Some improvements were achieved by
using multi-date data (e.g., Wilson and Sader 2002) and incorporating texture and
ancillary GIS data (e.g., Bolstad and Lillesand, 1992; Debeir et al., 2002). Franklin et
al. (2001) increased the accuracy at the stand level to 75 percent by using spatial co-occurrence texture with high-spatial-resolution (<1 m) multispectral imagery.
Detailed and accurate mapping of aspen stands requires images with high spatial
resolution. Given the preference for imagery with high spatial resolution and
appropriate spectral information, a choice of classification technique must next be
made. A drawback of traditional automatic classification methods, which use per-pixel
classification, is that although the information content of the imagery increases with
increased spatial resolution, the accuracy of land-use classification may decrease
because of an increase in the variability within each class (Irons et al., 1985; Cushnie,
1987). Although Salajanu and Olson (2001) show a 10 percent increase in accuracy
with 20 m SPOT-XS versus 30 m Landsat TM data, they achieve no more than 70
percent accuracy at the species level using supervised classification with a maximum
likelihood decision rule.
To extract aspen stands in an automatic fashion from high-spatial-resolution imagery,
this study uses a per-segment approach. Per-segment, as opposed to per-pixel,
classification takes advantage of the spatial variability and texture inherent in fine-spatial-resolution imagery. Segments or objects, rather than pixels, are classified as
independent units. This concept has been applied successfully to agricultural study
areas with parcel maps or other field-related data (e.g., Aplin et al., 1999). In a
forested environment, such field data are not available, and the image itself must be
used for the initial segmentation. Segmentation algorithms have been used in land
cover mapping to partition images into elements that were then classified by a
maximum likelihood or other allocation rule (e.g., Johnsson, 1994; Ryherd and
Woodcock, 1996; Lobo et al., 1996; Lobo, 1997). Here, the image itself was used for
the initial segmentation to create aspen stand "candidates," which were then classified
into three broad categories of aspen coverage based on their spectral and textural
characteristics.
Study Area
The 6-square-kilometer (2.3-square-mile) study area is located on Winter Ridge in
central Oregon within the Fremont National Forest (Figure 2.1). Although Winter
Ridge is well known for its aspen stands, no detailed map depicting those stands is
available (R. L. Wooley, pers. comm., 2001). The study area contains a variety of
sizes of aspen stands, some pure and some mixed with conifers, mainly lodgepole pine
(Pinus contorta) and ponderosa pine (Pinus ponderosa).
Aspen grows under a wide variety of climatic and environmental conditions. In the
West, aspen forms extensive pure stands in some areas but is a minor component of
the forest landscape in other areas (Jones, 1985). Winter Ridge, with an elevation of
1,950 to 2,150 meters (6,500 to 7,000 feet), a gentle western slope of 4 percent, annual
precipitation of 320 millimeters (12.6 inches), only 18.6 hot days (>32°C/90°F) a year
(OCS 2001, statistics from 1971 to 2000), and a variety of stands, may be regarded as
a typical aspen site.
Figure 2.1. Winter Ridge, Oregon study area.
Data
Aerial CIR photographs from the USGS National High Altitude Photography Program
were used as the major data source. These photos were acquired on September 8,
1982, at an average scale of 1:58,000 using a 210-mm (8.25-in.) focal length mapping
camera with a ground resolution as small as 1-2 m (USGS 2001). The 9-in. CIR
transparencies were scanned with a photogrammetric scanner that produced pixels
with a 1.2-m ground resolution. Ground truth data were obtained from color aerial
videography, acquired on October 6, 2000, along with a field survey. The aerial
images had no better than 2-m spatial resolution, yet the timing during the peak of the
aspens' "golden season" made them a good source of ground truth information.
Methods
Aspen stands in the Winter Ridge study area were mapped using a per-segment
approach, in which segments rather than arbitrary picture elements were partitioned
from the remote sensing image and then classified by their spectral and textural
properties. A general illustration of the algorithm is shown in Figure 2.2.
Segmentation
The image was segmented based on its hue and saturation values according to the
following procedure.
The image was first transformed from RGB (red representing reflected near-infrared,
green for reflected red, and blue for reflected green) to IHS (intensity, hue, and
saturation). This was done because human experts use hue as a major cue when
interpreting such imagery.
- Hue and saturation images were derived from the original CIR image by an RGB to
  IHS transformation. Intensity is the overall brightness of the scene and varies from
  0 (black) to 1 (white). Hue is the color or dominant wavelength of the pixel and is
  defined as an angle on a hue circle from 0 (red) to 360 (violet). Saturation
  describes the purity of color and varies linearly from 0 (achromatic gray) to 1
  (pure) (ERDAS, 1999).
- A minimum filter in a 3x3 neighborhood was applied to the saturation image:
  S'(i,j) = min(S(m,n); i-1 <= m <= i+1, j-1 <= n <= j+1)
  This operation smoothed the image and created a more distinctive histogram for
  thresholding.
Overlapping areas of hue and minimum-saturation values in the ranges that
correspond to aspen according to their histograms (Figure 2.3) were considered
initial aspen stand candidates, represented as 1s in a binary output image:
Initial candidate = 1 if [(60 < H < 115) AND (S' > 0.125)], 0 otherwise
Figure 2.3. Histograms of hue (left) and min saturation (right). The ranges of values
used for the segmentation lie between bold vertical lines.
- A minimum filter in a 5x5 neighborhood followed by a maximum filter in the same
  neighborhood was applied to the initial candidates' image. This morphological
  opening process enables better separation of the segments and eliminates small
  ones.
The binary segments were then clumped to join neighboring pixels into contiguous
groups, which were sieved to eliminate clumps smaller than 25 pixels. The
remaining clumps constituted the basic segments to be classified.
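As a sketch, the segmentation steps above could be implemented with NumPy and SciPy as follows. The simple HSV-style hue/saturation computation stands in for the ERDAS RGB-to-IHS transform used in the study, and all function names are illustrative; the thresholds follow the values given in the text.

```python
import numpy as np
from scipy import ndimage

def rgb_to_hue_sat(rgb):
    """Hue (degrees, 0-360) and saturation (0-1) from a float RGB image.

    A simple HSV-style computation standing in for the RGB-to-IHS transform.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    c = mx - mn
    hue = np.zeros_like(mx)
    m = c > 0
    rm = m & (mx == r)
    gm = m & (mx == g) & ~rm
    bm = m & (mx == b) & ~rm & ~gm
    hue[rm] = (60 * (g - b)[rm] / c[rm]) % 360
    hue[gm] = 60 * (b - r)[gm] / c[gm] + 120
    hue[bm] = 60 * (r - g)[bm] / c[bm] + 240
    sat = np.where(mx > 0, c / np.maximum(mx, 1e-12), 0.0)
    return hue, sat

def segment_candidates(cir, hue_lo=60, hue_hi=115, sat_thresh=0.125,
                       min_pixels=25):
    """Binary segmentation of aspen 'candidates' as labeled clumps."""
    hue, sat = rgb_to_hue_sat(cir)
    # 3x3 minimum filter smooths the saturation image before thresholding
    sat_min = ndimage.minimum_filter(sat, size=3)
    cand = (hue > hue_lo) & (hue < hue_hi) & (sat_min > sat_thresh)
    # morphological opening (5x5 erosion, then dilation) separates segments
    # and eliminates small ones
    cand = ndimage.binary_opening(cand, structure=np.ones((5, 5), bool))
    # clump contiguous pixels, then sieve clumps below the minimum size
    labels, _ = ndimage.label(cand)
    sizes = np.bincount(labels.ravel())
    keep = np.flatnonzero(sizes[1:] >= min_pixels) + 1
    return np.where(np.isin(labels, keep), labels, 0)
```

The returned label image plays the role of the "basic segments to be classified": zero marks background, and each positive integer identifies one candidate clump.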
Classification
The aspen stand candidates extracted from the image as described above were
classified to reflect aspen percentage coverage. The classification was carried out
based on the mean values of spectral reflectance and multiresolution texture within the
segments using unsupervised ISODATA clustering with 20 classes, which were
converted into general categories of aspen coverage.
1. Multiresolution texture statistics were generated by applying an adaptive texture
operator to the near-infrared band of the original image and to images reduced
spatially by factors of 2, 4, and 8. Texture has proven useful in classification
methods, and variance has been suggested as a useful parameter (Zhang, 2001).
The standard deviation in a 5x5 window was used as the basic texture parameter,
which was then replaced by the minimum texture value among the nearest neighbors in
a 3x3 window:
T'(i,j) = stdv(Z(m,n); i-2 <= m <= i+2, j-2 <= n <= j+2)
T(i,j) = min(T'(m,n); i-1 <= m <= i+1, j-1 <= n <= j+1)
2. Mean values were calculated for each segment from the three CIR reflectance
channels and the four texture images.
3. Twenty classes were generated by applying unsupervised classification of the
segments based on the seven features using the ISODATA algorithm. These 20
classes were split into two general categories of aspen coverage within the stands,
namely less than 50 percent aspen and 50 percent or more aspen, using aerial
videography and 1:12,000 aerial color photographs as interpretation aids.
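Step 1 above could be sketched as follows; the simple decimation used to build the reduced-resolution images and the block-replication used to bring each texture layer back to full size are assumptions, since the text does not specify the resampling method.

```python
import numpy as np
from scipy import ndimage

def texture_layer(band):
    """Standard deviation in a 5x5 window, then a 3x3 minimum filter."""
    mean = ndimage.uniform_filter(band, size=5)
    mean_sq = ndimage.uniform_filter(band * band, size=5)
    stdv = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    return ndimage.minimum_filter(stdv, size=3)

def multiresolution_texture(nir, factors=(1, 2, 4, 8)):
    """Texture at several resolutions, replicated back to full image size."""
    layers = []
    for f in factors:
        reduced = nir[::f, ::f]             # simple decimation as a stand-in
        t = texture_layer(reduced)
        full = np.kron(t, np.ones((f, f)))  # block-replicate upsampling
        layers.append(full[:nir.shape[0], :nir.shape[1]])
    return np.stack(layers, axis=-1)
```

Stacking the four resulting texture layers with the three spectral bands yields the seven-feature input used in step 3.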
Results
The study area was divided into three categories to depict aspen distribution. Areas
with no aspens, whether open spaces or other tree species, were categorized as 0
percent aspen. Mixed stands of aspen and other tree species, in which aspens are a
minor component, were categorized as less than 50 percent aspen, and stands
dominated by aspens were categorized as 50 percent or more aspen. These mapping
results, which are shown in Figure 2.4, were achieved by classifying candidate
segments derived from the image as described in the methods section.
To assess the accuracy of the mapping, 200 random points were generated, at least 50
in each category. The segments (when they existed) containing each point were
examined in the field and on the ancillary imagery to determine their percentage of
aspen cover. The data for all 200 points were also categorized to the same three
classes (0 percent, <50 percent, and >50 percent) and constituted the reference data in
the error matrix used to validate the results (Table 2.1).
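The accuracy figures derived from such an error matrix can be computed directly; a minimal sketch (rows are mapped categories, columns are reference categories):

```python
import numpy as np

def accuracy_report(matrix):
    """Overall, producer's and user's accuracies and K-hat from an error matrix."""
    m = np.asarray(matrix, dtype=float)
    n = m.sum()
    overall = np.trace(m) / n
    producer = np.diag(m) / m.sum(axis=0)  # per reference column
    user = np.diag(m) / m.sum(axis=1)      # per mapped row
    # K-hat corrects the overall agreement for chance agreement
    chance = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2
    k_hat = (overall - chance) / (1 - chance)
    return overall, producer, user, k_hat
```

Applied to the error matrix with rows [47, 3, 0], [0, 61, 8], and [0, 13, 68], this returns an overall accuracy of 0.88 and a K-hat of about 0.82, matching the values reported above.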
[Map figure. Legend - Aspen Coverage: None; Minor (<50%); Predominant (>50%)]
Figure 2.4. Results of aspen mapping on Winter Ridge, Oregon, based on per-segment
classification model.
Table 2.1. Error matrices for accuracy assessment of per-segment aspen mapping. Top
to bottom: original system settings; thresholding level changed from 0.125 to 0.160;
morphological opening operation disabled.

Original system settings (Overall Accuracy: 88%; K-hat Statistic: 0.82)

Mapped \ Reference   No aspens   0-50% aspen   50-100% aspen   Total   User Accuracy
No aspens                   47             3               0      50             94%
0-50% aspen                  0            61               8      69             88%
50-100% aspen                0            13              68      81             84%
Total                       47            77              76     200
Producer Accuracy         100%           79%             89%

Thresholding level changed to 0.160 (Overall Accuracy: 89%; K-hat Statistic: 0.83)

Mapped \ Reference   No aspens   0-50% aspen   50-100% aspen   Total   User Accuracy
No aspens                   47             3               0      50             94%
0-50% aspen                  0            64               9      73             88%
50-100% aspen                0            10              67      77             87%
Total                       47            77              76     200
Producer Accuracy         100%           83%             88%

Morphological opening disabled (Overall Accuracy: 86%; K-hat Statistic: 0.79)

Mapped \ Reference   No aspens   0-50% aspen   50-100% aspen   Total   User Accuracy
No aspens                   47             2               0      49             96%
0-50% aspen                  0            58               9      67             87%
50-100% aspen                0            17              67      84             80%
Total                       47            77              76     200
Producer Accuracy         100%           75%             88%
To test the sensitivity of the mapping system to the thresholding level used in the
segmentation process, the whole procedure was applied with a different saturation
threshold value. The natural-break level of 0.125, which was selected visually from
the histogram, was found to be near optimal. An optimal manually selected level of
0.160 resulted in some minor differences and only a 1 percent increase in overall
accuracy. The morphological opening operation applied to the segments made them
match the aspen stands on the videography more closely, and it improved the overall
accuracy by 3 percent.
Discussion
The per-segment approach yielded 88 percent overall accuracy of aspen mapping into
three categories on Winter Ridge, Oregon. The reference data used for the accuracy
assessment, which were based on aerial videography and a field survey, were defined
on the CIR photo and thus overcame the issue of the 18-year time gap between the
photo and the videography. In any case, changes to the landscape were very minor
within the whole study area. A full comparison of the results with those of
traditional per-pixel classification requires a careful modification of the reference data,
as the outcomes are fundamentally different, and will be carried out and reported in a
separate paper.
The categories used in the classification may be changed according to the mapping
goals. In this study, three general classes (no aspen, minor, predominant) were used,
mimicking the outcomes of common mapping done by human experts. The system,
however, has the potential to provide more detailed information about the delineated
aspen stands. For accurate mapping at a finer level, additional features, such as
extracted shadows from the image, should be used in the classification. The minimal
mapping unit of the system was defined as 25 square meters. Smaller segments were
sieved out after the clumping step and were treated as noise, since an attempt to
classify them would require further investigation.
Many variables play a role in the crucial segmentation process, and therefore any
change in the mapping system parameters may affect the results. The thresholding
values were tested because they appeared to be the most sensitive components of the
system. However, before the system can be applied on a wider scale, the robustness of
the segmentation should be tested on other areas and using various data sources.
Literature Cited
Aplin, P., P.M. Atkinson, and P.J. Curran. 1999. Per-field classification of land use
using the forthcoming very fine spatial resolution satellite sensors: Problems and
potential solutions. In Advances in remote sensing and GIS analysis, eds. P.M.
Atkinson and N.J. Tate, 219-39. New York: John Wiley & Sons.
Bolstad, P.V., and T.M. Lillesand. 1992. Improved classification of forest vegetation
in northern Wisconsin through a rule-based combination of soils, terrain, and
Landsat Thematic Mapper data. Forest Science 38(1):5-20.
Cushnie, J.L. 1987. The interactive effect of spatial resolution and degree of internal
variability within land-cover types on classification accuracies. International
Journal of Remote Sensing 8(1):15-29.
Debeir, O., I. Van den Steen, P. Latinne, P. Van Ham, and E. Wolff. 2002. Textural
and contextual land-cover classification using single and multiple classifier
systems. Photogrammetric Engineering and Remote Sensing 68(6):597-605.
DeByle, N.V. 1985. Wildlife. In Aspen: Ecology and management in the western
United States, eds. N.V. DeByle and R.P. Winokur, 135-52. General Technical
Report RM-119. Fort Collins, CO: USDA Forest Service, Rocky Mountain
Research Station.
Dieni, J.S., and S.H. Anderson. 1997. Ecology and management of aspen forests in
Wyoming, literature review and bibliography. Laramie: Wyoming Cooperative
Fish and Wildlife Research Unit, University of Wyoming.
ERDAS, Inc. 1999. ERDAS field guide, 5th edition. Atlanta.
Forest Research Laboratory (FRL). 1998. Seeking the causes of change. In Forest
Research Laboratory biennial report 1996-1998, project 15. Corvallis: Oregon
State University. Available online at
www.cof.orst.edu/cof/pub/home/biforweb/body/text/proj15.htm; last accessed by
staff March 2003.
Franklin, S.E., A.J. Maudie, and M.B. Lavigne. 2001. Using spatial co-occurrence
texture to increase forest structure and species composition classification
accuracy. Photogrammetric Engineering and Remote Sensing 67(7):849-55.
Irons, J.R., B.L. Markham, R.F. Nelson, D.L. Toll, D.L. Williams, R.S. Latty, and
M.L. Stauffer. 1985. The effects of spatial resolution on the classification of
Thematic Mapper data. International Journal of Remote Sensing 6(8):1385-403.
Johnsson, K. 1994. Segment-based land-use classification from SPOT satellite data.
Photogrammetric Engineering and Remote Sensing 60(1):47-53.
Jones, J.R. 1985. Distribution. In Aspen: Ecology and management in the western
United States, eds. N.V. DeByle and R.P. Winokur, 9-10. General Technical
Report RM-119. Fort Collins, CO: USDA Forest Service, Rocky Mountain
Research Station.
Kalkhan, M.A., R.M. Reich, and T.J. Stohlgren. 1998. Assessing the accuracy of
Landsat Thematic Mapper classification using double sampling. International
Journal of Remote Sensing 19(11):2049-60.
Laba, M., S.K. Gregory, J. Braden, D. Ogurcak, E. Hill, E. Fegraus, J. Fiore, and
S.D. DeGloria. 2002. Conventional and fuzzy accuracy assessment of the
New York Gap Analysis Project land cover map. Remote Sensing of Environment
81(2-3):443-55.
Lobo, A. 1997. Image segmentation and discriminant analysis for the identification of
land cover units in ecology. IEEE Transactions on Geoscience and Remote
Sensing 35(5):1136-45.
Lobo, A., O. Chic, and A. Casterad. 1996. Classification of Mediterranean crops with
multisensor data: Per-pixel versus per-object statistics and image segmentation.
International Journal of Remote Sensing 17(12):2385-400.
Oregon Climate Service (OCS). 2001. Zone 5-Climate data archives. Available
online at www.ocs.orst.edu/allzone/allzone5.html; last accessed by staff March
2003.
Ryherd, S., and C. Woodcock. 1996. Combining spectral and texture data in the
segmentation of remotely sensed images. Photogrammetric Engineering and
Remote Sensing 62(2):181-94.
Salajanu, D., and C.E. Olson. 2001. The significance of spatial resolution:
Identifying forest cover from satellite data. Journal of Forestry 99(6):32-38.
US Geological Survey (USGS). 2001. National High Altitude Photography and
National Aerial Photography Program. Available online at
http://edc.usgs.gov/Webglis/glisbin/guide.pl/glis/hyper/guide/napp; last accessed
by staff March 2003.
Ustin, S.L., and Q.F. Xiao. 2001. Mapping successional boreal forests in interior
central Alaska. International Journal of Remote Sensing 22(9):1779-97.
Wilson, E.H., and S.A. Sader. 2002. Detection of forest harvest type using multiple
dates of Landsat TM imagery. Remote Sensing of Environment 80(3):385-96.
Zhang, Y. 2001. Texture-integrated classification of urban treed areas in
high-resolution color-infrared imagery. Photogrammetric Engineering and Remote
Sensing 67(12):1359-65.
Chapter 3. PER-SEGMENT VS. PER-PIXEL CLASSIFICATION OF ASPEN STANDS FROM
HIGH-RESOLUTION REMOTE SENSING DATA
Abstract
A recently developed per-segment classification method for aspen mapping was
compared to traditional per-pixel classifications. The remote sensing data source was a
CIR aerial photograph of Winter Ridge, Oregon scanned at a one-meter ground pixel
size, and an unsupervised classification procedure based on the ISODATA algorithm
was applied to both pixel-based and segment-based seven-layer images. While
differences among various per-pixel classifications were insignificant, the results from
the per-segment system were consistently more than 20 percent better than those from
per-pixel classifications.
Introduction
Numerous automatic methods for forest classification using remote sensing data have
been developed by researchers in many countries during the last three decades. In the
intermountain West, however, most forest mapping is still done in a non-automatic
fashion using ground and airborne visual surveys, as well as manual interpretation of
aerial photographs. Bolstad and Lillesand (1992) argued that the main reason why
forestland managers had been very slow in adopting digital remote sensing data was
the unacceptably low (<80 percent) classification accuracy.
In this study area, the target species was quaking aspen (Populus tremuloides), which
has been identified as a key habitat for wildlife, including many bird species (DeByle,
1985; Dieni and Anderson, 1997). Aspen mapping is crucial to many ecological
studies and is required for successful land management, particularly in areas like
Central Oregon where aspens are a minor component of the landscape. In order to
provide detailed and accurate mapping of aspen stands, high spatial resolution images
are required. A crucial drawback of traditional automatic classification methods,
which use per-pixel classification, is that although the information content of the
imagery increases with increased spatial resolution, the accuracy of land use
classification may decrease due to an increase of the variability within each class
(Irons et al., 1985; Cushnie, 1987). In order to successfully extract aspen stands in an
automatic fashion from high-spatial-resolution imagery, a per-segment classification
system was developed by Heyman et al. (2003). In this study, per-segment
classification performance is compared to that of per-pixel classification.
Per-Pixel Classification
The most commonly used automatic method for land cover mapping utilizing remote
sensing data is either supervised or unsupervised classification. With either method,
each picture element (pixel) of the image is assumed to be a classifiable object and is
classified according to its spectral characteristics (see Jensen, 1996 for more details).
Additional data may be used in conjunction with the spectral bands to increase
separability between classes and to improve the classification results.
Ustin and Xiao (2001) mapped boreal forests in interior Alaska utilizing supervised
maximum likelihood classification. They achieved 74 percent accuracy at a species
level using Advanced Visible/InfraRed Imaging Spectrometer (AVIRIS) imagery with
224 10-nm bands and 20-m ground resolution, compared to 43 percent accuracy using
SPOT data. Franklin et al. (2001) incorporated spatial co-occurrence texture with
five-band one-meter Compact Airborne Spectrographic Imager (CASI) imagery to obtain
80 percent accuracy at a species dominance/co-dominance level applying maximum
likelihood classification. Kalkhan et al. (1998) used TM and digital elevation model
(DEM) data with an unsupervised classification to derive a five-class land cover
mapping in Rocky Mountain National Park. They showed 60 percent accuracy using
double sampling compared to 50 percent with traditional single sampling of the
reference points. Aspens, which covered one percent of the study area, were mapped
at less than 15 percent accuracy. Laba et al. (2002) checked the New York Gap
Analysis Project land cover map and found 42-74 percent (class level dependent)
overall accuracy using conventional accuracy assessment, which was improved by up
to 20 percent using fuzzy accuracy assessment. This land-cover mapping was
accomplished by applying an unsupervised classification to generate 240 spectral
classes from TM data and assigning each of them to one of 29 land cover types. For
the purposes of this comparison study, an unsupervised classification using the
ISODATA algorithm was used. With this method, the best per-pixel classification
results could be obtained and verified by changing the number of classes generated
and the classification scheme level.
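ISODATA can be thought of as k-means clustering extended with heuristics that split high-variance clusters and merge clusters whose centers lie close together, so the number of classes can adapt between iterations. A minimal plain k-means sketch (without the split/merge step, and with an illustrative deterministic initialization) shows the core iteration:

```python
import numpy as np

def simple_kmeans(X, k, iters=20):
    """Lloyd's k-means; ISODATA adds cluster splitting/merging on top of this."""
    X = np.asarray(X, dtype=float)
    # deterministic initialization: k points spread evenly through the data
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign each sample to its nearest center (squared Euclidean distance)
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        assign = d.argmin(axis=1)
        # move each center to the mean of its assigned samples
        for c in range(k):
            if (assign == c).any():
                centers[c] = X[assign == c].mean(axis=0)
    return assign, centers
```

In the classifications described here, each row of X would be the feature vector of one pixel (or one segment), and the resulting cluster labels would then be assigned to aspen-coverage categories by interpretation.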
Per-Segment Classification
Per-segment, as opposed to per-pixel, classification provides a tool in which the
texture and spatial variability inherent in fine spatial resolution imagery can be
exploited. With a per-segment approach, segments or objects, rather than pixels, are
classified as independent units. The general idea is to classify objects of interest
according to their pertinent characteristics (Figure 3.1). This method is particularly
effective when pursuing a specific mapping purpose, such as aspen mapping. In order
to implement the concept, segments have to be identified first. This is done either by
using ancillary data such as field polygons, or by extracting the objects from the image
itself. Then, each polygon/object/patch/segment is assigned feature values based on
statistics and indices describing the values and arrangement of the pixels defining each
segment. Examples include mean spectral reflectance, variations in spectral
reflectance, band ratios, and other mathematical relationships among features. After
assigning each segment corresponding feature values, the segments are classified
using either an unsupervised clustering algorithm like ISODATA, a supervised
algorithm such as maximum likelihood, a neural network, or a rule-based system.
[Diagram: image → segmentation/polygons and feature generation → per-segment
statistics on feature layers → classifier]
Figure 3.1. Per-segment classification concept.
The per-segment concept has been applied successfully to agricultural study areas
utilizing parcel maps or other field-related data (Aplin et al., 1999). In the natural
environment, such field data are not available and the image itself must be used for the
initial segmentation. Segmentation algorithms have been used in land cover mapping
to partition images into elements that were then classified by a maximum likelihood or
other allocation rule (e.g., Johnsson, 1994; Ryherd and Woodcock, 1996; Lobo et al.,
1996; Lobo, 1997). More recently, Geneletti and Gorte (2003) segmented
high-resolution (7.5 m) orthophotographs to obtain more accurate boundary locations and a
two percent improvement in performance for their land cover classification from TM
data. They did not, however, implement a fresh per-segment classification but rather
used the original per-pixel classification results to reclassify each segment. Heyman et
al. (2003) used the image itself for the initial segmentation to create aspen stand
'candidates', which were then classified into three broad categories of aspen coverage
based on their spectral and textural characteristics. The results from this per-segment
classifier were compared to those from traditional per-pixel classifiers.
Data and Study Area
Aerial CIR photographs from the USGS National High Altitude Photography (NHAP)
program were used as the major data source. These photos were acquired at an average
scale of 1:58,000 using a 210 mm (8.25 in) focal length mapping camera with a
ground resolution as small as one to two meters (USGS, 2001). The 9-in CIR
transparencies were scanned with a photogrammetric scanner that produced pixels
with a 1.2 m ground resolution (Figure 3.2).
The 6-km2 study area is located on Winter Ridge in Central Oregon within the
Fremont National Forest (Figure 3.3). Although Winter Ridge is well known for its
aspen stands, no detailed map depicting those stands is available (R. L. Wooley, pers.
comm., 2001). The study area contains 3-4 percent overall aspen cover with a variety
of sizes of aspen stands, some pure and some mixed with conifers, mainly lodgepole
pine (Pinus contorta) and ponderosa pine (Pinus ponderosa).
Figure 3.2. NHAP color infrared image used for aspen mapping.
Figure 3.3. Winter Ridge, Oregon study area.
Methods
Aspen stand maps from both traditional per-pixel classifications and per-segment
classifications were created and compared utilizing the following methodology.
Data Preparation
The scanned CIR image was rectified using a second-degree polynomial
transformation and resampled to UTM coordinates with 1-m ground pixel size.
Multiresolution texture statistics were generated by applying an adaptive texture
operator to the near-infrared band of the original image and to images reduced
spatially by factors of 2, 4, and 8:
T'(i,j) = stdv(Z(m,n); i-2 <= m <= i+2, j-2 <= n <= j+2)   (1)
T(i,j) = min(T'(m,n); i-1 <= m <= i+1, j-1 <= n <= j+1)    (2)
This operation was applied by moving a window across the image. First, a 5x5
window was used to calculate the standard deviation of the 25-pixel neighborhood.
Then, a 3x3 window was used to calculate the minimum value among the 9-pixel
neighborhood. This minimum value was assigned to the central pixel to represent its
texture at this level/scale. The additional four texture layers were then stacked together
with the three spectral bands to be used in the per-pixel classifications.
Segmentation
The image was partitioned based on its hue and saturation values according to the
segmentation model for per-segment classification introduced by Heyman et al.
(2003). This rule-based process created spatial clusters of aspen stand 'candidates' to
be classified by their spectral and textural properties. Mean values were calculated for
each segment from the three CIR reflectance channels and the four texture images.
The seven mean layers were stacked in one file to be used in the per-segment
classification. A general illustration of the algorithm is shown in Figure 3.4.
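Computing the per-segment mean features (the seven-layer stack averaged within each labeled segment) could be sketched as follows; the function and variable names are illustrative:

```python
import numpy as np
from scipy import ndimage

def segment_mean_features(layers, labels):
    """Mean of each feature layer within each labeled segment.

    layers: (H, W, F) stack, e.g. 3 CIR bands plus 4 texture layers;
    labels: (H, W) integer segment map with 0 as background.
    Returns an (n_segments, F) table, one row per segment id 1..n.
    """
    ids = np.arange(1, labels.max() + 1)
    cols = [ndimage.mean(layers[..., f], labels=labels, index=ids)
            for f in range(layers.shape[-1])]
    return np.column_stack(cols)
```

Each row of the returned table is the feature vector that represents one candidate segment in the subsequent clustering.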
Classification
An unsupervised classification procedure based on the ISODATA algorithm was
applied to both the pixel-based and the segment-based seven-layer images. This
classification method was chosen for several reasons. Not only has it proved useful for
per-segment classification (Heyman et al., 2003), but it also allows thorough
examination of the various parameters of the classification results, especially the
optimal separation available. By comparing the outcomes of classifications with
different numbers of classes, it can be determined whether more classes yield better
results. In addition, the effects of applying a majority filter and of using a two-step
approach were tested as well. These comparisons were important in order to make sure
that the results of the per-pixel classification are optimal given the input data.
Aspen coverage categories were assigned to the ISODATA classes according to a
classification scheme based on image interpretation of the CIR and 1:12,000 color
aerial photographs. For the per-pixel classifications, the scheme included two
categories, aspen and no aspen, while for the per-segment classification, three levels
were discerned, following a commonly used scheme of no aspen, minor aspen (<50
percent) and dominant aspen (>50 percent). The reference data were constructed using
a five-category scheme (0, 0-20, 20-50, 50-80, and 80-100 percent aspen coverage). A
two-level look-up table was used to assess the accuracy of the per-pixel classifications
while the five categories were reduced to three for the accuracy assessment of the persegment classification results. In order to compare per-pixel to per-segment utilizing a
K-hat based Z test statistic, which is valid for identical schemes only, a two-category
scheme was applied to the per-segment classification as well.
Accuracy Assessment
A site-specific assessment employing an error matrix was carried out for each of the
classification results based on the technique presented by Congalton and Green
(1999). For this accuracy assessment process, 200 random points were generated
within the study area, at least 50 in each mapping category, to be used as the reference
data in the error matrix. Each point was examined in the field or using ancillary
imagery to determine the aspen coverage at both the specific location and the
surrounding segment. With the error matrix as a starting point, overall accuracy,
producer's accuracy, user's accuracy, and the K-hat statistic (and its variance) were
calculated (see Congalton and Green, 1999 for the mathematical formulas). In order to
compare the results from two different classifications, the Z statistic test was applied
to determine if two independent matrices were significantly different:
Z = |K1 - K2| / sqrt(var(K1) + var(K2))   (3)
Finally, the p-value was derived from a standard normal distribution table.
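Equation 3, together with the normal-table lookup, could be sketched as follows. The per-pixel variance used in the usage note below is an assumed placeholder, not a value from this study:

```python
import math

def kappa_z_test(k1, var1, k2, var2):
    """Z statistic comparing two independent K-hat values (Equation 3)."""
    z = abs(k1 - k2) / math.sqrt(var1 + var2)
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return z, p
```

For example, comparing the two-level per-segment result (K-hat 0.78, variance 0.00206) against a per-pixel K-hat of 0.33 with an assumed variance of 0.003 gives Z of roughly 6.3, far beyond the 1.96 threshold for significance at the 95 percent level.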
Results and Discussion
In order to rigorously compare the results from the various classification methods and
schemes, error matrices were created and Z test statistics were calculated based on the
accuracy parameters, as described in the methods section.
Utilizing a per-segment approach, Heyman et al. (2003) showed an 88 percent overall
accuracy with a K-hat statistic of 0.82 for three categories of aspen cover using
unsupervised ISODATA classification with 20 classes. These per-segment mapping
results are illustrated in Figure 3.5 and the corresponding error matrix is shown in
Table 3.1. Since per-pixel classifications could use only two categories for aspen
mapping, a two-category scheme was used with the per-segment system to generate an
error matrix (Table 3.2) with valid parameters for a Z test statistic, yielding overall
accuracy of 90 percent with a K-hat statistic of 0.78.
[Map figure. Legend - Aspen Coverage: None; Minor (<50%); Predominant (>50%);
scale bar: 0-200-400 meters]
Figure 3.5. Per-segment classification results for aspen mapping.
Table 3.1. Error matrix for accuracy assessment of per-segment classification for three-level aspen mapping.

Mapped \ Reference   No aspens   < 50% aspen   > 50% aspen   Total   User Accuracy
No aspens                   47             3             0      50             94%
< 50% aspen                  0            61             8      69             88%
> 50% aspen                  0            13            68      81             84%
Total                       47            77            76     200
Producer Accuracy         100%           79%           89%

Overall Accuracy: 88%   K-hat Statistic: 0.82   Var (K-hat): 0.00126
Table 3.2. Error matrix for accuracy assessment of per-segment classification for two-level aspen mapping.

Mapped \ Reference   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspen                     111             8     119             93%
Aspen                         13            68      81             84%
Total                        124            76     200
Producer Accuracy            90%           89%

Overall Accuracy: 90%   K-hat Statistic: 0.78   Var (K-hat): 0.00206
In order to compare these results to those achieved by a per-pixel classification
system, a per-pixel mapping was implemented using unsupervised ISODATA
classification with 20 classes using the same data. Accuracy was estimated using the
same 200 random locations, but based on the individual pixel's cover, with the same
two-category classification scheme. Overall accuracy was 64 percent with a K-hat
statistic of 0.33 (Table 3.3). Before comparing the per-segment to the per-pixel results,
several comparisons were made of various per-pixel classifications using different
parameters. The main purpose of those comparisons was to verify that, with the
given data, no significantly better per-pixel results could be achieved, and hence
that the best per-pixel classification was being compared to the per-segment one.
First, the number of classes in the
ISODATA clustering was changed to 50. Since the accuracy level decreased by 1
percent (Table 3.4) and the difference was found to be insignificant (2-sided p-value
of 0.94), 20 classes were chosen for the final comparison to per-segment. In order to
follow the common use of per-pixel classification for land cover mapping and to
obtain better results, a two-step approach was adopted. A 50-class ISODATA
classification was used to specify the vegetation pixels, which were then classified by
a second iteration 20-class ISODATA clustering for the aspen mapping. With this
method, overall accuracy reached 67 percent with a K-hat statistic of 0.36, although
still no significant difference was found (2-sided p-value of 0.5). These per-pixel
mapping results are illustrated in Figure 3.6 and the corresponding error matrix is
shown in Table 3.5.
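The two-step procedure can be sketched as follows. This is a hedged illustration: scikit-learn's KMeans stands in for the ERDAS ISODATA clusterer actually used, the cluster counts (50, then 20) follow the text, and the choice of which first-pass clusters count as vegetation is hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans  # stand-in for ISODATA

def two_step_clustering(pixels, vegetation_clusters, seed=0):
    """pixels: (n, bands) array of spectra; vegetation_clusters:
    first-pass cluster ids judged (by interpretation) to be vegetation."""
    # Step 1: 50-cluster pass to separate vegetation from everything else.
    step1 = KMeans(n_clusters=50, n_init=3, random_state=seed).fit_predict(pixels)
    veg = np.isin(step1, list(vegetation_clusters))
    # Step 2: re-cluster only the vegetation pixels into 20 classes
    # for the aspen mapping; -1 marks non-vegetation.
    labels = np.full(len(pixels), -1)
    labels[veg] = KMeans(n_clusters=20, n_init=3,
                         random_state=seed).fit_predict(pixels[veg])
    return labels

# Toy usage on 2,000 random seven-band "pixels"
demo = np.random.default_rng(0).random((2000, 7))
labels = two_step_clustering(demo, vegetation_clusters=range(25))
print(labels.shape)  # (2000,)
```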
Table 3.3. Error matrix for accuracy assessment of per-pixel classification for aspen mapping using ISODATA with 20 classes.

Mapped \ Reference   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspen                      62            10      72             86%
Aspen                         62            66     128             52%
Total                        124            76     200
Producer Accuracy            50%           87%

Overall Accuracy: 64%   K-hat Statistic: 0.33   Var (K-hat): 0.00534

Table 3.4. Error matrix for accuracy assessment of per-pixel classification for aspen mapping using ISODATA with 50 classes.

Mapped \ Reference   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspen                      54             5      59             92%
Aspen                         70            71     141             50%
Total                        124            76     200
Producer Accuracy            44%           93%

Overall Accuracy: 63%   K-hat Statistic: 0.32   Var (K-hat): 0.00586
Figure 3.6. Per-pixel classification results for aspen mapping.
Table 3.5. Error matrix for accuracy assessment of per-pixel classification for aspen mapping using ISODATA with 20 classes masked for vegetation only by an initial 50-class ISODATA.

Mapped \ Reference   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspen                      72            14      86             84%
Aspen                         52            62     114             54%
Total                        124            76     200
Producer Accuracy            58%           82%

Overall Accuracy: 67%   K-hat Statistic: 0.36   Var (K-hat): 0.00474
In addition, the effect of applying a majority filter to the clusters was tested. A
5x5 majority filter resulted in a 2 percent increase, and a 3x3 filter in a 3 percent
increase, in overall accuracy. K-hat statistics decreased by 7 percent and 1 percent,
respectively, and no significant difference was found (2-sided p-value > 0.5).
Finally, a comparison between per-pixel and per-segment classifications was made.
The per-segment classification yielded 21-27 percent greater overall accuracy than
per-pixel classifications with changing parameters. Even the best per-pixel results
were significantly less accurate than the per-segment classification results (2-sided p-value < 0.0001).
Conclusions
Aspen mapping from 1-m NHAP CIR imagery using per-pixel classification yielded
no more than 67 percent overall accuracy with a K-hat statistic of 0.36. Even with
texture statistics added and major parameters of the clustering algorithm changed, the
results could not be further improved. This leads to the conclusion that with the given
data a different approach for the classification should be taken in order to successfully
and reliably map aspen stands in the study area in Central Oregon. The per-segment
approach presented by Heyman et al. (2003) showed a significant improvement in the
mapping results, obtaining an 88 percent overall accuracy and a K-hat statistic of 0.82
for three-level mapping and 90 percent overall accuracy with a K-hat statistic of 0.78
for two-level mapping. These comparisons are particularly important as they provide
the incentive to further develop the per-segment classification system and apply it in
other areas. Yellowstone National Park is of particular interest for aspen mapping
(Hessl, 2002; Ripple, 2003) and would be a good choice for both enhancing the per-segment system and making it useful for change analysis on a wider scale.
Although implementing such a per-segment classification system may require
additional image processing and analytical skills, the results, in conjunction with the
development of emerging off-the-shelf packages that generate per-segment
classifications, seem well worth the effort. Moreover, the limitations of per-pixel
methods and the high performance of a per-segment system with the same data
encourage further investigations in this direction for other vegetation types and more
general feature extraction and land-cover mapping from high-resolution remote
sensing data.
References
ANDERSON, J. R., E. E. HARDY, J.T. ROACH, AND R. E. WITMER, 1976. A land use and
land cover classification system for use with remote sensor data. USGS
Professional Paper No. 964, Washington DC, 28 p.
APLIN, P., P. M. ATKINSON, and P. J. CURRAN, 1999. Per-field classification of land use
using the forthcoming very fine spatial resolution satellite sensors: problems and
potential solutions. In Advances in Remote Sensing and GIS Analysis (ATKINSON,
P. M., and N. J. TATE, Eds.), John Wiley & Sons: 219-239.
BOLSTAD, P. V., AND T. M. LILLESAND, 1992. Improved classification of forest
vegetation in northern Wisconsin through a rule-based combination of soils,
terrain, and Landsat Thematic Mapper data. Forest Science 38 (1): 5-20.
CONGALTON, R. G., AND K. GREEN, 1999. Assessing the accuracy of remotely sensed
data: principles and practices. Lewis Publishers, Boca Raton, Florida, 137 p.
CUSHNIE, J. L., 1987. The interactive effect of spatial resolution and degree of internal
variability within land-cover types on classification accuracies. International
Journal of Remote Sensing 8 (1): 15-29.
DEBYLE, N. V., 1985. Wildlife. In Aspen: Ecology and Management in the Western
United States (DEBYLE, N. V., and R. P. WINOKUR, Eds.), USDA Forest Service
General Technical Report RM-119: 135-152.
DIENI, J. S., and S. H. ANDERSON, 1997. Ecology and management of aspen forests in
Wyoming, literature review and bibliography. Wyoming Cooperative Fish and
Wildlife Research Unit, University of Wyoming, 118 p.
FRANKLIN, S. E., A. J. MAUDIE, AND M. B. LAVIGNE, 2001. Using spatial co-occurrence
texture to increase forest structure and species composition classification
accuracy. Photogrammetric Engineering & Remote Sensing 67 (7): 849-855.
GENELETTI, D., AND B. G. H. GORTE, 2003. A method for object-oriented land cover
classification combining Landsat TM data and aerial photographs. International
Journal of Remote Sensing 24 (6): 1273-1286.
HESSL, A., 2002. Aspen, elk, and fire: the effects of human institutions on ecosystem
processes. BioScience 52 (11): 1011-1022.
HEYMAN, O., G. G. GASTON, A. J. KIMERLING, AND J. T. CAMPBELL, 2003. A per-segment
approach to improving aspen mapping from high-resolution remote sensing
imagery. Journal of Forestry 101 (4): 29-33.
IRONS, J. R., B. L. MARKHAM, R. F. NELSON, D. L. TOLL, D. L. WILLIAMS, R. S. LATTY,
and M. L. STAUFFER, 1985. The effects of spatial resolution on the classification
of Thematic Mapper data. International Journal of Remote Sensing 6 (8): 1385-1403.
JENSEN, J. R., 1996. Introductory digital image processing, a remote sensing
perspective. Prentice Hall, Upper Saddle River, New Jersey, 318 p.
JOHNSSON, K., 1994. Segment-based land-use classification from SPOT satellite data.
Photogrammetric Engineering & Remote Sensing 60 (1): 47-53.
KALKHAN, M. A., R. M. REICH, AND T. J. STOHLGREN, 1998. Assessing the accuracy of
Landsat Thematic Mapper classification using double sampling. International
Journal of Remote Sensing 19 (11): 2049-2060.
LABA, M., S. K. GREGORY, J. BRADEN, D. OGURCAK, E. HILL, E. FEGRAUS, J. FIORE,
AND S. D. DEGLORIA, 2002. Conventional and fuzzy accuracy assessment of the
New York Gap Analysis Project land cover map. Remote Sensing of Environment
81 (2-3): 443-455.
LOBO, A., 1997. Image segmentation and discriminant analysis for the identification of
land cover units in ecology. IEEE Transactions on Geoscience and Remote
Sensing 35 (5): 1136-1145.
LOBO, A., O. CHIC, and A. CASTERAD, 1996. Classification of Mediterranean crops
with multisensor data: per-pixel versus per-object statistics and image
segmentation. International Journal of Remote Sensing 17 (12): 2385-2400.
RIPPLE, W. J., 2003. The aspen project. Available online at
www.cof.orst.edu/cof/fr/research/aspen/.
RYHERD, S., and C. WOODCOCK, 1996. Combining spectral and texture data in the
segmentation of remotely sensed images. Photogrammetric Engineering &
Remote Sensing 62 (2): 181-194.
U.S. GEOLOGICAL SURVEY (USGS), 2001. National High Altitude Photography and
National Aerial Photography Program. Available online at
http://edc.usgs.gov/Webglis/glisbin/guide.pl/glis/hyper/guide/napp.
USTIN, S. L., AND Q. F. XIAO, 2001. Mapping successional boreal forests in interior
central Alaska. International Journal of Remote Sensing 22 (9): 1779-1797.
Chapter 4. THE EFFECT OF IMAGERY SPATIAL-RESOLUTION ON THE ACCURACY OF
PER-SEGMENT AND PER-PIXEL ASPEN MAPPING
Abstract
Both per-segment and per-pixel classification methods were applied to aspen mapping
using remote sensing data at various spatial resolutions in order to study the effect of
spatial resolution on the relative performance of the two methods. The per-segment
classifier outperformed the per-pixel classifier at the 1-4-m resolution, performed
equally well at the 8-16-m resolution and showed no ability to classify accurately at
the 32-m resolution due to the segmentation process used. Overall, the per-segment
method was found to be more scale-sensitive than the per-pixel method and required
some tuning to the segmentation algorithm at lower resolutions. These results illustrate
the advantages of per-segment methods at high spatial resolutions but also suggest that
segmentation algorithms should be applied carefully at different spatial resolutions.
Introduction
Most automatic methods for vegetation mapping using remote sensing data are based
on per-pixel classifications (classifying each pixel separately), although the accuracy
obtained by such methods is usually low (<80 percent) (Bolstad and Lillesand, 1992).
Kalkhan et al. (1998) obtained a 60 percent accuracy utilizing double sampling
compared to a 50 percent accuracy with traditional single sampling of the reference
points in Rocky Mountain National Park using Thematic Mapper (TM) and Digital
Elevation Model (DEM) data. Aspens, which covered one percent of the study area,
were mapped at less than 15 percent accuracy. Laba et al. (2002) checked the New
York Gap Analysis Project land cover map and found 42-74 percent overall accuracy
(class level dependent) using conventional accuracy assessment, which was improved
by up to 20 percent using fuzzy accuracy assessment. Joy et al. (2003) combined 30-m
TM data with 10-m field samples and used decision tree classifications for vegetation
mapping in Northern Arizona to obtain overall accuracy of 75 percent with a K-hat
statistic of 0.50.
One way to improve the accuracy of vegetation land-cover mapping utilizing per-pixel
methods is by using higher spectral resolution data. Too often, however, the results are
not sufficiently better in terms of mapping accuracy. For example, Ustin and Xiao
(2001) mapped boreal forests in interior Alaska and achieved 74 percent accuracy at a
species level using 20-m ground resolution Advanced Visible/InfraRed Imaging
Spectrometer (AVIRIS) imagery with 224 10-nm bands, compared to 43 percent
accuracy using Satellite Pour l'Observation de la Terre (SPOT) data. Kokaly et al.
(2003) mapped vegetation in Yellowstone National Park and obtained 74 percent
overall accuracy with a K-hat statistic of 0.62 using 15-m AVIRIS hyperspectral data.
Franklin et al. (2001) obtained 80 percent accuracy at a species dominance/co-dominance level by incorporating spatial co-occurrence texture with one-meter resolution
Compact Airborne Spectrographic Imager (CASI) imagery. These examples
demonstrate the weakness of per-pixel methods in exploiting the information
contained in multi- and hyper-spectral remote sensing data, and the need for
alternative ways to obtain higher accuracy vegetation mapping.
Another drawback of per-pixel classification methods is that although the information
content of the imagery increases with increased spatial resolution, the accuracy of land
cover classification may decrease due to an increase in variability within each class
(Irons et al., 1985; Cushnie, 1987). Hsieh et al. (2001) illustrated the inverse effect of
spatial resolution on the classification errors associated with pure pixels and mixed
pixels. They conclude that the typical per-pixel classifier may not take advantage of
the information available in high-resolution imagery. Chen and Stow (2002) showed a
consistent increase in the K-hat statistic as spatial resolution decreases from 2-m to 16-
m through 4-m, 8-m and 12-m. Mumby and Edwards (2002) noticed better
delineations of habitat patches with higher resolution IKONOS data, but did not obtain
higher accuracy using these data compared to their results using TM data. As more
and more high spatial-resolution data become available (e.g. IKONOS, QuickBird),
there is a growing need to develop innovative methods to overcome their current
drawbacks and to take advantage of the additional information embodied in the data in
order to improve mapping accuracy.
Per-segment, as opposed to per-pixel, classification provides a tool in which the
texture and spatial variability inherent in high spatial resolution imagery can be
exploited. With a per-segment approach, segments or objects, rather than single pixels,
are classified as independent units. Segmentation algorithms have been used in land
cover mapping to partition images into elements that were then classified by a
maximum likelihood or other allocation rule (e.g., Johnsson, 1994; Ryherd and
Woodcock, 1996; Lobo et al., 1996; Lobo, 1997; Aplin et al., 1999; Geneletti and
Gorte, 2003). The per-segment method is particularly effective for a specific type of
vegetation mapping, such as aspen mapping. In order to address the issue of improved-
quality aspen mapping in the intermountain West, Heyman et al. (2003) developed a
per-segment classification system in which the image itself was used for the
segmentation based on its hue and saturation values, and the segments were then
classified according to their spectral and multi-resolution textural characteristics. In
order to reliably map stands as small as 25-m2, color infrared (CIR) aerial photos were
scanned at a 1-m ground pixel size. Utilizing this method, an 88 percent overall
accuracy was obtained with a K-hat statistic of 0.82 for three categories of aspen
coverage, and a 90 percent overall accuracy with a K-hat value of 0.78 for a two-category
coverage scheme. Rigorous comparison of the results to those obtained by
per-pixel classifications using the same data showed a significant difference in
accuracy between the methods, with the per-pixel mapping not exceeding 70 percent
overall accuracy (Heyman and Kimerling, in review). The purpose of this research is
to study the effect of spatial resolution of the source imagery data on the relative
performance of per-segment and per-pixel classifiers.
In order to test the level of accuracy as a function of spatial resolution of the imagery,
both per-segment and per-pixel classifications were implemented using the same data
from the same study area at various resolutions. This procedure allowed the
examination of the effect of spatial resolution on each method, and the comparison of
the methods at each resolution.
Data and Study Area
Aerial color infrared (CIR) photographs were used as the remote sensing data source
(Figure 4.1). These photos were acquired at an average scale of 1:58,000 using a 210
mm (8.25 in) focal length mapping camera with a ground resolution as small as one to
two meters (USGS, 2003). The 9" x 9" CIR transparencies were digitized with a
photogrammetric scanner that produced pixels with a 1.2-m ground resolution at a 24-bit depth. The scanned image was rectified and resampled to UTM coordinates with a 1-m ground pixel size.
The 6-km2 study area is located on Winter Ridge in Central Oregon within the
Fremont National Forest (Figure 4.2). The study area contains 3-4 percent overall
aspen cover with a variety of sizes of aspen stands, some pure and some mixed with
conifers, mainly lodgepole pine (Pinus contorta) and ponderosa pine (Pinus
ponderosa).

Figure 4.1. Color infrared image used for aspen mapping.

Figure 4.2. Winter Ridge, Oregon study area.
Methods
Aspen stands in the Winter Ridge study area were mapped using per-segment and per-
pixel classifications. Both methods were applied to the original remote sensing data at
1-m ground resolution, and to images at the reduced resolutions of 2-, 4-, 8-, 16- and
32-m. Comparisons were made in order to assess the effect of spatial resolution on
each of the methods and on their relative performance utilizing the following method.
Data preparation
The original data consisted of rectified 1-m CIR imagery. Coarser resolution images
were created by reducing the resolution in steps. At each step the resolution was
reduced by a factor of 2 (a 2-m image was created from a 1-m, a 4-m from a 2-m, and
so forth) by averaging the neighboring four pixels. Six images at 1-, 2-, 4-, 8-, 16- and
32-m ground resolution were created in this manner.
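The step-wise averaging can be sketched as follows (a minimal NumPy illustration; even image dimensions and a toy random array standing in for the CIR bands are assumptions):

```python
import numpy as np

def halve_resolution(img):
    """Reduce ground resolution by a factor of 2 by averaging each
    non-overlapping 2x2 block of neighboring pixels (per band)."""
    h, w = img.shape[:2]
    blocks = img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2, -1)
    return blocks.mean(axis=(1, 3))

# A toy 1-m image reduced step by step to 2-, 4-, 8-, 16- and 32-m versions
images = {1: np.random.rand(64, 64, 3)}
for res in (2, 4, 8, 16, 32):
    images[res] = halve_resolution(images[res // 2])
print(images[32].shape)  # (2, 2, 3)
```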
Classifications
Both per-segment and per-pixel classification methods were applied to the remote
sensing data at the six different ground resolutions. The per-segment classification was
implemented following the method developed by Heyman et al. (2003), in which the
image was partitioned based on its hue and saturation values to create spatial clusters
of aspen stand 'candidates', which were then classified by their spectral and textural
characteristics. Since this algorithm, illustrated in Figure 4.3, was developed at 1-m
spatial resolution, it required some tuning for better performance at lower resolutions.
The segmentation algorithm itself relied on the histograms of both the hue and the
saturation of the image, where all pixels within a certain range of the hue and
saturation values constituted the segments to be classified. In order to avoid
redeveloping a segmentation algorithm for each resolution, but yet to allow more
variability in the comparisons, three different tunings were used for each per-segment
classification.
In the first tuning, the original thresholding levels defined the segments as:
Initial candidate = 1 if [(60 < H < 115) AND (0.125 < S')] and 0 otherwise,   (1)
where H is the hue value of the pixel and S' its minimum saturation value (within a
3x3 neighborhood).
Since this tuning produced a significantly smaller number of segments at lower
resolutions, two other tunings were used. In the second, the saturation values were
discarded, and the segments were based only on the hue values with the same
thresholding levels,
Initial candidate = 1 if (60 < H < 115) and 0 otherwise.   (2)
In the third tuning, the thresholding of the hue was more aggressive to produce more
segments,
Initial candidate = 1 if (0 < H < 115) and 0 otherwise.   (3)
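The three tunings can be sketched with a single thresholding function; this is a minimal NumPy illustration (hue in degrees, saturation in [0, 1], and the toy input arrays are hypothetical), not the actual segmentation code:

```python
import numpy as np

def initial_candidates(hue, sat, h_lo=60, h_hi=115, s_thr=0.125):
    """Binary candidate mask per Equations (1)-(3): 1 where
    h_lo < H < h_hi and, unless s_thr is None, the minimum
    saturation S' within the 3x3 neighborhood exceeds s_thr."""
    mask = (hue > h_lo) & (hue < h_hi)
    if s_thr is not None:
        p = np.pad(sat, 1, mode='edge')          # handle image borders
        h, w = sat.shape
        s_prime = np.min([p[i:i + h, j:j + w]    # 3x3 minimum filter
                          for i in range(3) for j in range(3)], axis=0)
        mask &= s_prime > s_thr
    return mask.astype(np.uint8)

hue = np.array([[90.0, 90.0], [30.0, 90.0]])
sat = np.full((2, 2), 0.5)
first = initial_candidates(hue, sat)                      # Equation (1)
third = initial_candidates(hue, sat, h_lo=0, s_thr=None)  # Equation (3)
```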
The rest of the per-segment system was not changed. Mean values were calculated for
each segment from the three CIR reflectance channels and four texture images (see
Heyman et al., 2003 for more details). The seven mean layers were then stacked in one
file to be used in the per-segment classification.
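Computing the per-segment mean layers can be sketched as follows (a minimal NumPy illustration with toy layers and segment labels; the real system used three CIR reflectance channels and four texture images):

```python
import numpy as np

def segment_means(layers, segment_ids):
    """One mean feature vector per segment.
    layers: (n_layers, h, w) stack; segment_ids: (h, w) labels,
    with 0 marking pixels outside any segment."""
    ids = np.unique(segment_ids[segment_ids > 0])
    return np.array([[layer[segment_ids == s].mean() for layer in layers]
                     for s in ids])

# Two toy layers and two toy segments
layers = np.stack([np.arange(16).reshape(4, 4),
                   np.ones((4, 4))]).astype(float)
seg = np.zeros((4, 4), int)
seg[:2, :2] = 1   # segment 1 covers pixel values 0, 1, 4, 5
seg[2:, 2:] = 2   # segment 2 covers pixel values 10, 11, 14, 15
feats = segment_means(layers, seg)  # segment 1 -> [2.5, 1.0], segment 2 -> [12.5, 1.0]
```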
Once segmentation was accomplished, both segment-based and pixel-based images
were ready for classification, and an unsupervised classification procedure based on
the ISODATA algorithm (ERDAS, 2002) was applied to all images. All the
classifications in this research used the ISODATA algorithm with 20 classes following
Heyman and Kimerling (in review), which showed no significant differences in aspen
mapping results between various per-pixel methods (e.g. different number of classes,
two-step process). Although Heyman and Kimerling (in review) did not find any
significant effect of applying either a 3x3 or a 5x5 majority filter to their per-pixel
classifications, those effects were tested in this study in order to examine their
interaction with the effect of different spatial resolutions.
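A majority filter of the kind tested here can be sketched as follows (pure NumPy, edge-padded, with ties broken toward the lower class label; the toy label map is illustrative):

```python
import numpy as np

def majority_filter(classes, size=3):
    """Replace each pixel with the most frequent class label in its
    size x size neighborhood (edges padded with the edge value)."""
    r = size // 2
    p = np.pad(classes, r, mode='edge')
    h, w = classes.shape
    # stack every neighborhood shift, then take the per-pixel mode
    shifts = np.stack([p[i:i + h, j:j + w]
                       for i in range(size) for j in range(size)])
    labels = np.unique(classes)
    counts = np.stack([(shifts == c).sum(axis=0) for c in labels])
    return labels[counts.argmax(axis=0)]

noisy = np.array([[0, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1],
                  [1, 1, 1, 1]])
smooth = majority_filter(noisy)  # the isolated 1 at (1, 1) becomes 0
```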
Aspen coverage categories were assigned to the ISODATA classes based on image
interpretation of the CIR images and 1:12,000 color aerial photographs. A twocategory scheme of 'aspen' and 'no aspen' was used for both per-segment and per-pixel
classifications. For the per-segment classification only, a three-level scheme of 'no
aspen', 'minor aspen' (< 50 percent) and 'dominant aspen' (> 50 percent) was also used
in order to see how it is affected by spatial resolution.
Accuracy assessment
A site-specific assessment employing an error matrix was carried out for each of the
classifications, based on the technique presented by Congalton and Green (1999). For
the accuracy assessment, 200 random points were generated within the study area, at
least 50 in each mapping category, to be used as the reference data in the error matrix.
Each point was examined in the field or using ancillary imagery to determine the
aspen coverage at both the specific location and the surrounding segment. With the
error matrix as a starting point, overall accuracy, producer's accuracy, user's accuracy,
and the K-hat statistic (and its variance) were calculated (see Congalton and Green,
1999 for the mathematical formulas).
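The reference sampling can be sketched as follows, a simple illustration of drawing 200 random points with at least 50 per category (the top-up strategy and the toy category map are assumptions, not the procedure actually used):

```python
import random

def stratified_random_points(category_map, n_total=200, n_min=50, seed=1):
    """Random (row, col) points with at least n_min in each category."""
    rng = random.Random(seed)
    rows, cols = len(category_map), len(category_map[0])
    cats = {v for row in category_map for v in row}
    points = []
    # First guarantee the minimum count in every category...
    for cat in cats:
        found = 0
        while found < n_min:
            r, c = rng.randrange(rows), rng.randrange(cols)
            if category_map[r][c] == cat:
                points.append((r, c))
                found += 1
    # ...then fill the remainder with unconstrained random points.
    while len(points) < n_total:
        points.append((rng.randrange(rows), rng.randrange(cols)))
    return points

# Toy 30x30 map split into three vertical category bands (0, 1, 2)
cmap = [[c // 10 for c in range(30)] for _ in range(30)]
pts = stratified_random_points(cmap)
print(len(pts))  # 200
```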
Comparisons
Four types of comparisons were made in order to quantitatively assess the effect of
spatial resolution on the classification results. First, at each resolution, per-segment
classifications based on different tunings of the segmentation process were compared
in order to identify the best setting. These comparisons were carried out separately on
the three- and two-category classifications. Second, at each resolution, per-pixel
classifications were compared to determine the optimal majority filter. Then, for each
method (three-category per-segment, two-category per-segment and two-category per-
pixel) the best performing classifier was compared to the one at the next level of
spatial resolution (e.g., best three-category per-segment at 1-m to best three-category
per-segment at 2-m). Finally, at each resolution, the best per-segment results were
compared to the best per-pixel results at the same category level.
In order to compare the results from two different classifications, the K-hat-based Z
test statistic was applied to determine if two independent error matrices were
significantly different:
Z = |K-hat1 - K-hat2| / sqrt[var(K-hat1) + var(K-hat2)],   (4)
Since this Z test is valid for identical classification schemes only (Congalton and
Green, 1999), it was not used to compare the three- to two-category classification
results. P-values were derived from a standard normal distribution table in order to
interpret the significance of the difference between any two compared sets of
results.
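Equation (4) and the p-value lookup can be sketched as follows (a minimal illustration using the complementary error function in place of a printed normal table; the input values are the 1-m two-category results from Table 4.1):

```python
import math

def kappa_z_test(k1, var1, k2, var2):
    """Z statistic for two independent K-hat values (Equation 4)
    and its two-sided p-value under the standard normal distribution."""
    z = abs(k1 - k2) / math.sqrt(var1 + var2)
    p = math.erfc(z / math.sqrt(2.0))  # equals 2 * (1 - Phi(z))
    return z, p

# 1-m two-category per-segment (0.78, var 0.00206) versus
# 1-m two-category per-pixel (0.32, var 0.00467), from Table 4.1
z, p = kappa_z_test(0.78, 0.00206, 0.32, 0.00467)
print(round(z, 2), p < 0.0001)  # 5.61 True
```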
Results and Discussion
For each combination of resolution, category-level and mapping method, an error
matrix was constructed and statistical parameters were generated, as described in the
methods section. A summary of the results is presented in Table 4.1 and illustrated
graphically in Figure 4.4. The best mapping results of each system at each resolution
are shown in Figure 4.5. Comparisons were then made and inferences were derived
based on Z test statistics to quantitatively assess the effect of spatial resolution on the
relative performance of the two methods. First, each method was examined separately
at each resolution, then direct comparisons were made for each method between
different resolutions and, finally, the methods themselves were compared at each
resolution.
Figure 4.4. Overall accuracy of per-segment and per-pixel aspen mapping (three-category per-segment, two-category per-segment, and two-category per-pixel) plotted against spatial resolution (m).
Table 4.1. A summary of accuracy assessment results of all combinations of classification method,
category level and spatial resolution tested in this study of aspen mapping in Winter Ridge, Oregon.

Resolution   Classification Method             Category   Overall    K-hat       Var
                                               Level      Accuracy   Statistic   (K-hat)
1-m          per-segment, 2974 segments        3          0.88       0.82        0.00126
1-m          per-segment, 2974 segments        2          0.90       0.78        0.00206
1-m          per-pixel, 3x3 majority filter    2          0.67       0.32        0.00467
2-m          per-segment, 1662 segments        3          0.81       0.71        0.00189
2-m          per-segment, 902 segments         2          0.86       0.70        0.00262
2-m          per-pixel, 5x5 majority filter    2          0.70       0.32        0.00504
4-m          per-segment, 290 segments         3          0.62       0.45        0.00247
4-m          per-segment, 290 segments         2          0.87       0.71        0.00270
4-m          per-pixel, 5x5 majority filter    2          0.79       0.54        0.00386
8-m          per-segment, 50 segments          3          0.41       0.20        0.00538
8-m          per-segment, 50 segments          2          0.76       0.42        0.00545
8-m          per-pixel, 5x5 majority filter    2          0.78       0.54        0.00374
16-m         per-segment, 5 segments           3          0.30       0.07        0.00935
16-m         per-segment, 5 segments           2          0.68       0.17        0.0116
16-m         per-pixel, 5x5 majority filter    2          0.73       0.39        0.00477
32-m         per-pixel, 3x3 majority filter    2          0.69       0.27        0.00606
Figure 4.5. Per-segment and per-pixel classification results for aspen mapping at varying
spatial resolutions: (a) per-segment at 1-m resolution, (b) per-pixel at 1-m resolution, (c)
per-segment at 2-m resolution, (d) per-pixel at 2-m resolution, (e) per-segment at 4-m
resolution, (f) per-pixel at 4-m resolution, (g) per-segment at 8-m resolution, (h) per-pixel
at 8-m resolution, (i) per-segment at 16-m resolution, (j) per-pixel at 16-m resolution, (k)
per-segment at 32-m resolution, (l) per-pixel at 32-m resolution. (Legend: None, Aspen;
scale bar 0 to 400 meters.)
Per-segment classification
The original per-segment system for aspen mapping, developed by Heyman et al. (2003),
was based on 1-m ground resolution imagery and demonstrated a good distinction
between three categories of aspen coverage. In order to broaden the use of the system, to
implement it at coarser resolutions, and to compare it to per-pixel classification systems,
the effect of the parameterization of the segmentation process on the performance was
examined at varying resolutions for both three- and two-category aspen mapping.
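The candidate-generating segmentation can be sketched in Python (not part of the original analysis): threshold the hue and saturation layers, then label connected regions. The threshold values below are hypothetical placeholders, not the study's actual parameters.

```python
def segment_candidates(hue, sat, hue_lo=0.20, hue_hi=0.45, sat_lo=0.25):
    """Label 4-connected regions whose hue and saturation fall inside the
    aspen 'candidate' thresholds. Threshold values are hypothetical; hue
    and sat are row-major lists of values scaled to 0-1."""
    rows, cols = len(hue), len(hue[0])
    labels = [[0] * cols for _ in range(rows)]
    n = 0

    def inside(r, c):
        return hue_lo <= hue[r][c] <= hue_hi and sat[r][c] >= sat_lo

    for r in range(rows):
        for c in range(cols):
            if inside(r, c) and labels[r][c] == 0:
                n += 1
                stack = [(r, c)]
                labels[r][c] = n
                while stack:  # iterative flood fill, 4-connectivity
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and inside(ny, nx) and labels[ny][nx] == 0):
                            labels[ny][nx] = n
                            stack.append((ny, nx))
    return labels, n
```

Each labeled region then becomes one classification unit, summarized by the mean spectral and texture values of its pixels.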
At 1-m resolution, the original segmentation process generated 2974 segments. For the
three-category mapping, an 88 percent overall accuracy with a K-hat statistic of 0.82
was achieved. For the two-category mapping, a 90 percent overall accuracy with a K-
hat statistic of 0.78 was obtained (Table 4.2). Using hue alone at the same
thresholding levels almost doubled the number of segments (5629) and lowered
the three-category mapping accuracy by 8 percent, with moderate evidence for
a difference (p-value = 0.03).
At 2-m resolution, the original segmentation created only 327 segments, while
changing the thresholding parameters (without saturation and with low threshold of
hue set to zero, as explained in the methods section) generated 902 and 1662
segments, respectively. With three categories, the most aggressive segmentation (1662
segments) showed significantly better results (p-value < 0.004) with an 81 percent
overall accuracy and a K-hat statistic of 0.71 (Table 4.3). With two-category mapping,
however, the segmentation effect was less significant (0.07 < p-value < 0.40) and the
best results were achieved with the intermediate segmentation (902 segments),
showing an 86 percent overall accuracy and a K-hat statistic of 0.70 (Table 4.3).
At 4-m resolution, the number of segments dropped to 19, 65 and 290, depending on the
threshold parameters used. For both three- and two-category mappings, the most
aggressive segmentation (290 segments) performed significantly better (p-value <
0.004). The overall accuracy dropped to 62 percent with a K-hat statistic of 0.45 for
three-category mapping and remained high for two-category mapping with an 87
percent overall accuracy and a K-hat statistic of 0.71 (Table 4.4).
At 8-m resolution, the most aggressive segmentation produced 50 segments and
performed better than the intermediate segmentation that produced only 5 segments.
The third segmentation tuning produced no segments and was therefore excluded. Overall
accuracy dropped to a very low level for three-category mapping, with a 41 percent
overall accuracy and a K-hat statistic of 0.20. A 76 percent overall accuracy with a K-hat statistic of 0.42 was obtained for two-category mapping (Table 4.5).
At 16-m resolution, only 5 segments were created by the most aggressive
segmentation, but these segments included the largest aspen stands and thus kept
the classification meaningful. With three-category mapping, the results were
meaningless (a 30 percent overall accuracy with a K-hat statistic of 0.07). With two-
category mapping, overall accuracy was 68 percent but with a low K-hat statistic of
0.17 (Table 4.6). At 32-m resolution, no more than a single segment was generated
and hence no aspen mapping could be done.
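Producing the coarser-resolution images implies resampling the 1-m data. As an illustrative assumption (the study's actual resampling method is described in its methods section and not restated here), simple block averaging looks like:

```python
def degrade(image, factor):
    """Coarsen a single-band raster by averaging factor x factor blocks,
    e.g. 1-m to 4-m with factor=4. Block averaging is an illustrative
    assumption about the resampling, not the study's stated method."""
    rows = len(image) // factor * factor   # trim edges that do not fill a block
    cols = len(image[0]) // factor * factor
    out = []
    for r in range(0, rows, factor):
        out.append([
            sum(image[rr][cc]
                for rr in range(r, r + factor)
                for cc in range(c, c + factor)) / factor ** 2
            for c in range(0, cols, factor)
        ])
    return out
```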
Per-pixel classification
The per-pixel classification was based on the ISODATA algorithm with 20
classes.
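The clustering step can be illustrated with a minimal assign-and-update loop, which forms the core of ISODATA (the full algorithm also splits and merges clusters); the deterministic initialization from the first pixel vectors is a simplification for illustration, not the study's implementation.

```python
def isodata_like(points, n_classes, max_iter=24):
    """Minimal unsupervised clustering sketch: the assign/update loop at
    the core of ISODATA (cluster splitting and merging omitted).
    points: sequence of pixel vectors (tuples of layer values)."""
    centers = [list(p) for p in points[:n_classes]]  # simplified deterministic init
    labels = [0] * len(points)
    for _ in range(max_iter):
        # assign every pixel vector to its nearest class center
        for i, p in enumerate(points):
            labels[i] = min(range(n_classes),
                            key=lambda k: sum((a - b) ** 2
                                              for a, b in zip(p, centers[k])))
        # recompute each center as the mean of its members (empty classes keep theirs)
        for k in range(n_classes):
            members = [points[i] for i, lab in enumerate(labels) if lab == k]
            if members:
                centers[k] = [sum(v) / len(members) for v in zip(*members)]
    return labels, centers
```

In the study the resulting 20 spectral classes were then aggregated into the aspen categories; that aggregation step is not shown here.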
Before comparing the results from the per-pixel classification method at varying
resolutions to the corresponding per-segment results, the effect of applying majority
filters was tested as a function of spatial resolution. Although Heyman and Kimerling
(in review) did not find significant differences in classification results at the 1-m
ground resolution using various majority filters (without applying any majority filter
and with either a 3x3 or a 5x5 filter), the spatial-resolution dependence of the majority
filter suggests that similar examinations are needed for the various resolutions used in
this study.
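A majority filter of the kind tested here can be sketched as follows; this is a generic formulation (the study used ERDAS Imagine's implementation), with border pixels left unchanged for simplicity.

```python
from collections import Counter

def majority_filter(classed, size=3):
    """Replace each interior pixel's class with the most frequent class in
    the size x size window around it; border pixels keep their class."""
    half = size // 2
    rows, cols = len(classed), len(classed[0])
    out = [row[:] for row in classed]
    for r in range(half, rows - half):
        for c in range(half, cols - half):
            window = Counter(classed[rr][cc]
                             for rr in range(r - half, r + half + 1)
                             for cc in range(c - half, c + half + 1))
            out[r][c] = window.most_common(1)[0][0]
    return out
```

Applied to a per-pixel classification, the filter removes isolated misclassified pixels ("salt-and-pepper" noise) at the cost of smoothing class boundaries.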
At both 1- and 2-m resolutions, no significant difference was found between the three
filters (p-value > 0.5). At 1-m resolution, the best results were obtained using a 3x3
filter with a 67 percent overall accuracy and a K-hat statistic of 0.32 (Table 4.2). At 2-
m resolution, the best results were found using a 5x5 filter, with a 70 percent overall
accuracy and a K-hat statistic of 0.32 (Table 4.3).
At both 4- and 8-m resolutions, the best results were obtained using a 5x5 filter, which
produced 79 and 78 percent overall accuracies with identical K-hat statistics of 0.54
(Table 4.4, Table 4.5). Using a 3x3 filter lowered the accuracy by 3-6 percent with no
evidence for a difference (p-value > 0.2), whereas the results without any filter
showed suggestive but inconclusive evidence for a difference (p-value = 0.05).
At 16- and 32-m resolutions, the differences in performance were insignificant (p-value > 0.3). At the 16-m resolution, the best results were achieved using a 5x5 filter,
with a 73 percent overall accuracy and a K-hat statistic of 0.39 (Table 4.6), and at the
32-m resolution using a 3x3 filter, with a 69 percent overall accuracy and a K-hat
statistic of 0.27 (Table 4.7).
Table 4.2. Error matrices for accuracy assessment of aspen mapping using imagery at 1-m ground resolution.

(a) per-segment classification (2974 segments), three-category level.

Mapped \ Reference    No aspens   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspens                    47             3             0      50             94%
< 50% aspen                   0            61             8      69             88%
≥ 50% aspen                   0            13            68      81             84%
Total                        47            77            76     200
Producer Accuracy          100%           79%           89%
Overall Accuracy: 88%    K-hat Statistic: 0.82    Var (K-hat): 0.00126

(b) per-segment classification (2974 segments), two-category level.

Mapped \ Reference   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspen                    111             8     119             93%
Aspen                        13            68      81             84%
Total                       124            76     200
Producer Accuracy           90%           89%
Overall Accuracy: 90%    K-hat Statistic: 0.78    Var (K-hat): 0.00206

(c) per-pixel classification (3x3 majority filter), two-category level.

Mapped \ Reference   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspen                     85            27     112             76%
Aspen                        39            49      88             56%
Total                       124            76     200
Producer Accuracy           69%           64%
Overall Accuracy: 67%    K-hat Statistic: 0.32    Var (K-hat): 0.00467
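The accuracy figures in these tables follow directly from the error matrices. As a check, the overall accuracy and K-hat statistic for the 1-m three-category per-segment matrix of Table 4.2(a) can be recomputed (Python shown purely for illustration):

```python
# Error matrix from Table 4.2(a): rows = mapped class, columns = reference class
m = [[47, 3, 0],
     [0, 61, 8],
     [0, 13, 68]]

n = sum(map(sum, m))                                    # 200 reference points
row_totals = [sum(r) for r in m]                        # mapped totals: 50, 69, 81
col_totals = [sum(r[j] for r in m) for j in range(3)]   # reference totals: 47, 77, 76

overall = sum(m[i][i] for i in range(3)) / n            # observed agreement
chance = sum(row_totals[i] * col_totals[i] for i in range(3)) / n ** 2
k_hat = (overall - chance) / (1 - chance)               # agreement beyond chance

print(round(overall, 2))  # 0.88
print(round(k_hat, 2))    # 0.82
```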
Table 4.3. Error matrices for accuracy assessment of aspen mapping using imagery at 2-m ground resolution.

(a) per-segment classification (1662 segments), three-category level.

Mapped \ Reference    No aspens   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspens                    44             1             2      47             94%
< 50% aspen                   3            59            15      77             77%
≥ 50% aspen                   0            17            59      76             78%
Total                        47            77            76     200
Producer Accuracy           94%           77%           78%
Overall Accuracy: 81%    K-hat Statistic: 0.71    Var (K-hat): 0.00189

(b) per-segment classification (902 segments), two-category level.

Mapped \ Reference   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspen                    104             9     113             92%
Aspen                        20            67      87             77%
Total                       124            76     200
Producer Accuracy           84%           88%
Overall Accuracy: 86%    K-hat Statistic: 0.70    Var (K-hat): 0.00262

(c) per-pixel classification (5x5 majority filter), two-category level.

Mapped \ Reference   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspen                    102            39     141             72%
Aspen                        22            37      59             63%
Total                       124            76     200
Producer Accuracy           82%           49%
Overall Accuracy: 70%    K-hat Statistic: 0.32    Var (K-hat): 0.00504
Table 4.4. Error matrices for accuracy assessment of aspen mapping using imagery at 4-m ground resolution.

(a) per-segment classification (290 segments), three-category level.

Mapped \ Reference    No aspens   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspens                    46            49            15     110             42%
< 50% aspen                   1            18             2      21             86%
≥ 50% aspen                   0            10            59      69             86%
Total                        47            77            76     200
Producer Accuracy           98%           23%           78%
Overall Accuracy: 62%    K-hat Statistic: 0.45    Var (K-hat): 0.00247

(b) per-segment classification (290 segments), two-category level.

Mapped \ Reference   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspen                    114            17     131             87%
Aspen                        10            59      69             86%
Total                       124            76     200
Producer Accuracy           92%           78%
Overall Accuracy: 87%    K-hat Statistic: 0.71    Var (K-hat): 0.00270

(c) per-pixel classification (5x5 majority filter), two-category level.

Mapped \ Reference   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspen                    106            25     131             81%
Aspen                        18            51      69             74%
Total                       124            76     200
Producer Accuracy           85%           67%
Overall Accuracy: 79%    K-hat Statistic: 0.54    Var (K-hat): 0.00387
Table 4.5. Error matrices for accuracy assessment of aspen mapping using imagery at 8-m ground resolution.

(a) per-segment classification (50 segments), three-category level.

Mapped \ Reference    No aspens   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspens                    47            69            45     161             29%
< 50% aspen                   0             4             0       4            100%
≥ 50% aspen                   0             4            31      35             89%
Total                        47            77            76     200
Producer Accuracy          100%            5%           41%
Overall Accuracy: 41%    K-hat Statistic: 0.20    Var (K-hat): 0.00538

(b) per-segment classification (50 segments), two-category level.

Mapped \ Reference   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspen                    120            45     165             73%
Aspen                         4            31      35             89%
Total                       124            76     200
Producer Accuracy           97%           41%
Overall Accuracy: 76%    K-hat Statistic: 0.42    Var (K-hat): 0.00545

(c) per-pixel classification (5x5 majority filter), two-category level.

Mapped \ Reference   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspen                    100            20     120             83%
Aspen                        24            56      80             70%
Total                       124            76     200
Producer Accuracy           81%           74%
Overall Accuracy: 78%    K-hat Statistic: 0.54    Var (K-hat): 0.00374
Table 4.6. Error matrices for accuracy assessment of aspen mapping using imagery at 16-m ground resolution.

(a) per-segment classification (5 segments), three-category level.

Mapped \ Reference    No aspens   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspens                    47            76            65     188             25%
< 50% aspen                   0             1             0       1            100%
≥ 50% aspen                   0             0            11      11            100%
Total                        47            77            76     200
Producer Accuracy          100%            1%           14%
Overall Accuracy: 30%    K-hat Statistic: 0.07    Var (K-hat): 0.00935

(b) per-segment classification (5 segments), two-category level.

Mapped \ Reference   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspen                    124            65     189             66%
Aspen                         0            11      11            100%
Total                       124            76     200
Producer Accuracy          100%           14%
Overall Accuracy: 68%    K-hat Statistic: 0.17    Var (K-hat): 0.00116

(c) per-pixel classification (5x5 majority filter), two-category level.

Mapped \ Reference   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspen                    105            36     141             74%
Aspen                        19            40      59             68%
Total                       124            76     200
Producer Accuracy           85%           53%
Overall Accuracy: 73%    K-hat Statistic: 0.39    Var (K-hat): 0.00477
Table 4.7. Error matrix for accuracy assessment of aspen mapping using imagery at 32-m ground resolution.

(a) per-pixel classification (3x3 majority filter), two-category level.

Mapped \ Reference   < 50% aspen   ≥ 50% aspen   Total   User Accuracy
No aspen                    109            48     157             69%
Aspen                        15            28      43             65%
Total                       124            76     200
Producer Accuracy           88%           37%
Overall Accuracy: 69%    K-hat Statistic: 0.27    Var (K-hat): 0.00606
Inter-resolution comparisons
To investigate the direct effect of spatial resolution on classifier performance,
the results at the six spatial resolutions were compared and statistical
inferences were derived as described in the methods section.
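These pairwise comparisons can be sketched with the standard two-sample z-test for the difference between two K-hat statistics (Congalton and Green, 1999); that this exact test produced the reported p-values is an assumption, and the input values below are taken from Tables 4.2(b) and 4.2(c).

```python
import math

def kappa_z_test(k1, var1, k2, var2):
    """Two-sample z-test for the difference between two K-hat statistics,
    as in Congalton and Green (1999). Assumes the two matrices are
    independent samples."""
    z = abs(k1 - k2) / math.sqrt(var1 + var2)
    # two-tailed p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return z, p

# 1-m two-category results: per-segment (Table 4.2b) vs per-pixel (Table 4.2c)
z, p = kappa_z_test(0.78, 0.00206, 0.32, 0.00467)
print(round(z, 2))  # 5.61
print(p < 0.0001)   # True
```

The resulting p-value far below 0.0001 matches the "convincing evidence" reported for the 1-m inter-method comparison.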
Three-category per-segment classifier
Overall accuracy of the three-category per-segment classifier at 1-m resolution was
higher by 7 percent than the accuracy at 2-m resolution with suggestive but
inconclusive evidence for a difference (p-value = 0.054). A very significant drop in
performance occurred in the transition from 2- to 4-m resolution (20 percent
difference with p-value < 0.0001) and from 4- to 8-m resolution (21 percent difference
with p-value = 0.0047). Between 8- and 16-m resolutions, a non-significant (p-value =
0.3) drop of 12 percent in overall accuracy was found, while at 32-m resolution the
classifier appeared useless.
Two-category per-segment classifier
The two-category classifier maintained a high level of overall accuracy with no
significant differences (p-value > 0.21) at 1-m resolution (90 percent), 2-m resolution
(86 percent) and 4-m resolution (87 percent). At 8-m resolution, overall accuracy
decreased to 76 percent with convincing evidence for a difference from 4-m resolution
(p-value = 0.0014), while at 16-m the overall accuracy further dropped by 8 percent
with moderate evidence for a difference (p-value = 0.06).
Two-category per-pixel classifier
In general, the per-pixel classifier showed moderate changes in performance across all
tested resolutions, with overall accuracy ranging from 67 to 79 percent. From 1- to 2-
m resolution, overall accuracy increased by 3 percent with no evidence for a
difference (p-value > 0.9), while at 4-m resolution a more significant (p-value = 0.025)
9 percent increase was observed. At both 4- and 8-m resolution, the best results (78-79
percent overall accuracy) among all resolutions were achieved with no significant
difference between the two (p-value > 0.9). Overall accuracy decreased to 73 percent
at 16-m resolution with suggestive evidence for a difference (p-value = 0.11) and 69
percent at 32-m resolution with no evidence for further difference (p-value = 0.25).
Inter-method comparisons
Statistical comparisons were made between the corresponding per-segment and per-
pixel classification results at the same resolution for two-category aspen mapping. At
1- and 2-m resolutions, the per-segment method outperformed the per-pixel classifier
by 23 and 16 percent, respectively, in overall accuracy with convincing evidence for a
difference (p-value < 0.0001). At the 4-m resolution, the per-segment results were
higher by only 8 percent, with moderate evidence for a difference (p-value = 0.03). At
8- and 16-m resolutions, no significant difference was found (p-value = 0.22 and 0.09,
respectively), where the per-segment classifier's overall accuracy was lower by 3 and
5 percent, respectively. At the 32-m resolution, no comparison was made since the
per-segment method generated only a single segment.
Although these comparisons brought out some limitations of the per-segment method,
they confirmed the superiority of the per-segment system at high spatial resolution and
pointed out directions for improvement.
Conclusion
This study demonstrates that a per-segment classification approach for aspen mapping
yields significantly higher accuracy results than a traditional per-pixel method when
using high resolution data at 1-4-m ground pixel size. The results presented in this
study, however, are of significance beyond the direct comparison of the two
classification methods. They show the effect of spatial resolution on each method and
its main parameters, giving a better understanding of how robust the methods are at
varying spatial resolutions.
Overall, the per-segment classifier was found to be more sensitive to changes in
spatial resolution over the 1-32-m range than the per-pixel classifier. The per-segment
classifier, originally developed for data at 1-m ground resolution, demonstrated
impressive performance in distinguishing between three categories of aspen coverage.
However, it did not maintain this level of accuracy at resolutions of 4-m or coarser
even with tuning of the segmentation algorithm. For two categories of aspen coverage,
high accuracies (86-90 percent) were achieved down to 4-m resolution, with lower but
reasonable accuracies obtained at coarser resolutions of 8- and 16-m (68-76 percent).
The main conclusion from these results is that the segmentation process is scale-
dependent and merely tuning the algorithm may not make it robust enough. In order to
achieve similar high performance at various resolutions, a different segmentation
algorithm should be developed for each.
The per-pixel system, which cannot distinguish between more than two categories of
aspen cover, did not yield overall accuracy higher than 79 percent at any spatial
resolution. Nevertheless, this method maintained a much more uniform level of
accuracy across resolutions, indicating greater robustness to
resolution changes. An increase in mapping accuracy with decreasing resolution,
which has been reported in the literature for per-pixel classifications, was observed
between 1-2-m and 4-8-m resolutions, but was inverted, although not significantly,
between the 4-8-m and 16-32-m resolutions.
In summary, even with its limited robustness, the per-segment approach outperformed
the per-pixel one significantly at the 1-, 2- and 4-m spatial resolutions, whereas at 8- and 16-m resolutions the performances were not significantly different. These results
show the advantages of the per-segment classifier and encourage its use in other
applications in order to reach high accuracy levels using remote sensing data at spatial
resolutions of 4-m or finer. Even at coarser resolutions, the per-segment approach has
a good chance to perform better if the segmentation process has been developed for
data at a similar resolution. In addition, further information can be extracted from the
image, such as shadows, that can be used in the per-segment mapping to exploit
specific characteristics of the segments that cannot be applied to individual pixels.
Moreover, these comparisons are particularly important as they provide the incentive
to further develop the per-segment aspen classification system and apply it in other
areas. Yellowstone National Park is of particular interest for aspen mapping (Hessl,
2002; Ripple, 2003) and would be a good choice for both enhancing the per-segment
system and making it useful for change analysis on a wider scale.
Although developing per-segment mapping systems requires higher image processing
skills and a deeper understanding of the remote sensing data at hand, the results
presented in this study indicate the effort is well expended. With the development of
off-the-shelf programs for segmentation that allow the user to implement the concept
through user-friendly interfaces, it is reasonable to assume that more remote sensing
experts will adopt this method, especially when high accuracy is paramount and fine
spatial resolution imagery is required.
References
APLIN, P., P. M. ATKINSON, AND P. J. CURRAN, 1999. Per-field classification of land use
using the forthcoming very fine spatial resolution satellite sensors: problems and
potential solutions. In Advances in Remote Sensing and GIS Analysis (ATKINSON,
P. M., and N. J. TATE, Eds.), John Wiley & Sons: 219-239.
BOLSTAD, P. V., AND T. M. LILLESAND, 1992. Improved classification of forest
vegetation in northern Wisconsin through a rule-based combination of soils,
terrain, and Landsat Thematic Mapper data. Forest Science 38 (1): 5-20.
CHEN, D. M., AND D. STOW, 2002. The effect of training strategies on supervised
classification at different spatial resolutions. Photogrammetric Engineering &
Remote Sensing 68 (11): 1155-1161.
CONGALTON, R. G., AND K. GREEN, 1999. Assessing the accuracy of remotely sensed
data: principles and practices. Lewis Publishers, Boca Raton, Florida, 137 p.
CUSHNIE, J. L., 1987. The interactive effect of spatial resolution and degree of internal
variability within land-cover types on classification accuracies. International
Journal of Remote Sensing 8 (1): 15-29.
ERDAS LLC, 2002. ERDAS field guide, 6th edition. Atlanta, Georgia.
FRANKLIN, S. E., A. J. MAUDIE, AND M. B. LAVIGNE, 2001. Using spatial co-occurrence texture to increase forest structure and species composition
classification accuracy. Photogrammetric Engineering & Remote Sensing 67 (7):
849-855.
GENELETTI, D., AND B. G. H. GORTE, 2003. A method for object-oriented land cover
classification combining Landsat TM data and aerial photographs. International
Journal of Remote Sensing 24 (6): 1273-1286.
HESSL, A., 2002. Aspen, elk, and fire: the effects of human institutions on ecosystem
processes. BioScience 52 (11): 1011-1022.
HEYMAN, O., AND A. J. KIMERLING, in review. Per-segment vs. per-pixel classification
of aspen stands from high-resolution remote sensing data. Remote Sensing of
Environment.
HEYMAN, O., G. G. GASTON, A. J. KIMERLING, AND J. T. CAMPBELL, 2003. A per-
segment approach to improving aspen mapping from high-resolution remote
sensing imagery. Journal of Forestry 101 (4): 29-33.
HSIEH, P.-F., L. C. LEE, AND N.-Y. CHEN, 2001. Effect of spatial resolution on
classification errors of pure and mixed pixels in remote sensing. IEEE
Transactions on Geoscience and Remote Sensing 39 (12): 2657-2663.
IRONS, J. R., B. L. MARKHAM, R. F. NELSON, D. L. TOLL, D. L. WILLIAMS, R. S. LATTY,
AND M. L. STAUFFER, 1985. The effects of spatial resolution on the classification
of Thematic Mapper data. International Journal of Remote Sensing 6 (8): 1385-1403.
JOHNSSON, K., 1994. Segment-based land-use classification from SPOT satellite data.
Photogrammetric Engineering & Remote Sensing 60 (1): 47-53.
Joy, S. M., R. M. REICH, AND R. T. REYNOLDS, 2003. A non-parametric, supervised
classification of vegetation types on Kaibab National Forest using decision trees.
International Journal of Remote Sensing 24 (9): 1835-1852.
KALKHAN, M. A., R. M. REICH, AND T. J. STOHLGREN, 1998. Assessing the accuracy of
Landsat Thematic Mapper classification using double sampling. International
Journal of Remote Sensing 19 (11): 2049-2060.
KOKALY, R. F., D. G. DESPAIN, R. N. CLARK, AND K. E. LIVO, 2003. Mapping
vegetation in Yellowstone National Park using spectral feature analysis of
AVIRIS data. Remote Sensing of Environment 84: 437-456.
LABA, M., S. K. GREGORY, J. BRADEN, D. OGURCAK, E. HILL, E. FEGRAUS, J. FIORE,
AND S. D. DEGLORIA, 2002. Conventional and fuzzy accuracy assessment of the
New York Gap Analysis Project land cover map. Remote Sensing of Environment
81 (2-3): 443-455.
LOBO, A., 1997. Image segmentation and discriminant analysis for the identification of
land cover units in ecology. IEEE Transactions on Geoscience and Remote
Sensing 35 (5): 1136-1145.
LOBO, A., O. CHIC, and A. CASTERAD, 1996. Classification of Mediterranean crops
with multisensor data: per-pixel versus per-object statistics and image
segmentation. International Journal of Remote Sensing 17 (12): 2385-2400.
MUMBY, P. J., AND A. J. EDWARDS, 2002. Mapping marine environments with
IKONOS imagery: enhanced spatial resolution can deliver greater thematic
accuracy. Remote Sensing of Environment 82: 248-257.
RIPPLE, W. J., 2003. The aspen project. Available online at
www.cof.orst.edu/cof/fr/research/aspen/.
RYHERD, S., and C. WOODCOCK, 1996. Combining spectral and texture data in the
segmentation of remotely sensed images. Photogrammetric Engineering &
Remote Sensing 62 (2): 181-194.
U.S. GEOLOGICAL SURVEY (USGS), 2003. National High Altitude Photography
(NHAP). Available online at http://edc.usgs.gov/products/aerial/nhap.html.
USTIN, S. L., AND Q. F. XIAO, 2001. Mapping successional boreal forests in interior
central Alaska. International Journal of Remote Sensing 22 (9): 1779-1797.
Chapter 5. CONCLUSIONS
Aspen mapping from 1-m NHAP CIR imagery using various per-pixel classifications
yielded no more than 67 percent overall accuracy with a K-hat statistic of 0.36. Even
with texture statistics added and major parameters of the clustering algorithm changed,
the results could not be further improved. This leads to the conclusion that with the
given data a different approach for the classification of aspens should be taken in order
to map stands in the study area in central Oregon successfully and reliably. The per-segment approach developed through this research showed a significant improvement
in the mapping results, obtaining an 88 percent overall accuracy and a K-hat statistic
of 0.82 for three-level mapping and 90 percent overall accuracy with a K-hat statistic
of 0.78 for two-level mapping. This study demonstrates that a per-segment
classification approach for aspen mapping yields significantly higher accuracy results
than a traditional per-pixel method when using high resolution data at 1-4-m ground
pixel size. The results presented in this study, however, are of significance beyond the
direct comparison of the two classification methods. They show the effect of spatial
resolution on each classification method and its main parameters, giving a better
understanding of how robust the methods are at varying spatial resolutions.
Overall, the per-segment classifier was found to be more sensitive to changes in
spatial resolution over the 1-32-m range than the per-pixel classifier. The per-segment
classifier, originally developed for data at 1-m ground resolution, demonstrated
impressive performance in distinguishing between three categories of aspen coverage.
However, it did not maintain this level of accuracy at resolutions of 4-m or coarser
even with tuning of the segmentation algorithm. The main conclusion from these
results is that the segmentation process is scale-dependent and merely tuning the
algorithm may not make it robust enough. In order to achieve similar high
performance at various resolutions, a different segmentation algorithm should be
developed for each. Per-pixel classifications did not yield overall accuracy higher than
79 percent at any spatial resolution, but maintained a much more uniform level of
accuracy across resolutions, indicating greater robustness to
resolution changes.
The results presented in this study show the advantages of the per-segment classifier
and encourage its use in other applications in order to reach high accuracy levels using
remote sensing data at spatial resolutions of 4-m or finer. Even at coarser resolutions,
the per-segment approach has a good chance to perform better if the segmentation
process has been developed for data at a similar resolution. In addition, further
information can be extracted from the image, such as shadows, that can be used in the
per-segment mapping to exploit specific characteristics of the segments that cannot be
applied to individual pixels. Moreover, these comparisons are particularly important as
they provide the incentive to further develop the per-segment aspen classification
system and apply it in other areas. Yellowstone National Park is of particular interest
for aspen mapping (Hessl, 2002; Ripple, 2003) and would be a good choice for both
enhancing the per-segment system and making it useful for change analysis on a wider
scale.
Although developing per-segment mapping systems requires higher image processing
skills and a deeper understanding of the remote sensing data at hand, the results
presented in this study indicate the effort is well expended. With the development of
off-the-shelf programs for segmentation that allow the user to implement the concept
through user-friendly interfaces, it is reasonable to assume that more remote sensing
experts will adopt this method, especially when high accuracy is paramount and fine
spatial resolution imagery is required.
BIBLIOGRAPHY
ANDERSON, J. R., E. E. HARDY, J.T. ROACH, AND R. E. WITMER, 1976. A land use and
land cover classification system for use with remote sensor data. USGS
Professional Paper No. 964, Washington DC, 28 p.
APLIN, P., P. M. ATKINSON, AND P. J. CURRAN, 1999. Per-field classification of land use
using the forthcoming very fine spatial resolution satellite sensors: problems and
potential solutions. In Advances in Remote Sensing and GIS Analysis (ATKINSON,
P. M., and N. J. TATE, Eds.), John Wiley & Sons: 219-239.
BOLSTAD, P. V., AND T. M. LILLESAND, 1992. Improved classification of forest
vegetation in northern Wisconsin through a rule-based combination of soils,
terrain, and Landsat Thematic Mapper data. Forest Science 38 (1): 5-20.
CHEN, D. M., AND D. STOW, 2002. The effect of training strategies on supervised
classification at different spatial resolutions. Photogrammetric Engineering &
Remote Sensing 68 (11): 1155-1161.
CONGALTON, R. G., AND K. GREEN, 1999. Assessing the accuracy of remotely sensed
data: principles and practices. Lewis Publishers, Boca Raton, Florida, 137 p.
CUSHNIE, J. L., 1987. The interactive effect of spatial resolution and degree of internal
variability within land-cover types on classification accuracies. International
Journal of Remote Sensing 8 (1): 15-29.
DEBEIR, O., I. VAN DEN STEEN, P. LATINNE, P. VAN HAM, AND E. WOLFF, 2002.
Textural and contextual land-cover classification using single and multiple
classifier systems. Photogrammetric Engineering & Remote Sensing 68 (6): 597-605.
DEBYLE, N. V., 1985. Wildlife. In Aspen: Ecology and Management in the Western
United States (DEBYLE, N. V., AND R. P. WINOKUR, Eds.), USDA Forest Service
General Technical Report RM-119: 135-152.
DIENI, J. S., and S. H. ANDERSON, 1997. Ecology and management of Aspen forests in
Wyoming, literature review and bibliography. Wyoming Cooperative Fish and
Wildlife Research Unit, University of Wyoming, 118 pp.
ERDAS LLC, 2002. ERDAS field guide, 6th edition. Atlanta, Georgia.
ERDAS, Inc. 1999. ERDAS field guide, 5th edition. Atlanta, Georgia.
FOREST RESEARCH LABORATORY (FRL), 1998. Seeking the causes of change. In Forest
Research Laboratory biennial report 1996-1998, project 15. Corvallis: Oregon
State University. Available online at
www.cof.orst.edu/cof/pub/home/biforweb/body/text/proj15.htm.
FRANKLIN, S. E., A. J. MAUDIE, AND M. B. LAVIGNE, 2001. Using spatial co-occurrence texture to increase forest structure and species composition
classification accuracy. Photogrammetric Engineering & Remote Sensing 67 (7):
849-855.
GENELETTI, D., AND B. G. H. GORTE, 2003. A method for object-oriented land cover
classification combining Landsat TM data and aerial photographs. International
Journal of Remote Sensing 24 (6): 1273-1286.
HESSL, A., 2002. Aspen, elk, and fire: the effects of human institutions on ecosystem
processes. BioScience 52 (11): 1011-1022.
HSIEH, P.-F., L. C. LEE, AND N.-Y. CHEN, 2001. Effect of spatial resolution on
classification errors of pure and mixed pixels in remote sensing. IEEE
Transactions on Geoscience and Remote Sensing 39 (12): 2657-2663.
IRONS, J. R., B. L. MARKHAM, R. F. NELSON, D. L. TOLL, D. L. WILLIAMS, R. S. LATTY,
AND M. L. STAUFFER, 1985. The effects of spatial resolution on the classification
of Thematic Mapper data. International Journal of Remote Sensing 6 (8): 1385-1403.
JENSEN, J. R., 1996. Introductory digital image processing, a remote sensing
perspective. Prentice Hall, Upper Saddle River, New Jersey, 318 p.
JOHNSSON, K., 1994. Segment-based land-use classification from SPOT satellite data.
Photogrammetric Engineering & Remote Sensing 60 (1): 47-53.
JONES, J. R., 1985. Distribution. In Aspen: Ecology and management in the western
United States (DEBYLE, N. V., AND R. P. WINOKUR, Eds.), 9-10. General Technical
Report RM-119. Fort Collins, CO: USDA Forest Service, Rocky Mountain
Research Station.
Joy, S. M., R. M. REICH, AND R. T. REYNOLDS, 2003. A non-parametric, supervised
classification of vegetation types on Kaibab National Forest using decision trees.
International Journal of Remote Sensing 24 (9): 1835-1852.
KALKHAN, M. A., R. M. REICH, AND T. J. STOHLGREN, 1998. Assessing the accuracy of
Landsat Thematic Mapper classification using double sampling. International
Journal of Remote Sensing 19 (11): 2049-2060.
KOKALY, R. F., D. G. DESPAIN, R. N. CLARK, AND K. E. LIVO, 2003. Mapping
vegetation in Yellowstone National Park using spectral feature analysis of
AVIRIS data. Remote Sensing of Environment 84: 437-456.
LABA, M., S. K. GREGORY, J. BRADEN, D. OGURCAK, E. HILL, E. FEGRAUS, J. FIORE,
AND S. D. DEGLORIA, 2002. Conventional and fuzzy accuracy assessment of the
New York Gap Analysis Project land cover map. Remote Sensing of Environment
81 (2-3): 443-455.
LOBO, A., 1997. Image segmentation and discriminant analysis for the identification of
land cover units in ecology. IEEE Transactions on Geoscience and Remote
Sensing 35 (5): 1136-1145.
LOBO, A., O. CHIC, AND A. CASTERAD, 1996. Classification of Mediterranean crops
with multisensor data: per-pixel versus per-object statistics and image
segmentation. International Journal of Remote Sensing 17 (12): 2385-2400.
MUMBY, P. J., AND A. J. EDWARDS, 2002. Mapping marine environments with
IKONOS imagery: enhanced spatial resolution can deliver greater thematic
accuracy. Remote Sensing of Environment 82: 248-257.
OREGON CLIMATE SERVICE (OCS), 2001. Zone 5 - Climate data archives. Available
online at www.ocs.orst.edu/allzone/allzone5.html.
RIPPLE, W. J., 2003. The aspen project. Available online at
www.cof.orst.edu/cof/fr/research/aspen/.
RYHERD, S., AND C. WOODCOCK, 1996. Combining spectral and texture data in the
segmentation of remotely sensed images. Photogrammetric Engineering &
Remote Sensing 62 (2): 181-194.
SALAJANU, D., AND C. E. OLSON, 2001. The significance of spatial resolution:
Identifying forest cover from satellite data. Journal of Forestry 99 (6): 32-38.
U.S. GEOLOGICAL SURVEY (USGS), 2001. National High Altitude Photography and
National Aerial Photography Program. Available online at
http://edc.usgs.gov/Webglis/glisbin/guide.pl/glis/hyper/guide/napp.
U.S. GEOLOGICAL SURVEY (USGS), 2003. National High Altitude Photography
(NHAP). Available online at http://edc.usgs.gov/products/aerial/nhap.html.
USTIN, S. L., AND Q. F. XIAO, 2001. Mapping successional boreal forests in interior
central Alaska. International Journal of Remote Sensing 22 (9): 1779-1797.
WILSON, E. H., AND S. A. SADER, 2002. Detection of forest harvest type using multiple
dates of Landsat TM imagery. Remote Sensing of Environment 80 (3): 385-396.
ZHANG, Y., 2001. Texture-integrated classification of urban treed areas in high-resolution color-infrared imagery. Photogrammetric Engineering and Remote
Sensing 67 (12): 1359-1365.