Lab 4: Supervised and Unsupervised Classification


Geog 477-Lab 4 Due November 9, 2006


In this lab you will classify the UNC Ikonos image using unsupervised and supervised methods in ERDAS Imagine. Classification is the process of assigning individual pixels of a multi-spectral image to discrete categories. By assembling groups of similar pixels into classes, we can form uniform regions or parcels to be displayed as a specific color or symbol. Remember that although these classes appear homogeneous, they can be made up of heterogeneous pixel values; therefore, each class will exhibit some degree of variability.

In unsupervised classification, clusters of pixels are separated based on statistically similar spectral response patterns rather than user-defined criteria. Each pixel in an image is compared to the discrete clusters to determine which group it is closest to. Colors are then assigned to each cluster, and the analyst interprets the clusters after classification based on knowledge of the scene or by visiting the location on the ground (groundtruthing).

The supervised classification method requires the analyst to specify the desired classes up front, and these are defined by creating spectral signatures for each class. In a supervised classification, the analyst locates specific training areas in the image that represent homogeneous examples of known land cover types. The statistical data from each training site are then used to classify the pixel values for the entire scene into likely classes according to some decision rule or classifier.


Part I- Unsupervised Classification

For the unsupervised classification of the UNC campus, we will use k-means clustering, which defines image classes by determining the optimal partitioning of the data distribution into a specified number of subdivisions.
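ERDAS Imagine performs this clustering internally; as a conceptual illustration only (a minimal sketch, not necessarily Imagine's exact implementation), k-means over pixel spectral vectors can be written in a few lines of numpy. The pixel values below are made up:

```python
import numpy as np

def kmeans(pixels, k, n_iter=20, seed=0):
    """Minimal k-means: assign each pixel vector to the nearest of k
    cluster means, then recompute the means; repeat."""
    rng = np.random.default_rng(seed)
    # initialize the cluster means from randomly chosen pixels
    means = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(n_iter):
        # distance of every pixel to every cluster mean
        d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                means[c] = pixels[labels == c].mean(axis=0)
    return labels, means

# toy "image": two spectrally distinct groups of 2-band pixels
pixels = np.array([[10., 10.], [11., 9.], [9., 11.],
                   [80., 200.], [82., 198.], [79., 201.]])
labels, means = kmeans(pixels, k=2)
```

With well-separated data like this, the two groups end up in different clusters regardless of which pixels seed the means.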

Start ERDAS Image and open uncsubset.img in a new viewer.

 Classifier | Unsupervised Classification…

Click on the folder icon next to the Input Raster File. Navigate to your working directory and select uncsubset.img.

Click on the folder icon next to Output Cluster Layer filename and navigate to your directory. Save the output layer as unc_unsupervise.img

Click on the folder icon next to Output Signature Set filename and navigate to your directory. Save the signature profile as unc_unsupervise.sig

Make sure Initialize from Statistics is selected. Select 10 classes. Keep the other options as defaults.

Click Color Scheme Options, select Approximate True Color, and click OK

Click OK to run the classification

Open the classified image in a new viewer

1. Describe 5 of the 10 classes represented in the new image.

2. What happens to the image if you change the number of classes to 4? To 20?

3. What are some advantages to the unsupervised classification approach? Disadvantages?


Part II- Supervised Classification

(1) Image Setup

Start ERDAS Imagine and open uncsubset.img in a new viewer.

Open Classification | Signature Editor… from the Imagine toolbar. You now have an empty Signature Editor in which you will create and edit signatures.

(2) Training Site Selection

Supervised training requires careful guidance by the analyst. You must select pixels that are recognized as representative of the classes set out in the classification scheme. Thus, training sites identified in the field or on aerial photographs or maps have corresponding training pixels in the feature set image. Training helps the supervised classification find similar patterns (i.e., signatures) throughout the remainder of the image. Parametric signatures are composed of statistical descriptions of the spectral reflectance properties of a class (mean and covariance matrix) based on the pixels in the training sample (the collection of all training pixels for an individual class). Parametric signatures are used to train a statistically based classifier/decision rule (e.g., the Maximum Likelihood classifier).

Every pixel is assigned to a class using these parametric rules.
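Imagine computes the signature statistics for you when you add a training sample; as a minimal sketch of what a parametric signature actually is (hypothetical 2-band pixel values, not from the UNC image):

```python
import numpy as np

# hypothetical training sample: 2-band pixel vectors for one class
training_pixels = np.array([[120., 80.],
                            [118., 82.],
                            [123., 79.],
                            [119., 81.]])

# the parametric signature is just these two statistics:
mean = training_pixels.mean(axis=0)          # mean vector, one entry per band
cov = np.cov(training_pixels, rowvar=False)  # band-by-band covariance matrix
```

The mean locates the class in feature space, and the covariance matrix describes its spread and band-to-band correlation, which is what the Maximum Likelihood rule later uses.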

Use the Imagine AOI (Areas of Interest) tools to delineate training pixels/samples onscreen:

Select File | New | AOI on the Viewer #1 top menu bar

Under the AOI pull-down menu select Tools. The AOI Toolbar will appear.

Choose the Seed Properties option on the AOI pull-down menu. This dialog box allows you to build a training sample from a single pixel by examining a seed pixel and comparing it to its neighbors.

Set the parameters in the Seed Properties dialog to a 3 x 3 neighborhood including diagonals, spectral Euclidean distance = 10, and distance limit = 20 pixels (you can modify these numbers)

Open the Inquire Cursor in Viewer #1 (Utility | Inquire Cursor) to locate a pixel on the UNC image as a possible training area. Zoom into this location in the viewer.

Select the open magnifier type tool on the AOI toolbar and click on the seed pixel corresponding to the training site. The seed pixel will expand to a surrounding area based on the reflectance and seed properties parameters. The expansion of the single pixel to a broader sample will incorporate spectral variability into the subsequent definition of the class.
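The seed-growing behaviour described above can be approximated in a few lines. This is a simplified sketch with made-up pixel values: it compares every candidate to the seed pixel only and uses a square-window (Chebyshev) distance limit, which is not necessarily Imagine's exact rule:

```python
import numpy as np
from collections import deque

def grow_from_seed(img, seed, spec_dist=10.0, dist_limit=20):
    """Grow an AOI from a seed pixel: accept 8-connected neighbours whose
    spectral Euclidean distance from the seed pixel is <= spec_dist,
    stopping dist_limit pixels away from the seed."""
    rows, cols, _ = img.shape
    seed_val = img[seed]
    accepted = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (nr, nc) in accepted or not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                if max(abs(nr - seed[0]), abs(nc - seed[1])) > dist_limit:
                    continue  # outside the distance limit
                if np.linalg.norm(img[nr, nc] - seed_val) <= spec_dist:
                    accepted.add((nr, nc))
                    queue.append((nr, nc))
    return accepted

# toy 4x4, 2-band image: left half spectrally uniform, right half different
img = np.zeros((4, 4, 2))
img[:, 2:, :] = 100.0
aoi = grow_from_seed(img, (0, 0))
```

Starting from the top-left pixel, the AOI spreads over the spectrally similar left half and stops at the right half, whose pixels exceed the spectral distance threshold.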

You must tell the Signature Editor where to look for spectral data for generating a signature. In the Signature Editor select Edit/Image Association and provide the image uncsubset.img for the feature set. The default option uses all bands in this image for the feature set.

With the AOI training site highlighted, choose Edit | Add on the Signature Editor to add the signature for this training sample. Note that the color and name of the signature can be edited. Save this signature file to your directory before proceeding (e.g., unc.sig).

Repeat the above steps to delineate training samples and add signatures for four cover classes (e.g., grass, urban, conifers, bare soil). Where necessary you can delete or merge signatures. Use caution when altering signatures, and be sure to name your signatures and save periodically. You can adjust the seed properties and redo a sample if you are unhappy with its extent.

(3) Signature Evaluation with Feature Space Images

Once you have a signature for each class, you can evaluate their relative spectral characteristics and overlap using feature space images. Visual assessment can portray the class mean and ellipse (i.e., standard deviations in 2-space). Two signatures having large overlap and close means will be spectrally indistinct and confused in the classification.

In another bi-plot, however, the same classes may be distinct. The trade-off in overlap among all classes (average separability) will be assessed non-graphically in the next section. Steps to follow:

In the Signature Editor, open Feature | Create | Feature Space Layers. This will open a dialog box. Choose the image and provide a root name for the output files (e.g., uncout).

Pick grey levels slice to show brightness corresponding to frequency in the spectral bi-plots. Select the feature space maps you want (bands 3 x 4, for example).

Once the feature space images are generated, open one of the images (uncout3_4.img) in Viewer #2. Zoom in to see the entire feature space clearly on the screen.

In the Signature Editor, select Feature | View | Select Viewer, and then click anywhere on the feature space image to define the feature space viewer that you want to link to the image viewer.

Now you can display a signature in this feature space image. In the Signature Editor, highlight all the signatures and open Feature | Objects. In the Signature Objects dialog box choose to plot ellipses using 3 standard deviations, means, and labels. You can now visually assess the spectral distinction among the signatures.
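The visual check above boils down to comparing class means and spreads in two bands. As a rough numerical stand-in (hypothetical signatures, and axis-aligned bounding boxes rather than true ellipses):

```python
import numpy as np

def ellipse_box(pixels, n_std=3.0):
    """Axis-aligned bounding box of the n_std-deviation spread of a class
    in a 2-band feature space (a crude stand-in for the ellipse plot)."""
    mean = pixels.mean(axis=0)
    std = pixels.std(axis=0, ddof=1)
    return mean - n_std * std, mean + n_std * std

def boxes_overlap(lo1, hi1, lo2, hi2):
    # boxes overlap only if they overlap in every band
    return bool(np.all(hi1 >= lo2) and np.all(hi2 >= lo1))

# hypothetical 2-band training samples for two classes
grass = np.array([[40., 90.], [42., 93.], [41., 91.], [43., 92.]])
water = np.array([[10., 5.], [12., 6.], [11., 4.], [9., 5.]])
lo_g, hi_g = ellipse_box(grass)
lo_w, hi_w = ellipse_box(water)
```

Non-overlapping boxes suggest the two classes are spectrally distinct in this band pair; overlapping boxes warn of likely confusion in the classification.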

4. Explain the distribution of different classes in feature space. Attach a copy of the feature space with all the signatures plotted on it to assist your explanation (you can hit the Print Screen key on your keyboard and then select Paste in Microsoft Word. Crop the picture to include only the feature space.)


Open another feature space image and re-plot the ellipses using different band combinations.

5. Do any of these classes overlap? Why or why not?

6. How do different band combinations affect the classes in feature space? Attach a copy of these images to support your conclusions.

(4) Signature Separability Evaluation

Signature separability techniques quantify the spectral distinction/overlap of signatures. They are also useful for finding redundant features in large feature sets.

Examine the univariate statistics for a single signature. Highlight a signature and select View/Statistics in the Signature Editor.

To view the histogram of a training sample, select a signature and choose View/Histograms. You can also add comments to a signature (/Comments).

Pair-wise comparisons of features (bands or channels) and combinations of bands can be evaluated for signature separability. For example, if you start with 8 features and want to narrow the set to 4, you can evaluate signature separability for 4 bands at a time (4 "layers per combination") and find the combination of 4 bands that gives you the best separability. Highlight your signatures and select Evaluate/Separability. Set Layers Per Combination to 4, and choose a distance measure (e.g., Euclidean distance or the J-M distance). Select Summary Report to list the best minimum and average separabilities; Complete Report lists the distance measure between each pair of signatures for each combination, as well as an average over all signatures. Determine the number of layers per combination to use, and examine the output report.
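The band-combination search described above can be sketched as follows, scoring each combination by the minimum pairwise Euclidean distance between class mean vectors (the simplest of the measures mentioned; the J-M distance additionally uses the class covariances). The class means below are hypothetical:

```python
import numpy as np
from itertools import combinations

def best_band_combo(means, n_bands):
    """For each combination of n_bands bands, score separability as the
    minimum pairwise Euclidean distance between class mean vectors
    restricted to those bands; return the best-scoring combination."""
    total = means.shape[1]
    best = None
    for combo in combinations(range(total), n_bands):
        sub = means[:, combo]
        score = min(np.linalg.norm(a - b)
                    for a, b in combinations(sub, 2))
        if best is None or score > best[1]:
            best = (combo, score)
    return best

# hypothetical class means over 4 bands: band 0 carries no separation at all
means = np.array([[50., 10., 200., 30.],
                  [50., 90., 40., 160.],
                  [50., 150., 120., 80.]])
combo, score = best_band_combo(means, 2)
```

Because band 0 is identical for every class, it contributes nothing to separability and the best 2-band combination excludes it, which is exactly the kind of redundant feature the separability report helps you drop.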

(5) Classification

You are ready to classify the entire feature image when you have – (1) training sites/samples and derived signatures for the classes to be mapped, (2) graphically and statistically evaluated signatures, (3) selected a classifier (use the Maximum Likelihood classifier but note the others available).
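For reference, the Maximum Likelihood decision rule assigns each pixel to the class whose multivariate-Gaussian likelihood, computed from the signature's mean vector and covariance matrix, is highest. A minimal sketch with hypothetical two-band signatures (equal priors assumed):

```python
import numpy as np

def max_likelihood_classify(pixel, signatures):
    """Maximum likelihood decision rule: pick the class with the highest
    Gaussian log-likelihood given its signature (mean, covariance).
    Equal prior probabilities are assumed for all classes."""
    best_class, best_ll = None, -np.inf
    for name, (mean, cov) in signatures.items():
        d = pixel - mean
        # log-likelihood up to a constant: -0.5 * (log|Cov| + d' Cov^-1 d)
        ll = -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.inv(cov) @ d)
        if ll > best_ll:
            best_class, best_ll = name, ll
    return best_class

# hypothetical 2-band signatures: (mean vector, covariance matrix)
signatures = {
    "grass": (np.array([40., 90.]), np.array([[4., 0.], [0., 4.]])),
    "urban": (np.array([120., 60.]), np.array([[25., 0.], [0., 25.]])),
}
label = max_likelihood_classify(np.array([42., 88.]), signatures)
```

Unlike a simple nearest-mean rule, the covariance term lets a spectrally variable class (larger ellipse in feature space) claim pixels that lie farther from its mean.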

Highlight all the signatures that you want to use, and select Classify | Supervised in the Signature Editor. Provide an output filename (unc_supervised.img) and save it to your directory. Choose the decision rule. Do NOT classify zeros; do NOT use probabilities.

Open the classified image in a new Viewer. Use your local knowledge of Chapel Hill and vicinity to consider the general quality of the classification.

7. How accurate is the supervised classification image of UNC?


8. What are some advantages to the supervised classification approach? Disadvantages?

9. How does the quality of the training area affect the final classification output? How could you increase the accuracy of the supervised classification method?

10. Compare the visual differences between the two classification methods. Give an example of when you would use each method.
