GISC 7365: Remote Sensing Digital Image Processing
Instructor: Dr. Fang Qiu

Lab 10: Unsupervised Image Classification

Objective: To generate clusters using an unsupervised classification approach (ISODATA) and evaluate the clusters in feature image space.

Image: charleston_11-9-82tm.img (Quickview File, Landsat TM data)
Band 1 = Blue (0.45-0.52 µm)
Band 2 = Green (0.52-0.60 µm)
Band 3 = Red (0.63-0.69 µm)
Band 4 = NIR (0.76-0.90 µm)
Band 5 = MIR (1.55-1.75 µm)
Band 6 = MIR (2.08-2.35 µm)
Band 7 = TIR (10.4-12.6 µm)

In unsupervised classification the computer develops the signatures, resulting in a number of spectral classes that the analyst must then assign (a posteriori) to the information classes of interest. This requires knowledge of the terrain present in the scene as well as of its spectral characteristics.

The Iterative Self-Organizing Data Analysis Technique (ISODATA) is a widely used clustering algorithm. It differs from the older chain method in that it makes a large number of passes through the remote sensing dataset, not just two. It uses the minimum spectral distance formula to form clusters. It begins with either arbitrary cluster means or the means of an existing signature set, and each time the clustering repeats, the cluster means are shifted; the new cluster means are then used for the next iteration. The ISODATA utility repeats the clustering of the image until either a maximum number of iterations has been performed or a maximum percentage of unchanged pixels has been reached between two iterations.

Performing an unsupervised classification is simpler than a supervised classification because the signatures are generated automatically by the ISODATA algorithm. However, as stated above, the analyst must have ground truth information and knowledge of the terrain, or ancillary high-resolution data, if this approach is to be successful.

Open a color infrared composite of charleston_11-9-82tm.img in a Viewer (RGB = bands 4, 3, 2) and fit it to the frame. To begin the unsupervised classification, click the Classifier icon and select Unsupervised Classification. You will notice that the title bar of the Unsupervised Classification dialog box states that it is an ISODATA unsupervised classification. Fill in the input file as charleston_11-9-82tm.img and complete the output information in the dialog box, giving the Output Cluster Layer and the Output Signature Set similar names. Under Clustering Options, make sure Initialize from Statistics is on and set Number of Classes to 15. Under Processing Options, set Maximum Iterations to 20 and leave the Convergence Threshold at 0.950.

Maximum Iterations is the number of times the ISODATA utility will recluster the data; it prevents the utility from running too long or from getting stuck in a cycle without reaching the convergence threshold. The Convergence Threshold is the maximum percentage of pixels whose cluster assignments can go unchanged between iterations; this prevents the ISODATA utility from running indefinitely. Leave everything else in its default state. When you have entered all the relevant information, click OK to begin the process.
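To make the iteration concrete, here is a minimal Python/NumPy sketch of an ISODATA-style loop: assign each pixel to the nearest cluster mean, shift the means, and stop once either Maximum Iterations or the Convergence Threshold is reached. It is only an illustration, not ERDAS Imagine's implementation: the function name, the statistics-based initialization along the mean plus-or-minus one standard deviation axis, and the use of Euclidean distance are assumptions, and the cluster splitting/merging rules of full ISODATA are omitted.

```python
import numpy as np

def isodata_like(pixels, n_classes=15, max_iter=20, convergence=0.95):
    """Minimal ISODATA-style clustering sketch (split/merge rules omitted).

    pixels      : (n_pixels, n_bands) array of spectral values
    n_classes   : number of clusters requested
    max_iter    : maximum number of re-clustering passes
    convergence : stop once this fraction of pixels keeps the same
                  cluster assignment between two successive iterations
    """
    # "Initialize from Statistics" (assumed here): spread the initial
    # cluster means along the mean +/- one standard deviation axis.
    mu, sigma = pixels.mean(axis=0), pixels.std(axis=0)
    t = np.linspace(-1.0, 1.0, n_classes)[:, None]   # (n_classes, 1)
    means = mu + t * sigma                            # (n_classes, n_bands)

    labels = np.full(pixels.shape[0], -1)
    for _ in range(max_iter):
        # Assign each pixel to the cluster with the minimum spectral
        # (Euclidean) distance to its mean. Kept simple; a full scene
        # would be processed in chunks.
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)

        # Fraction of pixels whose assignment did not change.
        unchanged = np.mean(new_labels == labels)
        labels = new_labels
        if unchanged >= convergence:
            break

        # Shift each cluster mean to the mean of its current members.
        for k in range(n_classes):
            members = pixels[labels == k]
            if members.size:
                means[k] = members.mean(axis=0)
    return labels, means

if __name__ == "__main__":
    # Random values standing in for the pixels of a 7-band image.
    rng = np.random.default_rng(0)
    fake_pixels = rng.integers(0, 256, size=(5000, 7)).astype(float)
    labels, means = isodata_like(fake_pixels)
    print(labels[:10], means.shape)
```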
Cluster Identification

To aid in evaluation, we need to view the results of the clustering so that we can see how the clusters are arranged in feature space and make informed decisions about the nature of each cluster. The first step is the creation of feature space images.

Go to the Classification menu and click the Feature Space Image button. A dialog box titled Create Feature Space Images will appear. Select the original image (charleston_11-9-82tm.img) as the Input Raster Layer and make sure the Output Root Name is charleston_11-9-82tm and the directory path is correct. The number of combinations of two bands out of the seven TM bands is 21, so by default Imagine will create 21 feature space images. Under Feature Space Layers, select only the layers for the band combinations 1 and 3, 2 and 4, 3 and 4, and 4 and 5, so that four feature space images will be created. Leave the rest of the selections at their default settings and click OK. When the processing is complete, open a new viewer and view the output images (charleston_11-9-82tm_1_3.fsp.img, charleston_11-9-82tm_2_4.fsp.img, charleston_11-9-82tm_3_4.fsp.img, and charleston_11-9-82tm_4_5.fsp.img). Examine these four feature space images.

Open the Signature Editor (under the Classification menu) with the *.sig file you created in the unsupervised classification. Select all the clusters (they should all be highlighted in yellow). In the Signature Editor menu select Feature, and in that pull-down menu select Objects. This displays a Signature Objects dialog box that lets you tell Imagine which viewer should receive the Signature Editor information about the clusters; in this case it is the viewer in which you have displayed your chosen feature space image. Enter that viewer number in the space provided. Select Plot Ellipses, Plot Means, and Label (or try the others if you like). Leave everything else in its default state and click OK. Only the clusters selected in the Signature Editor window will be drawn. More than likely your ellipses and means are multi-tonal; if you would like them all to be white, red, green, etc., select all the classes in the Signature Editor dialog box with the mouse and change the color to the one you desire.

To analyze the content of the clusters, use a combination of techniques. You will more than likely have to zoom in to get a better look at some of the clusters, given how close they lie to one another. You should also have a viewer open with the original scene displayed; this will further help you identify the land cover classes. Overlay your classified image (the cluster image file generated when you ran the ISODATA classification) on the original image. Set all of the clustered image's colors to transparent using the Raster Attribute Editor (found under the Viewer menu) by changing all the opacity values to 0. Once all classes are transparent, you can color particular classes individually by making them opaque (opacity value = 1) and see where they fall on the image. Another method is to use the Utility - Swipe or Utility - Flicker tools in the Viewer, with the classified image opened on top of the raw data (do not Clear Display when opening over the raw image).

When you have decided on the class breakdowns, use the Raster Attribute Editor to assign class names (the "Class Names" column) and colors to the classification image. Create the same four classes you used in the supervised classification (i.e., urban, forest, wetland, and water) and place each cluster into one of these classes by giving it the same color and class name as every other cluster in that class.
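Conceptually, this final labeling step is just a recode: each of the 15 spectral clusters is mapped to one of the four information classes. The short Python/NumPy sketch below illustrates the idea with a lookup table. The particular cluster-to-class assignments shown are hypothetical and would come from your own interpretation of the feature space plots and the original imagery; in Imagine the same result is produced interactively in the Raster Attribute Editor.

```python
import numpy as np

# Hypothetical a-posteriori assignment of the 15 ISODATA clusters (1-15)
# to the four information classes; your own assignments will differ.
cluster_to_class = {
    1: "water",  2: "water",   3: "wetland", 4: "wetland", 5: "forest",
    6: "forest", 7: "forest",  8: "urban",   9: "urban",  10: "urban",
   11: "urban", 12: "forest", 13: "wetland", 14: "water", 15: "urban",
}
class_ids = {"water": 1, "wetland": 2, "forest": 3, "urban": 4}

# Build a lookup table so the whole cluster image is recoded at once.
lut = np.zeros(16, dtype=np.uint8)              # index 0 = unclassified
for cluster, name in cluster_to_class.items():
    lut[cluster] = class_ids[name]

# Random values standing in for the ISODATA output layer (values 0-15).
cluster_img = np.random.default_rng(0).integers(0, 16, size=(300, 400))
recoded = lut[cluster_img]                      # 4-class thematic image
print(np.unique(recoded, return_counts=True))
```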
Homework

Q3. Create another new map composition containing the completed unsupervised classification. Make sure the colors are reasonably appropriate to each class type, and include all appropriate cartographic elements. Capture a screen shot of the map and paste it into your Word file.

Q4. Compare the advantages and disadvantages of the supervised and unsupervised classification approaches. When would one approach be more appropriate than the other?