Exercise #10
Name: _______________________________
Geography 475
Digital Image Processing
Object-Based Classification and e-Cognition Software
Due: May 10, 2010
Part I: The Object-Oriented Approach
In several previous lab exercises, you have examined various pixel-based classification
methods. Pixel-based methods rely on statistical algorithms that place pixels into
groups according to their spectral properties. You are now familiar with several
pixel-based approaches, including parallelepiped, minimum distance,
maximum likelihood, and ISODATA. In this lab, you will be introduced to object-based
classification. This method extracts objects (groupings of pixels with similar spectral and
spatial properties) from imagery through a process called segmentation.
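To make the idea of segmentation concrete, here is a minimal sketch in Python. This is not e-Cognition's patented multiresolution algorithm; it only illustrates the core notion that adjacent pixels with similar spectral values are grouped into one object. The tiny image, the threshold, and the `segment` function are all hypothetical.

```python
def segment(image, threshold):
    """Label 4-connected pixels whose values stay within `threshold` of the seed."""
    rows, cols = len(image), len(image[0])
    labels = [[None] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue
            # Grow a new object outward from this seed pixel.
            stack, seed = [(r, c)], image[r][c]
            labels[r][c] = next_label
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < rows and 0 <= nx < cols
                            and labels[ny][nx] is None
                            and abs(image[ny][nx] - seed) <= threshold):
                        labels[ny][nx] = next_label
                        stack.append((ny, nx))
            next_label += 1
    return labels, next_label

# A 3x4 "image": a dark patch (water, say) next to a bright patch.
img = [[10, 12, 90, 92],
       [11, 13, 91, 93],
       [10, 11, 12, 90]]
labels, count = segment(img, threshold=5)
```

Running this groups the twelve pixels into two objects, one dark and one bright, which is the kind of spectrally and spatially coherent grouping segmentation aims for.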
What are objects? Objects are things such as roads, buildings, fields, lakes, rivers,
trees, airports, or anything else that the human eye and brain can interpret from
remotely sensed data. The human brain has an innate ability to recognize patterns,
colors, textures, sizes, shapes, and associations in order to accurately identify objects.
Rapid increases in computing power coupled with research in the field of artificial
intelligence (AI) have led to the development of algorithms that mimic the process of
human pattern recognition. Although not perfect, these algorithms are at the forefront of
a new direction in the field of digital image processing.
A few image segmentation programs have been developed. One, which runs as an add-on module for either ArcGIS or ERDAS, is called Feature Analyst. The program you will
use to complete this lab is e-Cognition, developed by Definiens of Munich, Germany.
This software includes the patented heuristic algorithm that creates image segments
based on scale, color, smoothness, and compactness (Navulur, 2007).
1. Object-based classification is best suited for data that have high spatial
resolution and low spectral resolution. Why?
Part II: Introduction to e-Cognition and Image Segmentation
Start e-Cognition software by selecting it from the Start Menu: Start → All Programs →
eCognition Developer 8.0 → eCognition Developer. Choose Rule Set Mode and
click OK.
Now you are ready to add the imagery. Copy the contents of the Exercise10 folder to a
local directory of your choice. To start a new project, either click on the “ ” icon or click
on File  New Project. Two windows should appear. The first window says “Import
Image Layers.” Locate and click on the file grandforksnaip2005.tif and click Open (Note:
you may need to change Files of type: to All Files). You will notice that the three
layers contained in the aerial photo will load into the “Create Project” window. Next, you
need to subset the image. Click the Subset Selection button at the top right of the
Create Project window. A new window that says “Subset Selection” will appear. There
are two ways to create a subset. The first is to click and drag with the mouse over the
area of interest, and the other is to enter minimum and maximum X, Y pixel locations.
You should enter 3000 for Minimum X, 6600 for Minimum Y, 5000 for Maximum X, and
8500 for Maximum Y. When finished, click OK twice.
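Conceptually, subsetting an image is just cropping the pixel array: the minimum and maximum X, Y values select a block of rows and columns. The short sketch below uses a small made-up array rather than the actual NAIP scene, and the `subset` helper is hypothetical.

```python
def subset(image, xmin, ymin, xmax, ymax):
    """Keep rows ymin..ymax-1 and columns xmin..xmax-1 of a row-major raster."""
    return [row[xmin:xmax] for row in image[ymin:ymax]]

# A 5-row by 6-column dummy raster (pixel value = 10*row + column).
image = [[10 * r + c for c in range(6)] for r in range(5)]
crop = subset(image, xmin=2, ymin=1, xmax=5, ymax=4)
# crop is now a 3-row by 3-column block taken from the interior.
```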
2. Why might someone want to create a subset of an original image?
3. How many pixels does this image contain? (This information can be found at the
bottom right of the screen)
Now that the image is loaded, you are ready to begin creating segments. On the main
toolbar, click the Develop Rulesets button (it looks like a piece of paper with the
number “4”). Several new windows should now be displayed on the screen. For this
task, you are going to create a multi-level segmentation of the University Campus.
Right-click in the Process Tree window and select Append New. Under Algorithm
choose multiresolution segmentation. In the Algorithm parameters box, for the
parameter Level Name type “Level 1” for its value and make sure Scale parameter is
set to 10. Leave everything else set to the default values. Click on Execute to begin the
segmentation process. This will take about a minute to complete.
After the process finishes, take a minute to examine the resulting image using the zoom
and pan tools on the toolbar. When finished, click on the Zoom Scene to Window
button.
Next, right-click on the item in the Process Tree window and select Append New.
Change Algorithm to multiresolution segmentation, and change Image Object
Domain to image object level. Make sure Level is set to Level 1. Change Level
Name to “Level 2”, and change Level Usage to Create above. Lastly, change Scale
parameter to 25. Click Execute.
Next, right-click on the second item in the Process Tree window and select Append
New. Change Algorithm to multiresolution segmentation, and change Image
Object Domain to image object level. Make sure Level is set to Level 2. Change
Level Name to “Level 3”, and change Level Usage to Create above. Lastly, change
Scale parameter to 50. Click Execute.
Next, right-click on the third item in the Process Tree window and select Append New.
Change Algorithm to multiresolution segmentation, and change Image Object
Domain to image object level. Make sure Level is set to Level 3. Change Level
Name to “Level 4”, and change Level Usage to Create above. Lastly, change Scale
parameter to 100. Click Execute.
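The four steps above run the same segmentation algorithm with an increasing scale parameter. e-Cognition's scale parameter actually caps object heterogeneity using color and shape criteria; the rough 1-D sketch below only illustrates the general tendency, merging adjacent segments whose mean values differ by less than the "scale". The pixel values and helper names are hypothetical.

```python
def mean(seg):
    return sum(seg) / len(seg)

def count_objects(values, scale):
    segments = [[v] for v in values]  # start with one object per pixel
    merged = True
    while merged:
        merged = False
        for i in range(len(segments) - 1):
            # Merge neighbouring segments whose means are closer than `scale`.
            if abs(mean(segments[i]) - mean(segments[i + 1])) < scale:
                segments[i:i + 2] = [segments[i] + segments[i + 1]]
                merged = True
                break
    return len(segments)

pixels = [10, 11, 13, 40, 42, 41, 90, 88, 91, 12]
for scale in (1, 5, 30, 100):
    print(scale, count_objects(pixels, scale))
```

As the scale grows, the number of objects shrinks and each object covers more pixels, which mirrors what you should observe across Levels 1 through 4.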
When finished, explore the View toolbar (the first icon on the toolbar is a single human
eye). The sixth icon on this toolbar toggles between the pixel view and the object view.
Try it. Using the Layer Toolbar, examine objects created at each
level. Zoom into an area where there are some small details evident (the new UND
Wellness Center west of the Engelstad Arena is a good choice). Toggle between the
layers. You might also use the hide/show segment outline tool found on the View
toolbar to help you identify segment boundaries.
4. In general, discuss how the different segmentation scales relate to the level of
object generalization.
Record the number of objects created at each level in Table 1.
Table 1: Multiresolution Segmentation

Level   Scale Factor   Number of Objects
1       10             _________________
2       25             _________________
3       50             _________________
4       100            _________________
One of the advantages of the object-oriented approach is that you can create objects at
various scales and extract features at different resolutions (Navulur, 2007). This
technique is referred to as multiresolution segmentation. Multiresolution segmentation
allows the user to create a hierarchical classification in which large objects can be
classified at one level and smaller objects at another.
5. Why do smaller scale factors take more processing time?
6. What are some of the practical applications of multiresolution segmentation?
Close the project and when it asks you if you want to save the changes, click “No”.
Part III: Rule-Based Classification
Now you will create a land-cover classification based on a set of rules. Click on File →
New Project. Open the file stump_lake_062800.img and press OK. This is a
Landsat image of Stump Lake in Nelson Co., ND. Only five bands are included (file band 1
= image band 2 {green}, file band 2 = image band 3 {red}, file band 3 = image band 4
{NIR}, file band 4 = image band 5 {MIR}, and file band 5 = image band 7 {MIR}).
Select View  Image Layer Mixing. To display a standard false-color composite, set
Layer 1 to blue, Layer 2 to green, and Layer 3 to red (and remove the other display
settings). Next, segment the image at a scale factor of 10.
Right-click in the Class Hierarchy window and select Insert Class. Name the class
“Water” and change the color to blue. Click OK. Create another class named
“Vegetation” that’s green.
Right-click in the Process Tree window and select Append New. Under Algorithm
select assign class. Make sure the value for the parameter Level is set to whatever
you named your segmentation. For the parameter Threshold condition click in the
box under Value and then click on the shaded “…” on the right of the box. Navigate to
Object features  Layer Values  Mean and double-click on Layer 3. Select the
less than sign (<) and enter a value of 40. Click OK. In the Algorithm parameters box,
change unclassified to Water. Click OK.
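The rule you just built can be sketched in a few lines of Python: each object is assigned the Water class when its mean Layer 3 (near-infrared) value falls below 40, and otherwise stays unclassified. The per-segment pixel lists below are hypothetical, not real e-Cognition output.

```python
def classify_water(objects, threshold=40):
    """Assign 'Water' to objects whose mean NIR value is below the threshold."""
    classes = {}
    for obj_id, nir_values in objects.items():
        mean_nir = sum(nir_values) / len(nir_values)
        classes[obj_id] = "Water" if mean_nir < threshold else "unclassified"
    return classes

segments = {
    1: [12, 15, 18],     # dark in the NIR band
    2: [120, 110, 130],  # bright in the NIR band
}
print(classify_water(segments))
```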
Again right-click in the Process Tree window and select Append New. Under Algorithm
select assign class. Make sure the value for the parameter Level is set to whatever
you named your segmentation. Under Value for Threshold condition navigate to Object
features  Customized and double-click on Create new ‘Arithmetic Feature’.
Change Feature name to “NDVI”. Navigate to Object features  Layer Values 
Mean. Enter the following expression by using the calculator buttons and doubleclicking on the layers in the box: (([Mean Layer 3]-[Mean Layer 2])/([Mean Layer
3]+[Mean Layer 2])+1)*100 Next, click OK twice. Select the greater than sign (>) and
enter a value of 100. Click OK. Change unclassified to Vegetation. Click OK.
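The arithmetic feature you entered is a scaled NDVI. Standard NDVI, (NIR − Red) / (NIR + Red), ranges from −1 to +1; adding 1 and multiplying by 100 rescales it to 0–200, so the rule's threshold of 100 corresponds to NDVI greater than zero. A small sketch (the mean band values below are made-up examples):

```python
def scaled_ndvi(mean_nir, mean_red):
    """Rescale NDVI from the range [-1, 1] to [0, 200]."""
    ndvi = (mean_nir - mean_red) / (mean_nir + mean_red)
    return (ndvi + 1) * 100

# A vegetated object reflects strongly in NIR and absorbs red:
veg = scaled_ndvi(mean_nir=120.0, mean_red=40.0)   # NDVI = 0.5, scaled to 150
bare = scaled_ndvi(mean_nir=60.0, mean_red=60.0)   # NDVI = 0.0, scaled to 100
print(veg > 100, bare > 100)
```

Here Layer 3 is the NIR band and Layer 2 is the red band, matching the band assignments listed for this image.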
Right-click on the Water classification item in the Process Tree window and click
Execute. Next, do the same for the Vegetation classification item. On the main toolbar,
click on the Show or Hide Outlines button, the Pixel View or Object Mean View
button, and then toggle between views using the View Layer and View Classification
buttons.
7. Why would we make a water class rule for Layer 3 (image band 4) from 0-40?
(Hint: think of the spectral properties of water)
8. Describe the effectiveness of this classification. Were any areas unclassified?
Why or why not? How might you get those to classify?
9. What advantages does rule-based classification have over supervised
classification? What are some of the disadvantages?
10. e-Cognition allows you to set up rules based not only on spectral signatures
and spectral indices, but also on shapes, textures, and associations.
For what types of applications would you use shape and texture rules?