Digital image processing and analysis
Laboratory 6: Detection of image features
Vedrana Baličević, Pavle Prentašić, Hrvoje Kalinić, Sven Lončarić
2014.
6.1 Image features

6.1.1 Spatial features
Features can be differentiated based on the space in which they exist: they can be spatial, temporal, frequency features, etc. Since the word "image" usually refers to an image in 2D space, most applications use spatial features. Based on these features, a computer can analyze, describe and interpret the image content. Image feature extraction is therefore the first step of computational image analysis.
6.1.2 Amplitude features
For each pixel in an image, amplitude features are defined as parameters calculated from the intensities of the neighborhood pixels. In MATLAB, several functions can apply a selected operation to each pixel's neighborhood (e.g. blkproc(), colfilt(), nlfilter()). We are going to use the function colfilt(). To do this, we need to understand the concept of function handles (function pointers) and how to use them in MATLAB, as described in the following example.
An example
>> img = imread('texture.tif');               % read the image
>> fun = @(x) mean(x);                        % function pointer (function handle)
>> img2 = colfilt(img,[32 32],'sliding',fun); % image is being processed
>> figure; imagesc(img2); colormap(gray)      % display the resulting image
In the same way we defined the function handle fun, pointing to the function mean(), we can define handles to other existing functions or to our own functions.
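Although the lab uses MATLAB, the sliding-neighborhood idea behind colfilt(...,'sliding',fun) can be cross-checked with a short plain-Python sketch (an illustrative simplification: colfilt() pads the image borders, while this sketch simply skips pixels without a full neighborhood):

```python
def sliding_apply(img, block, fun):
    """Apply fun to every block-sized sliding neighborhood of img
    (a list of rows); border pixels without a full neighborhood
    are skipped in this simplified sketch."""
    bh, bw = block
    h, w = len(img), len(img[0])
    out = []
    for r in range(h - bh + 1):
        row = []
        for c in range(w - bw + 1):
            neigh = [img[r + i][c + j] for i in range(bh) for j in range(bw)]
            row.append(fun(neigh))  # fun plays the role of the function handle
        out.append(row)
    return out

mean = lambda xs: sum(xs) / len(xs)  # analogous to @(x) mean(x)
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
print(sliding_apply(img, (2, 2), mean))  # [[3.0, 4.0], [6.0, 7.0]]
```

Swapping mean for min, max or a standard-deviation function gives the other amplitude features mentioned in the problems below.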
6.1.3 Problems
1. Select an image with smaller dimensions (such as 256 × 256), or scale a larger image down to this size. For the selected image calculate several amplitude features (using the functions max(), min(), mean(), std()).
2. How do the results depend on the selected block size?
6.2 First order histogram features
Amplitude values within the neighborhood of a pixel can be interpreted as the outcomes of a random experiment. In that case the first order histogram represents an estimate of the probability density function (PDF) for the selected neighborhood. Based on this estimate, we can determine statistical properties (features) such as the moments and central moments of the PDF. Besides the moments, the entropy and energy of the histogram are also of interest in image processing.
To do this, we first need to learn how to calculate a histogram in MATLAB, as shown in the following example. Since the function hist() calculates the histogram of a 1D vector, in this example we transform the image into a vector (img(:)) and pass it to the function in the same line of code:
An example
>> img = imread('texture.tif');  % read the image
>> hist(im2double(img(:)),30);   % calculate and display the histogram of the image,
                                 % grouping the values into 30 bins
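For intuition, the histogram features asked for below can be sketched in Python using the standard definitions from the lectures, entropy H = -sum(p*log2(p)) and energy E = sum(p^2); the uniform binning helper here is an illustrative assumption, not part of the exercise:

```python
import math

def first_order_features(values, bins):
    """Entropy H = -sum(p*log2(p)) and energy E = sum(p^2) of the
    first-order histogram of `values` (assumed to lie in [0, 1]);
    the uniform binning is an illustrative choice."""
    counts = [0] * bins
    for v in values:
        counts[min(int(v * bins), bins - 1)] += 1  # clamp v == 1.0
    n = len(values)
    p = [c / n for c in counts]
    entropy = sum(-pi * math.log2(pi) for pi in p if pi > 0)
    energy = sum(pi * pi for pi in p)
    return entropy, energy

# A constant region occupies a single bin: minimal entropy, maximal energy.
print(first_order_features([0.5] * 16, bins=8))            # (0.0, 1.0)
# Values spread evenly over 4 bins: entropy log2(4) = 2 bits, energy 1/4.
print(first_order_features([0.0, 0.3, 0.6, 0.9], bins=4))  # (2.0, 0.25)
```

The two extremes illustrate what to expect in the problems: flat image regions give low entropy and high energy, busy regions the opposite.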
6.2.1 Problems
1. Write functions for calculating the moments, entropy and energy in such a way that they can be used with the function colfilt(). The mathematical expressions for calculating these features are given in the lectures.
2. Select an image with smaller dimensions (such as 256 × 256), or scale a larger image down to this size. For the selected image calculate the 1st order histogram features (entropy and energy) on blocks / windows of size 5 × 5, using the function colfilt(). Display the results and comment on them.
6.3 Second order histogram features
As described above, pixel intensity values within the neighborhood of a pixel can be interpreted as the outcomes of a random experiment, and the first order histogram represents an estimate of the probability density function (PDF) for that neighborhood. Instead of observing a single pixel, however, we can observe a pair of pixels whose distance (shift) is defined by a specific relation. The random experiment then becomes two-dimensional, and the probability density function can be estimated using the second order histogram, as described here:
An example
>> [img,map] = imread('testpat1.tif'); % read the image
>> img = im2double(img);               % convert the image to double format
>> % Try the shift (2,3):
>> img1 = img(4:end,3:end);            % crop the images to obtain the (2,3) shift between them
>> img2 = img(1:end-3,1:end-2);
>> hist3([img1(:), img2(:)])           % calculate and display the 2nd order histogram of the image
>> % Try the shift (5,1):
>> img1 = img(2:end,6:end);            % crop the images to obtain the (5,1) shift between them
>> img2 = img(1:end-1,1:end-5);
>> hist3([img1(:), img2(:)])           % calculate and display the 2nd order histogram of the image
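The pairing performed by the two crops above can also be sketched in Python; second_order_hist is a hypothetical helper that bins value pairs (pixel, shifted pixel) the way hist3() does for the two cropped images:

```python
def second_order_hist(img, shift, bins):
    """2-D histogram of pixel pairs (img[r][c], img[r+dy][c+dx]) for
    values in [0, 1] -- a plain-Python stand-in for calling hist3()
    on the two shifted crops."""
    dy, dx = shift
    h, w = len(img), len(img[0])
    hist = [[0] * bins for _ in range(bins)]
    for r in range(h - dy):
        for c in range(w - dx):
            i = min(int(img[r][c] * bins), bins - 1)
            j = min(int(img[r + dy][c + dx] * bins), bins - 1)
            hist[i][j] += 1
    return hist

# In a perfectly uniform image every pair lands in one diagonal bin:
flat = [[0.5] * 4 for _ in range(4)]
print(second_order_hist(flat, (1, 1), bins=2))  # [[0, 0], [0, 9]]
```

A uniform region puts every pair into a single diagonal bin, which hints at why real-image results cluster around the diagonal, as the problems below ask you to explain.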
6.3.1 Problems
1. Read the image 4.2.07.tiff from the USC-SIPI database. Calculate and display the 2nd order histograms for the shifts (1, 1) and (5, 5).
2. Why are the results grouped around a diagonal?
3. Why are they more dispersed in the second case?
4. Read the image saturn.tif and calculate the 2nd order histograms for several shifts of your choice.
Display the results and comment on them.
6.4 Edge detection
Edge detection is one of the most important tasks in image processing, since edges define the borders of objects. We can therefore use them to segment, register and/or identify objects in the image.
Since edges in an image are characterized by a sudden change of the pixel amplitude values, we can employ gradient estimation methods for their detection.
6.4.1 Sobel and Prewitt operator
Gradient-based methods usually estimate the gradient in two directions, x and y. These two values define the gradient vector, i.e. its direction and magnitude, for the selected pixel. If we are interested only in the gradient magnitude, we can calculate it as the square root of the sum of the squared directional components, as shown in Fig. 6.1.
Figure 6.1: Gradient calculation procedure.
Masks for the Prewitt and the Sobel operator for the detection of horizontal edges can be obtained in
MATLAB with the function fspecial():
An example
>> sy = fspecial('sobel') % sobel mask for horizontal edges
>> sx = rot90(sy)         % rotate it to obtain the mask for vertical edges
>> sy, sx

sy =

     1     2     1
     0     0     0
    -1    -2    -1

sx =

     1     0    -1
     2     0    -2
     1     0    -1
Now that we have the masks for horizontal and vertical edge detection, we can calculate the convolution of an image with these masks, i.e. estimate the gradient in the x-direction and the y-direction.
An example
>> [img,map]=imread('moon.tif');
>> img = im2double(img);    % convert to double format for conv2
>> gx=conv2(img,sx,'same'); % estimation of gradient in x-direction
>> gy=conv2(img,sy,'same'); % estimation of gradient in y-direction
>> g=sqrt(gx.*gx+gy.*gy);   % estimation of the gradient magnitude
>> imagesc(g)               % display the obtained gradient
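The same gradient-magnitude computation can be sketched in Python on a tiny step-edge image (illustrative only; a 'valid' correlation is used here, and since conv2() additionally flips the mask this only changes the gradient's sign, not its magnitude):

```python
import math

SY = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]  # Sobel mask for horizontal edges
SX = [[1, 0, -1], [2, 0, -2], [1, 0, -1]]  # Sobel mask for vertical edges

def correlate_valid(img, mask):
    """3x3 'valid' correlation (no padding); conv2() also flips the
    mask, which only flips the gradient's sign, not its magnitude."""
    h, w = len(img), len(img[0])
    return [[sum(mask[i][j] * img[r + i][c + j]
                 for i in range(3) for j in range(3))
             for c in range(w - 2)]
            for r in range(h - 2)]

# A horizontal step edge: SY responds near the edge, SX not at all.
img = [[0, 0, 0],
       [0, 0, 0],
       [1, 1, 1],
       [1, 1, 1],
       [1, 1, 1]]
gx = correlate_valid(img, SX)
gy = correlate_valid(img, SY)
g = [[math.sqrt(a * a + b * b) for a, b in zip(ra, rb)]
     for ra, rb in zip(gx, gy)]
print(g)  # [[4.0], [4.0], [0.0]]
```

Only the windows that straddle the intensity step produce a non-zero magnitude, which is exactly the behavior imagesc(g) visualizes on a real image.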
6.4.2 Problems
1. Read the image 4.2.07.tiff from the USC-SIPI database. Apply the Sobel and Prewitt operators to this image to estimate its edges. Display the results and comment on them.
6.4.3 Compass operators for edge detection
Simple gradient-based methods estimate the gradient in two directions. Compass operators detect edges oriented at a certain angle. Example masks (of size 3 × 3) for the angles of 0, 45 and 90 degrees can be generated as follows:
An example
>> h0=[1 1 1; 0 0 0; -1 -1 -1];  % mask for the angle of 0 degrees
>> h45=[1 1 0; 1 0 -1; 0 -1 -1]; % mask for the angle of 45 degrees
>> h90=[1 0 -1; 1 0 -1; 1 0 -1]; % mask for the angle of 90 degrees
>> h0, h45, h90

h0 =

     1     1     1
     0     0     0
    -1    -1    -1

h45 =

     1     1     0
     1     0    -1
     0    -1    -1

h90 =

     1     0    -1
     1     0    -1
     1     0    -1
Convolution of an image with a selected compass operator results in an image with emphasized edges at the selected angle.
6.4.4 Problems
1. Read the image 4.2.07.tiff from the USC-SIPI database. Apply several compass operators (for different directions) to this image.
2. What happens with the edges positioned in the direction that does not correspond to the specific
operator?
3. Calculate and display the frequency characteristics of the selected compass operators. (This can be done with the function fft2(), as described in Exercise 3.)
4. Why is their resolution so small?
6.4.5 Laplacian operator for edge detection
Gradient masks give the best results for sharp edges, i.e. large changes in the pixel intensity values. For "softer" edges, methods that estimate the second derivative give better results. An example of such estimation is the Laplacian operator. A Laplacian mask of size 3 × 3 can simply be generated with the function fspecial():
An example
>> lap = fspecial('laplacian',0) % mask that approximates the Laplacian operator

lap =

     0     1     0
     1    -4     1
     0     1     0

6.4.6 Problems
1. Read the image 4.2.07.tiff from the USC-SIPI database. Apply several Laplacian operator approximations to this image (note: change the second input parameter of the fspecial() function). Display the results and comment on them.
6.5 Texture features
Textures are present in almost every image. Even though we have an intuitive notion of what a texture is, despite their importance there is no formal definition of texture. In this part of the exercise, we will use second order histogram features as texture features. For that purpose, we will use the function nlfilter() to calculate each feature on blocks (of a defined size) of the input image.
An example
>> [img,map] = imread('testpat1.tif');   % read the image
>> img = im2double(img);                 % convert it to double format
>> p = [2,2];                            % define the shift
>> nhood = [15, 15];                     % define the neighborhood size
>> fun = @(x) inertia(x,p);              % function pointer (function handle)
>> img2 = nlfilter(img,nhood,fun);       % process the image
>> figure; imagesc(img2); colormap(gray) % display the resulting image
Note that to calculate the second order histogram we need to define a shift vector, p = [Dy, Dx], which corresponds to the shift between the two pixels under observation. This can be a problem, since even for small neighborhoods (e.g. of size [M,N]) the number of possible shifts is very large. Therefore, to reduce the calculation time, select images with smaller dimensions and define smaller neighborhood sizes. Also keep in mind that the shift vector [Dy, Dx] has to be smaller than the neighborhood size.
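The function inertia() used above is assumed to come with the lab materials. Its standard definition, the sum over (i, j) of (i - j)^2 * P(i, j) for the normalised second order histogram P, can be sketched in Python (the fixed number of grey levels used for binning is an illustrative assumption):

```python
def inertia(img, shift, bins=8):
    """Inertia (contrast) of the second order histogram:
    sum over (i, j) of (i - j)^2 * P(i, j), where P is the normalised
    2-D histogram of pixel pairs at the given shift; pixel values are
    assumed to lie in [0, 1]."""
    dy, dx = shift
    h, w = len(img), len(img[0])
    pairs = [(min(int(img[r][c] * bins), bins - 1),
              min(int(img[r + dy][c + dx] * bins), bins - 1))
             for r in range(h - dy) for c in range(w - dx)]
    return sum((i - j) ** 2 for i, j in pairs) / len(pairs)

# A flat area pairs identical values, a checkerboard pairs the extremes:
flat = [[0.5] * 4 for _ in range(4)]
board = [[(r + c) % 2 for c in range(4)] for r in range(4)]
print(inertia(flat, (1, 1)))   # 0.0
print(inertia(board, (1, 0)))  # 49.0
```

Low inertia thus indicates a smooth region and high inertia a busy one, which is the kind of contrast the texture problems below exploit.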
6.5.1 Problems
1. Read the image texture.tif, which contains 5 different areas: 3 textures and 2 plain areas. Calculate the following features: absolute(), inertia(), covariance(), energy(), entropy() (with the corresponding functions) for the shifts [2,2], [3,3] and [3,5]. Define the block size as 12 × 12. Display the results and comment on them.
2. Select a few images that display different textures (you can crop the images to work with smaller dimensions). Select a single feature and calculate it for each of the chosen images.
3. Do the features exhibit a sufficient difference to use them for the texture classification?
4. Which feature seems the best to perform the classification of the textures in the image?