6 Image Segmentation


• Image segmentation divides an image into connected regions that are similar within each region and different between adjacent regions.

• The goal is usually to find the individual objects in an image.

• There are fundamentally two kinds of approaches to segmentation: discontinuity-based and similarity-based.
– Similarity may be due to pixel intensity, color, or texture.
– Discontinuities are sudden changes in any of these, especially sudden changes in intensity along a boundary line, which is called an edge.
Fundamentals
Detection of Discontinuities

There are three kinds of discontinuities of intensity: points, lines and edges.

The most common way to look for discontinuities is to scan a small mask over the
image. The mask determines which kind of discontinuity to look for.
Point Detection
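Scanning a mask over the image, as described above, can be sketched in Python. The 3×3 mask (center 8, surround −1) is the standard Laplacian-type point-detection mask; the image and threshold are illustrative choices, not from the slides' figures.

```python
# Point detection: scan a 3x3 mask over the image and flag pixels
# whose absolute mask response exceeds a threshold.
MASK = [[-1, -1, -1],
        [-1,  8, -1],
        [-1, -1, -1]]

def detect_points(image, threshold):
    """Return (row, col) positions where |mask response| >= threshold."""
    rows, cols = len(image), len(image[0])
    hits = []
    for r in range(1, rows - 1):          # skip the border for simplicity
        for c in range(1, cols - 1):
            response = sum(MASK[i][j] * image[r - 1 + i][c - 1 + j]
                           for i in range(3) for j in range(3))
            if abs(response) >= threshold:
                hits.append((r, c))
    return hits

# A flat 5x5 image with one isolated bright point at (2, 2).
img = [[10] * 5 for _ in range(5)]
img[2][2] = 200
print(detect_points(img, threshold=500))   # -> [(2, 2)]
```

Only the isolated point produces a large response; pixels merely adjacent to it respond weakly, because the mask sums to zero over flat regions.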
Line Detection
• Slightly more complex than point detection is finding a one-pixel-wide line in an image.
• For digital images, a straight line through three pixels can only be horizontal, vertical, or diagonal (±45°).
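Each of the four orientations has its own 3×3 mask. A sketch, assuming the standard mask values (2 along the line direction, −1 elsewhere; these follow the common convention and are not taken from the slides' figures), that picks the strongest-responding orientation at a pixel:

```python
# The four standard 3x3 line-detection masks, one per orientation.
LINE_MASKS = {
    "horizontal": [[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]],
    "+45":        [[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]],
    "vertical":   [[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]],
    "-45":        [[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]],
}

def best_line_orientation(image, r, c):
    """Return the mask name with the largest response at pixel (r, c)."""
    def response(mask):
        return sum(mask[i][j] * image[r - 1 + i][c - 1 + j]
                   for i in range(3) for j in range(3))
    return max(LINE_MASKS, key=lambda name: response(LINE_MASKS[name]))

# A one-pixel-wide vertical bright line in the middle column.
img = [[10, 100, 10] for _ in range(5)]
print(best_line_orientation(img, 2, 1))   # -> vertical
```

Because each mask sums to zero, flat regions give no response; only the mask aligned with the line produces a strong positive value.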
Edge Detection

Gradient Operators
First-order derivatives:
– The gradient of an image f(x, y) at location (x, y) is defined as the vector
  ∇f = [gx, gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ
– The magnitude of this vector: |∇f| = √(gx² + gy²)
– The direction of this vector: α(x, y) = tan⁻¹(gy/gx)
– It points in the direction of the greatest rate of change of f at location (x, y).
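These formulas can be sketched using the Sobel masks as first-derivative estimates (one common choice of gradient operator; the test image and pixel location are illustrative):

```python
import math

# Sobel masks estimating gx = df/dx (horizontal change) and gy = df/dy.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1],
           [ 0,  0,  0],
           [ 1,  2,  1]]

def gradient_at(image, r, c):
    """Gradient magnitude and direction (in degrees) at pixel (r, c)."""
    def conv(mask):
        return sum(mask[i][j] * image[r - 1 + i][c - 1 + j]
                   for i in range(3) for j in range(3))
    gx, gy = conv(SOBEL_X), conv(SOBEL_Y)
    return math.hypot(gx, gy), math.degrees(math.atan2(gy, gx))

# Vertical step edge: intensity jumps from 0 to 100 between columns 1 and 2.
img = [[0, 0, 100, 100]] * 4
mag, angle = gradient_at(img, 1, 1)
print(mag, angle)   # -> 400.0 0.0 (gradient points horizontally, across the edge)
```

As the formulas predict, the gradient direction (0°) is perpendicular to the vertical edge, and the magnitude is large exactly where the intensity changes.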
Different Edge masks
Gradient Operators: Example
Advanced Techniques for Edge Detection
Marr-Hildreth edge detector
• It takes into account factors such as image noise and the nature of edges.
• Time period: around 1980, at MIT.
• At that time, edge detection was mainly based on small operators such as the Sobel, Roberts, and Prewitt masks.
Marr and Hildreth argued that:
1) intensity changes are dependent on image scale, so detecting them requires operators of different sizes, and
2) a sudden intensity change gives rise to a peak or trough in the first derivative or, equivalently, to a zero crossing in the second derivative.
The operator should have two salient features:
1) it should be a differential operator capable of computing first- or second-order derivatives at every point in an image;
2) it should be tunable to act at any desired scale, so that large operators can be used to detect blurry edges and small operators to detect sharply focused fine detail.
• Marr and Hildreth argued that the most satisfactory operator is the Laplacian of a Gaussian (LoG):

∇² = ∂²/∂x² + ∂²/∂y²

G(x, y) = e^(−(x²+y²)/2σ²)
∇²G(x, y) = [(x² + y² − 2σ²)/σ⁴] e^(−(x²+y²)/2σ²)

This is known as the LoG operator.
Fundamental ideas:
• The Gaussian part blurs the image, reducing noise and fine structure.
• Unlike averaging masks, the Gaussian filter is smooth in both the spatial and frequency domains.
• Fewer artifacts are introduced.
• The Laplacian part is isotropic (invariant to rotation).
• Since it is isotropic, there is no need for multiple masks, unlike the first-derivative case.
• Finally, edge detection is performed as

g(x, y) = [∇²G(x, y)] ∗ f(x, y)

The locations of edges are found at the zero crossings of g(x, y).
Algorithm:
1. Filter the input image with an n×n Gaussian lowpass filter.
2. Compute the Laplacian of the image resulting from Step 1.
3. Find the zero crossings of the image from Step 2.
• Size of the mask: about 99.7% of the volume under a 2-D Gaussian surface lies within ±3σ of the mean.
• The value of n should be the smallest odd integer greater than or equal to 6σ.
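The LoG formula and the 6σ size rule can be combined into a short sketch that samples an n×n LoG kernel:

```python
import math

def log_kernel(sigma):
    """Sample the LoG, ((x^2+y^2-2*sigma^2)/sigma^4) * exp(-(x^2+y^2)/(2*sigma^2)),
    on an n x n grid, with n the smallest odd integer >= 6*sigma."""
    n = math.ceil(6 * sigma)
    if n % 2 == 0:          # force n to be odd so the kernel has a center pixel
        n += 1
    half = n // 2
    return [[(x * x + y * y - 2 * sigma ** 2) / sigma ** 4
             * math.exp(-(x * x + y * y) / (2 * sigma ** 2))
             for x in range(-half, half + 1)]
            for y in range(-half, half + 1)]

k = log_kernel(1.0)
print(len(k))        # -> 7 (smallest odd n >= 6)
print(k[3][3])       # -> -2.0 (the center of the LoG is negative)
```

Convolving the image with this sampled kernel performs Steps 1 and 2 of the algorithm in a single pass.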
How to find the zero crossings:
• Consider a 3×3 neighborhood centered at any pixel p.
• A zero crossing at p implies that the signs of at least one of its four pairs of opposing neighbors must differ.
• Optionally, the absolute value of the opposing neighbors' difference must also exceed a threshold.
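The zero-crossing test above can be sketched as follows; the Laplacian values and threshold are illustrative:

```python
def is_zero_crossing(lap, r, c, threshold=0.0):
    """p = (r, c) is a zero crossing if any of the four opposing neighbor pairs
    (left/right, up/down, and the two diagonals) differ in sign and their
    absolute difference exceeds the threshold."""
    pairs = [((r, c - 1), (r, c + 1)),
             ((r - 1, c), (r + 1, c)),
             ((r - 1, c - 1), (r + 1, c + 1)),
             ((r - 1, c + 1), (r + 1, c - 1))]
    for (r1, c1), (r2, c2) in pairs:
        a, b = lap[r1][c1], lap[r2][c2]
        if a * b < 0 and abs(a - b) > threshold:
            return True
    return False

# Illustrative 3x3 Laplacian neighborhood straddling an edge.
lap = [[-5, -5, -5],
       [-5,  0,  5],
       [ 5,  5,  5]]
print(is_zero_crossing(lap, 1, 1))                # -> True (signs flip across p)
print(is_zero_crossing(lap, 1, 1, threshold=20))  # -> False (differences too small)
```

Raising the threshold suppresses weak zero crossings caused by noise, at the cost of missing low-contrast edges.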
• It is possible to approximate the LoG filter by a difference of Gaussians (DoG):

DoG(x, y) = [1/(2πσ₁²)] e^(−(x²+y²)/2σ₁²) − [1/(2πσ₂²)] e^(−(x²+y²)/2σ₂²),  with σ₁ > σ₂

• The values of σ₁ and σ₂ are chosen so that the DoG matches the σ of the LoG being approximated.
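A sketch of the DoG formula; the σ ratio of 1.6:1, often cited for approximating the LoG, is an illustrative choice here:

```python
import math

def dog(x, y, sigma1, sigma2):
    """Difference of Gaussians at (x, y), with sigma1 > sigma2."""
    def g(s):
        return math.exp(-(x * x + y * y) / (2 * s * s)) / (2 * math.pi * s * s)
    return g(sigma1) - g(sigma2)

# Like the LoG, the DoG is negative at the center and positive in the surround.
print(dog(0, 0, 1.6, 1.0) < 0)   # True: the narrower Gaussian dominates at the origin
print(dog(3, 0, 1.6, 1.0) > 0)   # True: the wider Gaussian dominates farther out
```

This center-surround shape is why the DoG serves as a practical stand-in for the LoG: two Gaussian blurs and a subtraction replace a second-derivative computation.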
• Gives false edges
• Poor localization at curved edges
• Now of historical interest
• Canny edge detector gives better performance
Canny edge detector
• Generally the best-performing classical edge detection method
• At the cost of increased implementation complexity
• Canny's approach is based on three basic objectives:
– Low error rate: all edges should be found, and there should be no false edges.
– Edge points should be well localized.
– Single edge point response: only one point for each true edge point, i.e., minimize the number of false maxima around a true edge.
• First smooth the image with a circular 2-D Gaussian function, then compute the gradient image:

G(x, y) = e^(−(x²+y²)/2σ²)

fs(x, y) = G(x, y) ∗ f(x, y)

M(x, y) = √(gx² + gy²)

α(x, y) = tan⁻¹(gy/gx)

where gx = ∂fs/∂x and gy = ∂fs/∂y
Canny edge detector: Single response requirement
• Since M(x, y) is generated using the gradient, it contains wide ridges
around local maxima
• Next step is to thin those ridges, using Non-maxima suppression
• Specify a number of discrete orientations of the edge normal (gradient
vector)
• In a 3×3 region, we can define four orientations: horizontal, vertical, +45°, and −45°.
• Let d1, d2, d3, and d4 denote the four basic edge normal directions for a 3×3 region.
• Find the direction dk that is closest to α(x, y).
• If the value of M(x, y) is less than at least one of its two neighbors along dk, set gN(x, y) = 0 (suppression); otherwise let gN(x, y) = M(x, y).
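The suppression rule above can be sketched as follows, assuming angles in degrees and the four quantized normal directions; M and alpha are illustrative small arrays:

```python
def nonmax_suppress(M, alpha):
    """gN(r, c) = M(r, c) if it is >= both neighbors along the quantized
    direction dk closest to the edge normal alpha(r, c); otherwise 0."""
    # Offsets of the two neighbors along each quantized normal direction.
    neighbors = {
        0:   ((0, -1), (0, 1)),    # horizontal normal: compare left/right
        45:  ((-1, 1), (1, -1)),
        90:  ((-1, 0), (1, 0)),
        135: ((-1, -1), (1, 1)),
    }
    rows, cols = len(M), len(M[0])
    gN = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            a = alpha[r][c] % 180
            # closest quantized direction, with wraparound at 180 degrees
            dk = min(neighbors, key=lambda d: min(abs(a - d), 180 - abs(a - d)))
            (dr1, dc1), (dr2, dc2) = neighbors[dk]
            if M[r][c] >= M[r + dr1][c + dc1] and M[r][c] >= M[r + dr2][c + dc2]:
                gN[r][c] = M[r][c]
    return gN

# A wide vertical ridge in M; the edge normal points horizontally (0 degrees).
M = [[0, 5, 10, 5, 0]] * 5
alpha = [[0] * 5 for _ in range(5)]
gN = nonmax_suppress(M, alpha)
print(gN[2])   # -> [0.0, 0.0, 10, 0.0, 0.0] (only the ridge center survives)
```

The wide ridge in M is thinned to a single-pixel response, which is exactly the single-response objective.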
Canny edge detector: Post Processing
• Final operation is to threshold gN(x, y) to reduce false edge points
• Using Hysteresis Thresholding – low and high thresholds
• The ratio of high to low threshold should be 2:1 or 3:1
gNH(x, y) = gN(x, y) ≥ TH    (strong edge pixels)

gNL(x, y) = gN(x, y) ≥ TL    (weak edge pixels)

All the nonzero pixels of gNH will be contained in gNL, so the strong pixels are removed from the weak image:

gNL(x, y) = gNL(x, y) − gNH(x, y)
• After the thresholding operations, all strong edge pixels in gNH(x, y) are marked as valid edge pixels.
• These edge pixels have gaps (broken edges), so longer edges are formed using the next procedure.
1. Locate the next unvisited edge pixel p in gNH(x, y).
2. Mark as valid edge pixels all the weak edge pixels in gNL(x, y) that are 8-connected to p.
3. If all nonzero pixels in gNH(x, y) have been visited, go to the next step; else go to Step 1.
• Set the remaining weak edge pixels in gNL(x, y) that were not marked as valid edge pixels to zero.
• The final edge image is formed by appending to gNL(x, y) all the nonzero pixels from gNH(x, y).
Summary
• Smooth the input image with a Gaussian filter.
• Compute the gradient magnitude and angle images.
• Apply non-maxima suppression to the gradient magnitude image.
• Use double thresholding (hysteresis thresholding) and connectivity analysis to detect and link edges.
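The double thresholding and connectivity analysis can be sketched as follows, using a 3:1 high-to-low threshold ratio as suggested above; the gradient values are illustrative:

```python
from collections import deque

def hysteresis(gN, t_low, t_high):
    """Double thresholding + 8-connectivity linking: strong pixels (>= t_high)
    are edges; weak pixels (>= t_low) survive only if 8-connected, possibly
    through other weak pixels, to a strong one."""
    rows, cols = len(gN), len(gN[0])
    strong = [(r, c) for r in range(rows) for c in range(cols)
              if gN[r][c] >= t_high]
    edges = set(strong)
    queue = deque(strong)
    while queue:                      # breadth-first growth from strong pixels
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (0 <= rr < rows and 0 <= cc < cols
                        and (rr, cc) not in edges and gN[rr][cc] >= t_low):
                    edges.add((rr, cc))
                    queue.append((rr, cc))
    return edges

gN = [[0, 0, 0, 0],
      [0, 9, 4, 0],
      [0, 0, 0, 3]]
print(sorted(hysteresis(gN, t_low=3, t_high=8)))   # -> [(1, 1), (1, 2), (2, 3)]
```

The weak pixel at (2, 3) is kept only because a chain of weak pixels connects it back to the strong pixel at (1, 1); an isolated weak pixel would be discarded.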
Edge Linking

• Edge detection should yield sets of pixels lying only on edges.
• These pixels seldom characterize edges completely because of noise, breaks in the edges due to nonuniform illumination, and other effects that introduce spurious discontinuities in intensity values.
• Edge detection is therefore typically followed by linking algorithms designed to assemble edge pixels into meaningful edges and/or region boundaries.
Edge Linking using local Processing
Global Processing using Hough Transforms

• The Hough transform takes the images created by the edge detection operators.
• Most of the time, the edge map generated by edge detection algorithms is disconnected.
• The HT can be used to connect the disjointed edge points.
• It is used to fit the points to plane curves; plane curves include lines, circles, and parabolas.
• The line equation is y = mx + c.
• However, the problem is that infinitely many lines pass through a single point.
• Therefore, an edge point in the x-y plane is transformed to the c-m plane.
• In the c-m plane, the equation of the line is c = (−x)m + y.
• All the edge points in the x-y plane need to be fitted.
• Each point is transformed into a line in the c-m plane.
• The objective is to find the intersection points of these lines.
• A common intersection point indicates the edge points that are part of the same line.
• If A and B are points connected by a line in the spatial domain, they become intersecting lines in Hough space (the c-m plane).
• To find the intersections, the c-m plane is partitioned into an accumulator array.
• For every edge point (x, y), the corresponding accumulator elements are incremented.
• At the end of this process, the accumulator values are checked: the cell with the most votes gives the equation of the line in (x, y) space.
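The accumulator procedure above can be sketched in Python. The discrete slope range is an illustrative assumption; practical implementations usually use the (ρ, θ) parameterization instead, which avoids infinite slopes for vertical lines.

```python
from collections import Counter

def hough_mc(points, slopes):
    """For each edge point (x, y) and each candidate slope m,
    vote for the cell (m, c) with intercept c = y - m*x."""
    acc = Counter()
    for x, y in points:
        for m in slopes:
            acc[(m, y - m * x)] += 1
    return acc

# Three points on the line y = 2x + 1, plus one outlier.
pts = [(0, 1), (1, 3), (2, 5), (3, 9)]
acc = hough_mc(pts, slopes=range(-3, 4))
(m, c), votes = acc.most_common(1)[0]
print(m, c, votes)   # -> 2 1 3
```

The cell (m, c) = (2, 1) collects three votes, one from each collinear point, so the most-voted cell recovers the line equation y = 2x + 1 despite the outlier.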
Thresholding
• Assumption: the range of intensity levels covered by objects of interest is different from that of the background.
The Role of Noise
Image Smoothing prior to Thresholding
The Role of Illumination
Basic Global Thresholding
Iterative Global Thresholding
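The slides give no formulas here, so this is a minimal sketch of the standard iterative scheme: initialize T to the global mean, split the pixels into two groups at T, set T to the midpoint of the two group means, and repeat until T stabilizes. The pixel values and eps are illustrative.

```python
def iterative_threshold(pixels, eps=0.5):
    """Iterative global threshold: T converges to the midpoint
    between the two class means."""
    T = sum(pixels) / len(pixels)          # start from the global mean
    while True:
        g1 = [p for p in pixels if p > T]  # brighter class
        g2 = [p for p in pixels if p <= T] # darker class
        if not g1 or not g2:               # degenerate split: give up
            return T
        new_T = (sum(g1) / len(g1) + sum(g2) / len(g2)) / 2
        if abs(new_T - T) < eps:
            return new_T
        T = new_T

# Bimodal intensities: dark background near 10, bright object near 200.
pixels = [10, 12, 11, 9, 200, 198, 202]
T = iterative_threshold(pixels)
print(100 < T < 150)   # True: T settles between the two modes
```

On a clearly bimodal histogram this converges in a few iterations; it works poorly when the assumption above (object and background intensities are separable) does not hold.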
Image Partitioning
Region-Based Segmentation
• Edges and thresholds sometimes do not give good results for segmentation.
• Region-based segmentation is based on the connectivity of similar pixels in a region.
– Each region must be uniform.
– Connectivity of the pixels within the region is very important.
• There are two main approaches to region-based segmentation: region growing and
region splitting.
Region Growing

• Thresholding can still produce isolated, fragmented regions.
• Region growing algorithms work on the principle of similarity.
• A region is coherent if all of its pixels are homogeneous with respect to some characteristics such as color, intensity, texture, or other statistical properties.
• The idea is to pick a pixel inside a region of interest as a starting point (also known as a seed point) and allow it to grow.
• The seed point is compared with its neighbors, and if the properties match, they are merged together.
• This process is repeated until the regions converge to an extent that no further merging is possible.
Region Growing Algorithm

• It is a process of grouping pixels or subregions into a bigger region present in an image.
• Selection of the initial seed: the initial seed representing the ROI is typically given by the user, but it can also be chosen automatically. The seeds can be either single or multiple.
• Seed growing criterion: the similarity criterion denotes the minimum difference in the grey levels or in the average of the set of pixels. The initial seed "grows" by adding neighbors that share the same properties as the initial seed.
• Termination: if further growing is not possible, terminate the region growing process.
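The three steps above can be sketched with a single seed, 4-connectivity, and a grey-level tolerance; the image, seed, and tolerance are illustrative:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`: accept 4-connected neighbors whose intensity
    is within `tol` of the seed's intensity; stop when no pixel can be added."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < rows and 0 <= cc < cols and (rr, cc) not in region
                    and abs(image[rr][cc] - seed_val) <= tol):
                region.add((rr, cc))
                queue.append((rr, cc))
    return region

# Dark region in the top-left corner, bright region elsewhere.
img = [[ 10,  11, 200],
       [ 12,  10, 201],
       [199, 202, 200]]
print(sorted(region_grow(img, seed=(0, 0), tol=5)))
# -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Growth terminates automatically when the queue empties, i.e., when every pixel adjacent to the region fails the similarity criterion.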
Region Splitting and Merging