1-Abstract:
Medical ultrasound (US) has been widely used for imaging human organs (such as the heart, kidney and prostate) and tissues (such as the breast, the abdomen, the muscular system, and fetal tissue during pregnancy). US imaging is real-time, non-radioactive, non-invasive and inexpensive. However, US imagery is characterized by a low signal-to-noise ratio, low contrast between tissues and speckle contamination. In general, medical US imagery is hard to interpret objectively. Thus, automatic analysis and interpretation of US imagery for disease diagnostics and treatment planning is desirable and of clinical value.
An essential step toward automatic interpretation of imagery is detecting the boundaries of different tissues. Though in general the boundary of an object can be a combination of step edges, ridges, ramp edges, etc., we focus upon detecting the boundaries of human organs that can be modeled as "step edges". In this work we discuss a new partial differential equation (PDE) based speckle-reducing filter proposed for the enhancement of US images. This filter relies on the instantaneous coefficient of variation (ICOV) to measure the edge strength in speckled images and helps to perform the segmentation of the medical image.
In this report, we first give a general idea of digital image formation, which helps in understanding our work; then we talk about segmentation and segmentation methods; then we discuss edge detection and the several techniques used in this field, including the ICOV; and finally we apply the K-means clustering equation (on the combined image) in the MATLAB program to show the resulting segmented image.
Ch1. Introduction
1-What is an image?
An image is an array, or a matrix, of square pixels (picture
elements) arranged in columns and rows.
1.1-Grayscale:
Figure 1: An image — an array or a matrix of pixels arranged in columns
and rows.
In an 8-bit grayscale image each picture element has an assigned intensity that ranges from 0 to 255. A grayscale image is what people normally call a black-and-white image, but the name emphasizes that such an image will also include many shades of grey.
Figure 2: Each pixel has a value from 0 (black) to 255 (white). The possible range of the pixel values depends on the color depth of the image, here 8 bit = 256 tones or grayscales.
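As a quick MATLAB illustration (the tool used later in this project), the following snippet reads an 8-bit grayscale file and inspects a pixel value; the file name 'pic.tif' is a hypothetical example:
>> I = imread('pic.tif');   % hypothetical 8-bit grayscale image file
>> I(1,1)                   % intensity of the top-left pixel, in the range 0..255
>> class(I)                 % 'uint8', i.e. 8-bit color depth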
A normal grayscale image has 8-bit color depth = 256 grayscales. A "true color" image has 24-bit color depth = 3 × 8 bits = 256 × 256 × 256 colors ≈ 16 million colors.
Figure 3: A true-color image assembled from three grayscale images
colored red, green and blue. Such an image may contain up to 16 million
different colors.
Some grayscale images have more grayscales, for instance 16 bit = 65,536 grayscales. In principle three 16-bit grayscale images can be combined to form an image with 65,536³ = 281,474,976,710,656 color tones.
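As a MATLAB sketch of this idea, three grayscale channel images of equal size (hypothetical file names) can be stacked into one true-color image:
>> r = imread('red.tif');    % hypothetical grayscale channel files
>> g = imread('green.tif');
>> b = imread('blue.tif');
>> rgb = cat(3, r, g, b);    % stack into an m-by-n-by-3 true-color image
>> imshow(rgb);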
There are two general groups of 'images': vector graphics (or line art) and bitmaps (pixel-based images). Some of the most common file formats are:
GIF — an 8-bit (256 color), non-destructively compressed
bitmap format. Mostly used for web. Has several sub-standards
one of which is the animated GIF.
JPEG — a very efficient (i.e. much information per byte)
destructively compressed 24 bit (16 million colors) bitmap
format. Widely used, especially for web and Internet
(bandwidth-limited).
TIFF — the standard 24-bit publication bitmap format. Compresses non-destructively with, for instance, Lempel-Ziv-Welch (LZW) compression.
PS — PostScript, a standard vector format. Has numerous sub-standards and can be difficult to transport across platforms and operating systems.
PSD – a dedicated Photoshop format that keeps all the
information in an image including all the layers.
1.2-Color scale:
The two main color spaces are RGB and CMYK.
1.2.1/ RGB
The RGB color model relates very closely to the way we
perceive color with the r, g and b receptors in our retinas. RGB
uses additive color mixing and is the basic color model used in
television or any other medium that projects color with light. It
is the basic color model used in computers and for web graphics,
but it cannot be used for print production.
The secondary colors of RGB – cyan, magenta, and yellow – are
formed by mixing two of the primary colors (red, green or blue)
and excluding the third color. Red and green combine to make
yellow, green and blue to make cyan, and blue and red form
magenta. The combination of red, green, and blue in full
intensity makes white.
In Photoshop using the “screen” mode for the different layers in
an image will make the intensities mix together according to the
additive color mixing model. This is analogous to stacking slide
images on top of each other and shining light through them.
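A minimal MATLAB sketch of additive mixing (our illustration, not Photoshop itself) makes the same point: adding a pure red layer to a pure green layer displays as yellow:
>> red   = cat(3, ones(100), zeros(100), zeros(100));  % 100-by-100 pure red layer
>> green = cat(3, zeros(100), ones(100), zeros(100));  % 100-by-100 pure green layer
>> imshow(red + green);                                % additive mixing: red + green = yellow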
Figure 4: The additive model of RGB. Red, green, and blue are the
primary stimuli for human color perception and are the primary additive
colors. Courtesy of adobe.com.
1.2.2/CMYK
The 4-color CMYK model used in printing lays down overlapping layers of varying percentages of transparent cyan (C), magenta (M) and yellow (Y) inks. In addition, a layer of black (K) ink can be added. The CMYK model uses the subtractive color model.
Figure 5: The colors created by the subtractive model of CMYK don't look exactly like the colors created in the additive model of RGB. Most importantly, CMYK cannot reproduce the brightness of RGB colors. In addition, the CMYK gamut is much smaller than the RGB gamut. Courtesy of adobe.com.
2-Digital image
2.1-Common Values:
There are standard values for the various parameters
encountered in digital image processing. These values can be
caused by video standards, by algorithmic requirements, or by
the desire to keep digital circuitry simple. Table (1) gives some
commonly encountered values.
Table (1): Common values of digital image parameters
2.2-Types of Operations:
The types of operations that can be applied to digital images
to transform an input image f[m,n] into an output image b[m,n]
(or another representation) can be classified into three categories
as shown in Table (2).
Table (2): Types of image operations. Image size = N × N.
These types of image operations are shown graphically in Figure (2).
Figure (2): Illustration of various types of image operations.
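Assuming the usual point/local/global taxonomy of image operations, the following MATLAB lines (Image Processing Toolbox assumed; 'pic.tif' is a hypothetical file) show one example of each type:
>> I = imread('pic.tif');    % hypothetical grayscale input
>> P = imcomplement(I);      % point operation: each output pixel depends on one input pixel
>> L = medfilt2(I, [3 3]);   % local operation: each output pixel depends on a 3-by-3 neighborhood
>> G = fft2(im2double(I));   % global operation: every input pixel contributes to each output value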
2.3-Effect of Noise in Digital Image:
Real images are often degraded by random errors; this degradation is called noise. In digital images, noise can occur during image transmission and digitization. Image sensors are affected by environmental conditions during image digitization and by the quality of their sensing elements.
Noise may be dependent on or independent of the image content. Images are also corrupted during transmission due to interference in the channel used for transmission.
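Both content-dependent (multiplicative) and content-independent (additive) noise can be simulated in MATLAB with the imnoise function (Image Processing Toolbox); 'pic.tif' is a hypothetical clean image:
>> I = im2double(imread('pic.tif'));     % hypothetical clean image
>> J = imnoise(I, 'speckle', 0.04);      % multiplicative speckle noise, as in US imagery
>> K = imnoise(I, 'gaussian', 0, 0.01);  % additive Gaussian noise
>> imshow(J); figure; imshow(K);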
Ch2.
1-Image processing
Image processing is any form of signal processing for which the input is an image, such as photographs or frames of video; the output of image processing can be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.
Image processing usually refers to digital image processing, but optical and analog image processing are also possible. This chapter is about general techniques that apply to all of them.
2-Segmentation (image processing)
In computer vision, segmentation refers to the process of
partitioning a digital image into multiple regions (sets of pixels).
The goal of segmentation is to simplify and/or change the
representation of an image into something that is more
meaningful and easier to analyze. Image segmentation is
typically used to locate objects and boundaries (lines, curves,
etc.) in images.
The result of image segmentation is a set of regions that collectively cover the entire image, or a set of contours extracted from the image (see edge detection). Each of the pixels in a region is similar with respect to some characteristic or computed property, such as color, intensity, or texture. Adjacent regions are significantly different with respect to the same characteristic(s).
2.1-Some of the practical applications of image segmentation are:
-Medical imaging [the field this report focuses on]
-Locate tumors and other pathologies
-Measure tissue volumes
-Computer-guided surgery
-Diagnosis
-Treatment planning
-Study of anatomical structure
-Locate objects in satellite images (roads, forests, etc.)
-Face recognition
-Fingerprint recognition
-Automatic traffic controlling systems
-Machine vision
2.2-Algorithms and techniques for image segmentation:
Several general-purpose algorithms and techniques have been
developed for image segmentation. Since there is no general
solution to the image segmentation problem, these techniques
often have to be combined with domain knowledge in order to
effectively solve an image segmentation problem for a problem
domain:
1- Clustering Methods
2- Histogram-Based Methods
3- Edge Detection Methods
4- Region Growing Methods
5- Level Set Methods
6- Graph Partitioning Methods
7- Multi-scale Segmentation
3-Medical imaging
Medical imaging refers to the techniques and processes used to
create images of the human body (or parts thereof) for clinical
purposes (medical procedures seeking to reveal, diagnose or
examine disease) or medical science (including the study of
normal anatomy and function). As a discipline and in its widest
sense, it is part of biological imaging and incorporates radiology
(in the wider sense), radiological sciences, endoscopy, (medical)
thermography, medical photography and microscopy (e.g. for
human pathological investigations). Measurement and recording techniques which are not primarily designed to produce images, such as electroencephalography (EEG) and magnetoencephalography (MEG), but which produce data that can be represented as maps (i.e. containing positional information), can be seen as forms of medical imaging.
In the clinical context, medical imaging is generally equated to
radiology or "clinical imaging" and the medical practitioner
responsible for interpreting (and sometimes acquiring) the
images is a radiologist. Diagnostic radiography designates the
technical aspects of medical imaging and in particular the
acquisition of medical images. The radiographer or radiologic
technologist is usually responsible for acquiring medical images
of diagnostic quality, although some radiological interventions
are performed by radiologists.
As a field of scientific investigation, medical imaging
constitutes a sub-discipline of biomedical engineering, medical
physics or medicine depending on the context: Research and
development in the area of instrumentation, image acquisition
(e.g. radiography), modeling and quantification are usually the
preserve of biomedical engineering, medical physics and
computer science; Research into the application and
interpretation of medical images is usually the preserve of
radiology and the medical sub-discipline relevant to the medical condition or area of medical science (neuroscience, cardiology, psychiatry, psychology, etc.) under investigation. Many of the
techniques developed for medical imaging also have scientific
and industrial applications.
Medical imaging is often perceived to designate the set of
techniques that noninvasively produce images of the internal
aspect of the body. In this restricted sense, medical imaging can
be seen as the solution of mathematical inverse problems. This
means that cause (the properties of living tissue) is inferred from
effect (the observed signal). In the case of ultrasonography, the probe emits ultrasonic pressure waves, and the echoes from inside the tissue reveal the internal structure. In the case of projection
radiography, the probe is X-ray radiation which is absorbed at
different rates in different tissue types such as bone, muscle and
fat.
4-Edge Detection Methods
Edge detection is a well-developed field on its own within
image processing. Region boundaries and edges are closely
related, since there is often a sharp adjustment in intensity at the
region boundaries. Edge detection techniques have therefore been used as the basis of another segmentation technique.
The edges identified by edge detection are often disconnected. To segment an object from an image, however, one needs closed region boundaries. Discontinuities are bridged if the distance between the two edges is within some predetermined threshold.
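One simple way to bridge such gaps is morphological closing of the binary edge map. The following MATLAB sketch assumes the Image Processing Toolbox and a hypothetical input file 'pic.tif'; it is an illustration, not a prescribed method:
>> E = edge(im2double(imread('pic.tif')), 'canny');  % edge map, often disconnected
>> Eb = imclose(E, strel('disk', 2));                % bridge gaps up to a few pixels wide
>> imshow(Eb);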
An edge may be regarded as a boundary between two dissimilar regions in an image. These may be different surfaces of the object, or perhaps a boundary between light and shadow falling on a single surface.
In principle an edge is easy to find since differences in pixel
values between regions are relatively easy to calculate by
considering gradients.
4.1-Edges are very important to any vision system because:
- They are fairly cheap to compute.
- They provide strong visual cues that can help the recognition process.
- Edges are affected by noise present in an image, though.
4.2-There are several methods used in edge detection. Some of these methods are:
1- Gradient-based methods.
2- Second-order methods.
3- Zero-crossing based methods.
4- The instantaneous coefficient of variation (ICOV), etc.
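As a rough illustration of the first and third families, MATLAB's edge function (Image Processing Toolbox) provides a gradient-based detector and a Laplacian-of-Gaussian zero-crossing detector; 'pic.tif' is a hypothetical input image:
>> I = im2double(imread('pic.tif'));  % hypothetical input image
>> E1 = edge(I, 'sobel');             % gradient-based (first-order) method
>> E2 = edge(I, 'log');               % zero crossings of the Laplacian-of-Gaussian
>> imshow(E1); figure; imshow(E2);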
4.3-Concepts of Edge Detection
First of all, we have to clarify what edge detection is. Here are some definitions of an edge: an edge is not a physical entity, just like a shadow. It is where the picture ends and the wall starts. It is where the vertical and the horizontal surfaces of an object meet. It is what happens between a bright window and the darkness of the night. Simply speaking, it has no width. If there were sensors with infinitely small footprints and zero-width point spread functions, an edge would be recorded between pixels in an image. In reality, what appears to be an edge from a distance may contain other edges when examined more closely. The edge between a forest and a road in an aerial photo may not look like an edge any more in an image taken on the ground. In the ground image, edges may be found around each individual tree. Viewed from a few inches away, edges may be found within the texture on the bark of a tree. Edges are scale-dependent: an edge may contain other edges, but at a certain scale, an edge still has no width.
Traditionally, edges have been loosely defined as pixel intensity discontinuities within an image. While two experimenters processing the same image for the same purpose may not see the same edge pixels in the image, two working on different applications may never agree. In a word, edge detection is usually a subjective task. As a user of an edge detector, one should not expect the software to automatically detect all the edges he or she wants and nothing more, because a program cannot possibly know what level of detail the experimenter has in mind. Usually it is easy to detect the obvious edges, those with a high S/N ratio; but what about those that are not so obvious? If a program detects all the pixel intensity discontinuities in an image, the resulting image will not be very different from one full of noise. On the other hand, as a developer of an edge detector, one should not try to create a program that automatically produces the ideal result each and every user has in mind, because nobody can read other people's minds. Instead, a developer should try to: 1) create a good but simple way to let users express their idea about the edges they have in mind regarding a specific image; and 2) implement a method to detect the type of edges the user specified. In other words, an edge detector cannot possibly be 100 percent automatic. It must be interactive, requiring at least a few input parameters.
The quality of edge detection is limited by what's in the image. Sometimes a user knows there should be an edge somewhere in the image but it does not show up in the result. So he adjusts the parameters of the program, trying to get the edge detected. However, if the edge he has in mind is not as obvious to the program as some other features he does not want detected, he will get the other "noise" before the desired edge is detected. Edge-detecting programs process the image "as it is". As a human being, an experimenter knows there is an edge because he is using knowledge in addition to what's contained in the image. How to use such knowledge about the real world in the process of general edge detection is a huge topic that lies beyond the scope of this report. For example, if the program knows an edge belongs to a road and that the road is likely to continue on the other side of a tree branch, then it may have a chance to detect the edge of each and every visible part of the road behind the tree; otherwise, some small and not so obvious pieces of the edge may remain undetected. In a simplified special case, an edge detector may be tailored to take advantage of domain knowledge. For example, a "straight edge" detector may be very effective in locating most buildings and objects such as tennis courts in an aerial photo.
Also, because of the subjectivity of edge detection, it is difficult to compare the performance of two edge detectors on most real-world images. However, it is quite easy to compare them using synthetic images such as those shown on a separate page. In those images, the number of edge pixels should be the same as the height of the image. Whichever edge detector produces the most edge pixels along the central line and the fewest in other areas wins. If an edge detector performs badly on such images, it is unnecessary to try it on real-world images; if it does well on such synthetic images, however, it may still not do well on real ones.
4.4-Edge Detection using the Instantaneous Coefficient of Variation (ICOV)
4.4.1-The History:
In 1980, D. Marr and E. Hildreth examined the use of zero crossings produced by the Laplacian-of-Gaussian (LoG) operator for the detection of edges.
In 1986, J. Canny proposed the odd-symmetric derivative-of-Gaussian filter as a near-optimal edge detector, while even-symmetric (sombrero-like) filters have been proposed for ridge and roof detection.
In 1990, A. Bovik proved that both the gradient and the LoG operator do not have the constant-false-alarm-rate property in homogeneous speckle regions of speckled imagery. It has been argued that the application of such detectors generally fails to produce the desired edges from US imagery. Some constant false alarm rate (CFAR) edge detectors for speckle clutter have been proposed, including the ratio of averages (ROA) detector, the ratio detector, the ratio of weighted averages, and the likelihood ratio (LR). Other ratio detectors include the refined gamma maximum a posteriori detectors and more recent improvements, which use a combination of even-symmetric and odd-symmetric operators to extract step edges and thin linear structures in speckle. With CFAR edge detectors, the image needs to be scanned by a sliding window composed of several differently oriented splitting sub-windows. The accuracy of edge location for these ratio detectors depends strongly on the orientation of the sub-windows.
For the LR detector, an edge bias expression was derived in 2001 by O. Germain and P. Réfrégier. The bias in edge location is deleterious when obtaining quantitative estimates of organ volume from diagnostic US imagery.
In an attempt to develop a more efficient edge detector with high edge-positioning accuracy for US imagery, we turn our attention to differential/difference operators that are straightforward to compute in small windows. We believe that the key problem in developing differential-type edge detectors is one of correctly accommodating the multiplicative nature of speckle. So in November 2002, Yongjian Yu and Scott Acton proposed a new partial differential equation (PDE) based speckle-reducing filter for the enhancement of US imagery. This filter relies on the instantaneous coefficient of variation (ICOV) to measure the edge strength in speckled images.
4.4.2-Instantaneous Coefficient of Variation (ICOV):
The instantaneous coefficient of variation (ICOV) edge
detector, based on normalized gradient and Laplacian operators,
has been proposed for edge detection in ultrasound images.
In this work, the edge detection and localization performance of the ICOV detector is examined. First, a simplified version of the ICOV detector, the normalized gradient magnitude-squared (NG), is scrutinized in order to reveal the statistical performance of edge detection and localization in speckled ultrasound imagery.
Edge localization is characterized by the position of the peak
and the 3 dB width of the detector response. Then, the speckle
edge response of the ICOV as applied to a realistic edge model
is studied. Through theoretical analysis, we reveal the
compensatory effects of the normalized Laplacian operator in
the ICOV edge detector for edge localization error.
An ICOV-based edge detection algorithm is implemented in which the ICOV detector is embedded in the diffusion coefficient of an anisotropic diffusion process. Experiments with synthetic images have shown that the proposed algorithm is effective in extracting edges in the presence of speckle.
Denoting the image intensity at position (i, j) as I_{i,j}, the instantaneous coefficient of variation is given by the equation

ICOV_{i,j} = sqrt( | (1/2)|∇I|² - (1/16)(∇²I)² | ) / ( I + (1/4)∇²I )

where ∇ denotes the gradient, |∇I| the gradient magnitude, ∇² the Laplacian, and |·| the absolute value.
It is seen that the ICOV equation combines image intensity with
first and second derivative operators.
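A minimal MATLAB sketch of this computation, using finite-difference approximations of the derivatives, could look as follows. This is our illustration of the formula above, not Yu and Acton's original program; 'pic.tif' is a hypothetical noisy image:
>> I = im2double(imread('pic.tif')) + eps;  % hypothetical noisy image; eps guards against division by zero
>> [Ix, Iy] = gradient(I);                  % finite-difference partial derivatives
>> g2 = Ix.^2 + Iy.^2;                      % squared gradient magnitude
>> lap = 4 * del2(I);                       % del2 returns the Laplacian divided by 4
>> icov = sqrt(abs(0.5*g2 - (1/16)*lap.^2)) ./ (I + 0.25*lap);
>> imshow(mat2gray(icov));                  % display the normalized edge-strength map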
5-k-means clustering:
Simply speaking, k-means clustering is an algorithm to classify or group objects, based on their attributes/features, into K groups, where K is a positive integer. The grouping is done by minimizing the sum of squared distances between the data points and the corresponding cluster centroid. Thus the purpose of K-means clustering is to classify the data.
5.1-Clustering Methods
The K-means algorithm is an iterative technique that is used to
partition an image into K clusters. The basic algorithm is:
1. Pick K cluster centers, either randomly or based on some
heuristic
2. Assign each pixel in the image to the cluster that
minimizes the variance between the pixel and the cluster
center
3. Re-compute the cluster centers by averaging all of the
pixels in the cluster
4. Repeat steps 2 and 3 until convergence is attained (e.g. no
pixels change clusters)
In this case, variance is the squared or absolute difference
between a pixel and a cluster center. The difference is typically
based on pixel color, intensity, texture, and location, or a
weighted combination of these factors. K can be selected
manually, randomly, or by a heuristic.
This algorithm is guaranteed to converge, but it may not return
the optimal solution. The quality of the solution depends on the
initial set of clusters and the value of K.
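As a minimal sketch of the algorithm on a single intensity feature (assuming the Statistics Toolbox kmeans function and a hypothetical grayscale image 'pic.tif'):
>> I = im2double(imread('pic.tif'));        % hypothetical grayscale image
>> data = I(:);                             % one feature per pixel: its intensity
>> idx = kmeans(data, 3, 'Replicates', 3);  % K = 3; replicates reduce sensitivity to the initial centers
>> imshow(reshape(idx, size(I)), []);       % display the cluster-label map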
6-Using MATLAB Program:
The name MATLAB stands for matrix laboratory; it is defined as a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation.
6.1- Typical uses of MATLAB include:
-Math and computation
-Algorithm development
-Data acquisition
-Modeling, simulation, and prototyping
-Data analysis, exploration, and visualization
-Scientific and engineering graphics
-Application development, including graphical user interface building
In our project we use MATLAB to perform the segmentation process on a noisy image by using the instantaneous coefficient of variation (ICOV), and then apply the K-means clustering command to both images (the noisy image and the ICOV image). We operate MATLAB in the following sequence:
(1) First, we open the MATLAB program and wait until the main page appears.
(2) The main page of MATLAB consists of three sections (Command Window, Current Directory and Command History) which help us perform our job. We save our images (noisy & ICOV) in the Current Directory section, and then write the program or code that represents the (ICOV) equation in the Command Window.
Figure: The MATLAB main page, showing the Current Directory, Command Window and Command History sections.
(3) Before starting to write the program in the Command Window, we must first understand the equation and then use the appropriate MATLAB code to represent each part of it. This is one of the hard jobs in our project, because any mistake or error in the code will affect the resulting image.
Figure: The (ICOV) program code.
(4) The result of applying the ICOV to the noisy image will appear.
Figure: Result image from the noisy image.
(5) After combining both images (the noisy image & the ICOV image), the final result will appear as:
Figure: Result image using K-means clustering.
When we applied our equation to the NOISY IMAGE, the resulting image appeared clear; this is one of the best advantages of using the (ICOV) method.
Noisy image → Instantaneous Coefficient of Variation (ICOV) filter → Result image
After using edge detection by ICOV, we combine the noisy image with its edge-detection result by using the K-means clustering commands, in order to obtain a better segmentation of medical images. We now show how this method is carried out.
Inputs: the noisy image ('pic.tif') and the ICOV result ('result.tif').
With the K-means clustering commands (to produce 3 classes):
>> a = imread('pic.tif');          % noisy input image
>> b = imread('result.tif');       % ICOV edge-strength image
>> c = cat(3,a,b);                 % stack both images along the third dimension
>> ab = im2double(c);
>> ab1 = reshape(ab,140*140,2);    % one row per pixel, two features per pixel
>> [cluster_idx, cluster_center] = kmeans(ab1,3,'distance','sqEuclidean','Replicates',3);
>> pixel_labels = reshape(cluster_idx,140,140);
>> imshow(pixel_labels,[]);
Finally we get the resulting image, segmented into three classes (Class 1, Class 2 and Class 3).
7-Conclusion:
During this project we learned a lot of interesting things. We gained some experience in image processing, segmentation and edge detection, in theory and in practice, using the MATLAB 7 software.
K-means clustering on the combined images (synthetic image & ICOV) performed well, due to the following characteristics:
1- The K-means clustering provides a lower localization error and, qualitatively, a dramatic improvement in edge detection performance over an existing edge detection method for speckled imagery.
2- The K-means clustering is meant to allow for balanced and well-localized edge-strength measurements in bright regions as well as in dark regions.
3- The performance of the K-means clustering has been demonstrated for edge detection with speckle-reducing anisotropic diffusion.
4- This segmentation method can be developed in other projects to get a better view of medical images.
Still, an application on real US images needs to be tested in order to confirm the results obtained on synthetic ones.
8-References:
1- Digital Image Processing Using MATLAB (book), by Rafael C. Gonzalez (University of Tennessee), Richard E. Woods (MedData Interactive) and Steven L. Eddins (The MathWorks, Inc.).
2- Image Tutorial. (www.cs.washington.edu.com)
3- Segmentation (image processing) - Edge Detection. (www.wikipedia.com)
4- Segmentation of Ultrasound Image. (www.elsevier.com/locate/patrec)
5- Edge Detection. (www.cm.cf.uk/Dave/Vision)