PRESENTATION - VIP-Tomographic

Development of a baggage scanning
algorithm using image analysis
techniques
KRITHIKA CHANDRASEKAR
SCHOOL OF ELECTRICAL AND
COMPUTER ENGINEERING
1
OVERVIEW
•GRAYSCALING AND THRESHOLDING
•OTSU’S METHOD AND MATLAB’S
GRAYTHRESH COMMAND
•CONNECTED COMPONENT ANALYSIS
•REGION GROWING
•CLUSTERING
•FEATURE VECTOR ANALYSIS
•CT IMAGES AND FILTERED BACK
PROJECTION
2
GRAYSCALING
•A grayscale digital image is an image in which the
value of each pixel is a single sample, that is, it carries
only intensity information.
•The intensity of a pixel is expressed within a given
range between a minimum and a maximum, inclusive.
This range varies from 0 (total absence, black) to 1
(total presence, white), with any fractional values in
between.
•Technical uses (e.g. in medical imaging or remote
sensing applications) often require more levels, to make
full use of the sensor accuracy (typically 10 or 12 bits
per sample) and to guard against round off errors in
computations. Sixteen bits per sample (65,536 levels) is
a convenient choice for such uses, as computers
manage 16-bit words efficiently.
•RGB images are converted to grayscale by taking a weighted
sum of the (gamma-corrected) color components. Generally,
30% red + 59% green + 11% blue are added together to give the
grayscale image. Another common conversion is
(11*R + 16*G + 5*B) / 32, which is well suited to integer
implementations.
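A minimal MATLAB sketch of these weighted-sum conversions (assuming RGB is an
m-by-n-by-3 truecolor image; the variable names are illustrative):

% Weighted-sum grayscale conversion
RGB = im2double(RGB);                                        % scale channels to [0, 1]
G1 = 0.30*RGB(:,:,1) + 0.59*RGB(:,:,2) + 0.11*RGB(:,:,3);    % 30/59/11 % weighting
G2 = (11*RGB(:,:,1) + 16*RGB(:,:,2) + 5*RGB(:,:,3)) / 32;    % integer-friendly variant
% The Image Processing Toolbox provides rgb2gray(RGB) for the standard conversion.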
Fig 1: An example of a
grayscale image
3
OTSU’S METHOD
•The algorithm assumes that the image to be thresholded contains two classes of
pixels (e.g. foreground and background), then calculates the optimum threshold
separating those two classes so that their combined spread (intra-class variance)
is minimal. The extension of the original method to multi-level thresholding is
referred to as the multi-Otsu method.
•In Otsu's method we exhaustively search for the threshold that minimizes the
intra-class variance, defined as a weighted sum of the variances of the two classes.
The weights ωi are the probabilities of the two classes separated by a threshold t,
and σi² are the variances of these classes.
•Algorithm
Compute the histogram and probabilities of each intensity level
Set up initial ωi(0) and μi(0)
Step through all possible thresholds t from 1 to the maximum intensity:
Update ωi and μi
Compute the inter-class variance
The desired threshold corresponds to the maximum inter-class variance
(a minimal MATLAB sketch of this search follows)
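A minimal MATLAB sketch of the exhaustive search (assuming I is a uint8 grayscale
image; imhist is from the Image Processing Toolbox):

% Otsu's method: pick the threshold that maximizes the inter-class variance
counts = imhist(I, 256);                       % histogram of intensity levels 0..255
p = counts / sum(counts);                      % probability of each level
sigmaB = zeros(256, 1);                        % inter-class variance per threshold
for t = 1:255
    w0 = sum(p(1:t));  w1 = 1 - w0;            % class probabilities (weights)
    if w0 == 0 || w1 == 0, continue; end
    mu0 = sum((0:t-1)' .* p(1:t)) / w0;        % class 0 mean (levels 0..t-1)
    mu1 = sum((t:255)' .* p(t+1:256)) / w1;    % class 1 mean (levels t..255)
    sigmaB(t) = w0 * w1 * (mu0 - mu1)^2;       % inter-class variance
end
[~, tBest] = max(sigmaB);
level = (tBest - 1) / 255;                     % normalized like graythresh's output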
4
GRAYTHRESH COMMAND - MATLAB
graythresh - Global image threshold using Otsu's method
level = graythresh(I)
[level EM] = graythresh(I)
Description
level = graythresh(I) computes a global threshold (level) that can be used to convert an
intensity image to a binary image with im2bw. level is a normalized intensity value that
lies in the range [0, 1].
The graythresh function uses Otsu's method, which chooses the threshold to minimize the
intraclass variance of the black and white pixels.
[level EM] = graythresh(I) returns the effectiveness metric, EM, as the second output
argument. The effectiveness metric is a value in the range [0 1] that indicates the
effectiveness of the thresholding of the input image. The lower bound is attainable only by
images having a single gray level, and the upper bound is attainable only by two-valued
images.
Class Support
The input image I can be of class uint8, uint16, int16, single, or double and it must be
nonsparse.
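A minimal usage sketch (assuming I is a grayscale intensity image):

level = graythresh(I);     % Otsu threshold, normalized to [0, 1]
BW = im2bw(I, level);      % binary image: pixels above the threshold become 1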
5
CONNECTED COMPONENT ANALYSIS
Once region boundaries have been detected, it is often useful to extract regions which are
not separated by a boundary. Any set of pixels which is not separated by a boundary is
called connected. Each maximal region of connected pixels is called a connected
component. The set of connected components partitions an image into segments.
Let ∂s be a neighborhood system:
– 4-point neighborhood system
– 8-point neighborhood system
• Let c(s) be the set of neighbors that are connected to the point s.
For all s and r, the set c(s) must have the properties that
– c(s) ⊂ ∂s
– r ∈ c(s) ⇔ s ∈ c(r)
A region R ⊂ S is said to be connected under c(s) if for all s, r ∈ R there exists a sequence
of M pixels s1, …, sM such that
s1 ∈ c(s), s2 ∈ c(s1), …, sM ∈ c(sM−1), r ∈ c(sM),
i.e. there is a connected path from s to r.
*www.ee.mit.edu/ece590/lecture_notes/ece590H
6
CONNECTED COMPONENT EXTRACTION
Iterate through each pixel in the image.
• Extract the connected set for each unlabeled pixel.

ClassLabel = 1
Initialize Yr = 0 for all r Є S
For each s Є S {
    if (Ys = 0) {
        ConnectedSet(s, Y, ClassLabel)
        ClassLabel ← ClassLabel + 1
    }
}
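For comparison, MATLAB's Image Processing Toolbox labels the connected components of
the foreground pixels of a binary image directly; a minimal sketch (assuming BW is a
binary image):

[Y, numLabels] = bwlabel(BW, 4);   % 4-point neighborhood system (use 8 for 8-point)
% Y is 0 for background pixels and k for pixels in the k-th connected component.
% Unlike the pseudocode above, bwlabel does not assign labels to background regions.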
7
8
EXAMPLE 1
s = (i, j) = (0, 0);  ClassLabel = 1

The image X
     j: 0 1 2 3 4
i = 0:  1 0 0 0 0
i = 1:  1 1 0 0 0
i = 2:  0 1 1 0 0
i = 3:  0 1 1 0 0
i = 4:  0 1 0 0 1

The segmentation Y
     j: 0 1 2 3 4
i = 0:  1 0 0 0 0
i = 1:  1 1 0 0 0
i = 2:  0 1 1 0 0
i = 3:  0 1 1 0 0
i = 4:  0 1 0 0 0
9
EXAMPLE 2
s = (i, j) = (0, 1);  ClassLabel = 2

The image X
     j: 0 1 2 3 4
i = 0:  1 0 0 0 0
i = 1:  1 1 0 0 0
i = 2:  0 1 1 0 0
i = 3:  0 1 1 0 0
i = 4:  0 1 0 0 1

The segmentation Y
     j: 0 1 2 3 4
i = 0:  1 2 2 2 2
i = 1:  1 1 2 2 2
i = 2:  0 1 1 2 2
i = 3:  0 1 1 2 2
i = 4:  0 1 2 2 0
10
EXAMPLE 3
s = (i, j) = (2, 0);  ClassLabel = 3

The image X
     j: 0 1 2 3 4
i = 0:  1 0 0 0 0
i = 1:  1 1 0 0 0
i = 2:  0 1 1 0 0
i = 3:  0 1 1 0 0
i = 4:  0 1 0 0 1

The segmentation Y
     j: 0 1 2 3 4
i = 0:  1 2 2 2 2
i = 1:  1 1 2 2 2
i = 2:  3 1 1 2 2
i = 3:  3 1 1 2 2
i = 4:  3 1 2 2 0
11
EXAMPLE 4
s = (i, j) = (4, 4);  ClassLabel = 4

The image X
     j: 0 1 2 3 4
i = 0:  1 0 0 0 0
i = 1:  1 1 0 0 0
i = 2:  0 1 1 0 0
i = 3:  0 1 1 0 0
i = 4:  0 1 0 0 1

The segmentation Y
     j: 0 1 2 3 4
i = 0:  1 2 2 2 2
i = 1:  1 1 2 2 2
i = 2:  3 1 1 2 2
i = 3:  3 1 1 2 2
i = 4:  3 1 2 2 4
12
REGION GROWING
•Region growing is a simple region-based image segmentation method. It is also
classified as a pixel-based image segmentation method since it involves the
selection of initial seed points.
•This approach to segmentation examines neighboring pixels of initial “seed points”
and determines whether the pixel neighbors should be added to the region.
•The first step in region growing is to select a set of seed points. Seed point
selection is based on some user criterion (for example, pixels in a certain gray-level
range, pixels evenly spaced on a grid, etc.). The initial region begins as the exact
location of these seeds. The regions are then grown from these seed points to
adjacent points depending on a region membership criterion. The criterion could
be, for example, pixel intensity, gray-level texture, or color (a minimal
intensity-based sketch is given below).
•Since the regions are grown on the basis of the criterion, the image information
itself is important. For example, if the criterion were a pixel intensity threshold
value, knowledge of the histogram of the image would be of use, as one could use
it to determine a suitable threshold value for the region membership criterion.
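A minimal MATLAB sketch of intensity-based region growing from a single seed (the
function name, the seed coordinates, and the tolerance tol are illustrative assumptions):

function region = regionGrow(I, seedRow, seedCol, tol)
    % Grow a region of pixels whose intensity stays within tol of the seed intensity
    [rows, cols] = size(I);
    region = false(rows, cols);               % pixels accepted into the region
    seedValue = I(seedRow, seedCol);
    stack = [seedRow, seedCol];               % pixels waiting to be examined
    while ~isempty(stack)
        r = stack(end, 1);  c = stack(end, 2);
        stack(end, :) = [];
        if r < 1 || r > rows || c < 1 || c > cols, continue; end
        if region(r, c), continue; end                      % already in the region
        if abs(I(r, c) - seedValue) > tol, continue; end    % membership criterion
        region(r, c) = true;
        stack = [stack; r-1 c; r+1 c; r c-1; r c+1];        % push the 4-point neighbors
    end
end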
13
K-means clustering
In statistics and machine learning, k-means
clustering is a method of cluster analysis
which aims to partition n observations into k
clusters in which each observation belongs to
the cluster with the nearest mean.
The k-means algorithm repeats the three steps
below until convergence.
Iterate until stable (i.e. no object changes group):
•Determine the centroid coordinates
•Determine the distance of each object to the
centroids
•Group the objects based on minimum distance
The aim of this method is to minimize the objective function

J = Σ_{j=1..k} Σ_{i=1..n} || x_i^(j) − c_j ||²

where || x_i^(j) − c_j ||² is a chosen distance measure between a data point x_i^(j) and
the cluster centre c_j, and J is an indicator of the distance of the n data points from
their respective cluster centres.
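A minimal MATLAB sketch of this loop (assuming X is an n-by-d matrix of feature vectors
and k is the number of clusters; empty clusters are not handled in this sketch):

function [labels, centroids] = simpleKMeans(X, k)
    n = size(X, 1);
    centroids = X(randperm(n, k), :);        % initialize centroids from k random points
    labels = zeros(n, 1);
    while true
        D = zeros(n, k);                     % distance of each object to each centroid
        for j = 1:k
            D(:, j) = sum((X - repmat(centroids(j, :), n, 1)).^2, 2);
        end
        [~, newLabels] = min(D, [], 2);      % group each object by minimum distance
        if isequal(newLabels, labels), break; end   % stable: no object changed group
        labels = newLabels;
        for j = 1:k                          % recompute the centroid coordinates
            centroids(j, :) = mean(X(labels == j, :), 1);
        end
    end
end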
14
FEATURE VECTOR ANALYSIS
In pattern recognition and machine learning, a feature vector is an n-dimensional
vector of numerical features that represent some object. Many algorithms in machine
learning require a numerical representation of objects, since such representations
facilitate processing and statistical analysis. When representing images, the feature
values might correspond to the pixels of an image, when representing texts perhaps to
term occurrence frequencies.
A common example of feature vectors appears when each image point is to be
classified as belonging to a specific class. Assuming that each image point has a
corresponding feature vector based on a suitable set of features, such that each class
is well separated in the corresponding feature space, the classification of each image
point can be done using a standard classification method.
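A minimal sketch of building one feature vector per pixel of a grayscale image I (the
particular features chosen here, intensity plus 3x3 local mean and standard deviation,
are illustrative assumptions; stdfilt is from the Image Processing Toolbox):

I = im2double(I);                               % intensity values in [0, 1]
localMean = conv2(I, ones(3)/9, 'same');        % 3x3 neighborhood mean
localStd  = stdfilt(I, ones(3));                % 3x3 neighborhood standard deviation
features  = [I(:), localMean(:), localStd(:)];  % one 3-dimensional feature vector per pixel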
15
CT IMAGE FORMATION
The formation of a CT image is a distinct
three-phase process:
•The scanning phase
•The reconstruction phase
•The digital-to-analog conversion phase,
which produces the visible, displayed
analog image (shades of gray).
16
Scanning Phase
During the scanning phase a fan-shaped x-ray
beam is scanned around the body.
The amount of x-radiation that penetrates the
body along each individual ray (pathway)
through the body is measured by the
detectors that intercept the x-ray beam after
it passes through the body.
The projection of the fan-shaped x-ray beam
from one specific x-ray tube focal spot
position produces one view.
Many views projected from around the
patient's body are required in order to acquire
the necessary data to reconstruct an image.
17
CT IMAGE QUALITY
Like all medical images, CT images have five specific image quality characteristics. They are:
•Contrast sensitivity (very high for CT)
•Blurring and visibility of detail
•Visual noise
•Artifacts
•Spatial characteristics (tomographic slice or volume views)
Each of these characteristics is affected by the selection of protocol factor values that control the imaging process.
18
THE COMPLETE SCAN
A complete scan is formed by rotating the x-ray
tube completely around the body and
projecting many views.
Each view produces one "profile" or line of
data as shown here.
The complete scan produces a complete data
set that contains sufficient information for the
reconstruction of an image.
In principle, one scan produces data for one
slice image. However, with spiral/helical
scanning, there is not always a one-to-one
relationship between the number of scans
around the body and the number of slice
images produced.
19
CT IMAGING PROCESS
The principal objective of CT imaging is to
produce a digital image (a matrix of pixels) for a
specific slice of tissue.
During the image reconstruction process, the slice
of tissue is divided into a matrix of voxels (volume
elements).
As we will see later, a CT number is calculated and
displayed in each pixel of the image. The value of
the CT number is calculated from the x-ray
attenuation properties of the corresponding tissue
voxel.
20
MOVEMENT OF BEAMS DURING SCAN
There are two distinct motions of
the x-ray beam relative to the
patient's body during CT imaging.
One motion is the scanning of the
beam around the body.
The other motion is the
movement of the beam along the
length of the body. Actually, this
is achieved by moving the body
through the beam as it is rotating
around.
21
SCAN AND STEP SCANNING
Scan and step is one mode of scanning. It
was the first scanning mode developed
and is still used today for some
procedures.
One complete scan around the body is
made while the body is not
moving. Then the body is moved to the
next slice position.
The principal characteristic (and limitation) of
this mode is that the data set is fixed to a
specific slice of tissue.
This means that the slice thickness, position,
and orientation are "locked in" during the
scanning phase.
22
HELICAL SCANNING
Spiral or helical scanning is a more recently
developed mode and is used for many
procedures.
The patient's body is moved continuously as
the x-ray beam is scanned around the body.
This motion is controlled by the operator-selected
value of the pitch factor.
As illustrated, the pitch value is the distance
the body is moved during one beam rotation,
expressed as multiples of the x-ray beam
width or thickness.
If the body is moved 10 mm during one
rotation, and the beam width is 5 mm, the
pitch will have a value of 2.
23
CHANGING THE PITCH
When the pitch is increased, the x-ray
beam appears to move faster along the
patient's body.
During the same time (as illustrated), the
x-ray beam will be spread over more of
the body when the pitch is
increased. This has three major effects.
•Scan time will be less to cover a specific
body volume.
•The radiation is less concentrated, so dose
is reduced.
•There will not be as much "detail" in the
data, and image quality might be reduced.
24
RECONSTRUCTION
Image reconstruction is the phase in
which the scan data set is processed to
produce an image. The image is digital
and consists of a matrix of pixels.
"Filtered" refers to the use of the digital
image processing algorithms that are
used to improve image quality or change
certain image quality characteristics, such
as detail and noise.
"Back projection" is the actual process
used to produce or "reconstruct" the
image.
25
CONCEPT OF FILTERED BACK
PROJECTION
We start with one scan view through a
body section that contains two
objects. The data produced is not a
complete image, but a profile of the x-ray
attenuation by the objects.
Let's now take this profile and attempt to
draw an image by "back projecting" the
profile onto our image surface.
As we see, there is only enough
information in the profile to allow us to
draw in streaks.
26
FBP CONTINUED
We have now rotated the x-ray beam around
the body by 90° and obtained another
view.
If we now back project this
profile onto our image area
we see the beginnings of an
image showing the two
objects.
Several hundred views are
used to produce clinical CT
images
27
FBP-A MATHEMATICAL DERIVATION
Assume a computerised tomography set-up with rays parallel to the y-axis, directed
upwards. Assume the absorbing density in the plane to be µ(x, y), where the function µ
has compact support. The signal (projection) registered along the x axis, p(x), will
then be:

p(x) = ∫ µ(x, y) dy

Now consider a two-dimensional Fourier transform of the function µ:

M(k, l) = ∫∫ µ(x, y) e^(−i(kx + ly)) dx dy

Taking the one-dimensional Fourier transform of the projection along the y axis gives us
the (k, 0) line in the Fourier space of µ:

P(k) = ∫ p(x) e^(−ikx) dx = M(k, 0)
28
FBP CONTINUED
We can now rotate our set-up and obtain lines in the Fourier space of µ under any
angle θ by calculating one-dimensional Fourier transforms of the measured parallel
projections, in this way effectively reconstructing the whole M(k, l). Once we have
M(k, l) we can then obtain µ(x, y) by taking the inverse Fourier transform of M(k, l).
It is convenient in this case to write it all down in terms of the rotation angle θ.
Consider rotating the system of coordinates as follows:

x′ = x cos θ + y sin θ,   y′ = −x sin θ + y cos θ

Writing the inverse transform in these rotated (polar) coordinates yields:

µ(x, y) = (1 / (2π)²) ∫₀^π [ ∫ |k| P_θ(k) e^(i k x′) dk ] dθ

where P_θ(k) is the one-dimensional Fourier transform of the projection measured at angle θ.
29
FBP CONTINUED
Define the filter kernel b(x′) as the inverse Fourier transform of |k|:

b(x′) = (1 / 2π) ∫ |k| e^(i k x′) dk
30
FBP CONTINUED
Since b(x′) is the inverse Fourier transform of |k|, the filtered projection

p̃_θ(x′) = (1 / 2π) ∫ |k| P_θ(k) e^(i k x′) dk

is the convolution of p_θ(x′), the inverse Fourier transform of P_θ(k), and b(x′).
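A minimal MATLAB sketch of filtered back projection using the Image Processing Toolbox
(the Shepp-Logan phantom stands in for a scanned slice; names and parameters are
illustrative):

P = phantom(256);                              % test head phantom
theta = 0:179;                                 % one view per degree around the object
R = radon(P, theta);                           % scanning phase: one profile per view
I = iradon(R, theta, 'linear', 'Ram-Lak');     % reconstruction by filtered back projection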
31
CT NUMBERS
The CT numbers are calculated from the x-ray linear
attenuation coefficient values for each individual
tissue voxel. It is the attenuation coefficient that is
first calculated by the reconstruction process and
then used to calculate the CT number values.
Note that water is the reference material for CT
numbers and has an assigned value of zero.
Tissues or materials with attenuation (density)
greater than water will have positive CT
numbers. Those that are less dense will have
negative CT numbers.
X-ray attenuation depends on both the density and
atomic number (Z) of materials and the energy of
the x-ray photons. For CT imaging a high kV
(typically 120-140) and heavy beam filtration are used. This
minimizes the photoelectric interactions that are
influenced by the Z of a material. Therefore, CT
numbers are determined by the density of the
tissues or materials.
CT numbers are in Hounsfield Units.
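In Hounsfield units the CT number of a voxel is computed from
its linear attenuation coefficient as

CT number = 1000 × (µ_tissue − µ_water) / µ_water

so that water is 0 HU, air is approximately −1000 HU, and dense
bone is on the order of +1000 HU.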
32