
Unit I Notes

Unit I:
1) Fundamentals of Image Processing (Key Stages):
The steps involved in image processing fall into two categories:
1. Methods whose inputs and outputs are images.
2. Methods whose outputs are attributes extracted from those images.
1. Image Acquisition:
It refers to the process of capturing real-world images and storing them. A digital camera
captures an image directly and stores it in its memory device. An imaging device can use
either a single sensor, a line of sensors, or an array of sensors, depending on the scene to
be imaged. Photodiodes, charge-coupled devices (CCD) or CMOS devices can be used as
sensors that convert light energy into electrical energy.
2. Image Enhancement:
Enhancement alters an image to make it clearer to a human observer or better suited to
an automatic computer algorithm. The main goal is to emphasize certain features of
interest in an image for further analysis or for display. Common enhancement
techniques are contrast and edge enhancement, noise filtering, sharpening and
pseudo-coloring.
Image enhancement can be done in two domains:
a) Spatial domain
b) Frequency domain
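As a simple illustration of a spatial-domain enhancement technique, the following is a minimal Python sketch of contrast stretching (assuming NumPy is available; the low-contrast test image is a made-up example):

import numpy as np

def contrast_stretch(img):
    """Linearly stretch pixel intensities to the full 0-255 range
    (a simple spatial-domain enhancement)."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return img.astype(np.uint8)
    stretched = (img - lo) / (hi - lo) * 255.0
    return stretched.astype(np.uint8)

# Example: a low-contrast 8-bit image with values between 100 and 150
low_contrast = np.random.randint(100, 151, size=(4, 4), dtype=np.uint8)
print(contrast_stretch(low_contrast))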
3. Image Restoration:
Image restoration refers to the removal or minimization of degradations in an image
using prior knowledge of the degradation process. The objective of restoration is to
estimate the original signal from the degraded signal when some prior knowledge of the
degradation function and the noise is available. Like enhancement, it deals with
improving the appearance of an image, but unlike enhancement, restoration is an
objective method for which a mathematical model is needed.
4. Morphological Processing:
It is used as a tool for extracting image components that are useful in the representation
and description of region shape, such as boundaries, skeletons, etc. Morphological
techniques such as thinning, filtering and pruning are used for pre- and post-processing.
Erosion and dilation are the two fundamental operators of morphological processing.
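A rough Python sketch of erosion and dilation (assuming SciPy's ndimage module is available; the tiny binary test image is made up for illustration):

import numpy as np
from scipy import ndimage

# A small binary image: a 3x3 square of foreground pixels
img = np.zeros((7, 7), dtype=bool)
img[2:5, 2:5] = True

structure = np.ones((3, 3), dtype=bool)   # structuring element

eroded = ndimage.binary_erosion(img, structure=structure)    # shrinks the square
dilated = ndimage.binary_dilation(img, structure=structure)  # grows the square

print(eroded.astype(int))
print(dilated.astype(int))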
5. Segmentation:
Image segmentation refers to partitioning an image into its constituent parts.
Segmentation divides the image into "meaningful" parts or regions, where a meaningful
part may be a complete object or a part of one. Segmentation algorithms use image
features to extract regions. Edge detection, thresholding, boundary extraction, region
growing, and splitting and merging are generally used for segmentation.
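As a minimal example of one of these techniques, the sketch below segments an image by global thresholding (the test image and the threshold value T = 128 are assumptions for illustration):

import numpy as np

def threshold_segment(img, T=128):
    """Segment a grayscale image into object/background by global thresholding."""
    return (img > T).astype(np.uint8)   # 1 = object, 0 = background

# Example: a bright object (value 200) on a dark background (value 50)
img = np.full((5, 5), 50, dtype=np.uint8)
img[1:4, 1:4] = 200
print(threshold_segment(img, T=128))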
6. Representation and Description:
The representation and description stage follows segmentation. Boundary representation
is useful when external shape characteristics such as corners and inflections are of
interest, whereas regional representation is useful when internal properties such as
texture or skeletal shape are of interest. After boundary or region representation,
description (feature extraction) is used to extract attributes for object recognition.
7. Recognition:
It is the process of assigning labels to objects based on their descriptions. Once
appropriate features that truly represent the object are selected, the recognition task
becomes easier. Thus, pattern recognition means identification of an ideal object. There
are two phases of object recognition: learning and classification. Learning is the
development of a model based on features; classification assigns labels to the various
classes according to particular patterns.
8. Image Compression:
Compression is a technique to reduce the amount of memory needed to store an image
and the amount of time needed to transmit it. Image compression techniques make it
possible to communicate and access digital data at very high speed.
9. Color Image Processing:
Color image processing is a very important field of digital image processing. Colour
images are an integral part of our daily life, as we constantly generate and share them.
The RGB colour model is used for color monitors, video cameras, etc. The CMY (Cyan,
Magenta, Yellow) and CMYK (Cyan, Magenta, Yellow, Black) color models are used for
color printing. The HSI (Hue, Saturation, Intensity) model is used for image analysis.
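As a small illustration of the relation between these models, the sketch below converts RGB values to CMY using the standard relation C = 1 - R, M = 1 - G, Y = 1 - B on normalized channel values (the example pixel is made up):

import numpy as np

def rgb_to_cmy(rgb):
    """Convert an 8-bit RGB image to the CMY model: C = 1-R, M = 1-G, Y = 1-B
    with channel values normalized to [0, 1]."""
    rgb = rgb.astype(np.float64) / 255.0
    return 1.0 - rgb                     # CMY channels in the same array order

# Example: a single pure-red pixel -> cyan = 0, magenta = 1, yellow = 1
pixel = np.array([[[255, 0, 0]]], dtype=np.uint8)
print(rgb_to_cmy(pixel))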
2) Explain elements of DIP.
Image Sensors: With reference to sensing, two elements are required to acquire a
digital image. The first is a physical device that is sensitive to the energy radiated by
the object we wish to image, and the second is a digitizer, which converts the output of
the sensing device into digital form.
Specialized image processing hardware: It consists of the digitizer just mentioned, plus
hardware that performs other primitive operations, such as an arithmetic logic unit (ALU),
which performs arithmetic (e.g., addition and subtraction) and logical operations in
parallel on images.
Computer: It is a general purpose computer and can range from a PC to a
supercomputer depending on the application. In dedicated applications, sometimes
specially designed computers are used to achieve a required level of performance.
Software: It consists of specialized modules that perform specific tasks. A well-designed
package also includes the capability for the user to write code that, as a minimum,
utilizes the specialized modules. More sophisticated software packages allow the
integration of these modules.
Mass storage: This capability is a must in image processing applications. An image of
size 1024 x 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires
one megabyte of storage space if the image is not compressed. Image processing
applications fall into three principal categories of storage:
i) Short-term storage for use during processing
ii) On-line storage for relatively fast retrieval
iii) Archival storage, such as magnetic tapes and disks
Image display: Image displays in use today are mainly color TV monitors. These
monitors are driven by the outputs of image and graphics display cards that are an
integral part of the computer system.
Hardcopy devices: The devices for recording images include laser printers, film
cameras, heat-sensitive devices, inkjet units and digital units such as optical and CD-ROM
disks. Film provides the highest possible resolution, but paper is the obvious medium of
choice for written applications.
Networking: It is almost a default function in any computer system in use today
because of the large amount of data inherent in image processing applications. The
key consideration in image transmission is bandwidth.
3) Write a short note on representation of digital images.
An image is a two-dimensional function that represents a measure of some
characteristic such as brightness or colour of a viewed scene. An image is a projection
of a 3-D scene into a 2D projection plane.
An image may be defined as a two-dimensional function f(x,y), where x and y are
spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x,y) is
called the intensity of the image at that point. The term gray level is used often to
refer to the intensity of monochrome images. Colour images are formed by a
combination of individual 2-D images.
The result of sampling and quantization is a matrix of real numbers. Assume that an
image f(x,y) is sampled so that the resulting digital image has M rows and N columns.
The values of the coordinates (x,y) now become discrete quantities: the coordinates at
the origin are (x,y) = (0,0), and the subsequent coordinate values follow along the first
row of the image, and so on.
Due to processing, storage and hardware considerations, the number of gray levels
typically is an integer power of 2: L = 2^k.
The number of bits, B, required to store a digital image is B = M * N * k.
When M = N, the equation becomes B = N^2 * k.
When an image can have 2^k gray levels, it is referred to as a "k-bit image". An image
with 256 possible gray levels is called an "8-bit image" (256 = 2^8).
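A quick check of the storage formula in Python (the 1024 x 1024, 8-bit example matches the figure quoted under mass storage above):

def storage_bits(M, N, k):
    """Number of bits B = M * N * k needed to store an M x N image with k bits/pixel."""
    return M * N * k

# Example: a 1024 x 1024 image with 8 bits per pixel
bits = storage_bits(1024, 1024, 8)
print(bits, "bits =", bits // 8, "bytes =", bits // (8 * 1024 * 1024), "MB")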
Digital image is a finite collection of discrete samples (pixels) of any observable object.
The pixels represent a two- or higher dimensional “view” of the object, each pixel
having its own discrete value in a finite range. The pixel values may represent the
amount of visible light, infrared light, absorption of X-rays, electrons, or any other
measurable value such as ultrasound wave impulses. The image does not need to have
any visual sense; it is sufficient that the samples form a two-dimensional spatial
structure that may be illustrated as an image. The images may be obtained by a digital
camera, scanner, electron microscope, ultrasound stethoscope, or any other optical or
non-optical sensor. Examples of digital image are:
- digital photographs
- satellite images
- radiological images (X-rays, mammograms)
- binary images, fax images, engineering drawings
4) Write short notes on sampling and quantization.
To create a digital image, we need to convert the continuous sensed data into digital
form. This involves two processes: sampling and quantization. An image may be
continuous with respect to the x- and y-coordinates, and also in amplitude. To convert
it to digital form, we have to sample the function in both coordinates and in amplitude.
Digitizing the coordinate values is called sampling. Digitizing the amplitude values is
called quantization. In sampling, digitization is done on the independent variable; in the
case of the equation y = sin(x), it is done on the x variable. The more samples we take,
the better the quality of the image and the more the noise is reduced, and vice versa.
However, sampling along the x-axis alone does not convert the signal to digital form; the
y-axis must be sampled as well, which is known as quantization. Sampling has a
relationship with image pixels: the total number of pixels in an image can be calculated
as Pixels = total number of rows * total number of columns.
Quantization is the second step in the digitization process. It is the process of rounding
off the amplitude values of the samples to the nearest integer value. It is an irreversible
process, and there is information loss in the process of quantization. It plays a critical
role in image and video compression: the quantizer controls the bit rate of the encoder
and the distortion of the reconstructed image or video. Under the quantization process,
the amplitude values of the image are digitized. In simple words, when you quantize an
image, you are actually dividing a signal into quanta (partitions). There is a relationship
between quantization and gray-level resolution. When we want to improve the quality of
an image, we can increase the number of levels assigned to the sampled image; if we
increase this number to 256, we have an ordinary gray-scale image. Each level we assign
is called a gray level. Most digital image processing devices use quantization into k equal
intervals; if b bits per pixel are used, the number of quantization levels is k = 2^b.
The number of quantization levels should be high enough for human perception of
fine shading details in the image.
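To make the two steps concrete, the following sketch samples a made-up "continuous" scene function on an 8 x 8 grid and quantizes the samples to k = 2^3 levels (the function, grid size and bit depth are illustrative assumptions):

import numpy as np

# "Continuous" scene: a function f(x, y) defined everywhere on [0, 1] x [0, 1]
def f(x, y):
    return 0.5 * (np.sin(2 * np.pi * x) + 1) * y    # values in [0, 1]

M, N = 8, 8          # sampling: M rows, N columns
b = 3                # quantization: b bits per pixel -> k = 2**b levels
k = 2 ** b

# Sampling: evaluate f on a discrete M x N grid
ys, xs = np.meshgrid(np.linspace(0, 1, M), np.linspace(0, 1, N), indexing="ij")
samples = f(xs, ys)

# Quantization: round each sample to the nearest of k equally spaced levels
quantized = np.round(samples * (k - 1)).astype(np.uint8)
print(quantized)     # integer gray levels in the range 0 .. k-1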
5) Explain spatial and gray level resolution.
The detail in an image is determined by its resolution. The quality and accuracy of detail
presented by a graphics display system, such as a computer monitor or a printout, is
called resolution. Resolution quality increases with the number of pixels.
There are two types of resolution: spatial and grey level resolution.
Spatial resolution determines the smallest discernible detail in the image. It is
determined by how sampling is carried out. Pixel size is important from the screen-display
point of view, whereas dots per inch is important for printers. Spatial resolution can be
expressed as the total number of pixels in an image: a 1.3-megapixel camera will
produce an image of 1,310,720 pixels, with 1024 rows and 1280 columns.
Resolution is also the level of detail with which an image can be reproduced. As the total
number of pixels on an image sensor increases, the pixel size gets smaller, and a higher
quality lens is required to achieve best focus. In remote sensing applications, images in
which only large features are visible are low-resolution images, whereas in fine or
high-resolution images, small objects can be detected.
Commercial satellites provide imaging with resolutions varying from a few meters to
several kilometers.
Grey level resolution refers to the smallest discernible change in grey level. Generally,
grey-scale images are 8-bit images, that is, 256 different grey levels can be present in the
image. If the number of grey levels is increased to 1024 or 4096, human eyes are not
capable of distinguishing between so many distinct grey levels; thus 8-bit images are
generally used. If the number of grey levels in an image is reduced, false contouring can
be seen, prominently at 5 bits and lower. False contouring happens because of an
insufficient number of grey levels in the image. Grey levels can be reduced by a
quantizer, as the sketch below illustrates.
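A minimal sketch of such a quantizer, reducing an 8-bit gradient to 4 grey levels (the gradient image and the chosen bit depth are made-up examples; with so few levels, false contouring would be visible):

import numpy as np

def reduce_gray_levels(img, bits):
    """Requantize an 8-bit image to 2**bits gray levels (uniform quantizer)."""
    levels = 2 ** bits
    step = 256 // levels
    return (img // step) * step        # map each pixel to the bottom of its interval

# Example: a smooth 8-bit gradient requantized to 4 levels (2 bits)
gradient = np.tile(np.arange(256, dtype=np.uint8), (4, 1))
coarse = reduce_gray_levels(gradient, bits=2)
print(np.unique(coarse))               # only 4 distinct gray levels remain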
6) Explain the process of conversion of an analog image to a digital image.
The process of converting an analog image to a digital image can be explained using the
following block diagram:
Analog Image → Sampling → Quantization → Encoding → Storage → Display
1. Sampling: The first step in the conversion process is to sample the analog image.
This is typically done using an analog-to-digital converter (ADC), which takes
measurements of the image at specific intervals to capture discrete points of data. The
result is a series of samples, where each sample represents the value of the image at a
particular point.
2. Quantization: Once the image has been sampled, the next step is to quantize the
data. This involves assigning a numerical value to each sample based on its amplitude,
or brightness. This is typically done by dividing the range of possible values into a finite
number of discrete levels, such as 256 levels for an 8-bit image. The result is a series
of quantized samples, where each sample is represented by a numerical value.
3. Encoding: After quantization, the digital data must be encoded into a format that
can be stored and processed by a computer. This is typically done using a binary code,
where each sample is represented by a series of bits. For example, an 8-bit sample
would be represented by 8 bits, or 1 byte, of data.
4. Storage: Once the digital image has been encoded, it can be stored on a computer's
hard drive or other digital storage medium. The size of the digital image file will
depend on the resolution of the image, the number of bits used to represent each
sample, and the number of samples in the image.
5. Display: Finally, the digital image can be displayed on a computer monitor or other
digital display device. This involves converting the digital data back into an analog
signal that can be interpreted by the display device. This is typically done using a
digital-to-analog converter (DAC), which converts the binary code back into a series of
voltage levels that can be displayed on the screen.
Overall, the process of converting an analog image to a digital image involves a series
of steps that allow for the capture, quantization, encoding, storage, and display of the
image using digital technology.
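A toy end-to-end sketch of this chain on a one-dimensional "analog" signal (the signal, the 16 samples and the 8-bit depth are assumptions for illustration):

import numpy as np

# "Analog" signal: a continuous function of time (one scan line of an image, say)
analog = lambda t: 0.5 * (np.sin(2 * np.pi * t) + 1)     # amplitude in [0, 1]

# 1) Sampling: measure the signal at discrete instants
t = np.linspace(0, 1, 16)
samples = analog(t)

# 2) Quantization: map each sample to one of 256 levels (8 bits)
quantized = np.round(samples * 255).astype(np.uint8)

# 3) Encoding: pack the 8-bit codes into a byte string for storage
encoded = quantized.tobytes()

# 4/5) Storage and display: read the bytes back and rescale for a DAC/display
decoded = np.frombuffer(encoded, dtype=np.uint8) / 255.0
print(len(encoded), "bytes stored;", decoded[:4])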
7) Explain the types of sensors used for image acquisition.
There are two major types of image sensors:
1. Charge coupled device (CCD) image sensor
2. Complementary metal oxide semiconductor (CMOS) image sensor
CCD image sensor:
CCD is a charge-transfer device that collects light in pixels and then uses clock pulses to
shift the charge through the pixels. CCD cameras consist of a lens and an image plane
containing tiny solid cells (pixels) that convert light energy into electrical energy. The
output of a CCD camera is analog. The CCD is a highly sensitive photodetector. When
light energy falls on the pixels of a CCD, it gets converted into electrons. The number of
electrons collected is directly proportional to the intensity of the scene at each pixel.
When the CCD output is read, the number of electrons in each pixel is measured and
the scene is reconstructed. There are three basic steps in CCD imaging:
1. Exposure: Sensors are exposed to incident light, and the light gets converted into
electrical charge.
2. Charge transfer: Packets of charge are moved within the silicon substrate
3. Output: Charge is converted into voltage and output is amplified.
CMOS Image Sensors:
A CMOS camera consists of a pixel sensor, which in turn consists of light-sensitive
elements and a MOS transistor that acts as a switch. Its working principle differs from
that of CCD cameras: charge-to-voltage conversion takes place in each pixel of a CMOS
camera, because the pixel-measurement process is placed within the pixel itself. The
sensor also includes amplifier, noise-correction and digitization circuits, so the sensor
output is digital. CMOS sensors are more sensitive to noise.
8) Explain in detail the image sensing and acquisition process.
Depending on the kind of scene to be imaged and the chip/array size, the scanning
mechanism can be of three types:
1) Using single sensor
2) Using sensor strip
3) Using sensor Array
Single Sensor for Image Acquisition:
A single sensor is used for acquisition of the image. The sensor can be a photodiode,
CCD or CMOS device. To generate a 2D image from a single sensor, the sensor has to be
moved in both the x and y directions. A film negative can be mounted on a drum that
rotates about its axis, thus providing displacement in one dimension. The single sensor
is mounted on an arm that moves along the drum, providing motion in the perpendicular
direction. In this way, a 2D image can be acquired using a single sensor.
Sensor Strip for Image Acquisition:
A strip consisting of more than one sensor can also be used for image acquisition. In a
flatbed scanner, a sensor strip uses linear motion to acquire a 2D image. Around 4000
sensors are used in such strips, generating one line of the image at a time.
One-dimensional motion of this strip creates the entire image.
Sensor Arrays for Image Acquisition:
Generally, CCD/CMOS cameras use sensor arrays for image acquisition. The array size
can be 4000 x 4000 elements or more. As the sensor array is two-dimensional, the
complete image can be obtained without any movement. This arrangement is very
simple and does not need any kind of mechanical motion, which could be a source of noise.
9) Relationships between pixels:
The various relationships between pixels are the following:
1. Neighbours: In any image f(x,y), let us consider a pixel 'p' at coordinates (x,y).
This pixel can have 3 types of neighbours:
a. 4 – neighbors (N4)
b. Diagonal neighbors (ND)
c. 8 -Neighbours (N8)
The pixel p has four horizontal and vertical neighbours, as shown in fig (a), with the
coordinates (x+1, y), (x-1, y), (x, y-1), (x, y+1). N4(p) denotes this set of 4 neighbours
of pixel p. Each of these neighbours is at a distance of 1 pixel from the pixel p.
The four diagonal neighbours of p in fig (b) have the coordinates (x+1, y+1), (x+1, y-1),
(x-1, y+1), (x-1, y-1). This set of pixels is called ND(p). The eight-neighbourhood is
defined by N8(p) = N4(p) U ND(p). A small Python sketch after this list shows how these
neighbour sets can be computed.
2. Adjacency:
To determine whether two pixels are adjacent or not, there are two conditions:
a. Two Pixels should be neighbours
b. Their grey levels should be similar
There are three types of adjacencies:
a) 4-adjacency:
Pixels p and q with values from V are said to be 4-adjacent if q is in the set N4(p), where
V = {0, 1} for a binary image
V = {0, 1, ..., 255} for a gray-level image
b) 8-adjacency:
Pixels p and q with values from V are said to be 8-adjacent if q is in the set N8(p).
c) m-adjacency (mixed adjacency):
Two pixels p and q with values from V are called m-adjacent if i) q is in N4(p), or
ii) q is in ND(p) and the set (N4(p) ∩ N4(q)) has no pixels whose values are from V.
m-adjacency is used because, in the case of 8-adjacency, ambiguity (more than one
path) sometimes exists between a pixel p and q.
3. Connectivity:
Two pixels are connected if they are adjacent. Similarly, two subsets S1 and S2 are
connected (adjacent) if some pixel in S1 is adjacent to some pixel in S2.
In the accompanying figure, pixel p in subimage S1 and pixel q in subimage S2 have the
value 1 and are 8-adjacent; thus S1 and S2 are 8-adjacent.
4. Path: A path from pixel p with coordinates (x,y) to pixel q with coordinates (u,v) is a
sequence of connected pixels.
5. Region:
R is a subset of pixels in an image. If every pixel in R is connected to every other pixel
in R (i.e., R is a connected set), then R is called a region.
6. Distance measure:
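As referenced under Neighbours above, the following sketch computes the neighbour sets N4, ND and N8 and checks 4-adjacency (the dictionary-based toy image and the value set V are assumptions for illustration; image-boundary checks are omitted):

def n4(p):
    """4-neighbours of pixel p = (x, y)."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    """Diagonal neighbours of pixel p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    """8-neighbours: N8(p) = N4(p) U ND(p)."""
    return n4(p) | nd(p)

def is_4_adjacent(p, q, img, V):
    """p and q are 4-adjacent if q is in N4(p) and both values belong to the set V."""
    return q in n4(p) and img[p] in V and img[q] in V

# Example on a tiny binary image stored as a dict {(x, y): value}
img = {(0, 0): 1, (0, 1): 1, (1, 0): 0, (1, 1): 1}
print(sorted(n8((0, 0))))
print(is_4_adjacent((0, 0), (0, 1), img, V={1}))   # True: neighbours, both values in V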