
AIML-M1: Introduction and Digital Image Fundamentals

Module 1
Digital Image Fundamentals
Content
Chapter 1: Digital Image Fundamentals:
• What is Digital Image Processing?, Origins of Digital Image Processing,
Examples of fields that use DIP, Fundamental Steps in Digital Image
Processing, Components of an Image Processing System,
Chapter 2: Elements of Visual Perception, Image Sensing and Acquisition,
Image Sampling and Quantization, Some Basic Relationships Between Pixels,
Linear and Nonlinear Operations.
[Text: Chapter 1 and Chapter 2: Sections 2.1 to 2.5, 2.6.2]
• L1, L2
Introduction
“One picture is worth more than ten
thousand words”
Anonymous
A single picture can convey something more effectively, or depict it more vividly and clearly, than many words, and can certainly do so faster.
What Is an Image?
An image is represented by its dimensions (height and width) in terms of the number of pixels. For example, if the dimensions of an image are 500 x 400 (width x height), the total number of pixels in the image is 200,000.
A pixel is a point in the image that takes on a specific shade, opacity, or color. It is usually represented in one of the following forms:
• Grayscale - a pixel is an integer with a value between 0 and 255 (0 is completely black and 255 is completely white).
• RGB - a pixel is made up of 3 integers between 0 and 255 (the integers represent the intensities of red, green, and blue).
• RGBA - an extension of RGB with an added alpha field, which represents opacity.
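A minimal NumPy sketch of these pixel formats (the array shapes and the 500 x 400 example mirror the description above; the use of NumPy is an assumption, not something the notes require):

```python
import numpy as np

# A 400 x 500 (height x width) image, matching the 500 x 400 (width x height)
# example above: 200,000 pixels in total.
height, width = 400, 500

gray = np.zeros((height, width), dtype=np.uint8)      # grayscale: one 0-255 value per pixel
rgb  = np.zeros((height, width, 3), dtype=np.uint8)   # RGB: red, green, blue values per pixel
rgba = np.zeros((height, width, 4), dtype=np.uint8)   # RGBA: RGB plus an alpha (opacity) channel

print(gray.size)   # 200000 pixels
print(rgb.shape)   # (400, 500, 3)
```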
What Is Image Processing?
• Image processing is the process of transforming an image into digital form and performing operations on it to extract useful information.
1.1 What is Digital Image Processing?
• An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.
• When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.
• A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, or pixels.
• Pixel is the term most widely used to denote the elements of a digital image.
A digital image is a representation of a two-dimensional image as a finite set of digital values,
called picture elements or pixels
What is a Digital Image? (cont…)
Pixel values typically represent gray levels, colours, heights, opacities, etc.
Remember digitization implies that a digital image is an approximation
of a real scene
What is a Digital Image? (cont…)
•Common image formats include:
• 1 sample per point (B&W or Grayscale)
• 3 samples per point (Red, Green, and Blue)
• 4 samples per point (Red, Green, Blue, and “Alpha”, a.k.a. Opacity)
• For most of this course we will focus on grey-scale images
1.2 The Origins [History] of Digital Image Processing
Early 1920s: one of the first applications of digital imaging was in the newspaper industry.
Figure: an early digital image
• The Bartlane cable picture transmission service
• Images were transferred by submarine cable between London and New York
• Printing equipment coded pictures for cable transfer, and they were reconstructed at the receiving end on a telegraph printer
History of DIP (cont…)
Mid to late 1920s: improvements to the Bartlane system resulted in higher quality images
• New reproduction processes based on photographic techniques
• Increased number of tones in reproduced images
Figures: improved digital image; early 15-tone digital image
History of DIP (cont…)
1960s: improvements in computing technology and the onset of the space race led to a surge of work in digital image processing
• 1964: computers were used to improve the quality of images of the Moon taken by the Ranger 7 probe
• Such techniques were also used in other space missions, including the Apollo landings
Figure: a picture of the Moon taken by the Ranger 7 probe minutes before impact
History of DIP (cont…)
1970s: Digital image processing begins to be used in medical
applications
• 1979: Sir Godfrey N. Hounsfield and Prof. Allan M. Cormack share the Nobel Prize in Medicine for the invention of tomography, the technology behind Computerised Axial Tomography (CAT) scans
Figure: typical head-slice CAT image
History of DIP (cont…)
1980s - Today: The use of digital image processing techniques has
exploded and they are now used for all kinds of tasks in all kinds of
areas
• Image enhancement/restoration
• Artistic effects
• Medical visualisation
• Industrial inspection
• Law enforcement
• Human computer interfaces
Examples: Image Enhancement
One of the most common uses of DIP techniques: improving quality, removing noise, etc.
Examples: The Hubble Telescope
Launched in 1990, the Hubble telescope can take images of very distant objects.
However, a flaw in its mirror made many of Hubble's images useless.
Image processing techniques were used to fix this.
1.3 Examples of fields that use Digital Image processing
1.3.1 Gamma-Ray Imaging
• Used in nuclear medicine and astronomical observation.
• In nuclear medicine, the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays.
• PET (positron emission tomography): when a positron meets an electron, both are annihilated and two gamma rays are given off. These are detected, and a tomographic image is created.
1.3.2 X-Ray Imaging
• X-rays are among the oldest sources of EM radiation used for imaging.
• The best-known use of X-rays is medical diagnostics, but they are also used in industry and in other areas such as astronomy.
• X-rays are generated in an X-ray tube, which is a vacuum tube with a cathode and an anode.
• The cathode is heated, causing free electrons to be released.
• These electrons flow at high speed to the positively charged anode.
• When the electrons strike a nucleus, energy is released in the form of X-ray radiation.
• Angiography is another major application, in an area called contrast-enhancement radiography. This procedure is used to obtain images of blood vessels.
• A catheter (a small hollow tube) is inserted, for example, into an artery or vein.
• The catheter is threaded into the blood vessel and guided to the area to be studied.
• When the catheter reaches the site under investigation, an X-ray contrast medium is injected through the tube.
• This enhances the contrast of the blood vessels and enables the radiologist to see any irregularities or blockages.
1.3.3 Imaging in the Ultraviolet Band
• Applications of ultraviolet "light" are varied. They include lithography, industrial inspection, microscopy, lasers, biological imaging, and astronomical observation.
1.3.4 Imaging in the Visible and Infrared Bands
• Applications include light microscopy, astronomy, remote sensing, industry, and law enforcement.
1.3.6 Imaging in the Radio Band
• Applications of imaging in the radio band include medicine and astronomy.
• In medicine, radio waves are used in magnetic resonance imaging (MRI).
1.4 Fundamental Steps in Digital Image Processing
Image Acquisition
Figure: the fundamental steps in digital image processing, starting from the problem domain: image acquisition, image enhancement, image restoration, colour image processing, image compression, morphological processing, segmentation, representation & description, and object recognition.
Image acquisition is the first step in image processing and may include simple preprocessing. It involves retrieving the image from a source, usually a hardware-based source.
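As a hedged illustration of this step, the sketch below loads an image file into an array of pixel values; it assumes the Pillow and NumPy packages are available, and "scene.png" is only a placeholder file name:

```python
import numpy as np
from PIL import Image   # assumes the Pillow package is installed

# Acquire an image from a (hypothetical) file and hold it as an array of pixel values.
img = Image.open("scene.png")       # "scene.png" is a placeholder path
f = np.asarray(img.convert("L"))    # convert to grayscale so each pixel is a single intensity

print(f.shape, f.dtype)             # (rows, columns), typically uint8
print(f[0, 0])                      # intensity of the pixel in the first row and column
```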
Image Enhancement
Image enhancement is the process of bringing out and highlighting features of interest that have been obscured in an image. This can involve changing the brightness, contrast, etc.
Enhancement is subjective in the sense that it is based on human preferences regarding what constitutes a "good" enhancement result.
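One simple enhancement idea, sketched below, is linear contrast stretching; this is only an illustrative example under the assumption of an 8-bit grayscale image, not a specific method prescribed by the notes:

```python
import numpy as np

def stretch_contrast(f):
    """Linearly stretch the intensities of an 8-bit grayscale image to the full 0-255 range."""
    f = f.astype(np.float64)
    lo, hi = f.min(), f.max()
    if hi == lo:                       # flat image: nothing to stretch
        return f.astype(np.uint8)
    g = (f - lo) / (hi - lo) * 255.0   # map [lo, hi] onto [0, 255]
    return g.astype(np.uint8)

# A dull image whose values only span 100..150 becomes full-range after stretching.
dull = np.array([[100, 120], [130, 150]], dtype=np.uint8)
print(stretch_contrast(dull))
```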
Image Restoration
Image restoration is the process of improving the appearance of an image (or recovering an image that has been degraded). However, unlike image enhancement, image restoration is based on mathematical or probabilistic models of image degradation.
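One of the simplest model-based restoration ideas (the same operation mentioned later for the ALU front end) is averaging several noisy acquisitions of the same scene; the sketch below simulates this with synthetic noise and is only an illustration, not one of the specific techniques covered in the text:

```python
import numpy as np

def average_frames(frames):
    """Average several noisy acquisitions of the same scene to suppress zero-mean random noise."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulated example: a constant scene corrupted by independent noise in each frame.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 120.0)
frames = [scene + rng.normal(0, 20, scene.shape) for _ in range(30)]

restored = average_frames(frames)
print(round(restored.std(), 2))   # far smaller than the per-frame noise level of about 20
```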
Morphological processing
• Morphological processing deals with tools for extracting image
components that are useful in the representation and description of
shape.
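A minimal sketch of two basic morphological operations, erosion and dilation, on a binary image; it assumes SciPy is available and uses its default cross-shaped structuring element:

```python
import numpy as np
from scipy import ndimage   # assumes SciPy is available

# A small binary image: a 3 x 3 block of foreground (True) pixels.
a = np.zeros((7, 7), dtype=bool)
a[2:5, 2:5] = True

eroded  = ndimage.binary_erosion(a)    # shrinks the block down to its single-pixel core
dilated = ndimage.binary_dilation(a)   # grows the block outward by one pixel on each side

print(int(eroded.sum()), int(dilated.sum()))   # 1 foreground pixel vs. 21
```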
Segmentation
• Segmentation procedures partition an image into its constituent parts or objects.
• In general, autonomous segmentation is one of the most difficult tasks in digital image processing.
• A rugged segmentation procedure brings the process a long way toward the successful solution of imaging problems that require objects to be identified individually.
• Weak or erratic segmentation algorithms almost always guarantee eventual failure.
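As a toy illustration of partitioning an image into objects and background, the sketch below applies a fixed global threshold; real (autonomous) segmentation is far harder, and the threshold value here is arbitrary:

```python
import numpy as np

def threshold_segment(f, t):
    """Partition a grayscale image into object (True) and background (False) pixels."""
    return f > t

f = np.array([[ 10,  12, 200],
              [ 11, 220, 210],
              [  9,  13,  15]], dtype=np.uint8)

mask = threshold_segment(f, t=128)   # bright pixels become the "object"
print(mask.astype(int))
```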
Object Recognition
Recognition is the process that assigns a label (e.g., “vehicle”) to an object based on its descriptors. We
conclude our coverage of digital image processing with the development of methods for recognition
of individual objects
Representation and description
• Representation and description almost always follow the output of a segmentation stage,
which usually is raw pixel data, constituting either the boundary of a region (i.e., the set
of pixels separating one image region from another) or all the points in the region itself. In
either case, converting the data to a form suitable for computer processing is necessary. The
first decision that must be made is whether the data should be represented as a boundary or
as a complete region.
• Boundary representation is appropriate when the focus is on external shape
characteristics, such as corners and inflections.
• Regional representation is appropriate when the focus is on internal properties, such
as texture or skeletal shape.
• Description, also called feature selection, deals with extracting attributes that yield some quantitative information of interest or that are basic for differentiating one class of objects from another.
Image Compression
• Compression is the process of reducing the storage required to save an image or the bandwidth required to transmit it. This is particularly important when the image is intended for use on the Internet.
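A hedged illustration of the idea: run-length encoding is one very simple lossless scheme (not necessarily one of the methods treated in the text) that exploits repeated pixel values to reduce storage:

```python
def run_length_encode(row):
    """Encode a 1-D sequence of pixel values as (value, run length) pairs."""
    runs = []
    for value in row:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([value, 1])   # start a new run
    return runs

row = [255, 255, 255, 0, 0, 255, 255, 255, 255]
print(run_length_encode(row))   # [[255, 3], [0, 2], [255, 4]] -- 3 runs instead of 9 values
```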
Color Image Processing
Color image processing includes a number of color modeling techniques in a
digital domain. This step has gained prominence due to the significant use of
digital images over the internet.
1.5 Components of an Image Processing System
Image sensing: with reference to sensing, two elements are required to acquire digital images.
1. A physical device that is sensitive to the energy radiated by the object we wish to image.
2. A digitizer, a device for converting the output of the physical sensing device into digital form.
Specialized image processing hardware usually consists of the digitizer plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), which performs arithmetic and logical operations in parallel on entire images.
Example: an ALU is used to average images as quickly as they are digitized, for the purpose of noise reduction. This type of hardware is sometimes called a front-end subsystem, and its most distinguishing characteristic is speed. In other words, this unit performs functions that require fast data throughput (e.g., digitizing and averaging video images at 30 frames/s) that the typical main computer cannot handle.
• The computer in an image processing system is a general-purpose computer and can range from a PC to a supercomputer. In dedicated applications, specially designed computers are sometimes used to achieve a required level of performance.
• Software for image processing consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code that, as a minimum, utilizes the specialized modules. More sophisticated software packages allow the integration of those modules with general-purpose software commands from at least one computer language.
Mass storage capability is a must in image processing applications.
Example: an image of size 1024 x 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed. When dealing with thousands, or even millions, of images, providing adequate storage in an image processing system can be a challenge.
Storage is measured in bytes (eight bits), Kbytes (one thousand bytes), Mbytes
(one million bytes), Gbytes (meaning giga, or one billion, bytes), and Tbytes
(meaning tera, or one trillion, bytes).
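The 1024 x 1024 example above can be checked with a few lines of arithmetic (a minimal sketch; the figures follow directly from 8 bits = 1 byte per pixel):

```python
# Storage needed for an uncompressed 1024 x 1024 image with 8 bits (1 byte) per pixel.
rows, cols, bytes_per_pixel = 1024, 1024, 1

size_bytes = rows * cols * bytes_per_pixel
print(size_bytes)                    # 1,048,576 bytes
print(size_bytes / 2**20, "MiB")     # exactly one megabyte in the binary (2**20) sense
```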
Image displays in use today are mainly color (preferably flat-screen) TV monitors. Monitors are driven by the outputs of image and graphics display cards that are an integral part of the computer system.
Hardcopy devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks. Film provides the highest possible resolution, but paper is the obvious medium of choice for written material. For presentations, images are displayed on film transparencies or in a digital medium if image projection equipment is used.
Networking is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth.
In dedicated networks, this typically is not a problem, but communications with remote sites via the Internet are not always as efficient. Fortunately, this situation is improving quickly as a result of optical fiber and other broadband technologies.
Reference: Concept of Bits Per Pixel
• A pixel is the smallest element of an image. Each pixel corresponds to one value. In an 8-bit grayscale image, the value of a pixel lies between 0 and 255.
Number of different colors: as noted above, the number of different colors depends on the number of bits per pixel. The table below lists the number of colors for some bit depths.
Bits per pixel    Number of colors
1 bpp             2 colors
2 bpp             4 colors
3 bpp             8 colors
4 bpp             16 colors
5 bpp             32 colors
6 bpp             64 colors
7 bpp             128 colors
8 bpp             256 colors
10 bpp            1024 colors
16 bpp            65,536 colors
24 bpp            16,777,216 colors (16.7 million colors)
32 bpp            4,294,967,296 colors (4,294 million colors)
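The table values all follow from the relation L = 2^k, the number of representable values for k bits per pixel; a one-loop check:

```python
# Number of distinct values (colors or gray levels) representable with k bits per pixel: L = 2**k.
for k in (1, 2, 3, 4, 5, 6, 7, 8, 10, 16, 24, 32):
    print(f"{k:2d} bpp -> {2 ** k:,} values")
```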
Module 1 - Chapter 2
Digital Image Fundamentals
2.1 Elements of visual perception
Although the digital image processing field is built on a foundation of mathematical and probabilistic formulations, human intuition and analysis play a central role in the choice of one technique over another, and this choice is often made on the basis of subjective, visual judgments.
What is the Eye?
2.1.1 Structure of the Human Eye [Visual Perception]
Figure: simplified diagram of a cross section of the human eye
Visual Perception: Human Eye
• The lens is colored by a slightly yellow pigmentation that increases with age. In extreme cases, excessive clouding of the lens, caused by the affliction commonly referred to as cataracts, can lead to poor color discrimination and loss of clear vision.
• The lens absorbs approximately 8% of the visible light spectrum, with relatively higher absorption at shorter wavelengths.
• Light receptors are located in the retina, the innermost membrane of the eye, which lines the inside of the wall's entire posterior portion.
• Two classes of receptors: cones and rods.
  - About 6-7 million cones for bright-light (photopic) vision.
  - The density of cones is about 150,000 elements/mm².
  - Cones are involved in color vision.
  - Cones are concentrated in the fovea, an area of about 1.5 mm x 1.5 mm.
  - About 75-150 million rods for dim-light (scotopic) vision.
  - Rods are sensitive to low levels of light and are not involved in color vision.
Distribution of Rods and Cones in the Retina
Figure: Distribution of Rods and Cones in the Retina
• The figure shows the density of rods and cones for a cross section of the right eye passing through the region of emergence of the optic nerve from the eye.
• The absence of receptors in this area results in the so-called blind spot.
• Except for the blind-spot region, the distribution of receptors is radially symmetric about the fovea.
• Receptor density is measured in degrees from the fovea (i.e., in degrees off axis), as measured by the angle formed by the visual axis and a line passing through the centre of the lens and intersecting the retina.
• From the figure, cones are most dense in the centre of the retina, while rods increase in density from the centre out to approximately 20° off axis and then decrease in density out to the extreme periphery of the retina.
• The fovea itself is a circular indentation in the retina of about 1.5 mm in diameter; it can be treated as a square sensor array of size 1.5 mm x 1.5 mm.
2.1.2 Image Formation In The Eye
• In an ordinary photographic camera, the lens has a fixed focal length, and focusing at various distances is achieved by varying the distance between the lens and the imaging plane, where the film is located.
• In the human eye the converse is true: the distance between the lens and the imaging plane is fixed, and the focal length needed to achieve proper focus is obtained by varying the shape of the lens.
• Muscles within the eye change the shape of the lens, allowing us to focus on objects that are near or far away.
• The fibers in the ciliary body accomplish this, flattening or thickening the lens for distant or near objects, respectively.
• An image is focused onto the retina, causing rods and cones to become excited; these ultimately send signals to the brain.
• The distance between the centre of the lens and the retina along the visual axis (the focal length) ranges from approximately 14 mm to 17 mm; the maximum occurs when the eye is relaxed and focused at a distance greater than about 3 m.
• Example: for an observer looking at a 15 m high object from a distance of 100 m, similar triangles give 15/100 = h/17, so the height of the retinal image is h ≈ 2.55 mm (see figure).
2.1.3 Brightness Adaptation and Discrimination
• The overall sensitivity of perceived brightness changes with the illumination level (brightness adaptation).
• The number of distinct intensity levels that can be perceived simultaneously is small compared with the total number of levels that can be perceived.
• The brightness adaptation level is the current sensitivity level of the visual system.
• The human eye can adapt to a wide range (on the order of 10^10) of intensity levels. The brightness that we perceive (subjective brightness) is not a simple function of the intensity.
• In fact, subjective brightness is a logarithmic function of the light intensity incident on the eye.
• The HVS (Human Visual System) mechanisms adapt to different lighting conditions. The sensitivity level for a given lighting condition is called the brightness adaptation level.
• As the lighting condition changes, our visual sensory mechanism adapts by changing its sensitivity. The human eye cannot respond to the entire range of intensity levels at a given level of sensitivity.
Weber ratio
• A measure of contrast-discrimination ability.
• Let the background intensity be I.
• An increment of illumination is added for a short duration at intensity I (Figure 1.7).
• ΔIc is the increment of illumination that is visible half the time against background intensity I.
• The Weber ratio is given by ΔIc / I.
• A small value of ΔIc / I means that a small percentage change in intensity is visible, representing good brightness discrimination.
• A large value of ΔIc / I means that a large percentage change is required for discrimination, representing poor brightness discrimination.
• Typically, brightness discrimination is poor at low levels of illumination and improves at higher levels of background illumination (Figure 1.8).
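A small sketch of the Weber ratio computation; the intensity values below are made up purely for illustration:

```python
def weber_ratio(delta_ic, i):
    """Weber ratio: just-noticeable illumination increment divided by background intensity."""
    return delta_ic / i

# Hypothetical readings: the smaller the ratio, the better the brightness discrimination.
print(weber_ratio(delta_ic=0.5, i=100.0))   # 0.005 -> good discrimination (bright background)
print(weber_ratio(delta_ic=5.0, i=10.0))    # 0.5   -> poor discrimination (dim background)
```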
Brightness Adaptation of the Human Eye: Mach Band Effect
• The intensities of surrounding points affect the perceived brightness at each point.
• In the Mach band figure, the edges between bars appear brighter on the right side and darker on the left side.
Simultaneous contrast
• The perceived brightness of a region does not depend only on the intensity of the region, but also on the context (the background or surroundings) against which it is seen.
• All the centre squares have exactly the same intensity; however, they appear to the eye to become darker as the background gets lighter.
Optical illusions: the eye fills in non-existing information.
Important questions
1. What are the elements of visual perception?
2. With a neat diagram, explain the structure of the human eye and the distribution of cones and rods in the retina.
3. Write short notes on:
   i. Subjective brightness
   ii. Brightness adaptation
   iii. Weber ratio
   iv. Mach bands
   v. Simultaneous contrast
   vi. Optical illusions
   vii. Glare limit
2.2 Light And The Electromagnetic Spectrum
•The electromagnetic spectrum is split up according to the wavelengths of
different forms of energy.
Light emitted from the Sun is the product of black body radiation from
the intense heat generated by nuclear fusion processes within its core .
Visible Spectrum
• The energy of a photon is E = hν, where h is Planck's constant and ν is the frequency of the radiation.
• Light is just a particular part of the electromagnetic spectrum that
can be sensed by the human eye.
• Color spectrum: violet, blue, green, yellow, orange, and red.
• The color that we perceive for an object is basically that of the light reflected from the object.
• Light that is perceived as gray shades from black to white is called monochromatic or achromatic light (light without color).
• Light that is perceived as colored is called chromatic light.
Important terms which characterize a chromatic light source are:
• Radiance: the total amount of energy that flows from the light source. Measured in watts.
• Luminance: a measure of the amount of energy an observer perceives from a light source. Measured in lumens.
• Brightness: a subjective descriptor of how an observer perceives the light, similar in sense to achromatic intensity.
2.3 Image Sensing and Acquisition
• The types of images in which we are interested are generated by the
combination of an “illumination” source and the reflection or
absorption of energy from that source by the elements of the “scene”
being imaged.
• Depending on the nature of the source, illumination energy is reflected
from, or transmitted through, objects.
• Example : light reflected from a planar surface, X-rays pass
through a patient’s body
• In some applications, the reflected or transmitted energy is focused
onto a photo converter (e.g., a phosphor screen), which converts the
energy into visible light.
• There are three principal sensor arrangements (Figure 2.12), each producing an electrical output proportional to light intensity: (i) a single imaging sensor, (ii) a line (strip) sensor, and (iii) an array sensor.
• Incoming energy is transformed into a voltage by the combination of
input electrical power and sensor material that is responsive to the
particular type of energy being detected.
• The output voltage waveform is the response of the sensor(s), and
• a digital quantity is obtained from each sensor by digitizing its
response
2.3.1 Image Acquisition using a Single Sensor
• Sensor of this type is the photodiode, which is
constructed of silicon materials and whose
output voltage waveform is proportional to
light. The use of a filter in front of a sensor
improves selectivity.
• For example, a green (pass) filter in front of a light
sensor favours light in the green band of the color
spectrum. As a consequence, the sensor output will
be stronger for green light than for other
components in the visible spectrum.
• In order to generate a 2-D image using a single sensor, there have to be relative displacements in both the x- and y-directions between the sensor and the area to be imaged.
Figure 2.13 shows an arrangement used in high-precision scanning, where a film negative is
mounted onto a drum whose mechanical rotation provides displacement in one dimension.
The single sensor is mounted on a lead screw that provides motion in the perpendicular
direction. Since mechanical motion can be controlled with high precision, this method is an
inexpensive (but slow) way to obtain high-resolution images. Other similar mechanical
arrangements use a flat bed, with the sensor moving in two linear directions. These types of
mechanical digitizers sometimes are referred to as microdensitometers.
2.3.2 Image acquisition using sensor strips
Linear sensor strips
• The strip provides imaging elements in one direction. Motion
perpendicular to the strip provides imaging in the other direction. This is
the type of arrangement used in most flatbed scanners.
Figure :(a) Image acquisition using linear sensor strip
• Sensing devices with 4000 or more in-line sensors are possible. In-line sensors are used routinely in airborne imaging, in which the imaging system is mounted on an aircraft that flies at a constant altitude and speed over the geographical area to be imaged.
• One-dimensional imaging sensor strips that respond to various bands
of the electromagnetic spectrum are mounted perpendicular to the
direction of flight
• The imaging strip gives one line of an image at a time, and the motion
of the strip completes the other dimension of a two-dimensional
image
Circular sensor strip
• Sensor strips mounted in a ring configuration are used in medical and industrial imaging to obtain cross-sectional ("slice") images of 3-D objects.
• A rotating X-ray source provides illumination, and the portion of the sensors opposite the source collects the X-ray energy that passes through the object (the sensors obviously have to be sensitive to X-ray energy). This is the basis for medical and industrial computerized axial tomography (CAT) imaging.
Figure:(b) Image acquisition using circular sensor strip
2.3.3 Image Acquisition using Sensor Arrays
• This type of arrangement is found in digital cameras. A typical sensor
for these cameras is a CCD array, which can be manufactured with a
broad range of sensing properties and can be packaged in rugged
arrays of 4000 * 4000 elements or more.
• CCD sensors are used widely in digital cameras and other light
sensing instruments. The response of each sensor is proportional to the
integral of the light energy projected onto the surface of the sensor, a
property that is used in astronomical and other applications requiring
low noise images
• The principal manner in which array sensors are used is shown in Fig. 2.6.
• The figure shows the energy from an illumination source being reflected from a scene element; as mentioned at the beginning of this section, the energy could also be transmitted through the scene elements.
• The first function performed by the imaging system is to collect the incoming
energy and focus it onto an image plane.
• If the illumination is light, the front end of the imaging system is a lens,
which projects the viewed scene onto the lens focal plane.
• The sensor array, which is coincident with the focal plane, produces outputs
proportional to the integral of the light received at each sensor.
• Digital and analog circuitry sweep these outputs and convert them to a video
signal, which is then digitized by another section of the imaging system.
2.3.4 A Simple model of image formation
• The scene is illuminated by a single source.
• The scene reflects radiation towards the camera.
• The camera senses it via chemicals on film.
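The usual way to make this model concrete (following the standard illumination-reflectance formulation found in image processing texts, stated here as an assumption rather than something spelled out in the slide) is to treat the recorded intensity as the product of an illumination component i(x, y) > 0 and a reflectance component 0 < r(x, y) < 1, so that f(x, y) = i(x, y) * r(x, y):

```python
import numpy as np

# Illumination-reflectance sketch: recorded intensity = illumination falling on the scene
# times the reflectance of the scene surface at each point.
illumination = np.array([[9000.0, 9000.0],
                         [1000.0, 1000.0]])   # i(x, y): positive illumination values
reflectance  = np.array([[0.01, 0.80],
                         [0.01, 0.80]])       # r(x, y): between 0 (absorbs all) and 1 (reflects all)

f = illumination * reflectance                # f(x, y) = i(x, y) * r(x, y)
print(f)   # bright, well-lit surfaces give large values; dark or dimly lit ones give small values
```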
2.4 Image Sampling And Quantisation
• Sampling and quantization are the two important processes used to convert continuous analog
image into digital image.
• Image sampling refers to the discretization of the spatial coordinates (along the x axis), whereas quantization refers to the discretization of the gray-level values (the amplitude, along the y axis). Given a continuous image f(x, y), digitizing the coordinate values is called sampling and digitizing the amplitude (intensity) values is called quantization.
• Consider a continuous image, f(x, y), that we want to convert to digital form.
• An image may be continuous with respect to the x- and y-coordinates, and also in amplitude.
• To convert it to digital form, we have to sample the function in both coordinates and in amplitude.
• The one dimensional function shown in fig 2.16(b) is a plot of amplitude (gray
level) values of the continuous image along the line segment AB in fig 2.16(a).
The random variation is due to the image noise.
• To sample this function, we take equally spaced samples along line AB, as shown in fig 2.16(c). In order to form a digital function, the gray-level values must also be converted (quantized) into discrete quantities.
• The right side of fig 2.16(c) shows the gray-level scale divided into eight discrete levels, ranging from black to white. The results of both sampling and quantization are shown in fig 2.16(d).
• Digitizing the coordinate values is called
sampling.
• Digitizing the amplitude values is called
quantization
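A minimal sketch of both operations on a 1-D intensity profile (think of the scan line AB): the profile is sampled at equally spaced positions, and each sample is then quantized into one of eight gray levels; the sinusoidal profile is just a stand-in for a continuous image scan line:

```python
import numpy as np

x = np.linspace(0, 1, 16)                        # 16 equally spaced sample positions (sampling)
profile = 127.5 + 127.5 * np.sin(2 * np.pi * x)  # a smooth "continuous" intensity in 0..255

levels = 8
step = 256 / levels
quantized = (np.floor(profile / step) * step).astype(np.uint8)  # map samples to 8 levels (quantization)

print(np.unique(quantized))   # at most 8 distinct gray values remain
```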
2.4.2 Representing digital Images
An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.
2.4.3 Spatial and Intensity resolution
• The clarity of an image cannot be judged from the pixel count alone; the number of pixels by itself does not determine spatial resolution.
• Spatial resolution can be defined as the smallest discernible detail in an image.
• Alternatively, spatial resolution can be defined as the number of independent pixel values per inch.
• Because spatial resolution refers to clarity, we cannot meaningfully compare two different types of images to see which one is clearer. To compare two images in terms of spatial resolution, we must compare images of the same size.
Measuring spatial resolution
• Since spatial resolution refers to clarity, different measures are used for different devices. For example:
• Dots per inch (dpi) - usually used in monitors. Dots per unit distance is a measure of image resolution used in the printing and publishing industry.
• Example: newspapers are printed with a resolution of about 75 dpi, magazines at 133 dpi, and book pages at a higher dpi still.
• Lines per inch - used for laser printers.
• Pixels per inch - used for devices such as tablets and mobile phones.
• Spatial resolution is a measure of the smallest discernible change in an image.
• Spatial resolution can be stated in a number of ways, with line pairs per unit distance and dots (pixels) per unit distance being the most common measures.
• Image resolution quantifies how close two lines (say one dark and one light) can be to each other and still be visibly resolved. The resolution can be specified as a number of lines per unit distance, say 10 lines per mm or 5 line pairs per mm.
• Another measure of image resolution is dots per inch, i.e., the number of discernible dots per inch.
Spatial Resolution
The spatial resolution of an image is determined by how sampling was carried
out
Spatial resolution simply refers to the smallest discernable detail in an
image
– Vision specialists will often talk about pixel size
– Graphic designers will talk about dots per inch
(DPI)
Figure: effect of spatial resolution - the same image shown at 256x256, 128x128, 64x64, and 32x32 pixels.
Intensity (gray-level) resolution
Gray-level resolution refers to the predictable or deterministic change in the shades or levels of gray in an image.
• Intensity (gray-level) resolution is the smallest discernible change in gray level.
• In short, gray-level resolution is equal to the number of bits per pixel (BPP).
• The number of different colors in an image depends on the color depth, or bits per pixel.
• Mathematically, the relation between the gray-level resolution L and the number of bits per pixel k is
L = 2^k
Image interpolation
• Interpolation is a basic tool used extensively in tasks such as zooming, shrinking, rotating, and geometric correction.
• It is the process of using known data to estimate values at unknown locations.
• Common methods (a nearest-neighbour zoom sketch follows below):
  - Nearest-neighbour interpolation
  - Bilinear interpolation
  - Bicubic interpolation
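A minimal nearest-neighbour zoom sketch (integer zoom factors only; bilinear and bicubic interpolation would blend neighbouring values instead of replicating them):

```python
import numpy as np

def zoom_nearest(f, factor):
    """Zoom a grayscale image by an integer factor using nearest-neighbour interpolation."""
    rows, cols = f.shape
    r_idx = np.arange(rows * factor) // factor   # each output row maps to its nearest input row
    c_idx = np.arange(cols * factor) // factor   # likewise for columns
    return f[np.ix_(r_idx, c_idx)]

f = np.array([[  0, 100],
              [200, 255]], dtype=np.uint8)
print(zoom_nearest(f, 2))   # each original pixel is replicated into a 2 x 2 block
```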
2.5 Some Basic Relationship between Pixels
• An image is denoted by f(x, y). When referring in this section to a particular pixel, we use lowercase letters such as p and q.
2.5.1 Neighbors of a Pixel
1. 4-neighbors of p, is denoted by N4(p).
2. Diagonal-neighbors of p, is denoted by ND (p)
3. 8-neighbors of p, is denoted by N8(p).
• 4-neighbors of p [N4(p)]
A pixel p at coordinates (x, y) has four horizontal and vertical neighbors whose coordinates are given by
(x+1, y), (x-1, y), (x, y+1), (x, y-1)
These 4 pixels together constitute the 4-neighbors of pixel p, denoted N4(p).
• Diagonal neighbors [ND(p)]
The set of 4 diagonal neighbors forms the diagonal neighborhood, denoted ND(p):
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
• 8-neighborhood [N8(p)]
The set of 8 pixels surrounding the pixel p forms the 8-neighborhood, denoted N8(p).
We have N8(p) = N4(p) ∪ ND(p).
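The neighbourhood definitions translate directly into code; the small helpers below return coordinate sets and, for simplicity, ignore image borders (an assumption, since border pixels have fewer neighbours):

```python
def n4(x, y):
    """4-neighbours of pixel p at (x, y): the horizontal and vertical neighbours."""
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    """Diagonal neighbours of p."""
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    """8-neighbours of p: the union of N4(p) and ND(p)."""
    return n4(x, y) | nd(x, y)

print(sorted(n8(2, 2)))   # the 8 pixels surrounding (2, 2)
```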
• The concept of adjacency has a slightly different meaning from neighborhood. Adjacency takes into account not just the spatial neighborhood but also intensity groups.
• Suppose we define a set S of intensities (a subset of {0, ..., L-1}) that are considered to belong to the same group. Two pixels p and q are termed adjacent if both of them have intensities from set S and both also conform to some definition of neighborhood.
• 4-adjacency: two pixels p and q are termed 4-adjacent if they have intensities from set S and q belongs to N4(p).
• 8-adjacency: two pixels p and q are termed 8-adjacent if they have intensities from set S and q belongs to N8(p).
2.5.2 Adjacency, connectivity, Regions and boundaries
• Connectivity between pixels is a fundamental concept that simplifies the definition of
numerous digital image concepts, such as regions and boundaries.
• To establish if two pixels are connected, it must be determined
1.If they are neighbors and
2.If their gray levels satisfy a specified criterion of similarity (say, if their gray levels are
equal).
• For instance, in a binary image with values 0 and 1, two pixels may be 4-neighbors, but they are said to be connected only if they have the same value.
• Let V be the set of gray-level values used to define adjacency. In a binary image, V = {1} if we are referring to adjacency of pixels with value 1. In a grayscale image, the idea is the same, but set V typically contains more elements.
• For example, in the adjacency of pixels with a range of possible gray-level values 0 to 255, set V could be any subset of these 256 values.
Adjacency
We consider three types of adjacency:
(a) 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
(b) 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in N8(p).
(c) m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if
    (i) q is in N4(p), or
    (ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
Adjacency
A pixel p is adjacent to a pixel q if they are connected. Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2 (see the figure showing S1 and S2).
We can define the type of adjacency (4-adjacency, 8-adjacency, or m-adjacency) depending on the type of connectivity.
• Path: a (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates
(x0, y0), (x1, y1), ..., (xn, yn)
where (x0, y0) = (x, y), (xn, yn) = (s, t), and pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n.
• Here n is the length of the path.
• If (x0, y0) = (xn, yn), the path is a closed path.
• We can define 4-, 8-, and m-paths depending on the type of adjacency specified.
• Connected components: let S represent a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting entirely of pixels in S.
• For any pixel p in S, the set of pixels connected to it in S is called a connected component of S. If S has only one connected component, then S is called a connected set.
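A small sketch of labelling connected components in a binary image; it assumes SciPy is available and uses a 3 x 3 structuring element so that diagonal (8-adjacent) pixels count as connected:

```python
import numpy as np
from scipy import ndimage   # assumes SciPy is available

# Binary image with two separate groups of foreground (1) pixels.
s = np.array([[1, 1, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 1]], dtype=int)

# 8-connectivity: a 3 x 3 structure makes diagonally touching pixels part of the same component.
labels, count = ndimage.label(s, structure=np.ones((3, 3), dtype=int))
print(count)    # 2 connected components
print(labels)   # each foreground pixel carries the label of its component
```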
• A Region R is a subset of pixels in an image such that all pixels in R
form a connected component.
• Let R be a subset of pixels in an image. We call R a region of
the image if R is a connected set. Two regions Ri,Rj are said to
be adjacent if their union forms a connected set.
• Regions that are not adjacent are said to be disjoint.
• We consider 4 and 8 adjacency when referring to regions.
Foreground and Background
• A Boundary of a region R is the set of pixels in the region that have
one or more neighbors that are not in R. If R happens to be an entire
image, then its boundary is defined as the set of pixels in the first and
last rows and columns of the image
2.5.3 Distance measure
2.6.2 Linear and Nonlinear Operations [text pages 73-74]
The End