
ECOE-426 Lecture 2

Lecture No. 2
Course: Digital Image Processing
Lecture 2: Fundamentals (a)
Instructor: Dr. Samia Heshmat
Email: samia.heshmat@aswu.edu.eg
5 March 2024
Digital Image Fundamentals
• The purpose of this chapter is to introduce a number of basic concepts in digital image processing.
• Although the field of digital image processing is built on a foundation of mathematical and probabilistic formulations, human intuition and analysis play a central role in the choice of one technique versus another, and this choice often is made based on subjective, visual judgments.
2
Elements of Visual Perception
• The human eye is a camera!
  - Iris: colored annulus with radial muscles
  - Pupil: the hole (aperture) whose size is controlled by the iris
• What's the "film"?
  - Photoreceptor cells (rods and cones) in the retina
3
The Retina
[Figure: cross-section of the eye and cross-section of the retina, labeling the pigmented epithelium, receptor layer, bipolar cell layer, ganglion cell layer, and ganglion axons.]
4
Retina up-close
[Figure: magnified view of the retinal layers, with incoming light indicated.]
5
Two types of light-sensitive receptors
Cones:
  - cone-shaped
  - less sensitive
  - operate in high light
  - color vision
Rods:
  - rod-shaped
  - highly sensitive
  - operate at night
  - gray-scale vision
© Stephen E. Palmer, 2002
6
Rod / Cone sensitivity
The famous sock-matching problem…
7
Distribution of Rods and Cones
[Figure: receptor density (receptors/mm²) versus visual angle (degrees from the fovea): cone density peaks at the fovea, rod density peaks off-center, and both fall to zero at the blind spot.]
Night Sky: why are there more stars off-center?
© Stephen E. Palmer, 2002
8
Digital Image
A digital image is a grid of squares, each of which contains a single color. Each square is called a pixel (for "picture element").
9
Digital Image
Color images have 3 values per pixel; monochrome images have 1 value per pixel.
10
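As a minimal illustration (not part of the slides; the array sizes and values below are arbitrary), a pixel grid can be held as a NumPy array: a monochrome image keeps one value per pixel, while a color image keeps three.

```python
import numpy as np

# Hypothetical 4x5 monochrome image: one intensity value per pixel (0-255).
gray = np.zeros((4, 5), dtype=np.uint8)
gray[1, 2] = 128           # set the pixel at row 1, column 2 to mid-gray

# Hypothetical 4x5 color image: three values (R, G, B) per pixel.
color = np.zeros((4, 5, 3), dtype=np.uint8)
color[1, 2] = (255, 0, 0)  # the same pixel location, now pure red

print(gray.shape)   # (4, 5)    -> 1 value per pixel
print(color.shape)  # (4, 5, 3) -> 3 values per pixel
```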
Image Formation
11
Image Formation
12
Image Formation
[Figure: projection of an object through a lens, forming an image of the object.]
13
A Simple Image Formation Model
f(x, y) = i(x, y) · r(x, y)

f(x, y): intensity at the point (x, y)
i(x, y): illumination at the point (x, y) (the amount of source illumination incident on the scene)
r(x, y): reflectance/transmissivity at the point (x, y) (the amount of illumination reflected/transmitted by the object)

where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1
14
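A minimal sketch of this model (the array size, illumination level, and reflectance range are illustrative choices, not from the slides): illumination is positive and unbounded, reflectance lies strictly between 0 and 1, and the recorded intensity is their product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x4 scene.
i = np.full((4, 4), 5000.0)           # illumination incident on the scene, 0 < i < inf
r = rng.uniform(0.01, 0.93, (4, 4))   # reflectance of the scene elements, 0 < r < 1

f = i * r                             # f(x, y) = i(x, y) * r(x, y)
print(f.min(), f.max())               # intensities recorded at each point
```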
Light and EM Spectrum
c = λν
E = hν,  where h is Planck's constant.
15
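As a hedged worked example of these two relations (the 550 nm wavelength is an arbitrary choice representing green light):

```python
# c = lambda * nu   and   E = h * nu
c = 2.998e8              # speed of light, m/s
h = 6.626e-34            # Planck's constant, J*s

wavelength = 550e-9      # green light, 550 nm (illustrative choice)
nu = c / wavelength      # frequency from c = lambda * nu
E = h * nu               # photon energy from E = h * nu

print(f"nu = {nu:.3e} Hz, E = {E:.3e} J")   # about 5.45e14 Hz and 3.61e-19 J
```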
Light and EM Spectrum
► The colors that humans perceive in an object are determined by the nature of the light reflected from the object.
e.g., green objects reflect light with wavelengths primarily in the 500 to 570 nm range while absorbing most of the energy at other wavelengths.
16
Light and EM Spectrum
► Monochromatic light: void of color
  - Intensity is the only attribute, from black to white.
  - Monochromatic images are referred to as gray-scale images.
17
Light and EM Spectrum
► Chromatic light spans the band from about 0.43 to 0.79 µm.
The quality of a chromatic light source is described by three quantities:
  - Radiance: the total amount of energy that flows from the light source.
  - Luminance (measured in lumens, lm): the amount of energy an observer perceives from the light source.
  - Brightness: a subjective descriptor of light perception that is practically impossible to measure; it embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.
18
Some Typical Ranges of illumination
• Illumination
  - Lumen: a unit of light flow or luminous flux.
  - Lumen per square meter (lm/m²): the metric unit of measure for illuminance of a surface.
  - On a clear day, the sun may produce in excess of 90,000 lm/m² of illumination on the surface of the Earth.
  - On a cloudy day, the sun may produce less than 10,000 lm/m² of illumination on the surface of the Earth.
  - On a clear evening, the moon yields about 0.1 lm/m² of illumination.
  - The typical illumination level in a commercial office is about 1,000 lm/m².
19
Some Typical Ranges of Reflectance
• Reflectance
  - 0.01 for black velvet
  - 0.65 for stainless steel
  - 0.80 for flat-white wall paint
  - 0.90 for silver-plated metal
  - 0.93 for snow
20
Image Sensing and Acquisition
• As observed before, most images are generated by the combination of an "illumination" source and the reflection or absorption of energy from that source by the elements of the "scene" being imaged.
• For example, the illumination may originate from a source of electromagnetic energy such as a radar, infrared, or X-ray system.
• The scene elements could be objects, but they can just as easily be molecules, rock formations, or a human brain. Depending on the nature of the source, illumination energy is reflected from, or transmitted through, objects.
21
Image Acquisition
[Figure: sensor arrangements that transform illumination energy into digital images.]
22
Image Acquisition Using a Single Sensor
23
Image Acquisition Using Sensor Strips
24
Image Acquisition Process
25
Image Sampling and Quantization
There are numerous ways to acquire images, but our objective in all of them is the same: to generate digital images from sensed data.
• The output of most sensors is a continuous voltage waveform whose amplitude and spatial behavior are related to the physical phenomenon being sensed.
• To create a digital image, two processes are needed to convert the continuous sensed data into digital form: sampling and quantization.
26
Image Sampling and Quantization
[Figure: a digital camera projects the scene onto a discrete sensor array.]
27
Image Sampling and Quantization
[Figure: the sensors register the average color over each cell, producing the sampled and quantized image.]
28
Sampling and Quantization
Sampling
• A digital image is an approximation of a real-world scene.
29
Sampling and Quantization
Quantization
• A digital image is an approximation of a real-world scene.
30
Sampling and Quantization
[Figure: a real image on a pixel grid, shown sampled, quantized, and both sampled & quantized.]
31
Quantization: Example
Return change using only these (a fixed set of coin denominations shown in the figure).
32
Quantization: Example
Actual change | Change you return
Rs. 2  | 5
Rs. 7  | 5
Rs. 9  | 10
Rs. 12 | 10
Rs. 23 | 25
...    | ...

[Figure: staircase plot of the change you return (vertical axis, 0 to 50) versus the actual change (horizontal axis, 0 to 50).]
33
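A small sketch of this analogy (the denominations 5, 10, 25, and 50 are assumed from the values appearing on the slide): each amount is mapped to the nearest available denomination, exactly as a continuous intensity is mapped to the nearest allowed level during quantization.

```python
def quantize(value, levels):
    """Return the allowed level closest to value."""
    return min(levels, key=lambda level: abs(level - value))

denominations = [5, 10, 25, 50]            # assumed set of coins
for amount in [2, 7, 9, 12, 23]:
    returned = quantize(amount, denominations)
    print(f"For Rs. {amount:2d} return {returned}")
# For Rs. 2 -> 5, Rs. 7 -> 5, Rs. 9 -> 10, Rs. 12 -> 10, Rs. 23 -> 25
```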
Image Formation - Quantization
Continuous colors are mapped to a finite, discrete set of colors.
[Figure: mapping from continuous color input (horizontal axis) to discrete color output (vertical axis).]
34
Sampling and Quantization
To convert an image to digital form, we have to sample the function in both coordinates and in amplitude:
• Sampling: digitizing the coordinate values
• Quantization: digitizing the amplitude values
35
Sampling and Quantization
• The spatial location of each sample is indicated by a vertical tick mark in the bottom part of the figure.
• The samples are shown as small white squares superimposed on the function.
• The set of these discrete locations gives the sampled function; however, the values of the samples still span (vertically) a continuous range of intensity values.
• In order to form a digital function, the intensity values also must be converted (quantized) into discrete quantities.
• The intensity scale is divided into eight discrete intervals, ranging from black to white; the vertical tick marks indicate the specific value assigned to each interval.
• The continuous intensity levels are quantized by assigning one of the eight values to each sample.
• The assignment is made depending on the vertical proximity of a sample to a tick mark.
36
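A minimal sketch of the two steps just described (the "continuous" scan line, the number of samples, and the eight levels are illustrative stand-ins for the figure): the function is sampled at discrete locations, then each sample is assigned the nearest of eight equally spaced intensity values.

```python
import numpy as np

# Stand-in for the continuous intensity profile along one scan line.
def f_continuous(t):
    return 0.5 + 0.5 * np.sin(2 * np.pi * t)       # values in [0, 1]

M = 16                                             # number of spatial samples (sampling)
t = np.linspace(0.0, 1.0, M)                       # discrete sample locations
samples = f_continuous(t)                          # amplitudes still continuous

L = 8                                              # number of intensity levels (quantization)
levels = np.linspace(0.0, 1.0, L)                  # eight values from black to white
nearest = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
quantized = levels[nearest]                        # every sample snapped to one of 8 levels

print(np.round(samples, 3))
print(np.round(quantized, 3))
```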
Sampling and Quantization
• When an image is generated by a single sensing element combined with mechanical motion, the output of the sensor is quantized in the manner described in the previous example.
• However, spatial sampling is accomplished by selecting the number of individual mechanical increments at which we activate the sensor to collect data.
• Limits on sampling accuracy are determined by the quality of the optical components of the system.
• When a sensing strip is used for image acquisition, the number of sensors in the strip establishes the sampling limitations in one image direction.
37
Sampling and Quantization
• When a sensing array is used for image acquisition, there is no motion, and the number of sensors in the array establishes the limits of sampling in both directions.
• Clearly, the quality of a digital image is determined to a large degree by the number of samples and discrete intensity levels used in sampling and quantization.
38
Sampling and Quantization
• Quantization
  - 8-bit quantization: 2^8 = 256 gray levels (0: black, 255: white)
  - 1-bit quantization: 2 gray levels (0: black, 1: white) – binary
• Sampling
  - Commonly used numbers of samples (resolution):
    Digital still cameras: 640×480, 1024×1024, up to 4064×2704
    Digital video cameras: 640×480 at 30 frames/second (fps)
39
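A short sketch of re-quantizing the gray scale (the ramp image is synthetic and the helper name is mine; here the two 1-bit levels come out as 0 and 128 rather than 0 and 1, but the idea is the same):

```python
import numpy as np

# Synthetic 8-bit gray-scale ramp: 4 rows x 256 columns, levels 0..255.
img8 = np.tile(np.arange(256, dtype=np.uint8), (4, 1))

def requantize(img, bits):
    """Map an 8-bit image onto 2**bits equally spaced gray levels."""
    step = 256 // (2 ** bits)
    return (img // step) * step

print(np.unique(requantize(img8, 8)).size)   # 256 gray levels (unchanged)
print(np.unique(requantize(img8, 1)))        # [  0 128] -> a binary image
```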
Digital Image Representation
• Let f(s, t) be the continuous image function of two continuous variables, s and t. This f(s, t) is converted into a digital image by sampling and quantization.
• The continuous image is sampled into a 2-D array, f(x, y), containing M rows and N columns, where (x, y) are discrete coordinates taking values x = 0, 1, 2, ..., M - 1 and y = 0, 1, 2, ..., N - 1.
• The discrete values of x and y are not the values of the physical coordinates when the image was sampled.
• The value of the image at any coordinates (x, y) is denoted f(x, y).
40
Digital Image Representation
There are three basic ways to represent f(x,y):
1. A plot of the function, with two axes determining spatial locations (x, y) and the third axis being the values of f (intensities).
41
Digital Image Representation
There are three basic ways to represent f(x,y):
2. Display as it appears on a monitor or photograph: the intensity of each point is proportional to the value of f at that point. Here, there are only three equally spaced intensity values. If the intensity is normalized to the interval [0, 1], then each point in the image has the value 0, 0.5, or 1. A monitor or printer simply converts these three values to black, gray, or white, respectively.
42
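A brief sketch of the first two representations side by side (the 5×5 array with values 0, 0.5, and 1 is an invented example): the same f(x, y) is drawn once as a surface over the (x, y) plane and once as a gray-scale picture.

```python
import numpy as np
import matplotlib.pyplot as plt

# Small illustrative image with only three normalized intensity values: 0, 0.5, 1.
f = np.array([[0.0, 0.0, 0.5, 1.0, 1.0],
              [0.0, 0.5, 0.5, 1.0, 1.0],
              [0.5, 0.5, 1.0, 1.0, 0.5],
              [0.5, 1.0, 1.0, 0.5, 0.0],
              [1.0, 1.0, 0.5, 0.0, 0.0]])

fig = plt.figure(figsize=(8, 4))

# Representation 1: plot of the function, intensity as height above the (x, y) plane.
ax1 = fig.add_subplot(1, 2, 1, projection='3d')
X, Y = np.meshgrid(np.arange(f.shape[1]), np.arange(f.shape[0]))
ax1.plot_surface(X, Y, f, cmap='gray')

# Representation 2: intensities shown as black, gray, or white.
ax2 = fig.add_subplot(1, 2, 2)
ax2.imshow(f, cmap='gray', vmin=0, vmax=1)

plt.show()
```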
Digital Image Representation
43
Representing Digital Images
A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}

[Figure: the image divided into 8×8 blocks.]
44
Digital Image Representation
There are three basic ways to represent f(x,y):
3. The display of the numerical values of f(x, y) as an array (matrix). In this example, f is of size 600 × 600 elements, or 360,000 numbers.
Printing the complete array is difficult and not very useful; this representation is useful when only parts of the image are printed and analyzed as numerical values.
Digital Image Representation
• In equation form, the representation of an M×N numerical array is

f(x, y) = \begin{bmatrix} f(0,0) & f(0,1) & \cdots & f(0,N-1) \\ f(1,0) & f(1,1) & \cdots & f(1,N-1) \\ \vdots & \vdots & \ddots & \vdots \\ f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1) \end{bmatrix}
Digital Image Representation
• The representation of an M×N numerical array as a matrix:

A = \begin{bmatrix} a_{0,0} & a_{0,1} & \cdots & a_{0,N-1} \\ a_{1,0} & a_{1,1} & \cdots & a_{1,N-1} \\ \vdots & \vdots & \ddots & \vdots \\ a_{M-1,0} & a_{M-1,1} & \cdots & a_{M-1,N-1} \end{bmatrix}

where a_{ij} = f(x = i, y = j) = f(i, j).
Digital Image Representation
• The representation of an M×N numerical array in MATLAB (1-based indexing):

f(x, y) = \begin{bmatrix} f(1,1) & f(1,2) & \cdots & f(1,N) \\ f(2,1) & f(2,2) & \cdots & f(2,N) \\ \vdots & \vdots & \ddots & \vdots \\ f(M,1) & f(M,2) & \cdots & f(M,N) \end{bmatrix}
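As a minimal illustration of the indexing conventions above (array contents are arbitrary): NumPy follows the 0-based formulation, so the first pixel is f[0, 0] and the last is f[M-1, N-1], whereas MATLAB's f(1, 1) and f(M, N) address the same two pixels.

```python
import numpy as np

M, N = 3, 4                              # M rows, N columns
f = np.arange(M * N).reshape(M, N)       # arbitrary M x N image

print(f[0, 0])          # f(0, 0): top-left pixel   (MATLAB: f(1, 1))
print(f[M - 1, N - 1])  # f(M-1, N-1): bottom-right (MATLAB: f(M, N))
print(f.shape)          # (3, 4) -> M rows, N columns
```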