King Saud University
Department of Computer Engineering
First Semester 1432-1433
CEN 545
Digital Image Processing
Lecture Notes 1 and 2
Dr. Naif Alajlan
Lecture 1
Introduction
Images as Signals:

Signal: x(t)

Image:
f(x, y) → intensity at location (x, y) (continuous)
f(x, y) → intensity at pixel (x, y) (discrete)

Video: a function of space in which the intensity at each spatial point is also a function of time.
f(m, n) → [ Image processor: h(m, n) ] → g(m, n)
Linear Image Processing:
g(m, n) = f(m, n) * h(m, n)   (2-D convolution)
In the frequency domain:
G(u, v) = F(u, v) H(u, v)
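A quick numerical illustration of this relation (a minimal sketch, assuming NumPy; f and h below are arbitrary 8x8 test arrays, and the DFT product corresponds to circular convolution):

import numpy as np

rng = np.random.default_rng(0)
f = rng.random((8, 8))      # input image f(m, n)
h = rng.random((8, 8))      # system / filter response h(m, n)

# Frequency domain: G(u, v) = F(u, v) H(u, v)
g_freq = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))

# Spatial domain: circular 2-D convolution g(m, n) = sum_{k,l} h(k, l) f(m-k, n-l)
M, N = f.shape
g_conv = np.zeros((M, N))
for m in range(M):
    for n in range(N):
        for k in range(M):
            for l in range(N):
                g_conv[m, n] += h[k, l] * f[(m - k) % M, (n - l) % N]

print(np.allclose(g_freq, g_conv))   # True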
Some Types of Images: Binary, Gray-scale, Color
Systems and Signals        Image Processing
Time-variant               Space-variant
1-Dimensional              2-Dimensional
f(x)                       f(x, y)
Deterministic              Random
Causal                     Non-causal
What is Image Processing?
Image processing is the manipulation of an image in order to improve its quality,
enhance its ability to convey visual information, and make it look better.
Image processing addresses three types of image problems:
1. Contrast: refers to the variability of image intensity across the image. Edges are
regions of relatively high local contrast, while the global intensity variation across
the entire image is referred to as the dynamic range of the image. Contrast variation
is usually caused by illumination.
2. Blur: caused by resolution or focus problems, or by relative motion between the
camera and the object during image capture.
Blurred edges correspond to a gradual local change of intensity.
3. Noise: any unwanted intensity variation in the image.
Usually modeled as a random process, but it can be highly structured and
deterministic in some instances.
Major Categories of Image Processing Problems

While most image problems are relatively simple to describe, solving them
effectively can be quite difficult.

There is a broad and continuously expanding spectrum of applications of image
processing. The two major branches of image processing are enhancement and
restoration.
o Enhancement: aimed at improving the subjective quality or the objective utility
of the image; includes point operations, local operations, and global operations,
any of which can be linear or non-linear.
o Restoration: aims at recovering an image after degradation.
Classical linear systems theory forms the foundation for the derivation of
restoration methods:
f(m, n) (deterministic) → [ LSI degradation h_d(m, n) ] → (+) ← n_s(m, n) (random noise)
→ s(m, n) → [ linear shift-invariant restoration filter h(m, n) ] → f̂(m, n), the minimum-MSE estimate of f(m, n)
h_d(m, n): model of the image degradation.
n_s(m, n): random noise.
Optimal methods find a filter h(m, n) whose output is a minimum mean square error
estimate of f (m, n) .
Drawback: local statistical models of the image vary with spatial location, which undermines
methods that are optimal only for stationary processes.
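A classical minimum-MSE filter of this kind is the Wiener filter. Below is a minimal frequency-domain sketch (assuming NumPy, a known degradation response h_d, and flat signal and noise power spectra; wiener_restore and its parameters are illustrative names):

import numpy as np

def wiener_restore(s, h_d, noise_power, signal_power):
    """Estimate f(m, n) from the degraded image s(m, n), given the LSI
    degradation h_d(m, n) and (assumed flat) noise/signal power spectra."""
    M, N = s.shape
    H = np.fft.fft2(h_d, s=(M, N))            # degradation frequency response
    S = np.fft.fft2(s)                        # spectrum of the observed image
    nsr = noise_power / signal_power          # noise-to-signal power ratio
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter in the frequency domain
    return np.real(np.fft.ifft2(W * S))       # f_hat(m, n)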
Image Processing Techniques
1. Point Operations: map each input pixel to an output pixel intensity according to an
intensity transformation. A simple linear point operation which maps the input gray
level f (m, n) to an output gray level g (m, n) is given by:
g(m, n) = a f(m, n) + b
where a and b are chosen to achieve a desired intensity variation in the image.
Note that the output g(m, n) here depends only on the input f(m, n) at (m, n).
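A minimal sketch of such a linear point operation (assuming NumPy and an 8-bit gray-scale image; point_op and the values of a and b are illustrative):

import numpy as np

def point_op(f, a, b):
    """g(m, n) = a*f(m, n) + b, clipped back to the 8-bit range."""
    g = a * f.astype(np.float64) + b
    return np.clip(g, 0, 255).astype(np.uint8)

# Example: a > 1 stretches contrast, b > 0 brightens.
f = np.array([[10, 50], [120, 200]], dtype=np.uint8)
g = point_op(f, a=1.5, b=10)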
2. Local Operations: determine the output pixel intensity as some function of a
relatively small neighborhood of input pixels in the vicinity of the output location. A
general linear operator can be expressed as a weighted sum of picture elements within a
local neighborhood N:
g(m, n) = Σ_{(k,l)∈N} a_{k,l} f(m − k, n − l)
Simple local smoothing (for noise reduction) and sharpening (for deblurring or edge
enhancement) operators can be either linear or non-linear.
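A minimal sketch of a linear local operation (assuming NumPy): a 3x3 smoothing average, i.e. a_{k,l} = 1/9 over the neighborhood N, with border pixels averaging over the neighbors that exist:

import numpy as np

def local_average(f):
    """3x3 neighborhood average for noise reduction."""
    M, N = f.shape
    g = np.zeros((M, N))
    for m in range(M):
        for n in range(N):
            acc, cnt = 0.0, 0
            for k in (-1, 0, 1):
                for l in (-1, 0, 1):
                    if 0 <= m - k < M and 0 <= n - l < N:
                        acc += f[m - k, n - l]
                        cnt += 1
            g[m, n] = acc / cnt
    return g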
3. Global Operations: the output depends on all input pixel values. If linear, global
operators can be expressed using two-dimensional convolution:
g(m, n) = f(m, n) * h(m, n) = Σ_{(k,l)∈N} h(k, l) f(m − k, n − l)
4. Adaptive Filters: filters whose coefficients depend on the input image, typically through
a statistical estimate computed from f(m, n):
f(m, n) → [ h(m, n), coefficients driven by a statistical estimate of f(m, n) ] → g(m, n)
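As one concrete example (hypothetical, not taken from the notes), a Lee-style local-statistics smoother sets its coefficients from the local mean and variance of the input, so flat regions are smoothed strongly while edges are preserved (assuming NumPy; noise_var is an assumed known noise variance):

import numpy as np

def adaptive_smooth(f, noise_var, win=3):
    """Smoothing strength at each pixel adapts to the local variance of f."""
    f = f.astype(np.float64)
    M, N = f.shape
    r = win // 2
    g = np.empty((M, N))
    for m in range(M):
        for n in range(N):
            patch = f[max(0, m - r):m + r + 1, max(0, n - r):n + r + 1]
            mu, var = patch.mean(), patch.var()
            k = max(var - noise_var, 0.0) / var if var > 0 else 0.0
            g[m, n] = mu + k * (f[m, n] - mu)   # k -> 0 in flat areas, k -> 1 at edges
    return g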
5. Non-Linear Filters:
• Median / order-statistic filters
• Non-linear local operations
• Homomorphic filters: g(m, n) = Π_{(k,l)∈N} [f(m − k, n − l)]^{h(k,l)}
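A minimal sketch of a median (order-statistic) filter, a non-linear local operation effective against impulsive "salt-and-pepper" noise (assuming NumPy):

import numpy as np

def median_filter(f, win=3):
    """Replace each pixel by the median of its win x win neighborhood."""
    M, N = f.shape
    r = win // 2
    g = np.empty((M, N))
    for m in range(M):
        for n in range(N):
            patch = f[max(0, m - r):m + r + 1, max(0, n - r):n + r + 1]
            g[m, n] = np.median(patch)
    return g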
In addition to enhancement and restoration, image processing generally includes issues of
representation, spatial sampling and intensity quantization, compression (coding), and
segmentation. As part of computer vision, image processing leads to feature extraction
and pattern recognition or scene analysis.
Point Operations for Contrast Enhancement
Contrast refers to the distribution of intensities or gray levels in an image and can be
local (textures or edges) or global (the dynamic range across the entire image).
Histogram:
• A convenient representation of the contrast of an image.
• Plots the number of pixels at each intensity value.
• Low-contrast images have narrow distributions; high-contrast images have broad distributions.
[Histogram sketches over gray levels 0–255: maximum contrast (black-and-white image), minimum contrast (uniform gray image), and an optimally spread histogram ("optimal contrast").]
Simple point operations can achieve dramatic contrast effects.
Normalizing the Histogram
Let H(k) = the number of pixels whose gray-level value is k, 0 ≤ k ≤ L − 1 (for 8-bit
quantization, L = 256).
P(k) = H(k) / N, where N = Σ_{k=0}^{L−1} H(k) is the total number of pixels in the image.
By allowing the gray-level value to vary continuously over [0, 1], p(r), 0 ≤ r ≤ 1, is called the
gray-level probability density function (glpdf).

How do transformations (i.e., point operations) affect the contrast (glpdf) of the image?
f(m, n) → [ point operation ] → g(m, n)
Suppose g(m, n) = a f(m, n) + b.
Let r = gray level in the input image and s = gray level in the output image.
⇒ s = a r + b = T(r)
Then μ_s = a μ_r + b and σ_s² = a² σ_r².
Proof:
σ_s² = E[(s − μ_s)²] = E[(a r + b − a μ_r − b)²] = E[a² (r − μ_r)²] = a² σ_r²
[Sketch: the line s = T(r) = a r + b together with the input pdf p_f(r) and the output pdf p_g(s).]
For a > 1: contrast increases ("stretching").
For a < 1: contrast decreases ("compressing").
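A quick numerical check of these relations (a sketch assuming NumPy; the gray level r is simulated as a uniform random variable on [0, 1], and a, b are arbitrary test values):

import numpy as np

rng = np.random.default_rng(1)
r = rng.random(100_000)       # samples of the input gray level r
a, b = 2.0, 0.1
s = a * r + b                 # s = T(r) = a*r + b
print(np.isclose(s.mean(), a * r.mean() + b))   # mu_s = a*mu_r + b
print(np.isclose(s.var(), a ** 2 * r.var()))    # sigma_s^2 = a^2 sigma_r^2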
Example:
Consider s = r², so ds/dr = 2r: gray levels below 1/2 are compressed and gray levels above 1/2 are stretched.
This enhances contrast for bright images.
[Sketch: the curve s = r² on [0, 1], passing through (1/2, 1/4).]
Similarly, s = r^(1/2) enhances contrast for dark images.
More generally, given p_f(r) and s = T(r), what is p_g(s)?
Assume T(r) is monotonically increasing and single-valued. Then
p_g(s) ds = p_f(r) dr
p_g(s) = p_f(r) / (ds/dr), evaluated at r = T⁻¹(s).
Example:
Find p_g(s) in terms of p_f(r) if (a) s = a r + b, (b) s = r².
(a) s = T(r) = a r + b ⇒ r = T⁻¹(s) = (s − b)/a and ds/dr = a,
so p_g(s) = p_f((s − b)/a) / a.
(b) r = T⁻¹(s) = √s and ds/dr = 2r = 2√s,
so p_g(s) = p_f(√s) / (2√s).
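A numerical sanity check of case (b) (a sketch assuming NumPy; p_f is taken to be uniform on [0, 1], so p_g(s) should behave like 1/(2√s)):

import numpy as np

rng = np.random.default_rng(2)
r = rng.random(1_000_000)                    # r ~ uniform => p_f(r) = 1
s = r ** 2                                   # s = T(r) = r^2
hist, edges = np.histogram(s, bins=50, range=(0.0, 1.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
predicted = 1.0 / (2.0 * np.sqrt(centers))   # p_g(s) = p_f(sqrt(s)) / (2 sqrt(s))
print(np.allclose(hist[1:], predicted[1:], rtol=0.05))   # True (first bin near s = 0 excluded)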
Histogram Equalization
Find s = T(r) such that p_g(s) = 1 for 0 ≤ s ≤ 1 (uniform).
ds/dr = p_f(r)  ⇒  s = ∫₀^r p_f(x) dx
or, in the discrete case,
l = Σ_{j=0}^{k} P_f(j),
which is the cumulative distribution function (cdf) of the input (glpdf).
In discrete images with quantized gray levels, quantization artifacts prevent exact
equalization of the histogram.
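A minimal sketch of discrete histogram equalization (assuming NumPy and an 8-bit image; the cdf is re-quantized to the L available output levels):

import numpy as np

def equalize(f, L=256):
    """Map each gray level through the cdf of the input histogram."""
    H = np.bincount(f.ravel(), minlength=L)
    cdf = np.cumsum(H) / H.sum()                   # T(k) = sum_{j<=k} P_f(j)
    T = np.round((L - 1) * cdf).astype(np.uint8)   # back to integer gray levels
    return T[f]                                    # apply the mapping pixel-wise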
Example:
Find T(r) that results in (optimal) contrast enhancement:
(a) p_f(r) = 2r, 0 ≤ r ≤ 1
⇒ T(r) = ∫₀^r 2x dx = r²
(b) p_f(r) = 1 + cos 2πr, 0 ≤ r ≤ 1
⇒ T(r) = ∫₀^r (1 + cos 2πx) dx = [x + sin(2πx)/(2π)]₀^r = r + sin(2πr)/(2π)
Example:
(Artifacts of quantization: practical issues.)
P_f(k) = 2k/56, 0 ≤ k ≤ 7
(a) What T(k) best enhances the contrast of this image?
(b) Plot H_g(l).
Solution:
(a) T(k) = Σ_{j=0}^{k} P_f(j) = Σ_{j=0}^{k} 2j/56 = (2/56) · k(k + 1)/2 = k(k + 1)/56
(b) Tabulating the mapping (N = 56 pixels, P_f(k) = H_f(k)/N):

k    H_f(k)    P_f(k)    T(k)              l (output level)
0    0         0/56      0/56  = 0          0
1    2         2/56      2/56  = 0.036      0
2    4         4/56      6/56  = 0.107      1
3    6         6/56      12/56 = 0.214      2
4    8         8/56      20/56 = 0.357      3
5    10        10/56     30/56 = 0.536      4
6    12        12/56     42/56 = 0.75       5
7    14        14/56     56/56 = 1          7

Since the output must again take one of the 8 available gray levels, each value T(k) is
re-quantized to the output level l listed in the last column. The output histogram is therefore
H_g(0) = 0 + 2 = 2, H_g(1) = 4, H_g(2) = 6, H_g(3) = 8, H_g(4) = 10, H_g(5) = 12, H_g(6) = 0, H_g(7) = 14,
which is only approximately flat: the quantization prevents exact equalization.
[Plot of H_g(l) over the output range divided into the intervals 0–0.125, 0.125–0.250, …, 0.875–1.]
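The same computation in code (a sketch assuming NumPy; rounding to the nearest of the 8 available levels, with halves rounded up, reproduces the l column above):

import numpy as np

Hf = np.array([0, 2, 4, 6, 8, 10, 12, 14])       # H_f(k), k = 0..7
Pf = Hf / Hf.sum()                               # P_f(k) = H_f(k) / 56
T = np.cumsum(Pf)                                # T(k) = k(k+1)/56
l = np.floor(7 * T + 0.5).astype(int)            # nearest available output level
Hg = np.bincount(l, weights=Hf, minlength=8)     # output histogram H_g(l)
print(l)    # [0 0 1 2 3 4 5 7]
print(Hg)   # [ 2.  4.  6.  8. 10. 12.  0. 14.]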