
Digital Image Processing HW#2 Solution

02/26/2004

TA: Jessie Hsu yh2117@columbia.edu

1. Compandor Design:

(a) Idea of the compandor

Given the pdf of the random variable u, the goal is to determine decision levels and reconstruction levels in the w domain and then transform them back to the u domain.

Thus, what we are going to do here is to

(1) Determine the transform function from u to w and its inverse from w to u

(2) Quantize w , obtaining decision and reconstruction levels

(3) Transform decision and reconstruction levels back to u
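In symbols (with $t_i$ the decision levels and $r_i$ the reconstruction levels obtained in the $w$ domain), the procedure is

$$ u \;\xrightarrow{\;w=f(u)\;}\; w \;\xrightarrow{\;\text{uniform quantization}\;}\; \{t_i\},\{r_i\} \;\xrightarrow{\;f^{-1}\;}\; \{f^{-1}(t_i)\},\{f^{-1}(r_i)\}. $$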

(b) pdf of u

The pdf of u is given by

$$ p_u(u) = \begin{cases} 1-|u|, & |u| \le 1 \\ 0, & \text{otherwise} \end{cases}
\quad\Longrightarrow\quad
p_u(u) = \begin{cases} 1+u, & -1 \le u \le 0 \\ 1-u, & 0 \le u \le 1 \\ 0, & \text{otherwise.} \end{cases} $$

Determining the transform function from u to w

The transformation w = f(u), as given in the problem, is

$$ f(u) = \int_0^{u} p_u(x)\,dx \quad \text{for } u \ge 0, \qquad f(u) = -f(-u) \quad \text{for } u < 0. $$

For $0 \le u \le 1$,

$$ w = f(u) = \int_0^{u} p_u(x)\,dx = \int_0^{u} (1-x)\,dx = u - \tfrac{1}{2}u^2, \qquad 0 \le w \le \tfrac{1}{2}. $$

For $-1 \le u \le 0$,

$$ w = -f(-u) = -\left[(-u) - \tfrac{1}{2}(-u)^2\right] = u + \tfrac{1}{2}u^2, \qquad -\tfrac{1}{2} \le w \le 0. $$

Hence

$$ w = f(u) = \begin{cases} u - \tfrac{1}{2}u^2, & 0 \le u \le 1 \\ u + \tfrac{1}{2}u^2, & -1 \le u \le 0, \end{cases} $$

as illustrated in (e).

(c) pdf of w

Since $f(u) = F_u(u) - \tfrac{1}{2}$, where $F_u$ is the CDF of $u$, the pdf of $w$ is uniform on $[-\tfrac{1}{2}, \tfrac{1}{2}]$, which is why uniform quantization is applied in the $w$ domain.

Determining the inverse transform function from w to u

Accordingly, we can derive the transformation from w back to u.

For $0 \le w \le \tfrac{1}{2}$, $u = f^{-1}(w) = 1 \pm \sqrt{1-2w}$; pick $u = f^{-1}(w) = 1 - \sqrt{1-2w}$ so that u falls inside the interval [0, 1].

For $-\tfrac{1}{2} \le w \le 0$, $u = f^{-1}(w) = -1 \pm \sqrt{1+2w}$; pick $u = f^{-1}(w) = -1 + \sqrt{1+2w}$ so that u falls inside the interval [-1, 0].

Hence

$$ u = f^{-1}(w) = \begin{cases} 1 - \sqrt{1-2w}, & 0 \le w \le \tfrac{1}{2} \\ -1 + \sqrt{1+2w}, & -\tfrac{1}{2} \le w \le 0, \end{cases} $$

as illustrated in (f).
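As a quick numerical sanity check (a small MATLAB sketch added here for illustration; it is not part of the original derivation), the forward and inverse maps can be verified to agree:

% Verify that f and f^{-1} are inverses on [-1,1] (illustrative sketch)
u = -1:0.001:1;
w    = (u - 0.5*u.^2).*(u>=0) + (u + 0.5*u.^2).*(u<0);        % w = f(u)
uhat = (1 - sqrt(1-2*w)).*(w>=0) + (-1 + sqrt(1+2*w)).*(w<0);  % u = f^{-1}(w)
max(abs(uhat - u))                                             % essentially zero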


(e) Transformation from u to w (f) Transformation from w to u

(g) Quantizing w (h) Quantizing u

Quantizing w, obtaining decision and reconstruction levels

w can be quantized with uniform quantization:

$$ \{t_1, t_2, t_3, t_4, t_5\} = \left[-\tfrac{1}{2},\ -\tfrac{1}{4},\ 0,\ \tfrac{1}{4},\ \tfrac{1}{2}\right], \qquad \{r_1, r_2, r_3, r_4\} = \left[-\tfrac{3}{8},\ -\tfrac{1}{8},\ \tfrac{1}{8},\ \tfrac{3}{8}\right]. $$

Transforming decision and reconstruction levels back to u

Applying $u = f^{-1}(w)$:

$$ \{t_1, t_2, t_3, t_4, t_5\} = \left[-1,\ -1+\tfrac{1}{\sqrt{2}},\ 0,\ 1-\tfrac{1}{\sqrt{2}},\ 1\right], \qquad \{r_1, r_2, r_3, r_4\} = \left[-\tfrac{1}{2},\ -1+\tfrac{\sqrt{3}}{2},\ 1-\tfrac{\sqrt{3}}{2},\ \tfrac{1}{2}\right]. $$


Calculating Mean Square Error

$$ \mathrm{MSE} = \sum_{i=1}^{4} E\!\left[(u-r_i)^2;\ t_i < u \le t_{i+1}\right] = \sum_{i=1}^{4} \int_{t_i}^{t_{i+1}} (u-r_i)^2\, p_u(u)\, du $$

$$ = \int_{-1}^{-1+\frac{1}{\sqrt{2}}} \left(u+\tfrac{1}{2}\right)^2 (1+u)\, du + \int_{-1+\frac{1}{\sqrt{2}}}^{0} \left(u+1-\tfrac{\sqrt{3}}{2}\right)^2 (1+u)\, du + \int_{0}^{1-\frac{1}{\sqrt{2}}} \left(u-1+\tfrac{\sqrt{3}}{2}\right)^2 (1-u)\, du + \int_{1-\frac{1}{\sqrt{2}}}^{1} \left(u-\tfrac{1}{2}\right)^2 (1-u)\, du $$

$$ = 1 - \frac{\sqrt{2}}{6} - \frac{2\sqrt{3}}{3} + \frac{\sqrt{6}}{6} \approx 0.0178. $$
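This value can be double-checked numerically (an illustrative MATLAB sketch, not part of the original hand calculation):

% Numerical check of the compandor MSE (illustrative sketch)
t  = [-1, -1+1/sqrt(2), 0, 1-1/sqrt(2), 1];     % decision levels in the u domain
r  = [-1/2, -1+sqrt(3)/2, 1-sqrt(3)/2, 1/2];    % reconstruction levels in the u domain
pu = @(x) (1-abs(x)).*(abs(x)<=1);              % triangular pdf
MSE = 0;
for i=1:4
    MSE = MSE + integral(@(x) (x-r(i)).^2 .* pu(x), t(i), t(i+1));
end
MSE   % approximately 0.0178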

[Bonus]

This quantizer is NOT optimal for u .

To show this claim, we provide a better quantizer in terms of mean square error.

Sample MATLAB programs doing Lloyd-Max quantization are provided below:

% DIP HW#2 Prob 1 Bonus

% Lloyd-Max quantizer for the triangular distribution
clear;
N=20;    % Number of iterations
M=2001;  % Number of samples along u axis

% pdf construction
u=zeros(1,M);
u=[-1:2/(M-1):1];
pu=zeros(1,M);
pu(1:(M-1)/2+1)=1+u(1:(M-1)/2+1);
pu((M-1)/2+1:M)=1-u((M-1)/2+1:M);

% Quantization data: thresholds and reconstruction levels
t=zeros(N,5); t(:,1)=-1; t(:,3)=0; t(:,5)=1;
r=zeros(N,4);

% Iteration: start with the levels obtained from the compandor
MSE=zeros(1,N);
t(1,2)=-1+1/sqrt(2); t(1,4)=1-1/sqrt(2);
r(1,1)=-0.5; r(1,2)=-1+sqrt(3)/2; r(1,3)=1-sqrt(3)/2; r(1,4)=0.5;
MSE(1)=MSECompute(u,pu,squeeze(t(1,:)),squeeze(r(1,:)));


for k=1:N-1   % kth iteration
    [tnew,rnew]=LloydMax( u,pu,squeeze(t(k,:)),squeeze(r(k,:)) );
    t(k+1,:)=tnew;
    r(k+1,:)=rnew;
    clear tnew rnew;
    MSE(k+1)=MSECompute(u,pu,squeeze(t(k+1,:)),squeeze(r(k+1,:)));
end
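To visualize convergence, the stored MSE values can be plotted against the iteration index (an optional addition in the same plotting style used elsewhere in this solution):

% Plot MSE versus iteration (optional)
figure
set(gcf,'color',[1,1,1]);
plot(1:N,MSE); grid;
title('Lloyd-Max Iterations'); xlabel('Iteration'); ylabel('MSE');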

Function for computing MSE:

function y=MSECompute(u,pu,t,r)
% Computes quantization error (MSE)
% Takes input (u,pu,t,r)
M=length(r);   % Number of quantization levels
y=0; tceil=0;
MSE=zeros(1,M);
for k=1:M
    % MSE in the kth interval
    tfloor=tceil+1;
    tceil=min( max(find( u<=t(k+1) )),length(u) );
    currentu=u(tfloor:tceil);
    currentpu=pu(tfloor:tceil);
    MSE(k)=(sum( currentpu.*((currentu-r(k)).^2) ));
    clear currentu currentpu currentnum;
end
y=(sum(MSE))/(length(u)/range(u));

Function for Lloyd-Max iteration:

function [tnew,rnew]=LloydMax(u,pu,t,r)
% Lloyd-Max quantizer subroutine
% Computes new quantization thresholds given old ones
% Takes input (u,pu,t,r)
M=length(r);   % Number of quantization levels
tceil=0; tnew=zeros(1,M+1); rnew=zeros(1,M);


tnew(1)=t(1); tnew(M+1)=t(M+1);
for k=1:M
    tfloor=tceil+1;
    tceil=min( max(find( u<=t(k+1) )),length(u) );
    currentu=u(tfloor:tceil);
    currentpu=pu(tfloor:tceil);
    currentnum=tceil-tfloor+1;
    rnew(k)= sum(currentu.*currentpu) / sum(currentpu);
    clear currentu currentpu currentnum;
end
for p=2:M
    tnew(p)= (r(p-1)+r(p)) / 2 ;
end

Starting with the decision levels and reconstruction levels from the compandor,

$$ \{t_1, t_2, t_3, t_4, t_5\} = [-1,\ -0.2929,\ 0,\ 0.2929,\ 1], \qquad \{r_1, r_2, r_3, r_4\} = [-0.5,\ -0.1340,\ 0.1340,\ 0.5], $$

with MSE = 0.0178, in the end (after 20 iterations) we get

$$ \{t_1, t_2, t_3, t_4, t_5\} = [-1,\ -0.3816,\ 0,\ 0.3819,\ 1], \qquad \{r_1, r_2, r_3, r_4\} = [-0.5877,\ -0.1755,\ 0.1760,\ 0.5877], $$

with MSE = 0.0155, less than the MSE yielded by the compandor.

Therefore the compandor is not the optimal quantizer for u .


2. Sample MATLAB code:

(1) Histogram calculation

% HW 2, Prob 2 (1)

% Histogram computation

% 8-bit images (0~255)
function [hind,h]=histcomp(img)
hmax=255;
[M,N]=size(img);
y=double(img(:));
h=zeros(1,hmax+1);   % row vector of bin counts
hind=[0:1:hmax];
for k=1:M*N
    h( y(k)+1 )=h( y(k)+1 )+1;
end
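A minimal usage sketch (assuming coin.bmp is on the MATLAB path and histcomp is saved in its own function file):

% Compute and plot the histogram of coin.bmp (usage sketch)
img=imread('coin.bmp');
[hind,h]=histcomp(img);
figure
set(gcf,'color',[1,1,1]);
plot(hind,h); grid;
title('Histogram'); xlabel('Gray Level'); ylabel('Histogram');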

(a) coin.bmp (b) Histogram of coin.bmp


(2) Cumulative histogram calculation

% HW 2, Prob 2 (2)

% Cumulative Histogram computation

% 8-bit images (0~255)
function [chind,ch]=cumhistcomp(img)   % distinct name so it does not clash with histcomp in (1)
intmax=255;
[M,N]=size(img);
y=double(img(:));
h=zeros(1,intmax+1);
ch=zeros(1,intmax+1);
chind=[0:1:intmax];
for k=1:M*N
    h( y(k)+1 )=h( y(k)+1 )+1;
end
for k=1:length(h)
    ch(k)=sum( h(1:k) );
end

% Plot the cumulative histogram
chmax=max(max(ch));
figure
set(gcf,'color',[1,1,1]);
plot(chind,ch); grid;
axis([0 intmax 0 chmax+1]);
title('Cumulative Histogram'); xlabel('Gray Level'); ylabel('Cumulative Histogram');
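As a side note (an alternative not used in the original code), the summation loop above can be replaced by MATLAB's built-in cumulative sum:

% Equivalent one-line cumulative histogram
ch = cumsum(h);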

(a) coin.bmp


(b) Cumulative histogram of coin.bmp

(3) This is an open problem; the criteria, combinations, and parameter adjustments are up to you. Sample code for the respective methods is given below:

Contrast stretching (G&W p.85)

% HW 2, Prob 2 (3)-1

% Contrast stretching
clear;
intmax=255;
img=double(imread('d:\courses\dip\hw\hw2\coin.bmp'));
[M,N]=size(img);
imgcs=zeros(M,N);

% Two cut points
r1=100; s1=100;
r2=200; s2=200;

% Map intensities
for k=1:M
    for p=1:N
        x=img(k,p);
        % Section 1
        if x>=0 & x<r1
            y=x*s1/r1;
        % Section 2
        elseif x>=r1 & x<r2
            y=(x-r1)*(s2-s1)/(r2-r1)+s1;
        % Section 3
        else
            y=(x-r2)*(intmax-s2)/(intmax-r2)+s2;
        end
        imgcs(k,p)=round(y);
        % Clip gray levels to 0~255
        if imgcs(k,p)<0
            imgcs(k,p)=0;
        elseif imgcs(k,p)>intmax
            imgcs(k,p)=intmax;
        end
        y=0;
    end
end

The parameters to be adjusted are (r1, s1) and (r2, s2).
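For reference, the same piecewise-linear mapping can be written without loops; a minimal vectorized sketch (assuming the same break points r1, s1, r2, s2) is:

% Vectorized contrast stretching via piecewise-linear interpolation (sketch)
imgcs = round( interp1([0 r1 r2 intmax],[0 s1 s2 intmax],img) );
imgcs = min(max(imgcs,0),intmax);   % clip to 0~255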


Histogram equalization (G&W p.91)

% HW 2, Prob 2 (3)-2

% Histogram equalization
clear;
intmax=255;
img=imread('d:\dip\hw\hw2\coin.bmp');
[M,N]=size(img);
imgeq=histeq(img);
[hinind,hin]=histcomp(double(img));
imgeqy=double(imgeq(:));
[houtind,hout]=histcomp(imgeqy);
hmax=max([ max(hin),max(hout) ]);

% Plot the input histogram
figure
set(gcf,'color',[1,1,1]);
plot(hinind,hin); grid;
axis([0 intmax 0 hmax+1]);
title('Input Histogram'); xlabel('Gray Level'); ylabel('Histogram');

% Plot the output histogram
figure
set(gcf,'color',[1,1,1]);
plot(houtind,hout); grid;
axis([0 intmax 0 hmax+1]);
title('Equalized Histogram'); xlabel('Gray Level'); ylabel('Histogram');

% Display the input and output image
figure
set(gcf,'color',[1,1,1]);
subplot(2,1,1), imshow(uint8(img)); title('Image Before Histogram Equalization');
subplot(2,1,2), imshow(uint8(imgeq)); title('Image After Histogram Equalization');
imwrite(imgeq,gray(256),'d:\dip\hw\hw2\coineq.bmp','bmp');
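For reference, histogram equalization can also be done without histeq, using the cumulative histogram routine from part (2); a minimal sketch of the standard s = (L-1)·CDF(r) mapping (not part of the original solution) is:

% Manual histogram equalization via the cumulative histogram (sketch)
[chind,ch]=cumhistcomp(img);          % cumulative histogram routine from part (2)
map=round( intmax*ch/(M*N) );         % s = (L-1)*CDF(r)
imgeq2=uint8( map(double(img)+1) );   % apply the gray-level mapping to every pixel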


(a) coin.bmp (b) coin.bmp after histogram equalization

(c) Histogram of coin.bmp (d) Histogram of coin.bmp after equalization


Histogram matching (G&W p.91)

% HW 2, Prob 2 (3)-3

% Histogram matching
clear;
intmax=255;
img=imread('d:\courses\dip\hw\hw2\coin.bmp');
[M,N]=size(img);

% Specify target histogram
hind=[0:1:intmax];
htgt=zeros(1,intmax+1);
a=round( M*N/(intmax+1) );
htgt(1:(intmax+1))=a;        % Flat histogram
b=M*N-a*(intmax+1);          % pixels left over after rounding
for k=1:b
    htgt(k)=htgt(k)+1;       % spread the remainder over the first b bins
end
imgeq=histeq(img,htgt);
[hinind,hin]=histcomp(double(img));
imgeqy=double(imgeq(:));
[houtind,hout]=histcomp(imgeqy);

The parameter to be adjusted is the target histogram. A flat histogram is used in the sample code, which yields results equivalent to histogram equalization.
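As an illustration of a different target histogram (a hypothetical example, not part of the original solution), a Gaussian-shaped target concentrates the output gray levels around mid-gray:

% Histogram matching to a Gaussian-shaped target (hypothetical example)
hind=[0:1:intmax];
htgt=exp( -((hind-128).^2)/(2*40^2) );   % bell-shaped target centered at gray level 128
imgmatch=histeq(img,htgt);               % histeq rescales the target histogram internally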


Power-law transform (G&W p.81)

% HW 2, Prob 2 (3)-4

% Power transform
clear;
intmax=255;
img=double(imread('d:\courses\dip\hw\hw2\coin.bmp'));
[M,N]=size(img);
imgp=zeros(M,N);

% Power transform
c=1; gamma=1;
for k=1:M
    for p=1:N
        r=img(k,p);
        y=c*r^(gamma);
        imgp(k,p)=round(y);
        % Clip gray levels to 0~255
        if imgp(k,p)<0
            imgp(k,p)=0;
        elseif imgp(k,p)>intmax
            imgp(k,p)=intmax;
        end
    end
end

The parameters to be adjusted are c and γ; with the values c = 1 and γ = 1 used in the sample code, the transform reduces to the identity mapping.
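The double loop can also be written with elementwise operations; a minimal vectorized sketch (applying the power law to normalized intensities, a common variant that keeps the output in range for any γ) is:

% Vectorized power-law transform on normalized intensities (sketch)
gamma=0.5;                                    % example value
imgp = round( intmax*(img/intmax).^gamma );
imgp = min(max(imgp,0),intmax);               % clip to 0~255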


Log transform (G&W p.79)

% HW 2, Prob 2 (3)-5

% Log transform
clear;
intmax=255;
img=double(imread('d:\courses\dip\hw\hw2\coin.bmp'));
[M,N]=size(img);
imglog=zeros(M,N);

% Log transform
c=40;
for k=1:M
    for p=1:N
        r=img(k,p);
        y=c*log(1+r);
        imglog(k,p)=round(y);
        % Clip gray levels to 0~255
        if imglog(k,p)<0
            imglog(k,p)=0;
        elseif imglog(k,p)>intmax
            imglog(k,p)=intmax;
        end
        y=0;
    end
end

The parameter to be adjusted is c.
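A natural choice of c (an added observation, not from the original solution) is c = intmax/log(1+intmax) ≈ 46, which maps the full input range onto [0, 255] without clipping:

% Choose c so that the brightest input maps exactly to intmax (sketch)
c = intmax/log(1+intmax);        % approximately 46.0
imglog = round( c*log(1+img) );  % vectorized version of the loop above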
