Relevant Parts of the MPhil/PhD transfer thesis of Siamak Talebi
Chapter 5
A recursive method for burst error recovery of one- and two-dimensional signals
5.1 Introduction
Samples of a speech signal can be lost in an erasure channel or due to cell losses in an ATM
network. Cell losses due to buffer overflow in ATM environments result in bursts of errors
rather than isolated errors. If the signal is oversampled, the original signal can be recovered as
long as the average sampling rate is above the Nyquist rate [52]. This implies that if a simple
scheme can be devised, recovery from the oversampled signal becomes an alternative to error
correction [40,43]. Many schemes have been proposed to recover the missing samples using
the remaining samples of the signal. Some of these schemes implement iterative and non-linear
techniques in the recovery process [10,43]. The Reed-Solomon algorithm is mentioned in
[1-42] as an error correction code for complex FFT values, but without any extensive simulation
results or any study of its computational load or sensitivity to noise. In Section 5.2, we
describe various implementations of a robust error recovery scheme for 1-D signals using
techniques similar to Peterson's BCH decoding [1,52]. We also study the sensitivity of the
simulated techniques to additive and quantization noise based on IIR filters, and present
implementations of parallel and series IIR filter structures. In Section 5.3, a novel method for
burst error recovery of images is proposed.
5.2 The Burst Error Recovery Technique for 1-D Signals
In this section, we propose a robust error recovery technique for bursts of real and
complex samples, similar to Peterson's method for BCH decoding [52]; we shall call this
technique the Burst Error Recovery Technique (BERT). Let us assume that a signal such as
speech is oversampled to n complex samples x(i), i = 1,...,n, whose DFT satisfies X(j) = 0 for
j = k+1,...,k+m+1 = n. The missing samples are denoted by e(i_m) = x(i_m), where the i_m
denote the positions of the lost samples; for i not equal to any i_m, e(i) = 0. For α lost
samples, the polynomial locator for the erased samples is

$$H(s_i) = \prod_{m=1}^{\alpha}\left(s_i - \exp\Big(\frac{j 2\pi i_m}{n}\Big)\right) = \sum_{t=0}^{\alpha} h_t\, s_i^{\,t}, \qquad (1)$$

$$H(s_m) = 0, \qquad m = 1, 2, \ldots, \alpha, \qquad (2)$$

where

$$s_i = \exp\Big(\frac{j 2\pi i}{n}\Big), \quad i = 1, \ldots, n, \qquad s_m = \exp\Big(\frac{j 2\pi i_m}{n}\Big).$$

The polynomial coefficients h_t, t = 0,...,α, can be found from the product in (1). In a DSP
implementation [40,43], it is easier to find h_t by obtaining the inverse FFT of H(s).
By multiplying (2) by e(i_m)·(s_m)^r and then summing over m, we get

$$\sum_{t=0}^{\alpha} h_t \left( \sum_{m=1}^{\alpha} e(i_m)\, s_m^{\,r+t} \right) = 0. \qquad (3)$$

Since the inner summation is the DFT of the missing samples e(i_m), we get

$$\sum_{t=0}^{\alpha} h_t\, E(-r-t) = 0 \qquad (4)$$

for r = m/2+1, ..., n−m/2−1. Note that E(j) is the Fourier transform of e(i). The received
samples, d(i), can be thought of as the original oversampled signal, x(i), minus the missing
samples e(im). The error signal, e(i), is the difference between the corrupted and the original
oversampled signal and hence is equal to the values of the missing samples for i = im and is
equal to zero otherwise. In the frequency domain we have
E(j) = X(j) - D(j), j = 1 ... n
(5)
Since X(j) = 0 for j = 1 ... m/2 and j = n-m/2 ... n, then
E(j) = - D(j), j = 1 ... m/2 and j = n-m/2 ... n
(6)
The remaining values of E(j) can then be found recursively from (4).
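The recursion above can be sketched in a few lines of numpy. This is an illustration only, not the thesis implementation: it assumes a simplified spectral layout in which the DFT of the intact signal is zero on the last bins (rather than the split layout of (6)), and the function name is our own.

```python
import numpy as np

def bert_recover(d, lost, n_zeros):
    """Sketch of 1-D BERT.  `d` is the oversampled signal with the
    samples at positions `lost` set to zero; the DFT of the intact
    signal is assumed to vanish on the last `n_zeros` bins.
    Requires len(lost) <= n_zeros."""
    n, alpha = len(d), len(lost)
    # Error-locator polynomial H(s) = prod_m (s - exp(j*2*pi*i_m/n)).
    # np.poly lists coefficients from s^alpha down to s^0; h[0] = 1.
    h = np.poly(np.exp(2j * np.pi * np.array(lost) / n))
    # Known part of the error spectrum: E(j) = -D(j) on the zero bins.
    E = np.zeros(n, dtype=complex)
    E[n - n_zeros:] = -np.fft.fft(d)[n - n_zeros:]
    # The locator forces sum_u h[u] * E(j+u) = 0 for every j, so each
    # unknown bin follows from the alpha known bins above it.
    for j in range(n - n_zeros - 1, -1, -1):
        E[j] = -np.dot(h[1:], E[j + 1:j + alpha + 1]) / h[0]
    return d + np.fft.ifft(E)  # add the recovered error signal back
```

Fed a low-pass test signal with a short consecutive burst of erasures, this recovers the original to high accuracy; for long consecutive bursts the recursion becomes ill-conditioned, exactly as the noise sensitivity analysis predicts.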
5.2.1 Noise Sensitivity Analysis
The BERT is sensitive to quantization and additive noise and also to computational
truncation error. This issue has been briefly discussed in [39,43]. The sensitivity to
quantization and additive noise increases for large block sizes due to the accumulation of
round-off error. It can also be noticed that the sensitivity is more accentuated for a
consecutive loss of samples than for isolated losses. We start with a heuristic explanation of
the noise sensitivity and then give a mathematical analysis of the problem. Consecutive losses
produce a large dynamic range for the error locator polynomial H(s) as well as for the
polynomial coefficients h_t. The zeros of H(s) are all concentrated on one side of the unit
circle. If we assume n = 32 and α = 16, the values of the polynomial are zero for i = 1,...,16
and very large around i = 24. This statement can be verified by calculating the value of
H(s_24), which is equal to the product of all the 16 vectors emanating from the position
i = 24 on the unit circle. The minimum magnitude of these 16 vectors is √2 and the maximum
magnitude is 2. Thus, we expect a magnitude between 2^8 and 2^16; the actual value is
1.109×10^4. This produces a large dynamic range for the values of h_t, starting from 1 and
peaking at h_{α/2}, Fig. 1. Typical values of h_t for a stable reconstruction of isolated
losses are usually less than 1. For the case of consecutive losses, a large dynamic range for
the values of h_t can be created.
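The dynamic-range claim is easy to reproduce numerically. The snippet below (our own illustration; np.poly expands the product in (1)) compares the locator coefficients for 16 consecutive losses with those for 16 isolated, every-other-sample losses in a block of n = 32:

```python
import numpy as np

n = 32
# 16 consecutive lost positions vs. 16 isolated (every other) positions.
consecutive = np.arange(1, 17)
isolated = np.arange(0, n, 2)

# Coefficients of H(s) = prod_m (s - exp(j*2*pi*i_m/n)), as in (1).
h_cons = np.poly(np.exp(2j * np.pi * consecutive / n))
h_iso = np.poly(np.exp(2j * np.pi * isolated / n))

# Isolated losses give H(s) = s^16 - 1, so every coefficient has
# magnitude 0 or 1; consecutive losses produce a huge peak near
# the middle coefficient, as Fig. 1 shows.
print(np.abs(h_iso).max(), np.abs(h_cons).max())
```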
Fig. 1. The behaviour of h_t for the case of consecutive losses.
The recursive solution for difference equation (4) has sensitivity to the initial conditions (6).
Therefore, huge errors can be produced in the case of additive or quantization noise. From a
mathematical point of view, the sensitivity of an IIR filter increases dramatically whenever
the poles and/or zeros of the filter are clustered [50,51]. The analysis in [51] shows that any
small variations in the IIR filter coefficients can cause very large variations at the filter output
if the filter poles and/or zeros are clustered and very close to each other. Taking the unilateral
Z transform of the difference equation (4) with the initial conditions given in (6), we can
write
$$\hat{E}(z) = \frac{-\displaystyle\sum_{k=1}^{\alpha} \frac{h_{\alpha-k}}{h_{\alpha}} \sum_{r=1}^{k} E(r)\, z^{\,k-r+1}}{1 + \displaystyle\sum_{k=1}^{\alpha} \frac{h_{\alpha-k}}{h_{\alpha}}\, z^{\,k}}, \qquad (7)$$

where Ê(z) is the Z-transform of E(r).
The pole and/or zero clustering case in the canonical IIR filter is equivalent to the case of
bursty losses in our technique and hence the analysis in [50,51] offers an alternative
explanation for the sensitivity to noise of the proposed technique. From [51], we can define
the zeros of Ê(z) in (7) to be z_i + Δz_i, where i = 1,...,α, and Δz_i is the error in the
i-th zero due to the quantization and additive noise in E(r). Based on the calculation mentioned in [51], we
can write

$$\Delta z_i = -\sum_{r=1}^{\alpha} \left( \frac{\displaystyle\sum_{k=1}^{\alpha} \frac{h_{\alpha-k}}{h_{\alpha}}\, z_i^{\,k-r-1}}{\displaystyle\prod_{\substack{j=1 \\ j \neq i}}^{\alpha} (z_i - z_j)} \right) \Delta E(r) \qquad (8)$$
For the case of consecutive losses, the values of h_t increase to very high values around h_{α/2}
and hence the inner summation in (8) becomes very large. This means that any small changes
in the values of E(r) due to quantization or additive noise cause drastic variation in the
location of the zeros of (7) which change the behaviour of the IIR filter representing the
recursion in (7). We also observe from our simulation that the zeros of (7), which are
represented as poles in (8), are clustered together for the case of consecutive losses. This
observation further enhances the zero displacement in (8) in case of quantization or additive
noise.
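The denominator of (8) is a product of pairwise zero separations, so clustering directly inflates the sensitivity. A quick numeric check (our own illustration): for eight uniformly spaced zeros of z^8 − 1 the product over j ≠ i of |z_i − z_j| equals exactly 8, while for eight zeros crowded into a quarter of the unit circle it drops below 0.1.

```python
import numpy as np

def separation(roots, i):
    """prod over j != i of |roots[i] - roots[j]|, the denominator of (8)."""
    d = np.delete(roots - roots[i], i)
    return np.prod(np.abs(d))

uniform = np.exp(2j * np.pi * np.arange(8) / 8)        # zeros of z^8 - 1
clustered = np.exp(2j * np.pi * np.arange(1, 9) / 32)  # quarter circle

# For roots of z^n - 1 the product equals n (the derivative of z^n - 1
# at a root has magnitude n); clustered roots give a tiny product, so
# the zero displacements in (8) are amplified accordingly.
print(separation(uniform, 0), separation(clustered, 0))
```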
We can show that the poles of (7) are sensitive to the round-off errors in h_t by the following
equation

$$\Delta z_p = -\sum_{t=1}^{\alpha} \left( \frac{z_p^{\,t-1}}{\displaystyle\prod_{\substack{j=1 \\ j \neq p}}^{\alpha} (z_p - z_j)} \right) \Delta h_t \qquad (9)$$

From the above equation, a small variation in h_t causes a very large variation in Δz_p, which
represents the error in the poles of (7), for the case of consecutive losses.
5.2.2 The Implementation of Parallel and Series Structures
Since the zero-input response of an LTI system has the same form as its impulse response, we
can consider equation (7) as the transfer function of a system. The recovered signal is equal to
the impulse response of this system, and the block diagram of the system is shown in Fig. 2-a.
This analysis implies that other IIR filter structures, such as the parallel and series
structures, can also be used.
For the case of the parallel implementation, the transfer function of each branch is given as

$$H_k(z) = \frac{A_k}{p_k - z^{-1}} \qquad (10)$$

The block diagram is given in Fig. 2-b. In (10), the set (p_k, k = 1,...,α) contains the poles
of the system, which are equal to the zeros of the polynomial locator (1); the computation of
(A_k, k = 1,...,α) is trivial.
To compare the parallel and recursive methods, we use the SNR, defined as

$$\mathrm{SNR} = \frac{\sum_{k=0}^{N} |x(k)|^2}{\sum_{k=0}^{N} |x(k) - r(k)|^2} \qquad (11)$$

where x(k) is the input sample, r(k) is the reconstructed sample, and N is the total length of
the samples. For 50% consecutive losses and three block sizes, the SNRs are given in
Table 1. For the case of the series implementation, the transfer function used in each block
is given as
$$H_k(z) = \frac{z_k - z^{-1}}{p_k - z^{-1}} \qquad (12)$$
This block diagram is given in Fig. 2-c. In (12), the set (p_k, k = 1,...,α) contains the poles
of the system and is equal to the zeros of the polynomial locator (1). The set (z_k, k = 1,...,α)
contains the zeros of the system; their computation for large block sizes is difficult and
creates truncation error. The resulting SNRs are given in Table 1. The sensitivity to
quantization and additive noise of any error recovery technique is a very important issue
and should be studied thoroughly. The BERT was found to be sensitive to quantization and
additive noise for the case of consecutive losses due to the behaviour of the error locator
polynomial; this results in error accumulation during the recursions.
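The SNR of (11) is straightforward to compute; a minimal helper (our own naming) for real or complex reconstructions:

```python
import numpy as np

def snr(x, r):
    """Reconstruction SNR as in (11): signal energy over error energy."""
    return np.sum(np.abs(x) ** 2) / np.sum(np.abs(x - r) ** 2)
```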
SNR         Block Size 16   Block Size 32   Block Size 64
Recursion   251.922         171.551         4.308
Parallel    258.653         179.619         16.081
Series      156.047         80.572          2.288

Table 1: SNR for the recursive, parallel and series simulations in Mathcad.
The BERT is equivalent to IIR filters that can be implemented in recursive, parallel, or series
structures, as shown in Fig. 2.

Fig. 2. The block diagrams of the systems. (a) The original system. (b) The parallel system. (c) The series system.
5.3 An efficient method for burst error recovery of images
A new method to recover bursts of errors for images is proposed. This technique is an
extension of our previous work for 1-D signals [39-44]. The problem of signal reconstruction
with bursty losses is transformed into a two-dimensional difference equation by using a new
transform technique. The simulation results show the feasibility of this method, and the
sensitivity analysis shows that the proposed method is robust against additive and
quantization noise.
Pixels of an image signal can be lost in an erasure channel or due to cell losses in an ATM
network. Cell losses due to buffer overflow in ATM environments result in bursts of errors
rather than isolated errors. Also losing a few bits of a compressed image is equivalent to
bursty losses for the original image. Many schemes have been proposed to recover the
missing samples from the remaining samples of the signal. Some of these schemes implement
iterative and non-linear techniques in the recovery process [16 - 47], but none of them is able
to recover the missing samples when large bursts of samples are lost. Also, most of the
previous methods require considerable computational effort. In this section a new technique is
proposed to
recover these bursts of losses in an erasure channel by over-sampling the original image
before packetization. This technique is based on the generalisation of the one-dimensional
method presented in [39-44].
5.3.1 The Proposed Technique
Based on the one-dimensional algorithm presented in [39-44], a robust error recovery
technique is proposed for bursts of real and complex samples. For the sake of clarity, square
images are considered, but the technique can be also applied to rectangular images. Let us
assume that a 2-D signal such as an image is sampled at the Nyquist rate yielding a discrete
signal (x_org(i,k), i,k = 1,...,U). We use a new transform such that the kernel of the
transform is equal to

$$\exp\!\left(-j\,\frac{2\pi}{N}(m \cdot i \cdot q_1) - j\,\frac{2\pi}{N}(n \cdot k \cdot q_2)\right),$$

where q_1 and q_2 are positive integers relatively prime to N. It can be shown that this kernel
is a sorted kernel of the DFT. The transform of
the image is called Xorg(m,n) for m, n=1,..,U. For the sake of clarity, we shall call this
transform the Sorted Discrete Fourier Transform (SDFT). Xorg(m,n) can be expanded by
inserting α rows and columns of zeros around it to achieve a new SDFT matrix Xover(m,n),
m,n = 1,...,N = U + α. An inverse SDFT will lead to an oversampled version of the original
signal xover(i,k), i,k = 1,...,N, with N×N complex samples. This (U, N) code is capable of
correcting a block of size (α×α), where α = N − U.
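The zero-insertion step can be illustrated with plain numpy. In this sketch q_1 = q_2 = 1 (so the SDFT is the ordinary DFT) and, for simplicity, the zeros are appended at the high-frequency corner rather than distributed around the matrix; the sizes U = 2, N = 4 are our own toy choices.

```python
import numpy as np

rng = np.random.default_rng(1)
U, N = 2, 4                             # alpha = N - U = 2
x_org = rng.standard_normal((U, U))     # original U x U image
X_org = np.fft.fft2(x_org)

X_over = np.zeros((N, N), dtype=complex)
X_over[:U, :U] = X_org                  # insert alpha zero rows/columns
x_over = np.fft.ifft2(X_over)           # oversampled (complex) image

# Removing the inserted zeros and inverting recovers the original.
x_back = np.fft.ifft2(np.fft.fft2(x_over)[:U, :U])
```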
The missing part of the oversampled signal is denoted by e(i_m, k_n) = x(i_m, k_n), where
(i_m, k_n) specifies the positions of the lost pixels. The value of e(i,k) for any position
except (i_m, k_n) is zero. For the (α×α) lost pixels, the polynomial error locator is

$$H(s_i, p_k) = \prod_{m=1}^{\alpha}\left(s_i - \exp\Big(\frac{j 2\pi i_m q_1}{N}\Big)\right) \prod_{n=1}^{\alpha}\left(p_k - \exp\Big(\frac{j 2\pi k_n q_2}{N}\Big)\right) = \sum_{t=0}^{\alpha} \sum_{f=0}^{\alpha} h_{t,f}\, s_i^{\,t}\, p_k^{\,f}, \qquad (13)$$

$$H(s_m, p_n) = 0, \qquad m, n = 1, \ldots, \alpha, \qquad (14)$$

where

$$s_i = \exp\Big(\frac{j 2\pi i q_1}{N}\Big), \quad p_k = \exp\Big(\frac{j 2\pi k q_2}{N}\Big), \qquad i, k = 1, \ldots, N,$$

$$s_m = \exp\Big(\frac{j 2\pi i_m q_1}{N}\Big), \quad p_n = \exp\Big(\frac{j 2\pi k_n q_2}{N}\Big), \qquad m, n = 1, \ldots, \alpha.$$
The polynomial coefficients (h_{t,f}, t,f = 0,...,α) can be found from the product in (13). For
the DSP implementation, it is easier to find h_{t,f} by obtaining the inverse SDFT of H(s,p).
Multiplication of (14) by e(i_m, k_n)·(s_m)^r·(p_n)^d and summation over m, n yield

$$\sum_{t=0}^{\alpha} \sum_{f=0}^{\alpha} h_{t,f} \left( \sum_{m=1}^{\alpha} \sum_{n=1}^{\alpha} e(i_m, k_n)\, s_m^{\,r+t}\, p_n^{\,d+f} \right) = 0. \qquad (15)$$
The inner summation is the SDFT of the missing samples e(i_m, k_n), hence

$$\sum_{t=0}^{\alpha} \sum_{f=0}^{\alpha} h_{t,f}\, E(-r-t,\, -d-f) = 0, \qquad (16)$$

where r, d = α/2 + 1, ..., N − α/2 − 1 and E(r,d) is the SDFT of e(i,k).
The reason for defining the variable (s,p) to be a root of unity is to convert the inner
summation in (15) to the SDFT of the missing samples. The received pixels, d(i,k), can be
thought of as the original over-sampled signal, xover(i,k), minus the missing pixels e(im,kn).
The error signal, e(i,k), is the difference between the corrupted and the original over-sampled
signal. Hence, it is equal to the values of the missing pixels for (i,k) = (im,kn) and is equal to
zero otherwise. The corresponding relationship in the frequency domain is
E(i,k) = Xover(i,k) − D(i,k), i, k = 1,...,N. (17)
The 2-D difference equation (14) with non-zero initial condition can be solved by a recursive
method provided that the boundary conditions are given only in an L-shaped region [46].
This can be achieved by inserting zeros in the original SDFT matrix. Considering that the
Xorg(m,n), m,n = 1,...,U, is a bi-periodic SDFT matrix, the zeros are inserted around the
original matrix Xorg(m,n) as shown in Fig. 3. From now on, Xover(m,n) denotes this special
oversampled SDFT matrix. In (17), considering that Xover(i,k) = 0 for

$$k = 1, \ldots, N \ \text{ and } \ i \in \left\{1, \ldots, \frac{\alpha}{2}\right\} \cup \left\{N - \frac{\alpha}{2} + 1, \ldots, N\right\},$$

and for

$$i = \frac{\alpha}{2} + 1, \ldots, N - \frac{\alpha}{2} \ \text{ and } \ k \in \left\{1, \ldots, \frac{\alpha}{2}\right\} \cup \left\{N - \frac{\alpha}{2} + 1, \ldots, N\right\}, \qquad (18)$$

for the above i and k we have E(i,k) = −D(i,k).
The remaining values of E(r,d) can be found from (16) by the following recursion

$$E(r,d) = -\frac{1}{h_{0,0}} \sum_{\substack{t=0,\; f=0 \\ (t,f) \neq (0,0)}}^{\alpha} h_{t,f}\, E(r-t,\, d-f), \qquad (19)$$

where r, d = α/2 + 1, ..., N − α/2.
After solving (19), Xover(m,n) can be found from (17). By removing the inserted zeros in
Xover(m,n) and inverting this SDFT, the original image is then recovered.
To determine suitable values for q_1 and q_2 and their effect on the behaviour of the
algorithm, we first choose q_1 = q_2 = 1. For this choice the SDFT is identical to the DFT.
In many problems, the number of pixels is large, so that bursty losses produce a large
dynamic range in the error locator polynomial H(s,p) as well as in the polynomial coefficients
h_{t,f}. For example, Fig. 4 shows the behaviour of H(s,p) and h_{t,f} for an image of size
(64×64) and losses of size (32×32). The large dynamic ranges are due to the concentration of
zeros on one side of the semi-sphere. The H(s_i, p_j) values are zero for i,j = 1,...,32 and
very large around i,j = 48. This can be verified by calculating the value of H(s,p), which is
equal to the product of all the (32×32) vectors emanating from the position i,j = 48 on the
upper half of the unit sphere. Fig. 5 represents the location of the zeros of the error locator
polynomial for (32×32) bursty losses in an image of size (64×64).
The minimum magnitude of each of the (32×32) vectors is √2 and the maximum magnitude is 2.
Thus, the magnitude of the product vector is estimated to be between 2^{(32×32)×0.5} and
2^{(32×32)}. This creates a large dynamic range for the values of h_{t,f}, starting from 1 for
h_{0,0} and peaking at h_{α/2,α/2} as shown in Fig. 4. For the case of isolated losses (every
other sample lost), the locations of the zeros of the error locator polynomial are
symmetrically distributed on the unit sphere as shown in Fig. 6, and hence the H(s_i, p_l) and
h_{t,f} values have a very small dynamic range. In fact, the H(s_i, p_l) values are periodic
while the h_{t,f} values are all zero except for t,f = 0, α, which are equal to 1.
For the case of bursty losses, h_{t,f} has a large dynamic range that creates a large
computational error in (19). Therefore, the implementation of the algorithm for a large block
size of losses is impossible. Since the coefficients h_{t,f} have a very small dynamic range in
the case of isolated losses, it would be beneficial to transform the bursty losses into isolated
losses by proper choices of q_1 and q_2. Therefore, to determine the coefficients q_1 and q_2,
two points should be considered. Firstly, q_1 and q_2 have to be relatively prime to N.
Secondly, based on the size of the block losses, the q_1 and q_2 values must be chosen such
that the locations of the zeros of the error locator polynomial are approximately distributed
symmetrically around the unit sphere. For example, for an even N, when the block size of the
losses is equal to (N/2 × N/2), the best choice for q_1 and q_2 is (N/2 − 1). The values of the
polynomial error locator H(s,p) and the h_{t,f} values for N = 512, α = 256 and
q_1 = q_2 = 255 are given in Figs. 7 and 8.
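The effect of the choice q = N/2 − 1 on a contiguous burst can be seen directly on the indices: the kernel maps a lost position i to i·q mod N, so a solid block of positions is scattered almost uniformly around the circle. A toy check with N = 16 (our own miniature of the N = 512 case above):

```python
import numpy as np

N = 16
q = N // 2 - 1                    # q = 7, relatively prime to N
burst = np.arange(N // 2)         # a contiguous block of lost positions

# With q = 1 the locator zeros sit on a solid half-arc; with q = 7 the
# mapped positions i*q mod N spread around the circle.  For a perfectly
# uniform spread (every other position of 16) the locator degenerates
# to s^8 - 1, whose coefficients have the smallest possible range.
mapped = np.sort((q * burst) % N)
print(mapped)

uniform = np.arange(0, N, 2)
h_uniform = np.poly(np.exp(2j * np.pi * uniform / N))
```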
The SDFT is actually derived from the DFT, so the fast algorithm can still be used. The SDFT
can be computed by a DFT followed by a sorting of the elements:

$$a^{SDFT}_{m,n} = a^{DFT}_{\mathrm{mod}(q_1 m,\, N),\ \mathrm{mod}(q_2 n,\, N)}, \qquad m, n = 1, \ldots, N,$$

where a^{SDFT} is an element of the SDFT transform matrix and a^{DFT} is an element of the DFT
transform matrix. The inverse transform of the frequency coefficients is equivalent to the
inverse sorting and the inverse DFT, respectively. The small dynamic range of the coefficients
h_{t,f}, shown in Fig. 8, justifies this algorithm for a large image size and a large number of
losses.
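The sorting relation above translates directly into numpy index arithmetic. A sketch (0-based indices instead of the 1-based ones used in the text; the function names are ours):

```python
import numpy as np

def sdft2(x, q1, q2):
    """2-D SDFT: an ordinary 2-D DFT followed by the index sorting
    a_SDFT[m, n] = a_DFT[mod(q1*m, N), mod(q2*n, N)].  q1, q2 must be
    relatively prime to N so that the mapping is a permutation."""
    N = x.shape[0]
    rows = (q1 * np.arange(N)) % N
    cols = (q2 * np.arange(N)) % N
    return np.fft.fft2(x)[np.ix_(rows, cols)]

def isdft2(A, q1, q2):
    """Inverse SDFT: undo the sorting, then apply the inverse 2-D DFT."""
    N = A.shape[0]
    rows = (q1 * np.arange(N)) % N
    cols = (q2 * np.arange(N)) % N
    B = np.empty_like(A)
    B[np.ix_(rows, cols)] = A           # scatter back = inverse sorting
    return np.fft.ifft2(B)
```

For q_1 = q_2 = 1 both functions reduce to np.fft.fft2 and np.fft.ifft2.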
The proposed technique is very efficient in recovering bursts of errors, with a performance
better than other techniques such as the method of Conjugate Gradient (CG) using adaptively
chosen weights and block Toeplitz matrices [16-25], which is referred to as the ABC technique
in [16]. An exact review of the computational complexity for the 1-D case is given in [44]; it
shows that the CG technique requires more than 10 times the number of multiplications and
additions required by the proposed technique. However, to recover losses of size (α×α), the
proposed technique needs more added zeros in the frequency domain compared to the CG method.
In the best situation, the ratio between the recovered pixels and the added zeros is 1/3.
5.3.2 Simulation
A (256×256) image is used for the simulation of the algorithm, Fig. 9-a. The image is
transformed using the SDFT. A number of zeros are inserted in the rows and columns of the
SDFT matrix, and a new matrix of size (512×512) is derived. By taking an inverse SDFT,
the oversampled image is produced, Fig. 9-b. This image does not appear to have any
similarity with the original image. According to the algorithm, the block size of the losses is
limited to (256×256) in this case. Therefore, a block of this size is erased from the image as
shown in Fig. 9-d. The corresponding image of Fig. 9-d prior to the recovery is shown in Fig.
9-c. The oversampled signal after reconstruction and its corresponding image are shown in
Figs. 9-f and 9-e, respectively. The Mean Squared Error for this simulation is equal to
1.09×10^{-15}.
None of the previous methods [4-20] is able to recover a block of size (N/2×N/2).
5.3.3 Noise Sensitivity
The 1-D method based on the DFT proves to be sensitive to additive and quantization noise
[44,49]. No record of such a study for 2-D signals has been reported.
For our proposed method, the sensitivity of the algorithm is simulated as follows. The
over-sampled image is quantized to 8 bits before the transmission. The Signal to Noise Ratio
is defined as

$$\mathrm{SNR} = 10 \log\!\left( \frac{\sum_i \sum_k |x(i,k)|^2}{\sum_i \sum_k |x(i,k) - y(i,k)|^2} \right), \qquad (20)$$
where x(i,k) is the original image and y(i,k) is the quantized image. The corresponding SNRs
before the transmission and after reconstruction of this simulation are equal to 38.2 dB and
35.02 dB, respectively. A white random noise of uniform distribution with an approximate
amplitude of 1/20 of the transmitted image (SNR=28.6dB) is added to the over-sampled
image. The SNR after the recovery of the original image is equal to 28.193dB. The result for
this case is shown in Fig. 10. The SNR values show that the new method is robust against
additive and quantization noise. This can be attributed to the low dynamic range of the
coefficients h_{t,f}, which results in a small amount of accumulated round-off error in the
solution of the difference equation (16). The solution for this difference equation is equivalent to the
zero input response of an IIR filter with the initial conditions given in (18). The 2-D
unilateral Z-transform of this IIR filter has poles identical to the zeros of the error locator
polynomial and the initial conditions affect only the zeros of the filter. The solution of (19) is
thus the inverse transform of this 2-D Z-transform, which has the same shape as the eigen
functions of the impulse response of the IIR filter.
For q1 = q2 = 1, taking the 2-D unilateral Z-transform of the difference equation (16) with
the initial conditions given in (18), we can write

$$\hat{E}(z_1, z_2) = \frac{\displaystyle\sum_{k_1=0}^{\alpha} \sum_{k_2=0}^{\alpha} \sum_{r=1}^{k_1} \sum_{d=1}^{k_2} h_{\alpha-k_1,\,\alpha-k_2}\, E(r,d)\, z_1^{\,k_1-r}\, z_2^{\,k_2-d}}{\displaystyle\sum_{k_1=0}^{\alpha} \sum_{k_2=0}^{\alpha} h_{\alpha-k_1,\,\alpha-k_2}\, z_1^{\,k_1}\, z_2^{\,k_2}}, \qquad (21)$$

where Ê(z_1, z_2) is the Z-transform of E(r,d).
Based on the analysis in [50,51], an alternative explanation for the sensitivity to noise of
the proposed technique can be given. From [51], we can define the zeros of Ê(z_1, z_2) in (21)
to be (z_{1l} + Δz_{1l}, z_{2f} + Δz_{2f}), where l,f = 1,...,α, and (Δz_{1l}, Δz_{2f}) is the
error in the (l,f)-th zero due to the quantization and additive noise in E(r,d). The numerator
of (21) is

$$A(z_1, z_2) = \sum_{k_1=0}^{\alpha} \sum_{k_2=0}^{\alpha} \sum_{r=1}^{k_1} \sum_{d=1}^{k_2} h_{\alpha-k_1,\,\alpha-k_2}\, E(r,d)\, z_1^{\,k_1-r}\, z_2^{\,k_2-d} = \prod_{i=1}^{\alpha} (1 - z_{1i}\, z_1^{-1}) \prod_{j=1}^{\alpha} (1 - z_{2j}\, z_2^{-1}). \qquad (22)$$

The error in the zeros of Ê(z_1, z_2), (Δz_{1l}, Δz_{2f}), can be expressed in terms of the
error in E(r,d) as
$$(\Delta z_{1l}, \Delta z_{2f}) = \sum_{r=1}^{\alpha} \sum_{d=1}^{\alpha} \frac{\partial (z_{1l}, z_{2f})}{\partial E(r,d)}\, \Delta E(r,d). \qquad (23)$$
Using the two forms of A(z_1, z_2) in (22) and the fact that

$$\frac{\partial A(z_1, z_2)}{\partial (z_{1i}, z_{2j})}\bigg|_{(z_1,z_2)=(z_{1l},\,z_{2f})} \cdot \frac{\partial (z_{1i}, z_{2j})}{\partial E(r,d)} = \frac{\partial A(z_1, z_2)}{\partial E(r,d)}\bigg|_{(z_1,z_2)=(z_{1l},\,z_{2f})}, \qquad (24)$$
we get

$$\frac{\partial (z_{1l}, z_{2f})}{\partial E(r,d)} = -\frac{\displaystyle\sum_{k_1=1}^{\alpha} \sum_{k_2=1}^{\alpha} \frac{h_{\alpha-k_1,\,\alpha-k_2}}{h_{\alpha,\alpha}}\, z_{1l}^{\,k_1-r-1}\, z_{2f}^{\,k_2-d-1}}{\displaystyle\prod_{\substack{i=1 \\ i \neq l}}^{\alpha} (z_{1l} - z_{1i}) \prod_{\substack{j=1 \\ j \neq f}}^{\alpha} (z_{2f} - z_{2j})}, \qquad (25)$$

for (l,f) = 1,...,α and (r,d) = 1,...,α. From (23) and (25), the error in the zeros of Ê(z_1, z_2),
(z1l , z 2 f ) , can be written as
   h

     k 1, k 2  z1k 1r  1  z 2k 2d  1

l
f

 

h
k 11 k 2 1
 ,
 E ( r, d ) .
(z1l , z 2 f ) =  


r 1 d 1 

( z1l  z1i )   ( z 2 f  z 2 j )



i 1,
j 1,
i l
j f


(26)
As mentioned earlier, for the case of consecutive losses (q1 = q2 = 1), the values of h_{t,f}
increase to very high values around h_{α/2,α/2} and hence the inner summation in (26) becomes
very large. This means that any small changes in the values of E(r,d) due to the quantization
or additive noise cause drastic variations in the locations of the zeros of (21) which change
the behaviour of the IIR filter representing the recursion in (16). We also observed from our
simulations that the zeros of (21), which are represented as poles in (26), are clustered
together for the case of consecutive losses. This observation further enhances the zero
displacement in (26) in case of additive or quantization noise.
Similarly, we can show that the sensitivity of the poles of (21) to the round-off errors in
h_{t,f} can be written as

$$(\Delta z_{1p}, \Delta z_{2q}) = -\sum_{t=1}^{\alpha} \sum_{f=1}^{\alpha} \left( \frac{z_{1p}^{\,t-1}\, z_{2q}^{\,f-1}}{\displaystyle\prod_{\substack{i=1 \\ i \neq p}}^{\alpha} (z_{1p} - z_{1i}) \prod_{\substack{j=1 \\ j \neq q}}^{\alpha} (z_{2q} - z_{2j})} \right) \Delta h_{t,f}. \qquad (27)$$
Suitable coefficients q_1 and q_2 are chosen to separate the poles of (21) and to obtain a
small dynamic range for the coefficients h_{t,f}. This reduces the sensitivity of the proposed
method to additive and quantization noise in the values of E(r,d) and to the round-off errors
in h_{t,f}. The simulations confirm this.
5.4 Conclusion
We have shown that the new algorithm has three advantages. Firstly, it is ideally suited to
recovering the missing pixels for large blocks of bursty errors. Secondly, in terms of
complexity, it is simpler than other techniques. Thirdly, it is very robust in correcting
bursts of errors with respect to additive and quantization noise. The disadvantage of this
method is that, in the best situation, the ratio between the recovered pixels and the added
zeros is 1/3. Application of this method to compressed images and the extension of the method
to recover randomly distributed pixel losses are currently under investigation.