Iterative Source- and Channel Decoding

Speaker: Inga Trusova
Advisor: Joachim Hagenauer

Content
1. Introduction
2. System Model
3. Joint Source-Channel Decoding (JSCD)
4. Iterative Source-Channel Decoding (ISCD)
5. Simulation Results
6. Conclusions
Introduction
PROBLEMS IN PRACTICE:
• The block lengths for source and channel coding are limited
• The data bits issued by a source encoder contain residual redundancies
• "Perfect" channel codes would require infinite block lengths
• The output bits of a practical channel decoder are not error-free

Hence, the application of the separation theorem of information theory is not justified in practice!
Introduction
GOAL:
Improve the performance of communication systems without sacrificing resources.
SOLUTION:
Joint source-channel coding & decoding (JSCCD)
• Several autocorrelated source signals are considered
• The source samples are
1. quantized,
2. their indices appropriately mapped to bit vectors,
3. the bits interleaved and channel-encoded.
Introduction
AREA OF INTEREST:
Joint source-channel decoding (JSCD)
Key idea of JSCD:
Exploit the residual redundancies in the data bits in order to improve the overall quality of the transmission.
The turbo principle (iterative decoding between component decoders) is a general scheme, which we apply here to JSCD.
System Model
Initial data
• An AWGN channel is assumed for transmission
• A set of input source signals has to be transmitted at each time index k
• Only one of the inputs, the samples $x_k$, is considered
• The $x_k$ are quantized, which yields the bit vectors

$\mathbf{I}_k = (I_{k,1}, \dots, I_{k,n}, \dots, I_{k,N}) \in \mathbb{L}$    (1)

with $I_{k,n} \in \{0,1\}$ and $\mathbb{L} = \{0,1\}^N$ denoting the set of all possible N-bit vectors.
System Model
Figure 1: System Model
System Model
Since coherently detected binary modulation (phase-shift keying) is assumed, the conditional pdf of the received value $y_{k,n}$ at the channel output, given that the code bit $v_{k,n} \in \{0,1\}$ has been transmitted, is given by

$p_c(y_{k,n} \mid v_{k,n}) = \frac{1}{\sqrt{2\pi\sigma_n^2}} \, \exp\!\left( -\frac{(y_{k,n} - (1 - 2 v_{k,n}))^2}{2\sigma_n^2} \right)$    (3)
System Model
where $\sigma_n^2 = \frac{N_0}{2 E_s}$ is the noise variance, $E_s$ is the energy used to transmit each channel-code bit, and $N_0$ is the one-sided power spectral density of the channel noise.
Note:
The joint conditional pdf for a channel word $\underline{y}_k \in \mathbb{R}^{N_v}$ to be received, given that the codeword $\underline{v}_k$ has been transmitted, is the product of (3) over all code bits, since the channel noise is statistically independent.
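For illustration, here is a minimal Python sketch of (3) and of the product rule for the joint pdf. The relation $\sigma_n^2 = N_0/(2E_s)$ is taken from the slide; the concrete Es/N0 value and all function names are assumptions of the example.

```python
import numpy as np

def channel_pdf(y, v, sigma_n):
    """Gaussian pdf (3): BPSK maps the code bit v in {0,1} to the symbol 1 - 2v."""
    s = 1.0 - 2.0 * v
    return np.exp(-(y - s) ** 2 / (2.0 * sigma_n ** 2)) / np.sqrt(2.0 * np.pi * sigma_n ** 2)

def joint_channel_pdf(y_word, v_word, sigma_n):
    """Joint pdf of a received channel word: product of (3) over all code bits
    (valid because the AWGN samples are statistically independent)."""
    return float(np.prod(channel_pdf(np.asarray(y_word), np.asarray(v_word), sigma_n)))

# Example at Es/N0 = 2 dB, using sigma_n^2 = N0 / (2 Es)
es_n0 = 10 ** (2.0 / 10.0)
sigma_n = np.sqrt(1.0 / (2.0 * es_n0))
v = np.array([0, 1, 1, 0])                           # transmitted code bits
y = (1.0 - 2.0 * v) + sigma_n * np.random.randn(4)   # noisy BPSK observations
print(joint_channel_pdf(y, v, sigma_n))
```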
System Model
IF the samples $x_k$ are autocorrelated, THEN consecutive bitvectors $\mathbf{I}_{k-1}, \mathbf{I}_k$ show dependencies AND are modeled by a first-order stationary Markov process, which is described by the transition probabilities $P(\mathbf{I}_k \mid \mathbf{I}_{k-1})$.
ASSUMPTIONS
• The transition probabilities and the probability distributions of the bitvectors are known
• The bitvectors $\mathbf{I}_k$ are independent of all other data that is transmitted in parallel by the bitvectors $\mathbf{U}_k$
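The slides assume these quantities are known; in practice they are typically estimated offline from training data. A minimal sketch under that assumption (the correlated test source and the uniform 3-bit quantizer are placeholders, not the setup used later in the simulations):

```python
import numpy as np

def estimate_transitions(indices, num_levels):
    """Estimate the Markov transition matrix P(I_k | I_{k-1}) from a
    training sequence of quantizer indices (rows sum to one)."""
    counts = np.full((num_levels, num_levels), 1e-12)  # tiny floor avoids empty rows
    for prev, cur in zip(indices[:-1], indices[1:]):
        counts[prev, cur] += 1.0
    return counts / counts.sum(axis=1, keepdims=True)

# Hypothetical training data: correlated Gaussian source, uniform 3-bit quantizer
a = 0.9
x = np.zeros(100_000)
for k in range(1, len(x)):
    x[k] = a * x[k - 1] + np.sqrt(1.0 - a * a) * np.random.randn()
idx = np.digitize(x, np.linspace(-2.5, 2.5, 7))  # 7 thresholds -> indices 0..7
P_trans = estimate_transitions(idx, 8)           # P_trans[i, j] ~ P(I_k=j | I_{k-1}=i)
```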
Joint Source-Channel Decoding
GOAL:
Minimize the distortion of the decoder output signal $\hat{x}_k$ (JSCD for a fixed transmitter).
The optimization criterion is given by the conditional expectation of the mean-square error:

$D = E_{\mathbf{I}_k \mid \underline{y}^k}\!\left\{ (x_k - \hat{x}(\mathbf{I}_k))^2 \mid \underline{y}^k \right\} \rightarrow \min$    (4)
Joint Source-Channel Decoding
In (4), $\hat{x}(\mathbf{I}_k)$ is the quantizer reproduction value corresponding to the bitvector $\mathbf{I}_k$, which is used by the source encoder to quantize $x_k$, and

$\underline{y}^k = (\underline{y}_0, \underline{y}_1, \dots, \underline{y}_k)$    (5)

is the set of channel output words received up to the current time k.
$D \rightarrow \min$ results in the minimum mean-square estimator

$\hat{x}_k = E_{\mathbf{I}_k \mid \underline{y}^k}\{\hat{x}(\mathbf{I}_k)\} = \sum_{\mathbf{I}_k \in \mathbb{L}} \hat{x}(\mathbf{I}_k) \, P(\mathbf{I}_k \mid \underline{y}^k)$    (6)
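A minimal sketch of the MMSE estimator (6), assuming the bitvector APPs and the quantizer reproduction values are already available as arrays (all names are illustrative):

```python
import numpy as np

def mmse_estimate(app, reproduction):
    """MMSE estimator (6): expectation of the quantizer reproduction
    values under the bitvector a-posteriori probabilities."""
    app = np.asarray(app) / np.sum(app)   # ensure the APPs are normalized
    return float(np.dot(app, reproduction))

# Example for a 3-bit quantizer (2^3 = 8 reproduction values, placeholder codebook)
repro = np.linspace(-2.1, 2.1, 8)
app = np.array([0.01, 0.02, 0.10, 0.60, 0.20, 0.05, 0.01, 0.01])
x_hat = mmse_estimate(app, repro)
```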
Joint Source-Channel Decoding
The bitvector a-posteriori probabilities (APPs) are, using the Bayes rule, given by

$P(\mathbf{I}_k \mid \underline{y}^k) = B_k \, P(\mathbf{I}_k \mid \underline{y}^{k-1}) \, p(\underline{y}_k \mid \mathbf{I}_k, \underline{y}^{k-1})$    (7)

where $P(\mathbf{I}_k \mid \underline{y}^{k-1})$ is the bitvector a-priori probability and $B_k = p(\underline{y}^{k-1}) / p(\underline{y}^k)$ is a normalizing constant. Since

$P(\mathbf{I}_k \mid \mathbf{I}_{k-1}, \underline{y}^{k-1}) = P(\mathbf{I}_k \mid \mathbf{I}_{k-1})$
Joint Source-Channel Decoding
the a-priori probabilities are given by

$P(\mathbf{I}_k \mid \underline{y}^{k-1}) = \sum_{\mathbf{I}_{k-1} \in \mathbb{L}} P(\mathbf{I}_k, \mathbf{I}_{k-1} \mid \underline{y}^{k-1}) = \sum_{\mathbf{I}_{k-1} \in \mathbb{L}} P(\mathbf{I}_k \mid \mathbf{I}_{k-1}) \, P(\mathbf{I}_{k-1} \mid \underline{y}^{k-1})$    (8)

At k = 0 the unconditional probability distribution is used instead of the "old" APPs.
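Equation (8) is simply a vector-matrix product with the Markov transition matrix; a minimal sketch, assuming the bitvector probabilities are stored as arrays indexed by the integer value of the bitvector:

```python
import numpy as np

def predict_prior(P_trans, app_prev):
    """A-priori prediction (8): propagate the previous bitvector APPs through
    the transition matrix P_trans[i, j] = P(I_k = j | I_{k-1} = i)."""
    return np.asarray(app_prev) @ np.asarray(P_trans)

# At k = 0, the unconditional (stationary) distribution replaces the "old" APPs.
```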
Drawback:
The term $p(\underline{y}_k \mid \mathbf{I}_k, \underline{y}^{k-1})$ in (7) is very hard to compute analytically.
Iterative Source-Channel Decoding
Goal:
Find a more feasible, less complex way to compute at least a good approximation.
Solution:
Iterative Source-Channel Decoding (ISCD)
We write:

$p(\underline{y}_k \mid \mathbf{I}_k, \underline{y}^{k-1}) = \frac{p(\underline{y}_k, \mathbf{I}_k, \underline{y}^{k-1})}{p(\mathbf{I}_k, \underline{y}^{k-1})}$    (9)
Iterative Source-Channel Decoding
Now the bitvector probability densities are approximated by the product over the corresponding bit probability densities:

$p(\underline{y}_k \mid \mathbf{I}_k, \underline{y}^{k-1}) \approx \frac{\prod_{n=1}^{N} p(\underline{y}_k, I_{k,n}, \underline{y}^{k-1})}{\prod_{n=1}^{N} p(I_{k,n}, \underline{y}^{k-1})}$    (10)

with the bits $I_{k,n} \in \{0,1\}$.
Iterative Source-Channel Decoding
If we insert (10) into formula (7), which defines the bitvector a-posteriori probabilities, we obtain:

$P(\mathbf{I}_k \mid \underline{y}^k) \approx P(\mathbf{I}_k \mid \underline{y}^{k-1}) \, \prod_{n=1}^{N} \frac{P(I_{k,n} \mid \underline{y}^k)}{P(I_{k,n} \mid \underline{y}^{k-1})}$    (11)

The bit a-posteriori probabilities $P(I_{k,n} \mid \underline{y}^k)$ can be efficiently computed by the symbol-by-symbol APP algorithm for a binary convolutional channel code with a small number of states.
Iterative Source-Channel Decoding
Note:
ALL received channel words up to the current time k are used for the computation of the bit APPs, because the bit-based a-priori information

$P(I_{k,n} \mid \underline{y}^{k-1}) = \sum_{\mathbf{I}_k \in \mathbb{L} \mid I_{k,n}} P(\mathbf{I}_k \mid \underline{y}^{k-1})$    (12)

for a specific bit $I_{k,n} \in \{0,1\}$ (the sum runs over all bitvectors $\mathbf{I}_k$ that contain the given bit value) is derived from the previous bitvector APPs and thus, through the recursion (8), carries the information from all previously received channel words.
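The marginalization in (12) is a sum of bitvector probabilities over all vectors in which bit n takes the given value. A minimal sketch, with the $2^N$ probabilities indexed by the integer value of the bitvector (MSB first; this indexing convention is an assumption of the example):

```python
import numpy as np

def bit_marginal(p_vec, n, bit, N):
    """Marginalization (12)/(14): sum the bitvector probabilities p_vec over
    all N-bit vectors whose n-th bit (0-based, MSB first) equals `bit`."""
    mask = 1 << (N - 1 - n)
    idx = [i for i in range(2 ** N) if ((i & mask) != 0) == bool(bit)]
    return float(np.sum(np.asarray(p_vec)[idx]))

# Example: N = 3, uniform prior -> every bit marginal is 0.5
print(bit_marginal(np.full(8, 1.0 / 8.0), n=0, bit=1, N=3))
```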
Iterative Source-Channel Decoding
Let us interpret the fraction in (11) as the extrinsic information that we get from the channel decoder:

$P(\mathbf{I}_k \mid \underline{y}^k) \approx P(\mathbf{I}_k \mid \underline{y}^{k-1}) \cdot \left[ \prod_{n=1}^{N} P_e^{(C)}(I_{k,n}) \right]$    (13)

Note:
The superscript "(C)" is used to indicate that $P_e^{(C)}(I_{k,n})$ is the extrinsic information produced by the channel decoder.
Iterative Source-Channel Decoding
As a result we have:
A modified channel term (between the brackets) that includes the reliabilities of the received bits and, additionally, the information derived by the APP algorithm from the channel code.
Drawback:
The bitvector APPs are only approximations of the optimal values, since the bit a-priori information does not contain the mutual dependencies of the bits within the bitvectors.
Iterative Source-Channel Decoding
How can the accuracy of the bitvector APPs be improved?
Idea (taken from the iterative decoding of turbo codes):
From the intermediate results for the bitvector APPs (13), new bit APPs are computed by

$P^{(S)}(I_{k,n} \mid \underline{y}^k) = \sum_{\mathbf{I}_k \in \mathbb{L} \mid I_{k,n}} P(\mathbf{I}_k \mid \underline{y}^k)$    (14)
Iterative Source-Channel Decoding
The bit extrinsic information from the source decoder is

$P_e^{(S)}(I_{k,n}) = \frac{P^{(S)}(I_{k,n} \mid \underline{y}^k)}{P_e^{(C)}(I_{k,n})}$    (15)

Note:
The computed extrinsic information is used as the new a-priori information for the second and further runs of the channel decoder.
Iterative Source-Channel Decoding
SUMMARY OF ISCD:
Step 1:
At each time k, compute the initial bitvector a-priori probabilities by

$P(\mathbf{I}_k \mid \underline{y}^{k-1}) = \sum_{\mathbf{I}_{k-1} \in \mathbb{L}} P(\mathbf{I}_k, \mathbf{I}_{k-1} \mid \underline{y}^{k-1}) = \sum_{\mathbf{I}_{k-1} \in \mathbb{L}} P(\mathbf{I}_k \mid \mathbf{I}_{k-1}) \, P(\mathbf{I}_{k-1} \mid \underline{y}^{k-1})$    (8)
Iterative Source-Channel Decoding
Step 2:
Use the results from Step 1 to compute the initial bit a-priori information for the APP channel decoder:

$P(I_{k,n} \mid \underline{y}^{k-1}) = \sum_{\mathbf{I}_k \in \mathbb{L} \mid I_{k,n}} P(\mathbf{I}_k \mid \underline{y}^{k-1})$    (12)

Step 3:
Perform APP channel decoding.
Iterative Source-Channel Decoding
Step 4:
Perform source decoding by inserting the extrinsic bit information from APP channel decoding into

$P(\mathbf{I}_k \mid \underline{y}^k) \approx P(\mathbf{I}_k \mid \underline{y}^{k-1}) \cdot \left[ \prod_{n=1}^{N} P_e^{(C)}(I_{k,n}) \right]$    (13)

to compute new (temporary) bitvector APPs.
Step 5:
If this is the last iteration, proceed with Step 8; otherwise continue with Step 6.
Iterative Source-Channel Decoding
Step 6:
Use the bitvector APPs of Step 4 in

$P^{(S)}(I_{k,n} \mid \underline{y}^k) = \sum_{\mathbf{I}_k \in \mathbb{L} \mid I_{k,n}} P(\mathbf{I}_k \mid \underline{y}^k)$    (14)

$P_e^{(S)}(I_{k,n}) = \frac{P^{(S)}(I_{k,n} \mid \underline{y}^k)}{P_e^{(C)}(I_{k,n})}$    (15)

to compute the extrinsic bit information from the source redundancies.
Iterative Source-Channel Decoding
Step 7:
Set the extrinsic bit information from Step 6 equal to the new bit a-priori information for the APP channel decoder in the next iteration; proceed with Step 3.
Step 8:
Estimate the receiver output signals by

$\hat{x}_k = E_{\mathbf{I}_k \mid \underline{y}^k}\{\hat{x}(\mathbf{I}_k)\} = \sum_{\mathbf{I}_k \in \mathbb{L}} \hat{x}(\mathbf{I}_k) \, P(\mathbf{I}_k \mid \underline{y}^k)$    (6)

using the bitvector APPs from Step 4.
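To tie Steps 1-8 together, here is a compact sketch of one decoding step. The symbol-by-symbol APP channel decoder is abstracted behind a placeholder function (assumed to return extrinsic bit probabilities of shape (N, 2)), and the helper bit_marginal from the sketch after (12) is reused; the loop structure follows the summary, but all names are illustrative:

```python
import numpy as np

def iscd_step(y_k, prior_vec, channel_app_decoder, repro, N, num_iter=3):
    """One time step of ISCD. `prior_vec` holds the bitvector a-priori
    probabilities P(I_k | y^{k-1}) from the Markov prediction (8)."""
    # Step 2: initial bit a-priori information (12) by marginalization
    bit_prior = np.array([[bit_marginal(prior_vec, n, b, N) for b in (0, 1)]
                          for n in range(N)])
    app_vec = np.asarray(prior_vec, dtype=float)
    for it in range(num_iter):
        # Step 3: APP channel decoding -> extrinsic bit probabilities P_e^(C)
        pe_c = channel_app_decoder(y_k, bit_prior)
        # Step 4: temporary bitvector APPs via (13)
        app_vec = np.asarray(prior_vec, dtype=float).copy()
        for i in range(2 ** N):
            bits = [(i >> (N - 1 - n)) & 1 for n in range(N)]
            app_vec[i] *= np.prod([pe_c[n, b] for n, b in enumerate(bits)])
        app_vec /= app_vec.sum()
        # Step 5: after the last iteration, proceed to Step 8
        if it == num_iter - 1:
            break
        # Step 6: new bit APPs (14) and source extrinsic information (15)
        p_s = np.array([[bit_marginal(app_vec, n, b, N) for b in (0, 1)]
                        for n in range(N)])
        pe_s = p_s / pe_c
        # Step 7: the source extrinsic info becomes the channel decoder's prior
        bit_prior = pe_s / pe_s.sum(axis=1, keepdims=True)
    # Step 8: MMSE output estimate (6)
    return float(np.dot(app_vec, repro)), app_vec
```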
Iterative Source-Channel Decoding
Figure 2: Iterative Source-Channel Decoding according to the Turbo Principle
Iterative Source-Channel Decoding

The computation of the bitvector APPs by (13) requires bit probabilities, which can be computed from the output L-values

$L_e^{(C)}(I_{k,n}) = \log \frac{P_e^{(C)}(I_{k,n} = 0)}{P_e^{(C)}(I_{k,n} = 1)}$    (16)

with the inversion

$P_e^{(C)}(I_{k,n}) = \frac{e^{-L_e^{(C)}(I_{k,n}) \cdot I_{k,n}}}{1 + e^{-L_e^{(C)}(I_{k,n})}}$    (17)

The L-values $L_e^{(C)}(I_{k,n})$ are fixed real numbers.
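A minimal sketch of the conversion pair (16)/(17):

```python
import numpy as np

def to_l_value(p0, p1):
    """L-value (16): log-ratio of the extrinsic bit probabilities."""
    return np.log(p0 / p1)

def from_l_value(L, bit):
    """Inversion (17): probability that the bit takes the value `bit` in {0, 1}."""
    return np.exp(-L * bit) / (1.0 + np.exp(-L))

L = to_l_value(0.9, 0.1)                        # strongly favors bit = 0
print(from_l_value(L, 0), from_l_value(L, 1))   # 0.9 0.1 (round trip)
```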
Iterative Source-Channel Decoding

SIMPLIFICATION 1:
Reminder: formula (13) for the bitvector APP computation:

$P(\mathbf{I}_k \mid \underline{y}^k) \approx P(\mathbf{I}_k \mid \underline{y}^{k-1}) \cdot \left[ \prod_{n=1}^{N} P_e^{(C)}(I_{k,n}) \right]$    (13)

Let us insert (17) into (13) and turn the product over the exponential functions into a summation in the exponent:

$P(\mathbf{I}_k \mid \underline{y}^k) = A_k \, P(\mathbf{I}_k \mid \underline{y}^{k-1}) \, \exp\!\left( -\sum_{n=1}^{N} L_e^{(C)}(I_{k,n}) \, I_{k,n} \right)$    (18)
Iterative Source-Channel Decoding

Benefits of using (18) instead of (13):
• The normalizing constant $A_k$ does not depend on the variable $I_{k,n}$
• The L-values from the APP channel decoder can be integrated into the optimal-estimation algorithm for APP source decoding without converting the individual L-values back to bit probabilities
• Strong numerical advantages
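A minimal sketch of (18) that illustrates these points: the channel L-values are accumulated in the log domain and the normalization $A_k$ is applied only once at the end (the MSB-first integer indexing of the bitvectors is again an assumption of the example):

```python
import numpy as np

def bitvector_app_lvalues(prior_vec, L_ec, N):
    """Bitvector APPs via (18): add channel L-values in the exponent instead
    of multiplying bit probabilities (numerically more robust)."""
    log_app = np.log(np.asarray(prior_vec, dtype=float))
    for i in range(2 ** N):
        bits = [(i >> (N - 1 - n)) & 1 for n in range(N)]
        log_app[i] -= sum(L_ec[n] * b for n, b in enumerate(bits))
    log_app -= log_app.max()        # stabilize before exponentiating
    app = np.exp(log_app)
    return app / app.sum()          # the constant A_k is absorbed here

# Example: N = 2, uniform prior, both channel L-values favor bit = 0
app = bitvector_app_lvalues(np.full(4, 0.25), L_ec=[2.0, 2.0], N=2)
```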
Iterative Source-Channel Decoding

SIMPLIFICATION 2:
The computation of the new bit APPs within the iteration is still carried out by (14).
Reminder:

$P^{(S)}(I_{k,n} \mid \underline{y}^k) = \sum_{\mathbf{I}_k \in \mathbb{L} \mid I_{k,n}} P(\mathbf{I}_k \mid \underline{y}^k)$    (14)

But instead of (15) for the new bit extrinsic information,
Reminder:

$P_e^{(S)}(I_{k,n}) = \frac{P^{(S)}(I_{k,n} \mid \underline{y}^k)}{P_e^{(C)}(I_{k,n})}$    (15)
Iterative Source-Channel Decoding

the extrinsic L-values

$L_e^{(S)}(I_{k,n}) = L^{(S)}(I_{k,n}) - L_e^{(C)}(I_{k,n})$    (19)

are used.
Benefit of using (19) instead of (15):
• The division is turned into a simple subtraction in the L-value domain

THUS, in ISCD the L-values $L_e^{(C)}(I_{k,n})$ from the APP channel decoder are used, and the probabilities $P_e^{(C)}(I_{k,n})$ are not required.
Quantizer Bit Mapping
Assumption:
The input is low-pass correlated, so the value of the sample $x_k$ will be close to $x_{k-1}$.
If the channel code is strong enough, then the L-values at the APP channel decoder output have large magnitudes, i.e., the a-priori information for the source decoder is perfect.
ISCD: the APP source decoder tries to generate extrinsic information for a particular data bit while it knows all other bits exactly.
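For intuition, a minimal sketch of the natural and Gray index-to-bit mappings for a 3-bit quantizer; the optimized mapping of Figure 3 is the result of a search and is not reproduced here:

```python
def natural_bits(i, N=3):
    """Natural binary mapping of the quantizer index i."""
    return [(i >> (N - 1 - n)) & 1 for n in range(N)]

def gray_bits(i, N=3):
    """Gray mapping: adjacent quantizer levels differ in exactly one bit."""
    g = i ^ (i >> 1)
    return [(g >> (N - 1 - n)) & 1 for n in range(N)]

for i in range(8):
    print(i, natural_bits(i), gray_bits(i))
```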
Quantizer Bit Mapping
Figure 3: Bit mappings (natural, optimized, Gray) for a 3-bit quantizer to be used in ISCD
Simulation Results
Simulation process:
• Independent Gaussian random samples were correlated by a first-order recursive filter (coefficient a = 0.9)
• Source encoders: 5-bit Lloyd-Max scalar quantizers
• 50 mutually independent bitvectors were generated, all transmitted at time index k
• The bits were scrambled by a random interleaver
Simulation Results
Simulation process (contd.):
• The bits were channel-encoded by a rate-1/2 recursive systematic convolutional code (RSC code with memory 4), which was terminated after each block of 50 bitvectors (250 bits)
• An AWGN channel was used for transmission
• ISCD was performed at the decoder
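A minimal sketch of the transmitter side of this chain under simplifying assumptions: a uniform quantizer stands in for the 5-bit Lloyd-Max design, and the interleaver and RSC channel code are omitted, so only the source, quantization, and AWGN channel of the slides are reproduced:

```python
import numpy as np

def gauss_markov_source(num, a=0.9):
    """Unit-variance Gauss-Markov source: first-order recursive filter."""
    x = np.zeros(num)
    for k in range(1, num):
        x[k] = a * x[k - 1] + np.sqrt(1.0 - a * a) * np.random.randn()
    return x

def transmit_bpsk_awgn(bits, es_n0_db):
    """BPSK over AWGN at the given Es/N0 per channel bit."""
    sigma = np.sqrt(1.0 / (2.0 * 10 ** (es_n0_db / 10.0)))
    return (1.0 - 2.0 * bits) + sigma * np.random.randn(bits.size)

N = 5
x = gauss_markov_source(1000)
edges = np.linspace(-3.0, 3.0, 2 ** N - 1)   # placeholder for Lloyd-Max thresholds
idx = np.digitize(x, edges)                  # quantizer indices 0 .. 31
bits = ((idx[:, None] >> np.arange(N - 1, -1, -1)) & 1).astype(float)
y = transmit_bpsk_awgn(bits.ravel(), es_n0_db=2.0)
```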
Simulation Results
Figure 4: Performance of ISCD for various 5-Bit Mappings
Conclusions
Strong quality gains are achievable by the application of the turbo principle in joint source-channel decoding:
• The bit mapping of the quantizers is important for the performance
• An optimized bit mapping of the quantizers in ISCD yields strong quality improvements
Thank you for your attention!