Iterative Equalization

Speaker: Michael Meyer
michi-meyer@gmx.de

[Title slide: block diagram of the iterative equalization loop — equalizer/detector, demapper, deinterleaver, decoder, interleaver and mapper — operating on the received sequence yk and exchanging the soft values s(ck), s'(ck), s(bk), s'(bk).]

JASS 05 St. Petersburg - Michael Meyer - Iterative Equalization
System Configuration and Receiver Structures

[Block diagrams:
System configuration: ak → Encoder → bk → Interleaver → ck → Mapper → xk → Channel → yk.
Receiver A (optimal detector): yk → Optimal Detector → âk.
Receiver B (one-time equalization and detection): yk → Equalizer/Detector → s(xk) → Demapper → s(ck) → Deinterleaver → s(bk) → Decoder → âk.
Receiver C (turbo equalization): as Receiver B, but the decoder output s'(bk) is fed back through an interleaver and mapper to the equalizer/detector as s'(ck), closing the iteration loop.]
Interleaver

[Block diagram: ak → Encoder → bk → Interleaver → ck → Mapper → xk → Channel → yk.]

Example for an interleaver:
a 3-random interleaver for 18 code bits.
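An interleaver is simply a fixed permutation of the code bits, and the deinterleaver is its inverse permutation. The following small sketch illustrates this with a plain random permutation of 18 positions (the slide's "3-random" variant additionally keeps bits that were within 3 positions of each other at least 3 positions apart after permuting; the seed and the use of `numpy` here are illustrative choices, not part of the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 18                         # 18 code bits, as in the slide's example
perm = rng.permutation(n)      # interleaver: position k reads code bit perm[k]

c = np.arange(n)               # stand-in code-bit sequence c_1 ... c_18
interleaved = c[perm]          # interleaving = applying the permutation

inv = np.empty_like(perm)      # deinterleaver = inverse permutation
inv[perm] = np.arange(n)
restored = interleaved[inv]    # recovers the original order
```

The key property used throughout the receiver is that interleaving followed by deinterleaving is the identity, so soft values can be passed back and forth between decoder and equalizer without losing their alignment.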
Equalization

• Methods to compensate for the channel effects between transmitter and receiver; e.g. multi-path propagation may lead to intersymbol interference (ISI).
Channel Model

In the following, we will use an AWGN channel with known channel impulse response (CIR). The received signal is given by

    y_k = Σ_{l=0}^{L} h_l · x_{k-l} + n_k,   k = 1, 2, ..., N

with channel coefficients h_l, sent signal x_k and noise n_k.

In matrix form:  y = Hx + n

    [y_1]   [h_0  0    0   ...  0 ]   [x_1]   [n_1]
    [y_2]   [h_1  h_0  0   ...  0 ]   [x_2]   [n_2]
    [ : ] = [h_2  h_1  h_0      : ] · [ : ] + [ : ]
    [y_N]   [0   ...  h_2  h_1 h_0]   [x_N]   [n_N]

As an example, we use a length-three channel with

    h_0 = 0.407,  h_1 = 0.815,  h_2 = 0.407.

The noise is Gaussian:

    p(n) = 1/√(2πσ²) · e^{−n²/(2σ²)}
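The channel model above can be simulated in a few lines. This sketch uses the example CIR with BPSK symbols; the seed, block length and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([0.407, 0.815, 0.407])   # example CIR, length L+1 with L = 2
N = 8
x = 1.0 - 2.0 * rng.integers(0, 2, N)  # BPSK symbols in {+1, -1}
sigma = 0.5                            # illustrative noise std deviation

# y_k = sum_l h_l x_{k-l} + n_k, realized as a truncated convolution
v = np.convolve(h, x)[:N]              # noise-free channel output
y = v + sigma * rng.standard_normal(N)

# The same map in matrix form y = Hx + n: H is lower-triangular Toeplitz
H = np.zeros((N, N))
for k in range(N):
    for l in range(len(h)):
        if k >= l:
            H[k, k - l] = h[l]
# H @ x reproduces the truncated convolution v
```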
The Forward / Backward Algorithm

For Receiver B, the forward/backward algorithm is often used for equalization and decoding.

As this algorithm is a basic building block of our turbo equalization setup, we will discuss it in detail
• for equalization
• for decoding

We will continue our example to make things clear. The example uses binary phase shift keying (BPSK).
The Decision Rule

[Receiver B chain: yk → Equalizer/Detector → s(xk) → Demapper → s(ck), ĉk → Deinterleaver → Decoder → âk.]

The decision rule for the equalizer is

    ĉ_k = 0  if L(c_k | y) ≥ 0
    ĉ_k = 1  if L(c_k | y) < 0

with the log-likelihood ratio (LLR)

    L(c | y) = ln( P(c = 0 | y) / P(c = 1 | y) )

So, we have to calculate L(c | y).
The Trellis Diagram (1)

[Trellis section between time k and k+1: the four states (-1, -1), (1, -1), (-1, 1), (1, 1) appear on both sides, connected by branches from state ri to state rj labeled with input xk = xi,j and output vk = vi,j.]

A branch of the trellis is a four-tuple (i, j, xi,j, vi,j).
The Trellis Diagram (2)

If the tapped delay line contains L elements and if we use a binary alphabet {+1, -1}, the channel can be in one of 2^L states ri (here 2^L = 4). The set of possible states is

    S = {r0, r1, ..., r_{2^L − 1}}

At each time instance k = 1, 2, ..., N the state of the channel is a random variable sk ∈ S.
The Trellis Diagram (3)

Using a binary alphabet, a given state sk = ri can only develop into two different states sk+1 = rj, depending on the input symbol xk = xi,j ∈ {+1, -1}. The output symbol vk = vi,j in the noise-free case is easily calculated by

    v_k = Σ_{l=0}^{L} h_l · x_{k-l},   i.e.  v = Hx

e.g.  v_{2,0} = h_0·x_{2,0} + h_1·x_{3,2} + h_2·x_{3,3}
             = 0.407·1 + 0.815·1 + 0.407·(−1) = 0.815
The Trellis Diagram (4)

xi,j and vi,j are uniquely identified by the index pair (i j). The set of all index pairs (i j) corresponding to valid branches is denoted B,

e.g.  B = {(00), (01), (12), (13), (20), (21), (33), (32)}
The Joint Distribution p(sk, sk+1, y)

As we are separating the equalization from the decoding task, we assume that the random variables xk are statistically independent (IID), hence

    P(x) = Π_{k=1}^{N} P(x_k)

We then have to calculate P(sk = ri, sk+1 = rj | y).

This is the probability that the transmitted sequence path in the trellis contains the branch (i, j, xi,j, vi,j) at the time instance k.

This APP (a posteriori probability) can be computed efficiently with the forward/backward algorithm, based on a suitable decomposition of the joint distribution

    p(sk, sk+1, y) = p(y) · P(sk, sk+1 | y)
The Decomposition

We can write the joint distribution as

    p(sk, sk+1, y) = p(sk, sk+1, (y1,...,yk-1), yk, (yk+1,...,yN))

and decompose it to

    p(sk, sk+1, y) = p(sk, y1,...,yk-1) · p(sk+1, yk | sk) · p(yk+1,...,yN | sk+1)
                   = αk(sk)            · γk(sk, sk+1)     · βk+1(sk+1)

αk(sk):       probability that contains all paths through the trellis
              from s0 to state sk
γk(sk, sk+1): probability of the transition from sk to sk+1
              with symbol yk
βk+1(sk+1):   probability that contains all possible paths
              from state sk+1 to sN
The Transition Probability γ

We can further decompose the transition probability into

    γk(sk, sk+1) = P(sk+1 | sk) · p(yk | sk, sk+1)

Using the index pair (i j) and the set B we get

    γk(ri, rj) = P(xk = xi,j) · p(yk | vk = vi,j)   if (i j) ∈ B
    γk(ri, rj) = 0                                  if (i j) ∉ B

From the channel law yk = vk + nk and the Gaussian distribution we know that

    p(yk | vk) = 1/√(2πσ²) · e^{−(yk − vk)² / (2σ²)}

e.g.  γk(r0, r3) = 0, as (03) ∉ B
      γk(r0, r0) = P(xk = +1) · p(yk | vk = 1.63)
The Probability α (Forward)

The term αk(s) can be computed via the recursion

    αk(s) = Σ_{s'∈S} αk-1(s') · γk-1(s', s)

with the initial value α0(s) = P(s0 = s).

e.g.  α2(rj) = Σ_{i=0}^{3} α1(ri) · γ1(ri, rj)

Note: αk contains all possible paths leading to sk.
The Probability β (Backward)

Analogously, the term βk(s) can be computed via the recursion

    βk(s) = Σ_{s'∈S} βk+1(s') · γk(s, s')

with the initial value βN(s) = 1 for all s ∈ S.

e.g.  β2(ri) = Σ_{j=0}^{3} β3(rj) · γ2(ri, rj)
The Formula For The LLR

Now we need the APP P(xk = x | y). All we have to do to accomplish this task is to sum the branch APPs P(sk, sk+1 | y) over all branches that correspond to an input symbol xk = x:

    P(xk = x | y) = Σ_{(ij)∈B: xi,j = x} P(sk = ri, sk+1 = rj | y)
                  ∝ Σ_{(ij)∈B: xi,j = x} αk(ri) · γk(ri, rj) · βk+1(rj)

To compute the APP P(xk = +1 | y), the branch APPs of the index pairs (00), (12), (20) and (32) have to be summed. The common factor 1/p(y) cancels in the LLR:

    L(ck | y) = ln( P(ck = 0 | y) / P(ck = 1 | y) )
              = ln( P(xk = +1 | y) / P(xk = −1 | y) )
              = ln( Σ_{(ij)∈B: xi,j = +1} αk(ri) · γk(ri, rj) · βk+1(rj)
                  / Σ_{(ij)∈B: xi,j = −1} αk(ri) · γk(ri, rj) · βk+1(rj) )
The FBA in Matrix Form

For convenience, the forward/backward algorithm may also be expressed in matrix form. We need to create two |S| x |S| matrices:

    {Pk}i,j = γk(ri, rj)

    {A(x)}i,j = 1 if (i j) is a branch with xi,j = x
                0 otherwise

A third matrix is created by elementwise multiplication:

    Bk(x) = A(x) ∘ Pk

For the example:

    A(+1) = [1 0 0 0]        A(−1) = [0 1 0 0]
            [0 0 1 0]                [0 0 0 1]
            [1 0 0 0]                [0 1 0 0]
            [0 0 1 0]                [0 0 0 1]
The Algorithm

Input: matrices Pk and Bk(x).
We calculate vectors fk and bk, each of size |S| x 1.
Initialize with f0 = 1 and bN = 1.

For k = 1 to N step 1 (forward):    fk = Pk-1^T fk-1
For k = N to 1 step -1 (backward):  bk = Pk bk+1

Output the LLRs:

    L(ck | y) = ln( fk^T Bk(+1) bk+1 / fk^T Bk(−1) bk+1 )
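The matrix-form algorithm can be coded directly for the running BPSK example. The following is a minimal sketch, not the slides' reference implementation: the state numbering (r0 = (+1,+1), ..., r3 = (−1,−1)), the known all-(+1) prehistory, the per-step rescaling and the noise level are assumptions made for this illustration.

```python
import numpy as np

h = np.array([0.407, 0.815, 0.407])
sigma = 0.5                     # noise std dev assumed by the detector
Ns = 4                          # |S| = 2^L trellis states
sym = np.array([1.0, -1.0])     # state bit 0 -> +1, bit 1 -> -1

# Enumerate branches (i, j, x_ij, v_ij); state i encodes (x_{k-1}, x_{k-2})
branches = []
for i in range(Ns):
    b1, b2 = i & 1, (i >> 1) & 1
    for xb in (0, 1):
        j = xb + 2 * b1
        v = h[0] * sym[xb] + h[1] * sym[b1] + h[2] * sym[b2]
        branches.append((i, j, sym[xb], v))

A = {1: np.zeros((Ns, Ns)), -1: np.zeros((Ns, Ns))}
for i, j, x, v in branches:
    A[int(x)][i, j] = 1.0       # {A(x)}_ij = 1 iff branch (i j) has input x

def fba_llrs(y):
    """Return L(x_k|y) = ln P(x_k=+1|y)/P(x_k=-1|y) via matrix-form FBA."""
    N = len(y)
    P = np.zeros((N, Ns, Ns))   # {P_k}_ij = gamma_k(r_i, r_j)
    for i, j, x, v in branches:
        P[:, i, j] = 0.5 * np.exp(-(y - v) ** 2 / (2 * sigma**2))
    f = np.zeros((N + 1, Ns)); f[0, 0] = 1.0   # known (+1, +1) start state
    b = np.zeros((N + 1, Ns)); b[N] = 1.0
    for k in range(1, N + 1):                  # forward:  f_k = P_{k-1}^T f_{k-1}
        f[k] = f[k - 1] @ P[k - 1]
        f[k] /= f[k].sum()                     # rescale against underflow
    for k in range(N - 1, -1, -1):             # backward: b_k = P_k b_{k+1}
        b[k] = P[k] @ b[k + 1]
        b[k] /= b[k].sum()
    return np.array([np.log((f[k] @ (A[1] * P[k]) @ b[k + 1])
                            / (f[k] @ (A[-1] * P[k]) @ b[k + 1]))
                     for k in range(N)])

# Demo on a noise-free observation of a known symbol sequence
x = np.array([1, 1, -1, 1, -1, -1, 1, -1], dtype=float)
xp = np.concatenate(([1.0, 1.0], x))           # prepend assumed prehistory
v = np.array([h[0] * xp[k + 2] + h[1] * xp[k + 1] + h[2] * xp[k]
              for k in range(len(x))])
L = fba_llrs(v)
xhat = np.where(L > 0, 1.0, -1.0)              # decision rule: sign of the LLR
```

The per-step rescaling of fk and bk does not change the LLRs, because the common factors cancel in the ratio.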
Soft Processing

A natural choice for the soft information s(xk) are the APPs or, equivalently, the LLRs L(ck|y), which are a "side product" of the maximum a posteriori (MAP) symbol detector.

Also, the Viterbi equalizer may produce approximations of L(ck|y).

For filter-based equalizers, extracting s(xk) is more difficult. A common approach is to assume that the estimation error ek = x̂k − xk is Gaussian distributed with PDF p(ek).
Decoding - Basics

• Convert the LLRs L(ck|y) back to probabilities:

    P(ck = c | y) = e^{−c·L(ck|y)} / (1 + e^{−L(ck|y)}),   c ∈ {0, 1}

• Deinterleave P(ck|y) to P(bk|y).

• p = (P(b1|y), P(b2|y), ..., P(bN|y))^T is the input set of probabilities to the decoder.

• With the forward/backward algorithm we may again calculate the LLRs L(ak|p).

For the example: encoder of a convolutional code, where each incoming data bit ak yields two code bits (b2k-1, b2k) via

    b2k-1 = ak ⊕ ak-2
    b2k   = ak ⊕ ak-1 ⊕ ak-2
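The example encoder is a rate-1/2 convolutional code with a two-element shift register. A minimal sketch (zero initial register state assumed):

```python
def conv_encode(a):
    """Rate-1/2 convolutional encoder from the example: each data bit a_k
    yields (b_{2k-1}, b_{2k}) = (a_k ^ a_{k-2}, a_k ^ a_{k-1} ^ a_{k-2})."""
    a1 = a2 = 0          # shift register (a_{k-1}, a_{k-2}), zero initial state
    out = []
    for ak in a:
        out.append(ak ^ a2)        # b_{2k-1} = a_k XOR a_{k-2}
        out.append(ak ^ a1 ^ a2)   # b_{2k}   = a_k XOR a_{k-1} XOR a_{k-2}
        a1, a2 = ak, a1            # shift the register
    return out

conv_encode([1, 0, 1, 1])   # -> [1, 1, 0, 1, 0, 0, 1, 0]
```

Since each output pair depends only on (ak, ak-1, ak-2), the encoder is exactly the 4-state machine whose trellis is drawn on the next slide.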
Decoding - Trellis

[Trellis section for the decoder: states (1,1), (0,1), (1,0), (0,0) on both sides, with branches from state ri to state rj labeled with input ak = ai,j and output (b2k-1, b2k) = (b1,i,j, b2,i,j).]

The convolutional code leads to a new trellis with branches denoted by the tuple (i, j, ai,j, b1,i,j, b2,i,j). The set B remains {(00), (01), (12), (13), (20), (21), (33), (32)}.
Decoding – Formulas (1)

To apply the forward/backward algorithm, we have to adjust the way Pk and A(x) are formed. For Pk we have to redefine the transition probability:

    γk(ri, rj) = P(ak = ai,j) · P(b2k-1 = b1,i,j | y) · P(b2k = b2,i,j | y)   if (i j) ∈ B
    γk(ri, rj) = 0                                                           if (i j) ∉ B

e.g.  γk(r0, r0) = P(ak = 0) · P(b2k-1 = 0 | y) · P(b2k = 0 | y)
                 = 1/2 · P(b2k-1 = 0 | y) · P(b2k = 0 | y)

where P(ak = 0) = 1/2 because of the IID assumption, and the bitwise probabilities come from the equalizer.

    {Pk}i,j = γk(ri, rj)

    {Aa(x)}i,j = 1 if (i j) is a branch with ai,j = x
                 0 otherwise

    Ba(x) = Aa(x) ∘ Pk
Decoding – Formulas (2)

So, we calculate L(ak|p) using the forward/backward algorithm. By changing A(x) we can also calculate L(b2k-1|p) and L(b2k|p), which will later serve as a priori information for the equalizer.

For L(b2k-1|p):
    {Ab1(x)}i,j = 1 if (i j) is a branch with b1,i,j = x
                  0 otherwise

For L(b2k|p):
    {Ab2(x)}i,j = 1 if (i j) is a branch with b2,i,j = x
                  0 otherwise

with the set of probabilities p = (P(b1|y), P(b2|y), ..., P(bN|y))^T.
Decoding - Example

For L(ak|p):
    {Aa(x)}i,j = 1 if (i j) is a branch with ai,j = x, 0 otherwise

    Aa(1)  = [0 1 0 0]        Aa(0)  = [1 0 0 0]
             [0 0 0 1]                 [0 0 1 0]
             [0 1 0 0]                 [1 0 0 0]
             [0 0 0 1]                 [0 0 1 0]

For L(b2k-1|p):
    {Ab1(x)}i,j = 1 if (i j) is a branch with b1,i,j = x, 0 otherwise

    Ab1(1) = [0 1 0 0]        Ab1(0) = [1 0 0 0]
             [0 0 0 1]                 [0 0 1 0]
             [1 0 0 0]                 [0 1 0 0]
             [0 0 1 0]                 [0 0 0 1]

For L(b2k|p):
    {Ab2(x)}i,j = 1 if (i j) is a branch with b2,i,j = x, 0 otherwise

    Ab2(1) = [0 1 0 0]        Ab2(0) = [1 0 0 0]
             [0 0 1 0]                 [0 0 0 1]
             [1 0 0 0]                 [0 1 0 0]
             [0 0 0 1]                 [0 0 1 0]
Decoding - Algorithm

For decoding, we may use the same forward/backward algorithm with a different initialization, as the encoder has to start and terminate in the zero state at time steps k = 0 and k = K. Change Bk(x) to output L(b2k-1|p) or L(b2k|p).

Input: matrices Pk and Bk(x).
Initialize with f0 = [1 0 ... 0]^T and bN = [1 0 ... 0]^T, each of size |S| x 1.

For k = 1 to N step 1 (forward):    fk = Pk-1^T fk-1
For k = N to 1 step -1 (backward):  bk = Pk bk+1

Output the LLRs:

    L(ak | p) = ln( fk^T Bk(0) bk+1 / fk^T Bk(1) bk+1 )
Bit Error Rate (BER)

With soft information, we may gain 2 dB, but it is still a long way to −1.6 dB.

[Figure: performance of separate equalization and decoding with hard estimates (dashed lines) or soft information (solid lines). The system transmits K = 512 data bits and uses a 16-random interleaver to scramble N = 1024 code bits.]
Block Diagram - Separated Concept

[Block diagram of the f/b algorithm in the separated receiver: the equalizer's forward/backward algorithm takes the observations y and prior probabilities and outputs the a posteriori probabilities L(ck|y); these are deinterleaved to L(bk|y) and used as prior probabilities by the decoder, whose forward/backward algorithm outputs the a posteriori probabilities L(ak|p) and L(bk|p); the decision rule yields âk.]

Let's look again at the transition probability:

    γk(ri, rj) = p(yk | vk = vi,j) · P(xk = xi,j)

The first factor is local evidence about which branch in the trellis was traversed; the second factor is prior information.

So far:
• The equalizer does not have any prior knowledge available, so the formation of entries in Pk relies solely on the observation y.
• The decoder forms the corresponding entries in Pk without any local observations, but entirely based on the bitwise probabilities P(bk|y) provided by the equalizer.
Block Diagram - Turbo Equalization

[Block diagram of the turbo receiver: the equalizer's forward/backward algorithm takes the observations y and the prior probabilities Lext(ck|p) and outputs the a posteriori probabilities L(ck|y). Subtracting the prior gives the extrinsic information Lext(ck|y), which is deinterleaved to Lext(bk|y) and serves as prior for the decoder. The decoder's forward/backward algorithm outputs L(ak|p) and L(bk|p); subtracting its prior gives the extrinsic information Lext(bk|p), which is interleaved back to Lext(ck|p) and fed to the equalizer, closing the loop. The decision rule on L(ak|p) yields âk.]

Now:
• The equalizer does have prior knowledge available: the extrinsic information fed back from the decoder enters the formation of the entries in Pk together with the observation y.
• The decoder still forms its entries in Pk without local observations, based on the bitwise probabilities provided by the equalizer.
Block Diagram - Comparison

[Side-by-side block diagrams of Receiver C (turbo equalization, with the extrinsic-information feedback loop through interleaver and deinterleaver) and Receiver B (separated equalization and detection, without feedback).]
Turbo Equalization - Calculation

[Block diagram as before, highlighting the two subtraction nodes that remove the priors from the a posteriori outputs.]

Caution: we have to split L(ck|y) = Lext(ck|y) + L(ck), as only extrinsic information is fed back. Lext(ck|y) does not depend on L(ck); feeding back L(ck) would create direct positive feedback, usually converging far from the globally optimal solution.

The interleavers are included in the iterative update loop to further disperse the direct feedback effect. The forward/backward algorithm creates locally highly correlated output. These correlations between neighboring symbols are largely suppressed by the interleaver.
Turbo Equalization - Algorithm

Input:
    Observation sequence y
    Channel coefficients hl for l = 0, 1, ..., L

Initialize:
    Predetermine the number of iterations T
    Initialize the sequence of LLRs Lext(c|p) to 0

Compute recursively for T iterations:
    L(c|y)    = Forward/Backward(Lext(c|p))
    Lext(c|y) = L(c|y) − Lext(c|p)
    L(b|p)    = Forward/Backward(Lext(b|y))
    Lext(b|p) = L(b|p) − Lext(b|y)

Output:
    Compute the data bit estimates âk from L(ak|p)
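The iteration above is mostly bookkeeping around the two forward/backward stages. The following sketch shows only that bookkeeping; `equalizer_fba` and `decoder_fba` are hypothetical stubs standing in for the real trellis computations, and the permutation-based (de)interleaving is an assumption made for this illustration:

```python
import numpy as np

def equalizer_fba(y, prior_llr):
    return y + prior_llr            # stub for L(c|y) = Forward/Backward(Lext(c|p))

def decoder_fba(prior_llr):
    return 2.0 * prior_llr          # stub for L(b|p) = Forward/Backward(Lext(b|y))

def turbo_equalize(y, perm, T):
    inv = np.empty_like(perm)
    inv[perm] = np.arange(len(perm))        # deinterleaver = inverse permutation
    Lext_cp = np.zeros(len(y))              # Lext(c|p), initialized to 0
    Lbp = np.zeros(len(y))
    for _ in range(T):
        Lcy = equalizer_fba(y, Lext_cp)     # equalizer APPs L(c|y)
        Lext_cy = Lcy - Lext_cp             # subtract the prior -> extrinsic only
        Lext_by = Lext_cy[inv]              # deinterleave to Lext(b|y)
        Lbp = decoder_fba(Lext_by)          # decoder APPs L(b|p)
        Lext_bp = Lbp - Lext_by             # subtract the prior -> extrinsic only
        Lext_cp = Lext_bp[perm]             # interleave back to Lext(c|p)
    return Lbp                              # decide on the decoder APPs
```

Note that each stage only ever receives the other stage's extrinsic information as prior, never its own output, which is exactly the splitting discussed on the previous slide.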
Turbo Equalization - BER

[Figure: the system transmits K = 512 data bits and uses a 16-random interleaver to scramble N = 1024 code bits. Panel A shows separate equalization and detection. Panel B shows turbo MMSE equalization with 0, 1, 2 and 10 iterations. Panel C shows turbo MAP equalization after the same iterations. The line marked with "x" is the performance with K = 25000 and 40-random interleaving after 20 iterations.]
Turbo Equalization – EXIT Charts [2]

[Figures: receiver EXIT chart at 4 dB Es/N0 and receiver EXIT chart at 0.8 dB Es/N0.]
Linear Equalization

The computational effort so far is determined by the number of trellis states: an 8-ary alphabet gives 8^L states in the trellis.

Linear filter-based approaches perform only simple operations on the received symbols, which are usually applied sequentially to a subset of M observed symbols yk, e.g. yk = (yk-5 yk-4 ... yk+5)^T with M = 11.

A channel of length L can be expressed as yk = H̃xk + nk with H̃ of size M x (M+L).

Any type of linear processing of yk to compute x̂k can be expressed as x̂k = fk^T yk + bk.

The channel law immediately suggests choosing fk to invert H̃, the zero-forcing approach. With noise present, an estimate x̂k = xk + fk^T nk is obtained. This approach suffers from "noise enhancement", which can be severe if H̃ is ill-conditioned.

This effect can be avoided using linear minimum mean square error (MMSE) estimation, minimizing E[|xk − x̂k|²]. The equations used are

    x̂k = fk^T yk,    fk = (σ²I + H̃H̃^H)⁻¹ H̃u

It is also possible to nonlinearly process previous estimates to find x̂k besides the linear processing of yk (decision-feedback equalization, DFE).
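The MMSE filter formula above can be sketched for the example CIR. The window size M = 11, the noise variance and the choice of u (picking the window-center symbol) are illustrative assumptions; the formula fk = (σ²I + H̃H̃^H)⁻¹H̃u is taken directly from the slide, specialized to real-valued signals:

```python
import numpy as np

h = np.array([0.407, 0.815, 0.407])     # example CIR, length L+1 with L = 2
M, L = 11, 2
sigma2 = 0.1                            # illustrative noise variance

# Htilde (M x (M+L)): each row applies the CIR to a shifted symbol window
Ht = np.zeros((M, M + L))
for m in range(M):
    Ht[m, m:m + L + 1] = h[::-1]

u = np.zeros(M + L)
u[(M + L) // 2] = 1.0                   # select the window-center symbol

# MMSE filter f = (sigma^2 I + Ht Ht^T)^-1 Ht u  (real-valued case)
f = np.linalg.solve(sigma2 * np.eye(M) + Ht @ Ht.T, Ht @ u)

# Gain on the desired symbol's channel signature; sigma2 trades off
# noise enhancement (zero-forcing limit sigma2 -> 0) against residual ISI
gain = f @ (Ht @ u)
```

Unlike zero-forcing, the σ²I term keeps the inverse well conditioned even though this channel has a near spectral null, which is exactly the "noise enhancement" problem mentioned above.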
Complexity [2]

Approach              | Real Multiplications              | Real Additions
----------------------|-----------------------------------|----------------------------------
MAP equalizer         | 3·2^(mM) + 2^m·2^(m(M-1))         | 3·2^(mM) + (2^m − 1)·2^(m(M-1))
exact MMSE LE         | 16N² + 4M² + 10M − 4N − 4         | 8N² + 2M² − 10N + 2M + 4
approx. MMSE LE (I)   | 4N + 8M                           | 4N + 4M − 4
approx. MMSE LE (II)  | 10M                               | 10M − 2
MMSE DFE              | 16N² + 4M² + 10M − 4N − 4         | 8N² + 2M² − 10N + 2M + 4

M: channel impulse response length
N: equalizer filter length
2^m: alphabet size of the signal constellation
DFE: decision feedback equalization
Comparison

• The MMSE approaches have reduced complexity.
• The MMSE approaches perform as well as the BER-optimal MAP approach, requiring only a few more iterations.
• However, the MAP equalizer may handle SNR ranges where all other approaches fail.

Ideas

• Treat scenarios with unknown channel characteristics, e.g. combined channel estimation and equalization using a priori information.
• Switch between the MAP and MMSE algorithms depending on the fed-back soft information.
Thank you for your attention!

Questions & Comments?
References

[1] Koetter, R.; Singer, A. C.; Tüchler, M.: Turbo Equalization. IEEE Signal Processing Magazine, vol. 21, no. 1, pp. 67-80, Jan. 2004.
[2] Tüchler, M.; Koetter, R.; Singer, A. C.: Turbo Equalization: Principles and New Results. IEEE Trans. Commun., vol. 50, pp. 754-767, May 2002.
[3] Tüchler, M.; Singer, A. C.; Koetter, R.: Minimum Mean Squared Error Equalization Using A Priori Information. IEEE Trans. Signal Processing, vol. 50, pp. 673-683, March 2002.