Belief-Propagation with Information Correction: Near Maximum-Likelihood Decoding of Low-Density Parity-Check Codes

Ned Varnica+, Marc Fossorier#, Alek Kavčić+

+ Division of Engineering and Applied Sciences, Harvard University
# Department of Electrical Engineering, University of Hawaii

March 2004
Outline
• Motivation – BP vs ML decoding
• Improved iterative decoder of LDPC codes
• Types of BP decoding errors
• Simulation results
Varnica, Fossorier, Kavčić – Near ML decoding of LDPC codes
slide 2
LDPC Code Graph
• Parity check matrix H of size Nc × N
  – Variable (symbol) nodes vi ∈ V, i = 0, 1, …, N-1
  – Parity check nodes cj ∈ C, j = 0, 1, …, Nc-1
• A non-zero entry in H ↔ an edge in G
• Code rate R = k/N, k ≥ N - Nc
• Bipartite Tanner code graph G = (V, E, C)
  [Figure: Tanner graph with check nodes C on top and variable nodes V below; example degrees dG(v0) = 3, dG(v1) = 3, dG(v2) = 2, dG(v3) = 3]
• Belief Propagation
  – Iterative propagation of conditional probabilities
Standard Belief-Propagation on LDPC Codes
• Locally operating
  – optimal for cycle-free graphs
  – sub-optimal for graphs with cycles
• Optimized LDPC codes (Luby et al 98; Richardson, Shokrollahi & Urbanke 99; Hou, Siegel & Milstein 01; Varnica & Kavcic 02)
• Good finite LDPC codes have an exponential number of cycles in their Tanner graphs (Etzion, Trachtenberg and Vardy 99)
• Encoder constructions
• BP-to-ML performance gap due to convergence to pseudo-codewords (Wiberg 95, Forney et al 01, Koetter & Vontobel 03)
Examples
• Short Codes
  – e.g. Tanner code with N = 155, k = 64, diam = 6, girth = 8, dmin = 20
  [Figure: WER vs Eb/N0 [dB] from 0 to 4 dB; curves: ML Decoder, BP Decoder]
• Long Codes
  – e.g. Margulis code with N = 2640, k = 1320
  [Figure: WER vs Eb/N0 [dB] from 1 to 2.5 dB; curves: BP Decoder, ML upper bound]
Goals
• Construct a decoder with
  – Improved BP decoding performance
  – More flexibility in performance versus complexity
  – Nearly ML performance at much lower computational burden
• Reduce or eliminate LDPC error floors
• Applications
  – Can use with any “off-the-shelf” LDPC encoder
  – Can apply to any communication/data storage channel
Subgraph Definitions

System model: x ∈ {0,1}N (transmitted binary vector) → channel, ri = Σk hk xi-k + wi → r ∈ RN (received vector) → BCJR detector ↔ BP decoder → x^(L) ∈ {0,1}N (decoded vector after L iterations)

• Syndrome s = H x^(L)
• CS(L) - set of unsatisfied check nodes (SUC): CS(L) = {ci : (H x^(L))i ≠ 0}
• VS(L) - set of variable nodes incident to some c ∈ CS(L)
• ES(L) - set of edges connecting VS(L) and CS(L)

Definition 1: The SUC graph GS(L) = (VS(L), ES(L), CS(L)) is the graph induced by the SUC CS(L)

• dGs(v) - degree of v ∈ V in the SUC graph GS(L)
  – dGs(v) ≤ dG(v)
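The SUC quantities above follow directly from H and the hard decisions. A minimal NumPy sketch (the function name and inputs are illustrative, not from the talk):

```python
import numpy as np

def suc_graph(H, x_hat):
    """Compute the SUC (set of unsatisfied checks) graph quantities.

    H     : (Nc, N) binary parity-check matrix (0/1 integers)
    x_hat : length-N hard-decision vector x^(L) after L BP iterations
    Returns (C_S, V_S, d_Gs): indices of unsatisfied checks, indices of
    variable nodes incident to them, and each variable node's degree in
    the SUC graph G_S(L).
    """
    s = (H @ x_hat) % 2                  # syndrome s = H x^(L) (mod 2)
    C_S = np.flatnonzero(s)              # unsatisfied check nodes C_S(L)
    d_Gs = H[C_S, :].sum(axis=0)         # d_Gs(v): edges from v into C_S(L)
    V_S = np.flatnonzero(d_Gs)           # variable nodes V_S(L)
    return C_S, V_S, d_Gs
```

Note that d_Gs(v) ≤ dG(v) holds by construction, since only rows of H indexed by unsatisfied checks are summed.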
Properties of SUC graph

Observation 1: The higher the degree dGs(v) of a node v ∈ VS(L), the more likely v is to be in error

e.g. Statistics over Tanner (155,64) code blocks for which BP failed on the AWGN channel at SNR = 2.5 dB:

                                               dGs = 0   dGs = 1   dGs = 2   dGs = 3
Channel information LLR ( log(ptrue/pfalse) )    2.8       2.3       1.5       1.1
LLR messages received from check nodes           3.6       1.6       0.2      -0.7
Percentage of variable nodes in error            8.1       9.3      31.2      51.1

• Select a node v
• Perform information correction
Node Selection Strategy 1

Strategy 1: Determine the SUC graph and select the node with maximal degree dGs in the SUC graph GS(L)

[Figure: example SUC graph with check nodes CS(L) and variable nodes VS(L); dGs(v0) = 2, dGs(v2) = 2, dGs(v12) = 2, all other variable nodes have dGs = 1]

Select node v0 or v2 or v12
Properties of SUC graph, cntd

Definition 2: Nodes v1 and v2 are neighbors with respect to the SUC if there exists c ∈ CS(L) incident to both v1 and v2

• nv(m) - number of neighbors of v with degree dGs = m

[Figure: example neighborhood of a node v with dGs(v) = 2; one neighbor of degree dGs = 2 and four neighbors of degree dGs = 1, i.e., nv(2) = 1 and nv(1) = 4]

Observation 2: The smaller the number of neighbors (with respect to the SUC graph) with high degree, the more likely v is to be in error
Node Selection Strategy 2

Strategy 2: Among nodes with maximal degree dGs, select a node with the minimal number of highest-degree neighbors

[Figure: same SUC graph as in Strategy 1; dGs(v0) = dGs(v2) = dGs(v12) = 2; nv0(2) = nv12(2) = 1, nv2(2) = 2; nv0(1) = 4, nv12(1) = 6]

Select node v0
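Strategies 1 and 2 amount to a shortlist-then-tie-break rule under the SUC definitions above. A small sketch (function and variable names are illustrative):

```python
import numpy as np

def select_node_strategy2(H, C_S, d_Gs):
    """Strategy 2: among variable nodes with maximal SUC degree,
    pick one with the fewest maximal-degree neighbors w.r.t. the SUC.

    H    : (Nc, N) binary parity-check matrix
    C_S  : indices of unsatisfied check nodes
    d_Gs : per-variable-node degrees in the SUC graph
    """
    d_max = d_Gs.max()
    candidates = np.flatnonzero(d_Gs == d_max)   # Strategy 1 shortlist

    def n_high_degree_neighbors(v):
        # neighbors of v: variable nodes sharing an unsatisfied check with v
        neigh = {u for c in C_S if H[c, v]
                 for u in np.flatnonzero(H[c]) if u != v}
        return sum(1 for u in neigh if d_Gs[u] == d_max)

    return int(min(candidates, key=n_high_degree_neighbors))
```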
Alternatives to Strategy 2

• Maximal SUC degree: dGs_max = max_{v ∈ V} dGs(v)
• Set of suspicious nodes: Sv_max = {v : dGs(v) = dGs_max}
• Edge penalty function (Nc - set of variable nodes incident to c):
    r(v,c) = Σ_{vn ∈ Nc\{v}} dGs(vn),  if Nc \ {v} ≠ ∅
    r(v,c) = 0,                        if Nc \ {v} = ∅
• Penalty function: R(v) = Σ_{c ∈ Cs} r(v,c) – Σ_{c ∉ Cs} r(v,c)
• Select vp ∈ Sv_max as vp = argmin_{v ∈ Sv_max} R(v)
• Numerous related approaches possible
Node Selection Strategy 3

• Decoder input on node vi:
    O(vi) = log [ p(xi = 1 | r) / p(xi = 0 | r) ]
• Memoryless AWGN channel: O(vi) follows directly from the channel output ri

Observation 3: A variable node v is more likely to be incorrect if its decoder input is less reliable, i.e., if |O(v)| is lower

Strategy 3: Among nodes with maximal degree dGs, select the node with minimal input reliability |O(v)|
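Strategy 3 is a one-liner once the SUC degrees and input LLRs are available; a minimal sketch (names illustrative):

```python
import numpy as np

def select_node_strategy3(d_Gs, O):
    """Strategy 3: among variable nodes with maximal SUC degree d_Gs,
    select the one whose decoder input LLR O(v) has smallest magnitude,
    i.e., the least reliable channel information."""
    candidates = np.flatnonzero(d_Gs == d_Gs.max())
    return int(candidates[np.argmin(np.abs(O[candidates]))])
```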
Message Passing - Notation

• Set of log-likelihood ratio messages on variable nodes: M = (C, O)
• Decoder input: O = [O(v0), …, O(vN-1)]^T
• Channel detector (BCJR) input: B = [B(v0), …, B(vN-1)]^T

[Figure: Tanner graph with check nodes C and variable nodes V; each variable node vi carries a decoder input O(vi) and a detector input B(vi)]
Symbol Correction Procedures

• Replace the decoder and detector input LLRs corresponding to the selected node vp:
  1. O(vp) = +S and B(vp) = +S
  2. O(vp) = –S and B(vp) = –S
• For each test perform Kj additional iterations
• Perform correction in stages
  – Test 2^j combinations at stage j
  – Max number of attempts (stages): jmax

[Figure: binary tree over stages j = 1, 2, 3; the root is the starting state M(0) with first selected node vp(0); each branch forces +S or –S on the node vp(j) selected at that stage, yielding states M(1), …, M(6)]
Symbol Correction Procedures, cntd

• “codeword listing” approach
  – Test all 2^jmax possibilities
  – W - collection of valid codeword candidates
  – Pick the most likely candidate, e.g. for the AWGN channel
      x^ = argmin_{w ∈ W} d(r, w)
    with d(r, w) the Euclidean distance
• “first codeword” approach
  – Stop at the first valid codeword
  – Faster convergence, slightly worse performance for large jmax

[Figure: the same binary correction tree as on the previous slide]
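The staged search can be sketched generically. Here `bp_decode` and `select_node` are hypothetical callables (a BP run of K iterations returning hard decisions, and a node-selection rule such as Strategies 1-3); neither is defined in the talk's notation:

```python
import itertools
import numpy as np

def first_codeword_correction(H, O, bp_decode, select_node,
                              S=25.0, j_max=3, K=10):
    """'First codeword' information correction (a sketch).

    bp_decode(llr, K) runs K BP iterations from input LLRs and returns
    hard decisions; select_node(d_Gs, O, exclude) picks the next node vp.
    At stage j every sign pattern in {+S, -S}^j is forced on the j nodes
    selected so far; the search stops at the first valid codeword.
    """
    x_hat = bp_decode(O, K)
    if not ((H @ x_hat) % 2).any():
        return x_hat                                  # plain BP succeeded
    forced = []                                       # nodes vp chosen so far
    for j in range(1, j_max + 1):
        suc = np.flatnonzero((H @ x_hat) % 2)         # unsatisfied checks
        d_Gs = H[suc, :].sum(axis=0)                  # SUC degrees
        forced.append(select_node(d_Gs, O, forced))
        for signs in itertools.product([+S, -S], repeat=j):
            llr = O.copy()
            llr[forced] = signs                       # force one of 2^j patterns
            x_hat = bp_decode(llr, K)
            if not ((H @ x_hat) % 2).any():
                return x_hat                          # first valid codeword
    return None                                       # all j_max stages failed
```

The “codeword listing” variant would instead collect every zero-syndrome output into W and return the candidate closest to r.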
Parallel and Serial Implementation ( jmax = 3 )

[Figure: two binary correction trees for jmax = 3, showing the order in which the 2^jmax test combinations and intermediate states M(j) are visited in the parallel and in the serial implementation]
Complexity - Parallel Implementation

• Decoding restarted
  – M need not be stored
  – higher Kj required
• Decoding continued
  – M needs to be stored; storage ∝ 2^jmax
  – lower Kj required
  – “first codeword” procedure has the fastest convergence

[Figure: binary correction tree for jmax = 3, as on the previous slides]
Can we achieve ML?

Fact 1: As jmax → N, the “codeword listing” algorithm with Kj = 0 for j < jmax and Kjmax = 1 becomes an ML decoder

• For low values of jmax (jmax << N) it performs very close to the ML decoder
  – Tanner (N = 155, k = 64) code
  – jmax = 11, Kj = 10
  – Decoding continued (M needs to be stored) for faster decoding
  – ML almost achieved

[Figure: WER vs Eb/N0 [dB] from 0 to 4 dB; curves: ML decoder, “codeword listing” procedure, original BP (max 100 iter)]
Pseudo-codewords Elimination

• Pseudo-codewords compete with codewords in locally-operating BP decoding (Koetter & Vontobel 2003)
• c~ - a codeword in an m-cover of G
• ωi - fraction of time vi ∈ V assumes an incorrect value in c~
• ω = (ω0, ω1, …, ωN-1) - pseudo-codeword
• Pseudo-distance (for AWGN):
    wp(ω) = ( Σ_{i=0}^{N-1} ωi )² / Σ_{i=0}^{N-1} ωi²
• Eliminate a large number of pseudo-codewords by forcing symbol ‘0’ or symbol ‘1’ on nodes vp
  – Pseudo-distance spectrum improved
  – Can increase the min pseudo-distance if jmax is large enough
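The AWGN pseudo-distance above is easy to evaluate for a given pseudo-codeword; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def pseudo_distance(omega):
    """AWGN pseudo-distance w_p(omega) = (sum_i omega_i)^2 / sum_i omega_i^2.
    For a 0/1 codeword of Hamming weight d this reduces to d itself, so
    pseudo-codewords with small w_p compete with low-weight codewords."""
    omega = np.asarray(omega, dtype=float)
    return omega.sum() ** 2 / (omega ** 2).sum()
```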
Types of BP decoding errors

Definition 3: Decoder D has reached a steady state in the interval [L1, L2] if Cs(L) = Cs(L1) for all L ∈ [L1, L2]

1. Very high SNRs (error floor region) - stable errors on saturated subgraphs:
   • decoder reaches a steady state and fails
   • messages passed in the SUC graph are saturated
2. Medium SNRs (waterfall region) - unstable errors:
   • decoder does not reach a steady state
SUC Properties in Error Floor Region

Theorem 1: In the error floor region, dGs(v) ≤ ⌊ dG(v) / 2 ⌋

Corollary: For regular LDPC codes with dG^max ≤ 3, dGs^max ≤ 1

• Information correction for high SNRs (error floor region)
  – Pros:
    – Small-size SUC
    – Faster convergence
  – Cons:
    – dGs plays no role in node selection
Simulation Results

• Tanner (155,64) code
  – Regular (3,5) code
  – Channel: AWGN
  – Strategy 3
  – jmax = 11, Kj = 10
  – More than 1 dB gain
  – ML almost achieved

[Figure: WER vs Eb/N0 [dB] from 0 to 4 dB; curves: ML decoder, “codeword listing” procedure, “first codeword” procedure, original BP (max 100 iter)]
Simulation Results, cntd

• Tanner (155,64) code
  – Regular (3,5) code
  – Channel: AWGN
  – Strategy 3
  – “First codeword” procedure
  – jmax = 4, 6, 8 and 11
  – Kj = 10

[Figure: WER vs Eb/N0 [dB] from 1 to 3.5 dB; curves: ML decoder, original BP (400 iter), and Strategy 3 (L = 100, K = 10) with jmax = 4, 6, 8, 11]
Simulation Results – Error Floors

• Margulis (2640,1320) code
  – Regular (3,6) code
  – Channel: AWGN
  – Strategy 3
  – “First codeword” procedure
  – jmax = 5, Kj = 20
  – More than 2 orders of magnitude WER improvement

[Figure: WER vs Eb/N0 [dB] from 1 to 2.5 dB; curves: original BP and Strategy 3 (jmax = 5, Kj = 20)]
Simulation Results – ISI Channels

• Tanner (155,64) code
• Channels:
  – Dicode (1-D)
  – EPR4 (1-D)(1+D)²
• Strategy 2
• jmax = 11, Kj = 20
• 1 dB gain
• 20% of detected errors are ML errors

[Figure: WER vs Eb/N0 [dB] from 1.5 to 4.5 dB; curves: Strategy 2 and original BP (100 iter) for both the Dicode and the EPR4 channel]
Conclusion
• Information correction in BP decoding of LDPC codes
– More flexibility in performance vs complexity
– Can nearly achieve ML performance with much lower
computational burden
– Eliminates a large number of pseudo-codewords
• Reduces or eliminates LDPC error floors
• Applications
– Can use for any “off-the-shelf” LDPC encoder
– Can apply to any communication/data storage channel