Iterative Decoders - PowerPoint PPT Presentation
1
  • Iterative Decoders
  • Engling Yeo
  • Department of Electrical Engineering and Computer
    SciencesUniversity of California, Berkeley

2
Outline
  • Objectives
  • Background
  • Iterative encoder/decoder systems
  • Soft-Input-Soft-Output (SISO) Decoders
  • Maximum A-Posteriori Probability Decoder (BCJR)
  • Soft-Output Viterbi Decoder (SOVA)
  • Low Density Parity Check (LDPC) Decoder
  • Computational Complexities
  • Future

3
Architectural Objectives for Iterative Decoders
  • Codes based on Turbo or Low Density Parity
    Check (LDPC) codes.
  • High-speed applications: magnetic storage, VDSL,
    Gigabit Ethernet.
  • Minimal control logic.
  • Minimize memory requirements by encouraging reuse
    of storage elements.
  • Avoid use of general-purpose SRAM memories.

4
Background
5
Introduction
  • Turbo convolutional coders are used in 3GPP and
    IS-2000 standards
  • Turbo product codes are used in HiperLAN and
    Gigabit Ethernet standards
  • Other LDPC codes, and codes based on the message
    passing algorithm, are a hot topic of discussion
    at ISIT 2000, ICC 2000, etc.

6
Turbo Codes Background
  • Discovered by Berrou et al., 1993
  • BER < 10^-5 at Eb/N0 = 0.7 dB and Rate = 1/2
  • (Shannon limit: 0.2 dB)
  • Encoder Architecture
  • Parallel concatenation through random interleaver
    of recursive systematic convolutional (RSC)
    encoders.
  • Decoder Architecture
  • Iterative cooperation between soft-input-soft-output
    A-Posteriori Probability (APP) decoders for
    the constituent codes.
  • Message Passing Algorithms
  • A class of iterative decoding algorithms.
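The parallel concatenation described above can be sketched in a few lines of Python. The generator polynomials (the common (1, 5/7) octal pair, constraint length 3) and all function names are illustrative choices, not taken from the slides:

```python
def rsc_encode(bits):
    # Rate-1/2 recursive systematic convolutional (RSC) encoder sketch.
    # Generators (1, 5/7) octal: feedback 7 = 1 + D + D^2,
    # feedforward 5 = 1 + D^2 (illustrative choice).
    s1 = s2 = 0
    systematic, parity = [], []
    for u in bits:
        a = u ^ s1 ^ s2        # feedback sum
        p = a ^ s2             # feedforward (parity) output
        systematic.append(u)
        parity.append(p)
        s2, s1 = s1, a         # shift-register update
    return systematic, parity

def turbo_encode(bits, interleaver):
    # Parallel concatenation: two RSC encoders; the second one
    # sees the input bits permuted by a (pseudo-random) interleaver.
    u, p1 = rsc_encode(bits)
    _, p2 = rsc_encode([bits[i] for i in interleaver])
    return u, p1, p2           # systematic stream plus two parity streams
```

The decoder then iterates between two APP decoders, each working on one parity stream and exchanging extrinsic information through the same interleaver.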

7
Message Passing Analogy (Trellis-based codes)
How many people are there?
8
Message Passing Analogy (LDPC codes)
Output at each node = (marginalized sum of inputs) + 1
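The counting analogy can be made concrete: in a line of people, each person tells a neighbour the tally heard from the other side plus one, and after one forward and one backward sweep everyone knows the total. A minimal sketch (all names illustrative):

```python
def count_people(n):
    # Message passing along a chain of n people:
    # each message = (count heard from the other side) + 1.
    fwd = [0] * n   # fwd[i]: count person i heard from the left
    bwd = [0] * n   # bwd[i]: count person i heard from the right
    for i in range(1, n):
        fwd[i] = fwd[i - 1] + 1          # left neighbour's tally plus themselves
        bwd[n - 1 - i] = bwd[n - i] + 1  # right neighbour's tally plus themselves
    # each person's answer = people to the left + people to the right + self
    return [fwd[i] + bwd[i] + 1 for i in range(n)]
```

This is the same forward/backward message flow that the trellis-based and LDPC decoders use, with "+1" replaced by probability marginalization.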
9
Research Issues and Directions
  • Performance Prediction
  • Error-rate analysis is difficult, but some
    progress has been made.
  • Residual error statistics
  • Turbo decoder errors may pose problems for
    conventional outer error-correction codes e.g.
    Reed-Solomon codes
  • Decoder complexity and delay
  • Architectures to allow trade-offs among
    implementation complexity, performance, and
    decoding latency.

10
Iterative Decoder Implementation Issues
  • Large implementation size.
  • High Power dissipation.
  • Longer latencies and need for innovations in
    controller.
  • Error bursts affect the actual performance gain
    after ECC.
  • Performance in presence of media noise not fully
    understood.

11
ITERATIVE DECODERS
12
Turbo Code Architecture (Parallel Concatenation)
Encoder
Decoder
13
Iterative Decoding for Partial Response Channels
(Serial Concatenation)
π: Pseudo-Random Interleaver
  • Inner code: trellis (convolutional) code
    based on the partial response channel
  • Outer code: convolutional or LDPC code
  • T. Souvignier et al., "Turbo decoding for PR4:
    parallel vs. serial concatenation," ICC 1999

14
Unrolled Iterative Decoder
Channel Observations
Extrinsic
SISO
Intrinsic
The unrolled decoder employs multiple pipelined SISO
stages to achieve the desired throughput rates
(> 1 Gbps)
  • G. Masera et al., "VLSI Architectures for Turbo
    Codes," IEEE Transactions on VLSI Systems, Vol.
    7, No. 3, Sep. 1999.

15
MAP Decoders (BCJR)
  • L. Bahl et al., "Optimal Decoding of Linear
    Codes for Minimizing Symbol Error Rate," IEEE
    Trans. Inform. Theory, March 1974.

16
MAP Algorithm
  • Each bit decision is affected by the received
    values of both prior and future symbols.
  • Bi-directional trellis path propagation:
    forward propagation α(k), backward propagation
    β(k).
  • Backward propagation leads to difficulties in
    windowed approaches.
  • Solution: 3-window algorithm for backward
    propagation.
  • A. Viterbi, "An Intuitive Justification and a
    Simplified Implementation of the MAP Decoder for
    Convolutional Codes," IEEE J. on Selected Areas
    in Comm., Vol. 16, No. 2, Feb. 1998.
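The bidirectional propagation can be sketched as a toy forward-backward pass. The `gamma` interface and all names below are assumptions for illustration; a real decoder works in the log domain, normalizes, and uses windowing as discussed above:

```python
def bcjr_forward_backward(gamma, n_states, n_steps):
    # gamma[k][(s, s2)]: branch probability from state s to s2 at step k,
    # assumed to already include the channel likelihood and a-priori terms.
    alpha = [[0.0] * n_states for _ in range(n_steps + 1)]
    beta = [[0.0] * n_states for _ in range(n_steps + 1)]
    alpha[0][0] = 1.0        # trellis assumed to start in state 0
    beta[n_steps][0] = 1.0   # ... and to be terminated in state 0
    for k in range(n_steps):                  # forward propagation alpha(k)
        for (s, s2), g in gamma[k].items():
            alpha[k + 1][s2] += alpha[k][s] * g
    for k in range(n_steps - 1, -1, -1):      # backward propagation beta(k)
        for (s, s2), g in gamma[k].items():
            beta[k][s] += g * beta[k + 1][s2]
    # the APP of branch (s, s2) at step k is proportional to
    # alpha[k][s] * gamma[k][(s, s2)] * beta[k + 1][s2]
    return alpha, beta
```

The backward recursion is the reason windowed implementations need care: beta(k) depends on future symbols, hence the 3-window schedule cited above.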

17
MAP Decoder Block Architecture
18
Soft Output Viterbi Decoders (SOVA)
  • J. Hagenauer and P. Hoeher, "A Viterbi algorithm
    with soft-decision outputs and its applications,"
    GLOBECOM '89.

19
SOVA Implementation
  • Provides a measure of confidence by comparing the
    difference in path metric between the most likely
    path and the next most likely path.
  • Realize a SOVA decoder by cascading a typical VA
    survival memory unit with a SOVA section.

20
SOVA Implementation
  • The Viterbi algorithm goes through an initial pass
    to determine the most likely (ML) path. This
    includes the Add-Compare-Select and Traceback
    sections.
  • A second traceback operation searches for the
    competing path that
  • differs from the ML decision at the end of the
    traceback, and
  • has the minimum path-metric difference from the
    ML path.
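The two-pass idea can be sketched as follows: for each bit of the ML path, keep the smallest path-metric difference over all competing paths that disagree with the ML decision at that bit. Function and variable names are illustrative, not from the slides:

```python
def sova_reliability(ml_bits, competitors):
    # ml_bits: hard decisions along the most likely path.
    # competitors: list of (bits, metric_difference) for competing paths.
    # Start with infinite confidence, then lower the confidence of every
    # position where a competitor disagrees with the ML decision.
    rel = [float("inf")] * len(ml_bits)
    for comp_bits, delta in competitors:
        for i, (a, b) in enumerate(zip(ml_bits, comp_bits)):
            if a != b:
                rel[i] = min(rel[i], delta)
    # signed soft output: sign from the hard decision, magnitude = confidence
    return [(1 if bit else -1) * r for bit, r in zip(ml_bits, rel)]
```

In hardware the min-update is realized along the second traceback (or by register exchange), rather than over an explicit list of competitors.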

21
Decoders for Low Density Parity Check Codes (LDPC)
22
LDPC Overview
  • LDPC representation by either a parity check
    matrix or a bipartite graph.
  • Message passing: Bit-to-Check / Check-to-Bit
  • Total number of edges: 18,432

R. G. Gallager, "Low-Density Parity-Check Codes,"
IRE Trans. Info. Theory, Vol. 8 (1962), p. 21
23
Construction of Good LDPC Codes
  • Folk wisdom: all codes are good, except those
    that we know of.
  • A randomly-chosen code will turn out to be good
    with high probability.
  • LDPC codes with short cycles are known to exhibit
    inhibited performance.
  • Eliminating short cycles of length 4 leads to a
    Steiner tree problem:
  • Every pair of bit nodes must not have more than 1
    common check node.
  • Every pair of check nodes must not have more than
    1 common bit node.
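The two pairwise conditions are equivalent to the bipartite graph having no cycle of length 4, which can be checked directly on the parity check matrix. A brute-force sketch (names illustrative; real constructions avoid the quadratic pair scan):

```python
from itertools import combinations

def has_length4_cycle(H):
    # H: parity check matrix as a list of rows of 0/1 entries.
    # A 4-cycle exists iff some pair of check nodes (rows)
    # shares more than one bit node (column).
    supports = [frozenset(j for j, v in enumerate(row) if v) for row in H]
    return any(len(a & b) > 1 for a, b in combinations(supports, 2))
```

Checking column pairs instead of row pairs tests the dual condition (no two bit nodes sharing more than one check node); the two are equivalent.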

24
Steiner Tree Solution
  • Matrix based on Euclidean or Projective
    geometries in Galois fields.
  • Rows in PC matrix are shifted cyclically.
  • Non-zero entries not confined within a reasonable
    window width (e.g. 5 of block size).

25
Pipelined Architecture of LDPC Decoder
MEMORY
MEMORY
Bit to Check
Check to Bit
MEMORY
MEMORY
  • Randomness of connectivity in the bipartite graph
    inhibits any kind of memory reuse.
  • Two banks of memory alternating between read and
    write are required.
  • Total memory requirement: 72k words

26
Bit-to-Check Message Computation
Q_i,4 = R_i,1 + R_i,2 + R_i,3
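In the LLR domain this bit-to-check update is simply the sum of the incoming check-to-bit messages, excluding the target check (the extrinsic principle). The sketch below also folds in a channel LLR term, which the slide's equation omits; all names are illustrative:

```python
def bit_to_check(channel_llr, incoming, exclude):
    # incoming: {check_index: R message} for all checks connected to this bit.
    # Message to check `exclude` = channel LLR + all other incoming messages,
    # matching Q_i,4 = R_i,1 + R_i,2 + R_i,3 for a degree-4 bit node.
    return channel_llr + sum(r for j, r in incoming.items() if j != exclude)
```

Leaving out the message from the target check is what keeps the exchanged information extrinsic and the iterations from reinforcing themselves.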
27
Check-to-Bit Computations
R_1,j = f(Q_2,j , Q_3,j , . . . , Q_36,j)
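The slides do not specify f; a common choice is the tanh rule (min-sum is a lower-complexity approximation). An illustrative sketch:

```python
import math

def check_to_bit(incoming, exclude):
    # incoming: {bit_index: Q message} for all bits connected to this check.
    # tanh rule: R = 2 * atanh( prod over i != exclude of tanh(Q_i / 2) ),
    # i.e. the check node's belief about bit `exclude` given the other bits.
    prod = 1.0
    for i, q in incoming.items():
        if i != exclude:
            prod *= math.tanh(q / 2.0)
    return 2.0 * math.atanh(prod)
```

With a single remaining input the rule passes the message through unchanged; with many inputs the weakest (smallest-magnitude) message dominates, which is what min-sum exploits.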
28
Comparison of SISO Decoders
29
Summary of Computational Complexities
30
Summary of Memory Requirements
31
Future
  • Choice of SISO decoder depends on a number of
    variables:
  • SNR of the intended environment
  • Targeted BER performance
  • Message wordlength
  • Latencies
  • Number of iterations
  • Timing recovery
  • Error propagation
  • Moore's Law

32
Conclusion
33
Conclusion
  • Various building blocks for an iterative decoder
    suitable for magnetic recording channels have
    been presented.
  • The proposed timing schedule of the MAP decoder
    allows a high-speed memory access pattern with
    minimal control logic.
  • The SOVA decoder is achieved through minimal
    extensions to a Viterbi decoder and the use of
    high-speed register exchange.
  • The pipelined LDPC decoder suffers from large
    memory requirements despite low computational
    costs.

34
Performance of Turbo Codes