TURBO CODES

1
TURBO CODES
  • Michelle Stoll

2
A Milestone in ECCs
  • Based on convolutional codes
  • multiple encoders used serially to create a
    codeword
  • defined as a triple (n, k, m)
  • n encoded bits are generated for every k data
    bits received, where m is the number of memory
    registers used
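
The (n, k, m) triple can be made concrete with a minimal sketch: a rate-1/2 convolutional encoder with n = 2, k = 1, and m = 2 memory registers. The generator taps (7 and 5 in octal) are a common textbook choice, assumed here for illustration rather than taken from the slides.

```python
# Minimal nonrecursive convolutional encoder: n = 2 output bits per
# k = 1 input bit, with m = 2 memory registers (rate 1/2).
def conv_encode(bits):
    s1 = s2 = 0                  # the m = 2 memory registers
    out = []
    for d in bits:
        out.append(d ^ s1 ^ s2)  # generator 7 octal: taps 1,1,1
        out.append(d ^ s2)       # generator 5 octal: taps 1,0,1
        s1, s2 = d, s1           # shift the registers
    return out

print(conv_encode([1, 0, 1, 1]))  # 8 coded bits for 4 data bits
```

Every input bit produces n = 2 output bits, so the coded stream is exactly twice the data length.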

3
Enhancements
  • Added features include
  • concatenated recursive systematic encoders
  • pseudo-random interleavers
  • soft input/soft output (SISO) iterative decoding

4
Accolades
  • TCs nearly achieve Shannon's channel capacity
    limit; first to get within 0.7 dB
  • Do not require high transmission power to deliver
    a low bit error rate
  • Considered the most powerful class of ECCs to date

5
Sidebar: Shannon Limit
  • Defines the fundamental transmission capacity of
    a communication channel
  • Claude Shannon from Bell Labs proved
    mathematically that totally random sets of
    codewords could achieve channel capacity,
    theoretically permitting error-free transmission

6
Shannon Limit, cont.
  • Use of random sets of codewords is not a
    practical solution
  • channel capacity can only be attained as the
    block of k data bits mapped to n code symbols
    approaches infinity
  • The cost of a code, in terms of the computation
    required to decode it, increases closer to the
    Shannon limit
  • Coding paradox: find good codewords that deliver
    BERs close to the Shannon limit, but are not
    overly complex
  • ECCs addressing both have been elusive for years
  • until the advent of TCs, the best codes were
    outside 2 dB of Shannon's limit
  • "All codes are good, except the ones we can think
    of." (folk theorem)
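
The dB figures above can be checked against Shannon's capacity formula. This sketch assumes the standard AWGN-channel bound Eb/N0 >= (2^(2R) - 1) / (2R) for a code of rate R: it gives 0 dB for a rate-1/2 code (the baseline against which the 0.7 dB result is measured) and about -1.59 dB as the rate goes to zero (the ultimate Shannon limit).

```python
import math

def min_ebno_db(rate):
    """Minimum Eb/N0 (dB) for reliable transmission at code rate R
    over the AWGN channel: Eb/N0 >= (2**(2R) - 1) / (2R)."""
    return 10 * math.log10((2 ** (2 * rate) - 1) / (2 * rate))

print(min_ebno_db(0.5))              # 0.0 dB for a rate-1/2 code
print(10 * math.log10(math.log(2)))  # about -1.59 dB as R -> 0
```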

7
Performance Bounds
The performance floor is in the vicinity of a BER
of 10^-5
8
Turbo Code History
  • Claude Berrou, Alain Glavieux, and Punya
    Thitimajshima presented their paper "Near Shannon
    Limit Error-Correcting Coding and Decoding:
    Turbo Codes" in 1993
  • Their results were received with great skepticism
  • in fact, the paper was initially rejected
  • independent researchers later verified their
    simulated BER performance

9
Anatomy: Encoder
  • Two encoders, a parallel concatenation of codes
  • can use the same clock, decreasing delay
  • blocks of n data bits are sent to each encoder
  • encoder 1 receives the bits as-is, encodes the
    parity bits y1, and concatenates them with the
    original data bits
  • encoder 2 receives a pre-shuffled bit string from
    the interleaver and encodes the parity bits y2
  • the multiplexer receives a string of size 3n:
    parity bits and original data bits from encoder
    1, and parity bits from encoder 2
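
The parallel structure described above can be sketched as follows. The small recursive systematic convolutional (RSC) encoder, its feedback/feedforward taps (7 and 5 octal), and the interleaver seed are illustrative assumptions, not the exact components from the slides; only the data / y1 / y2 arrangement follows the text.

```python
import random

def rsc_parity(bits):
    """Recursive systematic convolutional encoder (m = 2): returns only
    the parity stream; the systematic bits are the input itself."""
    s1 = s2 = 0
    parity = []
    for d in bits:
        a = d ^ s1 ^ s2        # feedback (recursive) term, taps 7 octal
        parity.append(a ^ s2)  # feedforward taps 5 octal
        s1, s2 = a, s1
    return parity

def turbo_encode(bits, perm):
    y1 = rsc_parity(bits)                      # encoder 1: bits as-is
    y2 = rsc_parity([bits[i] for i in perm])   # encoder 2: interleaved bits
    return bits + y1 + y2                      # multiplexer: 3n bits total

data = [0, 1, 1, 0, 1]
perm = list(range(len(data)))
random.Random(42).shuffle(perm)                # pseudo-random interleaver
print(turbo_encode(data, perm))                # 15 bits for 5 data bits
```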

10
Turbo Encoder Schematic
Example: original data 01101. Encoder 1
creates parity bits 10110 and appends the original
01101. Encoder 2 receives the pre-shuffled bit
string and creates parity bits 11100. The
multiplexer receives 011011011011100.
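
Assuming the multiplexer simply concatenates the data with the two parity streams (one plausible reading of the schematic), the 15-bit string on the slide checks out:

```python
data, y1, y2 = "01101", "10110", "11100"
muxed = data + y1 + y2   # data bits, then parity from each encoder
print(muxed)             # 011011011011100, the string on the slide
```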
11
Non-Uniform Interleaver
  • An irregular permutation map is used to produce a
    pseudo-random interleaver (no block interleaving)
  • Non-uniform interleaving assures a maximum
    scattering of data, introducing quasi-random
    behavior in the code
  • recall Shannon's observation
  • Operates between modular encoders to permute poor
    input sequences (low-weight CWs) into good input
    sequences producing large-weight output CWs
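
A pseudo-random interleaver is ultimately just a fixed permutation shared by encoder and decoder. A minimal sketch using a seeded shuffle (the seed value is an arbitrary assumption) shows the permutation and its inverse, the de-interleaver:

```python
import random

def make_interleaver(n, seed=1):
    perm = list(range(n))
    random.Random(seed).shuffle(perm)   # fixed pseudo-random permutation
    return perm

def interleave(bits, perm):
    return [bits[i] for i in perm]

def deinterleave(bits, perm):
    out = [0] * len(perm)
    for j, i in enumerate(perm):        # invert the permutation
        out[i] = bits[j]
    return out

perm = make_interleaver(8)
x = [1, 0, 0, 1, 1, 1, 0, 0]
assert deinterleave(interleave(x, perm), perm) == x  # round-trips exactly
```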

12
Anatomy: Decoder
  • The decoder is the most complex aspect of turbo
    codes
  • and imposes the greatest latency in the process,
    as it is serial and iterative
  • Two constituent decoders try to solve the same
    problem from different perspectives
  • The decoders make soft decisions about data
    integrity, passing extrinsic bit reliability
    information back and forth
  • hence the name "turbo," in reference to a turbo
    engine

13
Decoder, cont.
  • Inspects analog signal level of the received bits
  • then turns the signal into integers which lend
    confidence to what the value should actually be
  • Next, examines parity bits and assigns bit
    reliabilities for each bit
  • Bit reliabilities are expressed as log likelihood
    ratios that vary between a positive and negative
    bound
  • in practice, this bound is quite large, between
    -127 and 127
  • the closer the LLR is to one side, the greater
    the confidence assigned one way or the other.

14
Decoder, cont. Log Likelihood Ratio (LLR)
  • The probability that a data bit d = 1, Pr(d = 1),
    is expressed as a log likelihood ratio
  • What is passed from one decoder to the other are
    bit reliabilities
  • each decoder's computations with respect to the
    estimation of d, without taking its own input
    into account
  • The input related to d is thus a single shared
    piece of information
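
The formula itself did not survive the slide export; the standard log likelihood ratio it refers to is:

```latex
L(d) = \ln \frac{\Pr(d = 1)}{\Pr(d = 0)}
```

A positive L(d) favors d = 1, a negative value favors d = 0, and the magnitude is the confidence in that decision.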

15
Decoder, cont.
  • Decoder modules dec1 and dec2 receive input
  • dec1 passes its bit reliability estimate through
    the interleaver to dec2
  • if dec1 was successful, it will have passed few
    or no errors to dec2
  • Decoder module dec2 processes its input as well
    as the bit reliability from dec1
  • refines the confidence estimate, then passes it
    to the de-interleaver
  • This completes the first iteration
  • If no further refinement is needed (i.e.
    acceptable confidence), the data is decoded and
    passed to the upper layer
  • Otherwise, the output is passed back to dec1 for
    another iteration

16
Turbo Decoder Schematic
17
Decoding Drawbacks
  • To achieve near-optimum results, a relatively
    large number of decoding iterations is required
    (on the order of 10 to 20)
  • This increases computational complexity and
    output delay
  • one way to mitigate delay is to use a stop rule
  • select some pre-determined number of iterations
    to perform
  • if convergence is detected before that number is
    reached, stop
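
The stop rule can be sketched as a loop that quits early once every bit reliability clears a confidence threshold. The decoder step here is a toy stand-in that merely sharpens the LLRs each pass; it is not a real constituent decoder, and the threshold is an arbitrary assumption.

```python
def iterative_decode(llrs, max_iters=20, threshold=10.0):
    """Run up to max_iters passes, stopping once every |LLR| clears
    the threshold. Returns hard decisions and the pass count."""
    for it in range(1, max_iters + 1):
        llrs = [2 * x for x in llrs]   # toy stand-in for one dec1/dec2 pass
        if all(abs(x) >= threshold for x in llrs):
            break                      # stop rule: converged early
    return [1 if x > 0 else 0 for x in llrs], it

bits, iters = iterative_decode([0.8, -1.2, 0.5, -0.3])
print(bits, iters)   # hard decisions reached well before 20 iterations
```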

18
Puncturing
  • Another way to address latency is through code
    puncturing
  • puncturing changes the code rate, k/n, without
    changing the underlying encoder
  • instead of transmitting certain redundant values
    in the codeword, these values are simply not
    transmitted
  • e.g. a rate-1/2 code can be increased to a rate-
    2/3 code by dropping every other output bit from
    the parity stream
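
A minimal sketch of the rate-1/2 to rate-2/3 puncturing described above, assuming the codeword is held as separate data and parity streams: dropping every other parity bit leaves 3 transmitted bits for every 2 data bits.

```python
def puncture(data, parity):
    """Rate 1/2 -> rate 2/3: transmit all data bits but only
    every other parity bit."""
    return data + parity[::2]

data   = [1, 0, 1, 1, 0, 0]   # k = 6 data bits
parity = [0, 1, 1, 0, 1, 1]   # rate 1/2: one parity bit per data bit
tx = puncture(data, parity)
print(len(data) / len(tx))    # 6 / 9, i.e. rate 2/3
```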

19
Complexity
  • Because the decoder is composed of two
    constituent decoders, it is twice as complex as a
    conventional decoder when performing a single
    iteration
  • two iterations require twice that computation,
    rendering it four times as complex as a
    conventional decoder

20
Latency
  • Latency on the decoding side is the biggest
    drawback of Turbo Codes
  • Decoding performance is influenced by three broad
    factors: interleaver size, number of iterations,
    and the choice of decoding algorithm
  • these can be manipulated, with consequences

21
Ongoing Research
  • Turbo coding is responsible for a renaissance in
    coding research
  • Turbo codes and turbo code hybrids are being
    applied to numerous problems
  • Multipath propagation
  • Low-density parity check (LDPC) codes
  • Software implementation: turbo decoding at 300
    kbit/s using 10 iterations per frame; with a
    stopping rule in place, the speed can be doubled
    or tripled

22
Turbo Codes in Practice
  • Turbo codes have made steady inroads into a
    variety of practical applications
  • deep space
  • mobile radio
  • digital video
  • long-haul terrestrial wireless
  • satellite communications
  • Not practical for real-time voice

23
More Information
  • Excellent high-level overview:
  • Guizzo, Erico. "Closing in on the Perfect
    Code," IEEE Spectrum, March 2004.
  • Very informative four-part series on various
    aspects of TCs:
  • Gumas, Charles Constantine. "Turbo codes rev up
    error-correcting performance" (part I in the
    series), EE Times Network,
    http://archive.chipcenter.com/dsp/DSP000419F1.html
  • The paper that started it all:
  • Berrou, Glavieux, and Thitimajshima. "Near
    Shannon Limit Error-Correcting Coding and
    Decoding: Turbo Codes," Ecole Superieure des
    Telecommunications de Bretagne, France, 1993.
  • A complete bibliography will soon be available on
    my CS522 page