Convolutional Codes: Representation and Encoding
1
Convolutional Codes: Representation and Encoding
  • Many known codes can be modified by adding an extra code symbol or by deleting a symbol
  • Modification can create codes of almost any desired rate
  • Modification can create codes with slightly improved performance
  • The resulting code can usually be decoded with only a slight modification to the decoder algorithm
  • Sometimes the modification process can be applied multiple times in succession

2
Modification to Known Codes
  • 1. Puncturing: delete a parity symbol
  • (n,k) code → (n-1,k) code
  • 2. Shortening: delete a message symbol
  • (n,k) code → (n-1,k-1) code
  • 3. Expurgating: delete some subset of codewords
  • (n,k) code → (n,k-1) code
  • 4. Extending: add an additional parity symbol (e.g., adding an overall parity bit to the (7,4) Hamming code gives an (8,4) code)
  • (n,k) code → (n+1,k) code

3
Modification to Known Codes
  • 5. Lengthening: add an additional message symbol
  • (n,k) code → (n+1,k+1) code
  • 6. Augmenting: add a subset of additional codewords
  • (n,k) code → (n,k+1) code

4
Interleaving
  • We have assumed so far that bit errors are independent from one bit to the next
  • In mobile radio, fading makes bursts of errors likely
  • Interleaving is used to try to make these errors independent again (see the sketch below)

[Diagram: depth of interleaving]
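To make the idea concrete, here is a minimal block-interleaver sketch; the depth and span values are illustrative, not from the deck:

```python
# Block interleaver sketch: write code bits row-by-row into a
# depth x span array, transmit column-by-column. A burst of up to
# `depth` consecutive channel errors then hits at most one bit per
# row after de-interleaving.

def interleave(bits, depth, span):
    assert len(bits) == depth * span
    rows = [bits[i * span:(i + 1) * span] for i in range(depth)]
    return [rows[r][c] for c in range(span) for r in range(depth)]

def deinterleave(bits, depth, span):
    cols = [bits[i * depth:(i + 1) * depth] for i in range(span)]
    return [cols[c][r] for r in range(depth) for c in range(span)]

data = list(range(12))                       # stand-in for 12 code bits
tx = interleave(data, depth=3, span=4)
assert deinterleave(tx, depth=3, span=4) == data
```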
5
Concatenated Codes
  • Two levels of coding
  • Achieves the performance of very long codes while keeping decoding complexity modest
  • Overall rate is the product of the individual code rates (in symbols below)
  • A codeword error occurs only if both codes fail
  • Error probability is found by first evaluating the error probability of the inner decoder and then evaluating the error probability of the outer decoder
  • Interleaving is always used with concatenated coding
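In symbols (notation mine, not the deck's): for an outer code of rate $r_{\text{outer}}$ and an inner code of rate $r_{\text{inner}}$,

$$ r \;=\; r_{\text{outer}} \cdot r_{\text{inner}} $$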

6
Block Diagram of Concatenated Coding Systems
Data Bits → Outer Encoder → Interleaver → Inner Encoder → Modulator → Channel → De-Modulator → Inner Decoder → De-Interleaver → Outer Decoder → Data Out
7
Practical Application: Coding for CD
  • Each channel is sampled at 44,100 samples/second
  • Each sample is quantized with 16 bits
  • Uses a concatenated RS code
  • Both codes are constructed over GF(256) (8 bits/symbol)
  • Outer code is a (28,24) shortened RS code
  • Inner code is a (32,28) extended RS code
  • In between the coders is a (28,4) cross-interleaver
  • Overall code rate is r = 0.75 (worked below)
  • Most commercial CD players don't exploit the full power of the error correction coder
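The overall rate follows from the product rule on the previous slide:

$$ r \;=\; \frac{24}{28} \cdot \frac{28}{32} \;=\; \frac{24}{32} \;=\; 0.75 $$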

8
Practical Application: Galileo Deep Space Probe
  • Uses concatenated coding
  • Inner code: rate 1/2, constraint length 7 convolutional encoder
  • Outer code: (255,223) RS code over GF(256), which corrects any burst errors from the convolutional code
  • Overall code rate is r = (1/2)(223/255) ≈ 0.437
  • A block interleaver holds 2 RS codewords
  • The deep space channel is severely energy limited but not bandwidth limited

9
IS-95 CDMA
  • The IS-95 standard employs the rate (64,6) orthogonal (Walsh) code on the reverse link
  • The inner Walsh code is concatenated with a rate 1/3, constraint length 9 convolutional code

Data Transmission in a 3rd Generation PCS
  • The proposed ETSI standard employs RS codes concatenated with convolutional codes for data communication
  • Requirements:
  • BER of the order of 10^-6
  • Moderate latency is acceptable
  • CDMA2000 uses turbo codes for data transmission
  • ETSI has optional provisions for turbo coding

10
  • A Common Theme from Coding Theory
  • The real issue is the complexity of the decoder
  • For a binary code, we must match 2^n possible received sequences with codewords
  • Only a few practical decoding algorithms have been found:
  • Berlekamp-Massey algorithm for block codes
  • Viterbi algorithm (and similar techniques) for convolutional codes
  • Code designers have focused on finding new codes that work with known algorithms

11
  • Block Versus Convolutional Codes
  • Block codes take k input bits and produce n output bits, where k and n are large
  • there is no data dependency between blocks
  • useful for data communications
  • Convolutional codes take a small number of input bits and produce a small number of output bits each time period
  • data passes through convolutional codes in a continuous stream
  • useful for low-latency communications

12
  • Convolutional Codes
  • k bits are input, n bits are output
  • Here k and n are very small (usually k = 1-3, n = 2-6)
  • The output depends not only on the current set of k input bits, but also on past inputs
  • The number of input bits on which the output depends is called the constraint length K
  • Frequently, we will see that k = 1

13
Example of Convolutional Code: k = 1, n = 2, K = 3 convolutional code
14
Example of Convolutional Code: k = 2, n = 3, K = 2 convolutional code
15
  • Representations of Convolutional Codes
  • Encoder Block Diagram (shown above)
  • Generator Representation
  • Trellis Representation
  • State Diagram Representation

16
  • Convolutional Code Generators
  • One generator vector for each of the n output bits
  • The length of the generator vector for a rate r = k/n code with constraint length K is K
  • The bits in the generator, read left to right, represent the connections in the encoder circuit: a 1 represents a tap on the shift register, a 0 represents no tap
  • Encoder vectors are often given in octal representation (a sketch follows)
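A minimal Python sketch of encoding from generator vectors. The specific pair (101, 111), i.e. (5, 7) in octal, and its ordering are assumptions chosen to reproduce the encoding example printed on slide 23:

```python
# Rate-1/2, K = 3 convolutional encoder driven by its generator
# vectors. A 1 taps the corresponding shift-register stage; the
# leftmost position holds the newest input bit.
G = [(1, 0, 1), (1, 1, 1)]           # (5, 7) in octal; assumed pair

def conv_encode(bits, gens=G):
    K = len(gens[0])                 # constraint length
    state = [0] * (K - 1)            # previous K-1 input bits
    out = []
    for u in bits:
        window = [u] + state         # newest bit first
        for g in gens:               # one output bit per generator
            out.append(sum(b & c for b, c in zip(window, g)) % 2)
        state = [u] + state[:-1]     # shift the register
    return out
```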

17
Example of Convolutional Code: k = 1, n = 2, K = 3 convolutional code
18
Example of Convolutional Code: k = 2, n = 3, K = 2 convolutional code
19
  • State Diagram Representation
  • The contents of the shift register make up the "state" of the code
  • The most recent input is the most significant bit of the state; the oldest input is the least significant bit (this convention is sometimes reversed)
  • Arcs connecting states represent allowable transitions
  • Arcs are labeled with the output bits transmitted during the transition

20
Example of State Diagram Representation of Convolutional Codes: k = 1, n = 2, K = 3 convolutional code
21
  • Trellis Representation of Convolutional Code
  • The state diagram is unfolded as a function of time
  • Time is indicated by movement towards the right
  • The contents of the shift register make up the "state" of the code
  • The most recent input is the most significant bit of the state; the oldest input is the least significant bit
  • Allowable transitions are denoted by connections between states
  • Transitions may be labeled with the transmitted bits

22
Example of Trellis Diagram: k = 1, n = 2, K = 3 convolutional code
23
Encoding Example Using Trellis Representation: k = 1, n = 2, K = 3 convolutional code
  • We begin in state 00
  • Input data: 0 1 0 1 1 0 0
  • Output: 00 11 01 00 10 10 11 (reproduced below)
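Feeding the slide's input to the encoder sketch from slide 16 reproduces this output (again assuming the (5, 7) generator pair and output order noted there):

```python
msg = [0, 1, 0, 1, 1, 0, 0]
code = conv_encode(msg)                      # encoder sketch from slide 16
pairs = ["".join(map(str, code[i:i + 2])) for i in range(0, len(code), 2)]
print(" ".join(pairs))                       # -> 00 11 01 00 10 10 11
```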

24
  • Distance Structure of a Convolutional Code
  • The Hamming distance between any two distinct code sequences v1 and v2 is the number of bits in which they differ
  • The minimum free Hamming distance dfree of a convolutional code is the smallest Hamming distance separating any two distinct code sequences

25
  • Search for Good Codes
  • We would like convolutional codes with large free distance
  • must avoid catastrophic codes
  • Generators for the best convolutional codes are generally found via computer search
  • the search is constrained to codes with regular structure
  • the search is simplified because any permutation of identical generators is equivalent
  • the search is simplified because of linearity: dfree is the minimum weight of any nonzero code sequence (see the sketch below)
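A hedged sketch of how such a search can measure free distance. By linearity, dfree is the minimum output weight over all paths that diverge from and re-merge with the all-zero state, which a shortest-path search finds directly (generator format as in the earlier encoder sketch):

```python
import heapq

def dfree(gens):
    K = len(gens[0])
    zero = (0,) * (K - 1)                    # the all-zero state

    def step(state, u):                      # next state and output weight
        window = (u,) + state
        w = sum(sum(b & c for b, c in zip(window, g)) % 2 for g in gens)
        return (u,) + state[:-1], w

    start, w0 = step(zero, 1)                # force a divergence (input 1)
    best = {start: w0}
    heap = [(w0, start)]
    while heap:                              # Dijkstra back to the zero state
        w, s = heapq.heappop(heap)
        if s == zero:
            return w                         # first re-merge = minimum weight
        if w > best.get(s, w):
            continue                         # stale heap entry
        for u in (0, 1):
            t, dw = step(s, u)
            if w + dw < best.get(t, float("inf")):
                best[t] = w + dw
                heapq.heappush(heap, (w + dw, t))

print(dfree([(1, 0, 1), (1, 1, 1)]))         # -> 5 for the K = 3 example
```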

26
Best Rate 1/2 Codes
27
Best Rate 1/3 Codes
28
Best Rate 2/3 Codes
29
  • Summary of Convolutional Codes
  • Convolutional codes are useful for real-time applications because they can be continuously encoded and decoded
  • We can represent convolutional codes as generators, block diagrams, state diagrams, and trellis diagrams
  • We want to design convolutional codes to maximize free distance while maintaining non-catastrophic performance

30
Viterbi Algorithm for Convolutional Code
31
Convolutional Encoder
  • A convolutional code is specified by three parameters (n,k,K) or (k/n,K), where
  • Rc = k/n is the rate efficiency, determining the number of data bits per coded bit
  • K is the size of the shift register
  • The constraint length is nK, i.e., each input bit has influence on nK output bits

32
Convolutional Encoder (2,1,3)
33
Encoder
[Diagram: a data block followed by tail bits (0 0) is shifted through the encoder to produce the codeword]
34
Effective code rate: with L data bits (k = 1 assumed) and K - 1 tail bits to flush the encoder, the encoder emits n(L + K - 1) code bits, so the effective rate is L / (n(L + K - 1)), slightly below k/n.

Current state | Input | Next state | Output
S0 (00)       |   0   | S0         |   00
S0 (00)       |   1   | S2         |   11
S1 (01)       |   0   | S0         |   11
S1 (01)       |   1   | S2         |   00
S2 (10)       |   0   | S1         |   10
S2 (10)       |   1   | S3         |   01
S3 (11)       |   0   | S1         |   01
S3 (11)       |   1   | S3         |   10
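This table can be regenerated mechanically from the generators; a sketch, assuming the (7, 5) octal pair with the most recent input as the most significant state bit (consistent with the outputs printed above):

```python
GENS = [(1, 1, 1), (1, 0, 1)]        # (7, 5) octal, assumed
LABEL = {(0, 0): "S0", (0, 1): "S1", (1, 0): "S2", (1, 1): "S3"}

def step(state, u):                  # state = (s1, s2), s1 = newest input
    window = (u,) + state
    out = "".join(str(sum(b & c for b, c in zip(window, g)) % 2)
                  for g in GENS)
    return (u, state[0]), out        # shift in u, emit the n output bits

for s in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for u in (0, 1):
        nxt, out = step(s, u)
        print(LABEL[s], u, "->", LABEL[nxt], out)
```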
35
Trellis Diagram
  • The trellis diagram is an extension of the state diagram that shows the passage of time

36
[Trellis diagram: states S0 (00), S2 (10), S1 (01), S3 (11), annotated with input bits, output bits, and tail bits]
37
Maximum Likelihood
  • If the input message sequences are equally likely, the optimum decoder that minimizes the probability of error is the maximum likelihood decoder
  • Choose the path with the maximum metric among all the paths in the trellis; this path is the closest path to the transmitted sequence
  • Hard decision: choose the path with minimum Hamming distance from the received sequence
  • Soft decision: choose the path with minimum Euclidean distance to the received sequence

38
Viterbi Algorithm
  • The Viterbi algorithm performs maximum likelihood decoding
  • It finds the path through the trellis with the best metric (minimum Hamming distance or minimum Euclidean distance)
  • At each step in the trellis, it compares the metrics of all paths entering each state and keeps only the path with the best metric (minimum Hamming distance) together with its metric; the selected path is known as the survivor path
  • It proceeds through the trellis by eliminating the least likely paths

39
Procedure
  • Label all the branches in the trellis with their corresponding branch metric
  • For each state in the trellis at time ti, denoted Si (i = 0,1,2,3), compute a parameter G(Si, ti)
  • Initialize the metric of the starting (all-zero) state to zero: G(S0, t1) = 0
  • At time ti, compute the partial path metrics for all the paths entering each state
  • Set G(Si, ti) equal to the best partial path metric entering each state at time ti
  • Keep the survivor path and delete the dead paths from the trellis (see the sketch below)
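A compact hard-decision sketch of this procedure for the (2,1,3) code above, using the Hamming metric and the assumed (7, 5) octal generators; on the received sequence of the worked example that follows, it reproduces the printed survivor metrics and decoded bits:

```python
GENS = [(1, 1, 1), (1, 0, 1)]              # assumed (7, 5) generator pair

def step(state, u):                        # state = (s1, s2), s1 = newest
    window = (u,) + state
    out = tuple(sum(b & c for b, c in zip(window, g)) % 2 for g in GENS)
    return (u, state[0]), out

def viterbi(received):                     # received: list of 2-bit tuples
    paths = {(0, 0): (0, [])}              # state -> (metric, decoded bits)
    for r in received:
        nxt = {}
        for s, (m, bits) in paths.items():
            for u in (0, 1):               # extend every survivor
                t, out = step(s, u)
                cand = (m + sum(a != b for a, b in zip(out, r)), bits + [u])
                if t not in nxt or cand[0] < nxt[t][0]:
                    nxt[t] = cand          # add-compare-select: keep survivor
        paths = nxt
    return min(paths.values())             # best (metric, bits) at the end

rx = [(1, 1), (1, 0), (0, 0), (1, 1), (1, 1)]
print(viterbi(rx))                         # -> (1, [1, 0, 1, 0, 0])
```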

40
[Trellis diagrams, steps 1-2: branch metrics shown separately for 0-input and 1-input transitions. Received: 11, then 11 10. Survivor metrics: G(1) = 2, 0 (S0, S2); G(2) = 3, 3, 0, 2 (S0, S2, S1, S3). Decoded message so far: 1, then 1 0.]
41
[Trellis diagram, step 3. Received: 11 10 00. Candidate metrics G(3) = 3,2 (S0), 5,0 (S2), 4,3 (S1), 4,3 (S3); survivors G(3) = 2, 0, 3, 3. Decoded message: 1 0 1.]
42
[Trellis diagram, step 4. Received: 11 10 00 11. Candidate metrics G(4) = 4,3 (S0), 2,5 (S2), 1,4 (S1), 1,4 (S3); survivors G(4) = 3, 2, 1, 1. Decoded message: 1 0 1 1.]
43
[Trellis diagram, step 5. Received: 11 10 00 11 11. Candidate metrics G(5) = 5,1 (S0), 3,3 (S2), 3,2 (S1), 3,2 (S3); survivors G(5) = 1, 3, 2, 2. Decoded message: 1 0 1 0 0.]
44
Software Implementation
[Diagram: one trellis stage for software implementation, with states a = S0 (00), b = S2 (10), c = S1 (01), d = S3 (11) and branch metrics daa, dab, dbc, dbd, dca, dcb, ddc, ddd on the allowed transitions]
45
Add Compare Select Computation
[Diagram: add-compare-select unit. Path metrics Ga and Gc are added to branch metrics daa/dca and dab/dcb; each candidate pair is compared and one of two is selected, yielding the updated Ga and Gb]
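The heart of the update is the ACS operation in the diagram; a one-state sketch, with names mirroring the figure:

```python
def acs(Ga, daa, Gc, dca):
    cands = (Ga + daa, Gc + dca)   # add: path metric + branch metric
    k = cands.index(min(cands))    # compare the two entering paths
    return cands[k], k             # select 1 of 2: new metric, which branch
```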
46
Problems with the Viterbi Algorithm
  • Computational complexity increases exponentially with the constraint length
  • The commonly used Hamming distance metric is sub-optimal and therefore loses some performance
  • The Viterbi algorithm is an ML (optimum) algorithm if Euclidean distance is used

47
Applications of the Viterbi Algorithm
  • Convolutional decoding and channel trellis decoding
  • Speech and character recognition
  • Optical character recognition
  • DNA sequence analysis

48
Turbo Codes: History
  • IEEE International Conference on Communications, 1993, in Geneva
  • Berrou, Glavieux, and Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes"
  • Provided virtually error-free communication at data rate/power efficiencies beyond what most experts thought possible

49
Turbo Codes
  • 30 years earlier, Forney:
  • non-systematic
  • non-recursive combinations of convolutional encoders
  • Berrou et al., 1993:
  • recursive systematic encoders
  • based on pseudo-random permutations
  • works better at high rates or high levels of noise
  • return-to-zero sequences

50
Turbo Encoder
[Diagram: turbo encoder producing the systematic output X and two parity outputs Y1 and Y2]
51
Turbo Codes
  • Parallel concatenation
  • The k-bit block is encoded N times with different versions (orders)
  • The probability that the sequence remains RTZ is 1/2^(Nv)
  • Randomness: with 2 encoders, an error probability of 10^-5
  • Permutations are used to improve dmin

52
Recursive Systematic Coders
53
Return-to-Zero Sequences
  • A non-recursive encoder's state goes to zero after v zero inputs
  • An RSC encoder's state goes to zero with probability P = 1/2^v
  • If one wants to transform a convolutional code into a block code, this is automatically built in
  • The initial state i will repeat after encoding k bits

54
Convolutional Encoders
55
Turbo Decoding
56
Turbo Decoding
  • Criterion: for n probabilistic processors working together to estimate common symbols, all of them should agree on the symbols with the same probabilities as a single decoder would

57
Turbo Decoder
58
Turbo Decoder
  • The inputs to the decoders are the log-likelihood ratios (LLR) for the individual symbols d
  • The LLR value for the symbol d is defined (Berrou) as shown below
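The formula itself was an image in the original deck; Berrou's standard definition is:

$$ \Lambda(d) \;=\; \log \frac{P(d = 1 \mid \text{observation})}{P(d = 0 \mid \text{observation})} $$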

59
Turbo Decoder
  • The SISO decoder re-evaluates the LLR, utilizing the local Y1 and Y2 redundancies to improve its confidence
  • The value z is the extrinsic information determined by the same decoder; it is negative if d is 0 and positive if d is 1
  • The updated LLR is fed into the other decoder, which calculates its own z and updates the LLR, over several iterations
  • After several iterations, both decoders converge to a value for that symbol

60
Turbo Decoding
  • Assume:
  • ui: modulating bit (0 or 1)
  • yi: received value, the output of a correlator; can take any value (soft)
  • The turbo decoder input is the log-likelihood ratio R(ui) = log [ P(yi | ui = 1) / P(yi | ui = 0) ]
  • For BPSK, R(ui) = 2 yi / σ²
  • For each data bit, calculate the LLR given that a sequence of bits was sent (a sketch follows)
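A tiny sketch of the channel LLR for BPSK over AWGN, following the formula above; the 0 → -1, 1 → +1 mapping and the variance value are assumptions:

```python
import numpy as np

def channel_llr(y, sigma2):
    # R(u_i) = 2 * y_i / sigma^2 for BPSK with unit-energy symbols
    return 2.0 * np.asarray(y, dtype=float) / sigma2

y = np.array([0.9, -1.1, 0.3])      # soft correlator outputs
llr = channel_llr(y, sigma2=0.5)    # sigma2 is an assumed noise variance
hard = (llr > 0).astype(int)        # hard decision: positive LLR -> bit 1
```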

61
Turbo Decoding
  • Compare the LLR output to see whether the estimate is towards 0 or 1, then take a hard decision (HD)

62
Turbo Codes Performance
63
Turbo Codes: Applications
  • Deep space exploration
  • ESA's SMART-1 probe
  • JPL-equipped Pathfinder, 1997
  • Mobile 3G systems
  • in use in Japan
  • UMTS
  • NTT DoCoMo
  • turbo codes: pictures/video/mail
  • convolutional codes: voice