Title: EEE436
EEE436 - DIGITAL COMMUNICATION
Coding
En. Mohd Nazri Mahmud, MPhil (Cambridge, UK), BEng (Essex, UK)
nazriee_at_eng.usm.my, Room 2.14
Error-Correcting Capability of the Convolutional Code
- The error-correcting capability of a convolutional code is determined by its constraint length K = L + 1, where L is the number of message bits held in the shift register, and by its free distance, dfree.
- The constraint length of a convolutional code, expressed in terms of message bits, is equal to the number of shifts over which a single message bit can influence the encoder output.
- In an encoder with an L-stage shift register, the memory of the encoder equals L message bits, and K = L + 1 shifts are required for a message bit to enter the shift register and finally come out. Thus the constraint length is K.
- The constraint length determines the maximum free distance of a code.
- The free distance is equal to the minimum Hamming distance between any two codewords in the code.
- A convolutional code can correct t errors if dfree > 2t.
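The condition dfree > 2t fixes the number of correctable errors. A minimal sketch in plain Python (the function name is illustrative):

```python
def correctable_errors(d_free: int) -> int:
    """Largest t satisfying d_free > 2*t, i.e. t = floor((d_free - 1) / 2)."""
    return (d_free - 1) // 2

# A code with free distance 5 can correct up to 2 errors:
print(correctable_errors(5))  # -> 2
```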
Error-correction
- The free distance can be obtained from the state diagram by splitting node a into a0 and a1.
Rules
1. A branch multiplies the signal at its input node by the transmittance characterising that branch.
2. A node with incoming branches sums the signals produced by all of those branches.
3. The signal at a node is applied equally to all the branches outgoing from that node.
4. The transfer function of the graph is the ratio of the output signal to the input signal.
- The exponent of D on a branch describes the Hamming weight of the encoder output corresponding to that branch.
- The exponent of L is always equal to one, since the length of each branch is one.
- Let T(D, L) denote the transfer function of the signal-flow graph.
- Using rules 1, 2 and 3, we obtain the following input-output relations:
  b = D^2 L a0 + L c
  c = DL b + DL d
  d = DL b + DL d
  a1 = D^2 L c
- Solving this set of equations for the ratio a1/a0, the transfer function of the graph is given by
  T(D, L) = D^5 L^3 / (1 - DL(1 + L))
- Using the binomial expansion and power series, we obtain the expression for the distance transfer function (with L = 1) as
  T(D, 1) = D^5 + 2D^6 + 4D^7 + ...
- Since the free distance is the minimum Hamming distance between any two codewords in the code, and the distance transfer function T(D, 1) enumerates the number of codewords that are a given distance apart, it follows that the exponent of the first term in the expansion of T(D, 1) defines the free distance: dfree = 5.
- Therefore the (2,1,2) convolutional encoder with constraint length K = 3 can only correct up to t = 2 errors (since dfree = 5 > 2t).
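The expansion above can be generated mechanically. With L = 1 the transfer function reduces to T(D, 1) = D^5 / (1 - 2D), whose geometric-series expansion gives the distance spectrum; a short sketch in plain Python:

```python
def distance_spectrum(num_terms: int) -> dict:
    """First few series coefficients of T(D,1) = D^5 / (1 - 2D).

    Expanding 1/(1 - 2D) as the geometric series sum_j (2D)^j gives
    coefficient 2^j for the term D^(5 + j).
    Returns {distance: number of codewords at that distance}.
    """
    return {5 + j: 2 ** j for j in range(num_terms)}

spectrum = distance_spectrum(3)
d_free = min(spectrum)            # exponent of the first term in the expansion
t = (d_free - 1) // 2             # correctable errors, from d_free > 2t
print(spectrum)                   # -> {5: 1, 6: 2, 7: 4}
print(d_free, t)                  # -> 5 2
```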
Error-correction

Constraint length, K    Maximum free distance, dfree
2                       3
3                       5
4                       6
5                       7
6                       8
7                       10
8                       10
Turbo Codes
- A relatively new class of convolutional codes, first introduced in 1993.
- A basic turbo encoder is a recursive systematic encoder that employs two convolutional encoders (recursive systematic convolutional, or RSC) in parallel, where the second encoder is preceded by a pseudorandom interleaver that permutes the symbol sequence.
- Turbo codes are also known as Parallel Concatenated Codes (PCC).
[Block diagram: the message bits feed RSC encoder 1 directly (producing parity bits y1k) and RSC encoder 2 via an interleaver (producing parity bits y2k); the systematic bits xk and both parity streams pass through a puncture/MUX stage to the transmitter.]
Turbo Codes
- The input data stream is applied directly to encoder 1, and a pseudorandomly reordered version of the same data stream is applied to encoder 2. Both encoders produce parity bits.
- The parity bits and the original bit stream are multiplexed and then transmitted.
- The block size is determined by the size of the interleaver (for example, 65,536 is common).
- Puncturing is applied to remove some parity bits so as to maintain the code rate at 1/2, for example by eliminating the odd-indexed parity bits from the first RSC and the even-indexed parity bits from the second RSC.
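The multiplex-and-puncture step can be sketched in plain Python. The alternating odd/even pattern follows the example above; the function name and bit values are illustrative:

```python
def turbo_mux_puncture(systematic, parity1, parity2):
    """Multiplex systematic bits with punctured parity bits for rate 1/2.

    Keeps the even-indexed parity bits from RSC 1 and the odd-indexed
    parity bits from RSC 2 (the rest are punctured), so every message
    bit is transmitted together with exactly one parity bit.
    """
    out = []
    for k, x in enumerate(systematic):
        y = parity1[k] if k % 2 == 0 else parity2[k]
        out.extend([x, y])
    return out

x  = [1, 0, 1, 1]   # systematic bits (illustrative values)
y1 = [0, 1, 1, 0]   # parity from RSC encoder 1
y2 = [1, 1, 0, 1]   # parity from RSC encoder 2
print(turbo_mux_puncture(x, y1, y2))  # 8 transmitted bits for 4 message bits
```

Without puncturing, each message bit would carry two parity bits (rate 1/3); the pattern above restores rate 1/2.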
RSC encoder for Turbo encoding
[Figure: impulse responses of a recursive and a non-recursive encoder; the recursive encoder's output contains many more 1s.]
More 1s in the recursive encoder's output gives better error performance.
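A recursive encoder can be sketched in plain Python. This is a minimal illustration, not the exact encoder from the slides: the generator polynomials (feedback 1 + D + D^2, feedforward 1 + D^2) are assumed for the example.

```python
def rsc_encode(bits):
    """Systematic and parity outputs of a memory-2 RSC encoder.

    Assumed generators (for illustration): feedback 1 + D + D^2,
    feedforward 1 + D^2, i.e. G(D) = [1, (1 + D^2)/(1 + D + D^2)].
    """
    s1 = s2 = 0                 # two-stage shift register
    systematic, parity = [], []
    for u in bits:
        a = u ^ s1 ^ s2         # feedback (recursive) term
        p = a ^ s2              # feedforward tap 1 + D^2
        systematic.append(u)
        parity.append(p)
        s1, s2 = a, s1          # shift the register
    return systematic, parity

# A single 1 followed by zeros: the feedback keeps the parity stream alive.
print(rsc_encode([1, 0, 0, 0, 0, 0, 0])[1])  # -> [1, 1, 1, 0, 1, 1, 0]
```

The impulse response of the recursive encoder never dies out (the parity stream keeps producing 1s), which is the "more 1s" property noted above; a non-recursive encoder's impulse response ends after K shifts.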
Turbo decoding
- A turbo decoder consists of two maximum a posteriori (MAP) decoders and a feedback path.
- Decoding operates on the noisy versions of the systematic bits and the two sets of parity bits, in two decoding stages, to produce an estimate of the original message bits.
- The first decoder takes the information from the received signal and calculates the a posteriori probability (APP) value.
- This value is then used as the a priori probability for the second decoder.
- The output is then fed back to the first decoder, and the process repeats in an iterative fashion, with each iteration producing more refined estimates.
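The iterative exchange can be sketched at a high level in plain Python. This is only a skeleton of the feedback structure described above: the MAP decoders themselves are passed in as a function argument (a real implementation would use the BCJR algorithm), and all names are illustrative.

```python
def turbo_decode(llr_sys, llr_par1, llr_par2,
                 map_decode, interleave, deinterleave, iterations=4):
    """Iterative exchange of extrinsic LLRs between two MAP decoders.

    map_decode(llr_sys, llr_par, llr_apriori) -> extrinsic LLRs is a
    placeholder for a real MAP decoder. Each decoder's extrinsic output
    serves as the a priori input of the other, refining the estimate
    on every iteration.
    """
    n = len(llr_sys)
    extrinsic2 = [0.0] * n   # a priori for decoder 1, initially zero
    for _ in range(iterations):
        extrinsic1 = map_decode(llr_sys, llr_par1, extrinsic2)
        extrinsic2 = deinterleave(
            map_decode(interleave(llr_sys), llr_par2, interleave(extrinsic1)))
    total = [s + e1 + e2 for s, e1, e2 in zip(llr_sys, extrinsic1, extrinsic2)]
    return [0 if L > 0 else 1 for L in total]  # hard decision: positive LLR -> bit 0
```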
Turbo decoding uses the BCJR algorithm
- The BCJR (Bahl, Cocke, Jelinek and Raviv, 1974) algorithm is a maximum a posteriori probability (MAP) decoder that minimises the bit errors by estimating the a posteriori probabilities of the individual bits in a codeword.
- It takes into account the recursive character of the RSC codes and computes a log-likelihood ratio to estimate the APP for each bit.
Low Density Parity Check (LDPC) codes
- An LDPC code is specified in terms of its parity-check matrix H, which has the following structural properties:
  - Each row consists of ρ 1s.
  - Each column consists of γ 1s.
  - The number of 1s in common between any two columns is no greater than 1 (i.e. 0 or 1).
  - Both ρ and γ are small compared with the length of the code.
- LDPC codes are denoted in the form (n, γ, ρ).
- H is said to be a low-density parity-check matrix.
- H has constant row and column weights (ρ and γ).
- Density of H = total number of 1s divided by the total number of entries in H.
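The structural properties above can be checked mechanically. A minimal sketch in plain Python (the small matrix H used below is illustrative, not one from the slides):

```python
def check_ldpc_structure(H):
    """Verify the LDPC structural properties of a 0/1 matrix H (list of rows):
    constant row weight rho, constant column weight gamma, and any two
    columns sharing at most one 1. Returns (rho, gamma, density)."""
    m, n = len(H), len(H[0])
    cols = [[H[i][j] for i in range(m)] for j in range(n)]
    row_weights = {sum(row) for row in H}
    col_weights = {sum(c) for c in cols}
    assert len(row_weights) == 1 and len(col_weights) == 1, "weights not constant"
    for j in range(n):
        for l in range(j + 1, n):
            overlap = sum(a & b for a, b in zip(cols[j], cols[l]))
            assert overlap <= 1, "two columns share more than one 1"
    ones = sum(sum(row) for row in H)
    return row_weights.pop(), col_weights.pop(), ones / (m * n)

# A tiny illustrative H with rho = 2, gamma = 1:
H = [[1, 1, 0, 0, 0, 0],
     [0, 0, 1, 1, 0, 0],
     [0, 0, 0, 0, 1, 1]]
print(check_ldpc_structure(H))  # -> (2, 1, 0.333...)
```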
Low Density Parity Check (LDPC) codes - Example
A (15, 4, 4) LDPC code:
- Each row consists of ρ = 4 1s.
- Each column consists of γ = 4 1s.
- The number of 1s in common between any two columns is no greater than 1 (i.e. 0 or 1).
- Both ρ and γ are small compared with the length of the code.
- Density = 4/15 ≈ 0.267.
Low Density Parity Check (LDPC) codes - Constructing H
- For a given choice of γ and ρ, form a kγ-by-kρ matrix H (where k is a positive integer > 1) that consists of γ k-by-kρ submatrices H1, H2, ..., Hγ.
- Each row of a submatrix has ρ 1s, and each column of a submatrix contains a single 1. Therefore each submatrix has a total of kρ 1s.
- Based on this, construct H1 by appropriately placing the 1s: for i = 1, ..., k, the ith row of H1 contains all its ρ 1s in columns (i-1)ρ + 1 to iρ.
- The other submatrices are merely column permutations of H1.
Low Density Parity Check (LDPC) codes - Example: Constructing H
- Choice of ρ = 4, γ = 3 and k = 5.
- Form a kγ-by-kρ (15-by-20) matrix H that consists of γ = 3 k-by-kρ (5-by-20) submatrices H1, H2, H3.
- Each row of a submatrix has ρ = 4 1s, and each column of a submatrix contains a single 1. Therefore each submatrix has a total of kρ = 20 1s.
- Based on this, construct H1 by appropriately placing the 1s: for i = 1, ..., 5, the ith row of H1 contains all its ρ = 4 1s in columns (i-1)ρ + 1 to iρ.
- The other submatrices are merely column permutations of H1.
Low Density Parity Check (LDPC) codes - Example: Constructing H for a (20, 3, 4) LDPC code
[Figure: the resulting 15-by-20 parity-check matrix H is shown on the slide.]
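The construction above can be sketched in plain Python. One caveat: the slide says the other submatrices are column permutations of H1, and in general the permutations must be chosen so that no two columns share more than one 1; the pseudorandom permutation used here is purely illustrative.

```python
import random

def gallager_H(k, gamma, rho):
    """Stack gamma k-by-(k*rho) submatrices into a (k*gamma)-by-(k*rho) H.

    Row i of H1 has its rho 1s in columns (i-1)*rho + 1 to i*rho
    (1-based), as on the slides. The remaining submatrices are column
    permutations of H1 (a seeded pseudorandom permutation, for
    reproducibility of this illustration).
    """
    n = k * rho
    H1 = [[1 if i * rho <= j < (i + 1) * rho else 0 for j in range(n)]
          for i in range(k)]
    rng = random.Random(0)
    H = [row[:] for row in H1]
    for _ in range(gamma - 1):
        perm = list(range(n))
        rng.shuffle(perm)
        H += [[row[perm[j]] for j in range(n)] for row in H1]
    return H

H = gallager_H(k=5, gamma=3, rho=4)   # the (20, 3, 4) example
print(len(H), len(H[0]))              # -> 15 20
```

Whatever permutations are used, every row keeps weight ρ = 4 and every column keeps weight γ = 3, since each submatrix contributes exactly one 1 per column.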
Construction of Low Density Parity Check (LDPC) codes
- There are many techniques for constructing LDPC codes.
- Constructing LDPC codes with shorter blocks is easier than with longer ones.
- For large block sizes, LDPC codes are commonly constructed by first studying the behaviour of decoders.
- Among the techniques are pseudo-random techniques, combinatorial approaches and finite geometry. These are beyond the scope of this lecture.
- For this lecture, we see how short LDPC codes are constructed from a given parity-check matrix: for example, a (6,3) linear LDPC code given by the following H.
Construction of Low Density Parity Check (LDPC) codes
- For example, a (6,3) linear LDPC code given by the following H.
- The 8 codewords can be obtained by putting the parity-check matrix H into the systematic form H = [P^T | I_{n-k}], where P is the coefficient matrix and I_{n-k} is the identity matrix.
- The generator matrix is then G = [I_k | P].
- At the receiver, H = [P^T | I_{n-k}] is used to check the error syndrome.

Exercise: Generate the codeword for m = 001 and show how the receiver performs the error checking.
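The encoding and syndrome check can be sketched in plain Python. The slide's actual H matrix did not survive conversion, so the coefficient matrix P below is an assumed illustration; substitute the P from the given H when working the exercise.

```python
# Assumed 3x3 coefficient matrix P (illustrative only; use the slide's P).
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]

def encode(m):
    """Codeword c = m G with G = [I_k | P], i.e. c = [m | m P] (mod 2)."""
    parity = [sum(m[i] * P[i][j] for i in range(3)) % 2 for j in range(3)]
    return m + parity

def syndrome(r):
    """Syndrome s = r H^T with H = [P^T | I_{n-k}]; s = 0 for a valid codeword."""
    return [(sum(r[i] * P[i][j] for i in range(3)) + r[3 + j]) % 2
            for j in range(3)]

c = encode([0, 0, 1])     # m = 001
print(c, syndrome(c))     # syndrome [0, 0, 0]: no error detected
c[1] ^= 1                 # introduce a single bit error
print(syndrome(c))        # nonzero syndrome flags the error
```

Note that the nonzero syndrome equals the column of H at the error position, which is what lets the receiver locate a single-bit error.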