1
Coding Theory
  • Nikki
  • Mark
  • April 28th, 2006

2
What is Coding Theory?
  • Algebraic codes are now used in CD players, fax
    machines, modems, and bar code scanners.
  • Algebraic coding has nothing to do with secret
    codes.
  • The goal of coding theory is to devise message
    encoding and decoding methods that are reliable,
    efficient, and reasonably easy to implement.

3
Why do we need Coding Theory?
  • To motivate this theory, we imagine wanting to
    send one of two signals to a spaceship on Mars.
  • 0 to orbit
  • 1 to land
  • But it is possible that some sort of interference
    (noise) can cause an incorrect message to be
    received.

4
Adding Redundancy to our Messages
  • To decrease the effects of noise, we add
    redundancy to our messages.
  • First method: repeat each digit multiple times, say five.
  • Thus, the computer is programmed to take any
    five-digit block received and decode the result
    by majority rule.

5
Majority Rule
  • So, if we sent 00000 and the computer receives
    any of the following, it will still be decoded as
    0:
  • 00000, 10000, 01000, 00010, 00001,
  • 11000, 10100, 10010, 10001, etc.
  • Notice that for the computer to decode
    incorrectly, at least three errors must be made
    (see the sketch below).
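A quick sketch of the majority-rule decoder in Python (the function name is ours, not from the slides):

```python
# Majority-rule decoding of a five-fold repetition code: a block
# decodes to whichever bit appears at least three times.
def majority_decode(block):
    return 1 if block.count("1") >= 3 else 0

for received in ["00000", "10000", "11000", "10001", "11100"]:
    print(received, "->", majority_decode(received))
# 00000 -> 0, 10000 -> 0, 11000 -> 0, 10001 -> 0, 11100 -> 1
```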

6
Independent Errors
  • Using the five-fold repeats, and assuming the
    errors happen independently, it is less likely
    that three or more errors will occur than that
    two or fewer will occur.
  • Decoding to the most likely intended message is
    called maximum likelihood decoding.

7
More complicated numbers
  • We are going to send a sequence of 0s and 1s of
    length 500. Assume the probability of an error
    occurring is .01 for any particular digit (and
    they happen independently).
  • No redundancy:
  • P(error free) = (.99)^500 ≈ .0066

8
Three-fold Repetition Scheme
  • Sending each digit three times, and decoding each
    block of three digits by majority rule, the
    probability increases.
  • Suppose 1 is sent; it will be decoded as 0 iff
    the block received is 001, 010, 100, or 000. The
    probability that this will occur is
  • P(error) = 3(.01)^2(.99) + (.01)^3 ≈ .0003
  • ⇒ P(error free for one digit) > .9997
  • ⇒ P(error free message) > (.9997)^500 ≈ .86
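These numbers are quick to verify in a short sketch (the per-digit rate p = .01 and the length 500 are the slide's assumptions):

```python
# Checking the arithmetic above, assuming independent errors with
# per-digit error probability p = .01 and a 500-digit message.
p = 0.01

# No redundancy: every digit must arrive intact.
print((1 - p) ** 500)                # ~0.0066

# Three-fold repetition: a digit is decoded wrong iff at least two
# of its three copies flip (blocks 001, 010, 100, 000 when 111 sent).
p_wrong = 3 * p**2 * (1 - p) + p**3  # ~0.0003
print(1 - p_wrong)                   # ~0.9997 per digit
print((1 - p_wrong) ** 500)          # ~0.86 for the whole message
```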

9
A Purdy Picture
Original message: 0
Encoded message: 00000
(Noise flips two digits)
Received message: 10001
Decoded message: 0
10
Why don't we use this?
  • Repetition codes have the advantage of
    simplicity, both for encoding and decoding.
  • But, they are too inefficient!
  • In a five-fold repetition code, 80% of all
    transmitted information is redundant.
  • Can we do better?
  • Yes!

11
What is a linear code?
  • Def: An (n, k) linear code over a finite field F
    is a k-dimensional subspace V of the vector space
  • F^n = F ⊕ F ⊕ … ⊕ F (n copies)
  • over F. The members of V are called the code
    words. When F is Z_2, the code is called
    binary.

12
A Hamming (7,4) code
  • An (n, k) Hamming code encodes each message k
    digits long into a code word n digits long.
  • The 16 possible messages
  • 0000 1010 0011 1111
  • 0001 1100 1110
  • 0010 1001 1101
  • 0100 0110 1011
  • 1000 0101 0111

13
Encoding with a Generating Matrix
  • A generating matrix is a k × n matrix: a k × k
    identity matrix adjoined with a k × (n − k)
    matrix A, so G = [I_k | A].
  • Our generating matrix:
        G = 1 0 0 0 0 1 1
            0 1 0 0 1 0 1
            0 0 1 0 1 1 0
            0 0 0 1 1 1 1

14
Encoding a message
  • For each message v, compute the vector vG (see
    the sketch below).
  • Let v = 0 1 1 0.
  • vG = (0 1 1 0)G
  •    = 0 1 1 0 0 1 1
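A sketch of the multiplication over Z_2, using the G above:

```python
# Encode a 4-digit message by multiplying the row vector v by the
# 4 x 7 generating matrix G, reducing each component mod 2.
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(v):
    return [sum(v[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

print(encode([0, 1, 1, 0]))  # [0, 1, 1, 0, 0, 1, 1]
```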

15
Encoding those messages
  • Message → code word
  • 0000 → 0000000    0110 → 0110011
  • 0001 → 0001111    0101 → 0101010
  • 0010 → 0010110    0011 → 0011001
  • 0100 → 0100101    1110 → 1110000
  • 1000 → 1000011    1101 → 1101001
  • 1010 → 1010101    1011 → 1011010
  • 1100 → 1100110    0111 → 0111100
  • 1001 → 1001100    1111 → 1111111

16
Hamming Distance
  • Def: The Hamming distance between two vectors of
    a vector space is the number of components in
    which they differ, denoted d(u, v).

Mr. Hamming
17
Hamming Distance
  • Ex. 1: The Hamming distance between
  • v = 1 0 1 1 0 1 0
  • u = 0 1 1 1 1 0 0
  • is d(u, v) = 4.
  • Notice d(u, v) = d(v, u).

18
Hamming weight of a Vector
  • Def: The Hamming weight of a vector is the number
    of nonzero components of the vector, denoted
    wt(u).

19
Hamming weight of a code
  • Def: The Hamming weight of a linear code is the
    minimum weight of any nonzero vector in the code.

20
Hamming Weight
  • Ex. 2: The Hamming weights of
  • v = 1 0 1 1 0 1 0
  • u = 0 1 1 1 1 0 0
  • w = 0 1 0 0 1 0 1
  • are
  • wt(v) = 4
  • wt(u) = 4
  • wt(w) = 3
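Both notions are one-liners in code; a quick sketch (the function names are ours):

```python
# Hamming distance and Hamming weight, straight from the definitions.
def distance(u, v):
    return sum(a != b for a, b in zip(u, v))  # components that differ

def weight(u):
    return sum(c != 0 for c in u)             # nonzero components

v = [1, 0, 1, 1, 0, 1, 0]
u = [0, 1, 1, 1, 1, 0, 0]
w = [0, 1, 0, 0, 1, 0, 1]
print(distance(u, v))                   # 4
print(weight(v), weight(u), weight(w))  # 4 4 3
```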

21
Theorem 31.1 Properties of Hamming Distance and
Weight
  • For any vectors u, v, and w,
  • 1) d(u, v) = wt(u − v).
  • 2) d(u, v) ≤ d(u, w) + d(w, v).
  • Proof:
  • 1) Observe that both d(u, v) and wt(u − v) equal
    the number of positions in which u and v differ.

22
Proving Part 2
  • 2) d(u, v) ≤ d(u, w) + d(w, v).
  • Proof:
  • 2) Note that, if u and v differ in the ith
    position and u and w agree in the ith position,
    then w and v differ in the ith position. So every
    position counted by d(u, v) is counted by d(u, w)
    or by d(w, v). Refer to Example 2.

23
Example of Proof
  • v = 1 0 1 1 0 1 0
  • u = 0 1 1 1 1 0 0
  • w = 0 1 0 0 1 0 1
  • d(u, v) = 4, d(u, w) = 3, d(w, v) = 7
  • And d(u, v) ≤ d(u, w) + d(w, v):
  • 4 ≤ 3 + 7.

24
Theorem 31.2 Correcting Capability of a Linear
Code
  • If the Hamming weight of a linear code is at
    least 2t + 1, then the code can correct any t or
    fewer errors. Alternatively, the same code can
    detect any 2t or fewer errors.
  • Example: Using our original code, the Hamming
    weight is 3 = 2(1) + 1.
  • Thus, it can correct 1 error,
  • OR, it can detect 2 errors.

25
Why OR?
  • We'll show this by counterexample.
  • We have our (7, 4) Hamming code, and the weight is
    3 = 2(1) + 1.
  • Say we receive the word 0001010.
  • It could have had one error, in which case 0101010
    was intended, and we can detect that single error.

26
Why OR, cont.
  • It could have had two errors, in which case
    0000000, 0001111, or 1011010 could be candidates
    (each at distance 2 from the received word).
  • What do we do?
  • If we chose detection, we would note that too
    many possibilities exist, and we would ask for
    retransmission.
  • If we chose correction, we could potentially make
    a mistake and assume that the message was
    intended to be 0101010.

27
Moral of the story
  • We must be careful about detecting and correcting
    errors.
  • It is safest to ask for retransmission if
    possible, to eliminate any guess-work.

28
A better check
  • If we write the Hamming weight of a linear code
    in the form
  • 2t + s + 1,
  • we can correct any t errors AND detect any t + s
    errors.

29
An example for ya
  • Let's assume some linear code has a Hamming
    weight of 5.
  • Well, 5 = 2(1) + 2 + 1,
  • so we can correct any 1 error and detect up to 3
    errors.
  • Or, 5 = 2(0) + 4 + 1,
  • so we can detect up to any 4 errors.
  • Or, 5 = 2(2) + 0 + 1,
  • so we can correct any 2 or fewer errors.
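A tiny sketch that enumerates every split of the weight as 2t + s + 1:

```python
# Each way of writing the code's Hamming weight as 2t + s + 1
# trades error correction (t) against error detection (t + s).
w = 5
for t in range(w // 2 + 1):
    s = w - 2 * t - 1
    print(f"5 = 2({t}) + {s} + 1: correct <= {t}, detect <= {t + s}")
# 5 = 2(0) + 4 + 1: correct <= 0, detect <= 4
# 5 = 2(1) + 2 + 1: correct <= 1, detect <= 3
# 5 = 2(2) + 0 + 1: correct <= 2, detect <= 2
```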

30
Let's Play a Game!
  • Yay for Math Horizons!
  • This uses the fact that each error is uniquely
    determined.
  • Two different code words differ in at least three
    different positions.

31
(No Transcript)
32
Creating the Parity-Check Matrix
  • How to make H, the parity-check matrix, from G,
    the generating matrix:
  • Write G = [I_k | A].
  • Compute −A (over Z_2, −A = A).
  • H is the n × (n − k) matrix made of −A stacked on
    top of I_(n−k); a sketch follows.
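A sketch of that recipe (the helper name is ours, and the binary case is assumed so −A = A):

```python
# Build the parity-check matrix H from G = [I_k | A] by stacking
# -A on top of I_(n-k); over Z_2, -A = A.
def parity_check(G):
    k, n = len(G), len(G[0])
    A = [row[k:] for row in G]          # the k x (n - k) block A
    I = [[int(i == j) for j in range(n - k)] for i in range(n - k)]
    return A + I                        # H is n x (n - k)

G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
for row in parity_check(G):
    print(row)
# [0,1,1] [1,0,1] [1,1,0] [1,1,1] [1,0,0] [0,1,0] [0,0,1]
```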

33
Homework 1
  • From your textbook
  • Page 540, numbers 4, 5, and 12.
  • Not too bad, we promise!

34
Decoding the Code Words
  • Four types of decoding methods for linear codes
  • Nearest-neighbor
  • Parity-Check Matrix
  • Coset Decoding
  • Same Coset-Same Syndrome

35
The Steps of Nearest-Neighbor Decoding
  • Compare the received word v′ with all possible
    code words v, looking for the smallest distance
    d(v′, v).
  • If the closest code word v is unique, assume
    v′ was intended to be v.
  • If it is not unique, ask for retransmission.

36
Nearest Neighbor Example
  • Ex. 3
  • We receive 0001110. This is not in our list of
    possible code words from our original list.
  • We compare 0001110 to each code word, and see
    that it differs from 0001111 in only 1 position,
    and from all others in more than one.
  • We assume 0001110 was intended to be 0001111.
  • We decode 0001111 to its original message of 0001
    (see the sketch below).
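A sketch of the comparison against the 16 code words from slide 15:

```python
# Nearest-neighbor decoding: rank all code words by Hamming distance
# from the received word; on a tie, return None (retransmit).
CODEWORDS = ["0000000", "0001111", "0010110", "0100101", "1000011",
             "0110011", "0101010", "0011001", "1110000", "1101001",
             "1010101", "1011010", "1100110", "0111100", "1001100",
             "1111111"]

def distance(u, v):
    return sum(a != b for a, b in zip(u, v))

def nearest_neighbor(received):
    ranked = sorted(CODEWORDS, key=lambda c: distance(received, c))
    if distance(received, ranked[0]) == distance(received, ranked[1]):
        return None  # closest code word not unique: ask to retransmit
    return ranked[0]

print(nearest_neighbor("0001110"))  # 0001111, which decodes to 0001
```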

37
Creating the Parity-Check Matrix
  • How to make H, the parity-check matrix, from G,
    the generating matrix:
  • Write G = [I_k | A].
  • Compute −A (over Z_2, −A = A).
  • H is the n × (n − k) matrix made of −A stacked on
    top of I_(n−k).

38
How to Use the Parity-Check Matrix to Decode
  • For any received word w, compute wH.
  • If wH is the zero vector, assume that no error
    was made.
  • If there is exactly one nonzero element s
    and a row i of H such that wH is s times row
    i, assume that the sent word was
    w − (0 … 0 s 0 … 0), where s occurs in the ith
    component. If there is more than one such
    instance, do not decode.
  • If wH does not fit into either category, we know
    that at least two errors occurred in
    transmission, and we do not decode.

39
Parity-Check Decoding Example
  • Using G = [I_4 | A] from slide 13,
  • create H:
        H = 0 1 1
            1 0 1
            1 1 0
            1 1 1
            1 0 0
            0 1 0
            0 0 1

40
Example cont.
  • We receive 0001110.
  • Compute wH.
  • We get 001.
  • 001 is the 7th (last) row of our parity-check
    matrix H.
  • Thus we assume there is a problem in position 7.
  • Compute 0001110 + 0000001
  • = 0001111, which is the intended code word,
    which is decoded to be the message 0001.

41
Theorem 31.3 Parity-Check Matrix Decoding
  • Parity-check matrix decoding will correct any
    single error iff the rows of the parity-check
    matrix are nonzero and no one row is a scalar
    multiple of any other row.

42
Example of Thm. 31.3
  • Matrix A: Good! Its rows are nonzero and no row
    is a scalar multiple of another.
  • Matrix B: Bad! Row 1 and row 3 ARE scalar
    multiples of each other.

43
How to Use Coset Decoding
  • To use coset decoding, one must create a
    standard array for the linear code.
  • Once the standard array is constructed, find the
    received word in the array and identify the
    column in which it lies.
  • Decode the received word as the code word at the
    top of that column (that is, the received word
    minus its coset leader).

44
How to create the dreaded Standard Array
  • Use C, the group of code words, as the headings
    for the columns.
  • Next, look at all the remaining vectors, and
    choose the next available one with minimum
    weight.
  • This becomes the first coset leader; place this
    vector at the beginning of the second row.

45
Constructing the Standard Array
  • C = {0000000, 0001111, 0010110, 0011001, …}
    (all 16 code words)
  • The vector 1000000 is the next vector with least
    weight, so it becomes the first coset leader.

46
Continuing the Standard Array
  • Next, add the coset leader to each member of C,
    filling in the row under the column headings.
  • Of all the vectors left, find one with least
    weight, and this becomes the coset leader for the
    third row.
  • Continue this until all possible vectors are
    placed in the standard array.

47
Standard Array
  • The vector 0100000 is the next vector with least
    weight that is still left, so it becomes the next
    coset leader.

48
The Complete Standard Array
  • We can't show the whole array; it'd be 8 rows of
    16 entries, but here's as much as we need to
    decode 0001110 (a full construction is sketched
    below).
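A sketch that builds the complete standard array and decodes with it (ties between minimum-weight leaders may be broken differently than on the slides, which does not change the decoded code word):

```python
# Build the standard array for the (7,4) code: the first row is the
# code itself; each later row is a coset headed by a minimum-weight
# leader not yet placed in the array.
from itertools import product

G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
CODE = [tuple(sum(v[i] * G[i][j] for i in range(4)) % 2 for j in range(7))
        for v in product([0, 1], repeat=4)]

rows, placed = [list(CODE)], set(CODE)
for leader in sorted(product([0, 1], repeat=7), key=sum):
    if leader not in placed:
        coset = [tuple((a + b) % 2 for a, b in zip(leader, c)) for c in CODE]
        rows.append(coset)
        placed.update(coset)
print(len(rows))  # 8 rows of 16 entries each

def coset_decode(w):
    for row in rows:
        if w in row:
            return CODE[row.index(w)]  # code word at the top of w's column

print(coset_decode((0, 0, 0, 1, 1, 1, 0)))  # (0, 0, 0, 1, 1, 1, 1)
```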


49
Decoding with the Standard Array
  • We received 0001110. Notice 0001111 is the
    column heading, and 0000001 is the coset leader.

50
Final Decoding
  • So, we found 0001110 in the second column, under
    0001111, with a coset leader of 0000001.
  • So, we assume the intended code word is the
    column heading, 0001111.
  • We could also add the received word 0001110 and
    the coset leader 0000001 to get 0001111 (over
    Z_2, subtraction is the same as addition).

51
Theorem 31.4 Coset Decoding is Nearest-Neighbor
Decoding
  • In coset decoding, a received word w is decoded
    as a code word c such that d(w, c) is minimal.
  • Proof:
  • Let C be a linear code, and let w be any
    received word.
  • Suppose that v is the coset leader for the
    coset w + C.
  • Thus, w + C = v + C.

52
Proof Continued
  • Thus, w = v + c, for some c in C.
  • ⇒ w is decoded as c.
  • Now, if c′ is any code word, then w − c′ lies
    in w + C = v + C, so
  • wt(w − c′) ≥ wt(v), since v was chosen because
    wt(v) is minimal among the members of v + C.
  • Therefore, d(w, c′) = wt(w − c′) ≥ wt(v)
    = wt(w − c) = d(w, c).
  • Thus, w is decoded as a code word c such that
    d(w, c) is minimal. ∎

53
The Syndrome
  • Def: If an (n, k) linear code over F has
    parity-check matrix H, then, for any vector u in
    F^n, the vector uH is called the syndrome of
    u.

54
How to use Syndrome Decoding
  • Calculate wH, the syndrome of w.
  • Find the coset leader v such that
  • wH = vH.
  • Assume that the vector sent was
  • w − v.

55
Syndrome Decoding Example
  • Let w = 0001110.
  • Let H be our parity-check matrix.
  • Thus, wH = 001.
  • 1000000H = 011
  • 0100000H = 101
  • 0010000H = 110
  • 0001000H = 111
  • 0000100H = 100
  • 0000010H = 010
  • 0000001H = 001
  • Thus, v = 0000001.

56
Syndrome Decoding Continued
  • So, we have w = 0001110,
  • and we have v = 0000001.
  • Finally, all we have to do is compute
  • w − v = 0001111, which was the intended code
    word (it decodes to the message 0001).
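The same example as a sketch, with the leaders' syndromes precomputed into a lookup table:

```python
# Syndrome decoding: tabulate each coset leader's syndrome once, then
# decoding is a table lookup plus one subtraction (over Z_2,
# subtraction is addition).
H = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1],
     [1, 0, 0], [0, 1, 0], [0, 0, 1]]

def syndrome(u):
    return tuple(sum(u[i] * H[i][j] for i in range(7)) % 2 for j in range(3))

# Coset leaders: the zero vector and the seven weight-one vectors.
leaders = [(0,) * 7] + [tuple(int(i == p) for i in range(7)) for p in range(7)]
table = {syndrome(v): v for v in leaders}

def decode(w):
    v = table[syndrome(w)]                           # leader, same syndrome
    return tuple((a + b) % 2 for a, b in zip(w, v))  # w - v

print(decode((0, 0, 0, 1, 1, 1, 0)))  # (0, 0, 0, 1, 1, 1, 1)
```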

57
Theorem 31.5 Same Coset - Same Syndrome
  • Let C be an (n, k) linear code over F with a
    parity-check matrix H. Then, two vectors of F^n
    are in the same coset of C iff they have the
    same syndrome.
  • Proof:
  • Two vectors u and v are in the same coset of
    C iff u − v is in C. Note, vH = 0 iff v is
    in C.
  • Thus, u and v are in the same coset iff 0
    = (u − v)H = uH − vH.
  • ⇒ uH = vH. ∎

58
Other Codes
  • So, we only touched on linear codes, and more
    specifically, the linear Hamming codes.
  • There are many others!
  • Reed-Solomon Codes
  • Self-Dual Codes
  • Golay Codes
  • Cyclic Codes
  • Quadratic Residue Codes
  • Bose-Chaudhuri-Hocquenghem (B.C.H.) Codes
  • Etc.

59
Any Questions?
  • Homework
  • Problems from the chapter, page 540
  • 17 (and complete the standard array given to you)
    and 29
  • Use the hint in the back of the book for 29.
    Sorry, we had to have some Abstract in there for
    ya!