Title: Information and Coding Theory
1 Information and Coding Theory
Some important classes of cyclic codes: Bose-Chaudhuri-Hocquenghem (BCH) codes, Reed-Solomon codes.
Juris Viksna, 2013
2 Determinants
Let's start with something we know very well from linear algebra.
3 Determinants
In the general case det(A) = Σ_j (−1)^(i+j) a_ij det(M_ij), where M_ij is obtained from A by deleting the row and column containing a_ij.
4 Determinants
In the general case det(A) = Σ_j (−1)^(i+j) a_ij det(M_ij), where M_ij is obtained from A by deleting the row and column containing a_ij.
What we will need here is that a set of n polynomials of degree less than n is linearly independent if and only if the determinant of the matrix formed by their coefficients is non-zero.
5 Vandermonde matrices and determinants
We will consider determinants of a special form.
6 Vandermonde matrices and determinants
7 Vandermonde matrices and determinants
Adapted from V.Pless
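The determinant considered here is the Vandermonde determinant: for x_1, ..., x_n, the determinant of the matrix with rows (1, x_i, x_i^2, ..., x_i^(n−1)) equals the product of (x_j − x_i) over all pairs i < j, so it is non-zero exactly when the x_i are distinct. A minimal numeric sketch of this identity (the use of numpy and of increasing powers is just an implementation choice):

```python
import numpy as np
from itertools import combinations

# Distinct sample points
x = np.array([1.0, 2.0, 3.0, 5.0])

# Vandermonde matrix with rows (1, x_i, x_i^2, ..., x_i^(n-1))
V = np.vander(x, increasing=True)

det_direct = np.linalg.det(V)
det_formula = np.prod([x[j] - x[i] for i, j in combinations(range(len(x)), 2)])

print(det_direct, det_formula)  # both ~ 48.0; non-zero since all x_i are distinct
```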
8 BCH codes - we already had an example
Let's recall the decoding procedure for Hamming codes. The syndrome of the received vector was either 0 (no errors) or f(i), if an error has occurred in the i-th position. And f(i) here is just the binary representation of i.
Can we design something similar for a larger number of errors? E.g. a code correcting 2 errors, with an appropriate function f, such that for errors occurring in positions i and j these values (i and j) are easily computable from the sum f(i) + f(j)?
As it turns out, the problem is solvable if, for example, f(i) = α^(3i), where α is a primitive root of GF(16).
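For reference, a minimal sketch of the single-error case: with the columns of the Ham(3,2) parity check matrix ordered as the binary representations of 1..7 (an ordering chosen for this example), the syndrome of a word with one flipped bit is exactly the binary representation of the error position:

```python
import numpy as np

# Parity check matrix of Ham(3,2): column i (1-based) is the binary representation of i
H = np.array([[(i >> b) & 1 for i in range(1, 8)] for b in (2, 1, 0)])

codeword = np.zeros(7, dtype=int)          # the all-zero codeword
received = codeword.copy()
received[4] = 1                            # flip position 5 (1-based)

syndrome = (H @ received) % 2              # = binary representation of 5 = (1, 0, 1)
error_pos = int("".join(map(str, syndrome)), 2)
print(syndrome, error_pos)                 # [1 0 1] 5
```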
9 BCH codes - the idea
- How can we specify cyclic codes?
- by specifying a generator polynomial
- Are there other possibilities?
- Assume that the generator polynomial for code C is f(x) = 1 + x + x^3. It is a minimal polynomial in GF(8) with roots α, α^2 and α^4.
- Hence, a polynomial h(x) is in C if and only if its roots include α, α^2 and α^4.
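A small sketch checking this example: building GF(8) as binary polynomials modulo x^3 + x + 1 (with elements stored as 3-bit integers, an encoding chosen for this sketch), the roots of f(x) = 1 + x + x^3 are exactly α = x and its conjugates α^2 and α^4:

```python
# GF(8) elements as 3-bit integers; multiplication is carry-less mod x^3 + x + 1
def gf8_mul(a, b):
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):                    # reduce modulo x^3 + x + 1 (0b1011)
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def f(u):                               # evaluate f(x) = 1 + x + x^3 at u in GF(8)
    return 1 ^ u ^ gf8_mul(u, gf8_mul(u, u))

alpha = 0b010                           # alpha = x
print(f(alpha), f(gf8_mul(alpha, alpha)))   # 0 0: alpha and alpha^2 are roots
roots = [u for u in range(1, 8) if f(u) == 0]
print(roots)                            # [2, 4, 6], i.e. alpha, alpha^2 and alpha^4
```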
10 BCH codes - the alphabet
Until now we generally considered codes consisting of binary vectors. If we consider a BCH code over GF(2^m), the codewords can still be represented as binary vectors; however, in general the encoding is done for vectors with elements from GF(2^k) for some k < m, and correction of t errors means that the code can recover the message if up to t subblocks of length k have been corrupted. This is a useful property in the case of burst errors, i.e. if in case of an error several consecutive bits tend to be corrupted. For example, consider data encoding on a CD - if a CD is scratched, then it is very likely that more than just one consecutive bit will be corrupted.
11 BCH codes - some examples
Let's consider GF(2^r) and let g(x) be a primitive polynomial with coefficients from GF(2). It has degree r and defines a cyclic code over GF(2). Let α be a root of g(x). For every codeword c(x) we have c(α) = g(α)·a(α) = 0. We also have c(α) = c_0 + c_1·α + c_2·α^2 + ... + c_(n−1)·α^(n−1), where n = 2^r − 1. We have that c is orthogonal to H = (1 α α^2 ... α^(n−1)). Notice that we have just defined a code equivalent to Ham(r,2), i.e. Hamming codes can be regarded as cyclic! Notice that c is orthogonal also to H = (1 α^i α^(2i) ... α^((n−1)i)), for i = 1, 2, 4, ..., 2^(r−1).
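A small sketch of this observation for r = 3: listing the powers 1, α, α^2, ..., α^6 in GF(8) (with α a root of x^3 + x + 1, the same choice as above) and writing each as a 3-bit vector gives a 3 × 7 matrix whose columns are precisely all non-zero binary vectors of length 3, i.e. a parity check matrix of Ham(3,2):

```python
# Powers of alpha in GF(8) built with x^3 = x + 1; elements stored as 3-bit integers
powers = [1]
for _ in range(6):
    v = powers[-1] << 1                  # multiply by alpha (= x)
    if v & 0b1000:                       # reduce x^3 -> x + 1
        v = (v ^ 0b1000) ^ 0b011
    powers.append(v)

print(powers)                            # [1, 2, 4, 3, 6, 7, 5]
print(sorted(powers) == list(range(1, 8)))  # True: columns are all non-zero 3-bit vectors
```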
12 BCH codes
Example: q = 2, m = 4, δ = 5; we consider minimal polynomials for α, α^2, α^3, α^4; these will be x^4 + x^3 + 1 and x^4 + x^3 + x^2 + x + 1.
The code has dimension 7 (but we need to know the minimal polynomials to see this).
H is almost a parity check matrix - i.e. it is orthogonal to the code, but may have some extra rows.
13 BCH codes
Example: q = 2, m = 4, δ = 5; we consider minimal polynomials for α, α^2, α^3, α^4; these will be x^4 + x^3 + 1 and x^4 + x^3 + x^2 + x + 1.
H is almost a parity check matrix - i.e. it is orthogonal to the code, but may have some extra rows.
However, notice that any 4 columns of H are linearly independent - thus the minimal distance will be at least 5.
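A quick sketch of where dimension 7 comes from, assuming the two minimal polynomials named on the slide: the generator polynomial is their product, which has degree 8 and divides x^15 + 1 over GF(2), so k = 15 − 8 = 7:

```python
# Polynomials over GF(2) as integers (bit i = coefficient of x^i)
def pmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, b):
    while a and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

m1 = 0b11001          # x^4 + x^3 + 1, minimal polynomial of alpha (and alpha^2, alpha^4)
m3 = 0b11111          # x^4 + x^3 + x^2 + x + 1, minimal polynomial of alpha^3
g = pmul(m1, m3)      # generator polynomial, degree 8

print(g.bit_length() - 1)                    # 8, so dimension k = 15 - 8 = 7
print(pmod((1 << 15) ^ 1, g) == 0)           # True: g(x) divides x^15 + 1
```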
14 BCH codes - code length
- We will always be able to obtain codes of length n = q^r − 1. What about other lengths?
- Let's consider q = 2 and r = 11, m = 2^r − 1 = 2047.
- we can have a code of length n = m = 2047.
- for other values of n the polynomial x^n − 1 should divide x^m − 1 (see the sketch after this list).
- m = 2047 = 89 · 23.
- x^2047 − 1 is divisible by x^89 − 1 and by x^23 − 1.
- to obtain codes of length n = 89 we should consider roots that are powers of β = α^23, where α is a primitive element of GF(2^11).
- to obtain codes of length n = 23 we should consider roots that are powers of β = α^89, where α is a primitive element of GF(2^11).
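A minimal check of the divisibility claims, with polynomials over GF(2) encoded as Python integers (bit i = coefficient of x^i; the encoding is just for illustration):

```python
def pmod(a, b):
    # remainder of polynomial division over GF(2); bit i = coefficient of x^i
    while a and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

xm = (1 << 2047) ^ 1                       # x^2047 + 1
print(2047 == 89 * 23)                     # True
print(pmod(xm, (1 << 89) ^ 1) == 0)        # True: x^89 + 1 divides x^2047 + 1
print(pmod(xm, (1 << 23) ^ 1) == 0)        # True: x^23 + 1 divides x^2047 + 1
print(pmod(xm, (1 << 10) ^ 1) == 0)        # False: 10 does not divide 2047
```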
15 Another useful reminder - Singleton bound
Yet another proof. Observe that rank H = n − k. Any dependence among s columns of the parity check matrix yields a codeword of weight at most s (any set of non-zero components in a codeword gives a dependence relation among columns of H). Thus d − 1 ≤ n − k. Conversely, to show that the minimal distance is at least d, it suffices to show that any d − 1 columns of the parity check matrix H are linearly independent.
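A small brute-force illustration with a [7,4] Hamming code (the systematic generator matrix below is one standard choice, an assumption made for this example): its minimum distance 3 indeed satisfies d ≤ n − k + 1 = 4:

```python
from itertools import product
import numpy as np

# Systematic generator matrix of a [7,4] Hamming code
G = np.array([[1,0,0,0, 1,1,0],
              [0,1,0,0, 1,0,1],
              [0,0,1,0, 0,1,1],
              [0,0,0,1, 1,1,1]])

weights = [int(((np.array(u) @ G) % 2).sum())
           for u in product([0, 1], repeat=4) if any(u)]
d = min(weights)
n, k = G.shape[1], G.shape[0]
print(d, n - k + 1)   # 3 4  -> d <= n - k + 1 (Singleton bound)
```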
16 BCH codes
Adapted from V.Pless
17 BCH codes
Any δ − 1 columns of H form, after factoring out a common power of α from each column, a Vandermonde matrix with distinct entries. The determinant is therefore non-zero, thus d ≥ δ.
Adapted from V.Pless
18 BCH codes
This trivially follows from the fact that H has (δ − 1) rows; however, this isn't a particularly good bound.
Adapted from V.Pless
19 Some known BCH codes
20 Some known BCH codes
21 Reed-Solomon codes
The code was invented in 1960 by Irving S. Reed
and Gustave Solomon, who were then members of
MIT Lincoln Laboratory. Their seminal article
was "Polynomial Codes over Certain Finite
Fields." When it was written, digital technology
was not advanced enough to implement the
concept. The key to application of Reed-Solomon
codes was the invention of an efficient decoding
algorithm by Elwyn Berlekamp, a professor of
electrical engineering at the University of
California, Berkeley. Today they are used in
disk drives, CDs, telecommunication and digital
broadcast protocols.
22 Reed-Solomon codes
Adapted from V.Pless
23 RS codes - a geometric interpretation
Adapted from P.Shankar
24 Reed-Solomon codes
Adapted from V.Pless
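The same construction can be described in evaluation form, as in the geometric interpretation above: a message of k symbols is read as the coefficients of a polynomial of degree less than k, and the codeword is the list of its values at n distinct field elements. A toy sketch over GF(7) (the field and parameters here are chosen only for illustration and are not taken from the slides):

```python
# Toy Reed-Solomon encoder over GF(7): n = 6, k = 2, d = n - k + 1 = 5
p = 7
def encode(msg):                       # msg = coefficients of a polynomial of degree < k
    return [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p
            for x in range(1, 7)]      # evaluate at the n = 6 non-zero points

c1, c2 = encode([3, 5]), encode([3, 6])
print(c1)                              # [1, 6, 4, 2, 0, 5]
print(c2)                              # [2, 1, 0, 6, 5, 4]
print(sum(a != b for a, b in zip(c1, c2)))   # 6 >= d = 5
```

Since two distinct polynomials of degree less than 2 agree in at most one point, any two codewords of this toy code differ in at least n − k + 1 = 5 of the 6 positions.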
25 Some applications of Reed-Solomon codes
PDF-417
QR code
Information storage on CDs, CD-ROMs, DVDs, hard drives etc.
Adapted from wikipedia.org
26 Decoding of Reed-Solomon codes
The Reed-Solomon proposition: look at all possible subsets of k symbols from the n received symbols in which errors could have occurred. The number of subsets is C(n, k) - around 359 billion for the (255, 249) code that can correct 3 errors. Fortunately there are more practical methods: the Peterson-Gorenstein-Zierler algorithm and the Berlekamp-Massey algorithm.
Adapted from wikipedia.org
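A one-line check of that count, using Python's standard library:

```python
import math
print(math.comb(255, 249))   # 359895314625, roughly 3.6 * 10**11
```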
27 Decoding of BCH codes
In the case of a one error correcting code we simply know y_1 = α^i, giving us the error position. For a two error correcting code we have y_1 = α^i + α^j, y_2 = α^(3i) + α^(3j); we managed to solve this case as well. In the general case we have the equation system S_j = Y_1·X_1^j + Y_2·X_2^j + ... + Y_r·X_r^j, j = 1, ..., 2t, where the Y_i give the error values (just 1 in the binary case) and the X_i give the error positions. Can we solve this?
Adapted from V.Pless
28 Decoding of BCH codes
We possess the values S_i (2t of them) and we need to find the values of X_i (r of them) and the values of Y_i (r of them, but in the binary case all of these will be equal to 1); r ≤ t here is the number of errors that have occurred.
Error-locator polynomial: s(x) = (1 − x·X_1)(1 − x·X_2) ··· (1 − x·X_r) = 1 + s_1·x + ... + s_r·x^r.
s(x) has zeros at the inverses of the error locations. Can we compute the s_i from the S_i?
Adapted from V.Pless
29 Decoding of BCH codes
s(x) = (1 − x·X_1)(1 − x·X_2) ··· (1 − x·X_r) = 1 + s_1·x + ... + s_r·x^r.
Let x = X_i^(−1) and multiply both sides of the equation by Y_i·X_i^(j+r). We get
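(sketching the standard derivation, since s(X_i^(−1)) = 0)
Y_i·X_i^(j+r) + s_1·Y_i·X_i^(j+r−1) + ... + s_r·Y_i·X_i^j = 0,
and summing these equations over i = 1, ..., r gives
S_(j+r) + s_1·S_(j+r−1) + ... + s_r·S_j = 0 for j = 1, ..., r,
a linear system from which the coefficients s_i can be computed.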
Adapted from V.Pless
30 Decoding of BCH codes
Adapted from V.Pless
31 Decoding of BCH codes
Adapted from V.Pless
32 Decoding of BCH codes
Adapted from V.Pless
33 Decoding of BCH codes
Proof:
Adapted from V.Pless
34 Decoding of BCH codes
Peterson-Gorenstein-Zierler decoding scheme (binary case)
Adapted from V.Pless
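A compact sketch of the binary two-error case over GF(16), along the lines of the scheme above (the primitive polynomial x^4 + x + 1, the table-based field arithmetic and the test word are choices made for this example): compute S_1 and S_3 from the received word, form the error-locator polynomial s(x) = 1 + s_1·x + s_2·x^2 with s_1 = S_1 and s_2 = (S_3 + S_1^3)/S_1, and find its roots by trying all field elements:

```python
# GF(16) with primitive polynomial x^4 + x + 1; exp/log tables for field arithmetic
exp = [0] * 30
v = 1
for i in range(15):
    exp[i] = exp[i + 15] = v
    v <<= 1
    if v & 0b10000:
        v ^= 0b10011
log = {exp[i]: i for i in range(15)}

def mul(a, b):   return 0 if a == 0 or b == 0 else exp[log[a] + log[b]]
def inv(a):      return exp[15 - log[a]]
def power(a, n): return exp[(log[a] * n) % 15] if a else 0

def decode2(received):
    """PGZ for the binary 2-error-correcting BCH code of length 15; returns error positions."""
    S1 = 0
    S3 = 0
    for i, bit in enumerate(received):
        if bit:
            S1 ^= power(exp[1], i)        # alpha^i
            S3 ^= power(exp[1], 3 * i)    # alpha^(3i)
    if S1 == 0 and S3 == 0:
        return []                         # no errors
    if S3 == power(S1, 3):
        return [log[S1]]                  # single error at position log_alpha(S1)
    # two errors: s(x) = 1 + s1*x + s2*x^2 with s2 = (S3 + S1^3)/S1
    s1, s2 = S1, mul(S3 ^ power(S1, 3), inv(S1))
    return [i for i in range(15)
            if (1 ^ mul(s1, power(exp[1], (-i) % 15))
                  ^ mul(s2, power(exp[1], (-2 * i) % 15))) == 0]

word = [0] * 15
word[2] ^= 1                              # inject errors at positions 2 and 9
word[9] ^= 1
print(sorted(decode2(word)))              # [2, 9]
```

For larger t the same scheme solves the linear system for s_1, ..., s_r (e.g. by Gaussian elimination over GF(2^m)) instead of using the closed-form expression for two errors.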