Introduction to Information theory: channel capacity and models

Transcript and Presenter's Notes
1
Introduction to Information theory: channel capacity and models
  • A.J. Han Vinck
  • University of Essen
  • May 2011

2
This lecture
  • Some models
  • Channel capacity
  • Shannon channel coding theorem
  • converse

3
some channel models
Input X → channel P(y|x) → output Y
transition probabilities
memoryless: the output at time i depends only on the input at time i; input and output alphabets are finite
4
Example binary symmetric channel (BSC)
  • Transition diagram: 0 → 0 and 1 → 1 with probability 1-p; 0 → 1 and 1 → 0 with probability p

Equivalent model: Input X → ⊕ → Output Y, where the error sequence E from an error source is added modulo 2
E is the binary error sequence s.t. P(1) = 1 - P(0) = p
X is the binary information sequence
Y = X ⊕ E is the binary output sequence
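A minimal sketch (not from the slides) of this additive-error view of the BSC in Python with numpy; the names X, E, Y follow the slide, everything else is illustrative:

```python
import numpy as np

def bsc(x, p, seed=None):
    """Pass the binary sequence x through a BSC: Y = X xor E, with P(E=1) = p."""
    rng = np.random.default_rng(seed)
    e = (rng.random(len(x)) < p).astype(int)   # error sequence E
    return np.asarray(x) ^ e                   # output sequence Y = X + E mod 2

x = np.random.default_rng(0).integers(0, 2, size=20)   # information sequence X
y = bsc(x, p=0.1, seed=1)
print("number of channel errors:", np.count_nonzero(x != y))
```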
5
from AWGN to BSC

With hard decisions at the detector, the AWGN channel reduces to a BSC with crossover probability p.
Homework: calculate the capacity as a function of A and σ².
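A minimal sketch (not from the slides) for this homework, assuming antipodal signalling ±A over the AWGN channel with noise variance σ² and a hard-decision threshold at 0 (so p = Q(A/σ)); the function name is illustrative:

```python
import math

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity_from_awgn(A, sigma2):
    # hard decisions on +/-A in Gaussian noise: crossover probability p = Q(A/sigma)
    p = 0.5 * math.erfc(A / math.sqrt(2 * sigma2))
    return 1 - h(p)      # capacity of the resulting BSC

print(bsc_capacity_from_awgn(A=1.0, sigma2=0.5))   # larger A or smaller sigma2 -> capacity closer to 1
```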
6
Other models
Z-channel (optical): inputs 0 (light on) and 1 (light off); 0 → 0 with probability 1; 1 → 1 with probability 1-p and 1 → 0 with probability p; P(X=0) = P0
Erasure channel (MAC): 0 → 0 and 1 → 1 with probability 1-e; 0 → E and 1 → E with probability e; P(X=0) = P0
7
Erasure with errors
Each input (0 or 1) is received correctly with probability 1-p-e, erased (output E) with probability e, and received as the other symbol with probability p.
8
burst error model (Gilbert-Elliot)
Random error channel: outputs independent, P(0) = 1 - P(1)
Burst error channel: outputs dependent
P(0 | state = bad) = P(1 | state = bad) = 1/2
P(0 | state = good) = 1 - P(1 | state = good) = 0.999
State info (good or bad) evolves with transition probabilities Pgg, Pgb, Pbg, Pbb between the good and bad states
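A minimal sketch (not from the slides) of such a two-state burst error source in Python; the per-state error probabilities follow the slide (0.001 in the good state, 1/2 in the bad state), while the transition probabilities Pgb and Pbg are illustrative values:

```python
import random

def gilbert_elliott(n, p_gb=0.01, p_bg=0.1, p1_good=0.001, p1_bad=0.5):
    """Generate n error bits from a two-state (good/bad) burst error source."""
    errors, state = [], "good"
    for _ in range(n):
        p1 = p1_good if state == "good" else p1_bad
        errors.append(1 if random.random() < p1 else 0)
        # state transitions: Pgb = P(good -> bad), Pbg = P(bad -> good)
        if state == "good" and random.random() < p_gb:
            state = "bad"
        elif state == "bad" and random.random() < p_bg:
            state = "good"
    return errors

e = gilbert_elliott(100000)
print("average error rate:", sum(e) / len(e))
```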
9
channel capacity
  • I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X) (Shannon 1948)
  • capacity C = maximum of I(X;Y) over the input probabilities
  • notes:
  • capacity depends on the input probabilities
  • because the transition probabilities are fixed

X → channel → Y
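A minimal sketch (not from the slides) of I(X;Y) = H(Y) - H(Y|X) for a discrete memoryless channel, in Python with numpy; px is the input distribution and P the transition matrix with P[j, i] = P(y_j | x_i):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(px, P):
    """I(X;Y) = H(Y) - H(Y|X) for input distribution px and channel matrix P[j,i] = P(yj|xi)."""
    py = P @ px                                                       # output distribution
    h_y_given_x = sum(px[i] * entropy(P[:, i]) for i in range(len(px)))
    return entropy(py) - h_y_given_x

# BSC with p = 0.1 and uniform input: gives 1 - h(0.1) ≈ 0.531
P_bsc = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
print(mutual_information(np.array([0.5, 0.5]), P_bsc))
```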
10
Practical communication system design
message → code book (2^k code words of length n) → code word in → channel → received word with errors → code book / decoder → estimate of the message

There are 2^k code words of length n; k is the number of information bits transmitted in n channel uses.
11
Channel capacity
Definition: The rate R of a code is the ratio k/n, where k is the number of information bits transmitted in n channel uses.
Shannon showed that for R ≤ C encoding methods exist with decoding error probability → 0.
12
Encoding and decoding according to Shannon
Code: 2^k binary codewords where P(0) = P(1) = ½
Channel errors: P(0 → 1) = P(1 → 0) = p, i.e. # error sequences ≈ 2^{nh(p)}
Decoder: search around the received sequence for a codeword with ≤ np differences

space of 2^n binary sequences
13
decoding error probability
  • 1. For t errors: P( |t/n - p| > ε ) → 0 for n → ∞ (law of large numbers)
  • 2. P( more than 1 code word in the decoding region ) → 0 (codewords random)

14
channel capacity: the BSC
I(X;Y) = H(Y) - H(Y|X)
The maximum of H(Y) = 1, since Y is binary
H(Y|X) = P(X=0) h(p) + P(X=1) h(p) = h(p)

(BSC: 0 → 0 and 1 → 1 with probability 1-p, crossovers with probability p)

Conclusion: the capacity for the BSC is C_BSC = 1 - h(p)
Homework: draw C_BSC; what happens for p > ½?
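A minimal sketch (not from the slides) for this homework plot of C_BSC = 1 - h(p), in Python with matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

def h(p):
    """Binary entropy in bits (clipped so that h(0) = h(1) = 0)."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

p = np.linspace(0, 1, 501)
plt.plot(p, 1 - h(p))
plt.xlabel("crossover probability p")
plt.ylabel("C_BSC = 1 - h(p)")
plt.show()   # symmetric around p = 1/2, zero at p = 1/2
```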
15
channel capacity: the BSC
Explain the behaviour!
16
channel capacity: the Z-channel
Application: optical communications
H(Y) = h( P0 + p(1 - P0) )
H(Y|X) = (1 - P0) h(p)
For capacity, maximize I(X;Y) over P0

(Z-channel: 0 (light on) → 0 with probability 1; 1 (light off) → 1 with probability 1-p and → 0 with probability p; P(X=0) = P0)
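A minimal sketch (not from the slides) that maximizes I(X;Y) = H(Y) - H(Y|X) over P0 numerically, in Python; a simple grid search suffices here:

```python
import numpy as np

def h(x):
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def z_channel_capacity(p, grid=np.linspace(0.001, 0.999, 999)):
    """C = max over P0 of h(P0 + p*(1-P0)) - (1-P0)*h(p)."""
    mi = h(grid + p * (1 - grid)) - (1 - grid) * h(p)
    i = np.argmax(mi)
    return mi[i], grid[i]          # capacity and the maximizing P(X=0)

C, P0 = z_channel_capacity(p=0.1)
print(f"C ≈ {C:.4f} bit/use at P(X=0) ≈ {P0:.3f}")
```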
17
channel capacity: the erasure channel
Application: CDMA detection
I(X;Y) = H(X) - H(X|Y); H(X) = h(P0); H(X|Y) = e h(P0)
Thus C_erasure = 1 - e
(check!, draw and compare with BSC and Z)

(Erasure channel: 0 → 0 and 1 → 1 with probability 1-e; 0 → E and 1 → E with probability e; P(X=0) = P0)
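A minimal sketch (not from the slides) for the "draw and compare" check: C_BSC = 1 - h(p), C_erasure = 1 - e, and the numerically maximized Z-channel capacity on one common axis, in Python with matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

def h(x):
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

q = np.linspace(0, 1, 201)                 # error / erasure probability
P0 = np.linspace(0.001, 0.999, 999)
curves = {
    "BSC: 1 - h(p)": 1 - h(q),
    "erasure: 1 - e": 1 - q,
    "Z-channel": [np.max(h(P0 + p * (1 - P0)) - (1 - P0) * h(p)) for p in q],
}
for label, c in curves.items():
    plt.plot(q, c, label=label)
plt.xlabel("p (or e)"); plt.ylabel("capacity [bit/use]"); plt.legend(); plt.show()
```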
18
Capacity and coding for the erasure channel
Code: 2^k binary codewords where P(0) = P(1) = ½
Channel: P(0 → E) = P(1 → E) = e
Decoder: search around the received sequence for a codeword that agrees with it in the unerased positions (≤ ne positions erased)

space of 2^n binary sequences
19
decoding error probability
  • 1. For t erasures: P( |t/n - e| > ε ) → 0 for n → ∞ (law of large numbers)
  • 2. P( more than 1 candidate codeword agrees in the n(1-e) positions left after ne positions are erased ) → 0 (codewords random)

20
Erasure with errors: calculate the capacity!
Each input (0 or 1) is received correctly with probability 1-p-e, erased (output E) with probability e, and received as the other symbol with probability p.
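A minimal sketch (not from the slides) that checks this exercise numerically: maximize I(X;Y) over P(X=0) by grid search for given p and e, in Python:

```python
import numpy as np

def entropy(v):
    v = v[v > 0]
    return -np.sum(v * np.log2(v))

def capacity_erasure_with_errors(p, e, grid=np.linspace(0.001, 0.999, 999)):
    """Grid search over P(X=0) for the channel: correct w.p. 1-p-e, erased w.p. e, flipped w.p. p."""
    # rows: outputs 0, E, 1; columns: inputs 0, 1
    P = np.array([[1 - p - e, p],
                  [e,         e],
                  [p,         1 - p - e]])
    best = 0.0
    for P0 in grid:
        px = np.array([P0, 1 - P0])
        py = P @ px
        h_y_given_x = sum(px[i] * entropy(P[:, i]) for i in range(2))
        best = max(best, entropy(py) - h_y_given_x)
    return best

print(capacity_erasure_with_errors(p=0.05, e=0.1))
```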
21
example
Ternary channel: inputs and outputs {0, 1, 2}; inputs 0 and 2 are received correctly, input 1 goes to each of 0, 1, 2 with probability 1/3
  • Consider the following example
  • For P(0) = P(2) = p, P(1) = 1 - 2p:
  • H(Y) = h(1/3 - 2p/3) + (2/3 + 2p/3); H(Y|X) = (1 - 2p) log2 3
  • Q: maximize H(Y) - H(Y|X) as a function of p (a numerical check follows below)
  • Q: is this the capacity?
  • hint: use log2 x = ln x / ln 2 and d ln x / dx = 1/x
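A minimal sketch (not from the slides) for the first question, evaluating H(Y) - H(Y|X) on a grid of p and reporting the maximum, in Python:

```python
import numpy as np

def h(x):
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

p = np.linspace(0.0, 0.5, 5001)                       # P(0) = P(2) = p, P(1) = 1 - 2p
mi = h(1/3 - 2*p/3) + (2/3 + 2*p/3) - (1 - 2*p) * np.log2(3)
i = np.argmax(mi)
print(f"max of H(Y) - H(Y|X) ≈ {mi[i]:.4f} bit at p ≈ {p[i]:.4f}")
```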

22
channel models: general diagram
(diagram: every input xi is connected to every output yj with transition probability Pji)

Input alphabet X = {x1, x2, …, xn}; output alphabet Y = {y1, y2, …, ym}
Pji = P_{Y|X}(yj | xi)
In general, calculating capacity needs more theory.

The statistical behavior of the channel is completely defined by the channel transition probabilities Pji = P_{Y|X}(yj | xi)
23
clue
I(X;Y) is concave (convex ∩) in the input probabilities, i.e. finding a maximum is simple (see the sketch below)
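A minimal sketch (not from the slides) of one standard way to do this maximization for an arbitrary transition matrix, the Blahut-Arimoto algorithm, in Python; the iteration count is illustrative:

```python
import numpy as np

def blahut_arimoto(P, iters=200):
    """Capacity (bits/use) of a DMC with transition matrix P[j, i] = P(y_j | x_i)."""
    n = P.shape[1]
    r = np.full(n, 1.0 / n)                         # input distribution, start uniform
    for _ in range(iters):
        q = P * r                                   # q[j, i] ∝ r(x_i) P(y_j | x_i)
        q /= q.sum(axis=1, keepdims=True)           # posterior q(x_i | y_j)
        r = np.exp(np.sum(P * np.log(q + 1e-300), axis=0))
        r /= r.sum()                                # updated input distribution
    py = P @ r
    C = sum(r[i] * np.sum(P[:, i] * np.log2((P[:, i] + 1e-300) / (py + 1e-300)))
            for i in range(n))
    return C, r

# sanity check: BSC with p = 0.1 gives C = 1 - h(0.1) ≈ 0.531 with uniform input
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(blahut_arimoto(P))
```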
24
Channel capacity: converse
For R > C the decoding error probability > 0

(plot: Pe versus the rate k/n, with the threshold at C)
25
Converse: For a discrete memoryless channel, I(X^n;Y^n) ≤ Σ_{i=1..n} I(Xi;Yi) ≤ nC

source → encoder → channel → decoder
m → X^n → Y^n → m̂

The source generates one out of 2^k equiprobable messages m.
Let Pe = probability that m̂ ≠ m
26
converse: R = k/n

k = H(M) = I(M;Y^n) + H(M|Y^n)
  ≤ I(X^n;Y^n) + 1 + k Pe      (X^n is a function of M; Fano: H(M|Y^n) ≤ 1 + k Pe)
  ≤ nC + 1 + k Pe

so 1 - C n/k - 1/k ≤ Pe, i.e. Pe ≥ 1 - C/R - 1/(nR)
Hence, for large n and R > C, the probability of error Pe > 0
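A minimal sketch (not from the slides) that evaluates this lower bound for illustrative numbers, in Python:

```python
def pe_lower_bound(C, R, n):
    """Converse bound: Pe >= 1 - C/R - 1/(n*R) for a rate-R, length-n code."""
    return 1 - C / R - 1 / (n * R)

# e.g. a BSC with p = 0.1 has C = 1 - h(0.1) ≈ 0.531 bit/use;
# any code with R = 0.6 > C and n = 1000 then has error probability at least:
print(pe_lower_bound(C=0.531, R=0.6, n=1000))   # ≈ 0.11
```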
27
We used the data processing theorem (cascading of channels X → Y → Z):
The overall transmission rate I(X;Z) for the cascade cannot be larger than I(Y;Z), that is I(X;Z) ≤ I(Y;Z)
28
Appendix
Assume a binary sequence with P(0) = 1 - P(1) = 1 - p;
t is the number of 1s in the sequence. Then for n → ∞ and ε > 0,
weak law of large numbers: Probability( |t/n - p| > ε ) → 0,
i.e. we expect with high probability ≈ pn 1s
29
Appendix
Consequence: n(p - ε) < t < n(p + ε) with high probability,
so the number of such sequences is ≈ 2^{nh(p)}
Homework: prove the approximation using ln N! ≈ N ln N for N large, or use the Stirling approximation
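A minimal sketch (not from the slides) comparing log2 of the number of weight-pn sequences with the 2^{nh(p)} approximation, in Python:

```python
import math

def h(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n, p = 1000, 0.1
t = int(p * n)
exact = math.log2(math.comb(n, t))   # log2 of the number of length-n sequences with t ones
approx = n * h(p)                    # exponent of the 2^{n h(p)} approximation
print(exact, approx)                 # the two agree to within a few bits; the ratio tends to 1 as n grows
```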
30
Binary entropy: h(p) = -p log2 p - (1-p) log2 (1-p)
Note: h(p) = h(1-p)
31
Capacity for Additive White Gaussian Noise
(diagram: Output Y = Input X + Noise)

W is the (single-sided) bandwidth
Input X is Gaussian with power spectral density (psd) S/2W
Noise is Gaussian with psd σ²_noise
Output Y is Gaussian with psd σ_y² = S/2W + σ²_noise
For Gaussian channels: σ_y² = σ_x² + σ²_noise
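A minimal sketch (not from the slides), assuming the standard Shannon formula for this band-limited Gaussian channel, C = W log2(1 + S/N) with S the signal power and N the in-band noise power, in Python:

```python
import math

def awgn_capacity(W, S, N):
    """Shannon formula C = W * log2(1 + S/N) in bit/s; N is the noise power in the band of width W."""
    return W * math.log2(1 + S / N)

print(awgn_capacity(W=1e6, S=1e-3, N=1e-4))   # 1 MHz bandwidth, 10 dB SNR: ≈ 3.46 Mbit/s
```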
32
(figure: the Gaussian channel, Y = X + Noise)
33
Middleton type of burst channel model
Binary channels 1, 2, …; select channel k with probability Q(k); channel k has transition probability p(k)
34
Fritzman model
  • multiple states G and only one state B
  • Closer to a real-world channel

35
Interleaving: from bursty to random

message → encoder → interleaver → bursty channel → interleaver⁻¹ → decoder → message
(the decoder sees random errors)

Note: interleaving introduces encoding and decoding delay.
Homework: compare block and convolutional interleaving w.r.t. delay.
36
Interleaving: block
Channel models are difficult to derive: how to define a burst? how to handle combined random and burst errors? For practical reasons, convert burst errors into random errors.
Read in row-wise, transmit column-wise:
1 0 0 1 1
0 1 0 0 1
1 0 0 0 0
0 0 1 1 0
1 0 0 1 1
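A minimal sketch (not from the slides) of this block interleaver in Python with numpy: write row-wise, transmit column-wise, and invert at the receiver (array sizes are illustrative):

```python
import numpy as np

def interleave(bits, rows, cols):
    """Write the bits row-wise into a rows x cols array, read them out column-wise."""
    return np.asarray(bits).reshape(rows, cols).T.flatten()

def deinterleave(bits, rows, cols):
    """Write the received bits column-wise, read them out row-wise."""
    return np.asarray(bits).reshape(cols, rows).T.flatten()

data = np.arange(25)                          # stand-in for 25 code bits (5 x 5 block)
tx = interleave(data, 5, 5)
assert np.array_equal(deinterleave(tx, 5, 5), data)
# a channel burst of length 5 in tx hits 5 different rows after de-interleaving
```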
37
De-interleaving: block
read in column-wise (this row contains 1 error)
1 0 0 1 1
0 1 0 0 1
1 e e e e
e e 1 1 0
1 0 0 1 1
read out row wise
38
Interleaving: convolutional
input sequence 0: no delay
input sequence 1: delay of b elements
…
input sequence m-1: delay of (m-1)b elements
Example: b = 5, m = 3

in → (m parallel delay lines) → out
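A minimal sketch (not from the slides) of such a convolutional interleaver in Python: the input is distributed cyclically over m branches and branch i passes its symbols through a delay line of i*b cells; b = 5, m = 3 follow the slide's example:

```python
from collections import deque

def conv_interleave(seq, m=3, b=5, fill=0):
    """Branch i of m feeds a shift register of i*b cells (branch 0: no delay)."""
    lines = [deque([fill] * (i * b), maxlen=i * b) if i else None for i in range(m)]
    out = []
    for t, sym in enumerate(seq):
        line = lines[t % m]
        if line is None:                 # branch 0 passes through undelayed
            out.append(sym)
        else:
            out.append(line[0])          # oldest symbol leaves the delay line
            line.append(sym)             # new symbol enters, oldest is dropped
    return out

print(conv_interleave(list(range(12))))  # e.g. [0, 0, 0, 3, 0, 0, 6, 0, 0, 9, 0, 0]
```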