Title: DIGITAL COMMUNICATIONS
1. DIGITAL COMMUNICATIONS
- Part I: Source Encoding (Chapter 6)
2. Why digital?
- Ease of signal generation
- Regenerative repeating capability
- Increased noise immunity
- Lower hardware cost
- Ease of computer/communication integration
3. Basic block diagram
- Transmit path: Info source → Source encoder → Channel encoder → Digital modulation → Channel
- Receive path: Channel → Digital demod → Channel decoder → Source decoder → Output transducer
4. Some definitions
- Information source: raw data, e.g. voice, audio
- Source encoder: converts analog info to a binary bitstream
- Channel encoder: maps the bitstream to a pulse pattern
- Digital modulator: RF carrier modulation of bits or bauds
5. A bit of history
- The foundation of digital communication is the work of Nyquist (1924)
- Problem: how to telegraph fastest on a channel of bandwidth W?
- Ironically, the original model for communications was digital! (Morse code)
- The first telegraph link was established between Baltimore and Washington in 1844
6. Nyquist theorem
- The Nyquist theorem, still standing today, says that over a channel of bandwidth W we can signal with no interference at a rate no greater than 2W
- Any faster and we will get intersymbol interference
- He further proved that the pulse shape that achieves this rate is a sinc
7. Signaling too fast
- Here is what might happen when signaling exceeds Nyquist's rate
(Figure: transmitted bitstream vs. received bitstream, showing pulse smearing)
- Pulse smearing could have been avoided if the pulses had more separation, i.e. if the bit rate were reduced
8. Shannon channel capacity
- Claude Shannon, a Bell Labs mathematician, proved in 1948 that a communication channel is fundamentally speed-limited. This limit is given by
- C = W log2(1 + P/(N0·W)) bits/sec
- Where W is the channel's bandwidth, P the signal power and N0 the noise spectral density (a numerical sketch follows)
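As a quick numerical check, here is a minimal Python sketch of the capacity formula. The 4 kHz bandwidth and 30 dB SNR are assumed example values, not figures taken from the slides.

```python
import math

def shannon_capacity(W_hz: float, snr_linear: float) -> float:
    """C = W * log2(1 + SNR), where SNR = P / (N0 * W)."""
    return W_hz * math.log2(1 + snr_linear)

# Assumed example: a 4 kHz telephone channel with a 30 dB SNR.
W = 4000.0
snr = 10 ** (30.0 / 10)                                    # 30 dB -> 1000
print(f"C ≈ {shannon_capacity(W, snr) / 1000:.1f} kb/s")   # ≈ 39.9 kb/s
```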
9. Implications of channel capacity
- If the data rate is kept below channel capacity, R < C, then it is theoretically possible to achieve error-free transmission
- If the data rate exceeds channel capacity, error-free transmission is no longer possible
10. First step toward digital comm: sampling theorem
- Main question: can a finite number of samples of a continuous wave be enough to represent the information? OR
- Can you tell what the original signal was below?
11. How to fill in the blanks?
- Could you have guessed this? Is there a unique signal connecting the samples?
12. Sampling schemes
- There are at least 3 sampling schemes
- Ideal
- Flat-top
- Sample and hold
13. Ideal sampling
- Ideal sampling refers to the type of samples taken. Here, we are talking about impulse-like (zero-width) samples, spaced Ts apart.
14. Ideal sampler
- Multiply the continuous signal g(t) with a train of impulses spaced Ts apart:
- gδ(t) = Σn g(nTs) δ(t − nTs)
15. Key question
- What is the proper sampling rate to allow for a perfect reconstruction of the signal from its samples?
- To answer this question, we need to know how g(t) and gδ(t) are related
16. Spectrum of gδ(t)
- gδ(t) is given by the following product
- gδ(t) = g(t) · Σn δ(t − nTs)
- Taking the Fourier transform
- Gδ(f) = G(f) * fs Σn δ(f − nfs)
- A graphical rendition of this convolution follows next
17. Expanding the convolution
- We can exchange convolution and summation
- Gδ(f) = G(f) * fs Σn δ(f − nfs) = fs Σn G(f) * δ(f − nfs)
- Each convolution shifts G(f) to f = nfs
(Figure: G(f) and G(f) * δ(f − nfs), centered at nfs)
18. Gδ(f): final result
- The spectrum of the sampled signal is then given by
- Gδ(f) = fs Σn G(f − nfs)
- This is simply the replication of the original continuous spectrum at multiples of the sampling rate
19. Showing the spectrum of gδ(t)
- Each term of the convolution is the original spectrum shifted to a multiple of the sampling frequency
(Figure: G(f) and Gδ(f), with replicas at fs, 2fs, ...)
20. Recovering the original signal
- It is possible to recover the original spectrum by lowpass filtering the sampled signal
(Figure: Gδ(f) with replicas at fs, 2fs, ...; an LPF of bandwidth (−W, W) keeps only the baseband copy)
21. Nyquist sampling rate
- In order to cleanly extract the baseband (original) spectrum, we need sufficient separation from the adjacent sidebands
- The minimum separation can be found from Gδ(f): the replica at fs must clear the baseband, so fs − W > W, i.e. fs > 2W
22. Sampling below Nyquist: aliasing
- If a signal is sampled below its Nyquist rate (fs < 2W), spectral folding, or aliasing, occurs
- Lowpass filtering will not recover the baseband spectrum intact as a result of spectral folding
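The folding can be seen numerically. A minimal sketch, with an assumed 3 kHz tone: sampled at 4 kHz (below 2W = 6 kHz), its samples are indistinguishable from those of a 1 kHz tone.

```python
import numpy as np

f_signal = 3000.0                      # assumed tone; its Nyquist rate is 6 kHz

def sample_tone(fs, n=8):
    t = np.arange(n) / fs
    return np.cos(2 * np.pi * f_signal * t)

good = sample_tone(fs=8000.0)          # fs > 2W: samples identify the tone
bad = sample_tone(fs=4000.0)           # fs < 2W: aliasing occurs

# With fs = 4 kHz the 3 kHz tone folds to |f_signal - fs| = 1 kHz:
# the two sample sequences are identical.
alias = np.cos(2 * np.pi * 1000.0 * np.arange(8) / 4000.0)
print(np.allclose(bad, alias))         # True
```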
23. Sample-and-hold
- A practical way of sampling a signal is the sample-and-hold operation. Here is the idea: the signal is sampled and its value is held until the next sample
24. Issues
- Here are the questions we need to answer
- What is the sampling rate now?
- Can the message be recovered?
- What price do we pay for going with a practical approach?
25. Modeling sample-and-hold
- The result of sample-and-hold can be simulated by writing the sampled signal as
- s(t) = Σn m(nTs) h(t − nTs)
- Where h(t) is a basic square pulse of width Ts and m(t) is the baseband message
- Each term is the square pulse h(t) scaled by the signal sample at that point, i.e. m(nTs) h(t − nTs)
26. A systems view
- It is possible to come up with a system that does sample-and-hold: an ideal sampler (multiplication by an impulse train with spacing Ts) followed by a filter with impulse response h(t)
- Each impulse generates a square pulse h(t) at the output. The outputs are also spaced by Ts; thus we have a sample-and-hold signal
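A minimal simulation sketch of this system view, with an assumed 100 Hz tone held at an assumed 1 kHz sampling rate: the ideal samples, each held for one period Ts, form the staircase sample-and-hold signal.

```python
import numpy as np

# Sample-and-hold sketch: an assumed 100 Hz tone held at fs = 1 kHz,
# evaluated on a fine "pseudo-continuous" time grid.
fs = 1000.0
Ts = 1.0 / fs
t = np.arange(0, 0.02, 1e-5)              # fine time axis (two periods of the tone)
m = np.cos(2 * np.pi * 100.0 * t)         # message m(t)

n = np.floor(t / Ts).astype(int)          # index of the most recent sample instant
s = np.cos(2 * np.pi * 100.0 * n * Ts)    # s(t) = sum_n m(nTs) h(t - nTs), h = width-Ts square pulse
print(np.max(np.abs(s - m)))              # staircase error; it shrinks as fs increases
```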
27. Message reconstruction
- Key question: can we go back to the original signal after sample-and-hold?
- This question can be answered in the frequency domain
28. Spectrum of the sample-and-hold signal
- The sample-and-hold signal is generated by passing an ideally sampled signal, mδ(t), through a filter h(t). Therefore, we can write
- s(t) = mδ(t) * h(t)
- or
- S(f) = Mδ(f) H(f)
- H(f) is known (it is a sinc); Mδ(f) contains the message M(f); S(f) is what we have available
29. Is the message recoverable?
- Let's look at the individual components of S(f). From the ideal sampling results
- Mδ(f) = fs Σk M(f − kfs)
30. Problems with message recovery
- The problem here is we don't have access to Mδ(f). If we did, it would be like ideal sampling
- What we do have access to is S(f)
- S(f) = Mδ(f) H(f)
- We therefore have a distorted version of an ideally sampled signal
31. Example message
- Let's show what is happening. Assume a message spectrum M(f) that is flat over (−W, W)
(Figure: flat M(f) over (−W, W) and its ideally sampled spectrum Mδ(f), with replicas at fs, 2fs, ...)
32. Sample-and-hold spectrum
- We don't see Mδ(f). We see Mδ(f)H(f). Since h(t) was a square pulse of width Ts, H(f) is sinc(fTs)
(Figure: Mδ(f) together with H(f), whose first zero crossing is at 1/Ts = fs)
33. Distortion potential
- The original analog message is in the lowpass term of Mδ(f)
- H(f), through the product Mδ(f)H(f), causes a distortion of this term
- Lowpass filtering of the sample-and-hold signal will only recover a distorted message
34. Illustrating distortion
(Figure: Mδ(f) with the baseband term we want to recover, multiplied by the sinc-shaped H(f) whose first zero is at 1/Ts = fs. If the sample-and-hold signal is lowpass filtered, the original message is not recovered; what is actually recovered is the baseband term shaped by H(f))
35. How to control distortion?
- In order to minimize the effect of H(f) on reconstruction, we must make H(f) as flat as possible over the message bandwidth (−W, W)
- What does it mean? It means moving the first zero crossing to the right by increasing the sampling rate, or by decreasing the pulse width
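A small numerical sketch of this point, with an assumed 4 kHz message bandwidth: the droop of the sinc-shaped H(f) at the band edge shrinks as the hold pulse gets shorter.

```python
import numpy as np

def droop_db(W_hz, tau_s):
    """Attenuation of H(f) = sinc(f*tau) at the band edge f = W, relative to f = 0."""
    return -20 * np.log10(np.sinc(W_hz * tau_s))   # np.sinc(x) = sin(pi*x)/(pi*x)

W = 4000.0                           # assumed message bandwidth
print(droop_db(W, tau_s=1 / 8000))   # full-width hold (tau = Ts = 1/8000 s): ~3.9 dB droop at W
print(droop_db(W, tau_s=1 / 32000))  # narrower pulse: ~0.2 dB droop, H(f) much flatter
```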
36. Does it make sense?
- The narrower the pulse, and hence the higher the sampling rate, the more accurately you can capture signal variations
37. Variation on sample-and-hold
- Contrast the two following arrangements: in one, the pulse width equals the sample period Ts; in the other, the pulse width τ and the sample period are not the same
38. How does this affect reconstruction?
- The only thing that will change is h(t) and hence H(f)
(Figure: Mδ(f) with the baseband term we want to recover, and H(f) with a different zero crossing, now at 1/τ. If the sample-and-hold signal is lowpass filtered, the original message is not recovered; what is actually recovered is the baseband term shaped by H(f))
39. How to improve reconstruction?
- Again, we need to flatten out H(f) within (−W, W), and the way to do it is to use narrower pulses (smaller τ)
40. Sample-and-hold converges to ideal sampling
- If reducing the pulse width of h(t) is a good idea, why not take it to the limit and make the pulses zero-width?
- We can do that, in which case sample-and-hold collapses to ideal sampling (impulses are zero-width pulses)
41. Pulse Code Modulation
- Filtering, Sampling, Quantization and Encoding
42. Elements of PCM Transmitter
- The transmitter consists of 5 pieces
- Continuous message → LPF → Sampler → Quantizer → Encoder
- Transmission path: regenerative repeaters restore the pulses along the way
43. Quantization
- Quantization is the process of taking continuous samples and converting them to a finite set of discrete levels
(Figure: sample values such as 1.52, 1.2, 0.86, −0.41 mapped to the nearest levels, spaced Δ apart)
44. Defining a quantizer
- A quantizer is defined by its input/output characteristic: continuous values in, discrete values out
- The output remains constant even as the input varies over a range
(Figure: input/output staircase characteristics of the midtread and midrise types)
45. Quantization noise/error
- The quantizer clearly discards some information. The question is how much error is committed?
- With message m and quantized message v, the error is q = m − v
46. Illustrating quantization error
(Figure: a sampled waveform quantized to levels v1, v2, v3 spaced Δ apart; the gap between each sample and its assigned level is the quantization error. Δ = quantizer step size)
47. More on Δ
- Δ controls how finely samples are quantized. Equivalently, Δ controls the quantization error.
- To determine Δ we need to know two parameters
- Number of quantization levels
- Dynamic range of the signal
48. Δ for a uniform quantizer
- Let sample values lie in the range (−mmax, mmax). We also want to have exactly L levels at the output of the quantizer. Simple math tells us
- Δ = 2·mmax/L
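A minimal sketch of such a uniform quantizer in Python. The choice of mmax = 2 and L = 8, and the reuse of the sample values shown on slide 43, are for illustration only.

```python
import numpy as np

def uniform_quantize(m, m_max, L):
    """Midrise uniform quantizer with step Delta = 2*m_max / L."""
    delta = 2 * m_max / L
    # Map each sample to the nearest of the L level midpoints inside (-m_max, m_max).
    idx = np.clip(np.floor((m + m_max) / delta), 0, L - 1)
    return -m_max + (idx + 0.5) * delta

samples = np.array([1.52, 1.2, 0.86, -0.41])       # example values from slide 43
print(uniform_quantize(samples, m_max=2.0, L=8))   # 8 levels, Delta = 0.5, |error| <= 0.25
```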
49. Quantization error bounds
- The quantization error is bounded by half the step size
- |q| ≤ Δ/2
(Figure: a sample falling between two levels spaced Δ apart is never more than Δ/2 from the nearest level)
50. Statistics of q
- The quantization error is random. It can be positive or negative with equal probability.
- This is an example of a uniformly distributed random variable.
- Density function: f(q) = 1/Δ for −Δ/2 ≤ q ≤ Δ/2
51. Quantization noise power
- Any uniformly distributed random variable over the range (−a/2, a/2) has an average power (variance) given by a²/12.
- Here, the quantization noise range is Δ, therefore σq² = Δ²/12
52. Signal-to-quantization noise
- Leaving aside random noise, there is always a finite quantization noise.
- Let the original continuous signal have power P = <m²(t)> and quantization noise variance (power) σq²
- (SNR)q = P/σq² = 12P/Δ²
53. Substituting for Δ
- We have related the step size to the signal dynamic range and the number of quantization levels: Δ = 2·mmax/L
- Therefore, the signal to quantization noise ratio (sqnr) is
- sqnr = (SNR)q = 3P·L²/mmax²
54. Example
- Let m(t) = cos(2πfmt). What is the signal to quantization noise ratio (sqnr) for a 256-level quantizer?
- Average message power P is 0.5 and mmax = 1, therefore
- sqnr = (3×0.5/1)×256² = 98,304 ≈ 50 dB (checked in the snippet below)
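The arithmetic of this example can be checked directly:

```python
import math

# sqnr = 3*P*L^2 / m_max^2 for a full-scale sinusoid: P = 0.5, m_max = 1, L = 256.
P, m_max, L = 0.5, 1.0, 256
sqnr = 3 * P * L ** 2 / m_max ** 2
print(sqnr)                       # 98304
print(10 * math.log10(sqnr))      # ~49.9 dB, i.e. about 50 dB
```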
55. Nonuniform quantizer
- Uniform quantization is rarely the right choice in practice. The reason is that the signal amplitude is not equally spread out: it occupies mostly low amplitude levels
56. Solution: nonuniform intervals
- Quantize finely where amplitudes spend most of their time
57. Implementing nonuniform quantization: companding
- The signal is first processed through a nonlinear device that stretches low amplitudes and compresses large amplitudes
(Figure: input/output compression curve; low amplitudes stretched, large amplitudes compressed)
58. A-law and μ-law
- There are two companding curves, A-law and μ-law. Both are very similar
- Each has an adjustment parameter that controls the degree of companding (slope of the curve)
- Following companding, a uniform quantization is used
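A sketch of the standard μ-law compressor curve; μ = 255 is the value commonly used in North American telephony, assumed here for illustration rather than taken from the slides.

```python
import numpy as np

def mu_law_compress(x, mu=255.0):
    """mu-law compressor: y = sign(x) * ln(1 + mu*|x|) / ln(1 + mu), for |x| <= 1."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

x = np.array([0.01, 0.1, 0.5, 1.0])
print(mu_law_compress(x))   # small inputs are stretched (0.01 -> ~0.23), 1.0 stays 1.0
# A uniform quantizer then follows the compressor, as described above.
```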
59. Encoder
- Quantizer outputs are merely levels. We need to convert them to a bitstream to finish the A/D operation
- There are many ways of doing this
- Natural coding
- Gray coding
60. Natural coding
- How many bits does it take to represent L levels? The answer is
- n = log2(L) bits/sample
- Natural coding is a simple decimal to binary conversion. For 8 quantizer levels, the encoder output is 3 bits per sample:
- 0 → 000
- 1 → 001
- 2 → 010
- 3 → 011
- ...
- 7 → 111
61. Gray coding
- Here is the problem with natural coding: if levels 2 (010) and 1 (001) are mistaken for one another, then we suffer two bit errors
- We want an encoding scheme that assigns code words to adjacent levels that differ in at most one bit location
62. Gray coding example
- Take a 4-bit quantizer (16 levels). Adjacent levels differ by just one bit
- 0 → 0000, 1 → 0001, 2 → 0011, 3 → 0010, 4 → 0110, ...
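One common construction of such a code is the binary-reflected Gray code, sketched below; the slide's table satisfies the same adjacency property, though its exact ordering was not preserved in the transcript.

```python
def gray_code(level: int) -> int:
    """Binary-reflected Gray code: adjacent levels differ in exactly one bit."""
    return level ^ (level >> 1)

for level in range(6):                        # first few of the 16 levels
    print(level, format(gray_code(level), "04b"))
# 0 0000, 1 0001, 2 0011, 3 0010, 4 0110, 5 0111, ...
```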
63. Quantizer word size
- Knowing n, we can refer to n-bit quantizers
- For example, if L = 256 then n = 8 bits/sample
- We are then looking at an 8-bit quantizer
64. Interaction between sqnr and bits/sample
- Converting sqnr to dB provides a different insight. Take 10·log10(sqnr)
- sqnr = k·L², where k = 3P/mmax²
- In dB
- (sqnr)dB ∝ 20·log10(L) = 20·log10(2^n)
- (sqnr)dB ≈ 6n dB
65. sqnr varies linearly with bits/sample
- What we just saw says a higher sqnr is achieved by increasing n (bits/sample).
- The question then is, what keeps us from doing that forever, thus getting arbitrarily large sqnr?
66. Cost factor
- We can increase the number of bits/sample, hence quantization levels, but at a cost
- The cost is in increased bandwidth, but why?
- One clue is that as we go to finer quantization, levels become tightly packed and difficult to discern at the receiver, hence higher error rates. There is also a bandwidth cost
67. Basis for finding PCM bandwidth
- Nyquist said that in a channel with transmission bandwidth BT, we can transmit at most 2·BT pulses per second
- R (pulses/sec) < 2·BT (Hz)
- Or
- BT (Hz) > R/2
68. Transmission over phone lines
- Analog phone lines are limited to 4 kHz of bandwidth. What is the fastest pulse rate possible?
- R < 2·BT = 2×4000 = 8000 pulses/sec
- That's it? Modems do a bit faster than this!
- One way to raise this rate is to stuff each pulse with multiple bits. More on that later
69. Accommodating a digital source
- A source is generating a million bits/sec. What is the minimum required transmission bandwidth?
- BT > R/2 = 10^6/2 = 500 kHz
70. PCM bit rate
- The bit rate at the output of the encoder is simply the following product
- R (bits/sec) = n (bits/sample) × fs (samples/sec)
- R = n·fs bits/sec
(Figure: quantized samples encoded at 5 bits/sample into a bitstream)
71. PCM bandwidth
- But we know the sampling frequency is 2W. Substituting fs = 2W in R = n·fs
- R = 2nW (bits/sec)
- We also had BT > R/2. Replacing R we get
- BT > nW
72. Comments on PCM bandwidth
- We have established a lower bound (minimum) on the required bandwidth.
- The cost of doing PCM is the large required bandwidth. The way we can measure it is the bandwidth expansion, quantified by
- BT/W > n (bits/sample)
73. Bandwidth expansion factor
- Similar to FM, there is a bandwidth expansion factor relative to baseband, i.e.
- β = BT/W > n
- Let's say we have 8 bits/sample. That means it takes, at a minimum, 8 times more bandwidth than baseband to do PCM
74. PCM bandwidth example
- We want to transmit voice (4 kHz) using 8-bit PCM. How much bandwidth is needed?
- We know W = 4 kHz, fs = 8 kHz and n = 8.
- BT > nW = 8×4000 = 32 kHz
- This is the minimum PCM bandwidth under ideal conditions. "Ideal" has to do with the pulse shape used
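The numbers from slides 70-74 can be reproduced in a few lines:

```python
# PCM numbers for 4 kHz voice with an 8-bit quantizer.
W = 4000          # message bandwidth, Hz
n = 8             # bits per sample
fs = 2 * W        # Nyquist sampling rate, samples/sec
R = n * fs        # bit rate = 64,000 bits/sec
BT_min = R / 2    # Nyquist (ideal-pulse) minimum bandwidth = 32 kHz
print(R, BT_min)
```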
75. Bandwidth-power exchange
- We said using finer quantization (more bits/sample) enhances sqnr because
- (sqnr)dB ≈ 6n dB
- At the same time we showed bandwidth increases linearly with n. So we have a trade-off
76. sqnr improvement
- Let's say we increase n by 1, from 8 to 9 bits/sample. As a result, sqnr increases by 6 dB
- sqnr ≈ 6×8 = 48 dB
- sqnr ≈ 6×9 = 54 dB
77. Bandwidth increase
- Going from n = 8 bits/sample to 9 bits/sample, the required bandwidth rises in the ratio 8:9.
- If the message bandwidth is 4 kHz, then
- BT = 64 kHz for n = 8
- BT = 72 kHz for n = 9
- That is 8 kHz, or a 12.5% increase
78. Is it worth it?
- Let's look at the trade-off
- Cost: 12.5% more bandwidth
- Benefit: 6 dB more sqnr
- Every 3 dB means a doubling of the sqnr. So we have quadrupled sqnr by paying 12.5% more in bandwidth
79. Another way to look at the exchange
- We provided 12.5% more bandwidth and ended up with 6 dB more sqnr.
- If we are satisfied with the sqnr we had, we can dial back the transmitted power by 6 dB and suffer no loss in sqnr
- In other words, we have exchanged bandwidth for lower power
80. Similarity with FM
- PCM and FM are examples of wideband modulation. All such modulations provide a bandwidth-power exchange, but at different rates. Recall β = BT/W
- FM: SNR ∝ β²
- PCM: SNR ∝ 2^(2β)
- PCM is much more sensitive to β: a better exchange
81. Complete PCM system design
- We want to transmit voice with an average power of 1/2 watt and a peak amplitude of 1 volt using a 256-level quantizer. Find
- sqnr
- Bit rate
- PCM bandwidth
82. Signal to quantization noise
- We had
- sqnr = 3P·L²/mmax²
- We have L = 256, P = 1/2 and mmax = 1.
- sqnr = 98,304 ≈ 50 dB
83. PCM bit rate
- The bit rate is given by
- R = 2nW (bits/sec) = 2×8×4000 = 64 kb/sec
- This rate is a standard PCM voice channel
- This is why we can have 56K transmission over the digital portion of the telephone network, which can accommodate 64 kb/sec.
84. PCM bandwidth
- We can really only talk about the minimum bandwidth, given by
- BT,min = nW = 8×4000 = 32 kHz
- In other words, we need a minimum of 32 kHz of bandwidth to transmit 64 kb/sec of data.
85. Realistic PCM bandwidth
- A rule of thumb for the required bandwidth for digital data is that bandwidth ≈ bit rate
- BT ≈ R
- So for 64 kb/sec we need 64 kHz of bandwidth
- One hertz per bit
86. Differential PCM
- The concept of differential encoding is of great importance in communications
- The underlying idea is not to look at samples individually but to look at past values as well.
- Often, samples change very little, so a substantial compression can be achieved
87. Why differential?
- Let's say we have a DC signal and blindly go about PCM-encoding it. Is that smart?
- Clearly not. What we have failed to realize is that the samples don't change. We can send the first sample and tell the receiver that the rest are the same
88. Definition of differential encoding
- We can therefore say that in differential encoding, what is recorded and ultimately transmitted is the change in sample amplitudes, not their absolute values
- We should send only what is NEW.
89. Where is the saving?
- Consider the following two situations
- The right-hand samples are adjacent sample differences, with a much smaller dynamic range, requiring fewer quantization levels
(Figure: original sample values such as 2, 1.6, 0.8 on the left vs. adjacent differences such as 0.4, −0.4, −0.8 on the right)
90. Implementation of DPCM: prediction
- At the heart of DPCM is the idea of prediction
- Based on the n−1 previous samples, the encoder generates an estimate of the nth sample. Since the nth sample is known, the prediction error can be found. This error is then transmitted
91. Illustrating prediction
- Here is what is happening at the transmitter
(Figure: past samples, already sent, are used to form a prediction of the current sample; only the prediction error between the actual sample and the prediction is transmitted)
92. What does the receiver do?
- The receiver has the identical prediction algorithm available to it. It has also received all previous samples, so it can make a prediction of its own
- The transmitter helps out by supplying the prediction error, which is then used by the receiver to update the predicted value
93. Interesting speculation
- What if our power of prediction was perfect? In other words, what if we could predict the next sample with no error? What kind of communication system would we be looking at?
94. Prediction error
- Let m(t) be the message and Ts the sample interval. The prediction error is the difference between the actual sample and its prediction: d(nTs) = m(nTs) − m̂(nTs)
95. Prediction filter
- Prediction is normally done using a weighted sum of the N previous samples: m̂(nTs) = Σi wi·m((n−i)Ts), i = 1, ..., N
- The quality of prediction depends on a good choice of the weights wi
96. Finding the optimum filter
- How do you find the best weights?
- Obviously, we need to minimize the prediction error. This is done statistically
- Choose the set of weights that gives the lowest (on average) prediction error
97. Prediction gain
- Prediction provides an SNR improvement by a factor called the prediction gain
98. How much gain?
- On average, this gain is about 4-11 dB.
- Recall that 6 dB of SNR gain can be exchanged for 1 bit per sample
- At 8000 samples/sec (for speech) we can save 1 to 2 bits per sample, thus saving 8-16 kb/sec.
99. DPCM encoder
- The prediction error is used to correct the estimate in time for the next round of prediction
(Figure: the input sample minus the N-tap prediction gives the prediction error, which is quantized and encoded for transmission; the quantized error also updates the prediction for the next sample)
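A minimal sketch of this encoder loop, assuming the simplest possible predictor (the previous reconstructed sample) and an assumed quantizer step of 0.1; a real DPCM encoder would use an N-tap predictor as described above.

```python
import numpy as np

def dpcm_encode(m, delta=0.1):
    """DPCM with a previous-sample predictor and a uniform quantizer of step delta.
    Returns the quantized prediction errors that would be transmitted."""
    prediction = 0.0
    errors = []
    for sample in m:
        e = sample - prediction               # prediction error
        eq = delta * np.round(e / delta)      # quantized error (what is sent)
        prediction = prediction + eq          # update the estimate, as the receiver would
        errors.append(eq)
    return np.array(errors)

t = np.arange(0, 1, 1 / 100)
message = np.sin(2 * np.pi * 2 * t)           # slowly varying message
d = dpcm_encode(message)
print(d.min(), d.max())   # the error range is far smaller than the signal's +/-1 swing
```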
100. Delta modulation (DM)
- DM is actually a very simplified form of DPCM
- In DM, the prediction of the next sample is simply the previous sample, so the prediction error is the difference between the current sample and the running estimate
101. DM encoder diagram
(Figure: the input sample minus the prediction gives the prediction error, which drives a 1-bit quantizer with outputs ±Δ; the quantizer output updates the prediction through a delay of Ts)
102. DM encoder operation
- The prediction error generates ±Δ at the output of the quantizer
- If the error is positive, it means the prediction is below the sample value, in which case the estimate is updated by +Δ for the next step
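A minimal sketch of this operation, with an assumed test tone and an assumed step size Δ = 0.08:

```python
import numpy as np

def delta_modulate(m, delta):
    """1-bit DM encoder: emit +delta if the sample is above the running estimate,
    -delta otherwise; the estimate (staircase) is updated by that amount."""
    estimate = 0.0
    bits, track = [], []
    for sample in m:
        step = delta if sample >= estimate else -delta
        bits.append(1 if step > 0 else 0)     # transmitted bit
        estimate += step                      # staircase approximation of the signal
        track.append(estimate)
    return np.array(bits), np.array(track)

t = np.arange(0, 1, 1 / 200)
m = np.sin(2 * np.pi * 2 * t)
bits, approx = delta_modulate(m, delta=0.08)
print(np.max(np.abs(approx - m)))   # small delta plus fast sampling keeps the error small
```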
103. Slope overload effect
- The signal rises faster than the prediction can follow: Δ is too small
(Figure: samples spaced Ts apart rising faster than the staircase of predictions built from the initial estimate in steps of Δ)
104. Steady state granular noise
- When the prediction can track the signal, the prediction error is small
(Figure: the staircase hunts around a flat signal, taking e.g. two drops of Δ to reach it)
105. Shortcomings of DM
- It is clearly the prediction stage that is lacking
- Samples must be taken close together to ensure that the previous-sample prediction algorithm is reasonably accurate
- This means higher sampling rates
106. Multiplexing
- Concurrent communications call for some form of multiplexing. There are 3 categories
- FDMA (frequency division multiple access)
- TDMA (time division multiple access)
- CDMA (code division multiple access)
- All 3 enjoy a healthy presence in the communications market
107. FDMA
- In FDM, multiple users can be on at the same time by placing them in orthogonal frequency bands
(Figure: the total bandwidth divided among user 1, user 2, ..., user N, with guardbands between them)
108. FDMA example: AMPS
- AMPS, the wireless analog standard, is a good example
- Reverse link (mobile-to-base): 824-849 MHz
- Forward link: 869-894 MHz
- Channel bandwidth: 30 kHz
- Total channels: 833
- Modulation: FM, peak deviation 12.5 kHz
109. TDMA
- Where FDMA is primarily an analog standard, TDMA and CDMA are for digital communication
- In TDMA, each user is assigned a time slot, as opposed to a frequency slot in FDMA
110. Basic idea behind TDMA
- Take the following 3 digital lines
(Figure: bits from the three lines interleaved in time into a repeating frame)
111. TDM-PCM
(Figure: multiple analog inputs are sampled in turn to form a TDM-PAM signal, passed through a quantizer and encoder to form TDM-PCM bits, sent over the channel, then decoded and lowpass filtered to recover each message)
112. Parameters of TDM-PCM
- A TDM-PCM line multiplexing M users is characterized by the following parameters
- Data rate (bit or pulse rate)
- Bandwidth
113. TDM-PCM data rate
- Here is what we have
- M users
- Each sampled at the Nyquist rate
- Each sample PCM'd into n-bit words
- The total bit rate then is
- R = M (users) × fs (samples/sec/user) × n (bits/sample) = n·M·fs bits/sec
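A quick sketch of this rate calculation, using T1-like example values (M = 24, fs = 8000, n = 8) for illustration:

```python
# TDM-PCM aggregate rate: M users, each sampled at fs and encoded with n bits/sample.
M, fs, n = 24, 8000, 8
R = n * M * fs            # R = n*M*fs = 1,536,000 bits/sec (no framing overhead yet)
BT = R                    # practical bandwidth estimate, BT ~ R (one hertz per bit)
print(R, BT)
```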
114. TDM-PCM bandwidth
- Recall the Nyquist bandwidth: given R pulses per second, we need at least R/2 Hz.
- In reality we need more (depending on the pulse shape), so
- BT ≈ R = n·M·fs Hz
115. T1 line
- The best known of all TDM schemes is AT&T's T1 line
- A T1 line multiplexes 24 voice channels (4 kHz each) into one single bitstream running at a rate of 1.544 Mb/sec. Let's see how
116. T1 line facts
- Each of the 24 voice lines is sampled at 8 kHz
- Each sample is then encoded into 8 bits
- A frame consists of 24 samples, one from each line
- Some data bits are preempted for control and supervisory signaling
117. T1 line structure: all frames except 1, 7, 13, 19, ...
(Figure: one frame carries channel 1, channel 2, ..., channel 24 in turn, each contributing 8 information bits per sample; the frame repeats)
118. Inserting non-data bits
- In addition to data, we need slots for signaling bits (on-hook/off-hook, charging)
- Every 6th frame (1, 7, 13, 19, ...) is selected and the least significant bit of each channel is replaced by a signaling bit
(Figure: in such frames each channel carries only 7 information bits; the 8th bit slot holds the signaling bit)
119. Framing bit
- Timing is of utmost significance in T1. We MUST be able to know where the beginning of each frame is
- At the end of each frame a single bit, F, is added to help with frame identification
(Figure: channels 1 through 24, each with 8 information bits per sample, followed by the framing bit F)
120. T1 frame length
- How long is one frame? Picture a commutator rotating over the 24 lines, each sampled at 8 kHz: one revolution generates one frame, and the commutator rotates at 8000 revs/sec
- Frame length = 1/8000 = 125 microseconds
121. T1 bit rate per frame
- Data rate
- 8×24 = 192 bits per frame
- Framing bit rate
- 1 bit per frame
- Total per frame
- 193 bits/frame
122. Total T1 bit rate
- We know there are 8000 frames per second and 193 bits per frame. Therefore
- T1 rate = 193×8000 = 1.544 Mb/sec
123. Signaling rate component
- Not all of the 1.544 Mb/sec is data. Since every 8th bit per channel of every 6th frame was replaced by a signaling bit, we have
- Signaling rate = (8000)(1/6) ≈ 1333 bits/sec per channel
124. TDM hierarchy
- It is possible to build upon T1 as follows
- 24 DS-0 lines (64 kb/sec each) → 1st level multiplexer → DS-1 (1.544 Mb/sec)
- DS-1 lines → 2nd level multiplexer → DS-2 (6.312 Mb/sec)
- 7 DS-2 lines → 3rd level multiplexer → DS-3 (44.736 Mb/sec)
125. Recommended problems