Communications Part II a (Transcript)
1
Communications Part II (a)
  • Mobile Radio Transmission
  • 7.1 Models for Mobile Communications
  • 7.2 BER for non-frequency selective Channels
  • 7.3 Diversity
  • 7.4 Equalizing frequency selective mobile radio
    channels
  • 7.5 Standards for Mobile Radio Systems
  • 8. OFDM
  • 8.1 Principles of OFDM
  • 8.2 Equalization
  • 8.3 Channel Estimation for OFDM
  • 8.4 Analog Channel
  • 5. Equalization
  • ZF-solution for linear equalizers
  • MMSE-Solution
  • Decision-Feedback Structure
  • Adaptive Equalization
  • Convergence of LMS
  • 6. Maximum Likelihood Sequence Estimation
  • 6.1 Forney-Receiver
  • 6.2 Viterbi-Algorithm
  • 6.3 Error Probability at Viterbi Detection
  • 6.4 Channel Estimation

2
Communications Part II (b)
  • 9. CDMA
  • 9.1 Principles of CDMA
  • 9.2 Spreading Codes
  • 9.3 Rake-Receiver
  • 9.4 Multi User Interference
  • 10. MIMO Systems
  • 10.1 System Model
  • 10.2 SIMO Systems (Maximum Ratio Combining)
  • 10.3 MISO (Space-Time-Codes, Beamforming)
  • 10.4 Multi Layer Transmission

3
6. Optimum Receiver under ISI Conditions
6.1 Forney-Receiver (MLSE)
MLSE = Maximum Likelihood Sequence Estimation
Block diagram of the transmission system: data sequence → upsampling → impulse shaping → ISI channel (T = symbol clock)
4
Optimal Receiver (MLSE)
Some definitions
Data vector
Impulse response vector
Received vector (without noise)
Noisy received vector
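The vector definitions on this slide were rendered as images; a plausible reconstruction in the usual MLSE notation (the symbol names, the noise vector ν and the channel order n are assumptions, not taken from the slides) is:

```latex
% Hedged reconstruction of the slide's definitions (notation assumed)
\begin{align*}
\mathbf{d} &= [\, d(0),\, d(1),\, \dots,\, d(L-1) \,]^{T}  && \text{data vector}\\
\mathbf{c} &= [\, c(0),\, c(1),\, \dots,\, c(n) \,]^{T}    && \text{impulse response vector, channel order } n\\
\mathbf{x} &= \mathbf{C}\,\mathbf{d}                       && \text{received vector (without noise)}\\
\mathbf{y} &= \mathbf{C}\,\mathbf{d} + \boldsymbol{\nu}    && \text{noisy received vector}
\end{align*}
```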
5
Convolutional Matrix
Full equation system
Toeplitz structure
Zero entries follow from causality and from the finite impulse response
Equation system in matrix notation
with convolutional matrix
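The matrix itself is also shown only as an image; its usual banded Toeplitz form, assuming channel order n and L data symbols, would look like this:

```latex
% Banded Toeplitz convolutional matrix (structure assumed from the text);
% the zero regions reflect causality and the finite impulse response.
\[
\mathbf{C} =
\begin{pmatrix}
c(0)   &        &        &        \\
c(1)   & c(0)   &        &        \\
\vdots & c(1)   & \ddots &        \\
c(n)   & \vdots & \ddots & c(0)   \\
       & c(n)   &        & c(1)   \\
       &        & \ddots & \vdots \\
       &        &        & c(n)
\end{pmatrix}
\in \mathbb{C}^{(L+n)\times L},
\qquad
\mathbf{x} = \mathbf{C}\,\mathbf{d}
\]
```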
6
Example 1: causal convolutional matrix
7
Example 2: transposed convolutional matrix with
conjugate elements
8
MLSE-Receiver
Convolution of x(k) with c(k) can be expressed by
convolutional matrix C.
9
MLSE-Receiver
  • After removing the zeros introduced by upsampling,
    Cw contains every w-th column of the convolution
    matrix C
10
Maximum Likelihood Sequence Estimation (MLSE)
  • M-ary modulation, L data symbols → M^L hypotheses
    for the noiseless received signal
  • Choose the most probable sequence (white channel
    noise)

(Gaussian noise distribution)
Formulation by means of the symbol vector d and the
convolutional matrices
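The likelihood itself is an image on the slide; under the stated white-Gaussian-noise assumption it presumably takes the standard form below (the scaling with the noise variance σ² and the argmin notation are assumptions):

```latex
% ML sequence estimation for white Gaussian noise (standard form, assumed):
% the most probable sequence minimizes the Euclidean distance between the
% received vector y and the hypothesis C d.
\begin{align*}
p(\mathbf{y}\,|\,\mathbf{d}) &\propto
  \exp\!\left(-\frac{\lVert \mathbf{y}-\mathbf{C}\,\mathbf{d}\rVert^{2}}{\sigma^{2}}\right)\\
\hat{\mathbf{d}} &= \arg\max_{\mathbf{d}}\, p(\mathbf{y}\,|\,\mathbf{d})
  = \arg\min_{\mathbf{d}}\, \lVert \mathbf{y}-\mathbf{C}\,\mathbf{d}\rVert^{2}
\end{align*}
```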
11
MLSE-Receiver
M-ary modulation, L data symbols: only M^L receive
signals possible
with
ML-Criterion
Interpretation
12
MLSE-Receiver
Optimal Receiver for ISI Channels and AWGN
Block diagram: matched filter → downsampling → ML criterion
13
Optimal receiver for ISI channels with decorrelation
filter (Forney receiver)
Noise in x(i) is coloured by matched filtering
14
Forney receiver
ML-Criterion
Modified ML-Criterion, Euclidean metric
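Both criteria are shown only as images; a hedged sketch of the modified criterion, with z denoting the decorrelator output and F the whitened overall channel matrix (both symbols are assumptions), reads:

```latex
% Hedged sketch: after the decorrelation (whitening) filter the noise is
% white, so the ML criterion reduces to a plain Euclidean metric.
\[
\hat{\mathbf{d}} = \arg\min_{\mathbf{d}}\,
  \lVert \mathbf{z} - \mathbf{F}\,\mathbf{d} \rVert^{2},
\qquad
\mathbf{z} = \mathbf{F}\,\mathbf{d} + \mathbf{N},\quad
E\{\mathbf{N}\mathbf{N}^{H}\} = \sigma^{2}\,\mathbf{I}
\]
```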
15
Whitening condition
Collect the noise components in vectors
Autocorrelation matrix at decorrelator output
Assumption of white noise
We choose the decorrelation filter such that N(i)
becomes a white noise process.
16
Whitening condition (cont.)
Pre- and post-multiplication with P^H and P
Assume existence of (P^H P)^-1
We scale the decorrelation filter to fulfil
This yields the whitening condition
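The individual equations are images; the shape of such a whitening argument, sketched here with an assumed decorrelation filter matrix W (n(i) the coloured noise at the matched-filter output, N(i) the filtered noise as above), is:

```latex
% Hedged sketch (symbol W assumed): the decorrelation filter is chosen so
% that the filtered noise becomes white with variance sigma^2.
\[
E\{\mathbf{N}(i)\,\mathbf{N}^{H}(i)\}
  = \mathbf{W}\, E\{\mathbf{n}(i)\,\mathbf{n}^{H}(i)\}\, \mathbf{W}^{H}
  \overset{!}{=} \sigma^{2}\,\mathbf{I}
\]
```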
17
6.2 Viterbi-Algorithm
  • Forney Receiver optimal, but needs to calculate
    Euclidean distances between

received sequence and every possible noiseless
sequence.
Data sequence of length L, M-ary modulation → M^L
possible sequences
18
6.2 Viterbi-Algorithm
Example: BPSK, 2nd-order channel
19
6.2 Viterbi-Algorithm
Channel order → number of memory elements; example
(BPSK, channel order 2): 4 possible states
S0 = (1, 1), S1 = (1, -1), S2 = (-1, 1), S3 = (-1, -1)
20
6.2 Viterbi-Algorithm

Trellis segment: states S0 = (1, 1), S1 = (1, -1),
S2 = (-1, 1), S3 = (-1, -1); branches labelled with the
input symbol d = +1 or d = -1.
The trellis describes the channel state over time
(according to the input data vector d)
21
6.2 Viterbi-Algorithm
Number of transitions ending in a state: M
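Combining this with the examples in these slides (BPSK with channel order 2 gives 4 states; the GSM slide's 6 taps, i.e. memory 5, give 32 states and 64 transitions), the usual counting rules are, stated here as a hedged summary rather than a slide reconstruction:

```latex
% Counting rules for an M-ary alphabet and channel memory n.
\[
\text{states} = M^{\,n},\qquad
\text{transitions per trellis step} = M^{\,n+1},\qquad
\text{transitions ending in one state} = M
\]
```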
22
Trellis diagram example
→ BPSK, channel order 2
23
Viterbi Equalization Example
Signal levels z(i) for BPSK and the given channel
(trellis segment):

State            d(i) = +1 → z(i)    d(i) = -1 → z(i)
S0 = ( 1,  1)            2                   1
S1 = ( 1, -1)            1                   0
S2 = (-1,  1)            0                  -1
S3 = (-1, -1)           -1                  -2
24
Viterbi Equalization Example
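The worked example itself is only shown as a figure. As a hedged illustration, the sketch below runs the Viterbi algorithm for exactly this setting; the channel c = (0.5, 1, 0.5) is an assumption chosen because it reproduces the output levels 2, 1, 0, -1, -2 of the table above.

```python
import numpy as np
from itertools import product

# Assumed channel (order 2); its noiseless outputs match the trellis table
# above, e.g. state (1, 1) with input +1 gives 0.5*1 + 1*1 + 0.5*1 = 2.
c = np.array([0.5, 1.0, 0.5])                 # c(0), c(1), c(2)
symbols = [+1, -1]                            # BPSK alphabet
states = list(product(symbols, repeat=2))     # (d(i-1), d(i-2)) -> S0..S3

def viterbi_equalize(y, start_state=(1, 1)):
    """MLSE via the Viterbi algorithm with Euclidean branch metrics."""
    metric = {s: (0.0 if s == start_state else np.inf) for s in states}
    paths = {s: [] for s in states}
    for r in y:
        new_metric, new_paths = {}, {}
        for s_prev in states:
            for d in symbols:
                z = c[0]*d + c[1]*s_prev[0] + c[2]*s_prev[1]   # hypothesis output
                s_next = (d, s_prev[0])
                m = metric[s_prev] + (r - z)**2                # path + branch metric
                if s_next not in new_metric or m < new_metric[s_next]:
                    new_metric[s_next] = m
                    new_paths[s_next] = paths[s_prev] + [d]    # survivor path
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)    # no termination: last symbols least reliable
    return paths[best]

# Usage: BPSK burst over the ISI channel, starting from channel state (1, 1).
rng = np.random.default_rng(0)
d = rng.choice(symbols, size=10)
d_ext = np.concatenate(([1, 1], d))           # prehistory that fixes the start state
y = np.convolve(d_ext, c)[2:2 + len(d)] + 0.05 * rng.standard_normal(len(d))
print("sent:    ", d.tolist())
print("detected:", viterbi_equalize(y))
```

With low noise the detected sequence should match the transmitted one, apart from possible errors in the last one or two symbols because the trellis is not terminated (GSM handles this with tail bits, see the later slide).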
25
Trellis diagram: error event
→ BPSK, channel order 2, example for an error event:
over the time steps i = 0 … 7 the estimated path leaves
the correct path through the states S0 … S3 for a few
steps and then merges with it again; outside the error
event both paths run in the steady state.
26
6.3 Error Probability with Viterbi
  • Error event: the erroneous path diverges from the
    correct path and merges back into it (path merging)
  • Probability of error event
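The expression for this probability is an image; for white Gaussian noise it is commonly written as a Q-function of the Euclidean distance of the error event. A hedged sketch with assumed notation (error vector e, real noise variance σ² per sample):

```latex
% Hedged sketch: pairwise probability of an error event e in AWGN, with
% d(e) the Euclidean distance between the two competing noiseless sequences.
\[
P(\mathbf{e}) \approx Q\!\left(\frac{d(\mathbf{e})}{2\sigma}\right),
\qquad
d^{2}(\mathbf{e}) = \lVert \mathbf{C}\,\mathbf{e} \rVert^{2}
\]
```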

27
Symbol Error Probability
Hamming weight: number of non-zero elements of the error vector
  • Terms with the minimum distance values dominate
    the sum!

PSK
QAM
  • Bit error probability
  • Burst errors: an interleaver is necessary so that
    the channel decoder sees independent errors
28
6.3 Error Probability for MLSE
  • Definition of error vectors
  • Symbol error probability
  • and SNR loss factor

with
is the a-priori probability of error event
with
Individual channel H determines specific error
events e.
29
Error Probability for MLSE
  • Simplification: the term with the minimum distance
    dominates the sum
  • Symbol error probability for M-PSK and M-QAM

Bit error probability
average bit errors per symbol
30
Worst Case Channels
  • a)
  • b)

31
Worst-Case Channels for MLSE (2nd order)
32
Error Probability for MLSE
SNR loss of approx. 2.3 dB
33
6.4 Channel Estimation
Block diagram: the data (estimated or pilot data) drive
the channel (impulse response) and, in parallel, a
channel model; the difference between the noisy output
signal and the model output is the error signal.
34
Channel Estimation
  • Linear equation system with l unknown channel
    coefficients h(i) and N linearly independent
    equations
35
Channel Estimation
  • Vector definitions

36
Channel Estimation
  • Difference signal (error)
  • Minimize squared error
  • Derivative with respect to h^H, set to zero
  • Solution: pseudo-inverse (see the sketch below)
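The matrix expressions on these slides are images; as a hedged illustration of the pseudo-inverse solution, the sketch below builds the data matrix from a known pilot sequence and solves the least-squares problem. The variable names, the pilot sequence, and the channel are assumptions made for the example only.

```python
import numpy as np

rng = np.random.default_rng(1)

l = 3                                    # number of unknown channel coefficients h(i)
N = 16                                   # number of linearly independent equations
h = np.array([0.8, 0.5 + 0.2j, -0.3])    # assumed "true" channel, unknown to the receiver

# Known pilot symbols (here random QPSK) and the noisy received samples.
d = (rng.choice([1.0, -1.0], size=N + l - 1)
     + 1j * rng.choice([1.0, -1.0], size=N + l - 1)) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = np.convolve(d, h, mode="valid") + noise

# Data matrix D: row i contains the l pilot symbols contributing to y[i].
D = np.array([d[i:i + l][::-1] for i in range(N)])

# Least-squares solution via the pseudo-inverse: h_hat = (D^H D)^(-1) D^H y.
h_hat = np.linalg.solve(D.conj().T @ D, D.conj().T @ y)
print(np.round(h_hat, 3))                # close to the assumed channel h
```

With orthogonal pilot sequences, D^H D becomes a scaled identity, so the matrix inversion collapses to a simple scaling; that is the point of the following slide on orthogonal training sequences.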

37
Channel Estimation
  • Problem
  • Solution: orthogonal sequences of pilot data,
    where
  • Example
  • Channel length and observation window

has non-integer elements → matrix multiplication
required for
38
Channel Estimation
  • multiplication of matrices containing orthogonal
    sequences
  • results in
  • with (3)

39
Standard Deviation for maximum likelihood
channel estimation
Influence of channel estimation on Viterbi
detection, N = 4
40
GSM Channel Estimation
GSM burst: 142 bits
  • Midamble: 26 training bits in the middle of the
    burst → minimum estimation error in case of
    strongly time-dependent channel coefficients
  • 2 × 58 data bits (→ SNR loss of approx. 1 dB)
  • 3 tail bits for trellis termination (+ guard
    interval 8.25 µs)

41
GSM Training Sequences
  • Memory 5 → estimation of max. 6 channel taps
    possible
  • 6 channel taps → 32 Viterbi states, 64
    transitions
  • Different training sequences for cell
    identification

42
Turbo Channel Estimation
  • Initial channel estimation based on the training
    sequence (pilots)
  • Demodulation/decoding of the whole burst →
    generation of pseudo pilots
  • Re-encoding/modulation, interleaving
  • Channel estimation based on the whole burst
  • Iteration: repeat the previous steps several times
    (see the sketch after this list)
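A compact, hedged sketch of this loop for an uncoded BPSK burst is given below. It only mirrors the structure of the slide: the demodulation/decoding stage is replaced by a simple least-squares block equalizer with hard decisions, so re-encoding/modulation reduces to the identity, and all names and parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def conv_matrix(h, L):
    """(L + len(h) - 1) x L convolution (Toeplitz) matrix of the sequence h."""
    C = np.zeros((L + len(h) - 1, L))
    for k in range(L):
        C[k:k + len(h), k] = h
    return C

def estimate_channel(y, d, l):
    """Least-squares channel estimate from the symbols d explaining the samples y."""
    D = conv_matrix(d, l)[:len(y)]
    return np.linalg.lstsq(D, y, rcond=None)[0]

def equalize_and_decide(y, h_hat, L):
    """Stand-in for demodulation/decoding: LS block equalizer + hard decision."""
    d_soft = np.linalg.lstsq(conv_matrix(h_hat, L), y, rcond=None)[0]
    return np.sign(d_soft)      # uncoded BPSK: re-encoding/modulation is the identity

l, L, P = 3, 60, 8                                   # channel taps, burst length, pilots
h = np.array([1.0, 0.6, 0.3])                        # assumed channel
d = np.concatenate((np.ones(P), rng.choice([1.0, -1.0], size=L - P)))
y = np.convolve(d, h) + 0.2 * rng.standard_normal(L + l - 1)

h_hat = estimate_channel(y[:P], d[:P], l)            # 1) initial estimate from pilots
for _ in range(3):                                   # turbo iterations
    d_hat = equalize_and_decide(y, h_hat, L)         # 2)+3) pseudo pilots, whole burst
    h_hat = estimate_channel(y, d_hat, l)            # 4) re-estimate over whole burst

print(np.round(h_hat, 2))                            # approaches the assumed channel
```

Three iterations are used here because the simulation-results slide reports that the bit error rate levels off after 3 iterations.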

43
Simulation results
Bit Error Rate at different burst positions
Extreme Doppler conditions: Bad Urban profile, 500 Hz
Doppler frequency, velocity 300 km/h; leveling after 3 iterations