Title: EE 551/451, Fall 2006, Communication Systems
1. EE 551/451, Fall 2006, Communication Systems
- Zhu Han
- Department of Electrical and Computer Engineering
- Class 15
- Oct. 10th, 2006
2. Outline
- Homework
- Exam format
- Second half schedule
- Chapter 7
- Chapter 16
- Chapter 8
- Chapter 9
- Standards
- Estimation and detection (this class; Chapter 14, not required)
  - Estimation theory, methods, and examples
  - Detection theory, methods, and examples
- Information theory (next Tuesday; Chapter 15, not required)
3. Estimation Theory
- Consider a linear process: y = Hq + n
  - y: observed data
  - q: sent information
  - n: additive noise
- If q is known but H is unknown, estimation is the problem of finding the statistically optimal H, given y, q, and knowledge of the noise properties.
- If H is known, detection is the problem of finding the most likely sent information q, given y, H, and knowledge of the noise properties.
- In a practical system, the two steps are conducted iteratively: track the channel changes, then transmit data.
4. Different Approaches for Estimation
- Minimum variance unbiased estimators
- Subspace estimators
- Least Squares (has no statistical basis)
- Maximum likelihood (uses knowledge of the noise PDF)
- Maximum a posteriori (uses prior information about q)
5. Least Squares Estimator
- Least Squares: q_LS = argmin_q ||y - Hq||^2
- A natural estimator: we want the solution to match the observation
- Does not use any information about the noise
- There is a simple closed-form solution, a.k.a. the pseudo-inverse (see the sketch after this list): q_LS = (H^T H)^{-1} H^T y
- What if we know something about the noise?
  - Say we know Pr(n)
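As a hedged illustration, here is a minimal NumPy sketch of the pseudo-inverse solution above; the model matrix H, the true q, and the noise level are made-up example values, not from the slides.

```python
# Sketch of the least-squares estimator q_LS = (H^T H)^{-1} H^T y.
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((100, 3))       # known linear model, 100 observations
q_true = np.array([1.0, -2.0, 0.5])     # information actually sent (assumed)
y = H @ q_true + 0.1 * rng.standard_normal(100)  # observed data with noise

# lstsq computes the pseudo-inverse solution in a numerically stable way
q_ls, *_ = np.linalg.lstsq(H, y, rcond=None)
print(q_ls)  # should be close to q_true
```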
6. Maximum Likelihood Estimator
- Simple idea: we want to maximize Pr(y|q)
- Can write Pr(n) = e^{-L(n)} with n = y - Hq, so Pr(n) = Pr(y|q) = e^{-L(y, q)}
- L(y, q) is called the likelihood function
- If n is white Gaussian, Pr(n) ∝ e^{-||n||^2 / 2σ^2} and L(y, q) = ||y - Hq||^2 / 2σ^2
- q_ML = argmax_q Pr(y|q) = argmin_q L(y, q) = argmin_q ||y - Hq||^2 / 2σ^2
- This is the same as Least Squares!
7. Maximum Likelihood Estimator
- But if the noise is jointly Gaussian with covariance matrix C
  - Recall C = E[n n^T]. Then
  - Pr(n) ∝ e^{-½ n^T C^{-1} n}
  - L(y, q) = ½ (y - Hq)^T C^{-1} (y - Hq)
- q_ML = argmin_q ½ (y - Hq)^T C^{-1} (y - Hq)
- This also has a closed-form solution (see the sketch after this list): q_ML = (H^T C^{-1} H)^{-1} H^T C^{-1} y
- If n is not Gaussian at all, ML estimators become complicated and nonlinear
- Fortunately, in most channels the noise is Gaussian
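A hedged sketch of the colored-noise ML estimator q_ML = (H^T C^{-1} H)^{-1} H^T C^{-1} y; the exponentially correlated covariance C and the problem sizes are assumptions chosen only for illustration.

```python
# Sketch of the ML estimator under jointly Gaussian noise with covariance C.
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par = 200, 2
H = rng.standard_normal((n_obs, n_par))
q_true = np.array([0.7, -1.3])          # assumed true parameters

# Example covariance: exponentially correlated noise, C_ij = 0.9^|i-j|
idx = np.arange(n_obs)
C = 0.9 ** np.abs(idx[:, None] - idx[None, :])
n = rng.multivariate_normal(np.zeros(n_obs), C)
y = H @ q_true + n

Ci_H = np.linalg.solve(C, H)            # C^{-1} H without forming C^{-1}
q_ml = np.linalg.solve(H.T @ Ci_H, Ci_H.T @ y)
print(q_ml)                             # should be close to q_true
```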
8. Estimation Example: Denoising
- Suppose we have a noisy signal y and wish to obtain the noiseless signal x, where y = x + n
- Can we use estimation theory to find x?
- Try H = I, q = x in the linear model
- Both the LS and ML estimators simply give x = y!
- ⇒ we need a more powerful model
- Suppose x can be approximated by a polynomial, i.e., a mixture of the first p powers of r: x = Σ_{i=0}^{p} a_i r^i
9. Example: Denoising
- Least Squares estimate: q_LS = (H^T H)^{-1} H^T y
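A minimal sketch of this denoising example: under the polynomial model, H becomes a Vandermonde matrix in r, and LS fits the coefficients a_i. The "true" signal, noise level, and degree p below are assumed values.

```python
# Polynomial denoising: x = sum_{i=0}^{p} a_i r^i, fit by least squares.
import numpy as np

rng = np.random.default_rng(2)
r = np.linspace(0, 1, 100)
x = 1 + 2 * r - 3 * r**2                 # assumed noiseless signal
y = x + 0.2 * rng.standard_normal(r.size)

p = 2
H = np.vander(r, p + 1, increasing=True)  # columns: 1, r, r^2, ..., r^p
a_ls, *_ = np.linalg.lstsq(H, y, rcond=None)
x_hat = H @ a_ls                          # denoised estimate
print(a_ls)                               # close to [1, 2, -3]
```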
10. Maximum a Posteriori (MAP) Estimate
- This is an example of using prior information about the signal
- Priors are generally expressed in the form of a PDF Pr(x)
- Once the likelihood L(x) and the prior are known, we have complete statistical knowledge
- LS/ML are suboptimal in the presence of a prior
- MAP (a.k.a. Bayesian) estimates are optimal
- Bayes' rule: posterior ∝ likelihood × prior, i.e., Pr(x|y) ∝ Pr(y|x) · Pr(x)
11. Maximum a Posteriori (Bayesian) Estimate
- Consider the class of linear systems y = Hx + n
- Bayesian methods maximize the posterior probability: Pr(x|y) ∝ Pr(y|x) · Pr(x)
  - Pr(y|x) (likelihood function) ∝ exp(-||y - Hx||^2)
  - Pr(x) (prior PDF) ∝ exp(-G(x))
- Non-Bayesian: maximize only the likelihood
  - x_est = argmin_x ||y - Hx||^2
- Bayesian (see the sketch after this list):
  - x_est = argmin_x ||y - Hx||^2 + G(x), where G(x) is obtained from the prior distribution of x
- If G(x) = ||Γx||^2 ⇒ Tikhonov regularization
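A minimal sketch of the MAP estimate for the simplest Tikhonov case, Γ = √λ·I, i.e. G(x) = λ||x||^2, which gives the closed form x_est = (H^T H + λI)^{-1} H^T y. H, x, and λ are illustrative assumptions.

```python
# MAP/Tikhonov estimate vs. the likelihood-only (LS/ML) estimate.
import numpy as np

rng = np.random.default_rng(3)
H = rng.standard_normal((50, 20))
x_true = rng.standard_normal(20)
y = H @ x_true + 0.5 * rng.standard_normal(50)

lam = 0.5                                # assumed regularization weight
x_map = np.linalg.solve(H.T @ H + lam * np.eye(20), H.T @ y)
x_ml = np.linalg.lstsq(H, y, rcond=None)[0]

# The prior shrinks the estimate toward zero, typically reducing error
print(np.linalg.norm(x_map - x_true), np.linalg.norm(x_ml - x_true))
```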
12. Expectation-Maximization (EM)
- The Expectation-Maximization (EM) algorithm alternates between an expectation (E) step, which computes the expected likelihood by treating the latent variables as if they were observed, and a maximization (M) step, which computes maximum-likelihood estimates of the parameters by maximizing the expected likelihood found in the E step. The parameters found in the M step are then used to begin another E step, and the process repeats.
- E-step: estimate the unobserved event (e.g., which Gaussian generated each sample), conditioned on the observations, using the parameter values from the last maximization step.
- M-step: maximize the expected log-likelihood of the joint event (see the sketch after this list).
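A minimal sketch of EM for a two-component 1-D Gaussian mixture, matching the E-step/M-step description above; the data, initial parameters, and iteration count are illustrative assumptions.

```python
# EM for a two-component Gaussian mixture in one dimension.
import numpy as np

rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

mu = np.array([-1.0, 1.0])     # initial means (assumed)
sigma = np.array([1.0, 1.0])   # initial standard deviations
pi = np.array([0.5, 0.5])      # initial mixing weights

for _ in range(50):
    # E-step: responsibility of each Gaussian for each sample
    pdf = (pi / (sigma * np.sqrt(2 * np.pi))
           * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2))
    resp = pdf / pdf.sum(axis=1, keepdims=True)

    # M-step: re-estimate parameters from the expected assignments
    nk = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / data.size

print(mu, sigma, pi)   # roughly (-2, 3), (1, 1), (0.3, 0.7)
```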
13. Minimum-Variance Unbiased Estimator
- Biased and unbiased estimators
- A minimum-variance unbiased (MVU) estimator is an unbiased estimator of the parameters whose variance is minimized for all values of the parameters.
- The Cramér-Rao Lower Bound (CRLB) sets a lower bound on the variance of any unbiased estimator (see the sketch after this list).
- A biased estimator may still outperform an unbiased one in terms of variance.
- Subspace methods
  - MUSIC
  - ESPRIT
  - Widely used in radar
  - Helicopter and weapon detection (from features)
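A small sketch illustrating the CRLB in a standard case: for estimating the mean of N i.i.d. Gaussian samples, the bound is σ^2/N, and the sample mean is unbiased and attains it. σ, N, and the trial count are assumed values.

```python
# Monte Carlo check that the sample mean attains the CRLB sigma^2 / N.
import numpy as np

rng = np.random.default_rng(5)
sigma, N, trials = 2.0, 50, 20000
samples = rng.normal(0.0, sigma, size=(trials, N))
estimates = samples.mean(axis=1)       # sample-mean estimator, per trial

print("empirical variance:", estimates.var())
print("CRLB sigma^2 / N:  ", sigma**2 / N)
```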
14. What is Detection?
- Deciding whether, and when, an event occurs
- a.k.a. decision theory, hypothesis testing
- Examples:
  - Presence or absence of a signal (radar)
  - Whether a received bit is 0 or 1
  - Whether a stock goes up or not
  - Whether a criminal is convicted or set free
- Measures whether a statistically significant change has occurred or not
15. Detection
16. Hypothesis Testing with the Matched Filter
- Let the received signal be y(t) and the signal model be h(t)
- Hypothesis testing:
  - H0: y(t) = n(t) (no signal)
  - H1: y(t) = h(t) + n(t) (signal)
- The optimal decision is given by the likelihood ratio test (Neyman-Pearson theorem):
  - Select H1 if L(y) = Pr(y|H1) / Pr(y|H0) > γ
  - Otherwise select H0
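As a hedged sketch: for white Gaussian noise, the likelihood ratio test above reduces to thresholding the matched-filter statistic ⟨y, h⟩. The template, noise level, and threshold below are assumed values for illustration.

```python
# Likelihood ratio test for H0: y = n vs. H1: y = h + n, white Gaussian n.
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 100)
h = np.sin(2 * np.pi * 5 * t)          # assumed known signal template
sigma = 1.0

def decide(y, gamma):
    """Return 1 (choose H1) if the matched-filter statistic exceeds gamma."""
    return int(y @ h > gamma)

y0 = sigma * rng.standard_normal(t.size)       # an H0 trial (noise only)
y1 = h + sigma * rng.standard_normal(t.size)   # an H1 trial (signal + noise)
gamma = 0.5 * (h @ h)                          # assumed midpoint threshold
print(decide(y0, gamma), decide(y1, gamma))    # typically prints 0 and 1
```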
17. Signal Detection Paradigm
- (Figure: response distributions for signal trials and noise trials)
18. Signal Detection
19. Receiver Operating Characteristic (ROC) Curve
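The ROC figure itself is not reproduced here. As a hedged sketch, the following traces ROC points (Pfa, Pd) for a simple Gaussian detection problem by sweeping the decision threshold; the signal level d and trial count are assumed values.

```python
# Empirical ROC points: sweep the threshold, record false-alarm and detection rates.
import numpy as np

rng = np.random.default_rng(7)
trials, d = 5000, 1.5                     # d = signal level / noise std (assumed)
s0 = rng.standard_normal(trials)          # test statistic under H0
s1 = d + rng.standard_normal(trials)      # test statistic under H1

for gamma in np.linspace(-2, 4, 7):
    pfa = (s0 > gamma).mean()             # false-alarm probability
    pd = (s1 > gamma).mean()              # detection probability
    print(f"gamma={gamma:+.1f}  Pfa={pfa:.3f}  Pd={pd:.3f}")
```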
20. Matched Filters
- The matched filter is the optimal linear filter for maximizing the signal-to-noise ratio (SNR) at the sampling time in the presence of additive stochastic noise
- Given a transmitted pulse shape g(t) of duration T, the matched filter is h_opt(t) = k·g(T - t) for any constant k
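A minimal discrete-time sketch of the relation h_opt(t) = k·g(T - t): the filter is the time-reversed pulse, and its output peaks at the sampling time. The rectangular pulse and noise level are assumed values.

```python
# Matched filtering a noisy received signal with the time-reversed pulse.
import numpy as np

rng = np.random.default_rng(8)
g = np.ones(20)                         # assumed rectangular pulse, duration T = 20
y = np.concatenate([np.zeros(30), g, np.zeros(30)])
y += 0.3 * rng.standard_normal(y.size)  # additive noise

h = g[::-1]                             # matched filter: h[n] = g[T - n]
out = np.convolve(y, h)
print("peak at sample", out.argmax())   # expect near 30 + 20 - 1 = 49
```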
21. Questions?