Transcript and Presenter's Notes

Title: Lecture 6 Resolution and Generalized Inverses


1
Lecture 6: Resolution and Generalized Inverses
2
Syllabus
Lecture 01 Describing Inverse Problems
Lecture 02 Probability and Measurement Error, Part 1
Lecture 03 Probability and Measurement Error, Part 2
Lecture 04 The L2 Norm and Simple Least Squares
Lecture 05 A Priori Information and Weighted Least Squares
Lecture 06 Resolution and Generalized Inverses
Lecture 07 Backus-Gilbert Inverse and the Trade Off of Resolution and Variance
Lecture 08 The Principle of Maximum Likelihood
Lecture 09 Inexact Theories
Lecture 10 Nonuniqueness and Localized Averages
Lecture 11 Vector Spaces and Singular Value Decomposition
Lecture 12 Equality and Inequality Constraints
Lecture 13 L1, L∞ Norm Problems and Linear Programming
Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15 Nonlinear Problems: Newton's Method
Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17 Factor Analysis
Lecture 18 Varimax Factors, Empirical Orthogonal Functions
Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20 Linear Operators and Their Adjoints
Lecture 21 Fréchet Derivatives
Lecture 22 Exemplary Inverse Problems, incl. Filter Design
Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems
3
Purpose of the Lecture
  • Introduce the idea of a Generalized Inverse, the Data and Model Resolution Matrices, and the Unit Covariance Matrix
  • Quantify the spread of resolution and the size of the covariance
  • Use the minimization of the spread of resolution and/or the size of the covariance as the guiding principle for solving inverse problems
4
Part 1: The Generalized Inverse, the Data and Model Resolution Matrices, and the Unit Covariance Matrix
5
all of the solutions of the form
m^est = M d + v
6
let's focus on this matrix M:
m^est = M d + v
7
rename it the generalized inverse and use the symbol G^{-g}:
m^est = G^{-g} d + v
8
(let's ignore the vector v for a moment)
The Generalized Inverse G^{-g} operates on the data to give an estimate of the model parameters:
if d^pre = G m^est then m^est = G^{-g} d^obs
9
Generalized Inverse G^{-g}: if d^pre = G m^est then m^est = G^{-g} d^obs
sort of looks like a matrix inverse
except it is M×N, not square, and G G^{-g} ≠ I and G^{-g} G ≠ I
10
so actually the generalized inverse is not a matrix inverse at all
11
d^pre = G m^est and m^est = G^{-g} d^obs
plug one equation into the other:
d^pre = N d^obs with N = G G^{-g}
the data resolution matrix
12
Data Resolution Matrix, N
d^pre = N d^obs
How much does d_i^obs contribute to its own prediction?
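Written out element by element (a restatement of the relation above, not slide text), the i-th prediction is a weighted combination of all the observed data:

    % row i of N holds the weights; the diagonal element N_{ii} measures
    % how much d_i^{obs} contributes to its own prediction
    d_i^{pre} = \sum_{j} N_{ij}\, d_j^{obs}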
13
if N = I
d^pre = d^obs
d_i^pre = d_i^obs
d_i^obs completely controls its own prediction
14
The closer N is to I, the more d_i^obs controls its own prediction
15
straight line problem
16
d^pre = N d^obs (figure: the data resolution matrix N for the straight line problem)
only the data at the ends control their own prediction
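A minimal NumPy sketch of this straight-line example (the x-locations are illustrative choices, not values from the slide): it forms the least-squares generalized inverse and shows that the diagonal of N = G G^{-g} is largest for the first and last data points.

    import numpy as np

    # illustrative x-locations for the straight-line problem d_i = m1 + m2*x_i
    x = np.linspace(0.0, 10.0, 11)
    G = np.column_stack([np.ones_like(x), x])     # N x 2 data kernel

    # least-squares generalized inverse G^-g = [G^T G]^-1 G^T
    Gg = np.linalg.inv(G.T @ G) @ G.T

    # data resolution matrix N = G G^-g (the "hat" matrix of least squares)
    Ndata = G @ Gg

    # diagonal of N: how much each datum controls its own prediction
    print(np.round(np.diag(Ndata), 3))            # largest at the two ends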
17
d^obs = G m^true and m^est = G^{-g} d^obs
plug one equation into the other:
m^est = R m^true with R = G^{-g} G
the model resolution matrix
18
Model Resolution Matrix, R
m^est = R m^true
How much does m_i^true contribute to its own estimated value?
19
if R = I
m^est = m^true
m_i^est = m_i^true
m_i^est reflects m_i^true only
20
else if R ≠ I
m_i^est = … + R_{i,i-1} m_{i-1}^true + R_{i,i} m_i^true + R_{i,i+1} m_{i+1}^true + …
m_i^est is a weighted average of all the elements of m^true
21
The closer R is to I, the more m_i^est reflects only m_i^true
22
Discrete version of the Laplace Transform
large c: d is a shallow average of m(z)
small c: d is a deep average of m(z)
23
(figure: the kernels exp(-c_hi z) and exp(-c_lo z) are multiplied by m(z) and integrated over depth z; the rapidly decaying kernel gives the shallow average d_hi, the slowly decaying kernel gives the deep average d_lo)
d_i = ∫ exp(-c_i z) m(z) dz
24
m^est = R m^true (figure: the model resolution matrix R for this problem)
the shallowest model parameters are best resolved
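A NumPy sketch of the discrete Laplace-transform example (the depth grid, the decay constants c_i, and the small damping ε² are illustrative choices, not the lecture's values): it builds the kernel G_ij = exp(-c_i z_j) Δz, forms a damped minimum-length generalized inverse, and shows that the diagonal of R = G^{-g} G falls off with depth, so the shallowest model parameters are best resolved.

    import numpy as np

    # depth grid for m(z) and decay constants c_i (illustrative values)
    z = np.linspace(0.0, 10.0, 101)
    dz = z[1] - z[0]
    c = np.linspace(0.1, 2.0, 20)

    # discretized Laplace-transform kernel: d_i = sum_j exp(-c_i z_j) m_j dz
    G = np.exp(-np.outer(c, z)) * dz              # 20 x 101: under-determined

    # damped minimum-length generalized inverse G^-g = G^T [G G^T + eps^2 I]^-1
    # (a small eps^2 keeps the inversion stable; plain minimum length is eps = 0)
    eps2 = 1e-4
    Gg = G.T @ np.linalg.inv(G @ G.T + eps2 * np.eye(len(c)))

    # model resolution matrix R = G^-g G: its diagonal falls off with depth
    R = Gg @ G
    d = np.diag(R)
    print(d[0], d[50], d[100])                    # shallow >> mid-depth >> deep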
25
Covariance associated with the Generalized Inverse
unit covariance matrix: divide by σ² to remove the effect of the overall magnitude of the measurement error
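A short derivation of this statement (standard propagation of error, assuming uncorrelated data of equal variance σ_d², which is what the division by σ² removes):

    % since m^{est} = G^{-g} d^{obs} is a linear function of the data,
    [\operatorname{cov} m^{est}] = G^{-g}\,[\operatorname{cov} d]\,G^{-g\,T}
                                 = \sigma_d^{2}\, G^{-g} G^{-g\,T}
    % dividing by \sigma_d^{2} removes the error magnitude and leaves
    [\operatorname{cov}_u m] = \sigma_d^{-2}\,[\operatorname{cov} m^{est}]
                             = G^{-g} G^{-g\,T}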
26
unit covariance for straight line problem
the model parameters are uncorrelated when this (off-diagonal) term is zero, which happens when the data are centered about the origin
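A worked form for the straight line d_i = m_1 + m_2 x_i, assuming the simple least-squares generalized inverse (the slide's own plotted formula is not reproduced here):

    [\operatorname{cov}_u m] = (G^{T}G)^{-1}
      = \frac{1}{N\sum_i x_i^{2} - \left(\sum_i x_i\right)^{2}}
        \begin{pmatrix} \sum_i x_i^{2} & -\sum_i x_i \\ -\sum_i x_i & N \end{pmatrix}
    % the off-diagonal term is proportional to \sum_i x_i, so intercept and
    % slope are uncorrelated exactly when the data are centered about the origin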
27
Part 2: The spread of resolution and the size of the covariance
28
a resolution matrix has small spread if only its main diagonal has large elements, i.e. it is close to the identity matrix
29
Dirichlet Spread Functions
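In the usual Dirichlet definitions (standard forms, supplied here because the slide's equations did not come through), the spread of a resolution matrix is its squared distance from the identity matrix:

    \operatorname{spread}(R) = \sum_{i}\sum_{j} \left(R_{ij} - \delta_{ij}\right)^{2},
    \qquad
    \operatorname{spread}(N) = \sum_{i}\sum_{j} \left(N_{ij} - \delta_{ij}\right)^{2}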
30
a unit covariance matrix has small size if its diagonal elements are small: error in the data then corresponds to only small error in the model parameters (ignoring correlations)
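The corresponding size measure, under the same standard convention, is the sum of the diagonal elements (the variances):

    \operatorname{size}\left([\operatorname{cov}_u m]\right) = \sum_{i} [\operatorname{cov}_u m]_{ii}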
31
(No Transcript)
32
Part 3: minimization of the spread of resolution and/or the size of covariance as the guiding principle for creating a generalized inverse
33
over-determined case: note that for simple least squares
G^{-g} = [G^T G]^{-1} G^T
the model resolution R = G^{-g} G = [G^T G]^{-1} G^T G = I is always the identity matrix
34
suggests that we try to minimize the spread of the data resolution matrix, N:
find the G^{-g} that minimizes spread(N)
35
spread of the k-th row of N: spread(N_k) = Σ_j (N_kj − δ_kj)²
now compute its derivative with respect to the elements of G^{-g}
36
first term
37
second term
third term is zero
38
putting it all together:
G^{-g} = [G^T G]^{-1} G^T
which is just simple least squares
39
the simple least squares solution minimizes the spread of data resolution and has zero spread of the model resolution
40
under-determined case: note that for the minimum length solution
G^{-g} = G^T [G G^T]^{-1}
the data resolution N = G G^{-g} = G G^T [G G^T]^{-1} = I is always the identity matrix
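A quick NumPy check of the two facts just stated, that simple least squares gives R = I in the over-determined case and minimum length gives N = I in the under-determined case (the random test matrices are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    # over-determined case: simple least squares gives R = I (but N != I)
    G_over = rng.normal(size=(20, 4))             # 20 data, 4 model parameters
    Gg_ls = np.linalg.inv(G_over.T @ G_over) @ G_over.T
    print(np.allclose(Gg_ls @ G_over, np.eye(4))) # R = G^-g G = I  -> True

    # under-determined case: minimum length gives N = I (but R != I)
    G_under = rng.normal(size=(4, 20))            # 4 data, 20 model parameters
    Gg_ml = G_under.T @ np.linalg.inv(G_under @ G_under.T)
    print(np.allclose(G_under @ Gg_ml, np.eye(4)))# N = G G^-g = I  -> True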
41
suggests that we try to minimize the spread of the model resolution matrix, R:
find the G^{-g} that minimizes spread(R)
42
minimization leads to G^{-g} [G G^T] = G^T, so
G^{-g} = G^T [G G^T]^{-1}
which is just the minimum length solution
43
the minimum length solutionminimizes the spread
of model resolutionand has zero spread of the
data resolution
44
general case: minimize a weighted combination of spread(N), spread(R), and size([cov_u m])
minimization leads to an equation for the generalized inverse G^{-g}
45
general case: the minimization leads to a Sylvester equation for G^{-g}, so there is an explicit solution in terms of matrices
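As a generic illustration of why landing on a Sylvester equation A X + X B = Q is convenient (the matrices below are made up for the demonstration and are not the ones from the lecture's minimization), such equations can be solved explicitly, e.g. with scipy.linalg.solve_sylvester:

    import numpy as np
    from scipy.linalg import solve_sylvester

    # a generic Sylvester equation A X + X B = Q with illustrative matrices
    rng = np.random.default_rng(1)
    A = rng.normal(size=(4, 4))
    B = rng.normal(size=(3, 3))
    Q = rng.normal(size=(4, 3))

    X = solve_sylvester(A, B, Q)                  # explicit solution in matrices
    print(np.allclose(A @ X + X @ B, Q))          # True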
46
special case 1: weights 1, 0, ε² on spread(N), spread(R), and size([cov_u m])
[G^T G + ε² I] G^{-g} = G^T, so G^{-g} = [G^T G + ε² I]^{-1} G^T
damped least squares
47
special case 2: weights 0, 1, ε² on spread(N), spread(R), and size([cov_u m])
G^{-g} [G G^T + ε² I] = G^T, so G^{-g} = G^T [G G^T + ε² I]^{-1}
damped minimum length
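Although they arise from different special cases, the two damped formulas are algebraically the same matrix (a standard push-through identity, noted here as an observation rather than something from the slides); a short NumPy check with an illustrative G and ε²:

    import numpy as np

    rng = np.random.default_rng(2)
    G = rng.normal(size=(8, 5))                   # illustrative 8 x 5 data kernel
    eps2 = 0.1

    # damped least squares:   [G^T G + eps^2 I]^-1 G^T
    Gg_dls = np.linalg.inv(G.T @ G + eps2 * np.eye(5)) @ G.T
    # damped minimum length:  G^T [G G^T + eps^2 I]^-1
    Gg_dml = G.T @ np.linalg.inv(G @ G.T + eps2 * np.eye(8))

    print(np.allclose(Gg_dls, Gg_dml))            # True: same generalized inverse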
48
so
  • no new solutions have arisen
  • just a reinterpretation of previously-derived
    solutions

49
reinterpretation
  • instead of solving for estimates of the model
    parameters
  • we are solving for estimates of weighted averages
    of the model parameters,
  • where the weights are given by the model
    resolution matrix

50
a criticism of Dirichlet spread() functions, when m represents m(x), is that they don't capture the sense of being localized very well
51
These two rows of the model resolution matrix have the same spread
(figure: two rows R_ij plotted against column index j, each centered on its own row index i)
but the left case is better localized
52
we will take up this issue in the next lecture