Title: Lecture 12 Equality and Inequality Constraints
1. Lecture 12: Equality and Inequality Constraints
2. Syllabus

Lecture 01  Describing Inverse Problems
Lecture 02  Probability and Measurement Error, Part 1
Lecture 03  Probability and Measurement Error, Part 2
Lecture 04  The L2 Norm and Simple Least Squares
Lecture 05  A Priori Information and Weighted Least Squares
Lecture 06  Resolution and Generalized Inverses
Lecture 07  Backus-Gilbert Inverse and the Trade-off of Resolution and Variance
Lecture 08  The Principle of Maximum Likelihood
Lecture 09  Inexact Theories
Lecture 10  Nonuniqueness and Localized Averages
Lecture 11  Vector Spaces and Singular Value Decomposition
Lecture 12  Equality and Inequality Constraints
Lecture 13  L1, L∞ Norm Problems and Linear Programming
Lecture 14  Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15  Nonlinear Problems: Newton's Method
Lecture 16  Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17  Factor Analysis
Lecture 18  Varimax Factors, Empirical Orthogonal Functions
Lecture 19  Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20  Linear Operators and Their Adjoints
Lecture 21  Fréchet Derivatives
Lecture 22  Exemplary Inverse Problems, incl. Filter Design
Lecture 23  Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24  Exemplary Inverse Problems, incl. Vibrational Problems
3. Purpose of the Lecture

Review the natural solution and the SVD
Apply the SVD to other types of prior information and to equality constraints
Introduce inequality constraints and the notion of feasibility
Develop solution methods
Solve exemplary problems
4. Part 1: Review of the Natural Solution and the SVD
5. Subspaces

Model parameters: mp can affect the data; m0 cannot affect the data.
Data: dp can be fit by a model; d0 cannot be fit by any model.
6-8. Natural Solution

Determine mp by solving dp − G mp = 0, and set m0 = 0.
The error is then reduced to its minimum, E = e0^T e0.
The solution length is reduced to its minimum, L = mp^T mp.
9. Singular Value Decomposition (SVD)
10. Singular Value Decomposition

G = U Λ V^T, with U^T U = I and V^T V = I.
11-12. Suppose only p of the λs are non-zero

Then only the first p columns of U and only the first p columns of V contribute: G = Up Λp Vp^T.
13. Up^T Up = I and Vp^T Vp = I, since the vectors are mutually perpendicular and of unit length.
Up Up^T ≠ I and Vp Vp^T ≠ I, since the vectors do not span the entire space.
14-15. The Natural Solution

mest = Vp Λp^-1 Up^T d, with natural generalized inverse G^-g = Vp Λp^-1 Up^T.
16. Resolution and Covariance

For the natural generalized inverse: model resolution R = Vp Vp^T, data resolution N = Up Up^T, and unit covariance [cov m] = σd^2 Vp Λp^-2 Vp^T.
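As a concrete review, the natural solution, generalized inverse, and resolution matrices can be computed with NumPy's SVD. This is a Python sketch standing in for the deck's MATLAB, and the matrix G and data dobs below are made-up values:

```python
import numpy as np

# Made-up rank-deficient example: 3 model parameters, rank(G) = 2
G = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
dobs = np.array([1.0, 2.0, 3.0])

U, lam, Vt = np.linalg.svd(G)
p = int(np.sum(lam > 1e-10 * lam[0]))     # number of non-zero singular values
Up, lamp = U[:, :p], lam[:p]
Vp, V0 = Vt[:p, :].T, Vt[p:, :].T         # V0 spans the null space

Gg = Vp @ np.diag(1.0 / lamp) @ Up.T      # natural generalized inverse G^-g
mest = Gg @ dobs                          # natural solution: no null-space part

R = Vp @ Vp.T                             # model resolution matrix
N = Up @ Up.T                             # data resolution matrix
```

Note that mest has no component along the null vectors V0, which is what makes it the minimum-length solution.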
17. Part 2: Application of the SVD to Other Types of Prior Information and to Equality Constraints
18. General Solution to the Linear Inverse Problem
19-20. General Minimum-Error Solution

(from two lectures ago)
m = Vp Λp^-1 Up^T d + V0 α: the natural solution plus an amount α of the null vectors.
21-22. You can adjust α to match whatever a priori information you want

For example, to make m ≈ <m>, minimize L = ||m − <m>||^2 with respect to α.
This gives α = V0^T <m>, so m = Vp Λp^-1 Up^T d + V0 V0^T <m>.
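For instance, this choice of α can be sketched in Python (NumPy in place of the deck's MATLAB; G, dobs, and the prior <m> are made-up values):

```python
import numpy as np

# Made-up underdetermined problem: 2 data, 3 model parameters
G = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
dobs = np.array([1.0, 2.0])
mprior = np.array([1.0, 1.0, 1.0])        # the prior model <m>

U, lam, Vt = np.linalg.svd(G)
p = int(np.sum(lam > 1e-10 * lam[0]))
Up, lamp = U[:, :p], lam[:p]
Vp, V0 = Vt[:p, :].T, Vt[p:, :].T

# alpha = V0^T <m> minimizes ||m - <m>||^2 over the null-space amplitudes
m = Vp @ ((Up.T @ dobs) / lamp) + V0 @ (V0.T @ mprior)
```

The data fit is unchanged by the null-space term, since G V0 = 0; only the resemblance to the prior model improves.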
23. Equality Constraints

Minimize E subject to the constraint H m = h.
24. Step 1: find the part of the solution constrained by H m = h

Take the SVD of H (not of G): H = Up Λp Vp^T, so m = Vp Λp^-1 Up^T h + V0 α.
25. Step 2: convert G m = d into an equation for α

G Vp Λp^-1 Up^T h + G V0 α = d; rearranging,
(G V0) α = d − G Vp Λp^-1 Up^T h, i.e. G′ α = d′.
26. Step 3: solve G′ α = d′ for α using least squares.
27. Step 4: reconstruct m from α: m = Vp Λp^-1 Up^T h + V0 α.
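The four steps can be sketched in Python (NumPy's lstsq plays the role of the least-squares solve; G, d, H, and h are made-up values):

```python
import numpy as np

# minimize ||d - G m||^2 subject to H m = h  (made-up small example)
G = np.array([[1.0, 2.0], [2.0, 1.0], [1.0, 1.0]])
d = np.array([1.0, 1.0, 1.0])
H = np.array([[1.0, 1.0]])        # one constraint: m1 + m2 = 1
h = np.array([1.0])

# Step 1: SVD of H (not G) gives the constrained part of the solution
U, lam, Vt = np.linalg.svd(H)
p = int(np.sum(lam > 1e-10 * lam[0]))
Up, lamp = U[:, :p], lam[:p]
Vp, V0 = Vt[:p, :].T, Vt[p:, :].T
m_c = Vp @ ((Up.T @ h) / lamp)    # Vp Lambda_p^-1 Up^T h

# Steps 2-3: solve (G V0) alpha = d - G m_c by least squares
Gp = G @ V0
dp = d - G @ m_c
alpha, *_ = np.linalg.lstsq(Gp, dp, rcond=None)

# Step 4: reconstruct m
m = m_c + V0 @ alpha
```

For this example the answer can be checked against the Lagrange-multiplier solution of the same problem, which is m = [0.5, 0.5].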
28. Part 3: Inequality Constraints and the Notion of Feasibility
29-30. Not all inequality constraints provide new information

x > 3, x > 2: the second follows from the first.
31-32. Some inequality constraints are incompatible

x > 3, x < 2: nothing can be both bigger than 3 and smaller than 2.
33. Every row of the inequality constraint H m ≥ h divides the space of m into two parts: one where a solution is feasible, and one where it is infeasible. The boundary is a planar surface.
34. When all the constraints are considered together, they either create a feasible volume or they don't. If they do, then the solution must be in that volume; if they don't, then no solution exists.
36. Now consider the problem of minimizing the error E subject to the inequality constraints H m ≥ h.
37. If the global minimum is inside the feasible region, then the inequality constraints have no effect on the solution.
38-39. But if the global minimum is outside the feasible region, then the solution is on the surface of the feasible volume, at the point on the surface where E is smallest.
40. (figure: the feasible and infeasible regions, with the minimum-error point Emin on the boundary)
41. Furthermore, the feasible-pointing normal to the surface must be parallel to ∇E; otherwise you could slide the point along the surface to reduce the error E.
42. (figure: ∇E parallel to the feasible-pointing normal at Emin)
43. The Kuhn-Tucker Theorem
44-48. At the solution it is possible to find a vector y with yi ≥ 0 such that

∇E = H^T y

that is, the gradient of the error is a non-negative combination of the feasible-pointing normals to the constraint surfaces, and y specifies the combination.

49. For the linear case with G m = d, E = ||d − G m||^2, so the condition reads −2 G^T (d − G m) = H^T y with y ≥ 0.

50-51. Some coefficients yi are positive: the solution is on the corresponding constraint surface.

52-53. Some coefficients yi are zero: the solution is on the feasible side of the corresponding constraint surface.
54. Part 4: Solution Methods
55. Simplest case: minimize E subject to mi ≥ 0 (H = I and h = 0)

An iterative algorithm with two nested loops.
56. Step 1

- Start with an initial guess for m.
- The particular initial guess m = 0 is feasible.
- It has all of its elements in the set mE: constraints satisfied in the equality sense (mi = 0).
57. Step 2

- Any model parameter mi in mE that has a negative gradient [∇E]i associated with it can be changed both to decrease the error and to remain feasible.
- If there is no such model parameter in mE, the Kuhn-Tucker theorem indicates that this m is the solution to the problem.
58. Step 3

- If some model parameter mi in mE has a corresponding negative gradient, then the solution can be changed to decrease the prediction error.
- To change the solution, we select the model parameter corresponding to the most negative gradient and move it to the set mS.
- All the model parameters in mS are then recomputed by solving the system GS mS′ = dS in the least squares sense. The subscript S on the matrix indicates that only the columns multiplying the model parameters in mS have been included in the calculation.
- All the parameters in mE are still zero. If the new model parameters are all feasible, then we set m = m′ and return to Step 2.
59. Step 4

- If some of the elements of mS′ are infeasible, however, we cannot use this vector as a new guess for the solution.
- So we compute the change in the solution, δm = mS′ − mS, and add as much of this vector as possible to the solution mS without causing the solution to become infeasible.
- We therefore replace mS with the new guess mS + α δm, where α is the largest choice that can be made without some element of mS becoming infeasible. At least one of the elements of mS now has its constraint satisfied in the equality sense and must be moved back to mE. The process then returns to Step 3.
60. In MatLab

mest = lsqnonneg(G, dobs);
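The steps above are the Lawson-Hanson non-negative least squares algorithm, which MATLAB's lsqnonneg implements. A minimal Python sketch of the same computation, using scipy.optimize.nnls with a made-up G and d:

```python
import numpy as np
from scipy.optimize import nnls

# minimize ||d - G m||^2 subject to mi >= 0  (made-up example)
G = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [0.0,  1.0]])
d = np.array([1.0, 2.0, -1.0])

# Unconstrained least squares gives a negative m2 here ...
m_ls = np.linalg.lstsq(G, d, rcond=None)[0]

# ... so NNLS holds m2 at zero: its constraint is satisfied in the equality sense
m, rnorm = nnls(G, d)
```

Because the columns of this particular G are orthogonal, holding m2 = 0 leaves m1 at its unconstrained value of 1.5.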
61-62. Example

The gravitational force (the data) depends upon the density (the model parameters) via the inverse square law (the theory).
64. More complicated case: minimize ||m||^2 subject to H m ≥ h
65. This problem is solved by transformation to the previous problem.
66. Solve by non-negative least squares

Minimize ||d′ − G′ m′|| subject to m′ ≥ 0, with G′ = [H, h]^T and d′ = [0, …, 0, 1]^T.
Then compute mi as mi = −ei / eM+1 (i = 1, …, M), with e = d′ − G′ m′.
67. In MatLab
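As a sketch of this transformation (Python in place of the deck's MATLAB; the constraints H and h are made up), the problem minimize ||m||^2 subject to H m ≥ h reduces to one call to non-negative least squares:

```python
import numpy as np
from scipy.optimize import nnls

# Made-up constraints: m1 >= 1 and m2 >= 2; minimum-length m is then [1, 2]
H = np.array([[1.0, 0.0],
              [0.0, 1.0]])
h = np.array([1.0, 2.0])

M = H.shape[1]
Gp = np.hstack([H, h[:, np.newaxis]]).T   # G' = [H, h]^T, shape (M+1) x q
dp = np.hstack([np.zeros(M), 1.0])        # d' = [0, ..., 0, 1]^T

mpp, _ = nnls(Gp, dp)                     # non-negative least squares
e = dp - Gp @ mpp
m = -e[:-1] / e[-1]                       # recover m from the residual
```

If the residual e were exactly zero, the constraints would be infeasible; a non-zero last element e[-1] is what makes the back-substitution possible.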
69. Yet more complicated case: minimize ||d − G m||^2 subject to H m ≥ h
70. This problem is also solved by transformation to the previous problem.
71. Minimize ||m′|| subject to H′ m′ ≥ h′

where m′ = Up^T d − Λp Vp^T m (so that m = Vp Λp^-1 (Up^T d − m′)),
H′ = −H Vp Λp^-1, and h′ = h + H′ Up^T d.
73. In MatLab

[Up, Lp, Vp] = svd(G,0);
lambda = diag(Lp);
rlambda = 1./lambda;
Lpi = diag(rlambda);
% transformation 1
Hp = -H*Vp*Lpi;
hp = h + Hp*Up'*dobs;
% transformation 2
Gp = [Hp, hp]';
dp = [zeros(1,length(Hp(1,:))), 1]';
mpp = lsqnonneg(Gp,dp);
ep = dp - Gp*mpp;
mp = -ep(1:end-1)/ep(end);
% take mp back to m
mest = Vp*Lpi*(Up'*dobs-mp);
dpre = G*mest;
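A line-by-line Python translation of the script above can be checked on a small made-up problem (G, dobs, H, and h are test values; NumPy's svd and SciPy's nnls stand in for MATLAB's svd and lsqnonneg):

```python
import numpy as np
from scipy.optimize import nnls

# Made-up test: minimize ||dobs - G m||^2 subject to m1 + m2 >= 3.
# The unconstrained minimum [4/3, 4/3] violates the constraint, so the
# answer lies on the surface m1 + m2 = 3, at m = [1.5, 1.5].
G = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
dobs = np.array([2.0, 2.0, 2.0])
H = np.array([[1.0, 1.0]])
h = np.array([3.0])

Up, lam, VpT = np.linalg.svd(G, full_matrices=False)  # economy SVD, as svd(G,0)
Vp = VpT.T
Lpi = np.diag(1.0 / lam)

# transformation 1: minimize ||m'|| subject to Hp m' >= hp
Hp = -H @ Vp @ Lpi
hp = h + Hp @ (Up.T @ dobs)

# transformation 2: non-negative least squares
Gp = np.hstack([Hp, hp[:, np.newaxis]]).T
dp = np.hstack([np.zeros(Hp.shape[1]), 1.0])
mpp, _ = nnls(Gp, dp)
ep = dp - Gp @ mpp
mp = -ep[:-1] / ep[-1]

# take mp back to m
mest = Vp @ Lpi @ (Up.T @ dobs - mp)
dpre = G @ mest
```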