Title: Is the dot product n·K negative?
Lecture 15
Is the dot product n·K negative?
If not, then the cue cannot be seen by the camera.
Direction-cosine matrix
(Figure: the cue's unit normal n and the viewing direction K.)
Consider the unit normal to the three cues indicated below.
The unit-normal test and c.s. proximity test may not resolve ambiguity with either of these.
Any thoughts on inferring occlusion?
The Extended Kalman Filter
Outline
- Illustrate our particular use of the EKF with video of experimental systems
- Develop the EKF using related examples for those not yet familiar with it
- Discuss some of the practical ways in which we have found it useful, as well as some of the pitfalls
Visual guidance of a forklift
Visual guidance of a wheelchair
The EKF enables both teaching and tracking.
The EKF is needed (in part) due to limitations of odometry.
Odometry: the use of differential equations to relate wheel rotation to evolving position/orientation.
With longer trajectories, these odometry-based integrals for position deteriorate.
Estimation based on odometry alone is particularly poor in the presence of high-wheel-slip pivoting.
Odometry only, or dead reckoning: wheel-rotation increments Δθ1 and Δθ2.
Real-time sample of right-wheel increments, Δθ1.
Real-time sample of left-wheel increments, Δθ2.
Wheel-rotation samples:

  Δθ1     Δθ2
  0.1050  0.0950
  0.1099  0.0901
  0.1148  0.0852
  0.1195  0.0805
  0.1240  0.0760
  0.1282  0.0718
  0.1322  0.0678
  0.1359  0.0641
  0.1392  0.0608
  0.1421  0.0579
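The dead-reckoning integration that these wheel increments feed can be sketched as follows. This is only an illustration, not the lecture's code: the wheel radius r, track width b, and function name are assumed values not given in the slides.

```python
import math

def dead_reckon(increments, r=0.5, b=1.0):
    """Integrate differential-drive dead reckoning from wheel-rotation
    increments (right, left). The wheel radius r and track width b are
    illustrative values, not taken from the lecture."""
    x, y, phi = 0.0, 0.0, 0.0
    for dq1, dq2 in increments:
        ds = r * (dq1 + dq2) / 2.0      # distance traveled this step
        dphi = r * (dq1 - dq2) / b      # heading change this step
        x += ds * math.cos(phi + dphi / 2.0)
        y += ds * math.sin(phi + dphi / 2.0)
        phi += dphi
    return x, y, phi

# The wheel-rotation samples above (right, left):
samples = [(0.1050, 0.0950), (0.1099, 0.0901), (0.1148, 0.0852),
           (0.1195, 0.0805), (0.1240, 0.0760), (0.1282, 0.0718),
           (0.1322, 0.0678), (0.1359, 0.0641), (0.1392, 0.0608),
           (0.1421, 0.0579)]
x, y, phi = dead_reckon(samples)
```

Because the right-wheel increments steadily exceed the left's, the integrated heading phi grows: the vehicle is pivoting, exactly the regime where odometry-only estimates deteriorate.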
Example of stochastic modeling and analysis.
Consider the one-dimensional case where both wheels turn exactly together. Rather than time, a (the accumulated wheel rotation) will be our independent variable, since the plant model is kinematics-based.
Solution: x(a) = x(0) + R·a
An unknown term w(a) in the plant model, dx/da = R + w(a), accounts for uncertainty in the model.
Smaller Q → higher confidence in the model.
Medium Q → moderate confidence in the model.
Larger Q → little confidence in the model.
We use this stochastic model to produce an ongoing, Gaussian probability distribution for the true x.
Gaussian probability distribution parameterized by μ and σ.
Expectation of x: E(x) = ∫ x f(x) dx, with f Gaussian.
Expectation of (x − μ)²: E((x − μ)²) = σ².
We need the rate of change of μ and σ².
Recall our stochastic o.d.e.: dx/da = R + w(a).
x: the actual or true value.
x̂: the best estimate of x, the mean of the pdf for x.
Δx = x − x̂: the error in the best estimate.
Substitute into the governing equation.
The best estimate, or mean, advances in accordance with the deterministic equation: dx̂/da = R.
Subtract the lower from the upper: d(Δx)/da = w(a).
Consider the variance of the probability distribution for x: E(Δx²) = P.
w is both zero-mean and uncorrelated with x(0).
This is a statement of uncorrelated, white noise: E(w(a) w(a′)) = Q δ(a − a′).
The factor of ½ comes from the symmetry of the δ function.
The stochastic equations: dx̂/da = R and dP/da = Q.
Note that our level of certainty diminishes with more and more wheel rotation.
E(x(a)) = E(x(0)) + R·a = 0.0 + 0.5·a
P(a) = P(0) + Q·a = 0.0 + 0.12·a
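The propagation of the mean and variance for this 1-D dead-reckoning model is simple enough to sketch directly; the function name is ours, but R = 0.5 and Q = 0.12 are the slides' values:

```python
# Propagate the mean and variance of the 1-D dead-reckoning model,
# E(x(a)) = E(x(0)) + R*a and P(a) = P(0) + Q*a, using the slides'
# values R = 0.5, Q = 0.12, E(x(0)) = 0, P(0) = 0.
R, Q = 0.5, 0.12

def propagate(mean0, var0, a):
    # mean grows with travel; variance grows linearly with a
    return mean0 + R * a, var0 + Q * a

mean, var = propagate(0.0, 0.0, 10.0)  # matches the slides: 5.0 and 1.2
```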
This means that position certainty is diminishing with distance traveled: the variance of the probability distribution increases linearly with distance traveled, a.
E(x(10)) = 5.0, P(10) = 1.2
Determine the probability that the true value of x(10) is within ±0.1 of the mean of 5.0.
We could use tables to determine the probability by recognizing that this corresponds to the region within ±0.1/1.2^(1/2) = 0.09128 standard deviations of the mean.
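Instead of tables, the same number can be computed from the error function; this is a minimal sketch of that lookup (the variable names are ours):

```python
import math

# Probability that a Gaussian variable lies within +/-0.1 of its mean
# when the variance is P(10) = 1.2: the interval spans
# z = 0.1/sqrt(1.2) standard deviations, and the probability of being
# within +/-z standard deviations of the mean is erf(z/sqrt(2)).
z = 0.1 / math.sqrt(1.2)            # = 0.09128... standard deviations
prob = math.erf(z / math.sqrt(2.0))  # roughly 0.073
```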
How do we use an observation at a = 10 to alter the a priori values of E(x(10)) and P(10)?
The stochastic observation equation: z = h(x) + v.
The deterministic portion is based on the camera model.
The random portion is additive and Gaussian (as with the process equations), with E(v) = 0 and E(v²) = R.
Bayes' theorem: f(x|z) = f(z|x) f(x) / f(z).
The a priori probability density function for x(10), which we already know.
This is our a posteriori probability density function, because it is conditioned on the observation z.
The pdf for x(10) conditioned on the observation z is our a posteriori pdf.
The location of the peak of this pdf is therefore the mean of our a posteriori pdf.
Note that x appears only in the arguments of the exponentials.
Return for a moment to the exponential of the a posteriori function; this must be equal to the exponential part of a single Gaussian in x.
It follows that we can identify the a posteriori mean and variance. Compare this to our updated best estimate of x(10): it follows that we may write the update in terms of the innovation at a = 10, z − h(x̂).
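The resulting scalar measurement update can be sketched as follows. The function and variable names are ours, not the lecture's, but the structure (innovation, gain, updated mean and variance) is the standard scalar Kalman correction:

```python
# A sketch of the scalar Kalman measurement update described above.
def scalar_update(x_prior, P_prior, z, h, H, R_obs):
    """x_prior, P_prior: a priori mean and variance; z: observation;
    h: predicted observation h(x_prior); H: sensitivity dh/dx;
    R_obs: observation-noise variance E(v^2)."""
    innovation = z - h                            # innovation at this sample
    K = P_prior * H / (H * H * P_prior + R_obs)   # Kalman gain
    x_post = x_prior + K * innovation             # updated mean
    P_post = (1.0 - K * H) * P_prior              # updated variance
    return x_post, P_post

# With the a priori values E(x(10)) = 5.0 and P(10) = 1.2, a direct
# observation z = 4.8 (H = 1) with noise variance 0.3 (our numbers,
# for illustration) pulls the estimate toward the observation:
x_post, P_post = scalar_update(5.0, 1.2, 4.8, 5.0, 1.0, 0.3)
```

Note that the updated variance is smaller than both the a priori variance and the observation-noise variance: the correction always adds information.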
In the absence of new samples, the variance of the probability distribution continues to grow. This means that position certainty is diminishing with distance traveled.
With occasional observations, the rate of increase of uncertainty may be reduced. With more frequent observations, uncertainty can be kept near zero. At the same time, best estimates of the mean of the probability density function for x are being adjusted with each observation.
Observation data:

  a      z
  1.0   -0.30766
  2.0   -0.27405
  3.0   -0.23773
  4.0   -0.19840
  5.0   -0.15567
  6.0   -0.10913
  7.0   -0.05827
  8.0   -0.00249
  9.0    0.05890
  10.0   0.12676
Note that, since this is a simulation, we may create data consistent with any particular real event.
These data happen to be consistent with a vehicle with a leaky tire: R ranges from 0.55 to 0.45 over the course of the maneuver.
The data are also consistent with an initial position that is different from the assumed zero. The true (but unknown) initial position is x(0) = −0.1.
We can accommodate both of these unmodeled, unknown effects: the initial position error via an initial variance P(0) different from zero, and the changing wheel radius via a nonzero process-noise variance Q.
However, the extended Kalman filter also allows us to estimate R together with the current position x, by defining a two-dimensional state vector: x1 = x, x2 = R.
In such a case, the same data could be applied in the estimation of a two-random-variable joint probability density function.
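A two-state filter of this kind can be sketched as follows. This is only an illustration of the idea, not the lecture's implementation: the lecture's camera observation model is not given here, so we assume a direct position observation z = x + v, and the noise values Q, R_obs, and the initial covariance are our own illustrative choices.

```python
import numpy as np

# Sketch of a two-state filter with state [x, R], where R is the
# slowly changing wheel radius. Observation model z = x + v and all
# noise values are illustrative assumptions, not from the slides.
def ekf_step(xhat, P, da, z, Q, R_obs):
    # Propagate: x <- x + R*da, R <- R; F is the model Jacobian.
    F = np.array([[1.0, da], [0.0, 1.0]])
    xhat = F @ xhat
    P = F @ P @ F.T + Q
    # Update with a position observation (H = [1, 0]).
    H = np.array([[1.0, 0.0]])
    S = (H @ P @ H.T).item() + R_obs     # innovation variance
    K = (P @ H.T) / S                    # Kalman gain, shape (2, 1)
    xhat = xhat + K[:, 0] * (z - (H @ xhat).item())
    P = (np.eye(2) - K @ H) @ P
    return xhat, P

# Feed observations consistent with a true radius of 0.45 while the
# filter starts at 0.5; the radius estimate drifts toward the truth.
xhat = np.array([0.0, 0.5])
P = np.diag([0.01, 0.01])
Q = np.diag([1e-4, 1e-4])
for k in range(1, 11):
    xhat, P = ekf_step(xhat, P, 0.2, 0.45 * 0.2 * k, Q, 1e-3)
```

The cross-covariance built up by the propagation step is what lets a position observation correct the radius estimate; as the slides note below, this lock-on takes a while.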
P = (P0^-1 + H^T R^-1 H)^-1
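This is the information form of the a posteriori variance; it agrees numerically with the gain form (1 − K·H)·P0 used in the correction step. A quick scalar check, with illustrative values of ours:

```python
# Check, with illustrative scalar values, that the information form
# P = (P0^-1 + H^T R^-1 H)^-1 matches the gain form (1 - K H) P0.
P0, H, R_obs = 1.2, 1.0, 0.3

P_info = 1.0 / (1.0 / P0 + H * H / R_obs)   # information form
K = P0 * H / (H * H * P0 + R_obs)           # Kalman gain
P_gain = (1.0 - K * H) * P0                 # gain form
```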
Trajectory: reality vs. best estimates.
Note that it takes a while for these a posteriori estimates of R to lock on to the underlying reality.
For the first couple of corrections, they actually get worse before improving.
Estimates of the state element of interest, x1, are not necessarily improved by estimating parameters such as R in real time.
Some practical considerations.
The departure, w(a), from the ideal no-slip assumption (i.e., our process model) is deterministic, but too complicated to model.
Unlike w(a), model error is actually deterministic and generally not additive; the central limit theorem may apply anyway.
Ensuring that E(v) = 0, however, takes some thought.
Here, observation biases in teaching are largely replicated in tracking.
Since it is difficult to create really good plant models a priori, it is tempting to try to estimate everything.
The bad effects of (partially) neglected nonlinearity are generally exacerbated when more random estimation variables are introduced.
The use of estimates to create/identify observations (which are in turn used to modify these same estimates):
- Very tempting and convenient.
- Requires maintaining high precision (low P).
- Once this slips away, it can be impossible to recover.
For us, the EKF has been very useful and has worked well:
- It is efficient numerically.
- It automatically weights observations in a way that takes advantage of the information they contain for each individual element of the state, especially in light of information contained in prior observations.
- It balances observation error against model or plant error.
- It is forgiving in the choice of R, Q, and P(0) values.
- It has always been a very stable estimator.
http://www.bastiansolutions.com/products/automated-guided-vehicles/default.asp