Aibo Localization Part 2

Transcript and Presenter's Notes
1
Aibo Localization Part 2
CSE398/498 18 March 05
2
Administration
  • Mid-semester feedback questionnaire results
  • Plans for the remainder of the semester
  • Localization Challenge
  • Kicking Challenge/Goalie Challenge
  • Scrimmages
  • Direct methods for robot localization

3
References
  • A. Kelly, Introduction to Mobile Robots Course
    Notes, Position Estimation 1-2
  • A. Kelly, Introduction to Mobile Robots Course
    Notes, Uncertainty 1-3
  • G. Welch and G. Bishop, An Introduction to the
    Kalman Filter
  • P. Maybeck, Stochastic Models, Estimation, and
    Control, Chapter 1 (attached to the Welch and
    Bishop tutorial)

4
Range-based Position Estimation
[Figure: robot measuring ranges r1 and r2 to two landmarks]
  • One range measurement is insufficient to
    estimate the robot pose
  • We know it can be done with three (on the plane)
  • Q: Can it be done with two?
  • A: Yes.

5
Range-based Position Estimation (contd)
  • Writing the two range circles (assuming, as in
    the figure, both landmarks at the same height,
    y1 = y2) and subtracting them, we get
    (xr - x1)² + (yr - y1)² = r1²
    (xr - x2)² + (yr - y1)² = r2²
    xr = (r1² - r2² + x2² - x1²) / (2(x2 - x1))
  • From our first equation we have
    yr = y1 ± sqrt(r1² - (xr - x1)²)
  • which leaves us 2 solutions for yr.
  • However, by the field geometry we know that yr <
    y1, from which we obtain
    yr = y1 - sqrt(r1² - (xr - x1)²)
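As a sketch, this construction can be coded directly; the beacon coordinates below are hypothetical, and the function assumes the equal-height geometry (y1 = y2) used in the derivation above:

import numpy as np

def position_from_two_ranges(x1, y1, x2, y2, r1, r2):
    """Intersect two range circles whose centers share a common
    height (y1 == y2), keeping the solution below the landmarks."""
    assert abs(y1 - y2) < 1e-9, "derivation assumes equal-height landmarks"
    # Subtracting the two circle equations eliminates yr:
    xr = (r1**2 - r2**2 + x2**2 - x1**2) / (2.0 * (x2 - x1))
    disc = r1**2 - (xr - x1)**2
    if disc < 0:
        raise ValueError("circles do not intersect (range errors too large)")
    yr = y1 - np.sqrt(disc)      # field geometry: yr < y1
    return xr, yr

# Hypothetical beacons 2 m apart at the field edge (y = 3)
print(position_from_two_ranges(-1.0, 3.0, 1.0, 3.0, r1=2.5, r2=2.5))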

6
Range-based Position Estimation (contd)
[Figure: intersecting range circles r1 and r2]
ISSUE 1: The circle intersection may be
degenerate if the range errors are significant
or if the robot lies near the line through the
landmarks, whose direction is [x2 - x1, y2 - y1]ᵀ
7
Range-based Position Estimation (contd)
ISSUE 2: Position error will be a function of
the relative position of the dog with respect to
the landmarks
8
Error Propagation from Fusing Range Measurements
(contd)
  • In order to estimate the Aibo position, it was
    necessary to combine 2 imperfect range
    measurements
  • These range errors propagate through non-linear
    equations to evolve into position errors
  • Let's characterize how these range errors map
    into position errors
  • First, a bit of mathematical review

9
Error Propagation from Fusing Range Measurements
(contd)
  • Recall the Taylor series
    f(x) = f(a) + f'(a)(x - a) + (1/2!)f''(a)(x - a)² + ...
  • is a series expansion of a function f(x) about a
    point a
  • Let y represent some state value we are trying to
    estimate, x is the sensor measurement, and f is a
    function that maps sensor measurements to state
    estimates: y = f(x)
  • Now let's say that the true sensor measurement x
    is corrupted by some additive noise dx. The
    resulting Taylor series becomes
    f(x + dx) = f(x) + f'(x) dx + (1/2!)f''(x) dx² + ...
  • Subtracting the two and keeping only the first
    order terms yields
    dy ≈ f'(x) dx
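To see the first-order approximation at work, here is a minimal numeric check; the map f below is an arbitrary stand-in for a sensor-to-state function, not one from the lecture:

import numpy as np

# Arbitrary stand-in for a sensor-to-state map f and its derivative
f = lambda x: np.sqrt(x)
fprime = lambda x: 0.5 / np.sqrt(x)

x, dx = 4.0, 0.1               # true measurement and additive noise
exact = f(x + dx) - f(x)       # actual error induced in the estimate
first_order = fprime(x) * dx   # dy ~ f'(x) dx from the truncated series
print(exact, first_order)      # 0.02484... vs 0.025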

10
Error Propagation from Fusing Range Measurements
(contd)
  • For multivariate functions, the approximation
    becomes
    dy ≈ J dx
  • where the n×m matrix J is the Jacobian matrix, or
    the matrix form of the total differential, written
    as
    J_ij = ∂f_i/∂x_j
  • The determinant of J provides the ratio of
    n-dimensional volumes in y and x. In other
    words, det J represents how much errors in x are
    amplified when mapped to errors in y
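A finite-difference sketch makes this concrete; the map f below is an arbitrary example (polar to Cartesian), not the Aibo's measurement model:

import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian J_ij = df_i/dx_j of f at x."""
    x = np.asarray(x, float)
    y0 = np.asarray(f(x), float)
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (np.asarray(f(xp)) - y0) / eps
    return J

# Example map from measurements (r, phi) to state (x, y)
f = lambda m: np.array([m[0] * np.cos(m[1]), m[0] * np.sin(m[1])])
J = numerical_jacobian(f, np.array([2.0, 0.5]))
print(np.linalg.det(J))  # volume amplification factor (here ~2.0 = r)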

11
Error Propagation from Fusing Range Measurements
(contd)
  • Let's look at our 2D example to see why this is
    the case
  • We would like to know how changes in the two
    ranges r1 and r2 affect our position estimates.
    The Jacobian for this can be written as
    J = ∂(x, y)/∂(r1, r2) = [ ∂x/∂r1  ∂x/∂r2 ]
                            [ ∂y/∂r1  ∂y/∂r2 ]
  • The determinant of this is simply
    det J = (∂x/∂r1)(∂y/∂r2) - (∂x/∂r2)(∂y/∂r1)
  • This is the definition of the vector (or cross)
    product of the two column vectors

12
Error Propagation from Fusing Range Measurements
(contd)
  • Flashback to 9th grade
  • Recall from the definition of the vector product
    that the magnitude of the resulting vector is
    |a × b| = |a| |b| sin θ
  • which is equivalent to the area of the
    parallelogram formed from the 2 vectors
  • This is our error (or uncertainty) volume

13
Error Propagation from Fusing Range Measurements
(contd)
  • For our example, this means that we can estimate
    the effects of robot/landmark geometry on
    position errors by merely calculating the
    determinant of J
  • This is known as the position (or geometric)
    dilution of precision (PDOP/GDOP)
  • The effects of PDOP are well studied with respect
    to GPS systems
  • Let's calculate PDOP for our own 2D GPS system.
    What is the relationship between range errors and
    position errors?

14
Error Propagation from Fusing Range Measurements
(contd)
  • So we have
    r1² = (x - x1)² + (y - y1)²
    r2² = (x - x2)² + (y - y2)²
  • which is not readily separable for x and y.
    However, it is in a perfect form to solve for
    the Jacobian with respect to x and y. So we can
    easily calculate the inverse Jacobian as
    J⁻¹ = ∂(r1, r2)/∂(x, y) = [ (x - x1)/r1  (y - y1)/r1 ]
                              [ (x - x2)/r2  (y - y2)/r2 ]

15
Error Propagation from Fusing Range Measurements
(contd)
  • If we take the determinant of this, we obtain
    det J⁻¹ = [(x - x1)(y - y2) - (x - x2)(y - y1)] / (r1 r2)
  • and as Evan pointed out, the numerator
    corresponds to the magnitude of the cross product
    of the two robot-to-landmark vectors
    |v1 × v2| = |v1| |v2| sin θ
  • where the norm terms are equivalent to our range
    measurements. So, this all reduces to
    det J⁻¹ = sin θ

16
Error Propagation from Fusing Range Measurements
(contd)
  • However, we need the determinant of the original
    Jacobian. We note that
    det J = 1 / det J⁻¹
  • which yields
    det J = 1 / sin θ
  • which means the error in your (x, y) position is
    NOT dependent on the range from the target, and
    will blow up as the vergence angle θ between your
    range measurements approaches 0 or 180 degrees

17
Position Dilution of Precision (PDOP) for 2
Range Sensors
The take-home message here is to be careful using
range estimates when the vergence angle to the
landmarks is small
[Figure: the blue robot is the true position; the
red robot shows the position estimated using range
measurements corrupted with random Gaussian noise
having a standard deviation equal to 5% of the
true range]
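The plots themselves are not reproduced here, but a small Monte Carlo sketch in the spirit of the caption (hypothetical beacon positions, 5% range noise) shows the error growing as the vergence angle shrinks:

import numpy as np

rng = np.random.default_rng(0)
x1, y1, x2, y2 = -1.0, 3.0, 1.0, 3.0   # hypothetical beacons

def mean_error(xr_true, yr_true, noise_frac=0.05, trials=1000):
    r1 = np.hypot(xr_true - x1, yr_true - y1)
    r2 = np.hypot(xr_true - x2, yr_true - y2)
    errs = []
    for _ in range(trials):
        n1 = r1 * (1 + noise_frac * rng.standard_normal())
        n2 = r2 * (1 + noise_frac * rng.standard_normal())
        xr = (n1**2 - n2**2 + x2**2 - x1**2) / (2 * (x2 - x1))
        disc = n1**2 - (xr - x1)**2
        if disc < 0:
            continue   # degenerate intersection (ISSUE 1)
        yr = y1 - np.sqrt(disc)
        errs.append(np.hypot(xr - xr_true, yr - yr_true))
    return np.mean(errs)

# Far from the beacons (small vergence angle) vs. close in (large angle)
print(mean_error(0.0, -20.0), mean_error(0.0, 1.0))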

18
Error Propagation from Fusing Range Measurements
(contd)
[Figure: uncertainty regions for three robot/landmark
geometries, labeled Bad, Good, and Bad]
QUESTION: Why aren't the uncertainty regions
parallelograms?
19
Inferring Orientation
  • Using range measurements, we can infer the
    position of the robot without knowledge of its
    orientation θ
  • With its position known, we can use this to infer
    θ
  • In the camera frame C, the bearings to the
    landmarks are measured directly as α_C
  • In the navigation frame N, the bearing angles to
    the landmarks are
    α_N = atan2(y1 - yr, x1 - xr)
  • So, the robot orientation is merely
    θ = α_N - α_C

NOTE: Estimating θ this way compounds the
position error and the bearing error. If you
have redundant measurements, use them.
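A sketch of this orientation computation; the robot and landmark coordinates are hypothetical, and alpha_C stands in for the camera-frame bearing measurement:

import numpy as np

def infer_orientation(xr, yr, x1, y1, alpha_C):
    """Robot heading theta from one landmark: the bearing in the
    navigation frame minus the bearing measured in the camera frame."""
    alpha_N = np.arctan2(y1 - yr, x1 - xr)   # bearing in nav frame
    theta = alpha_N - alpha_C                # robot orientation
    return np.arctan2(np.sin(theta), np.cos(theta))  # wrap to [-pi, pi]

# Robot at origin, landmark at (1, 1), seen 15 deg left of camera center
print(np.degrees(infer_orientation(0.0, 0.0, 1.0, 1.0, np.radians(15.0))))
# -> 30.0 (nav-frame bearing of 45 deg minus camera bearing of 15 deg)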
20
Bearings-based Position Estimation
  • There will most likely be less uncertainty in
    bearing measurements than in range measurements
    (particularly at longer ranges)
  • Q: Can we directly estimate our position with
    only two bearing measurements?
  • A: No.

21
Bearings-based Position Estimation
  • The points (x1,y1), (x2,y2), (xr,yr) define a
    circle where the former 2 points define a chord
    (dashed line)
  • The robot could be located anywhere on the
    circle's lower arc and the inscribed angle
    subtended by the landmarks would still be α2 -
    α1
  • The orientation of the dog is also free, so you
    cannot rely upon the absolute bearing
    measurements.
  • A third measurement (range or bearing) is
    required to estimate position

NOTE There are many other techniques/constraints
you can use to improve the direct position
estimate. These are left to the reader as an
exercise.
22
Range-based Position Estimation Revisited
  • We saw that even small errors in range
    measurements could result in large position
    errors
  • If the errors are random, additive noise, then
    averaging range measurements should have the
    effect of smoothing (or canceling) out these
    errors
  • Let's rerun our simulation, but use as our
    position estimate the average of our 3 latest
    position estimates. In other words,
    x̂_k = (x_k + x_(k-1) + x_(k-2)) / 3
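A minimal sketch of such a 3-element mean filter; the stream of direct estimates below is made up for illustration:

from collections import deque
import numpy as np

class MeanFilter:
    """Average the N most recent direct position estimates."""
    def __init__(self, n=3):
        self.window = deque(maxlen=n)

    def update(self, estimate):
        self.window.append(np.asarray(estimate, float))
        return np.mean(self.window, axis=0)

f = MeanFilter(n=3)
for direct in [(1.0, 0.2), (1.1, -0.1), (0.9, 0.05), (1.05, 0.0)]:
    print(f.update(direct))   # smoothed (x, y) estimate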

23
Position Estimation Performance for Direct and
3-element Average Estimates
[Figure: Direct Estimate vs. Mean Filter. The blue
robot is the true position; the red robot shows the
position estimated using range measurements
corrupted with random Gaussian noise having a
standard deviation equal to 5% of the true range]

24
Position Error Comparison
[Figure: position estimate errors from the 3-element
mean filter vs. the direct (unfiltered) position
estimate errors]
  • Averaging or temporal filtering is perhaps the
    simplest technique for filtering estimates over
    time

25
Issues with Mean Filtering
  • Mean filtering will work well as long as the
    position changes at each time step are small
    compared to the range measurement errors
  • What would be even better would be to incorporate
    knowledge from the motion of our robot
  • Recall that we issue motion commands to the robot
    by setting x, y, and θ velocities for a given
    duration
  • Why not use this information to predict our next
    position, and then correct this using our sensor
    measurements?
  • Issue: Our motion model will be far from perfect
  • Q: How can we intelligently combine noisy sensor
    measurements with a noisy motion model to obtain
    the best estimates for the pose of our robot?
  • A: The Kalman filter.

26
The Discrete Kalman Filter
27
Bayesian Filters
  • Bayesian filters are recursive data processing
    algorithms
  • They rely upon probability axioms to propagate
    not only an estimate of the state over time, but
    also an estimate for the uncertainty in the state
    estimate.
  • Used in the context of robot localization, a
    Bayesian filter provides estimates for the
    position and orientation of the Aibo, as well as
    an estimate for the uncertainty in the position
    estimates
  • The two most prevalent forms of Bayesian filters
    are Kalman filters and particle filters
  • As the name suggests, these filters rely upon
    Bayes' law for updating both the state and the
    state uncertainty estimates

28
Notation Review
  • Matrices are denoted by a capital letter. In
    text, they will be bold (e.g. A)
  • Vectors are denoted by a lowercase letter. In
    text, they will be bold (e.g. x). In Microsoft
    Equations, they will have an overscore
  • Scalars are lowercase letters without emphasis
  • x̂k⁻ denotes the prior (or predicted) estimate
    for the state vector x at time step k, before the
    measurement update phase
  • x̂k denotes the estimate for the state vector x
    at time step k, after the measurement update phase

QUESTION: What is the state that we are trying
to estimate in the localization problem?
29
What is a Kalman Filter?
  • Optimal recursive data fusion algorithm
  • Predictor-Corrector style algorithm
  • Processes all available sensor measurements in
    estimating the value of parameters of interest
    using
  • Knowledge of system and sensor dynamics
  • Statistical models reflecting uncertainty in
    system noise and sensor dynamics
  • Any information regarding initial conditions

30
What is a Kalman Filter (contd)?
  • Optimal in the sense that, for systems which can
    be described by a linear model, e.g.
    x_k = A x_(k-1) + B u_(k-1) + w_(k-1)
    z_k = H x_k + v_k
  • and for which the process and measurement noises
    wk and vk are normally distributed, the Kalman
    filter is the provably optimal estimator (the
    estimate has minimum error variance)
  • In our case, process noise corresponds to
    uncertainty in the motion model, measurement
    noise is from uncertainty in the sensing model, x
    denotes the state being estimated (the robot
    pose), and z the sensor measurements

31
What is a Kalman Filter (contd)?
  • Recursive in the sense that it is memory-less
  • Does not require all previous data to be
    maintained in memory and reprocessed at each time
    step
  • Propagates first and second order statistics only
    (i.e. mean and variance/covariance)

32
The Gaussian Distribution
  • A 1-D Gaussian distribution is defined as
    p(x) = 1/(σ sqrt(2π)) exp(-(x - μ)² / (2σ²))
  • In 2-D (assuming uncorrelated variables) this
    becomes
    p(x, y) = 1/(2π σx σy) exp(-[(x - μx)²/(2σx²) + (y - μy)²/(2σy²)])
  • In n dimensions, it generalizes to
    p(x) = 1/((2π)^(n/2) |C|^(1/2)) exp(-(1/2)(x - μ)ᵀ C⁻¹ (x - μ))

In the Kalman filter, the process and measurement
noise MUST be normally distributed for optimality
to hold!
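These densities are easy to evaluate numerically; a sketch of the n-dimensional form, with an arbitrary mean and covariance chosen for illustration:

import numpy as np

def gaussian_pdf(x, mu, C):
    """Evaluate the n-dimensional Gaussian density N(mu, C) at x."""
    x, mu = np.asarray(x, float), np.asarray(mu, float)
    n = mu.size
    diff = x - mu
    norm = 1.0 / np.sqrt((2 * np.pi) ** n * np.linalg.det(C))
    return norm * np.exp(-0.5 * diff @ np.linalg.solve(C, diff))

C = np.array([[2.0, 0.5],
              [0.5, 1.0]])     # arbitrary covariance
print(gaussian_pdf([0.5, -0.5], mu=[0.0, 0.0], C=C))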
33
A Quick Primer/Refresher
  • The expected value (i.e. the mean) of a random
    variable X is defined as
    E[X] = μ = ∫ x p(x) dx
  • The variance of X about the mean is defined as
    Var(X) = σ² = E[(X - μ)²]
34
A Quick Primer/Refresher (contd)
  • When X is a vector, the variance is expressed in
    terms of a covariance matrix C, where
    C = E[(X - μ)(X - μ)ᵀ]
  • The resulting matrix has the form
    C_ij = ρ_ij σ_i σ_j   (so C_ii = σ_i²)
  • where ρ_ij corresponds to the degree of
    correlation between variables Xi and Xj

35
The Correlation Coefficient
  • Correlation is a means to measure how strongly
    two functions/series vary together. For a
    discrete series, it is defined as
    ρ = Σ_i (x_i - x̄)(y_i - ȳ) / sqrt(Σ_i (x_i - x̄)² Σ_i (y_i - ȳ)²)
  • where ρ denotes the correlation coefficient
  • The denominator normalizes the correlation
    coefficient such that -1 ≤ ρ ≤ 1
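A direct transcription of this definition; the two series are made up for illustration:

import numpy as np

def correlation_coefficient(x, y):
    """Normalized correlation of two discrete series, -1 <= rho <= 1."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx, dy = x - x.mean(), y - y.mean()
    return np.sum(dx * dy) / np.sqrt(np.sum(dx**2) * np.sum(dy**2))

t = np.linspace(0, 2 * np.pi, 100)
print(correlation_coefficient(np.sin(t), np.sin(t)))    # 1.0
print(correlation_coefficient(np.sin(t), -np.sin(t)))   # -1.0
print(correlation_coefficient(np.sin(t), np.cos(t)))    # ~0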

36-38
Correlation Examples
[Figure slides: example series illustrating varying
degrees of correlation]
39
Properties of the Covariance Matrix
  • The covariance matrix C is symmetric and positive
    definite
  • Through a similarity transform with the proper
    rotation matrix R, C can be decomposed as
    C = R D Rᵀ
  • That is, the columns of R correspond to the
    eigenvectors of C, and the elements of the
    diagonal matrix D to its eigenvalues
  • These also correspond to the primary axes of the
    PDF and the variances, respectively
  • This means you can always define a coordinate
    system where the values will be uncorrelated!
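A sketch of recovering the uncertainty-ellipse axes from a covariance matrix; the matrix below is an arbitrary positive-definite example standing in for the one on the slide:

import numpy as np

C = np.array([[4.0, 1.5],
              [1.5, 2.0]])            # arbitrary positive-definite covariance

# For a symmetric matrix, eigh returns real eigenvalues and orthonormal R
eigvals, R = np.linalg.eigh(C)        # C = R @ diag(eigvals) @ R.T
D = np.diag(eigvals)
assert np.allclose(R @ D @ R.T, C)

# Ellipse semi-axes scale with sqrt(eigenvalues), along the columns of R
print("axis lengths:", np.sqrt(eigvals))
print("axis directions (columns):\n", R)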

40
Example
  • QUESTION: Given a normal distribution with
    covariance matrix
  • characterize the PDF
  • SOLUTION: Solving for the eigenvalues, we get
  • and solving for the first eigenvector we obtain

What this means for us is that we can always
extract a nice 2D ellipse to reflect our
positional uncertainty on the plane.
41
Our Simple Example Revisited
  • Consider a ship sailing east with a perfect
    compass, trying to estimate its position.
  • You estimate the position x from the stars as
    z1 = 100 with a precision of σ1 = 4 miles

[Figure: the measurement PDF along the east-west axis x]
42
A Simple Example (contd)
  • Along comes a more experienced navigator, and she
    takes her own sighting z2
  • She estimates the position as z2 = 125 with a
    precision of σ2 = 3 miles
  • How do you merge her estimate with your own?

[Figure: the two measurement PDFs along x]
43
A Simple Example (contd)
[Figure: combining the two PDFs along x]
44
A Simple Example (contd)
  • With the distributions being Gaussian, the best
    estimate for the state is the mean of the fused
    distribution, so
    x̂ = (σ2²/(σ1² + σ2²)) z1 + (σ1²/(σ1² + σ2²)) z2
  • or alternately
    x̂ = z1 + K_t (z2 - z1),  with K_t = σ1²/(σ1² + σ2²)
  • where K_t is referred to as the Kalman gain, and
    must be computed at each time step
  • The second term, K_t (z2 - z1), is the correction
    term
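Plugging the slide's numbers into this fusion rule, as a sketch:

def fuse(z1, var1, z2, var2):
    """Fuse two scalar Gaussian estimates; returns (mean, variance)."""
    K = var1 / (var1 + var2)          # Kalman gain
    x = z1 + K * (z2 - z1)            # corrected estimate
    var = (1 - K) * var1              # reduced uncertainty
    return x, var

x, var = fuse(z1=100.0, var1=4.0**2, z2=125.0, var2=3.0**2)
print(x, var ** 0.5)   # 116.0, sigma = 2.4: better than either sighting alone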
45
A Simple Example (contd)
  • OK, now you fall asleep on your watch. You wake
    up after 2 hours, and you now have to re-estimate
    your position
  • Let the velocity of the boat be nominally 20
    miles/hour, but with a variance of σw² = 4
    miles²/hour
  • What is the best estimate of your current
    position?

[Figure: the position PDF before and after the 2-hour gap]
46
A Simple Example (contd)
  • The net effect is that the Gaussian is translated
    by a distance equal to the predicted motion (20
    miles/hour × 2 hours = 40 miles) and the variance
    of the distribution is increased to account for
    the uncertainty in dynamics:
    x̂⁻ = x̂ + v Δt,  σ²⁻ = σ² + σw² Δt

[Figure: the translated and widened PDF along x]
47
A Simple Example (contd)
  • OK, this is not a very accurate estimate. So,
    since you've had your nap, you decide to take
    another measurement, and you get z3 = 165 miles
  • Using the same update procedure as the first
    update, we obtain
    x̂ = x̂⁻ + K (z3 - x̂⁻),  K = σ²⁻/(σ²⁻ + σ3²)
  • and so on
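Carrying the example through one full predict-correct cycle; the precision of z3 is not stated on the slide, so σ3 = 4 miles is assumed here purely for illustration:

def predict(x, var, v, dt, var_w):
    """Dead-reckon forward: translate the mean, inflate the variance."""
    return x + v * dt, var + var_w * dt

def correct(x, var, z, var_z):
    """Fuse a new measurement using the Kalman gain."""
    K = var / (var + var_z)
    return x + K * (z - x), (1 - K) * var

x, var = 116.0, 5.76                 # fused estimate from the two sightings
x, var = predict(x, var, v=20.0, dt=2.0, var_w=4.0)   # nap: 2 hours at 20 mph
print(x, var)                        # 156.0, 13.76
x, var = correct(x, var, z=165.0, var_z=4.0**2)       # assumed sigma3 = 4
print(x, var)                        # ~160.2, ~7.4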

48
The Predictor-Corrector Approach
  • In this example, prediction came from using
    knowledge of the vehicle dynamics to estimate its
    change in position
  • The analogy to the Aibo would be integrating
    information from the robot kinematics (i.e. you
    give it desired x, y, and θ velocities for a time
    Δt) to estimate changes in position
  • The correction is accomplished by making
    exteroceptive observations and then fusing these
    with your current estimate
  • This is akin to updating position estimates using
    landmark information, etc.
  • In practice, the prediction rate is typically
    much higher than the correction rate

49
The Discrete Kalman Filter
  • The Kalman filter addresses the problem of
    estimating the state x ∈ Rⁿ of a discrete-time
    controlled process governed by the linear
    difference equation
    x_k = A x_(k-1) + B u_(k-1) + w_(k-1)
  • and with a measurement z ∈ Rᵐ that is
    z_k = H x_k + v_k
  • where wk and vk represent the process and
    measurement noise. They are assumed independent,
    white, and with Gaussian PDFs:
    p(w) ~ N(0, Q),  p(v) ~ N(0, R)

NOTE: The matrices A, B, H, Q, and R may be time
varying
50
The Discrete Kalman Filter (contd)
  • At each time step, the KF propagates both a state
    estimate xk and an estimate for the error
    covariance Pk. The latter provides an indication
    of the uncertainty associated with the state
    estimate
  • As mentioned previously, the KF is a
    predictor-corrector algorithm. Prediction comes
    in the time update phase, and correction in the
    measurement update phase

[Diagram: Time Update (Predict) and Measurement
Update (Correct), cycling into one another]
51
The Time Update Phase
  • Predict the state ahead
    x̂k⁻ = A x̂(k-1) + B u(k-1)
  • Project the error covariance ahead
    Pk⁻ = A P(k-1) Aᵀ + Q

52
Estimate Errors
  • Let xk denote the true state at time k
  • Let x̂k⁻ denote the prior estimate for xk
  • Let x̂k denote the posterior estimate for xk
  • The corresponding estimate errors are then
    ek⁻ = xk - x̂k⁻,  ek = xk - x̂k
  • The corresponding error covariances are then
    Pk⁻ = E[ek⁻ ek⁻ᵀ],  Pk = E[ek ekᵀ]
  • To minimize the posterior error covariance, the
    Kalman gain takes the form
    Kk = Pk⁻ Hᵀ (H Pk⁻ Hᵀ + R)⁻¹

53
The Measurement Update Phase
  • Compute the Kalman gain
    Kk = Pk⁻ Hᵀ (H Pk⁻ Hᵀ + R)⁻¹
  • Update the estimate with the new measurement zk
    x̂k = x̂k⁻ + Kk (zk - H x̂k⁻)
  • Update the error covariance
    Pk = (I - Kk H) Pk⁻
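Putting the two phases together, a compact sketch of the discrete filter; the constant-velocity model below is generic, not the Aibo's motion model:

import numpy as np

class DiscreteKalmanFilter:
    def __init__(self, A, B, H, Q, R, x0, P0):
        self.A, self.B, self.H, self.Q, self.R = A, B, H, Q, R
        self.x, self.P = x0, P0

    def time_update(self, u):
        """Predict: project the state and error covariance ahead."""
        self.x = self.A @ self.x + self.B @ u
        self.P = self.A @ self.P @ self.A.T + self.Q

    def measurement_update(self, z):
        """Correct: fuse measurement z via the Kalman gain."""
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(self.P.shape[0]) - K @ self.H) @ self.P

# Generic 1-D constant-velocity model: state = [position, velocity]
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.zeros((2, 1))                 # no control input in this sketch
H = np.array([[1.0, 0.0]])           # we measure position only
kf = DiscreteKalmanFilter(A, B, H,
                          Q=0.01 * np.eye(2), R=np.array([[0.5]]),
                          x0=np.zeros(2), P0=np.eye(2))
for z in [0.9, 1.1, 1.3]:            # hypothetical position measurements
    kf.time_update(u=np.zeros(1))
    kf.measurement_update(np.array([z]))
print(kf.x, np.diag(kf.P))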

54
The Discrete Kalman Filter
[Diagram: the complete predict-correct cycle, with the
time update and measurement update equations side by side]
55
Summary
  • For linear processes with white Gaussian noise,
    the KF provides a provably optimal means for
    fusing measurement information
  • In practice, the filter is tuned empirically by
    finding optimal estimates for Q and R
  • The linear constraints can be lifted, but the
    optimality of the filter is lost
  • This is the basis for the Extended Kalman Filter
    (EKF) which we will discuss in the sequel