1
SLAM Summer School 2004
  • An Introduction to SLAM Using an EKF
  • Paul Newman
  • Oxford University Robotics Research Group

2
A note to students
  • The lecture I give will not include all these slides. Some of them, and some of the notes I have supplied, are more detailed than required and would take too long to deliver. I have included them for completeness and background, for example the derivation of the Kalman filter from Bayes' rule.
  • I have included in the package a working Matlab implementation of EKF-based SLAM. You should be able to see all the properties of SLAM at work and be able to modify it at your leisure (without having to worry about the awkwardness of a real system to start with). I cannot cover all I would like to in the time available; where applicable, to fill gaps, I forward-reference other talks that will be given during the week. I hope the talk, the slides and the notes will whet your appetite regarding what I reckon is a great area of research.
  • Above all, please, please ask me to explain anything that is unclear - this school is about you learning, not us lecturing.
  • Regards,
  • Paul Newman

3
Overview
  • The Kalman Filter was the first tool employed in SLAM (Smith, Self and Cheeseman).
  • Linear KFs implement Bayes' rule exactly - no hokiness.
  • We can analyse KF properties easily and learn interesting things about Bayesian SLAM.
  • The vanilla, monolithic KF-SLAM formulation is a fine tool for small local areas.
  • But we can do better for large areas, as other speakers will mention.

4
5 Minutes on Estimation
5
Estimation is ...
(Diagram: data and prior beliefs enter an estimation engine, which produces an estimate.)
6
Minimum Mean Squared Error Estimation
(Equation not reproduced: choose the estimate so that the argument of the expectation operator (an average), the expected squared error, is minimised.)
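The criterion itself is an image in the original slides; in the standard notation (a reconstruction, not a copy of the slide) it reads:

```latex
\hat{x}_{\mathrm{MMSE}} = \arg\min_{\hat{x}} \, E\left[ (x - \hat{x})^{T} (x - \hat{x}) \mid \mathbf{Z} \right]
```

Here the expectation is taken over the posterior of x given the data Z, and we choose the estimate that minimises the argument.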
7
Evaluating...
(Equation not reproduced: expanding the expectation using probability theory leads to a very important thing.)
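The "very important thing" is the standard MMSE result (reconstructed here in the usual form): differentiating the expected squared error with respect to the estimate and setting it to zero gives the conditional mean.

```latex
\frac{\partial}{\partial \hat{x}} E\left[ (x - \hat{x})^{T}(x - \hat{x}) \mid \mathbf{Z} \right] = 0
\quad \Longrightarrow \quad
\hat{x}_{\mathrm{MMSE}} = E\left[ x \mid \mathbf{Z} \right]
```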
8
Recursive Bayesian Estimation
Key idea: one man's posterior is another's prior :-)
We have a sequence of data (measurements) Zk, and we want the conditional mean (MMSE estimate) of x given Zk. Can we calculate this iteratively, i.e. update our estimate every time a new measurement comes in?
9
Yes
(Equation not reproduced: the posterior at time k is built from the posterior at time k-1 and a likelihood term that explains the data at time k as a function of x at time k.)
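In the usual notation (an assumed reconstruction of the slide's recursion), the recursive Bayes update is:

```latex
p(x \mid Z^{k}) = \frac{p(z_{k} \mid x)\, p(x \mid Z^{k-1})}{p(z_{k} \mid Z^{k-1})}
```

The likelihood p(zk | x) explains the data at time k as a function of x, and the posterior from time k-1 plays the role of the prior at time k.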
10
And if these distributions are Gaussian, turning the handle (see supporting material) leads to the Kalman filter.
11
Kalman Filtering
  • Ubiquitous estimation tool
  • Simple to implement
  • Closely related to Bayes estimation and MMSE
  • Immensely Popular in robotics
  • Real time
  • Recursive (can add data sequentially)
  • It maintains the sufficient statistics (mean and covariance) of a multidimensional Gaussian PDF

It is not that complicated! (trust me)
12
Overall Goal
To come up with a recursive algorithm that
produces an estimate of state by processing data
from a set of explainable measurements and
incorporating some kind of plant model
(Diagram: sensors H1, H2, ..., Hn observe the true underlying state x through their measurement models; the KF fuses these measurements with the prediction/plant model to produce the estimate.)
13
Covariance is..
The multi-dimensional analogue of variance.
(Figure: a Gaussian PDF centred on its mean.)
P is a symmetric matrix that describes a 1-standard-deviation contour (an ellipsoid in 3D) of the PDF.
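For reference, the usual definition of the covariance being described (standard form, not copied from the slide):

```latex
P = E\left[ (x - \hat{x})(x - \hat{x})^{T} \right]
```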
14
The i|j notation
(Notation: the 'true' state, the 'estimated' state, and the data up to time tj.)
This is useful for derivations, but we can never use it in a calculation as x is the unknown truth!
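A reconstruction of the i|j notation in its usual form (assumed, consistent with the slide's labels 'true', 'estimated' and 'data up to tj'):

```latex
\hat{x}_{i|j} = E\left[ x_{i} \mid Z^{j} \right], \qquad
\tilde{x}_{i|j} = x_{i} - \hat{x}_{i|j}, \qquad
P_{i|j} = E\left[ \tilde{x}_{i|j}\, \tilde{x}_{i|j}^{T} \right]
```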
15
The Basics
We'll use these equations as a starting point. I have supplied a full derivation in the support presentation and notes; think of a KF as an off-the-shelf estimation tool.
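The starting equations are an image in the original slides; the standard linear Kalman filter prediction/update pair they refer to (reconstructed in the usual notation, with F the plant model, H the measurement model, and Q, R the noise strengths) is:

```latex
\begin{aligned}
\hat{x}_{k|k-1} &= F \hat{x}_{k-1|k-1} + B u_{k}, &
P_{k|k-1} &= F P_{k-1|k-1} F^{T} + Q \\
\nu_{k} &= z_{k} - H \hat{x}_{k|k-1}, &
S_{k} &= H P_{k|k-1} H^{T} + R \\
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + W_{k} \nu_{k}, &
P_{k|k} &= P_{k|k-1} - W_{k} S_{k} W_{k}^{T}, \quad W_{k} = P_{k|k-1} H^{T} S_{k}^{-1}
\end{aligned}
```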
16
(No Transcript)
17
Crucial Characteristics
  • Asynchronicity
  • Prediction Covariance Inflation
  • Update Covariance Deflation
  • Observability
  • Correlations

18
Nonlinear Kalman Filtering
  • Same trick as in non-linear least squares
  • Linearise around the current estimate using Jacobians
  • The problem becomes linear again

A complete derivation is in the notes, but...
19
Recalculate Jacobians at each iteration
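The EKF equations themselves are on the (image) slides; in their usual form (a reconstruction, with f and h the non-linear plant and observation models, and the nabla terms their Jacobians) they are:

```latex
\begin{aligned}
\hat{x}_{k|k-1} &= f(\hat{x}_{k-1|k-1}, u_{k}), &
P_{k|k-1} &= \nabla F_{x} P_{k-1|k-1} \nabla F_{x}^{T} + \nabla F_{u} U \nabla F_{u}^{T} \\
\nu_{k} &= z_{k} - h(\hat{x}_{k|k-1}), &
S_{k} &= \nabla H P_{k|k-1} \nabla H^{T} + R \\
\hat{x}_{k|k} &= \hat{x}_{k|k-1} + W_{k} \nu_{k}, &
P_{k|k} &= P_{k|k-1} - W_{k} S_{k} W_{k}^{T}, \quad W_{k} = P_{k|k-1} \nabla H^{T} S_{k}^{-1}
\end{aligned}
```

The Jacobians nabla-Fx, nabla-Fu and nabla-H are recalculated at every iteration, evaluated at the latest estimate.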
20
Using The EKF in Navigation
21
Vehicle Models - Prediction
(Equation not reproduced: the truth model propagates the vehicle state forward under a control input.)
22
The noise is in the control.
23
Effect of control noise on uncertainty
24
Using Dead-Reckoned Data
25
Navigation Architecture
26
Background T-Composition
Compounding transformations
27
Just functions!
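The compounding operation itself is not reproduced in the transcript; for planar poses written as x = (x, y, theta), its usual form (as in Smith, Self and Cheeseman) and its inverse are:

```latex
\mathbf{x}_{1} \oplus \mathbf{x}_{2} =
\begin{bmatrix}
x_{1} + x_{2}\cos\theta_{1} - y_{2}\sin\theta_{1} \\
y_{1} + x_{2}\sin\theta_{1} + y_{2}\cos\theta_{1} \\
\theta_{1} + \theta_{2}
\end{bmatrix},
\qquad
\ominus\mathbf{x} =
\begin{bmatrix}
-x\cos\theta - y\sin\theta \\
x\sin\theta - y\cos\theta \\
-\theta
\end{bmatrix}
```

They really are just functions, and J1 and J2 later in the talk are their Jacobians with respect to the first and second arguments.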
28
Deduce an Incremental Move
These (the raw dead-reckoned poses) can be in massive error, but the common error is subtracted out here.
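One common way of writing the deduced incremental move from two successive dead-reckoned poses (an assumed reconstruction, consistent with the comment above about the common error cancelling) is:

```latex
u_{o}(k) = \ominus\,\mathbf{x}_{o}(k-1) \oplus \mathbf{x}_{o}(k)
```

Both dead-reckoned poses can carry a large global error, but the error common to both largely cancels when forming the small relative move.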
29
Use this move as a control
Substitution into the prediction equation (using J1 and J2 as Jacobians of the composition), with a diagonal covariance matrix (3x3) for the error in uo.
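Putting the pieces together, the prediction step this slide refers to takes the usual composed form (a reconstruction, not copied from the slide):

```latex
\hat{x}_{v}(k|k-1) = \hat{x}_{v}(k-1|k-1) \oplus u_{o}(k),
\qquad
P_{v}(k|k-1) = J_{1} P_{v}(k-1|k-1) J_{1}^{T} + J_{2}\, U_{o}\, J_{2}^{T}
```

where Uo is the diagonal 3x3 covariance of the error in uo.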
30
Feature Based Mapping and Navigation
Look at the code!!
31
Mapping vs Localisation
32
Problem Space
33
Problem Geometry
34
Landmarks / Features
Things that stand out to a sensor: corners, windows, walls, bright patches, texture...
(Figure: a map containing a point feature labelled i.)
35
Observations / Measurements
  • Relative (on-vehicle sensing of the environment)
    • Radar
    • Cameras
    • Odometry (really)
    • Sonar / Laser
  • Absolute (relies on infrastructure)
    • GPS
    • Compass

How smart can we be with relative-only measurements?
36
And once again
It is all about probability
37
From Bayes Rule..
The input is the likelihood of the measurements (data) conditioned on the map and vehicle. We want to use Bayes' rule to invert this and get maps and vehicles given measurements.
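In symbols (a reconstruction in the usual notation, with xv the vehicle, M the map and Zk the data), the inversion being described is:

```latex
p(x_{v}, M \mid Z^{k}) \;\propto\; p(z_{k} \mid x_{v}, M)\; p(x_{v}, M \mid Z^{k-1})
```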
38
Problem 1 - Localisation
(Note: remove the line p(.) = 1 from the notes - it is a mistake.)
39
We can use a KF for this!
Plant Model: remember u is the control, the Js are a fancy way of writing Jacobians (of the composition operator), and Q is the strength of the noise in the plant model.
40
Processing Data
41
Implementation
No features seen here
42
Location Covariance
43
Location Innovation
44
Problem II - Mapping
With a known vehicle location, the state vector is the map.
45
But how is the map built?
Key point: the state vector GROWS!
(Equation not reproduced: the new, bigger map is formed from the old map plus an observation of the new feature - state augmentation.)
46
How is P augmented?
Simple! Use the transformation of covariance
rule..
G is the feature initialisation function
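Written out in a generic form (an assumption, using the slide's names y for the whole augmentation function and g for the feature initialisation), the transformation-of-covariance rule gives:

```latex
x' = y(x, z) = \begin{bmatrix} x \\ g(x_{v}, z) \end{bmatrix},
\qquad
P' = \nabla Y \begin{bmatrix} P & 0 \\ 0 & R \end{bmatrix} \nabla Y^{T},
\qquad
\nabla Y = \begin{bmatrix} I & 0 \\ G_{x} & G_{z} \end{bmatrix}
```

where Gx and Gz are the Jacobians of g with respect to the state and the observation, and R is the observation noise covariance.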
47
Leading to...
(Equation not reproduced: the new feature position in terms of the measured range, the angle from the vehicle to the feature, and the vehicle orientation.)
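For a range-bearing sensor the initialisation function g typically looks like the following (a standard form assumed here, matching the slide's callouts), with vehicle pose (xv, yv, theta_v), measured range r and measured bearing phi:

```latex
g(x_{v}, z) =
\begin{bmatrix}
x_{v} + r\cos(\theta_{v} + \phi) \\
y_{v} + r\sin(\theta_{v} + \phi)
\end{bmatrix},
\qquad
z = \begin{bmatrix} r \\ \phi \end{bmatrix}
```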
48
So what are models h and f?
h is a function of the feature being observed
f is simply the identity transformation
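Correspondingly, a typical range-bearing observation model h for the i-th feature (again a standard reconstruction, not copied from the slide) is:

```latex
h_{i}(x) =
\begin{bmatrix}
\sqrt{(x_{i} - x_{v})^{2} + (y_{i} - y_{v})^{2}} \\
\operatorname{atan2}(y_{i} - y_{v},\, x_{i} - x_{v}) - \theta_{v}
\end{bmatrix}
```

while f for the feature states is simply the identity: features do not move.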
49
Turn the handle on the EKF
All hail the Oracle! How do we know what feature we are observing?
50
Problem III - SLAM
Build a map and use it at the same time.
This is a cornerstone of autonomy.
51
Bayesian Framework
52
How is that sum evaluated?
  • A current area of interest/debate
  • Monte Carlo methods
  • Thin junction trees
  • Grid-based techniques
  • Kalman Filter
  • All have their individual pros and cons
  • All try to estimate p(xk | Zk) - the state of the world given the data

53
Naïve SLAM
A union of Localisation and Mapping
State vector has vehicle AND map
Why naïve? Computation!
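The joint state and covariance the slide refers to have the usual stacked structure (a reconstruction):

```latex
x = \begin{bmatrix} x_{v} \\ x_{f_{1}} \\ \vdots \\ x_{f_{n}} \end{bmatrix},
\qquad
P = \begin{bmatrix}
P_{vv} & P_{v f_{1}} & \cdots & P_{v f_{n}} \\
P_{f_{1} v} & P_{f_{1} f_{1}} & \cdots & P_{f_{1} f_{n}} \\
\vdots & \vdots & \ddots & \vdots \\
P_{f_{n} v} & P_{f_{n} f_{1}} & \cdots & P_{f_{n} f_{n}}
\end{bmatrix}
```

It is the off-diagonal cross-covariances that make the computation grow - and, as the later slides show, they are also what makes SLAM work.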
54
Prediction
Note: the control is noisy, u = u_nominal + noise.
Note: features stay still - no noise is added and their Jacobian is the identity.
55
Feature Initialisation
This whole function is y(.) from the discussion of state augmentation in the mapping section; these last two lines are g(). This is our new, expanded covariance.
56
EKF SLAM DEMO
Look at the code provided!!
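The demo itself is the supplied Matlab code; as an orientation aid only, here is a minimal Python sketch of the same loop structure (the helper names predict, observe_feature and augment are hypothetical - this is not the provided implementation):

```python
import numpy as np

def ekf_slam_step(x, P, u, U, observations, models):
    """One cycle of naive EKF SLAM: predict, update per matched feature, augment new ones.

    x, P         : joint state (vehicle + features) and covariance
    u, U         : odometry-derived control and its covariance
    observations : list of (z, R, feature_id or None) tuples
    models       : object providing the plant/observation models (assumed helpers)
    """
    # Prediction: only the vehicle part of the state moves; features are static.
    x, P = models.predict(x, P, u, U)

    for z, R, fid in observations:
        if fid is None:
            # New feature: grow the state vector and covariance (state augmentation).
            x, P = models.augment(x, P, z, R)
            continue
        # Update: innovation, innovation covariance, gain, then correct state and covariance.
        z_pred, H = models.observe_feature(x, fid)   # h(x) and its Jacobian
        nu = z - z_pred
        S = H @ P @ H.T + R
        W = P @ H.T @ np.linalg.inv(S)
        x = x + W @ nu
        P = P - W @ S @ W.T
    return x, P
```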
57
(No Transcript)
58
Laser Sensing
  • Fast
  • Simple
  • Quantisation Errors

59
Extruded Museum
60
SLAM in action
At MIT - in collaboration with J. Leonard, J.
Tardos and J. Neira
61
Human Driven Exploration
62
Navigating
63
Autonomous Homing
64
It's not a simulation.
Homing
Homing - Final Adjustment
High Expectations of Students
65
(No Transcript)
66
The Convergence and Stability of SLAM
By analysing the behaviour of the LG-KF we can learn about the governing properties of the SLAM problem, which are actually completely intuitive.
67
We can show that
  • The determinant of any submatrix of the map
    covariance matrix decreases monotonically as
    observations are successively made.
  • In the limit as the number of observations
    increases, the landmark estimates become fully
    correlated.
  • In the limit, the covariance associated with any
    single landmark location estimate is determined
    only by the initial covariance in the vehicle
    location estimate.

68
(No Transcript)
69
(No Transcript)
70
(No Transcript)
71
Prediction
72
Observation
73
Update
74
(No Transcript)
75
(No Transcript)
76
Proofs Condensed (9)
77
Take home points
  • The entire structure of the SLAM problem
    critically depends on maintaining complete
    knowledge of the cross correlation between
    landmark estimates. Minimizing or ignoring cross
    correlations is precisely contrary to the
    structure of the problem.
  • As the vehicle progresses through the environment
    the errors in the estimates of any pair of
    landmarks become more and more correlated, and
    indeed never become less correlated.
  • In the limit, the errors in the estimates of any
    pair of landmarks become fully correlated. This
    means that given the exact location of any one
    landmark, the location of any other landmark in
    the map can also be determined with absolute
    certainty.
  • As the vehicle moves through the environment
    taking observations of individual landmarks, the
    error in the estimates of the relative location
    between different landmarks reduces monotonically
    to the point where the map of relative locations
    is known with absolute precision.
  • As the map converges in the above manner, the
    error in the absolute location of every landmark
    (and thus the whole map) reaches a lower bound
    determined only by the error that existed when
    the first observation was made.

(We didn't prove this here. However, it is an excellent test for consistency in new SLAM algorithms.)
This is all under the assumption that we observe all features equally often; for other cases see Kim Sun-Joon, PhD thesis, MIT, 2004.
78
Issues
79
Data Association - a big problem
  • How do we decide which feature (if any) is being observed?
  • How do we close loops? Non-trivial.
  • Jose Neira will talk to you about this, but a naïve approach is simply to look through all features and take the one for which the normalised innovation squared, νᵀ S⁻¹ ν, is smallest and less than a threshold e (chosen, it turns out, from a Chi-squared distribution).
  • If no feature passes that threshold, we introduce a new feature into the map. (A sketch of this gating test follows below.)
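A minimal sketch of that nearest-neighbour gating test in Python (hypothetical names; the gate value would come from a chi-squared table for the measurement dimension):

```python
import numpy as np

def associate(z, R, predicted):
    """Nearest-neighbour data association by normalised innovation squared.

    z, R      : measurement and its noise covariance
    predicted : list of (feature_id, z_pred, H, P) for every mapped feature
    Returns the best feature_id, or None if every candidate fails the gate
    (in which case the caller would initialise a new feature).
    """
    gate = 9.21          # e.g. chi-squared 99% bound for a 2D measurement
    best_id, best_d2 = None, np.inf
    for fid, z_pred, H, P in predicted:
        nu = z - z_pred                              # innovation
        S = H @ P @ H.T + R                          # innovation covariance
        d2 = float(nu.T @ np.linalg.inv(S) @ nu)     # Mahalanobis distance squared
        if d2 < best_d2:
            best_id, best_d2 = fid, d2
    return best_id if best_d2 <= gate else None
```

If none of the mapped features passes the gate, the measurement is used to initialise a new feature, as in the last bullet above.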

80
The Problem with Single Frame EKF SLAM
  • It is uni-modal. It cannot cope with ambiguous situations.
  • It is inconsistent - the linearisations lead to errors which underestimate the covariance of the underlying pdf.
  • It is fragile - if the estimate is in error, the linearisation is very poor: disaster.
  • But the biggest problem is...

81
SCALING.
The Smith, Self and Cheeseman KF solution scales with the square of the number of mapped things.
Why quadratic? Because everything is correlated to everything else: there are roughly 0.5 N² correlations to maintain in P.
Autonomy + unknown terrain + long mission durations?
=> We need sustainable SLAM with O(1) complexity
82
Closing thoughts
  • An EKF is a great way to learn about SLAM and the bounds on achievable performance.
  • EKFs are easy to implement.
  • They work fine for small workspaces.
  • But they do have downsides, e.g. they are uni-modal and brittle, and they scale badly.
  • In upcoming talks you'll be told much more about map scaling and data-association issues. Try to locate these issues in this opening talk - even better, come face to face with them by using the example code!
  • Many thanks for your time.
  • PMN