Title: SLAM Summer School 2004
Slide 1: SLAM Summer School 2004
- An Introduction to SLAM Using an EKF
- Paul Newman
- Oxford University Robotics Research Group
Slide 2: A note to students
- The lecture I give will not include all these slides. Some of them, and some of the notes I have supplied, are more detailed than required and would take too long to deliver. I have included them for completeness and background, for example the derivation of the Kalman filter from Bayes' rule.
- I have included in the package a working Matlab implementation of EKF-based SLAM. You should be able to see all the properties of SLAM at work, and be able to modify it at your leisure (without having to worry about the awkwardness of a real system to start with).
- I cannot cover all I would like to in the time available; where applicable, to fill gaps, I forward-reference other talks that will be given during the week. I hope the talk, the slides and the notes will whet your appetite for what I reckon is a great area of research.
- Above all, please, please ask me to explain anything that is unclear. This school is about you learning, not us lecturing.
- Regards, Paul Newman
Slide 3: Overview
- The Kalman filter was the first tool employed in SLAM (Smith, Self and Cheeseman).
- Linear KFs implement Bayes' rule. No hokiness.
- We can analyse KF properties easily and learn interesting things about Bayesian SLAM.
- The vanilla, monolithic KF-SLAM formulation is a fine tool for small local areas.
- But we can do better for large areas, as other speakers will mention.
Slide 4: 5 Minutes on Estimation
Slide 5: Estimation is...
[Diagram: an Estimation Engine takes Data and Prior Beliefs as inputs and produces an Estimate.]
Slide 6: Minimum Mean Squared Error Estimation
Choose the estimate so that the expected squared error is minimised; E is the expectation operator (an average). The criterion, shown as an image in the original, is reconstructed below.
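A standard reconstruction of the MMSE criterion this slide annotates:
\[
\hat{\mathbf{x}}_{\mathrm{MMSE}} = \operatorname*{argmin}_{\hat{\mathbf{x}}}\; E\!\left[(\mathbf{x}-\hat{\mathbf{x}})^{\top}(\mathbf{x}-\hat{\mathbf{x}}) \,\middle|\, \mathbf{Z}\right]
\]
We choose the estimate so the argument of the expectation, conditioned on the data Z, is minimised.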
Slide 7: Evaluating...
From probability theory, evaluating this expectation yields a Very Important Thing:
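The slide's equations are images in the original; the standard evaluation, consistent with slide 8's statement that we want the conditional mean, is:
\[
\frac{\partial}{\partial \hat{\mathbf{x}}}\, E\!\left[(\mathbf{x}-\hat{\mathbf{x}})^{\top}(\mathbf{x}-\hat{\mathbf{x}}) \mid \mathbf{Z}\right]
= -2\,E[\mathbf{x} \mid \mathbf{Z}] + 2\hat{\mathbf{x}} = \mathbf{0}
\quad\Longrightarrow\quad
\hat{\mathbf{x}}_{\mathrm{MMSE}} = E[\mathbf{x} \mid \mathbf{Z}]
\]
The MMSE estimate is the conditional mean of x given the data.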
Slide 8: Recursive Bayesian Estimation
Key idea: one man's posterior is another's prior :-)
Z^k is the sequence of data (measurements) up to time k. We want the conditional mean (the MMSE estimate) of x given Z^k. Can we calculate this iteratively, i.e. update our estimate every time a new measurement comes in?
Slide 9: Yes!
[The slide relates the posterior at time k to the posterior at time k-1, through a term that explains the data at time k as a function of x at time k; reconstructed below.]
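A reconstruction of that recursion in standard notation, with Z^k the measurement sequence of slide 8:
\[
p(\mathbf{x} \mid \mathbf{Z}^k) = \frac{p(\mathbf{z}_k \mid \mathbf{x})\; p(\mathbf{x} \mid \mathbf{Z}^{k-1})}{p(\mathbf{z}_k \mid \mathbf{Z}^{k-1})}
\]
The likelihood p(z_k | x) explains the data at time k as a function of x; p(x | Z^{k-1}) is the posterior from time k-1, reused as the prior at time k.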
Slide 10: And if these distributions are Gaussian, turning the handle (see the supporting material) leads to the Kalman filter.
Slide 11: Kalman Filtering
- Ubiquitous estimation tool
- Simple to implement
- Closely related to Bayes estimation and MMSE
- Immensely popular in robotics
- Real-time
- Recursive (can add data sequentially)
- Maintains the sufficient statistics of a multidimensional Gaussian PDF
It is not that complicated! (trust me)
Slide 12: Overall Goal
To come up with a recursive algorithm that produces an estimate of state by processing data from a set of explainable measurements and incorporating some kind of plant model.
[Diagram: sensors H1, H2, ..., Hn observe the true underlying state x; their outputs, explained by the measurement model, feed the KF together with a prediction from the plant model, and the KF produces the estimate.]
Slide 13: Covariance is...
The multi-dimensional analogue of variance. P is a symmetric matrix that describes a 1-standard-deviation contour of the PDF (an ellipse in 2D, an ellipsoid in 3D), centred on the mean.
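Not from the course package: a small Matlab sketch (variable names mine) that draws this contour by mapping the unit circle through a matrix square root of P:

  % Draw the 1-standard-deviation contour of a 2D Gaussian PDF.
  m = [1; 2];                          % mean
  P = [4 1.5; 1.5 1];                  % symmetric covariance matrix
  t = linspace(0, 2*pi, 100);
  unit_circle = [cos(t); sin(t)];
  ellipse = sqrtm(P) * unit_circle + repmat(m, 1, numel(t));
  plot(ellipse(1,:), ellipse(2,:), '-', m(1), m(2), '+');
  axis equal;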
Slide 14: The i|j notation
x(i) is the true state at time t_i; x̂(i|j) is its estimate given data up to time t_j; the error is x̃(i|j) = x(i) - x̂(i|j). This is useful for derivations, but we can never use it in a calculation, as x is the unknown truth!
Slide 15: The Basics
We'll use these equations as a starting point. I have supplied a full derivation in the support presentation and notes; think of a KF as an off-the-shelf estimation tool.
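The equations referred to (images in the original) are the standard linear KF cycle; a reconstruction in the i|j notation of slide 14, with W the gain and ν the innovation:
\[
\begin{aligned}
\hat{\mathbf{x}}(k|k-1) &= F\,\hat{\mathbf{x}}(k-1|k-1) + B\,\mathbf{u}(k), &
P(k|k-1) &= F\,P(k-1|k-1)\,F^{\top} + Q \\
\boldsymbol{\nu}(k) &= \mathbf{z}(k) - H\,\hat{\mathbf{x}}(k|k-1), &
S(k) &= H\,P(k|k-1)\,H^{\top} + R \\
\hat{\mathbf{x}}(k|k) &= \hat{\mathbf{x}}(k|k-1) + W(k)\,\boldsymbol{\nu}(k), &
W(k) &= P(k|k-1)\,H^{\top} S(k)^{-1} \\
P(k|k) &= P(k|k-1) - W(k)\,S(k)\,W(k)^{\top} &&
\end{aligned}
\]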
Slide 17: Crucial Characteristics
- Asynchronicity
- Prediction: covariance inflation
- Update: covariance deflation
- Observability
- Correlations
Slide 18: Nonlinear Kalman Filtering
- Same trick as in nonlinear least squares
- Linearise around the current estimate using Jacobians
- The problem becomes linear again
The complete derivation is in the notes, but...
Slide 19: Recalculate Jacobians at each iteration
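In equations (a standard reconstruction, not the exact slide; ∇F_x and ∇F_u are the Jacobians of f with respect to state and control, evaluated at the current estimate):
\[
\hat{\mathbf{x}}(k|k-1) = \mathbf{f}(\hat{\mathbf{x}}(k-1|k-1), \mathbf{u}(k)), \qquad
P(k|k-1) = \nabla F_x\, P(k-1|k-1)\, \nabla F_x^{\top} + \nabla F_u\, Q\, \nabla F_u^{\top}
\]
In the update, H is replaced by ∇H_x, the Jacobian of h evaluated at x̂(k|k-1). Because the Jacobians depend on the estimate they are linearised about, they must be recalculated at every iteration.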
Slide 20: Using the EKF in Navigation
Slide 21: Vehicle Models - Prediction
[The slide shows the truth model: the next true state is a function of the current true state and the (noisy) control.]
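The package defines its own truth model; as an illustrative stand-in (names mine), a common planar model with pose [x; y; theta] driven by a distance-and-turn control:

  % One step of a simple planar vehicle model.
  function xnew = vehicle_model(x, u)
    % x = [x; y; theta] pose, u = [d; dtheta] control over one step
    xnew = [x(1) + u(1)*cos(x(3));
            x(2) + u(1)*sin(x(3));
            x(3) + u(2)];
  end

In the truth model the control u would carry additive noise, as slide 22 notes.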
Slide 22: Noise is in the control...
Slide 23: Effect of control noise on uncertainty
Slide 24: Using Dead-Reckoned Data
Slide 25: Navigation Architecture
Slide 26: Background: T-Composition
Compounding transformations.
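The compounding operations (images in the original) are the standard Smith, Self and Cheeseman ones; for planar transformations x = [x, y, θ]ᵀ:
\[
\mathbf{x}_1 \oplus \mathbf{x}_2 =
\begin{bmatrix}
x_1 + x_2\cos\theta_1 - y_2\sin\theta_1 \\
y_1 + x_2\sin\theta_1 + y_2\cos\theta_1 \\
\theta_1 + \theta_2
\end{bmatrix},
\qquad
\ominus\mathbf{x}_1 =
\begin{bmatrix}
-x_1\cos\theta_1 - y_1\sin\theta_1 \\
\phantom{-}x_1\sin\theta_1 - y_1\cos\theta_1 \\
-\theta_1
\end{bmatrix}
\]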
Slide 27: Just functions!
Slide 28: Deduce an Incremental Move
The raw dead-reckoned poses can be in massive error, but the common error is subtracted out when we form the small incremental move between consecutive poses.
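A likely reconstruction of the equation on this slide, given that slide 29 then uses u_o as the control: the incremental move is the new dead-reckoned pose expressed in the frame of the previous one, so any shared (global) error cancels:
\[
\mathbf{u}_o(k) = \ominus\,\mathbf{x}_o(k-1) \oplus \mathbf{x}_o(k)
\]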
Slide 29: Use this move as a control
Substitute it into the prediction equation (using J1 and J2 as Jacobians), with a diagonal 3x3 covariance matrix describing the error in u_o.
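Putting the pieces together, a standard reconstruction of the prediction step described here:
\[
\hat{\mathbf{x}}(k|k-1) = \hat{\mathbf{x}}(k-1|k-1) \oplus \mathbf{u}_o(k), \qquad
P(k|k-1) = J_1\, P(k-1|k-1)\, J_1^{\top} + J_2\, U_o\, J_2^{\top}
\]
where J_1 and J_2 are the Jacobians of ⊕ with respect to its first and second arguments, and U_o is the diagonal 3x3 covariance of the error in u_o.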
Slide 30: Feature-Based Mapping and Navigation
Look at the code!!
Slide 31: Mapping vs Localisation
Slide 32: Problem Space
Slide 33: Problem Geometry
Slide 34: Landmarks / Features
Things that stand out to a sensor: corners, windows, walls, bright patches, texture...
[Figure: a map containing point features; one point feature is labelled i.]
Slide 35: Observations / Measurements
- Relative (on-vehicle sensing of the environment)
  - Radar
  - Cameras
  - Odometry (really)
  - Sonar
  - Laser
- Absolute (relies on infrastructure)
  - GPS
  - Compass
How smart can we be with relative-only measurements?
Slide 36: And once again...
It is all about probability
Slide 37: From Bayes' Rule...
The input is the likelihood of the measurements (the data) conditioned on the map and the vehicle. We want to use Bayes' rule to invert this and get maps and vehicles given measurements.
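In symbols (a reconstruction; x_v is the vehicle, M the map):
\[
p(\mathbf{x}_v, \mathbf{M} \mid \mathbf{Z}^k) \;\propto\; p(\mathbf{Z}^k \mid \mathbf{x}_v, \mathbf{M})\; p(\mathbf{x}_v, \mathbf{M})
\]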
Slide 38: Problem I - Localisation
(Erratum: remove the line "p(.) = 1" from the notes; it is a mistake.)
Slide 39: We can use a KF for this!
Plant model: remember that u is the control, the Js are a fancy way of writing the Jacobians (of the composition operator), and Q is the strength of the noise in the plant model.
Slide 40: Processing Data
[Figure: processing an observation with measured range r.]
Slide 41: Implementation
No features seen here
Slide 42: Location Covariance
Slide 43: Location Innovation
Slide 44: Problem II - Mapping
With a known vehicle location, the state vector to be estimated is the map itself.
Slide 45: But how is the map built?
Key point: the state vector GROWS! State augmentation: the new, bigger map is built from the old map and the observation of a new feature.
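In equation form (a reconstruction using the names y and g that slides 46 and 55 attach to these functions):
\[
\mathbf{x}^{\mathrm{new}} = \mathbf{y}(\mathbf{x}^{\mathrm{old}}, \mathbf{z}) =
\begin{bmatrix}
\mathbf{x}^{\mathrm{old}} \\
\mathbf{g}(\mathbf{x}^{\mathrm{old}}, \mathbf{z})
\end{bmatrix}
\]
where z is the observation of the new feature and g initialises its location.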
Slide 46: How is P augmented?
Simple! Use the transformation-of-covariance rule. G is the feature initialisation function.
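A standard reconstruction of the resulting block form (R here is my name for the observation noise covariance; ∇G_x and ∇G_z are the Jacobians of g with respect to the state and the observation):
\[
P^{\mathrm{new}} =
\begin{bmatrix}
P & P\,\nabla G_x^{\top} \\
\nabla G_x\, P & \nabla G_x\, P\, \nabla G_x^{\top} + \nabla G_z\, R\, \nabla G_z^{\top}
\end{bmatrix}
\]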
Slide 47: Leading to...
[The slide labels two terms of the resulting expression: the angle from the vehicle to the feature, and the vehicle orientation.]
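Given the two labels that survive from this slide, g is presumably the usual polar-to-Cartesian initialisation of a point feature from a range-bearing measurement z = [r; β]:
\[
\mathbf{g} =
\begin{bmatrix}
x_v + r\cos(\theta_v + \beta) \\
y_v + r\sin(\theta_v + \beta)
\end{bmatrix}
\]
Here β is the angle from the vehicle to the feature and θ_v is the vehicle orientation.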
Slide 48: So what are the models h and f?
h is a function of the feature being observed; f is simply the identity transformation (the features do not move).
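A standard range-bearing observation model consistent with this (a reconstruction, not necessarily the exact slide): for an observed feature i at (x_i, y_i),
\[
\mathbf{h}_i =
\begin{bmatrix}
\sqrt{(x_i - x_v)^2 + (y_i - y_v)^2} \\
\arctan\!\dfrac{y_i - y_v}{x_i - x_v} \;-\; \theta_v
\end{bmatrix}
\]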
Slide 49: Turn the handle on the EKF
All hail the Oracle! How do we know what feature we are observing?
Slide 50: Problem III - SLAM
Build a map and use it at the same time. This is a cornerstone of autonomy.
Slide 51: Bayesian Framework
Slide 52: How is that sum evaluated?
- A current area of interest/debate:
  - Monte Carlo methods
  - Thin junction trees
  - Grid-based techniques
  - Kalman filters
- All have their individual pros and cons
- All try to estimate p(x_k | Z^k), the state of the world given the data
Slide 53: Naïve SLAM
A union of Localisation and Mapping
State vector has vehicle AND map
Why naïve? Computation!
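In symbols, the joint state and its full covariance (a standard reconstruction):
\[
\mathbf{x} =
\begin{bmatrix}
\mathbf{x}_v \\ \mathbf{x}_{f_1} \\ \vdots \\ \mathbf{x}_{f_n}
\end{bmatrix},
\qquad
P =
\begin{bmatrix}
P_{vv} & P_{vm} \\
P_{mv} & P_{mm}
\end{bmatrix}
\]
It is the dense off-diagonal blocks (the correlations) that make this expensive; see slide 81.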
Slide 54: Prediction
Note: the control is noisy, u = u_nominal + noise.
Note: the features stay still, so no noise is added to them and their Jacobian is the identity.
Slide 55: Feature Initialisation
This whole function is y(.) from the discussion of state augmentation in the mapping section; its last two lines are g(). The result is our new, expanded covariance.
Slide 56: EKF SLAM Demo
Look at the code provided!!
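The demo code itself ships with the package; the sketch below is my own minimal single predict-observe-update cycle under stated assumptions: pose [x; y; theta], 2D point features stacked after it, control u = [d; dtheta] with covariance Uo, and a range-bearing observation z = [r; beta] of feature i with noise covariance R and known data association.

  % Not the course code: one EKF-SLAM cycle, association assumed known.
  % State x = [xv; yv; thetav; x1; y1; x2; y2; ...], covariance P.

  % --- Prediction: the vehicle moves, features stay still ---
  n  = numel(x);
  xv = x(1:3);
  Fx = eye(n);
  Fx(1:3,1:3) = [1, 0, -u(1)*sin(xv(3));
                 0, 1,  u(1)*cos(xv(3));
                 0, 0,  1];                    % Jacobian w.r.t. vehicle pose
  Gu = [cos(xv(3)), 0;
        sin(xv(3)), 0;
        0,          1];                        % Jacobian w.r.t. control
  x(1:3) = [xv(1) + u(1)*cos(xv(3));
            xv(2) + u(1)*sin(xv(3));
            xv(3) + u(2)];
  P = Fx*P*Fx';
  P(1:3,1:3) = P(1:3,1:3) + Gu*Uo*Gu';         % noise enters via the control

  % --- Update: range-bearing observation z = [r; beta] of feature i ---
  idx = 3 + 2*(i-1) + (1:2);                   % indices of feature i in x
  dx = x(idx(1)) - x(1);  dy = x(idx(2)) - x(2);
  r  = sqrt(dx^2 + dy^2);
  zpred = [r; atan2(dy,dx) - x(3)];            % observation model h
  H = zeros(2,n);
  H(:,1:3) = [-dx/r,   -dy/r,    0;
               dy/r^2, -dx/r^2, -1];
  H(:,idx) = [ dx/r,    dy/r;
              -dy/r^2,  dx/r^2];
  nu = z - zpred;
  nu(2) = atan2(sin(nu(2)), cos(nu(2)));       % wrap innovation angle
  S = H*P*H' + R;
  W = P*H'/S;                                  % Kalman gain
  x = x + W*nu;
  P = P - W*S*W';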
Slide 58: Laser Sensing
- Fast
- Simple
- Quantisation Errors
Slide 59: Extruded Museum
Slide 60: SLAM in action
At MIT - in collaboration with J. Leonard, J.
Tardos and J. Neira
Slide 61: Human-Driven Exploration
Slide 62: Navigating
Slide 63: Autonomous Homing
Slide 64: It's not a simulation...
[Image captions: Homing; Homing: Final Adjustment; High Expectations of Students]
Slide 66: The Convergence and Stability of SLAM
By analysing the behaviour of the linear-Gaussian KF we can learn about the governing properties of the SLAM problem, which are actually completely intuitive.
Slide 67: We can show that...
- The determinant of any submatrix of the map covariance matrix decreases monotonically as observations are successively made.
- In the limit, as the number of observations increases, the landmark estimates become fully correlated.
- In the limit, the covariance associated with any single landmark location estimate is determined only by the initial covariance in the vehicle location estimate.
Slide 71: Prediction
Slide 72: Observation
Slide 73: Update
Slide 76: Proofs Condensed
Slide 77: Take home points
- The entire structure of the SLAM problem critically depends on maintaining complete knowledge of the cross-correlations between landmark estimates. Minimising or ignoring cross-correlations is precisely contrary to the structure of the problem.
- As the vehicle progresses through the environment, the errors in the estimates of any pair of landmarks become more and more correlated, and indeed never become less correlated.
- In the limit, the errors in the estimates of any pair of landmarks become fully correlated. This means that given the exact location of any one landmark, the location of any other landmark in the map can also be determined with absolute certainty.
- As the vehicle moves through the environment taking observations of individual landmarks, the error in the estimates of the relative location between different landmarks reduces monotonically, to the point where the map of relative locations is known with absolute precision.
- As the map converges in the above manner, the error in the absolute location of every landmark (and thus the whole map) reaches a lower bound determined only by the error that existed when the first observation was made. (We didn't prove this here; however, it is an excellent test for consistency of new SLAM algorithms.)
This is all under the assumption that we observe all features equally often; for other cases see Kim Sun-Joon, PhD thesis, MIT, 2004.
Slide 78: Issues
Slide 79: Data Association: a big problem
- How do we decide which feature (if any) is being observed?
- How do we close loops? Non-trivial.
- Jose Neira will talk to you about this, but a naïve approach is simply to look through all the features and take the one for which the normalised innovation squared, ν^T S^{-1} ν, is smallest and less than a threshold e (chosen from a chi-squared distribution, it turns out); see the sketch after this list.
- If the distance exceeds e for every feature, we introduce a new feature into the map.
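Not from the package: a minimal Matlab sketch of this nearest-neighbour gate (variable names mine; each nu{i} and S{i} would be the EKF innovation and innovation covariance for candidate feature i):

  % Nearest-neighbour data association by chi-squared gating.
  % nu{i}: innovation for candidate feature i (2x1), S{i}: its covariance (2x2).
  % gate: threshold from a chi-squared table, e.g. 5.99 for 95% with 2 DOF.
  best = 0;  dmin = inf;
  for i = 1:numel(nu)
    d = nu{i}' / S{i} * nu{i};       % normalised innovation squared
    if d < dmin
      dmin = d;  best = i;
    end
  end
  if dmin > gate
    best = 0;                        % nothing matched: candidate new feature
  end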
Slide 80: The Problem with Single-Frame EKF SLAM
- It is uni-modal: it cannot cope with ambiguous situations.
- It is inconsistent: the linearisations lead to errors which underestimate the covariance of the underlying PDF.
- It is fragile: if the estimate is in error, the linearisation is very poor. Disaster.
- But the biggest problem is...
Slide 81: SCALING...
The Smith, Self and Cheeseman KF solution scales with the square of the number of mapped things. Why quadratic? Because everything is correlated to everything else: there are roughly 0.5 N^2 correlations to maintain in P.
Autonomy + unknown terrain + long mission durations => we need sustainable SLAM with O(1) complexity.
Slide 82: Closing thoughts
- An EKF is a great way to learn about SLAM and the bounds on achievable performance.
- EKFs are easy to implement.
- They work fine for small workspaces.
- But they do have downsides: e.g. they are uni-modal, brittle, and scale badly.
- In upcoming talks you'll be told much more about map scaling and data-association issues. Try to locate these issues in this opening talk; even better, come face to face with them by using the example code!
- Many thanks for your time.
- PMN