1. Advanced Mobile Robotics
Probabilistic Robotics (II)
- Dr. Jizhong Xiao
- Department of Electrical Engineering
- City College of New York
- jxiao_at_ccny.cuny.edu
2. Outline
- Localization Methods
- Markov Localization (review)
- Kalman Filter Localization
- Particle Filter (Monte Carlo Localization)
- Current Research Topics
- SLAM
- Multi-robot Localization
3. Robot Navigation
Fundamental problems in providing a mobile robot
with autonomous capabilities:
- Where am I going? (Mission Planning)
- What's the best way there? (Path Planning)
- Where have I been? How can the robot build an
environmental map with imperfect sensors? (Mapping)
- Where am I? How can the robot tell where it is
on a map? (Localization)
- What if you're lost and don't have a map? (Robot SLAM)
4. Representation of the Environment
- Environment Representation
- Continuous metric: x, y, θ
- Discrete metric: metric grid
- Discrete topological: topological grid
5. Localization Methods
- Markov Localization
- Central idea: represent the robot's belief by a
probability distribution over possible positions,
and use Bayes' rule and convolution to update the
belief whenever the robot senses or moves
- Markov assumption: past and future data are
independent if one knows the current state
- Kalman Filtering
- Central idea: pose the localization problem as a
sensor fusion problem
- Assumption: Gaussian distribution function
- Particle Filtering
- Monte Carlo method
- SLAM (simultaneous localization and mapping)
- Multi-robot localization
6. Probability Theory
Bayes formula:
$$P(x, y) = P(x \mid y)\,P(y) = P(y \mid x)\,P(x)
\;\Rightarrow\;
P(x \mid y) = \frac{P(y \mid x)\,P(x)}{P(y)}$$
Normalization: with $\eta = 1/P(y)$ this is often written
$P(x \mid y) = \eta\, P(y \mid x)\,P(x)$.
7. Probability Theory
Bayes rule with background knowledge $z$:
$$P(x \mid y, z) = \frac{P(y \mid x, z)\,P(x \mid z)}{P(y \mid z)}$$
Law of total probability:
$$P(x) = \sum_y P(x \mid y)\,P(y)$$
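To make these two rules concrete, here is a small numeric check in Python; the prior and sensor likelihoods are illustrative numbers (the classic "door" example from the probabilistic robotics literature), not values from these slides:

```python
# Illustrative Bayes-rule update: does sensing z make "door open" more likely?
p_open = 0.5                 # prior P(x)            (assumed)
p_z_given_open = 0.6         # sensor model P(z | open)   (assumed)
p_z_given_closed = 0.3       # sensor model P(z | closed) (assumed)

# Normalizer P(z) from the law of total probability
p_z = p_z_given_open * p_open + p_z_given_closed * (1 - p_open)

# Bayes formula: P(open | z) = P(z | open) P(open) / P(z)
p_open_given_z = p_z_given_open * p_open / p_z
print(p_open_given_z)        # 0.666...: the measurement raised the belief
```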
8. Bayes Filters: Framework
- Given
- Stream of observations z and action data u
- Sensor model P(z | x)
- Action model P(x | u, x')
- Prior probability of the system state P(x)
- Wanted
- Estimate of the state X of a dynamical system
- The posterior of the state is also called the belief:
$$Bel(x_t) = P(x_t \mid u_1, z_1, \ldots, u_t, z_t)$$
9. Markov Assumption
Measurement probability:
$$p(z_t \mid x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t \mid x_t)$$
State transition probability:
$$p(x_t \mid x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t \mid x_{t-1}, u_t)$$
- Markov assumption: past and future data are
independent if one knows the current state
- Underlying assumptions
- Static world, independent noise
- Perfect model, no approximation errors
10. Bayes Filters
(z = observation, u = action, x = state)
$$Bel(x_t) = \eta\, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1}$$
11. Bayes Filter Algorithm
- Algorithm Bayes_filter(Bel(x), d):
- η = 0
- If d is a perceptual data item z then
- For all x do
- Bel'(x) = P(z | x) Bel(x)
- η = η + Bel'(x)
- For all x do
- Bel'(x) = η⁻¹ Bel'(x)
- Else if d is an action data item u then
- For all x do
- Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
- Return Bel'(x)
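A minimal discrete-state sketch of this algorithm in Python; the representation (a dict over states) and the model callables are assumptions for illustration, not part of the slide:

```python
def bayes_filter(bel, d, kind, p_z_given_x=None, p_x_given_ux=None):
    """One Bayes filter update over a discrete state space.

    bel: dict mapping state x -> Bel(x)
    d: a perceptual item z (kind='z') or an action item u (kind='u')
    p_z_given_x(z, x): sensor model; p_x_given_ux(x, u, xp): action model
    """
    if kind == 'z':
        # Correction: weight each state by the measurement likelihood...
        new_bel = {x: p_z_given_x(d, x) * p for x, p in bel.items()}
        eta = sum(new_bel.values())
        # ...then normalize so the belief sums to 1
        return {x: p / eta for x, p in new_bel.items()}
    else:
        # Prediction: push the belief through the action model
        return {x: sum(p_x_given_ux(x, d, xp) * bel[xp] for xp in bel)
                for x in bel}
```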
12. Bayes Filters are Familiar!
All of these are instances or extensions of the Bayes filter:
- Kalman filters
- Particle filters
- Hidden Markov models
- Dynamic Bayesian networks
- Partially Observable Markov Decision Processes
(POMDPs)
13. Localization
Determining the pose of a robot relative to a
given map of the environment
- Given
- Map of the environment.
- Sequence of sensor measurements.
- Wanted
- Estimate of the robot's position.
- Problem classes
- Position tracking
- Global localization
- Kidnapped robot problem (recovery)
14. Markov Localization
- Algorithm Markov_localization(Bel(x_{t-1}), u_t, z_t, m):
- For all x_t do
- Bel'(x_t) = Σ_{x_{t-1}} P(x_t | u_t, x_{t-1}, m) Bel(x_{t-1})
- Bel(x_t) = η P(z_t | x_t, m) Bel'(x_t)
- Endfor
- Return Bel(x_t)
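Markov localization is this filter conditioned on a map. A toy run using the bayes_filter sketch above; the corridor, door positions, and model numbers are made up for illustration:

```python
# Hypothetical 1D corridor of 5 cells with doors at cells 0 and 3 (the "map").
doors = {0, 3}
bel = {x: 0.2 for x in range(5)}          # uniform prior belief

def p_z_given_x(z, x):                    # assumed sensor model: 80% correct
    return 0.8 if ((x in doors) == (z == 'door')) else 0.2

def p_x_given_ux(x, u, xp):               # assumed action model: may fail to move
    if x == (xp + u) % 5:
        return 0.9
    return 0.1 if x == xp else 0.0

bel = bayes_filter(bel, 'door', 'z', p_z_given_x=p_z_given_x)   # sense a door
bel = bayes_filter(bel, 1, 'u', p_x_given_ux=p_x_given_ux)      # move right
```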
16. Localization Methods
- Markov Localization
- Central idea: represent the robot's belief by a
probability distribution over possible positions,
and use Bayes' rule and convolution to update the
belief whenever the robot senses or moves
- Markov assumption: past and future data are
independent if one knows the current state
- Kalman Filtering
- Central idea: pose the localization problem as a
sensor fusion problem
- Assumption: Gaussian distribution function
- Particle Filtering
- Monte Carlo method
- SLAM (simultaneous localization and mapping)
- Multi-robot localization
17. Kalman Filter Localization
18. Introduction to Kalman Filter (1)
- Two measurements: $\hat{q}_1$ with variance $\sigma_1^2$, and $\hat{q}_2$ with variance $\sigma_2^2$
- Weighted least-squares: $S = \sum_i w_i\,(\hat{q} - q_i)^2$
- Finding the minimum error: set $\partial S / \partial \hat{q} = 0$
- After some calculation and rearrangement, taking the weights from
the Gaussian probability density function as $w_i = 1/\sigma_i^2$,
the best estimate for the robot position is
$$\hat{q} = \hat{q}_1 + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\,(\hat{q}_2 - \hat{q}_1)$$
19. Introduction to Kalman Filter (2)
- In Kalman filter notation:
$$\hat{x}_{k+1} = \hat{x}'_{k+1} + K_{k+1}\,(z_{k+1} - \hat{x}'_{k+1})$$
The best estimate $\hat{x}_{k+1}$ of the state at time k+1 is equal
to the best prediction $\hat{x}'_{k+1}$ of its value before the new
measurement $z_{k+1}$ is taken, plus a weighting $K_{k+1}$ times the
difference between $z_{k+1}$ and the best prediction at time k.
20. Introduction to Kalman Filter (3)
- Dynamic prediction (robot moving):
u = velocity, w = noise
- Motion model: $\dot{x} = u + w$, so between measurements
$$\hat{x}'_{k+1} = \hat{x}_k + u\,(t_{k+1} - t_k), \qquad
\sigma'^2_{k+1} = \sigma_k^2 + \sigma_w^2\,(t_{k+1} - t_k)$$
- Combining fusion and dynamic prediction yields the Kalman filter.
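Putting the two pieces together, a one-dimensional Kalman filter cycle might look like this in Python; the variable names and noise parameters are illustrative, not from the slides:

```python
def kalman_1d(x, var, u, z, var_w, var_z):
    """One predict + fuse cycle of a 1D Kalman filter.

    x, var:  previous state estimate and its variance
    u:       commanded displacement;  var_w: motion-noise variance
    z:       new measurement;         var_z: measurement-noise variance
    """
    # Dynamic prediction: motion shifts the estimate and inflates the variance
    x_pred = x + u
    var_pred = var + var_w
    # Fusion: the gain K weights the innovation (z - x_pred)
    K = var_pred / (var_pred + var_z)
    x_new = x_pred + K * (z - x_pred)
    var_new = (1.0 - K) * var_pred
    return x_new, var_new
```

Note that var_new = (1 - K) var_pred equals var_pred * var_z / (var_pred + var_z), which is smaller than both var_pred and var_z: fusing the two estimates can only reduce uncertainty.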
21. Kalman Filter for Mobile Robot Localization
Five steps: 1) position prediction, 2) observation,
3) measurement prediction, 4) matching, 5) estimation.
22. Kalman Filter for Mobile Robot Localization
- Robot position prediction
- In the first step, the robot's position at time
step k+1 is predicted based on its old location
(time step k) and its movement due to the control
input u(k):
$$\hat{x}'(k+1 \mid k) = f(\hat{x}(k \mid k), u(k))$$
f: odometry function
23. Robot Position Prediction: Example
Odometry (figure).
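A common concrete form of the odometry function f for a differential-drive robot, in the standard textbook form; the wheel displacements and wheelbase b are the assumed inputs:

```python
from math import cos, sin

def odometry_predict(x, y, theta, ds_r, ds_l, b):
    """Predict the new pose from wheel-encoder displacements.

    ds_r, ds_l: right/left wheel travel since the last step; b: wheelbase.
    """
    ds = (ds_r + ds_l) / 2.0          # translation of the robot center
    dtheta = (ds_r - ds_l) / b        # change of heading
    # Move along the mean heading over the step
    x_new = x + ds * cos(theta + dtheta / 2.0)
    y_new = y + ds * sin(theta + dtheta / 2.0)
    return x_new, y_new, theta + dtheta
```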
24. Kalman Filter for Mobile Robot Localization
- Observation
- The second step is to obtain the observations
Z(k+1) (measurements) from the robot's sensors
at the new location at time k+1.
- The observation usually consists of a set of n_0
single observations z_j(k+1) extracted from the
different sensors' signals. It can represent raw
data scans as well as features like lines, doors,
or any kind of landmarks.
- The parameters of the targets are usually
observed in the sensor frame S.
- Therefore the observations have to be transformed
to the world frame W, or
- the measurement predictions have to be transformed
to the sensor frame S.
- This transformation is specified in the function
h_i (seen later).
25. Observation: Example
(Figure: raw data of laser scanner; extracted lines;
extracted lines in model space; sensor (robot) frame.)
26. Kalman Filter for Mobile Robot Localization
- Measurement prediction
- In the next step we use the predicted robot
position and the map M(k) to generate multiple
predicted observations z_t.
- They have to be transformed into the sensor frame.
- We can now define the measurement prediction as
the set containing all n_i predicted observations.
- The function h_i is mainly the coordinate
transformation between the world frame and the
sensor frame.
27. Measurement Prediction: Example
- For prediction, only the walls that are in the
field of view of the robot are selected.
- This is done by linking the individual lines to
the nodes of the path.
28. Measurement Prediction: Example
- The generated measurement predictions have to be
transformed to the robot frame R.
- According to the figure in the previous slide, the
transformation is the world-to-robot frame change for
each line feature, and its Jacobian is the derivative
of that transformation with respect to the robot pose
(the slide's equations were lost in the transcript;
see the sketch below).
29. Kalman Filter for Mobile Robot Localization
- Matching
- Assignment from observations z_j(k+1) (gained by
the sensors) to the targets z_t (stored in the map).
- For each measurement prediction for which a
corresponding observation is found, we calculate
the innovation and its innovation covariance, found
by applying the error propagation law.
- The validity of the correspondence between
measurement and prediction can, e.g., be evaluated
through the Mahalanobis distance.
30. Matching: Example
31. Matching: Example
- To find correspondences (pairs) of predicted and
observed features we use the Mahalanobis distance,
with a validation gate on its value (the slide's
equations were lost in the transcript; see the
sketch below).
32. Estimation: Applying the Kalman Filter
- Kalman filter gain:
$$K(k+1) = P'(k+1 \mid k)\, \nabla h^T\, \Sigma_{IN}^{-1}(k+1)$$
- Update of the robot's position estimate:
$$\hat{x}(k+1 \mid k+1) = \hat{x}'(k+1 \mid k) + K(k+1)\, v(k+1)$$
- The associated variance:
$$P(k+1 \mid k+1) = P'(k+1 \mid k) - K(k+1)\, \Sigma_{IN}(k+1)\, K^T(k+1)$$
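In matrix form, the three equations above condense to one update function; Σ_IN is computed here as H P' H^T + R under the usual linearized-measurement assumption, and the names are mine, not the slide's:

```python
import numpy as np

def kalman_update(x_pred, P_pred, v, H, R):
    """Estimation step: gain, position update, and associated variance.

    v: stacked innovation; H: stacked measurement Jacobian (gradient of h);
    R: measurement noise covariance.
    """
    S = H @ P_pred @ H.T + R               # innovation covariance Sigma_IN
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman filter gain
    x_new = x_pred + K @ v                 # updated position estimate
    P_new = P_pred - K @ S @ K.T           # associated variance
    return x_new, P_new
```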
33. Estimation: 1D Case
- For the one-dimensional case, one can show that this
estimation corresponds to the one-dimensional Kalman
filter presented earlier.
34. Estimation: Example
- Kalman filter estimation of the new robot position:
by fusing the prediction of the robot position
(magenta) with the innovation gained by the
measurements (green), we get the updated estimate
of the robot position (red).
36. Monte Carlo Localization
- One of the most popular particle filter methods
for robot localization
- A current research topic
- Reference: Dieter Fox, Wolfram Burgard, Frank Dellaert,
Sebastian Thrun, "Monte Carlo Localization: Efficient
Position Estimation for Mobile Robots," Proc. 16th National
Conference on Artificial Intelligence (AAAI-99), July 1999
37. MCL in action
Monte Carlo Localization refers to the resampling of
the distribution each time a new observation is
integrated.
38. Monte Carlo Localization
- The probability density function is represented
by samples drawn randomly from it
- It is also able to represent multi-modal
distributions, and thus localize the robot globally
- It considerably reduces the amount of memory
required and can integrate measurements at a
higher rate
- The state is not discretized, so the method is
more accurate than grid-based methods
- Easy to implement
39. Probabilistic Robotics
- Sebastian Thrun
Bayes rule:
$$p(A \mid B) = \frac{p(B \mid A)\,p(A)}{p(B)}$$
- Definition of marginal probability:
$$p(A) = \sum_{\text{all } B} p(A \wedge B) = \sum_{\text{all } B} p(A \mid B)\,p(B)$$
- Definition of conditional probability:
$$p(A \mid B) = \frac{p(A \wedge B)}{p(B)}$$
40. Setting up the problem
The robot does (or can be modeled to) alternate between
- sensing: getting range observations o_1, o_2, o_3, ..., o_{t-1}, o_t
- acting: driving around (or ferrying?) a_1, a_2, a_3, ..., a_{t-1}
(Figure: local maps; whence?)
41. Setting up the problem
The robot does (or can be modeled to) alternate between
- sensing: getting range observations o_1, o_2, o_3, ..., o_{t-1}, o_t
- acting: driving around (or ferrying?) a_1, a_2, a_3, ..., a_{t-1}
We want to know r_t, the position of the robot at time t
- but we'll settle for p(r_t), the probability distribution for r_t
What kind of thing is p(r_t)?
42. Setting up the problem
The robot does (or can be modeled to) alternate between
- sensing: getting range observations o_1, o_2, o_3, ..., o_{t-1}, o_t
- acting: driving around (or ferrying?) a_1, a_2, a_3, ..., a_{t-1}
We want to know r_t, the position of the robot at time t
- but we'll settle for p(r_t), the probability distribution for r_t
What kind of thing is p(r_t)?
We do know m, the map of the environment;
p(o | r, m), the sensor model; and
p(r_new | r_old, a, m), the accuracy of the desired action a.
43. Robot modeling
Map m and location r.
p(o | r, m): sensor model.
(Figure: potential observations o, with e.g. p(o | r, m) = 0.75
for one observation and p(o | r, m) = 0.05 for another.)
44. Robot modeling
Map m and location r.
p(o | r, m): sensor model.
p(r_new | r_old, a, m): action model.
(Figure: potential observations o, with e.g. p(o | r, m) = 0.75
and p(o | r, m) = 0.05, as before; probabilistic kinematics
showing encoder uncertainty.)
- Red lines indicate the commanded action
- The cloud indicates the likelihood of various final states
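The "cloud" can be produced by sampling the action model; a minimal sketch, assuming a drive-then-turn kinematic step and Gaussian encoder noise with made-up magnitudes:

```python
import random
from math import cos, sin

def sample_motion(pose, d, dtheta, sd_d=0.05, sd_th=0.02):
    """Draw one sample from p(r_new | r_old, a) for action a = (d, dtheta)."""
    x, y, theta = pose
    d_hat = random.gauss(d, sd_d * abs(d))       # noisy translation
    theta += random.gauss(dtheta, sd_th)         # noisy rotation
    return (x + d_hat * cos(theta), y + d_hat * sin(theta), theta)

# Many samples from one commanded action form the cloud of final states:
cloud = [sample_motion((0.0, 0.0, 0.0), 1.0, 0.1) for _ in range(500)]
```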
45. Robot modeling: how-to
p(o | r, m): sensor model
p(r_new | r_old, a, m): action model
(0) Model the physics of the sensors/actuators (with
error estimates): theoretical modeling.
(1) Measure lots of sensing/action results and
create a model from them: empirical modeling.
- Take N measurements, find the mean (μ) and standard
deviation (σ), and then use a Gaussian model,
- or some other easily-manipulated model, e.g. a box or
a triangle model:
$$p(x) = \begin{cases} 0 & \text{if } |x-\mu| > \sigma \\ 1 & \text{otherwise} \end{cases}
\qquad
p(x) = \begin{cases} 0 & \text{if } |x-\mu| > \sigma \\ 1 - |x-\mu|/\sigma & \text{otherwise} \end{cases}$$
(2) Make something up...
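A sketch of step (1), empirical modeling, in Python; the sample readings are invented for illustration:

```python
import statistics
from math import exp, pi, sqrt

def fit_gaussian_model(samples):
    """Fit a Gaussian sensor model from N measurements (empirical modeling)."""
    m = statistics.mean(samples)
    s = statistics.stdev(samples)
    def p(x):
        # Gaussian density with the measured mean and standard deviation
        return exp(-0.5 * ((x - m) / s) ** 2) / (s * sqrt(2 * pi))
    return p

# e.g. repeated range readings of a wall at about 1 m (made-up numbers):
p = fit_gaussian_model([0.98, 1.02, 1.01, 0.97, 1.03])
```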
46. Monte Carlo Localization
Start by assuming p(r_0) is the uniform distribution:
take K samples of r_0 and weight each with an
importance factor of 1/K.
47. Monte Carlo Localization
Start by assuming p(r_0) is the uniform distribution:
take K samples of r_0 and weight each with an
importance factor of 1/K.
Get the current sensor observation, o_1.
For each sample point r_0, multiply the importance
factor by p(o_1 | r_0, m).
48. Monte Carlo Localization
Start by assuming p(r_0) is the uniform distribution:
take K samples of r_0 and weight each with an
importance factor of 1/K.
Get the current sensor observation, o_1.
For each sample point r_0, multiply the importance
factor by p(o_1 | r_0, m).
Normalize (make sure the importance factors add to 1).
You now have an approximation of p(r_1 | o_1, ..., m),
and the distribution is no longer uniform.
49. Monte Carlo Localization
Start by assuming p(r_0) is the uniform distribution:
take K samples of r_0 and weight each with an
importance factor of 1/K.
Get the current sensor observation, o_1.
For each sample point r_0, multiply the importance
factor by p(o_1 | r_0, m).
Normalize (make sure the importance factors add to 1).
You now have an approximation of p(r_1 | o_1, ..., m),
and the distribution is no longer uniform.
Create r_1 samples by dividing up large clumps:
each point spawns new ones in proportion to its
importance factor.
50. Monte Carlo Localization
Start by assuming p(r_0) is the uniform distribution:
take K samples of r_0 and weight each with an
importance factor of 1/K.
Get the current sensor observation, o_1.
For each sample point r_0, multiply the importance
factor by p(o_1 | r_0, m).
Normalize (make sure the importance factors add to 1).
You now have an approximation of p(r_1 | o_1, ..., m),
and the distribution is no longer uniform.
Create r_1 samples by dividing up large clumps:
each point spawns new ones in proportion to its
importance factor.
The robot moves, a_1.
For each sample r_1, move it according to the
model p(r_2 | a_1, r_1, m).
51. Monte Carlo Localization
Start by assuming p(r_0) is the uniform distribution:
take K samples of r_0 and weight each with an
importance factor of 1/K.
Get the current sensor observation, o_1.
For each sample point r_0, multiply the importance
factor by p(o_1 | r_0, m).
Normalize (make sure the importance factors add to 1).
You now have an approximation of p(r_1 | o_1, ..., m),
and the distribution is no longer uniform.
Create r_1 samples by dividing up large clumps:
each point spawns new ones in proportion to its
importance factor.
The robot moves, a_1.
For each sample r_1, move it according to the
model p(r_2 | a_1, r_1, m).
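The whole loop on slides 46-51 condenses to a few lines; a minimal sketch, assuming sample_motion and p_obs are the action and sensor models from the earlier slides (their exact signatures are my assumption):

```python
import random

def mcl_step(particles, a, o, sample_motion, p_obs, m):
    """One Monte Carlo Localization cycle: move, weight, resample.

    particles: list of pose samples r; a: action; o: observation; m: map.
    """
    # Action update: move each sample according to p(r_new | r_old, a, m)
    moved = [sample_motion(r, a, m) for r in particles]
    # Sensor update: importance factor p(o | r, m), then normalize
    w = [p_obs(o, r, m) for r in moved]
    total = sum(w)
    w = [wi / total for wi in w]
    # Resampling: each point spawns offspring in proportion to its weight
    return random.choices(moved, weights=w, k=len(moved))
```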
52. MCL in action
Monte Carlo Localization refers to the resampling of
the distribution each time a new observation is
integrated.
53. Discussion: MCL
54. References
- Dieter Fox, Wolfram Burgard, Frank Dellaert, Sebastian Thrun,
"Monte Carlo Localization: Efficient Position Estimation for
Mobile Robots," Proc. 16th National Conference on Artificial
Intelligence (AAAI-99), July 1999.
- Dieter Fox, Wolfram Burgard, Sebastian Thrun, "Markov
Localization for Mobile Robots in Dynamic Environments,"
Journal of Artificial Intelligence Research 11 (1999), 391-427.
- Sebastian Thrun, "Probabilistic Algorithms in Robotics,"
Technical Report CMU-CS-00-126, School of Computer Science,
Carnegie Mellon University, Pittsburgh, USA, 2000.