Transcript and Presenter's Notes

Title: Dr. Jizhong Xiao


1
Advanced Mobile Robotics
Probabilistic Robotics (II)
  • Dr. Jizhong Xiao
  • Department of Electrical Engineering
  • City College of New York
  • jxiao@ccny.cuny.edu

2
Outline
  • Localization Methods
  • Markov Localization (review)
  • Kalman Filter Localization
  • Particle Filter (Monte Carlo Localization)
  • Current Research Topics
  • SLAM
  • Multi-robot Localization

3
Robot Navigation
Fundamental problems to provide a mobile robot
with autonomous capabilities
  • Where am I going?
  • What's the best way there?
  • Where have I been? → How to create an
    environmental map with imperfect sensors?
  • Where am I? → How can a robot tell where it is
    on a map?
  • What if you're lost and don't have a map?

Mission Planning
Path Planning
Mapping
Localization
Robot SLAM
4
Representation of the Environment
  • Environment Representation
  • Continuous Metric: x, y, θ
  • Discrete Metric: metric grid
  • Discrete Topological: topological grid

5
Localization Methods
  • Markov Localization
  • Central idea: represent the robot's belief by a
    probability distribution over possible positions,
    and use Bayes rule and convolution to update
    the belief whenever the robot senses or moves
  • Markov Assumption: past and future data are
    independent if one knows the current state
  • Kalman Filtering
  • Central idea: pose the localization problem as a
    sensor fusion problem
  • Assumption: Gaussian distribution function
  • Particle Filtering
  • Monte-Carlo method
  • SLAM (simultaneous localization and mapping)
  • Multi-robot localization

6
Probability Theory
Bayes Formula


P(x, y) = P(x | y) P(y) = P(y | x) P(x)

⇒  P(x | y) = P(y | x) P(x) / P(y)
Normalization
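The normalization constant itself is not preserved in the transcript; the standard form used with Bayes rule is

P(x \mid y) = \eta \, P(y \mid x)\, P(x),
\qquad
\eta = P(y)^{-1} = \Big[\sum_x P(y \mid x)\, P(x)\Big]^{-1}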
7
Probability Theory
Bayes Rule with background knowledge:

P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)

Law of Total Probability:

P(x) = Σ_y P(x | y) P(y)   (discrete case),   P(x) = ∫ P(x | y) P(y) dy   (continuous case)
8
Bayes Filters Framework
  • Given
  • Stream of observations z and action data u
  • Sensor model P(z | x).
  • Action model P(x | u, x').
  • Prior probability of the system state P(x).
  • Wanted
  • Estimate of the state X of a dynamical system.
  • The posterior of the state is also called Belief
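The defining formula is not in the transcript; in the standard notation of these slides it is

Bel(x_t) = P(x_t \mid u_1, z_1, \ldots, u_t, z_t)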

9
Markov Assumption
Measurement probability:
p(z_t | x_{0:t}, z_{1:t-1}, u_{1:t}) = p(z_t | x_t)
State transition probability:
p(x_t | x_{0:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t | x_{t-1}, u_t)
  • Markov Assumption
  • past and future data are independent if one knows
    the current state
  • Underlying Assumptions
  • Static world, Independent noise
  • Perfect model, no approximation errors

10
Bayes Filters
z = observation, u = action, x = state
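The recursive update that this slide derives is not preserved in the transcript; the standard result is

Bel(x_t) = \eta \, P(z_t \mid x_t) \int P(x_t \mid u_t, x_{t-1})\, Bel(x_{t-1})\, dx_{t-1}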
11
Bayes Filter Algorithm
  • Algorithm Bayes_filter( Bel(x), d ):
  • η = 0
  • If d is a perceptual data item z then
  •   For all x do
  •     Bel'(x) = P(z | x) Bel(x)
  •     η = η + Bel'(x)
  •   For all x do
  •     Bel'(x) = η⁻¹ Bel'(x)
  • Else if d is an action data item u then
  •   For all x do
  •     Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
  • Return Bel'(x)
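The algorithm can be made concrete on a discrete (grid) state space. The following is a minimal Python sketch, not the slides' own code; the function name, the toy door/wall sensor table, and the shift-right motion table are illustrative assumptions.

def bayes_filter(bel, d, kind, sensor_model=None, motion_model=None):
    """One update of Bel(x); kind is 'z' (perceptual item) or 'u' (action item)."""
    n = len(bel)
    if kind == 'z':
        # perceptual data item: multiply by P(z | x), then normalize with eta
        new_bel = [sensor_model[d][x] * bel[x] for x in range(n)]
        eta = sum(new_bel)
        return [b / eta for b in new_bel]
    else:
        # action data item: sum P(x | u, x') Bel(x') over all previous states x'
        return [sum(motion_model[x][xp] * bel[xp] for xp in range(n))
                for x in range(n)]

# toy usage: a 3-cell world, a door/wall sensor, and an action that shifts right
sensor_model = {'door': [0.6, 0.2, 0.2], 'wall': [0.4, 0.8, 0.8]}   # P(z | x)
motion_model = [[0.1, 0.0, 0.9],    # row x, column x': P(x | u, x'), wraps around
                [0.9, 0.1, 0.0],
                [0.0, 0.9, 0.1]]
bel = [1.0 / 3] * 3
bel = bayes_filter(bel, 'door', 'z', sensor_model=sensor_model)     # sense
bel = bayes_filter(bel, None, 'u', motion_model=motion_model)       # act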

12
Bayes Filters are Familiar!
  • Kalman filters
  • Particle filters
  • Hidden Markov models
  • Dynamic Bayesian networks
  • Partially Observable Markov Decision Processes
    (POMDPs)

13
Localization
Determining the pose of a robot relative to a
given map of the environment
  • Given
  • Map of the environment.
  • Sequence of sensor measurements.
  • Wanted
  • Estimate of the robot's position.
  • Problem classes
  • Position tracking
  • Global localization
  • Kidnapped robot problem (recovery)

14
Markov Localization
  • Algorithm Markov_Localization( Bel(x), u, z, m ):
  • For all x do
  •   Bel'(x) = ∫ p(x | u, x', m) Bel(x') dx'
  •   Bel(x) = η p(z | x, m) Bel'(x)
  • Endfor
  • Return Bel(x)

15
(No Transcript)
16
Localization Methods
  • Markov Localization
  • Central idea: represent the robot's belief by a
    probability distribution over possible positions,
    and use Bayes rule and convolution to update
    the belief whenever the robot senses or moves
  • Markov Assumption: past and future data are
    independent if one knows the current state
  • Kalman Filtering
  • Central idea: pose the localization problem as a
    sensor fusion problem
  • Assumption: Gaussian distribution function
  • Particle Filtering
  • Monte-Carlo method
  • SLAM (simultaneous localization and mapping)
  • Multi-robot localization

17
Kalman Filter Localization
18
Introduction to Kalman Filter (1)
  • Two measurements
  • Weighted least-squares
  • Finding minimum error
  • After some calculation and rearrangements

Gaussian probability density function
Take weight as
Best estimate for the robot position
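The equations of this derivation are not preserved in the transcript; a sketch of the standard one-dimensional result, assuming two measurements q_1, q_2 with variances \sigma_1^2, \sigma_2^2 and weights w_i = 1/\sigma_i^2, is

\hat{q} = \frac{w_1 q_1 + w_2 q_2}{w_1 + w_2}
        = q_1 + \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}\,(q_2 - q_1),
\qquad
\hat{\sigma}^2 = \Big(\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}\Big)^{-1}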
19
Introduction to Kalman Filter (2)
  • In Kalman Filter notation

The best estimate of the state at time
k+1 is equal to the best prediction of its value
before the new measurement is taken, plus a
weighting (the Kalman gain) times the difference
between the new measurement and the best
prediction at time k
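In symbols, using the standard one-dimensional Kalman filter notation (the slide's own symbols are not preserved in the transcript):

\hat{x}_{k+1} = \hat{x}'_{k+1} + K_{k+1}\,\big(z_{k+1} - \hat{x}'_{k+1}\big),
\qquad
K_{k+1} = \frac{\sigma'^2_{k+1}}{\sigma'^2_{k+1} + \sigma^2_{z,k+1}},
\qquad
\sigma^2_{k+1} = \sigma'^2_{k+1} - K_{k+1}\,\sigma'^2_{k+1}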
20
Introduction to Kalman Filter (3)
  • Dynamic Prediction (robot moving)
  • u = velocity, w = noise
  • Motion
  • Combining fusion and dynamic prediction
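The prediction equations themselves are missing from the transcript; for one-dimensional motion with velocity input u and process noise variance \sigma_w^2, the standard form is

\hat{x}'_{k+1} = \hat{x}_k + u\,(t_{k+1} - t_k),
\qquad
\sigma'^2_{k+1} = \sigma^2_k + \sigma_w^2\,(t_{k+1} - t_k)

Fusing this prediction with the next measurement z_{k+1} then uses the update equations of the previous slide.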

21
Kalman Filter for Mobile Robot Localization
Five steps: 1) Position Prediction, 2)
Observation, 3) Measurement Prediction, 4)
Matching, 5) Estimation
22
Kalman Filter for Mobile Robot Localization
  • Robot Position Prediction
  • In the first step, the robot's position at time
    step k+1 is predicted based on its old location
    (time step k) and its movement due to the control
    input u(k)

f: odometry function
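The prediction equations are not preserved in the transcript; in the usual notation, with odometry function f, state covariance P and control-noise covariance Q:

\hat{x}(k+1\mid k) = f\big(\hat{x}(k\mid k),\, u(k)\big),
\qquad
P(k+1\mid k) = \nabla_x f\; P(k\mid k)\; \nabla_x f^{T} + \nabla_u f\; Q(k)\; \nabla_u f^{T}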
23
Robot Position Prediction Example
Odometry
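The example's odometry equations are not in the transcript; a common differential-drive model (an assumption about what the slide showed), with wheel displacements \Delta s_r, \Delta s_l, wheelbase b, and pose (x, y, \theta), is

\Delta s = \tfrac{1}{2}(\Delta s_r + \Delta s_l),
\quad
\Delta\theta = \frac{\Delta s_r - \Delta s_l}{b},
\quad
\hat{x}(k+1\mid k) = \hat{x}(k\mid k) +
\begin{pmatrix} \Delta s \cos(\theta + \Delta\theta/2) \\ \Delta s \sin(\theta + \Delta\theta/2) \\ \Delta\theta \end{pmatrix}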
24
Kalman Filter for Mobile Robot Localization
  • Observation
  • The second step is to obtain the observation
    Z(k+1) (measurements) from the robot's sensors
    at the new location at time k+1
  • The observation usually consists of a set of n0
    single observations zj(k+1) extracted from the
    different sensors' signals. It can represent raw
    data scans as well as features like lines, doors
    or any kind of landmarks.
  • The parameters of the targets are usually
    observed in the sensor frame S.
  • Therefore the observations have to be transformed
    to the world frame W, or
  • the measurement predictions have to be transformed
    to the sensor frame S.
  • This transformation is specified in the function
    hi (seen later).

25
Observation Example
  • Raw Data of Laser Scanner

Extracted Lines
Extracted Lines in Model Space
Sensor (robot) frame
26
Kalman Filter for Mobile Robot Localization
  • Measurement Prediction
  • In the next step we use the predicted robot
    position and the map M(k) to generate multiple
    predicted observations zt.
  • They have to be transformed into the sensor frame
  • We can now define the measurement prediction as
    the set containing all ni predicted observations
  • The function hi is mainly the coordinate
    transformation between the world frame and the
    sensor frame
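The defining formulas are missing from the transcript; the usual form, with map target z_t and predicted pose \hat{x}(k+1\mid k), is

\hat{z}_i(k+1) = h_i\big(z_t,\, \hat{x}(k+1\mid k)\big),
\qquad
\hat{Z}(k+1) = \big\{\, \hat{z}_i(k+1) \;\big|\; 1 \le i \le n_i \,\big\}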

27
Measurement Prediction Example
  • For prediction, only the walls that are in the
    field of view of the robot are selected.
  • This is done by linking the individual lines to
    the nodes of the path

28
Measurement Prediction Example
  • The generated measurement predictions have to be
    transformed to the robot frame R
  • According to the figure in the previous slide,
    the transformation is given by
  • and its Jacobian by
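The transformation and its Jacobian are not preserved in the transcript; for line features (\alpha_t, r_t) given in the world frame and a predicted pose (\hat{x}, \hat{y}, \hat{\theta}), a common form (an assumption about what the slide showed) is

h_i = \begin{pmatrix} \alpha_t - \hat{\theta} \\ r_t - \big(\hat{x}\cos\alpha_t + \hat{y}\sin\alpha_t\big) \end{pmatrix},
\qquad
\nabla h_i = \begin{pmatrix} 0 & 0 & -1 \\ -\cos\alpha_t & -\sin\alpha_t & 0 \end{pmatrix}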

29
Kalman Filter for Mobile Robot Localization
  • Matching
  • Assignment from observations zj(k+1) (gained by
    the sensors) to the targets zt (stored in the
    map)
  • For each measurement prediction for which a
    corresponding observation is found, we calculate
    the innovation
  • and its innovation covariance, found by applying
    the error propagation law
  • The validity of the correspondence between
    measurement and prediction can e.g. be evaluated
    through the Mahalanobis distance
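The formulas themselves are missing from the transcript; in the standard form, with measurement noise covariance R_j:

v_{ij}(k+1) = z_j(k+1) - \hat{z}_i(k+1),
\qquad
\Sigma_{IN,ij}(k+1) = \nabla h_i\, P(k+1\mid k)\, \nabla h_i^{T} + R_j(k+1),

and a correspondence is accepted when v_{ij}^{T}\,\Sigma_{IN,ij}^{-1}\,v_{ij} \le g^2 for a validation gate g.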

30
Matching Example
31
Matching Example
  • To find correspondence (pairs) of predicted and
    observed features we use the Mahalanobis
    distance
  • with

32
Estimation Applying the Kalman Filter
  • Kalman filter gain
  • Update of robot's position estimate
  • The associated variance
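The equations are not preserved in the transcript; the standard Kalman update is

K(k+1) = P(k+1\mid k)\; \nabla h^{T}\; \Sigma_{IN}^{-1}(k+1),
\qquad
\hat{x}(k+1\mid k+1) = \hat{x}(k+1\mid k) + K(k+1)\, v(k+1),
\qquad
P(k+1\mid k+1) = P(k+1\mid k) - K(k+1)\, \Sigma_{IN}(k+1)\, K^{T}(k+1)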

33
Estimation 1D Case
  • For the one-dimensional case, we can show that
    the estimation corresponds to the Kalman filter
    for one dimension presented earlier.

34
Estimation Example
  • Kalman filter estimation of the new robot
    position
  • By fusing the prediction of the robot position
    (magenta) with the innovation gained by the
    measurements (green), we get the updated estimate
    of the robot's position (red)

35
(No Transcript)
36
Monte Carlo Localization
  • One of the most popular particle filter methods
    for robot localization
  • Current Research Topics
  • Reference
  • Dieter Fox, Wolfram Burgard, Frank Dellaert,
    Sebastian Thrun, Monte Carlo Localization:
    Efficient Position Estimation for Mobile Robots,
    Proc. 16th National Conference on Artificial
    Intelligence (AAAI-99), July 1999

37
MCL in action
Monte Carlo Localization -- refers to the
resampling of the distribution each time a new
observation is integrated
38
Monte Carlo Localization
  • the probability density function is represented
    by samples randomly drawn from it
  • it is also able to represent multi-modal
    distributions, and thus localize the robot
    globally
  • considerably reduces the amount of memory
    required and can integrate measurements at a
    higher rate
  • state is not discretized and the method is more
    accurate than the grid-based methods
  • easy to implement

39
Probabilistic Robotics
- Sebastian Thrun
  • Bayes rule

p( A | B ) = p( B | A ) p( A ) / p( B )

  • Definition of marginal probability

p( A ) = Σ_{all B} p( A ∧ B )

p( A ) = Σ_{all B} p( A | B ) p( B )

  • Definition of conditional probability

p( A ∧ B ) = p( A | B ) p( B )

40
Setting up the problem
The robot does (or can be modeled to) alternate
between
  • sensing -- getting range observations o1,
    o2, o3, ..., ot-1, ot
  • acting -- driving around (or ferrying?) a1,
    a2, a3, ..., at-1

(figure: local maps -- whence?)
41
Setting up the problem
The robot does (or can be modeled to) alternate
between
  • sensing -- getting range observations o1,
    o2, o3, ..., ot-1, ot
  • acting -- driving around (or ferrying?) a1,
    a2, a3, ..., at-1

We want to know rt -- the position of the
robot at time t
  • but we'll settle for p(rt) -- the
    probability distribution for rt

What kind of thing is p(rt) ?
42
Setting up the problem
The robot does (or can be modeled to) alternate
between
  • sensing -- getting range observations o1,
    o2, o3, ..., ot-1, ot
  • acting -- driving around (or ferrying?) a1,
    a2, a3, ..., at-1

We want to know rt -- the position of the
robot at time t
  • but we'll settle for p(rt) -- the
    probability distribution for rt

What kind of thing is p(rt) ?
We do know m -- the map of
the environment
p( o | r, m )
-- the sensor model
p( rnew | rold, a, m )
-- the accuracy of desired action a
43
Robot modeling
map m and location r
p( o | r, m )   sensor model
p( o | r, m ) = .75
p( o | r, m ) = .05
potential observations o
44
Robot modeling
map m and location r
p( o | r, m )   sensor model
p( rnew | rold, a, m )   action model
p( o | r, m ) = .75
p( o | r, m ) = .05
potential observations o
probabilistic kinematics -- encoder
uncertainty
  • red lines indicate commanded action
  • the cloud indicates the likelihood of various
    final states

45
Robot modeling how-to
p( o | r, m )   sensor model
p( rnew | rold, a, m )   action model
(0) Model the physics of the sensor/actuators
(with error estimates)
theoretical modeling
(1) Measure lots of sensing/action results and
create a model from them
empirical modeling
  • take N measurements, find mean (m) and st. dev.
    (s) and then use a Gaussian model
  • or, some other easily-manipulated model...

p( x ) = 1 if |x - m| <= s,  0 if |x - m| > s
p( x ) = 1 - |x - m|/s if |x - m| <= s,  0 if |x - m| > s
(2) Make something up...
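A small Python sketch of option (1), empirical modeling: take N readings, estimate the mean m and standard deviation s, and use them in a Gaussian (or box) sensor model. The readings below are made-up illustrative numbers, not data from the slides.

import math

readings = [2.02, 1.98, 2.05, 1.97, 2.01, 2.03, 1.99, 2.00]   # N measurements
N = len(readings)
m = sum(readings) / N                                          # sample mean
s = math.sqrt(sum((x - m) ** 2 for x in readings) / (N - 1))   # sample st. dev.

def p_gaussian(x):
    """Gaussian sensor model p(x) using the fitted m and s."""
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def p_box(x):
    """The simpler box model from the slide: 1 within one st. dev. of m, else 0."""
    return 1.0 if abs(x - m) <= s else 0.0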
46
Monte Carlo Localization
Start by assuming p( r0 ) is the uniform
distribution.
take K samples of r0 and weight each with an
importance factor of 1/K
47
Monte Carlo Localization
Start by assuming p( r0 ) is the uniform
distribution.
take K samples of r0 and weight each with an
importance factor of 1/K
Get the current sensor observation, o1
For each sample point r0 multiply the importance
factor by p(o1 | r0, m)
48
Monte Carlo Localization
Start by assuming p( r0 ) is the uniform
distribution.
take K samples of r0 and weight each with an
importance factor of 1/K
Get the current sensor observation, o1
For each sample point r0 multiply the importance
factor by p(o1 | r0, m)
Normalize (make sure the importance factors add
to 1)
You now have an approximation of p(r1 | o1, ..., m)
and the distribution is no longer uniform
49
Monte Carlo Localization
Start by assuming p( r0 ) is the uniform
distribution.
take K samples of r0 and weight each with an
importance factor of 1/K
Get the current sensor observation, o1
For each sample point r0 multiply the importance
factor by p(o1 | r0, m)
Normalize (make sure the importance factors add
to 1)
You now have an approximation of p(r1 | o1, ..., m)
and the distribution is no longer uniform
Create r1 samples by dividing up large clumps
each point spawns new ones in proportion to its
importance factor
50
Monte Carlo Localization
Start by assuming p( r0 ) is the uniform
distribution.
take K samples of r0 and weight each with an
importance factor of 1/K
Get the current sensor observation, o1
For each sample point r0 multiply the importance
factor by p(o1 | r0, m)
Normalize (make sure the importance factors add
to 1)
You now have an approximation of p(r1 | o1, ..., m)
and the distribution is no longer uniform
Create r1 samples by dividing up large clumps
each point spawns new ones in proportion to its
importance factor
The robot moves, a1
For each sample r1, move it according to the
model p(r2 | a1, r1, m)
51
Monte Carlo Localization
Start by assuming p( r0 ) is the uniform
distribution.
take K samples of r0 and weight each with an
importance factor of 1/K
Get the current sensor observation, o1
For each sample point r0 multiply the importance
factor by p(o1 | r0, m)
Normalize (make sure the importance factors add
to 1)
You now have an approximation of p(r1 | o1, ..., m)
and the distribution is no longer uniform
Create r1 samples by dividing up large clumps
each point spawns new ones in proportion to its
importance factor
The robot moves, a1
For each sample r1, move it according to the
model p(r2 | a1, r1, m)
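Putting the steps of these slides together, here is a compact Python sketch of the MCL loop for a robot on a 1-D circular corridor. The corridor length, door positions, noise levels, and all names are illustrative assumptions, not part of the original slides.

import math, random

CORRIDOR = 10.0                       # map m: corridor length (wraps around)
DOORS = [2.0, 5.0, 8.0]               # landmark positions in the map

def sense_likelihood(o, r):
    """p(o | r, m): likelihood of observing distance o to the nearest door."""
    d = min(abs(r - door) for door in DOORS)
    return math.exp(-0.5 * ((o - d) / 0.3) ** 2)

def mcl_step(particles, weights, o, a):
    # 1. weight each sample by p(o | r, m), then normalize the importance factors
    weights = [w * sense_likelihood(o, r) for w, r in zip(weights, particles)]
    eta = sum(weights)
    weights = [w / eta for w in weights]
    # 2. resample: each point spawns new ones in proportion to its importance factor
    K = len(particles)
    particles = random.choices(particles, weights=weights, k=K)
    # 3. move every sample according to the action model p(r_new | r_old, a, m)
    particles = [(r + a + random.gauss(0.0, 0.1)) % CORRIDOR for r in particles]
    return particles, [1.0 / K] * K

# start with K samples of r0 from the uniform distribution, each weighted 1/K
K = 1000
particles = [random.uniform(0.0, CORRIDOR) for _ in range(K)]
weights = [1.0 / K] * K
particles, weights = mcl_step(particles, weights, o=0.4, a=1.0)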
52
MCL in action
Monte Carlo Localization -- refers to the
resampling of the distribution each time a new
observation is integrated
53
Discussion MCL
54
References
  • Dieter Fox, Wolfram Burgard, Frank Dellaert,
    Sebastian Thrun, Monte Carlo Localization:
    Efficient Position Estimation for Mobile Robots,
    Proc. 16th National Conference on Artificial
    Intelligence (AAAI-99), July 1999
  • Dieter Fox, Wolfram Burgard, Sebastian Thrun,
    Markov Localization for Mobile Robots in Dynamic
    Environments, Journal of Artificial Intelligence
    Research 11 (1999), 391-427
  • Sebastian Thrun, Probabilistic Algorithms in
    Robotics, Technical Report CMU-CS-00-126, School
    of Computer Science, Carnegie Mellon University,
    Pittsburgh, USA, 2000