Dr. Jizhong Xiao - PowerPoint PPT Presentation

Author: Ning Xi. Last modified by: Prof. Xiao. Created: 7/10/2002. Slides: 46.
1
Probabilistic Robotics
Advanced Mobile Robotics
  • Dr. Jizhong Xiao
  • Department of Electrical Engineering
  • City College of New York
  • jxiao@ccny.cuny.edu

2
Robot Navigation
Fundamental problems to provide a mobile robot
with autonomous capabilities
  • Where am I going?
  • What's the best way there?
  • Where have I been? → how to create an
    environmental map with imperfect sensors?
  • Where am I? → how can a robot tell where it is
    on a map?
  • What if you're lost and don't have a map?

Mission Planning
Path Planning
Mapping
Localization
Robot SLAM
3
Representation of the Environment
  • Environment Representation
  • Continuous Metric: x, y, θ
  • Discrete Metric: metric grid
  • Discrete Topological: topological grid

4
Localization, Where am I?
  • Odometry, Dead Reckoning
  • Localization based on external sensors, beacons
    or landmarks
  • Probabilistic Map Based Localization

5
Localization Methods
  • Mathematical Background: Bayes Filters
  • Markov Localization
  • Central idea: represent the robot's belief by a
    probability distribution over possible positions,
    and use Bayes rule and convolution to update
    the belief whenever the robot senses or moves
  • Markov Assumption: past and future data are
    independent if one knows the current state
  • Kalman Filtering
  • Central idea: pose the localization problem as a
    sensor fusion problem
  • Assumption: Gaussian distribution function
  • Particle Filtering
  • Central idea: sample-based, nonparametric filter
  • Monte-Carlo method
  • SLAM (simultaneous localization and mapping)
  • Multi-robot localization

6
Markov Localization
  • Applying probability theory to robot localization
  • Markov localization uses an explicit, discrete
    representation for the probability of all
    positions in the state space.
  • This is usually done by representing the
    environment by a grid or a topological graph with
    a finite number of possible states (positions).
  • During each update, the probability for each
    state (element) of the entire space is updated.

7
Markov Localization Example
  • Assume the robot position is one-dimensional.

The robot is placed somewhere in the environment
but it is not told its location
The robot queries its sensors and finds out it is
next to a door
8
Markov Localization Example
The robot moves one meter forward. To account for
inherent noise in robot motion, the new belief is
smoother.
The robot queries its sensors and again it finds
itself next to a door
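The sense-then-move cycle on these slides can be sketched as a 1-D grid filter. The corridor map, sensor accuracies, and motion-noise values below are illustrative assumptions, not numbers from the slides:

```python
# Minimal 1-D Markov localization sketch (door example).
# Map layout, sensor accuracy, and motion noise are assumed values.

doors = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]   # 1 = a door at this cell (assumed map)

def normalize(b):
    s = sum(b)
    return [p / s for p in b]

def sense(belief, z, p_hit=0.8, p_miss=0.2):
    """Correction step: weight each cell by how well it explains z."""
    return normalize([p * (p_hit if doors[i] == z else p_miss)
                      for i, p in enumerate(belief)])

def move(belief, step=1, noise=0.1):
    """Prediction step: shift the belief by `step` cells with motion noise."""
    n = len(belief)
    new = [0.0] * n
    for i, p in enumerate(belief):
        new[(i + step) % n]     += (1 - 2 * noise) * p  # exact move
        new[(i + step - 1) % n] += noise * p            # undershoot
        new[(i + step + 1) % n] += noise * p            # overshoot
    return new

belief = [1.0 / len(doors)] * len(doors)  # uniform: the robot is lost
belief = sense(belief, z=1)               # robot senses a door
belief = move(belief)                     # robot moves one cell forward
belief = sense(belief, z=1)               # robot senses a door again
```

Each `sense` call sharpens the distribution and each `move` call smooths it, exactly the alternation the two example slides show.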
9
Probabilistic Robotics
  • Falls in between model-based and behavior-based
    techniques
  • There are models, and sensor measurements, but
    they are assumed to be incomplete and
    insufficient for control
  • Statistics provides the mathematical glue to
    integrate models and sensor measurements
  • Basic Mathematics
  • Probabilities
  • Bayes rule
  • Bayes filters

10
  • The next slides are provided by the authors of
    the book "Probabilistic Robotics"; you can
    download them from the website
  • http://www.probabilistic-robotics.org/

11
Probabilistic Robotics
Mathematical Background: Probabilities, Bayes
rule, Bayes filters
12
Probabilistic Robotics
  • Key idea: explicit representation of uncertainty
    using the calculus of probability theory
  • Perception = state estimation
  • Action = utility optimization

13
Axioms of Probability Theory
  • Pr(A) denotes the probability that proposition A is
    true.
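Rendered as equations (the slide's images are missing from this transcript), the three axioms are:

```latex
\begin{align}
& 0 \le \Pr(A) \le 1 \\
& \Pr(\text{True}) = 1, \qquad \Pr(\text{False}) = 0 \\
& \Pr(A \lor B) = \Pr(A) + \Pr(B) - \Pr(A \land B)
\end{align}
```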

14
A Closer Look at Axiom 3
15
Using the Axioms
16
Discrete Random Variables
  • X denotes a random variable.
  • X can take on a countable number of values in
    {x1, x2, …, xn}.
  • P(X = xi), or P(xi), is the probability that the
    random variable X takes on value xi.
  • P(·) is called the probability mass function.
  • E.g.
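The example on this slide is lost in the transcript; a standard stand-in is a fair six-sided die:

```latex
P(X = x_i) = \tfrac{1}{6}, \quad i = 1, \dots, 6,
\qquad \sum_{i=1}^{6} P(X = x_i) = 1
```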

17
Continuous Random Variables
  • X takes on values in the continuum.
  • p(X = x), or p(x), is a probability density
    function.
  • E.g.

(figure: a probability density function p(x) plotted over x)
18
Joint and Conditional Probability
  • P(X = x and Y = y) = P(x, y)
  • If X and Y are independent, then P(x, y) = P(x)
    P(y)
  • P(x | y) is the probability of x given y:
    P(x | y) = P(x, y) / P(y), hence P(x, y) = P(x | y) P(y)
  • If X and Y are independent, then P(x | y) = P(x)

19
Law of Total Probability, Marginals
Discrete case
Continuous case
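The equations on this slide are missing from the transcript; the standard statements are:

```latex
\text{Discrete:}\quad P(x) = \sum_y P(x, y) = \sum_y P(x \mid y)\, P(y)
```
```latex
\text{Continuous:}\quad p(x) = \int p(x, y)\, dy = \int p(x \mid y)\, p(y)\, dy
```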
20
Bayes Formula
P(x | y) = P(y | x) P(x) / P(y)

If y is a new sensor reading:
  • P(x): prior probability distribution
  • P(x | y): posterior probability distribution
  • P(y | x): generative model, characteristics of the sensor
  • P(y): does not depend on x (normalizing constant)
21
Normalization

P(x | y) = η P(y | x) P(x), with η = 1/P(y) = 1 / Σx P(y | x) P(x)

Algorithm:
  • For all x: aux(x) = P(y | x) P(x)
  • η = 1 / Σx aux(x)
  • For all x: P(x | y) = η aux(x)
22
Conditioning
  • Law of total probability:
    P(x) = ∫ P(x, z) dz = ∫ P(x | z) P(z) dz
23
Bayes Rule with Background Knowledge

P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)
24
Conditioning
  • Total probability:
    P(x | y) = ∫ P(x | y, z) P(z | y) dz
25
Conditional Independence
  • P(x, y | z) = P(x | z) P(y | z)
  • equivalent to
  • P(x | z) = P(x | y, z)
  • and
  • P(y | z) = P(y | x, z)

26
Simple Example of State Estimation
  • Suppose a robot obtains measurement z
  • What is P(open | z)?

27
Causal vs. Diagnostic Reasoning
  • P(open | z) is diagnostic.
  • P(z | open) is causal.
  • Often causal knowledge is easier to obtain.
  • Bayes rule allows us to use causal knowledge:
    P(open | z) = P(z | open) P(open) / P(z)

28
Example
  • P(z | open) = 0.6, P(z | ¬open) = 0.3
  • P(open) = P(¬open) = 0.5
  • P(open | z) = 0.6 · 0.5 / (0.6 · 0.5 + 0.3 · 0.5) = 2/3
  • z raises the probability that the door is open.
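The arithmetic above can be checked directly, using only the numbers given on the slide:

```python
# Numbers from the slide: P(z|open) = 0.6, P(z|~open) = 0.3,
# and a uniform prior P(open) = P(~open) = 0.5.
p_z_open, p_z_not_open = 0.6, 0.3
p_open, p_not_open = 0.5, 0.5

# Total probability gives the normalizer P(z); Bayes rule gives the posterior.
p_z = p_z_open * p_open + p_z_not_open * p_not_open
p_open_given_z = p_z_open * p_open / p_z
print(p_open_given_z)  # 2/3: the measurement raises P(open) from 0.5
```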

29
Combining Evidence
  • Suppose our robot obtains another observation z2.
  • How can we integrate this new information?
  • More generally, how can we estimate P(x | z1, …, zn)?

30
Recursive Bayesian Updating
Markov assumption: zn is independent of
z1, …, zn−1 if we know x.
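The recursion this slide derives (the equation images are missing here) follows from Bayes rule plus the Markov assumption stated above:

```latex
P(x \mid z_1, \dots, z_n)
  = \frac{P(z_n \mid x)\, P(x \mid z_1, \dots, z_{n-1})}{P(z_n \mid z_1, \dots, z_{n-1})}
  = \eta_n \, P(z_n \mid x)\, P(x \mid z_1, \dots, z_{n-1})
```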
31
Example Second Measurement
  • P(z2 | open) = 0.5, P(z2 | ¬open) = 0.6
  • P(open | z1) = 2/3
  • P(open | z1, z2) = (0.5 · 2/3) / (0.5 · 2/3 + 0.6 · 1/3) = 5/8
  • z2 lowers the probability that the door is open.
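Applying the recursive update with the slide's numbers and the posterior from the first measurement:

```python
# Numbers from the slide: P(z2|open) = 0.5, P(z2|~open) = 0.6,
# and the first-measurement posterior P(open|z1) = 2/3.
p_z2_open, p_z2_not_open = 0.5, 0.6
p_open_z1, p_not_open_z1 = 2/3, 1/3

num = p_z2_open * p_open_z1
p_open_z1z2 = num / (num + p_z2_not_open * p_not_open_z1)
print(p_open_z1z2)  # 5/8 = 0.625: z2 lowers the probability the door is open
```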

32
Actions
  • Often the world is dynamic, since
  • actions carried out by the robot,
  • actions carried out by other agents,
  • or just time passing by
  • change the world.
  • How can we incorporate such actions?

33
Typical Actions
  • The robot turns its wheels to move
  • The robot uses its manipulator to grasp an object
  • Plants grow over time
  • Actions are never carried out with absolute
    certainty.
  • In contrast to measurements, actions generally
    increase the uncertainty.

34
Modeling Actions
  • To incorporate the outcome of an action u into
    the current belief, we use the conditional pdf
  • P(x' | u, x)
  • This term specifies the probability that executing u
    changes the state from x to x'.

35
Example Closing the door
36
State Transitions
  • P(x' | u, x) for u = "close door"
  • If the door is open, the action "close door"
    succeeds in 90% of all cases.

37
Integrating the Outcome of Actions
Continuous case Discrete case
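Written out, the two cases (the equation images are missing from the transcript) are:

```latex
\text{Continuous:}\quad P(x) = \int P(x \mid u, x')\, P(x')\, dx'
```
```latex
\text{Discrete:}\quad P(x) = \sum_{x'} P(x \mid u, x')\, P(x')
```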
38
Example The Resulting Belief
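The figure for this slide is missing; the update can be reproduced numerically. The prior here is assumed to be the posterior from the two-measurement door example (P(open) = 5/8), combined with the "close door" transition model from the previous slide:

```python
# Assumed belief before acting: the two-measurement posterior P(open) = 5/8.
bel_open, bel_closed = 5/8, 3/8

# Transition model for u = "close door" (from the previous slide):
p_closed_from_open = 0.9    # closing an open door succeeds 90% of the time
p_open_from_open = 0.1
p_closed_from_closed = 1.0  # a closed door stays closed

# Integrate the outcome of the action (discrete total probability):
bel_closed_new = p_closed_from_open * bel_open + p_closed_from_closed * bel_closed
bel_open_new = p_open_from_open * bel_open
print(bel_open_new, bel_closed_new)  # 1/16 and 15/16
```

The action shifts probability mass toward "closed" but, being noisy, cannot make the belief certain.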
39
Bayes Filters Framework
  • Given:
  • Stream of observations z and action data u
  • Sensor model P(z | x)
  • Action model P(x' | u, x)
  • Prior probability of the system state P(x)
  • Wanted:
  • Estimate of the state X of a dynamical system
  • The posterior of the state is also called belief:
    Bel(xt) = P(xt | u1, z1, …, ut, zt)

40
Markov Assumption
Measurement probability: p(zt | x0:t, z1:t−1, u1:t) = p(zt | xt)
State transition probability: p(xt | x0:t−1, z1:t−1, u1:t) = p(xt | xt−1, ut)
  • Markov Assumption:
  • past and future data are independent if one knows
    the current state
  • Underlying assumptions:
  • Static world, independent noise
  • Perfect model, no approximation errors

41
Bayes Filters
z = observation, u = action, x = state

Bel(xt) = η P(zt | xt) ∫ P(xt | ut, xt−1) Bel(xt−1) dxt−1
42
Bayes Filter Algorithm
  1. Algorithm Bayes_filter(Bel(x), d):
  2. η = 0
  3. If d is a perceptual data item z then
  4. For all x do: Bel'(x) = P(z | x) Bel(x); η = η + Bel'(x)
  5. For all x do: Bel'(x) = η⁻¹ Bel'(x)
  6. Else if d is an action data item u then
  7. For all x do: Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
  8. Return Bel'(x)
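A minimal sketch of this algorithm for a finite state space, reusing the door-domain numbers from the earlier slides. The callables `p_z` and `p_x` are illustrative stand-ins for the sensor model P(z | x) and action model P(x' | u, x):

```python
# Discrete Bayes filter: one update per data item (measurement or action).

def bayes_filter(bel, d, kind, p_z=None, p_x=None):
    """bel: dict mapping state -> probability; d: a measurement z or an action u."""
    if kind == "measurement":
        # Correction: weight by the sensor model, then normalize.
        new = {x: p_z(d, x) * bel[x] for x in bel}
        eta = sum(new.values())
        return {x: v / eta for x, v in new.items()}
    else:
        # Prediction: push the belief through the action model.
        return {x: sum(p_x(x, d, xp) * bel[xp] for xp in bel) for x in bel}

# Door-domain models, using the numbers from the earlier slides:
def p_z(z, x):
    return {"open": 0.6, "closed": 0.3}[x]   # P(z = "door detected" | x)

def p_x(x_new, u, x_old):                    # P(x_new | u = "close", x_old)
    table = {("closed", "open"): 0.9, ("open", "open"): 0.1,
             ("closed", "closed"): 1.0, ("open", "closed"): 0.0}
    return table[(x_new, x_old)]

bel = {"open": 0.5, "closed": 0.5}
bel = bayes_filter(bel, "door detected", "measurement", p_z=p_z)  # open -> 2/3
bel = bayes_filter(bel, "close", "action", p_x=p_x)               # closed -> 14/15
```

The same function handles both branches of the algorithm; only the measurement branch needs the normalizer η.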

43
Bayes Filters are Familiar!
  • Kalman filters
  • Particle filters
  • Hidden Markov models
  • Dynamic Bayesian networks
  • Partially Observable Markov Decision Processes
    (POMDPs)

44
Summary
  • Bayes rule allows us to compute probabilities
    that are hard to assess otherwise.
  • Under the Markov assumption, recursive Bayesian
    updating can be used to efficiently combine
    evidence.
  • Bayes filters are a probabilistic tool for
    estimating the state of dynamic systems.

45
Thank You
Homework 3: Exercises 2 and 3 on pp. 36–37 of the
textbook.
Next class: March 3, 2008