Title: Robot Localization
1. Robot Localization
- The localization problem
  - Given:
    - a map of the world
    - ego-centric observations of the world
  - How can a robot estimate its position and orientation w.r.t. the map?
These notes closely follow chapters in Probabilistic Robotics by Thrun, Burgard, and Fox.
2. The problem
- Assumptions/limitations
  - We're dealing with mobile robots here; the robot's operation space is the 2-D plane.
  - The robot is equipped with a sensor system that characterizes the environment in some relevant way:
    - time-of-flight laser ranging scanner
    - camera / image-processing system
- Example: SICK LMS200
  - Distance is measured by time-of-flight of a laser beam.
  - One laser beam is deflected by a mirror so that it sweeps through a 180° arc.
  - For each point on the arc, time-of-flight is measured, resulting in a distance.
  - The result is a two-dimensional range image.
3. Joint probability, independence
Joint probability: the probability that both $a$ and $b$ are true, written $P(a, b)$.
If $a$ and $b$ are independent, then
$$P(a, b) = P(a)\,P(b)$$
4. Conditional probability
- Conditional probability: the probability of $a$ given $b$, written $P(a \mid b)$.
Product rule for probabilities:
$$P(a, b) = P(a \mid b)\,P(b)$$
where $P(a, b)$ is the joint probability of both events.
Marginalization (theorem of total probability):
$$P(a) = \sum_b P(a, b) = \sum_b P(a \mid b)\,P(b)$$
5. Bayes Rule
Bayes rule:
$$P(a_i \mid b) = \frac{P(b \mid a_i)\,P(a_i)}{\sum_j P(b \mid a_j)\,P(a_j)}$$
where $i$ indexes the set of possible events.
- $P(b \mid a_i)$: likelihood
- $P(a_i)$: prior
- $P(a_i \mid b)$: posterior
- $\sum_j P(b \mid a_j)\,P(a_j)$: normalizing constant
6. Bayes Rule Example
- Consider the following problem:
  - 1% of women above age 40 have breast cancer.
  - 80% of women with breast cancer will get a positive mammogram.
  - 9.6% of women without breast cancer will also get a positive mammogram.
  - A woman gets a positive mammogram; what's the probability that she has breast cancer?
  - (Most doctors answer 70-80%.)
7. Bayes Rule Example
- 1% of women above age 40 have breast cancer: $P(C) = 0.01$.
- 80% of women with breast cancer will get a positive mammogram: $P(M \mid C) = 0.8$.
- 9.6% of women without breast cancer will also get a positive mammogram: $P(M \mid \neg C) = 0.096$.
8. Bayes Rule Example
Probability of breast cancer and a positive mammogram:
$$P(C, M) = P(M \mid C)\,P(C) = 0.8 \times 0.01 = 0.008$$
9. Bayes Rule Example
Probability of breast cancer and a positive mammogram:
$$P(C, M) = P(M \mid C)\,P(C) = 0.8 \times 0.01 = 0.008$$
Probability of a positive mammogram:
$$P(M) = P(M \mid C)\,P(C) + P(M \mid \neg C)\,P(\neg C) = 0.8 \times 0.01 + 0.096 \times 0.99 = 0.10304$$
In this expression, we have marginalized over the breast cancer variable.
Probability of breast cancer given a positive mammogram:
$$P(C \mid M) = \frac{P(C, M)}{P(M)} = \frac{0.008}{0.10304} \approx 0.078$$
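As a quick sanity check, here is a minimal Python sketch of the same computation (the variable names are my own):

```python
# Bayes rule sanity check for the mammogram example.
p_cancer = 0.01              # P(C): prior probability of breast cancer
p_pos_given_cancer = 0.80    # P(M | C): sensitivity
p_pos_given_healthy = 0.096  # P(M | ~C): false-positive rate

# Marginalize over the cancer variable to get P(M).
p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_healthy * (1.0 - p_cancer))

# Bayes rule: P(C | M) = P(M | C) P(C) / P(M).
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(f"P(cancer | positive mammogram) = {p_cancer_given_pos:.3f}")  # ~0.078
```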
10. Conditional independence
Conditional independence:
$$P(a, b \mid c) = P(a \mid c)\,P(b \mid c)$$
This implies
$$P(a \mid b, c) = P(a \mid c) \quad \text{and} \quad P(b \mid a, c) = P(b \mid c)$$
11. Conditioning on Other Random Variables
Bayes rule still holds when every term is conditioned on an additional variable:
$$P(a \mid b, c) = \frac{P(b \mid a, c)\,P(a \mid c)}{P(b \mid c)}$$
12. Application to robot localization
Suppose a robot may be in one of $k$ states, $x_1, \dots, x_k$.
- Let's say that the probability that the robot is in a particular state is conditioned on some other variables:
  - a set of sensor measurements, $z_{1:t}$
  - the set of prior actions, $u_{1:t}$
Your estimate of the robot's position can be represented as a probability distribution: the belief distribution
$$bel(x_t) = P(x_t \mid z_{1:t}, u_{1:t})$$
13. Updating the belief distribution over time
- Consider the state of the robot over time:
  - The state changes in response to actions.
  - The state is characterized by sensor evidence.
- We assume that both of these things are non-deterministic:
  - The state of the robot changes stochastically in response to actions.
  - Sensor evidence does not precisely characterize the state of the robot.
14. Evolution of state over time
- $x_{t-1}$: previous state
- $t$: timestep
- $u_t$: action
- $x_t$: next state
We will make the Markov assumption: the probability distribution over future states is conditionally independent of past states, given the current state:
$$P(x_t \mid x_{0:t-1}, u_{1:t}, z_{1:t-1}) = P(x_t \mid x_{t-1}, u_t)$$
15. Bayesian filtering to update belief over time
Belief state at time $t$ due to the action, before incorporating the new measurement:
$$\overline{bel}(x_t) = P(x_t \mid z_{1:t-1}, u_{1:t})$$
Marginalize over the previous state $x_{t-1}$:
$$\overline{bel}(x_t) = \int P(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$$
16. Bayesian filtering to update belief over time
Bayes rule:
$$P(x_t \mid z_{1:t}, u_{1:t}) = \eta\, P(z_t \mid x_t, z_{1:t-1}, u_{1:t})\, P(x_t \mid z_{1:t-1}, u_{1:t})$$
Markov assumption:
$$P(z_t \mid x_t, z_{1:t-1}, u_{1:t}) = P(z_t \mid x_t)$$
17. Bayesian filtering to update belief over time
Combining the two steps gives the measurement update:
$$bel(x_t) = \eta\, P(z_t \mid x_t)\, \overline{bel}(x_t)$$
18. Bayesian filtering algorithm
- Bayes filtering algorithm:
- Repeat on each time step:
  - for all $x_t$:
    - $\overline{bel}(x_t) = \int P(x_t \mid u_t, x_{t-1})\, bel(x_{t-1})\, dx_{t-1}$
    - $bel(x_t) = \eta\, P(z_t \mid x_t)\, \overline{bel}(x_t)$
19. Discrete Bayes filter
Discrete Bayes filter. Input: the prior belief $\{p_{i,t-1}\}$ over discrete states $\{x_i\}$, the action $u_t$, and the measurement $z_t$.
- Repeat on each time step:
  - for all $k$:
    - $\bar{p}_{k,t} = \sum_i P(X_t = x_k \mid u_t, X_{t-1} = x_i)\, p_{i,t-1}$
    - $p_{k,t} = \eta\, P(z_t \mid X_t = x_k)\, \bar{p}_{k,t}$
A Python sketch of one step follows.
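Here is a minimal Python sketch of one step of the discrete Bayes filter described above (the array layout is my own assumption, not from the notes):

```python
import numpy as np

def discrete_bayes_filter(belief, transition, likelihood):
    """One step of the discrete Bayes filter.

    belief     -- length-k array: p_{i,t-1}, the prior belief over states
    transition -- k-by-k array: transition[j, i] = P(x_j | u_t, x_i)
                  (each column i sums to 1)
    likelihood -- length-k array: P(z_t | x_k) for the current measurement
    """
    # Prediction: marginalize over the previous state.
    predicted = transition @ belief
    # Correction: weight by the measurement likelihood, then normalize.
    posterior = likelihood * predicted
    return posterior / posterior.sum()
```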
20. Grid localization example
- Robot localization:
  - The robot is in one of sixteen cells.
  - Actions are deterministic.
  - The robot senses walls using a bump sensor.
- Can the robot localize itself over the course of time? A sketch of one filter step for this setup follows.
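A minimal sketch of this scenario, simplified to a 1-D corridor of sixteen cells (the wall pattern, sensor noise rate, and function names below are made up for illustration):

```python
import numpy as np

# Hypothetical 1-D version of the 16-cell grid: the robot moves right one
# cell per step (deterministic action) and a bump sensor reports whether
# the current cell has an adjacent wall. The wall pattern is invented.
walls = np.array([1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0])
K = len(walls)

belief = np.full(K, 1.0 / K)  # start fully uncertain

def move_right(belief):
    """Deterministic shift by one cell (robot stays put at the last cell)."""
    shifted = np.roll(belief, 1)
    shifted[0] = 0.0
    shifted[-1] += belief[-1]
    return shifted

def sense(belief, bump, p_correct=0.9):
    """Weight each cell by how well the bump reading matches the map."""
    likelihood = np.where(walls == bump, p_correct, 1.0 - p_correct)
    posterior = likelihood * belief
    return posterior / posterior.sum()

# One filter step: act, then sense.
belief = move_right(belief)
belief = sense(belief, bump=1)
print(np.round(belief, 3))
```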
21. Particle filter implementation of the Bayes filter
- Whereas the discrete Bayes filter estimated the posterior probabilities for a static set of states, the particle filter adapts its representation to the distribution being estimated.
- Non-parametric.
- Each particle is a hypothesis regarding the true state of the robot.
- Typically we use a large number of samples.
- In the limit, the probability that a sample is included in the set is directly proportional to the posterior probability.
22. Particle filter implementation of the Bayes filter
1. Particle set $\mathcal{X}_t$ initialized to null
2. For $m = 1$ to $M$:
3.   sample $x_t^{[m]} \sim P(x_t \mid u_t, x_{t-1}^{[m]})$ (sample from the posterior after the control)
4.   $w_t^{[m]} = P(z_t \mid x_t^{[m]})$ (weight the particle based on the measurement)
5.   $\bar{\mathcal{X}}_t = \bar{\mathcal{X}}_t + \langle x_t^{[m]}, w_t^{[m]} \rangle$ (temporary sample set)
6. Next
7. For $m = 1$ to $M$:
8.   draw $i$ with probability $\propto w_t^{[i]}$
9.   add $x_t^{[i]}$ to $\mathcal{X}_t$ (update the sample set)
10. Next
A Python sketch of one step appears below.
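A minimal Python sketch of one step of this algorithm (the sample_motion and measurement_prob arguments stand in for the motion and measurement models discussed on the following slides):

```python
import numpy as np

def particle_filter_step(particles, u, z, sample_motion, measurement_prob, rng):
    """One step of the particle filter (a sketch).

    particles        -- (M, d) array of state hypotheses from time t-1
    sample_motion    -- sample_motion(x, u, rng) -> next-state sample
    measurement_prob -- measurement_prob(z, x) -> P(z | x)
    rng              -- a numpy Generator, e.g. np.random.default_rng()
    """
    M = len(particles)
    # Lines 3-4 of the algorithm: propagate each particle through the
    # motion model and weight it by the measurement likelihood.
    moved = np.array([sample_motion(x, u, rng) for x in particles])
    weights = np.array([measurement_prob(z, x) for x in moved])
    weights /= weights.sum()
    # Lines 8-9: resample with replacement, proportional to the weights.
    idx = rng.choice(M, size=M, p=weights)
    return moved[idx]
```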
23. Particle filter implementation of the Bayes filter
- Line 3: sample from the posterior after the control
  - implements the forward model of the action
  - samples from the forward model, given the previous state and action
- Line 4: weight the particle based on the measurement
  - the weight $w_t^{[m]}$ is called the importance factor
  - particles that match the observation are weighted more heavily
24. Particle filter implementation of the Bayes filter
- Line 8: draw $i$ with probability $\propto w_t^{[i]}$
- Line 9: add $x_t^{[i]}$ to the sample set
  - the re-sampling step
  - randomly draws $M$ elements, with replacement, from $\bar{\mathcal{X}}_t$, with probability proportional to the importance factor
25. Particle filter example
from Dieter Fox's localization examples
26. Mobile robot motion models
- Management of the sample set:
  - The advantage of the particle filter is that it can focus representation on high-probability regions of the hypothesis space.
  - Associated problem: you need to inject randomness somehow.
  - Where does randomness come from in the algorithm? Is this always a sufficient source of random particles?
  - Add extra random particles?
27. Application of the particle filter to mobile robot localization
- Robot motion model: $P(x_t \mid u_t, x_{t-1})$
- Environment measurement model: $P(z_t \mid x_t)$
28. Robot motion model
- State transition model: the probability of the next state given the current state and control action, $P(x_t \mid u_t, x_{t-1})$.
- In general, we would have to represent this distribution somehow. (Reinforcement learning encodes the distribution as a multinomial.)
- The particle filter only requires us to sample from the distribution.
29. Geometry of robot motion
If we knew the exact velocity $v$ and angular velocity $\omega$, then we could perfectly update the robot pose $(x, y, \theta)$ over a timestep $\Delta t$ (assuming $\omega \neq 0$):
$$x' = x - \frac{v}{\omega}\sin\theta + \frac{v}{\omega}\sin(\theta + \omega \Delta t)$$
$$y' = y + \frac{v}{\omega}\cos\theta - \frac{v}{\omega}\cos(\theta + \omega \Delta t)$$
$$\theta' = \theta + \omega \Delta t$$
30. Stochastic robot motions
Since our measurements of velocity and angular velocity are subject to noise, the following is more realistic: perturb the commanded $(v, \omega)$ with noise before applying the update,
$$\hat{v} = v + \varepsilon_v, \qquad \hat{\omega} = \omega + \varepsilon_\omega$$
A sampling sketch follows.
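A minimal sampling sketch of such a noisy velocity motion model, in the style of Thrun et al. (the Gaussian noise magnitudes are made-up placeholders):

```python
import numpy as np

def sample_motion_model(pose, u, dt, rng, noise=(0.05, 0.05)):
    """Sample a successor pose from a noisy velocity motion model
    (a sketch; the noise model is simplified to additive Gaussian noise
    with invented standard deviations).

    pose -- (x, y, theta)
    u    -- commanded (v, omega)
    """
    x, y, theta = pose
    v, w = u
    # Perturb the commanded velocities to reflect actuation noise.
    v_hat = v + rng.normal(0.0, noise[0])
    w_hat = w + rng.normal(0.0, noise[1])
    if abs(w_hat) < 1e-9:  # straight-line motion when omega is ~0
        return (x + v_hat * dt * np.cos(theta),
                y + v_hat * dt * np.sin(theta),
                theta)
    # Exact circular-arc update with the perturbed velocities.
    r = v_hat / w_hat
    return (x - r * np.sin(theta) + r * np.sin(theta + w_hat * dt),
            y + r * np.cos(theta) - r * np.cos(theta + w_hat * dt),
            theta + w_hat * dt)
```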
31. Stochastic robot motions: example
(Figure: samples from the stochastic motion model.)
32. Measurement model: likelihood fields
Estimate the probability of an observation given the current state and a known map: $P(z_t \mid x_t, m)$.
- Assume there are two sources for an observation:
  - The sensor beam hit an object.
  - The sensor beam did not see an object and reports an object at the edge of its range.
33. Measurement model: likelihood fields
- The sensor beam hit an object:
  - Assume that the probability of hitting an object falls off with the distance from the nearest object in the map (a Gaussian in that distance).
- The sensor beam did not see an object and reports an object at the edge of its range:
  - This is modeled by a peak in the distribution at the max range.
34. Measurement model: likelihood fields
(Figure: example likelihood fields.)
35. Likelihood field algorithm
Input: a range scan $z_t$, the robot pose $x_t$, and the map $m$. For each beam endpoint, find the distance to the nearest obstacle in the map, look up its likelihood, and multiply the per-beam likelihoods together. A Python sketch follows.
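A minimal Python sketch of the likelihood field model under these assumptions (the parameter values and the nearest_obstacle_dist helper are hypothetical):

```python
import numpy as np

def likelihood_field_prob(z, pose, nearest_obstacle_dist,
                          sigma_hit=0.2, z_hit=0.9, z_rand=0.1, z_max=8.0):
    """Likelihood of a range scan under a precomputed likelihood field
    (a sketch; nearest_obstacle_dist is assumed to map an (x, y) point
    to the distance of the nearest obstacle in the map).

    z    -- list of (range, beam_angle) readings
    pose -- robot pose (x, y, theta)
    """
    x, y, theta = pose
    q = 1.0
    for r, a in z:
        if r >= z_max:  # max-range readings are handled by the range peak
            continue
        # Project the beam endpoint into world coordinates.
        ex = x + r * np.cos(theta + a)
        ey = y + r * np.sin(theta + a)
        d = nearest_obstacle_dist(ex, ey)
        # Gaussian in the distance to the nearest obstacle, plus a small
        # uniform term for random measurements.
        p_hit = np.exp(-0.5 * (d / sigma_hit) ** 2)
        q *= z_hit * p_hit + z_rand / z_max
    return q
```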
36. Measurement model: drawbacks
- This approach does not consider the case that a line of sight may be blocked when computing the likelihood of a given sensor reading.
- The likelihood is computed as if the sensor beam could see through walls.
37. Landmark-based approaches
- Instead of considering an undifferentiated map, assume that some set of landmarks exists that the robot is capable of identifying.
- The robot can identify the distance and bearing to a landmark.
- Note that this does not uniquely identify the robot's location: the robot may be anywhere on a circle around the landmark.
38. Monte Carlo localization
1. Particle set $\mathcal{X}_t$ initialized to null
2. For $m = 1$ to $M$:
3.   $x_t^{[m]} = \text{sample\_motion\_model}(u_t, x_{t-1}^{[m]})$
4.   $w_t^{[m]} = \text{measurement\_model}(z_t, x_t^{[m]}, m)$
5.   $\bar{\mathcal{X}}_t = \bar{\mathcal{X}}_t + \langle x_t^{[m]}, w_t^{[m]} \rangle$
6. Next
7. For $m = 1$ to $M$:
8.   draw $i$ with probability $\propto w_t^{[i]}$
9.   add $x_t^{[i]}$ to $\mathcal{X}_t$ (update the sample set)
10. Next
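Putting the earlier sketches together, one MCL step might look like the following (this reuses the hypothetical sample_motion_model and likelihood_field_prob functions defined above):

```python
import numpy as np

def mcl_step(particles, u, z, dt, nearest_obstacle_dist, rng):
    """One Monte Carlo localization step: the particle filter of slide 22
    with the motion and measurement models plugged in. Depends on the
    sample_motion_model and likelihood_field_prob sketches given earlier."""
    # Lines 3-4: propagate through the motion model, weight by measurement.
    moved = [sample_motion_model(p, u, dt, rng) for p in particles]
    weights = np.array([likelihood_field_prob(z, p, nearest_obstacle_dist)
                        for p in moved])
    weights /= weights.sum()
    # Lines 8-9: resample with replacement, proportional to the weights.
    idx = rng.choice(len(moved), size=len(moved), p=weights)
    return [moved[i] for i in idx]
```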
39. Monte Carlo Localization
- Advantages:
  - easy to implement
  - representation adapts to the posterior distribution
- Potential problems:
  - cannot recover from the kidnapped robot problem
  - remedy: inject random particles
40-41. Monte Carlo Localization videos
42. Using the Bayes filter for localization: Markov localization
Estimate the belief state as a Gaussian distribution.
43. Using the Bayes filter for localization: Markov localization
(Figure: Markov localization.)
44. Binary Bayesian filtering, static state: log-odds
- Binary state:
  - The state is static and takes on two values, $x$ and $\neg x$.
Log odds:
$$l_t = \log \frac{P(x \mid z_{1:t})}{1 - P(x \mid z_{1:t})}$$
45. Binary Bayesian filtering, static state: log-odds
The log-odds representation simplifies the Bayes filter expression:
$$l_t = l_{t-1} + \log \frac{P(x \mid z_t)}{1 - P(x \mid z_t)} - \log \frac{P(x)}{1 - P(x)}$$
46. Binary Bayesian filtering, static state: log-odds
The belief can be recovered from the log-odds at any time:
$$P(x \mid z_{1:t}) = 1 - \frac{1}{1 + \exp(l_t)}$$
A Python sketch of this update follows.
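A minimal Python sketch of the log-odds update and the recovery of the belief (the measurement probabilities in the example are made up):

```python
import numpy as np

def log_odds(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

def binary_bayes_update(l, p_x_given_z, p_x_prior=0.5):
    """Log-odds update for a static binary state: add the inverse sensor
    model's log-odds and subtract the prior's log-odds."""
    return l + log_odds(p_x_given_z) - log_odds(p_x_prior)

# Example: start at the prior, then fuse two measurements that each say
# "x holds with probability 0.7" (numbers invented for illustration).
l = log_odds(0.5)
for _ in range(2):
    l = binary_bayes_update(l, 0.7)
p = 1.0 - 1.0 / (1.0 + np.exp(l))  # recover the probability
print(round(p, 3))  # ~0.845
```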