Title: Bayesian Filtering
1Bayesian Filtering
Dieter Fox
2Probabilistic Robotics
- Key idea: explicit representation of uncertainty (using the calculus of probability theory)
- Perception = state estimation
- Control = utility optimization
3Bayes Filters Framework
- Given
- Stream of observations z and action data u
- Sensor model P(z | x).
- Action model P(x | u, x').
- Prior probability of the system state P(x).
- Wanted
- Estimate of the state X of a dynamical system.
- The posterior of the state is also called the belief.
4Markov Assumption
- Underlying Assumptions
- Static world
- Independent noise
- Perfect model, no approximation errors
5Bayes Filters
z = observation, u = action, x = state
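For reference, the recursion these slides build on computes the belief Bel(x_t) = P(x_t | u_1, z_1, ..., u_t, z_t) as

Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

where η is a normalizing constant; the integral is the prediction step (action model), and the multiplication by P(z_t | x_t) is the correction step (sensor model).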
6Bayes Filters are Familiar!
- Kalman filters
- Particle filters
- Hidden Markov models
- Dynamic Bayesian networks
- Partially Observable Markov Decision Processes
(POMDPs)
7Localization
"Using sensory information to locate the robot in its environment is the most fundamental problem to providing a mobile robot with autonomous capabilities." [Cox '91]
- Given
- Map of the environment.
- Sequence of sensor measurements.
- Wanted
- Estimate of the robot's position.
- Problem classes
- Position tracking
- Global localization
- Kidnapped robot problem (recovery)
8Bayes Filters for Robot Localization
9Probabilistic Kinematics
- Odometry information is inherently noisy.
p(x | u, x')
[Figure: distribution of poses x resulting from executing action u at pose x']
10Proximity Measurement
- Measurement can be caused by
- a known obstacle.
- cross-talk.
- an unexpected obstacle (people, furniture, ...).
- missing all obstacles (total reflection, glass, ...).
- Noise is due to uncertainty
- in measuring distance to known obstacle.
- in position of known obstacles.
- in position of additional obstacles.
- whether obstacle is missed.
11Mixture Density
How can we determine the model parameters?
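One way to make the question concrete: a minimal sketch of the kind of four-component mixture commonly used for range sensors (hit, unexpected obstacle, random, and max-range terms). The parameter names and default values below are illustrative assumptions; in practice the parameters are fit to logged data such as the histograms on the following slides, e.g. by expectation maximization.

import numpy as np

def beam_likelihood(z, z_expected, z_max,
                    w_hit=0.7, w_short=0.1, w_rand=0.1, w_max=0.1,
                    sigma_hit=20.0, lambda_short=0.01):
    """p(z | expected distance) as a four-component mixture (illustrative sketch)."""
    # Hit: Gaussian around the expected distance to the known obstacle.
    p_hit = np.exp(-0.5 * ((z - z_expected) / sigma_hit) ** 2) / (sigma_hit * np.sqrt(2 * np.pi))
    # Short: exponential, for unexpected obstacles in front of the known one.
    p_short = lambda_short * np.exp(-lambda_short * z) if z <= z_expected else 0.0
    # Random: uniform over the measurement range.
    p_rand = 1.0 / z_max
    # Max range: mass at the maximum reading (total reflection, glass, ...).
    p_max = 1.0 if z >= z_max else 0.0
    return w_hit * p_hit + w_short * p_short + w_rand * p_rand + w_max * p_max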
12Raw Sensor Data
Measured distances for an expected distance of 300 cm.
[Figures: sonar and laser measurement histograms]
13Approximation Results
[Figures: fitted mixture densities for sonar and laser at expected distances of 300 cm and 400 cm]
14Representations for Bayesian Robot Localization
- Kalman filters (late-80s?)
- Gaussians
- approximately linear models
- position tracking
- Discrete approaches (95)
- Topological representation (95)
- uncertainty handling (POMDPs)
- occasional global localization, recovery
- Grid-based, metric representation (96)
- global localization, recovery
- Particle filters (99)
- sample-based representation
- global localization, recovery
- Multi-hypothesis (00)
- multiple Kalman filters
- global localization, recovery
15Discrete Grid Filters
16Piecewise Constant Representation
17Grid-based Localization
18Sonars and Occupancy Grid Map
19Tree-based Representation
Idea: represent the density using a variant of octrees
20Tree-based Representations
- Efficient in space and time
- Multi-resolution
21Particle Filters
22Particle Filters
- Represent belief by random samples
- Estimation of non-Gaussian, nonlinear processes
- Also known as: Monte Carlo filter, survival of the fittest, Condensation, bootstrap filter, particle filter
- Filtering: Rubin '88, Gordon et al. '93, Kitagawa '96
- Computer vision: Isard and Blake '96, '98
- Dynamic Bayesian networks: Kanazawa et al. '95
23Importance Sampling
Weight samples: w = f / g
24Particle Filter Algorithm
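A minimal sketch of one update of the bootstrap particle filter (sample from the motion model as the proposal, weight by the sensor model, then resample). The helper functions sample_motion and measurement_likelihood are assumed to be supplied by the user and are not part of the slide.

import numpy as np

def particle_filter_step(particles, u, z, sample_motion, measurement_likelihood, rng=np.random):
    """One update of a basic (bootstrap) particle filter.

    particles: array of shape (N, state_dim)
    sample_motion(x, u): draws x' ~ p(x' | u, x)        (assumed user-supplied)
    measurement_likelihood(z, x): returns p(z | x)       (assumed user-supplied)
    """
    N = len(particles)
    # Prediction: propagate each particle through the motion model.
    predicted = np.array([sample_motion(x, u) for x in particles])
    # Correction: importance weights proportional to p(z | x).
    weights = np.array([measurement_likelihood(z, x) for x in predicted])
    weights /= weights.sum()
    # Resampling: draw N particles with probability proportional to their weights.
    idx = rng.choice(N, size=N, p=weights)
    return predicted[idx]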
25Particle Filters
26Sensor Information Importance Sampling
27Robot Motion
28Sensor Information Importance Sampling
29Robot Motion
30Sample-based Localization (sonar)
31Using Ceiling Maps for Localization
Dellaert et al. 99
32Vision-based Localization
33Under a Light
Measurement z
P(z | x)
34Next to a Light
Measurement z
P(z | x)
35Elsewhere
Measurement z
P(z | x)
36Global Localization Using Vision
37Localization for AIBO robots
38Adaptive Sampling
39KLD-sampling
- Idea
- Assume we know the true belief.
- Represent this belief as a multinomial distribution.
- Determine the number of samples such that we can guarantee that, with probability (1 - δ), the KL-distance between the true posterior and the sample-based approximation is less than ε.
- Observation
- For fixed δ and ε, the number of samples only depends on the number k of bins with support.
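For concreteness, a sketch of the resulting sample-size bound, following the standard KLD-sampling derivation (Wilson-Hilferty approximation of the chi-square quantile); treat the exact constants as an assumption rather than a transcription of the slide.

from statistics import NormalDist

def kld_sample_size(k, epsilon, delta):
    """Approximate number of samples so that, with probability 1 - delta, the
    KL-distance between the sample-based approximation and the true k-bin
    multinomial belief stays below epsilon."""
    if k <= 1:
        return 1
    z = NormalDist().inv_cdf(1.0 - delta)   # upper (1 - delta) quantile of N(0, 1)
    a = 2.0 / (9.0 * (k - 1))
    return int((k - 1) / (2.0 * epsilon) * (1.0 - a + (a ** 0.5) * z) ** 3)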
40Example Run Sonar
41Example Run Laser
42Kalman Filters
43Bayes Filter Reminder
44Gaussians
45Gaussians and Linear Functions
46Kalman Filter Updates in 1D
47Kalman Filter Algorithm
- Algorithm Kalman_filter(μ_{t-1}, Σ_{t-1}, u_t, z_t)
- Prediction
- μ̄_t = A_t μ_{t-1} + B_t u_t
- Σ̄_t = A_t Σ_{t-1} A_t^T + R_t
- Correction
- K_t = Σ̄_t C_t^T (C_t Σ̄_t C_t^T + Q_t)^(-1)
- μ_t = μ̄_t + K_t (z_t - C_t μ̄_t)
- Σ_t = (I - K_t C_t) Σ̄_t
- Return μ_t, Σ_t
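A compact numerical sketch of the same cycle for a linear-Gaussian model x_t = A x_{t-1} + B u_t + motion noise (covariance R), z_t = C x_t + measurement noise (covariance Q); the matrix names mirror the algorithm above.

import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    """One prediction/correction cycle of a linear Kalman filter."""
    # Prediction: propagate mean and covariance through the linear motion model.
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction: Kalman gain, then update with the measurement innovation.
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new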
48Nonlinear Dynamic Systems
- Most realistic robotic problems involve nonlinear
functions
49Linearity Assumption Revisited
50Non-linear Function
51EKF Linearization (1)
52EKF Linearization (2)
53EKF Linearization (3)
54Particle Filter Projection
55Density Extraction
56Sampling Variance
57EKF Algorithm
- Extended_Kalman_filter(μ_{t-1}, Σ_{t-1}, u_t, z_t)
- Prediction
- μ̄_t = g(u_t, μ_{t-1})
- Σ̄_t = G_t Σ_{t-1} G_t^T + R_t
- Correction
- K_t = Σ̄_t H_t^T (H_t Σ̄_t H_t^T + Q_t)^(-1)
- μ_t = μ̄_t + K_t (z_t - h(μ̄_t))
- Σ_t = (I - K_t H_t) Σ̄_t
- Return μ_t, Σ_t
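And a corresponding sketch for the EKF, where g and h are the nonlinear motion and measurement functions and G_jac, H_jac return their Jacobians at the current estimate (all four assumed to be user-supplied).

import numpy as np

def extended_kalman_filter(mu, Sigma, u, z, g, h, G_jac, H_jac, R, Q):
    """One EKF prediction/correction cycle."""
    # Prediction through the nonlinear motion model, linearized by G_t.
    mu_bar = g(u, mu)
    G = G_jac(u, mu)
    Sigma_bar = G @ Sigma @ G.T + R
    # Correction with the nonlinear measurement model, linearized by H_t.
    H = H_jac(mu_bar)
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + Q)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new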
58Landmark-based Localization
59EKF Prediction Step
60EKF Observation Prediction Step
61EKF Correction Step
62Estimation Sequence (1)
63Estimation Sequence (2)
64Comparison to Ground Truth
65EKF Summary
- Highly efficient: polynomial in measurement dimensionality k and state dimensionality n, O(k^2.376 + n^2)
- Not optimal!
- Can diverge if nonlinearities are large!
- Works surprisingly well even when all assumptions are violated!
66Linearization via Unscented Transform
EKF
UKF
67UKF Sigma-Point Estimate (2)
EKF
UKF
68UKF Sigma-Point Estimate (3)
EKF
UKF
69Unscented Transform
- Sigma points
- Weights
- Pass sigma points through the nonlinear function
- Recover mean and covariance
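A minimal sketch of the basic (unscaled) unscented transform; the symmetric sigma-point set and the κ parameter are standard choices assumed here, not read off the slide.

import numpy as np

def unscented_transform(mu, Sigma, g, kappa=1.0):
    """Propagate a Gaussian (mu, Sigma) through a nonlinear function g."""
    n = len(mu)
    # Sigma points: the mean plus/minus scaled columns of a matrix square root.
    sqrt_S = np.linalg.cholesky((n + kappa) * Sigma)
    points = [mu] + [mu + sqrt_S[:, i] for i in range(n)] + [mu - sqrt_S[:, i] for i in range(n)]
    # Weights: the center point gets kappa/(n+kappa), the others 1/(2(n+kappa)).
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    # Pass sigma points through the nonlinear function, then recover the moments.
    Y = np.array([g(p) for p in points])
    mu_y = w @ Y
    Sigma_y = sum(w[i] * np.outer(Y[i] - mu_y, Y[i] - mu_y) for i in range(2 * n + 1))
    return mu_y, Sigma_y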
70UKF Prediction Step
71UKF Observation Prediction Step
72UKF Correction Step
73EKF Correction Step
74Estimation Sequence
EKF PF UKF
75Estimation Sequence
EKF UKF
76Prediction Quality
EKF UKF
77UKF Summary
- Highly efficient: same complexity as EKF, with a constant factor slower in typical practical applications
- Better linearization than EKF: accurate in the first two terms of the Taylor expansion (EKF only the first term)
- Derivative-free: no Jacobians needed
- Still not optimal!
78SLAM Simultaneous Localization and Mapping
79Mapping with Raw Odometry
80SLAM Simultaneous Localization and Mapping
- Full SLAM
- Online SLAM
- Integrations typically done one at a time
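Spelled out, the two estimation problems are (standard formulation, restored here for readability):

- Full SLAM: p(x_{1:t}, m | z_{1:t}, u_{1:t}), the posterior over the entire robot path and the map.
- Online SLAM: p(x_t, m | z_{1:t}, u_{1:t}) = ∫ ... ∫ p(x_{1:t}, m | z_{1:t}, u_{1:t}) dx_1 ... dx_{t-1}, where the past poses are integrated out, typically one at a time.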
81SLAM Mapping with Kalman Filters
- Map with N landmarks: (2N+3)-dimensional Gaussian
- Can handle hundreds of dimensions
82SLAM Mapping with Kalman Filters
83SLAM Mapping with Kalman Filters
84SLAM Mapping with Kalman Filters
[Figure: map and correlation matrix]
85Graph-SLAM
- Full SLAM technique
- Generates probabilistic links
- Computes map only occasionally
- Based on Information Filter form
86Graph-SLAM Idea
87Robot Poses and Scans Lu and Milios 1997
- Successive robot poses connected by odometry
- Sensor readings yield constraints between poses
- Constraints represented by Gaussians
- Globally optimal estimate
88Loop Closure
- Use scan patches to detect loop closure
- Add new position constraints
- Deform the network based on covariances of matches
Before loop closure
After loop closure
89Efficient Map Recovery
- Minimize the constraint function J_GraphSLAM using standard optimization techniques (gradient descent, Levenberg-Marquardt, conjugate gradient)
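For reference, J_GraphSLAM has the familiar nonlinear least-squares form (notation assumed from the Graph-SLAM literature rather than transcribed from the slide):

J_GraphSLAM = x_0^T Ω_0 x_0 + Σ_t [x_t - g(u_t, x_{t-1})]^T R_t^(-1) [x_t - g(u_t, x_{t-1})] + Σ_{t,i} [z_t^i - h(x_t, m)]^T Q_t^(-1) [z_t^i - h(x_t, m)]

i.e., a sum of quadratic motion and measurement constraints, which is exactly the kind of objective that gradient descent, Levenberg-Marquardt, or conjugate gradient can minimize.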
90Mapping the Allen Center
91Rao-Blackwellised Particle Filters
92Rao-Blackwellized Mapping
Compute a posterior over the map and possible trajectories of the robot:
p(x_{1:t}, m | z_{1:t}, u_{1:t}) = p(m | x_{1:t}, z_{1:t}) · p(x_{1:t} | z_{1:t}, u_{1:t})
(map and trajectory, given measurements and robot motion, factored into map given trajectory times trajectory)
93FastSLAM
[Figure: each particle holds a robot pose plus a 2 x 2 Kalman filter per landmark, for particles 1 ... M; see the data-structure sketch below]
Begin courtesy of Mike Montemerlo
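A sketch of that per-particle data structure (names are illustrative assumptions): each particle carries its own robot pose and one small independent Gaussian, i.e. a 2 x 2 EKF, per landmark, conditioned on that particle's trajectory.

import numpy as np
from dataclasses import dataclass, field

@dataclass
class LandmarkEstimate:
    mean: np.ndarray   # 2-vector: estimated landmark position (x, y)
    cov: np.ndarray    # 2x2 covariance of that estimate

@dataclass
class FastSLAMParticle:
    pose: np.ndarray                               # robot pose (x, y, theta)
    landmarks: dict = field(default_factory=dict)  # landmark id -> LandmarkEstimate
    weight: float = 1.0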
94FastSLAM Simulation
- Up to 100,000 landmarks
- 100 particles
- 10^3 times fewer parameters than EKF SLAM
Blue line: true robot path. Red line: estimated robot path. Black dashed line: odometry.
95Victoria Park Results
- 4 km traverse
- 100 particles
- Uses negative evidence to remove spurious landmarks
Blue path: odometry. Red path: estimated path.
End courtesy of Mike Montemerlo
96Motion Model for Scan Matching
Raw Odometry
Scan Matching
97Rao-Blackwellized Mapping with Scan-Matching
Map: Intel Research Lab, Seattle
Loop Closure
98Rao-Blackwellized Mapping with Scan-Matching
Map: Intel Research Lab, Seattle
Loop Closure
99Rao-Blackwellized Mapping with Scan-Matching
Map: Intel Research Lab, Seattle
100Example (Intel Lab)
- 15 particles
- four times faster than real-time (P4, 2.8 GHz)
- 5cm resolution during scan matching
- 1cm resolution in final map
joint work with Giorgio Grisetti
101Outdoor Campus Map
- 30 particles
- 250x250m2
- 1.75 km (odometry)
- 20cm resolution during scan matching
- 30cm resolution in final map
joint work with Giorgio Grisetti
102DP-SLAM Eliazar Parr
scale: 3 cm
Runs at real-time speed on a 2.4 GHz Pentium 4 at 10 cm/s
103Consistency
104Results obtained with DP-SLAM 2.0 (offline)
Eliazar Parr, 04
105Close up
End courtesy of Eliazar Parr
106Fast-SLAM Summary
- Full and online versions of SLAM
- Factorizes the posterior into robot trajectories (particles) and map (EKFs).
- Landmark locations are independent!
- More efficient proposal distribution through Kalman filter prediction
- Data association per particle
107Ball Tracking in RoboCup
- Extremely noisy (nonlinear) motion of observer
- Inaccurate sensing, limited processing power
- Interactions between target and environment
- Interactions between robot(s) and target
Goal: a unified framework for modeling the ball and its interactions.
108Tracking Techniques
- Kalman Filter
- Highly efficient, robust (even for nonlinear)
- Uni-modal, limited handling of nonlinearities
- Particle Filter
- Less efficient, highly robust
- Multi-modal, nonlinear, non-Gaussian
- Rao-Blackwellised Particle Filter, MHT
- Combines PF with KF
- Multi-modal, highly efficient
109Dynamic Bayes Network for Ball Tracking
[DBN figure, time slices k-2, k-1, k: landmark detections z and robot controls u feed the robot locations r (robot localization, given the map); ball motion modes m and ball observations z feed the ball locations and velocities b (ball tracking)]
110Robot Location
[Same DBN figure, highlighting the robot location nodes r_{k-2}, r_{k-1}, r_k]
111Robot and Ball Location (and velocity)
[Same DBN figure, highlighting the robot locations and the ball locations and velocities b_{k-2}, b_{k-1}, b_k]
112Ball-Environment Interactions
None
Grabbed
Bounced
Deflected
Kicked
113Ball-Environment Interactions
[Figure: transition diagram over the ball motion modes None, Grabbed, Bounced, Deflected, and Kicked. Legible transition labels include the probabilities 0.8, 0.2, 0.1, and 0.9, e.g. "robot loses grab (residual prob.)", "within grab range and robot grabs (prob. from model)", and "kick fails (0.1)"]
114Integrating Discrete Ball Motion Mode
[Same DBN figure, highlighting the discrete ball motion mode nodes m_{k-2}, m_{k-1}, m_k]
115Grab Example (1)
[Same DBN figure, used to illustrate the grab example]
116Grab Example (2)
[Same DBN figure, continuing the grab example]
117Inference Posterior Estimation
[Same DBN figure; the inference goal is the posterior over robot location, ball motion mode, and ball location/velocity given all observations and controls]
118Rao-Blackwellised PF for Inference
- Represent the posterior by random samples
- Each sample contains a robot location, a ball motion mode, and a ball Kalman filter
- Generate the individual components of a particle stepwise, using the factorization below
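One way to spell out that factorization (the conditioning sets here are an assumption, chosen to match the three generation steps on the following slides):

p(r_{1:k}, m_{1:k}, b_k | z_{1:k}, u_{1:k-1}) = p(b_k | m_{1:k}, r_{1:k}, z_{1:k}) · p(m_{1:k} | r_{1:k}, z_{1:k}) · p(r_{1:k} | z_{1:k}, u_{1:k-1})

The last two factors (robot locations and ball motion modes) are represented by samples, while the first factor is tracked analytically by the per-particle Kalman filter.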
119Rao-Blackwellised Particle Filter for Inference
[Figure: one transition of the DBN restricted to robot location r, ball motion mode m, and ball location/velocity b between steps k-1 and k]
- Draw a sample from the previous sample set
120Generate Robot Location
[Figure: step 1: sample the robot location r_k given r_{k-1}, the control u_{k-1}, and the landmark detection z_k]
121Generate Ball Motion Model
[Figure: step 2: sample the ball motion mode m_k given m_{k-1} (and the sampled robot location)]
122Update Ball Location and Velocity
[Figure: step 3: update the ball location and velocity b_k with the per-particle Kalman filter, using the ball observation z_k, the sampled motion mode m_k, and the robot location]
123Importance Resampling
- Weight each sample by the landmark detection likelihood if the observation is a landmark detection, and by the ball observation likelihood if the observation is a ball detection.
- Resample
124Ball-Environment Interaction
125Ball-Environment Interaction
126Tracking and Finding the Ball
- Cluster ball samples by discretizing pan / tilt angles
- Uses negative information
127Experiment Real Robot
- Robot kicks the ball 100 times, then tries to find it afterwards
- Finds the ball in 1.5 seconds on average
128Simulation Runs
Reference Observations
129Comparison to KF (optimized for straight motion)
130Comparison to KF (inflated prediction noise)
131Orientation Errors
[Plot: orientation error in degrees (0-180) versus time in seconds (2-11) for RBPF and KF]
132Conclusions
- Bayesian filters are the most successful technique in robotics (vision?)
- Many instances (Kalman, particle, grid, MHT, RBPF, ...)
- Special case of dynamic Bayesian networks
- Recently: hierarchical models