Kalman Filters and Dynamic Bayesian Networks - PowerPoint PPT Presentation

1
Kalman Filters and Dynamic Bayesian Networks
  • Markoviana Reading Group
  • Srinivas Vadrevu
  • Arizona State University

2
Outline
  • Introduction
  • Gaussian Distribution
    • Introduction
    • Examples (Linear and Multivariate)
  • Kalman Filters
    • General Properties
    • Updating Gaussian Distributions
    • One-dimensional Example
    • Notes about the General Case
    • Applicability of Kalman Filtering
  • Dynamic Bayesian Networks (DBNs)
    • Introduction
    • DBNs and HMMs
    • DBNs and Kalman Filters
    • Constructing DBNs

3
HMMs and Kalman Filters
  • Hidden Markov Models (HMMs)
  • Discrete state variables
  • Used to model sequences of events
  • Kalman Filters
  • Continuous state variables, with Gaussian distributions
  • Used to model noisy continuous observations
  • Examples
  • Predict the motion of a bird flying through dense jungle foliage at dusk
  • Track the direction of a missile from intermittent radar observations

4
Gaussian (Normal) Distribution
  • Central Limit Theorem: the sum of n statistically independent random variables converges for n → ∞ towards the Gaussian distribution (applet illustration)
  • Unlike the binomial and Poisson distributions, the Gaussian is a continuous distribution
  • μ: mean of the distribution (coincides with the mode and median)
  • σ²: variance of the distribution
  • y is a continuous variable (−∞ < y < ∞)
  • A Gaussian distribution is fully defined by its mean and variance
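The Central Limit Theorem can be checked numerically; a minimal sketch, where the choice of Uniform(0, 1) summands and the sample sizes are illustrative assumptions, not from the slides:

```python
# CLT sketch: sums of n independent Uniform(0, 1) variables are
# approximately Gaussian for large n.
import numpy as np

rng = np.random.default_rng(0)
n = 100                                            # variables per sum
samples = rng.uniform(0, 1, size=(50_000, n)).sum(axis=1)

# A Uniform(0, 1) variable has mean 1/2 and variance 1/12, so the sum
# should be approximately Normal(n/2, n/12).
print(samples.mean())   # close to n/2 = 50
print(samples.var())    # close to n/12 ≈ 8.33
```

Since the Gaussian is fully defined by its mean and variance, matching those two moments is enough to pin down the limiting distribution.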

5
Gaussian Distribution Examples
  • Linear (univariate) Gaussian distribution
  • Mean μ and variance σ²
  • Multivariate Gaussian distribution
  • For 3 random variables:
  • Mean μ = (m1, m2, m3)
  • Covariance matrix Σ:
      | v11 v12 v13 |
      | v21 v22 v23 |
      | v31 v32 v33 |
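The three-variable case above can be made concrete; in this sketch the mean vector and covariance entries are made-up illustrative numbers:

```python
# A 3-variable Gaussian: mean vector [m1, m2, m3] and a symmetric
# 3x3 covariance matrix, as laid out on the slide.
import numpy as np

mu = np.array([0.0, 1.0, 2.0])          # (m1, m2, m3)
Sigma = np.array([[1.0, 0.2, 0.0],      # v11 v12 v13
                  [0.2, 1.0, 0.3],      # v21 v22 v23
                  [0.0, 0.3, 1.0]])     # v31 v32 v33

rng = np.random.default_rng(0)
x = rng.multivariate_normal(mu, Sigma, size=200_000)

# Sample moments recover the parameters, since mean and covariance
# fully define the distribution.
print(x.mean(axis=0))   # ≈ mu
print(np.cov(x.T))      # ≈ Sigma
```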

6
Kalman Filters: General Properties
  • Estimate the state and the covariance of the state at any time t, given the observations x1:T = x1, …, xT
  • E.g., estimate the state (location and velocity) of an airplane and its uncertainty, given measurements from an array of sensors
  • The probability of interest is P(yt | x1:T)
  • Filtering the state: T = current time t
  • Predicting the state: T < current time t
  • Smoothing the state: T > current time t

7–8
(No transcript: figure slides)
9
Gaussian Noise Example
  • The next state is a linear function of the current state, plus some Gaussian noise
  • Position update: xt+1 = xt + ẋ Δt
  • Gaussian noise: w ~ N(0, σx²)
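A minimal simulation of such a linear-plus-Gaussian-noise transition; the velocity, time step, and noise level here are illustrative assumptions:

```python
# Linear transition with Gaussian noise:
#   x_{t+1} = x_t + v*dt + w,   w ~ N(0, sigma_x^2)
import numpy as np

rng = np.random.default_rng(0)
dt, v, sigma_x = 1.0, 2.0, 0.5   # illustrative values
x = 0.0
for t in range(10):
    x = x + v * dt + rng.normal(0.0, sigma_x)   # linear part + noise
print(x)   # near the noise-free value 10 * v * dt = 20
```

Because each step adds independent Gaussian noise to a linear function of the state, the state at any time remains Gaussian, which is what the next slide exploits.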

10
Updating Gaussian Distributions
  • The linear Gaussian family of distributions remains closed under the standard Bayesian network operations
  • One-step predicted distribution:
  • The current distribution P(Xt | e1:t) is Gaussian
  • The transition model P(Xt+1 | xt) is linear Gaussian
  • The updated distribution:
  • The predicted distribution P(Xt+1 | e1:t) is Gaussian
  • The sensor model P(et+1 | Xt+1) is linear Gaussian
  • Filtering and prediction (from Section 15.2 of Russell & Norvig)

11–20
(No transcript: derivation and figure slides)
21
One-dimensional Example
  • Update rule (derivations from Russell & Norvig)
  • Compute the new mean and covariance matrix from the previous mean and covariance matrix
  • The variance update is independent of the observation
  • Another variation of the update rule (from Max Welling, Caltech):
  • μt+1 is a weighted mean of the new observation zt+1 and the old mean μt
  • Observation unreliable → σz² is large (more attention to the old mean)
  • Old mean unreliable → σt² is large (more attention to the observation)
  • σ² is the variance (uncertainty); K is the Kalman gain
  • K = 0 → no attention to the measurement
  • K = 1 → complete attention to the measurement
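The one-dimensional update can be sketched in a few lines. This follows the standard random-walk model from Russell & Norvig (transition noise σx², sensor noise σz²); the numeric values are illustrative assumptions:

```python
# 1-D Kalman update for the random-walk model:
#   mu_{t+1}  = ((s_t^2 + s_x^2) * z_{t+1} + s_z^2 * mu_t) / (s_t^2 + s_x^2 + s_z^2)
#   s_{t+1}^2 = (s_t^2 + s_x^2) * s_z^2 / (s_t^2 + s_x^2 + s_z^2)
# written in Kalman-gain form.
def kalman_1d(mu, var, z, var_x, var_z):
    """One filtering step: returns the new mean and variance."""
    p = var + var_x                 # predicted (one-step) variance
    K = p / (p + var_z)             # Kalman gain, 0 <= K <= 1
    mu_new = mu + K * (z - mu)      # weighted mean of observation and old mean
    var_new = (1 - K) * p           # variance update: independent of z
    return mu_new, var_new

mu, var = 0.0, 1.0                  # illustrative prior
for z in [0.9, 1.1, 1.0]:           # illustrative observations
    mu, var = kalman_1d(mu, var, z, var_x=0.1, var_z=0.5)
print(mu, var)
```

Note how the slide's limiting cases fall out: a large σz² drives K toward 0 (ignore the measurement), while a large σt² drives K toward 1 (trust the measurement), and the variance update never looks at the observation itself.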

22
The General Case
  • Multivariate Gaussian distribution
  • The exponent is a quadratic function of the random variables xi in x
  • Temporal model with Kalman filtering:
  • F: linear transition model
  • H: linear sensor model
  • Σx: transition noise covariance
  • Σz: sensor noise covariance
  • Update equations for the mean and covariance
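The general predict/update cycle can be sketched with the slide's F, H, Σx, Σz notation. The tracking scenario (position/velocity state with a position-only sensor) and all matrix values below are illustrative assumptions:

```python
# Multivariate Kalman filter: predict with (F, Sigma_x), then correct
# with the observation through (H, Sigma_z).
import numpy as np

def kalman_step(mu, Sigma, z, F, H, Sigma_x, Sigma_z):
    """One predict + update cycle; returns the new mean and covariance."""
    # Predict
    mu_pred = F @ mu
    Sigma_pred = F @ Sigma @ F.T + Sigma_x
    # Update
    S = H @ Sigma_pred @ H.T + Sigma_z         # innovation covariance
    K = Sigma_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    mu_new = mu_pred + K @ (z - H @ mu_pred)
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_pred
    return mu_new, Sigma_new

# Illustrative setup: state = [position, velocity], observe position only.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity model
H = np.array([[1.0, 0.0]])                     # position sensor
Sigma_x = 0.01 * np.eye(2)                     # transition noise
Sigma_z = np.array([[0.25]])                   # sensor noise

mu, Sigma = np.zeros(2), np.eye(2)
for z in [1.0, 2.1, 2.9, 4.05]:                # noisy positions of x ≈ t
    mu, Sigma = kalman_step(mu, Sigma, np.array([z]), F, H, Sigma_x, Sigma_z)
print(mu)   # estimated [position, velocity]
```

The filter infers the velocity it never observes directly, because the transition model F couples position and velocity across time steps.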

23
Illustration
24
Applicability of Kalman Filtering
  • Popular applications:
  • Navigation, guidance, radar tracking, sonar ranging, satellite orbit computation, stock price prediction, the landing of the Eagle lunar module on the Moon, gyroscopes in airplanes, etc.
  • Extended Kalman Filters (EKFs) can handle nonlinearities:
  • Model the system as locally linear in xt in the region of xt = μt
  • Works well for smooth, well-behaved systems
  • Switching Kalman filters: multiple Kalman filters run in parallel, each using a different model of the system
  • A weighted sum of their predictions is used

25
Applicability of Kalman Filters
26
Dynamic Bayesian Networks
  • Directed graphical models of stochastic processes
  • Extend HMMs by representing the hidden (and observed) state in terms of state variables, with possibly complex interdependencies
  • Any number of state variables and evidence variables
  • Dynamic or temporal Bayesian network? ("Dynamic" refers to the process being modeled, not to a changing network:)
  • The model structure does not change over time
  • The parameters do not change over time
  • Extra hidden nodes can be added (mixtures of models)
  • 2TBN (two-slice temporal Bayesian network):
  • The structure is replicated from slice to slice
  • Stationary, first-order Markov process

27
DBNs and HMMs
  • HMM as a DBN:
  • A single state variable and a single evidence variable
  • Discrete-variable DBN as an HMM:
  • Combine all the state variables in the DBN into a single state variable (whose values are all possible combinations of values of the individual state variables)
  • Efficient representation: with 20 Boolean state variables (each with three parents, as in Russell & Norvig's example), the DBN transition model needs 160 probabilities, whereas the equivalent HMM needs roughly a trillion
  • Analogous to ordinary Bayesian networks vs. fully tabulated joint distributions
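The counts above can be checked by arithmetic. The figure of 160 assumes, as in Russell & Norvig's example, that each Boolean state variable has three parents:

```python
# DBN vs. HMM parameter counts for 20 Boolean state variables.
n_vars, max_parents = 20, 3

# DBN: one CPT row per parent configuration, per variable.
dbn_probs = n_vars * 2 ** max_parents     # 20 * 8 = 160

# HMM: collapse all variables into one mega-variable with 2^20 values;
# its transition matrix has (2^20)^2 = 2^40 entries.
hmm_states = 2 ** n_vars
hmm_probs = hmm_states ** 2

print(dbn_probs)   # 160
print(hmm_probs)   # 1099511627776  (~1.1 trillion)
```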

28
DBNs and Kalman Filters
  • Kalman filter as a DBN:
  • Continuous variables with linear Gaussian conditional distributions
  • DBN as a Kalman filter:
  • Not possible in general
  • A DBN allows arbitrary distributions
  • E.g., the lost-keys example (a multimodal belief that no single Gaussian can represent)

29
Constructing DBNs
  • Required information:
  • Prior distribution over the state variables, P(X0)
  • The transition model, P(Xt+1 | Xt)
  • The sensor model, P(Et | Xt)
  • Intra-slice topology
  • Inter-slice topology (2TBN assumption)