Transcript and Presenter's Notes

Title: A Constraint Generation Approach to Learning Stable Linear Dynamical Systems


1
A Constraint Generation Approach to Learning
Stable Linear Dynamical Systems
  • Sajid M. Siddiqi, Byron Boots, Geoffrey J. Gordon

Carnegie Mellon University
NIPS 2007, poster W22
2
steam
3
Application: Dynamic Textures [1]
steam
river
fountain
  • Videos of moving scenes that exhibit stationarity
    properties
  • Dynamics can be captured by a low-dimensional
    model
  • Learned models can efficiently simulate realistic
    sequences
  • Applications: compression, recognition, and synthesis
    of videos

[1] S. Soatto, G. Doretto, and Y. Wu. Dynamic
Textures. In Proceedings of ICCV, 2001.
4
Linear Dynamical Systems
  • Input data sequence

5
Linear Dynamical Systems
  • Input data sequence

6
Linear Dynamical Systems
  • Input data sequence
  • Assume a latent variable that explains the data

7
Linear Dynamical Systems
  • Input data sequence
  • Assume a latent variable that explains the data

8
Linear Dynamical Systems
  • Input data sequence
  • Assume a latent variable that explains the data
  • Assume a Markov model on the latent variable

9
Linear Dynamical Systems
  • Input data sequence
  • Assume a latent variable that explains the data
  • Assume a Markov model on the latent variable

10
Linear Dynamical Systems [2]
  • State and observation models (a simulation sketch
    follows below):
      x_{t+1} = A x_t + w_t
      y_t = C x_t + v_t
  • Dynamics matrix: A
  • Observation matrix: C

[2] R. E. Kalman (1960). A new approach to linear
filtering and prediction problems. Trans. ASME,
Journal of Basic Engineering.
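
To make the state and observation equations above concrete, here is a
minimal NumPy sketch (not part of the original slides); the function
name simulate_lds and the Gaussian noise scales are illustrative
assumptions, not taken from the paper.

    import numpy as np

    def simulate_lds(A, C, x0, T, state_noise=0.01, obs_noise=0.01, seed=0):
        """Roll out x_{t+1} = A x_t + w_t and y_t = C x_t + v_t for T steps.
        A: (n, n) dynamics matrix, C: (m, n) observation matrix, x0: (n,) start state."""
        rng = np.random.default_rng(seed)
        n, m = A.shape[0], C.shape[0]
        X = np.zeros((T, n))   # latent states
        Y = np.zeros((T, m))   # observations
        x = np.asarray(x0, dtype=float)
        for t in range(T):
            X[t] = x
            Y[t] = C @ x + obs_noise * rng.standard_normal(m)
            x = A @ x + state_noise * rng.standard_normal(n)
        return X, Y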
11
Linear Dynamical Systems [2]
  • This talk: learning LDS parameters from data while
    ensuring a stable dynamics matrix A, more efficiently
    and accurately than previous methods
  • State and observation models
  • Dynamics matrix
  • Observation matrix

[2] R. E. Kalman (1960). A new approach to linear
filtering and prediction problems. Trans. ASME,
Journal of Basic Engineering.
12
Learning Linear Dynamical Systems
  • Suppose we have an estimated state sequence

13
Learning Linear Dynamical Systems
  • Suppose we have an estimated state sequence
  • Define state reconstruction error as the
    objective

14
Learning Linear Dynamical Systems
  • Suppose we have an estimated state sequence
  • Define state reconstruction error as the
    objective
  • We would like to learn A such that the state
    reconstruction error is minimized (a least-squares
    sketch follows below), i.e.

    A_hat = argmin_A Σ_t || x_{t+1} - A x_t ||^2
          = argmin_A || A X_0 - X_1 ||_F^2

    where X_0 = [x_1 ... x_{τ-1}] and X_1 = [x_2 ... x_τ]
    stack the estimated states as columns
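
The unconstrained minimizer of the objective above is an ordinary
least-squares problem. A minimal NumPy sketch (the row-per-state layout
and the function name are my own conventions, not the paper's):

    import numpy as np

    def least_squares_dynamics(X):
        """X: (T, n) array of estimated states x_1, ..., x_T, one per row.
        Returns the A minimizing sum_t ||x_{t+1} - A x_t||^2 = ||A X_0 - X_1||_F^2."""
        X0, X1 = X[:-1], X[1:]            # x_1..x_{T-1} and x_2..x_T
        # Solve X0 @ A.T ~= X1 in the least-squares sense, then transpose back.
        A_T, residuals, rank, sv = np.linalg.lstsq(X0, X1, rcond=None)
        return A_T.T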
15
Subspace Identification [3]
  • Subspace ID uses matrix decomposition to estimate
    the state sequence

[3] P. Van Overschee and B. De Moor. Subspace
Identification for Linear Systems. Kluwer, 1996.
16
Subspace Identification [3]
  • Subspace ID uses matrix decomposition to
    estimate the state sequence
  • Build a Hankel matrix D of stacked observations

[3] P. Van Overschee and B. De Moor. Subspace
Identification for Linear Systems. Kluwer, 1996.
17
Subspace Identification [3]
  • Subspace ID uses matrix decomposition to
    estimate the state sequence
  • Build a Hankel matrix D of stacked observations

[3] P. Van Overschee and B. De Moor. Subspace
Identification for Linear Systems. Kluwer, 1996.
18
Subspace Identification [3]
  • In expectation, the Hankel matrix is inherently
    low-rank!

[3] P. Van Overschee and B. De Moor. Subspace
Identification for Linear Systems. Kluwer, 1996.
19
Subspace Identification [3]
  • In expectation, the Hankel matrix is inherently
    low-rank!

[3] P. Van Overschee and B. De Moor. Subspace
Identification for Linear Systems. Kluwer, 1996.
20
Subspace Identification [3]
  • In expectation, the Hankel matrix is inherently
    low-rank!

[3] P. Van Overschee and B. De Moor. Subspace
Identification for Linear Systems. Kluwer, 1996.
21
Subspace Identification [3]
  • In expectation, the Hankel matrix is inherently
    low-rank!
  • Can use SVD to obtain the low-dimensional state
    sequence

[3] P. Van Overschee and B. De Moor. Subspace
Identification for Linear Systems. Kluwer, 1996.
22
Subspace Identification [3]
  • In expectation, the Hankel matrix is inherently
    low-rank!
  • Can use SVD to obtain the low-dimensional state
    sequence

For D with k observations per column, E[D] =
[C; CA; ...; CA^(k-1)] [x_1 x_2 ...], which has rank at
most n, the state dimension

[3] P. Van Overschee and B. De Moor. Subspace
Identification for Linear Systems. Kluwer, 1996.
23
Subspace Identification [3]
  • In expectation, the Hankel matrix is inherently
    low-rank!
  • Can use SVD to obtain the low-dimensional state
    sequence (see the sketch below)

For D with k observations per column, E[D] =
[C; CA; ...; CA^(k-1)] [x_1 x_2 ...], which has rank at
most n, the state dimension

[3] P. Van Overschee and B. De Moor. Subspace
Identification for Linear Systems. Kluwer, 1996.
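
A bare-bones sketch of the step above: build the Hankel matrix with k
stacked observations per column and take a rank-n SVD to read off a
state sequence. This is only an illustration of the idea, not
Van Overschee and De Moor's full algorithm; the scaling of the states
and the C estimate (recovered only up to a linear transform) are common
conventions I am assuming.

    import numpy as np

    def subspace_id_states(Y, k, n):
        """Y: (T, m) observation sequence. Returns an n-dimensional state sequence
        and a rough estimate of the observation matrix C (up to a linear transform).
        Each column of the Hankel matrix D stacks k consecutive observations."""
        T, m = Y.shape
        cols = T - k + 1
        D = np.column_stack([Y[j:j + k].reshape(-1) for j in range(cols)])  # (k*m, cols)
        U, s, Vt = np.linalg.svd(D, full_matrices=False)
        X = (np.diag(s[:n]) @ Vt[:n, :]).T     # (cols, n): one state per time step
        C_hat = U[:m, :n]                      # first m rows of U approximate C
        return X, C_hat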
24
  • Let's train an LDS for the steam texture using this
    algorithm, and simulate a video from it!


x_t ∈ R^40
25
Simulating from a learned model
The model is unstable
26
Notation
  • λ_1, ..., λ_n : eigenvalues of A (|λ_1| ≥ ... ≥ |λ_n|)
  • ν_1, ..., ν_n : unit-length eigenvectors of A
  • σ_1, ..., σ_n : singular values of A (σ_1 ≥ σ_2 ≥ ... ≥ σ_n)
  • S_λ : the set of matrices with |λ_1| ≤ 1
  • S_σ : the set of matrices with σ_1 ≤ 1

27
Stability
  • A matrix A is stable if |λ_1| ≤ 1, i.e. if all of
    its eigenvalues lie on or inside the unit circle

28
Stability
  • A matrix A is stable if |λ_1| ≤ 1, i.e. if all of
    its eigenvalues lie on or inside the unit circle

29
Stability
  • A matrix A is stable if |λ_1| ≤ 1, i.e. if all of
    its eigenvalues lie on or inside the unit circle

[Figure: example trajectories for λ_1 = 0.3 (stable) and
λ_1 = 1.295 (unstable)]
30
Stability
  • A matrix A is stable if |λ_1| ≤ 1, i.e. if all of
    its eigenvalues lie on or inside the unit circle

[Figure: x_t(1) vs. x_t(2) trajectories for λ_1 = 0.3
(stable) and λ_1 = 1.295 (unstable)]
31
Stability
  • A matrix A is stable if |λ_1| ≤ 1, i.e. if all of
    its eigenvalues lie on or inside the unit circle
  • We would like to solve (a quick numerical stability
    check follows below)

    A_hat = argmin_A || A X_0 - X_1 ||_F^2   s.t.   |λ_1(A)| ≤ 1

[Figure: x_t(1) vs. x_t(2) trajectories for λ_1 = 0.3
(stable) and λ_1 = 1.295 (unstable)]
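
The stability condition above is easy to test numerically: compute the
spectral radius and compare it to 1. A small NumPy check (the example
matrices echo the eigenvalues 0.3 and 1.295 from the figure; they are
otherwise illustrative):

    import numpy as np

    def is_stable(A, tol=1e-10):
        """A is stable iff its largest-magnitude eigenvalue satisfies |lambda_1| <= 1."""
        return np.max(np.abs(np.linalg.eigvals(A))) <= 1.0 + tol

    A_stable   = np.array([[0.3,   0.0], [0.1, 0.2]])
    A_unstable = np.array([[1.295, 0.0], [0.1, 0.5]])
    print(is_stable(A_stable), is_stable(A_unstable))   # True False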
32
Stability and Convexity
  • But S_λ is non-convex!

33
Stability and Convexity
  • But S_λ is non-convex!

[Figure: matrices A_1 and A_2]
34
Stability and Convexity
  • But S_λ is non-convex!
  • Let's look at S_σ instead
  • S_σ is convex
  • S_σ ⊆ S_λ

[Figure: matrices A_1 and A_2]
35
Stability and Convexity
  • But S_λ is non-convex!
  • Let's look at S_σ instead
  • S_σ is convex
  • S_σ ⊆ S_λ

[Figure: matrices A_1 and A_2]
36
Stability and Convexity
  • But S_λ is non-convex!
  • Let's look at S_σ instead
  • S_σ is convex
  • S_σ ⊆ S_λ
  • Previous work [4] exploits these properties to
    learn a stable A by solving the semi-definite
    program (see the numerical note below)

    A_hat = argmin_A || A X_0 - X_1 ||_F^2   s.t.   σ_1(A) ≤ 1

[4] S. L. Lacy and D. S. Bernstein. Subspace
identification with guaranteed stability using
constrained optimization. Proc. of the ACC (2002);
IEEE Trans. Automatic Control (2003).
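
Why does constraining σ_1(A) ≤ 1 guarantee stability? Because the
spectral radius is bounded by the largest singular value,
|λ_1(A)| ≤ σ_1(A), so S_σ ⊆ S_λ. The snippet below is only a numerical
illustration of that inequality (random test matrices; it is not the
Lacy-Bernstein SDP solver):

    import numpy as np

    rng = np.random.default_rng(0)
    for _ in range(5):
        A = rng.standard_normal((4, 4))
        A /= np.linalg.svd(A, compute_uv=False)[0]      # rescale so sigma_1(A) = 1
        sigma1 = np.linalg.svd(A, compute_uv=False)[0]  # largest singular value
        rho = np.max(np.abs(np.linalg.eigvals(A)))      # spectral radius |lambda_1|
        print(f"sigma_1 = {sigma1:.3f}   |lambda_1| = {rho:.3f}")  # |lambda_1| <= sigma_1

Typically |λ_1| comes out strictly below 1 when σ_1 = 1, which is why
constraining σ_1 alone tends to over-constrain the model, as the next
slides show.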
37
  • Lets train an LDS for steam again, this time
    constraining A to be in S?

38
Simulating from a Lacy-Bernstein stable texture
model
The model is over-constrained. Can we do better?
39
Our Approach
  • Formulate the S_σ approximation of the problem as
    a semi-definite program (SDP)
  • Start with a quadratic program (QP) relaxation of
    this SDP, and incrementally add constraints
  • Because the SDP is an inner approximation of the
    problem, we reach stability early, before
    reaching the feasible set of the SDP
  • We interpolate the solution to return the best
    stable matrix possible

40
The Algorithm
41
The Algorithm
[Figure: objective function contours]
42
The Algorithm
[Figure: objective function contours]
  • A_1 : unconstrained QP solution (least-squares
    estimate)

43
The Algorithm
[Figure: objective function contours]
  • A_1 : unconstrained QP solution (least-squares
    estimate)

44
The Algorithm
[Figure: objective function contours]
  • A_1 : unconstrained QP solution (least-squares
    estimate)
  • A_2 : QP solution after 1 constraint (happens to be
    stable)

45
The Algorithm
[Figure: objective function contours]
  • A_1 : unconstrained QP solution (least-squares
    estimate)
  • A_2 : QP solution after 1 constraint (happens to be
    stable)
  • A_final : interpolation of the stable solution with
    the last one

46
The Algorithm
[Figure: objective function contours]
  • A_1 : unconstrained QP solution (least-squares
    estimate)
  • A_2 : QP solution after 1 constraint (happens to be
    stable)
  • A_final : interpolation of the stable solution with
    the last one (a code sketch of the full loop follows
    below)
  • A_previous method : Lacy and Bernstein (2002)
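
The loop below is a schematic NumPy/SciPy sketch of the
constraint-generation procedure illustrated above: solve the
unconstrained least-squares QP, and while the solution is unstable, add
a linear constraint u' A v ≤ 1 built from its top singular vectors and
re-solve; once a stable iterate appears, interpolate back toward the
previous (unstable) one. Using SciPy's SLSQP as a generic solver, the
stopping rules, and the tolerances are illustrative assumptions, not
the paper's implementation.

    import numpy as np
    from scipy.optimize import minimize

    def spectral_radius(A):
        return np.max(np.abs(np.linalg.eigvals(A)))

    def constraint_generation_lds(X, max_constraints=20, tol=1e-6):
        """Learn a stable dynamics matrix from a state sequence X of shape (T, n).
        Sketch only: assumes the loop reaches a stable iterate."""
        X0, X1 = X[:-1].T, X[1:].T               # (n, T-1) each
        n = X0.shape[0]

        def objective(a):
            A = a.reshape(n, n)
            return np.sum((A @ X0 - X1) ** 2)    # || A X_0 - X_1 ||_F^2

        generated = []                           # generated (u, v) constraint pairs
        a = np.linalg.lstsq(X0.T, X1.T, rcond=None)[0].T.flatten()   # unconstrained LS
        A_prev = a.reshape(n, n).copy()

        for _ in range(max_constraints):
            A_cur = a.reshape(n, n)
            if spectral_radius(A_cur) <= 1.0 + tol:
                break                            # stable before reaching the S_sigma boundary
            A_prev = A_cur.copy()
            U, s, Vt = np.linalg.svd(A_cur)
            generated.append((U[:, 0], Vt[0, :]))    # new constraint: u' A v <= 1
            cons = [{'type': 'ineq',
                     'fun': (lambda a, u=u, v=v: 1.0 - u @ a.reshape(n, n) @ v)}
                    for (u, v) in generated]
            a = minimize(objective, a, method='SLSQP', constraints=cons).x

        A_stable = a.reshape(n, n)
        # Interpolate toward the last unstable iterate to recover objective value
        # while staying (just) inside the stable region.
        lo, hi = 0.0, 1.0                        # fraction of A_prev mixed in
        for _ in range(30):
            mid = 0.5 * (lo + hi)
            A_mid = mid * A_prev + (1.0 - mid) * A_stable
            lo, hi = (mid, hi) if spectral_radius(A_mid) <= 1.0 else (lo, mid)
        return lo * A_prev + (1.0 - lo) * A_stable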

47
  • Let's train an LDS for steam using constraint
    generation, and simulate

48
Simulating from a Constraint Generation stable
texture model
The model captures more dynamics and is still stable
49
  • Least Squares
  • Constraint Generation

50
Empirical Evaluation
  • Algorithms
  • Constraint Generation (CG): our method
  • Lacy and Bernstein (2002): LB-1
  • finds a σ_1 ≤ 1 solution
  • Lacy and Bernstein (2003): LB-2
  • solves a similar problem in a transformed space
  • Data sets
  • Dynamic textures
  • Biosurveillance baseline models (see paper)

52
Reconstruction error
  • Steam video

[Plot: decrease in objective (lower is better) vs.
number of latent dimensions]
53
Running time
  • Steam video

[Plot: running time in seconds vs. number of latent
dimensions]
54
Conclusion
  • A novel constraint generation algorithm for
    learning stable linear dynamical systems
  • SDP relaxation enables us to optimize over a
    larger set of matrices while being more efficient

55
Conclusion
  • A novel constraint generation algorithm for
    learning stable linear dynamical systems
  • SDP relaxation enables us to optimize over a
    larger set of matrices while being more efficient
  • Future work
  • Adding stability constraints to EM
  • Stable models for more structured dynamic textures

56
  • Thank you!

58
Subspace ID with Hankel matrices
  • Stacking multiple observations in D forces
    latent states to model the future
  • e.g. annual sunspots data with 12-year cycles


[Figure: first two columns of U plotted against t, for
k = 1 and for k = 12]
59
Stability and Convexity
  • The state space of a stable LDS lies inside some
    ellipse

60
Stability and Convexity
  • The state space of a stable LDS lies inside some
    ellipse
  • The set of matrices that map a particular ellipse
    into itself (and hence are stable) is convex

61
Stability and Convexity
  • The state space of a stable LDS lies inside some
    ellipse
  • The set of matrices that map a particular ellipse
    into itself (and hence are stable) is convex
  • If we knew in advance which ellipse contains our
    state space, finding A would be a convex problem.
    But we don't

62
Stability and Convexity
  • The state space of a stable LDS lies inside some
    ellipse
  • The set of matrices that map a particular ellipse
    into itself (and hence are stable) is convex (see
    the check below)
  • If we knew in advance which ellipse contains our
    state space, finding A would be a convex problem.
    But we don't
  • and the set of all stable matrices is non-convex
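
The ellipse condition above can be written as a Lyapunov-type test: A
maps the ellipse {x : x' P x ≤ 1} (P positive definite) into itself
exactly when P - A' P A is positive semidefinite, and for a fixed P the
set of such A is convex. The check below is a small NumPy illustration;
the particular P, A_1, A_2 are made-up examples, not from the slides.

    import numpy as np

    def maps_ellipse_into_itself(A, P, tol=1e-10):
        """True iff A maps {x : x' P x <= 1} into itself, i.e. P - A' P A is PSD."""
        return np.min(np.linalg.eigvalsh(P - A.T @ P @ A)) >= -tol

    P  = np.array([[2.0, 0.3], [0.3, 1.0]])   # a fixed ellipse (P is positive definite)
    A1 = np.array([[0.5, 0.4], [0.0, 0.6]])
    A2 = np.array([[0.7, 0.0], [0.2, 0.5]])
    assert maps_ellipse_into_itself(A1, P) and maps_ellipse_into_itself(A2, P)

    # Convexity: every convex combination also maps the ellipse into itself,
    # and is therefore stable.
    for lam in np.linspace(0.0, 1.0, 5):
        A = lam * A1 + (1.0 - lam) * A2
        assert maps_ellipse_into_itself(A, P)
        assert np.max(np.abs(np.linalg.eigvals(A))) <= 1.0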
