Orthogonalization via Deflation
1
  • Orthogonalization via Deflation
  • By
  • Achiya Dax
  • Hydrological Service
  • Jerusalem , Israel
  • e-mail: dax20@water.gov.il

2
  • Overview
  • Motivation : The Symmetric Case
  • Rectangular Quotients
  • Retrieval of Singular Vectors
  • Rectangular Iterations
  • Orthogonalization via Deflation
  • Applications

3
  • The Symmetric Case
  • S = ( sij ) a symmetric positive semi-definite n x n matrix
  • with eigenvalues λ1 ≥ λ2 ≥ ... ≥ λn ≥ 0
  • and eigenvectors v1 , v2 , ... , vn :
  • S vj = λj vj , j = 1, ... , n , i.e. S V = V D
  • V = [ v1 , v2 , ... , vn ] , VT V = V VT = I
  • D = diag( λ1 , λ2 , ... , λn )
  • S = V D VT = Σj λj vj vjT
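As a quick numerical check (a NumPy sketch; the example matrix is arbitrary, not from the talk), the relations above read:

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a symmetric positive semi-definite matrix S = B B^T.
B = rng.standard_normal((5, 5))
S = B @ B.T

# eigh returns eigenvalues in ascending order; flip to get l1 >= ... >= ln >= 0.
lam, V = np.linalg.eigh(S)
lam, V = lam[::-1], V[:, ::-1]

assert np.allclose(V.T @ V, np.eye(5))          # V^T V = I
assert np.allclose(S, V @ np.diag(lam) @ V.T)   # S = V D V^T
assert np.all(lam >= -1e-12)                    # eigenvalues are nonnegative
```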

4
  • Low-Rank Approximations
  • S = λ1 v1 v1T + ... + λn vn vnT
  • T1 = λ1 v1 v1T
  • T2 = λ1 v1 v1T + λ2 v2 v2T
  • ...
  • Tk = λ1 v1 v1T + λ2 v2 v2T + ... + λk vk vkT
  • Tk is a low-rank approximation of order k .
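The truncated sums can be assembled directly (a NumPy sketch with an arbitrary PSD matrix); a useful sanity check is that the squared Frobenius error of Tk equals the sum of the squared discarded eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
S = B @ B.T                                  # symmetric PSD example

lam, V = np.linalg.eigh(S)
lam, V = lam[::-1], V[:, ::-1]               # descending eigenvalues

k = 2
# T_k = sum_{j<=k} lambda_j v_j v_j^T, a rank-k approximation of S.
Tk = sum(lam[j] * np.outer(V[:, j], V[:, j]) for j in range(k))

err = np.linalg.norm(S - Tk, 'fro') ** 2
assert np.isclose(err, np.sum(lam[k:] ** 2))
assert np.linalg.matrix_rank(Tk) == k
```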

5
  • The Euclidean vector norm
  • u = ( u1 , u2 , ... , un )T , ‖u‖2 = ( uT u )^½ = ( Σj uj^2 )^½
  • The Frobenius matrix norm
  • A = ( aij ) , ‖A‖F = ( Σi Σj aij^2 )^½

6
  • A Minimum Norm Problem
  • Let the vector v solve the minimum norm problem
  • minimize E (v) = ‖S - v vT‖F^2 .
  • Then
  • v1 = v / ‖v‖2 and λ1 = vT v
  • constitute a dominant eigenpair of S .

7
  • The Rayleigh Quotient
  • ρ = ρ (v , S) = vT S v / vT v
  • solves the minimum norm problem
  • minimize f ( q ) = ‖S v - q v‖2 .
  • ρ estimates an eigenvalue corresponding to v .
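In NumPy terms (a sketch with arbitrary example data), the Rayleigh quotient and the minimization it solves look like:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
S = B @ B.T                      # symmetric test matrix
v = rng.standard_normal(5)

rho = v @ S @ v / (v @ v)        # the Rayleigh quotient r(v, S)

def f(q):
    """f(q) = ||S v - q v||_2, the residual as a function of the scalar q."""
    return np.linalg.norm(S @ v - q * v)

# rho is the minimizer of f, so nearby values of q can only do worse.
assert f(rho) < f(rho + 0.1) and f(rho) < f(rho - 0.1)
```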

8
  • A Simple Error Bound
  • Given a unit vector v and
  • ρ = ρ (v , S) = vT S v / vT v ,
  • there exists an eigenvalue λ of S such that
  • | λ - ρ | ≤ ‖S v - ρ v‖2 .
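The bound is easy to verify numerically (a NumPy sketch with an arbitrary symmetric matrix):

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5))
S = B @ B.T

v = rng.standard_normal(5)
v /= np.linalg.norm(v)            # unit vector
rho = v @ S @ v                   # Rayleigh quotient of a unit vector

residual = np.linalg.norm(S @ v - rho * v)
eigvals = np.linalg.eigvalsh(S)

# Some eigenvalue of S lies within the residual norm of rho.
assert np.min(np.abs(eigvals - rho)) <= residual + 1e-12
```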

9
  • A Second Minimum Norm Problem
  • Given any unit vector u , ‖u‖2 = 1 ,
  • the Rayleigh Quotient
  • ρ (u) = uT S u / uT u = uT S u
  • solves the one-parameter problem
  • minimize f (q) = ‖S - q u uT‖F^2 .

10
  • The Power Method
  • Starting with some unit vector p0 ,
  • the k-th iteration , k = 1, 2, 3, ... :
  • Step 1 : Compute wk = S pk-1
  • Step 2 : Compute ρk = (pk-1)T wk
  • Step 3 : Normalize pk = wk / ‖wk‖2
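The three steps translate directly into code (a minimal sketch; the diagonal test matrix and iteration count are arbitrary choices made so the answer is known in advance):

```python
import numpy as np

def power_method(S, p0, iters=100):
    """Slide's Steps 1-3: w_k = S p_{k-1}, rho_k = p_{k-1}^T w_k,
    p_k = w_k / ||w_k||_2. Returns the last (rho_k, p_k)."""
    p = p0 / np.linalg.norm(p0)
    for _ in range(iters):
        w = S @ p                  # Step 1
        rho = p @ w                # Step 2
        p = w / np.linalg.norm(w)  # Step 3
    return rho, p

# Diagonal matrix with eigenvalues 5 > 3 > 2 > 1, so v1 = e1, lambda1 = 5.
S = np.diag([5.0, 3.0, 2.0, 1.0])
rho, p = power_method(S, np.ones(4))
assert np.isclose(rho, 5.0)
assert np.isclose(abs(p[0]), 1.0)
```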

11
  • THE POWER METHOD
  • Asymptotic Rates of Convergence
  • ( Assuming λ1 > λ2 )
  • pk → v1 at a linear rate, proportional to λ2 / λ1
  • ρk → λ1 at a linear rate, proportional to ( λ2 / λ1 )^2
  • Monotony : λ1 ≥ ... ≥ ρk ≥ ... ≥ ρ2 ≥ ρ1 > 0

12
  • THE POWER METHOD
  • The asymptotic rates of convergence depend on the ratio λ2 / λ1 and can be arbitrarily slow.
  • Yet ρk provides a fair estimate of λ1 within a few iterations !
  • For a worst-case analysis see
  • D.P. O'Leary, G.W. Stewart and J.S. Vandergraft, "Estimating the largest eigenvalue of a positive definite matrix", Math. Comp., 33 (1979), pp. 1289-1292.

13
  • THE POWER METHOD
  • An eigenvector vj is called large if λj ≥ λ1 / 2 and small if λj < λ1 / 2 .
  • In most practical situations, for small eigenvectors pkT vj becomes negligible after a small number of iterations.
  • Thus, after a few iterations pk actually lies in a subspace spanned by large eigenvectors.

14
  • Deflation by Subtraction
  • S = λ1 v1 v1T + ... + λn vn vnT .
  • S1 = S - λ1 v1 v1T = λ2 v2 v2T + ... + λn vn vnT .
  • S2 = S1 - λ2 v2 v2T = λ3 v3 v3T + ... + λn vn vnT .
  • ...
  • Sn-1 = λn vn vnT .
  • Sn = 0 .

  • Hotelling (1933, 1943)
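Hotelling deflation in NumPy (a sketch; exact eigenpairs are used here, so each subtraction removes exactly one eigenvalue and the final residual is zero):

```python
import numpy as np

rng = np.random.default_rng(5)
B = rng.standard_normal((4, 4))
S = B @ B.T                                # symmetric PSD example

lam, V = np.linalg.eigh(S)
lam, V = lam[::-1], V[:, ::-1]             # descending order

Sk = S.copy()
for j in range(4):
    # S_{j+1} = S_j - lambda_j v_j v_j^T
    Sk = Sk - lam[j] * np.outer(V[:, j], V[:, j])

# After n steps the matrix is fully deflated: S_n = 0.
assert np.allclose(Sk, 0)
```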

15
  • Can we extend these tools
  • to rectangular matrices?

16
  • The Rectangular Case
  • A = ( aij ) a real m x n matrix , p = min( m , n )
  • with singular values σ1 ≥ σ2 ≥ ... ≥ σp ≥ 0 ,
  • left singular vectors u1 , u2 , ... , up ,
  • right singular vectors v1 , v2 , ... , vp :
  • A vj = σj uj , AT uj = σj vj , j = 1, ... , p .

17
  • The Singular Value Decomposition
  • A = U Σ VT
  • Σ = diag( σ1 , σ2 , ... , σp ) , p = min( m , n )
  • U = [ u1 , u2 , ... , up ] , UT U = I
  • V = [ v1 , v2 , ... , vp ] , VT V = I
  • A V = U Σ , AT U = V Σ
  • A vj = σj uj , AT uj = σj vj , j = 1, ... , p .

18
  • Low-Rank Approximations
  • A = U Σ VT = Σj σj uj vjT
  • A = σ1 u1 v1T + σ2 u2 v2T + ... + σp up vpT .
  • B1 = σ1 u1 v1T
  • B2 = σ1 u1 v1T + σ2 u2 v2T
  • ...
  • Bk = σ1 u1 v1T + σ2 u2 v2T + ... + σk uk vkT
  • Bk is a low-rank approximation of order k .
  • ( Also called "truncated SVD" or "filtered SVD" . )

19
  • A Minimum Norm Problem
  • Let the vectors u and v solve the problem
  • minimize F ( u , v ) = ‖A - u vT‖F^2 .
  • Then
  • u1 = u / ‖u‖2 , v1 = v / ‖v‖2 ,
  • and
  • σ1 = ‖u‖2 ‖v‖2 .
  • ( See the Eckart-Young theorem . )

20
  • The Rectangular Quotient
  • Given two vectors , u and v ,
  • the rectangular Rayleigh quotient
  • ρ( u , v ) = uT A v / ( ‖u‖2 ‖v‖2 )
  • estimates the corresponding singular value.

21
  • The Rectangular Rayleigh Quotient
  • Given two unit vectors , u and v ,
  • the rectangular Rayleigh quotient
  • ρ( u , v ) = uT A v / ( ‖u‖2 ‖v‖2 )
  • solves the following three problems :
  • minimize f1(q) = ‖A - q u vT‖F
  • minimize f2(q) = ‖A v - q u‖2
  • minimize f3(q) = ‖AT u - q v‖2

22
  • Approximating a Left Singular Vector
  • Given a right singular vector , v1 , the corresponding left singular vector , u1 , is attained by solving the least norm problem
  • minimize g ( u ) = ‖A - u v1T‖F^2 .
  • That is,
  • u1 = A v1 / ( v1T v1 ) .
  • ( The rows of A are orthogonalized against v1T . )

23
  • Approximating a Right Singular Vector
  • Given a left singular vector , u1 , the corresponding right singular vector , v1 , is attained by solving the least norm problem
  • minimize h ( v ) = ‖A - u1 vT‖F^2 .
  • That is,
  • v1 = AT u1 / ( u1T u1 ) .
  • ( The columns of A are orthogonalized against u1 . )

24
  • Rectangular Iterations - Motivation
  • The k-th iteration , k = 1, 2, 3, ... , starts with uk-1 and vk-1 and ends with uk and vk .
  • Given vk-1 , the vector uk is obtained by solving the problem
  • minimize g(u) = ‖A - u vk-1T‖F^2 .
  • That is,
  • uk = A vk-1 / ( vk-1T vk-1 ) .
  • Then vk is obtained by solving the problem
  • minimize h(v) = ‖A - uk vT‖F^2 ,
  • which gives
  • vk = AT uk / ( ukT uk ) .

25
  • Rectangular Iterations - Implementation
  • The k-th iteration , k = 1, 2, 3, ... :
  • uk = A vk-1 / ( vk-1T vk-1 ) ,
  • vk = AT uk / ( ukT uk ) .
  • The sequence vk / ‖vk‖2 is obtained by applying the Power Method to the matrix AT A .
  • The sequence uk / ‖uk‖2 is obtained by applying the Power Method to the matrix A AT .
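A minimal NumPy sketch of the iteration (the diagonal test matrix is an arbitrary choice with known singular values 4 > 2 > 1, so the dominant triplet is known in advance):

```python
import numpy as np

def rectangular_iterations(A, v0, iters=100):
    """u_k = A v_{k-1} / (v_{k-1}^T v_{k-1}),
       v_k = A^T u_k / (u_k^T u_k)   (the "Left Iterations" order)."""
    v = v0
    for _ in range(iters):
        u = A @ v / (v @ v)
        v = A.T @ u / (u @ u)
    rho = np.linalg.norm(u) * np.linalg.norm(v)   # estimates sigma_1
    return rho, u / np.linalg.norm(u), v / np.linalg.norm(v)

A = np.diag([4.0, 2.0, 1.0])
rho, u, v = rectangular_iterations(A, np.ones(3))
assert np.isclose(rho, 4.0)                       # sigma_1 = 4
assert np.isclose(abs(u[0]), 1.0) and np.isclose(abs(v[0]), 1.0)
```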

26
  • Left Iterations
  • uk = A vk-1 / ( vk-1T vk-1 ) ,
  • vk = AT uk / ( ukT uk ) .
  • --------------------------------------------------
  • vkT vk = vkT AT uk / ( ukT uk )
  • Right Iterations
  • vk = AT uk-1 / ( uk-1T uk-1 ) ,
  • uk = A vk / ( vkT vk ) .
  • --------------------------------------------------
  • ukT uk = ukT A vk / ( vkT vk )
  • Can one see a difference?

27
  • Some Useful Relations
  • In both cases we have
  • ukT uk vkT vk = ukT A vk ,
  • ‖uk‖2 ‖vk‖2 = ukT A vk / ( ‖uk‖2 ‖vk‖2 ) = ρ( uk , vk ) ,
  • and h( uk , vk ) = ukT A vk / ( ukT uk vkT vk ) = 1 .
  • The objective function F ( u , v ) = ‖A - u vT‖F^2
  • satisfies F ( uk , vk ) = ‖A‖F^2 - ukT uk vkT vk
  • and F( uk , vk ) - F( uk+1 , vk+1 ) = uk+1T uk+1 vk+1T vk+1 - ukT uk vkT vk > 0 .

28
  • Convergence Properties
  • Inherited from the Power Method , assuming σ1 > σ2 .
  • The sequences uk / ‖uk‖2 and vk / ‖vk‖2 converge at a linear rate, proportional to ( σ2 / σ1 )^2 .
  • ukT uk vkT vk → ( σ1 )^2 at a linear rate, proportional to ( σ2 / σ1 )^4 .
  • Monotony :
  • ( σ1 )^2 ≥ uk+1T uk+1 vk+1T vk+1 ≥ ukT uk vkT vk > 0

29
  • Convergence Properties
  • ρk = ‖uk‖2 ‖vk‖2
  • provides a fair estimate of σ1 within a few rectangular iterations !

30
  • Convergence Properties
  • After a few rectangular iterations ,
  • ( ρk , uk , vk )
  • provides a fair estimate of a dominant triplet
  • ( σ1 , u1 , v1 ) .

31
  • Deflation by Subtraction
  • A1 = A = σ1 u1 v1T + ... + σp up vpT .
  • A2 = A1 - σ1 u1 v1T = σ2 u2 v2T + ... + σp up vpT .
  • A3 = A2 - σ2 u2 v2T = σ3 u3 v3T + ... + σp up vpT .
  • ...
  • Ak+1 = Ak - σk uk vkT = σk+1 uk+1 vk+1T + ... + σp up vpT .
  • ...

32
  • Deflation by Subtraction
  • A1 = A
  • A2 = A1 - σ1 u1 v1T
  • A3 = A2 - σ2 u2 v2T
  • ...
  • Ak+1 = Ak - σk uk vkT
  • ...
  • where ( σk , uk , vk ) denotes a computed dominant singular triplet of Ak .
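Putting the pieces together, a sketch of the whole scheme (each dominant triplet is estimated by a few rectangular iterations, then subtracted; the function names, tolerance, and iteration counts are arbitrary choices, and the diagonal test matrix keeps the answer checkable):

```python
import numpy as np

def rect_iter(A, v0, iters=50):
    """A few rectangular iterations ("Left Iterations" order) on A."""
    v = v0
    for _ in range(iters):
        u = A @ v / (v @ v)
        v = A.T @ u / (u @ u)
    s = np.linalg.norm(u) * np.linalg.norm(v)
    return s, u / np.linalg.norm(u), v / np.linalg.norm(v)

def svd_via_deflation(A, rank, iters=50, seed=0):
    """Repeatedly estimate and subtract a dominant triplet of A_k."""
    rng = np.random.default_rng(seed)
    Ak = A.astype(float).copy()
    S, U, V = [], [], []
    for _ in range(rank):
        s, u, v = rect_iter(Ak, rng.standard_normal(A.shape[1]), iters)
        S.append(s); U.append(u); V.append(v)
        Ak = Ak - s * np.outer(u, v)      # A_{k+1} = A_k - s_k u_k v_k^T
    return np.array(S), np.column_stack(U), np.column_stack(V)

A = np.diag([4.0, 2.0, 1.0])              # singular values 4 > 2 > 1
s, U, V = svd_via_deflation(A, rank=3)
assert np.allclose(A, U @ np.diag(s) @ V.T, atol=1e-8)
```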

33
  • The Main Motivation
  • At the k-th stage , k = 1, 2, ... ,
  • a few rectangular iterations
  • provide a fair estimate of
  • a dominant triplet of Ak .

34
  • Low-Rank Approximation via Deflation
  • σ1 ≥ σ2 ≥ ... ≥ σp ≥ 0 ,
  • A = σ1 u1 v1T + σ2 u2 v2T + ... + σp up vpT .
  • B1 = σ1* u1* v1*T ( * means computed values )
  • B2 = σ1* u1* v1*T + σ2* u2* v2*T
  • ...
  • Bl = σ1* u1* v1*T + σ2* u2* v2*T + ... + σl* ul* vl*T
  • Bl is a low-rank approximation of order l .
  • ( Also called "truncated SVD" or the filtered part of A . )

35
  • Low-Rank Approximation of Order l
  • A = σ1 u1 v1T + σ2 u2 v2T + ... + σp up vpT .
  • Bl = σ1* u1* v1*T + σ2* u2* v2*T + ... + σl* ul* vl*T = Ul* Σl* Vl*T
  • Ul* = [ u1* , u2* , ... , ul* ] ,
  • Vl* = [ v1* , v2* , ... , vl* ] ,
  • Σl* = diag( σ1* , σ2* , ... , σl* )
  • ( * means computed values )

36
  • What About Orthogonality ?
  • Does UlT Ul = I and VlT Vl = I ?
  • The theory behind the Power Method suggests that the more accurate the computed singular triplets, the smaller the deviation from orthogonality.
  • Is there a difference ( regarding deviation from orthogonality ) between Ul and Vl ?

37
  • Orthogonality Properties
  • ( Assuming exact arithmetic . )
  • Theorem 1 : Consider the case when each singular triplet , ( σj , uj , vj ) , is computed by a finite number of "Left Iterations" . ( At least one iteration for each triplet . ) In this case
  • UlT Ul = I and UlT Al+1 = 0 ,
  • regardless of the actual number of iterations !
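The theorem can be checked numerically: even with a single Left Iteration per triplet, so the triplets are far from converged, the computed left vectors come out orthonormal. A sketch (random test matrix; note that the unnormalized outer product u vT already equals σ* u* v*T, so it is subtracted directly):

```python
import numpy as np

def left_iter(A, v0, iters=1):
    """Left Iterations: end with the v-update, as on the slide."""
    v = v0
    for _ in range(iters):
        u = A @ v / (v @ v)
        v = A.T @ u / (u @ u)
    return u, v

rng = np.random.default_rng(7)
A = rng.standard_normal((6, 4))

Ak = A.copy()
U = []
for _ in range(3):
    u, v = left_iter(Ak, rng.standard_normal(4), iters=1)
    Ak = Ak - np.outer(u, v)         # u v^T carries the scale sigma* u* v*T
    U.append(u / np.linalg.norm(u))
U = np.column_stack(U)

# U^T U = I regardless of the number of iterations per triplet,
# and the computed left vectors annihilate the deflated residual.
assert np.allclose(U.T @ U, np.eye(3), atol=1e-10)
assert np.allclose(U.T @ Ak, 0, atol=1e-10)
```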

38
  • Left Iterations
  • uk = A vk-1 / ( vk-1T vk-1 ) ,
  • vk = AT uk / ( ukT uk ) .
  • Right Iterations
  • vk = AT uk-1 / ( uk-1T uk-1 ) ,
  • uk = A vk / ( vkT vk ) .
  • Can one see a difference?

39
  • Orthogonality Properties
  • ( Assuming exact arithmetic . )
  • Theorem 2 : Consider the case when each singular triplet , ( σj , uj , vj ) , is computed by a finite number of "Right Iterations" . ( At least one iteration for each triplet . ) In this case
  • VlT Vl = I and Al+1 Vl = 0 ,
  • regardless of the actual number of iterations !

40
  • Finite Termination
  • Assuming exact arithmetic , r = rank ( A ) .
  • Corollary : In both cases we have
  • A = Br = σ1 u1 v1T + ... + σr ur vrT ,
  • regardless of the number of iterations per singular triplet !

41
  • A New QR Decomposition
  • Assuming exact arithmetic , r = rank ( A ) .
  • In both cases we obtain an effective rank-revealing QR decomposition
  • A = Ur Σr VrT .
  • In Left Iterations : UrT Ur = I .
  • In Right Iterations : VrT Vr = I .

42
  • The Orthogonal Basis Problem
  • is to compute an orthogonal basis of Range ( A ) .
  • The Householder and Gram-Schmidt orthogonalization methods use a "column pivoting for size" policy , which completely determines the basis.

43
  • The Orthogonal Basis Problem
  • The new method , Orthogonalization via Deflation , has greater freedom in choosing the basis.
  • At the k-th stage , the ultimate choice for a new vector to enter the basis is uk , the k-th left singular vector of A .
  • ( But accurate computation of uk can be too expensive . )

44
  • The Main Theme
  • At the k-th stage ,
  • a few rectangular iterations
  • are sufficient to provide
  • a fair substitute for uk .

45
  • Applications
  • Missing data reconstruction.
  • Low-rank approximations of large sparse matrices.
  • Low-rank approximations of tensors.

46
  • Applications in Missing Data Reconstruction
  • Consider the case when some entries of A are missing :
  • Missing data in DNA microarrays
  • Tables of annual rain data
  • Tables of water levels in observation wells
  • Web search engines
  • Standard SVD algorithms are unable to handle such matrices.
  • The minimum norm approach is easily adapted to handle matrices with missing entries.

47
  • A Modified Algorithm
  • The objective function
  • F ( u , v ) = ‖A - u vT‖F^2
  • is redefined as
  • F ( u , v ) = Σi Σj ( aij - ui vj )^2 ,
  • where the sum is restricted to known entries of A .
  • ( As before ,
  • u = ( u1 , u2 , ... , um )T and v = ( v1 , v2 , ... , vn )T
  • denote the vectors of unknowns . )
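A sketch of one way to minimize the restricted sum (the mask M marks known entries; the closed-form per-row and per-column least-squares updates below are a standard alternating scheme, shown as a plausible implementation rather than the talk's exact algorithm; the function name and iteration count are arbitrary):

```python
import numpy as np

def masked_rank1(A, M, iters=500):
    """Minimize F(u, v) = sum over known (i, j) of (a_ij - u_i v_j)^2
    by alternating exact least-squares updates of u and v."""
    m, n = A.shape
    v = np.ones(n)
    Am = np.where(M, A, 0.0)             # zero out the unknown entries
    for _ in range(iters):
        u = (Am @ v) / (M @ v**2)        # per-row restricted least squares
        v = (Am.T @ u) / (M.T @ u**2)    # per-column restricted least squares
    return u, v

# Rank-1 test data with two entries marked as "missing".
u0, v0 = np.array([1.0, 2.0, 3.0]), np.array([1.0, -1.0, 2.0, 0.5])
A = np.outer(u0, v0)
M = np.ones_like(A, dtype=bool)
M[0, 1] = M[2, 3] = False

u, v = masked_rank1(A, M)
# The fit reproduces every known entry of A.
assert np.allclose(np.outer(u, v)[M], A[M], atol=1e-8)
```

When the known entries determine the rank-1 factors, as in this small example, the product u vT also fills in the missing positions.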

48
  • Overview
  • Motivation : The Symmetric Case
  • Rectangular Quotients
  • Retrieval of Singular Vectors
  • Rectangular Iterations
  • Orthogonalization via Deflation
  • Applications

49
  • The END
  • Thank You