Threshold partitioning for iterative aggregation-disaggregation method
(Presentation transcript, ILAS 2004)
1
Threshold partitioning for iterative aggregation-disaggregation method
  • Ivana Pultarova
  • Czech Technical University in Prague, Czech Republic

2
  • We consider a column stochastic irreducible matrix B of type N × N.
  • The Problem is to find the stationary probability vector x̂: B x̂ = x̂, ‖x̂‖ = 1.
  • We explore the iterative aggregation-disaggregation method (IAD).
  • Notation:
  • Spectral decomposition of B: B = P + Z, P^2 = P, ZP = PZ = 0, ρ(Z) < 1 (spectral radius).
  • Number of aggregation groups n, n < N.
  • Restriction matrix R of type n × N. The elements are 0 or 1, all column sums are 1.
  • Prolongation N × n matrix S(x) for any positive vector x: (S(x))_ij = x_i iff (R)_ji = 1, then divide all elements in each column by the sum of the column.
  • Projection N × N matrix P(x) = S(x) R.
  • ‖·‖ denotes the 1-norm.
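The operators above can be sketched in a few lines of numpy; the 4-state example, the grouping and the test vector below are invented for illustration:

```python
import numpy as np

N, n = 4, 2
group = np.array([0, 0, 1, 1])        # state i belongs to aggregation group[i]

# Restriction matrix R (n x N): entries 0 or 1, every column sums to 1.
R = np.zeros((n, N))
R[group, np.arange(N)] = 1.0

def S(x):
    """Prolongation (N x n): (S(x))_ij = x_i iff (R)_ji = 1, columns normalized."""
    Sx = R.T * x[:, None]
    return Sx / Sx.sum(axis=0)

x = np.array([0.1, 0.2, 0.3, 0.4])
Sx = S(x)
Px = Sx @ R                           # projection P(x) = S(x) R
print(np.allclose(Px @ Px, Px))       # P(x) is indeed a projection
```

Note that R S(x) = I_n for any positive x, which is what makes P(x) = S(x) R a projection.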

3
Iterative aggregation-disaggregation algorithm
step 1. Take the first approximation x^0 ∈ R^N, x^0 > 0, and set k = 0.
step 2. Solve R B^s S(x^k) z^(k+1) = z^(k+1), z^(k+1) ∈ R^n, ‖z^(k+1)‖ = 1, for an (appropriate) integer s (solution on the coarse level).
step 3. Disaggregate x^(k+1,1) = S(x^k) z^(k+1).
step 4. Compute x^(k+1) = B^t x^(k+1,1) for an appropriate integer t (smoothing on the fine level).
step 5. Test whether ‖x^(k+1) - x^k‖ is less than a prescribed tolerance. If not, increase k and go to step 2. If yes, consider x^(k+1) to be the solution of the problem.
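A minimal runnable sketch of steps 1-5; the matrix, the grouping and the choice s = t = 2 are invented, and the coarse problem in step 2 is solved with a dense eigensolver:

```python
import numpy as np

np.random.seed(0)
N, n = 4, 2
B = np.random.rand(N, N); B /= B.sum(axis=0)       # positive, column stochastic
group = np.array([0, 0, 1, 1])
R = np.zeros((n, N)); R[group, np.arange(N)] = 1.0

def S(x):
    Sx = R.T * x[:, None]
    return Sx / Sx.sum(axis=0)

s = t = 2                                          # s = t, cf. Proposition 1
x = np.full(N, 1.0 / N)                            # step 1: positive start
for k in range(500):
    A = R @ np.linalg.matrix_power(B, s) @ S(x)    # step 2: coarse n x n matrix
    w, V = np.linalg.eig(A)
    z = np.abs(np.real(V[:, np.argmax(np.real(w))]))
    z /= z.sum()                                   # coarse stationary vector
    x_half = S(x) @ z                              # step 3: disaggregate
    x_new = np.linalg.matrix_power(B, t) @ x_half  # step 4: smooth
    done = np.abs(x_new - x).sum() < 1e-13         # step 5: 1-norm test
    x = x_new
    if done:
        break

# x now approximates the stationary vector: B x = x, ||x||_1 = 1
```

The aggregated matrix R B^s S(x^k) is itself column stochastic, so its Perron eigenvector gives the coarse-level solution z^(k+1).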
4
  • Proposition 1.
  • If s = t, then the computed approximations x^k, k = 1, 2, …, follow the formulae
  • B^s P(x^k) x^(k+1) = x^(k+1),
  • x^(k+1) = (I - Z^s P(x^k))^(-1) x̂,
  • x^(k+1) - x̂ = J(x^k) (x^k - x̂),
  • where J(x) = B^s (I - P(x) Z^s)^(-1) (I - P(x))
  • and also J(x) = B^s (I - P(x) + P(x) J(x)).
  • Proposition 2.
  • Let V be a global core matrix associated with B^s. Then
  • J(x) = V (I - P(x) V)^(-1) (I - P(x)) and J(x) = V (I - P(x) + P(x) J(x)).

5
Note. The global core matrix V is here V = γP + Z^s. Using Z^k → 0 for k → ∞, we have V = γP + Z^s ≥ 0 for a given γ and for a sufficiently large s. This is equivalent to B^s = P + Z^s ≥ (1 - γ) P.
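This note can be checked numerically; the positive 4 × 4 matrix and the values γ = 0.3, s = 5 below are invented, and P is formed as x̂e^T with e the vector of all ones:

```python
import numpy as np

np.random.seed(1)
N = 4
B = np.random.rand(N, N); B /= B.sum(axis=0)       # positive, column stochastic
w, V = np.linalg.eig(B)
xh = np.abs(np.real(V[:, np.argmax(np.real(w))])); xh /= xh.sum()  # stationary
P = np.outer(xh, np.ones(N))          # spectral projection: P^2 = P, PZ = ZP = 0
Z = B - P

gamma, s = 0.3, 5
Zs = np.linalg.matrix_power(Z, s)
Bs = np.linalg.matrix_power(B, s)
print(np.allclose(Bs, P + Zs))        # B^s = P + Z^s

lhs = (gamma * P + Zs >= -1e-12).all()             # V = gamma*P + Z^s >= 0
rhs = (Bs >= (1 - gamma) * P - 1e-12).all()        # B^s >= (1 - gamma) P
print(lhs == rhs)                     # the two conditions coincide
```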
6
Local convergence. It is known that for arbitrary integers t ≥ 1 and s ≥ 1 there exists a neighborhood O of x̂ such that if x^k ∈ O then x^r ∈ O, r = k+1, k+2, …, and that ‖x^(k+1) - x̂‖ ≤ c a^k ‖x^k - x̂‖, where c ∈ R and a = min{ ‖V_loc‖_µ, ‖(I - P(x̂)) Z (I - P(x̂))‖_µ }, where ‖·‖_µ is some special norm in R^N. Here, V_loc is a local core matrix associated with B. Thus, the local convergence rate of the IAD algorithm is the same as or better than that of the Jacobi iteration applied to the original matrix B.
7
Global convergence. From Proposition 2 we have ‖J(x^k)‖ ≤ ‖V‖ ‖I - P(x^k)‖ + ‖V‖ ‖P(x^k)‖ ‖J(x^k)‖, and since ‖V‖ = γ for V ≥ 0, ‖P(x^k)‖ = 1 and ‖I - P(x^k)‖ ≤ 2 in the 1-norm, ‖J(x^k)‖ (1 - γ) < 2γ. So the sufficient condition for the global convergence of IAD is γ < 1/3, i.e. the relation B^s ≥ (2/3) P is a sufficient condition for the global convergence of the IAD method. (It also means ρ(Z^s) ≤ 1/3: B^s ≥ (2/3) P is equivalent to P/3 + Z^s ≥ 0. Then P + 3Z^s ≥ 0 gives a spectral decomposition of an irreducible column stochastic matrix, and then ρ(3Z^s) ≤ 1, i.e. ρ(Z^s) ≤ 1/3.)
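The implication can be illustrated numerically on a made-up positive 4 × 4 example: once B^s ≥ (2/3) P holds entrywise, ρ(Z^s) stays below 1/3.

```python
import numpy as np

np.random.seed(4)
N = 4
B = np.random.rand(N, N); B /= B.sum(axis=0)       # positive, column stochastic
w, V = np.linalg.eig(B)
xh = np.abs(np.real(V[:, np.argmax(np.real(w))])); xh /= xh.sum()
P = np.outer(xh, np.ones(N))                       # spectral projection
Z = B - P

rho = None
for s in range(1, 30):
    Bs = np.linalg.matrix_power(B, s)
    if (Bs >= (2.0 / 3.0) * P - 1e-12).all():      # sufficient condition holds
        rho = np.abs(np.linalg.eigvals(np.linalg.matrix_power(Z, s))).max()
        print(s, rho)                              # rho(Z^s) <= 1/3, as claimed
        break
```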
8
In practical computation with large problems we cannot verify the validity of the relation B^s - γP > 0 to estimate the value of s. But we can predict the constant k for which B^k > 0. The value is known to be less than or equal to N^2 - 2N + 2.
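The bound N^2 - 2N + 2 is attained by the classical worst case, a directed N-cycle with one extra arc; the 4 × 4 instance below is illustrative, normalized to be column stochastic:

```python
import numpy as np

N = 4
B = np.zeros((N, N))
for j in range(N):
    B[(j + 1) % N, j] = 1.0           # directed cycle 0 -> 1 -> 2 -> 3 -> 0
B[1, N - 1] = 1.0                     # one extra arc 3 -> 1 makes B primitive
B /= B.sum(axis=0)                    # normalize columns (column stochastic)

k, Bk = 1, B.copy()
while not (Bk > 0).all():             # smallest k with B^k > 0
    Bk = Bk @ B
    k += 1
print(k, N * N - 2 * N + 2)           # the bound is attained for this pattern
```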
9
We propose a new method for achieving B^s - γP > 0 with some γ > 0. Let I - B = M - W be a regular splitting, M^(-1) ≥ 0, W ≥ 0. Then the solution of the Problem is identical with the solution of (M - W) x = 0. Denoting Mx = y and setting y = y / ‖y‖, we have (I - W M^(-1)) y = 0, where W M^(-1) is a column stochastic matrix. Thus, the solution of the Problem is transformed to the solution of W M^(-1) y = y, ‖y‖ = 1, for any regular splitting M, W of the matrix I - B.
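A sketch with the simplest regular splitting one can write down, M = 2I (so W = I + B); this choice is only for illustration and is not the block choice proposed on the next slides:

```python
import numpy as np

np.random.seed(2)
N = 4
B = np.random.rand(N, N); B /= B.sum(axis=0)       # positive, column stochastic
M = 2.0 * np.eye(N)                   # M^(-1) = I/2 >= 0
W = M - (np.eye(N) - B)               # W = I + B >= 0: a regular splitting
C = W @ np.linalg.inv(M)              # C = W M^(-1) = (I + B)/2
print(np.allclose(C.sum(axis=0), 1.0))             # C is column stochastic

w, V = np.linalg.eig(C)
y = np.abs(np.real(V[:, np.argmax(np.real(w))])); y /= y.sum()  # C y = y
x = np.linalg.solve(M, y); x /= x.sum()            # recover x from y = M x
print(np.allclose(B @ x, x))                       # x solves the original Problem
```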
10
  • The good choice of M, W.
  • According to the IAD algorithm we will use a block diagonal matrix M which is composed of blocks M_1, …, M_n, each of them invertible.
  • To achieve (W M^(-1))^s > 0 for low s, we need
  • M_i^(-1) > 0, i = 1, …, n,
  • nnz(W M^(-1)) >> nnz(B) (number of nonzeros).

11
  • Algorithm of a good partitioning.
  • step 1. For an appropriate threshold t, 0 < t < 1, use Tarjan's parametrized algorithm to find the irreducible diagonal blocks B_i, i = 1, …, n, of the permuted matrix B (we now suppose B = permuted B).
  • step 2. Compose the block diagonal matrix B_Tar from the blocks B_i, i = 1, …, n, and set M = I - B_Tar / 2 and W = M - (I - B).
  • Properties of W M^(-1):
  • W M^(-1) is irreducible.
  • Diagonal blocks of W M^(-1) are positive.
  • (W M^(-1))^s is positive for s ≥ n^2 - 2n + 3, where n is the number of aggregation groups (n = 3 ⟹ s = 2).
  • The second largest eigenvalue of the aggregated n × n matrix is approximately the same as that of W M^(-1).
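A rough sketch of the two steps, with a plain strongly-connected-components search on the thresholded nonzero pattern standing in for Tarjan's parametrized algorithm; the 5 × 5 matrix and the threshold are invented:

```python
import numpy as np

np.random.seed(3)
N, t = 5, 0.25
B = np.random.rand(N, N); B /= B.sum(axis=0)       # positive, column stochastic
A = B > t                                          # thresholded adjacency

def sccs(adj):
    """Strongly connected components via Kosaraju's two-pass DFS."""
    n = len(adj)
    order, seen = [], [False] * n
    def dfs1(u):
        seen[u] = True
        for v in range(n):
            if adj[u][v] and not seen[v]:
                dfs1(v)
        order.append(u)
    for u in range(n):
        if not seen[u]:
            dfs1(u)
    comp = [-1] * n
    def dfs2(u, c):
        comp[u] = c
        for v in range(n):
            if adj[v][u] and comp[v] < 0:          # reversed edges
                dfs2(v, c)
    c = 0
    for u in reversed(order):
        if comp[u] < 0:
            dfs2(u, c)
            c += 1
    return np.array(comp), c

comp, n_groups = sccs(A)                           # step 1: diagonal blocks
B_Tar = np.where(comp[:, None] == comp[None, :], B, 0.0)  # blocks of B kept
M = np.eye(N) - B_Tar / 2.0                        # step 2: invertible, M^(-1) >= 0
W = M - (np.eye(N) - B)                            # W = B - B_Tar/2 >= 0
C = W @ np.linalg.inv(M)
print(n_groups, np.allclose(C.sum(axis=0), 1.0))   # C is again column stochastic
```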

12
Example 1. Matrix B is composed from n × n blocks of size m. We set e = 0.01, d = 0.01. Then B is normalized.
13
Example 1. a) IAD method for W M^(-1) and the threshold Tarjan block matrix M, s = 1, ρ(Z_WM) = 0.9996. (Exact solution: red; the last of the approximations: black circles.) b) Power iterations for W M^(-1) and the same M as in a), s = 1, ρ(Z_WM) = 0.9996. (Exact solution: red; the last of 500 approximations: black circles. No local convergence effect.) c) Rates of convergence of a) and b).
14
Example 2. Matrix B is composed from n × n blocks of size m. We set e = 0.01, d = 0.01. Then B := B + C (10% of the entries of C are 0.1) and B is normalized.
15
Example 2. IAD for B and W M^(-1). Power method for B and W M^(-1). Convergence rate for IAD and the power method.
16
Example 2, with different random entries. a) IAD for B and W M^(-1). b) Power method for B and W M^(-1). c) Convergence rate for IAD and the power method.
17
I. Marek and P. Mayer: Convergence analysis of an aggregation/disaggregation iterative method for computation of stationary probability vectors. Numerical Linear Algebra with Applications, 5, pp. 253-274, 1998.
I. Marek and P. Mayer: Convergence theory of some classes of iterative aggregation-disaggregation methods for computing stationary probability vectors of stochastic matrices. Linear Algebra and Its Applications, 363, pp. 177-200, 2003.
G. W. Stewart: Introduction to the Numerical Solution of Markov Chains, 1994.
A. Berman and R. J. Plemmons: Nonnegative Matrices in the Mathematical Sciences, 1979.
G. H. Golub and C. F. Van Loan: Matrix Computations, 1996.
etc.