Title: Model Order Reduction for Large Scale Dynamical Systems
Lecture 2: Tools From Matrix Theory I (what you ought to know already)
NXP Semiconductors
Tamara Bechtold
Slide 2: Outline
- Vector Spaces with Scalar Product
- Linear Mapping
- Projectors
- Order Reduction via Projection
- Summary
Slide 4: Matrices and Vectors in R^n
- DEFINITION: An m x n matrix $A \in R^{m \times n}$ is a rectangular array of mn real numbers, which are called its elements and are arranged in m rows and n columns. An m x 1 matrix is called a column vector; a 1 x n matrix is called a row vector.
- But what is it really?
- In the real world you start from a physical phenomenon, which may take place at every point of space (e.g. thermal flow) or at some points of space (e.g. electrical current flow). In both cases you typically end up with an equation system of the form $E\dot{x}(t) = Ax(t) + Bu(t)$.
- Hence, for us the matrix is something that multiplies the solution/state vector.
Slide 5: Remember
[Figure: panels labeled "control" and "simulation" from Lecture 1]
Slide 6: Scalar (Inner) Product
- DEFINITION: The scalar product or inner product of two vectors x, y from R^n is the number $\langle x, y \rangle = x^T y = \sum_{i=1}^{n} x_i y_i$.
- THEOREM: The scalar product in R^n is
  - (S1) linear: $\langle \alpha x + \beta y, z \rangle = \alpha \langle x, z \rangle + \beta \langle y, z \rangle$
  - (S2) symmetric: $\langle x, y \rangle = \langle y, x \rangle$
  - (S3) positive definite: $\langle x, x \rangle \geq 0$, with equality if and only if $x = 0$
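As a quick numerical illustration, a minimal NumPy sketch (the vectors are arbitrary example data) checks the definition and the properties (S1)-(S3):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, -1.0, 0.5])
z = np.array([0.0, 1.0, -2.0])

# Inner product <x, y> = x^T y
print(np.dot(x, y))  # 3.5

# (S1) linearity in the first argument
assert np.isclose(np.dot(2*x + 3*y, z), 2*np.dot(x, z) + 3*np.dot(y, z))
# (S2) symmetry
assert np.isclose(np.dot(x, y), np.dot(y, x))
# (S3) positive definiteness: <x, x> > 0 for x != 0
assert np.dot(x, x) > 0
```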
Slide 7: Vector Spaces
- DEFINITION: A vector space V over R is a non-empty set on which an addition $+: V \times V \to V$ and a scalar multiplication $\cdot: R \times V \to V$ are defined, satisfying the usual axioms (associativity and commutativity of addition, existence of a zero vector and of additive inverses, distributivity, and compatibility of scalar multiplication). Elements of V are called vectors; $0 \in V$ is called the zero vector. The elements of R are called scalars.
Slide 8: Subspaces
- DEFINITION: A non-empty subset U of the vector space V is called a subspace if it is closed under addition and scalar multiplication.
- THEOREM: A subspace is itself a vector space.
- DEFINITION: The set of all linear combinations of vectors $a_1, \ldots, a_l$ from V is called the subspace spanned by $a_1, \ldots, a_l$ (the spanning set). We write $\mathrm{span}\{a_1, \ldots, a_l\}$.
- In connection with the dynamical system, we will define Krylov (reachability) subspaces.
Slide 9: Linear Independence, Basis, Dimension
- DEFINITION: Vectors $a_1, \ldots, a_l$ (with $l \geq 2$) are linearly dependent if one of them can be represented as a linear combination of the others.
- DEFINITION: A linearly independent spanning set of a space V is called a basis of V. Hence, the n columns of an n x n matrix B form a basis of R^n if B is regular.
- DEFINITION: The number of basis vectors of a vector space V (with finite spanning set) is called the dimension of V.
- THEOREM: Each vector $x \in V$ can be uniquely represented as a linear combination of the basis vectors $b_1, \ldots, b_n$ of V: $x = \xi_1 b_1 + \cdots + \xi_n b_n$.
- DEFINITION: The coefficients $\xi_1, \ldots, \xi_n$ are called the coordinates of x with respect to the basis $b_1, \ldots, b_n$. The vector $\xi = (\xi_1, \ldots, \xi_n)^T$ is the coordinate vector of x.
Slide 10: Orthonormal Basis
- DEFINITION: A basis is called orthogonal if any two basis vectors are mutually orthogonal: $\langle b_i, b_j \rangle = 0$ for $i \neq j$. It is called orthonormal if, in addition, the basis vectors are unit vectors: $\langle b_i, b_j \rangle = \delta_{ij}$.
- Note: the coordinates with respect to an orthonormal basis are $\xi_i = \langle x, b_i \rangle$. In other words, only in an orthonormal basis one can set $x = \sum_{i=1}^{n} \langle x, b_i \rangle b_i$.
- It follows that for the matrix $B = (b_1, \ldots, b_n)$ we have $B^T B = I$, i.e. $B^{-1} = B^T$.
- COROLLARY: In each vector space with scalar product and finite dimension there exists an orthonormal basis. The Gram-Schmidt algorithm constructs it from an arbitrary basis (we will come back to it later).
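A minimal NumPy sketch of these facts, using a rotated standard basis of R^2 as an example orthonormal basis:

```python
import numpy as np

# Orthonormal basis of R^2: columns of B (a rotation by 30 degrees)
t = np.pi / 6
B = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

x = np.array([2.0, 1.0])

# Coordinates w.r.t. an orthonormal basis: xi_i = <x, b_i>, i.e. xi = B^T x
xi = B.T @ x
# Reconstruction: x = sum_i xi_i * b_i = B xi
assert np.allclose(B @ xi, x)
# For B with orthonormal columns: B^T B = I, so B^{-1} = B^T
assert np.allclose(B.T @ B, np.eye(2))
```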
Slide 11: Change of Basis, Coordinate Transformation
- THEOREM: Let $\xi$ be the coordinate vector of an arbitrary vector $x \in V$ with respect to the old basis $B = \{b_1, \ldots, b_n\}$, and let $\xi'$ be the coordinate vector of x with respect to some new basis $B' = \{b_1', \ldots, b_n'\}$. Then the coordinate transformation can be expressed via the regular transformation matrix T as $\xi = T \xi'$.
- The change of basis from B to B' can be expressed as $B' = BT$, i.e. the j-th column of T holds the coordinates of $b_j'$ with respect to the old basis.
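A minimal NumPy sketch of the transformation (the basis B and the matrix T are arbitrary example data); both coordinate vectors must describe the same vector:

```python
import numpy as np

# Old basis B (as columns) and a regular transformation matrix T
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
T = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# New basis: B' = B T
B_new = B @ T

# A vector given by its coordinates xi' in the new basis
xi_new = np.array([1.0, -2.0])
# Old coordinates: xi = T xi'
xi = T @ xi_new

# Same vector x, expressed in either basis
assert np.allclose(B @ xi, B_new @ xi_new)
```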
Slide 12: Orthogonal Change of Basis
- What does the transformation matrix T look like if both bases B and B' are orthonormal?
- THEOREM: The transformation matrix of a change between two orthonormal bases is orthogonal: $T^T T = I$.
- Note: an orthogonal matrix is square and has orthonormal columns. If it is not square, it is called a matrix with orthonormal columns; if the columns are orthogonal but not normalized, it is called a matrix with orthogonal columns.
Slide 13: Range, Nullspace, Rank
- DEFINITION: The range of a matrix A (of the map F), written range(A) or R(A), is the set of vectors that can be expressed as Ax for some x.
- THEOREM: range(A) is the space spanned by the columns of A.
- DEFINITION: The null-space (kernel) of A, written null(A) or N(A), is the set of vectors x that satisfy Ax = 0, where 0 is the zero vector in R^m.
- DEFINITION: The column rank of a matrix is the dimension of its column space. Similarly, the row rank of a matrix is the dimension of the space spanned by its rows. They are always equal!
- An m x n matrix of full rank is one that has the maximal possible rank, that is min(m, n). Hence, a matrix of full rank with m ≥ n must have n linearly independent columns.
- THEOREM: A matrix with m ≥ n has full rank if and only if it maps no two distinct vectors to the same vector.
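These notions can be checked numerically; a minimal NumPy sketch with an arbitrary rank-deficient example matrix:

```python
import numpy as np

# A 3 x 3 matrix whose third column is the sum of the first two -> rank 2
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

print(np.linalg.matrix_rank(A))  # 2

# Null space via the SVD: right singular vectors belonging to (numerically)
# zero singular values span null(A)
U, s, Vt = np.linalg.svd(A)
null_basis = Vt[s < 1e-12, :].T
print(null_basis)                # spans the direction (1, 1, -1)
assert np.allclose(A @ null_basis, 0)
```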
Slide 14: Inverse
- DEFINITION: A nonsingular or invertible matrix A is a square matrix of full rank. Its inverse is written $A^{-1}$ and fulfills $AA^{-1} = A^{-1}A = I$.
- THEOREM: For $A \in R^{n \times n}$, the following conditions are equivalent:
  - (a) A has an inverse $A^{-1}$
  - (b) rank(A) = n
  - (c) range(A) = R^n
  - (d) null(A) = {0}
  - (e) 0 is not an eigenvalue of A
  - (f) 0 is not a singular value of A
  - (g) det(A) ≠ 0
- Moore-Penrose pseudoinverse: given $A \in R^{m \times n}$, find $X \in R^{n \times m}$ that solves the four Penrose conditions $AXA = A$, $XAX = X$, $(AX)^T = AX$, $(XA)^T = XA$. The unique solution $X = A^+$ is called the generalized (Moore-Penrose) inverse of A.
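A minimal sketch verifying the four Penrose conditions with NumPy's np.linalg.pinv (the matrix is arbitrary example data):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])            # a tall 3 x 2 matrix
X = np.linalg.pinv(A)                 # Moore-Penrose pseudoinverse A^+

# The four Penrose conditions
assert np.allclose(A @ X @ A, A)       # A X A = A
assert np.allclose(X @ A @ X, X)       # X A X = X
assert np.allclose((A @ X).T, A @ X)   # (A X)^T = A X
assert np.allclose((X @ A).T, X @ A)   # (X A)^T = X A
```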
Slide 15: Outline
- Vector Spaces with Scalar Product
- Linear Mapping
- Projectors
- Order Reduction via Projection
- Summary
Slide 16: Matrices as Linear Mappings
- DEFINITION: A mapping $F: X \to Y$ between two vector spaces X and Y (over R) is called linear if, for all $x, y \in X$ and all $\alpha, \beta \in R$, $F(\alpha x + \beta y) = \alpha F(x) + \beta F(y)$.
- X is called the domain and $F(X) \subseteq Y$ the image (range) of F.
- Let $A \in R^{m \times n}$ be an m x n matrix. The assignment $x \mapsto Ax$ is a linear mapping from R^n to R^m, which we denote by A.
- EXAMPLE: The coordinate mapping $x \mapsto \xi$ with respect to a basis of V is linear.
Slide 17: Mapping with Coordinate Transformation
- Let X and Y be vector spaces of dimensions n and m, and let:
  - $A \in R^{m \times n}$ be the matrix of the linear mapping with respect to the old bases (coordinate mapping with respect to the old bases),
  - $\xi \in R^n$, $\eta \in R^m$ with $\eta = A\xi$ (coordinates with respect to the old bases),
  - $\xi = S\xi'$, $\eta = T\eta'$ (coordinate transformations),
  - $\xi'$, $\eta'$ (coordinates with respect to the new bases).
- Then $\eta' = T^{-1}AS\,\xi'$, i.e. the mapping matrix with respect to the new bases is $A' = T^{-1}AS$.
Slide 18: Main Question
- How far can we simplify the mapping matrix by
choosing the proper bases?
Slide 19: Outline
- Vector Spaces with Scalar Product
- Linear Mapping
- Projectors
- Order Reduction via Projection
- Summary
Slide 20: Projectors
- DEFINITION: A linear mapping $P: R^n \to R^n$ (a square matrix) is called a projector if it fulfills $P^2 = P$.
- [Figure: an oblique projection and an orthogonal projection onto a subspace]
Slide 21: Complementary Projectors
- DEFINITION: If P is a projector, $I - P$ is also a projector, as $(I - P)^2 = I - 2P + P^2 = I - P$.
- The matrix $I - P$ is called the complementary projector to P.
- Onto what space does $I - P$ project? Onto null(P): range(I - P) = null(P).
- We can also see that null(I - P) = range(P) and range(P) ∩ null(P) = {0}. This shows that P separates R^n into two spaces, $S_1 = \mathrm{range}(P)$ and $S_2 = \mathrm{null}(P)$, such that $R^n = S_1 \oplus S_2$.
- $S_1$ and $S_2$ are said to be complementary subspaces.
- We say that P is the projector onto $S_1$ along $S_2$.
Slide 22: Orthogonal Projectors
- DEFINITION: An orthogonal projector is one that projects onto a subspace $S_1$ along $S_2$, where $S_1$ and $S_2$ are orthogonal. (Warning: orthogonal projectors are not orthogonal matrices!)
- THEOREM: A projector P is orthogonal if and only if $P = P^T$.
- THEOREM: The orthogonal projection onto the column space R(A) of a matrix $A \in R^{m \times n}$ with full column rank n (n ≤ m) is given by $P = A(A^TA)^{-1}A^T$.
- THEOREM: The orthogonal projection onto the column space R(Q) of an m x n matrix $Q = (q_1, \ldots, q_n)$ with orthonormal columns is given by $P = QQ^T$.
- THEOREM: For an orthogonal projector P, Px is the best approximation of x within range(P): $\|x - Px\| \leq \|x - y\|$ for all $y \in \mathrm{range}(P)$.
Slide 23: Rank-One Orthogonal Projection
- THEOREM: An m x n matrix A has rank 1 if and only if it can be expressed as an outer product $A = uv^T$ of $u \in R^m$ and $v \in R^n$.
- THEOREM: The orthogonal projection $P_q$ onto the direction defined by a unit vector q is given by $P_q = qq^T$.
- For an arbitrary nonzero direction vector a, the analogous formula is $P_a = \dfrac{aa^T}{a^Ta}$.
- Note that any higher-rank orthogonal projection $P = QQ^T$ can be made as a sum of rank-one projections: $P = \sum_{i=1}^{n} q_iq_i^T$.
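A minimal NumPy sketch of the rank-one formulas and the sum decomposition (all vectors are arbitrary example data):

```python
import numpy as np

# Rank-one orthogonal projector onto a unit direction q: P_q = q q^T
a = np.array([3.0, 4.0])
q = a / np.linalg.norm(a)
Pq = np.outer(q, q)
# Equivalent formula for an arbitrary nonzero a: P_a = a a^T / (a^T a)
assert np.allclose(Pq, np.outer(a, a) / (a @ a))

# A rank-n orthogonal projector is the sum of n rank-one projections
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((4, 3)))
P = sum(np.outer(Q[:, i], Q[:, i]) for i in range(Q.shape[1]))
assert np.allclose(P, Q @ Q.T)
```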
Slide 24: DEMO: Orthogonal Projections
- The following demo (with German comments) can be found at:
- http://ismi.math.uni-frankfurt.de/analysisfuerinformatiker/Vorlesung8a.pdf
Slide 25: Gram-Schmidt Orthogonalization
- ALGORITHM: Let $a_1, a_2, \ldots$ be a finite, linearly independent set of vectors. We compute the same number of vectors $b_1, b_2, \ldots$ as $\tilde{b}_k = a_k - \sum_{i=1}^{k-1} \langle a_k, b_i \rangle\, b_i$, followed by $b_k = \tilde{b}_k / \|\tilde{b}_k\|$.
- THEOREM: After k steps of the Gram-Schmidt algorithm we have $\mathrm{span}\{b_1, \ldots, b_k\} = \mathrm{span}\{a_1, \ldots, a_k\}$.
- If $a_1, \ldots, a_k$ is a basis of V, then $b_1, \ldots, b_k$ is an orthonormal basis of V.
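A runnable NumPy sketch of the algorithm, using the modified Gram-Schmidt variant (numerically more stable than the classical form shown above; the input columns are assumed linearly independent):

```python
import numpy as np

def gram_schmidt(A):
    """Orthonormalize the columns of A (assumed linearly independent)
    with modified Gram-Schmidt; returns B with B^T B = I and
    span(B) = span(A)."""
    B = A.astype(float).copy()
    n = B.shape[1]
    for k in range(n):
        B[:, k] /= np.linalg.norm(B[:, k])            # normalize b_k
        for j in range(k + 1, n):                     # remove the b_k component
            B[:, j] -= (B[:, k] @ B[:, j]) * B[:, k]  # from the remaining vectors
    return B

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
B = gram_schmidt(A)
assert np.allclose(B.T @ B, np.eye(3))  # orthonormal columns
```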
Slide 26: Gram-Schmidt in R^2
[Figure: Gram-Schmidt orthogonalization of two vectors in the plane]
Slide 27: Gram-Schmidt in R^3
[Figure: Gram-Schmidt orthogonalization of three vectors in space]
Slide 28: Outline
- Vector Spaces with Scalar Product
- Linear Mapping
- Projectors
- Order Reduction via Projection
- Summary
Slide 29: Recapitulation of Lecture 1
- Projection constitutes a unifying feature of MOR methods.
- Projection can be seen as a truncation in an appropriate basis.
Slide 30: Transformation
- Remember, we need to project the state-space system $E\dot{x} = Ax + Bu$, $y = Cx$.
- The change of basis $x = T\tilde{x}$ leads to $ET\dot{\tilde{x}} = AT\tilde{x} + Bu$, $y = CT\tilde{x}$.
- In general we can always partition $\tilde{x}$, T and $T^{-1}$ as $\tilde{x} = \begin{pmatrix}\tilde{x}_1\\ \tilde{x}_2\end{pmatrix}$, $T = (T_1\ T_2)$, $T^{-1} = \begin{pmatrix}S_1\\ S_2\end{pmatrix}$, with $T_1 \in R^{n \times r}$ and $S_1 \in R^{r \times n}$.
- This turns the original system into a block-partitioned system in $\tilde{x}_1$ and $\tilde{x}_2$.
- Attention: still no approximation! Remember, T is regular.
Slide 31: Truncation
- The approximation occurs if it turns out that $\tilde{x}_2$ can be neglected (if, e.g. due to the change of basis, $\tilde{x}_2 \approx 0$), so that the corresponding blocks of T and $T^{-1}$ can be truncated as well: keep $V := T_1$ and $W^T := S_1$.
- Note: the solution vector has been approximated by $x \approx T_1\tilde{x}_1 = Vx_r$.
- In the general case this gives $W^TEV\dot{x}_r = W^TAVx_r + W^TBu$ (Petrov-Galerkin approximation); in the case when T is orthonormal, $W = V$ and $V^TEV\dot{x}_r = V^TAVx_r + V^TBu$ (Galerkin approximation).
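A minimal NumPy sketch of the truncation step for a toy system (all system matrices are made-up example data; E is taken as the identity for simplicity, and the orthonormal basis T is a random orthogonal matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 6, 2                          # full and reduced order (toy sizes)
A = rng.standard_normal((n, n))      # example system: x' = A x + B u, y = C x
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Orthonormal change of basis T; keeping only the first r columns gives V = T_1
T, _ = np.linalg.qr(rng.standard_normal((n, n)))
V = T[:, :r]

# Galerkin approximation: x ~ V x_r, multiply by V^T from the left
Ar = V.T @ A @ V                     # r x r reduced system matrix
Br = V.T @ B
Cr = C @ V

print(Ar.shape, Br.shape, Cr.shape)  # (2, 2) (2, 1) (1, 2)
```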
Slide 32: Where is the Projection?
- If we replace $x \approx Vx_r$ in the above equation and multiply by $V^T$ from the left, we get $V^TEV\dot{x}_r = V^TAVx_r + V^TBu$.
- The state is approximated by $Vx_r = VV^Tx = Px$: the orthogonal projector $P = VV^T$ (V has orthonormal columns) maps the full state onto the reduced subspace.
Slide 33: In the General Case We Have an Oblique Projection
- If we replace $x \approx Vx_r$ in the above equation and multiply by $W^T$ from the left, we get $W^TEV\dot{x}_r = W^TAVx_r + W^TBu$.
- Remember the complementary subspaces $S_1 = \mathrm{range}(P)$ and $S_2 = \mathrm{null}(P)$. In this case $P = VW^T$ (with $W^TV = I$), $S_1 = \mathrm{range}(VW^T) = \mathrm{range}(V)$ and $S_2 = \mathrm{null}(VW^T) = \mathrm{null}(W^T)$, because V has full rank r.
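A minimal NumPy sketch of the oblique projector, using the biorthogonal normalization $P = V(W^TV)^{-1}W^T$ so that $P^2 = P$ (V and W are random example bases):

```python
import numpy as np

rng = np.random.default_rng(3)
V = rng.standard_normal((5, 2))      # full-rank example bases
W = rng.standard_normal((5, 2))

# Oblique projector onto range(V) along null(W^T)
P = V @ np.linalg.solve(W.T @ V, W.T)

assert np.allclose(P @ P, P)              # P is a projector ...
assert not np.allclose(P, P.T)            # ... but in general not symmetric
assert np.allclose(P @ V, V)              # range(P) = range(V)
assert np.allclose(W.T @ (np.eye(5) - P), 0)  # null(P) = null(W^T)
```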
Slide 34: Orthogonal vs. Oblique
- Main question: why use an oblique projection at all?
- Remember: $W^TAV$ and $V^TAV$ are the matrices that multiply the state/solution vector! Hence, we want to transform (decompose) them into the simplest possible form. In some cases the Petrov-Galerkin approximation may be easier to compute than the Galerkin one.
Slide 35: MOR as Mapping?
- [Diagram: coordinate transformations]
- Homework: how can MOR be interpreted via mapping?
Slide 36: Outline
- Vector Spaces with Scalar Product
- Linear Mapping
- Projectors
- Order Reduction via Projection
- Summary
Slide 37: Summary
- Linear algebra tools are crucial for MOR.
- The unifying feature of MOR is projection (Galerkin or Petrov-Galerkin approximation).
- The matrix-times-state-vector product should be simple to compute.
- Next lecture: matrix approximations.
Slide 38: Thank you