Transcript and Presenter's Notes

Title: Eigenvectors
1
Chapter 5 Eigenvalues and Eigenvectors
2
Definition: Let A be an n×n matrix. Suppose that
x is a non-zero vector in ℝⁿ and λ is a number
(possibly zero) such that Ax = λx. Then x is
called an eigenvector of A, and λ is called an
eigenvalue of A. We say that λ is the eigenvalue
associated with x, and x is an eigenvector
corresponding to λ.
3
Theorem: The number λ is an eigenvalue of the n×n
matrix A if and only if it satisfies the equation
det(λIₙ − A) = 0.
Definition: When expanded, the determinant
det(xIₙ − A) is a polynomial in x of degree n,
called the characteristic polynomial of A,
denoted by χA(x). The equation det(xIₙ − A) = 0
is called the characteristic equation of A.
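As a numeric illustration (not from the original slides), here is a minimal NumPy sketch with a made-up 2×2 matrix; it computes the eigenvalues and checks that each one satisfies the characteristic equation det(λIₙ − A) = 0:

```python
import numpy as np

# A small example matrix, chosen arbitrarily for illustration.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Eigenvalues of A (roots of the characteristic polynomial).
eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)  # [5. 2.]

# Each eigenvalue satisfies det(lambda*I_n - A) = 0.
n = A.shape[0]
for lam in eigenvalues:
    print(np.linalg.det(lam * np.eye(n) - A))  # ~0 up to round-off
```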
4
Let λ be an eigenvalue of the n×n matrix A.
Theorem: The zero vector and the set of
eigenvectors of A associated with λ form a
subspace of ℝⁿ (this is actually the null space
of λIₙ − A).
Definition: The subspace mentioned above is
called the eigenspace of A associated with λ,
denoted by Wλ.
Theorem: The dimension of Wλ ≤ the multiplicity
of (x − λ) in χA(x).
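A hedged sketch of computing an eigenspace numerically: Wλ is the null space of λIₙ − A, which SciPy's null_space helper returns as an orthonormal basis (same made-up matrix as in the previous sketch):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam = 5.0  # an eigenvalue of A (from the previous sketch)

# The eigenspace W_lambda is the null space of (lambda*I - A).
basis = null_space(lam * np.eye(2) - A)
print(basis.shape[1])           # 1, so dim(W_5) = 1
print(A @ basis - lam * basis)  # ~zero: each basis vector is an eigenvector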
5
  • Theorem
  • Let A be an n×n matrix with eigenvalues λ₁, λ₂,
    …, λₙ (not necessarily distinct). Then
  • λ₁ λ₂ ⋯ λₙ = det(A)
  • λ₁ + λ₂ + ⋯ + λₙ = a₁₁ + a₂₂ + ⋯ + aₙₙ (the
    trace of A); a numeric check follows below.
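A quick numeric check of these two identities, using the same made-up matrix as before:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigenvalues = np.linalg.eigvals(A)

# Product of the eigenvalues equals det(A); their sum equals the trace.
print(np.prod(eigenvalues), np.linalg.det(A))  # both 10 (up to round-off)
print(np.sum(eigenvalues), np.trace(A))        # both 7
```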

6
5.3 Diagonalization
Definition: A square n×n matrix A is said to be
diagonalizable if there is an invertible n×n
matrix S such that S⁻¹AS is a diagonal matrix.
In this case, we also say that S diagonalizes A.
(Note: not all square matrices are
diagonalizable.)
7
Theorem: An n×n matrix A is diagonalizable if and
only if A has n linearly independent
eigenvectors.
Theorem: If v₁, …, vₖ are eigenvectors of a
matrix A, and the associated eigenvalues λ₁, λ₂,
…, λₖ are distinct, then v₁, …, vₖ are linearly
independent.
Corollary: If the characteristic polynomial χA(x)
of an n×n matrix A has n distinct roots (real or
complex), then A is diagonalizable.
8
Theorem: If v₁, …, vₙ are n linearly independent
eigenvectors of a matrix A, and S is the n×n
matrix whose i-th column is vᵢ, then S is
invertible and S⁻¹AS is the diagonal matrix whose
i-th diagonal entry is the eigenvalue associated
with vᵢ.
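A minimal sketch of this construction: np.linalg.eig returns the eigenvectors as the columns of a matrix S, and conjugating by S diagonalizes A (same illustrative matrix as before):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of S are (linearly independent) eigenvectors of A.
eigenvalues, S = np.linalg.eig(A)

# S^{-1} A S is diagonal, with the eigenvalues on the diagonal.
D = np.linalg.inv(S) @ A @ S
print(np.round(D, 10))  # diag(5, 2)
```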
9
5.4 Symmetric Matrices
Recall that an n×n matrix A is symmetric if
aᵢⱼ = aⱼᵢ for all i, j, i.e. Aᵀ = A.
  • Theorem
  • All eigenvalues of a symmetric matrix are real.
  • All symmetric matrices are diagonalizable, i.e.
    we can find an invertible matrix P such that
    P⁻¹AP = a diagonal matrix (whose diagonal
    elements are exactly the eigenvalues of A,
    including multiplicities).
  • Every n×n symmetric matrix A has n linearly
    independent eigenvectors.

10
Theorem: If A is an n×n symmetric matrix, then
eigenvectors of A associated with different
eigenvalues are orthogonal.
  • Theorem
  • If A is an n×n symmetric matrix, then we can find
    n linearly independent eigenvectors of A that are
  • orthogonal to each other and
  • of unit length
  • (this is called an orthonormal set; see the
    sketch below).
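NumPy's eigh routine is specialized for symmetric matrices and illustrates both theorems at once: real eigenvalues, and eigenvector columns that form an orthonormal set (the 2×2 symmetric matrix is made up for demonstration):

```python
import numpy as np

# A symmetric example matrix, chosen for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh assumes a symmetric matrix: eigenvalues are real, and the
# eigenvector columns of P are orthonormal.
eigenvalues, P = np.linalg.eigh(A)
print(eigenvalues)            # [1. 3.] -- all real
print(np.round(P.T @ P, 10))  # identity, so the columns are orthonormal
print(np.round(np.linalg.inv(P) @ A @ P, 10))  # diag(1, 3)
```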

11
What if A is not symmetric?
  • In that case A may not be diagonalizable, and we
    have two choices:
  • Give up the diagonal form and accept a less
    beautiful form called the Jordan form, i.e.
    P⁻¹AP = a square matrix in Jordan form.
  • Use two different invertible matrices to make A
    diagonal, i.e. PAQ = a diagonal matrix.

12
Jordan Form of a Matrix
The general Jordan Form is very tedious to
describe, but we can get some ideas by looking at
examples.
In the slide's example matrix, the eigenvalues
are 2 and −1: for the eigenvalue 2, there are two
linearly independent eigenvectors, while for the
eigenvalue −1, there is only one linearly
independent eigenvector.
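Since the slide's matrix is not reproduced here, the sketch below builds a made-up 4×4 stand-in with the described structure (eigenvalue 2 with two independent eigenvectors, eigenvalue −1 with only one) and recovers its Jordan form with SymPy:

```python
from sympy import Matrix

# Hypothetical stand-in: two 1x1 Jordan blocks for eigenvalue 2, and one
# 2x2 Jordan block for eigenvalue -1 (only one eigenvector for -1).
J = Matrix([[2, 0,  0,  0],
            [0, 2,  0,  0],
            [0, 0, -1,  1],
            [0, 0,  0, -1]])
# Conjugate by an invertible matrix to hide the block structure.
P = Matrix([[1, 1, 0, 0],
            [0, 1, 1, 0],
            [0, 0, 1, 1],
            [0, 0, 0, 1]])
A = P * J * P.inv()

# jordan_form returns Q and Jf with A = Q * Jf * Q^{-1}.
Q, Jf = A.jordan_form()
print(Jf)  # recovers the blocks of J (up to block ordering)
```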
13
In case II, we use the fact that AᵀA is always
symmetric and that its eigenvalues are
non-negative.
Lemma: The eigenvalues of AᵀA are non-negative.
Proof: Suppose that AᵀAx = λx for some nonzero x.
Then
  ‖Ax‖² = ⟨Ax, Ax⟩ = ⟨x, AᵀAx⟩ = ⟨x, λx⟩ = λ‖x‖²,
but ‖Ax‖² ≥ 0 and ‖x‖² > 0, hence λ ≥ 0.
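The lemma is easy to check numerically; the matrix below is arbitrary (not even symmetric), yet AᵀA has non-negative eigenvalues:

```python
import numpy as np

# An arbitrary, non-symmetric example matrix.
A = np.array([[1.0, -2.0],
              [3.0,  0.5]])

# A^T A is symmetric, so eigvalsh applies; all its eigenvalues are >= 0.
eigenvalues = np.linalg.eigvalsh(A.T @ A)
print(eigenvalues)
print(np.all(eigenvalues >= -1e-12))  # True (up to round-off)
```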
14
Theorem (Singular Value Decomposition of Square
Matrices): Given any n×n matrix A, we can find
two invertible n×n matrices U and V such that
A = UΣV, with Σ = diag(σ₁, …, σₖ, 0, …, 0),
  • where
  • k is the rank of A,
  • σ₁ ≥ σ₂ ≥ ⋯ ≥ σₖ > 0 are called the singular
    values of A, and σ₁², …, σₖ² are all the
    non-zero eigenvalues of AᵀA (see the sketch
    below).
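NumPy's SVD illustrates the theorem; note that its convention returns Vᵀ rather than V, and that the squared singular values match the eigenvalues of AᵀA (same arbitrary matrix as above):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  0.5]])

# A = U @ diag(s) @ Vt, with s sorted so that s[0] >= s[1] >= ...
U, s, Vt = np.linalg.svd(A)
print(s)                                      # singular values of A
print(np.sort(s**2))                          # eigenvalues of A^T A
print(np.round(U @ np.diag(s) @ Vt - A, 10))  # zero matrix
```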

15
Applications in data compression
For simplicity, we only consider square
black-and-white pictures, in which each pixel is
represented by a whole number between 0 and 255.
Suppose that A is such a 100×100 matrix. It then
takes up a bit more than 10,000 bytes of memory.
If we use the previous theorem to decompose A
into A = UΣV, then the σᵢ's will be very small
for large i when the picture contains only smooth
gradation. Hence we can drop those singular
values and get an approximation of A.
16
For example, suppose we keep just the 20 largest
singular values, i.e. σ₁, …, σ₂₀. In this case,
we only have to keep the first 20 columns of U
and the first 20 rows of V, because all the
others will not contribute to the product at all.
Hence we have matrices of sizes
(100×20) (20×20) (20×100)
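A sketch of the whole compression scheme, using a random 100×100 array as a stand-in for an image (a real photograph with smooth gradation would compress far better than random noise):

```python
import numpy as np

# Stand-in for a 100x100 grayscale image with pixel values 0..255.
rng = np.random.default_rng(0)
A = rng.integers(0, 256, size=(100, 100)).astype(float)

U, s, Vt = np.linalg.svd(A)

# Keep only the k = 20 largest singular values.
k = 20
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The stored pieces have exactly the shapes quoted on the slide.
print(U[:, :k].shape, s[:k].shape, Vt[:k, :].shape)  # (100, 20) (20,) (20, 100)
print(np.linalg.norm(A - A_k) / np.linalg.norm(A))   # relative error
```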
17
(No Transcript)
18
The first matrix requires 100×20 = 2000 bytes,
the middle diagonal matrix requires only 20
bytes, and the last matrix also requires 20×100 =
2000 bytes. Hence the compressed image requires a
bit more than 4020 bytes, which is about 40% of
the original. Let's look at an example and find
out how lossy this compression is.
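The byte count on this slide, reproduced as a short computation (assuming, as the slide does, one byte per stored number):

```python
n, k = 100, 20
original   = n * n              # 10000 bytes for the raw 100x100 image
compressed = n * k + k + k * n  # 2000 + 20 + 2000 = 4020 bytes
print(compressed / original)    # 0.402, i.e. about 40% of the original
```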
19
Final Remarks
Even though this compression method is very
beautiful in theory, it is not used commercially
(at least not today), possibly due to the
complexity of the decomposition process. The
most popular JPEG compression format instead uses
the Discrete Cosine Transform on 8×8 blocks and
then discards the insignificant elements in the
transformed 8×8 matrix. That process requires
only matrix multiplications, term-by-term
division, and rounding, so it is much faster
than the Singular Value Decomposition.